12 July 2006 to 20 May 2006

Every once in a while someone asks me, with the sudden leer of a half-starved mid-food-chain predator thinking it's finally about to get a snack, whether I'm an agnostic or an atheist. Usually this comes right after I've said I'm an atheist, which kind of spoils the surprise of the prepared follow-up attack, which attempts to structurally equate theism and atheism as mortal assertions about the immortal, rendering atheism vaguely self-undermining. Thus, in theory, I'd be forced to concede that I'm "only" an agnostic, and presumably if the worst objection one may raise against religion is that "we don't know", then religious belief is a more reasonable response than it might be if it were allowable to just say "no".  

But it is allowable to just say no. In fact, theism and atheism are not structurally equivalent. Theism is the set of mortal assertions about the immortal. Atheism is a rejection of making mortal assertions about the immortal. Religion is not a real question, it's metaphor cowering belligerently as axiom, and thus it can neither require nor benefit from answers. I should be no more defined or labeled by my disbelief in gods than by my disbelief in elves. Myth is where we keep our old placeholders for the things we didn't use to know how to know. The good myths are the ones that are worth something as art after we stop inanely insisting that they're still, or were ever, science.
As an odd and delayed result of having once written about music for a long time, I got to be a judge for a Battle of the Bands on Monday night.
If you're willing to look foolish when your guesses all turn out to be wildly wrong, join me and Michael Zwirn in our vF World Cup prediction thread.
One of the BarCamp principles is that if you aren't learning or contributing, you should get up and go somewhere else. So today, day 2 of BarCampBoston, I have gotten up and not gone anywhere, and am sitting at home in Cambridge with an IRC window open just in case something mind-blowing happens in Maynard and somebody there thinks to type about it. I'd guess there were something like 150 people who showed up for day 1, and IRC reports estimate more like 30 who returned for day 2, so I don't appear to be alone in my decision.  

Actually, there's probably nothing really un about this unreport, and for me there wasn't enough un about this nominal unconference. The reason I don't go to a lot of regular technology conferences is that I find that they too often consist of a series of sessions driven by over-specific presentations that are, at best, distantly related to some topic in which I am interested. My best hope is usually that they will inspire some discussion that wanders closer, but the structure usually almost guarantees that the presentations will be too long, and the discussions (if they happen at all) too short.

BarCamp was basically just like this. I went to some sessions I liked, but all of these ended just when they were getting going, if not sooner. I went to some I didn't, and these went on too long, or without viable alternatives. At least on day 1, the meeting spaces were far from the staging center, and even farther from each other, making it hard to do anything but pick a room and hope for the best. Day 1 was over-scheduled, day 2 under-scheduled, and in the absence of any compensating plan, this became self-reinforcing: I'm sure I wasn't the only person who moved a presentation from day 2 to day 1 for fear that a day-2 slot would draw no audience, and then once I'd given mine there was one less reason to come back on day 2 myself.

I can think of two obvious ways to combat this structural problem. One is to have better presentations. I don't know any shortcut to this. The longcut is to have the presentations solicited, proposed, submitted, judged and endorsed ahead of time. This could probably be done in a democratic, ad hoc, self-organizing, non-authoritarian, grassrooted, BarCampy way, but it would be hard, and I don't think that approach takes any fundamental advantage of the unique nature of BarCamp.  

The other approach is to acknowledge explicitly that unless the presentations are exceptional, the real value is much more likely in the discussions. This, conversely, is exactly in keeping with the ad hoc, participatory nature of BarCamp. So my proposal for the next BarCampBoston is that it be mostly, or perhaps entirely, about discussions. Forget 30-minute presentations, which usually produce the kind of information quantity and density you could just as well read on the web anyway, and produce way too much scheduling churn and running around.  

Instead, organize everything into 90-minute discussion groups with 30-minute blocks in between for logistics. So 10-11:30, 12-1:30 (over lunch), 2-3:30 and 4-5:30. For each discussion group there should be:  

- A framing topic, ideally in the form of a theoretically answerable question, like "What is the killer app for the semantic web?", or "What is the next step towards practical distributed identity-management?"  

- 3-6 participants prepared to contribute 5-minute demos of (or talks about) directly relevant work, so that the discussion has both concrete references and personal investments.  

- A moderator, who is in charge of nudging the conversation out of ruts and back from digressions when necessary, and helping spot the right places to insert the demos. One of the contributors can also serve as moderator, or the moderator can be somebody else.  

- Some idea, explicitly stated if it isn't adequately implied by the topic and the list of demos, of the expected context or background of discussion participants. It's particularly worthwhile to distinguish between introductions ("What is the semantic web and why might I care?: An invitation for web developers discovering data structure.") and working groups ("From DTDs to RDFS to OWL: How do you decide how much ontology is worth modeling?")  

- Few enough other people that all of you can actually have a discussion. No more than 12 total people who plan on talking, probably, including the moderator and the contributors. And additional silent observers only if the space can still accommodate everyone comfortably for 90 minutes.  

- Comfortable, discussion-suited spaces, ideally with net connections and presentation screens, and double-ideally arranged so that they feel connected, and there's a central common area from which everything else can be staged. Nobody should end up sitting somewhere bored because it's too hard to figure out where else they could be.  

This can all be totally self-organizing, but most (if not all) of it should be self-organizing in advance: at least the topics, demos, moderators and contexts; and any amount of schedule- and space-assignment will only help. Among other virtues, laying out the options in advance allows participants to anticipate a good experience by actually planning it, and allows the group to potentially consolidate less popular topics and split (or clone) more popular ones. Clone with impunity, in fact. In a self-organizing conference, the participants are generally going to be self-motivated and self-filtered, so the chance of a large group having too much to say is far higher than the chance of splitting it in two and finding that either half runs out of ideas.  

And if you can manage to have the unconference be self-reorganizing on the fly, then fantastic. Whether you plan for this or not is a question of your optimism. It sounds cool to say that you'll leave the 4:00 time-slot open until 3:30, for example, so that sessions inspired by earlier developments can spontaneously materialize. I bet it generally won't work, though. At most I might leave some of the spaces in a time-slot unbooked so that a planned session that overflows can be split or cloned on the fly. But ideally I'd state even this possibility ahead of time, so that the potential second session can be provisionally self-organized with a moderator and a share of the contributors. Remember that it's always easier to not do something you were prepared to do than it is to do something you weren't prepared to do.  
 

The one other hope I have for future BarCamps and related gatherings (Geek Dinners, WebInno, RSS Alley experiments...) is that we find a way to get a little more participation from non-startups. I don't want to crowd out the self-employed, the unemployed and the aspiringly employed, but there are times when I feel like this is the Boston Technology Underdogs Club, rather than any kind of representative sample of the real community of people interested in new technology. Credit to Monster for hosting, but the people in the orange shirts weren't so much participating as ushering, and even if they had been, when the next largest concern represented is Tabblo, you know we're missing some people.

 

From our door to M Street Beach to Thompson to Spectacle, 1 hour 20 minutes. Not run over by catamarans. Four bags of shards and glass, back home by noon.

Orchard Beach, Squantum  


Spectacle Island, Boston Harbor
Two notes of rather different kinds:  

- I'll be at BarCampBoston in Maynard on June 3/4. I haven't been to any unconferences before, so I don't have any guess at how successful this one might be, nor even exactly what the criteria for success are, but it's free and it's experimental, and if it turns out to be interesting I'll be sure to brag about having been at the first one.  

- Bethany has finally given in and started a blog, called rantum scoot, and her desire for feedback has overcome her public reticence, so she said it's OK for me to mention it now.
Although I'm not at all sure this is factually fair, I have begun to mentally, and maybe emotionally, blame Flickr for what feels to me like a plague of subject-oblivious square photo-cropping.  

I should admit, I guess, that when it's me operating the camera, I'm a pretty extreme horizontalist. I'm happiest at about 3:1. In a tool universe built around 4:3, though, this is kind of a pain in the ass. I could mask my camera's LCD for 3:1 feedback, but then the picture is really too small to work from. For online display I have to assume 4:3-ish frame spaces, so 3:1 images end up in practice being shorter instead of wider, which is unsatisfying. And digital cameras will have to pack a lot more pixels into 4:3 sensors before I'll be informationally content to throw away more than half of them. And my obsessive preference for aspect-ratio consistency in exhibition sets means that I would usually rather stick to 4:3 for everything than mix in the occasional 3:1 where I spot an opportunity despite the obstacles.  
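(If you want to check that "more than half" claim, the arithmetic is quick: the widest 3:1 strip you can cut from a 4:3 frame keeps the full width but only 4/9 of the height, and therefore 4/9 of the pixels; the other 5/9 go in the trash. A throwaway sketch in Python, with a function name of my own invention:)

    def retained_fraction(src_ratio, crop_ratio):
        # Fraction of pixels kept when cutting the widest full-width
        # strip of aspect crop_ratio (w/h) out of a frame of aspect
        # src_ratio (w/h), assuming the crop is wider than the source.
        # Fix width at 1: source height is 1/src_ratio, strip height
        # is 1/crop_ratio, and their quotient is the kept fraction.
        return src_ratio / crop_ratio

    print(retained_fraction(4 / 3, 3 / 1))  # 0.444..., i.e. 4/9 kept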

So I understand, of course, the value of square-cropping in any content-neutral photo-showing application. It's possible to do an attractive job of mixing aspect-ratios, but it's exponentially easier to do an even more attractive job of displaying consistent aspect-ratios. Cropping 4:3s and 3:4s to 1:1 symmetrically is technically trivial, and although it's aesthetically unreliable in the abstract, the vast majority of amateur photographs are center-weighted, so it usually turns out OK. Actually, the vast majority of amateur photographs are also probably framed too widely, so a little universal symmetrical cropping almost certainly improves more Flickr pictures than it damages.  
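(To be concrete about "technically trivial": here's the whole operation in a few lines of Python, using the Pillow imaging library. This is just my sketch of the generic center-crop, not Flickr's actual code, and the filename is a placeholder; the 75-pixel square matches their small thumbnails.)

    from PIL import Image

    def center_square(img):
        # Symmetric 1:1 crop: keep the middle, shave equal slices
        # off both ends of the longer dimension.
        w, h = img.size
        side = min(w, h)
        left = (w - side) // 2
        top = (h - side) // 2
        return img.crop((left, top, left + side, top + side))

    # e.g. a Flickr-style 75x75 thumbnail from a 4:3 or 3:4 original
    thumb = center_square(Image.open("photo.jpg")).resize((75, 75))

Note that the code has no idea where the subject is; the center-weighting of the average amateur photograph is doing all the aesthetic work.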

So cropping all pictures to squares for thumbnailing makes perfect sense as a Flickr design decision. It simplifies away arguably the biggest visual design problem in mass photo display. If you're looking at somebody else's photographs, it's easy to fall into assuming they are square, so any weirdness in framing you're likely to implicitly attribute to the photographer. The same applies to your own photographs unless you've spent some time seriously considering the originals, and the more you use Flickr, the more it is the way you consider your own photographs en masse.  

But if your exhibited photographs are usually going to be approached through thumbnail galleries (the prevalence of which Flickr has also hugely influenced), and the thumbnails are usually going to be squares, it will simplify the rest of the experience if your photographs actually are square. I don't know if any digital cameras are already shipping with built-in square-cropping modes, but I expect those to start appearing very soon if they haven't already. The more square photos people have, the more display-tools will cheat and optimize for them, and the more incentive there will be to be square.  

But square is a bad base ratio for photography, at least if by "photography" we mean people taking pictures of things people see, for other people to later share (or imagine) the experience of seeing. We see our world horizontally. Our eyes are side-by-side, our lives are gravity-flattened, our emotional landscapes are literal landscapes as often as metaphorical. My 3:1 fetish may be extreme, but I'm pretty sure that if you take photographs on their own terms, humans instinctively respond more positively to wide aspect-ratios. The standard terms are actually telling: "landscape" refers to the subject of the picture, "portrait" to the act of picturing it. We can appreciate photographs in all sorts of shapes, but we can empathize with seeing most readily when the shared vision is the shape of experienced vision.

So this self-reinforcing dependent vogue for square photography is, I think, a machine gain and a human loss. Worse, it's a sparkly machine-gain that humans are lining up to lose. Machine gains are almost always sparkly, if only because it's far easier to polish a working machine than it is to figure out the machine you should have built instead, or admit that it was better, even if it was harder, to do something by hand. And we form machine-polishing clubs, and start companies to make machine polish, and open shops to sell it, and years go by before we stop and think about the flaws in which we've become invested.  
 

So too with this idiotic chronology-switchback setup we've temporarily settled on for blog formats. The right way to read incremental written forms, beyond any vague doubt, is in serial order. You start at the top of the first entry, you read to the bottom, and then the top of the next entry should follow the bottom of the previous one. Thousands of years of usability research have validated this basic design.

All of which was summarily and obliviously ignored by the original engineers of HTML and web browsers, with the result that they neglected to provide a simple and reliable mechanism for one absolutely essential bit of visual behavior: a fixed identity header and independently scrollable/pageable content. Without this, a designer of serial content can have identity reinforcement (come into the page at the top) or currency (come into it in the middle, where the new content starts), but not both. And since they didn't build in any meaningful tools for handling the user-subjectivity of "current", identity basically wins by forfeit.

The reverse-chron blog format is a sparkly-machine solution to this problem. It puts the newest entry next to the identity, thus at least superficially addressing both goals at once. For every other purpose, though, it's actively reader-hostile. If the entries form any kind of overall narrative, you have to read it in a painful zig-zag. If you are following a blog and miss a single update, you have to use the same awful up-and-down to find where you left off, read down, scroll back up, read down, repeat. This is bad.  

But it's bad in what has become an established way, so even if you don't believe the alternatives are worse on their own terms, they almost certainly become worse in public practice. As with square photographs, we make our tools in the easiest shapes, and then we accommodate their limitations, and then we hone them to perfect their limits, and then we forget that this is not how we wanted to live.
 

Next time you make a crude tool, don't polish it, and don't accommodate its limitations. Use it the way you wish it worked, pay attention to how that hurts, and then throw it away and try to make the next crude, unsparkly tool so that using it doesn't make the tool better, it makes us better.
Site contents published by glenn mcdonald under a Creative Commons BY/NC/ND License except where otherwise noted.