I spent this week at the annual conference of the American Association for Artificial Intelligence. I have no academic background in artificial intelligence. Of course, I have no academic background in computer science, or interface design, or technical support, or pretty much anything else I've ever been paid to do. The tenuous premise of my professional career is that I think well, in general, and maybe that I think particularly well in the intersection between the things that might improve human lives and the things that machines can do. My track record indicates that I've been involved with projects where this tension has been usefully resolved, and my day-to-day work seems to support my presumptuous contention that I am a personally significant contributor towards such resolutions.  

So I'm clearly unqualified to assess, in general, the internal quality of advanced research work in the field of artificial intelligence. But I'm in charge of a software project in which machine-learning algorithms are being put to practical use, so I have a stake in the field. And the general meta-question of the application of technology to the class of what, to humans, are thinking problems, is probably no farther outside of my domain than anything else I ever deal with. And even if I were deeper in the field, there were six parallel tracks of talks taking place during most of the conference, so no one person could really hope to directly evaluate the whole thing anyway.  

Take this, then, for whatever it evokes. My scattered impressions cluster around two ideas. The first, which covers about half of my experiences during the week, is that I had wandered into a convocation of alchemists. Time after time I sat through earnest descriptions of patently clever mechanisms for conveying the lead to the transmutation chamber, or of organizing the variations of gold sure to soon be produced, or of venting the toxic fumes that somebody had proved (in last year's paper) would result from the conversion. Not so much actual gold, just yet. This year's conference was celebrating the 50th anniversary of AI as a field, and although it would be wrong to dismiss a hard problem on the grounds that it hasn't been solved quickly, the only way Zeno got away with diminishing returns was by publishing measurable progress so frequently.  

One major branch of AI is based on the observations that humans think, and that we express thoughts, and that our expression of each seemingly individual thought involves a complex background of assumptions and definitions and relationships and conceptual leaps, so maybe if computers had all that context to work in, they'd be able to think, too. The logistical and emotional center of the cargo-cult effort to build that foundation of knowledge, and hope it attracts Thinkingness, is the Cyc project, whose director proudly told a room full of people that they have so far encoded 1,000,000 terms, 100,000 relationships between them, and 10,000 meta-assertions about those things and relationships, and that they believe this constitutes about 10% of what is required.  

Then he added "(isa Rudolph reindeer)" as 1,000,001 and 10,001, and with a flourish derived "Rudolph is a ruminant", which is so far from 10% correct that I'm still laughing about it a week later. I suspect the Atlantean Association of Alchemical Investigators thought they were 10% of the way to transmutation at some point, too. To me this is not only almost certainly wrong, but also fabulously self-ridiculing. There is no reasonable way to assess the size of something you can't even define, and you can't be 10% to nowhere any more than you can be 10% to a miracle. 10% to a miracle is not a scientific progress-report, it's a numerology of the angel-capacity of certain ornamental pins.  
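
To make the flavor of that derivation concrete, here is a toy sketch (in Python, with invented predicates and data, loosely echoing Cyc's isa/genls vocabulary; it is emphatically not Cyc's actual content or inference engine) of the kind of mechanical taxonomy walk that gets you from "(isa Rudolph reindeer)" to "Rudolph is a ruminant":

```python
# Toy knowledge base: "genls" edges say one collection generalizes another,
# "isa" facts say an individual belongs to a collection. (Hypothetical data.)
genls = {
    "Reindeer": "Ruminant",
    "Ruminant": "Mammal",
    "Mammal": "Animal",
}
isa = {"Rudolph": "Reindeer"}

def collections_of(individual):
    """Walk the genls chain upward from the individual's direct type."""
    results = []
    current = isa.get(individual)
    while current is not None:
        results.append(current)
        current = genls.get(current)
    return results

print(collections_of("Rudolph"))
# ['Reindeer', 'Ruminant', 'Mammal', 'Animal'] -- so "Rudolph is a ruminant",
# which is exactly as far as the flourish in the talk went.
```

The inference is trivial once the taxonomy exists; the whole bet is on the taxonomy.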

And even if it made any sense to say that we're 10% of the way to encoding the basics of human common sense, to me the premise of the project is fatally flawed in three ways:  

1. Our ability to carry on conversations is less a product of the depths of information we plumb with every statement than of our ability to levitate on ignorance. You can always add a new level of notation to explain what the previous level means, but you can always add a new level, so all you've really succeeded in doing is turning the second M in MLM to Modeling and wasting the time of a lot of people who already knew what you meant in the first place. The machines might remember everything you can figure out how to tell them, but remembering isn't half as useful as forgetting. Not only is deduction less valuable than guessing, it's ultimately even less valuable than being productively wrong. And if after 50 years of AI and 20 years of Cyc we're only 10% of the way to encoding the things people know, then you and I will be long dead before we ever so much as get started on the exponentially larger domain of our idiocies and misconceptions and errors.  

2. Even if we could encode all of that, I'm in no way convinced that it's necessary for thinking, let alone sufficient. I sat through a lot of talks about what felt to me, ultimately, like attempts to figure out how to build a bicycle by taking apart a horse. Cyc is pretty much the embodiment of Intelligent Design as an engineering methodology, trying to build a grown human adult an atom at a time. I suspect they would defend themselves by saying that they're only really trying to build a 14-year-old, and then they can have it read the rest of the stuff in books. This is never going to work. Even building a baby and letting it learn the whole thing from scratch isn't going to work. Complexity evolves. If we're going to build artificial constructs that think, we're going to do it by sowing the most primitive of artificial creatures into the most accelerated of generative artificial worlds and giving them virtual epochs in which to solve (and state) their problems themselves.  

3. Underlying both of these errors, I think, is one fundamental misconception, engendered by the words we use to describe the point of all this speculation and work, and maybe most encouraged to fester by the inescapable apparent simplicity of one unfortunate thought-experiment. It's easy to imagine talking to a computer. The genius of Turing's Test is that it requires less equipment than an E-meter audit and less training than teaching SAT-prep classes. It's even easier to understand in the IM era than it was when he posed it. If a computer can convince us it's a person, then it is thinking. Translated to what I suspect most people would think this means, and then inverted for logic, this basically says that "people" are whatever we can't tell aren't like ourselves. But this is Fake People, not Artificial Intelligence, and shouldn't be what we mean by "thinking", or by artificial intelligence. My cats apply the Turing Test to me several times a day. I always fail. The first machine to think will be the first one that comes up with its own idea of what thinking means. It won't be like us. It won't think like us, it may not have much to say to us, and it almost certainly won't care. It will be good at things it likes to do, not the stuff we want just because we want it.  

And therein, I think, is something closer to what I think we ought to mean by artificial intelligence: not the manipulation of ideas, however facile or reified, but the generation of them. Here's the Furia Test: the first true AI is the first machine to learn to spit out Rudolph's cud and tell the Turing judges to take their incessant idiotic questions and fuck off.  
 

But I said that that was only half of my experience. Humans have come up with a lot of good ideas while working on bad ones, and arguably most human progress takes place in the context of grandly evocative error. I'm not really interested in building a machine I can condescend to, and I bet a lot of the other people at the conference weren't there as part of that dream, either. Taking apart a horse teaches you things. Building bicycles teaches you things. Even the effort to teach Cyc about ThingsThatMakeFartingNoisesWhenYouAccidentallySitOnThemFn teaches you something.  

My part of this comes from a much, much simpler dream. I already know people who think. I don't need machines for that. True AI will have its own questions to devise and then answer. What I need is deeper and broader answers to human questions we already know, and new questions that matter to us. I see that the information we already have, jammed into computers that don't think and aren't likely to start thinking any time soon, is sufficient to answer several orders of magnitude more questions than our current question-answering tools facilitate. This isn't an existential problem, and it's our problem because it's so clearly our fault. Computers would be perfectly happy to synthesize and connect, but instead we've obtusely put them to work obscuring information from each other so that even the syntheses inherent in the character of the information become intractable. The machine-learning parts of the system I'm working on are tangibly useful, but in a sense mostly regrettable necessities, devised to undo the structural damage done by too-exclusively packaging small amounts of knowledge for unhelpful individual resale.  

The good news from AAAI, for me, is that there are incredibly interesting things going on in the subfields of knowledge representation and reasoning. I actually do have some academic background in deductive logic, and the project I'm working on is both deeply and shallowly involved with information structure, so maybe my grounding in this subarea is just a little firmer. More than that, though, I can understand why it matters. Some of the new information-alchemy equipment is really new good information-chemistry gear opportunistically relabeled. I understand, for example, that we need graph-structures in most of the places where we have heretofore had only trees, and that our tools for comprehending graph structures are way behind our tools for understanding tree structures, and our tools for understanding tree structures without chopping them down first were already none too good. I understand what each order of logic adds, if that's not all you're doing, and what the effort to encode each new level costs and potentially loses. I understand that we want to make closed-world decisions in an open world, and that we have to. I understand why Tim Berners-Lee wants URIs on everything and I understand that a huge part of how we're able to talk to each other about anything is that we are not required to unambiguously identify every element of what we are attempting to say. I understand that Zeno was only ever half right, and that he never had to actually ship anything.  
 

So I go back to work, where we aren't trying to build machines that think, and aren't waiting for anybody else to. Our bicycles aren't going to look or smell much like horses, but we're going to try our best to make them faster than walking. I am bemused to have fallen into the company of alchemists, but even at their most insularly foolish they're a lot more interesting to talk to than Sarbanes-Oxley compliance officers. I'd much rather worry about what OWL leaves out than have to explain to business-unit managers why writing "XML BPM" on a PowerPoint slide doesn't constitute an advanced-technology plan. Ultimately my complaint about AI is probably not so much that its premises are flawed as that the current particular flaws lead to wrong work that is too mundane in its wrongness. I understand that I am paid to be an applied philosopher, and I want to feel, after a week among people freer to linger in theory, that practice is pedaling flat-out just to keep possibility in sight. But if we're just out here with our new fancy bicycle prototypes, and our old blank maps, and no plausible rumors of buried gold or miracles, then fine. There are places to ride, and better things to see by looking up than down, and we'll see more of them if we aren't always stopping to dig new collapsing holes in the same empty sand.
Although I'm not at all sure this is factually fair, I have begun to mentally, and maybe emotionally, blame Flickr for what feels to me like a plague of subject-oblivious square photo-cropping.  

I should admit, I guess, that when it's me operating the camera, I'm a pretty extreme horizontalist. I'm happiest at about 3:1. In a tool universe built around 4:3, though, this is kind of a pain in the ass. I could mask my camera's LCD for 3:1 feedback, but then the picture is really too small to work from. For online display I have to assume 4:3-ish frame spaces, so 3:1 images end up in practice being shorter instead of wider, which is unsatisfying. And digital cameras will have to pack a lot more pixels into 4:3 sensors before I'll be informationally content to throw away more than half of them. And my obsessive preference for aspect-ratio consistency in exhibition sets means that I would usually rather stick to 4:3 for everything than mix in the occasional 3:1 where I spot an opportunity despite the obstacles.  
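
(The "more than half" claim is just arithmetic: a full-width 3:1 strip cut from a 4:3 sensor keeps only 4/9 of the pixels. A quick sketch, using relative units:)

```python
# Fraction of a 4:3 sensor that survives a full-width crop to a 3:1 strip.
sensor_w, sensor_h = 4, 3          # 4:3 aspect ratio (relative units)
crop_h = sensor_w / 3              # height of a 3:1 strip at full sensor width
kept = crop_h / sensor_h           # 4/9 of the pixels survive
print(f"kept {kept:.0%}, discarded {1 - kept:.0%}")  # kept 44%, discarded 56%
```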

So I understand, of course, the value of square-cropping in any content-neutral photo-showing application. It's possible to do an attractive job of mixing aspect-ratios, but it's exponentially easier to do an even more attractive job of displaying consistent aspect-ratios. Cropping 4:3s and 3:4s to 1:1 symmetrically is technically trivial, and although it's aesthetically unreliable in the abstract, the vast majority of amateur photographs are center-weighted, so it usually turns out OK. Actually, the vast majority of amateur photographs are also probably framed too widely, so a little universal symmetrical cropping almost certainly improves more Flickr pictures than it damages.  

So cropping all pictures to squares for thumbnailing makes perfect sense as a Flickr design decision. It simplifies away arguably the biggest visual design problem in mass photo display. If you're looking at somebody else's photographs, it's easy to fall into assuming they are square, so any weirdness in framing you're likely to implicitly attribute to the photographer. The same applies to your own photographs unless you've spent some time seriously considering the originals, and the more you use Flickr, the more it is the way you consider your own photographs en masse.  

But if your exhibited photographs are usually going to be approached through thumbnail galleries (the prevalence of which Flickr has also hugely influenced), and the thumbnails are usually going to be squares, it will simplify the rest of the experience if your photographs actually are square. I don't know if any digital cameras are already shipping with built-in square-cropping modes, but I expect those to start appearing very soon if they haven't already. The more square photos people have, the more display-tools will cheat and optimize for them, and the more incentive there will be to be square.  

But square is a bad base ratio for photography, at least if by "photography" we mean people taking pictures of things people see, for other people to later share (or imagine) the experience of seeing. We see our world horizontally. Our eyes are side-by-side, our lives are gravity-flattened, our emotional landscapes are literal landscapes as often as metaphorical. My 3:1 fetish may be extreme, but I'm pretty sure that if you take photographs on their own terms, humans instinctively respond more positively to wide aspect-ratios. The standard terms are actually telling: "landscape" refers to the subject of the picture, "portrait" to the act of picturing it. We can appreciate photographs in all sorts of shapes, but we can empathize with seeing most readily when the shared vision is the shape of experienced vision.

So this self-reinforcing dependent vogue for square photography is, I think, a machine gain and a human loss. Worse, it's a sparkly machine-gain that humans are lining up to lose. Machine gains are almost always sparkly, if only because it's far easier to polish a working machine than it is to figure out the machine you should have built instead, or admit that it was better, even if it was harder, to do something by hand. And we form machine-polishing clubs, and start companies to make machine polish, and open shops to sell it, and years go by before we stop and think about the flaws in which we've become invested.  
 

So too with this idiotic chronology-switchback setup we've temporarily settled on for blog formats. The right way to read incremental written forms, beyond any vague doubt, is in serial order. You start at the top of the first entry, you read to the bottom, and then the top of the next entry should follow the bottom of the previous one. Thousands of years of usability research have validated this basic design.

All of which was summarily and obliviously ignored by the original engineers of HTML and web browsers, with the result that they neglected to provide a simple and reliable mechanism for one absolutely essential bit of visual behavior: a fixed identity header and independently scrollable/pageable content. Without this, a designer of serial content can have identity reinforcement (come into the page at the top) or currency (come into it in the middle, where the new content starts), but not both. And since they didn't build in any meaningful tools for handling the user-subjectivity of "current", identity basically wins by forfeit.

The reverse-chron blog format is a sparkly-machine solution to this problem. It puts the newest entry next to the identity, thus at least superficially addressing both goals at once. For every other purpose, though, it's actively reader-hostile. If the entries form any kind of overall narrative, you have to read it in a painful zig-zag. If you are following a blog and miss a single update, you have to use the same awful up-and-down to find where you left off, read down, scroll back up, read down, repeat. This is bad.  

But it's bad in what has become an established way, so even if you don't believe the alternatives are worse on their own terms, they almost certainly become worse in public practice. As with square photographs, we make our tools in the easiest shapes, and then we accommodate their limitations, and then we hone them to perfect their limits, and then we forget that this is not how we wanted to live.
 

Next time you make a crude tool, don't polish it, and don't accommodate its limitations. Use it the way you wish it worked, pay attention to how that hurts, and then throw it away and try to make the next crude, unsparkly tool so that using it doesn't make the tool better, it makes us better.
Say you're making software to do X. The audience for this is people who want to do X, and the evaluation criteria for the software to do X is that it actually can be used to do X in some respectable and hopefully distinguished manner.  

Before you get to the released version of your software to do X, there are two useful stages you will probably want to pass through: beta and alpha. These are not percentages of completion or levels of toleration of error, they are themselves tools for different purposes than the released version. X beta exists to provide an evaluable preview of X. X alpha exists to provide an evaluable preview of X beta.  

It is not only possible, but critical, to substitute these purposes of beta and alpha back into the original statement. Like this, for beta:  

Say you're making software to {provide an evaluable preview of X}. The audience for this is people who want to {evaluate a preview of X}, and the evaluation criteria for the software to {provide an evaluable preview of X} is that it actually can be used to {evaluate a preview of X} in some respectable and hopefully distinguished manner.
 

Note that this is different. The people interested in previewing a tool to do X are not necessarily the same people as the ones who want to do X, and their goal is not doing X but seeing how your software is going to do X.  

Likewise with alpha:  

Say you're making software to {provide an evaluable preview of an evaluable preview of X}. The audience for this is people who want to {evaluate a preview of an evaluable preview of X}, and the evaluation criteria for the software to {provide an evaluable preview of an evaluable preview of X} is that it actually can be used to {evaluate a preview of an evaluable preview of X} in some respectable and hopefully distinguished manner.
 

That is, the beta is a tool for seeing how the real thing is eventually going to work, and the alpha is a tool for seeing how the beta is eventually going to work. The audience for alpha is thus double-meta, and is likely to be very small, sometimes even smaller than the team working on it.  

Even more crucially, the substitution works in the other direction, too: beta is a released version of software for evaluating a preview of X, and the alpha is a released version of software for evaluating a preview of the beta. So while the X beta may have omissions and errors in its performance of X without failing, if it has omissions and errors in its performance of the evaluation and preview of X, it fails.  

Thus the released version of an online calendar for shared scheduling, for example, fails if it can't be used for shared scheduling. But the beta fails if it can't be used for experimenting with shared scheduling. So the beta doesn't require that all the shared scheduling features work, but it does require that all the experimentation features are ready.  

If your software involves manipulating user data of any kind, then, here are some things that probably have to be already working, without appreciable error, in your beta:  

- import
- reimport
- importing useful sample data
- distinguishing data by import batch
- undoing import batches
- deleting data
- deleting data en masse
- full data purge and restart of beta experience
- comparing your treatment of this data to the treatment of it wherever it came from
- comparing different treatments of this data within your own software
- export/sync/feed of this data to any evaluation-critical downstream integration points
- sync/feed of source data from any evaluation-critical upstream integration points
- full operation of a representative subset of anything critical of which there will be multiples (integration points, environments, styles, templates, other users, etc.)
- any configuration settings integral to personal evaluation
- clear explanation of beta limitations  

If any of these aren't done, don't ship your beta yet. If some of them aren't even in your design, you've got more designing to do.
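
As a minimal illustration of the import-batch items on that list (a hypothetical sketch, not any particular product's design): the essential move is just to tag every record with the batch that brought it in, so that reimport, undo, and a full purge-and-restart become trivial operations instead of support tickets.

```python
import itertools

class BetaDataStore:
    """Toy store that remembers which import batch each record came from."""

    def __init__(self):
        self._records = {}              # record_id -> (batch_id, data)
        self._ids = itertools.count(1)

    def import_batch(self, rows, batch_id):
        """Load rows, remembering the batch so it can be undone as a unit."""
        for row in rows:
            self._records[next(self._ids)] = (batch_id, row)

    def undo_batch(self, batch_id):
        """Remove everything a single import brought in."""
        self._records = {rid: (b, row) for rid, (b, row) in self._records.items()
                         if b != batch_id}

    def purge(self):
        """Full data purge and restart of the beta experience."""
        self._records.clear()

# Usage: experiment, decide an import was wrong, and cleanly start over.
store = BetaDataStore()
store.import_batch([{"name": "sample row"}], batch_id="sample-data")
store.import_batch([{"name": "my real data"}], batch_id="import-1")
store.undo_batch("import-1")   # back to just the sample data
store.purge()                  # back to an empty beta
```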
A conversation, from the point of view of any one participant, has these three elements:  

- the source: the other participants, either as individuals or as some collective they form for the purpose of this conversation  

- the context: as simple as an explicit topic, as subtle as an implicit one, as complex as a network of human relationships  

- your role: sometimes you are yourself, sometimes you are acting in a defined capacity, sometimes you participate in a conversation as part of a collective (like an audience)  

All conversations have these same fundamental elements. At the moment, our electronic conversations are subdivided and segregated according to ultimately trivial subclasses (personal email, IM, mailing lists, newsletters, feeds, web sites themselves, innumerable other variations on alerts), and the useful tools for managing conversations are scattered across similarly (arbitrarily) segmented applications.  

The conversation tool I really want will be built on the structural commonality of conversations, instead of the disparities. I want to apply email rules to RSS feeds, feed mechanics to mailing lists, contact lists as view filters, identities as task organizers, Growl formats as cell-phone notification styles. I want my email correspondence with my mom to be at least as rich as my soccer "correspondence" with the MLSnet front page, and I want my conversation with MLSnet to be as malleable as my own iTunes song status.  

I want, I think, for all my conversations to be structurally equal. I want to be able to look at them organized by any of those elements: by source, by context or by my role, and by the unions and intersections thereof. I want to see conversations in context when there's context, and I want there to more often be far more context than a list. I want to be able to understand my side of my conversations in aggregate, and for the memberships of my conversations to be effortlessly expandable, and for my computers to remember things my head forgets, even when I can't remember whether I knew them to begin with.  
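
A minimal sketch of what "structurally equal" might mean in data terms (hypothetical field names, obviously not a spec): every conversation, whatever channel it arrives by, reduces to the same three elements, and grouping by any of them is then a one-liner instead of a change of application.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Conversation:
    source: str    # the other participants (person, collective, publication)
    context: str   # topic, relationship, or whatever frames the exchange
    role: str      # who I am in it: myself, a defined capacity, part of an audience
    channel: str   # email, IM, feed, mailing list -- the trivial subclass

conversations = [
    Conversation("Mom", "family", "son", "email"),
    Conversation("MLSnet front page", "soccer", "reader", "feed"),
    Conversation("project mailing list", "work", "tech lead", "mailing list"),
]

def group_by(items, key):
    groups = defaultdict(list)
    for item in items:
        groups[getattr(item, key)].append(item)
    return dict(groups)

# The same operation works for any of the three elements; the channel is
# just another attribute, not a wall between applications.
by_context = group_by(conversations, "context")
by_role = group_by(conversations, "role")
```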

I want, of course, the whole internet redesigned with this desire at its core, but I also want approximations of this in the meantime. I want Apple Mail or GMail to stop acting like somebody invented "mailing lists" and "address books" yesterday morning. Or I want Shrook or BlogBridge to speak POP3 and IMAP as fluently as RSS and OPML. Or I want Safari to look like Apple solving a problem from their own principles rather than letting somebody else define the form of the answer, or Flock aspiring to be a social application rather than just a browser with a few extra menu commands. I want Agenda and Magellan back now that I finally have enough information to justify them.  

I want us to have the conversations we could have if we didn't spend so much time just trying to keep track of what morbidly little we've already managed to say.
The main human information goal is understanding. Or wisdom, depending on your precise taxonomy. But either way, searching is plainly a means, not an end, and the current common incarnation of Search, which involves arbitrarily flattening a content space into a set of independent and logically equivalent Pages and then filtering them based on the presence or absence of words in their text, not only isn't an end, but is barely even a means to a means. This form of two-dimensional, context-stripping, schema-oblivious, answer-better-already-exist-somewhere searching is properly the very last resort, and it's a grotesque testament to the poverty of our information spaces that at the moment our last resort is often our only resort.

The first big improvement in searching is giving it schema awareness. I doubt the people behind IMDb spend much time thinking about themselves as technology visionaries, but IMDb Search is a wildly instructive model of what is not only possible but arguably almost inevitable if you know something about the structure of your data. IMDb presents both the search widgetry and the answers in the vocabulary of the data-schema of movies and the people who work on them, not in "keywords" and "pages", and understands intimately that in IMDb's information-space search exists almost exclusively for the purpose of finding an entry point at which to start browsing. You go to Google to "look for something", you go to IMDb to "look something up"; the former phrase implies difficulty and disappointment in its very phrasing, the latter the comfortable assumption of success.  
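
The difference is easy to dramatize with a toy example (invented records, nothing like IMDb's real data model): flat keyword search returns pages that happen to contain a string, while schema-aware search answers in the data's own vocabulary and hands you a typed entry point to start browsing from.

```python
# A tiny structured "database" of films (invented entries for illustration).
films = [
    {"title": "Alien", "year": 1979, "director": "Ridley Scott"},
    {"title": "Blade Runner", "year": 1982, "director": "Ridley Scott"},
    {"title": "Brazil", "year": 1985, "director": "Terry Gilliam"},
]

def keyword_search(pages, term):
    """Flat search: every record is just a bag of words."""
    return [p for p in pages
            if term.lower() in " ".join(map(str, p.values())).lower()]

def films_by_director(name):
    """Schema-aware search: question and answer use the schema of movies."""
    return [f for f in films if f["director"] == name]

print(keyword_search(films, "scott"))        # pages that mention "scott", somewhere
print(films_by_director("Ridley Scott"))     # an entry point into a filmography
```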

On the web at large, of course, there is no meaningful schema, and it's impossible to make any simplifying assumptions about the subject matter of your question before you ask it. It is more productive to search in IMDb than with Google not because IMDb's searching is better, but because its data is better. But this does not even fractionally exonerate Google, or anybody else who is currently trying to solve an information problem by defining it as a search problem. They're all data problems. Google has the hardest version of this problem, since they don't directly control the information-space they're searching, but they have more than enough power and credibility to lead a revolution if they can muster the vision and organization. And anybody building an "enterprise" search tool has no such excuse; the enterprise does control their information-space, at least out to the edges where it touches the public space, and every second that can be invested in improving the data will be at least as productive as an hour sunk into flatly searching it.  

So if I worked for a Searching company right now, I'd start madly redefining ourselves tomorrow. We are not a searching company, we are an information organization company. The last resort is necessary, but neither sufficient nor transformative. I'd pull the smartest people I had off of "search" and put them to work on tools for the other end of the information process, reaching to the humans who are creating it and giving them the power to communicate not just the words of what they know but the structure of it, and to the collective mass of people to help them communicate and recognize and refine their collective knowledge about the schemas of known and knowable things. This is why Google Base holds the future of Google, and why you should sell your Google stock right now if they keep treating it as mainly a way for someone to buy your unused exercise equipment from you using a credit card. It should be the world's de facto public forum for the negotiation of the schema of all human knowledge, and if it isn't, every other decision Google makes will be forced by whatever is.  

But well-structured data, though necessary, isn't sufficient either. The good news for "search" companies is that improving the data is itself just a means to an end. Ideal data only encodes what we already know. The problems of useful inference from known data are hugely harder and super-hugely more valuable than the current forms of searching, especially when you realize that the boundary between private and public data is an obstacle and an opportunity in both directions, not a wall to hide behind or run away from. The real future of "search" is in providing humans with the tools to form questions that haven't already been answered, and assemble the possible pieces of the answer, from threads of reasoning that traverse all kinds of territories of partial knowledge, into some form that synthesizes ideas that have never before even been juxtaposed, and onto which humans can further apply human powers where machine powers really fail -- fail because the machines are machines, not where they fail because we didn't take the time to let them be more thoroughly themselves -- so that they in turn can help us be more completely and wisely human.
I think it is becoming painfully clear that the web suffers from at least two critical design flaws, one of structure and one of usability.  

The usability flaw is the omission of real tracking and monitoring from the original browsing model. The "visited" link-state is no substitute for true unread marks, and HTTP response headers are not adequate building blocks for functional monitoring.  

RSS, born out of various motivations, is turning most coherently into a retrofit tracking/monitoring overlay for the web. My conversation with a feed is qualitatively poorer in context (and potentially in content) than my conversation with a web site, but it's qualitatively more manageable. The feed tells me when there's something new, and what it is, and lets me keep track of my interaction with it.  

This dynamic should have been built into the model from the outset, because without it the whole system does not scale in use. Providing it as an overlay, and a whole parallel information channel, is idiotic in every theoretical sense, but its pragmatic virtue is that a separate system can be built with far fewer technical and social dependencies.  

And while RSS is still nowhere near social critical mass among human information consumers, it is approaching a viable critical mass as a geek technology, and we will thus be increasingly tempted to lapse into thinking of it as a goal in itself, rather than a means. Witness, most glaringly, the fact that so far nobody, not even the people writing RSS-aware web-browsers, has attempted to use RSS to solve the original problem, which is making sense of the contents of a web site, not from it. And witness, almost as gallingly, the fact that we're pretending to think there's a future to the network model in which every reader is individually polling every information source every few minutes.  
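
The scaling complaint is just arithmetic. With made-up but not implausible numbers: a single feed with a million subscribers, each polling every fifteen minutes, fields over a thousand requests a second, nearly all of them to learn that nothing has changed.

```python
# Back-of-the-envelope polling load for one popular feed (assumed numbers).
subscribers = 1_000_000
poll_interval_seconds = 15 * 60
updates_per_day = 20                      # how often the feed actually changes

polls_per_subscriber_per_day = 24 * 3600 / poll_interval_seconds   # 96
requests_per_second = subscribers / poll_interval_seconds          # ~1111
requests_per_day = subscribers * polls_per_subscriber_per_day      # 96,000,000
useful_fraction = updates_per_day / polls_per_subscriber_per_day   # at most ~21%

print(f"{requests_per_second:.0f} requests/second")
print(f"{requests_per_day:,.0f} requests/day")
print(f"at most {useful_fraction:.0%} of polls could even in principle find anything new")
```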

In content, the same time-wasting cycle is going to replay in RSS that played in HTML: the new channel is initially lauded by sheltered tech geeks for freeing the "real" content from all the crap that used to surround it, and then quickly retaken by the forces of reality, which understand the commercial point of "all the crap". So ads and context will creep into feeds, and before long the content of RSS items will start to reproduce the whole experience of the originating web site, and there will no longer be anything streamlined or usably universal about the content of a feed.  

In scalability, the whole thing is just going to implode, or become horrendously convoluted as people scramble to patch over the network problems with proxies and collective batching.  

Of course, if we're going to have to rebuild the whole web for structural reasons, rebuilding the tracking/notification overlay on the current web is a throwaway project, but that doesn't mean it isn't worth doing, and it certainly doesn't mean somebody won't try to do it.  

If I were working for Microsoft or Apple right now (or, in theory, on Mozilla, but I'm not sure this can be done without corporate-scale backing), I'd have my R&D people putting serious work into treating RSS not as a content channel but as a source of the necessary metadata to build monitoring and tracking directly into the browsing experience. Forget trying to "teach web users about feeds", they shouldn't have to learn or care. Build the monitoring/tracking stuff around and into the browser where the user already lives and reads.  

If I were working for an infrastructure company, like Google or Akamai or maybe IBM, I'd have my R&D people hard at work on standards proposals and proof-of-concept prototypes for a cogently bidirectional subscription/syndication protocol that sends messages from the consumer to the source only when the consumer's interest changes, and from the source to the consumer only when the information changes. Quantum-leap bonus points for subsuming old-style email/IM messaging and web-browsing itself into the same new protocol. These are all most essentially conversations.  
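
Here is a rough sketch of the message flow that sentence implies (invented names, a thought experiment rather than a protocol proposal): the consumer speaks only when its interest changes, and the source speaks only when the information changes.

```python
class Source:
    """Publishes to interested consumers only when content actually changes."""

    def __init__(self):
        self.subscribers = set()

    def handle_interest_change(self, consumer, interested):
        # Consumer -> source: sent once, when interest changes, not every N minutes.
        if interested:
            self.subscribers.add(consumer)
        else:
            self.subscribers.discard(consumer)

    def publish(self, item):
        # Source -> consumer: sent once, when the information changes.
        for consumer in self.subscribers:
            consumer.receive(item)

class Consumer:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, item):
        self.inbox.append(item)

feed = Source()
reader = Consumer("reader")
feed.handle_interest_change(reader, interested=True)    # one message, ever
feed.publish("new entry")                               # one message per actual change
feed.handle_interest_change(reader, interested=False)   # one message to unsubscribe
```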

And in the meantime, if I were working on any kind of RSS/OPML-related application, I would take a day or two to stop and think about my goals, not in terms of today's syntaxes but in terms of the flow of information between human beings, and between machines as our facilitators. I'd want, and maybe this is just me, to be working on something that not only improves the lives of people using the imperfect tools it has to work with right now, but would improve the lives of people even more efficiently if the world in which it operates were itself improved. Sometimes a broken window demands plywood, but as a tool-maker I dream of making something you won't just throw away after this crisis passes.  
 

(Of course, the deeper and ultimately more costly of the web's two design flaws is the structural mistake, which is the original decision to base HTML on presentation structure, rather than content structure. This is a monumental tragedy, as it has resulted in humanity staging the largest information-organization effort in the history of the species, and ending up with something that is perversely and pathetically oblivious to how much more than screen-rendering engines and address resolvers our machines could have been for us. In building this first unsemantic web we may well have thrown away more human knowledge than we've captured, and now we're going to have to build the whole thing over again, more or less from scratch, against the literally planetary inertia of our short-sighted mistakes.)
Here is my favorite current example of a good bad internet business idea. My wife and I have a number of friends with whom we often get together for variously collaborative dinners in various permutations of availability. Nearly everyone has some kind of eating preference, and several of our friends have menu-changing physical or moral constraints like wheat allergies, lactose intolerance and levels of vegetarianism. Keeping track of all the personal variables and their interactions is hardly impossible, but neither is it trivial.  

DinnerPeace, then, would be a web-based eating-constraint resolution service. Each participant specifies their own food rules, the dinner's host selects the guests for a specific event, and the service automatically combines the guests' needs to produce a composite constraint profile for the evening's menu.  

Later versions might add invitations, recipe-database integration ("I've got these eight people, a lot of spinach, and a good sale on Alaskan salmon; give me some suggestions."), pre-event/potluck planning discussions, post-event notes and recipe-sharing, and historical context ("Oh, and I don't want to make anything I already made for any of these people before.").  
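
The reconciliation logic at the heart of it is small. A hypothetical sketch (invented guests and constraint vocabulary), just to show that the composite profile is mostly set arithmetic:

```python
# Each guest's rules: things the menu must exclude, plus a freeform note.
guests = {
    "A": {"exclude": {"wheat"}, "note": "celiac, not optional"},
    "B": {"exclude": {"dairy"}, "note": "lactose intolerant"},
    "C": {"exclude": {"meat", "fish"}, "note": "vegetarian"},
    "D": {"exclude": set(), "note": "eats anything"},
}

def composite_profile(invited):
    """The evening's menu must avoid the union of everyone's exclusions."""
    excluded = set().union(*(guests[g]["exclude"] for g in invited))
    return {"exclude": excluded,
            "notes": {g: guests[g]["note"] for g in invited if guests[g]["exclude"]}}

print(composite_profile(["A", "C", "D"]))
# {'exclude': {'wheat', 'meat', 'fish'}, 'notes': {...}} -- no wheat, vegetarian menu
```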

As a stand-alone business this is a fairly dismal idea. Users won't be willing to pay much, if anything, for what is at best an appealing minor convenience. Food-related advertisers might be interested, but the only ones really likely to benefit would be grocers local to the people preparing dishes, and there probably won't be a critical mass of participants in any given locality, soon or ever.  

More significantly, though, an independent service is simply the wrong model for the idea. It requires the builder to provide and maintain all the framework of a generic social service (notably member- and discussion-management), and requires the users to join another thing, keep track of another set of credentials, and of course provide a bunch of information whose privacy they have to consider, and which is unlikely to be reusable in any other parallel or future venue.  

The biggest jump-start for building little good ideas like this would be a pre-existing public infrastructure for distributed identity management, with portable authentication, ratification of trust, communication uniqueness (that is, your new managed identity is sufficient contact information for IM, email, etc.), arbitrarily extensible personal profiles, integrated personal past-and-future calendar-handling and straightforward control over what information is exposed to whom. This needs to be at least as easy and ubiquitous as email is already. In the new world, in fact, this (and not just email) needs to be the new baseline for online presence, in the same way that the baseline for telephone presence grew from home-phone to home-phone+answering-machine to home-phone+remote-messaging to cellular.  

This new baseline identity system would get DinnerPeace and everything like it (including much bigger things with ultimately the same structure) out of the commodity headaches of name arbitration, password resetting, access control, scheduling, elemental data-storage, history, recovery, etc., and leave each inventor to put all their work into their idea's unique characteristics, which in the case of DinnerPeace are really only a reference schema and vocabulary for the representation of a person's eating constraints, and the associated reconciliation logic for sets of these constraints and their histories.  

The business problem may be a little harder than the technical problem, but it is of the same shape. Along with the new identity system must come a distributed microcommerce system of which ad-click commissions are only the distant ancestor, their replacement being much closer to a pervasive method of tracking and apportioning credit for all the influences on each spending event, including the new possibility of tipping as a networked and optionally aggregated act.  

Thus DinnerPeace, and all the other little pieces of a smarter new world, mostly shouldn't need "business plans" in the old sense, nor investors or funding or stock or even companies. They should live or die or evolve based on their usefulness, and profit or not based on how much commerce they touch. DinnerPeace should generate one little trickle of money from how it affects what its users buy, another from its users' direct gratitude, and a third from its share of its users' collective appreciation of all such services they employ. This money flows in outgoing trickles, in turn, to the contributors to the service's logic.

Probably the trickles from one idea usually don't add up to a living for even one person, let alone several, but then most of the little ideas from which they flow are not life's works, either. They are inspirations of moments, and the work of hours or days. The new world is improved by little touches, and rewards them with little gifts. And if it becomes less compelling to dream of retiring on windfalls of luck or greed, then maybe it will become easier to live by caring.
[Postscript to Content and Presentation Are 3...]  

Arguably the most fundamental specific semantic failing in the current HTML approach to information communication is that it does not effectively distinguish between a) content unique to the individual information node represented by the page URL, and b) navigation and context appearing on the same web page but belonging conceptually to a higher containing node. Thus, most obviously, indexers have no way to reliably attribute text-matches or links to the correct node. Less obviously, perhaps, this makes it difficult for reading software to easily recognize that two URLs present the same content in different contexts, which is currently the shape of quite a bit of content abuse, or different content in the same context, which at this point is probably the shape of most legitimate web information. The real solution still lies in separating content structure and presentation structure, but a microformat-style approach might help a lot in the interim.  

PPS: There is also currently no good method for separating the content and context components of a URL, either, meaning there is often no user-visible URI for the content node itself. This may be moot most of the time in the new world, since content will either have a context imposed upon it or will invoke its own default context. But standardizing URL formats in the short-term may be even harder than standardizing class-based pseudo-semantics...
At the moment, the tools for online publishing are mostly tools designed by software people for an audience of other software people, so it shouldn't be too surprising that by real (i.e., pre-/non-online) publishing standards most of them are not only fairly terrible in particulars, but basically comprehensively alien and unresponsive down to the conceptual model.

The reason the HTML-tables-vs-CSS-blocks war wasn't over ten minutes after CSS was invented, for example, is that the CSS float/span/margin/escape model chose (more in ignorance than defiance, I'm sure) its own internal geometrical cleverness over the potential applicability of centuries of conventional information design. The most basic tool of scalable information design is and has always been the layout grid. CSS is capable of producing a simple layout grid, but not simply. Arguably it's even harder to recognize a page layout by reading its CSS than it is to write the CSS. Structural elements are structural by convention if you're lucky, coincidence if you aren't, and certainly not by nature, and are unavoidably cluttered by non-structural formatting elements.  

An HTML table isn't a good tool for page layout, either, but for the most part a simple layout grid can be expressed by a simple HTML table, and with decent code-indentation it is possible to more or less grasp the presentation structure of a simple table by looking at its code. The fact that tables require the content and the structure to be intertwined is a huge problem, and the fundamental misconception of the scheme emerges the moment any kind of table-nesting is required for page-structural and/or content-structural reasons. The non-programmer's mental model of a box is an actual box. You can put small actual boxes inside larger ones. If you forget to put a bottom on a small box inside a large box, you get a bottomless small box inside a normal large box, not a deformed small box falling out of the bottom of a mangled large box. Nobody, not even a programmer, intuitively thinks of object construction procedurally, and nobody but a programmer (and not most of those) would want to.  

Ironically, painfully, the closest thing in current web technology to simple modeling of simple presentation structure is technically the most flawed: frames. I would recommend their use for almost no public purpose, and frameset code is no pleasure to look at, either, but at least frame definition is separated from content (too separated, in fact, but separation is still a good idea), and its grammar operates by explicitly specifying structure from the outside in (with recursion, albeit awkwardly), rather than calculating it by implication from the inside out. If frames had been defined as page (and sub-page) elements pre-declaring layout structure, rather than window elements that contain content pages by reference, we might have had something.  

While we're redoing the layout model, though, we also need to fix our basic misunderstanding of the natural structure of repeatable content presentation. Any remotely enlightened programmer "knows" that content should be separated from presentation, but these are actually three things, not two: content, presentation structure, and actual formatting, and in practice the formatting can usually be further separated into content formatting (the formatting of the information unique to this content node) and page layout (the site-identity and navigation and other framing stuff displayed around it but conceptually invoked from context).  

Take this furialog entry itself, for example. Its content structure includes a title, a display date, a publication timestamp, a set of tags and a content block. This content structure is then mapped into a presentation structure for this web page which is quite a bit simpler: just a header and a body. The title, display date and tags are concatenated into a single formatted block for presentation purposes, and the timestamp is not (in this format) presented at all. The formatting rules, then, are applied to the presentation structure, not the content structure. This is important, because elsewhere on this site content with very different content structures gets mapped into the same presentation structure, subject to the same formatting rules, and thus it is possible for me to define a single rule-set that drives the middle layers of the production of all my content, a single page-layout that drives the outer layers, and appropriate extensions to which the sub-formatting of special-purpose inner blocks like tables and photosets can be delegated.  
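
In code, the mapping that paragraph describes is almost embarrassingly small (a sketch of the idea, with placeholder values; not the actual furialog production system):

```python
# Content structure of an entry (field values are placeholders).
entry = {
    "title": "An example entry",
    "display_date": "22 July 2006",
    "timestamp": "2006-07-22T10:00:00",
    "tags": ["essay"],
    "body": "<p>...</p>",
}

def to_presentation(content):
    """Map content structure onto a much simpler presentation structure."""
    header = " · ".join([content["title"],
                         content["display_date"],
                         ", ".join(content["tags"])])
    # Note: the timestamp simply isn't presented in this format.
    return {"header": header, "body": content["body"]}

# Formatting rules (typography, color, spacing) then apply to "header" and
# "body" only, so very different content structures can share one rule-set.
presentation = to_presentation(entry)
```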

The usefulness of RSS, similarly, is largely due to the fact that it defines a common presentation structure into which dissimilar content structures may be mapped. Different feed sources may put semantically different and/or compound information into the title and description fields, but the writing software doesn't have to know or care about the formatting rules, and the reading software doesn't have to know or care about the content structure. The drawback of using presentation structure as an interchange medium, of course, is that the reading software can't get the original content structure even if it could do something interesting with it. Human readers can re-interpolate it, but for the machines RSS tends to be lossy.  

This was also, maybe obviously, the original design concept for HTML: standardize a presentation structure that can then act as universal intermediary between content and screen. RSS gets away with its limitations because it's an alternate channel, but as the web emerged, HTML was the web, and universal intermediation quickly proved unsatisfying in both directions: writers couldn't exercise enough control over formatting when they needed or wanted to (and programmers often assume "want" is always the right word here, but publishers understand that publishing is different than just writing), and machines couldn't add enough processing insight without finer-grained and/or domain-specific data schema.  
 

Here, then, are some of the elements of the new model I want, some of what, from twenty years of working with information in both publishing and software spheres (and this isn't what I thought after ten, so maybe it won't be what I think after forty, but is that because I change, or because the world does?), I believe the whole system needs in order to fulfill its potential as a medium at once for direct human communication and human understanding supplemented by computer reasoning:  

- Not only should all meaningful content begin in its own schema, but this machine-parsable content structure should, inherently and automatically, underlie and accompany the human-visible presentation and formatting of all information. Imagine, if this were our only problem, adding an Elemental XML section to every HTML page, so that each page would carry its pre-presentation data in addition to all of its other markup, and splitting the View Source command into View Presentation Code and View Source Data to offer different tools for each. (Microformats are a similarly motivated effort, establishing presentation-structure conventions for particular kinds of content structure, but conflating the two kinds of structure is only reasonable as a short-term tactical compromise in a system where there's no good way to separate them, anyway.)  

- The mapping of content structure into presentation structure must be so trivially easy, and such an obvious entry into a richly distributed universe of sophisticated standard lexicons and templates, that nobody will even be tempted to try to skip the step by pretending that their content doesn't have any other structure than its presentation. Imagine, as you're publishing something, that you could choose dynamically from not only your own repertoire of formats and the templates provided by your authoring software, but from a social universe of templates, of all granularities, available from anyone who has one to offer. The laborious specific control of your own formatting, or the flexible custom extension of your presentation structure, should be optional opportunities for the motivated, not required burdens of participation.

- The layout model should be based on recursive containment from the outside in (probably with layers, for graphic depth). Exceptional behavior, like violating margins, must be declared explicitly, not arise from routine errors in normal use. It should be trivial to map one element of presentation structure into one formatting container, to flow multiple elements of presentation structure into one formatting container, or to flow one or more elements of presentation structure through an arbitrarily linked series of formatting containers. We've learned a lot about UI and usability since Quark and PageMaker were invented, and not everything we ever wanted to do in books and magazines applies the same way on a screen or in an interactive environment, but in many ways the screen world is only now approaching the point where print was before computers, so as is already beginning to sink in about typography, more old lessons apply than not.  

- Where the new medium has new abilities, our tools should be granting new powers to us, not demanding new chores. AJAX is a clever retrofit of what should have been a native idiom. Forget paging vs scrolling, Google Maps is how everything offscreen should work, the computers dealing with computer constraints like bandwidth and memory without bringing people into issues that don't concern people. And if content were encapsulated intelligently, it wouldn't take Flash or iframes or scripts to make a piece of a page do things a whole page can already do. The web is a bad application environment kludged onto a bad publishing environment, and it ought to be a composite environment in which publishers realize that sometimes they are publishing applications and experiences and devices, not just pictures and words.  

- The separation of content, presentation structure and formatting is necessary for everyone's purposes, not just the auteurs and the programmers and the machines. Good, reusable, maintainable presentation design, even when it's done by dedicated designers, should always work first from presentation structure to formatting rules. The handcrafting of individual exceptions should be an elaboration on a framework of rules, not a repetitive substitute for making the effort to understand patterns. I see way too much CSS code that specifies individual pixel-unit fonts and margins and paddings for every single id-numbered div in an entire page (or, worse, for every id-numbered div in a whole set of pages, so that the style-sheet can be "reused" across all of them). Sometimes this is a result of having misused the (X)HTML to model the content structure, rather than the presentation structure, so that the formats and the selectors don't line up right, but I bet it's more often just laziness. It's always faster to add a pixel than to generalize a rule about why, but it takes an incredibly tiny number of exceptions to complicate a rule-set so irretrievably that it can only be modified by further exceptions. For all but the rarest purposes where chaos is the order, the fewer rules the better, and the square of the fewer exceptions the better.  
 

But fine. No matter what you credit as partial precedent, ubiquitous electronic publishing and communication as social infrastructure are at best in their adolescence, and it's amazing we've gotten this far with these primitive tools. The willingnesses to restart and rebuild are in our character, just as surely as myopia and impatience are our curses. Generations are how we learn.

So yes, this means starting over, but the new ways, when they're right, are easier than the old ones, and we only shrink from them out of fear of change. In place of the current HTML/CSS information foundation we will have data (content structure), mapping (content structure to presentation structure), page layout (the containers through which the presentation structures flow) and formatting (typography, color, spacing, ornamentation and illustration within those containers). Any of these can be implied from defaults (set by the reader, writer or both), included by reference to external sources, included by recursion, included conditionally by context, or defined inline. Most of this will still be mediated by tools for most people, but the tools will be simpler and more powerful, and will let us focus more on what we're saying to each other, and less on how.  

And since the new way delivers both the formatting and the semantics, this will be a practical revolution that actually subsumes a theoretical one, better human communication interleaved with better machine communication, and thus the foundation for both better shopping and the dream of the internet being something qualitatively more profound and transformative than shopping.
C1: Remove all four strings, wind four (or five) layers of ordinary plastic wrap around the fingerboard, and re-string. Theocracies are inevitably undone by meteorology.  

C5: Isolate this cellist stage left (see diagram), in a bidirectionally soundproof enclosure with visibility only towards the house. Performer should follow the score as normal, with timing cues from audience reaction. If you are sure of your own worth, you will be less reluctant to criticize others.  

C8: Adjust to Morat tuning (modern) in movements 2 and 5. Never mistake hope for authority.  

Va2: Performer should be contact-miked at the base of the throat (concealed under clothing if possible), and should whisper as indicated in the score. This channel should be mixed at the volume of the loudest single viola. America is not the only nation in which the market has failed God.  

Va3: Perform standing, hooded. Air is the element that does not dream of airlessness.  

1Vn2: Affix round mirror of 3" diameter to center rear of instrument body. The light is warmer here, but we feel tired and unsafe.  

1Vn9: Each time an audience member coughs, stand and announce the next number (counting down from 50). After zero is reached, stop playing and stand for the remainder of the piece. If you interrupt a story, you assume responsibility for its conclusion.  

2Vn3: Do not doubt yourself. You are born with wisdom, and lose it only through fear.  

2Vn6: Insert nine glass marbles, as large as will fit through sound holes. Do not allow other people's envy to become your beacon.  

2Vn7: Sit with bare feet in a tray of sand. Learn to say your name without speaking, without opening any doors.  

FH1 & 3: Where marked in score, turn to face each other and join mouths of horns together. Charity is a distraction.  

T2: Marching. Not forgetting the gift.  

X1: Satellite radio. If music is not a language, then neither is Dutch.  

X2: Manual typewriter (bell removed). Kind words are bought with foreign blood.  

X3: Mild regret. There is no excuse for independence, but sometimes the alternatives take longer than lives.  

X4: LG5275, ringers "Personal 2", "Personal 4" and "Young Cranes", volume "Medium High". The number should be listed in the program. Cities should be built by the children that will have to be born there.  

X5: Shoe horn, dry soil. White is both the color of dust and the weight of famine.  

Conductor: In addition to normal movements, a rook on an edge row or rank may turn at open corner squares and continue along the next rank or row. Circumnavigation is not why the Earth is round. "Round" is not what I mean, but math is the only real secret I have left.  

Composer: The gap between morality and ethics is best understood as compassion. In the truest portraits, the voice of the brush is not the paint but the canvas.  

Mezzanine right: Provide materials and instructions for paper airplanes to be thrown towards orchestra left. If you teach a baby to fly, she will leave the earth to us.  

Orchestra center: One hour before show time, soak each seat cushion in warm honey. Keep to yourself. The nights are long and dense with lies.  

Outer lobby: Line floor with pre-1986 magazine cigarette advertisements. Power is a metal, but forgiveness is a shape, patience is a hammer, and clarity is a vice.  

Ladies toilet: Scented towels, but no running water. It doesn't matter what other people learn.  

Front sidewalk: During intermission, oil and set alight. Unlike the sun, the moon must be invited, and sometimes we forget.  

Nearest seacoast: Speak quietly to the water when there is nothing left to say.  

Two weeks prior: Cancel, wait one hour, then retract the cancelation. It is good to grasp young that promises are made of skin.  

Afterwards: All dissent is interred in the syntax of its assumptions. Only by moving outside of politics, formally, can one accurately articulate the constraints of historical inertia. Sumerian music utilized four extra keys we now deny. It is possible to make paper out of water and salt, but governments are at best compromises between expedience and flight. They tell us that all the new planets are equally flawed. Climates without seasons breed corruption among well-meaning idolators as readily as larvae among new-fallen fruit. Attach hinges with black tape. That is no longer the lowest note once we've thought of another. There was a town here, but then you came and told us the story of how the stars were brought to you and you clothed them in orange silk and perfect poetry, and our children left to follow their souls and our parents left to follow our children and now there is no home here but the way the air shifts to stay behind us when we fall upon our palms and our fables and the marble rises towards what our tongues were before we sold them to you for a dream of silence and the tempo of sky.