18 January 2006 to 27 April 2005 · tagged tech/essay
At the moment, the tools for online publishing are mostly designed by software people for an audience of other software people, so it shouldn't be too surprising that by real (i.e., pre-/non-online) publishing standards most of them are not only fairly terrible in particulars, but basically comprehensively alien and unresponsive down to the conceptual model.

The reason the HTML-tables-vs-CSS-blocks war wasn't over ten minutes after CSS was invented, for example, is that the CSS float/span/margin/escape model chose (more in ignorance than defiance, I'm sure) its own internal geometrical cleverness over the potential applicability of centuries of conventional information design. The most basic tool of scalable information design is and has always been the layout grid. CSS is capable of producing a simple layout grid, but not simply. Arguably it's even harder to recognize a page layout by reading its CSS than it is to write the CSS. Structural elements are structural by convention if you're lucky, coincidence if you aren't, and certainly not by nature, and are unavoidably cluttered by non-structural formatting elements.  

An HTML table isn't a good tool for page layout, either, but for the most part a simple layout grid can be expressed by a simple HTML table, and with decent code-indentation it is possible to more or less grasp the presentation structure of a simple table by looking at its code. The fact that tables require the content and the structure to be intertwined is a huge problem, and the fundamental misconception of the scheme emerges the moment any kind of table-nesting is required for page-structural and/or content-structural reasons. The non-programmer's mental model of a box is an actual box. You can put small actual boxes inside larger ones. If you forget to put a bottom on a small box inside a large box, you get a bottomless small box inside a normal large box, not a deformed small box falling out of the bottom of a mangled large box. Nobody, not even a programmer, intuitively thinks of object construction procedurally, and nobody but a programmer (and not most of those) would want to.  
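
To make the contrast concrete, here is a minimal sketch of the same two-column grid expressed both ways (the markup, ids and percentages are purely illustrative): the table version reads roughly like the grid it produces, while the floated version reads like arithmetic about the grid.

```html
<!-- A two-column grid as a table: the nesting of row and cells
     roughly mirrors the boxes you would draw on paper. -->
<table width="100%" cellspacing="0">
  <tr>
    <td width="25%" valign="top">navigation</td>
    <td valign="top">main content</td>
  </tr>
</table>

<!-- The same grid as floated divs: the structure lives in the
     interaction of float, width and margin, declared elsewhere. -->
<style>
  #nav  { float: left; width: 25%; }
  #main { margin-left: 25%; }
</style>
<div id="nav">navigation</div>
<div id="main">main content</div>
```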

Ironically, painfully, the closest thing in current web technology to simple modeling of simple presentation structure is technically the most flawed: frames. I would recommend their use for almost no public purpose, and frameset code is no pleasure to look at, either, but at least frame definition is separated from content (too separated, in fact, but separation is still a good idea), and its grammar operates by explicitly specifying structure from the outside in (with recursion, albeit awkwardly), rather than calculating it by implication from the inside out. If frames had been defined as page (and sub-page) elements pre-declaring layout structure, rather than window elements that contain content pages by reference, we might have had something.  

While we're redoing the layout model, though, we also need to fix our basic misunderstanding of the natural structure of repeatable content presentation. Any remotely enlightened programmer "knows" that content should be separated from presentation, but these are actually three things, not two: content, presentation structure, and actual formatting, and in practice the formatting can usually be further separated into content formatting (the formatting of the information unique to this content node) and page layout (the site-identity and navigation and other framing stuff displayed around it but conceptually invoked from context).  

Take this furialog entry itself, for example. Its content structure includes a title, a display date, a publication timestamp, a set of tags and a content block. This content structure is then mapped into a presentation structure for this web page which is quite a bit simpler: just a header and a body. The title, display date and tags are concatenated into a single formatted block for presentation purposes, and the timestamp is not (in this format) presented at all. The formatting rules, then, are applied to the presentation structure, not the content structure. This is important, because elsewhere on this site content with very different content structures gets mapped into the same presentation structure, subject to the same formatting rules, and thus it is possible for me to define a single rule-set that drives the middle layers of the production of all my content, a single page-layout that drives the outer layers, and appropriate extensions to which the sub-formatting of special-purpose inner blocks like tables and photosets can be delegated.  
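
A sketch of that mapping (the element names here are hypothetical placeholders, not quotations from the actual production system):

```xml
<!-- Content structure: what the entry actually consists of. -->
<entry>
  <title>...</title>
  <display-date>...</display-date>
  <published>...</published>
  <tags>
    <tag>tech</tag>
    <tag>essay</tag>
  </tags>
  <body>...</body>
</entry>

<!-- Presentation structure for this page: much simpler.  Title,
     display date and tags have been concatenated into one header
     block, and the publication timestamp is dropped entirely. -->
<presented-entry>
  <header>...</header>
  <body>...</body>
</presented-entry>
```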

The usefulness of RSS, similarly, is largely due to the fact that it defines a common presentation structure into which dissimilar content structures may be mapped. Different feed sources may put semantically different and/or compound information into the title and description fields, but the writing software doesn't have to know or care about the formatting rules, and the reading software doesn't have to know or care about the content structure. The drawback of using presentation structure as an interchange medium, of course, is that the reading software can't get the original content structure even if it could do something interesting with it. Human readers can re-interpolate it, but for the machines RSS tends to be lossy.  
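
A sketch of the effect, with two invented items (only the standard RSS element names are real here): a text entry and a photo post arrive with very different content structures, but both flatten into the same title/description presentation structure, and the photo's original tags and camera data simply don't survive the trip.

```xml
<!-- An item that began life as a text entry... -->
<item>
  <title>An invented blog-entry title</title>
  <description>The formatted entry text, as one opaque block.</description>
</item>

<!-- ...and one that began life as a photo with tags and camera data,
     now reduced to the same two fields. -->
<item>
  <title>An invented photo caption</title>
  <description>&lt;img src="..." /&gt; plus whatever caption text fit.</description>
</item>
```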

This was also, maybe obviously, the original design concept for HTML: standardize a presentation structure that can then act as universal intermediary between content and screen. RSS gets away with its limitations because it's an alternate channel, but as the web emerged, HTML was the web, and universal intermediation quickly proved unsatisfying in both directions: writers couldn't exercise enough control over formatting when they needed or wanted to (and programmers often assume "want" is always the right word here, but publishers understand that publishing is different than just writing), and machines couldn't add enough processing insight without finer-grained and/or domain-specific data schema.  
 

Here, then, are some of the elements of the new model I want, some of what, from twenty years of working with information in both publishing and software spheres (and this isn't what I thought after ten, so maybe it won't be what I think after forty, but is that because I change, or because the world does?), I believe the whole system needs in order to fulfill its potential as a medium at once for direct human communication and human understanding supplemented by computer reasoning:  

- Not only should all meaningful content begin in its own schema, but this machine-parsable content structure should, inherently and automatically, underlie and accompany the human-visible presentation and formatting of all information. Imagine, if this were our only problem, adding an Elemental XML section to every HTML page, so that each page would carry its pre-presentation data in addition to all of its other markup, and splitting the View Source command into View Presentation Code and View Source Data to offer different tools for each. (Microformats are a similarly motivated effort, establishing presentation-structure conventions for particular kinds of content structure, but conflating the two kinds of structure is only reasonable as a short-term tactical compromise in a system where there's no good way to separate them, anyway.) A rough sketch of what such an embedded source-data section might look like follows this list.

- The mapping of content structure into presentation structure must be so trivially easy, and such an obvious entry into a richly distributed universe of sophisticated standard lexicons and templates, that nobody will even be tempted to try to skip the step by pretending that their content doesn't have any other structure than its presentation. Imagine, as you're publishing something, that you could choose dynamically from not only your own repertoire of formats and the templates provided by your authoring software, but from a social universe of templates, of all granularities, available from anyone who has one to offer. The laborious specific control of your own formatting, or the flexible custom extension of your presentation structure, should be optional opportunities for the motivated, not required burdens of participation.

- The layout model should be based on recursive containment from the outside in (probably with layers, for graphic depth). Exceptional behavior, like violating margins, must be declared explicitly, not arise from routine errors in normal use. It should be trivial to map one element of presentation structure into one formatting container, to flow multiple elements of presentation structure into one formatting container, or to flow one or more elements of presentation structure through an arbitrarily linked series of formatting containers. We've learned a lot about UI and usability since Quark and PageMaker were invented, and not everything we ever wanted to do in books and magazines applies the same way on a screen or in an interactive environment, but in many ways the screen world is only now approaching the point where print was before computers, so as is already beginning to sink in about typography, more old lessons apply than not.  

- Where the new medium has new abilities, our tools should be granting new powers to us, not demanding new chores. AJAX is a clever retrofit of what should have been a native idiom. Forget paging vs scrolling: Google Maps is how everything offscreen should work, the computers dealing with computer constraints like bandwidth and memory without bringing people into issues that don't concern people. And if content were encapsulated intelligently, it wouldn't take Flash or iframes or scripts to make a piece of a page do things a whole page can already do. The web is a bad application environment kludged onto a bad publishing environment, and it ought to be a composite environment in which publishers realize that sometimes they are publishing applications and experiences and devices, not just pictures and words.

- The separation of content, presentation structure and formatting is necessary for everyone's purposes, not just the auteurs and the programmers and the machines. Good, reusable, maintainable presentation design, even when it's done by dedicated designers, should always work first from presentation structure to formatting rules. The handcrafting of individual exceptions should be an elaboration on a framework of rules, not a repetitive substitute for making the effort to understand patterns. I see way too much CSS code that specifies individual pixel-unit fonts and margins and paddings for every single id-numbered div in an entire page (or, worse, for every id-numbered div in a whole set of pages, so that the style-sheet can be "reused" across all of them). Sometimes this is a result of having misused the (X)HTML to model the content structure, rather than the presentation structure, so that the formats and the selectors don't line up right, but I bet it's more often just laziness. It's always faster to add a pixel than to generalize a rule about why, but it takes an incredibly tiny number of exceptions to complicate a rule-set so irretrievably that it can only be modified by further exceptions. For all but the rarest purposes where chaos is the order, the fewer rules the better, and the square of the fewer exceptions the better.  
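
As promised above, here is a rough sketch of the first item on this list: a page that carries its pre-presentation data alongside its presentation markup. Every element name in the source-data layer is invented (HTML has no such element); the point is only the layering, and the way a View Source Data command would have something coherent to show.

```html
<html>
  <head>
    <!-- Hypothetical machine-facing layer: the content structure,
         before any mapping into presentation. -->
    <source-data>
      <entry>
        <title>...</title>
        <published>...</published>
        <tags>...</tags>
        <body>...</body>
      </entry>
    </source-data>
  </head>
  <body>
    <!-- Human-facing layer: ordinary presentation markup, which a
         "View Presentation Code" command would show. -->
    <div class="header">...</div>
    <div class="entry-body">...</div>
  </body>
</html>
```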
 

But fine. No matter what you credit as partial precedents, ubiquitous electronic publishing and communication as social infrastructure are at best in their adolescence, and it's amazing we've gotten this far with these primitive tools. The willingnesses to restart and rebuild are in our character, just as surely as myopia and impatience are our curses. Generations are how we learn.

So yes, this means starting over, but the new ways, when they're right, are easier than the old ones, and we only shrink from them out of fear of change. In place of the current HTML/CSS information foundation we will have data (content structure), mapping (content structure to presentation structure), page layout (the containers through which the presentation structures flow) and formatting (typography, color, spacing, ornamentation and illustration within those containers). Any of these can be implied from defaults (set by the reader, writer or both), included by reference to external sources, included by recursion, included conditionally by context, or defined inline. Most of this will still be mediated by tools for most people, but the tools will be simpler and more powerful, and will let us focus more on what we're saying to each other, and less on how.  

And since the new way delivers both the formatting and the semantics, this will be a practical revolution that actually subsumes a theoretical one, better human communication interleaved with better machine communication, and thus the foundation for both better shopping and the dream of the internet being something qualitatively more profound and transformative than shopping.

My online information life is easily complicated enough for me to be sure that our current tools and models for how a person relates to information connectivity are still wildly immature. "Web 2.0" is a hopelessly inadequate vision of the future because the separation of "Web" as its own world is exactly part of the problem. My email, chat, feed-reading and web-browsing software perform intimately interrelated (to me) communication and monitoring functions with a magnificent (to them) disregard for each other, and often for me. Only slightly less glaringly, but of increasing significance, my information publishing activities (both conventional and micro-) go on in similarly near-complete isolation from my information consumption.

What I need, and what everyone will need before the net can be considered an inclusive participatory extension of human social communication, is a much simpler, more straightforward and more agile mapping of system tools onto the four fundamental components of information interaction:  

- My information.  

- Distribution of my information to others.  

- Monitoring the flow of other people's information.  

- Consumption of other people's information (including connecting it back to my information).  

At the moment, this whole cycle is a mess.  
 

Information  

At the moment there are marginally acceptable tools only for the most tightly constrained special-case forms of information creation. I think person-to-person email is now mature enough, for example, that the nature of the tool set isn't excluding anybody who would otherwise be eligible, although obviously economics, technology and literacy still pose significant obstacles to many people getting to use those tools in the first place. As soon as you try to raise the level of function and abstraction above the individual message to or from a known correspondent, though, the tools begin to show their limitations. Communicating with groups is cumbersome. Communicating with strangers is almost systemically incapacitating. Carrying on extended or episodic conversations is difficult, and relating the accumulating bodies of correspondence to the personal relationships they nominally express and inform is so poorly supported by the tools that I suspect most users effectively do not retain any value from their electronic interpersonal correspondence outside of their own heads.

Nothing beyond email is even remotely comparable in developmental maturity. There are individually decent tools for extremely rudimentary self-publishing in the forms of simple streams of text and photos, and anything beyond that falls off into the usable domain of a vanishingly tiny minority of participants.  

Of course, arguably the only mature pre-connectivity computer function ever was unstructured word-processing, anyway, and if creating and managing structured information for their own purposes is beyond most users, then it hardly matters that there are no easy publishing methods awaiting that information. And thus any new concepts of syndicating a user's own information out to external forums, or re-consolidating distributed contributions back into central management and retention, don't even have a foundation on which to build.

Thus it seems to me that the first thing we really need, underneath all of these tools and before we really start talking about communication at all, is an underlying data system, as opposed to (but just as native and optimized and standardized as) a file system. All our information creation tools should be manufacturing data, not files, and always with the bias towards representing that data in the most application-neutral, self-describing, reusable and standardized way (like Elemental XML, for example, or some isomorph). It should not only be effortless to exchange information between, for a crashingly trivial example, iTunes and Excel and your blog sidebar, but more than that, the way everyone (people and systems alike) should be thinking about the process is that the data exists independent of all the applications that merely happen to manipulate it and give it back.  
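
One illustrative record in that spirit (the element names are invented, and the vocabulary matters less than the shape): self-describing enough that a music player, a spreadsheet and a blog sidebar could all read it without any of them owning it.

```xml
<!-- A single, application-neutral record of something that happened. -->
<listening>
  <track>
    <artist>...</artist>
    <title>...</title>
    <album>...</album>
  </track>
  <played>...</played>
  <!-- Even the rating carries its own scale, so nothing downstream
       has to guess what "4" means. -->
  <rating scale="5">4</rating>
</listening>
```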
 

Distribution  

Possibly this is just a function of my own egotism and information-retentive nature, but I continue to think that my information system should remember my information first, and send it elsewhere second. Moreover, I think that ultimately we will understand that email is merely a historically earlier-understood instance of the same general distribution problem as blog publishing, photo sharing, restaurant-review syndication, collaborative filtering and everything else. Put another way, general information publishing/sharing tools must mature to the point that they accomplish the personal-correspondence special-case as easily as (and preferably more easily than) our current single-purpose email tools.

The large conceptual shift that needs to take place in the rest of the information world, as the new storage model makes it possible, is away from the assumption that a sharing format is necessarily an authoring environment. Collaborative tagging, for example, would be much more effective if annotation was a native function whose results could also be shared. Sharing (including full privacy control) should be an elemental function of the underlying data system, so that (for example) you don't "export" or "upload" a photograph to Flickr and then assign it tags and descriptions and sets, you tag and describe and group photos for your own purposes, and then choose some of them to share via a particular online medium.  
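
A sketch of the distinction (invented element names again): the annotation belongs to the photo in your own data system, and sharing is a separate, revocable property layered on top of it rather than an export step.

```xml
<photo id="...">
  <!-- Annotation made for your own purposes, kept with the photo. -->
  <taken>...</taken>
  <tags>
    <tag>fog</tag>
    <tag>harbor</tag>
  </tags>
  <description>...</description>

  <!-- Sharing as a later, separate decision, not an "upload" that
       strands a copy and its metadata somewhere else. -->
  <share to="flickr" visibility="public"/>
  <share to="family" visibility="private"/>
</photo>
```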
 

Monitoring  

Although several different ideas commingle in the current state of RSS/Atom feeds, the two most central innovations are the provision of an automatic monitoring framework for the otherwise manually-browsed web, and the creation of a parallel lowest-common-denominator content format to go with the style- and context-heavy publishing forms used on the web.

These will need to be disentangled, because the monitoring function properly belongs to a higher level. A conversation should not be constrained to different monitoring tools because it happens to take place in email, or on a mailing list, or in a group real-time conference or in the comment thread of a blog. Just as all the forms of content sharing should arise from a common creation and storage framework, we need a general form of monitoring that subsumes the current functions (and far exceeds the current usability) of email inboxes and filtering, IM buddy lists and presence, on-screen bezel/pop-up notification, RSS updates, menubar/Dashboard/system-tray widgets, SMS alerts and even web-browsing history and general read/unread flagging. For the new monitoring system we must figure out significantly better ways of understanding a user's dynamic segmentation of their monitoring needs along the continuum between urgent active notification and ongoing passive tracking. Ultimately I want a single console to watch, or more precisely a single logical console that can take multiple particular forms tailored to my different mental and physical modes, and adjust its common reporting to the subtly and radically differing natures of particular information sources.  
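
One hypothetical item in such a unified stream, whatever its source happened to be; everything here is invented, including the idea that urgency could be declared by the source and then adjusted by the reader's own rules.

```xml
<event>
  <!-- Where it came from is an attribute of the item, not a reason
       to read it in a different program. -->
  <source kind="mailing-list">...</source>
  <what>a new reply in a conversation being followed</what>
  <when>...</when>
  <!-- Declared by the sender, adjusted by the reader's own rules. -->
  <urgency declared="low" adjusted="raise-if-addressed-directly"/>
  <seen>false</seen>
</event>
```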
 

Consumption  

The corresponding evolution in information consumption, as is almost implicit in the other parts of the system, is that consumption is not really separate at all. What you read can be as much a part of your information flow as what you write, and the nature of your relationship to what you read should not change based on the incidental mechanics of the medium. The same human conversation could take place on IM, in email or on a web site; the same options for retaining and correlating and re-using should be available in all those cases. At the moment the tools for bookmarking web pages are only narrowly adequate, and the tools for usably retaining web information are nearly non-existent. We email ourselves web pages; this should prove that there's something very important missing.  

And, too, as we are only barely starting to understand with tagging and blogging, what I read flows into what I write, and into all kinds of information that I create implicitly and may or may not want to use and share. Conversations and connections flow through all this information, or try to. The new information world will understand and encourage and benefit from this flow.  

The new information world will be formed of this flow.

It is becoming increasingly possible for separate systems to perform each of the six major functions of data applications: storage, transformation (including creation), categorization (including tagging, indexing, search and retrieval), visualization, monitoring and the administration of trust.

Historically, of course, these functions were usually not only performed by a single unified system, but mostly limited to that system. At their most insular, old-world applications entailed embedded storage in proprietary formats, integrated authoring tools and UI, integrated (if any) notification, and integrated (if any) user-management.

In the old world of personal data applications, like spreadsheets and word-processors and whatever, standardized file systems at least separated storage and categorization from application logic (you could put your Excel file on a floppy disk, or in your "Budgets" folder). Semi-standardization of formats helped open data transformation and/or visualization a little bit (you could use Word's search-and-replace tools on an RTF file, or run Crystal Reports on your Paradox databases), but published formats are not quite the same thing as open formats. And monitoring and trust were usually expendable for personal applications, or solvable a function at a time.  

Old-world online applications changed the distribution of insularities. You could actually use several different tools to send, receive and monitor CompuServe data. Prodigy let you use any tool you wanted, as long as it was a construction-paper hammer designed by runt warthogs for use by cartoon infants. But the online service very clearly owned the physical storage, the content space and the identity space.  

The early web was a combination of progress and regress, pretty much no matter which directions you think those go. HTML offered the tantalizing prospect of separating the presentation logic from the data structures, but in practice browser convergence quickly resulted in this being true in only a somewhat obscure development-tools sense. You could produce your HTML files with different software than they would be read with, but you still had to take the client constraints heavily into account. HTML files could be moved around fairly easily, but cross-server transclusion got pretty ugly the moment you tried to move much beyond linking. And identity management was reinvented from scratch anywhere it was unavoidable.  

But we now have at least nominal rudimentary pieces of the ability to separate all of these. XML offers an interchangeable application-neutral storage format (or at least a meta-format), XML+HTTP gives us a way to virtualize storage (as long as we don't virtualize it too far), Google has demonstrated the scalable separation of categorization and some amount of visualization, and RSS is at least a conceptual step towards the separation of tracking. LDAP separates identity management for at least certain kinds of communities. These may not be the solutions, but they are indications that the solutions are possible and closer.  

But the next steps, in all cases, are huge, and at least as difficult culturally as they will be technically.  

Storage  

All systems must be prepared to handle external storage transparently to the data's owner, whether this actually means live reading and writing over the network or caching and mirroring to simulate it. An indexer must be able to hand you back the indexes it makes and updates, an image organizer must allow you to store the images on your own server, etc.  

Transformation  

All data must be stored in as neutral and open a format as possible. Application-neutral information must be tagged in standard self-describing ways. Proprietary information is acceptable only when mandated by definition (for internal security functions and precious little else), and where necessary must be clearly identified and attributed. These will be practical imperatives, not just moral ones. Secrecy is fragile, and the net routes around it instinctively.  

Categorization  

Anything that exists can be categorized. In many cases, the categorization will end up being qualitatively more valuable than the original information. The only difference between data and meta-data is that meta-data is data the owner of the thing didn't anticipate or provide for. The more fluidly a system can re-integrate the meta-data it spawns, the more powerful it will be. The more afraid you are of your audience, the faster they will depart.  

Visualization  

Similarly, the more readily a system opens itself to external visualization, the better off it will be. Whatever it is you own and control, it's never more than part of the experience. The default techno-social goal of a data application is to be the reference source for some kind of data. (The default business goal is to have some way to make money, not from that data but from that status.)  

Monitoring  

Various malformed and over-constrained attempts have been made to generalize the problems of monitoring, change tracking and notification into email, IM, RSS, Trackback, Konfabulator, Dashboard and countless proprietary and special-purpose schemes. The next generation has to supply a version that scales to the entire world, including not only its size and its bandwidth but also its heterogeneity and its self-organization. The new system has to rationalize all flows, including the malevolent ones.  

Trust  

Ultimately, though, the native currency of the new connected world will be trust. Every interaction of people and systems relies on it, usually inherently and implicitly. Existing systems have mostly survived on trust by exclusivity (firewalls, closed networks, internal identity management), obscurity (mostly self-selection) or informal accountability (feedback and self-policing). None of these scale. The new identity systems must be built not to administer specific applications but to provide universal credentials that verify a user's membership in defined communities. The new data systems must be built so that unknown individuals can be accepted on the basis of delegated authority. In the old world people were "users", users existed inside careful boundaries, and outside of those boundaries all there were were names. In the new world, people are the signals themselves, and a name is a name only by virtue of some authority, and maybe that authority by virtue of another one. In the new data world, where the scope of the network is as big as the scope of the planet, and the size is exponentially larger, the primary component of every transaction of storage, transformation, categorization, visualization or monitoring will be the intimate initialization of the basis of trust under which any two of us say anything to each other at all.

An abacus is a state machine. It executes no instructions, and maintains no history, but it does store a single state, semi-persistently and nearly-infinitely rewritably, and it stores it in a representation that facilitates operator-initiated state-changes of certain types. An electric typewriter with a single-character backspace function is approximately equivalent in computational terms. Both of these are very useful devices.

A typewriter with a multiple-character backspace function has both state and memory. The simplest electric calculator has both state and automated instruction execution. A semi-modern calculator has state, instruction and memory, and at this point we can call it a basic computer. The subsequent history of human-computer interaction design has been a slow process of iteratively transcending decreasingly unimaginative understandings of the implications of state, instruction and memory.  

The conceptual breakthroughs of the earliest text-processing programs were 1) that semantically non-numeric information could be represented in numeric memory, 2) that semantically non-mathematical operations could be modeled in mathematical instructions, and 3) that quantitative increases in memory capacity could enable qualitatively different uses of that memory. Further thought about representation led to storing formatting in addition to text itself. Further thought about instructions led to the automation of layout operations, and the addition of text-processing operations like search-and-replace. This makes for a more interesting state-machine than an abacus, but still effectively a state machine in most user-apparent aspects.  

The conceptual shift from state machine to information appliance can be reduced, symbolically, to model-altering insights embodied in three perhaps seemingly incremental features. From the critical realization that the computer's representation of information could include more than the ostensible current state came the radical notion of Undo, and later its extrapolation to Undelete. From the realization that a significant body of pre-existing external human knowledge could be represented and usefully applied to user-generated information came the extraordinary new idea of machine proofreading. File transmission applied signal-bearing wires to the space between people, rather than just between devices. Combine data application and wires and you get the net as gigantic reference library and perpetual market. Combine wires and internal state and you get distributed applications and the net as communication infrastructure. Combine data application and internal state and you get data mining and machine translation. Combine all three and you get more or less everything in modern computing up to the night before IM, online dating, eBay, Mapquest, Napster, Google, SETI@Home, "people who bought this also bought", phonecams and the blogosphere.  

But it's the next morning, now, and I don't really want an information appliance. I want a virtual personal assistant. I want my writing software to think of its job not just as formatting documents, but as remembering everything I do when I write, including things I don't realize I'm doing, and things I do while writing that aren't themselves writing. I don't just want document-level Undo, I want a coherent journal view of everything I typed, including all the dead-end phrases I tried and deleted and might now want to revisit. Actually, I don't want fundamentally document-level anything, I want a dynamic evolving history of my entire interaction with my computer and the network beyond it, navigable by chronology or association. I want to jump from an email to the web page on which I found the stat I cited four replies ago in the note that started the conversation. I want to go from the song I'm playing to the birthday of the person who told me about it, to a cross-referenced list of the other music I associate with the song and the other music all my friends have mentioned in emails and IMs and forum notes and shared playlists and now-playing monitors. I want to see rhythms of correspondences and patterns of discovery and contours of neglect. I want the things I've forgotten to know when to remind me of themselves, and the things I think I know to have the humility to volunteer for their retirements.  

The primary challenges for the design of virtual personal assistants are of a different nature (naturally) than the challenges for the design of information appliances or state machines. What the state machine worries about representing, the assistant thinks about communicating and transforming and connecting. What the information appliance struggles to remember, the assistant has to decide how to share and correlate, and when if ever to forget. The state machine works to its capacity. The information appliance works to its parameters. The assistant, however, must be self-governing and evolvingly aware of its own limits, able to differentiate between automating and advising. The assistant will be evaluated not only on what it accomplishes, but on what it knows to ask and when. The state machine's applications were solipsists, however creative. The information appliance's applications were autocrats, however occasionally beneficent or enlightened. The assistant's applications are inventors and ambassadors and advocates and court jesters, and sometimes mercenaries and cannon fodder, and every once in a while oblivious innocent bystanders willing to go home without complaint when you promise them there's nothing to see here.  

And in a connected and definingly social world, the virtual personal assistant is a distributed and intimately negotiated function, and the rules that maintain the productive tension between isolation and aggregation are even more complex. What is the currency of the economy of privacy and trust? On what grounds do you delegate a privilege or retain it? What of yourself are you willing to reveal in return for what collective wisdom, from what collectives, and for that matter which and how much "wisdom" are you prepared to consume, and in what forms? When is it information we seek, and when is information-exchange merely a proxy for personal contact? When does a system become more humane by modeling its users more precisely, and when does it serve them better by leaving them to their own improvisation and compromise?  

The new world will be many things, some of which are already emerging and some of which are yet deeply hidden, but here are a few of what may be its truths:  

- Millions now stored will never be erased. In the last era, everything not saved was instantly lost. In the new era, everything not meticulously preconstructed for disintegration will be indexed and archived forever.  

- Data belongs to people, not processes. There are no silos in the new architecture. Persistence doesn't mean writing something so that it can be reconstructed by its originating code, it means writing it so that it can be reconstructed without its originating code.  

- You are in a maze of twisty passages, each explicably unique and enticingly beckoning. The new systems must not only know when to ask you questions, they must know how to categorize the properties of the possible answers. They must know how to empower your responses with nuance rather than luring you into literalist traps.  

- Everything good is relative. The old era was about identification and instantiation and encapsulation. The new era is about connection and abstraction and subcomposition and change. The old tools had files and records and pages. The new ones have links and self-description and self-direction. The old world was measured in assignments and addresses, the new one in associations and relationships. The old tools took knowledge apart, the new ones must put it back together again.  

- There are three classes of the acted-upon and the acting: objects, creatures and artists. Objects have no value except as they benefit creatures or express the work of artists, and perform no act except in response. Creatures are to be respected and defended and delighted, and acknowledged in their free will, but not burdened with responsibility or solicited for decisions. Artists are the source of all authority and the ultimate ends of all means. Humans are sometimes artists but always at least creatures. Machines and systems and programs (and policies and corporations and governments and precepts, including these) are never more than objects. The first obligation of any designed system is to be obsessively devoted to the intricate cognizance of these boundaries.  

The simplest worthy tools exist to protect or sustain something alive. The best ones express something that makes living more beautiful. What numbers do your machines safeguard that an abacus wasn't sufficient to protect? What do your machines make beautiful, that was ugly when all we had were wood and beads and hands?