In 1968, when timesharing by users behind teletype terminals was regarded as avant-garde, Engelbart gave a demo that featured a number of firsts: a screen display for both text and graphics, interactive text editing, and a mouse. All of it was integrated into a fluidly handled medium. Kay testifies to the huge impression this made on him: "Engelbart was a prophet of Biblical proportions".
Shortly before reading Kay's paper I had been browsing "Selected Writings on Computing" by E.W. Dijkstra. For most of his career Dijkstra adhered to the admirable and enviable discipline of writing down any fruit of his brain that was write-downable, and of doing so as soon as it was write-downable. This discipline resulted in a long sequence of documents. The Dijkstra archive starts with EWD 28 ("Substitution Processes", 1962) and ends with EWD 1318 ("Coxeter's Rabbit", 2002). It is from these that "Selected Writings" was compiled. The one I happened on recently was EWD 387, a report on a trip to attend the IBM seminar "Communication and Computers", Newcastle (UK), September 1973.
EWD 387 consists mainly of reviews of the talks given at the seminar. Dijkstra loathed two of the speakers so much that he replaced their names with the symbols NN0 and NN1. The former is dismissed in one sentence. The latter arouses Dijkstra's ire so much that he needs a whole page of vituperative prose to offload his emotions. NN1 is denounced, among other things, for "appealing to mankind's lower instincts" and for "undisguised appeal to anti-intellectualism". What could a researcher speaking at a seminar on computers and communications have done to arouse such primitive emotions in a recent winner of the Turing Award? I had noticed this strange phenomenon in EWD 387 long ago, when I bought the book. Rereading it with Kay's "prophet of Biblical proportions" fresh in my mind caused the penny to drop: NN1 is Douglas Engelbart!
Alan Kay was not the only one to be inspired by Engelbart. Another was Howard Rheingold, whose "Tools for Thought" (Prentice-Hall, 1985) made a big impression on me. The book covers a relay race of thinkers and doers united by one idea: that recent developments in technology could enhance intellectual work as much as writing and the printing press had done in centuries past. The chapter in Rheingold's book that made the biggest impression on me was "The Loneliness of the Long-Distance Thinker", the one devoted to Engelbart. The chapter starts with Vannevar Bush, an engineer and science administrator, who published in the July 1945 issue of The Atlantic Monthly an article titled "As We May Think". The editor summarized the message thus:
For years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work.

When Engelbart was a Navy radar technician waiting in the Philippines to be shipped home after World War II he read this article. It set him off on a life-long quest.
When I read Rheingold's book in 1990, I was funded for research in Artificial Intelligence (the "AI" in the title of this article). AI was a broad church that tolerated people who wanted to explore programming languages like Lisp and Prolog. But I was uncomfortable with the goal of endowing computers with "intelligence", whatever that might be. The idea that united the researchers described in Rheingold's book was to use computers to augment whatever humans do when trying to solve a problem. This is what I tried to bring to the attention of the AI community in a paper I presented at FGCS 92, the conference that marked the end of the Fifth-Generation Computer Systems project at ICOT in Tokyo. My title was "Mental Ergonomics as Basis for New-Generation Computer Systems". The message was to aim at Intelligence Augmentation in humans rather than Artificial Intelligence in computers -- "IA rather than AI".
In 1990 it was the first half of Rheingold's chapter on Engelbart that made a big impression on me. That first half is devoted to Engelbart's quest up to and including the 1968 demonstration of the NLS system. The exciting part of Kay's paper is the first part, up to the completion of the Interim Dynabook, an advance over NLS. In 1975 the Interim Dynabook presented the user with a full-featured point-and-click window system giving access to interactive editing of text, graphics, and sound, combined with interactive database use. It is now two decades since the Interim Dynabook finished shrinking from a refrigerator-sized server in 1975 to the notebook format boldly envisioned at that time. Have people taken flight en masse with augmented intellect on mass-market Ultimate Dynabooks? No, they use the mighty processors and memories for ... Microsoft Office.
What went wrong? After the awesome 1968 demo of NLS, Engelbart and his group fully expected to develop equally awesome feats of intelligence unleashed from the centuries-old tethers. Instead, the group floundered. In 1975, after the completion of, and after beautiful demos with, Interim Dynabook, Kay's group wanted to properly re-implement the Smalltalk programming language. But Kay saw that the existing Smalltalk implementation was not the bottleneck on the way to intelligence augmentation, left the group, and was heard of no more.
In both cases, NLS and Dynabook, the building of the tool, challenging though it was, was not as hard as what came after: finding out how to use it as an intelligence-amplifying tool. To find out what went wrong let us go back to what started off the intelligence-augmentation idea. Here is my attempt to boil it down to a few words:
Look at what intellect would be without writing or the printing press—these primitive technologies already make such a difference. Just think of what the latest computer technology will be able to do to further augment the intellect!

But this is a non-sequitur: what makes writing and paper powerful is technology to some extent, but it is mostly the rich culture that grew up around it.
Let's look at writing first. The earliest extant samples of writing are Babylonian clay tablets containing tax records, government regulations, and astronomical records and calculations. Compared to these, the books by Plato, Aristotle, and Euclid are very sophisticated. The potential of this level of technology was not yet exhausted in the 20th century, when A.N. Whitehead wrote "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato". These footnotes were mostly written on paper, technologically not a spectacular advance over the papyrus used by Plato.
For a long time, written thought at this stagnant level of technology was constrained by natural language. Over time this improved, but only slowly. By medieval times we see the rise of musical notation. Another powerful innovation is the algebra of François Viète (late 16th century). The very symbol of The Intellect is Einstein pondering a blackboard filled with picturesque formulas. This is basically Viète with a few more recent additions, such as Leibniz's notation for calculus (1686) and Gibbs's vector analysis (1880s). It is remarkable how few enhancements there have been in the four hundred years since Viète. Observations like this are summarized in bits of folklore to the effect that a good notation is worth a whopping increment in IQ points. Except that the really good ones allow one to have thoughts that are impossible without them.
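To see what Viète's innovation bought, compare a problem stated rhetorically, as before symbolic algebra, with the same problem in modern symbols. (This is a deliberately simple illustration—al-Khwarizmi's classic example—not one drawn from the sources discussed here.)

```latex
% Rhetorical, pre-Viète style:
%   "A square and ten roots of the same amount to thirty-nine."
% Symbolic, post-Viète style:
\[
  x^2 + 10x = 39
  \quad\Longrightarrow\quad
  x = \frac{-10 + \sqrt{10^2 + 4\cdot 39}}{2} = 3 .
\]
```

The symbolic form is not merely shorter: it exposes the structure on which the general quadratic formula operates, and that generality is the sense in which a good notation permits thoughts that are impossible without it.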
Something like this has not happened in programming languages. But it is still early days. Paul Graham observed in his essay "The Hundred-Year Language":
[Programming] Languages evolve slowly because they're not really technologies. Languages are notation. A program is a formal description of the problem you want a computer to solve for you. So the rate of evolution in programming languages is more like the rate of evolution in mathematical notation than, say, transportation or communications. Mathematical notation does evolve, but not with the giant leaps you see in technology.
To summarize the role of writing as a tool for thought: a bit of technology; most of it a culture that took thousands of years to evolve. How about printing?
What makes printing a powerful tool for thought is mostly due to things other than the technology itself. Much of its power comes from publishers and editors, who sniff out what is worth printing. Another important component is provided by libraries and librarians. Much is due to scholarly societies, which started printing their proceedings, and to commercial publishers, which created journals, each with its editorial board and unseen bevy of reviewers.
The point I am making here is that Engelbart and Kay were unrealistic in expecting that their technologies would give quick results in the way of Tools for Thought. They had no appreciation for the vast and rich culture that produced the tools for thought enabled by the traditional technologies of writing and printing. They did not realize that a similar culture needs to arise around a new technology with augmentation potential. Now, again two decades later, it may be starting to happen. Perhaps it was this that Dijkstra sensed in 1973 when he wrote "anti-intellectualism" in reaction to Engelbart.
If so, he saw further than I did when I first read Rheingold's "Tools for Thought". I fell for the seductive message just as the protagonists in the book did. In retrospect it seems that the message is essentially true, but wrong in suggesting that new technology in computing and communication will be quick to enhance human intelligence. We all confused clarity of vision with proximity. Those who have hiked in the Alps will know the phenomenon: at the breakfast table you see those majestic mountains, sharp and clear. "What about that one? It should give us a good appetite when we get back for lunch." The reality is: yes, you can get there, but it takes something more like an expedition, and you need to train a lot first.
In the long run it could be worthwhile. In the meantime don't forget that this is about using computers to augment the intellect and not about making computers intelligent. It's about IA rather than AI.