I recently watched Alan Kay’s OOPSLA 1997 lecture “The Computer Revolution Hasn’t Happened Yet”. Watching it was mildly frustrating, as Kay would make tantalising abstract points and not elaborate upon them.
Kay makes some points about the web which I initially found jarring. But after thinking about the web in the context of the message-passing environments that Kay helped pioneer, I began to see the flaws that Kay may have had in mind.
So, what’s wrong with the web and how can it be fixed? In my view there are two main problems, both of which can be solved without compromising the simplicity of the web. These problems are the assumption of prior knowledge and the richness of communication.
When the web was created it consisted of one data format, HyperText Markup Language (HTML) documents, and a protocol for transporting them, the HyperText Transfer Protocol (HTTP). The only web clients were web browsers and the only action they could perform was to GET documents. Browsers only had to know how to handle HTML data as that was the only type of data on the web. This was the web as codified in the 0.9 spec.
The shortcomings of the 0.9 spec were quickly addressed by the 1.0 spec (and further expanded in the 1.1 spec), which extended HTTP in two ways. Firstly, it allowed arbitrary data, not just HTML, to be transported. Secondly, it augmented the GET request method with the POST method, thus providing a mechanism for the client to send data to the server. It is the implementation of these two features which I think Alan Kay may have had in mind when he criticised the web.
In the 0.9 spec HTTP was coupled to HTML - any data sent over HTTP was assumed to be HTML. The web was initially a system for sharing interlinked documents which were to be displayed on a screen for a human to read. This coupling allowed the system to remain simple but at a cost - if the data could not be represented in an HTML document then it could not be made available on the web.
The coupling of HTTP and HTML was addressed by the introduction of the content-type header. The content-type header allowed the server to send metadata that described the format of the data that the client had requested. The content-type header effectively de-coupled HTTP from HTML resulting in a more flexible system. This opened up HTTP to any situation that required data to be transmitted from point A to point B. The data no longer had to be HTML and there was no requirement for it to be consumable by humans. A client could be any software that consumed data, not just a browser.
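To illustrate the decoupling, here is a minimal sketch of how a client might dispatch on the content-type header rather than assuming HTML. The handler functions and the fallback behaviour are illustrative stand-ins, not part of any spec:

```python
# A sketch of content-type dispatch: the client picks a handler based on
# the media type the server declares, instead of assuming the body is HTML.
# The handlers below are illustrative placeholders, not real renderers.

def render_html(body):
    return "rendering HTML document"

def render_png(body):
    return "displaying PNG image"

def save_to_disk(body):
    return "unknown type, saving to disk"

HANDLERS = {
    "text/html": render_html,
    "image/png": render_png,
}

def handle_response(content_type, body):
    # Media type parameters (e.g. "; charset=utf-8") are stripped before
    # the lookup, as only the type/subtype pair selects the handler.
    media_type = content_type.split(";")[0].strip().lower()
    handler = HANDLERS.get(media_type, save_to_disk)
    return handler(body)
```

The important point is the fallback: a client that meets a media type it does not know cannot render it, which is exactly the conflict described below.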
This increase in scope created a conflict with the primary intent of the web as a system for sharing documents. In the 0.9 spec the browser only had to render HTML documents to be able to display all of the documents available on the web. The implication of the content-type header is that if a browser is to render everything on the web then it has to be able to render any data that the world can throw at it - which is a seemingly impossible task. This problem is currently addressed by the following measures:
plug-ins (embedded in the page via the <object> element)
A better solution would be for the server to provide rendering instructions in addition to the data. This would effectively separate the communication of the data from the rendering of the data. Therefore the browser would only have to fetch the data and provide a screen space for the data renderer. An HTTP header listing a URI would be an adequate mechanism for locating a renderer - for example, pragma: content-render http://example.com/renders/mathml. This approach could resolve or help to resolve all sorts of problems:
pragma: content-render http://microsoft.com/guano/trident
pragma: content-render http://xiph.org/codec/ogg
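The client side of this idea is simple to sketch. The header name and format below follow the hypothetical pragma: content-render proposal above - this is not an existing standard, just an illustration of how little machinery a client would need:

```python
# A sketch of a client extracting the renderer URI from the hypothetical
# "pragma: content-render <uri>" header proposed above. The header name
# and its format are assumptions of this article, not a real standard.

PREFIX = "content-render "

def find_renderer(headers):
    """Return the renderer URI named in the pragma header, or None."""
    pragma = headers.get("pragma", "")
    if pragma.startswith(PREFIX):
        return pragma[len(PREFIX):].strip()
    # No rendering instructions supplied; the client falls back to
    # whatever built-in handling it has for the content type.
    return None
```

Having found the URI, the browser would fetch the renderer, hand it the data and a region of screen, and get out of the way.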
I think this is what Alan Kay was getting at when he said the following:
"HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats. This has to be one of the worst ideas since MS-DOS."
In the 0.9 spec there was one request method that a client could use to communicate with the server: GET. Later this was augmented with other methods, most notably POST, PUT and DELETE. The HTTP spec outlines the intent of these verbs, but due to technical limitations, ambiguity in the spec (and poor programming practice), request methods do not adequately describe the intent of the request. Some examples:
Pretty much the only good reason for a document to disappear from the Web is that the company which owned the domain name went out of business or can no longer afford to keep the server running.
In which case why is DELETE one of the HTTP verbs?
These problems could be addressed by using semantically relevant methods. Such methods are permissible within the 1.1 spec:
The set of common methods for HTTP/1.1 is defined below. Although this set can be expanded, additional methods cannot be assumed to share the same semantics for separately extended clients and servers.
Accompanied by the “405 Method Not Allowed” status code, HTTP provides a very clean mechanism for semantically correct communication. The two problems outlined above could then be easily addressed:
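A server-side sketch shows how little is needed. The extended method name below (SUBSCRIBE) is an invented example of a semantically relevant method; the 405 response and the Allow header are what the 1.1 spec already provides:

```python
# A sketch of server-side dispatch on extended, semantically named
# methods. Anything unsupported is answered with "405 Method Not
# Allowed"; per the 1.1 spec the Allow header lists what is accepted.
# "SUBSCRIBE" is an invented example method, not one from the spec.

SUPPORTED = {"GET", "SUBSCRIBE"}

def dispatch(method):
    """Return a (status_code, headers) pair for the given request method."""
    if method not in SUPPORTED:
        return 405, {"Allow": ", ".join(sorted(SUPPORTED))}
    return 200, {}
```

A client probing with an unsupported method gets a clean, machine-readable refusal rather than a misleading 200 or a generic error page.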
Conclusion
The first of these suggestions is certainly grander than the second and would require significant changes to client software to implement in full. The second of these suggestions would only require a guiding hand as there are no (theoretical) infrastructure changes required. The creation of a registry for extended HTTP request methods is all that would be required. The HTTP 1.1 spec is currently being revised and it seems that such a registry has been suggested.