Is "The RADAR Architecture" trying to solve a problem which should be solved in the browser?

Designing web applications in a RESTful way from the user interface perspective can be very challenging, and many people argue that REST just isn't suited to user interfaces and should be restricted to application interfaces. I came across this recently when trying to understand why the Bongo Project uses a separate URI scheme for its user interface from the one used for its application interface. I've always thought that a resource is a resource, regardless of whether you're a man or a machine. Different people and different machines want different representations of the resource, but it is still the same resource.

Dave Thomas is a pragmatist with knowledge and experience behind him. But in reply to Dave's insightful blog post, The RADAR Architecture: RESTful Application, Dumb-Ass Recipient, I play devil's advocate and put forward an idealistic alternative argument:

From a pragmatic point of view this makes an awful lot of sense because it's easier to bend your server application to the will of millions of existing web browsers than to change the way browsers work.

However, I wonder whether, from an idealistic point of view, this would be better addressed in the design of the “dumb browsers” you talk about. The dumb browser doesn't need to be quite as dumb if we don't want it to be.

What if XHTML forms and Web Browsers *did* support the PUT and DELETE verbs? I'm not familiar with the reasons that browsers do not already support these HTTP methods, but I do know that the original vision Tim Berners-Lee had of a web browser was of an application which could write data as readily as it could read it.

Maybe it is a user interface problem.

Asbjørn Ulsberg asked what a request/response would look like if a web browser wants to GET a resource represented as an HTML form so it can edit the information and then PUT it back to the same URI. In answer to his question, I don't believe we should have a MIME type to put into an Accept header specifically for an HTML form view. It's still HTML, after all!
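
To make that concrete, here's a rough sketch in Python (against a hypothetical resource at http://example.org/articles/42) of the round trip a PUT-capable client might make. The point is that the same URI serves both the read and the write, and the Accept header negotiates only the representation, not a special “form” type:

    import urllib.request

    uri = "http://example.org/articles/42"  # hypothetical resource

    # GET the current representation, asking for ordinary HTML
    req = urllib.request.Request(uri, headers={"Accept": "text/html"})
    with urllib.request.urlopen(req) as response:
        html = response.read().decode("utf-8")

    # ...the user edits the representation in place...
    edited = html.replace("Hello", "Hello, world")

    # PUT the edited representation straight back to the same URI
    put = urllib.request.Request(
        uri,
        data=edited.encode("utf-8"),
        headers={"Content-Type": "text/html"},
        method="PUT",
    )
    with urllib.request.urlopen(put) as response:
        print(response.status)  # e.g. 200 or 204 if the server accepts the update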

Perhaps the answer is that when HTML is displayed in a browser, if the user has write permission, it is editable in the same way that a document in a word processor is editable, rather than relying exclusively on forms for user input. If you've ever used Google Docs you may have noticed that if you click on the title of a document, it becomes editable using JavaScript. There is no separate “edit” and “view” mode; it's all one thing, so you don't need to tack “;edit” onto the end of the URI.

The reason web browsers themselves don't work this way is probably that the HTML view is only one representation of a resource, and it might be hard to translate changes the user makes to this presentation of the information into changes in the underlying data model. Using forms allows the application designer to restrict user input to specific fields in the underlying data model. This is something which needs more thought, because I believe it's also the core reason behind the “offline problem”, but that's a different story.

roberthahn writes that “The dumb browser doesn't provide the user with a way of submitting requested Mime types.” Well, again, maybe this is a user interface issue with the browsers; maybe they should provide one! Remember that the web is just a collection of resources; HTML is only one representation of those resources.

If a “smart” client interface can specify which representation it wants, why can't a user? Maybe a user should be able to choose whether they want to browse the web in plain text, formatted text, 2D or 3D vector graphics, or even a voice representation of a resource. The user agent could allow the user to choose a particular mode in which to browse the web depending on their current environment, and offer alternative representations where they are available. For devices that are only capable of certain modes of interaction with the user (e.g. a voice interface), the mode could be fixed by the user agent.
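
If the user's choice simply mapped onto the Accept header the user agent sends, very little new machinery would be needed. A minimal sketch, assuming a made-up resource URI and treating the MIME types as placeholders for whatever each modality would actually use:

    import urllib.request

    # Hypothetical mapping from a user-chosen browsing mode to an Accept header
    MODES = {
        "plain text": "text/plain",
        "formatted": "text/html",
        "2D graphics": "image/svg+xml",
        "voice": "application/ssml+xml",
    }

    def fetch(uri, mode):
        """Ask for the same resource in whichever representation the user prefers."""
        req = urllib.request.Request(uri, headers={"Accept": MODES[mode]})
        with urllib.request.urlopen(req) as response:
            return response.headers.get("Content-Type"), response.read()

    # content_type, body = fetch("http://example.org/articles/42", "plain text")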

We shouldn't entirely dismiss the idea of changing web user agents themselves, just because it is a difficult option.

Starting a Business

My plans for this summer have changed many times now. Due to my exam timetable I have a four-month summer break, and I've been trying to think of ways to use this valuable time.

I've had interesting conversations with IBM, a chat with someone from Sun and three interviews with Google, and I missed the deadline for the Google Summer of Code while waiting to hear about a summer placement. But still, I have nothing to fill my summer.

I've decided that I'm going to take the opportunity to start implementing some ideas I've been working on for three years: I'm going to start a business.

OK, so I've done this kind of thing a few times before. I've even ended up as part of a limited liability company building web applications and doing installations in London. But it's always just been moonlighting, a job on the side. This time it's all my own ideas, it's ambitious and I'm doing it my way.

I'm currently working on a business plan and cash flow forecast so that I can apply for some funding from my university.

Introducing Krellian. Watch this space.

"Megafreeze" development broken, Abstract User Interfaces

Melt the Megafreeze, let it trickle

Tuomo Valkonen writes that The megafreeze development model is broken in GNU/Linux distributions. He argues for a very long release cycle for an extremely stable base system (in line with Kernel releases) and then separate repositories for applications which are constantly upgraded.

I've often thought that in a world where security updates can be trickled over the Internet as they become available, it's odd that new features come in big chunks with each new release of a distribution. With Ubuntu, I upgrade every six months to see new features; why can't the features just appear as they become available, like we're used to with Software as a Service?

Sam has tried to explain the reasons for the status quo to me on numerous occasions (he knows a lot more about building Linux distributions than I do), but like Valkonen I remain unconvinced that the Megafreeze is the best approach.

Abstract User Interfaces: “Plasticity”

While I was on Tuomo Valkonen's homepage I noticed the Ion window manager that he developed. I found the UI ideas very interesting because they're very similar to a lot of things I'm trying to achieve with Webscope.

Ion has “tiling workspaces with tabbed frames” and the screen is always filled at any one time, like the multi-level resource tabs I want to create.

Ion also has a “query module” which “implements a line editor similar to mini buffers in many text editors. It is used to implement many different queries with tab-completion support: show manual page, run program, open SSH session, view file, goto named client window or workspace, etc.” which is a similar concept to the Natural Language Command Line I am trying to develop.

In a paper entitled Vis/Vapourware Interface Synthesiser, Valkonen describes a system for describing user interface semantics and then automatically generating actual interfaces based on the user's preferences, with the use of stylesheets. This seems very much like a transform view in a Model View Controller design pattern, and he's essentially talking about doing for the desktop what I want to do for the multimodal web: starting with a semantic description of a user interface (e.g. using DIAL) and then transforming that semantic description into various different presentations using XSL stylesheets.
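
To illustrate the transform-view idea, here is a minimal sketch in Python using lxml's XSLT support. The abstract markup is made up for the example (it is not real DIAL), and in practice there would be one stylesheet per modality:

    from lxml import etree

    # A made-up abstract description of a UI control (not real DIAL markup)
    abstract_ui = etree.XML("""
    <control type="choice" label="Colour">
      <option>Red</option>
      <option>Green</option>
    </control>""")

    # One stylesheet per modality; this one renders the control as HTML
    to_html = etree.XSLT(etree.XML("""
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="control[@type='choice']">
        <select title="{@label}">
          <xsl:for-each select="option">
            <option><xsl:value-of select="."/></option>
          </xsl:for-each>
        </select>
      </xsl:template>
    </xsl:stylesheet>"""))

    print(str(to_html(abstract_ui)))
    # A second stylesheet could render the same description as VoiceXML,
    # plain text, and so on.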

In his bibliography, he links to papers which use the term “Plasticity” in user interfaces, which I might explore further. User interfaces these days have to go “above the level of a single device” — O'Reilly.

Why *not* to make the "Metaverse" a direct extension of the web

Further to my previous blog entry, Why I would make the “Metaverse” a direct extension of the web, I have found a strong argument to the contrary in the documentation of the Virtual Object System.

In a section of their manual called The 3D Web the authors point out “three basic limitations of HTTP which have caused 10 years of pain, suffering and hacky workarounds for developers trying to build interactive applications over the web. These are that HTTP is a stateless protocol, that URLs represent opaque handles to resources, on which no reliable introspection is possible, and that HTTP is explicitly asymmetric so that a server typically cannot initiate sending new data to a client.”

The response of the Virtual Object System community is to create an entirely new protocol stack which mirrors the technologies used on the web, but with a new technology for each layer:

  • VIP is like TCP
  • VOS is like HTTP
  • A3DL is like HTML
  • CSVOSA3DL is like an HTML rendering engine such as Gecko or KHTML
  • Ter'Angreal is like the web browser

The fact that HTTP is a synchronous, stateless protocol has come up in the past with regard to web applications – raising the possibility that AJAX is just a hack, waiting for a new protocol to replace it. Perhaps a replacement or extension of HTTP is due.
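
Long polling is a good example of the kind of hack this forces on clients: because the server cannot speak first, the client has to keep asking. A minimal sketch in Python (the events URI and the scene-update function are hypothetical):

    import urllib.request

    def long_poll(uri):
        # HTTP cannot let the server initiate a message, so the client keeps asking.
        # A long-polling server holds each request open until it has news to report.
        while True:
            req = urllib.request.Request(uri, headers={"Accept": "application/xml"})
            with urllib.request.urlopen(req, timeout=60) as response:
                yield response.read()

    # for event in long_poll("http://example.org/world/events"):
    #     update_scene(event)  # hypothetical function applying the change to the 3D scene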

The current approach I am taking to a 3D Web client for Webscope is:

  • TCP is TCP
  • HTTP is HTTP
  • X3D is like XHTML
  • FreeWRL (and others) are like an HTML rendering engine such as Gecko
  • Webscope is the web browser.

Because of the limitations of HTTP I have considered building a protocol like XMPP into Webscope, and the argument the Virtual Object System community make will certainly prompt me to explore alternatives further.

What I think I would like to see is a solution that sits somewhere between the plain X3D over HTTP approach and the radical VOS approach of replacing the whole protocol stack. I don't want to throw away HTTP entirely because of its Content Negotiation abilities and the vision of the Multimodal Web.

I'd like to see some discussion on this by some people who know more about networking than I do.

Distributed Social Networking, Internet identity and trust

Distributed Social Networking

Social networking is a huge phenomenon on the Internet, and web sites such as Facebook, MySpace and Orkut have enormous user bases. All of these social networks are currently centralised, controlled by a single company, and do not allow users to interact between different networks. This can be frustrating for users, who may have to sign up to several social networking web sites just to keep in touch with different groups of friends. Several efforts are under way to cross the boundaries between social networks, but most of them work on the basis of yet another centralised system which aggregates all of the networks together using their respective proprietary APIs, where they exist.

Open standards like FOAF and XFN already exist for expressing the relationships between people on the web, using semantic markup. In fact, I would argue that an open standard exists for every aspect of current social networking sites. By creating applications which use these open standards we can form a distributed social network which uses the web itself and does not require users to sign up to an isolated network. Each user need only create a personal home page using a service which supports the open standards to be part of the worldwide network.
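
As a small illustration of how little glue is needed once the data is out there, here is a sketch using the rdflib library to walk the foaf:knows links in a (hypothetical) FOAF file published on someone's home page:

    from rdflib import Graph
    from rdflib.namespace import FOAF

    # Hypothetical FOAF file linked from a personal home page
    g = Graph()
    g.parse("http://example.org/alice/foaf.rdf")

    # List the names of the people this person says they know
    for person in g.objects(predicate=FOAF.knows):
        for name in g.objects(subject=person, predicate=FOAF.name):
            print(name)

Following those foaf:knows links from page to page is what turns a pile of personal home pages into a social network, with no central site in the middle.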

I have started a new design concept on my web site listing common social networking features and corresponding open standards which could be used to implement them in Distributed Social Networking.

It's worth noting that services like Videntity are already supporting standards like FOAF.

Identity and Trust on the Internet

An interesting article in New York Magazine a couple of weeks ago described how social networking sites are creating the biggest generation gap since rock and roll, as teenagers develop a completely different concept of privacy from their parents. Teens can be very willing to talk about their personal lives and post pictures on public web sites. I don't believe this is because they don't understand the issues of privacy; I just think they have a different attitude to privacy and are perhaps more open about their feelings than previous generations.

However, this did get me thinking. Whilst compiling that list I realised that one thing I wasn't sure how to achieve was the privacy features of social networking sites. Many of the sites allow you to define which information will be visible to which users. In a distributed system with no central authority to authenticate against, it can be very difficult to define trust and granular permissions for information.

I searched the web for a solution and came up with OpenID, SAML and XDI.

OpenID

Being an ex-LiveJournal user I'm familiar with OpenID, but I hadn't realised how big it has become. AOL and Yahoo have now adopted the standard, and even Microsoft are talking about integrating OpenID into Windows Vista.

“OpenID starts with the concept that anyone can identify themselves on the Internet the same way websites do – with a URI”. Once someone has confirmed that they own a particular URI and they come across a web site which supports OpenID, they can use that URI to identify themselves. They are simply redirected to their URI's OpenID server to authenticate if they need to log in. No more signing up for an account on every. site. you. visit!
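
For anyone curious what that first step looks like in practice: in OpenID 1.x the identity page simply declares its server in a link element (rel="openid.server"), and the consumer fetches the page to find it before redirecting the user there. A rough sketch of that discovery step in Python, with a hypothetical identity URI:

    import urllib.request
    from html.parser import HTMLParser

    class OpenIDLinkFinder(HTMLParser):
        """Finds the rel="openid.server" link on an identity page."""
        def __init__(self):
            super().__init__()
            self.server = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel") == "openid.server":
                self.server = attrs.get("href")

    def discover(identity_uri):
        # Fetch the URI the user typed in and extract their OpenID server
        with urllib.request.urlopen(identity_uri) as response:
            finder = OpenIDLinkFinder()
            finder.feed(response.read().decode("utf-8", errors="replace"))
        return finder.server

    # server = discover("http://example.livejournal.com/")  # hypothetical identity URI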

XDI

An article called The Social Web: Creating An Open Social Network with XDI describes an ambitious project to create a new system of unique identifiers for information resources to create a Social Web of people, or more generally, a Data Web. The new scheme uses eXtensible Resource Identifiers (XRIs) to identify resources independent of a specific physical network path, location, or protocol – in a way which is compatible with URIs and IRIs. XRIs are then linked with “link contracts” which express authority, security, privacy, and data sharing rights in a machine-readable format.

Analogies are drawn with the identification and authentication system used in banking where “I-brokers” are “a trusted third party that helps individuals and organizations share private data the same way banks help exchange funds”. The XDI project also has ambitious aims like anti-spam protection and identity theft protection.

SAML

According to Wikipedia, SAML is an “XML standard for exchanging authentication and authorization data between security domains, that is, between an identity provider and a service provider.” Google are using SAML for Google Apps. Basically it allows a service provider to assert that a user has the permission to access a certain resource, by querying a separate identity provider (which could be common across all service providers).

Converging

It turns out that all of these technologies are converging and moving towards the holy grail of system administration – “Single Sign-On”. OpenID can now use an XRI to identify a user, and there is talk of using SAML in conjunction with OpenID to assert privileges.

Why I would make the "Metaverse" a direct extension of the web

In answer to Bob Sutor's question “If we didn’t have web browsers as we do today and started today to do everything that you imagine [for a distributed 3D virtual world], what would you create to do all that?”

I would probably create something very much like Second Life and open source the server source code.

Anything anyone ever creates is based somehow on someone else's ideas (standing on the shoulders of giants and all that). If we didn't have the web but we had video games, I would start with an existing gaming engine. Then, in the absence of a worldwide network of linked information resources, I would take the next best thing to existing technology: science fiction. I'd buy Snow Crash by Neal Stephenson and start writing network protocols and file formats!

I'd start by separating the storage of content, logic and presentation into different formats and come up with some kind of distributed TCP/IP streaming protocol with heavy compression.

I suspect that you're asking whether the web is really a suitable platform for all this – whether, if we weren't stuck in the mindset of the existing World Wide Web, we might come up with a better solution. Perhaps.

But if I was creating the web from scratch (but happened to benefit from the hindsight of all the great minds that came after me), I wouldn't use XML-like syntax for web pages; I would use something more efficient. I would try to make the DNS system more decentralised, and URIs would be of the form http:uk.co.companyname.department/resource instead of http://department.companyname.co.uk/resource. I might make HTTP requests asynchronous, and build comment spam protection and denial-of-service protection into the protocols of the web. However, I wouldn't necessarily attempt to make those changes now.

What's amazing about the web, for me, isn't that it's a perfect technology that could not have been done better; it's that its openness and adoption have made it almost ubiquitous in the world. Creating new protocols suited to new applications is definitely a good idea, but if the online 3D virtual world is to become as ubiquitous as the World Wide Web, we should learn the lessons of how web technology was created and build on an already ubiquitous platform. Adoption of a well-defined standard is more important than a perfect technology.

Another motivation behind making Stephenson's “Metaverse” a direct extension of the web is device independence. It's all very well creating a 3D virtual world which requires a large amount of processing to render, but what if I want to access the information on a small information appliance with little processing power? What if I live in a developing country and want to be able to access some information but only have a text based browser? What if I'm blind and can't see the virtual world and want to hear it instead? We need not carry over all the limitations of First Life into Second Life. I don't know about you, but I hate having to pay for physical objects and I love flying!

Independent artists going big, Creative Commons growing up

Independent artists hit top 40

The BBC report that under new chart rules, Essex rock band Koopa have made chart history by becoming the first unsigned band to land a UK top 40 hit. This is brilliant! It shows what the Internet is doing for Independent artists.

What's really sad, however, is that they used their fame as the first “unsigned” band to get into the charts to… uh… get signed with a record label!

Wonchop hits MTV

On a related note, the animator Ben “Wonchop” Smallman, who first found his home at wonchop.hippygeek.co.uk, has released a music video for Hypocrite by Akira the Don. The video is going to be aired on MTV Europe!

Congratulations Ben! It gives me a warm and fuzzy feeling inside to know that I made just a small contribution to his success by setting up his first web site, allowing him to share his animations with the world!

Creative Commons 3.0

The Creative Commons 3.0 licenses are now available. It's brilliant to see that they've taken into account the concerns of Debian Legal, which I mentioned to them back in 2004.

However, the “parallel distribution provision” suggested by Debian Legal was not adopted, which may mean that the CC licenses will still not be considered “DFSG free”. To be honest, this particular requirement doesn't really bother me. The issue centres on “Technological Protection Measures”, or “Digital Rights Management” as the media industry likes to call it. Since I see DRM as fundamentally flawed by the laws of physics (if it can be perceived, it can be recorded) and of economics (this is the information age, not the industrial age), I'm not too worried either way.

Creative Commons take a bit of a jab at Debian Legal by arguing that, since Debian already allows documentation under the GNU Free Documentation License, it would now be hypocritical not to allow certain Creative Commons 3.0 licenses.

Graphical Software Design with UML and XMI

I'm currently studying parallel modules in UML and Java, and whilst reading through the notes for UML an idea occurred to me.

If you could store the semantics of a UML diagram in an XML format, you could transform your models into SVG diagrams or XHTML documentation, and even generate a framework of code for the implementation of a computer program.

A computer program could be designed graphically, using a drag-and-drop application with SVG, and collaboratively, using a version control system. This would also, in theory, make it much easier to implement your program in multiple languages if you wished. In combination with reverse engineering of code into UML, it would allow people graphical, textual and code views of an application, depending on personal preference or their role in the development process.
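
As a very rough sketch of the code-generation end of that pipeline (the XML here is a simplified stand-in, not real XMI, which is considerably more verbose):

    import xml.etree.ElementTree as ET

    # Simplified stand-in for an XMI export of a class diagram
    model = ET.fromstring("""
    <model>
      <class name="Account">
        <attribute name="balance"/>
        <operation name="deposit"/>
        <operation name="withdraw"/>
      </class>
    </model>""")

    # Generate a skeleton of code from the diagram's semantics
    for cls in model.findall("class"):
        print(f"class {cls.get('name')}:")
        for attr in cls.findall("attribute"):
            print(f"    {attr.get('name')} = None")
        for op in cls.findall("operation"):
            print(f"    def {op.get('name')}(self): ...")
        print()

The same model tree could just as easily be walked to emit SVG for the diagram or XHTML for the documentation, which is really the whole point.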

As this is quite an obvious use of UML, I searched the web for UML and XML to find out who had already done this.

I found:

I'd be interested if anyone has experience of using these types of tools in practice and how useful they are.