LugRadio Live 2007

LugRadio is now finished for another year. I met lots of really interesting new people at LugRadio Live and caught up with some old friends.

I had quite a long chat with Aaron Seigo from the KDE project about Software as a Service (he's not a fan). I realised that we're interested in solving the same problems, but have very different views on how to solve them. I commented that I'd love to see him have a discussion with Chris DiBona from Google about it (he was also at the event). I then wandered into the cafe and spotted Chris sitting in the corner on his laptop so I stopped to say hello. I mentioned my conversation with Aaron, and Chris said to invite him in for a chat. I went and fetched Aaron, bought them both a drink and then tried to keep quiet while they discussed the issue amongst themselves.

It was really quite interesting to see two people, both from the open source community but with differing views, have a productive conversation. I actually thought this little encounter was a microcosm of the whole weekend: a diverse community of people with common goals but differing views, all together in one building, having informal, intelligent debates about the issues they consider important in the software community.

There were representatives from lots of companies like Canonical, Google, Neuros and the BBC and particularly brave attendees from Novell and Microsoft. There were also people from community projects, notably the youthful Bongo Project and MythTV.

On Sunday I had lunch with Elliot from OpenAdvantage (and his friend whose name I've forgotten but would like to keep in touch with) and we must have been having interesting conversations because an hour seemed to vanish in a blink.

Also, I think I found a hosting company for Krellian, the Linux friendly Bytemark Hosting.

So well done to the LugRadio team and the event crew who pulled off another great event.

Oh, and hi to all the Wolves LUG people who I had a few drinks with on Saturday night, they were a very friendly bunch!

Krellian Web Site Launch

As I announced in April, this summer I'm starting a business called Krellian.

Krellian

Since April I have written an outline business plan and given a presentation in front of a Dragons' Den-style panel of judges. The judges were impressed and awarded me a £4500 grant on the SPEED Programme. SPEED is a project led by Wolverhampton University, with the involvement of many other universities, to support entrepreneurship among students.

On the SPEED programme I've attended a 3 day training course on running a business and am now developing the business idea further.

Well, it's 07/07/07 and as promised I'm launching my company web site(s) today. They're a little light on information at the moment but they're enough to point people towards, bearing in mind I'm working to a five year plan here.

Products – krellian.com

This is the main company web site where products will appear, products being Web Appliances. If that doesn't quite tell you exactly what the products will be then that's partly intentional. I've not decided what the first product is going to be yet; I'm working through a shortlist. The software projects at krellian.org might give some clues.

Services – krellian.net

krellian.net

This will eventually be the home of a suite of web services, but for now it's serving as a bit of an experiment in natural language command.

Community – krellian.org

Now this is the bit where I'm going to need some help. Once I've kicked off development of some software projects, I'm hoping to get developers interested and build a community around the projects. I don't expect this to happen overnight and I think I really need to get a release out first. I'll be working towards a release of something over the next three months.

Webtop

Webtop will carry on where Webscope left off. It's a souped-up web browser for devices that don't need a desktop.

Webdoors

How I start on the Webdoors project will depend on what I choose to be my first product, but the long term vision is a Webtop Linux Distribution, much like the Desktop Linux distributions we have today. Essentially a collection of libre web applications given a consistent look and feel.

The vision that Krellian is working towards is a Ubiquitous Web facilitating the free sharing of information and ideas. The Ubiquitous Web is device independent. That means you can access information in a format suited to the device you're using, be that plain text, HTML, vector graphics, voice or even a 3D virtual world. This gives you an idea of the direction of the Webdoors project.

W3C Compliance

All three web sites are fully W3C-compliant XHTML and CSS and have been tested in IE6, Firefox and Safari (if anyone could test them in IE7 it would be helpful).

Balancing W3C compliance with the web page actually looking OK in the most widespread but worst standards-supporting browser is a bit of a pain. My advice: keep it simple!

Hosting

It wasn't the original plan, but the websites (apart from the software projects) are temporarily hosted in my bedroom. That's very bad because out here in the sticks we have a very unreliable power supply, and I've recently had a lot of problems with PlusNet, my Internet provider.

I'm looking at hosting options at the moment, trying to get my head around Amazon's Elastic Compute Cloud and weighing up the advantages and disadvantages of renting a dedicated server from day one.

VoIP

Thanks to ALUG for the advice on VoIP service providers. I now have an 0845 number for my business, provided by sipgate, which points at wherever I happen to be on the Internet.

Business Cards

I've ordered 250 double sided business cards from VistaPrint for about £15 including a completely custom design and delivery etc.

LugRadio Live 2007

See you at LugRadio Live!

Bob's Perfect Virtual World as the 3D Web

In this blog entry, I'd like to address three blog posts by Bob Sutor (of IBM) about his requirements for a perfect 3D world, implemented as a direct extension to the World Wide Web as described in my 3D Web design concept.

A pure offline mode

I think this is part of a wider requirement for certain web applications to work offline. With the recently announced Google Gears and other projects from major industry players like Adobe's Apollo, Mozilla's Firefox 3 (and Parakey, currently vapourware), the Dojo Offline Toolkit, Microsoft's Silverlight and Joyent's Slingshot, I think this is going to become an extremely hot topic. I think we're going to see the boundary between web server and web user agent blur considerably into "web servents" (part server, part client). So in short, an offline mode can use the same technology as an online 3D web, with a local server or a local cache of data, logic and presentation.

A peer-to-peer model

By using web technology, we can take this for granted to the extent that anyone can run their own 3D web server and we can make hyperlinks between them. The peer-to-peer idea could be taken a lot further than this though, with users in the same virtual space swarming the data between each other. I'm less sure about that bit.

A model of many planets

Again, this is basically what the web is.

Much better zoning

This almost touches on the controversial subject of the .xxx domain.

However, in Second Life the geography works in much the same way as in First Life, with blocks of land having permanent neighbours. This is a limitation of real physical space which, while it might be nice to reflect in virtual worlds, is not necessary. We could have lots of areas of "virtual land" whose boundaries are defined only by their own content, and then have portals (hyperlinks) which allow you to move into another space. There is no reason to have permanent neighbours because your neighbours are simply whatever you link to, which is under your control. In this way zoning just becomes a result of the links people make, which works reasonably well on the current web.

If you do want to build a planet with geography like the real world (like Second Life), you can still do so, but you could decide to ban certain activities in that particular planet. That way, if there's some content you don't like you simply don't link to it, and it is only as close as six degrees of separation dictate.

In-world Secure Chat

I would argue that secure chat in general isn't particularly widespread on the Internet yet, so this is an issue for the Internet in general. However, see below for more discussion on in-world chat. In short: XMPP encryption extensions.

AI

This would just be part of a web application. A 3D web application that is a 3D game may have AI controlling faux avatars and objects; a sales site may have an AI shop assistant or a human-AI hybrid. Server-side scripting languages and JavaScript manipulating X3D files could provide this.

World-to-world communication

XMPP (Jabber) and either Jingle or SIP for voice (and video?) would be great for person to person chat. A couple of interesting points spring to mind:

Firstly, should the Jabber client be part of the 3D web user agent or should it just be another web interface, like Google Talk in GMail? Especially with regards to advertising the presence or status of the user (available, away, busy, offline).

Secondly, how do we deal with the issue of hearing people around you in a virtual space and adjusting the sound as people move, in addition to person-to-person conversation between worlds? We certainly don't have standards for this yet, so it will be interesting to watch Linden Lab.

World-to-world teleportation

Hyperlinks.

Do I need a membership in the other world or is there a notion of guest?

We have the same issue on the web. I think distributed authentication like OpenID is a giant leap forward in this field.

How do I deal with cross-world identity?

By using a URI as a person's identity, as in OpenID. You can still have your friendly screen name in Jabber, but the URI uniquely identifies you.

Can I bring my money with me?

That's a good question, but I think the answer is that if you want some kind of virtual currency, it simply becomes a service like PayPal where you buy credits of some kind and the service sorts out "exchange rates". You could then use that currency in any world or on any web site by using that service as a broker for payments. I'm obviously making this sound a lot simpler than it really is; nothing is straightforward where money is involved.

Can I bring my clothes with me?

Yes, your avatar and everything your avatar wears is hosted on an avatar server (just a 3D web server) and can simply be included into a scene. This only works if all the worlds use the X3D (or another) standard, which is one of the fundamental requirements of a 3D web in my opinion.

Can I bring more general objects between worlds?

The same as above, "objects" can be an X3D file hosted somewhere on the web which can be included into another X3D file dynamically. This requires a certain level of write access to all 3D web servers, which is probably going to cause all sorts of spam problems like we have on wikis. (Imagine a spamming company putting up billboards everywhere).
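The dynamic inclusion described above already exists in X3D as the Inline node, which pulls another X3D file into the current scene by URL. A hypothetical fragment (the URL and translation are invented for illustration) might look like:

```xml
<!-- A world scene including a remotely hosted object/avatar.
     The URL below is hypothetical. -->
<X3D profile="Immersive" version="3.0">
  <Scene>
    <Transform translation="2 0 0">
      <Inline url='"http://avatars.example.com/me.x3d"'/>
    </Transform>
  </Scene>
</X3D>
```

Whether servers should accept such third-party inclusions by default is exactly where the billboard-spam problem comes in.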

Search

Don't worry, Google will sort that out ;). Seriously though, it could work the same way as the web with spiders and giant indexes.

Device and world compatible link redirection

Now that is a very interesting topic, which you could call the Device Independent or Multimodal Web. I think this can be solved with HTTP Accept headers and content negotiation. This is a major part of what my Webscope project is about: a multimodal web user agent.
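To illustrate the content negotiation idea, here's a minimal Python sketch of a server picking the best representation it can offer for a client's Accept header. It's simplified (no wildcards, and the media types and quality values are just examples), not a full implementation of the HTTP rules.

```python
# Sketch of HTTP content negotiation: pick the best available media
# type given an Accept header. Simplified; real negotiation also
# handles wildcards like text/* and */*.

def negotiate(accept_header, available):
    """Return the highest-quality media type the server can offer."""
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0  # default quality factor per HTTP
        for field in fields[1:]:
            name, _, value = field.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        prefs.append((q, media_type))
    for q, media_type in sorted(prefs, reverse=True):
        if media_type in available:
            return media_type
    return None

# A 3D-capable user agent might prefer X3D; a phone, plain HTML:
print(negotiate("model/x3d+xml;q=0.9,text/html;q=0.5",
                ["text/html", "model/x3d+xml"]))  # model/x3d+xml
```

The same request URL can then serve a 3D scene to a Webscope-style user agent and an HTML page to everything else.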

Google Gears @ Google Developer Day 2007

London Google Developer Day

I spent yesterday at the London Google Developer Day, one of 10 simultaneous events in cities around the world. This international event was the first of its kind and Google took the opportunity to launch some new products relevant to developers. By far the most interesting project for me was "Google Gears", a web browser extension that allows web applications to work offline. I seriously believe this web browser advancement is as significant as the APIs which put the "A" in "AJAX".

Google Gears

Google Gears is interesting not only because of what it does (there are plenty of projects tackling the "offline problem"), but because of the way Google are going about it. Google are collaborating with important industry partners like Mozilla and even Adobe to try to create a standard API for offline applications that they hope all web clients will use. All major browsers are already supported, with Opera support in the works. Adoption of the standard by projects like Adobe's Apollo platform, and discussion with projects like the Dojo Toolkit, greatly increase the chances of the standard proliferating.

Google Gears consists of three main parts – LocalServer, Database and WorkerPool.

LocalServer acts as a local HTTP server inside the web browser and caches and serves resources locally.

The Database part uses SQLite as a local store, a kind of giant cookie implemented as a relational database that web applications can access both online and offline.

WorkerPool creates a kind of multi-threading in JavaScript so that processor-intensive operations can run asynchronously, with a particular focus on preventing user interface lock-ups. This is useful not only for offline operation but also for increasing the responsiveness of the user interfaces of online applications.

The idea of Google Gears is that web applications will be usable offline when network connectivity is intermittent or non-existent, and changes made by the user will be passed to the server opportunistically when a network connection returns.
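The WorkerPool model is message passing between a foreground thread and background workers. A rough Python analogue (Gears itself uses a JavaScript postMessage-style API; the queues and the squaring task here are just illustrative):

```python
import threading
import queue

# Rough analogue of the Gears WorkerPool: expensive work runs on a
# background worker so the foreground (the UI thread, in a browser)
# never blocks. Communication is by message passing, not shared state.
inbox, outbox = queue.Queue(), queue.Queue()

def worker():
    while True:
        n = inbox.get()
        if n is None:  # shutdown signal
            break
        # Stand-in for a processor-intensive operation
        outbox.put(sum(i * i for i in range(n)))

t = threading.Thread(target=worker)
t.start()
inbox.put(1000)        # hand the work off and stay responsive
result = outbox.get()  # collect the answer when it's ready
inbox.put(None)
t.join()
print(result)  # 332833500
```

The key property is that the foreground only ever posts and receives messages; it never waits inside the expensive computation itself.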

The missing component

However, Google Gears is missing a key part of the solution required to make web applications work offline. Currently, if you write a web application which modifies data in the local SQLite database there is no provided method for synchronising those changes with the server. This is left for the developer to figure out on a per-application basis.
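Purely as an illustration of what developers are left to build themselves (this is not anything Gears provides), one common per-application approach is a pending-changes queue that is replayed against the server when connectivity returns:

```python
# Hypothetical sync strategy: queue local changes while offline, then
# replay them on reconnect. The "server" dict stands in for a remote
# API; real sync also needs conflict detection, which is the hard part.

pending = []  # changes made while offline, in order
server = {}   # stand-in for the remote data store

def save(key, value, online):
    if online:
        server[key] = value
    else:
        pending.append((key, value))

def reconnect():
    while pending:
        key, value = pending.pop(0)
        server[key] = value  # naive last-write-wins, no conflict handling

save("note1", "draft A", online=False)
save("note2", "draft B", online=False)
reconnect()
print(server)  # {'note1': 'draft A', 'note2': 'draft B'}
```

Last-write-wins is exactly the kind of policy that works for some applications and is catastrophic for others (financial transactions, say), which is presumably why no single standard emerged.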

After the event I had a chat with Chris Prince over a pint (paid for by Google of course). Chris is one of the main engineers who has been working on Google Gears in Mountain View. He said that three separate teams were given the task of figuring out a standard method for synchronisation and all three came up with completely different answers. It turned out that they couldn't figure out a standard synchronisation method that worked well in most cases so they just left that bit out, for now.

I asked Chris whether he thought a dominant standard would emerge or whether things were always likely to be this way. He said that he expected a standard to emerge which worked well 80% of the time, with different methods for special cases, financial transactions being an example. He thinks that once Google Gears capability has been added to around three major applications (I suggested GMail, Google Calendar and Google Docs!), a useful standard method may emerge.

Other Happenings at GDD07

I attended a talk by Chris DiBona, Google's Open Source Programmes Manager, about Open Source in Google. I grilled him about how Google decides whether a project should be open source or not (with particular reference to hosted services like GMail and software bundled with Google Appliances) and asked him about GPLv3. I then sheepishly asked him to sign my copy of "Open Sources", which he co-edited.

I also attended talks on Google Gadgets and GData APIs and asked whether Google plans on supporting authentication mechanisms other than Google Accounts in their APIs, but was basically told to make a feature request.

I met up with Darren from PHPWM and talked with a Cambridge PhD student about his work on AI in virtual learning environments. I explained my business idea to him and had an interesting conversation about intellectual property in universities.

Sergey Brin gave a live international webcast to all the event attendees: an amusing and bizarre talk about how the Internet is now fuelling its own growth through relationships formed on dating websites, which lead to offspring who go off to work on making the Internet better. He was referring to the fact that a child born as a result of the first dating sites would now be around 12 years old, and presumably old enough to use Google's web-based IDEs for developing mashups and Google Gadgets!

The food at the Google event was characteristically fantastic, and the "Blogger Lounge" was full of lava lamps and floor cushions, with free WiFi and coffee. A goody bag was provided including a Google-branded T-shirt, mouse mat, USB stick, yo-yo, notepad, pen, sweets and silly putty! All in all it was great to rub shoulders with Googlers and I had some extremely interesting conversations with lots of smart people. The train journey and meanderings around the London Underground weren't even that troublesome.

Robot Race in the Press

The robot race mentioned in my last post was featured in the Birmingham Post and the Birmingham Mail yesterday. The Birmingham Mail has a couple of pictures of me looking like a goon cheering on the robot in the semi-final.

Birmingham Mail Clipping

It's not very clear in this bad copy, but that's a nice, clean, head-on picture of the HD DVD crack code published in a Birmingham newspaper. Mission accomplished 🙂

I've uploaded a grainy video of the final moment of that race, when our (much slower but more reliable) robot crawls past the opponent which keeps going round in circles with a burnt out motor.

There's also a video of the robot completing a full lap, note the lecturer towards the end asking what the number on the front of the robot is and us trying to claim that it's just a random number.

Robot Race Success

We came 2nd!

For a second year group project we had to design an autonomous line following robot. We were given a budget (£40, of which we spent £17), a vague specification and told to go away and figure out how to build one. Below is a picture of our finished robot, BEAST (Ben, Evelyn, Alex, Sam, Tom) and yes, that is the HD DVD crack code printed across the front for the benefit of the unsuspecting press photographers who will hopefully publish photos of our robot 🙂

BEAST

The robot was built for simplicity and reliability, with a single PIC chip, a couple of stepper motor drivers, infra-red photodarlington sensors, 5V and 12V voltage regulators and little else (I'll open up our project wiki to the public when I fix Python). This approach paid off today because we came 2nd out of 18 in the final robot races in the Great Hall at Birmingham University, in front of a cheering crowd.

In our first heat we came off at the first corner, but after a quick nudge completed the course with no problems, albeit quite slowly. Unfortunately we were still running the software we programmed for the qualifying race which was designed for accuracy rather than speed. We had intended to replace the software with the fast version of the code which accelerated from low speed/high torque to high speed/low torque at appropriate times and automatically calibrated the white/black threshold for the infra-red sensors. Unfortunately our department is being refurbished and we didn't have access to a lab to do the reprogramming. But it didn't matter, we got through.

To our surprise we reached the semi-finals. Despite the technician putting our power supply in the wrong way around to start off with (thank you Alex for designing the electronics in such a way that this didn't just fry the robot!), we actually won the semi-final. The opposing team's robot was built for speed. It was extremely fast, but it turned out it was a little too fast because it burnt out one of its motors, and we crawled past it on the home straight in a perfect case of the tortoise beating the hare!

We didn't win the final, but were very fairly beaten by a robot which had both speed and endurance, so congratulations to the winning team!

We were all absolutely thrilled with the result and it makes the hours of work worthwhile.

I want to say thanks to the whole team who worked brilliantly together (I didn't have to enforce my position of team leader once :P). Special thanks go to Alex for his algorithm simulations using a 2D gaming engine, his programming skills (I now trust him to compile C more than the CCS compiler) and his help with the electronic design. I also want to thank Laura for providing the tiger who did a great job of driving and turned out to be our lucky mascot!

All in all, we were on time, under budget and we came 2nd in the race! Suffice to say we all went down the pub afterwards to celebrate 🙂

BBC Radio 4 on Open Source

MJ Ray points towards a BBC Radio 4 programme on Open Source by Paul Bennun.

I've just listened to the programme and it contains some wonderful ideas about how software is unique in the world, being a collection of ideas rather than a collection of physical components. It's an insightful, well balanced exploration of Free and Open Source software and how it is a social rather than technological phenomenon.

I particularly liked a quote from Nick McGrath, Head of platform strategy for Microsoft in the UK.

“We don't compete with the Open Source community, that would be like trying to compete with the weather”.

DRM-free iTunes, EMI++

Kudos to EMI for their deal with Apple to offer DRM-free songs on iTunes. It seems Steve Jobs has also put his money where his mouth is.

What's amazing is that it's taken this long for the big boys in the music industry to realise that DRM is flawed by the laws of economics and physics. Other companies like eMusic realised this a long time ago.

No matter how they dress it up as a “premium” service, this hopefully signals the end of DRM protected media. RIP DRM.