Creating compelling content in the Web 5.0 world

Whoa, there. Web 5.0?

Okay, so I made up Web 5.0. Actually, I detest the numbered generations we’ve applied to the web. The main problem I have with these terms is that they imply a linear progression. They suggest that we are going to abandon the interactive web, Web 2.0, for the semantic web, Web 3.0. Obviously we aren’t. I doubt anyone would even suggest it. Web developers will continue to use both. Hence Web 5.0 (do the maths).

I’m going to drop the term now – it was just a joke. The modern World Wide Web is, in fact, much more than just the three so-called generations – although clearly they are very important. I can identify three main concepts (not technologies) which are facilitating the current evolution of the web:

  • Interactivity (2.0)
  • Semantic understanding (3.0)
  • Commoditization (the Cloud)

Nothing groundbreaking there. And we, as users, are certainly seeing more and more of these big three in our daily use of the web.

Interactivity is fairly obvious. I think the biggest revolution in interactive content came about as Wikipedia took off. It is undoubtedly the most expansive (centralized) base of knowledge the world has ever seen – and it was written by volunteers, members of the public. It really is a staggering collaborative achievement. Then there’s blogging, micro-blogging, social networking, professional networking, content discovery (Digg, etc.) – pretty much anything you might want to contribute, you can.

Semantic understanding is a little trickier to see. That’s hardly surprising, as it is so much newer and far less understood. Believe the hype, though. The semantic web is coming and it will change everything (everything web related, that is). If you don’t believe me, try googling for “net income IBM”. You should see something like this:

[Image: Google results using RDF info]

That top result is special. It’s special because it’s the answer; it’s what you were looking for. No need to trawl through ten irrelevant pages to find the data – it’s just there. Google managed to display this data because IBM published it as part of an RDF document. If you search for the same information about Amazon – who don’t publish it – no such luck. (That particular example was given by Ellis Mannoia in a great Web 3.0 talk at Internet World this week – so thanks Ellis.)
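To make the idea concrete, here is a minimal sketch of what publishing a figure like that as machine-readable RDF might look like, serialized by hand in Turtle syntax. The `ex:` vocabulary and the figures are entirely made up for illustration – this is not the vocabulary or data IBM actually published:

```python
# A minimal sketch of publishing financial figures as RDF triples in
# Turtle syntax. The ex: vocabulary is hypothetical, for illustration only.

triples = [
    ("ex:IBM", "ex:name", '"IBM"'),
    ("ex:IBM", "ex:netIncome", '"12.3 billion USD"'),
    ("ex:IBM", "ex:fiscalYear", '"2008"'),
]

def to_turtle(triples, prefix="http://example.org/finance#"):
    """Serialize (subject, predicate, object) tuples as Turtle text."""
    lines = [f"@prefix ex: <{prefix}> ."]
    for s, p, o in triples:
        lines.append(f"{s} {p} {o} .")
    return "\n".join(lines)

print(to_turtle(triples))
```

A search engine that crawls triples like these doesn’t have to guess what the page means – the relationship between the company and the figure is stated explicitly.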

That leaves us with commoditization. Specifically, the commoditization of functionality from a developer’s point of view. This concept is largely, although not exclusively, linked to the Cloud. The term “the Cloud” is used broadly to describe services made available over the internet. GMail, for example, is email functionality in the cloud. Users don’t need to install anything to use GMail (bar a web client); they just use it when they want, from any computer. Many of the Cloud services out there are available as APIs, and that leads to the commoditization of functionality. Say I want to add a mapping application to my web site to show my audience where I am. A few years ago that would have been a significant amount of development work. These days it’s trivial – you just make a call to the Google Maps API. And so mapping functionality becomes a commodity.
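As a sketch of what that commoditization looks like in practice, here is a small helper that composes a Google Static Maps URL for a single pin. The endpoint is real, but the parameters shown are a minimal subset and a real request would also need an API key – the point is just how little code the “map application” now takes:

```python
from urllib.parse import urlencode

def map_embed_url(lat, lng, zoom=14, size="400x300"):
    """Build a Google Static Maps URL showing one pin.

    A sketch of 'maps as a commodity': the entire mapping feature is
    one URL pointing at someone else's service. (A production call
    would also need an API key parameter.)
    """
    params = {
        "center": f"{lat},{lng}",
        "zoom": zoom,
        "size": size,
        "markers": f"{lat},{lng}",  # drop a pin at the same point
    }
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)

print(map_embed_url(51.5074, -0.1278))  # a pin on central London
```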

The point of this post, however, is that these are not mutually exclusive concepts. There is no reason why you cannot combine semantic understanding with Cloud computing, or UGC, or both. Quite the opposite: combining the three should be the goal.

There are problems, however. Utilizing Cloud computing requires a certain amount of adherence to standards – fitting into an API. And semantic understanding (and meta data in general) takes time to accrue. In general, those two constraints don’t work well with Web 2.0 functionality.

Let me give an example: if a user contributes a comment to an article, they probably won’t take the time to add the meta data required for semantic understanding to be achieved. In the same way, if they don’t give their location, you can’t show them as a pin on Google Maps.

However, semantic understanding is (IMHO) more than just the use of RDF documents. Tools like Nstein’s Text Mining Engine (TME) can be used to create a semantic footprint describing a piece of text. I’ve talked, in previous posts, about using the data gleaned by the TME in imaginative and experimental ways. Take the example above. If a user were to post a comment about a talk they attended, the TME could extract not only the concepts of the comment but also data like the location of the subject. That semantic understanding can be used to programmatically call the Google Maps API to add a new pin to your map.
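Here is a toy sketch of that comment-to-pin pipeline. The `extract_location` function and its tiny gazetteer are hypothetical stand-ins for what a real text mining engine would do – this is not Nstein’s actual API, just the shape of the idea:

```python
from urllib.parse import urlencode

# Hypothetical stand-in for a text mining engine: a lookup table mapping
# a known event name to a city and coordinates. A real TME would extract
# and resolve locations from free text.
GAZETTEER = {"Internet World": ("London", 51.498, -0.179)}

def extract_location(comment):
    """Naive entity extraction: spot a known event name in the text."""
    for name, (city, lat, lng) in GAZETTEER.items():
        if name in comment:
            return city, lat, lng
    return None

def pin_url(comment):
    """Turn a user comment into a map pin URL, with no input from the
    user beyond the comment text itself. Returns None if no location
    could be extracted."""
    loc = extract_location(comment)
    if loc is None:
        return None
    city, lat, lng = loc
    params = {"markers": f"{lat},{lng}", "zoom": 12, "size": "400x300"}
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)

print(pin_url("Great talk on Web 3.0 at Internet World this week!"))
```

The interesting part is the hand-off: user-generated content goes in, a semantic footprint comes out, and that footprint drives a commodity Cloud API – all three concepts in one small loop.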

And there you have it. Semantic understanding of interactive content used to harness the power of Cloud computing. One of the most important benefits of the TME, for me, is the flexibility it affords you. If you know that you can get access to that kind of information, it opens up all kinds of possibilities. Exploring some of these possibilities has to be the focus for making a brand stand out against the plethora of content suppliers and aggregators out there; for improving the user’s experience and gaining their loyalty.

So it’s time to stop thinking about Web 2.0 or Web 3.0 and start thinking about the technology and techniques available and how they can be used to the greatest effect.

2 Responses to “Creating compelling content in the Web 5.0 world”

  1. Steve says:

    Hi Chris,
    Great post. We are on the same page.
    This is the evolution of information production and consumption. Better media or access -> more content -> better filters/management -> better search -> better access -> more content -> … The technology acts, in most cases, as a catalyst. As you said, for the current evolution, text mining is one of the technologies that will play a huge role, because it can operate at different levels: helping publish advanced semantically annotated content, which is hard to achieve by hand; creating better filtering and management; and enabling better search, exploration and analysis of all kinds. Text mining takes advantage of this new era of content of any kind on anything. We have to remember that text mining uses a lot of content and annotated content to feed machine learning algorithms. At the same time, we can use the collaborative work of millions of people to feed the text mining algorithms with a huge mass of organized and linked semantic content. This is a paradise for text mining compared to what we had in 2000 🙂

    From your colleague on the other side of the ocean,

