Steve Krug’s Ironic Law of Usability

Get rid of half of the words on each page, then get rid of half of what’s left – Steve Krug’s Third Law of Usability

So that’s “Remove three quarters of the words”, then?

Why I would hire a social media expert

Just a quick one.

I came across this post a couple of weeks ago (and just got round to following it up): http://shankman.com/i-will-never-hire-a-social-media-expert-and-neither-should-you/.

Basically, @petershankman is very much against the “Social Media Expert”. His argument is based upon social media being just one component of a good marketing strategy – and not so alien or complex as to need a dedicated expert on your payroll:

Social media is just another facet of marketing and customer service. Say it with me. Repeat it until you know it by heart.

While I do see his point of view – and do agree that social media should be considered part of a broader marketing effort (even if it’s the main part) – I think he (and a staggering number of commenters) have missed the point. It does sound like something a social media expert would say, but… they don’t get it.

The point is this: social media isn’t just used as a soapbox to shout from. Unlike many other marketing channels, social media makes it incredibly easy to listen.

Let’s compare it to another “facet of marketing”, email campaigns. What do we learn when we send out a marketing email? Maybe we get an inbound lead – if we’re lucky. Then, we learn that the particular prospect who responded really liked our message. However, we’ve learned very little about all those who didn’t. And aren’t they the more important ones to listen to?

True, you could argue that potential customers might email you to tell you about their requirements and expectations and the preconceptions they have about you. I’m sure it happens. Not often, though.

Using social media, on the other hand, a quick search on Twitter surfaces a potentially huge number of opinions expressed about your organization. Combine that information with some clever analytics and a marketing team has a huge amount of usable and very valuable data to hand.
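To make that concrete, here is a minimal sketch in PHP of the “quick search plus some analytics” idea, using the unauthenticated Twitter Search API that was available at the time (long since retired); the brand name and the word lists are placeholders:

    <?php
    // Sketch: pull recent tweets mentioning a brand and do some very crude
    // sentiment counting. search.twitter.com is the old, unauthenticated
    // Search API of the era; the brand name and word lists are placeholders.
    $brand  = urlencode('YourBrand');
    $json   = file_get_contents("http://search.twitter.com/search.json?q={$brand}&rpp=100");
    $tweets = json_decode($json, true);

    $positive = array('love', 'great', 'brilliant');
    $negative = array('hate', 'broken', 'awful');
    $score    = 0;

    foreach ($tweets['results'] as $tweet) {
        $text = strtolower($tweet['text']);
        foreach ($positive as $word) {
            if (strpos($text, $word) !== false) { $score++; }
        }
        foreach ($negative as $word) {
            if (strpos($text, $word) !== false) { $score--; }
        }
    }

    echo count($tweets['results']) . " mentions, rough sentiment score: {$score}\n";

Crude, of course – but it is more listening than most email campaigns ever manage.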

You could still argue: you don’t need an expert to search on Twitter, which is all Peter asserted. True, you don’t, but that’s only the most obvious use of social media outside of the normal scope of marketing channels. What about automating reactive messaging (as lots of big brands already do)? What about feeding positive trends in discussions about your competitors to your R&D team? What about personalizing how a visitor sees a Web site based upon the publicly available information about their likes and dislikes (as idio does)?

What if you don’t know how you could use social media to improve your business strategy? Well then, I’d say, you need to hire a social media expert.

Native apps – a necessary evil?

I recently saw a tweet quoting TBL, talking at Profiting From The New Web (which I’m very sad to have missed!), which went along the lines of: don’t develop apps, use open standards.

It’s a very interesting instruction. One I used to agree with…

Purely from a technical point of view it is difficult to argue with the logic. Being able to develop once and still create Web applications which work well on many devices – with different hardware features, screen sizes, etc – is very possible using the latest iterations of Web standards.

HTML5 allows developers to produce rich and interactive graphics and animations in their pages using the Canvas element. Streaming media can be handled (fairly) effectively with the native media elements. Persistent client-side storage is even available, which can be used for offline applications, amongst other things. The very brilliant, and now widely supported, CSS level 3 specifications make it really easy to accommodate different devices and screen sizes by using Media Queries – an excellent example of that is colly.com (play with your browser window size or use your phone/tablet).

All of those technologies are very interesting in their own right but for the purpose of this post I won’t delve into the mucky details. If you are interested, or feel you need to brush up, I recommend following the excellent @DesignerDepot on Twitter.

The important point here, and I assume (which may be a dangerous thing!) the key point for the Open-Standards-over-native-apps argument, is that Web sites built using modern Web standards – running on a modern browser – have the potential to be just as feature rich as any native app.

That may not be quite true: to the best of my knowledge HTML/JavaScript does not support device-specific features such as compasses and native buttons. So in that regard, apps have a slight edge. That said, I don’t think there are any huge gaps in the functionality available through the standard Web technologies. Certainly there are no insurmountable ones.

I do see one big problem for the Web standards supporters, though. One big difference; one thing that Apple – and the others – offered which so far Web applications have failed to match. Micro-payments.

I’m sure there may be many people who disagree with that statement, but hear me out.

Firstly, the importance of micro-payments. Whatever the difference in the functional capabilities of Web applications and native apps, Apple’s App Store essentially created an industry – or at least a sub-industry. It was probably the first to market its applications as apps and certainly the most successful. More than 10 billion apps have been downloaded and the App Store’s revenue in 2010 reached $5.2 billion. The median revenue per third-party app is $8,700 – according to Wikipedia (and source) – and there are more than 300,000 apps available, equating to a $2.6 billion market for the app developers themselves. No doubt, then, this is big business. And (as big a fan as I am of FOSS) that kind of potential revenue has been the main driver of the app movement’s success, has driven many software innovations in the name of competition and, most importantly, has created an environment for entrepreneurs and developers to benefit financially from their creations. I would say – unquestionably, in my opinion – that the success of the App Store and its contemporaries is primarily down to a no-fuss micro-payment system where users feel safe, comfortable and not under pressure when parting with $3.64 (on average) of their hard-earned cash.

You may argue that these micro-payment systems already exist on the Web; Amazon’s payment system – including 1-Click – and PayPal could be seen as successful examples, amongst many others, no doubt. However, the argument which I’m contesting is not that Web-based systems are a better alternative to native apps but that open standards are… Amazon is no more open than Apple or Android. Google Apps – while very much Web based – doesn’t use a payment system which is compatible with any of its competitors or described in any specification (W3C or otherwise). In fact the W3C did look into this very issue in the 1990s, although due to lack of uptake of micro-payments (how wrong they were) the working group was closed.

Anyway, to conclude this (now rambling) post… it’s all well and good to plug open standards but, until the big issue of payments is resolved with an open standard, a healthy applications market – and hence the competitive pressures which have pushed the boundaries of mobile software to where it is today – cannot exist without closed systems and APIs.

Publishing from a Content Hub


Working as part of a sales team, one of the questions that I’m asked again and again – by my management as well as the Marketing department – is “who are your biggest competitors?” For a Web content management system or text analytics tool (Nstein’s WCM and TME respectively), that’s a fairly easy question to answer. In the DAM space, however, because of Nstein’s particular focus upon the Publishing industry, the answer is less clear.

A simplified example of a publishing workflow.

Content Hub workflow: with assets stored in a central repository, all systems and processes have direct access to them.

In fact, over the last couple of years Nstein has been positioning its DAM offering as a strategic centre-point for publishing workflows – Content Hub seems to be the prevailing (if slightly uninspired) label for this kind of system. Essentially, a Content Hub is a DAM with integration points so that all assets which come into the wider system (the company, publication, etc) are ingested straight into it; all content which is created internally is written directly into it; and then, all systems which utilize, display, edit or distribute content do so from the Hub directly. This is not a new model – it is sometimes referred to as a single version of the truth – however it often represents significant change, and significant challenges, in environments which have naturally developed around a (fairly) linear workflow. Magazines, in particular, as well as any breaking-news publications, tend to have an A-to-B style workflow which involves filtering incoming media, bringing it together as a publication of some description and then publishing it out. By repositioning the processes and applications along such a workflow around a central Hub, dependencies and bottlenecks are broken down and assets, and access to them, become standardized. As a result of this shift, efficiency improves, asset re-use is encouraged and assets, their rights and usage information are better tracked. And by creating packages of content, independent of both source and output channel, features can be efficiently published on multiple channels (such as Print and Web) and new properties can be created cheaply with lower risk.

So, coming back to the original question, the DAM space doesn’t present that many competitors for Nstein (although there are, of course, a few) as few DAM systems have the out-of-the-box capabilities required by the vertical – handling extended metadata, transforming images, re-encoding video, printing contact sheets, managing page content, &c. In fact, the biggest competition in these cases comes squarely from Print Editorial System vendors who would, like us, endorse a Content Hub approach except with their CMS at the centre of the publishing universe.

In some ways both sets of vendors – DAM and Editorial System – are using the same arguments. One version of the truth, certainly. Single workflow and security. To some extent the multiple-channel publishing argument would also be used by both; certainly most Print Editorial Systems come with some option to publish a Web site as well.

These two approaches to the same Content Hub strategy raise a couple of key questions: what is the difference between the two solutions and how do those differences affect the buyer?

The former question is the simplest to answer: a DAM-based Hub disassociates itself from the editing and creation of products whereas an Editorial System is strongly tied in to the production process. Take the creation of a newspaper, for example. The collaborative effort needed to construct a modern edition in an efficient and reliable manner relies heavily upon Editorial Systems to manage the agglomeration of the content and design in real time. The question is: should that system be the hub or a spoke?

How do these differences affect the buyer? What are the relative merits of the approaches? These questions are the ones which are being debated and rely upon strategic visions that the publisher may just not share. However, from my point of view, here are the main points.

On the plus side for the Editorial Systems, as they are so connected to the production process, they can offer advanced and specific functionality, tying in closely with DTP tools and offering collaborative working features which a DAM cannot compete with.

That strength, however, is also the biggest weakness for the Editorial Systems. By abstracting themselves from the production process the DAMs become far more agile. We can look at a fairly simple example of this in publishing the same content to both print and the Web, a process which should, by now, be a commodity. At its simplest this task should work smoothly in any Print Editorial System; text and images from a print feature are transformed into Web pages and published online. What happens, though, when other media are introduced? Most Print Editorial Systems that I have seen struggle to (or cannot) display and edit video. Maybe they can store it, but the advanced features available for print content are gone, as are many simple features such as previewing and usage tracking. Now in many cases the Print Editorial System may be coupled with a Web CMS (potentially from the same vendor) which does handle video better, but in that scenario there are now two production points. That means compromised security, more staff training, more convoluted audit trails. Then add audio, Flash, or any other format of content that the publisher may use – online or elsewhere – and the problem is magnified.

One solution for the Editorial Systems would be to develop the extra functionality required to handle these formats to the same level as the print content they are familiar with. The obvious problem with that is the effort and resources required to build and maintain such a suite. By steering clear of the production process, on the other hand, the DAM-based systems can handle content in a channel-agnostic fashion.

Particularly when one looks at the creativity in digital media these days, the strength of that agility should be clear. There are the obvious examples: Facebook apps, QR codes, iPad channels, etc. There are also some less well-adopted media.

In October 2008 Hearst released a special edition of Esquire (sponsored by Ford) featuring an e-ink, animated front cover. Bauer last week released an issue of Grazia featuring Florence (and the Machine) dancing in an augmented reality world, activated by pointing your webcam/iPhone at the cover. This was pretty disappointing in comparison with many other AR examples (such as the great GE ones) because the real page was not displayed – more on that in a future post. While neither of those examples was particularly well implemented, they definitely show signs of what could become mainstream technologies in the future. Adding the functionality to manage the production of publications built on these kinds of technologies into an Editorial System is a far-fetched proposition. Not only is the investment significant and the road to maturity slow, but if a technology ultimately fails to gain mainstream acceptance the investment becomes a wasted one. For that reason companies that rely upon an Editorial System at the core of their business have to wait until new technologies reach general acceptance before embracing them, and lose the ability to stay ahead of the curve – at least without excessive risk. In those cases, as with more mundane ones, the channel-agnostic and content-agnostic DAM systems project their flexibility directly on to the publications which use them.

That’s not to say that there are not downsides to using the DAM as the Hub. In particular, collaborative working cannot be handled to the depth that the Editorial Systems manage without their level of detail and understanding of the specifics. And in both cases there are overlaps in functionality; most Editorial Systems have some kind of repository, for example, and many top tier DAM systems integrate well with DTP tools.

Inevitably, those two questions drive towards the ultimate conclusion of the debate: “Which would make a better Content Hub, an Editorial System or a DAM?” I won’t attempt to answer that directly as I’m obviously biased towards the solution I sell and know the most about, but I will encourage debate from those who have an opinion…

The future of video on the web

I’m getting rather excited about video media online. We’re on the cusp of a revolution in the way we produce and consume the medium.

I was working on a project recently which involved video content. It struck me that, although our ability to distribute video over the web has come on no end in the last half decade, video content still lacks much of the orthodox functionality of more established media.

Most obviously, there is the dependency upon external codecs (i.e. not native to the browser). The solution, in most cases, is a Flash player. There are numerous Flash players available freely and cheaply on the web; they can usually play most of the common video types and depend only upon a single plugin, Flash. YouTube is probably the best known example of using Flash to play videos.

This approach creates problems all of its own, though:

  • Flash players still have a dependency upon a browser plugin.
  • The binary video – the original file – is not transparently available in the way that images and text are.
  • Flash does not always cohere with de facto web standards: you cannot apply CSS to Flash, it does not respect z-indexes of objects (ever seen a drop-down menu disappear underneath a Flash component?).
  • It does not have a full set of properties directly accessible for the content it wraps, as other elements in a page’s DOM do.

Don’t get me wrong, Flash has its place in the modern web. It is a fantastic platform for RIAs and rich, animated and interactive components of web sites. However, as far as video presentation goes it is, essentially, a hack.

These drawbacks for video (and, in fact, audio) presentation, manipulation and playback have not gone unnoticed. One of the most important changes for HTML5 – first drafted back in January 2008 – is the handling of these media with the <video> and <audio> tags, now supported in both Gecko and WebKit.

The initial specifications for HTML5 recommended the lossy Ogg codecs for audio and video:

“User agents should support Ogg Theora video and Ogg Vorbis audio, as well as the Ogg container format”

The reasoning behind this drive for a single format seems obvious enough. Going-it-alone doesn’t really work as far as web standards are concerned (does it IE?). There were, however, some objections as to the choice of codec, namely from Apple and Nokia. The details of the complaints are not really relevant to this article but can be read in more detail on the Wikipedia page, Ogg controversy. At the end of the day it doesn’t really matter which format is used as long as it is consistent with the requirements of the W3C specifications; for this article I am going to assume that the Ogg codecs and container will be standard.

So, now that we have browsers (Firefox 3.5, Safari 3.1) which support the <video> tag and have native Ogg coder/decoders (Firefox, at least), all of the deficiencies of video we discussed earlier become inconsequential. If video works as part of the HTML then it will behave as such. CSS, for example, will operate on a video element in exactly the same way as it would for an image element, z-index and all. The DOM tree for the page will include the video with all of its properties as expected. And, crucially, events and JavaScript hooks allow web developers with no special skills (such as ActionScript) to control the behaviour of videos.
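As a trivial illustration of the markup side of that, here is roughly how a PHP template (the sort of front end most CMSs would generate this from) might emit a native video element; the file names and CSS class are invented for the example:

    <?php
    // Sketch: a PHP template emitting a native <video> element. The source,
    // poster and CSS class names are placeholders. The point is that the
    // resulting element is an ordinary DOM node: CSS rules (z-index included)
    // and JavaScript event listeners apply to it just as they would to an <img>.
    function renderVideo($src, $poster, $cssClass = 'feature-video')
    {
        return sprintf(
            '<video class="%s" src="%s" poster="%s" controls>'
          . 'Your browser does not support the video element.'
          . '</video>',
            htmlspecialchars($cssClass),
            htmlspecialchars($src),
            htmlspecialchars($poster)
        );
    }

    echo renderVideo('/media/lecture.ogv', '/media/lecture.jpg');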

Silverorange.com have provided a nice example of using video with CSS. If you are running Firefox 3.5 or later you can check it out by clicking on the image.

But there is another – for me more interesting – feature of Ogg video (and, presumably, its alternatives): metadata. Now, metadata in video is nothing new, for sure, but having access to a video’s metadata as described above will lead to a whole new level of video media integration in webpages. The Ogg container, for example, supports a CMML (Continuous Media Markup Language) codec and, in a developmental state, Ogg Skeleton for storing metadata within the Ogg container. Both of these formats facilitate timed metadata. In CMML one could define a clip in a video – say from 23 seconds into the movie up to 41 seconds in – and add a description, including keywords, etc, to that clip specifically. I will resist the temptation to go into a description of how JavaScript listeners could be used to access that data but, in essence, the accessibility of the information to the web page containing it would allow a programmer to accomplish fantastic features with trivial techniques.

The most obvious example has to be search. Being able to display a video from a specific point (where the preceding data may not be relevant) is not beyond the Flash-based players, but it would be much easier to accomplish.

If we squeeze our imaginations a bit further, though, I think there is great potential for highly dynamic, potentially interactive sites to be based around video as the primary content. When demonstrating front-end templates for Nstein’s WCM I always pay particular attention to the in-line, Wikipedia-style links which we create in a block of text using data derived from the TME (Text Mining Engine); in-line for text equates, with timed metadata, to in-flow for video. In the past video has, by and large, been limited to a supporting medium, a two-minute clip to illustrate a point from the main article. With timed metadata this could be a thing of the past.

Imagine this: you have just searched for a particular term and been taken to a video of a lecture on the subject, playing from 20 minutes in – the section relevant to your query. As the video plays, data is displayed alongside it: images relevant to the topic, definitions of terms. And as the video moves into new clips, with new timed metadata, the surrounding, supporting resources change to reflect them – in-flow.

An example of using CSS3 with the video element from Mozilla.

As people appear in films and episodes, links could be offered to the character’s bio and the actor’s home page. Travel programs could sit next to a mapping application (GoogleMaps, etc) showing the location of the presenter at the current time. There are huge opportunities with this kind of dynamic accompanying data to enrich video-based content. And, of course, all of the data from a particular clip can integrate into the Semantic Web seamlessly. RDF links and TME-generated relations could easily be used to automate the association of content to a particular clip of a video.

The downside? Well the biggest one, as far as I can see, is the time-frame. Most publishers are continuing to commit to, and develop, black-box style video players due to the fact that no one – a few geeks, such as myself, excluded – uses cutting-edge browsers. But when HTML5 gets some momentum behind it from a web developer/consumer point of view, the horizons for video will burst wide open.

http://en.wikipedia.org/wiki/Ogg_controversy

Brand: the new pretender

Content is king, is it? Well maybe. There’s no getting away from the fact that good quality content drives traffic. But in the struggling publishing industry, with waning advertising revenues, we might have to conclude that the current approach to web publishing is just not working.

That’s not to say there aren’t exceptions. Julian Sambles (@juliansambles), head of audience development at the Telegraph Media Group, talked at the recent ePublishing forum about his success in terms of SEO and bringing new audiences to the Telegraph site. No doubt other publishers have had similar successes. However there are problems associated with that kind of drive for SEO – not least because it is a very expensive process in a climate where large budgets are scarce. But, for me, there are more important reservations about focusing heavily on search-engine-optimised content.

Firstly, there is the issue of editorial integrity. If content were truly king then its quality would be the single most important factor in growing (and keeping) an on-line audience. For a lot of publishers content isn’t king, though – search is. In that scenario a publisher is not controlling how its content is consumed, or in what order. They will, undoubtedly, find that their political and social stances are watered down as well, as traffic heads more towards soft news and opinion. In circumstances like these the focus actually moves away from the content and towards how the content is structured – the role of the publisher gets closer to that of an aggregator.

The next problem with relying on search engines to supply one’s on-line audience is inherent: the consumer is researching, not discovering (@matt_hero‘s search trilogy is, loosely, relevant here). I seriously doubt Google is inundated with searches for the word “news”. Perhaps terms like “football results” are more common, but still not that frequent. If a visitor arrives at a site from a search engine it is fairly safe to assume they fall into one of two categories:

  1. They’ve already read the news elsewhere, first.
  2. An aggregator has presented them with summaries and the content suppliers only get a hit (and, hence, revenue) for the stories they are really interested in.

Of course, if that visitor then stays on the site – or even bookmarks it – then great. Search engine optimisation does create new users, and they can become regular visitors. The problem is that, without a strong brand, the proportion of stray surfers who end up on a content producer’s site who are then converted into frequent readers is much smaller.

The prevailing opinion these days is that the fickleness of consumers comfortable with search is inescapable; that hitting the top spot on Google is overwhelmingly the best way to drive traffic. I just can’t believe that. Certainly that sentiment doesn’t apply to me. I’m quite modern in my consumption of the news: I almost never buy a physical paper any more. But that doesn’t mean I don’t appreciate the editorial “package”, as Drew Broomhall (@drewbroomhall), search editor for the Times, described the journey a (print) newspaper reader is guided through. Every morning I embark on such a journey, led (very rigidly) by the BBC’s mobile site. And, while monetizing mobile content is harder than on traditional web pages, that builds a very strong brand loyalty for me. If I read any news at work, or explore in more depth a story I read that morning, it’s always on the BBC news site.

So I would argue that the reader’s experience – the editorial journey – is far from a thing of the past and, in fact, is as important now as it ever was for print media. There is no need to limit that experience to mobile channels, either. There is a wealth of frameworks available for producing widgets and apps on all kinds of platforms. Another talk at the ePublishing forum, by Jonathan Allen (@jc1000000), explored in more depth how to take advantage of these output channels. iGoogle widgets, iPhone apps and Facebook applications are all great examples.

This approach not only allows publishers more of the editorial control which they had in producing print media (and lost to the search engine) but also creates a better user experience. Focused distribution channels for on-the-rails feeds can give a consumer the feeling that a publisher is doing something for them. With news being such a commodity in the on-line world these channels add real value for the audience. And if there is value for the audience, they will promote that content themselves. Creating, for example, a widget for an iGoogle user’s homepage, which displays featured articles, engages them (and presents a link back to the original content) before they have even done a search.

We see this kind of selected-content approach commonly in the form of RSS feeds (although, too often, as “latest” not greatest). Widgets and apps aren’t really doing anything different; rather, they are making the stream more accessible, more user friendly. There’s another attraction to widgets and apps over RSS feeds, though – a point from Jonathan’s talk which almost makes these channels a no-brainer – they really help to boost the main document’s search engine ranking. So, contrary to being an alternative to SEO, widgets help drive traffic both ways.

You can take this one step further and allow the audience to define their own paths through content. As semantic understanding becomes more and more achievable, through tools such as Nstein‘s Text Mining Engine (TME) and the dawning of an RDF-based semantic web, publishers will be able to offer dynamic widgets with content ordered by an editorial team and filtered by a user. The iGoogle widget described above could easily be filtered for a Formula One fan, based upon data from the TME, to create a custom feed of stories they are interested in. Or if a consumer enjoys the “package” they can take the unfiltered list.
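As a rough sketch of that filtering step in PHP – the article structure and the “Formula One” concept label here are invented for illustration; real TME output is much richer:

    <?php
    // Sketch: filter an editorially ordered list of articles down to those
    // tagged with a concept the user cares about. The data structure is a
    // stand-in, not the TME's actual output format.
    $articles = array(
        array('title' => 'Hamilton takes pole',  'concepts' => array('Formula One', 'Motorsport')),
        array('title' => 'Six Nations round-up', 'concepts' => array('Rugby Union')),
        array('title' => 'New F1 engine rules',  'concepts' => array('Formula One', 'Engineering')),
    );

    $userInterest = 'Formula One';

    $feed = array_filter($articles, function ($article) use ($userInterest) {
        return in_array($userInterest, $article['concepts']);
    });

    foreach ($feed as $article) {
        echo $article['title'] . "\n"; // editorial order preserved, just filtered
    }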

No silver bullet for publishers struggling in the migration to the web, for sure, but thinking about how content is offered as a package is a strong, and often underused, way of strengthening a brand and driving traffic. As always, IMHO…

Open Source v traditional Software (ding, ding, ding)

At the tail end of last month I spent two days attending talks at the yearly Internet World exhibition. I always enjoy listening to speakers and the quality was, by and large, very good. On the final day CMS Watch (@cmswatch) hosted a panel discussion in the Content Management theatre entitled “Open Source v Traditional Software”. It was a strange title, I thought, as the line between open and closed source becomes more and more vague for many vendors. This blending was, however, represented on the panel, which included Stephen Morgan (@stephen_morgan) of Squiz – a commercial open source vendor.

On the whole the panel was very good and the debate interesting. The open source contingent argued eloquently for the benefits of spreading knowledge throughout the community and for the speed of bug fixes compared with the release cycles of proprietary software. One of Stephen’s responses when asked for reasons to go with an open source system, however, struck me as – at best – ill-conceived.

Stephen had argued that as a customer of a closed source software retailer you are entirely at their mercy in terms of functional changes. The assertion was that when you – as a customer – have access to source code you can modify it to suit your needs. Conversely, he claimed that changes to a closed source solution could only be requested, might never happen and would be subject to a lengthy release cycle even if they were implemented.

Now I’m sorry but that is just not the case, as I told the panel once the discussion was opened to the audience. The software I work with, Nstein’s WCM, features an expansive and well-designed extension framework to do just what Stephen was referring to. In fact, I went further and put the polemic to the panel that hacking core source code is obviously not desirable and severely hinders an application’s upgrade path. Stephen countered with the fact that changes made to the code-base can be submitted to Squiz (or almost any other open source software maintainer, for that matter) and may be committed into the core application.

Before I start a holy war here (and a succession of flames in this site’s comments) I would like to state my position on open source: I love it. I love the concept. I love free software. I love the freedom to modify and distribute software. Basically, I get it. I’m a huge fan of Linux and, at the end of the day, a PHP programmer. Just yesterday I spent my Saturday contributing PHPTs (that’s PHP tests, for non-geeks) with the PHP London user group. I really do dig open source. Also, for the record, I thought Stephen Morgan represented his brand and community very well and I enjoyed his commentary; this is not meant to be a personal attack 😉 .

In fact, this post is not criticizing open source software at all. The discussion here, as far as I am concerned is about best practices. Okay, sure, one can modify the source code to an open source project and that change may be incorporated into the software. May be incorporated; probably won’t be. And with closed source software that option is not available – you have less choice. But that is, I think, a good thing.

At least the prelude to a good thing. Software evolves, like all technology, and the beautiful simplicity of Darwinian evolution applies: it’s survival of the fittest. If we, at Nstein, were to compete with open source CMS projects with a solution which was not customisable, which had no mechanism for modification, we would have died out. The fact is we make a vast amount of customisation possible – we’ve had to. Because we don’t encourage customers to delve into the core source code (it’s a PHP app, so they can if they really want) we’ve had to employ other methods. Extensible object models built around best practices derived from industry experience. Plug-in frameworks. Generic extension frameworks. If one of our customers cannot extend or change something that they need to, the chances are that another client will at some point want that same, absent, flexibility. So, through good design practices, we have constructed a system which clients can (and do) modify, yet when they decide to upgrade to the next point release it is a trivial process.
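To illustrate the principle only – this is a generic hook pattern in PHP, not Nstein’s actual extension API – the idea is that the core fires named extension points and customer code registers callbacks against them, so the core itself never needs touching:

    <?php
    // Generic illustration of an extension point; not the real WCM framework.
    class Hooks
    {
        private static $callbacks = array();

        public static function register($event, $callback)
        {
            self::$callbacks[$event][] = $callback;
        }

        public static function fire($event, $payload)
        {
            if (empty(self::$callbacks[$event])) {
                return $payload;
            }
            foreach (self::$callbacks[$event] as $callback) {
                $payload = call_user_func($callback, $payload);
            }
            return $payload;
        }
    }

    // Core code fires a hook when an article is saved:
    //     $article = Hooks::fire('article.beforeSave', $article);
    // A customer's extension, living entirely outside the core, changes behaviour:
    Hooks::register('article.beforeSave', function ($article) {
        $article['slug'] = strtolower(str_replace(' ', '-', $article['title']));
        return $article;
    });

When the next point release arrives, the hook contract is the only thing that has to stay stable – the core code can change freely underneath it.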

Now, I’m not saying that open source software is poorly designed. I’m writing this piece now on WordPress – a fantastic example of an open source project – which features an extremely rich and well documented plug-in framework. The sheer number of plug-ins and themes available for WordPress is a testament to the system. And, as with Nstein’s software, when I upgrade WordPress all of my extensions still work (at least 95%, or more, of the time).

I doubt anyone would disagree with the merits of a plug-in based system. My interest, however, is in this question: how much of a temptation is there to hack open source software? I know I’ve done it in the past. I’ve heard a number of times that Drupal upgrades are nigh on impossible due to the nature of the inevitable customisations a Web content management system requires. I’m not in a position to answer that question authoritatively, and I won’t attempt to. I would like to stir the debate up, though. So, thoughts, please…

Creating compelling content in the Web 5.0 world

Whoa, there. Web 5.0?

Okay, so I made up web 5.0. Actually, I detest the numbered generations we’ve applied to the web. The main problem I have with these terms is that they imply a linear progression. They suggest that we are going to abandon the interactive web, Web 2.0, for the semantic web, Web 3.0. Obviously we aren’t. I doubt anyone would even suggest it. Web developers will continue to use both. Hence Web 5.0 (do the maths).

I’m going to drop the term now – it was just a joke. The modern World Wide Web is, in fact, much more than just the three so-called generations – although clearly they are very important. I can identify three main concepts (not technologies) which are facilitating the current evolution of the web:

  • Interactivity (2.0)
  • Semantic understanding (3.0)
  • Commoditization (the Cloud)

Nothing ground breaking there. And we, as users, are certainly seeing more and more of these big three in our daily use of the web.

Interactivity is fairly obvious. I think the biggest revolution in interactive content came about as Wikipedia took off. Undoubtedly the most expansive (centralized) base of knowledge the world has ever seen – and written by volunteers, members of the public. It really is a staggering collaborative achievement. Then there’s blogging, micro-blogging, social networking, professional networking, content discovery (digg, etc), pretty much anything you might want to contribute, you can.

Semantic understanding is a little trickier to see. That’s hardly surprising as it is so much newer and far less understood. Believe the hype, though. The semantic web is coming and it will change everything (everything web related, that is). If you don’t believe me try googling for “net income IBM”. You should see something like this:

Google results using RDF info

That top result is special. It’s special because it’s the answer; it’s what you were looking for. No need to trawl through ten irrelevant pages to find the data – it’s just there. Google managed to display this data because IBM published it as part of an RDF document. If you search for the same information about Amazon – who don’t – no such luck. (That particular example was given by Ellis Mannoia in a great Web 3.0 talk at Internet World this week – so thanks Ellis.)

That leaves us with commoditization. Specifically, the commoditization of functionality from a developer’s point of view. This concept is largely, although not exclusively, linked to the Cloud. The term “the Cloud” is used broadly to describe services made available over the internet. GMail, for example, is email functionality in the cloud. Users don’t need to install anything to use GMail (bar a web client); they just use it when they want, from any computer. Many of the Cloud services out there are available as APIs, and that leads to the commoditization of functionality. Say I want to add a mapping application to my web site to show my audience where I am. A few years ago that would have been a significant amount of development work. These days it’s trivial – you just make a call to the GoogleMaps API. And so map functionality becomes a commodity.

The point of this post, however, is that these are not mutually exclusive concepts. There is no reason why you cannot combine semantic understanding with Cloud computing, or UGC, or both. Quite the opposite: combining the three should be the goal.

There are problems, however. Utilizing Cloud computing requires a certain amount of adherence to standards – fitting in to an API. And semantic understanding (and meta data, in general) takes time to accrue. In general those two constraints don’t work well with Web 2.0 functionality.

Let me give an example: If a user contributes a comment to an article they probably won’t take the time to add the meta data required for semantic understanding to be achieved. In the same way if they don’t give their location you can’t show them as a pin on GoogleMaps.

However, semantic understanding is (IMHO) more than just the use of RDF documents. Tools like Nstein’s Text Mining Engine can be used to create a semantic footprint describing a piece of text. I’ve talked, in previous posts, about using the data gleaned by the TME in imaginative and experimental ways. Take the example above. If a user were to post a comment about a talk they attended, the TME could extract not only the concepts of the comment but also data like the location of the subject. That semantic understanding can be used to programmatically call the GoogleMaps API to add a new pin to your map.
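A hedged sketch of that last step in PHP – assume the text mining has already handed us a place name; the Static Maps parameters are simplified and, these days, an API key would also be required:

    <?php
    // Sketch: turn a location entity extracted from a comment into a map image.
    // $location stands in for whatever the text mining step actually returned;
    // parameters are simplified and a real call now needs an API key.
    $location = 'Olympia, London'; // e.g. the venue of the talk being commented on

    $mapUrl = 'https://maps.googleapis.com/maps/api/staticmap?' . http_build_query(array(
        'size'    => '400x300',
        'zoom'    => '13',
        'markers' => $location,
    ));

    echo '<img src="' . htmlspecialchars($mapUrl) . '" alt="Map of ' . htmlspecialchars($location) . '">';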

And there you have it. Semantic understanding of interactive content used to harness the power of Cloud computing. One of the most important benefits of the TME, for me, is the flexibility it affords you. If you know that you can get access to that kind of information it opens up all kinds of possibilities. Exploring some of these possibilities has to be the focus for making a brand stand out against the plethora of content suppliers and aggregators out there; for improving the user’s experience and gaining their loyalty.

So it’s time to stop thinking about Web 2.0 or Web 3.0 and start thinking about the technology and techniques available and how they can be used to the greatest effect.

How long is a (piece of) string?

I recently posted an article about a workflow script I cooked up for automatically tweeting about an article when it gets published via Nstein’s WCM (here). Basically, the script to which the article referred was leveraging data from Nstein’s Text Mining Engine (TME) to create concise but still descriptive tweets. As a brief reminder of that post, the script was using a computer generated summary and adding hash-tags extracted from the text to create a micro-blog like this:

I’ve made use of the TME’s concept and entity extraction features to create hash-tags. #tweet #nsteinswcm http://tinyurl.com/d3ozzn
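For context, the heart of that workflow step looked something like the sketch below; getSummary() and getConcepts() are stand-ins for the real TME calls, which I won’t reproduce here, and TinyURL’s api-create endpoint is the shortener the bot used at this point:

    <?php
    // Simplified sketch of the tweeting step. getSummary() and getConcepts()
    // stand in for the real TME calls; the tag regex turns "QR Code" into "#qrcode".
    function buildTweet($articleUrl, $summary, array $concepts, $maxTags = 2)
    {
        $shortUrl = file_get_contents(
            'http://tinyurl.com/api-create.php?url=' . urlencode($articleUrl)
        );

        $tags = array();
        foreach (array_slice($concepts, 0, $maxTags) as $concept) {
            $tags[] = '#' . strtolower(preg_replace('/[^a-z0-9]/i', '', $concept));
        }

        return trim($summary) . ' ' . implode(' ', $tags) . ' ' . $shortUrl;
    }

    // $tweet = buildTweet($article->url, getSummary($article), getConcepts($article));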

It seems to be an idea which the industry finds interesting (judging by my Twitter account and the comments on the article). Sarah Bourne’s (@sarahebourne) offer – in particular – I could not pass up. Sarah, who is the Chief Technology Strategist for the Commonwealth of Massachusetts (@massgov), had suggested that I try my micro-blogging bot on some of the MassGov content from their Twitter stream. So I did…

Well, as one comment on the last entry (by “Rob”) alluded to, no matter how relevant my tweet is it still needs to comply with the 140-character limit set by Twitter. This seemed to be presenting some problems with the MassGov content. A big part of the problem was that the subjects of the Massachusetts articles were often political; they tend to have long sentences with complex subject matter and feature lots of relatively long words (“Massachusetts”, for example). So although pertinent hash-tags and relevant teasers were being generated, sometimes these were still over the limit.

The way my bot dealt with this situation was by using progressively more aggressive truncation techniques. At the light end of the scale it might swap all occurrences of “with” for “w”, “and” for “&”, etc. After each pass the tweet’s character count gets remeasured; if it’s still too long, the next truncation technique is applied. Ultimately, if all else fails, the tweet is truncated by removing words from the end until it no longer exceeds the limit.
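In code, the cascade looks roughly like this; it runs on the teaser only, with a budget of 140 characters minus the space already taken by the hash-tags and the shortened URL (the substitution list is illustrative, not the bot’s full set):

    <?php
    // Sketch of the progressively more aggressive truncation passes.
    function shortenTeaser($teaser, $budget)
    {
        $passes = array(
            array(' with ', ' w '),
            array(' and ',  ' & '),
        );

        foreach ($passes as $pass) {
            if (strlen($teaser) <= $budget) {
                return $teaser;
            }
            $teaser = str_replace($pass[0], $pass[1], $teaser);
        }

        // Last resort: drop whole words from the end until the teaser fits.
        while (strlen($teaser) > $budget && strrpos($teaser, ' ') !== false) {
            $teaser = rtrim(substr($teaser, 0, strrpos($teaser, ' ')), ' ,');
        }

        return $teaser;
    }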

Obviously, this can lead to the very problem the original post was discussing: ending up with automatically generated tweets which do not describe the article they are plugging. Now, the bot I created makes this situation far less common, no doubt – but not impossible. Adding hash-tags guarantees a level of meaning which would otherwise be impossible to achieve with an automated system, and that makes up for truncated sentences to some extent; however, I was not satisfied. Here’s an example of a tweet which was too long:

Attorney General Martha Coakley Sponsors Legislation to Enhance Victim compensation Assistance. #massachusetts #compensation http://tinyurl.com/6ht573

In fact it’s 9 characters too long. Now the bot would have truncated it to this:

Attorney General Martha Coakley Sponsors Legislation to Enhance Victim compensation. #massachusetts #compensation http://tinyurl.com/6ht573

As it turns out, that wasn’t too destructive, but I might not have been so lucky.

That tweet had given me an idea, though. The inspiration? TinyURL.

I don’t use TinyURL.com when I’m tweeting. These days who does? Twhirl (or Seesmic) is my Twitter client and when I want to shorten a URL it offers me a list of services to use. I always make the same choice: “is.gd”. The reason is pretty obvious – their domain name is 6 characters shorter.

Okay, so a bit of a no-brainer there then. Switch my bot’s shortening service to “is.gd” and save at least 6 characters per tweet. But that wasn’t really the point. I would never have used TinyURL, so why had I programmed my bot to? What was I thinking?

Well the truth of the matter is this: I wasn’t. I’d used the TinyURL API before and so just stuck it into the code. So I started thinking about what else I might have done wrong. Or, more specifically, I started to think about how I tweeted (in the flesh, as it were) and if my bot was doing as good a job.

Once I started down that trail of thought one big difference struck me: where possible, I use inline hash-tags. If the keyword you are tagging already exists in the post then you are not adding meaning, per se. You may be emphasizing that word and you may also be starting a trend for replies and retweets. Therefore, it stands to reason that you can use the hash-tag inline and not waste space by duplicating the word.
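The change itself is little more than a regular expression: if an extracted concept already appears in the teaser, prefix that occurrence with a hash rather than appending a duplicate tag at the end (sketch below; that, plus swapping the shortener to is.gd, was the whole change):

    <?php
    // Sketch: tag a keyword inline if it already appears in the teaser,
    // falling back to appending the tag if it doesn't.
    function inlineTag($teaser, $concept)
    {
        $tag     = strtolower(preg_replace('/[^a-z0-9]/i', '', $concept));
        $pattern = '/\b' . preg_quote($concept, '/') . '\b/i';

        if (preg_match($pattern, $teaser)) {
            // Replace the first occurrence only, e.g. "clean energy" -> "clean #energy".
            return preg_replace($pattern, '#' . $tag, $teaser, 1);
        }

        return $teaser . ' #' . $tag;
    }

    echo inlineTag('Initiative will help municipalities pursue clean energy projects.', 'energy');
    // -> "Initiative will help municipalities pursue clean #energy projects."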

So, having made those changes to the program, I republished the MassGov article. This time my bot tweeted:

Initiative will help municipalities pursue clean #energy projects make best use of federal stimulus funds. #massachusetts http://is.gd/uggP

Much better. It actually transpires that (perhaps unsurprisingly) these inline tags occur pretty frequently in the tweets. I’ve republished a selection now; here are the tweets:

Officials “flex” highway stimulus funds to support “net zero” transit center. #transportation #greenfield http://is.gd/ugns

Attorney General Martha Coakley Sponsors Legislation to Enhance Victim #compensation Assistance. #massachusetts http://is.gd/ugtL

Patrick Administration Credits Dropout Prevention Efforts for Improvement. #student #malden http://is.gd/ugs7

#patrickadministration Receives $1 Million Grant to Support Expanded Services for People with #traumaticbraininjuries. http://is.gd/ugie

Welcome to DCR Park Server Day. #volunteer #capecod http://is.gd/ugqD

Costs to Employers ThirdLowest #oregon Survey Reports Under Patrick Administration Rates Have. #compensationrates http://is.gd/ugwn

The results there are, I think, pretty good. Out of the seven articles I’ve republished only the last one has needed to be truncated.

My bot isn’t perfect and it won’t create faultless tweets every time; however, it is a huge improvement over the traditional blind truncation. My conclusion – from the previous post, the discussion around it and the experiments I have carried out – is that Twitter automation has too many benefits for it not to be used by online publishers, but it will (probably) never be perfect 100% of the time. What we’ve accomplished here, so far, is a much higher and more consistent level of readability and relevancy and a much reduced need to truncate teasers. I’m sure there are many techniques I could implement to improve the results (and I may do in the future) but for now there is just one more change I’m going to make…

As I mentioned at the beginning of this article (and in the previous one), this experiment has been done using the workflow engine in Nstein’s WCM. It’s a scripted state-transition engine, so when I published articles they were also passed to the Twitter-bot for it to create a tweet. The change I am going to make is this: create a new, “Needs tweeting”, workflow state. Then, in the minority of cases where the bot cannot tweet about an article without truncating the teaser, it passes the responsibility on to a human twitterer.
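In the workflow script that decision is just a branch at the end of the tweeting step; the state names and the $twitter/$workflow objects below are placeholders rather than the WCM’s real API:

    <?php
    // Sketch of the final workflow decision, reusing the buildTweet() sketch
    // from earlier. $twitter, $workflow and the state names are placeholders.
    $tweet = buildTweet($article->url, getSummary($article), getConcepts($article));

    if (strlen($tweet) <= 140) {
        $twitter->post($tweet);                      // safe to tweet automatically
        $workflow->setState($article, 'published');
    } else {
        // Hand the awkward minority over to a human twitterer.
        $workflow->setState($article, 'needs_tweeting');
    }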

There are a huge (really, really huge) number of things that we can accomplish with the TME. Some of the key ones, like SEO, have already been taken to very high standards, but we are only scratching the surface of possible uses. Ideas and experiments, such as this one, are key to our industry’s growth. From my point of view, accomplishing automation in 85% of cases and a high level of quality in 100% would be a fantastic accomplishment. Let’s face it: in this day and age information has been commoditized, so quality becomes the only differentiator between publishers. Quality is what attracts an audience and certainly what keeps them… even on Twitter.

Asimov’s 4th law: A robot will not tweet.

Well, that might be a bit extreme. At least if they do they should put in a bit more effort.

Perhaps I need to explain my problem here. The complaint I have concerns automatic tweets – popular with bloggers and online publishers in general. Extremely impersonal, often unhelpful snippets drawing the audience’s attention to a new article or blog entry. Here’s an example:

[news] Pepsi drinkers join the dots: Anyone buying a Pepsi Max soft drink over the next few w.. http://tinyurl.com/5qu3w3

@guardianmedia

Ok, so it’s pretty obvious what’s wrong with this tweet. The article the Guardian Media is trying to promote is about a campaign by Pepsi which uses QR codes on the side of their cans – not that you’d have known from the tweet.

The problem is they’ve used a witty headline, not a descriptive one. In itself that is fine. Like many online publishers, however, the Guardian have opted against manually tweeting and have integrated (presumably) their CMS with Twitter. More specifically, the tweet is a concatenation of the article’s title and the beginning of the text. It just so happens that neither of those blocks of text mentions QR codes.

There is a lot to be said for automation, though. It’s not just that this system saves the author of the article or blog time. It also ensures consistency – all articles get posted. And, to be fair, most of the time these posts are okay…

…not always, though. Personally, I’ve stopped following the Guardian Media on Twitter (and Scientific American) because these badly formed tweets annoy me way too much. Take the article above, for example. A human author might tweet something like this:

Pepsi launch campaign using QR codes on cans. Drinkers get access to secret content through phone browser.

That sums up the article much better, with 33 characters spare for the URL. I’d be far more likely to read the article having read that tweet, as I think QR codes are interesting (I’m a bit of a geek) and appreciate imaginative marketing.

So what’s the answer? Is there a way to achieve the normalization and efficiency of an automated system while being a good Twitterer? Well yes, I think there is.

I’ve been playing with the workflow engine in Nstein’s WCM and have written a nifty little Twitter-bot. Its secret is its ability to understand content. Nstein also produce a text mining engine (TME) which is ingrained into the WCM right down to the core. This means that semantic data about an article is always easily accessible. I’ve used this automatically extracted meta data in two ways for my bot.

Firstly, I’ve made use of the TME’s concept and entity extraction features to create hash-tags. For those who don’t know, a hash-tag is a piece of meta-data associated with a tweet. They are prefixed with a hash (#) character and are generally alphanumeric. A lot of automated tweets now use hash-tags, with varying degrees of success. @northamptonrfc (the rugby team I support), for example, tags all tweets with “#rugby”. Well I never. The correct use of hash-tags (IMHO) is to:

  1. Add relevant meta data to a tweet which adds meaning.
  2. Create a trend to follow (essentially a thread across all Twitter users).

In order to meet those criteria the tag needs to be meaningful. It stands to reason. In the Pepsi example above two tags spring to mind: “#pepsi” and “#qrcode”. Including 2 spaces that makes an extra 15 characters which can (relatively) easily be fitted in before the TinyURL. Nstein’s TME would, undoubtedly, have picked these concepts out.

“QR Code” is what the TME refers to as a complex concept, that is, a phrase. “Pepsi” is an entity, specifically an organisation name. A simple regex can transform these strings into hash-tags. Using this technique the bot immediately adds a great deal of meaning to the tweet.

The second way in which I’ve leveraged the meta data extracted by the TME is using NSummarizer. This cartridge takes a document, splits it into sentence components, rates each component on its relevance to the article and returns the best scoring one(s) as a brief summary of the document. This is a really useful tool for getting around the issue of having a first sentence which is not (particularly) descriptive of the article as a whole.
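I obviously can’t reproduce the NSummarizer cartridge here, but the general shape of extractive summarisation is easy to sketch: split the text into sentences, score each one against the document’s key terms, and keep the winner. This toy PHP version is an illustration of the idea, not the TME’s actual algorithm:

    <?php
    // Toy extractive summariser: not the NSummarizer algorithm, just its shape.
    function bestSentence($text, array $keywords)
    {
        $sentences = preg_split('/(?<=[.!?])\s+/', $text, -1, PREG_SPLIT_NO_EMPTY);

        $best      = $sentences[0];
        $bestScore = -1;

        foreach ($sentences as $sentence) {
            $score = 0;
            foreach ($keywords as $keyword) {
                if (stripos($sentence, $keyword) !== false) {
                    $score++;
                }
            }
            if ($score > $bestScore) {
                $bestScore = $score;
                $best      = $sentence;
            }
        }

        return $best; // the highest-scoring sentence becomes the teaser
    }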

So, does it work? Well I’ve used this blog as a test, here’s the resultant tweet:

I’ve made use of the TME’s concept and entity extraction features to create hash-tags. #tweet #nsteinswcm http://tinyurl.com/d3ozzn

Personally, I count that as a success.

About me

I’m an entrepreneur and technologist. I’m passionate about building SaaS products that provide real value and solve hard problems, but are easy to pick up and scale massively.

I’m the technical co-founder of a venture-backed start-up, Zephr. We have built the world’s most exciting CDN, which delivers dynamic content to billions of our customers’ visitors, executing live, real-time decisions for every page.