Julia Set tattoo

In a recent post I put up some JS to generate a Julia set. For me, the Julia Set epitomizes the things I most love about Maths: beauty and complexity from simple rules.

I like it so much I had it tattooed onto my arm 😉


Steve Krug’s Ironic Law of Usability

Get rid of half of the words on each page, then get rid of half of what’s left – Steve Krug’s Third Law of Usability

So that’s “Remove three quarters of the words”, then?

Factmint Charts WordPress plugin

We, at Factmint, have just released a WordPress plugin to create data visualizations in a WordPress blog. So, I thought I’d give it a go:

Top ten countries by GDP

Source: Wikipedia

Country GDP (US$ millions)
United States 16768100
China 9181204
Japan 4898532
Germany 3730261
France 2678455
United Kingdom 2267456
Brazil 2243854
Italy 2149485
Russia 2096774
India 2047811

The plugin is here, if people are interested.

Nice numbers part 2: scales

In part 1 I talked about a problem that comes up when building scales algorithmically. For example, choosing the numbers on an axis:

If you want five ticks on the side of a bar chart, and the range is 0 – 8, you could do 0, 2, 4, 6, 8. But what about 0 – 5? That would be 0, 1.25, 2.5, 3.75, 5. How about if the chart is displayed on a small screen and you can only fit in 4 ticks? Then 0 – 8 is 0, 2.666, 5.333, 8. Those are not nice numbers.

So, an axis (say on a column chart) for those values would look like this:

A poorly formed scale

 

In that post we developed an algorithm for rounding numbers to an aesthetically pleasing value. However, the value returned from that function was a string – a human readable representation of the number, such as “1.2 trillion”. In order to tackle the problem described above we will need to keep the number as a Number.


function getNiceNumber(uglyNumber, precision) {
    if (!precision) precision = 2;

    return parseFloat(uglyNumber.toPrecision(precision));
}

function getDisplayNumber(niceNumber) {
    var order = Math.floor(Math.log10(niceNumber));

    var suffix = '';
    if (order >= 12) {
        niceNumber = niceNumber / Math.pow(10, 12);
        suffix = ' trillion';
    } else if (order >= 9) {
        niceNumber = niceNumber / Math.pow(10, 9);
        suffix = 'bn';
    } else if (order >= 6) {
        niceNumber = niceNumber / Math.pow(10, 6);
        suffix = 'm';
    } else if (order >= 3) {
        niceNumber = niceNumber / Math.pow(10, 3);
        suffix = 'k';
    } else if (order <= -3) {
        niceNumber = niceNumber / Math.pow(10, order);
        suffix = ' × 10<sup>' + order + '</sup>';
    }
  
    return niceNumber + suffix;
}

var uglyNumber = 0.000326343;
var niceNumber = getNiceNumber(uglyNumber);
console.log(getDisplayNumber(niceNumber));

As before, that logs:

3.3 × 10<sup>-4</sup>

Choosing an increment

Now, to return to the problem at hand. In our example, generating 4 tick marks on a scale from 0 – 8 doesn’t give good results, and we can see why. 4 tick marks equates to 3 regions: look at the axis illustration, there are 3 regions between the 4 ticks; in general, for n ticks there are always n – 1 regions. Therefore the (bad) increment chosen in that example is 8 (the range) divided by 3, which is 2.66 recurring.

So, what would make a good increment? Well, a nice number would be a good start, probably. If we decrease the precision to 1, the increment becomes 3:


function getNiceIncrement(start, end, numberOfTicks) {
    var range = end - start;
    var numberOfRegions = numberOfTicks - 1;

    // Round to one significant figure, as described above
    return getNiceNumber(range / numberOfRegions, 1);
}

var start = 0;
var end = 8;

var increment = getNiceIncrement(start, end, 4);

console.log(increment);

Which will log out “3”.

Now, for the example case we could say we are done. If we generate the tick marks by incrementing from the start we get:

0,3,6,9

Which quite nicely covers the area of interest and only uses nice numbers. As ever, it’s not that simple though. How about 0 to 4, with 4 ticks? Our nice increment is then 1 (4/3 rounded down); the scale would be:

0,1,2,3

It doesn’t cover the range, so is completely invalid. The obvious solution here (assuming you are not willing to use 5 ticks) is to always round numbers up. However that is not quite as trivial as it may appear: 1203.5 should round up to 2000, not 1204. As with the getDisplayNumber function, the solution lies in logarithms.


function getNiceIncrement(start, end, numberOfTicks) {
    var range = end - start;
    var numberOfRegions = numberOfTicks - 1;

    var uglyIncrement = range / numberOfRegions;

    var order = Math.floor(Math.log10(uglyIncrement));
    var divisor = Math.pow(10, order);

    return Math.ceil(uglyIncrement / divisor) * divisor;
}

var start = 0;
var end = 4;

var increment = getNiceIncrement(start, end, 4);

console.log(increment);

This time, the generated increment is 2. That is better: we get 0, 2, 4, 6.

By way of a quick explanation of the algorithm: rounding down the log-base-ten of the ugly increment gives us the number of digits before the decimal point minus one (i.e. the exponent you would raise 10 to in order to get the largest power of ten that is still no greater than the ugly increment); I will call this the order. Dividing the ugly increment by 10 raised to the power of the order always gives a number between 1 and 10, which we can safely round up, then multiply by the divisor (10 to the power of the order) to return it to the correct size. By example (with a quick check in code after the list):

  • start = 120010, end = 863209, numberOfTicks = 4
  • range = 743199
  • numberOfRegions = 3
  • uglyIncrement = 247733
  • order = 5 (as there are 6 digits, or 10^5 = 100000 < 247733)
  • divisor = 100000
  • uglyIncrement / divisor = 2.47733
  • Math.ceil(uglyIncrement / divisor) = 3
  • Math.ceil(uglyIncrement / divisor) * divisor = 300000
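
As promised, a quick check of that worked example, using the getNiceIncrement function defined above:

var increment = getNiceIncrement(120010, 863209, 4);

console.log(increment); // 300000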

So, to recap, we now have a function that will generate nice increments and another that will generate display numbers (and getNiceNumber, although that is no longer used).

Centring the range

Consider the case where the scale runs from 11 to 15, with 4 ticks. The range is still 4, so the nice increment will be calculated as 2 and the scale will run:

11,13,15,17

That is okay, but the 17 seems superfluous as our original scale ended at the penultimate tick mark. We can account for this by moving the start tick backwards to centre the range. We should be a little cautious, though, in the case where the start of the range is zero. In the case of 0 – 4 our scale was 0,2,4,6 – would -1,1,3,5 have been better? Probably not: the negative numbers are an unnecessary complication. So, we might use:


function getNiceTickMarks(start, end, numberOfTicks) {
    var range = end - start;
    var numberOfRegions = numberOfTicks - 1;
    var niceIncrement = getNiceIncrement(range, numberOfRegions);

    var rangeOfTicks = niceIncrement * numberOfRegions;

    if (rangeOfTicks > range) {
        if (start >= 0 && start < niceIncrement) {
            start = 0;
        } else {
            start -= (rangeOfTicks - range) / 2;
        }
    }

    var tickMarks = [];

    var counter = start;
    while (tickMarks.length < numberOfTicks) {
        tickMarks.push(counter);
        counter += niceIncrement;
    }

    return tickMarks;
}

function getNiceIncrement(range, numberOfRegions) {
    var uglyIncrement = range / numberOfRegions;

    var order = Math.floor(Math.log10(uglyIncrement));
    var divisor = Math.pow(10, order);

    return Math.ceil(uglyIncrement / divisor) * divisor;
}

var start = 11;
var end = 15;

var tickMarks = getNiceTickMarks(start, end, 4);

console.log(tickMarks);

That will return:

10,12,14,16

Perfect. And if we had 0 – 4 (or 0.5 – 4.5, etc) we would get:

0,2,4,6
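
Finally, a minimal sketch that puts the pieces together, reusing getNiceTickMarks and getDisplayNumber from above to label an axis covering a large range, say 1.1 billion to 1.5 billion:

var tickMarks = getNiceTickMarks(1100000000, 1500000000, 4);

console.log(tickMarks); // [1000000000, 1200000000, 1400000000, 1600000000]

// Format each tick for display
var labels = tickMarks.map(getDisplayNumber);

console.log(labels); // ["1bn", "1.2bn", "1.4bn", "1.6bn"]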

That, for now at least, is all I have to say on nice numbers.

Nice numbers part 1

Rendering “nice numbers” has been the subject of a few debates at Factmint over the last few weeks, as we expand our collection of data visualizations, so I thought I’d put some thoughts down…

What’s a “nice number”?

Well, that’s a difficult question; 1729 is a nice number but I am specifically talking about aesthetically pleasing numbers. For example, the series 10, 20, 30 is more aesthetically pleasing than 9, 18, 27 – certainly if you were using those numbers as the ticks on a bar chart’s axis.

If you want five ticks on the side of a bar chart, and the range is 0 – 8, you could do 0, 2, 4, 6, 8. But what about 0 – 5? That would be 0, 1.25, 2.5, 3.75, 5. How about if the chart is displayed on a small screen and you can only fit in 4 ticks? Then 0 – 8 is 0, 2.666, 5.333, 8. Those are not nice numbers.

When you look at very big and very small numbers the problem is similar. 1452519892 is not nice; 1500000000 is better; 1.5bn is better still.

Rounding

So, JavaScript (as with many other languages) provides a simple solution to this problem: Number.prototype.toPrecision.


(1452519892).toPrecision(2); // "1.5e+9"

Doing better

Rounding to two significant figures is definitely an improvement but we can do a lot better, especially if you are not happy with standard form. First off, let’s parse a float so we can work with numbers again:


parseFloat((1452519892).toPrecision(2)); // 1500000000

Now, we need to know the power of 10 from the standard form bit. You could parse that from the "1.5e+9" but it’s easier to get as a log-base-10.


var number = parseFloat((1452519892).toPrecision(2)); // 1500000000
var order = Math.floor(Math.log10(number)); // 9

That is saying that the largest power of 10 that is no greater than 1500000000 is 10^9 – i.e. the order is 9.

Now we need some more fiddly code to deal with all of the cases. Let’s start with the big numbers:


function niceNumber(uglyNumber) {
    var niceNumber = parseFloat((uglyNumber).toPrecision(2));
    var order = Math.floor(Math.log10(niceNumber));

    var suffix = '';
    if (order >= 12) {
        niceNumber = niceNumber / Math.pow(10, 12);
        suffix = ' trillion';
    } else if (order >= 9) {
        niceNumber = niceNumber / Math.pow(10, 9);
        suffix = 'bn';
    } else if (order >= 6) {
        niceNumber = niceNumber / Math.pow(10, 6);
        suffix = 'm';
    } else if (order >= 3) {
        niceNumber = niceNumber / Math.pow(10, 3);
        suffix = 'k';
    }
  
    return niceNumber + suffix;
}

niceNumber(1452519892); // "1.5bn"

That’s pretty cool and does what we want – now for the little ones…

There are a few ways you can deal with very small numbers. One option is to switch back to standard form, but to display it a little more elegantly. This example uses HTML in the output (which may not be an option for all cases) but you could use the “e” syntax or even a standard fraction.


function niceNumber(uglyNumber) {
    var niceNumber = parseFloat((uglyNumber).toPrecision(2));
    var order = Math.floor(Math.log10(niceNumber));

    var suffix = '';
    if (order >= 12) {
        niceNumber = niceNumber / Math.pow(10, 12);
        suffix = ' trillion';
    } else if (order >= 9) {
        niceNumber = niceNumber / Math.pow(10, 9);
        suffix = 'bn';
    } else if (order >= 6) {
        niceNumber = niceNumber / Math.pow(10, 6);
        suffix = 'm';
    } else if (order >= 3) {
        niceNumber = niceNumber / Math.pow(10, 3);
        suffix = 'k';
    } else if (order <= -3) {
        niceNumber = niceNumber / Math.pow(10, order);
        suffix = ' × 10<sup>' + order + '</sup>';
    }
  
    return niceNumber + suffix;
}

console.log(niceNumber(0.000326343));

That returns:

3.3 × 10<sup>-4</sup>
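
Where HTML output isn’t an option, the “e” syntax mentioned above comes for free from JavaScript’s built-in Number.prototype.toExponential. For example:

// Exponent ("e") notation, rounded to two significant figures
(0.000326343).toExponential(1); // "3.3e-4"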

Ranges

So, that sorts out rendering single values nicely but doesn’t fix the problems, outlined at the beginning of this post, when picking numbers in a range. Our issue of picking 4 numbers from 0 – 8 would still be there, just rounded to two significant figures (illustrated below). We have sorted that out in our visualizations and I’ll write about choosing sensible numbers in another post soon.
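
A quick sketch of what’s left to solve, using the niceNumber function above on the awkward tick values from that example:

// Rounding each tick individually still doesn't give a nice scale
niceNumber(8 / 3);  // "2.7"
niceNumber(16 / 3); // "5.3"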

Why I would hire a social media expert

Just a quick one.

I came across this post a couple of weeks ago (and just got round to following it up): http://shankman.com/i-will-never-hire-a-social-media-expert-and-neither-should-you/.

Basically, @petershankman is very much against the “Social Media Expert”. His argument is based upon social media being just one component of a good marketing strategy – and not so alien or complex as to need a dedicated expert on your payroll:

Social media is just another facet of marketing and customer service. Say it with me. Repeat it until you know it by heart.

While I do see his point of view – and do agree that social media should be considered part of a broader marketing effort (even if it’s the main part) – I think he (and a staggering number of commenters) have missed the point. It does sound like something a social media expert would say, but… they don’t get it.

The point is this: social media isn’t just used as a soapbox to shout from. Unlike many other marketing channels, social media makes it incredibly easy to listen.

Let’s compare it to another “facet of marketing”, email campaigns. What do we learn when we send out a marketing email? Maybe we get an inbound lead – if we’re lucky. Then, we learn that the particular prospect who responded really liked our message. However, we’ve learned very little about all those who didn’t. And aren’t they the more important ones to listen to?

True, you could argue that potential customers might email you to tell you about their requirements and expectations and the preconceptions they have about you. I’m sure it happens. Not often, though.

Using social media, on the other hand, one just needs to do a quick search on Twitter and they can see potentially huge numbers of opinions expressed about them. Combine that information with some clever analytics and a marketing organization has a huge amount of usable and very valuable data to hand.

You could still argue: you don’t need an expert to search on Twitter, which is all Peter asserted. True, you don’t, but that’s only the most obvious use of social media outside of the normal scope of marketing channels. What about automating reactive messaging (as lots of big brands now do already)? What about feeding positive trends, being discussed about your competitors, to your R&D team? What about personalizing how a visitor sees a Web site based upon the publicly available information about their likes and dislikes (as idio does)?

What if you don’t know how you could use social media to improve your business strategy? Well then, I’d say, you need to hire a social media expert.

Native apps – a necessary evil?

I recently saw a tweet quoting TBL, talking at Profiting From The New Web (which I’m very sad to have missed!), which went along the lines of: don’t develop apps, use open standards.

It’s a very interesting instruction. One I used to agree with…

Purely from a technical point of view it is difficult to argue with the logic. Being able to develop once and still create Web applications which work well on many devices – with different hardware features, screen sizes, etc – is very possible using the latest iterations of Web standards.

HTML5 allows developers to produce rich and interactive graphics and animations in their pages using the Canvas element. One can handle streaming media (fairly) effectively using it. Persistent client-side storage is even available, which can be used for offline applications, amongst other things. The very brilliant, and now widely supported, CSS level 3 specifications make it really easy to accommodate different devices and screen sizes by using Media Queries – an excellent example of that is colly.com (play with your browser window size or use your phone/tablet).

All of those technologies are very interesting in their own right but for the purpose of this post I won’t delve into the mucky details. If you are interested, or feel you need to brush up, I recommend following the excellent @DesignerDepot on Twitter.

The important point here, and I assume (which may be a dangerous thing!) the key point for the Open-Standards-over-native-apps argument, is that Web sites built using modern Web standards – running on a modern browser – have the potential to be just as feature rich as any native app.

That may not be quite true: to the best of my knowledge HTML/JavaScript does not support device specific features such as compasses and native buttons. So in that regard, apps have a slight edge. That said, I don’t think there are any huge gaps in the functionality available through the standard Web technologies. Certainly there are no insurmountable ones.

I do see one big problem for the Web standards supporters, though. One big difference; one thing that Apple – and the others – offered which so far Web applications have failed to match. Micro-payments.

I’m sure there may be many people who disagree with that statement, but hear me out.

Firstly, the importance of micro-payments. Whatever the difference in the functional capabilities of Web applications and native apps, Apple’s App Store essentially created an industry – or at least a sub-industry. It was probably the first to market its applications as apps and certainly the most successful. 10 million apps have been downloaded and the App Store’s revenue in 2010 reached $5.2 billion. The median revenue per third party app is $8,700 – according to Wikipedia (and source) – and there are more than 300,000 apps available, equating to a $2.6 billion market for the app developers themselves. No doubt, then, this is big business. And (as big a fan as I am of FOSS) that kind of potential revenue has been the main driver in the success of the app movement, has driven many software innovations in the name of competition and, most importantly, has created an environment for entrepreneurs and developers to benefit financially from their creations. I would say – in my opinion unquestioningly – that the success of the App Store and its contemporaries is primarily down to a no-fuss micro-payment system where users feel safe and comfortable and not under pressure when parting with $3.64 (on average) of their hard earned cash.

You may argue that these micro-payment systems already exist on the Web; Amazon’s payment system – including 1-click – and PayPal could be seen as successful examples, amongst many others, no doubt. However, the argument which I’m contesting is not that Web-based systems are a better alternative to native apps but that open standards are… Amazon is no more open than Apple or Android. Google Apps – while very much Web based – doesn’t use a payment system which is compatible with any of its competitors or described in any specification (W3C or otherwise). In fact the W3C did look into this very issue in the 1990s, although due to lack of uptake of micro-payments (how wrong they were) the working group was closed.

Anyway, to conclude this (now rambling) post… it’s all well and good to plug open standards but, until the big issue of payments is resolved with an open standard, a healthy applications market – and hence the competitive pressures which have pushed the boundaries of mobile software to what it is today – could not exist without closed systems and APIs.

Publishing from a Content Hub

Web CMS

Working as part of a sales team, one of the questions that I’m asked again and again – by my management as well as the Marketing department – is “who are your biggest competitors?” For a Web content management system or text analytic tool (Nstein’s WCM and TME respectively), that’s a fairly easy question to answer. In the DAM space, however, because of Nstein’s particular focus upon the Publishing industry the answer is less clear.

A simplified example of a publishing workflow.
Content Hub workflow

With assets stored in a central repository all systems and processes have direct access to them.

In fact, over the last couple of years Nstein has been positioning its DAM offering as a strategic centre-point for publishing workflows – Content Hub seems to be the prevailing (if slightly uninspired) label for this kind of system. Essentially, a Content Hub is a DAM with integration points so that all assets which come into the wider system (the company, publication, etc) are ingested straight into it; all content which is created internally is written directly into it; and then, all systems which utilize, display, edit or distribute content do so from the Hub directly. This is not a new model – it is sometimes referred to as a single version of the truth – however it often represents significant change and significant challenges in environments which have naturally developed around a (fairly) linear workflow. Magazines, in particular, as well as any breaking news publications, tend to have an A-to-B style workflow which involves filtering incoming media, bringing it together as a publication of some description and then publishing it out. By repositioning the processes and applications along such a workflow around a central Hub, dependencies and bottlenecks are broken down and assets, and access to them, become standardized. As a symptom of this shift, efficiency improves, asset re-use is encouraged and assets, their rights and usage information are better tracked. And by creating packages of content, independent of both source and output channel, features can be efficiently published on multiple channels (such as Print and Web) and new properties can be created cheaply with lower risk.

So, coming back to the original question, the DAM space doesn’t present that many competitors for Nstein (although there are, of course, a few) as few DAM systems have the out-of-the-box capabilities required by the vertical – handling extended metadata, transforming images, re-encoding video, printing contact sheets, managing page content, &c. In fact, the biggest competition in these cases comes squarely from Print Editorial System vendors who would, like us, endorse a Content Hub approach except with their CMS at the centre of the publishing universe.

In some ways both sets of vendors – DAM and Editorial System – are using the same arguments. One version of the truth, certainly. Single workflow and security. To some extent the multiple-channel publishing argument would also be used by both; certainly most Print Editorial Systems come with some option to publish a Web site as well.

These two approaches to the same Content Hub strategy raise a couple of key questions: what is the difference between the two solutions and how do those differences affect the buyer?

The former question is the simplest to answer: a DAM based Hub disassociates itself from the editing and creation of products whereas an Editorial System is strongly tied in to the production process. Take the creation of a newspaper, for example. The collaborative effort needed to construct a modern edition in an efficient and reliable manner relies heavily upon Editorial Systems to manage the agglomeration of the content and design in real time. The question is: should that System be the hub or a spoke?

How do these differences affect the buyer? What are the relative merits of the approaches? These questions are the ones which are being debated and rely upon strategic visions that the publisher may just not share. However, from my point of view, here are the main points.

On the plus side for the Editorial Systems, as they are so connected to the production process, they can offer advanced and specific functionalities, tying in closely with DTP tools and offering collaborative working features which a DAM cannot compete with.

That strength, however, is also the biggest weakness for the Editorial Systems. By abstracting themselves from the production process the DAMs become far more agile. We can look at a fairly simple example of this in publishing the same content to both print and the Web, a process which should, by now, be a commodity. At its simplest this task should work smoothly in any Print Editorial System; text and images from a print feature are transformed into Web pages and published online. What happens, though, when other media is introduced? Most Print Editorial Systems that I have seen struggle to (or cannot) display and edit video. Maybe they can store videos but the advanced features available for print content are gone, as are many simple features such as previewing and usage tracking. Now in many cases, the Print Editorial System may be coupled with a Web CMS (potentially from the same vendor) which does feature better handling of video but in that scenario there are now two production points. That means compromised security, more staff training, more convoluted audit trails. Then, when you take audio, Flash, or any other format of content that the publisher may use – online or elsewhere – the problem is magnified.

One solution for the Editorial Systems would be to develop the extra functionality required to handle these formats with the same level of functionality as the print content which they are familiar with. The obvious problem with that is the effort and available resources required to build and maintain such a suite. So by steering clear of the production process the DAM based systems can handle content in a channel-ambiguous fashion.

Particularly when one looks at the creativity in digital media these days, the strength of agility should be clear. There are the obvious ones: Facebook apps, QR codes, iPad channels, etc. There are also some less well adopted mediums.

In October 2008 Hearst released a special edition of Esquire (sponsored by Ford) featuring an e-ink, animated front cover. Bauer last week released an issue of Grazia featuring Florence (and the Machine) dancing in an augmented reality world activated by pointing your webcam/iPhone at the cover. This was pretty disappointing in comparison with many other AR examples (such as the great GE ones) because the real page was not displayed – more on that in a future post. While neither of those examples was particularly well implemented, they definitely show signs of what could become mainstream technologies in the future. The question of adding the functionality to manage the production of publications including these kinds of technologies into Editorial Systems is a far-fetched one. Not only is the investment significant and the road to maturity slow but, if a technology ultimately fails to gain mainstream accessibility, the investment becomes a wasted one. For that reason companies that rely upon an Editorial System at the core of their business have to wait until new technologies reach general acceptance to embrace them and lose the ability to stay ahead of the curve – at least without excessive risk. In those cases, as with more mundane ones, the channel ambiguous and content ambiguous DAM systems project their flexibility directly on to the publications which use them.

That’s not to say that there are not downsides to using the DAM as the Hub. In particular, collaborative working cannot be handled to the depth that the Editorial Systems manage without their level of detail and understanding of the specifics. And in both cases there are overlaps in functionality; most Editorial Systems have some kind of repository, for example, and many top tier DAM systems integrate well with DTP tools.

Inevitably, those two questions drive towards the ultimate conclusion of the debate: “Which would make a better Content Hub, an Editorial System or a DAM?” I won’t attempt to answer that directly as I’m obviously biased towards the solution I sell and know the most about but will encourage debate from those who have an opinion…

The future of video on the web

I’m getting rather excited about video media online. We’re on the cusp of a revolution in the way we produce and consume the medium.

I was working on a project recently which involved video content. It struck me that, although we have come on no end in terms of our ability to distribute video over the web in the last half decade, video content still has huge holes in the orthodox functionalities of more established media.

Most obviously, there is the dependency upon external codecs (i.e. not native to the browser). The solution to which, in most cases, is a Flash player. There are numerous Flash players available freely and cheaply on the web; they can usually play most of the common video types and depend only upon a single plugin, Flash. YouTube is probably the best known example of using Flash to play videos.

This approach creates problems all of its own, though:

  • Flash players still have a dependency upon a browser plugin.
  • The binary video – the original file – is not transparently available in the way that images and text are.
  • Flash does not always cohere with de facto web standards: you cannot apply CSS to Flash, it does not respect z-indexes of objects (ever seen a drop-down menu disappear underneath a Flash component?).
  • It does not have a full set of properties directly accessible for the content it wraps, as other elements in a page’s DOM do.

Don’t get me wrong, Flash has its place in the modern web. It is a fantastic platform for RIAs and rich, animated and interactive components of web sites. However, as far as video presentation goes it is, essentially, a hack.

These drawbacks for video (and, in fact, audio) presentation, manipulation and playback have not gone unnoticed. One of the most important changes for HTML5 – first drafted back in January 2008 – is the handling of these mediums with the <video> and <audio> tags, now supported in both Gecko and Webkit.

The initial specifications for HTML5 recommended the lossy Ogg codecs for audio and video:

“User agents should support Ogg Theora video and Ogg Vorbis audio, as well as the Ogg container format”

The reasoning behind this drive for a single format seems obvious enough. Going-it-alone doesn’t really work as far as web standards are concerned (does it IE?). There were, however, some objections as to the choice of codec, namely from Apple and Nokia. The details of the complaints are not really relevant to this article but can be read in more detail on the Wikipedia page, Ogg controversy. At the end of the day it doesn’t really matter which format is used as long as it is consistent with the requirements of the W3C specifications; for this article I am going to assume that the Ogg codecs and container will be standard.

So, now that we have browsers (Firefox 3.5, Safari 3.1) which support the <video> tag and have native Ogg coders/decoders (at least Firefox does), all of the deficiencies of video we discussed earlier become inconsequential. If video works as part of the HTML then it will behave as such. CSS, for example, will operate on a video element in exactly the same way as it would for an image element, z-index and all. The DOM tree for the page will include the video with all of its properties as expected. And, crucially, events and JavaScript hooks allow web developers with no special skills (such as ActionScript) to control the behaviour of videos.
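
For instance, here is a minimal sketch of those JavaScript hooks (the element id and source file are made up for illustration):

// Assume the page contains <video id="lecture" src="lecture.ogv"></video>
var video = document.getElementById('lecture');

// Jump straight to a point of interest and start playback
video.currentTime = 23;
video.play();

// React to playback position just like any other DOM event
video.addEventListener('timeupdate', function() {
    console.log('Now at ' + video.currentTime + ' seconds');
});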

Silverorange.com have provided a nice example of using video with CSS. If you are running Firefox 3.5 or later you can check it out by clicking on the image.

But there is another – for me more interesting – feature of Ogg video (and, presumably, its alternatives): metadata. Now, metadata in video is nothing new, for sure, but having access to a video’s metadata as described above will lead to a whole new level of video media integration in webpages. The Ogg container, for example, supports a CMML (Continuous Media Markup Language) codec and, in a developmental state, Ogg Skeleton for storing metadata within the Ogg container. Both of these formats facilitate timed metadata. In CMML one could define a clip in a video – say from 23 seconds into the movie up to 41 seconds in – and add a description, including keywords, etc, to that clip specifically. I will resist the temptation to go into a description of how JavaScript listeners could be used to access that data but, in essence, the accessibility of the information to the web page containing it would allow a programmer to accomplish fantastic features with trivial techniques.

The most obvious example has to be for search. Being able to display a video from a specific point (where the preceding data may not be relevant) is not out of scope of the Flash based players but would be much easier to accomplish.

If we squeeze our imaginations a bit further, though, I think there is great potential for highly dynamic, potentially interactive sites to be based around video as the primary content. When demonstrating front-end templates for Nstein’s WCM I always pay particular attention to in-line, Wikipedia style, links which we create in a block of text using data derived from the TME (Text Mining Engine); in-line for text equates, with timed metadata, to in-flow for video. In the past video has, by and large, been limited to a supporting medium, a two minute clip to illustrate a point from the main article. With timed metadata this could be a thing of the past.

Imagine this: you have just searched for a particular term and been taken to a video of a lecture on the subject, playing from 20 minutes through – the section relevant to your query. As the video is playing, data is displayed alongside it: images relevant to the topic, definitions of terms; and as the video moves into new clips, with new timed metadata, the surrounding, supporting resources change to reflect them – in-flow.
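
A rough sketch of how trivial that could be for the page author, assuming the timed metadata has already been exposed to the script as a simple array (the clip data and element ids below are hypothetical):

// Hypothetical clip metadata, e.g. parsed from CMML in the Ogg container
var clips = [
    { start: 0,    end: 1200, topic: 'Introduction' },
    { start: 1200, end: 2460, topic: 'The section relevant to your query' }
];

var video = document.getElementById('lecture');
var sidebar = document.getElementById('supporting-content');

// As playback moves into a new clip, swap the supporting resources in-flow
video.addEventListener('timeupdate', function() {
    clips.forEach(function(clip) {
        if (video.currentTime >= clip.start && video.currentTime < clip.end) {
            sidebar.textContent = clip.topic;
        }
    });
});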

An example of using CSS3 with the video element from Mozilla.

As people appear in films and episodes links could be offered to the character’s bio and the author’s home page. Travel programs could sit next to a mapping application (GoogleMaps, etc) showing the location of the presenter at the current time. There are huge opportunities with this kind of dynamic accompanying data to enrich video based content. And, of course, all of the data from a particular clip can integrate into the Semantic Web seamlessly. RDF links and TME generated relations could easily be used to automate the association of content to a particular clip of a video.

The downside? Well the biggest one as far as I can see is the time-frame. Most publishers are continuing to commit to, and develop, black box style video players due to the fact that no one – a few geeks, such as myself, excluded – use cutting edge browsers. But when HTML5 gets some momentum behind it from a web developer/consumer point of view the horizons for video will be burst open wide.

http://en.wikipedia.org/wiki/Ogg_controversy

Brand: the new pretender

Content is king, is it? Well maybe. There’s no getting away from the fact that good quality content drives traffic. But in the struggling publishing industry, with waning advertising revenues, we might have to conclude that the current approach to web publishing is just not working.

That’s not to say there aren’t exceptions. Julian Sambles (@juliansambles), head of audience development at the Telegraph Media Group, talked at the recent ePublishing forum on his success in terms of SEO and bringing new audiences to the Telegraph site. No doubt other publishers have had similar successes. However there are problems associated with that kind of drive for SEO – not least because it is a very expensive process in a climate where large budgets are scarce. But, for me, I have more important reservations about focusing heavily on search engine optimised content.

Firstly, there is the issue of editorial integrity. If content was truly king then its quality would be the single most important factor in growing (and keeping) an on-line audience. For a lot of publishers content isn’t king though – search is. In that scenario a publisher is not controlling how its content is consumed, or in what order. They will, undoubtedly, find that their political and social stances are watered down as well, as traffic heads more to soft news and opinion. In circumstances like these the focus actually moves away from the content and towards how the content is structured – the role of the publisher gets closer to that of an aggregator.

The next problem with relying on search engines to supply one’s on-line audience is inherent: the consumer is researching not discovering (@matt_hero’s search trilogy is, loosely, relevant here). I seriously doubt Google is inundated with searches for the word “news”. Perhaps terms like “football results” are more common but still not that frequent. If a visitor arrives at a site from a search engine it is fairly safe to assume they fall into one of two categories:

  1. They’ve already read the news elsewhere, first.
  2. An aggregator has presented them with summaries and the content suppliers only get a hit (and, hence, revenue) for the stories they are really interested in.

Of course, if that visitor then stays on the site – or even bookmarks it – then great. Search engine optimisation does create new users, and they can become regular visitors. The problem is that without a strong brand the proportion of stray surfers who end up on a content producer’s site and are converted into frequent readers is much smaller.

The prevailing opinion these days is that the fickleness of consumers comfortable with search is inescapable; that hitting the top spot on Google is overwhelmingly the best way to drive traffic. I just can’t believe that. Certainly that sentiment doesn’t apply to me. I’m quite modern in my consumption of the news: I almost never buy a physical paper any more. But that doesn’t mean I don’t appreciate the editorial “package”, as Drew Broomhall (@drewbroomhall), search editor for the Times, described the journey a (print) newspaper reader is guided through. Every morning I embark on such a journey, led (very rigidly) by the BBC’s mobile site. And, while monetizing mobile content is harder than on traditional web pages, that builds a very strong brand loyalty for me. If I read any news at work, or explore in more depth a story I read that morning, it’s always on the BBC news site.

So I would argue that the reader’s experience – the editorial journey – is far from a thing of the past and, in fact, is as important now as it ever was for print media. There is no need to limit that experience to mobile channels, either. There is a wealth of frameworks available for producing widgets and apps on all kinds of platforms. Another talk at the ePublishing forum, by Jonathan Allen (@jc1000000), explored in more depth how to take advantage of these output channels. iGoogle widgets, iPhone apps, Facebook applications are all great examples.

This approach not only allows publishers more of the editorial control which they had in producing print media (and lost to the search engine) but also creates a better user experience. Focused distribution channels for on-the-rails feeds can give a consumer the feeling that a publisher is doing something for them. With news being such a commodity in the on-line world these channels add real value for the audience. And if there is value for the audience, they will promote that content themselves. Creating, for example, a widget for an iGoogle user’s homepage, which displays featured articles, engages them (and presents a link back to the original content) before they have even done a search.

We see this kind of selected-content approach commonly in the form of RSS feeds (although, too often as “latest” not greatest). Widgets and apps aren’t really doing anything different, rather they are making the stream more accessible, more user friendly. There’s another attraction to widgets and apps over RSS feeds, though – a point from Jonathan’s talk which almost makes these channels a no-brainer – they really help to boost the main document’s search engine ranking. So, contrary to being an alternative to SEO, widgets help drive traffic both ways.

You can take this one step further and allow the audience to define their own paths through content. As semantic understanding becomes more and more achievable, through tools such as Nstein’s Text Mining Engine (TME) and the dawning of an RDF-based semantic web, publishers will be able to offer dynamic widgets with content ordered by an editorial team and filtered by a user. The iGoogle widget described above could easily be filtered for a Formula One fan, based upon data from the TME, to create a custom feed of stories they are interested in. Or if a consumer enjoys the “package” they can take the unfiltered list.
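
As a toy sketch of that kind of filtering (the stories and tags below are invented purely for illustration):

// A feed ordered by an editorial team, each story tagged by semantic analysis
var editorialFeed = [
    { headline: 'Qualifying report', tags: ['sport', 'formula one'] },
    { headline: 'Budget analysis',   tags: ['politics', 'economy'] },
    { headline: 'Race preview',      tags: ['sport', 'formula one'] }
];

// The Formula One fan's widget keeps the editorial ordering
// but only shows stories matching their interests
var userInterests = ['formula one'];

var customFeed = editorialFeed.filter(function(story) {
    return story.tags.some(function(tag) {
        return userInterests.indexOf(tag) !== -1;
    });
});

console.log(customFeed.map(function(story) { return story.headline; }));
// ["Qualifying report", "Race preview"]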

No silver bullet for publishers struggling in the migration to the web, for sure, but thinking about how content is offered as a package is a strong, and often underused, way of strengthening a brand and driving traffic. As always, IMHO…

About me

Contrary to the massive "Chris Scott" at the top of the page, I'm not a (complete) ego-maniac. I just liked the font and couldn't think of anything more interesting to say.

I'm a passionate developer and entrepreneur. My company Factmint provides an elastic RDF triplestore and a suite of Data Visualization tools, so I largely talk about those things.
