Devices and desires: why the portable device wars are a red herring

June 3, 2010 | 3 comments

A little pre-history

When I was a kid, I had an Atari 130XE. You’ve probably never heard of it. It was an 8-bit, all-in-one box that booted straight into BASIC; a flexible, well-built, sturdy computer.

There was just one problem: it wasn’t a ZX Spectrum or a Commodore 64.

At the time, Britain was undergoing a low-budget computing renaissance. Bedrooms up and down the country were filled with skinny boys (and yes, it was mostly boys) noisily loading games from cassette tapes and dutifully copying down source code listings from specialist magazines. The two engines of this renaissance were the Spectrum and the Commodore 64, and as such, the games, the tutorials and the social infrastructure were built for these two machines. Perhaps this helped me become more of a creative self-starter: I wrote my own games and stories instead of consuming other people’s.

Later on, 16-bit computers became popular, and everyone upgraded to the Atari ST: a home machine powerful enough for creatives and musicians, but cool enough for game-playing kids. Except, perhaps inevitably, we had a PC. Running DOS. With a black-and-white Hercules display. Great if you wanted to put economic figures through a spreadsheet, but lousy if you were a twelve-year-old who was mostly interested in playing The Secret of Monkey Island. Not only was the PC wholly incompatible with the Atari ST, but the PC was actually incompatible with itself: a game that worked on PCs with an EGA or VGA screen wouldn’t work with CGA or Hercules. Back then, the parts inside your computer were at least as important as the operating system you ran or the software you bought.

Plug and Play

Through sheer force and heavy lifting, Microsoft changed all that. Windows 95 was the first widely accessible operating system to unify hardware platforms. Sure, you had to have an Intel-compatible processor, and it took them a while to get it right (for a while the system was derisively dubbed “plug and pray”), but you didn’t have to mess with configuration files to get your computer working. This was a Big Deal.

Today, we’re used to not having to tinker with our machines. Windows will adapt to just about any hardware you throw at it, and even Linux has become an easy-to-use operating system (relatively speaking).

Better yet, we have data portability: in my house we’re running Windows 7, Mac OS X and Ubuntu, and I can move my documents between them interchangeably. Thanks to the web, and Java before it, we even have applications that don’t care what kind of operating system they run on. For an end user, things just work. That’s exactly how it should be.

Finally, computing is simple, data is interoperable and consumers are in control.

Uh oh: enter the portables

So just as we get a unified computing platform that’s easy to use and relatively simple for consumers to navigate, in comes a new device market that’s as fragmented and consumer-unfriendly as the computing market was in the eighties.

Android. iPhone OS. Windows 7 tablet edition. Windows Embedded Compact. Windows Phone. WebOS. ChromeOS. Kindle OS. Whew! It’s like 1986 all over again.

As a publisher or developer, figuring out which device to build for is a headache. Each one has a different operating system, possibly a different app store (something nobody had to worry about in the eighties), and a different set of underlying technologies. Do you exploit the iPad’s current success and develop for the locked-down Apple platform? Do you take advantage of Amazon’s huge built-in market and write a Kindle app? Do you hold out and wait for HP’s exciting-looking WebOS-powered tablet (HP caused a storm recently by publicly moving its tablet plans away from Windows)?

Plug and Play (again)

The truth is, market forces are going to apply the same pressures to the mobile market that the personal computing sector felt in the early nineties. This story has played itself out several times now: one platform will emerge victorious. Judging by the lessons of IBM’s Personal Computer architecture, and of Microsoft and Linux in operating systems, the winner is likely to be a platform which is:

  • Open: anyone can add it to their system for little cost, allowing hardware manufacturers to maximize profits by concentrating on the device itself rather than the ecosystem around it
  • Sustainable: it’s powered by a solid business ecosystem that will ensure the longevity of the platform
  • Friendly: it’s a system for everyone, not just hobbyists or developers
  • Flexible: it can be used in multiple contexts, from living rooms to science labs

By this measure, Apple is condemned to be a niche player, operating at the premium end of the market. Sure, right now technophiles everywhere are salivating over the iPad, but that will only last until someone comes out with something nicer. In any event, Apple’s reach is limited to the wealthier Western nations; elsewhere, far more people are waiting in the wings for more affordable devices. The third-world computer revolution is very much underway.

My bet, of course, is on web technologies. But it isn’t necessarily on the Internet: it’s time we separated web technologies from the World Wide Web. Indeed, connectivity isn’t ubiquitous, and isn’t likely to become ubiquitous world-wide for a very long time. Therefore, the ability to download, install and run apps offline, as we always have with software applications, is incredibly important.
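As a concrete (if era-specific) sketch of what an offline-capable web app looks like today, HTML5’s application cache lets a page declare the resources a browser should store locally. The file names here are hypothetical:

```html
<!-- index.html: the manifest attribute opts this page into offline caching -->
<!DOCTYPE html>
<html manifest="offline.appcache">
  <head>
    <meta charset="utf-8">
    <title>Offline-capable web app</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <script src="app.js"></script>
  </body>
</html>
```

```text
CACHE MANIFEST
# offline.appcache: everything listed below is stored locally,
# so the app can load with no connection at all
index.html
style.css
app.js
```

Once the browser has fetched the manifest, subsequent visits load from the local cache, connectivity or no connectivity.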

With its Chrome Web App Store, Google is leading the way, and showing that it understands what it takes to create a next-generation application platform. It has also shown leadership on HTML5, which it is clearly investing in as a genuine means of powering both content and software. The genius is this: anyone can build using web technologies, and web technologies can run on virtually any hardware. Google makes its money through value-added services: advertising (which lets both device manufacturers and software developers supplement their incomes), its app store, and powerful APIs that supply underlying logic. Google isn’t selling an operating system; instead, for most end users, it’s making the operating system irrelevant: simply the thing that runs the web browser.

My advice: ignore the hardware

Computers as we know them today will always exist, but they won’t be for everybody. If you’re developing for non-technical end users, the plethora of hardware devices available to you is a red herring. You should be thinking of the web as the platform your products will be based on. Make no mistake: you need to become an expert in web technologies now – or, of course, find someone who is.


Comments

  1. I think that the web gives us a marvellous lowest-common-denominator. If you want to reach as many people as possible then you produce a web app. Of course, apps written specifically for a given platform are going to produce a better experience on that platform, but you can fine tune what you produce based on what will give you a good return on investment.

    Now, if only these various phone system suppliers would agree on a common API then we could have apps that run everywhere with a little tweaking. Or, of course, they could allow you to produce apps that use middleware. There’s only one company that forbids that, and they’re the one you condemn to a niche in your post. I wonder if these two facts are linked…

    Andrew Ducker June 3, 2010 (4:40 pm)
  2. A wonderful summary and nary a word to disagree with. The only issue, really, is that companies and developers must still weather the period between Now (fragmented devices and OS’s) and Then (web tech mediated application ecosystem), and there’s no telling how long this unfortunate period is going to last.

    Alfie June 3, 2010 (9:37 pm)
  3. I agree with your premise, but you reach a conclusion which is far from inevitable. As you say, the offline use case is _extremely_ important, but what you don’t stress enough is that mobile devices are even now becoming more numerous than always-plugged-in, always-online devices.

    In this space, you cannot ignore the hardware. Advice like this leads to slow apps and dead batteries. My own estimate is that we are about 5 years behind the desktop CPU & memory curve (3-4 iterations of Moore’s Law), and battery technology has no analogue on the desktop.

    This gives us a number of years in which writing native code is worth it compared to writing in web technologies. In that time any number of things could happen that would still fit your open, sustainable, friendly, flexible constraints: Android could become a “serious” desktop-class computing platform, say, or JavaScript could run on a chip.

    jamesh July 11, 2011 (10:37 am)
