The Internet Explorer 8 web developer’s dilemma

September 16, 2013

This post originally appeared on

Google Analytics has announced it will end IE8 support by the end of the year, following Google Apps, which ended support for the browser last November.

Legacy browser support remains one of the hardest problems in web development. For years, Internet Explorer 6 was a bugbear, because enterprise applications were written with it in mind. Sadly, the same is now true of its descendant: nobody uses IE8 on the weekend, which suggests that its remaining users are on enterprise networks, where the browser is forcibly installed and users aren’t allowed to install their own software.

Internet Explorer lock-in is rife in the enterprise, because of the browser’s non-standard web support and ubiquity on Windows computers. Faced with supporting IE8 or web standards as they were actually specified, many enterprise vendors went with IE8, because that’s where the customers were.

Compounding the problem, IE8 is the last browser in its line that will run on Windows XP, which is still prevalent in enterprise environments (even if users are slowly making the migration to Windows 7). In other words, to run a better version of Internet Explorer, enterprise IT departments don’t just have to give permission for it to be installed; they must upgrade their computers from another operating system first. This is a significant expense.

In the web development community, it’s easy to be dismissive and say that these organizations should be running Linux, and shouldn’t have got themselves into this situation to begin with. (I’ve heard this attitude a lot.) That ignores the much broader context that Windows enterprise computing sits in, including the software ecosystem and the support infrastructure that’s grown up around it. Most importantly, though, if we want to sell to a customer, it’s probably a good idea to support the platforms that they actually use. The larger and more security-conscious the customer, the more reluctant they may be to upgrade their platform software regularly.

So how do you balance the fact that so many customers are on Windows XP with the fact that Internet Explorer 8 is a hideous, insecure platform that must be developed for separately?

One option is to gently suggest Firefox or Chrome, which both work with Windows XP SP2. At latakoo, we’ll be doing that increasingly less gently; we’ve already communicated to our customers that we’ll be slowly phasing out support, and we’ll soon be adding some visible messaging urging them to switch browsers. However, the pragmatic reality is that many users can’t switch, because of their IT rules, and often because of the IE8-specific in-house apps they’re running, so we can’t simply turn off support, even though maintaining IE8-only code costs us extra.

Moving away from IE8 will be more secure for every organization. (Microsoft is ending support for Windows XP in 2014.) Until then, if you’re an enterprise IT manager, I recommend encouraging a two-browser solution: IE8 for the apps that really need it, and a secure, modern browser for everything else (including latakoo).

For developers, there’s a lot to be said for increasingly less-subtle messaging explaining why Internet Explorer 8 is a bad choice. You’re providing useful advice, while also encouraging your customers to get better value for money out of your service (because more developer time can go into new and more resilient features rather than legacy browser support). But don’t switch off support completely – not quite yet, at least – lest you leave some of your most important customers out in the cold.


You can’t empower users by targeting ads

August 19, 2012

Scott Hanselman has a great take on platforms and ownership on the web:

  • Why doesn’t someone make a free or cheap social network for the people?
  • Why can’t I control my content?
  • Why can’t I export everything I’ve written?

[...] All these questions are asked about social networks we don’t control and of companies who don’t have our best interests at heart. We are asking these questions in 2012?

[...] You want control? Buy a domain and blog there.

His whole post is worth a read. But I think this goes far beyond blogging.

The “cloud”, at least as it’s popularly thought of, is really just a user-friendly, web-based take on mainframe computing, which was super-popular in the seventies (before personal computers took off), and had a resurgence in the early nineties (through the likes of AOL, Prodigy and CompuServe). Applications are stored on servers, and you access them through thin clients (in this case, the browser). It’s been a valuable way to circumvent IT departments, stop caring about upgrades and pesky computing issues like viruses, level the playing field by making it irrelevant whether you’re using a $300 Asus or a $2,000+ MacBook Pro, and bring software to the masses like never before. Unfortunately, it’s also been an opening for people to abuse the trust inherent in that relationship, and to create models where users are exploited in ways they may not have foreseen.

There are some key differences between cloud computing and the mainframe model of old, even leaving aside the obvious accessibility and ease-of-use gains:

  • Anyone can build an application.
  • Anyone can run their own mainframe.

It’s certainly true that most people don’t want to build applications or run their own servers. However, it’s also true that inside any medium-to-large company you care to think of, there will be a dizzying array of string-and-blu-tack semi-applications written in things like Microsoft Access. This largely doesn’t happen on the application web: somewhere in the mix, we’ve lost the control and interactivity that allowed people to use software on their own terms.

Now, sure, typically those Access databases are a mess, are stored in hard-to-find places, and duplicate work within an organization. In every non-tech enterprise I’ve ever worked in, it’s been a terrible situation: muddled and complex. But that’s what we’re here for as technologists: to create tools that empower users and improve their lives. (Contrast that with farming users and harvesting their lives, which is becoming the dominant business model on the social web.) Where are the tools that allow users to build their own solutions and find their data, easily and on their own terms? Why are people working on ways to deliver ads instead of on these problems? Why can’t a non-technical user procure a server with the applications they need, under their control, as easily as buying an app on their iPhone?

The irony is that these kinds of applications have a much higher chance of making their founding entrepreneurs billionaires than trying to be the next Instagram. They’re not as cool, perhaps, but they are the foundations of real businesses that take money from customers and create real value in return.

I know people who are making great strides in these areas, and I’m bullish about their future success. But it’s fascinating to me that more people aren’t following suit.

In the meantime, if you want control, definitely buy a domain and blog there.

Direct messaging in a social web architecture

March 31, 2010

This post is the third segment in my series on an architecture for the social web. Previously: How social networks can replace email, which is a non-technical approach to the issues, and my follow-up describing how to build a social web architecture using available technology today.

So what about direct messaging?

In my previous post, I described content notifications in the social web as being Activity Streams updates in response to requests signed with an OAuth key. Each individual contact would have his or her own OAuth key, and the system would adjust delivered content depending on access permissions I had assigned to them.

A private message in this architecture could just be represented as an item of content restricted to a small set of recipients (in the email use case, this is typically just one), with replies delivered using Salmon. The advantage of this approach is that the message doesn’t have to be text; it can be audio, video, a link to live software, or something else entirely.

However, while this is technically feasible, it may not always be desirable. We know from Google Wave, which also pushes the boundaries of person-to-person messaging, that an open definition of what a message contains can get very messy very quickly. Although I was one of the first people to have one, I no longer check my Wave account regularly. I believe this is mostly a user interface issue: Wave is an awesome collaborative document editor (what I’ve heard described as “a massively multiplayer whiteboard”), but not in any way the evolution of email that its development team claimed.

Therefore, I think it’s useful to think about the difference between a document and a message:

  • A message is the body of a communication.
  • A document is a bounded representation of some kind of information.

While in many ways they’re the same, I think it makes sense to make a separation on the UI level. As we’re discussing a decentralized architecture here, some kind of semantic marker in our activity stream feed to mark something as a message would be a useful feature.
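To make that concrete, here’s a minimal Python (standard library) sketch of a direct message represented as an Activity Streams Atom entry carrying such a marker. The `MSG` namespace and its `recipient` element are my own invention for illustration – no such extension exists in the Activity Streams 1.0 spec; nodes would have to agree on one.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ACTIVITY = "http://activitystrea.ms/spec/1.0/"
MSG = "http://example.org/ns/social-message"  # hypothetical extension namespace

def build_private_message(author_uri, recipient_uri, body):
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = "Private message"
    ET.SubElement(entry, f"{{{ATOM}}}content").text = body
    # Standard Activity Streams verb for posting an item.
    verb = ET.SubElement(entry, f"{{{ACTIVITY}}}verb")
    verb.text = "http://activitystrea.ms/schema/1.0/post"
    author = ET.SubElement(entry, f"{{{ATOM}}}author")
    ET.SubElement(author, f"{{{ATOM}}}uri").text = author_uri
    # The semantic marker discussed above: flag this entry as a direct
    # message restricted to one recipient. Entirely hypothetical.
    ET.SubElement(entry, f"{{{MSG}}}recipient", href=recipient_uri)
    return ET.tostring(entry, encoding="unicode")
```

Because the message is just an Atom entry, the `content` element could equally hold HTML, or link out to audio or video, as described above.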

Messaging “out of the blue”

You know where you are with an email address. Mine is Anyone who encounters that string of characters, whether on a website like this one, a business card or a scribbled note on a piece of paper, is able to send me a message from anywhere in the world. In the 17 years I’ve had an email address, the list of friendships and business connections I’ve made, and opportunities I’ve received and developed, through this simple mechanism has been uncountable. It’s also likely to continue far into the future.

Compared to this, visiting someone’s social web profile and sending them a message from their web presence is a hassle. Compare these steps:

  1. Receive the address of someone’s profile
  2. Click the “follow” button either on the profile itself or on the toolbar of your social web compatible browser
  3. Wait for the contact to follow you back
  4. Send your message

Against the email equivalent:

  1. Receive someone’s email address
  2. Send a message to that address

It’s simple, ubiquitous, decentralized and universally compatible. In fact, it seems hard to improve on, doesn’t it?

However, as this is a thought experiment about how social networking can replace email, let’s see if we can simplify this process somewhat. In my previous post, I discussed how a connection could be established with OpenID and OAuth through a web-based interface on a social web profile. How can we make this as simple as emailing someone, and cut out most of the steps I’ve listed above?

Connecting programmatically

I propose two additions to my previously discussed mechanism. The first is to expand the connection protocol to include a message. If someone connects to me on LinkedIn or Facebook, I receive some explanatory text from them, so it makes sense to include this feature in our decentralized social web architecture. It is likely that this would be an added parameter to the OAuth request token procedure.

The second is to allow connections to be made programmatically through a custom application. Just as we use email clients now, a social web client could automatically send a connection request. In keeping with our principle of using existing technology where possible, this is a simple OAuth connection request from the application, which includes a user message as described above. The application knows our details because we’ve set our preferences, so we’re never visibly redirected to a web browser to complete authentication. (In fact, this could take place using xAuth, a version of the OAuth protocol being developed for just these sorts of browser-free use cases.)
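A rough sketch of what such a programmatic connection request might look like, using standard-library OAuth 1.0a HMAC-SHA1 signing. Only the `oauth_*` parameters and the signing procedure come from the OAuth spec; the endpoint URL and the `x_connect_message` parameter are my assumptions. A real client would POST these parameters to the node’s request-token endpoint.

```python
import base64, hashlib, hmac, secrets, time, urllib.parse

ENDPOINT = "https://node.example/oauth/request_token"  # hypothetical node URL

def q(s):
    # OAuth 1.0a percent-encoding: everything except unreserved characters.
    return urllib.parse.quote(s, safe="")

def signed_connect_params(consumer_key, consumer_secret, message):
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": secrets.token_hex(8),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
        # Hypothetical extra parameter carrying the introduction message;
        # OAuth itself says nothing about this.
        "x_connect_message": message,
    }
    # Standard OAuth signature base string: method, URL, sorted parameters.
    normalized = "&".join(f"{q(k)}={q(v)}" for k, v in sorted(params.items()))
    base_string = "&".join(["POST", q(ENDPOINT), q(normalized)])
    signing_key = q(consumer_secret) + "&"  # no token secret exists yet
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()
    return params
```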

Whether we can send a follow-up message now depends on the receiving party. We have our OAuth token, and while it remains valid, the receiving social web node may choose to ignore any follow-up requests.

Our procedure has become:

  1. Obtain address of someone’s social web node (you could even infer it using WebFinger)
  2. Send a message to that node, bundled with a connection request

This is significantly better, and is comparable to the simplicity of email.
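The WebFinger inference in step 1 can be sketched in a few lines. This post predates RFC 7033 – at the time, WebFinger meant host-meta/XRD lookups – but the discovery idea is identical, so the sketch below uses the later, simpler `/.well-known/webfinger` form.

```python
import urllib.parse

def webfinger_url(address):
    # Turn an email-like "user@host" address into a WebFinger lookup URL.
    user, _, host = address.partition("@")
    if not user or not host:
        raise ValueError("expected user@host")
    resource = urllib.parse.quote(f"acct:{address}", safe="")
    return f"https://{host}/.well-known/webfinger?resource={resource}"
```

Fetching that URL would return a descriptor from which the client can pick out the address of the person’s social web node.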

You may be wondering about the wisdom of adding everyone you contact as a connection. In fact, there’s some precedent for this already in applications like Gmail. It’s important to note that not every connection need be a friend: in some ways, you can think of your total list of connections as your contact book. Some are important; some can be safely squirreled away until you need to contact them again. In this context (or any context where people you have a relationship with and people you’ve contacted are merged into one set), an adequate person management interface – or CRM to you and me – becomes important.

Next, and finally: let’s make our distributed social web architecture reliable enough to use in enterprise environments, using messaging technologies like ZeroMQ and AMQP.

Activity Streams and OAuth: a social web architecture

March 12, 2010

My previous post was a response to Gartner’s prediction last month that social networking would replace email as the “primary vehicle for interpersonal communications for 20 percent of business users.” In it, I named some properties that would need to be held by any social networking system that would successfully replace email.

  • Ease of use
  • Ubiquity across devices
  • Platform, service and infrastructure independence

My argument boiled down to the following statement:

Email has succeeded because it’s open, standard and decentralized; for social networks to replace it, they must also be open, standard and decentralized.

Email is useful because just about everybody has an email address. I can get in touch with my clients in London, my friends here in Oxford or my grandfather in Austin, Texas, with equal ease, even though all of them are using different infrastructure and software provided by different companies. I use Gmail, but there doesn’t need to be any kind of formal agreement between Google and whoever’s providing my grandfather’s email, say. It just works; nobody owns email as a communications method, and anyone can set up an email server. The same is true with websites: anyone can set one up, and nobody owns the web.

For social communications to be as popular and ubiquitous as email, there must be one social web, and it must be owned by nobody. That means that each socially-aware site or application must implement the same social communication standards.

The best standards aren’t dictated: they evolve through common usage. If you look at HTTP (the protocol that the web relies on), SMTP (one of the protocols behind email) and file formats like RSS and HTML, the common thread behind them is that they’re simple. It turns out that through excellent work at companies like Google, Plaxo, SixApart, Twitter, JanRain and – perhaps incredibly – JPMorgan Chase & Co., we already have a number of technologies that collectively embody the properties I listed above.

Notes and server architecture for one possible social web

These are my ideas about how these standards might be used. They aren’t intended as replacements for existing social networking platforms or services; rather, they could easily be added as features both to those and to many other types of application. The ability to share isn’t unique to social networking software – think about its usefulness in applications like Word or Google Docs, for example.

With email, you use a software client (Outlook, say, or the Gmail web interface) that speaks to an email server which does the hard business of sending and receiving messages to and from the wider Internet. Here, I will be describing a system where everyone has their own node on the social web, which effectively acts as a client and server. Mine might be here at, for example. It’s my website – my profile on the social web – and it’s where I send social communications. That’s the server side. However, it also acts as the client when I’m accessing resources stored on other peoples’ servers.

Establishing connections and granting permissions

Let’s say I want to make a resource available to my clients. With email, I’d send them each a separate copy. This is both insecure and inefficient: I have no control over what happens to that copy, and each time I send it I create a new version. With some back-and-forth, there could easily be ten or twenty individual copies of a document floating around. (I often bounce software specifications – typically Word documents – around with my clients, and this is something that happens to me regularly. Google Docs is probably a better solution, but not everybody has a Google account.)

With the social web, only one version needs to exist, which I own. If my clients have established a connection with me, I can restrict that resource so that only they may see it. The tricky bit is that in order to know if it’s really them, they must be authenticated in some way.

In monolithic systems like Facebook, where everyone uses the same website, that’s easy: my client must be logged in, and we must have established a friend connection. In a decentralized system, that’s a much harder problem, but not insurmountable. Two technologies will help us:

  • OpenID: the open, decentralized authentication standard, which currently uses a website address as a kind of universal username
  • OAuth: an open protocol that “allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their username and password.” OAuth provides a secret token to applications that they can use to access authenticated services and resources behind the scenes

Specifically, we’ll need OpenID Connect (or, until that’s up and running, the OpenID / OAuth hybrid protocol), because we’ll be using OpenID to authenticate, OAuth to power our decentralized access permissions, and a number of other protocols and endpoints along the way. It’s much neater if these are all established at once.

Making friends and getting updates

The process would work in the following way. Let’s say I want to make a connection with my friend Marcus Povey.

  1. I visit his site, and see that he is displaying a “connect to me” icon, indicating that it is a node on the social web. Later on, perhaps my browser would detect that this was a social web node in the same way that most browsers detect RSS feeds today, and light up an icon. Chris Messina has started a five-part series on the browser as a social agent, which is worth a read.
  2. Either way, I click on “connect to me”. Marcus’s site prompts me for the address of my profile, which I enter. (Later on, my browser does this bit for me.)
  3. My profile address is an OpenID, and through the authentication process my social web node receives an OAuth token from him. No further authentication is required.
  4. On his social web node dashboard, Marcus sees that I’ve established a connection with him. He can ignore it, in which case nothing happens, or he can mark me as a friend (or any other arbitrary designation, which could be unique to the software he’s using).
  5. My social web node periodically checks for activity updates from Marcus’s, signing each request with that OAuth token so it knows who I am. This may be at my direct request; through repeated polling, RSS-style; or the update may be pushed to me through a PubSubHubbub ping.
  6. Depending on the assignation he’s given me, Marcus’s node either responds with just a feed of public activity (if he’s ignored the request), or with additional activity he’s allowed me to see, in Activity Streams format.
  7. Marcus can change my assignation or withdraw my OAuth token at any time from his dashboard. (Of course, throughout all this, the OAuth token mechanism is invisible to both users: it’s simply presented as a social connection.)
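The permission check in steps 5–7 above amounts to a small filter on the feed. Here’s an illustrative Python sketch; the token table, the `"audience"` field and the assignation names are all my own, not from any spec.

```python
def visible_items(items, oauth_token, token_assignations):
    # An unknown or ignored token falls back to public-only activity
    # (step 6: an ignored request still yields the public feed).
    assignation = token_assignations.get(oauth_token, "public")
    allowed = {"public", assignation}
    return [item for item in items if item["audience"] in allowed]

feed = [
    {"title": "New blog post", "audience": "public"},
    {"title": "Holiday photos", "audience": "friend"},
]
```

Withdrawing a token (step 7) is then just deleting its row from the assignation table, after which the caller sees only public activity again.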

Embedded content and interacting directly on other social web nodes

Activity Streams is based on Atom, so content for items like blog posts (and resources like photos, using Atom Media) can be embedded directly in the activity feed. (Rob Dolin from Windows Live has some great examples.)

However, not all content is standard enough to be embeddable. In those cases, I can simply click through from Marcus’s activity update to his site, possibly log in again using OpenID, and interact with the content there. Additionally, by allowing users to log directly into his site via OpenID, Marcus can show selected people restricted content even if they don’t have the full range of social web software.

Friends lists and commenting

Further standards help us add extra functionality. If Marcus gives me permission, I might be able to download his contacts via Portable Contacts. Salmon is a protocol for commenting on distributed resources and allowing those comments to find their way upstream to the original, which is compatible with Activity Streams. Using this, I might be able to comment on Marcus’s activity items from within my dashboard and have them show up in his. Through this mechanism, all his friends could have a conversation on his activity stream items.
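As a sketch of how such an upstream comment might be represented: Salmon replies are Atom entries that point at their parent using the Atom Threading Extensions (RFC 4685) `in-reply-to` element. A real Salmon “slap” would also carry a signature so the receiving node can verify the author; that step is omitted here, and the identifiers are illustrative.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
THR = "http://purl.org/syndication/thread/1.0"

def build_reply(original_id, author_uri, text):
    entry = ET.Element(f"{{{ATOM}}}entry")
    # Point the comment upstream at the original activity item.
    ET.SubElement(entry, f"{{{THR}}}in-reply-to", ref=original_id)
    author = ET.SubElement(entry, f"{{{ATOM}}}author")
    ET.SubElement(author, f"{{{ATOM}}}uri").text = author_uri
    ET.SubElement(entry, f"{{{ATOM}}}content").text = text
    return ET.tostring(entry, encoding="unicode")
```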


So far, so good: we have a simple technological basis for permissive social communications. But if the social web is really going to replace email, we have to address one of the most important features for enterprise users: reliability. Businesses will not accept their critical communications being subject to fail whales.

In my next posts in the series, then, I’ll discuss person-to-person messaging and the thorny issue of guaranteed delivery.
