The Progressive (Profitable) Web

April 2, 2013 | 2 comments

Ryan Holiday laments the loss of Google Reader and RSS in general in Our Regressive Web, arguing that if someone came up with them today, we’d think they were brilliant ideas:

Nothing better has risen up to replace them. The underlying needs of a fairly large user base (that these services meet) still exist.

We’re just regressing.

[...] RSS is impervious to blogging’s worst, but most profitable traits. [...] No wonder nobody ever pushed for widespread adoption. Of course it died a slow death—along with Google Alerts and Delicious. Their mission is antithetical to the ethos of our new media age. Where noise, chatter and pushing—not pulling—rule the day.

Our Regressive Web by Ryan Holiday, on Medium

He’s right. Aggregated content – content on the reader’s terms – has a huge potential userbase, but it wasn’t profitable for either the bloggers or the aggregators, so it languished. Sure, you could tack some Google Ads onto the end of each post in a feed, but the reader retains full control over how the content is presented. Where’s the opportunity to upsell? Where are the branding opportunities or the baked-in communities, carefully designed to maximize ongoing engagement?

The irony is that blogs have actually downgraded their on-page advertising over time. If you visit TechCrunch today, you’ll only see two ads above the fold. Check out io9, and you’ll see none at all. The redesigned ReadWrite has a few more: a giant banner above the fold, and then four small squares with another ad in the stream of content itself.

Wouldn’t it be nice if you could have your cake and eat it, too? Allow the user to consume content on his or her terms, while also allowing the content producer to make money?

Here’s an idea I’ve been working on in my own time. It’s a little technical, but bear with me:

  1. Add a simple social layer to the web. I still like the idea of the HTTP header I described in httpID. Your site may connect to my site with a mechanism like OpenID Connect and get an authentication token automatically. Think of it like a one-way friend request. Of course, I can then reciprocate by connecting to your site to create a two-way relationship.
  2. Add authentication to feeds. Each feed has just one URL. An aggregator may sign the request for a feed with an OAuth-like signature. (We’re sidestepping HTTP digest auth for obvious reasons.) The software producing the feed may choose to acknowledge the signature, or not; by default, you get all the public posts you’d normally get when accessing a feed. (There’s a rough sketch of this flow after the list.)
  3. Manage connections and restrict access to content. I see everyone who’s connected to me from a control panel, and can reciprocate from there. More importantly, I can add any of my connections to access groups. So if I add you to a group and publish a piece of content so that it is only accessible by that group, when your site requests my feed using a signed request, you’ll see that content.
  4. Optionally: sell access to premium content. Once you can selectively broadcast content to a finite group of people, you can sell access to that group. (And of course, you can have more than one paid-access group.) For example, I’m a subscriber to NSFW, a paid publication with an online presence. They could push all their articles to me as a subscriber, while making a handful of taster articles available to everyone. You could even include a pointer to a subscription URL within that social handshake from part 1. If you decentralize the financial transactions (and why not?), you could even give a small cut to the platform owner.
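
To make the feed-authentication idea concrete, here’s a rough sketch in Python of what steps 2 and 3 might look like. It’s a sketch under stated assumptions, not a spec: the header names and payload format are invented, and I’m assuming the shared secret comes out of the step-1 handshake.

```python
import hashlib
import hmac
import time

def sign_feed_request(feed_url, requester, secret):
    """Aggregator side: build headers for an HMAC-signed feed request."""
    timestamp = str(int(time.time()))
    payload = "|".join([requester, feed_url, timestamp]).encode()
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {
        "X-Feed-Requester": requester,   # who is asking (invented header)
        "X-Feed-Timestamp": timestamp,   # basic replay protection
        "X-Feed-Signature": signature,   # proves possession of the shared secret
    }

def verify_request(feed_url, headers, secrets):
    """Publisher side: return the verified requester, or None for anonymous."""
    requester = headers.get("X-Feed-Requester")
    secret = secrets.get(requester)
    if not requester or not secret:
        return None  # unsigned or unknown: fall back to the public feed
    payload = "|".join([requester, feed_url,
                        headers.get("X-Feed-Timestamp", "")]).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, headers.get("X-Feed-Signature", "")):
        return requester
    return None

def visible_posts(posts, requester, groups):
    """Public posts for everyone; group-restricted posts only for members."""
    allowed = groups.get(requester, set()) if requester else set()
    return [p for p in posts
            if p["access"] == "public" or p["access"] in allowed]
```

A post published to one of the access groups from step 3 would simply carry that group’s name in its access field; an unsigned or unverifiable request falls through to the public posts, which is exactly the default behaviour described above.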

All of the above is complementary to feed standards like RSS and Activity Streams, as well as to federated social web protocols and methodologies like OStatus. It’s super simple to both use and implement – but it could add a layer of commerce to the content web, while also decreasing our dependence on large content silos whose interests are not aligned with those of their customers.

2011: happy new year

January 3, 2011 | 5 comments

[Photo: Sydney Harbour Bridge & Opera House fireworks, New Year’s Eve 2008]

I’m a little late to the party for end-of-year wrapup / start-of-year prediction posts. Instead, I thought I’d write about some of the things I’m looking forward to playing with this year.

Vanquishing piracy through better business

First, though, I do have one prediction: this will be the year that traditional content producers finally get to grips with piracy. They won’t do it using restrictive DRM and other counter-productive tactics that have been shown not to work; instead, they’ll do it by allowing anyone to buy their content in a convenient way.

The BBC is already talking about broadcasting Doctor Who simultaneously in the US and the UK; they are also planning to release their iPlayer on-demand service internationally. Its US counterpart Hulu, meanwhile, is also planning an international release. All of this is a tacit acknowledgement that a great deal of piracy is the direct result of artificially enforced border restrictions, but it’s also a bigger, more general change: the realignment of incumbent media companies around the Internet, instead of treating it as just another conduit. Just in time to save their businesses – maybe.

The year of the tablet?

Last year, the iPad shook everyone up. It’s a great device, which somehow makes computing a more intimate, human experience – I bought one, and it gets far more use than any other computer I own. (This Christmas, it’s got at least a couple of hours every day from Celia playing Angry Birds.) It’s so good that everyone’s prediction posts for 2011 have been colored by it. Wired; Leonard Lin; Technorati; The Times of India; The New York Times; GigaOm; etc etc etc. I’ll be at CES in Las Vegas next week, and I fully expect tablets to dominate the talk of the town. (Most interesting tablet advice I’ve heard lately: buy a Nook Color and root it to turn it into a fully-featured Android tablet. Not bad for $250, if it works.)

After a rocky start with the operating system, I’m looking forward to developing Android apps. Although I’m still not sure what the platform’s developers were thinking in the early years, the 2.2 release was a major one, and the 3.x previews look pretty good. It’s got a very good chance of being as popular as Microsoft Windows for non-PC devices. Either way, the devices are now exciting enough for me to want to kick the tires and play with some new kinds of social interaction.

Here, my obsession with decentralized models continues. I believe that WikiLeaks represents the Internet beginning to fulfill its true potential, but the furor over it illustrates how dangerous building an information outlet or essential service around a single point of failure can be. The web is decentralized; social, content and information applications should follow the platform’s example.

The couch potato is dead; long live the couch potato

But it’s going to go beyond interaction. With the advent of consumer-friendly devices like the iPad, and living room web clients like Google TV, I think we’re going to see more web apps designed for the couch potato set: people who want to sit down and passively consume content after (for example) a hard day’s work. Right now, even products like the Roku require a fair amount of clicking around before you watch something. Nothing quite has the ease-of-use of television – but apps like Flipboard come close.

Just how do you filter the hundreds of millions of content streams the Internet has to offer so that I see just the right thing when I collapse into my armchair at the end of the day? Could channels, one day, be individually curated content streams, with the content itself sold directly by the producer to us? That would make companies like Apple the new Viacoms and Universals, and our friends into our TV Guides; the net result would be a much larger range of content available to us, and a much easier route to market for content producers. I will certainly be playing with this in 2011, from a number of angles.

Getting paid

Ultimately, I think this is the year that analogue content producers – filmmakers, writers, musicians, artists, animators and so on – find a model that really pays for their work online. Once that’s happened, the decentralized, monetized web will be our mainstream source for all content. That means fewer gatekeepers, better content, and a much better information environment for consumers and democracy.

Photo by Hai Linh Truong, released under a Creative Commons license.

User control on the open web

February 21, 2009 | 9 comments

Data portability and the open data movement (“the open web” for simplicity’s sake) revolve around the idea that you should be able to take your data from one service to another without restriction, as well as control who gets to see it and how. Very simply, it’s your data, so you should have the ability to do what you like with it. That means that, for example, if you want to take your WordPress blog posts and import them into MovableType (WordPress’s competitor), you should be able to. Or you should be able to take your activity from Facebook and include it in your personal website, or export your Gmail contacts for backup or transfer to a rival email service.

You can do this on your desktop: for example, you can open a Word document in hundreds of word processors, and Macs will happily talk to Windows machines on a network. Allowing this sort of data transport is good for the web in the same way it’s good for offline software: it forces companies to compete on features rather than on the number of people they can lock into their services. It also ensures that if a service provider goes out of business, a user’s data on that service doesn’t have to disappear with it.

In 2007, before the open web hit most people’s radars, Marc Canter organised the first Data Sharing Summit, a communal discussion between all the major Silicon Valley players, as well as many outside companies who flew in specially to participate (I attended, representing Elgg). One of the major outcomes was the principle of user control: the user owns their data. Marc, Joseph Smarr, Robert Scoble and Michael Arrington co-signed a Bill of Rights for the Social Web which laid these principles out. It wasn’t all roses: most of the large companies present took issue with the Bill of Rights and, as I noted in my write-up for ZDNet at the time, preferred the term “data control” to “data ownership”. The implication was simple: users didn’t own the data they added to those services.

Since then, the open web has been accelerating as both an idea and a practical reality. Initiatives like Chris Saad’s, Marc Canter’s Open Mesh treatise, and useful blunders like Facebook’s recent Terms of Service mis-step have drawn public attention to its importance. Facebook in particular forces you to license your content to them indefinitely, and disables (rather than deletes) your account details when you choose to leave the site. Once you enter something into Facebook, you should assume it’s there forever, no matter what you do. This has been in place for some time to little complaint, but when they overreached with their licensing terms, it made international headlines across the mainstream press: control over your data is now a mainstream issue.

Meanwhile, technology has been improving, and approaches have been consolidating. The Open Stack is a collection of real-world technologies that can be applied to web services today to provide a base level of openness, and new developments are emerging all the time. Chris Messina is leading development around activity streams portability, which will allow you to subscribe to friends on other services and see what they’re up to. The data portability aspect of the open web is rapidly becoming a reality: you will be able to share and copy your data.

Your data will be out there. So, what happens next?

The same emerging open web technologies which allow you to explicitly share your data from one service to another will also allow tools to be constructed cheaply out of functionality provided by more than one provider. Even today, a web tool might have a front end that connects behind the scenes to Google (perhaps for search or positioning information), Amazon (for storage or database facilities), and maybe three other services. This is going to drive innovation over the next few years, but let’s say a user on that conglomerated service wants to delete their account. Can they reliably assume that all the component services will respect their wishes and remove the data as requested?
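
To illustrate how shaky this is, here’s a hypothetical sketch of the best a conglomerated front end could do today. The service URLs and endpoints are invented, and requests is the usual third-party Python HTTP client; in reality each provider would have its own, differing deletion API – which is part of the problem.

```python
import requests  # third-party HTTP client; the endpoints below are invented

# The component services this hypothetical front end was built on.
COMPONENT_SERVICES = [
    "https://storage.example.com/api/users",
    "https://search.example.com/api/users",
    "https://geo.example.com/api/users",
]

def delete_account(user_id, auth_token):
    """Ask every component service to erase a user's data, and record
    which deletions we could actually confirm."""
    results = {}
    for base in COMPONENT_SERVICES:
        try:
            resp = requests.delete(
                "%s/%s" % (base, user_id),
                headers={"Authorization": "Bearer %s" % auth_token},
                timeout=10,
            )
            # 404 is fine: that service never stored anything for this user.
            results[base] = resp.status_code in (200, 204, 404)
        except requests.RequestException:
            results[base] = False  # unreachable, so deletion is unconfirmed
    return results
```

Even in this optimistic sketch the front end can only ask and record the answers; it has no way to verify that the data – or anything derived from it – is actually gone.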

As web tools become more sophisticated, access control also becomes an issue. When you publish on the web, you might not want the entire world to read your content; you could be uploading a document that you’d like to restrict to your company or some other group. How do these access restrictions persist on component services?

One solution could be some kind of licensing, but this veers dangerously close to Digital Rights Management, the hated technology that has crippled most online music services and players for so long and inhibited innovation in the sector. Dare Obasanjo, who works for Microsoft and is usually a good source of intelligent analysis, recently had this to say:

[..] I’ve finally switched over to agreeing that once you’ve shared something it’s out there. The problem with [allowing content to be deleted] is that it is disrespectful of the person(s) you’ve shared the content with. Looking back at the Outlook email recall feature, it actually doesn’t delete a mail if the person has already read it. This is probably for technical reasons but it also has the side effect of not deleting a message from someone’s inbox that they have read and filed away. [..] Outlook has respected an important boundary by not allowing a sender to arbitrarily delete content from a recipient’s inbox with no recourse on the part of the recipient.

The trouble is that many services make money by selling data about you, either directly or indirectly, and they are unlikely to relinquish your data (or information derived from it) without some kind of pressure. I agree with Dare completely on the social level, with content that has been shared explicitly. Certainly, this model has worked very well for email, and people like Plaxo’s John McCrea are hailing the fall of ‘social DRM’. However, content that is shared behind the scenes via APIs, and content that is shared inadvertently when agreeing to perform an action over something like OAuth or OpenID, need to obey a different model.

The only real difference between data shared as a deliberate act and data shared behind the scenes is the user interface. Everyone wants the user to have control over data sharing via a clear user interface. Should they also be able to enforce what’s done with that data once it transfers to a third-party service, or should they trust that the service will do the right thing?

The open web isn’t just for trivial information. It’s one thing to control what happens to my Dopplr information, or my blog posts, or my Flickr photographs. I really don’t mind too much about where those things go, and I’d imagine that most people would agree (although some won’t). Those aren’t, however, the only things the web is being used for: there are support communities for medical disorders, academic resources, bill management services, managed intranets and more out there on the web, and these too will begin to harness the benefits of the open web. All of them need to be careful with their data – some for legal reasons, some for ethical ones. Nonetheless, they could all benefit from being able to share data securely, in a controlled way.

To aid discussion, I propose the following two categories of shared data (a sketch of how a service might represent them follows the list):

  • Explicit shares – information that a user asks specifically to share with another person or service.
    • Atomic objects like blog posts, contacts or messages
    • Collections like activity streams
  • Implicit shares – information that is shared behind the scenes as a result of an explicit share, or to provide some kind of federated functionality.
    • User information or shadow accounts transferred or created as a result of an OpenID or OAuth login
    • User settings
    • User contact details, friend lists, or identifiers
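
If a service wanted to surface this distinction to its users, it could be captured in something as simple as the following sketch (Python; every field name is invented for illustration). The point is that an implicit share becomes a first-class record the user can inspect, rather than something that happens invisibly.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ShareKind(Enum):
    EXPLICIT = "explicit"  # the user asked for this share directly
    IMPLICIT = "implicit"  # a side effect of an explicit share or federated login

@dataclass
class SharedItem:
    kind: ShareKind
    description: str               # shown to the user in plain language
    recipient: str                 # the person or service receiving the data
    expires: Optional[str] = None  # e.g. an ISO 8601 date; None = indefinite
    purposes: List[str] = field(default_factory=list)  # permitted uses

# An explicit share: a blog post pushed to another service.
post = SharedItem(ShareKind.EXPLICIT,
                  "Blog post: 'User control on the open web'",
                  "reader.example.com")

# An implicit share: a shadow account created during an OpenID login.
shadow = SharedItem(ShareKind.IMPLICIT,
                    "Profile name and avatar",
                    "photos.example.com",
                    expires="2009-08-21",
                    purposes=["login"])
```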

For the open web to work, both clearly need to be allowed. At a very base level, though, I think that users need to be made aware of implicit shares in a clear, non-technical way. (OpenID and OAuth both allow the user to grant and revoke access to functionality, but they don’t control what happens to the data once access has been granted – data that has been handed over is likely to be kept.) Services also need to provide a facility for reliably controlling this data. Just as I can Creative Commons license a photograph and allow it to be shared while restricting anyone’s ability to use it for commercial gain, I need to be able to say that services can only use my data for a limited time, or for limited purposes. I’m not calling for DRM, but rather for a published best practice that services would adhere to and publicly declare their adherence to.
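
What might such a best practice look like in machine-readable form? Here’s an invented example – the field names and semantics are mine, not any existing standard – of a usage grant that could travel with a piece of shared data, Creative Commons-style.

```python
import json

# A hypothetical usage grant attached to a piece of shared data.
grant = {
    "subject": "https://alice.example.com/",     # whose data this is
    "granted_to": "https://app.example.com/",    # the receiving service
    "purposes": ["display", "search-indexing"],  # the only permitted uses
    "resale": False,            # may not be sold or passed on
    "expires": "2010-02-21",    # delete or re-request after this date
    "delete_on_request": True,  # honour user-initiated deletion
}

# A conforming service would publish its policy and pass the grant along
# with every onward (implicit) share, so restrictions travel with the data.
print(json.dumps(grant, indent=2))
```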

Without this, the usefulness of the open web will be limited to certain kinds of use cases – which is a shame, because if it’s allowed to reach its full potential, it could provide a new kind of social computing that will almost certainly change the world.