The Progressive (Profitable) Web

April 2, 2013

Ryan Holiday laments the loss of Google Reader and RSS in general in Our Regressive Web, arguing that if someone came up with them today, we’d think they were brilliant ideas:

Nothing better has risen up to replace them. The underlying needs of a fairly large user base (that these services meet) still exist.

We’re just regressing.

[...] RSS is impervious to blogging’s worst, but most profitable traits. [...] No wonder nobody ever pushed for widespread adoption. Of course it died a slow death—along with Google Alerts and Delicious. Their mission is antithetical to the ethos of our new media age. Where noise, chatter and pushing—not pulling—rule the day.

Our Regressive Web by Ryan Holiday, on Medium

He’s right. Aggregated content – content on the reader’s terms – has a huge potential userbase, but it was never profitable for the bloggers or the aggregators, so it languished. Sure, you could tack some Google Ads onto the end of each post in a feed, but control over how the content is presented belongs entirely to the reader. Where’s the opportunity to upsell? Where are the branding opportunities or the baked-in communities, carefully designed to maximize ongoing engagement?

The irony is that blogs have actually scaled back their on-page advertising over time. Visit TechCrunch today and you’ll see only two ads above the fold. Check out io9, and you’ll see none at all. The redesigned ReadWrite has a few more: a giant banner above the fold, then four small squares, with another ad in the stream of content itself.

Wouldn’t it be nice if you could have your cake and eat it, too? Allow the user to consume content on his or her terms, while also allowing the content producer to make money?

Here’s an idea I’ve been working on in my own time. It’s a little technical, but bear with me:

  1. Add a simple social layer to the web. I still like the idea of the HTTP header I described in httpID. Your site may connect to my site with a mechanism like OpenID Connect and get an authentication token automatically. Think of it like a one-way friend request. Of course, I can then reciprocate by connecting to your site to create a two-way relationship.
  2. Add authentication to feeds. Each feed has just one URL. An aggregator may sign the request for a feed with an OAuth-like signature. (We’re sidestepping HTTP digest auth for obvious reasons.) The software producing the feed may choose to acknowledge the signature, or not; by default, you get all the public posts you’d normally get when accessing a feed.
  3. Manage connections and restrict access to content. I see everyone who’s connected to me from a control panel, and can reciprocate from there. More importantly, I can add any of my connections to access groups. So if I add you to a group and publish a piece of content so that it is only accessible by that group, when your site requests my feed using a signed request, you’ll see that content.
  4. Optionally: sell access to premium content. Once you can selectively broadcast content to a finite group of people, you can sell access to that group. (And of course, you can have more than one paid-access group.) For example, I’m a subscriber to NSFW, a paid publication with an online presence. They could push all their articles to me as a subscriber, while making a handful of taster articles available to everyone. You could even include a pointer to a subscription URL within that social handshake from part 1. If you decentralize the financial transactions (and why not?), you could even give a small cut to the platform owner.
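
The authenticated-feed idea in step 2 can be sketched in a few lines. This is a minimal illustration, not a spec: the parameter names, the HMAC-SHA256 choice, and `sign_feed_request` itself are all assumptions standing in for a real OAuth-style signing scheme.

```python
import hashlib
import hmac
import time

def sign_feed_request(feed_url, client_id, secret, timestamp=None):
    """Produce OAuth-style request parameters for a feed fetch.

    A server that recognizes client_id can verify the signature and
    include restricted entries; any other server simply ignores the
    extra parameters and serves the public feed, as step 2 describes."""
    timestamp = timestamp or str(int(time.time()))
    base_string = "&".join([feed_url, client_id, timestamp])
    signature = hmac.new(secret.encode(), base_string.encode(),
                         hashlib.sha256).hexdigest()
    return {"client_id": client_id,
            "timestamp": timestamp,
            "signature": signature}

params = sign_feed_request("https://example.com/feed", "aggregator-1", "s3cret")
```

The key property is graceful degradation: the signed request is just extra query parameters, so an unmodified feed endpoint keeps working exactly as it does today.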

All of the above is complementary to feed standards like RSS and Activity Streams, as well as to federated social web protocols and methodologies like OStatus. It’s simple to use and to implement – but it could add a layer of commerce to the content web, while also decreasing our dependence on large content silos whose interests are not aligned with their customers’.

Activity streams: not just for the cloud

April 24, 2012

At the end of last year, I was asked to contribute my wishlist for Linux on the desktop for an issue of Linux Format magazine. Here’s what I submitted:

I want an activity stream for my activity on my local computer, and across my network. When, for example, I make a change to a document, I want my PC to record it on my activity stream as “Ben Werdmuller edited ‘Linux Format wishlist’ in LibreOffice Writer.” By default, those changes are private to me only, but I can set access permissions per file, application, location on disk and type of update (“status update”, “text file”, etc). In a network environment, I can share my activity streams across the network, and see the updates that other network users have allowed me to view. This stream is at an infrastructure data level, so I can choose a number of applications to view it with – although I can easily imagine Ubuntu, for example, shipping a beautiful default app.

Then, I want to be able to program against the activity stream, and the activity streams I can see on my network, using a simple API. This would allow me to sync files, status updates and other things, while not being bound to any one application or utility. It also could provide an interesting underlying basis for social web applications running on Linux servers.
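
To make the API idea concrete, here is a sketch of what one entry in that local stream might look like, loosely following the JSON Activity Streams draft. The field choices and the `local_activity` helper are illustrative assumptions, not a proposal for actual field names.

```python
import datetime
import json

def local_activity(actor, verb, object_title, app, audience=None):
    """Build an Activity Streams-style entry for a local event (a sketch;
    field names loosely follow the JSON Activity Streams draft)."""
    return {
        "actor": {"objectType": "person", "displayName": actor},
        "verb": verb,
        "object": {"objectType": "file", "displayName": object_title},
        "generator": {"displayName": app},
        "published": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Private to me by default; sharing means widening the audience.
        "to": audience or [{"objectType": "person", "id": "acct:me"}],
    }

entry = local_activity("Ben Werdmuller", "update",
                       "Linux Format wishlist", "LibreOffice Writer")
print(json.dumps(entry, indent=2))
```

Because the entry is just structured data at the infrastructure level, any viewer application – or sync tool – can consume it without caring which program generated it.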

This is a little convoluted, so let me explain: I want my activity on my computer, my activity across my enterprise network, and my activity on the web to be saved to a single activity stream that I control. I want to be able to conditionally share and have access to the entire activity stream – and then do stuff with it, using tools like the excellent ifttt.

Consider the following unified stream:

  • Ben Werdmuller saved Technical white paper to Work out tray 3 seconds ago
  • Ben’s mom sent you an email: A little family news 15 minutes ago
  • Ben’s cousin sent you a message: I’m engaged! on Facebook 1 hour ago
  • Your task: Finish technical white paper is due 3 hours ago
  • You were tagged in a photo: ElggCamp San Francisco 2012 on Flickr 4 hours ago

In the example above, the act of saving something to the folder Work out tray could automatically cause it to be uploaded to Basecamp, or emailed to a few people for review. Similarly, my being tagged in a photo on Flickr could cause it to be automatically downloaded into my local Photos folder.
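
That ifttt-style behaviour amounts to matching rules against stream entries. A minimal sketch, assuming the hypothetical entry structure from the stream above (the `verb` and `target` fields are my invention):

```python
def make_rule(verb, target, action):
    """Return a rule that runs `action` when an activity entry matches
    both the verb and the target."""
    def rule(activity):
        if activity.get("verb") == verb and activity.get("target") == target:
            action(activity)
    return rule

uploaded = []
rules = [
    # "saved to Work out tray" → push to a project tool (stubbed here)
    make_rule("save", "Work out tray",
              lambda a: uploaded.append(a["object"])),
]

stream = [
    {"verb": "save", "object": "Technical white paper",
     "target": "Work out tray"},
    {"verb": "tag", "object": "ElggCamp San Francisco 2012",
     "target": "Flickr"},
]
for activity in stream:
    for rule in rules:
        rule(activity)
```

After the loop, only the white paper has been queued for upload; the Flickr entry matched no rule and flowed past untouched.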

Why should my activity stream contain only things that happened on the web? With apps like Google Drive, these separations are arbitrary. What matters is that I did something, not where I did it.

An introduction to Activity Streams

June 22, 2010

I’ve written an introduction to the Activity Streams standard for IBM DeveloperWorks:

Enter Activity Streams, an evolving standard that extends Atom for expressing social objects. Although it is a young standard, Activity Streams is fast becoming the de facto method for syndicating activity between web applications. For example, MySpace, Facebook, and TypePad all now produce Activity Streams XML feeds. But this technology isn’t just for the consumer web environment. As corporate intranets and internal software become more social, solid business reasons support implementing Activity Streams as a feature. This article describes Activity Streams in detail, considers its potential uses in enterprise environments, and provides some examples for interpreting Activity Streams feeds using PHP.

The full article is over here.
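
The article’s examples use PHP, but the same parsing works anywhere. Here is a comparable sketch in Python, against an abbreviated Activity Streams Atom entry (the entry itself is illustrative, though the namespace and verb/object-type URIs are the real ones from the 1.0 spec):

```python
import xml.etree.ElementTree as ET

ACTIVITY_NS = "http://activitystrea.ms/spec/1.0/"

# An abbreviated Activity Streams Atom entry, for illustration only.
entry_xml = """\
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:activity="http://activitystrea.ms/spec/1.0/">
  <title>Ben posted a note</title>
  <activity:verb>http://activitystrea.ms/schema/1.0/post</activity:verb>
  <activity:object>
    <activity:object-type>http://activitystrea.ms/schema/1.0/note</activity:object-type>
  </activity:object>
</entry>"""

entry = ET.fromstring(entry_xml)
# Namespaced lookups: verb and object type are URIs, not plain words.
verb = entry.findtext(f"{{{ACTIVITY_NS}}}verb")
obj_type = entry.find(f"{{{ACTIVITY_NS}}}object").findtext(
    f"{{{ACTIVITY_NS}}}object-type")
```

Because verbs and object types are identified by URI, a consumer can safely ignore types it doesn’t recognize – one reason the format extends cleanly into enterprise settings.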

Direct messaging in a social web architecture

March 31, 2010

This post is the third segment in my series on an architecture for the social web. Previously: How social networks can replace email, which is a non-technical approach to the issues, and my follow-up describing how to build a social web architecture using available technology today.

So what about direct messaging?

In my previous post, I described content notifications in the social web as being Activity Streams updates in response to requests signed with an OAuth key. Each individual contact would have his or her own OAuth key, and the system would adjust delivered content depending on access permissions I had assigned to them.

A private message in this architecture could just be represented as an item of content restricted to a small set of recipients (in the email use case, this is typically just one), with replies delivered using Salmon. The advantage of this approach is that the message doesn’t have to be text; it can be audio, video, a link to live software, or something else entirely.
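
The access-control logic behind this is simple to sketch. Assuming a feed of JSON-style entries where an entry without a recipient list is public (the field names are hypothetical):

```python
def visible_entries(feed, viewer):
    """Filter a feed for one authenticated contact: public entries
    (no recipient list) plus entries addressed to that viewer."""
    return [entry for entry in feed
            if "to" not in entry or viewer in entry["to"]]

feed = [
    {"title": "Public post"},
    {"title": "Lunch on Friday?", "to": ["alice"],
     "attachment": "audio-note.ogg"},  # a message needn't be text
]
# visible_entries(feed, "alice") → both entries
# visible_entries(feed, "bob")   → just the public post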

However, while this is technically feasible, it may not always be desirable. We know from Google Wave, which also pushes the boundaries of person-to-person messaging, that an open definition of what a message contains can get very messy very quickly. Although I was one of the first people to have one, I no longer check my Wave account regularly. I believe this is mostly a user interface issue: Wave is an awesome collaborative document editor (what I’ve heard described as “a massively multiplayer whiteboard”), but not in any way the evolution of email that its development team claimed.

Therefore, I think it’s useful to think about the difference between a document and a message:

  • A message is the body of a communication.
  • A document is a bounded representation of some kind of information.

While in many ways they’re the same, I think it makes sense to make a separation on the UI level. As we’re discussing a decentralized architecture here, some kind of semantic marker in our activity stream feed to mark something as a message would be a useful feature.
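
One way to carry that marker, sketched against a JSON-style activity entry – the "message" object type here is hypothetical, not part of any existing spec:

```python
def is_direct_message(activity):
    """Treat an entry as a message when its object carries the
    (hypothetical) "message" type; everything else renders as a
    document in the UI."""
    return activity.get("object", {}).get("objectType") == "message"

dm = {"verb": "post",
      "object": {"objectType": "message", "content": "Lunch on Friday?"},
      "to": ["acct:ben@example.com"]}
doc = {"verb": "post",
       "object": {"objectType": "article", "displayName": "White paper"}}
```

A client would route the first entry to an inbox-style view and the second to a reader or editor, without the underlying feed format changing at all.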

Messaging “out of the blue”

You know where you are with an email address. Anyone who encounters mine, whether on a website like this one, a business card or a scribbled note on a piece of paper, can send me a message from anywhere in the world. In the 17 years I’ve had an email address, the friendships and business connections I’ve made, and the opportunities I’ve received and developed, through this simple mechanism have been uncountable. It’s also likely to keep working far into the future.

Compared to this, visiting someone’s social web profile and sending them a message from their web presence is a hassle. Compare these steps:

  1. Receive the address of someone’s profile
  2. Click the “follow” button either on the profile itself or on the toolbar of your social web compatible browser
  3. Wait for the contact to follow you back
  4. Send your message


With email, by contrast:

  1. Receive someone’s email address
  2. Send a message to that address

It’s simple, ubiquitous, decentralized and universally compatible. In fact, it seems hard to improve on, doesn’t it?

However, as this is a thought experiment about how social networking can replace email, let’s see if we can simplify this process somewhat. In my previous post, I discussed how a connection could be established with OpenID and OAuth through a web-based interface on a social web profile. How can we make this as simple as emailing someone, and cut out most of the steps I’ve listed above?

Connecting programmatically

I propose two additions to my previously discussed mechanism. The first is to expand the connection protocol to include a message. If someone connects to me on LinkedIn or Facebook, I receive some explanatory text from them, so it makes sense to include this feature in our decentralized social web architecture. It is likely that this would be an added parameter to the OAuth request token procedure.

The second is to allow connections to be made programmatically through a custom application. Just as we use email clients now, a social web client could automatically send a connection request. In keeping with our principle of using existing technology where possible, this is a simple OAuth connection request from the application, which includes a user message as described above. The application knows our details because we’ve set our preferences, so we’re never visibly redirected to a web browser to complete authentication. (In fact, this could take place using xAuth, a version of the OAuth protocol being developed for just these sorts of browser-free use cases.)
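
Concretely, the client’s first call might look like this. The endpoint path, the parameter name `message`, and the helper itself are illustrative assumptions – the `message` parameter in particular is the extension proposed in this post, not part of OAuth:

```python
import urllib.parse

def connection_request_url(node_url, consumer_key, message):
    """Sketch of an OAuth-style request-token call that bundles an
    introductory message with the connection request."""
    params = urllib.parse.urlencode({
        "oauth_consumer_key": consumer_key,
        "message": message,
    })
    return f"{node_url}/oauth/request_token?{params}"

url = connection_request_url("https://example.com", "my-client",
                             "Hi! We met at the conference.")
```

Because the client already holds the user’s preferences, it can fire this request directly – the xAuth-style flow means no browser redirect ever appears.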

Whether we can send a follow-up message now depends on the receiving party. We have our OAuth token, and while it remains valid, the receiving social web node may choose to ignore any follow-up requests.

Our procedure has become:

  1. Obtain address of someone’s social web node (you could even infer it using WebFinger)
  2. Send a message to that node, bundled with a connection request

This is significantly better, and is comparable to the simplicity of email.
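
The address-inference step can itself be sketched. This simplification maps an email-style address straight to a lookup URL; real discovery goes through host-meta or WebFinger’s `.well-known` path, so treat the exact URL shape as an assumption:

```python
def webfinger_url(address):
    """Turn an email-style address into a lookup URL for the owner's
    social web node (simplified WebFinger-style discovery)."""
    user, _, host = address.partition("@")
    if not user or not host:
        raise ValueError("expected user@host")
    return (f"https://{host}/.well-known/webfinger"
            f"?resource=acct:{user}@{host}")
```

The point is that the thing people already hand out – a user@host string – is enough to bootstrap the whole connection, just as it is for email.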

You may be wondering about the wisdom of adding everyone you contact as a connection. In fact, there’s some precedent for this in applications like Gmail. It’s important to note that not every connection need be a friend: in some ways, you can think of your total list of connections as your contact book. Some are important; some can be safely squirreled away until you need to contact them again. In this context – or any context where the people you have a relationship with and the people you’ve contacted are merged into one set – an adequate person management interface, or CRM to you and me, becomes important.

Next, and finally: let’s make our distributed social web architecture reliable enough to use in enterprise environments, using messaging technologies like ZeroMQ and AMQP.
