HTTP signatures

May 6, 2013

It looks like I’m not the only person who likes the idea of signed HTTP requests as an authentication method.

Joyent and Digital Bazaar have co-written an Internet draft for cryptographically signed HTTP requests:

Several web service providers have invented their own schemes for signing HTTP requests, but to date, none have been placed in the public domain as a standard. This document serves that purpose. There are no techniques in this proposal that are novel beyond previous art, however, this aims to be a simple mechanism for signing these requests.

Signed HTTP requests are also a key feature of something I’ve been working on. It’s great to see the idea pick up momentum.
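
To give a flavor of the approach, here’s a rough sketch, in Python, of how a client might sign a request. The keyId, algorithm, headers and signature parameters follow the general shape of the draft’s Signature scheme, but the exact layout, the choice of HMAC-SHA-256 and the covered headers here are illustrative rather than quoted from the spec.

    import base64
    import hashlib
    import hmac
    from email.utils import formatdate

    def signature_params(headers, key_id, secret, signed=("date",)):
        """Build the parameter string for a Signature-style authorization header."""
        # The signing string covers each named header as "name: value", one per line.
        signing_string = "\n".join(f"{name}: {headers[name]}" for name in signed)
        digest = hmac.new(secret, signing_string.encode("utf-8"), hashlib.sha256).digest()
        signature = base64.b64encode(digest).decode("ascii")
        return (f'keyId="{key_id}",algorithm="hmac-sha256",'
                f'headers="{" ".join(signed)}",signature="{signature}"')

    # "my-key" and the shared secret are placeholders a provider would issue.
    headers = {"date": formatdate(usegmt=True)}
    headers["authorization"] = "Signature " + signature_params(
        headers, key_id="my-key", secret=b"shared-secret")
    print(headers)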

The Progressive (Profitable) Web

April 2, 2013

Ryan Holiday laments the loss of Google Reader and RSS in general in Our Regressive Web, arguing that if someone came up with them today, we’d think they were brilliant ideas:

Nothing better has risen up to replace them. The underlying needs of a fairly large user base (that these services meet) still exist.

We’re just regressing.

[...] RSS is impervious to blogging’s worst, but most profitable traits. [...] No wonder nobody ever pushed for widespread adoption. Of course it died a slow death—along with Google Alerts and Delicious. Their mission is antithetical to the ethos of our new media age. Where noise, chatter and pushing—not pulling—rule the day.

Our Regressive Web by Ryan Holiday, on Medium

He’s right. Aggregated content – content on the reader’s terms – has a huge potential user base, but it wasn’t profitable for either the bloggers or the aggregators, so it languished. Sure, you could tack some Google Ads onto the end of each post in a feed, but control over how the content is presented rests entirely with the reader. Where’s the opportunity to upsell? Where are the branding opportunities or the baked-in communities, carefully designed to maximize ongoing engagement?

The irony is that blogs have actually downgraded their on-page advertising over time. If you visit TechCrunch today, you’ll only see two ads above the fold. Check out io9, and you’ll see none at all. The redesigned ReadWrite has a few more: a giant banner above the fold, and then four small squares with another ad in the stream of content itself.

Wouldn’t it be nice if you could have your cake and eat it, too? Allow the user to consume content on his or her terms, while also allowing the content producer to make money?

Here’s an idea I’ve been working on in my own time. It’s a little technical, but bear with me:

  1. Add a simple social layer to the web. I still like the idea of the HTTP header I described in httpID. Your site may connect to my site with a mechanism like OpenID Connect and get an authentication token automatically. Think of it like a one-way friend request. Of course, I can then reciprocate by connecting to your site to create a two-way relationship.
  2. Add authentication to feeds. Each feed has just one URL. An aggregator may sign the request for a feed with an OAuth-like signature. (We’re sidestepping HTTP digest auth for obvious reasons.) The software producing the feed may choose to acknowledge the signature, or not; by default, you get all the public posts you’d normally get when accessing a feed.
  3. Manage connections and restrict access to content. I see everyone who’s connected to me from a control panel, and can reciprocate from there. More importantly, I can add any of my connections to access groups. So if I add you to a group and publish a piece of content that only that group can access, then when your site requests my feed with a signed request, you’ll see that content. (There’s a rough sketch of how this might work after this list.)
  4. Optionally: sell access to premium content. Once you can selectively broadcast content to a finite group of people, you can sell access to that group. (And of course, you can have more than one paid-access group.) For example, I’m a subscriber to NSFW, a paid publication with an online presence. They could push all their articles to me as a subscriber, while making a handful of taster articles available to everyone. You could even include a pointer to a subscription URL within that social handshake from part 1. If you decentralize the financial transactions (and why not?), you could even give a small cut to the platform owner.
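
Here’s a minimal sketch of how steps 2 and 3 might fit together. The header names (X-Feed-Key-Id, X-Feed-Signature), the in-memory key and group stores, and the exact signing scheme are all placeholders for illustration, not part of any existing spec.

    import base64
    import hashlib
    import hmac

    # Publisher-side state: the connections I've accepted and the access groups
    # I've put them in (step 3's control panel, reduced to two dictionaries).
    KNOWN_KEYS = {"alice.example": b"alice-shared-secret"}
    GROUPS = {"alice.example": {"friends"}}

    POSTS = [
        {"title": "Public post", "audience": "public"},
        {"title": "Friends-only post", "audience": "friends"},
    ]

    def sign(secret, message):
        mac = hmac.new(secret, message.encode("utf-8"), hashlib.sha256)
        return base64.b64encode(mac.digest()).decode("ascii")

    def build_feed(request_headers, feed_url):
        """Return the posts this requester is allowed to see."""
        key_id = request_headers.get("X-Feed-Key-Id")
        signature = request_headers.get("X-Feed-Signature")

        visible = {"public"}
        secret = KNOWN_KEYS.get(key_id)
        if secret and signature and hmac.compare_digest(signature, sign(secret, feed_url)):
            # Signature acknowledged: widen the audience to this connection's groups.
            visible |= GROUPS.get(key_id, set())
        # Unsigned or unrecognized requests quietly fall back to public posts only.
        return [post for post in POSTS if post["audience"] in visible]

    # Aggregator side: sign the request for the one canonical feed URL.
    url = "https://example.com/feed"
    request = {"X-Feed-Key-Id": "alice.example",
               "X-Feed-Signature": sign(b"alice-shared-secret", url)}
    print([post["title"] for post in build_feed(request, url)])

The important property is the fallback: a request with no signature, or one the publisher doesn’t recognize, still gets the public feed, so nothing breaks for existing aggregators.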

All of the above is complementary to feed standards like RSS and Activity Streams, as well as to federated social web protocols and methodologies like OStatus. It’s super simple to both use and implement – but it could add a layer of commerce to the content web, while also decreasing our dependence on large content silos whose interests are not in line with those of their customers.

httpID: adding identity to standard HTTP requests

April 19, 2011

This is a more technical post than I’ve been writing lately. I’m considering splitting out into two blog channels; let me know if you’d prefer this.

This is a request for comments and ideas. Please let me know what you think in the comments. Thanks!

One of the advantages of the decentralized social web, as opposed to a social network (federated or otherwise), is that identity can, theoretically, be shared with any web page, anywhere. That page doesn’t have to be running any particular software or provide any particular function; it should simply be able to support identity-related features as an option, which could then be used to tailor the page to the viewing user. (Of course, sharing identity should never be required, for security reasons.) This fits into three broad activities that I see as making up the social web:

  • Publishing web content in an identity-aware way
  • Consuming web content in an identity-aware way
  • Sharing socially

Much of the decentralized social web development activity to date has been focused on the third point, and on reading and writing as part of a social web application like StatusNet or Diaspora. However, I’d like to look at the first two points with a view to making them web infrastructure, rather than features of a web application.

To achieve this, I’d like to be able to report, as an option, the identity of the person making an HTTP request, as part of the headers to that request. This might come from the browser itself, eg via an identity plugin, or it might come from a web-based identity proxy.

HTTP supports basic authentication, which involves sending a username and password, potentially in the clear. Out of necessity, we’ve moved beyond this, eg for things like API authentication, where tokens, hashes and other cryptographic values are often included as extra header values to authenticate a request.

I’d like to use the same general principle for identifying a user. Here’s how it might work:

  1. The user visits a site for the first time. The browser sends a standard HTTP request. (Or, alternatively, a HEAD request if the site content isn’t required.)
  2. The site responds as normal, but with an extra HTTP header indicating that it’s identity-aware, including the URL of a handshaking endpoint. This will be ignored by clients that aren’t looking for it.
  3. If this is a standard browsing scenario, the user’s browser asks if he or she would like to share identity information with the site. For the purposes of this example, the user clicks “yes”. (This step can be left out if this isn’t a standard browsing scenario.)
  4. Via the handshaking endpoint from step 2, the user’s browser gives the site a public and private key, and a URL through which it can access the user’s identity information as an XRD file (as in Webfinger). This is exactly the same public and private key mechanism used to retrieve social information in steps 5 and 6: the site simply makes a signed request to the user’s identity URL, which can be anywhere.
  5. The browser receives public & private keys for use with this server only. These might be stored in the browser, or in some central identity store that all the user’s browsers access.
  6. Whenever the browser makes a request to the server, it adds extra headers using these keys (and HMAC-SHA-1), signing each request with the user’s identity until he or she says otherwise. It also sends a header indicating when the user’s identity information was last changed, to prompt the site into obtaining new information if it needs to. (There’s a rough sketch of these headers after this list.)
  7. If the site in step 4 is associated with a specific person (for example, benwerd.com would be associated with Ben Werdmuller), he or she can use the public and private key generated in step 4 to browse the user’s site.
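
To make the moving parts a little more concrete, here’s one way steps 2 and 6 might look on the wire. Every header name, the signing string and the key format below are placeholders I’ve invented for illustration; the only things taken from the steps above are the handshake advertisement, the per-server key pair and HMAC-SHA-1.

    import base64
    import hashlib
    import hmac
    from email.utils import formatdate

    # Step 2: an identity-aware site includes an extra response header pointing
    # at its handshaking endpoint (header name and URL are placeholders).
    advertisement = {"X-HttpID": "https://site.example/httpid/handshake"}

    # Step 5: the keys the browser received for this server only (dummy values).
    PUBLIC_KEY = "browser-abc123"       # sent in the clear to identify the key pair
    PRIVATE_KEY = b"keep-this-secret"   # never sent; used to sign each request

    def identity_headers(method, path, identity_url, identity_modified):
        """Step 6: the extra headers added to every request to this server."""
        date = formatdate(usegmt=True)
        signing_string = "\n".join([method, path, date, identity_url])
        mac = hmac.new(PRIVATE_KEY, signing_string.encode("utf-8"), hashlib.sha1)
        return {
            "Date": date,
            "X-HttpID-Identity": identity_url,       # where the XRD file lives
            "X-HttpID-Key": PUBLIC_KEY,
            "X-HttpID-Modified": identity_modified,  # hints that cached info is stale
            "X-HttpID-Signature": base64.b64encode(mac.digest()).decode("ascii"),
        }

    print(identity_headers("GET", "/blog/", "https://me.example/identity.xrd",
                           "Mon, 18 Apr 2011 09:00:00 GMT"))

If the receiving site also checks that the signed Date value is recent, a captured request can’t usefully be replayed later.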

The publisher would get a list of users who have identified themselves to the site, and, depending on their server or content management system, might add some of them to special access control groups that allow access to different content. The next time the user visited the site, they’d see more privileged content. A notification would probably be sent to them to let them know this had happened, but that’s out of scope for what I’m discussing here. (Perhaps notification methods could be shared as part of a user’s identity information?)

Conversely, the user’s XRD file containing their identity information can also change depending on who’s accessing it (as the requesting site always makes a signed request).
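
On the identity side, the same kind of check lets the endpoint decide how much to reveal. The key registry, header names and field names below are invented, and the signature covers only the identity URL to keep the sketch short; a real response would be an XRD document rather than a dictionary.

    import hashlib
    import hmac

    REQUESTER_KEYS = {"site-xyz": b"site-xyz-secret"}   # connections I've accepted
    FRIEND_SITES = {"site-xyz"}                         # connections in my "friends" group

    def verified_requester(headers, url):
        """Return the requester's key id if the request signature checks out."""
        key_id = headers.get("X-HttpID-Key")
        secret = REQUESTER_KEYS.get(key_id)
        if not secret:
            return None
        expected = hmac.new(secret, url.encode("utf-8"), hashlib.sha1).hexdigest()
        given = headers.get("X-HttpID-Signature", "")
        return key_id if hmac.compare_digest(expected, given) else None

    def identity_info(headers, url="https://me.example/identity.xrd"):
        """Assemble the identity information, revealing more to trusted requesters."""
        requester = verified_requester(headers, url)
        info = {"name": "Ben", "homepage": "https://me.example"}
        if requester in FRIEND_SITES:
            info["email"] = "ben@example.com"   # only shared with the friends group
        return info

    # An unsigned request sees the public view; a correctly signed one sees more.
    print(identity_info({}))
    print(identity_info({"X-HttpID-Key": "site-xyz",
                         "X-HttpID-Signature": hmac.new(b"site-xyz-secret",
                                                        b"https://me.example/identity.xrd",
                                                        hashlib.sha1).hexdigest()}))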

This system has a number of advantages:

  • It’s server and system agnostic. It simply uses the building blocks of the web.
  • It’s very easy to build for. Checking and setting HTTP headers are easy to do, and don’t require any front-end work like HTML parsing or JavaScript libraries. This makes it usable for APIs and feeds as well as web pages, and for clients that use web APIs as well as web browsers.
  • The web isn’t just a platform for people to read these days. This method doesn’t depend on anything visual.
  • You don’t need to control the root of a domain to make it work. If you install a script at http://yourdomain/~foobar/banana/hockeystick.php, the system will be happy there too.
  • It’s passive. There are no blockers if you don’t supply identity information – you just see something different.
  • It’s based on similar assumptions to WebID, but doesn’t require SSL certificates in the browser, and it’s as easy for a web app to implement as it is for browser software.

It incorporates the following assumptions:

  • Relationships are asymmetrical. (Here, there’s a set of keys for each side of a relationship. If one side stops participating, perhaps by removing the other from an access control group, the other side is still valid.)
  • Privacy isn’t binary. (Everyone gets a different view on a given page or piece of data.)

Let’s call it httpID. I’m looking for feedback on the idea and process. Does it make sense? Have I missed something obvious? Let me know. If there are no major blockers, I’ll firm up the spec and create some libraries.