httpID: adding identity to standard HTTP requests

April 19, 2011 | 17 comments

This is a more technical post than I’ve been writing lately. I’m considering splitting the blog out into two channels; let me know if you’d prefer that.

This is a request for comments and ideas. Please let me know what you think in the comments. Thanks!

One of the advantages of the decentralized social web, as opposed to a social network (federated or otherwise), is that identity can, theoretically, be shared with any web page, anywhere. That page doesn’t have to be running any particular software or provide any particular function; it should optionally be able to support identity-related features. That could then be used to tailor the page to the viewing user. (Of course, sharing identity should never be required, for security reasons.) This is part of three broad activities that I see as being part of the social web:

  • Publishing web content in an identity-aware way
  • Consuming web content in an identity-aware way
  • Sharing socially

Much of the decentralized social web development activity to date has been focused on the third point, and on reading and writing as part of a social web application like StatusNet or Diaspora. However, I’d like to look at the first two points with a view to making them web infrastructure, rather than features of a web application.

To achieve this, I’d like to be able to report, as an option, the identity of the person making an HTTP request, as part of the headers to that request. This might come from the browser itself, e.g. via an identity plugin, or it might come from a web-based identity proxy.

HTTP supports basic authentication, which involves sending a username and password, potentially in the clear. Out of necessity, we’ve moved beyond this, e.g. for things like API authentication. Often tokens, hashes and encrypted requests are included as extra header values to authenticate a request.
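
For reference, here’s what basic authentication amounts to on the wire; a minimal Python sketch with illustrative credentials:

```python
import base64

# HTTP Basic authentication base64-encodes "username:password" into a
# request header -- trivially reversible, so effectively sent in the clear.
credentials = base64.b64encode(b"alice:hunter2").decode("ascii")
header = "Authorization: Basic " + credentials
print(header)  # Authorization: Basic YWxpY2U6aHVudGVyMg==
```

Anyone who can observe the request can decode the credentials, which is why token- and signature-based headers have largely replaced this for APIs.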

I’d like to use the same general principle for identifying a user. Here’s how it might work:

  1. The user visits a site for the first time. The browser sends a standard HTTP request. (Or, alternatively, a HEAD request, if the site content isn’t required.)
  2. The site responds as normal, but with an extra HTTP header indicating that it’s identity-aware, including the URL of a handshaking endpoint. This will be ignored by clients that aren’t looking for it.
  3. If this is a standard browsing scenario, the user’s browser asks if he or she would like to share identity information with the site. For the purposes of this example, the user clicks “yes”. (This step can be left out if this isn’t a standard browsing scenario.)
  4. Via the handshaking endpoint from step 2, the user’s browser gives the site a public and private key, and a URL through which it can access the user’s identity information as an XRD file (as in Webfinger). This is the same public and private key system used to sign requests in steps 5 and 6. The site simply makes a signed request to the user’s identity URL, which can be hosted anywhere.
  5. The browser receives public & private keys for use with this server only. These might be stored in the browser, or in some central identity store that all the user’s browsers access.
  6. Whenever the browser makes a request to the server, it adds extra headers using these keys (and HMAC-SHA-1), signing each request with the user’s identity until he or she says otherwise. It also sends a header to indicate when the user’s identity information was last changed, in order to prompt the site into obtaining new information if it needs to.
  7. If the site in step 4 is associated with a specific person (for example, benwerd.com would be associated with Ben Werdmuller), he or she can use the public and private key generated in step 4 to browse the user’s site.
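
The request-signing in step 6 can be sketched as follows. This is a minimal illustration, not a spec: the `X-httpID-*` header names and the exact string being signed are my assumptions, since the post deliberately leaves those details open.

```python
import hashlib
import hmac
import time

# Step 6 sketch: sign each request with the per-site key pair using
# HMAC-SHA-1. The header names and base string here are illustrative.
def sign_request(method, path, public_key, private_key, timestamp=None):
    if timestamp is None:
        timestamp = str(int(time.time()))
    # Bind the signature to the method, path, key identifier and timestamp
    base_string = "\n".join([method, path, public_key, timestamp])
    signature = hmac.new(private_key.encode("utf-8"),
                         base_string.encode("utf-8"),
                         hashlib.sha1).hexdigest()
    return {
        "X-httpID-Key": public_key,       # tells the site which key pair to use
        "X-httpID-Timestamp": timestamp,  # lets the site reject stale requests
        "X-httpID-Signature": signature,  # HMAC-SHA-1 over the base string
    }

headers = sign_request("GET", "/articles/42", "pub-key-123", "per-site-secret")
```

The site then recomputes the same HMAC with its stored copy of the key and compares; if the values match, the request is attributed to that user’s identity.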

The publisher would get a list of users who have identified with the site, and, depending on their server or content management system, might add some of them to special access control groups that would allow access to different content. The next time the user visited the site, they’d see more privileged content. A notification would probably be sent to them to let them know this had happened, but this is out of scope for what I’m discussing here. (Perhaps notification methods could be shared as part of a user’s identity information?)
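
That publisher-side bookkeeping could be as simple as the following sketch; the identity URLs, group names and content strings are all illustrative:

```python
# Publisher-side sketch: visitors who have identified themselves (keyed by
# identity URL) are mapped to access-control groups, with a fallback to the
# public view for everyone else.
access_groups = {
    "https://benwerd.com/": "friends",
}

content_by_group = {
    "public": "The article, as anyone would see it.",
    "friends": "The article, plus the friends-only postscript.",
}

def content_for(identity_url):
    group = access_groups.get(identity_url, "public")
    return content_by_group[group]

print(content_for("https://benwerd.com/"))  # privileged view
print(content_for(None))                    # anonymous visitor: public view
```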

Conversely, the user’s XRD file containing their identity information can also change depending on who’s accessing it (as the requesting site always makes a signed request).
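
A sketch of that identity endpoint, serving a different XRD view per verified requester. The key table, base string and XRD placeholders are assumptions for illustration only:

```python
import hashlib
import hmac

# Keys previously exchanged with each requesting site during the handshake
keys_by_site = {
    "https://example-site.com/": "shared-secret-for-this-site",
}

# Different XRD views: a fuller profile for known sites, a minimal one
# for everyone else (None stands in for "unidentified requester")
xrd_views = {
    "https://example-site.com/": "<XRD><!-- full profile --></XRD>",
    None: "<XRD><!-- minimal public profile --></XRD>",
}

def serve_xrd(site, base_string, signature):
    secret = keys_by_site.get(site)
    if secret is not None:
        expected = hmac.new(secret.encode("utf-8"),
                            base_string.encode("utf-8"),
                            hashlib.sha1).hexdigest()
        if hmac.compare_digest(expected, signature):
            return xrd_views[site]  # verified: serve the tailored view
    return xrd_views[None]  # unknown or unverifiable requester: public view
```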

This system has a number of advantages:

  • It’s server and system agnostic. It simply uses the building blocks of the web.
  • It’s very easy to build for. Checking and setting HTTP headers are easy to do, and don’t require any front-end work like HTML parsing or JavaScript libraries. This makes it usable for APIs and feeds as well as web pages, and for clients that use web APIs as well as web browsers.
  • The web isn’t just a platform for people to read these days. This method doesn’t depend on anything visual.
  • You don’t need to control the root of a domain to make it work. If you install a script at http://yourdomain/~foobar/banana/hockeystick.php, the system will be happy there too.
  • It’s passive. There are no blockers if you don’t supply identity information – you just see something different.
  • It’s based on similar assumptions to WebID, but doesn’t require SSL certificates in the browser, and it’s as easy for a web app to implement as it is for browser software.

It incorporates the following assumptions:

  • Relationships are asymmetrical. (Here, there’s a set of keys for each side of a relationship. If one side stops participating, perhaps by removing the other from an access control group, the other side is still valid.)
  • Privacy isn’t binary. (Everyone gets a different view on a given page or piece of data.)

Let’s call it httpID. I’m looking for feedback on the idea and process. Does it make sense? Have I missed something obvious? Let me know. If there are no major blockers, I’ll firm up the spec and create some libraries.

17 Comments

  1. It looks like a neat kind of system, but the generation of key pairs seems odd to me. First of all, if they are generated server-side (which seems to be the case, if I’m reading this right) then it will be imperative that TLS is in use so that the private key isn’t revealed to anyone in the middle. This detracts a little from the simplicity.

    Secondly, is it really necessary to have a new key pair for every site, and to have external hosting of identity information? It seems to me that similar authentication could be achieved by having a single private key in all the user’s browsers, with a matching public key that is sent as part of the request to uniquely identify the user. This way no private key material is passed in cleartext. Users would still be vulnerable to identity substitution by an attacker with the ability to modify traffic, but an observer could do no harm.

    If the user is concerned about revealing the same identity across multiple sites, they are perfectly welcome to ask their browser to generate a new key pair for each site, but they will then have the added burden of extra keys to synchronize across devices and browsers accordingly.

    What do you reckon?

    Tom K April 19, 2011 (12:42 pm)
  2. Having the single key pair in the user’s browsers would be vastly simpler, and was the original model I had thought about. But it’s important here to be able to assign different trust levels to each person / site (i.e., each relationship on the social web has its own key pair), and it’s also important to be able to verify that the user is who they say they are. I wasn’t sure how to lay this out with a single-key solution – but I’d love to hear suggestions!

    Ben Werdmuller April 19, 2011 (1:00 pm)
  3. I’ve made a small addition to the post that includes the assumptions I’ve made about relationship asymmetry and privacy being non-binary, which may clarify some of my other decisions.

    Ben Werdmuller April 19, 2011 (1:33 pm)
  4. I like the key exchange stuff (“Hi, I’m identity-aware, would you like to identify yourself?”) and the connection endpoints, but there are two bits missing:

    1. People need names. If they don’t know their names, they’re screwed. This approach makes the browser deal with the user’s identity, instead of the site. Moreover, it doesn’t address the fact that [decentralized] online identity is useless unless people can share their identities with each other.

    2. A persistent problem I have with the browser-based identity approaches is that I use a whole bunch of different browsers, and not all of them (a) support these sorts of identity bits nor (b) do I want to distribute private keys to them. I’m convinced that a server-to-server model (where servers might be very edge-riding entities, like browsers or plug machines) is the right way to go here; the browser is less and less the place I store my stuff.

    There’s also the fact that getting a common approach to identity crypto (even non-SSL) into all browsers is going to be very, very hard. Getting ubiquitous webfinger profiles is similarly hard, but we only need it for domains that people use as identifiers, and [trusted] services like Gravatar or about.me can be used as proxies (in fact, SSL signing authorities behave in exactly this way).

    [clicking "notify me of followup comments via e-mail – this matters! :-) ]

    Blaine Cook April 19, 2011 (2:11 pm)
  5. Blaine: 1. is a fair point. I was considering sending identity info, including names, as part of the handshake, but was worried about the verifiability. What’s your view on this?

    2. I agree – although I’ve been very widely disagreed with when I’ve put that opinion out there – but I also think that this approach could work equally well with a server-to-server system?

    Ben Werdmuller April 19, 2011 (2:14 pm)
  6. I’d like to see more details about the various public/private keys being passed around here, and what they’re associated with.

    Also – how would this deal with pseudonymity? If I have multiple different identities would I nominate one per server I dealt with? How would I log out/in to change identity?

    Andrew Ducker April 19, 2011 (2:26 pm)
  7. Blaine: also, to answer your point about exchangeable identities, the user could simply pass around the URL for an identity-aware page that they own. For example, I could pass around benwerd.com on my business card, as I already do, and it just so happens that it’s identity aware for people who want to connect with me using that mechanism.

    Ben Werdmuller April 19, 2011 (2:29 pm)
  8. Mobile at the moment, so I’ll be terse: how would this work in the case where you let someone borrow your browser temporarily?

    Dave Ingram April 19, 2011 (2:29 pm)
  9. Andrew: pseudonymity is built in. For one thing, the XRD file can display different things depending on who’s accessing it through this mechanism. But for another, you could build an interface so that you have different interfaces built into the browser or client app, and depending on user choice, a completely different XRD file could be served, representing an entirely different identity.

    Ben Werdmuller April 19, 2011 (2:33 pm)
  10. Dave: if it was built into the browser, you’d need some kind of log out / switch user functionality. Not ideal. I actually started out by disagreeing with identity in the browser, in this conversation with Evan Prodromou, but came round to looking at it as an interesting idea. Of course, all of this depends on your definition of “browser”; as I’ve been saying in previous comments, there’s no reason why this mechanism couldn’t work from within a web application rather than as part of the browser framework itself.

    Ben Werdmuller April 19, 2011 (2:35 pm)
  11. Ben: re: “the user could simply pass around the URL for an identity-aware page that they own”

    They could, but they won’t. ;-) This statement is basically the whole motivation for webfinger. URLs are against understanding. Developers don’t understand URLs, and regular people aren’t going to buy their own domains. If they don’t buy their own domains, they’re going to end up with confusing (and dangerous) URLs.

    More to the point: OpenID failed because of its dependence on URLs.

    Re: server-to-server, I agree that this approach could work in a server-to-server way, but I’d argue that the approach I described at our SXSW talk is conceptually simpler and easier to implement. The approach you’ve described here, in an S2S scenario, is quite similar to using Salmon to authenticate HTTP requests, which is a bit of a fraught direction, because requests need to be normalised to be signed.

    Re: identity in the browser, I agree that it’s a fantastic idea to have identity aware browsers, and having a “sign-in / sign-out” mechanism for the browser would be great. The bit I’m pushing back on is the idea that your browser should be the centre of that identity. As long as any approach to browser-mediated identity is external to the browser in the degenerate case (i.e., when the browser *doesn’t* provide identity features) then I’m happy. ;-)

    Blaine Cook April 19, 2011 (2:47 pm)
  12. Blaine: all of that makes sense :) And the OpenID example is very much taken on board, although I think this is slightly different in that nobody is actually being asked to log in with them. (Suddenly doubt is tugging at me: they might be asked to auth using a URL for sites that use this mechanism, which isn’t intentional, but …)

    I said in our chat before SXSW that I really like your webfinger-centric approach, and that remains true. I was trying to avoid having to control the root of a domain with this one, as well as riff on the search-centric idea I put out in my portion of the SXSW panel (where identifiers aren’t as important as the fact that identities exist and are generally readable). The first may not be a big requirement and the second may well be a red herring.

    The identity itself is stored elsewhere, and the browser is more of a conduit than the actual identity provider. In point 5 I did say “These might be stored in the browser, or in some central identity store that all the user’s browsers access.”, but perhaps the browser storage bit should be removed.

    Ben Werdmuller April 19, 2011 (2:56 pm)
  13. Ben: ahh, you’re right, I hadn’t fully read point 5, but yeah, it’s probably clearest to omit browser storage (other than for cached credentials, obviously). Also, take a look at https://github.com/hueniverse/draft-hammer-http-mac, which isn’t the same thing at all, but might be useful.

    Blaine Cook April 19, 2011 (3:09 pm)
  14. Thanks everyone for your incisive, helpful comments. I’m going to continue playing with this idea for a while, and will probably write a follow-up post which incorporates the concerns and suggestions you’ve raised. In the meantime, please keep ‘em coming :)

    Ben Werdmuller April 19, 2011 (3:10 pm)
  15. Blaine: I hadn’t seen that – thanks for pointing it my way. It looks useful and I’ll give it a read through.

    Ben Werdmuller April 19, 2011 (3:12 pm)
  16. In my opinion it’s important to maintain the distinction between “concept” and “implementation”.
    WebID is the concept (a way to uniquely identify things on the web). FOAF+SSL is certainly an elegant approach to identifying and authenticating users on the web. The use of SSL is important for securing communications against certain threats (MITM: identity theft, replay attacks, …)! FOAF is not mandatory: inside the X.509 certificate, we could indicate a webfinger URI (XRD+SSL) rather than an HTTP URI (referring to a FOAF file), obtaining the same outcome. So let’s call it WebID (it’s just another implementation)!

    Secondly, I think it’s important to develop a strong solution for identifying users and, at the same time, not repeat the “mistakes” of the past (see OAuth 1.0 vs OAuth 2.0). Signing every request using a shared secret entails the same issues and complexity experienced with OAuth 1.0 (e.g. handling nonces and timestamps server-side). OAuth 2.0 introduces SSL to tear down this complexity.
    SSL is vital, but unfortunately it isn’t widely adopted (moreover, it introduces some complications for the mobile web on limited devices, e.g. smartphones).
    I don’t understand why a browser (client) and a server should exchange a key pair between them. The aim here is to identify the user and consequently provide the right information depending on the authentication supplied. If SSL isn’t available, we could use other mechanisms to exchange a shared secret with the server (e.g. Diffie-Hellman), which would then be used to sign requests to the user’s Identity Provider (if I’ve understood your proposal).
    How do we synchronize every generated secret with the Identity Provider? Should the server also sign the responses with the key pair provided by the client? If so, it would introduce meaningful load/latency in the communication (so let’s use XRD+SSL or FOAF+SSL or X+SSL).
    Furthermore, why do you talk about “symmetric relationships”?
    The relationship here is one-sided: a client (web browser, user A) visits a remote profile/wall/site (user B). The other side of the relationship would be the other client (browser, user B) visiting the remote server of user A.

    Thanks for any replies.

    Andrea Messina April 19, 2011 (4:48 pm)
  17. Ben,

    Your idea sounds interesting. How would this work if I’m using many devices? I use at least three devices (mobile, tablet and desktop/laptop), or more, on a regular basis, and access the web on all of them. I’m sure some folks use more than three devices.

    Synchronization with a cloud service is a possibility, but not something that is reliable.

    Ramesh Nethi April 22, 2011 (5:34 pm)
