Does anyone know of an example of a RESTful client that follows the HATEOAS principle?
We've kind of half-done this on our current project. The representations we return are generated from domain objects, and the client can ask for them either in XML, JSON, or XHTML. If it's an XHTML client like Firefox, then a person sees a set of outbound links from the well-known root resource and can browse around to all the other resources. So far, pure HATEOAS, and a great tool for developers.
But we're concerned about performance when the client is a program, not a human using a browser. For our XML and JSON representations we've currently suppressed the generation of the related links, since they triple the representation sizes and thus substantially affect serialization/deserialization, memory usage, and bandwidth. Our other efficiency concern is that with pure HATEOAS, client programs make several times as many HTTP requests, browsing down from the well-known link to the information they need, as they would if the target URIs were known in advance. So it seems best, from an efficiency standpoint, if clients have the knowledge of the links encoded in them.
But doing that means the client must do a lot of string concatenation to form the URIs, which is error-prone and makes it hard to rearrange the resource namespace. Therefore we use a templating system where the client code selects a template and asks it to expand itself from a parameter object. This is a type of form-filling.
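For illustration, here is a minimal sketch of that form-filling style using the .NET UriTemplate class; the template string, parameter names, and base address are invented (our real templates live in the Java client library described in the edit below):

// Hypothetical example: expand a URI template from named parameters
// instead of concatenating strings. UriTemplate is the .NET class from
// System.ServiceModel.Web; the resource names are made up.
var template = new UriTemplate("users/{userId}/orders/{orderId}");
var parameters = new NameValueCollection { { "userId", "42" }, { "orderId", "7" } };
Uri uri = template.BindByName(new Uri("http://api.example.com/"), parameters);
// uri is now http://api.example.com/users/42/orders/7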
I'm really eager to see what others have experienced on this. HATEOAS seems like a good idea aside from the performance aspects.
Edit: Our templates are part of a Java client library we wrote on top of the Restlet framework. The client library handles all the details of HTTP requests/responses, HTTP headers, serialization/deserialization, GZIP encoding, etc. This makes the actual client code quite concise and helps to insulate it from some server-side changes.
Roy Fielding's blog entry about HATEOAS has a very relevant and interesting discussion following it.
So far I have built two clients that access REST services. Both use HATEOAS exclusively. I have had a huge amount of success being able to update server functionality without updating the client.
I use xml:base to enable relative URLs, which reduces the noise in my XML documents. Other than loading images and other static data, I usually only follow links on user request, so the performance overhead of links is not significant for me.
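As a concrete (made-up) example of the xml:base technique, resolving a relative link is just a matter of combining it with the declared base:

// Resolve a relative href against the xml:base declared on the root.
// The document shape and URLs here are invented for illustration.
var doc = XDocument.Parse(
    "<people xml:base='http://example.org/service/'>" +
    "<link rel='next' href='people?page=2'/>" +
    "</people>");
var xmlBase = new Uri(doc.Root.Attribute(XNamespace.Xml + "base").Value);
var href = doc.Root.Element("link").Attribute("href").Value;
var next = new Uri(xmlBase, href); // http://example.org/service/people?page=2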
On the clients, the only common functionality that I have felt the need to create is wrappers around my media types and a class to manage links.
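As a rough sketch (the names are invented, not my actual code), the link class is little more than a target URI plus enough metadata to follow it:

// Hypothetical link wrapper; Follow mirrors the HttpClient.Get calls
// used in the examples below.
public class Link {
    public Uri Target { get; set; }
    public string Relation { get; set; } // e.g. "next", "followers"

    public HttpResponseMessage Follow() {
        return HttpClient.Get(Target.AbsoluteUri);
    }
}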
Update:
There seem to be two distinct ways to deal with REST interfaces from the client's perspective. The first is where the client knows what information it wants and knows which links it needs to traverse to get to that information. The second approach is useful when a human user of the client application controls which links to follow, and the client may not know in advance what media type will be returned from the server. For entertainment value, I call these two types of client the data miner and the dispatcher, respectively.
The Data Miner
For example, imagine for a moment that the Twitter API was actually RESTful and I wanted to write a client that would retrieve the most recent status message of the most recent follower of a particular Twitter user.
Assuming I was using the awesome new Microsoft.Http.HttpClient library, and I had written a few "ReadAs" extension methods to parse the XML coming from the Twitter API, I imagine it would go something like this:
// Start at the well-known root resource; every other URI comes from
// links in the returned representations.
var twitterService = HttpClient.Get("http://api.twitter.com").Content.ReadAsTwitterService();
var userLink = twitterService.GetUserLink("DarrelMiller");
var userPage = HttpClient.Get(userLink).Content.ReadAsTwitterUserPage();
var followersLink = userPage.GetFollowersLink();
var followersPage = HttpClient.Get(followersLink).Content.ReadAsFollowersPage();
var followerUserName = followersPage.FirstFollower.UserName;
// Walk from the follower's own user page to their statuses.
var followerUserLink = twitterService.GetUserLink(followerUserName);
var followerUserPage = HttpClient.Get(followerUserLink).Content.ReadAsTwitterUserPage();
var followerStatuses = HttpClient.Get(followerUserPage.GetStatusesLink()).Content.ReadAsStatusesPage();
var statusMessage = followerStatuses.LastMessage;
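The "ReadAs" extension methods above are hypothetical; one of them might look roughly like this, wrapping the raw XML in a typed media-type wrapper:

// Sketch only: parses the XML body into a typed wrapper that exposes
// the links (GetUserLink etc.) used above. TwitterService is assumed.
public static class TwitterReadAsExtensions {
    public static TwitterService ReadAsTwitterService(this HttpContent content) {
        var doc = XDocument.Parse(content.ReadAsString());
        return new TwitterService(doc);
    }
}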
The Dispatcher
To illustrate this second approach, imagine you are implementing a client that renders genealogy information. The client needs to be capable of showing the tree, drilling down to information about a particular person, and viewing related images. Consider the following code snippet:
// Dispatch on the media type of the response; each controller knows
// how to interpret and render one kind of representation.
void ProcessResponse(HttpResponseMessage response) {
    IResponseController controller;
    switch (response.Content.ContentType) {
        case "application/vnd.MyCompany.FamilyTree+xml":
            controller = new FamilyTreeController(response);
            controller.Execute();
            break;
        case "application/vnd.MyCompany.PersonProfile+xml":
            controller = new PersonProfileController(response);
            controller.Execute();
            break;
        case "image/jpeg":
            controller = new ImageController(response);
            controller.Execute();
            break;
    }
}
The client application can use a completely generic mechanism to follow links and pass the response to this dispatching method. From here the switch statement passes control to a specific controller class that knows how to interpret and render the information based on the media type.
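That generic mechanism can be as small as this (hypothetical sketch):

// Follow whatever link the user activated and let ProcessResponse
// dispatch on the media type, as shown above.
void FollowLink(Uri target) {
    var response = HttpClient.Get(target.AbsoluteUri);
    ProcessResponse(response);
}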
Obviously there are many more pieces to the client application, but these are the ones that correspond to HATEOAS. Feel free to ask me to clarify any points as I have skimmed over many details.
Nokia's Places API (web archive snapshot) is RESTful and uses hypermedia throughout. Its documentation is also explicit in discouraging URI templating and hard-coding:
Usage of Hypermedia Links
Your application must use the hypermedia links exposed in the JSON responses for subsequent requests within a task flow, instead of trying to construct a URI for the next steps via URI templates. By using URI templates, your request would not include critical information that is required to create the response for the next request.
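In practice that means pulling the href out of each response rather than constructing it, along these lines (a sketch using Json.NET; the JSON shape and the responseBody variable are illustrative, not the actual Places API response):

// Follow the link the server supplied rather than building a URI.
var json = JObject.Parse(responseBody);
var next = (string)json["_links"]["next"]["href"];
// Issue the next GET against `next` exactly as given; it already
// carries the context the server needs to build the response.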