How useful/important is REST HATEOAS (maturity level 3)?
Richardson's maturity level 3 is valuable and should be adopted. Jørn Wildt has already summarized some advantages, and another answer, by Wilt, complements it very well.
However, Richardson's maturity level 3 is not the same as Fielding's HATEOAS. Richardson's maturity level 3 is only about API design. Fielding's HATEOAS is about API design too, but it additionally prescribes that the client software should not assume that a resource has a specific structure beyond the structure defined by its media type. This requires a very generic client, like a web browser, which has no knowledge of specific websites. Since Roy Fielding coined the term REST and made HATEOAS a requirement for compliance with REST, the question is: do we want to adopt HATEOAS, and if not, can we still call our API RESTful? I think we can. Let me explain.
Suppose we have achieved HATEOAS. The client side of the application is now very generic but, most likely, the user experience is bad, because without any knowledge of the semantics of the resources, the presentation of the resources cannot be tailored to reflect those semantics. If resource 'car' and resource 'house' have the same media type (e.g. application/json), then they will be presented to the user in the same way, for example as a table of properties (name/value pairs).
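To make that concrete, here is a minimal Python sketch of such a fully generic client (the car/house endpoints are hypothetical): it can render any JSON resource, but only as an undifferentiated list of name/value pairs.

import requests

def render_generic(url: str) -> None:
    # The client knows the media type (application/json) but nothing
    # about the semantics of individual resources, so a 'car' and a
    # 'house' get exactly the same presentation.
    resource = requests.get(url).json()
    for name, value in resource.items():
        print(f"{name:<15} {value}")

render_generic("http://localhost:8080/car/1")    # name/value rows
render_generic("http://localhost:8080/house/1")  # identical layout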
But okay, our API is really RESTful.
Now, suppose we build a second client application on top of this API. This second client violates the HATEOAS ideas and has hard-coded information about the resources. It displays a car and a house in different ways.
Can the API still be called RESTful? I think so. It is not the API's fault that one of its clients has violated HATEOAS.
I advise building RESTful APIs, i.e. APIs for which a generic client could be implemented in theory; but in most cases, you need some hard-coded information about resources in your client in order to satisfy the usability requirements. Still, try to hard-code as little as possible, to reduce the dependencies between client and server.
I have included a section on HATEOAS in my REST implementation pattern called JAREST.
Yes, I have had some experience with hypermedia in APIs. Here are some of the benefits:
Explorable API: It may sound trivial, but do not underestimate the power of an explorable API. The ability to browse around the data makes it a lot easier for client developers to build a mental model of the API and its data structures.
Inline documentation: The use of URLs as link relations can point client developers to documentation.
Simple client logic: A client that simply follows URLs instead of constructing them itself should be easier to implement and maintain (see the sketch after this list).
The server takes ownership of URL structures: The use of hypermedia removes the client's hard-coded knowledge of the URL structures used by the server.
Offloading content to other services: Hypermedia is necessary when offloading content to other servers (a CDN, for instance).
Versioning with links: Hypermedia helps with versioning of APIs, since the server can advertise new URLs through links instead of forcing clients to hard-code them.
Multiple implementations of the same service/API: Hypermedia is a necessity when multiple implementations of the same service/API exist. A service could, for instance, be a blog API with resources for adding posts and comments. If the service is specified in terms of link relations instead of hard-coded URLs, then the same service may be instantiated multiple times at different URLs, hosted by different companies, but still be accessible through the same well-defined set of links by a single client.
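As a rough illustration of the "simple client logic" and "server owns the URL structure" points above, here is a minimal Python sketch (the blog API and its link relations are invented for illustration) of a client that discovers URLs through link relations instead of constructing them:

import requests

def follow(resource: dict, rel: str) -> dict:
    # Fetch the resource behind the link with the given relation.
    href = next(link["href"] for link in resource["links"]
                if link["rel"] == rel)
    return requests.get(href).json()

# Only the entry point is hard-coded; every other URL is discovered,
# so the same client works against any implementation that advertises
# the same link relations.
blog = requests.get("https://blog.example.com/api").json()
posts = follow(blog, "posts")        # URL structure chosen by the server
comments = follow(posts, "comments")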
You can find an in-depth explanation of these bullet points here: http://soabits.blogspot.no/2013/12/selling-benefits-of-hypermedia.html
(there is a similar question here: https://softwareengineering.stackexchange.com/questions/149124/what-is-the-benefit-of-hypermedia-hateoas where I have given the same explanation)
Nobody in the REST community says REST is easy. HATEOAS is just one of the aspects that adds difficulty to a REST architecture.
People don't do HATEOAS for all the reasons you suggest: it's difficult. It adds complexity to both the server side and the client (if you actually want to benefit from it).
HOWEVER, billions of people experience the benefits of REST today. Do you know what the "checkout" URL is at Amazon? I don't. Yet I can check out every day. Has that URL changed? I don't know, and I don't care.
Do you know who does care? Anyone who has written an automated client that screen-scrapes Amazon. Someone who has likely painstakingly sniffed web traffic, read HTML pages, and so on, to find which links to call, when, and with what payloads.
And as soon as Amazon changed their internal processes and URL structure, those hard-coded clients failed -- because the links broke.
Yet, the casual web surfers were able to shop all day long with hardly a hitch.
That's REST in action, it's just augmented by the human being that is able to interpret and intuit the text-based interface, recognize a small graphic with a shopping cart, and suss out what that actually means.
Most folks writing software don't do that. Most folks writing automated clients don't care. Most folks find it easier to fix their clients when they break than engineer the application to not break in the first place. Most folks simply don't have enough clients where it matters.
If you're writing an internal API to communicate between two systems with expert tech support and IT on both sides of the traffic, who are able to communicate changes quickly, reliably, and with a schedule of change, then REST buys you nothing. You don't need it, your app isn't big enough, and it's not long-lived enough to matter.
Large sites with large user bases do have this problem. They can't just ask folks to change their client code on a whim when interacting with their systems. The server's development schedule is not the same as the client development schedule. Abrupt changes to the API are simply unacceptable to everyone involved, as it disrupts traffic and operations on both sides.
So, an operation like that would very likely benefit from HATEOAS, as it's easier to version, easier for older clients to migrate, easier to be backward compatible than not.
A client that delegates much of its workflow to the server and acts upon the results is much more robust to server changes than a client that does not.
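In code, the difference looks roughly like this (a hedged sketch; the shop API and its "checkout" link relation are invented for illustration):

import requests

# Brittle screen-scraper style: bakes the server's URL structure into
# the client, and breaks the moment the server reorganizes it.
# receipt = requests.post("https://shop.example.com/checkout/v2/submit")

# Link-driven style: ask the cart resource where checkout lives today.
cart = requests.get("https://shop.example.com/api/cart").json()
checkout_url = next(link["href"] for link in cart["links"]
                    if link["rel"] == "checkout")
receipt = requests.post(checkout_url)  # survives URL reorganizations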
But most folks don't need that flexibility. They're writing server code for 2 or 3 departments, it's all internal use. If it breaks, they fix it, and they've factored that into their normal operations.
Flexibility, whether from REST or anything else, breeds complexity. If you want it simple, and fast, then you don't make it flexible, you "just do it", and be done. As you add abstractions and dereferencing to systems, then stuff gets more difficult, more boilerplate, more code to test.
Much of REST fails the "you're not going to need it" bullet point. Until, of course, you do.
If you need it, then use it, and use it as it's laid out. REST is not shoving stuff back and forth over HTTP. It never has been, it's a much higher level than that.
But when you do need REST, and you do use REST, then HATEOAS is a necessity. It's part of the package and a key to what makes it work at all.
Example: to understand this better, let's look at the response below for retrieving the user with id 123 from the server (http://localhost:8080/user/123):
{
    "name": "John Doe",
    "links": [
        {
            "rel": "self",
            "href": "http://localhost:8080/user/123"
        },
        {
            "rel": "posts",
            "href": "http://localhost:8080/user/123/post"
        },
        {
            "rel": "address",
            "href": "http://localhost:8080/user/123/address"
        }
    ]
}
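A client consuming this response never needs to know how the post URL is built. A minimal Python sketch, assuming the links structure shown above:

import requests

# Fetch the user, then follow the "posts" link by its relation name
# instead of constructing http://localhost:8080/user/123/post by hand.
user = requests.get("http://localhost:8080/user/123").json()
print(user["name"])  # John Doe

posts_href = next(link["href"] for link in user["links"]
                  if link["rel"] == "posts")
posts = requests.get(posts_href).json()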