> The reason Level 3 HATEOAS REST is hard in Backbone and most other frameworks -- both frontend and backend -- is because they have yet to obtain the same enlightenment.
Specifically, your argument rests on the vague and indefinable quality of "enlightenment." I think this use of "enlightenment" means something along the lines of "I got something to work within a thought framework that appeals to me." But if you had started with a different framework (like RPC), you could have gotten that to work too, and perhaps would be feeling the "RPC enlightenment," because all of these problems can be solved within RPC too.
No one is arguing that HTTP-based APIs can only do CRUD. You can tunnel basically anything over HTTP, because ultimately the payload is just bytes. But the question is what using HTTP (and/or REST) has actually bought you, compared to competing approaches.
Graydon Hoare (one of the original authors of Rust) wrote an essay about this a while ago that I really love. Unfortunately he took it off the net, but it's still available on the wayback machine: http://web.archive.org/web/20040308200432/http://www.venge.n...
The essay is written in a very whimsical and wandering style, but the point is dead on:
If you want to store or transmit a message, you can do
cryptography, steganography, normalized database tables,
huffman codes, elliptic curves, web pages, morse code,
semaphores, java bytecode, bit-mapped images, wavelet
coefficients, s-expressions, basically anything you can
possibly dream up which codes for some bits. In all cases,
if you're coding bits and you are using a lossless system,
the *only* thing which matters is how *convenient* the encoding
is. There's nothing which makes one encoding "do it better"
than any other, aside from various external measurements of
convenience such as size, speed of encoding, speed of
decoding, hand-editability, self-consistency, commonality,
etc.
HTTP/REST and XML inhabit slightly different design spaces, so for HTTP/REST some of the external measurements might be a bit different. But if you are trying to argue that REST is better than competing technologies, the question is: what objective benefits does it offer compared to those other technologies? Clearly there are benefits to REST/HTTP (being able to use a web browser as an ad-hoc UI in many cases is one), but the idea of "enlightenment" should not be a substitute for an actual compare/contrast of approaches.
So yes, of course the author could find a solution to his problems that still used HTTP. The question is: once he does this, is he better off than if he had chosen a competing technology? (imagining a world where you aren't effectively forced to use HTTP to make it through firewalls).
Strangely, very few people (at least that I've spoken to) who are fans of REST seem to have even picked up on the HATEOAS principles. I'm guessing that their fandom is more predicated on a hatred of SOAP/XML and love of the simplicity of JSON -- which is kind of strange, since HATEOAS also works beautifully with an XML Content-Type over HTTP.
EDIT: An attempt to formalize the Richardson Maturity Model (L3) for REST+JSON/HTTP can be found at: http://stateless.co/hal_specification.html (I'm not affiliated with it in any way, just thought it was interesting.)
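For the curious, a HAL document is just JSON with a reserved `_links` (and optionally `_embedded`) key carrying the hypermedia controls. A minimal sketch, with the resource and its URIs invented purely for illustration:

```python
import json

# A hypothetical HAL (Hypertext Application Language) representation of an
# order resource. "_links" holds the hypermedia controls; everything else
# is ordinary resource state. All URIs here are made up.
order = {
    "_links": {
        "self":     {"href": "/orders/523"},
        "customer": {"href": "/customers/77"},
        "next":     {"href": "/orders/524"},
    },
    "total": 30.00,
    "currency": "USD",
    "status": "shipped",
}

body = json.dumps(order)

# A HATEOAS-aware client follows link relations instead of constructing
# URIs itself -- the server stays free to restructure its URI space.
customer_uri = order["_links"]["customer"]["href"]
```

The point of the `_links` section is exactly the L3/HATEOAS property: the client discovers where it can go next from the representation itself.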
REST isn't a technology, it's an architectural style. Regarding its benefits, they are all well specified in Fielding's thesis. REST is composed of a few constraints, and abiding by each one gives you certain benefits.
That said, I agree with you vis-à-vis the need to avoid blinders. In my opinion, there's nothing wrong with using RPC if that's the most adequate solution. What bothers me are all those new RPC-over-HTTP APIs which people insist on calling RESTful.
HTTP's ubiquity, for better or worse, makes it about the only option for any web-based tool where control from end to end isn't a practical option. Better protocols can be made, better hardware and software can be made to utilize them, and some have been, but their reach is very limited for now. Understanding the thing we're stuck with is pretty important. Luckily, countless man-hours have gone into studying the protocol and building the tools that take advantage of it. Much of that information has been shared openly. I think that's a pretty big advantage over anything else.
I personally have found zen in applying simpler concepts to software development: composition over inheritance in my API design, mixing in aspects like content negotiation or caching only when those complexities become necessary; separation of concerns, making sure endpoints don't do too much, and distinguishing concerns from technology [1]; really thinking about the notion of simplicity as described by Rich Hickey in Simple Made Easy [2]. Or "There are only two hard problems in Computer Science: cache invalidation and naming things" -- putting off caching until an endpoint becomes a problem, and not worrying whether my URL structure is RESTful.
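One way to picture "mixing in" content negotiation as a separate concern: the endpoint just returns data, and a wrapper decides the wire format from the Accept header. A minimal sketch; the handler, decorator, and request shape are all invented for illustration:

```python
import json

# Content negotiation composed around a handler, instead of baked into it.
# The handler returns a plain dict; the wrapper picks the representation.
def negotiated(handler):
    def wrapper(request):
        data = handler(request)
        accept = request.get("Accept", "application/json")
        if "application/json" in accept or accept == "*/*":
            return "application/json", json.dumps(data)
        if "text/plain" in accept:
            return "text/plain", "\n".join(f"{k}: {v}" for k, v in data.items())
        return None, None  # caller would turn this into 406 Not Acceptable
    return wrapper

@negotiated
def get_status(request):
    # The endpoint knows nothing about media types -- that's the point.
    return {"service": "api", "healthy": True}

content_type, body = get_status({"Accept": "application/json"})
```

The same handler serves both representations, so adding a new media type touches one wrapper rather than every endpoint.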
Here's an example of an API that I find beautiful [3].
[1] https://www.youtube.com/watch?v=x7cQ3mrcKaY [2] http://www.infoq.com/presentations/Simple-Made-Easy [3] https://mandrillapp.com/api/docs/
I dunno -- REST maps to CRUD pretty well, but it's pretty limited if you think of it as restricted to CRUD operations against the equivalent of base tables in whatever your datastore of choice is. It's more like CRUD operations against views, with arbitrarily complex rules mapping operations on the views to operations against base tables...
REST lacks the ability to relay full state-change semantics without hackery. As the article pointed out, it forces you to be extra chatty over HTTP, which is far from free over the congested, global network that is the interwebs.
For example, what if a single PUT request creates multiple sub-objects? How does my server reply with multiple Location headers? Do I have to first re-get the created object's child locations and re-request each individually?
How about just sending the full object state back as the response to my PUT request? Well, according to REST, the body of the response just needs to be a description of the error or success status. Basically, REST sucks for reducing round trips if you're going to follow it pedantically. The theory is sound, but it needs to be updated to dictate how the server can send back more detailed info in response to POST/PUT/PATCH requests.
/rant
If a PUT, POST, or PATCH does that, then it does that. So what?
> How does my server reply with multiple location headers?
It doesn't respond with multiple Location headers. With PUT or POST, it responds with a 201 Created status, a Location: header containing the URI of the parent object (the resource directly created by the PUT/POST), and an entity body, in some appropriate format, containing the URIs of all the created objects. (If there is a representation of the main resource that would do this and is acceptable to the client, that would seem to be an ideal way of communicating it.)
With PATCH, much the same, but with 200 OK.
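A minimal sketch of the response shape described above -- the parent's URI in the Location header, with the entity body enumerating everything that was created. The helper function and URIs are invented for illustration:

```python
import json

# Build a 201 Created response for a POST/PUT that created a parent plus
# sub-objects: Location points at the parent; the body lists every new URI.
def created_response(parent_uri, child_uris):
    body = json.dumps({
        "_links": {
            "self": {"href": parent_uri},
            "children": [{"href": u} for u in child_uris],
        }
    })
    headers = {
        "Location": parent_uri,
        "Content-Type": "application/json",
    }
    return 201, headers, body

status, headers, body = created_response(
    "/orders/523",
    ["/orders/523/items/1", "/orders/523/items/2"],
)
```

A client that gets this back needs no follow-up round trips to discover the children's locations.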
> Do i have to first re-get the created object's child locations and re-request each individually?
There's no reason for a REST API to require that.
> How about just sending the full object state back as the response to my PUT request?
I see no reason why sending a full resource representation in a 201 response is problematic, provided the client has indicated that it is willing to accept the relevant media type -- especially if it does what the HTTP/1.1 spec says the 201 response entity should do. (The one thing common resource representations might not do, but should, is provide their own location and the locations of any embedded subentities that are separately addressable. There is no reason a resource representation couldn't do that, and I'd argue that in REST it would be desirable for resource representations to do so.)
> Well, according to REST, the body of the response just needs to be a description of the error or success status.
As stated, that's accurate, but you seem to be acting as if it said "needs to be just" rather than "just needs to be" (i.e., misunderstanding it as the maximum allowed content rather than the minimum required content).
> Basically, REST sucks for reducing round trips if you're going to follow it pedantically.
Except that none of the problems you've pointed to in that regard have anything to do with REST.
You could actually go a step further: if you're using "Hypertext as the engine of application state" (HATEOAS), the URIs for the locations of direct subentities need to be there -- assuming the client is expected to be able to make those state transitions -- for the API to be "fully RESTful". (Though I'd personally agree with the article that features like discoverability and content negotiation are secondary to those you get from idempotence and properly using HTTP methods, and feel they should be considered bonus features rather than strict requirements.)
That is what I do, and I don't see anything wrong with it. PUT effectively replaces the state of the resource, so you get it back right away: the latest state, which should match what was sent in the request, barring a conflict (for example, if multiple clients write concurrently or you use some incrementing revision id scheme).
> For example, what if a single PUT request creates multiple sub-objects?
I'd use POST for creating/adding objects. I think of PUT as idempotent and, as I mentioned above, use it to replace the state of the resource. So if this one POST ends up creating multiple sub-objects, you can return a JSON object that encapsulates URIs to those new objects.
Remember, resources don't have to map to internal objects, db rows, or other such things. The same objects or db rows can be represented in or used by different resources. If you have this kind of transactional interface, it may make sense to have an explicit transaction resource that is in charge of managing the transaction (where multiple things happen, and everything becomes very explicit).
This isn't strictly true:
If the target resource does not have a current representation and the PUT successfully creates one, then the origin server MUST inform the user agent by sending a 201 (Created) response. If the target resource does have a current representation and that representation is successfully modified in accordance with the state of the enclosed representation, then the origin server MUST send either a 200 (OK) or a 204 (No Content) response to indicate successful completion of the request. [0]
Both the 200 and 201 payloads may include anything you want, including a giant representation with many constituent parts (which parts would ideally each contain a link to its respective associated resource).
By merging children into the parent resource, he traded separation of concerns for a reduction in the number of requests.
How about instead adding a layer whose only concern is reducing the number of requests for the client? It would collect and merge the info from the parent and child resources and return it to the client in merged form.
http://thenextweb.com/dd/2013/12/17/future-api-design-orches...
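The orchestration idea can be sketched as follows: the backend keeps parent and child endpoints separate, and a thin layer fans out to them and hands the client one merged document. The resource table and `fetch` helper are invented stand-ins for real HTTP calls:

```python
# A toy "orchestration layer": backend resources stay separate; this layer
# fans out, embeds the children into the parent, and returns one document.
RESOURCES = {
    "/questions/1": {"title": "Best editor?",
                     "options": ["/options/10", "/options/11"]},
    "/options/10": {"text": "vim"},
    "/options/11": {"text": "emacs"},
}

def fetch(uri):
    return RESOURCES[uri]  # in reality: an HTTP GET, ideally issued in parallel

def orchestrate(parent_uri):
    parent = dict(fetch(parent_uri))  # copy; backend resource stays untouched
    parent["options"] = [fetch(u) for u in parent["options"]]
    return parent

merged = orchestrate("/questions/1")
```

The client makes one request; the server-side fan-out can be parallel and sits behind a single concern, so the backend's separation of concerns survives.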
There's keep-alive, but that still means that if you want to get a parent and its children, you have to wait for the parent request's round trip before you can even ask for the children.
To add - this is a major issue we've been working through at Slant.co - we've combined child objects into the requests for the parent objects (sometimes even two levels down), but it's significantly complicated caching efforts - we're in the process of building some namespacing into our server-side request cache, so we can transparently make cached Question and Option pages stale whenever, e.g., the title of a Pro/Con gets changed. If HTTP allowed multiple responses, we could treat everything transparently server-side as individual requests, and get much higher hit-rates for proxy and client-side caches.
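The namespacing idea described above can be sketched as a reverse index: every cached merged page records which resources it was built from, so a change to one child invalidates every parent page embedding it. The data structures and keys here are invented for illustration, not Slant's actual implementation:

```python
# Namespaced cache invalidation: track, per resource, which cached pages
# were built from it, so editing a child can stale all merged parents.
cache = {}    # page_key -> rendered body
members = {}  # resource_id -> set of page_keys built from it

def cache_page(page_key, body, resource_ids):
    cache[page_key] = body
    for rid in resource_ids:
        members.setdefault(rid, set()).add(page_key)

def invalidate(resource_id):
    # Drop every cached page that embedded this resource.
    for page_key in members.pop(resource_id, set()):
        cache.pop(page_key, None)

cache_page("question:1", "<merged page>",
           ["question:1", "option:10", "option:11"])
invalidate("option:10")  # editing one Pro/Con makes the merged page stale
```

With per-resource responses (as SPDY/HTTP/2.0 push would allow), none of this bookkeeping would be needed -- each child could be cached and invalidated independently.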
That seems to be a pretty significant feature of SPDY and the in-progress HTTP/2.0 work.
[0] http://jsonapi.org/format/#document-structure-compound-docum...
"All problems in computer science can be solved by another level of indirection" - David Wheeler
"...except for the problem of too many layers of indirection." - Kevlin Henney
It's honestly amazing to work with, as we can be very strict about our separation of concerns on the backend, while letting the frontend combine bits and pieces as makes sense for a given client interface.
We do have a feed (https://zapier.com/engineering/feeds/latest/), but I think we need to add a meta tag for NewsBlur/readers to pick up on it. We'll do this!
We use CSRF protection across all POSTs/PUTs, so cookies are generally required. I'll look into removing this for certain forms (like blog subscribe -- fairly safe, methinks!).
Thanks again!
I'm not certain what could have caused the cookie problem, since I've just gotten that same error message on a different device, neither of which actually have cookies turned off. (For instance, my HN cookies seem to be working fine...)
If you can relax the decoupling or independent evolution constraints, RPC over HTTP is usually easier to understand and implement. This is where most HTTP APIs fall. (…and that’s OK)