I know it might be said in fun (and it's an easy way to get clicks), but it feeds into the narrative of new, shiny tech. As the article points out at the end, there are very real engineering tradeoffs with GraphQL, and the answer isn't as easy as "REST is dead, anachronistic technology that no engineer should consider" (the XML analogy felt particularly inflammatory).
Kelly and I were chatting about GraphQL, and his post is a more thoughtful engineering take: http://kellysutton.com/2017/01/02/do-we-need-graphql.html (the title - Do We Need GraphQL? - is at least the way I'd expect engineers to approach the problem; he discusses the tradeoffs)
In my MongoDB series, I point out a past case where Free Code Camp fed the hype, by telling engineers that the reason everyone had to learn the MEAN stack was due to employability in the software industry: https://medium.freecodecamp.org/the-real-reason-to-learn-the...
(you had to dig into the article to realize that their argument was more nuanced, and that they taught SQL first before MongoDB)
A number of "engineering" posts are not written as a thoughtful engineer might, and are in many ways marketing for the products sold (like a training program or code camp).
I think this is a great insight that is often overlooked in tech marketing. When any vendor comes up with a product that claims "[industry standard] is dead, use [our product]", there should be alarm bells ringing already.
I've seen this going quite a ways back. When .NET came out, I remember college students in my neighborhood (e.g. when hanging out at some tea shop or restaurant) asking me, in a concerned tone (they knew me, and that I was in software), stuff like:
"We hear that now that .NET has come, Java will be dead. Is that right?"
I used to have to disabuse them of such nonsensical notions. Not that Java will live forever, but obviously a mature and widely adopted technology is not going to die off overnight. Such is the hype, though, for the new and shiny.
I'm a big fan of GraphQL and tried to write up a more low-level comparison here: https://dev-blog.apollodata.com/graphql-vs-rest-5d425123e34b
What is GraphQL?
GraphQL is all about data communication
Somehow it reminds me of good old "The S stands for Simple": http://harmful.cat-v.org/software/xml/soap/simple

It does not seem wise to use anything with that rider if you have, or would like to leave open the possibility of having, patents which you license to / enforce against Facebook.
The argument for patent disarmament strikes me as reasonable, but some companies have novel tech they would like to license to Facebook, and restricting the use of software like this seems a pretty severe violation of the notion of Free Software.
1. https://raw.githubusercontent.com/graphql/graphql-js/master/...
might not be as bad as you think
In particular, I have some uncertainty about how to go about rate limiting, when in theory a single request could grab every resource the client is authorized to retrieve and thrash the database.
Looks like Github counts/restricts the number of total nodes returned: https://developer.github.com/v4/guides/resource-limitations/
Also is there any protection against pathological requests? (e.g. if there are loops in the object graph, can I build an arbitrarily deep GraphQL query that will take an arbitrarily long time to complete?)
I've seen some GraphQL servers in the wild that will respond to any query, so it's entirely possible to make abusive queries to bring a server down.
Some attempt to estimate the query complexity, and deny requests based on how long the server thinks it might take. Others, such as Facebook themselves, whitelist which queries are allowed (I have no affiliation with Facebook, this is just what I've heard).
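On the pathological-depth question: a minimal sketch of pre-execution depth limiting, counting brace nesting in the raw query string. This is illustrative only (the cutoff is arbitrary, and a production server would walk the parsed AST instead):

```python
# Naive depth check for incoming GraphQL queries, run before execution.
# Counting brace nesting approximates selection-set depth; it would be
# fooled by braces inside string literals, so a real server should
# inspect the parsed AST rather than the raw text.

MAX_DEPTH = 7  # arbitrary cutoff; tune for your schema

def query_depth(query: str) -> int:
    """Maximum brace-nesting depth of a raw GraphQL query string."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def reject_if_too_deep(query: str) -> bool:
    """True if the query should be refused before touching the database."""
    return query_depth(query) > MAX_DEPTH
```

A looping query like `{ friends { friends { friends ... } } }` gets cut off at the door instead of fanning out in the datastore.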
It goes over several ways to secure your endpoints, along with pros and cons for each.
* The caching clients (Relay, Apollo) out there are dog slow for medium to large response payloads. (they're working on it)
* It can add more complexity than you need if used for service<->service chatter (pure RPC may be preferable)
* It's still pretty damn awesome as a data layer, especially to public clients
We are also working on speeding up how the default store handles large response data
I'm not sure what's different. You can actually implement the same with plain old HTTP APIs, too.
Rarely. Assuming the GraphQL server is using REST endpoints behind the scenes, I'm yet to find a request waterfall that required manual rather than automatic optimizations. I'm assuming there are cases where a manual path is faster, but they're less common than you'd think.
Another benefit of GraphQL is that it's a "lingua franca": you can use it internally, between microservices, etc.
Until then, what you gain is "only" a completely decoupled backend that will accommodate frontend developers' work in exchange for more generic logic in the backend. Not necessarily a bad thing, if this is how your shop works.
But there's still going to be a decent bit of backend tuning which in my experience is the hard part (unless the organization is so dysfunctional new API version design meetings descend into a hostage exchange).
In my experience, versioning a RESTful API is not hard (much has been written about the various approaches). The cases when versioning does get hard usually correspond to major system architecture changes (e.g., restructuring fundamental relationships between data models), and in those cases, I suspect GraphQL wouldn't help a whole lot. You may still need to build some kind of compatibility layer to support older versions.
Other than that, adding new fields (which accounts for 90%+ of the changes to APIs I work on) is just as easy with RESTful endpoints. If payload size really does become an issue (it's usually negligible), it's easy enough to add a parameter or two to control the extent of the response data.
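That "parameter or two" can be as small as a hypothetical `fields` query parameter; the name and shape here are mine, not from any particular framework:

```python
from typing import Optional

def apply_sparse_fields(resource: dict, fields_param: Optional[str]) -> dict:
    """Trim a response body to the fields requested via e.g.
    GET /users/42?fields=id,name; return it untouched if none were given."""
    if not fields_param:
        return resource
    wanted = {f.strip() for f in fields_param.split(",")}
    return {k: v for k, v in resource.items() if k in wanted}
```

So `apply_sparse_fields(user, "id,name")` keeps just those two keys, and clients that don't care get the full representation as before.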
As for reducing round trips, I do see the advantage of using GraphQL, though to be fair, well-designed RESTful APIs can avoid excessive round trips as well - they just require a bit more coordination between client and server development (which is a good thing!). RESTful APIs also have the advantage of mutation payloads that look like their corresponding response representations.
To address some of the potential concerns with GraphQL (such as resource exhaustion), I believe it would require more development time/resources for most of the projects I've worked on - even after factoring in any technical debt brought on by RESTful API limitations.
The need for multiple round-trips to the server and sparse field-sets (returning only the fields needed):
http://jsonapi.org/format/#fetching-includes
http://jsonapi.org/format/#fetching-sparse-fieldsets
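Both features from those spec sections combine into a single request URL; a small helper (names are mine, purely illustrative) makes the shape obvious:

```python
from urllib.parse import urlencode

def jsonapi_url(base, include=None, fields=None):
    """Build a JSON:API fetch URL; `fields` maps resource type -> field names."""
    params = []
    if include:
        params.append(("include", ",".join(include)))
    for rtype, names in (fields or {}).items():
        params.append((f"fields[{rtype}]", ",".join(names)))
    return base + "?" + urlencode(params, safe="[],")
```

`jsonapi_url("/articles", include=["author"], fields={"articles": ["title", "body"]})` gives `/articles?include=author&fields[articles]=title,body`: one round trip, only the fields asked for.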
Note that earlier the top story on HN was about MSPaint and the top comment said:
> "All the comments that "you can just use X to do Y" is missing the point that Paint just works"
REST still just works (and will continue to) for most people. I'm sure some will switch and some will go straight to GraphQL, but let's not just go ahead and declare the whole thing dead. Haha.
Should I give it another look? Was I too quick to dismiss it?
The concerns you have with the front-end are founded. However, you can send a request to a GraphQL endpoint in a very REST-like manner. If you can build a wrapper to create the query as explained in the Stack Overflow post, you can essentially negate the need for Relay/Apollo. (At least until you need any of the helpful tools they provide.)
https://stackoverflow.com/questions/42520663/how-send-graphq...
The basic http link includes middleware for passing auth to the server.
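To sketch the REST-like call with nothing but the standard library (the endpoint URL and bearer token here are placeholders):

```python
import json
import urllib.request

def build_graphql_request(url, query, variables=None, token=None):
    """A GraphQL endpoint is just HTTP: POST a JSON body with "query"
    and "variables", plus whatever auth header you already use."""
    body = json.dumps({"query": query, "variables": variables or {}}).encode()
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# To actually send it:
# resp = urllib.request.urlopen(
#     build_graphql_request("https://example.com/graphql", "{ viewer { name } }"))
```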
I hope that helps!
There actually shouldn't be anything contradictory about choosing the GraphQL interface for a REST API.
Also, before anyone jumps in head first on this, keep in mind the downside to things like GraphQL. Your data API is a promise to your clients and GraphQL presents a flexible, unbounded API. This puts a heavy burden on the data service: it's going to have to handle all the valid queries and perform well doing it.
Further, there are a lot of things that will have an impact on this, such as how/where the data is persisted and what the data model is, how the service needs to scale, etc. These will all be impacted by using GraphQL as your data interface. It's a lot to commit to, especially if you aren't sure how these things will change over time.
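One way to bound that promise is the whitelisting approach mentioned earlier in the thread (persisted queries): clients send the hash of a pre-registered query rather than arbitrary query text. A toy sketch, all names illustrative:

```python
import hashlib

REGISTERED = {}  # sha256 hex digest -> whitelisted query text

def register(query: str) -> str:
    """Done at deploy time: record a known query, hand its hash to clients."""
    digest = hashlib.sha256(query.encode()).hexdigest()
    REGISTERED[digest] = query
    return digest

def lookup(digest: str):
    """Done per request: resolve a hash, or None for anything unregistered."""
    return REGISTERED.get(digest)
```

The server only ever executes queries it has seen before, so its performance surface stays fixed even though the query language is unbounded.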
It's called an argumentative straw man. The article attacks… something, but that thing ain't REST, clearly. But ooooh boy, must it feel satisfying for Samer Buna to topple it over!
HN readers should not fall for the oldest trick in the book.
Maybe the recent wave of JS framework had at least this beneficial side effect.
"who in their right mind would use XML over JSON today?"
Lost me already... there are lots of reasons to still use XML today. And since someone is going to ask, here are some:

* you want to use XPath
* you want to communicate with an enterprise app (salesforce, magento, etc)
* you need strongly-typed message-passing in a human-readable well-understood format (rules out messagepack, etc)
* nobody actually documented the API, but they might have a WSDL
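On the XPath point: even Python's standard library speaks a useful XPath subset, which is handy when the payload is XML whether you like it or not. Illustrative document:

```python
import xml.etree.ElementTree as ET

# ElementTree's findall()/find() accept a limited XPath subset
# (paths, wildcards, simple attribute predicates).
doc = ET.fromstring(
    "<orders>"
    "<order id='1'><total>10</total></order>"
    "<order id='2'><total>25</total></order>"
    "</orders>")

totals = [int(e.text) for e in doc.findall("./order/total")]
second = doc.find("./order[@id='2']/total").text
```

Try doing that to a JSON blob without pulling in a third-party query library.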
This is stupid. There's no need to make multiple round trips?
GraphQL seems a lot more complicated to consume/explore than REST, and it looks like I need to know a good bit about how the data is shaped before I write a line of code - something that can change, and something that the current REST endpoint system (happily) doesn't need.
Ok, there are a lot of reasons.
I worked on a project once that was essentially a "data platform", kind of a SQL firewall to "any data" (a bit of a stretch, but that was the concept). You could put any data system (relational, non-relational, web services, etc.) behind it, but then join across heterogeneous systems using plain old SQL. You had a query language and a network interface, and since it was separate from the actual databases themselves, you could do some interesting things with security, caching and scalability in a different layer.
GraphQL has always seemed like the same kind of query abstraction over data, with a network interface. Props to Facebook for making the tech and releasing it open source.
This also makes me think about the command query responsibility separation (CQRS) pattern, a similar discussion.
EDIT: Also, lest we forget OData... which never really took off, but offers similar functionality.
In my thinking, "taking off" would have been if the rest of the ecosystem had embraced it. OData is shared as an open standard (http://www.odata.org), but there is something about it where I still haven't seen the broader ecosystem embrace it (although certainly many have adopted some of the style of OData REST endpoints). That said, compared to where it was when it first came out, OData is still getting a lot of attention. I think opening up the tech is what kept it alive, to be honest.
Just for kicks, I did a Google Trends comparison of OData and GraphQL, kind of interesting: https://trends.google.com/trends/explore?q=odata,graphql
My question is then: will there be a "standard" way to describe the data model (aka vocabulary) of your GraphQL endpoint? Something like RDFS or OWL.
GraphQL is a wrapper around your service layer, or worse, around a number of ad hoc data sources. That service layer or those data sources may query SPARQL services.
>With GraphQL, the client speaks a request language which decouples clients from servers. This means we can maintain and improve clients separately from servers.
This is actually how the web works right now. Browsers (clients) evolve independently from web pages (servers). Even the author of REST said it is "intended to promote software longevity and independent evolution". Standards are governed by standards bodies such as IETF, IANA, W3C, WHATWG, etc., not a single corporation like Facebook.
I think only time will tell if this is just another fad that took off but didn't age well, like SOAP.
And then it ends with:

> There are some mitigations we can do here. We can do cost analysis on the query in advance and enforce some kind of limits on the amount of data one can consume.
... so basically you did nothing in the way of perf but add complexity.
I can imagine that it can be quite slow internally with multiple joins/subqueries in the datastore.
My take on it https://subzero.cloud (GraphQL and REST api for your database), built on top of https://postgrest.com
Also, you can definitely version nicely; you just have to document it well. If you're supporting different API versions you're bound to add more code either way; not sure what that argument is about, since I assume (could be wrong) you'd have to define a new schema and add new logic to handle the queries in the translating layer, correct?
Also, the author picks a bold title and then goes ahead and mellows it out in the first few paragraphs. Stick to it or don't do it at all.
If a client requires a GraphQL API, I'll happily learn. It does look like an interesting technology and a valid alternative.
Lots of "hello world" stuff out there.
Throw in the requirements of a real SaaS and everyone shrugs.
  mutation {
    createUser(u: "user", p: "pass") {
      u,
      p
    }
  }

Also XML is cool. Not so sure about JSON.