REST APIs are a proven solution to the problem of other apps, including front ends, needing data from a data store. JSON is a big improvement over the days of XML and SOAP. Beyond that, there haven't been advancements in technology that cause fundamental shifts in that problem space. There have been differing opinions about structuring REST calls, but those aren't going to drive any real forward progress for the industry and are inconsequential when it comes to business outcomes.
There are so many developers out there who can't stand plugging in proven solutions to problems and just living with the trade-offs or minor inconveniences. Nothing is going to be perfect, and most of the software we write will likely have stopped running within a decade anyway.
Ever seen an engineer write a loop and make N+1 REST calls for resources? It happens more often than you think, because they don't want to create a backend ticket to add related resources to a call.
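The anti-pattern, and the batched endpoint that would fix it, look roughly like this. A minimal Python sketch: `get_json` is a hypothetical stand-in for an HTTP client, and the paths and `include` parameter are made up:

```python
calls = []  # track how many "HTTP requests" we make

def get_json(path):
    """Hypothetical stand-in for an HTTP client; records each request."""
    calls.append(path)
    if path == "/orders":
        return [{"id": i, "customer_id": i} for i in range(10)]
    if path.startswith("/customers/"):
        cid = int(path.rsplit("/", 1)[1])
        return {"id": cid, "name": f"customer-{cid}"}
    if path == "/orders?include=customer":
        return [{"id": i, "customer": {"id": i, "name": f"customer-{i}"}}
                for i in range(10)]

# The N+1 anti-pattern: 1 list call + 10 detail calls = 11 round trips.
orders = get_json("/orders")
for order in orders:
    order["customer"] = get_json(f"/customers/{order['customer_id']}")
assert len(calls) == 11

# What the avoided "backend ticket" would add: one call, related data included.
calls.clear()
hydrated = get_json("/orders?include=customer")
assert len(calls) == 1
```

The loop version works, which is exactly why it ships: the cost only shows up later as latency and server load.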
With internal REST for companies I have seen so many single page specific endpoints. Gross.
> There have been differing opinions about structuring REST calls, but those aren't going to drive any real forward progress for the industry and are inconsequential when it comes to business outcomes.
You could argue that almost any tech choice at a non-pure-tech company is largely inconsequential as long as the org's end goal is met, but managing REST APIs was a huge point of friction at past companies.
Either everything goes through a long review process to make sure things are structured "right" (i.e., lots of opinions that nobody can sync on), or people throw up REST endpoints willy-nilly until you have no idea what to use.
GraphQL is essentially what `black` is for Python syntax, but for web APIs. Ever seen engineers fight over spaces vs. tabs, 8 vs. 4 spaces, or whether a space goes before a colon? Those fights happened constantly, and then `black` came out and standardized everything, so there was nothing left to fight over.
GraphQL makes things very clear and standard, but it can't please everyone.
The idea that resources and the underlying data need to map 1:1 is wrong.
The GP's idea that a frontend developer would file a ticket with somebody just to get all the data they need... it's just crazy.
On the other extreme, we have the HTTP 1.0 developers saying something like "networks are plenty fast; we can waste a bit of bandwidth on legible protocols that are easier to get right", while the HTTP 2.0 ones are all "we must cram information into every single bit!"
Every place you look, things are completely bananas.
Even though we use GQL here, we still have a BFF (backend for frontend), so it's browser -> BFF -> GQL -> database.
The tooling we use makes it trivial, so it doesn't add any development overhead (it often reduces it; no CORS bullshit, for one), and our app goes ZOOOOOOM.
Hardly gross. It is what it is and it’s universal across the domain. I bet Windows has internal APIs or even external ones that were created just for one page/widget/dialog of one app. It’s the nature of things at times.
An engineer had to spend time building that page-specific API, instead of the frontend consumer just using what was already defined to get all the resources in one call, with zero backend work needed for the new page.
For example, a lot of the time people build out nicely normalized table structures for online transactional apps. The UI/UX is pretty straightforward because end users typically only CRUD one object, maybe a couple of nested objects, at a time. The APIs are straightforward as well, likely following single-responsibility principles, etc. Then along come requirements to build UIs for analytics and/or reporting, where nearly the entire schema is needed depending on what the end user wants to do. It's the wrong data model for those kinds of workloads. What should be done is to ETL the data from the OLTP schema into a data-warehouse-style schema where data is denormalized, so that you can build reporting on top of it.
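A toy sketch of that ETL step, joining normalized OLTP-style tables into flat, reporting-friendly rows (all table and field names here are made up for illustration):

```python
# Normalized OLTP-style tables (made-up data).
customers = {1: {"name": "Acme"}, 2: {"name": "Globex"}}
products = {10: {"name": "Widget", "price": 5.0}}
orders = [
    {"id": 100, "customer_id": 1, "product_id": 10, "qty": 3},
    {"id": 101, "customer_id": 2, "product_id": 10, "qty": 1},
]

# Denormalize into flat reporting rows: every fact a report might
# group or filter on is materialized on the row itself, so no joins
# are needed at query time.
report_rows = [
    {
        "order_id": o["id"],
        "customer_name": customers[o["customer_id"]]["name"],
        "product_name": products[o["product_id"]]["name"],
        "revenue": products[o["product_id"]]["price"] * o["qty"],
    }
    for o in orders
]

assert report_rows[0] == {"order_id": 100, "customer_name": "Acme",
                          "product_name": "Widget", "revenue": 15.0}
```

In a real pipeline the same shape holds, just with a scheduler and a warehouse instead of dicts; the point is that the reporting schema trades redundancy for query simplicity.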
With REST, though, that pain is visible to both sides. Frontend engineers generally don't want to make N+1 REST calls in a tight loop; it's a performance problem they can see plainly in their dev tools. Backend engineers with good telemetry may not know why they get the bursts of N+1 calls without asking the frontend team or digging into frontend code, but they can still see the burstiness and get some idea that something is too chatty and could be optimized.
There are multiple ways to handle this in REST: pagination, "transclusions", hyperlinks, and more. "Single-page endpoints" are certainly one such way; however gross they are from the standpoint of REST theory, they're a pragmatic solution for many in practice.
REST certainly can please everyone, given pragmatic compromises, even if it isn't very clear or standard.
Single-page endpoints are exactly what you want if you have more than 5 engineers in your company anyway.
It ensures that the endpoints are maintainable and future-proof when people are working on different features.
How does GQL prohibit this? If anything, it encourages it by focusing on one stable API for everyone instead of a custom API endpoint for each case.
I also recall we had similar N+1 query problems in the REST API endpoints, irrespective of hydrating the returned resources.
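GraphQL servers typically attack backend N+1 with the DataLoader pattern: resolvers queue the keys they need during one resolution pass, and the loader fetches them all in a single batched query. A minimal sketch of the idea (the batch function and names here are made up, not any particular library's API):

```python
class BatchLoader:
    """Minimal DataLoader-style batcher: queue keys now, fetch once later."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # fetches many keys in one query
        self.queue = []

    def load(self, key):
        self.queue.append(key)
        return lambda cache: cache[key]   # deferred lookup into the batch

    def dispatch(self):
        cache = self.batch_fn(sorted(set(self.queue)))  # dedup + one fetch
        self.queue.clear()
        return cache

db_calls = []

def fetch_customers(ids):
    db_calls.append(ids)                       # one query for all ids
    return {i: f"customer-{i}" for i in ids}

loader = BatchLoader(fetch_customers)
deferred = [loader.load(cid) for cid in [3, 1, 3, 2]]   # N resolver calls...
cache = loader.dispatch()                               # ...one DB query
names = [d(cache) for d in deferred]

assert db_calls == [[1, 2, 3]]   # a single batched fetch, duplicates deduped
assert names == ["customer-3", "customer-1", "customer-3", "customer-2"]
```

Real implementations do the dispatch automatically at the end of an event-loop tick and add per-request caching, but the batching trick is the whole point: N lookups collapse into one query regardless of how the query was shaped.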
The biggest benefit of GraphQL I can see from a user's perspective is that it lowers total latency, especially on mobile, through fewer round trips.
There are lots of other benefits to GQL: multiple queries per request, mutation/query separation, typed errors, and subscription support.
GET /myresource?extra=foo,bar
Sure, you over-fetch a bit if you have multiple accessors.
But agreed: if you have highly nested data, especially when it's accessed for multiple different query purposes, then REST might not be the best fit.
I think GraphQL has been positioned as a general-purpose tool, and on that point I'm with the author: REST is a better go-to for most use cases.
Any more levels and you have now reinvented GraphQL.
> With internal REST for companies I have seen so many single page specific endpoints. Gross.
As someone pointed out in reply to another comment, GraphQL is "a technological solution to an organizational problem." If that problem manifests as abuse of REST endpoints, you can disguise it with GraphQL, until one day you find out your API calls are slow for more obscure, harder-to-debug reasons.
That's an established pattern (backend for frontend). Like all patterns there are trade-offs and considerations to make, but it's certainly not a priori "gross".
https://learn.microsoft.com/en-us/azure/architecture/pattern...
And resources with an index option should obviously be backed by a DB index or a unique index.
The challenges with GraphQL are that it makes it too easy to DoS services, leak internal data, and break referential integrity, and that a great deal of tooling, infrastructure, and monitoring was already available for REST (and, to a degree, gRPC).
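The DoS angle usually comes from deeply nested queries (`friends { friends { friends ... } }` multiplying work at every level). A common mitigation is rejecting queries past a depth limit before executing them. A sketch of the check over a dict-shaped selection tree; real servers walk the parsed GraphQL AST instead, and the limit of 5 is an arbitrary example:

```python
def query_depth(selection):
    """Depth of a nested selection set, modeled here as dicts of fields."""
    if not isinstance(selection, dict) or not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

MAX_DEPTH = 5   # arbitrary example limit

def check_query(selection):
    """Reject a query before execution if it nests too deeply."""
    depth = query_depth(selection)
    if depth > MAX_DEPTH:
        raise ValueError(f"query depth {depth} exceeds limit {MAX_DEPTH}")
    return depth

# A hostile friends-of-friends query, 8 levels deep.
hostile = {"user": {"friends": {"friends": {"friends": {"friends":
           {"friends": {"friends": {"name": None}}}}}}}}
assert query_depth(hostile) == 8
```

Production servers usually pair this with query cost analysis (weighting list fields more heavily than scalars), since depth alone doesn't capture how expensive a query is.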
Company standards for REST and coding style can and should be enforced in the diff-review pipeline. Another facet is setting standards that minimize duplicated effort and avoid exposing a greater attack surface.
I like the thought experiment of adding a new persisted/editable field to an entity in a web app. If you've done full-stack work, you know all the layers that need to be touched to accomplish this, and how lame most of that work turns out to be. After doing it 20 times while iterating, any dev worth their salt starts to wonder why it still sucks so bad and how it could be made better, and some will actually try.
Much like CSV, JSON isn't particularly standardised and different parsers and writers will do different things in some situations. Usually it doesn't matter, but when it does you're probably in for a lot of pain.
If you handle structured data and the structures might change over time, JSON isn't a good fit. Maybe you'll opt for JSON Schema, maybe that'll work for your use case, but with XML you can be quite sure it'll be reliable and well understood by generations of developers.
The tooling is generally very good, commonly you can just point your programming language to the XSD and suddenly you have statically typed classes to program against. Perhaps you'd like to store the data in a RDBMS? You can probably generate the DB schema from the XSD. If you want you can just throw JSON into MongoDB instead, but there will be very important tradeoffs. Same goes for UI, you can write some XSLT based on the XML schema and suddenly you get web views directly from API responses. Or you could use those classes you generated and have your GUI code consume such objects.
None of this is as easy with JSON as it is with XML, similar to how many things aren't as easy with CSV as with a RDBMS.
XML is mostly already lost on the current generation of developers, let alone future ones. Protobuf and its cousins generally do typed interchange more efficiently and with less complexity.
RFC 8259 is marginally better in that it at least acknowledges these problems:
This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754 binary64 (double precision) numbers [IEEE754] is generally
available and widely used, good interoperability can be achieved by
implementations that expect no more precision or range than these
provide, in the sense that implementations will approximate JSON
numbers within the expected precision. A JSON number such as 1E400
or 3.141592653589793238462643383279 may indicate potential
interoperability problems, since it suggests that the software that
created it expects receiving software to have greater capabilities
for numeric magnitude and precision than is widely available.
Note that when such software is used, numbers that are integers and
are in the range [-(2**53)+1, (2**53)-1] are interoperable in the
sense that implementations will agree exactly on their numeric
values.
But note how this is still not actually guaranteeing anything. What it says is that implementations can set arbitrary limits on range and precision, and then points out that de facto this often means 64-bit floating point, so you should, at the very least, not assume anything better. But even if you only assume that, the spec doesn't promise interoperability.

In practice, the only reliable way to handle numbers in JSON is to put them in strings, because that way the parser will deliver them unchanged to the API client, which can then make informed (hopefully...) choices about how to parse them based on the schema and other docs.
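Python's `json` module shows both halves of this. By default it parses every real into a binary64 float, which is exactly the degradation the RFC warns about, while its `parse_float` hook and string-encoded numbers both let the receiver keep full precision:

```python
import json
import math
from decimal import Decimal

# Default parsing: binary64, so precision and range silently degrade.
assert json.loads("3.141592653589793238462643383279") == 3.141592653589793
assert math.isinf(json.loads("1E400"))   # overflows straight to infinity

# Opting in to exact decimals on the receiving side.
pi = json.loads("3.141592653589793238462643383279", parse_float=Decimal)
assert pi == Decimal("3.141592653589793238462643383279")

# The defensive convention: send numbers as strings, parse them knowingly.
payload = json.loads('{"amount": "3.141592653589793238462643383279"}')
assert Decimal(payload["amount"]) == pi
```

The string route is the only one that survives parsers you don't control; `parse_float` only helps when you're the one doing the decoding.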
OTOH in XML without a schema everything is a string already, and in XML with a schema (which can be inline via xsi:type) you can describe valid numbers with considerable precision, e.g.: https://www.w3.org/TR/xmlschema-2/#decimal
Sure, protobuf is nice, but more limited in scope and closer to a JSON alternative than an XML alternative.
I use JSON every other day and have been for decades.
Go's XML parser straight-up emits broken XML when trying to output tags that have prefixed namespaces.
JSON won in the end mostly because it was easier to handle in JS specifically, which is what mattered for the frontend. Other languages then caught up with their own implementations, although in some cases it took a while; .NET, for example, had no built-in support until 2019, so you had to use third-party libraries.
If only that were true in my experience.