And a lot of these features arguably have a poor cost/benefit tradeoff for anyone who isn't trying to solve Google problems. Or they introduce painful constraints such as not being consumable from client-side code in the browser.
I keep wishing for an alternative project that only specifies a simpler, more compatible, easier-to-grok subset of gRPC's feature set. There's almost zero overlap between the features that I love about gRPC, and the features that make it difficult to advocate for adopting it at work.
Protobuf has allowed us to version, build, and deliver native language bindings for a multitude of languages and platforms with a tiny team for years now, without a single fault. We have the power to refactor and design our platform and APIs in a way that we never had before. We love it.
The official Java implementation of gRPC looks like abandonware. Out of the box, the generated code includes an annotation (javax.annotation.Generated) that was deprecated in 2019:
https://github.com/grpc/grpc-java/issues/9179
This gives me serious pause.
I have had the opposite experience. I visit exactly two repositories on GitHub, which seem to have the vast majority of the functionality I need.
> The thing is so packed with features and behaviors and temporal coupling and whatnot that it's difficult to produce a compatible third-party implementation.
Improbable did. But who cares? Why do we care about compatible third party implementations? The gRPC maintainers merge third party contributions. They care. Everyone should be working on one implementation.
> features arguably have a poor cost/benefit tradeoff for anyone who isn't trying to solve Google problems.
Maybe.
We need less toil, less energy spent reinventing half of Kubernetes and half of gRPC.
Until they get fired by Google.
Microsoft could make the case that many many features in Web Services were essential to making them work but people figured out you could just exchange JSON documents in a half-baked way and... it works.
In the case of gRPC, I believe the spec is tied to protobuf, but I have also seen a Thrift implementation.
Perhaps connect: https://connectrpc.com/
Edit: and, if I remember correctly, gRPC tooling for it is maintained by about 1-3 people who are also responsible for other projects, like System.Text.Json. You don't need numbers to make something that is nice to use; quite often, more people even make it more difficult.
FTFY
It solves ALL the problems of vanilla gRPC, and it even is compatible with the gRPC clients! It grew out of Twirp protocol, which I liked so much I made a C++ implementation: https://github.com/Cyberax/twirp-cpp
But ConnectRPC guys went further, and they built a complete infrastructure for RPC. Including a package manager (buf.build), integration with observability ( https://connectrpc.com/docs/go/observability/ ).
And most importantly, they also provide a library to do rich validation (mandatory fields, field limits, formats, etc): https://buf.build/bufbuild/protovalidate
Oh, and for the unlimited message problem, you really need to use streaming. gRPC supports it, as does ConnectRPC.
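For anyone unfamiliar with how streaming sidesteps the message-size problem, here's a language-neutral sketch in Python (the names and chunk size are illustrative, not any particular gRPC API):

```python
# Illustrative sketch: send a large payload as a stream of bounded chunks,
# the way a gRPC client-streaming call would, instead of one huge message.
CHUNK_SIZE = 64 * 1024  # 64 KiB per message; real limits are configurable


def chunk_payload(payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Client side: yield the payload as a sequence of chunk messages."""
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]


def receive_stream(chunks) -> bytes:
    """Server side: reassemble the streamed chunks."""
    return b"".join(chunks)


payload = b"x" * 200_000
assert receive_stream(chunk_payload(payload)) == payload
```

In a real client-streaming RPC each chunk would be its own proto message on the stream; the principle is the same.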
As someone who's used Gradle tons, 6 months ago I wrote in detail about why not gradle:
https://news.ycombinator.com/item?id=38875936
Gradle still might be less bad than Bazel, though.
I think the bigger thing that I'm worried about is that gRPC has so many mandatory features that it can become hard to make a good implementation in new languages. To be honest there are some languages where the gRPC implementation is just not great and I blame the feature bloat... and I think textproto was a good demonstration of that feature bloat to me.
The problem went away with all optional fields so it was decided the headache wasn't worth it.
I suspect that not having nullable fields, though, is just a case of letting an implementation detail (keeping the message representation compatible with C structs in the core implementation) bleed into the high-level interface. That design decision is just dripping with "C++ programmers getting twitchy about performance concerns" vibes.
Then proto3 went and implemented a hybrid that was the worst of both worlds. They made all fields optional, but eliminated the introspection that let a receiver know if a field had been populated by the sender. Instead they silently populated missing fields with a hardcoded default that could be a perfectly meaningful value for that field. This effectively made all fields required for the sender, but without the framework support to catch when fields were accidentally not populated. And it made it necessary to add custom signaling between the sender and receiver to indicate message versions or other mechanisms so the receiver could determine which fields the sender was expected to have actually populated.
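A toy sketch of that proto3 behavior (not the real wire format, just the semantics): zero-valued fields vanish on the wire and the receiver re-materializes them with defaults, so "explicitly zero" and "never set" become indistinguishable:

```python
# Toy model of proto3 semantics: default-valued fields are omitted on the
# wire, and the receiver fills defaults back in, destroying presence info.
def encode(msg: dict) -> dict:
    # proto3-style: skip fields that hold the default value (0 here)
    return {k: v for k, v in msg.items() if v != 0}


def decode(wire: dict, schema: tuple) -> dict:
    # receiver materializes every field, defaulting missing ones to 0
    return {field: wire.get(field, 0) for field in schema}


schema = ("retries", "timeout_ms")
sender_a = {"retries": 0, "timeout_ms": 500}  # deliberately set retries=0
sender_b = {"timeout_ms": 500}                # never touched retries

# Both decode to the identical message: the receiver cannot tell them apart.
assert decode(encode(sender_a), schema) == decode(encode(sender_b), schema)
```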
You could (e.g.) annotate all key fields as IDENTIFIERs. Client code can assume those will always be set in server responses, but are optional when making an RPC request to create that resource.
(This may just work in theory, though – I’m not sure which code generators have good support for field_behavior.)
I think the main problem with it, is that you cannot distinguish if the field has the default value or just wasn't set (which is just error prone).
However, there are solutions to this, that add very little overhead to the code and to message size (see e.g. [1]).
Required/validation is for application.
Like this?
> grpcurl -d '{"id": 1234, "tags": ["foo","bar"]}' \
grpc.server.com:443 my.custom.server.Service/Method
How is that even possible? How could grpcurl know how to translate your request to binary?

This made it extremely hard to get answers to questions that were undocumented.
It's a shame because I know Google can put out easy to read code (see: the go standard library).
My guess is that the difference is that go is managed by a small group of engineers that have strong opinions, really care about it, and they have reached "fuck you level", so they can prioritize what they think is important instead of what would look good on a promo packet.
That’s the end of this story. There are basically two people in the world who have demonstrated that they can be trusted to generate code that isn’t an impenetrable quagmire of pain and confusion.
I doubt it’s an accident that both emitted declarative output rather than imperative, but I would be happy to be proven wrong.
Honestly I found the Java bindings to be way better designed and thought out than Golang. On a consumer level, the immutable message builders are fantastic, the one-ofs are decent compared to what Java can offer, and the service bindings actually provide a beautiful abstraction with their 0-1-many model. In Golang, if you only have to deal with Unary rpc they are OK I guess, but I really miss the immutable messages.
I've worked with people who considered anything that wasn't programmed with annotations to be "too advanced" for their use-case.
Google claims gRPC with protobuf yields a 10-11x performance improvement over HTTP. I am skeptical of those numbers because really it comes down to the frequency of data parsing into and out of the protobuf format.
At any rate, just use JSON with WebSockets. It's stupid simple and still 7-8x faster than HTTP, with far less administrative overhead than either HTTP or gRPC.
Everyone doing what you are saying ends up reinventing parts of gRPC, on top of reinventing parts of RabbitMQ. It isn't ever "stupid simple." There are ways to build the things you need in a tightly coupled and elegant way, but what people want is Next.js, that's the coupling they care about, and it doesn't have a message broker (neither does gRPC), and it isn't a proxy (which introduce a bajillion little problems into WebSockets), and WebSockets lifetimes don't correspond to session lifetimes, so you have to reinvent that too, and...
What people? Developers? This is why I will not do that work anymore. Don't assume to know what I want based upon some tool set or tech stack that you find favorable. I hate (HATE HATE) talk of tech stacks, the fantasy of the developer who cannot write original software, who does not measure things, and cannot provide their own test automation. They scream their stupidity for all the world to hear when they start crying about reinventing wheels, or some other empty cliche, instead of just delivering a solution.
What I want is two things:
1. Speed. This is not an assumption of speed. It's the result of various measurements in different execution contexts.
2. Less effort. I want to send a message across a network... and done. In this case you have some form of instruction or data package and then you literally just write that to the socket. That is literally two primitive instructions, without abstractions, e.g.: socket.write(JSON.stringify(thing));. It is without round trips, without headers, without anything else. You are just done.
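That write-and-done flow, sketched in Python with a file-like object standing in for the socket (purely illustrative names):

```python
import io
import json


# Sketch of the "serialize and write, no headers, no round trips" approach.
# `sock` is any file-like object; a real socket's sendall() works the same way.
def send_message(sock, thing) -> None:
    sock.write(json.dumps(thing).encode("utf-8"))  # serialize, then write


sock = io.BytesIO()
send_message(sock, {"op": "update", "id": 42})
assert json.loads(sock.getvalue()) == {"op": "update", "id": 42}
```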
The counterpoint to the fact that gRPC and RabbitMQ handle whatever you're writing better than you do is that gRPC and RabbitMQ have immense amounts of complexity that you have to deal with despite the fact that you don't care about it
gRPC is not supposed to be a standard web communication layer.
There are times where you need a binary format and extremely fast serialization/deserialization. Video games are one example where binary formats are greatly preferred over JSON.
But I do agree that people keep trying to shove gRPC (or similar) into things where they aren't needed.
It kind of is. What do you think WebTransport in HTTP/3 is? It's basically gRPC Next. The only reason gRPC didn't make it as the standard web communication layer is because of one disastrous decision by one Chrome engineer in https://issues.chromium.org/issues/40388906, maybe because he woke up on the wrong side of the bed.
I also don't see what'd stop it from being used generally for websites calling the backend API. Even if you don't care about the efficiency (which is likely), it'd be nice to get API definitions built in instead of having to set up OpenAPI.
If I was using something slow that needed flexibility I'd probably go with Avro, since it has more powerful schema evolution.
If I wanted fast I'd probably use SBE or Flatbuffers (although FB is also slow to serialise)
There's almost no reason why RPC should not just be
send(sk, (void *)&mystruct, sizeof(struct mystructtype), 0)

Additionally, I think people put a lot of trust into JSON parsers across ecosystems "just working", and I think that's something more people should look into (it's worse than you think): https://seriot.ch/projects/parsing_json.html
I would open a dedicated TCP socket and a file system stream. I would then pipe the file system stream to the network socket. No matter what, you still have to deal with packet assembly, because if you are using TLS you have small records (max size varies by TLS revision). If you are using WebSockets you have control frames, continuation frames, and frame header assembly. Even with that administrative overhead it's still a fast and simple approach.
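The frame-assembly part is essentially length-prefixing. A minimal sketch in Python (this is not the actual WebSocket frame layout, which also carries opcodes and masking bits — just the core idea of finding message boundaries in a byte stream):

```python
import struct


# Minimal framing sketch: length-prefix each message so a reader can recover
# message boundaries from a continuous byte stream.
def frame(message: bytes) -> bytes:
    return struct.pack(">I", len(message)) + message  # 4-byte big-endian length


def deframe(stream: bytes):
    """Split a concatenated byte stream back into individual messages."""
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages


wire = frame(b"hello") + frame(b"world!")
assert deframe(wire) == [b"hello", b"world!"]
```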
When it comes to application instructions, data from some data store, any kind of primitive data types, and so forth I would continue to use JSON over WebSockets.
Websockets do not follow a request/reply semantics, so you'd have to write that yourself. I'd prefer not to write my own RPC protocol on top of websockets. That said, I'm sure there are some off the shelf frameworks out there, but do they have the same cross-language compatibility as protobuf + gRPC? I don't think "just use JSON with websockets" is such a simple suggestion.
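The "write that yourself" part usually amounts to correlation IDs: tag each request, and match replies back to their callers. A minimal sketch (all names hypothetical, with a plain list standing in for the transport):

```python
import itertools


# Sketch of request/reply layered on a fire-and-forget message channel
# (e.g. a websocket): each request carries a correlation id, and replies
# are matched back to the pending caller by that id.
class RpcChannel:
    def __init__(self, transport_send):
        self._send = transport_send          # e.g. websocket send function
        self._next_id = itertools.count(1)
        self._pending = {}                   # correlation id -> callback

    def call(self, payload, on_reply):
        corr_id = next(self._next_id)
        self._pending[corr_id] = on_reply
        self._send({"id": corr_id, "payload": payload})

    def on_message(self, msg):               # wire this to the socket's receive
        self._pending.pop(msg["id"])(msg["payload"])


outbox, results = [], []
chan = RpcChannel(outbox.append)
chan.call({"method": "ping"}, results.append)
chan.call({"method": "time"}, results.append)
# simulate replies arriving out of order
chan.on_message({"id": 2, "payload": "14:02"})
chan.on_message({"id": 1, "payload": "pong"})
assert results == ["14:02", "pong"]
```

A real implementation also needs timeouts and cleanup when the socket drops, which is exactly the kind of accidental RPC framework the comment is warning about.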
Of course, gRPC does have some of its own problems. The in-browser support is not great (non-existent without a compatibility layer?) last time I checked.
That... doesn't make any sense, since gRPC is layered on top of HTTP. There must be missing context here.
Protobuf is intentionally designed to NOT require any parsing at all. Data is serialized over the wire (or stored on disk) in the same format/byte order that it is stored in memory
(Yes, that also means that it's not validated at runtime)
Or are you referencing the code we all invariably write before/after protobuf to translate into a more useful format?
That's just not true. You can read about the wire format over here, and AFAIK no mainstream language stores things in memory like this: https://protobuf.dev/programming-guides/encoding
I've had to debug protobuf messages, which is not fun at all, and it's absolutely parsed.
As others have mentioned, this is simply not the case, and the VARINT encoding is a trivial counterexample.
It is this required decoding/parsing that (largely) distinguishes protobuf from Google's flatbuffers:
https://github.com/google/flatbuffers
Cap'n Proto (developed by Kenton Varda, the former Google engineer who, while at Google, re-wrote/refactored Google's protobuf to later open source it as the library we all know today) is another example of zero-copy (de)serialization.
This is not true at all. If you have a language-specific class codegen'd by protoc then the in-memory representation of that object is absolutely not the same as the serialized representation. For example:
1. Integer values are varint encoded in the wire format but obviously not in the in-memory format
2. This depends on the language of course but variable length fields are stored inline in the wire format (and length-prefixed) while the in-memory representation will typically use some heap-allocated type (so the in-memory representation has a pointer in that field instead of the data stored inline)
- The implementation quality and practices vary a lot. The python library lacks features that the go library has because they are philosophically opposed to them. Protobuf/grpc version pinning between my dependencies has broken repeatedly for me.
- If you are a services team, your consumers inherit a lot of difficult dependencies. Any normal json api does not do this, with openapi the team can use codegen or not.
- The people who have been most hype to me in person about grpc repeat things like "It's just C structs on the wire" which is completely fucking wrong, or that protobuf is smaller than json which is a more situational benefit. My point being their "opinion" is uninformed and band-wagoning.
This article gave me some new options for dunking on grpc if it's recommended.
There's not even basic syntax highlighting for IDL files in Visual Studio, yet nice goodies for doing gRPC are available in Visual Studio.
Could you elaborate on this? (Heavy grpc/C# usage here and we just edit the protos)
That is the gold experience for doing COM in C++; doing COM in C# is somewhat better, but you still won't get rid of dealing with IDL files, especially now that TLB support is no longer available for .NET Core.
Quite tragic for such key technology, meanwhile C++ Builder offers a much more developer friendly experience.
My preferred approach would be to map my client to a "topic" and then any number of servers can subscribe to the topic. Completely decoupled, scaling up is much easier.
My second biggest issue is proto file versioning.
I'm using NATS for cross-service comms and its great. just wish it had a low-level serialization mechanism for more efficient transfer like grpc.
You know what's ironic? Google App Engine doesn't support HTTP/2. Actually, a lot of platforms don't.
I don't want to sound flippant, but if you don't want to learn new things, don't use new tools :D
Moreover, the term “unary” is used to distinguish from other, non-unary options: https://grpc.io/docs/what-is-grpc/core-concepts/
That's precisely the problem. The author wants to convince people (e.g., his colleagues) to use a new tool, but he has to convince them to learn a bunch of new things including a bunch of new things that aren't even necessary.
I don't know why or how there isn't a one-liner option there, because my experience with using gRPC in C# has been vastly better:
dotnet add package Grpc.Tools // note below
<Protobuf Include="my_contracts.proto" />

and you have the client and server boilerplate (client: give it a URL and it's ready for use; server: inherit from the base class and implement call handlers as appropriate). It is all handled behind the scenes by protoc integration that plugs into MSBuild, and the end user rarely has to deal with its internals directly, unless someone abused definitions in .proto as a weird DSL for an end-to-end testing environment and got carried away with namespacing (which makes protoc plugins die for most languages, so it's not that common an occurrence). The package readme is easy to follow too: https://github.com/grpc/grpc/blob/master/src/csharp/BUILD-IN...

Note: usually you need Grpc.Client and Google.Protobuf too, but that's two `dotnet add package`s away.
The GoGoProtobuf [1] project was started to improve both. It would generate nice Go types that followed Go's conventions. And it uses fast binary serialization without needing to resort to reflection.
Unfortunately, the gRPC/Protobuf team(s) at Google is famously resistant to changes, and was unwilling to work with the GoGo maintainers. As a result, the GoGo project is now dead. [2]
I've never used Buf, but it looks like it might fix most of the issues with the Go support.
For example if I want to import some common set of structs into my protos, there isn’t a standardized or wide spread way to do this. Historically I have had to resort to either copying the structs over or importing multiple protoc generated modules in my code (not in my protos).
If there was a ‘go get’ or ‘pip install’ equivalent for protos, that would be immensely useful; for me and my colleagues at least.
One project I worked on was basically just a system for sharing a JSON document to multiple other systems. This was at a golang shop on AWS. We could have used an S3 bucket. But sure, an API might be nice so you can add a custom auth layer or add server side filters and queries down the road. So we built a REST API in a couple of weeks.
But then the tech lead felt bad that we hadn't used gRPC like the cool kids on other teams. What if we needed a Python client so we could build an Ansible plugin to call the API?? (I mean, Ansible plugins can be in any language; it's a REST API, and Ansible already supports calling that (or you could just use curl); or you could write the necessary Python to call the REST API in like three lines of code.) So we spent months converting to gRPC, except we needed to use the Connect library because it's cooler, except it turns out it doesn't support GET calls, and no one else at the company was using it.
By the time we built the entire service, we had spent months, it was impossible to troubleshoot, just calling the API for testing required all sorts of harnesses and mocks, no good CLI tooling, and we were generating a huge Python library to support the Ansible use case, but it turned out that wasn’t going to work for other reasons.
Eventually everyone on that team left the company or moved to other projects. I don’t think anything came of it all but we probably cost the company a million dollars. Go gRPC!
This sounds odd to me because I don't really see how gRPC would cause any of those issues?
> layers of gRPC trash
What layers? Switching from REST (presumably JSON over http) to gRPC shouldn't introduce any new "layers". It's replacing one style of API call with a different one.
> learning multiple new tools and DSLs
New tools sure, you need protoc or buf to build the bindings from the IDL, but what is the new DSL you need to learn?
> ultimately tends to force API rigidity far sooner than is healthy
How does gRPC force API rigidity? It is specifically designed to be evolvable (sometimes to its usability detriment IMO)
There are some definite footguns with gRPC and I am becoming increasingly annoyed with Protobuf in particular as the years go on, but going back to REST APIs still seems like a huge step backwards to me. With gRPC you get a workflow that starts with a well-defined interface and all the language bindings client/server stubs are generated from that with almost zero effort. You can kind of/sort of do that with REST APIs using openapi specs but in my experience it just doesn't work that well and language support is sorely lacking.
Of course it does, starting with the protobufs and code generation. You say yourself in your very next reply:
"New tools sure, you need protoc or buf to build the bindings from the IDL, but what is the new DSL you need to learn?"
And the DSL is presumably protobuf, which you yourself are "increasingly annoyed" with.
The DSL I consider a plus. If you build REST APIs you will usually also resort to using a DSL to define your APIs, at least if you want to easily generate clients. But in this case the DSL is OpenAPI, which is an error prone mess of YAML or JSON specifications.
You don't need a binary format just to get type safety. JSONSchema, OpenAPI, etc exist after all.
> But in this case the DSL is OpenAPI, which is an error prone mess of YAML or JSON specifications.
They might not be pretty, but they're not particularly error prone (the specs themselves are statically checked).
Protobuf was designed first and foremost for C++. This makes sense. All of Google's core services are in C++. Yes there's Java (and now Go and to some extent Python). I know. But protobuf was and is a C++-first framework. It's why you have features like arena allocation [1].
Internally there was protobuf v1. I don't know a lot about this because it was mostly gone by the time I started at Google. protobuf v2 was (and, I imagine, still is) the dominant form of protobuf.
Now, this isn't to be confused with the API version, which is a completely different thing. You would specify this in BUILD files and it was a complete nightmare because it largely wasn't interchangeable. The big difference is with java_api_version = 1 or 2. Java API v1 was built like the java.util.Date class. Mutable objects with setters and getters. v2 changed this to the builder pattern.
At the time (this may have changed) you couldn't build the artifacts for both API versions and you'd often want to reuse key protobuf definitions that other people owned so you ended up having to use v1 API because some deep protobuf hadn't been migrated (and probably never would be). It got worse because sometimes you'd have one dependency on v1 and another on v2 so you ended up just using bytes fields because that's all you could do. This part was a total mess.
What you know as gRPC was really protobuf v3 and it was designed largely for Cloud (IIRC). It's been some years so again, this may have changed, but there was never any intent to migrate protobuf v2 to v3. There was no clear path to do that. So any protobuf v3 usage in Google was really just for external use.
I explain this because gRPC fails the dogfood test. It's lacking things because Google internally doesn't use it.
So why was this done? I don't know the specifics but I believe it came down to licensing. While protobuf v2 was open sourced the RPC component (internally called "Stubby") never was. I believe it was a licensing issue with some dependency but it's been awhile and honestly I never looked into the specifics. I just remember hearing that it couldn't be done.
So when you read about things like poor JSON support (per this article), it starts to make sense. Google doesn't internally use JSON as a transport format. Protobuf is, first and foremost, a wire format for C++-centric APIs (in Stubby). Yes, it was used in storage too (e.g. Bigtable).
Protobuf in JavaScript was a particularly horrendous Frankenstein. Obviously JavaScript doesn't support binary formats like protobuf. You have to use JSON. And the JSON bridges to protobuf were all uniquely awful for different reasons. My "favorite" was pblite, which used a JSON array indexed by the protobuf tag number. With large protobufs with a lot of optional fields you ended up with messages like:
[null,null,null,null,...(700 more nulls)...,null,{/*my message*/}]
GWT (for Java) couldn't compile Java API protobufs for various reasons, so it had to use a variant as well. It was just a mess. All for "consistency" of using the same message format everywhere.

So, protobuf v1, which was perfectly serviceable, wasn't open sourced; it was rewritten into proto2. In that case migration did happen, and some fundamental improvements were made (e.g. proto1 didn't differentiate between byte arrays and strings), but as you say, migration was extremely tough and many aspects were arguably not improvements at all. Java codebases drastically over-use the builder/immutable object pattern IMO.
And then Stubby wasn't open sourced, it was rewritten as gRPC which is "Stubby inspired" but without the really good parts that made Stubby awesome, IMO. gRPC is a shadow of its parent so no surprise no migration ever happened.
And then Borg wasn't open sourced, it was rewritten as Kubernetes which is "Borg inspired" but without the really good part that make Borg awesome, IMO. Etc.
There's definitely a theme there. I think only Blaze/Bazel is core infrastructure in which the open source version is actually genuinely the same codebase. I guess there must be others, just not coming to mind right now.
Using the same format everywhere was definitely a good idea though. Maybe the JS implementations weren't great, but the consistency of the infrastructure and feature set of Stubby was a huge help to me back in the days when I was an SRE being on-call for a wide range of services. Stubby servers/clients are still the most insanely debuggable and runnable system I ever came across, by far, and my experience is now a decade out of date so goodness knows what it must be like these days. At one point I was able to end a multi-day logs service outage, just using the built-in diagnostics and introspection tools that every Google service came with by default.
I think 2012 is when Larry became CEO (again), and 2015 is when the "Alphabet" re-org / re-naming happened.
1. Larry Page was generally unhappy with the direction and execution of the company, so he became CEO. (Schmidt would never be CEO again)
2. VP Bill Coughran was shown the door (my interpretation, which is kind of like Eric Schmidt being shown the door). For my entire time there he had managed the software systems -- basically everything in google3, or everything important there
3. Urs Hoezle took over everything in technical infrastructure. I think he had previously been focused on hardware platforms and maybe SRE; now he was in charge of software too.
Urs sorta combined this "rewrite google3" thing with the "cloud" thing. To me there was always a tenuous connection there, at least technically. I can see why it made sense from a business perspective
---
Basically Larry was unhappy with google3 because the company wasn't shipping fast enough, e.g. compared to Facebook. It was perceived as mired in technical debt and processes (which IMO was essentially true, and maybe inevitable given how fast the company had grown for ~8 years)
And I think they were also looking over their shoulders at AWS, which I think by then had become "clearly important".
Why don't we have an AWS thing? At some point GCE was kind of a small project in Seattle, and then it became more important when AWS became big.
Anyone remember when Urs declared that google3 was deprecated and everything was going to be written on top of cloud in 12 to 18 months? (what he said was perhaps open to interpretation -- I think he purposely said something really ambitious to get everyone fired up)
So there was this shift to "externalize" infrastructure, make it a real product. Not just have internal customers, but external ones too.
---
So I think what you said is accurate, and I think that is the business context where the "arguably inferior rewrites" came from
- Kubernetes is worse in many ways than Borg [1]
- gRPC (I haven't used it) is apparently worse in many ways than Stubby, etc.
I'd be interested if anyone has different memories ...
---
[1] although I spent some time reading Borg source code, and e.g. compared to say the D storage server, which was also running on every node, it was in bad shape, and inefficient. There are probably ways that K8s is better, etc.
My main beef is the unimaginable complexity of running K8s on top of GCE on top of Borg -- i.e. 3 control planes stacked on top of each other ...
I don't believe Google has (had?) any objections to using open source or open sourcing things but you have to remember two things:
1. Most companies weaponize open source. They use it to "commoditize their product's complements" [1]; and
2. Google3 are so deeply integrated in a way that you can't really separate some of the tech because of the dependencies on other tech. More on that below.
> Open sourcing Stubby could certainly have been done. You just open source the dependencies too
Yeah, I don't think it's always that simple. You may not own the rights to something to be able to open source it. Releasing something may trigger more viral licenses (ie GPL) to force you to open source things you don't want to or can't.
I actually went through the process of trying to import a few open source packages into Google's third-party repo and there are a lot of "no nos". Like, a project had to have a definite license (one that was whitelisted by legal). Some projects liked to do silly things like having a license of "do whatever you want" or "this is public domain". That's not how public domain works, BTW. And if you contacted them, they would refuse to change it to even something like an MIT license, which basically means the same thing, because they didn't understand what they were doing.
> And then Borg wasn't open sourced
This actually makes sense. Later on your suggest you were a Google SRE so you should be aware of this but to whoever else reads this: Google's traffic management was deeply integrated into the entire software stack. Load balancing, DDoS defense, inter-service routing, service deployment onto particular data centers, cells and racks and so on.
It just doesn't make sense to open source Borg without everything from global traffic management down to software network switching.
> I think only Blaze/Bazel is core infrastructure in which the open source version is actually genuinely the same codebase
I don't know the specifics but I believe that Bazel too was "Blaze inspired". I suspect it's still possible to do things in Blaze that you can't do in Bazel even though the days of Blaze BUILD files being Python rather than "Python syntax like" are long gone.
Also, Blaze itself has to integrate with various other systems that Bazel doesn't, e.g. ObjFS, SrcFS, Forge, Perforce/Piper, MPM, and various continuous build systems.
[1]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
The primary misfeature of gRPC itself, irrespective of protobuf, is relying on trailers for the status code, which hindered its adoption in web browsers without an edge proxy that could translate between the gRPC and gRPC-Web wire formats. That alone, IMO, hindered its universal applicability and adoption quite a bit.
The C++ one may be slightly more challenging to replace because extra care is needed to make sure protobuf message pipeline is zero-copy. Other languages are more trivial.
One place to start would be to look at the gRPC protoc plugin and see how it’s outputting code and do something similar. Pretty lean code.
Who are you working with lol? Nobody I’ve worked with has struggled with this concept, and I’ve worked with a range of devs, including very junior and non-native-English speakers.
> Also, it doesn’t pass my “send a friend a cURL example” test for any web API.
Well yeah. It’s not really intended for that use-case?
> The reliance on HTTP/2 initially limited gRPC’s reach, as not all platforms and browsers fully supported it
Again, not the intended use-case. Where does this web-browsers-are-the-be-all-and-end-all-of-tech attitude come from? Not everything needs to be built around browser support. I do agree on the lack of HTTP/3 support though.
> lack of a standardized JSON mapping
Because JSON has an extremely anaemic set of types that either fail to encode the same semantics, or require all sorts of extra verbosity to encode them. I have the opposite experience with protobuf: I know the schema, so I know exactly what to expect and that the data is valid; I don't need to rely on "look at the JSON to see if I got the field capitalisation right".
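One concrete example of that semantic gap: JSON numbers are typically decoded as IEEE-754 doubles, so any int64 above 2^53 silently loses precision, which is why the proto3 JSON mapping emits int64 fields as strings. A quick illustration, simulating a JavaScript-style reader with Python's `parse_int=float`:

```python
import json

big_id = 9007199254740993  # 2**53 + 1, a perfectly ordinary int64 value

# A reader that, like JavaScript's JSON.parse, stores every number as a double:
decoded = json.loads(json.dumps(big_id), parse_int=float)
assert int(decoded) == 9007199254740992  # off by one: precision silently lost

# The proto3 JSON mapping sidesteps this by encoding int64 as a string:
safe = json.loads('{"id": "9007199254740993"}')
assert int(safe["id"]) == big_id  # round-trips exactly
```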
> It has made gRPC less accessible for developers accustomed to JSON-based APIs
Because god forbid they ever had to learn anything new right? Nope, better for the rest of us to just constantly bend over backwards to support the darlings who “only know json” and apparently can’t learn anything else, ever.
> Only google would think not solving dependency management is the solution to dependency management
Extremely good point. Will definitely be looking at Buf the next time I touch GRPC things.
GRPC is a lower-overhead binary RPC for server-to-server or client-server use cases that want better performance and the faster integration that a shared schema/IDL permits. Being able to drop in some proto files and automatically have a package with the methods available, without having to spend time wiring up URLs and writing types and parsing logic, is amazing. Sorry it's not a good fit for serving your webpage; criticising it for not being good at web stuff is like blaming a tank for not winning street races.
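To make the "drop in some proto files" point concrete: a hypothetical service contract like the one below is all the shared IDL that's needed before protoc plugins generate typed client and server stubs for each language (all names here are illustrative):

```proto
syntax = "proto3";

package inventory.v1;

// Hypothetical service: codegen produces a client with GetItem() and a
// server interface to implement, in every supported language, from this alone.
service Inventory {
  rpc GetItem (GetItemRequest) returns (Item);
}

message GetItemRequest {
  string item_id = 1;
}

message Item {
  string item_id = 1;
  string name = 2;
  int64 quantity = 3;
}
```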
GRPC isn't without its issues and shortcomings. I'd like to see better enums and a stronger type system, and definitely HTTP/3 or raw QUIC transport.
> Well yeah. It’s not really intended for that use-case?
Until $WORKPLACE is invaded by Xooglers who want to gRPC all the things, regardless of whether or not there's any benefit over just using HTTPS. Internal service with dozens of users in a good week? Better use gRPC!
> Why does gRPC have to use such a non-standard term for this that only mathematicians have an intuitive understanding of? I have to explain the term every time I use it.
>> Who are you working with lol? Nobody I’ve worked with has struggled with this concept, and I’ve worked with a range of devs, including very junior and non-native-English speakers.
This is just a small complaint. It's super easy to explain what unary means but it's often infinitely easier to use a standard industry term and not explain anything.
>> Also, it doesn’t pass my “send a friend a cURL example” test for any web API.
> Well yeah. It’s not really intended for that use-case?
Yeah, I agree. Being easy to use isn't the intended use-case for gRPC.
>> The reliance on HTTP/2 initially limited gRPC’s reach, as not all platforms and browsers fully supported it
> Again, not the intended use-case. Where does this web-browsers-are-the-be-all-and-of-tech attitude come from? Not everything needs to be based around browser support. I do agree on http/3 support lacking though.
I did say browsers here but the "platform" I am thinking of right now is actually Unity, since I do work in the game industry. Unity doesn't have support for HTTP/2. It seems that I have different experiences than you, but I still think this point is valid. gRPC didn't need to be completely broken on HTTP/1.1.
>> lack of a standardized JSON mapping
> Because JSON has an extremely anaemic set of types that either fail to encode the same semantics, or require all sorts of extra verbosity to encode. I have the opposite experience with protobuf: I know the schema, so I know what I expect to get valid data, I don’t need to rely on “look at the json to see if I got the field capitalisation right”.
I agree that it's much easier to stick to protobuf once you're completely bought in, but not every project is greenfield. Before there was a well-defined JSON mapping, and tooling that adhered to it, it was very hard to transition from JSON to protobuf. Now it's a lot easier.
>> It has made gRPC less accessible for developers accustomed to JSON-based APIs
> Because god forbid they ever had to learn anything new right? Nope, better for the rest of us to just constantly bend over backwards to support the darlings who “only know json” and apparently can’t learn anything else, ever.
No comment. I think we just have different approaches to teaching.
>> Only google would think not solving dependency management is the solution to dependency management
> Extremely good point. Will definitely be looking at Buf the next time I touch GRPC things.
I'm glad to hear it! I've had nothing but excellent experiences with Buf tooling and their employees.
> GRPC is a lower-overhead, binary rpc for server-to-server or client-server use cases that want better performance and faster integration that a shared schema/IDL permits. Being able to drop in some proto files and automatically have a package with the methods available and not having to spend time wiring up url’s and writing types and parsing logic is amazing. Sorry it’s not a good fit for serving your webpage, criticising it for not being good at web stuff is like blaming a tank for not winning street races.
Without looping in the frontend (aka the web), the contract-based philosophy of gRPC is much less compelling, because you'd end up with a completely different contract language for service-to-service (protobuf) than for frontend-to-service (maybe OpenAPI). For the record: I very much prefer protobuf as the "contract source of truth" over OpenAPI. gRPC-Web exists because people wanted to make this work, but they built their street racer with some tank parts.
> GRPC isn’t without its issues and shortcomings- I’d like to see better enums and a stronger type system, and defs http/3 or raw quic transport.
Totally agree!
What's the standard term? While I agree that unary isn't widely known, I don't think I have ever heard of any other word used in its place.
> gRPC didn't need to be completely broken on HTTP/1.1.
It didn't need to per se (although you'd lose a lot of the reason for why it was created), but as gRPC was designed before HTTP/2 was finalized, it was still believed that everyone would want to start using HTTP/2. HTTP/1 support seemed unnecessary.
And as it was designed before HTTP/2 was finalized, it is not like it could have ridden on the coattails of libraries that have since figured out how to commingle HTTP/1 and HTTP/2. They had to write HTTP/2 from scratch in order to implement gRPC, so supporting HTTP/1 as well would have greatly ramped up the complexity.
Frankly, their assumption should have been right. It's a sorry state that they got it wrong.
Hello! :)
>> Well yeah. It’s not really intended for that use-case?
> Yeah, I agree. Being easy to use isn't the indented use-case for gRPC.
I get the sentiment, for sure; I guess it's a case of tradeoffs? GRPC traded "ability to make super easy curl calls" for "better features and performance on the hot path". Whilst it's annoying that it's not easy, I don't feel it's super fair to notch up a "negative point" for this. I agree with the sentiment though: if you're trying to debug things from _first_ principles alone in GRPC-land, you're definitely going to have a bad time. Whether that's the right approach is, I feel, pretty subjective.
> I did say browsers here but the "platform" I am thinking of right now is actually Unity, since I do work in the game industry. Unity doesn't have support for HTTP/2. It seems that I have different experiences than you…
Ahhhh, totally fair. I probably jumped the gun on this with my own webby biases, which in turn probably explains the differences in my/your next few paragraphs too, and my general frustration with browsers/FE devs, which shouldn't catch everyone else in the collateral fire.
> No comment. I think we just have different approaches to teaching.
Nah, I think I was just in a bad mood haha. I've been burnt by working with endless numbers of stubbornly lazy FE devs at the last few places I've worked, and my tolerance for them is running out; I also didn't consider the use-case you mentioned of game dev, being beholden to the engine, which is a bit unfair of me. Under this framing it's a difficult spot: the protocol wants to provide a certain experience and behaviour, and people like yourself want to use it, but are constrained by some pretty minor things that said protocol seems to refuse to support for no decent reason. I guess it's possibly an issue for any popular-yet-specialised thing: what happens when your specific-purpose tool finds significant popularity in areas that don't meet your minimum constraints? Ignore them? Compromise on your offering? Made all the worse by Google behaving esoterically at the best of times lol.
You mentioned that some GRPC frameworks have already moved to support http/3, do you happen to know which ones they are?
Sick burn. I like it, especially since most use of gRPC seems to be cargo-culting.
Lolwut. This is what was always said about ASN.1 and the reason that this wheel has to be reinvented periodically.