The basic premise is that you specify the interface, and tooling builds skeleton code so that the code the user writes looks like any other code they write, and yet it might magically be running on half a dozen machines.
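For example, with Protobuf the interface might be declared roughly like this (the service and message names here are made up for illustration):

```protobuf
syntax = "proto3";

// Hypothetical service definition; the protobuf compiler generates
// client stubs and server skeletons from this, so calling GetUser
// looks like an ordinary function call in the generated code.
service Lookup {
  rpc GetUser (GetUserRequest) returns (GetUserReply);
}

message GetUserRequest { string user_id = 1; }
message GetUserReply   { string name = 1; }
```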
Of course, there is a real difference between invoking a local “procedure call,” which is simply a program counter change on the same stack you had before, and a remote one. In the remote case the parameters are marshaled into a canonical form so that the destination can reliably unmarshal and correctly interpret them, and the step that had been done by the linker resolving a symbol in your binary is now an active agent that uses yet another protocol at the start of execution to resolve the symbols and plumb in the necessary networking code. And the execution itself may happen exactly as expected, or happen multiple times without you knowing it has done so, or not happen at all.
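The marshaling step can be sketched in a few lines. This is not gRPC’s actual wire format, just the general idea: pack the arguments in a canonical byte order so the remote end can reliably unmarshal them regardless of either host’s CPU (the function names and layout here are made up):

```python
import struct

def marshal_call(proc_id: int, x: int, y: int) -> bytes:
    # '>' forces big-endian ("network order") so both ends agree on the
    # layout no matter what the host CPU's native byte order is.
    return struct.pack(">Iqq", proc_id, x, y)

def unmarshal_call(payload: bytes) -> tuple:
    # The receiver interprets the bytes with the same canonical layout.
    return struct.unpack(">Iqq", payload)

wire = marshal_call(7, 40, 2)
assert unmarshal_call(wire) == (7, 40, 2)
```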
The minimalist camp, of which I consider myself a member, says “No, you can’t make these seamless; they really are just syntactic sugar that lets you specify a network protocol.” In that simpler world you acknowledge, and plan for, the possibility that any part of the process may fail. Your code has failure checks and exception handlers that deal with “at most once” or “at least once” semantics, and you write your functions to be idempotent when you can, to minimize the penalty of trying to maintain the illusion of procedure call semantics over what is in fact a network protocol implementation.
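One common way to cope with “at least once” delivery is to have the caller retry and the server deduplicate by request ID, so a retried call is not applied twice. A minimal sketch, with all names hypothetical:

```python
import uuid

class Server:
    def __init__(self):
        self.balance = 0
        self.seen = {}  # request_id -> cached result, for deduplication

    def deposit(self, request_id: str, amount: int) -> int:
        # If this request ID was already processed, the network retried a
        # call that actually succeeded: replay the cached result instead
        # of applying the deposit a second time.
        if request_id in self.seen:
            return self.seen[request_id]
        self.balance += amount
        self.seen[request_id] = self.balance
        return self.balance

server = Server()
req = str(uuid.uuid4())
first = server.deposit(req, 100)
second = server.deposit(req, 100)  # a network-level retry of the same call
assert first == second == 100      # applied once, not twice
```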
But there is another camp, and from the material Buf has put out they seem to be in that camp, which is “networking is hard and complicated, but we can make it so that developers don’t need to even know they are going over a network. Just use these tools to describe what you want to do and we’ll do all the rest.”
My experience is that obfuscating what is going on under the hood to lower the cognitive load on developers breaks down when you try to distribute a system. That is especially true in languages that don’t explicitly allow for it. The projects/ideas/companies that have crashed on that reef are numerous.
And there is this part: “All gRPC and Protobuf are doing that is different from HTTP is they're using a binary protocol based on explicitly-defined schemas. The protobuf compiler will take your schemas and generate convenient framework code for you, so you don't have to waste your time on boilerplate HTTP and parsing. And the binary encoding is faster and more compact than text-based encoding like JSON or XML. But this is all convenience and optimization, not fundamentally different on a conceptual level.”
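The compactness claim is easy to see without protobuf itself; any fixed binary encoding of the same record comes out smaller than its JSON text form (this sketch uses Python’s stdlib, not protobuf’s actual varint encoding):

```python
import json
import struct

record = {"id": 123456, "score": 9.5}

# Text form: field names and punctuation travel on the wire every time.
as_json = json.dumps(record).encode("utf-8")

# Binary form: the schema is agreed on out of band, so only the values
# travel -- a 4-byte unsigned int plus an 8-byte double here.
as_binary = struct.pack(">Id", record["id"], record["score"])

assert len(as_binary) < len(as_json)
```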
I agree 100% with that statement, and that is exactly what ONC RPC does, and that is exactly what ASN.1 does, and that is exactly what DCS does. That same wheel, again and again. So what I was suggesting originally is that Buf should try to explain what they are doing that these other systems failed to do, and in that explanation acknowledge the reasons this wheel has been re-invented so many times before, and then explain how they think they are going to make a more durable solution that lasts for more than a few years.