One thing I really miss when using other RPC systems is the variety of debug endpoints Stubby had. Stubby piggybacked on HTTP a bit like gRPC does, by registering endpoints into a pre-existing HTTP server. One was a magic hidden endpoint that simply converted the socket into a Stubby socket, but others let you do things like:
• Send an RPC by filling out an auto-generated HTML form, which also meant you could use curl to send RPCs for debugging (see the first sketch after this list). There's an OpenAPI-based thing that gives you something similar for REST these days, but it's somehow heavier and not quite as clean.
• View all RPCs currently in flight, including cross-process traces, how often each RPC had been retried, and so on. This made it very easy to figure out where an RPC had got stuck, even when it had crossed several machines. In the open-source world there's Jaeger and similar; I haven't tried those, but this was built in and didn't require any additional tools.
• View latency histograms for RPCs, lists of connected machines, etc. View the stack traces of all threads.
• They had a global service discovery system that was basically a form of reactive DNS: you could subscribe to names and receive push notifications when a job moved between underlying machines (there's a toy sketch of the idea below).
• Endpoints for changing the values of flags/parameters on the fly (thousands were exposed like this; see the last sketch below).
• RPC routing was integrated with the global load balancing system.
Plus probably a dozen more things I've forgotten.
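To give a flavor of the form-based debug endpoint from the first bullet, here's a minimal sketch in Go. Everything in it is invented for illustration (the /debug/rpc path, the echo method, the port); the real pages were generated automatically from the service schema, not hand-written like this:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// echo stands in for a real RPC method; the actual endpoint would
// dispatch to any method on the server based on the service schema.
func echo(msg string) (string, error) { return "echo: " + msg, nil }

func main() {
	http.HandleFunc("/debug/rpc", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodGet {
			// GET: render an HTML form a human can fill out in a browser.
			fmt.Fprint(w, `<form method="POST"><input name="message"><button>Send RPC</button></form>`)
			return
		}
		// POST: accept form fields and invoke the RPC with them,
		// so the same endpoint works from curl.
		reply, err := echo(r.FormValue("message"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, reply)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With that running, `curl -d message=hi localhost:8080/debug/rpc` sends the "RPC" from the command line, which is exactly the debugging workflow the form enabled.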
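The reactive naming idea can likewise be sketched as a toy in-process watch API. The Resolver type, the job name, and the address below are all made up; the real system was a distributed service with long-lived subscriptions, not an in-memory map:

```go
package main

import (
	"fmt"
	"sync"
)

// Resolver maps job names to backend addresses and pushes updates to
// subscribers whenever a job moves, instead of making clients poll.
type Resolver struct {
	mu   sync.Mutex
	addr map[string]string
	subs map[string][]chan string
}

func NewResolver() *Resolver {
	return &Resolver{addr: map[string]string{}, subs: map[string][]chan string{}}
}

// Watch returns a channel that receives the current address (if known)
// and then every address the name subsequently resolves to.
func (r *Resolver) Watch(name string) <-chan string {
	r.mu.Lock()
	defer r.mu.Unlock()
	ch := make(chan string, 1)
	if a, ok := r.addr[name]; ok {
		ch <- a
	}
	r.subs[name] = append(r.subs[name], ch)
	return ch
}

// Update records a job's new location and notifies every watcher.
// A real implementation would coalesce updates for slow subscribers.
func (r *Resolver) Update(name, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.addr[name] = addr
	for _, ch := range r.subs[name] {
		ch <- addr // push notification: the job moved
	}
}

func main() {
	r := NewResolver()
	updates := r.Watch("/jobs/search/frontend")
	r.Update("/jobs/search/frontend", "10.0.0.7:443") // scheduler moved the job
	fmt.Println("resolved to", <-updates)
}
```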
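And the on-the-fly flag endpoints can be approximated with Go's standard flag package. The /flagz path and the verbosity flag here are assumptions, and a production version would need properly synchronized flag values (flag.Set racing against concurrent readers is unsafe), which this sketch glosses over:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
)

// An example flag that can be flipped live via the endpoint below.
var verbosity = flag.Int("v", 0, "log verbosity")

func main() {
	flag.Parse()
	// GET /flagz lists all flags and their current values;
	// POST /flagz with name=v&value=2 changes one without a restart.
	http.HandleFunc("/flagz", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodPost {
			if err := flag.Set(r.FormValue("name"), r.FormValue("value")); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
		}
		flag.VisitAll(func(f *flag.Flag) {
			fmt.Fprintf(w, "%s=%s\n", f.Name, f.Value)
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```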
All this made it very easy to explore and diagnose systems with nothing but a web browser. You also never ran into servers that lacked these features, because every app was required to use the in-house stack and everything needed was enabled by default. Most open-source server stacks, by contrast, are obsessed with plugins: out of the box they do very little, and companies face an uphill battle to keep everything consistent.
For clusters, the main difference I remember is that Borg had a proper config language instead of the weird mashed-up YAML templating thing Kubernetes uses, the Borg GUI was a lot cleaner and more info-dense than Kubernetes' Material Design one, and the whole reactive naming system was deeply integrated in a natural way. Kubernetes is also built around Docker containers, which introduces complexity Borg didn't have. In the past I had problems with k8s/Docker doing dumb things like running out of disk space because containers weren't being purged at the right times, and kernel namespaces have also yielded some surprises. At the time, Borg didn't really use namespacing, just chroots.
There are some minor stylistic differences too. The old Google internal UI had a simple, industrial feel. It was HTML written by systems engineers, so everything was very plain and info-dense, with a few blocks of pastel color here and there. Imagine the Linux kernel guys making web pages: very fast, lightweight, and easy to scrape if necessary.