Go is really good at easy concurrency tasks: "shared-nothing" architectures with almost no shared memory at all, like a typical web server. Share a few resources like database handles with a sync.Pool and call it a day. Go lets you write "async" code as if it were sync, with no function coloring, making it decidedly nicer than basically anything in its performance class for this use case.
Rust, on the other hand, has to contend with function coloring and a myriad of seriously hard engineering problems around async. Async Rust gets better every year, but personally I still (as of last month at least) think it's quite a mess. Rust is absolutely excellent for traditional concurrency, though. Anything where you would've used a mutex lock, Rust is just way better than everything else. It's beautiful.
But I struggle to be as productive in Rust as I am in Go, because Rust, the standard library, and its ecosystem give the programmer so much to worry about. It sometimes reminds me of C++ in that regard, though it's nowhere near that bad (because at least there's a coherent build system and package manager). And frankly, a lot of the software I write is just boring, and Go does fine for most of it. I try Rust periodically for things, and romantically it feels like it's the closest language to "the future", but I think the future might still have a place for languages like Go.
You should factor TCO into productivity. Can you write Python/Go etc. faster? Sure! Can you operate them in production at the same TCO as Rust? Absolutely not. Most of the time the person debugging production issues and data races is different from the one who wrote the code. That's what creates the illusion that productivity is better with Python/Go.
After spending 20+ years around production systems, both as a systems and a software engineer, I think Rust is here to reduce TCO by moving the mental burden of writing data-race-free software from production to development.
So, my first job actually started as a pure Python gig. Operations for Python/Django absolutely sucked ass. Deploying Django code reliably was a serious challenge. We got better over time by using tools like Vagrant and Docker and eventually Kubernetes, so the differences between production and dev/testing eventually faded and became less noticeable. But frankly no matter what we did, not causing production issues with Django/Python was a true-to-life nightmare. Causing accidental type errors not caught by tests was easy, MyPy couldn't really cover all that much of the code easily, and it was very easy to accidentally cause horrible production behavior with the Django ORM (behavior that, of course, would look okay locally with tiny amounts of data). This is actually the original reason why I switched to Go in the first place, at my first job in around 2016. The people who I worked with are still around to attest to this fact; if you want I can probably get them to chime in on this thread, I still talk to some of them.
Go was a totally different story. Yes, we did indeed have some concurrency pains, which really didn't exist in Python for obvious reasons, but holy shit, we could really eke a lot of performance out of Go code compared to Python. We were previously afraid we might have to move data-heavy workloads from Twisted (not related to the Django stuff) to something like C++ or maybe even optimized Java, but Go handily took it and allowed us to saturate the network interface on our EC2 boxes. (A lot of communications were going over WebSockets, and the standards for compression in WebSockets took a long time to settle and become universally supported, so we actually played with implementing the lz4 compression scheme in JS. I wound up writing my own lz4 implementation based on the algorithms, I believe, from the C version. It wound up being too much compute, though. But we had to try, anyway.)
So how many reliability problems did we wind up having doing all this? Honestly, not a whole lot on the Go side of things. The biggest production issue I ever ran into was one where the Kubernetes AWS integration blew up because we wound up having too many security groups. I wound up needing to make an emergency patch to kubelet in the early hours to solve that one :) We did run into at least one serious Go-related issue over time, which was indeed concurrency related: when Go 1.6 came out, it started detecting concurrent misuses of maps. And guess what? We had one! It wasn't actually triggering very often, but in some cases we could run into a fairly trivial concurrent map access. Before Go 1.6 it didn't seem to crash, but it could at least cause some weird behavior when it actually triggered; now it was a crash that we could debug. It was a dumb mistake and it definitely underscores the value of borrow checking; "just don't mess up" will never prevent all mistakes, obviously. I will never tell you that I think borrow checking is useless, and really, I would love to always write 100% correct software all the time.
That said though, that really is most of the extent of the production issues we had with Go. Go was a serious workhorse and we were doing reasonably non-trivial things in Go. (I had essentially built out a message queue system for unreliable delivery of very small events. We had a firehose of data coming in with many channels of information and needed to route those to the clients that needed them and handle throttling/etc. Go was just fantastic at this task.) Over time things got easier too, as Go kept updating and improving, helping us catch more bugs.
I can only come to one conclusion: people who treat Go and Python as being in the same class are just ignorant of the realities of the situation. There are cases where Rust will be immensely valuable because you really can't tolerate a correctness problem, but here's the thing about that Go concurrent map access issue: while it could cause some buggy behavior and eventually caused some crashing, it never really caused any serious downtime or customer issues. The event delivery system was inherently dealing with unreliable data streams, and we had multiple instances. If there was a blip, clients would just reconnect and people would barely notice anything even if they were actively logged in. (In fact, we really didn't do anything special for rolling deployments to this service, because the frontend component was built to handle a disconnection gracefully. If it reconnected quickly enough, there was no visual disturbance.)
That's where the cost/benefit analysis gets tricky, though. Python and Django and even Twisted are actually pretty nice, and I'm sure they're even better than when we originally left them (to be clear, we did still have some minor things in Django after that, too, but they were mostly internal-only services). Python and Django had great things like the built-in admin panel which, while it couldn't solve everyone's needs, was pretty extensible and usable on its own. It took us a while to outgrow it for various use cases. Go has no equivalent to many Django conveniences, so if you haven't fully outgrown e.g. the Django admin panel and ORM, it's hard to fully give up on those features.
Throughout all of this, we had a lot more issues with our JS frontend code than we ever did with either Python/Django or Go, though. We went through trying so many things to fix that, including Elm and Flow, and eventually the thing that really did fix it, TypeScript. But that is another story. (Boy, I sure learned a lot on my first real career job.)
At later jobs, Go continued to not be at the center of most of the production issues I faced running Go software. That's probably partly because Go was not doing the most complicated work; oftentimes the most complicated bits were message queues, databases, and even to some degree memory caches, and the Go bits were mostly acting as glue (albeit definitely glue with application logic, to be sure).
So is the TCO of Go higher than Rust? I dunno. You can't really easily measure it since you don't get to explore parallel universes where you made different choices.
What I can say is that Go has been a choice I never regretted making all the way from the very first time and I would choose it again tomorrow.
Thankfully though, people don't just throw their hands up there; a good amount of work has gone into figuring out the kinds of mistakes that often lead to Go concurrency bugs in the real world and writing static analysis tools that can help prevent them. That work, combined with Go's built-in tools and standard library, and the memory safety of individual isolated goroutines, makes most production Go concurrency bugs fairly boring even compared to C concurrency bugs, even though they theoretically share the same basic problem: you can freely share mutable data unsafely across concurrent threads.
So yes, it is still possible to write trivial, obvious concurrency bugs. The language won't stop you. However I've used Go across almost every job I've had since like 2016 and it has been rare to come across a concurrency bug this trivial. I hope I would catch flagrantly shared mutable state across threads during code review.
It's not so much about being "boring" or not; Rust does just fine at writing boring code once you get familiar with the boilerplate patterns (real-world experience has shown that Rust is not really at a disadvantage with respect to productivity or iteration speed).
There is a case for Golang and similar languages, but it has to do with software domains where there literally is no viable alternative to GC, such as when dealing with arbitrary, "spaghetti" reference graphs. Most programs aren't going to look like that though, and starting with Rust will yield a higher quality solution overall.
I don't believe that for a second. Even just going from Python to Go drops my productivity by maybe about 50%. Rust? Forget it.
Sure, if you have a project that demands correctness and high performance that requires tricky concurrency to achieve, something like Rust may make sense. Not for your run-of-the-mill programs though.
But more seriously, yeah, Rust doesn't make sense for trivial programs. But these days I write Python for a living, and it doesn't take long to stumble upon bugs that Rust would have trivially detected from within the comfort of my IDE.
But as much as I love LARPing about correctness (believe me, I do), it's just simply the case that we won't write perfect software, and that's totally OK. It's totally OK that our software will have artificial limitations (like Go only accepting filenames that are valid UTF-8), or take some unnecessary performance/latency hits, or perhaps even crash in some weird-ass edge case. There are very few domains in which correctness issues can't be tolerated.
I don't deal with domains that are truly mission critical, where people could die if the code is incorrect. At worst, people could lose some money if my code is incorrect. I still would prefer not to cause that to happen, but those people are generally OK with taking that risk if it means getting features faster.
That's why Go has a future really. It's because for most software, some correctness issues are not the end of the world, and so you can rely on not fully sound approaches to finding bugs, like automated testing, race detection, and so on.
Rust can also make some types of software more productive to write, but it is unlikely to beat Go in terms of productivity when it comes to a lot of the stuff SaaS shops deal with. And boy, the software industry sure is swamped in fucking SaaS.
Once your SaaS products get enough users, and you're dealing with millions or billions of requests per day, those rare bugs start showing up quite often... And it turns out programming towards correctness is desirable, if for no other reason than to keep PagerDuty quiet. Tolerating correctness issues isn't cost-free... People having to respond during off hours costs money and stress. I think most people would rather pay the costs at dev time, when they aren't under the pressure of an incident, than during an outage.
I just wish Go supported parametric enums (sum types) and Option, rather than copying Hoare’s billion dollar mistake.
I ported some code to Go and Rust a few years ago to try both languages out. The Rust code ended up being 30% smaller because I could use an enum and a match expression. In Go I needed to make a set of types and interface{} to achieve the same thing, which was both slower and way more verbose. My Rust implementation was as fast as my C implementation in two-thirds as much code. And it was trivial to debug. My Go implementation took way more code to write, about the same amount as C, but it was harder to read than C and ran much slower.
For cookie-cutter SaaS and prototypes, I prefer TypeScript. It's fast enough for most things, and the type system is much more expressive without getting in your way. Not as convenient to deploy as Go, especially on mobile. And the standard library is more like an attic. But in my opinion it's a much better designed language.