- Golang doesn't fit comfortably into the spectrum of "low level and fast vs. high level and easy" languages, largely because it deliberately goes in different directions in many ways (see: critiques of its garbage collector, type system, etc.).
- Smart people are happy with Golang, but the fact that it makes different choices means that people regularly discover to their sorrow that the language isn't quite what they expected.
- In general, most of the "go was wrong for us" stories seem to come from companies that use it as a side language. Teams where Golang is their main workhorse generally seem at peace with its choices.
To me, it means I have no doubt you can build a company on Go - but maybe build that experimental high-performance replacement for a python service in Rust.
Go has some warts, but they're all things that make sense if you know some basic concepts about go (as in you can derive them from knowing how go ticks), versus thousands of random caveats in the core library of other languages. They seem to be most upset about allocation, but it is in fact pretty simple. Types that can be nil are nil upon initialisation, and everything else is a zero value (by virtue of not being a pointer). C isn't actually different here. The only thing I'd expect to surprise a C programmer here is that strings aren't pointers by default, despite being variable length.
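That zero-value rule can be stated in a few lines. A minimal sketch (only the standard library, nothing assumed beyond the spec):

```go
package main

import "fmt"

func main() {
	// Pointer-like types (pointers, maps, slices, channels, funcs,
	// interfaces) start out nil; everything else starts as a zero value.
	var p *int           // nil pointer
	var m map[string]int // nil map
	var n int            // 0
	var s string         // "" -- a real string, not a nil pointer,
	//                      despite being variable length
	fmt.Println(p == nil, m == nil, n, s == "")
}
```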
And I'm not sure what their point about CGO is. People always say to avoid it because you leave the Go world behind and because performance could be better, but it is still a legitimate way to achieve FFI. The SQLite package uses it and nobody recommends against using that either. It just comes with a "more complexity if you add this" disclaimer.
That's exactly the problem. I don't want to spend my time learning the myriad quirks of a language. I want to spend my time building my program. As someone who is an "advanced beginner" at Go, I assure you that just knowing "some basic concepts" is not enough to avoid Go's pitfalls and footguns.
I don't want to work in a language where the solution to all these things is "just be careful when writing code". I'm fallible. Compilers, less so. The compiler should be doing this work for me.
That is just it. Go's quirks are mostly a result of its consistency, not a result of a lack of consistency.
> I assure you that just knowing "some basic concepts" is not enough to avoid Go's pitfalls and footguns.
It seems that Go has two somewhat contradictory goals: consistency and similarity to popular yet inconsistent languages. This seems to result in a lot of people who can generally use the language, but haven't learned the language model because the language doesn't require that in order to get started. This seems to frustrate a lot of people and prevent them from ever learning the language model. Languages which appear completely different from popular languages don't seem to suffer this issue because people tend to learn the language model up front.
Go really only has one major source of inconsistency and that is the late addition of type parameterization. The built-in parameterized types are special, but still generally follow the same rules as other types with a few exceptions. The main thing you need to know about slices, maps, and interfaces is what they actually are. Once you know that, the normal rules apply. Channels are a different beast though.
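The "what they actually are" point can be made concrete: a slice is a small header (pointer, length, capacity) over a backing array, so assignment copies the header, and append may or may not reallocate. A minimal sketch:

```go
package main

import "fmt"

func main() {
	// Assigning a slice copies the header, not the backing array.
	a := []int{1, 2, 3}
	b := a    // b and a share the same backing array
	b[0] = 99 // visible through a as well
	fmt.Println(a[0])

	// Here len == cap, so append must allocate a new backing array;
	// after that, c no longer aliases a.
	c := append(a, 4)
	c[0] = 7
	fmt.Println(a[0]) // unchanged: c's write went to the new array
}
```

Once you know the header model, the aliasing and append behavior above stops being surprising; it follows from "a slice is a value containing a pointer".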
There is the modernc.org/sqlite package that doesn't use cgo. Currently the performance is lower, but it's an option.
- Go's approach to normal error handling assumes that correctness and proper error handling are paramount: every error must be handled explicitly, and every control flow path must be made obvious, no matter how much verbosity this creates.
- Go's approach to panic handling assumes that correctness and proper error handling don't matter: a function can halt at any given point, panics are easy to trigger by accident (e.g. methods with nil receivers), handling them explicitly is usually discouraged, and they tend to leave values in intermediate and unexpected states (unless you use "defer" very carefully and consistently).
Rust partially resolves this issue by preventing many causes of panics, by using methods like "lock poisoning" to avoid leaving shared values in unexpected states, and by having proper destructors.
Go's approach (just crash everything) makes it easy for one error to completely bring down an application. Handling panics leads to values being left in inconsistent states with operations half-completed.
The reason I say you can’t turn every fault into an error return is because an unexpected infinite loop is also a type of fault. If you call some function that has bugs in it, then there’s a chance that the function won’t return. “Won’t return” may mean that it panics, deadlocks, loops forever, or loops for so long it might as well be forever. In Haskell, all of these different behaviors are lumped in together as “bottom”, and bottom is a value which has well-defined semantics even though it covers all these different cases. There’s a whole debate in the Haskell community about whether functions should be total. A total function doesn’t return bottom unless bottom was an argument—in Go/Rust terms, a total function does not panic and does not infinitely loop.
My take—as long as you think “this function might not return because it has a bug in it”, there’s not a good reason to prohibit panic(). The panic() functionality is a more controlled, flexible way for a function to not return.
I think you could write a whole article on when to use recover() in Go. The idea that you should never panic is a bit of a hopeless dream—if that’s the kind of correctness you want, then it sounds like you want some kind of formal verification, which can be done but not in Go. The idea that you should never recover() is too severe. Yes, you can find code that leaves your program in an inconsistent state after a panic(), but in practice I’d say that these problems are relatively rare. You can also use panic/recover to simplify your code in certain ways. I’ve used panic/recover to write parsers or deserialization code, where you just use a panic() to return an error from the top-level parser/deserializer, which catches it with a recover().
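The parser pattern described here can be sketched minimally (the names are made up): internal helpers panic with a private error type, and the top-level function converts it back into an ordinary error with recover(), re-panicking on anything that isn't ours so genuine bugs still crash loudly.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
)

// parseError is a private wrapper so recover() can tell our own
// deliberate panics apart from genuine bugs.
type parseError struct{ err error }

func parseInts(input string) (nums []int, err error) {
	defer func() {
		if r := recover(); r != nil {
			pe, ok := r.(parseError)
			if !ok {
				panic(r) // not ours: propagate the real bug
			}
			err = pe.err
		}
	}()
	for _, f := range strings.Fields(input) {
		nums = append(nums, mustAtoi(f))
	}
	return nums, nil
}

// mustAtoi panics instead of returning an error, so deeply nested
// parsing code doesn't have to thread err through every level.
func mustAtoi(s string) int {
	n, err := strconv.Atoi(s)
	if err != nil {
		panic(parseError{errors.New("not a number: " + s)})
	}
	return n
}

func main() {
	fmt.Println(parseInts("1 2 3"))
	fmt.Println(parseInts("1 x 3"))
}
```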
I’d also say that it’s relatively normal to recover() inside your request handler for network services. You could weigh the risk of panic/recover leaving your application server in an unexpected state against the risk of getting a denial of service from panic taking down the whole app.
Which is a pretty big advantage!
No, methods with nil receivers in Go don't trigger panics (as long as the receiver isn't dereferenced).
> every error must be handled explicitly, and every control flow path must be made obvious, no matter how much verbosity this creates.
Is this bad? It's just the recommended way; you aren't required to write your code this way.
Yes, but in practice most methods do dereference their receivers. Given that, you have two choices:
1. Check for nil receivers explicitly in every method. This is considered unidiomatic and libraries rarely do this.
2. Don't check for nil receivers, and have your method panic on the first dereference with no explicit check. Then, most of your methods can halt partway through in unexpected (and usually undocumented) ways, unless you pay very close attention to this failure mode.
Furthermore, such panics can occur far down the call stack, making it non-obvious from the stack trace where the error is.
Also, this means that, if you upgrade your library so that a method now dereferences its receiver, your library is suddenly no longer backwards-compatible.
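Both choices can be seen side by side in a minimal sketch (the types here are invented for illustration). Calling a method on a nil pointer is legal in Go; only dereferencing the receiver panics:

```go
package main

import "fmt"

type list struct {
	val  int
	next *list
}

// len is written choice-1 style: it tolerates a nil receiver.
func (l *list) len() int {
	if l == nil {
		return 0
	}
	return 1 + l.next.len()
}

// sum is written choice-2 style: it dereferences its receiver with no
// nil check, so calling it on a nil *list panics partway down the
// call stack.
func (l *list) sum() int {
	if l.next == nil { // checks next, but never l itself
		return l.val
	}
	return l.val + l.next.sum()
}

func main() {
	var l *list
	fmt.Println(l.len()) // fine: 0
	// l.sum()           // would panic: nil pointer dereference
}
```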
It usually works okay unless there's a "query of death" causing repeated restarts.
Similarly for command line apps.
This struck me as a really insightful thing to say. I often talk about how I like languages with strong type systems, and how I like using those type systems as fully as my brain thinks is reasonable to do so, because when the compiler finishes with no error, I have much more confidence that what I wrote is correct.
But I never really thought about it from the other side: being beginner friendly doesn't just mean "you can pick up the syntax in an afternoon". It should mean that the compiler saves you from beginner mistakes. As someone who only occasionally uses Go, I'd consider myself an "advanced beginner" at the language. And I remember many of the times I wrote code that seemed correct, and the compiler accepted, but was wrong because of Go's quirks.
That alone is a reason to disqualify Go for me as a language to get serious work done in.
- “This code is awful, it’s full of all sorts of hacks, spaghetti code, bad practices, etc.”
- “This code must be fine, because the people working on it are smart, and if I think there’s something wrong, it must be because I don’t know the reasons for it.”
The problem is that junior programmers don’t have a good intuitive sense to figure out which of those reactions is the correct one. Sometimes you get junior programmers who are rightly horrified about the state your code is in, and want to fix it. Sometimes you get junior programmers who think the code is awful just because it doesn’t look like the good code they saw when they were in school, or because they don’t know what good production code looks like (which is often in flux).
The responsibility of the senior developer is to protect the junior developer’s opinions about the code like they’re a flickering candle that could get blown out at any moment. The junior developer will either hold onto those opinions and try changing the code, or will decide that they’re wrong—and the key skill you want to cultivate is the ability to make the correct decision.
It’s easy for a senior developer to simply tell the junior developers what the correct answer is, which, in this metaphor, means taking the candle away.
Manager (who was a world class electrical engineer): "why did you reuse this?"
Friend (who was also an intern): "I assumed ${staffEngineer} knew what they were doing and it would be fine to just place it in this design"
Manager: "never assume that anyone knows what they're doing here or at any other organization."
And it's pretty much always a bad choice. It might be a little bad - like if you're using PostgreSQL, you'll need TestContainers for your integration tests, which can slow the build - or it can be a lot bad - like if you're using Oracle, a horde of demonic spirits now know your True Name and are laying plans to extract the most value for their Dark Lord - but there's always tradeoffs.
Senior engineers are just better able to list and weigh those tradeoffs, and maybe hedge against some of the badness through careful architecture.
See: C++/Java OOP implementation.
And when the ship sinks under their feet they sometimes don’t accept or realise that the ship is sinking because of choices they made. However, the smart ones will learn and make better choices in the future.
Interesting.
How does that fit into the success of Node.js?
Even the author of Node.js accepts that it is dreadful. Yet it underpins so much current technology. Even in the Rust world.
Go's position on the simplicity vs feature set continuum is perhaps the least interesting reason it's popular. There are a myriad of other reasons for its success:
- It spits out static binaries with minimal fuss.
- The import statement maps strings to symbols. No complicated module system to learn. The strings are resolved to something like URLs, but that's not part of the language. It's very easy to see code on GitHub and import it.
- Packages are just directories.
- No new package namespace with squatters and poor infrastructure. Go packages, in practice, are named with DNS. You know, the namespace that already exists and everything else has standardized on.
- Rejection of complicated version selection. Go almost went down the npm route, there was even an "official experiment" that looked much like other package managers. Then the Go leadership stepped in and sorted things out. https://research.swtch.com/vgo-mvs
Go is set up to be the language of an ecosystem in a way that most other languages are not. If you are a language designer: take Go's import statement, packages, and modules, and just copy them. You aren't going to do better, and you probably weren't trying to innovate in that area anyway.
This is for sure untrue, unless one is already steeped in the golang ecosystem.
$ cat go.mod
require (
github.com/pulumi/pulumi/sdk/v3 v3.43.1
)
oh, cool, I can just navigate to https://github.com/pulumi/pulumi/tree/v3.43.1/sdk/v3 to view the source ... oh, :sad-trombone: it's a 404. Well, why is that? $ curl https://github.com/pulumi/pulumi/blob/v3.43.1/sdk/go.mod | head -1
module github.com/pulumi/pulumi/sdk/v3
Oh, because the "module" line is decoupled from the apparent URL. Then there's this `gopkg.in/yaml.v2` nonsense, as described by this: https://go.dev/src/cmd/go/internal/help/helpdoc.go#L251 Meaning navigating to https://gopkg.in/yaml.v2 gets one thing, but https://gopkg.in/yaml.v2?go-get=1 produces another, and only a view-source of the latter shows what golang is going to use. Yes, I'm aware that out of the kindness of the author's heart the browser version does link to the alleged source repo, but alleged is the key part of that
You're quoting me talking about the Go import statement, and then shifting to talking about Go modules. The import statement, at the level of the language, maps a string to a symbol. Packages are just directories. I meant to compare it to the more complicated package/module systems which are built into other languages like Python or Rust.
I said:
> It's very easy to see code on GitHub and import it.
Which I maintain is true. I usually just type the import line into my text editor and the tooling figures out the rest, or prompts me to "go get". Your example talks about going in the reverse direction, which I never claimed was easy. I don't think I've ever had to do that either. If I want to see the exact version of a library, the go tool has downloaded it to the local filesystem.
I think the author is in many ways right, but has missed how important the runtime is to the average programmer.
At the time of release Go made it easy to write evented IO servers, in a way that Rust, despite its superior language design (IMHO), still hasn't. This is why people continue to use it.
Other examples are PHP, a car crash of a language, but suddenly it was easy to dynamically generate a webpage, or early versions of Java, widely derided for language design, but programmers were happy to trade the virtues of elegance and efficiency for the ease of the standard library, the GC, and portability.
Give the programmers the ability to do easily something they weren't able to do prior to using your language, and they'll use it, whether PL enthusiasts are happy about that or not.
Most folks just need concurrent code, not parallel code. NodeJS was able to provide that.
That car crash of a language can imitate the general characteristics and typing style of any language, and supports pretty much any pattern/anti-pattern in computer languages, just by modifying its ini files to work however you want. That's why it's 80% of the web.
It is yet another example of making easy things easy.
It is not hard to write non blocking code. No harder than remembering to insert a keyword "await".
Event loops are very useful, I agree. I just do not see how a new syntax makes them "easier".
That said, I found myself vehemently agreeing with the author re: how hostile Go is to FFI/etc. IMO, a network boundary is indeed the sanest way for most other things to communicate with Go code.
Re: Go on Windows: anecdotally, things seem a lot better than even a couple years ago, but there's still room for improvement for sure. It's baffling that the semantics of e.g. os.Rename() are so different between POSIX-y platforms and Windows when they don't have to be (Windows supports POSIX-style renaming semantics if you ask nicely). I ended up having to reimplement os.Rename() on Windows in terms of SetFileInformationByHandle() with the FILE_RENAME_POSIX_SEMANTICS and FILE_RENAME_REPLACE_IF_EXISTS flags set to get the behavior I was looking for.
Re: mutable state, I think these concerns are mostly overblown. These are problems we've been dealing with in the vast majority of mainstream languages for many years now, and there are plenty of strategies for dealing with it.
For example, if I want to pass an "object" by reference in Go but do not want it to be mutated, I'll wrap it in an interface with getter methods for its fields, but no setters or anything that can cause side effects. Sure, it's a bit more involved than tagging something with the `const` keyword in e.g. C++, but it's plenty effective.
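A minimal sketch of that getter-interface pattern (the names are invented for illustration). Note it's a convention rather than a compiler guarantee, since a callee can always type-assert back to the concrete type:

```go
package main

import "fmt"

// config is a mutable struct we want to share without allowing writes.
type config struct {
	host string
	port int
}

// ReadOnlyConfig exposes only getters, so code receiving this
// interface has no way to mutate the struct through it.
type ReadOnlyConfig interface {
	Host() string
	Port() int
}

func (c *config) Host() string { return c.host }
func (c *config) Port() int    { return c.port }

// describe can read the config but cannot change it via the interface.
func describe(c ReadOnlyConfig) string {
	return fmt.Sprintf("%s:%d", c.Host(), c.Port())
}

func main() {
	c := &config{host: "localhost", port: 8080}
	fmt.Println(describe(c))
}
```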
Yes.
Do not minimise this.
For very large complex systems immutable state is very helpful.
> pass an "object" by reference in Go but do not want it to be mutated, I'll wrap it in an interface
Making the point.
Horses for courses, but this is picking the course for this nag.
At the same time, I really like go. No other AOT-compiled language I know of makes concurrency so easy: `go foo()`. It’s wonderfully simple and I love that using the language doesn’t cram my working memory. It feels like all the space is there for my problem. It isn’t the right tool for some jobs, and of course, you’re perfectly entitled to your opinion. But I think it’s a welcome tool in the toolbox.
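The `go foo()` point in a minimal, runnable form (sync.WaitGroup handles the join; the function name is made up):

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently: one keyword to spawn each
// goroutine, one WaitGroup to wait for them all.
func squares(n int) []int {
	var wg sync.WaitGroup
	out := make([]int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // each index writes only its own slot
			defer wg.Done()
			out[i] = i * i
		}(i)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(squares(4))
}
```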
I recently had to implement Raft in Go, and I don’t know of a better language for that. Feel free to inform me otherwise. The race condition checker, RPC libs, and simple concurrency let me focus on the algorithm implementation. The iteration speeds for me were super fast too since Go keeps compile times snappy.
These are all useful tradeoffs for me to keep in mind when deciding the right tool for the job. I wouldn’t consider these “lies” personally, but maybe I do have the wool pulled over my eyes :)
Nah, enforcement would be a problem. If you're, say, just writing a protocol decoder, you might "just" write a struct as defined by the API but not need to use all of its fields, or not use it inside the package (say, a package that decodes some JSON and just returns a struct with it).
I'd like a go vet option for that though, as in some cases it would be useful.
> or have provided some defaults at declaration
Outright initializers would be nicer than a New*() for any type that needs one, and having the developer remember to call it.
Maybe I'm wrong, but when I use Go it definitely feels designed: specifically, designed to discourage the worst or most-alien-to-the-language's-intended-style dep in your dep tree from being too bad or too alien. That is, the design seems to me (and again, I could be wrong and this effect was in fact an accident) most concerned with keeping the quality and style distribution of the language's ecosystem, or of a given codebase, on a nice, narrow, pointy curve rather than a flattish wide one. So much so that the authors would seemingly prefer an expert be a little annoyed with some part of the language over making the Wrong Thing too easy for less-adept developers (or for "experts").
If that was in fact unintentional, well, then my favorite quality of the language (and IMO the most interesting thing about it) was an accident, I suppose. And it's not like I've dug deep into the history of the language's development, so that may be the case for all I know.
Thing I find the most frustrating about golang is the community. It almost seems to be made of people who never used another programming language in their life. Defense of things like lack of generics, lack of a good collections library, lack of good error handling, lack of good dependency management, etc. It's so bizarre to me.
As a former Java dev, we all at least admitted the flaws of the language and came up with ways to deal with it (see Effective Java).
I think it is a bit naive to think that these sorts of decisions are made by competent technical people.
In my experience these decisions are made by managers that have "Peter Principled" past their competence.
1. The answer to an unasked question
2. When your only tool is a hammer, every problem looks like a thumb
And one I just made up:
3. I had a problem, so I invented a new programming language. Now I have 769 problems
We could look at Rob Pike & the Bell Labs guys and imagine Stephen Stills saying "Hey, 'For What It's Worth' was the worst song Buffalo Springfield ever did. Let's fix it."
Too late. It's out there. And C is out there.
Portland's weather is pretty nice! Definitely better than, say, Chicago, or St Louis, or Minneapolis, or Buffalo. I'd argue also better than Austin or Dallas, though people are free to disagree about the merits of unbearably hot weather vs rainy cold weather.
The fire season of the past few years are changing the equation for me though...
Something so apparently mundane and poorly thought out getting traction is a regression in the world of software engineering, supported by a Big Evil Corp that many folks dislike.
I've also personally seen a social meta-effect of this, where in a particular space all of the language aficionados would make a point of dumping on Go whenever Go was discussed (or even when a dig at Go could be shoe-horned into another discussion), and at a certain point there are only negative discussions of it, and the snobbery (justified or not) is a form of social bonding.
Of course, there are loads of legitimate criticism to be applied to the language design, the runtime, the rollout, the marketing, the framings of the authors, but there's a persistence, a snarl, to some of the critics that seems to me to go beyond an observation of the real issues. For reasons listed above, some people seem to take hating Go quite personally.
Go, philosophically, is a terrible language to show off in. It’s intentionally designed for every line to require as little brainpower to understand as possible. It’s not an aspirational language - unless you aspire to be able to hire lots of junior programmers and make sure they don’t cause too much trouble.
The lukewarm support go had here was because it was still new and trendy. Rust will lose its lustre in a decade or so too, and the tone will inevitably turn more negative. You can see the pattern slowly play itself out at the moment with docker.
> The lukewarm support go had here was because it was still new and trendy. Rust will lose its lustre in a decade or so too, and the tone will inevitably turn more negative....
Are Rust and Go not the same age?
I get that Rust or C++ has more power or expressiveness, but neither of those come without additional cognitive effort for us non-CS types. But if I don’t _need_ to work in Rust to get my work done, why bother ?
Go is good enough for me, Rust for others, etc. Hating a technology that you can choose not to use seems futile.
Only if I were adamant enough to quit each time it happens (I say this having joined a team that was explicitly Java/Scala-focused, and yet.)
https://news.ycombinator.com/item?id=31205072
(Edited to point to page 1.)
If you want to begin at page one, try this: https://news.ycombinator.com/item?id=31205072
Edit: thanks for the follow-up @dochtman, it was just a little confusing. Cheers!
I agree, I'm glad the author wrote up such thoughtful constructive criticism. They clearly care about software development and want everyone to improve. :)
It's ... familiar quibbles surrounded by noise. I'd love it if you could summarize the thoughtful constructive criticism you saw for the rest of us.
People seem to be outright offended that this language isn't a second coming of christ and dares to get some things wrong.
The author is making a living out of being a full time Rust online personality - https://fasterthanli.me/articles/becoming-fasterthanlime-ful... - maybe these unnecessarily controversial takes help them attract more attention and make a bit more money.
The only thing I can think of is a sort of substitute for immutability, so a function can take a variable by value rather than by pointer and guarantee that it won't modify it, but I don't see it being a win over just letting a variable be declared immutable.
The only language I can think of with sort of similar semantics is Perl, but in practice I haven't seen many %hashes flying around; it's always $hashrefs instead.
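The by-value "substitute for immutability" mentioned above, as a minimal sketch (names invented for illustration). Note the guarantee only covers the struct's direct fields; pointer, slice, or map fields would still share state with the caller:

```go
package main

import "fmt"

type point struct{ x, y int }

// moveCopy takes its argument by value: the callee works on a copy,
// so the caller's struct is guaranteed untouched.
func moveCopy(p point) point {
	p.x += 10
	return p
}

func main() {
	p := point{1, 2}
	q := moveCopy(p)
	fmt.Println(p.x, q.x) // the original is unchanged
}
```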
TFA is about dismissive comments about the article and people burying their head in the sand rather than engaging with the criticisms.
Yet another whiner complaining about well-known problems that advanced users are aware of adds nothing, and nothing in this article is news to any advanced user.
Especially if the asshole behind article makes the title extra confrontational and aggressive just to piss off people.
Of course, this only makes sense if you already accept the premise that Go is bad.
https://hn.algolia.com/?q=lean
I wish there was a convention to call it "Leanlang" or "leanprover" or something.
About the only semi-known ones that aren't terrible to search for are Haskell, and I guess Perl. And "letter with funny symbols after it" languages like C++/C#/F#.
I think the article has a conclusion it's trying to push to __stop using Go__. Maybe I misread it, but that's what it sounds like. Honestly though, I am fine with Go. Every language has its ups and downs.
What I like about Go is specifically what the article pointed out, the simplicity means I need to only remember a few concepts. In contrast, with Rust, or C, or C++, or Java, or C# I have to know _so many small details_ to not shoot myself in the foot.
Java arguably has even fewer footguns than Go and is a simpler language.
Lack of generics made a lot of code less type-safe than it should be, and just added more complexity and/or duplication.
Lack of sum types made some types ugly (like the mentioned IP type) and made other forms of control flow harder. A Rust-like Result<T> would be a massive improvement to the verbosity of error handling in Go, for example, without really any drawbacks.
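For contrast, here's what the explicit style costs today (a sketch with made-up names). The Rust comparison in the comment is illustrative only, not Go:

```go
package main

import (
	"fmt"
	"strconv"
)

// Today's Go: every fallible step is an explicit three-line
// if err != nil block.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

// A Rust-style Result would compress each such check into a single
// expression, e.g. `let n = s.parse::<i64>()?;`, with the same
// early-return semantics but far less ceremony.

func main() {
	fmt.Println(parseAndDouble("21"))
}
```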
I'm going to focus on toolchain related issue because I feel better able to speak on these topics. The language issues the author brings up are similar though.
RPCs instead of FFI. There is some truth here. I personally find wrapping foreign code in a C interface and using CGO to be pretty easy, but RPCs are a good choice as well. The author then jumps to this necessarily including TCP overhead, which is wrong. If an FFI is the alternative, then running on a single machine is acceptable and a Unix socket is an option (I'm sure there is something similar for Windows).
I have often combined these strategies where I wrap some C++ code in a C library and integrate it into a CGO gRPC service which serves on a Unix socket. I do this because I find setting up an RPC service in Go to be much easier than C++ and I want to isolate the C/CGO.
Using an RPC interface also limits the unsafe code to a process which can be optionally further sandboxed (e.g. with SECCOMP). If you are concerned about memory safety, this might be a good idea even if you are using two languages with better FFIs. Even if both languages are memory safe, the FFI almost never is. In addition to the security benefits, this also makes debugging memory issues easier as it limits their scope.
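A minimal sketch of the Unix-socket idea using only the standard library's net/rpc (a real setup would more likely use gRPC as described above; the service name and socket path here are made up):

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
	"os"
	"path/filepath"
)

// Mult stands in for functionality we might otherwise reach via FFI;
// instead it is exposed over RPC on a local Unix socket, so there is
// no TCP stack involved.
type Mult struct{}

func (Mult) Do(args [2]int, reply *int) error {
	*reply = args[0] * args[1]
	return nil
}

// roundTrip starts a server on a Unix socket, dials it as a client,
// and performs one call.
func roundTrip(a, b int) (int, error) {
	sock := filepath.Join(os.TempDir(), "mult.sock")
	os.Remove(sock) // clean up any stale socket file

	srv := rpc.NewServer()
	if err := srv.Register(Mult{}); err != nil {
		return 0, err
	}
	ln, err := net.Listen("unix", sock)
	if err != nil {
		return 0, err
	}
	defer ln.Close()
	go func() { // serve a single connection in the background
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		srv.ServeConn(conn)
	}()

	client, err := rpc.Dial("unix", sock)
	if err != nil {
		return 0, err
	}
	defer client.Close()

	var product int
	err = client.Call("Mult.Do", [2]int{a, b}, &product)
	return product, err
}

func main() {
	fmt.Println(roundTrip(6, 7))
}
```

The same server could later move to another machine by swapping "unix" for "tcp", which is part of the appeal of the RPC boundary.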
Another benefit to FFIs being bad is that it has pushed Go to be one of the only languages where most code has no C dependencies. Nearly the entire standard library is pure Go and the parts that aren't are not commonly used or provide optional functionality which is not commonly needed. Most third party libraries I find on GitHub are pure Go as well and only have pure Go dependencies. The majority of Go binaries I have encountered either are or can be built with CGO disabled.
This has two major benefits.
First, it makes Go code much safer. C dependencies are a major source of memory unsafety bugs. The most common C dependency (in Linux land anyway) is glibc. glibc is well known for its terrible code quality and history of security vulnerabilities. Go is one of the only languages which makes it easy to not depend on it. Since the author brings up Rust, I would personally feel better about untrusted data being processed by a pure Go program than a Rust program with C dependencies.
Second, even the best FFIs make code harder to read. If nothing else, now you need to be familiar with both languages. Go libraries usually being pure Go makes it easy to jump into random library code and figure out what is going on.
The author also mentions build systems with a vague comment about cases not considered important by the Go authors. In my opinion, the true build system for Go is Bazel. Bazel has great Go support because Go was built with Bazel in mind. The fact that Go and Bazel handle imports in almost the same way and that Bazel BUILD files can be generated from Go code is not an accident. Bazel is very powerful and is usually a good fit when you start running into the limitations of the built-in Go build system.
Reminds me of a diatribe by an oil painter on why people shouldn't use acrylic paints.
Probably mostly a cathartic thing after the author took a job that needed it before trying it out.