Here's my experience with certain language features: they enable some programmers to do great things, while also enabling a few programmers blinded by hubris to do maddening things. Over the life of a large project, the unfortunate coincidence of different pieces of hubris-driven code sometimes causes an outsized amount of frustration.
Analogy: most people, most of the time, have the good sense to operate cars, drones, and high-powered laser pointers without becoming a dangerous nuisance. However, a minority of users of such devices can cause far more than their share of public nuisance. Therefore, there are rules and restrictions on how such things are used by many people at scale.
So yes, as an individual you are probably just fine. But you, aggregated with a whole bunch of other programmers, are likely to be a different story.
This is a narrative without specifics, and unfortunately it's always where the conversation seems to end with gophers. We accept features that can be abused because they offer utility that outweighs their potential for misuse. So how exactly are generics a worse offender than these other features, or a worse tradeoff for their utility? Because from my perspective, being able to define parametric data types and functions is a huge win for safety and terseness of code, without a lot of downside.
Exactly. Everything you add has a cost/benefit for a particular context. Evidently you disagree with how the Golang team has calculated cost/benefit with regards to generics.
> Because from my perspective, being able to define parametric data types and functions is a huge win for safety and terseness of code without a lot of downside.
Is terseness a good thing? Some people say terseness is bad. Is safety the only issue, or always the top priority? All production code exists in a specific context, and it's best to tailor to yours. This may well mean you encounter a context where you do not want to use Go.
http://anomaly.org/wade/blog/2013/07/coding_style_terse_vs_v...
Rewriting code is expensive, as my office knows well. We maintain lots of old embedded systems and have to periodically rewrite or rehost them because the old hardware platforms aren't available or aren't performant enough for new features. These become multi-year, multi-million-dollar projects, for relatively little gain.
By ensuring that developers and architects conform to certain conventions, it means (in theory) that this code maintenance is much cheaper, and that rewrites can be avoided or minimized. This is a good thing and lets organizations be more flexible and productive, as their time and money are no longer wasted on the old things but can be spent on the new things.
What you say doesn't make sense given how Go reflection is implemented. If it were really about limiting choice, Go would have no reflection. Go reflection is basically a way to opt out of its (poor) type system. You should never have to do that in a statically typed language, yet Go reflection is used a lot in the standard library itself.
Furthermore, let's be honest: what do you think is more complicated, generics or concurrency? Generics aren't complicated at all.
> We maintain lots of old embedded systems and have to periodically rewrite or rehost it because the old hardware platforms aren't available or aren't performant enough for new features.
But Go isn't for embedded system programming. You can't run Go on bare metal without an OS.
Enforcing conventions is of course a good thing! The problem is how Go enforces conventions:
(0) When Go enforces a convention mechanically, it's a triviality that can be adequately handled by external tools (e.g., naming, formatting, unused variables, etc.).
(1) When a convention is actually useful (e.g., the correct way of using an interface), Go's type system is too dumb to understand it, let alone enforce it.
> aren't performant enough for new features
Second-class parametric polymorphism (“generics”) is purely a compile-time feature. It can be completely eliminated (that is, turned into the non-generic code you would've written otherwise) using a program transformation called “monomorphization”, before any target machine code is generated. So there's no runtime price to be paid.
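As a sketch of what monomorphization means, using the type-parameter syntax Go eventually shipped (note this is the conceptual model: Go's actual compiler mixes specialization with dictionary-based "GC shape" sharing, so it doesn't fully monomorphize):

```go
package main

import "fmt"

// One generic definition, type-checked once at compile time.
func Max[T int | float64](a, b T) T {
	if a > b {
		return a
	}
	return b
}

// Monomorphization conceptually rewrites each instantiation into the
// plain function you would have written by hand, so there is no
// runtime dispatch or boxing:
//
//	func Max_int(a, b int) int             { if a > b { return a }; return b }
//	func Max_float64(a, b float64) float64 { if a > b { return a }; return b }

func main() {
	fmt.Println(Max(3, 5))     // instantiated at int
	fmt.Println(Max(2.5, 1.5)) // instantiated at float64
}
```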
"The key point here is our programmers are Googlers [...] They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."
I'll concede there's the possibility of some weird tongue-in-cheekness here, but it definitely seems to be the canonical view among gophers that Go's paucity of features is about accessibility for programmers who don't understand them or find them cumbersome to work with.
I think this idea that "paucity = good" is so easily abusable that whenever it comes up from gophers I wish they would concede that it's an unhelpful simplification of what they must actually believe. Assembly language has possibly the greatest paucity of concepts, given that it offers no ability to introduce language-level abstractions (other than, say, calling conventions), but Go is nothing like this.
The argument can't be that paucity is good as a general condition; it's that there are forms of abstraction and programming language features that Gophers find unhelpful or difficult to understand. The problem I have with this when applied to parametric polymorphism is that Gophers already work with these concepts daily, so it can't be that using them is complicated.
I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use them; things that are "wrappers" or "collections" probably account for 80% of their use. I also don't think I've ever experienced ambiguity of choice with the feature. For instance, I don't think I've ever been in a situation where I had to trade off implementing one generic definition against N specialized definitions. The frustration of using Go is that I now have to consider the latter as a possibility, or trade away type safety by using unsafe casting.
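For what it's worth, the casting tradeoff looks like this in pre-generics Go (a minimal sketch; the `Stack` type here is hypothetical): either duplicate the container per element type, or erase the type with `interface{}` and reintroduce it at every call site with assertions.

```go
package main

import "fmt"

// Stack is a hypothetical reusable container. Without parametric types
// it must store interface{}, erasing the element type.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	n := len(s.items) - 1
	v := s.items[n]
	s.items = s.items[:n]
	return v
}

func main() {
	s := &Stack{}
	s.Push(42)
	s.Push("oops") // compiles fine: nothing prevents mixed element types

	_, ok := s.Pop().(int) // callers must assert; mistakes surface at runtime
	fmt.Println(ok)        // false: the top element was a string, not an int
}
```

With a parametric `Stack[int]`, both the mixed `Push` and the assertion would disappear, and the mistake would be a compile error instead.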
If there's a place where parametricity truly introduces complexity, I'd love to hear about it from a Gopher instead of a blanket statement about how "programmers don't understand it", "it decreases readability", or "Go is simpler without it".
Please keep in mind that there are differences at scale. What is "easy to work with" for 1 programmer over a month might not be so for 20 programmers over years.
> The argument can't be that paucity is good as a general condition; it's that there are forms of abstraction and programming language features that Gophers find unhelpful or difficult to understand
The argument is that simpler is better at scale. Airplanes can move freely in 3 dimensions, but airliners are constrained to fly in particular ways around busy airports and cross country.
> I also have a hard time believing that the ability to define parametric types and functions costs you anything. It's almost always self-evident when to use parametric types or functions, things that are "wrappers" or "collections" probably account for 80% of their use.
I could see an argument for parametric collections and parametric sorting in Go. Not, however, for wrappers.
> The frustration of using Go is actually that I now have to consider the latter as a possibility or trade off type safety by using unsafe casting.
In your experience, what kind of "cost" has there been in unsafe casting to use collections? Even in environments like Smalltalk, where all use of collections amounts to "unsafe casting," I've rarely seen situations where a mistake of this type wasn't found trivially. Does your frustration come from having to abandon the "assured safety" the type system would give you, or does it come from an experience of the costs?
Guess they don't think so highly of their hires anymore.