I'm not even sure that theme is wrong, but to many of us who lived through early Java, Go's least-common-denominator approach to developer sourcing is eerily familiar.
Despite the taint Java left on it, it's actually a good idea! It's a common use case. After years of programming Perl in a fairly large team, I can see the virtue of a simpler language that keeps a lot of programmers in sync in a shared code base, instead of living in a world where I can look at a piece of source and tell almost instantly when it was written and who wrote it, purely from the differences in dialect. But Java failed so badly at this that it soured everyone on the entire idea, and nobody has tried since.
I think Go actually has a better chance at this, not for any one large reason but for a series of smaller ones. Defaulting to object composition, once you understand it, is a powerful-yet-simpler approach to putting objects together than inheritance, and it makes it much easier both to use other people's objects and to create objects for others to use, because you don't need to worry about what will be F'ed up in the "child classes" - there is no such thing. Implicitly satisfied interfaces don't sound like a big change from Java, but in practice it's night-and-day to be able to declare interfaces that are automatically satisfied by objects you don't control. Contrast how many ways there are to write the same function in Go with how many there are in, say, Scala. And building decent multithreading primitives into the core is probably a good idea for this use case, too.
It won't happen tomorrow or next year, but, arguably, rather than Rust or even Python, the language that Go is the biggest threat to in the long term is Java. And it's a threat for precisely the reasons that people on HN tend to complain about. Indeed, I personally have the same objections that most HN'ers do for personal usage, but right now bar none Go is the language I'd most like to work in, precisely because it was written for the Google use case which also happens to be the one I have.
Wind back 35 years or so and you could have said the exact same thing about COBOL.
An expressive language is good to work in: reading one line of code instead of a hundred is better for readability, and verbosity itself breeds accidental bugs - one line of code that does the work of a hundred is far less likely to contain them. Plus, the closer the code models the business logic, the more readable it is (assuming we aren't talking about a badly designed, non-composable DSL).
However, Perl is a heavy language that nonetheless lacks built-in support for abstractions people actually use in Perl, like OOP. So not only are we talking about a heavy language, but one in which you get a dozen libraries for doing OOP, all of them slightly incompatible with each other. Moose eventually came along, but it was late (the damage was already done), and I'd argue Moose is overly complex too. This goes against the principles outlined by Guy Steele in his "Growing a Language" talk: either a language is very simple, with a core set of orthogonal and powerful features that lets you build whatever you want on top (i.e. Scheme), or it's a more complex language that provides everything you need out of the box. Perl is neither. So one can argue that Perl is badly designed.
So you could say Perl tainted the idea of using expressive languages, just like Java tainted the idea of using simple languages. Actually, "simple" is the wrong word here: there's nothing simple about Java's flavor of OOP; people are merely familiar with it, and familiarity is not the same thing as simplicity. But I digress.
And it's a pity. Right now my favorite language is Scala. In my opinion Scala's expressivity is very different from Perl's: Scala's constructs are necessary, nearly orthogonal, usually properly used, and its abstractions are well defined. I also programmed in Perl, and after two years of doing it I still had trouble reading other people's code, whereas that never happens to me in Scala - and even if it did, I have an IDE to help me out. In my experience training rookies in Scala, it's not the syntax that's problematic, but the design patterns and abstractions, many of them imported from Haskell and hence foreign to most developers. Yet in spite of this, I've seen plenty of superficial opinions floating around the web that a language like Scala is too complicated - and when you ask those people why, it usually turns out to be misconceptions and unjustified fear born of a shallow understanding of the language.
Going back to Go, the problem with Go is that it lacks the means of abstraction I'm looking for. Go lacks generics, for example, which means building reusable abstractions on top of higher-order functions is not feasible. I'm sure Go will get generics at some point, since that's inevitable, but Go is not the type of language that will ever implement higher-kinded types and type-classes. I personally need type-classes: they are a very different means of ad-hoc polymorphism from OOP, with different use-cases.
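To make concrete what I mean by type-classes - a sketch in Scala 2 syntax, with an illustrative `Show` type-class (the names are mine, not from any particular library):

```scala
// A type-class: ad-hoc polymorphism without modifying the types involved.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Instances can be defined after the fact, even for types we don't control:
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = s"Int($a)" }

  implicit def listShow[A](implicit s: Show[A]): Show[List[A]] =
    new Show[List[A]] {
      def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]")
    }
}

// A generic function constrained by the type-class, not by a superclass:
def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

describe(42)            // "Int(42)"
describe(List(1, 2, 3)) // "[Int(1), Int(2), Int(3)]"
```

Note how `describe` works on `Int` and `List[Int]` without either type knowing anything about `Show` - that's the "ad-hoc" part that subtype polymorphism can't give you.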
For example, I want to work with monads or applicative functors, which, in spite of their scary reputation, are just design patterns that aren't hard to understand - their reputation is a direct consequence of being explained mostly by Haskell developers with a mathematical mindset. And I want to build generic functions that work over any monad or applicative functor, because once you discover their power it's very hard to go back, and life is too short to reinvent the wheel every single time. Go is not the kind of language that will ever serve those needs, whereas languages like Scala, Clojure and arguably Rust are.
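As a sketch of what that looks like - a minimal `Monad` type-class in Scala 2, with a `map2` written once and reused across `Option` and `List` (again, illustrative names, not any specific library's API):

```scala
import scala.language.higherKinds

// A minimal Monad type-class; M is higher-kinded (itself parameterized).
trait Monad[M[_]] {
  def pure[A](a: A): M[A]
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
}

object Monad {
  implicit val optionMonad: Monad[Option] = new Monad[Option] {
    def pure[A](a: A) = Some(a)
    def flatMap[A, B](ma: Option[A])(f: A => Option[B]) = ma.flatMap(f)
  }
  implicit val listMonad: Monad[List] = new Monad[List] {
    def pure[A](a: A) = List(a)
    def flatMap[A, B](ma: List[A])(f: A => List[B]) = ma.flatMap(f)
  }
}

// Written once, works for every monad:
def map2[M[_], A, B, C](ma: M[A], mb: M[B])(f: (A, B) => C)
                       (implicit m: Monad[M]): M[C] =
  m.flatMap(ma)(a => m.flatMap(mb)(b => m.pure(f(a, b))))

map2(Option(1), Option(2))(_ + _)     // Some(3)
map2(List(1, 2), List(10, 20))(_ + _) // List(11, 21, 12, 22)
```

This is exactly what Go's type system can't express: `map2` abstracts over the container `M` itself, which requires higher-kinded types.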
But here's the real problem that I'm seeing: scaling a software development team is done in one of two ways - you either hire more people, or you hire really good people who can produce better abstractions. This is horizontal scalability versus vertical scalability. And the problem is that in a software development company these two approaches are incompatible. The companies that want to scale horizontally are exactly the companies that prefer familiar languages, whereas the companies for which that isn't feasible (i.e. startups) are the companies that prefer powerful languages.
And of course, you might think horizontal scaling is better, depending on the problems being solved, but therein lies another problem - the difference between juniors and really good people who can juggle abstractions is not necessarily one of productivity, but of the range of problems they can solve. Given enough difficulty, you can hire as many juniors as you want and they still might not be able to solve certain problems, whereas those same problems might be feasible for two or three people who are really good. Amdahl's law is also very relevant to software development: the more people you have, the more concurrency and points of synchronization you introduce, which in turn limits the parallelization possible and thus productivity.
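Amdahl's law makes the point concrete; a quick sketch (treating the serial fraction as the coordination overhead you can't parallelize away):

```scala
// Amdahl's law: with a serial (non-parallelizable) fraction s,
// the maximum speedup from n workers is 1 / (s + (1 - s) / n).
def speedup(s: Double, n: Int): Double = 1.0 / (s + (1.0 - s) / n)

speedup(0.1, 10)  // ≈ 5.26: if 10% of the work is coordination, 10 people give barely 5x
speedup(0.1, 100) // ≈ 9.17: even 100 people can never pass 10x
```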
What I'm talking about is the software crisis problem, which is still very relevant. And my personal bet is on vertical scalability, which implies working with better tools and abstractions. Because that's how we scaled math and that's how we went beyond the pyramids.