"Compiler devs". I think that is exactly my point. I should have said "app developers" or "system designers". If you listen to a Rich Hickey talk, you'll see that he is all about Getting Things Done in the real world, and much less about theoretical concepts. Language design is subservient to app developer needs.
Rich's writing, presentations, and example of overall conceptual discipline and maturity have helped me focus on the essentials in ways that I could not overstate. I'm glad (but not surprised) to see so much appreciation for him around here, even among non-Clojurists (like myself).
At the risk of fanboyism, I am constantly referencing his ideas* to my team, and I give them my blessing to watch any of his talks as soon as they come out.
* That is, the old but sometimes obscure ideas whose importance he's brought to his audience.
In static FP languages, the exotic features being added are almost always demanded by users in industry. As a fun fact about Haskell, developers in academia are the ones more likely to stick with the Haskell 98 language for stability and compatibility.
As an example, in a recent chat with Martin Odersky at Scala eXchange, Simon Peyton Jones described being approached by developers at GitHub, who use Haskell for language analysis, saying that they couldn't wait for GHC 8.6 because they wanted "quantified constraints", a new and very exotic GHC extension.
The "real world" you're talking about wants expressive static type systems where they matter. And the problem with static typing is that the language acquires more features and thus becomes more prone to suffering from backwards incompatible changes. And note that more features doesn't mean more complexity, rather it can mean more tools to cope with complexity, otherwise we'd all be programming in RISC assembly.
When it comes to Clojure, sure it's stable and that's a good thing, however it has no static type system (therefore it's apples versus oranges) and by encouraging people to work with raw data (e.g. everything is a list, a vector or a map) in a sense it's actually a step backwards from classic OOP.
Rich Hickey was fond of saying that OOP complects data with the operations done on that data. However the need for OOP's encapsulation and polymorphism came from the experience people had with procedural languages like C, and that experience has been forgotten, but the fact remains that some of the biggest software projects around are built with static OOP languages.
Correlation doesn't imply causation of course, it might be that such projects are built in static OOP languages because these languages are popular (chicken and egg issue), but at the very least we have empirical evidence that they work, whereas the empirical evidence for what makes LISP great pretty much doesn't exist.
Basically, as an application grows, so do its data structures; that's inevitable, and to cope with it (1) you need good encapsulation and (2) you need to be able to refactor cheaply. And I think standard Clojure, like other dynamic or LISP languages before it, fails hard at both.
My point being that ... it's really not impossible to keep compatibility in a language that doesn't care much about correctness.
And I also believe that the "real world programmer" meme is a symptom of anti-intellectualism.
Static typing isn't a bad idea, but it wasn't adopted because it had interesting logical implications. Languages like C used static typing because it is critical to C's entire design that you know how large an object is in memory, and that is where I suspect most of its popularity comes from: to write high-performance code, you need fine-grained control over RAM. For the benefits of a static type system, Clojure decouples that implementation detail of size from the logical properties you actually care about by using the spec library (present since 1.9.0, and likely a pretty strict improvement on static typing for anything except raw performance). I doubt it is the first language to do that, but it is the first I've used.
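To make that decoupling concrete, here is a toy sketch of the idea in Python — not Clojure's actual spec API, just the shape of it: correctness is expressed as ordinary runtime predicates over data, entirely independent of how values are sized or laid out in memory. The `valid` function and the `user_spec` example are invented for illustration.

```python
# Toy analogue of spec-style validation: a "spec" is just a predicate,
# or a dict mapping keys to specs. Checks run against plain data at
# runtime, with no coupling to memory layout or a nominal type system.

def valid(spec, value):
    """Return True if value conforms to spec."""
    if callable(spec):
        return spec(value)
    if isinstance(spec, dict):
        return (isinstance(value, dict)
                and all(k in value and valid(s, value[k])
                        for k, s in spec.items()))
    return False

user_spec = {
    "name": lambda v: isinstance(v, str) and len(v) > 0,
    "age": lambda v: isinstance(v, int) and v >= 0,
}

print(valid(user_spec, {"name": "Ada", "age": 36}))  # True
print(valid(user_spec, {"name": "", "age": -1}))     # False
```

The real spec library goes much further (conforming, generative testing, documentation), but the key property is visible even here: the checks travel with the data's meaning, not with its representation.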
As for encapsulation and refactoring, it would be interesting to hear where you think it falls down. I haven't used the language for anything large-scale, but for personal projects Clojure's approach to encapsulation with protocols is far more likely to capture what-I-actually-wanted than an inheritance system like the classic model in C++. And Clojure makes refactoring at least as easy as any other language.
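A rough analogue of what protocols buy you can be sketched in Python with `functools.singledispatch`: an operation defined once, then extended from the outside to types you don't own, with no inheritance hierarchy involved. The `area` function and the shape encodings here are made up for illustration; Clojure protocols are richer, but the open-extension flavor is similar.

```python
# Protocol-style polymorphism without inheritance: the operation is
# extended per-type from the outside, rather than being a method that
# every participating class must inherit.
from functools import singledispatch

@singledispatch
def area(shape):
    raise NotImplementedError(f"area not defined for {type(shape).__name__}")

@area.register
def _(shape: tuple):
    # Treat a (width, height) tuple as a rectangle.
    w, h = shape
    return w * h

@area.register
def _(shape: float):
    # Treat a bare float as a circle radius.
    return 3.141592653589793 * shape * shape

print(area((3, 4)))  # 12
```

The point is that `tuple` and `float` gained the `area` operation without being modified or subclassed — roughly how a Clojure protocol can be extended to existing types after the fact.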
Basically, Clojure cares a huge amount about correctness, that has been a focus for 1.9.0 and now 1.10.0. They just think that static types are a bad way of solving for correctness, because static typing puts a bunch more constraints on an object than are required.
I disagree. I would say it is actually easier to write performant code in Lisps than in other languages. In languages with traditional (i.e. non-homoiconic) syntax there is usually a tradeoff between performance and readability: as soon as you try to maximize performance, the program starts becoming unreadable.
In Lisps, on the other hand, because you have full control over every aspect of the translation between notation and generated code, it is much easier to write programs that don't sacrifice performance just because you want nice notation or a DSL.
As an example, take printf. This function makes it easier to format strings and is available in many programming languages.
printf("%d", 7)
If no special compiler optimization is applied (i.e. the compiler hardcoding an optimization just for printf), this results in code that has to parse "%d" on every call just to figure out that it should take the next argument and output it as an integer.
On the other hand, the equivalent Common Lisp format call looks very similar:
(format t "~d" 7)
but you, as the author of format (via a compiler macro, for instance), have the option to notice that the format string is a constant. So you can, at compile time, replace the call to format with an equivalent call to a simpler function that takes an integer argument and prints it immediately. This way the format string doesn't have to be parsed on every call, and you don't pay for an extra stack frame, since the expansion takes the place of the original invocation.
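The parse-once strategy itself is easy to demonstrate in Python — with the caveat that here the "compilation" happens once at runtime rather than at macro-expansion time, which a Lisp compiler macro would do for free. `compile_format` and its `~d`-only directive set are invented for this sketch.

```python
# Parse the control string once, up front, so the per-call work is just
# concatenating precomputed pieces -- no directive parsing on each call.

def compile_format(fmt):
    """'Compile' a format string supporting only ~d into a fast emitter."""
    pieces = fmt.split("~d")  # literal text surrounding each ~d directive

    def emit(*args):
        assert len(args) == len(pieces) - 1, "argument count mismatch"
        out = [pieces[0]]
        for arg, literal in zip(args, pieces[1:]):
            out.append(str(arg))   # the specialized work: no parsing here
            out.append(literal)
        return "".join(out)

    return emit

emit = compile_format("value: ~d!")  # parsing happens exactly once
print(emit(7))   # value: 7!
print(emit(42))  # value: 42!
```

A compiler macro moves the `compile_format` step to compile time and can go further, inlining the emitter so there is no closure call at all.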
For another example take a look at Peter Seibel's Practical Common Lisp chapter on parsing binary files (http://www.gigamonkeys.com/book/practical-parsing-binary-fil...)
This chapter shows a system of macros that takes a very high-level description of binary data structures and generates efficient code to parse the data, access the parsed fields, and serialize them back to the binary format.
In a typical language, the requirement to have a flexible description of the messages would likely result in compromised performance. This is why so many "high performance" solutions involve some variant of code generation, at the cost of additional complexity.
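For a feel of what such a system buys you, here is a small runtime analogue in Python built on the standard struct module: one declarative description of a record yields both the reader and the writer. `make_codec` and the example header fields are hypothetical; the book's macros do the equivalent at compile time, emitting specialized code instead of interpreting a description.

```python
# Derive a binary parser and serializer from a single declarative
# description: a list of (field-name, struct-format) pairs.
import struct

def make_codec(fields):
    fmt = "<" + "".join(f for _, f in fields)  # little-endian, packed
    names = [n for n, _ in fields]
    size = struct.calcsize(fmt)

    def parse(data):
        return dict(zip(names, struct.unpack(fmt, data[:size])))

    def serialize(record):
        return struct.pack(fmt, *(record[n] for n in names))

    return parse, serialize

# Hypothetical message header: a 2-byte version and a 4-byte length.
parse, serialize = make_codec([("version", "H"), ("length", "I")])
blob = serialize({"version": 3, "length": 1024})
print(parse(blob))  # {'version': 3, 'length': 1024}
```

Even this toy keeps one source of truth for the wire format; the macro version additionally removes the interpretation overhead, since the generated parser is ordinary compiled code.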