I agree that Common Lisp is a very powerful language, but I can't live with all that power uncontrollably thrown at me. Common Lisp grossly lacks self-discipline and self-limitation where they're needed.
Haskell isn't necessarily that language, partly because it still requires centralized coordination when developing these "extensions" to ensure they're interoperable - that is, there is only one parser for the language and its supported extensions, and many of them are built into the compiler rather than added as libraries. The exceptions are extensions done through quasi-quotation, such as MetaHaskell or some EDSLs, but even those have their own problems: you'll have issues parsing if your quoted language happens to contain delimiters which conflict with Haskell's quasi-quoting delimiters `[| |]` - producing syntax which cannot be parsed unambiguously (perhaps very rare or unlikely though).
Perhaps the biggest hurdle to having a modular language is that we do not understand how to unambiguously parse the combination of two or more syntaxes. We only know that the composition of two CFGs is another CFG, with no guarantee of unambiguity; and other formalisms such as PEGs rely on ordered choice, where the computer can't decide which alternative you really want.
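To make the ambiguity concrete, here's a toy sketch (all names invented for illustration) using a nondeterministic parser that returns every way to read its input. Composing two grammars that both claim the token `if` - one as a keyword, one as an identifier - yields two complete parses of the same string:

```haskell
import Data.Char (isAlpha)

-- A parse returns every (result, remaining input) pair.
type Parser a = String -> [(a, String)]

sym :: String -> Parser String
sym s inp = [ (s, drop (length s) inp) | take (length s) inp == s ]

-- Composing two grammars = taking the union of their parses.
orElse :: Parser a -> Parser a -> Parser a
orElse p q inp = p inp ++ q inp

-- Grammar A treats "if" as a keyword; grammar B treats it as an identifier.
keyword, identifier :: Parser String
keyword = sym "if"
identifier inp = case span isAlpha inp of
  ("", _)      -> []
  (name, rest) -> [(name, rest)]

combined :: Parser String
combined = keyword `orElse` identifier

-- "if" now has two complete parses: the composed grammar is ambiguous.
ambiguousParses :: Int
ambiguousParses = length [ v | (v, "") <- combined "if" ]
```

The composition itself is trivial; deciding which of the two parses the user *meant* is the unsolved part, and ordered choice merely hides the question rather than answering it.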
What makes Lisps great for composing languages (or "EDSLs" in marketing speak) is that they bypass the parsing problem by asking you to write your language directly in terms of the syntax tree which a parser would generate - and perhaps use macros or other functions to simplify the use of that tree. Instead of a language being vocabulary + syntax, we create new vocabulary for what would be done through syntax in other languages - and we can thus refer to it unambiguously. Something similar can be done in Haskell too, through regular functions and quotation.
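A minimal sketch of that "write the tree directly" idea in Haskell - the names here (`Expr`, `eval`, `sumOf`) are made up for illustration:

```haskell
-- The tree a parser *would* have produced, written directly.
data Expr
  = Lit Int          -- what a literal like "3" would parse to
  | Add Expr Expr    -- what "a + b" would parse to
  deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- Plain functions play the role of Lisp macros: new "vocabulary"
-- that builds the tree for you, instead of new surface syntax.
sumOf :: [Int] -> Expr
sumOf = foldr (Add . Lit) (Lit 0)

example :: Expr
example = sumOf [1, 2, 3]   -- the tree Add (Lit 1) (Add (Lit 2) (Add (Lit 3) (Lit 0)))
```

There's nothing here for a parser to get wrong: `sumOf` is unambiguous by construction, because it names the production rather than relying on syntax to select it.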
The parsing problem is only really a problem because we're stuck with this silly model of "sequential text files" to describe code, and we're required to limit our languages so that a parser can take one of these text files and make sense of it. When we break out of this model and use intelligent editors, we can reach the point where syntaxes can be composed arbitrarily, because we can indicate where each new syntax begins and ends. Diekmann and Tratt have demonstrated how this can be done while still feeling much like traditional text editing, with what they call Language Boxes.[1][2]
Language Boxes only provide the means to compose syntaxes; handling the semantic composition is left to the authors of the languages being composed. Haskell is perhaps a good choice of language for providing the kind of glue needed here, where we can decide where languages can be composed based on the types returned by their parsers.
[1]: https://www.youtube.com/watch?v=LMzrTb22Ot8
[2]: http://lukasdiekmann.com/pubs/diekmann_tratt__parsing_compos...
Even in an environment like the JVM which specifies a lot of stuff for you, it's awkward to call into Clojure from Java because of the semantic differences.
We already write tools for such language interoperability for specific pairs of languages, which is often really awkward because it requires us to re-implement the parsers, and it only deals with whole code files rather than specific productions in the syntax.
It's pointless to compose languages unless the composition makes sense semantically, which needs to be decided per language (or per production rule). That's what I was hinting at with using Haskell as the glue for such interoperability: if we encode the semantics into the type system, such that one syntax expects a language box of type T in its grammar, then one should be able to use any other language whose parser returns a T, and the semantics will be well-defined for it.
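As a sketch of that idea (everything here - `LangBox`, the two guest languages, `embed` - is hypothetical), a "language box" is just a parser tagged with the type its productions produce, and the host accepts any guest whose parser returns that type:

```haskell
import Text.Read (readMaybe)

-- A language box producing values of type t.
newtype LangBox t = LangBox { runBox :: String -> Maybe t }

data Expr = Lit Int | Add Expr Expr deriving Show

-- Two made-up guest languages that both produce Expr:
infixLang :: LangBox Expr    -- parses e.g. "1+2"
infixLang = LangBox $ \s -> case break (== '+') s of
  (a, '+':b) -> Add <$> (Lit <$> readMaybe a) <*> (Lit <$> readMaybe b)
  _          -> Lit <$> readMaybe s

sexprLang :: LangBox Expr    -- parses e.g. "(+ 1 2)"
sexprLang = LangBox $ \s -> case words (filter (`notElem` "()") s) of
  ["+", a, b] -> Add <$> (Lit <$> readMaybe a) <*> (Lit <$> readMaybe b)
  [a]         -> Lit <$> readMaybe a
  _           -> Nothing

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- The host only cares about the type: any LangBox Expr fits this hole.
embed :: LangBox Expr -> String -> Maybe Int
embed box src = eval <$> runBox box src
```

Here `embed infixLang "1+2"` and `embed sexprLang "(+ 1 2)"` both yield `Just 3`; a box of the wrong type simply wouldn't type-check in the hole, which is the "well-defined semantics" guarantee doing its job.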
It could also provide the glue for converting between nullable types and option types, for example, by requiring that a language returning a "Nullable T" be wrapped in some function "ToOption" which converts "Nullable T" into "Option T". Attempting to use the Nullable where an Option is expected would fail to parse. How ToOption is implemented is left to the author of the code.
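In Haskell terms the sketch is small - assuming `Maybe` as the option type, with `Nullable` as a made-up stand-in for a guest language's null-permitting type:

```haskell
-- Hypothetical guest-language type: a value that may be null.
data Nullable a = Null | NonNull a deriving (Show, Eq)

-- The "ToOption" glue, written once by the code's author.
toOption :: Nullable a -> Maybe a
toOption Null        = Nothing
toOption (NonNull x) = Just x
```

A host grammar expecting `Maybe T` would then reject a bare `Nullable T` at "parse" (i.e. type-check) time, until the author wraps the guest expression in `toOption`.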
It's much easier to have interoperability between individual production rules of different languages (which share many parts in common) than between the "whole text files" we currently have, which basically require the languages to be almost equivalent to convert between them.
Also, as a result of storing the semantic information rather than sequential text, it would be possible for the user to choose his preferred syntax for any semantic element in the tree, since he's just working on a pretty-printed version. Most of the concerns about "code style" disappear because style is detached from the actual meaning that is stored.
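That's just pretty-printing one stored tree in whichever style the reader prefers - a sketch, with the `Expr` type and both printers invented for illustration:

```haskell
-- The stored meaning: one tree, no fixed surface syntax.
data Expr = Lit Int | Add Expr Expr

-- Two user-selectable "code styles" over the same tree:
infixStyle :: Expr -> String
infixStyle (Lit n)   = show n
infixStyle (Add a b) = "(" ++ infixStyle a ++ " + " ++ infixStyle b ++ ")"

lispStyle :: Expr -> String
lispStyle (Lit n)   = show n
lispStyle (Add a b) = "(+ " ++ lispStyle a ++ " " ++ lispStyle b ++ ")"
```

One user sees `(1 + 2)`, another sees `(+ 1 2)`, and neither view changes what's stored - which is why style arguments evaporate.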
...but to be honest, in everyday work JavaScript feels a lot like a bondage-and-discipline language, because it lacks so many features, and in practice you always use a restrictive "coding guideline" and a linter configured for the maximum strictness you can get, so you end up with a pretty verbose, dumbed-down, restricted dynamic language. To keep your sanity and be able to work in a team in JavaScript, you basically have to throw away the baby and keep the bathwater :)