I doubt it.
Yes, all the points the author makes are valid benefits of static typing. But the benefits of dynamic languages still exist. There are reasons people like both types of languages.
However, I think a more relevant point was made in the article: that statically-typed languages are looking more and more dynamic. Add to that the signs of dynamic languages adopting more 'static' features, like optional types. Those are signs of convergence, of both major classes of languages learning from each other and improving.
But I still don't think we'll end up in the middle with "static, but feels totally dynamic". We'll still have both types of languages around.
Elm's latest blog post said it really well:
> Compilers should be assistants, not adversaries. A compiler should not just detect bugs, it should then help you understand why there is a bug.
The middleware (at least those that ship with ring) are all documented in docstrings explaining the keys they add or work on. This is available in the repl via `clojure.repl/doc` or by inspecting `(:doc (meta #'foo))`. It's available in your editor if your editor has clojure support (I only use vim and emacs, and both of these can search/display docs on clojure code). There are also nicely formatted API docs at [2].
[1] - https://github.com/ring-clojure/ring/blob/master/SPEC
[2] - http://ring-clojure.github.io/ring/index.html
That said, it would be nice if more of clojure was documented using an executable schema or type system, à la Schema or core.typed. The author would have done well to pick less well-documented libraries, though.
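Docstrings-as-runtime-metadata is a pattern most dynamic languages share, not just Clojure. A minimal Python sketch of the same idea (the middleware function here is invented for illustration, it is not real Ring):

```python
import inspect

def wrap_keyword_params(hypothetical_handler):
    """Middleware that adds a 'params' key to the request map.

    (Invented example mirroring Ring-style middleware docstrings.)
    """
    def wrapped(request):
        request = dict(request, params=request.get("query", {}))
        return hypothetical_handler(request)
    return wrapped

# The docstring is plain runtime data, reachable from any REPL or editor:
print(inspect.getdoc(wrap_keyword_params))
```

Because the doc lives on the function object itself, any tool with a live image of the program can surface it, which is exactly what `clojure.repl/doc` does.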
For the most part, Ring is well-documented. But, as you noted, hand-written docstrings only go so far. I applaud the writers of ring for their discipline, but certainly we can do better than depending on human discipline.
You even noted that I picked "less well-documented libraries." I didn't pick them because they were poorly documented; I picked them because I'm using them!
From https://github.com/ring-clojure/ring:
> Ring is a Clojure web applications library inspired by Python's WSGI and Ruby's Rack. By abstracting the details of HTTP into a simple, unified API, Ring allows web applications to be constructed of modular components that can be shared among a variety of applications, web servers, and web frameworks.
>
> The SPEC file at the root of this distribution provides a complete description of the Ring interface.

    CL-USER 1 > (class-of *standard-output*)
    #<STANDARD-CLASS EDITOR::RUBBER-STREAM 40E035A2E3>
Oh, the class of the value of `*standard-output*` is `EDITOR::RUBBER-STREAM`... From there I can find the source, find the applicable methods, the slots, the superclasses...

I think that's the reason dynamic languages work: guessing types is easy for people. Guessing meaning is the hard part.
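The same exploration works in any dynamic language with a live image. A rough Python equivalent of that REPL session, as a sketch (the exact class printed depends on the interpreter and how stdout is wired up):

```python
import inspect
import sys

stream = sys.stdout
cls = type(stream)    # the Python analogue of (class-of *standard-output*)
print(cls)            # typically <class '_io.TextIOWrapper'> under CPython
print(cls.__mro__)    # the superclass chain
# A few of the "applicable methods":
print(sorted(name for name, _ in inspect.getmembers(cls, callable))[:5])
```

From the class object you can walk to source files, methods, and base classes, all without any static type information, because the running program carries it.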
How does any IDE or editor know what `some_function` would be passed without a serious amount of inspection?
I might be wrong on my definitions, but almost by definition, there's just not enough information before runtime in dynamic systems to infer enough to do any real refactoring or smart completion.
I use PyCharm every day, and I like it, but its refactor menu is limited, can fail, and pales in comparison to my experience with Java in IntelliJ or even a well-hinted PHP app in PhpStorm.
- When two equivalent anonymous functions can be extracted out?
- When a library method already exists for an expression?
- When you fail to match every possible result in a case statement?

It's nonsense, both ways. Both static and dynamic type systems have their uses and, unless we substantially change the way we use computers (which is plausible), I don't see either side wiping out the other.
Most dynamic languages I've used in my career (javascript, python, php) have either metadata for type hints, a type-hinted superset of the language, or new language-level support for typing.
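Python's gradual typing is the clearest example of that trend: annotations are optional, ignored by the interpreter, but checkable by tools like mypy. A minimal sketch:

```python
from typing import Dict, Optional

def find_user(users: Dict[str, int], name: str) -> Optional[int]:
    # Annotations are just metadata here; CPython does not enforce them.
    return users.get(name)

print(find_user({"ada": 1}, "ada"))   # 1
# A checker such as mypy flags this call, but at runtime it just returns None:
print(find_user({"ada": 1}, 42))      # None
```

The language stays dynamic; the 'static' layer is opt-in and bolted on exactly where you want the extra guarantees.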
But I wished there was a bigger push from the top, from Rich himself, on "first-classing" typed Clojure. Aim for 100% annotation coverage for the most popular libraries. Start with core (maybe it already is? I haven't kept up) and ring, then move outwards from there.
Library writers will be more motivated to add annotations if the big libraries are doing it.
Types allow the compiler to do a lot of validation before your code is even executed, removing whole classes of bugs before they even have a chance to manifest themselves. You thought that value was an int and divided it by two? It was a string. I'm glad I didn't have to wait until runtime to find out about it, or to write tests to make sure that every single code path to that division results in the value being, in fact, an int.
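The division example, sketched in Python: without type information the bug only surfaces at runtime, and only on the code path that happens to hit it.

```python
def half(value):
    return value / 2

print(half(10))      # 5.0
try:
    half("10")       # upstream it looked like an int; it was a string
except TypeError as exc:
    print("caught only at runtime:", exc)

# Annotating it as `def half(value: int) -> float:` lets a static checker
# (e.g. mypy) report the bad call before the program ever runs.
```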
Types allow developers to trust whatever data they receive without feeling the need to protect against hostile, or less experienced, application programmers passing incorrect data. They allow you to know, with absolute certainty, that what you wrote can only be executed the way you meant it to be executed (whether that's correct or not is another question altogether). A function in a dynamic language is never complete, you never know what data, or shape of data, you will receive. Types buy you that certainty.
Code efficiency is not the point - developer efficiency is. When you can trust your data, you can focus on what your code needs to do - that's usually complicated enough without adding the complexity of not being able to trust a single value.
The main argument against static languages is that, in their struggle to be sound, they end up not being complete - and it's absolutely true: there are perfectly legal programs that you can't write with a static language, but that a dynamic one would run without batting an eyelash. Duck-typing comes to mind, I'm sure there are other examples.
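Duck typing in Python, as a sketch: `speak_all` accepts anything with a `.speak()` method, while a nominally-typed static language would demand a shared interface declared up front.

```python
class Duck:
    def speak(self):
        return "quack"

class Robot:                 # deliberately unrelated to Duck
    def speak(self):
        return "beep"

def speak_all(things):
    # Only the presence of a .speak() method matters, not the class.
    return [t.speak() for t in things]

print(speak_all([Duck(), Robot()]))   # ['quack', 'beep']
```

(Structural typing in Go or TypeScript, and Haskell's type classes, recover much of this without giving up checking, which is part of the convergence mentioned upthread.)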
Another argument is that type systems get in the way of what you want to do. There are a few cases where it's true - see the point just above. But in my entirely anecdotal experience, the vast majority of the time someone complains that the compiler won't let him do what he wants, it's because what he wants to do is not correct. The complaint is not that the compiler is too restrictive - it's that your bugs are shoved in your face much, much more frequently and quickly than they would be if you had to wait for runtime. And that's a good thing.
With some cleverness, I was able to get C's type checker to let me know when I was accessing data from the wrong thread. This was tremendously useful when moving functionality between threads. If you think type checking doesn't buy you much, it's because you don't know how to use it.
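The same trick can be sketched in Python (all names here are invented for illustration, the original was C): give each thread's data a distinct wrapper type, so a checker like mypy rejects handing worker data to a UI function, and even unchecked callers trip a runtime guard.

```python
class UiData:
    """Data owned by the UI thread (illustrative name)."""
    def __init__(self, payload):
        self.payload = payload

class WorkerData:
    """Data owned by the worker thread (illustrative name)."""
    def __init__(self, payload):
        self.payload = payload

def render(data: UiData) -> str:
    # mypy rejects render(WorkerData(...)) before the program runs; the
    # isinstance guard catches the same mistake at runtime for unchecked code.
    if not isinstance(data, UiData):
        raise TypeError("render() was handed another thread's data")
    return f"rendered {data.payload}"

print(render(UiData("frame")))        # rendered frame
```

In C the author presumably used distinct struct types the same way; the point is that the checker enforces an ownership convention the language itself knows nothing about.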
For example, a dynamic feature of Java is its class loaders, which make it possible to load classes into a running Java program. This way a program can be updated and extended at runtime.
Other dynamic features are 'late binding', runtime evaluation, code as data, introspection/reflection up to a full blown meta-object protocol which enables things like modifying inheritance, slot allocation, instance creation, method dispatch and so on.
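A rough Python analogue of runtime loading and updating, as a self-contained sketch (the "plugin" module is invented for the demo):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True   # skip .pyc caching so reload re-reads source

# Write a tiny "plugin" module to disk, then load it into the running
# program -- roughly what a Java class loader does with fresh bytecode.
d = tempfile.mkdtemp()
path = os.path.join(d, "plugin.py")
with open(path, "w") as f:
    f.write('def greet():\n    return "v1"\n')

sys.path.insert(0, d)
importlib.invalidate_caches()
plugin = importlib.import_module("plugin")
print(plugin.greet())            # v1

# "Update" the running program without restarting: rewrite and reload.
with open(path, "w") as f:
    f.write('def greet():\n    return "v2"\n')
importlib.reload(plugin)
print(plugin.greet())            # v2
```

Erlang's hot code upgrades take this much further (per-process code versions, supervised handover), but the underlying dynamic capability is the same.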
One of the features of Erlang for example is that it enables application updates, while providing zero downtime. The demand for that in the telco business is far from 'end'ing.
What actually 'dynamic typing' is and if it is seeing its end is a totally different question.
But in today's world, with verbosity-eliminating type inference, more expressive type systems, and tools that can make use of all the extra information to greatly increase productivity, it's really hard to think of any meaningful benefit dynamic languages can provide (that is related to their choice to have dynamic typing, not other parts of the language).
Dynamic languages show there's a tradeoff. Repeating yourself in type annotations is often worthwhile when long-term maintenance is expected; when code is cheap to produce and throw away, not having to worry about strict typing wins.