However, while optional typing gives you some benefits over purely dynamic typing, the fact that it's all bolted on causes a variety of problems. First, typed code has to interact with untyped code, which eventually causes more trouble than it solves.
IMO, the biggest issue, though, is that most of these systems add optional type checking comparable to very simple type systems, like Java's. But those type systems are not really that powerful! The real power of static typing doesn't come from making sure we don't mix strings and integers, but from type checking much more complex abstractions, as in type systems like Scala's or Haskell's. I've never seen an optional typing linter that operates at that high a level without causing much pain. Type inference with generics, existential types, higher-kinded types, algebraic data types: that's where the real value is, and where modern typed languages are.
Aiming optional typing at where typed languages were 20 years ago is not going to help bridge the gap. If anything, I think it makes it wider, because then fans of dynamic languages think they understand the position of proponents of type systems when, in fact, they are looking at a strawman from the past.
A simple type system prevents type mismatch errors. A more complex type system like Haskell's might be able to encode interfaces and other rules, but might also have its limits.
The trade-off is usually that the more classes of errors you remove, the more complex the type system becomes. Over the lifetime of your program, how much time do you spend hunting bugs versus waiting on long compilations or encoding all the rules in types?
I guess my point is that a lightweight type system might also be enough.
Type checking Haskell, for example, is actually incredibly fast and adds basically nothing to compilation time. This isn't one of the drawbacks of a type system.
Without types, it's impossible for tools to refactor your code safely without the supervision of a human. This leads to developers being afraid of refactoring and, ultimately, code bases rot and become huge piles of spaghetti that nobody wants to touch.
With a statically typed language, I'm never afraid to refactor whenever I see an opportunity to do so.
- Some error conditions are inexpressible in your program. You cannot write classical tests to verify these, as trying to recreate the erroneous conditions will result in type errors.
- You cannot forget to "write a type" like you can forget to write a test. A type always covers all the guarantees it makes. This is especially useful when there are no tests, as you have at least some kind of helper available to you.
- Types are a form of documentation. Again, even a poorly documented, poorly named project still has types.
- Types do not make tests obsolete, but they greatly reduce the number of them you have to write and maintain.
As a result of this, most Haskell projects have abysmal test coverage compared to what you'd expect in Python or Java. Nobody would (or could!) write tests verifying that all "instanceof" branches are covered. When you forget to implement a branch in your program, you'll get a compilation warning. When you feed a value to a well-written function, you're guaranteed to get a sensible result out of it (instead of an exception). But still, you can make core changes to programs you do not understand at all, while understanding the type system in general, and have your changes work as expected.
In a sufficiently strong type system, you can express constraints stronger than "returns a string" or "takes an integer". For instance, if you want to make sure that a particular function always returns a valid file descriptor, make a structure for file descriptors that just contains a single int, and give it a private constructor that can only be called by the low-level functions that call the actual syscalls, or maybe a public constructor that confirms that the provided integer is actually a file descriptor. Then, as long as your function typechecks, it can only possibly return a valid file descriptor, for all possible inputs.
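A rough Python sketch of this idea (the names `FileDescriptor`, `open_readonly`, and `from_raw` are my own invention, not from any library). Note that Python can't truly hide a constructor the way a language with module-private constructors can, so here the guarantee rests on convention plus the type checker:

```python
import os
from typing import NewType, Optional

# NewType gives the type checker a distinct type that is just an int at run time.
FileDescriptor = NewType("FileDescriptor", int)

def open_readonly(path: str) -> FileDescriptor:
    # The syscall boundary is the only place a raw int gets blessed.
    return FileDescriptor(os.open(path, os.O_RDONLY))

def from_raw(raw: int) -> Optional[FileDescriptor]:
    # The "public constructor": confirm the int really is an open fd.
    try:
        os.fstat(raw)
    except OSError:
        return None
    return FileDescriptor(raw)
```

Any function annotated as returning `FileDescriptor` then typechecks only if it routes its result through one of these constructors.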
If you take this to an extreme, you get the https://en.wikipedia.org/wiki/Curry-Howard_correspondence, that there's a direct mapping between mathematical propositions and types, and between their proofs and functions of the corresponding type. This is what all modern proof assistants build on top of, though their type systems are usually extremely complicated.
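As a tiny illustration (in Lean 4, my choice of proof assistant here), modus ponens is literally just function application, and conjunction is a pair:

```lean
-- Propositions are types; proofs are programs of that type.
-- "If P implies Q, and P holds, then Q holds" is function application:
theorem modus_ponens (P Q : Prop) (h : P → Q) (hp : P) : Q := h hp

-- Conjunction corresponds to a pair type:
theorem and_intro (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
```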
Especially in the prototyping phase, I do a lot of iterating and refactoring - and I mean A LOT, because I've found that this is the best way for a good design to fall out. Sometimes like 20-30% of the code gets refactored, replacing core data structures and control flow.
If I ever had to rely on unit tests for this, I'd shoot myself, mainly because it would make me roughly twice as slow (because I'd have to rewrite all the unit tests every time) and this would really break my flow.
Also, relying on unit tests for refactoring means that having anything short of 100% code coverage (i.e. 70-80% isn't enough) defeats the purpose of the whole thing anyway - and I've yet to see 100% test coverage in a real-world project. I realize this isn't an argument in the context of your reply (it was either test coverage or static types), but still, something to consider.
You don't need those with static types. You only need to write tests for your actual code, not things that the compiler can check for you.
Manually writing code to check things the computer can test for you is a waste of developers time.
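For instance (`mean` is a made-up example, and the error text is only roughly what mypy prints):

```python
from typing import List

def mean(values: List[float]) -> float:
    return sum(values) / len(values)

# No unit test is needed for the wrong-type case: mypy rejects a call
# like mean("abc") at lint time with something along the lines of
#   error: Argument 1 to "mean" has incompatible type "str"
# so the only tests left to write are about the arithmetic itself.
```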
Without types, IDEs can't perform automated refactorings: you need to supervise them and make sure the IDE didn't break code.
This can't happen with static types.
; (> SECRET GUESS)
;
; caught WARNING:
; Derived type of GUESS is
; (VALUES STRING &OPTIONAL),
; conflicting with its asserted type
; REAL.
; See also:
; The SBCL Manual, Node "Handling of Types"
;
; compilation unit finished
; caught 1 WARNING condition
; printed 8 notes
Common Lisp allows optional type declarations. SBCL (a feature it inherited from CMUCL) uses those as compile-time type assertions.

The syntax is backwards compatible, and this doesn't fit the language well. Forward references to types are handled via a really tacky kludge: "When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later." So, sometimes you write
def foo(item: 'Tree'):
instead of
def foo(item: Tree):
It's not stated in what scope 'Tree' is evaluated, so don't reuse type names. Some type info goes in comments:
with frobnicate() as foo: # type: int
# Here foo is an int
Also, Python is getting header files (called "stub" files), like C. In most modern languages, such as Go and Rust, that's all handled automatically by storing type info in the object files. But in future Python, that will be manual.

There's function overloading, sort of. There's "@overload", but it doesn't really do anything at run time yet.
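To make that concrete, here's what `@overload` looks like in practice (`double` is a hypothetical function of my own): the stubs only inform the type checker, and at run time the final definition simply replaces them.

```python
from typing import Union, overload

@overload
def double(x: int) -> int: ...
@overload
def double(x: str) -> str: ...

def double(x: Union[int, str]) -> Union[int, str]:
    # Only this implementation exists at run time; the stubs above
    # are erased and serve the type checker alone.
    return x + x
```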
This whole thing is a collection of hacks in search of an architecture. This is the messiest type system of any mainstream language. Well designed type systems are hard enough. This is not one of them.
If this hadn't come from Python's little tin god, it would have been laughed out of the Python community. Stop him before he kills again.
Stub files are not mandatory; type annotations can be in the same file as the code. This is perfectly legal, for example:
def add(first_value: int, second_value: int) -> int:
    return first_value + second_value
Stub files exist because the syntax support didn't exist in earlier versions of Python, so if you write a library that supports older versions of Python you ship a stub file rather than causing syntax errors for a subset of your users.

Because it doesn't appear to actually do anything. It's a joke to call this 'static typing'.
And by the way the future looks bright as interesting PEPs are coming: PEP509, PEP510 (https://www.python.org/dev/peps/pep-0510/) and PEP511 for instance.
Has anyone ever gotten paid to add annotations to code that works?
Personally, I view type systems as like a safety line when doing work on a roof, and optional typing as having a line that might or might not be tied off.
It's not able to analyze across such boundaries, but as you expand the set of typed code you get better and better coverage.
> Has anyone ever gotten paid to add annotations to code that works?
At Dropbox, we're starting to add type annotations to code. We're confident it will catch many bugs at lint time. Source: I work at Dropbox.
> Personally, I view type systems as like a safety line when doing work on a roof, and optional typing as having a line that might or might not be tied off.
At least it has the possibility of being tied off and catching you. Better than definitely not having a line.
PHP has an (unfortunately quite limited) set of type annotations, but the interpreter actually enforces them.
I seem to be doing this a lot in this thread, but...
This is your obligatory reminder that several statically-typed languages actually run on dynamic-language runtimes, because programmers don't like to think about the cost of polymorphism/generics, but implementers have to think about that.
This is partly what the Typed Racket writers mean when they say that most gradual typing systems like this one are not sound.
http://www.cs.cornell.edu/~fabianm/tpls/papers/20151110.shtm...
This topic has been knocked around in Ruby-land for a while; I remember seeing Michael Edgar's LASER (https://github.com/michaeledgar/laser) static analysis tool including some work around optional type annotations, but seems like development there has stopped.
It seems like a generator like @Signature(...input types, output type) would solve this problem with limited language changes, and would work in python 2/3?
With type annotations you can have static guarantees, which decorators will not be able to provide, as decorator methods will be called and resolved only during runtime. Objects imported from `types` hopefully will not.
https://github.com/dobarkod/typedecorator is interesting -- I was imagining type signatures more like haskell's:
addNumber :: Int -> Int -> Int
translating to something like
@TypeSignature([int, int, int])
def add_number(a, b):
    ...
It's just as possible to have the compile-time program that's doing the checking process the decorators.

The real benefit I'd be looking for is the chance to give the compiler hints to speed up execution times.
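A minimal run-time version of the imagined decorator could be sketched like this (`type_signature` is hypothetical, not a real library; it checks at call time, which is exactly the kind of information a static checker could instead consume by reading the decorator):

```python
import functools

def type_signature(sig):
    # By convention here, the last element is the return type
    # and the rest are the argument types.
    *arg_types, ret_type = sig

    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            for value, expected in zip(args, arg_types):
                if not isinstance(value, expected):
                    raise TypeError(
                        f"expected {expected.__name__}, got {type(value).__name__}")
            result = fn(*args)
            if not isinstance(result, ret_type):
                raise TypeError(f"bad return type {type(result).__name__}")
            return result
        return wrapper
    return decorate

@type_signature([int, int, int])
def add_number(a, b):
    return a + b
```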
I personally find that type safety cuts down the number of unit tests I write by half and makes refactoring work an order of magnitude easier to perform.
While this is true, static type checks do even less for this problem. Do you even create specific types for value ranges, e.g. IntegerBetweenZeroAnd100? You're in a vast minority if so, and I'd be interested to see how tedious it is to construct all these non-native types everywhere.
> If you omit manual type checking in your code or in the respective unit test, you may miss some subtle failure scenarios.
There's generally not much that's subtle about a wrong type. If the code is executed at all, it will usually blow up. In my experience, the subtlety comes in the values.
> I personally find that type safety cuts down the number of unit tests I write by half
I just don't buy this (but then again my team goes for near total coverage). Nowhere near 50% of our tests are testing anything that would be solved by static types.
If you're going to make the argument that you can just add comments with the types - well then that's exactly what mypy is doing, but if you do it in the mypy format you get static analysis of your types for free.
Static typing and unit testing aren't mutually exclusive.
The problem in Ruby is nils, and you'd have the same problem with the same solution in a static language; creation of a duck type. You can't get away from duck types, whether it's a maybe type or whether you perform nil-checking at the earliest possible opportunity. You learn with time and experience how to deal with inconsistent data. Ruby gives me the flexibility to do it without a lot of boilerplate.
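For comparison, a maybe type in Python's optional typing turns the nil-check into a checker-enforced obligation rather than a convention (the `find_user`/`greeting` functions are my own illustration):

```python
from typing import Dict, Optional

def find_user(users: Dict[int, str], uid: int) -> Optional[str]:
    # Optional[str] means "str or None": the None case is part of the type.
    return users.get(uid)

def greeting(users: Dict[int, str], uid: int) -> str:
    name = find_user(users, uid)
    if name is None:  # without this narrowing, mypy rejects the return below
        return "Hello, stranger"
    return "Hello, " + name
```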
A good type system doesn't allow you to call impure code from a pure function. It just won't compile. So your skill becomes useless because it's automatically verified by the compiler.
A type system is just OOP with added math. The benefits don't outweigh the hassle. I've tried Crystal. Did not like. It was the language that taught me that there's more to Ruby than nice syntax.