2. Call-by-value gives us the ability to safely interleave effects. Now, I know you all think we should not be doing that at all, but I would say that this is only true in some cases. The point of reifying an effect in a monad is not that "effects are icky"; it is that we have written a non-extensional operation using effects, and we want it to be an extensional function. Wrapping it up in the monad "completes" the operation as such. However, there are plenty of extensional functions which may be written using computational effects (such as memoization): these should not be wrapped up in the monad. (FYI, it's the effects that preserve extensionality that Bob calls "benign effects", to the consternation of Haskell developers everywhere.) ML gives us the fine granularity to make these choices, at the cost of some reasoning facilities: more proofs must be done on paper, or in an external logical framework. I tend to think that the latter is inevitable, but some disagree.
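A minimal sketch of such a benign effect, in Haskell for concreteness (the `memoize` combinator here is hypothetical, not from any library): the mutable cache is invisible to callers, so the wrapped function remains extensional even though it is implemented with effects.

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')
import qualified Data.Map.Strict as Map
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical memoize: observably a pure function, but backed by a
-- mutable cache. The effect is "benign" in Harper's sense: no caller
-- can distinguish `memoize f` from `f` except by running time.
memoize :: Ord a => (a -> b) -> a -> b
memoize f = unsafePerformIO $ do
  cache <- newIORef Map.empty
  pure $ \x -> unsafePerformIO $ do
    seen <- readIORef cache
    case Map.lookup x seen of
      Just y  -> pure y           -- cache hit: reuse earlier result
      Nothing -> do
        let y = f x               -- cache miss: compute and store
        modifyIORef' cache (Map.insert x y)
        pure y
{-# NOINLINE memoize #-}
```

In Haskell this has to smuggle the cache past the type system with `unsafePerformIO`; the point of the ML position is that such operations should not need to be marked effectful at all.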
I am hoping for a middle-ground then: something like ML, in that effects are fundamental and not just bolted onto the language with a stack of monads; something like Haskell, where we can tell what effects are being used by a piece of code.
The story hasn't been fully written on this, but I think that Call-by-push-value can help us with both recovering the benefits of laziness as well as reasoning about effects.
3. Modularity is something which Haskell people simply do not take seriously, even with the new "backpack" package-level module system they are building. One of the most-loved reasoning facilities present in Haskell depends, believe it or not, on global scope, and is therefore inherently anti-modular (this is the uniqueness of type class instances). As a result, you can never add modularity to Haskell, but we may be able to back-port some of the more beloved aspects of Haskell to a new nephew in the ML family.
(Confusingly, laziness advocates often say that their brand of functional programming has better "modularity" than strict, because of the way that you can compose lazy algorithms to get more lazy algorithms that don't totally blow up in complexity. I would say that lazy languages are more "compositional", not more "modular"—I prefer to use the latter term for modularity at the level of architecture and systems design, not algorithms.)
OCaml, or even better, F#?
http://mirror.ocamlcore.org/wiki.cocan.org/companies.html
hsenv, cabal-dev, cabal sandboxes, nix, stackage ... or do you include everything in your repository (any tool to automate that)?
Nix is great, but you have to take some time to set it up and learn it. Sandboxes are pretty standard and work as you would expect with ghc-mod (not that Nix doesn't; I just had to invest more time getting it and the tooling set up to use it as a "dev environment"). Cabal sandboxes are much simpler.
Stackage is great too, btw.
I can finally go and write some relevant code now.
I got seriously into Haskell a few months ago, but I've used ML in algorithmic trading and was a major fan.
I feel like it does us a disservice to complain about Haskell's (admittedly, warty) Exception system or "cabal hell" when what is actually going on is that we're holding the language to a higher standard. I agree that functions like head, fail, and possibly non-strict foldl, are warty; but these issues are downright minuscule when you consider what Haskell (language and community) brings to the table. (Also, every other language has warts.) The fact that our complaints about Haskell are minor annoyances compared to other languages' drawbacks, to me, signifies that the language got the major things right.
And C++ belongs to the C language family, but one could still say something like "the next great thing would come from a C rather than a C++ like language" (I'm not judging if this is true here, just that it's quite clear what it means).
The only time it's necessary is when you're using the FFI or working with language internals, at which point there's really no way for the type checker to work anyway.
One should avoid using exceptions in pure code. This is well established. Instead, use any of the many type-safe error-handling mechanisms, like Maybe or Either. Having written probably somewhere between the high thousands and low tens of thousands of lines of Haskell, I've never once used a user-defined exception or undefined.
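For instance (the names `safeHead` and `divide` are illustrative, not from any particular library), the error cases become ordinary values the caller must handle:

```haskell
-- The partial `head` made total: the empty-list case becomes a
-- value, not a runtime exception.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Either lets the failure carry a description for the caller.
divide :: Int -> Int -> Either String Int
divide _ 0 = Left "division by zero"
divide n d = Right (n `div` d)
```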
The point of this article seems to be "if your code breaks, it's no longer type safe." I don't think this is news to anyone.
You should recognize that your opinions about how people should use your favorite programming language are not "more or less objectively true".
Context: the post I'm replying to used to end with a complaint that people would downvote something that's "more or less objectively true", and a request that people should explain why. I can't downvote, but I can explain.
How is this dismissal overly simplified? What have I failed to take into account?
Good for you, I guess. You'll find plenty of libraries which have no qualms about using exceptions in the real world, though. Not to mention asynchronous exceptions, obviously. Personally, I find both exceptions and type-safe error handling unsatisfactory: the former is unsafe, and the latter usually results in a "god error type" which breaks modularity.
I haven't run into too many (particularly when compared to the popularity of exceptions in other languages).
>the latter results usually in a "god error type" which breaks modularity.
Have you tried using Either with a sum error type and/or an error typeclass?
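To sketch what that might look like (all of these types are hypothetical): each module keeps its own small error type, and a caller combines only the errors it can actually encounter into a sum at the boundary, rather than forcing everything into one global "god" type.

```haskell
import Data.Bifunctor (first)

-- Each module exposes its own small error type...
data ParseError = UnexpectedToken String deriving (Eq, Show)
data DbError    = ConnectionLost        deriving (Eq, Show)

-- ...and the caller's error type is a sum of just those.
data AppError = AppParse ParseError | AppDb DbError deriving (Eq, Show)

parseId :: String -> Either ParseError Int
parseId "42" = Right 42
parseId s    = Left (UnexpectedToken s)

fetchName :: Int -> Either DbError String
fetchName 42 = Right "alice"
fetchName _  = Left ConnectionLost

-- `first` lifts each module's errors into the caller's sum.
lookupName :: String -> Either AppError String
lookupName s = do
  uid <- first AppParse (parseId s)
  first AppDb (fetchName uid)
```

Whether this actually restores modularity or just relocates the boilerplate is, of course, the crux of the disagreement.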
Would such a language be viable? Or is it absolutely necessary to catch all possible type errors? In the past, tools like lint have been considered useful, although lint is not at all the kind of type system that I envision, namely a system that in practice catches all the errors that ML catches -- it just doesn't (and cannot) guarantee it catches them.
The restrictive nature of static type systems today is legendary, but I wonder sometimes whether people realize how restrictive they are. If you can't make your idea work with functors or typeclasses, chances are good there's no way to get it to compile and you have to write functions in longhand.
For instance, Haskell does it in almost exactly these terms. If you have a type like
data Drums = Ba | Dum | Tish
and you have a function that consumes drums

play :: Drums -> String
play Ba = "Ba!"
play Dum = "Dum!"
you've left out the Tish, possibly because you don't have a cymbal. You know this, but the compiler doesn't. This is considered an incomplete pattern match and would raise some eyebrows. But often it just means there's an invariant which you know and the type system does not. In Haskell, -Wall will stop you dead here, but you can relax it to say "ehhhh, I know what I'm doing".
More generally, you can also write
play :: Drums -> String
play Ba = "Ba!"
play Dum = "Dum!"
play Tish = error "Impossible! I have no cymbal"
This 'error' is a misleadingly named function. It's more like 'assert': it indicates that this is a completely impossible program state (much like running into an unbalanced tree deep inside a library function which knows that tree balance is maintained). It's outside of the reach of the programmer to fix. It's not an exception; it's an assurance to the compiler that even though there's something missing in the type safety, things will be OK. It is also worth noting that if you don't want (global) type inference, you can get far in a language with permissive casting, type annotations, and local inference. The results aren't a panacea, though.
> Suppose you could catch 99% of type-based errors instead of 100%, and in addition, use a compiler switch to see all the case statements where a class of a type is missing.
I am not quite sure what you mean by this. Care to elaborate? (In general, with inference systems, missing type information is hard(ish) to localize. So pointing out where exactly a type error occurred is non-trivial)
In order to get precision in checking, a lot of computation needs to be done so the compiler relaxes the precision when facing large disjunctions (networks) of constraints.
Dynamic checks are inserted as necessary but a system that relaxes when things get hairy can't guarantee that it will find all errors at compile time.
The idea is similar to Soft Typing of Cartwright and others, but they were thinking of an interactive system, some kind of programmer's aid. If I recall they ran into problems giving reasonable error messages.
It's entirely possible - I believe Dylan implements something along these lines. IMO it's harder to reason about than a consistent type system with "escape hatches" like unsafePerformIO, and about equivalent in usability.
> The restrictive nature of static type systems today is legendary, but I wonder sometimes whether people realize how restrictive they are. If you can't make your idea work with functors or typeclasses, chances are good there's no way to get it to compile and you have to write functions in longhand.
I have yet to find a piece of "good code" that I couldn't implement in a typesafe way. I've occasionally found code that worked but couldn't be made typesafe (e.g. the "big global Map<String, Object>" antipattern), but it's always been code that I would have wanted to rewrite for readability and sanity anyway.
Why? The most common and popular opinion is already that it is not important.
>Suppose you could catch 99% of type-based errors instead of 100%, and in addition, use a compiler switch to see all the case statements where a class of a type is missing. The intended benefit of this system is that it accepts all correct programs.
That doesn't make sense. That's like saying suppose we had a system of math that gave the right answer 99% of the time. How would doing the wrong thing 1% of the time give you the benefit of accepting all correct programs?
>The restrictive nature of static type systems today is legendary
Mythological. Legends are supposed to have some historical root.
>If you can't make your idea work with functors or typeclasses, chances are good there's no way to get it to compile and you have to write functions in longhand.
What does that even mean? That's like saying "if you can't make your idea work with classes or inheritance, chances are good that there's no way to get it to compile and you have to write functions in longhand".
Compiler enforced static typing rejects many correct programs, which is a disadvantage.
I've talked to Harper about this post and his opposition to Haskell before. I mentioned that it was leading people to believe they could ignore typed FP entirely, rather than redirecting them to an ML derivative. Harper expressed dismay at this.
There are more advanced languages than Haskell, they aren't ML, and PL researchers/experimenters are still working out how to make them work nicely for day to day stuff.
First: http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loos... (This isn't a throw-away reference, informal reasoning has worked extremely well in practice. Yes, I'm making that argument as a user of a pure, typed FP language.)
unsafePerformIO is primarily used to change thunk-memoization behavior [1]. Also, it says unsafe right on the tin. Nobody uses it in practice in ordinary code. It's not something you have to incorporate when reasoning about code.
Giving up reasoning about your Haskell code as if there were semantics-violating unsafePerformIOs lurking in the depths is similar to giving up reasoning about any code because a cosmic ray might flip a bit.
Let's consider the rather remarkable backflips Standard ML and OCaml had to perform in order to make FP work with strictness and impurity.
You can ignore unsafePerformIO because very very few people ever need it and you'll almost never have a reason to use it. Just don't use it.
Similarly, writing total functions is the cultural default in Haskell. The compiler warns on incomplete pattern matches which you can -Werror as well.
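Concretely, reusing the Drums example from upthread, enabling the warning per file looks something like this (with -Werror on the command line, GHC then refuses to compile an incomplete match):

```haskell
{-# OPTIONS_GHC -Wall #-}

data Drums = Ba | Dum | Tish

-- With -Wall, leaving out the Tish case below triggers an
-- incomplete-pattern warning at compile time; -Werror promotes
-- that warning to a hard error.
play :: Drums -> String
play Ba   = "Ba!"
play Dum  = "Dum!"
play Tish = "Tish!"
```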
You can't define your own Typeable instances as of GHC 7.8 - only derive them, so that soundness issue is gone. Typeable itself is not popularly used, particularly as SYB has fallen out of favor. One critical aspect of why I like Haskell and the community around it is the willingness to fix mistakes. Meanwhile, ML implementations still use the value restriction [2].
>the example once again calls into question the dogma
Oh please. Haskell is an ensemble, multi-paradigm language with the right defaults (purity, types, immutability, expressions-only) and escape hatches for when you need them.
Haskellers appreciate the value of modules a la ML, but typeclasses and polymorphism by default have proven more compelling in the majority of use-cases. There are still some situations where modules (fibration) would be preferred without resorting to explicit records of functions. For those cases, Backpack [3] is in the works. It won't likely satisfy serious ML users, but should hopefully clean up gaps Haskell users have identified on their own.
Harper's case is over-stated, outdated, and not graced with an understanding of how Haskell code is written. Informal reasoning works if the foundations (purity, types, FP) are solid.
[1]: https://github.com/bitemyapp/blacktip/blob/master/src/Databa...
[2]: http://caml.inria.fr/pub/papers/garrigue-value_restriction-f...
He is an accomplished computer scientist and a major contributor to Standard ML (i.e. understandably biased), but his Haskell rants read like (type-safe) schoolyard banter.
ML is better than Haskell in many ways. This should drive Haskell to improve as much as it drives people to check out ML. And it has!
What backflips are these?
Apart from the value restriction (which is a real albeit minor inconvenience; I find it about as annoying in practice as Haskell's monomorphism restriction, and each can be removed in principle, at the cost of making it harder to reason about the semantics/cost of using a variable), I'm honestly not sure what you have in mind.
On a tangent, I'm not sure when statically-enforced purity (as opposed to purity-by-default) and pervasive laziness (as opposed to laziness-as-a-tool) became culturally defining features of FP. The original FP language, Lisp (and even its "this FP stuff is great, let's have some more" cousin Scheme) is impure and strict. I'm grateful to Haskell for trying to answer the question "just how far can we push these purity and laziness ideas", but I don't think that complete purity or laziness are necessary features for FP. They're just tools in its toolbox. But then, I have always been inclined to pluralism.
Examples, please. (I like bright, shiny things.)
The usual sequence is [1] followed by [2].
Augment with [3] and [4] as needed.
One negative thing about Coq/Agda/Idris is they don't have a satisfactory encoding of typeclasses [5]. This is a problem in general with all dependently typed languages. Proof obligations get "churned" by changing code and the only tool to address that is extremely general proof terms combined with proof search. The best proof search is Coq, but the code extraction is heartburn-inducing for Haskell users.
Whereas in Haskell, we get extremely lightweight and type-safe code refactoring because of the way in which we leverage typeclasses and parametricity. This turns out to be very valuable as your projects get more serious.
That said, still entirely worthwhile to learn Coq or Agda.
By the way, this [6] is how I teach Haskell. Working on a book as well.
[1]: http://www.cis.upenn.edu/~bcpierce/sf/current/index.html
[2]: http://adam.chlipala.net/cpdt/
[3]: http://cheng.staff.shef.ac.uk/proofguide/proofguide.pdf
[4]: http://www.amazon.com/How-Prove-It-Structured-Approach/dp/05...
[5]: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceMa...
Edit: I just wanted to add that if you do compiler-like tree transformation code it behooves you to explore uniplate/multiplate/plated.
So if you do not use them (which I never do, and which is recommended against), then the problem is not relevant?