“So, one of the great things about Haskell actually, that is spoken about and I think it’s the sort of killer app for Haskell, is that it’s so refactorable, right? You can do a heart or lung transplant on GHC and make truly major changes and the type checker just guides you to do all the right things.”
Freely refactoring the code without worrying about unit tests, etc. seems quite appealing.
To summarize: the killer app for Haskell is that “it’s so refactorable”.
I thought GHC was famously a nightmare to work in?
> GHC is not exemplary of good large scale system design in a pure function language. Rather ironically, it violates the properties that draw people to functional programming in the first place: immutability, modularity, and composability
> many new features have been force-fitted into the existing code without proper redesign
> While the current design isn’t glorious, it works
> On the bright side, GHC is written in Haskell, and this language is particularly well suited to performing massive refactorings with confidence that nothing breaks. Thus, it should be possible to refactor the GHC library towards a more robust and versatile design
and in fact Simon Peyton Jones continued by saying exactly that:
> You can do a heart or lung transplant on GHC and make truly major changes and the type checker just guides you to do all the right things. The difficult stuff is envisioning and being very clear about what you’re trying to do. Actually doing it is often not that hard.
So the two texts and the two opinions are completely in alignment. GHC is famously bad insofar as it has a poor design. But once a better design has been worked out, you can give it the heart transplant it needs without undue stress.
I have no idea how the heart transplant proposed by Sylvain Henry, John Ericson and Jeffrey M. Young is going. I suppose at some point there should be something checked in and a report about how painful or painless it was (and, potentially, if it's really completely wrong, perhaps a series of bug reports in the next seven releases of GHC).
That's not my experience. I made small changes in GHC when I had less than 6 months of Haskell experience. I felt the code was pretty straightforward.
This only tells us that SPJ never had to work on hard real-time systems or kernels. GHC-compiled code throws exceptions and does nothing to help you avoid violating latency bounds. Not at all. For such tasks we have much better systems, without GC.
See, when you are defining a Haskell program, you are conceptually creating a tree of thunks that eventually get executed by the GHC runtime. Those thunks are typed, meaning they have typed inputs and outputs, and are either side-effect-free or are defined in a context that controls their side effects (e.g. the IO monad).
So you can change the definition of types willy-nilly and either get a working compiled program or some error output that tells you exactly what is broken or doesn't make sense to GHC's model of your proposed computation. Because computation itself is typed, you have a stronger guarantee that it will work as expected when executed by the GHC runtime. Because side effects are controlled, you are forced by the type checker to handle runtime errors (or crash).
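A minimal sketch of what that type-guided refactoring looks like in practice (the `Output` type and `describe` function here are invented for illustration): if you later add a constructor to the sum type, GHC's exhaustiveness checking flags every function that must be updated, so the compiler walks you through the refactor.

```haskell
{-# OPTIONS_GHC -Wall #-}

-- A sum type describing where output goes. Adding a new constructor
-- later (say, `Network HostName`) makes every non-exhaustive match
-- a compiler warning/error, pointing at exactly what must change.
data Output = Stdout | File FilePath

-- A total function over Output: the type checker verifies that every
-- constructor is handled.
describe :: Output -> String
describe Stdout   = "writing to standard output"
describe (File p) = "writing to " ++ p

main :: IO ()
main = putStrLn (describe Stdout)
```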
At least that's how I understand it as someone who works with gifted Haskell engineers, but exists very much on the periphery of understanding.
If it compiled before and worked and your refactored version also compiles, chances are you didn't break anything.
all state is factored, so it can easily be refactored.
In my experience the ease of refactoring depends more on the quality of the code you are refactoring than on the language. That said, a strong type system helps avoid stupid mistakes, and Haskell has a very strong type system.
you can unplug, replug stuff at will
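One common illustration of the "stupid mistakes" a strong type system rules out (this example is my own, not from the thread): zero-cost `newtype` wrappers make it a compile-time error to pass seconds where milliseconds are expected, even though both are just a `Double` at runtime.

```haskell
-- Distinct types for distinct units; mixing them up no longer compiles.
newtype Seconds = Seconds Double
newtype Millis  = Millis Double deriving (Show, Eq)

-- The only way to get a Millis from a Seconds is through an explicit,
-- type-checked conversion.
toMillis :: Seconds -> Millis
toMillis (Seconds s) = Millis (s * 1000)

main :: IO ()
main = print (toMillis (Seconds 1.5))
```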
> SPJ: It’s a very different prospectus because in this case Verse is a pretty well-formed beast in Tim’s head. If we want to do something different we’re going to have to persuade him but I’m fine with that, right? But the dynamic is that he’s a sort of technical lead on the project – which is very unusual for the CEO of a multibillion dollar company and actually quite rewarding.
Quite unusual and very cool!
> So, for me, that’s as far as increasing our ability to give you statically guaranteed theorems about Haskell programs. My money’s on Liquid Haskell at the moment and I hope that we the Haskell community
My experience is that other refinement type systems are way less complex. See: https://github.com/hwayne/lets-prove-leftpad
In particular, compare https://github.com/hwayne/lets-prove-leftpad/blob/master/liq... to https://github.com/hwayne/lets-prove-leftpad/blob/master/daf...
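For readers unfamiliar with Liquid Haskell, a rough sketch of what the leftpad proof looks like (loosely modeled on the linked repo; the refinement annotation here is illustrative, not the exact spec): the `{-@ ... @-}` comment states the theorem, plain GHC ignores it, and the Liquid Haskell plugin checks it.

```haskell
-- Refinement spec (illustrative): the result's length is the max of
-- the requested width and the input's length. Checked by Liquid
-- Haskell; treated as an ordinary comment by GHC.
{-@ leftPad :: n:Nat -> c:Char -> s:String
            -> {v:String | len v == max n (len s)} @-}
leftPad :: Int -> Char -> String -> String
leftPad n c s = replicate (n - length s) c ++ s

main :: IO ()
main = putStrLn (leftPad 5 '*' "ab")
```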
For me this has been a bit of a disappointment.
[1] https://youtube.com/watch?v=wNa3MMbhwS4
[2] https://corecursive.com/015-dependant-types-in-haskell-with-...
The applications I’ve heard for Idris also have to do with finance (although not blockchain stuff).
(To be abundantly clear and uncontroversial, as it pertains to this comment, I'm not interested in discussing "is web3/metaverse {good,bad}", but rather in discussing the purely functional programming community-of-community's interests or anti-interests in it.)
Python/C++ come with their own issues, and when starting a project from scratch (e.g. a blockchain), there's more flexibility in the choice of language. In the blockchain field, throwing around buzzwords has proven useful for gathering money and making projects stand out (functional programming, formal verification...). Also, academics love these languages, so starting a project in Haskell/OCaml can get you contributions from academia, possibly some big names joining your team. And of course, strongly typed languages with clean semantics give you more safety, which is important in some fields.