While fiddling around is still somewhat possible in Haskell, the language itself makes it quite difficult. Haskell kind of forces you right at the beginning to pause and think, "Well, what is it that I'm actually trying to do here?" It lets you recognize common patterns and implement them in abstract ways, without having to think about what kind of values you actually have at runtime. In that way Haskell is the most powerful language I know.
Have a tree/list/whatever? Need to apply a function to each of the elements? Make your tree/list/whatever an instance of the Functor type class and you're done. Need to accumulate a result from all the elements? Make it foldable.
Something depends on some state? Make it a Monad.
You either get a result or you don't (in which case any further computations shouldn't apply)? Use the Maybe Monad.
You need to compute different possible results? Use the List Monad.
Need to distinguish three different possible values that are different compositions of elementary types? Make yourself your own type and pattern match the behavior of applying functions.
Need to output in a certain way? Make it an instance of the Show class.
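The patterns above can be sketched in a few lines. This is a hypothetical example (the Tree type and names are made up), and GHC can derive the Functor and Foldable instances mechanically from the shape of the type:

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable #-}

-- A hypothetical binary tree; Show, Functor, and Foldable all
-- follow mechanically from the structure of the type.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Functor, Foldable)

tree :: Tree Int
tree = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)

main :: IO ()
main = do
  print (fmap (* 10) tree) -- apply a function to every element
  print (sum tree)         -- accumulate a result from the elements
```

Once the instances exist, every generic function written against Functor or Foldable (fmap, sum, length, toList, ...) works on the tree for free.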
Most concepts that are used every day have some kind of idea behind them that is abstract and implementation independent. Haskell kind of forces you to reference those ideas directly. The downside is that you actually have to know about those concepts. However, knowing such concepts also makes you a better programmer in other languages, so it's hardly a bad thing.
When trying to build complex software in Haskell, I find myself spending a lot of time commenting/uncommenting swaths of code, just so I can get part of an algorithm to load in GHCi. It sucks. What I wish would happen is that GHCi allowed me to load just the things that type check and skip the rest, so I can fiddle. This is definitely possible. Refusing to compile is great for production, but not while developing.
Software is built in pieces; if I'm working on one piece, another statically unrelated piece shouldn't prevent me from working. In this regard GHCi (like many static-language tools) makes developing more cumbersome than dynamic languages do, but again, it's not intrinsic.
I also wish that when I run my tests, it listed all the type errors, as well as running the tests on the code that does type check. Having more safety mechanisms in Haskell helps with writing correct code, but compiling doesn't mean the code works. Automated testing is still more useful for writing software that works. Haskell isn't as safe as many people think [1].
sort a = a ++ a -- it compiles, so it must sort
[1] http://hackage.haskell.org/package/base-4.8.0.0/docs/Prelude...

Use `-fdefer-type-errors` (it should work with both GHC and GHCi): all type errors become warnings, and if you try to use a function that was compiled with an error, you get a runtime error.
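A minimal illustration of the flag's behavior (the file contents here are hypothetical); the OPTIONS_GHC pragma enables the same flag per module:

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}

good :: Int
good = 42

bad :: Int
bad = "not an int" -- the type error becomes a compile-time warning...

main :: IO ()
main = print good  -- ...and only evaluating 'bad' would crash at runtime
```

The module loads and runs despite the ill-typed `bad`, which is exactly the "load what type checks and let me fiddle" workflow asked for above.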
But I must confess, I used it in the past to have a convenient way to print out something like: 'Add "x" "y"' as "x + y". I didn't care about using read to turn it back into the internal representation since expression parsing is kind of difficult. I used Parsec instead. So I had show to output and a parser as the inverse.
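Something in that spirit might look like the following (the Expr type is a hypothetical reconstruction, not the parent poster's actual code):

```haskell
-- A tiny expression type whose Show instance pretty-prints
-- instead of producing something 'read' could parse back.
data Expr = Var String | Add Expr Expr

instance Show Expr where
  show (Var x)   = x
  show (Add a b) = show a ++ " + " ++ show b

main :: IO ()
main = print (Add (Var "x") (Var "y"))
```

Here `show` is used purely for human-readable output, with a separate parser (e.g. written in Parsec) serving as the inverse.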
But you're right, as projects get larger, a priori design and static types quickly become essential. And at that point, requirements are usually known and frozen.
I predicted that the natural resolution would be languages with both (i.e. optional static types, especially at important interfaces) - but while this feature exists, it hasn't taken off.
Instead it seems that performance is the main attraction of static types in the mainstream (Java, C#, Objective-C, C, C++), while the ML family and Haskell are popular where provable correctness is wanted.
There's definitely a lot of missing documentation about this folk practice of "fast, loose, shitty Haskell" due to the strong culture of pretty code that's also enabled by Haskell. I remember seeing a video presented at CUFP that went into the merits here, though.
Essentially, this is a "tricky" concept because you want to design your types to be exactly as restrictive as you can afford without having to think too much. It probably requires a good grasp of the Haskell type system applied in full glory in order to bastardize it just right.
So, tl;dr?
I think types are the ultimate fast iteration tool, but this is not a well-documented practice.
It's often said that a Lisp advantage is being able to write sloppily, without fully understanding what's needed. I guess you can model that degree of "fewer constraints" in Haskell as well? Otherwise, constantly being forced to "understand what you're doing" can sometimes be a burden.
Haskell isn't a theorem prover; if you want one of those, you may want to check out Idris [1]. Haskell gives you more assurance than, say, Java that you aren't going to get runtime errors, but not complete assurance. You'll still need automated testing for correctness, to make sure you're not getting garbage in, garbage out.
[1] http://dafoster.net/articles/2015/02/27/proof-terms-in-idris...
And that's where Ruby, Python, Javascript, and to some extent Matlab come in. For whatever else people may say about them later (they don't scale, they're a roadblock, they're a mess, null is not a function, etc.), they were there for you when you were programmatically young and they introduced you gently into a world that's otherwise extremely complex.
After all, programming, like literally everything else, is 99% human and 1% logic, machines, data, "scaling", etc. Programs are written by people for people (incidentally, they can also be read by a computer), so it's incredibly important that the 99% of that equation (you, the programmer) doesn't become discouraged at the onset by an extremely elegant, expressive, but rather rapey language before you're ready for it. In that sense, it's absolutely okay to be "seduced" by an easy scripting language in the beginning. Eventually, though, when you start lamenting "undefined is not a function" and how it could be so easily avoided with proper type-checking, that's your body telling you that you're ready for Haskell now.
Well, no, they literally weren't there for me when I was new to programming. (MATLAB existed then, but I wouldn't actually see it for more than a decade.)
And while I don't think they are bad languages for beginners, I don't see a clear argument presented as to why they are superior for that purpose (just a somewhat vulgar analogy that presumes that people share your subjective opinions about the languages involved.)
Pro tip: If your analogy needs a disclaimer that perpetuates gender stereotypes for it to work, then it's probably sexist.
Haskell is to programming what bugs are to food. Both are functional, an acquired taste and look scary from the outside.
This may be true for you, but it is not true for everyone. You assume that learning Haskell as a first programming language would be more difficult, but you don't present any evidence to support that claim. People who have done so disagree with you.
Wouldn't scripting languages allow one to gradually build that understanding? Suppose you end up with a lot of complex code? Ditch it and rebuild from scratch. That usually takes around 1/10th the time it took the first time, with much better results.
So I think that by the time one can think up and build the perfect abstractions in Haskell, one could write 3 or 4 iterations of the program in a dynamic language, each time with better abstractions and neater organization.
In Haskell, I'll often have a problem and just stare at my laptop and think for an hour. Then write a dozen lines of simple, straightforward code. The code is easy to test, and the problem is marked as "solved" instead of "seems to work" as happens in scripting languages.
Edit: auto complete fixes
Another thing Haskell gets right is the support for parametric polymorphism (generics). You are forbidden from manipulating generic parameters other than passing them around, so there is less room for error. This "theorems for free" property is what makes things like monads tick.
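A small sketch of what that restriction buys you:

```haskell
-- A function of this type cannot inspect or fabricate values of
-- type a; ignoring bottom, it can only be the identity.
myId :: a -> a
myId x = x

-- Similarly, any f :: [a] -> [a] can only rearrange, drop, or
-- duplicate elements; it can never invent new ones.
firstTwo :: [a] -> [a]
firstTwo = take 2

main :: IO ()
main = do
  print (myId (3 :: Int))
  print (firstTwo "hello")
```

The type alone rules out whole classes of bugs, which is the "theorems for free" idea in miniature.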
That said, one thing that is in vogue right now is adding optional type systems and runtime contracts to scripting languages. It's still a bit of a research area, but I think it has a very promising future.
Haskell makes it very safe to change your code, but it adds some initial costs. Scripting languages make it very unsafe to change the code, unless you spend a lot of time writing tests, but then they stop being fast to iterate.
Lately I've been reading this academic paper [1] about End-User Software Engineering. While it's not an easy read, nor a good introduction if you've never read about End User Development, it delves precisely into methods for building tools that allow just that, while adding opportunistic checks to catch bugs.
[1]http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.360...
For really complex things, an implementation in a scripting language can introduce errors that would be caught in a typed language.
So you code your complex thing, and it fails to perform to expectations (but it somehow performs, rather than just dumping a stack trace). Where does the fault lie: in the complex idea itself, or somewhere in the whole of the source code?
My rule of thumb is that I write in Tcl/Python/Bash something that is not longer than 200-300 lines.
I am sorry, Haskell is just a huge roadblock to getting things done in the real world.
In the real world professionals need to juggle all sorts of models. Haskell just says "Fcuk you! It's my way or the gonadway!".
I need to juggle between JSON, matrices, HTML, etc. Each of them has hundreds of exceptions.
You can say my model is imperfect, but guess what buddy, every model is. The only models that will work for all cases are probably Einstein's equations, but even those have exceptions when dealing with black holes!
I tried writing a music library in Haskell, and Haskell makes it really hard to create rules that are exceptions to the model. Apparently the models developed by thousands of years of music theory are not good enough for Haskell!
I cannot even imagine what it must be like to code chemical rules in Haskell that have hundreds of exceptions, or biological models! Oh my!
I am sorry, Haskell just makes computation much more difficult. Apparently mutation is a crime, even though god himself thought it was okay as a rule for everything in the universe.
my anecdotal experience.
((
btw I really like the concepts in Haskell. I read two of its famous books - LYAGH and RWH. And I use Haskell concepts almost daily in production. However, the implementation of Haskell is not really ready for production or useful enough for the average developer. It's also not easy for the average developer to put food on the table using Haskell.
))
Apart from that, I have written many production-grade Haskell applications, and I cannot agree with you that Haskell gets in the way. I admit that when learning Haskell I sometimes had that feeling too, but basically that was just me overcomplicating the problem or approaching it from the wrong perspective. Now that I am past that point, Haskell is super fun to write, very productive, and results in extremely maintainable code - it is just so easy to refactor anything you can imagine - and when it compiles again you are probably good to go!
Perhaps not, but Haskell is good enough for them:
Functional Generation of Harmony and Melody: http://dreixel.net/research/pdf/fghm.pdf

Type-safety. Correctness. Speed.
> I am sorry, Haskell is just a huge roadblock to getting things done in the real world. In the real world professionals need to juggle all sorts of models. Haskell just says "Fcuk you! It's my way or the gonadway!"

- I don't know what this even means.

> I need to juggle between JSON, matrices, HTML, etc. Each of them has hundreds of exceptions.

- Haskell has great libraries for each of these.

> You can say my model is imperfect, but guess what buddy, every model is. The only models that will work for all cases are probably Einstein's equations, but even those have exceptions when dealing with black holes!

- Models?

> I tried writing a music library in Haskell, and Haskell makes it really hard to create rules that are exceptions to the model. Apparently the models developed by thousands of years of music theory are not good enough for Haskell!

- It just doesn't let you do it incorrectly.

> I cannot even imagine what it must be like to code chemical rules in Haskell that have hundreds of exceptions, or biological models! Oh my!

- The more complex the problem, the better suited Haskell is.

> I am sorry, Haskell just makes computation much more difficult. Apparently mutation is a crime, even though god himself thought it was okay as a rule for everything in the universe.

- Immutability doesn't make computation harder.

> btw I really like the concepts in Haskell. I read two of its famous books - LYAGH and RWH. And I use Haskell concepts almost daily in production. However, the implementation of Haskell is not really ready for production or useful enough for the average developer. It's also not easy for the average developer to put food on the table using Haskell.

- You say you like these concepts, but it doesn't sound like you have the slightest idea what those concepts are useful for.
I think it's more accurate to say that Haskell makes you annotate specifically which way you're doing it, and only combine ways when it's okay to do so.
The best a language can do is to fill a niche and to be very good at that particular thing. You should always use the language that is most suited for your problem, whatever that is. But there are many languages that let you get away with being a terrible programmer. Haskell just isn't like that and I want to point out that you can learn a great deal from being forced to think in more abstract ways, just like you did.
In programming, we often encounter the temptation to just mutate everything and use side effects, since it's quite convenient to do so in the short term. In the long term, these things come back and bite us. I argue that it is important for a programmer to have experienced what it is like to simply not have that option. After learning Haskell, I tried to avoid side effects in other languages as much as possible and to use them consciously. That was something I didn't even consider before learning Haskell. And, obviously, the fewer side effects you have, the easier it is to maintain or exchange parts of your program.
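A hedged sketch of that habit in Haskell terms (the names here are made up): keep the logic pure and push effects to a thin outer layer.

```haskell
-- The business logic is an ordinary pure function, so it can be
-- tested without any setup or mocking. (Prices in integer cents.)
applyDiscount :: Int -> Int -> Int
applyDiscount pct price = price * (100 - pct) `div` 100

-- Only the outermost layer performs I/O.
main :: IO ()
main = print (map (applyDiscount 10) [10000, 25000])
```

Swapping out the I/O layer (CLI, web handler, test harness) never touches the pure core, which is what makes the program easy to maintain and refactor.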
I currently use Haskell to calculate probably a hundred analytical derivatives for a large sparse array that is used in a simulator written in Fortran. And it's very good at that. For quickly writing some evaluations of output of this simulator I use Python, because Python is better suited.
Pick a language based on the problem. Don't just use one language because you know it. In my experience, Haskell is very well suited for a lot of mathematical stuff.
------
Off topic:
By the way, Einstein's field equations work for gravity in the classical regime, not in all cases. But still, if you simply want to calculate how far you'll throw a ball, you really should take a model based on Newtonian gravity, or simply assume a constant gravitational force. Planetary movements are also fine with Newtonian gravity (except when you really need accuracy, e.g. for the precession of the perihelion of Mercury). However, GPS calculations are terribly inaccurate without general relativity (time flows differently when you are deep in a gravitational potential well). So, pick your model based on what you want to do, just like you pick your programming language based on what you want to do.
Languages tend to suffer from an Iron Triangle: quick to write, quick execution, quick to learn-- pick 2. Haskell takes a long time to learn but it produces very high-quality executables and, once you know it, it's very productive.
While "quick execution" may seem separate from the type safety which is also a major selling point of Haskell-- and, arguably, a bigger one-- they're actually tightly coupled. Safe code can be optimized more aggressively, and it's often for the sake of performance that unsafe things are done... so the fact that Haskell can be robust and generate fast executables is a major win.
> Haskell just says "Fcuk you ! its my way or the gonadway !"
It doesn't, but I am going to start saying this. Thank you for the inspiration.
> Apparently mutation is a crime
Not so. Every program's main function has the type IO (), which means it is allowed to perform effects, mutation included. You just want as many functions as possible not to involve mutation, because they're easier to reason about. It's a similar principle to dependency injection, but more robust and clear.
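For instance, mutation is perfectly available; it is just confined to IO (or ST) and therefore visible in the types. A minimal sketch:

```haskell
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int) -- a genuine mutable cell
  modifyIORef counter (+ 1)
  modifyIORef counter (+ 1)
  readIORef counter >>= print    -- both mutations happened
```

Any function that touches the IORef must mention IO in its type, so the reader always knows which parts of the program can mutate state.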
> However the implementation of haskell is not really ready for production or useful enough for the average developer.
I disagree. With Clojure and Scala, I've met people who've used them and moved away. Satisfaction rates seem to be about 60% with Scala (that is, 60% of teams or companies that make a major move to Scala are happy) and 90% with Clojure. I've never heard of anyone who's become unhappy with Haskell or rolled back on it.
One of the dangers of using Scala, for example, is that, if your Scala deployment doesn't work out (or is sound but gets blamed by the business for something unrelated), you can get stuck doing Java. Haskell, at least, doesn't have that problem.
I really like the author's suggestion of mentally translating Functor to Mappable. Are there any other synonyms for other Haskell terms of art?
What I'd really like, I suppose, is a complete overhaul of Haskell syntax to modernise and clarify everything: make it use actual words to describe things (foldl vs foldl'? BIG NO). Put in syntax redundancy and visual space to avoid the word soup effect: typing is cheap, understanding is expensive. Normalise and simplify terminology. Fix the semantic warts which make hacks like seq necessary --- if I need to worry about strictness and order of evaluation, then the language is doing lazy wrong. etc.
Basically I want language X such that X:Haskell like Java:K&R C.
This will never happen, of course; the people who have the knowledge to do such a thing won't do it because they are fully indoctrinated into the Haskell Way Of Life...
I agree that a shared vocabulary is important, but standardizing in a way that makes the mathematical writings on the topic more accessible seems like a big win. Moreover, "functor" is a bit more precise than "mappable": a functor is a mapping that preserves structure. In what sense? The math can guide you; in this case, it means the functor laws.
That's not to say that coming up with other associations to help ease understanding is a problem - I have no problem with saying, "for now, think of Functor as 'mappable'". The equivalent for Monad would probably be "flatMappable", and Monoid would be "appendable".
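The three nicknames line up with concrete operations; a small sketch on lists:

```haskell
import Data.Monoid ((<>))

main :: IO ()
main = do
  -- Functor as "mappable"
  print (fmap (+ 1) [1, 2, 3 :: Int])
  -- Monad as "flatMappable": for lists, (>>=) is concatMap
  print ([1, 2, 3 :: Int] >>= \x -> [x, x])
  -- Monoid as "appendable"
  print ([1, 2] <> [3, 4 :: Int])
```

The nicknames are decent training wheels, even if they stop short of expressing the laws each class must satisfy.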
Rather a bit more than that. Eilenberg and Mac Lane's original paper defining the basic notions of category theory was published in 1945! http://www.ams.org/journals/tran/1945-058-00/S0002-9947-1945...
Monad is definitely abnormally difficult to humanize. The trio (T, ∀ a. a -> T a, ∀ a b. (a -> T b) -> (T a -> T b)) is really hard to nail down.
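Spelled out for one concrete T, the trio gets less slippery. Here it is written by hand for T = Maybe (the names unit and bind are chosen to mirror the signatures above; safeDiv is a hypothetical example):

```haskell
-- The trio (T, a -> T a, (a -> T b) -> (T a -> T b)) for T = Maybe.
unit :: a -> Maybe a
unit = Just

bind :: (a -> Maybe b) -> (Maybe a -> Maybe b)
bind f (Just x) = f x
bind _ Nothing  = Nothing

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (bind (safeDiv 10) (unit 2))
  print (bind (safeDiv 10) (unit 0))
```

These are just `return` and a flipped `(>>=)` for Maybe; the hard part of "humanizing" Monad is that the same two shapes cover lists, IO, parsers, and everything else.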
It's also interesting that pg is such an accomplished writer. I think programmers need to treat code and well-written documents as having the same importance.
Just my two thoughts.
You can't blame that one on Haskell or the functional community - the term was already established before the C++ community decided to use it in spite of pre-existing definitions. They even ignored Prolog's pre-existing abuse of the term functor :-)
A few similar terminological accidents of history come to mind, where the original definition of some term is now obscure and a different definition popular:
- POSIX capabilities (as implemented in e.g. Linux), which are a security mechanism that has nothing to do with what security researchers have been calling capabilities since the 1970s
- Microsoft operating systems using the term "Format" for creating a file system, despite the fact that it has been impossible to actually format hard disks at the hardware level since the 1990s
- imperative programming languages abusing the term "function" to mean procedures with side effects
- "thunk" meaning a stub that emulates/bridges different calling conventions, instead of a call-by-name (or lazy) closure
- "Tea Party" used to refer to a fine rock band from Canada
But foldl' is horrible, I agree.
1: fold :: (Foldable t, Monoid m) => t m -> m
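For what it's worth, that signature reads naturally in use:

```haskell
import Data.Foldable (fold)

-- fold :: (Foldable t, Monoid m) => t m -> m
-- collapses a container of monoidal values with mappend.
main :: IO ()
main = do
  print (fold [[1, 2], [3], [4, 5 :: Int]])
  print (fold ["ab", "cd", "ef"])
```

One name, no primes, and the Monoid instance decides what "combining" means.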
Really, this should be considered as C++ perverting the existing category-theory terminology for functors.
> I really like the author's suggestion of mentally translating Functor to Mappable. Are there any other synonyms for other Haskell terms of art?
I think that there is a great deal to be said for leveraging intuition. But whose intuition? Who was Haskell designed by and for when Functor was first defined in the standard library?
> What I'd really like, I suppose, is a complete overhaul of Haskell syntax to modernise and clarify everything: make it use actual words to describe things (foldl vs foldl'? BIG NO).
The intention is admirable, but what does it cost to do, and what is gained by doing it? The implication seems to be that certain functions become immediately intuitive to people (what kind of people?) in certain contexts, and that, possibly by analogy, these contexts can be extended (how far?). I'm not saying that this is a bad goal, but rather than compromise in this manner, the Haskell community has often adopted terminology that is precise instead of intuitive.
Functors could have been Mappables, but how far would that analogy hold, and who is already familiar with maps in this context? Better to use an accurate term, and when someone unfamiliar with it learns it in this context, they will be able to apply it to many other contexts.
> Put in syntax redundancy and visual space to avoid the word soup effect: typing is cheap, understanding is expensive. Normalise and simplify terminology.
On the surface, I've always supported this, if only because I would like to be able to pronounce a combinator when I'm talking to someone. The downside would be the combinatorial explosion of different subsets of names that people would learn for even one library. I'm not sure whether it would be a net plus or minus.
> Fix the semantic warts which make hacks like `seq` necessary --- if I need to worry about strictness and order of evaluation, then the language is doing lazy wrong. etc.
I think you will find that this is an unsolved problem. Better to allow people to be explicit when necessary instead of making the language totally unusable.
> Basically I want language X such that X:Haskell like Java:K&R C.
I think I understand the sentiment, but the analogy feels too shallow. For instance, I would make the following predictions from your analogy - Do they hold?
* Runs on a virtual machine instead of being compiled
* Extraordinary measures taken to make the language and binary formats backwards compatible
* More type-safe
* Fewer primitives
* More automated memory management
> This will never happen, of course; the people who have the knowledge to do such a thing won't do it because they are fully indoctrinated into the Haskell Way Of Life...
Indoctrinated is obviously a loaded term. I think you will find that nearly all Haskell programmers in any position to influence the development of the language are very open-minded when it comes to new ideas. Part of the reason why Haskell looks the way it does today is because it was intended to be a platform for experimentation.
> Really, this should be considered as C++ perverting the existing terminology from category-theory for Functors.
Only if you assume that category theory is the correct source of meaning for such terminology. But Wikipedia, for example, lists functor as being ambiguous - there's the category theory version, and there's the programming version, which is a function object. It lists a bunch of languages (C++, C#, D, Eiffel, Java, JavaScript, Scheme, Lisp, Objective-C, Perl, PHP, PowerShell, Python, and Ruby) that support some variant on this theme. It seems rather arrogant to say that the category theory definition should be the one we mean when discussing a programming language, rather than the one used by a large number of programming languages.
for-expressions in Scala are monadic comprehensions, and implicit parameters are analogous to typeclass constraints.
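The correspondence is easy to see from the Haskell side, where do-notation desugars to (>>=); a small sketch:

```haskell
-- do-notation over the list monad...
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2]
  c <- "ab"
  return (x, c)

-- ...and its desugaring into explicit binds, which is essentially
-- what Scala's for-expressions compile down to (flatMap/map).
pairs' :: [(Int, Char)]
pairs' = [1, 2] >>= \x -> "ab" >>= \c -> return (x, c)

main :: IO ()
main = print (pairs == pairs')
```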
https://github.com/bitemyapp/learnhaskell
The OP's article is still a great way of whetting the appetite and sharing insights, but moving on from there is better facilitated by Chris Allen's recommendations.
There is also the IDE issue: FPComplete has a web-based IDE that is good for beginners, and it is possible to set up Emacs as a very helpful IDE (though this is by no means simple). With Haskell an IDE is really helpful: seeing the errors as you type and resolving them before running into a wall of compile errors.
Anyway: go Haskell. I'm looking forward to a less buggy future :)
;-)
There are Haskell libs, of course, that are used in these environments, and the companies usually end up fixing them such that they're quite good. Most libs used by pandoc are likely to be great, and there are a few dozen others of the same caliber (it's useful to search around and see what libs are used by the other few companies using Haskell, since those have likely been vetted as well).
The other largest issue with actually using Haskell is that all the knowledge your ops team has of running a production system is essentially null and void. All your existing knowledge of how to fix performance issues: null and void. Learning Haskell and becoming productive in it almost starts to look like the easy part compared to effectively running a Haskell system (dealing with space leaks, memory fragmentation issues, and GHC tuning for stack sizes, allocations, etc.).
Also, a lot of the really common libraries (text, attoparsec for parsing, aeson, networking, etc.) are highly tuned for low latency and performance. Many use compiler rewrite rules and a technique called stream fusion to compile away a lot of the intermediate code. Aggressive inlining can also be applied.
I'm sure there are some memory-heavy or poorly optimized libraries out there but that's certainly not the norm. I've had no problems with the libraries off-the-rack.
https://github.com/kazu-yamamoto/http2/commit/0a3b03a22df1ca...
The stream fusion stuff is sweet, but not exactly unique to Haskell, since any language with good iterator/generator abstractions has similar constant-space memory characteristics.
There should be warnings all over the Prelude and basic libraries documentation.
The author helped me narrow it down to some issues with how GHC by default allocates stack space that is rarely enough, and once it starts growing the stack, the RAM per connection gets pretty ridiculous. Using a higher default stack size helped remedy this somewhat, but the per-connection RAM cost was still way higher than in Go or Python, which I was comparing against.
So... separate project: I write a load tester in Haskell for a websocket server. I need to issue some HTTP requests, and I see Bryan O'Sullivan made a nice library, wreq. I use it as described and quickly discover it uses ridiculous amounts of memory, because the docs don't mention that you should always re-use the Session (the underlying http-client emphasizes the importance of re-using the Manager): https://github.com/bos/wreq/issues/17
(I am sorry that this issue probably came off as a bit whiny; I was very frustrated that such a detail was omitted from the docs.)
So, my program is working pretty nicely, until I discover that it's not actually sending multiple HTTP requests at once (even though the underlying http-client lib has a thread-safe TCP connection pool). After browsing some code, I see the problem: https://github.com/bos/wreq/issues/57
The solution that was implemented so far seems equally weird to me: letting different requests stomp on the Session's cookie jar. I forked it so that multiple wreq Sessions could share the same Manager, and now it finally works as it should.
I won't even go into how some of these libs have occasionally wanted conflicting dependencies, which leads into its own "cabal hell" (googling for that is entertaining, unless it's happening to you).
I've only been writing Haskell for a bit over a year now, but every time I write code with it, despite my love of the language, the libraries and runtime end up frustrating me.
Speaking of which, I found "Functional Programming in Scala" excellent for teaching someone with an imperative background how to "think functionally". Monads are explained in an easy to understand way. I can imagine that without reading that book I'd have been looking at a couple of years of coding before I started to see the abstractions, etc. By contrast "Learn You a Haskell" lost me part way through both times I tried to read it...
Also, companies actively using and recruiting for Haskell are now starting to join the Commercial Haskell SIG, so if you want to poke around, you can find them here: https://github.com/commercialhaskell/commercialhaskell#readm...
I wish there was a language or library that was willing to take the Haskell functionality and just give it all names like this.
{-# LANGUAGE ConstraintKinds #-} -- needed: Functor and Monad are classes, not plain types
type Mappable = Functor
type NotScaryFluffyThing = Monad
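A hedged sketch of such a synonym in use (since Functor is a type class rather than a type, GHC needs the ConstraintKinds extension for this; the function names are made up):

```haskell
{-# LANGUAGE ConstraintKinds #-}

type Mappable = Functor

-- The friendlier name works anywhere the class name would.
double :: Mappable f => f Int -> f Int
double = fmap (* 2)

main :: IO ()
main = do
  print (double [1, 2, 3])
  print (double (Right 10 :: Either String Int))
```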
This makes Mappable a synonym for Functor, and likewise for Monad. (This is not necessarily a good idea; it goes against the principle of least surprise, but it'll work.)

> I wish there was a language or library that was willing to take the Haskell functionality and just give it all names like this.
I don't think this helps with understanding that - for example - Either is also a functor.

Two things, in particular, stand out for me when thinking about Haskell this way (as a "tool for thinking" language).
First, unless you're a mathematician, you probably haven't thought very deeply about algebraic data types, and how useful and expressive it is to build up a program representation from a collection of parameterized types. The article touches on this a little bit in noting that Haskell teaches you to think about data types first.
But it's more than just "data first," for me, at least. Grokking Haskell's type system changed how I think about object-oriented programming. Classes in, say, Java or C++ or Python are a sort of weak-sauce version of parameterized abstract types. It's kind of mind-blowing to make that connection and to see how much more to it there is.
Second, monads are a really, really powerful way of thinking about the general idea of control flow. Again, the most useful analogy might be to object-oriented programming. When you first learn to think with objects, you gain a flexible and useful way of thinking about encapsulation. When you learn to think with monads, you gain a flexible and useful way of thinking about execution sequencing: threads, coroutines, try/catch, generators, continuations -- the whole concurrency bestiary.
I think monads are hard for most of us to wrap our heads around because the languages we are accustomed to are so static in terms of their control flow models, and so similar. We're used to thinking about control flow in a very particular way, so popping up a meta-level feels crazy and confusing. But it's worth it.
For example, if you do much JavaScript programming, and are ever frustrated translating between callbacks and promises, having a little bit of Haskell in your toolkit gives you some mental leverage for thinking about how those two abstractions relate to each other.
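For instance, promise-style chaining of possibly-failing steps is just (>>=) in the Maybe monad; a hypothetical sketch (the lookup functions are made up):

```haskell
-- Each step may fail; (>>=) short-circuits on the first Nothing,
-- much like chained .then() calls on a promise that can reject.
lookupUser :: String -> Maybe Int
lookupUser "alice" = Just 1
lookupUser _       = Nothing

lookupScore :: Int -> Maybe Int
lookupScore 1 = Just 42
lookupScore _ = Nothing

main :: IO ()
main = do
  print (lookupUser "alice" >>= lookupScore)
  print (lookupUser "bob"   >>= lookupScore)
```

Swap Maybe for a type carrying an error message or an asynchronous result, and the same chaining pattern covers try/catch and promises respectively.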
Some notable ones include:
* Facebook Haxl, an abstraction around remote data access [1]
* Microsoft Bond, a cross-platform framework for working with schematized data [2]
* Google Ganeti, a cluster virtual server management tool [3]
* Intel Haskell research compiler, a custom Haskell compiler used internally at Intel Labs [4]
---
[0]: https://wiki.haskell.org/Haskell_in_industry
[1]: https://code.facebook.com/projects/854888367872565/haxl/
[2]: https://github.com/Microsoft/bond
[3]: https://code.google.com/p/ganeti/
[4]: http://www.leafpetersen.com/leaf/publications/hs2013/hrc-pap...
1. It is just a small team or even one person using it and they're doing it because they really want to use that technology badly.
2. The project is some side research thing, or so trivially small that it could have been done using any technology.
3. It is actually just a tool or sub-system of the main system that was low risk enough.
4. The project is no longer operational, if it ever made it to that stage.
Presumably I use server-side applications written in Java, but I've no way of telling. If server-side counts then most people with computers indirectly use Haskell via Facebook's Haxl project.
[1] http://en.wikipedia.org/wiki/Standard_Chartered [2] https://donsbot.wordpress.com/2014/08/17/haskell-development...
As a practical note, the fact that educated people use it is an indicator that it is useful.
Possibly. It could also be that they use it because it's interesting and informative rather than useful per se.
It could also be that it's useful in particular contexts in the same way that Feynman diagrams are useful.
[0] http://yaxu.org/tidal/ [1] http://haskell.cs.yale.edu/euterpea/
it's comforting -for me- to see that almost everybody is going through the same phases while learning haskell. i believe that should say something to haskell community.
i've recently started learning haskell. it's been 25 days. (so says cabal) i was reading a book and struggling to build a web app. (why web app?) i was so close to quitting. later i decided this is not the way to learn haskell. one simply does not read the book and try stuff. that was not enough. at least for me. so i changed my method.
my new method of learning haskell is:
- read the book.
- find one or more mentors (i have two) that are really good at haskell and can answer all kinds of impatient questions you have.
- watch people doing and explaining haskell stuff.
- join #haskell-beginners on freenode and ask your questions.
- create something small first that you can turn into something big later.
online haskell resources are surprisingly deficient; however, the #haskell-beginners community is awesome when it comes to helping n00bs like me, and "learn you a haskell" is an excellent book.
one more resource that i use as reference material is the "haskell from scratch" [0] screencast series by chris forno (@jekor).
before you begin, make sure you checkout chris allen's (@bitemyapp) "learn haskell" [1] guide.
we'll get there people, we'll get there. :)
[0] https://www.youtube.com/playlist?list=PLxj9UAX4Em-Ij4TKwKvo-...
This definitely helped me too. I started out looking at functions and monads as two 'types' of function that could only be mixed in certain ways, and didn't bother with the gory details at first. IME, it's only when you experience monads and their effects that the gory details make perfect sense.
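A minimal sketch of that "two types of function" distinction (`double` and `half` are made-up examples): `fmap` lifts a pure function over a monadic result, while `>>=` chains the monadic ones together.

```haskell
-- A pure function:
double :: Int -> Int
double = (* 2)

-- A Maybe-returning ("monadic") function:
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (fmap double (half 10))     -- lift pure over monadic: Just 10
  print (half 20 >>= half)          -- chain monadic steps:    Just 5
  print (half 20 >>= half >>= half) -- an odd result stops the chain: Nothing
```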
- Yesod
- Snap
- Happstack
- Scotty
- Spock
Right now I'm learning Yesod, but I don't feel confident that's really what I want. Which of these are closest to Rails? Which are closest to Sinatra?
Scotty would be closer to Sinatra and Flask. Spock is similar to Scotty but comes with a few more built-in features like type-safe routing, sessions, etc.
I recommend Yesod but there are certainly some advanced metaprogramming features (routing, models).
Have you checked out the Yesod scaffold site? https://github.com/yesodweb/yesod-scaffold
Scotty and Spock are both Sinatra-like.
There's a lot of good info here: https://wiki.haskell.org/Web/Frameworks
I don't mean to criticize or anything, just mean to understand. There are so many people who are very passionate about Haskell that it makes me think that it must be worth while to learn. But I just don't get how it would be useful for things that I do most with programming: writing Web/Desktop/Mobile apps in Swift, Python, and PHP.
Also, can you recommend a good book or resource that uses real world examples to teach Haskell?
Out of the things you mentioned, server-side programming is where Haskell fits best. Server-side programming is more amenable to unusual languages because you get to choose your own platform, and there are plenty of mature web frameworks you can use (too many of them, I might say). It might be worth a try to experiment with writing code in a more type-safe language. Even simple things like algebraic data types are things I miss a lot when working in other languages.
Yes, and yes.
> Also, can you recommend a good book or resource that uses real world examples to teach Haskell?
The obvious thing to recommend here is Real World Haskell [0], which directly addresses some of the areas you raise.
Also, Write Yourself a Scheme in 48 Hours [1] is more in-depth and real-world than most tutorials (writing a Scheme interpreter isn't exactly a common real-world application, but it's more real-world in scale than most tutorials address, and it uses a lot of things that are of concern in many real-world apps).
[0] http://book.realworldhaskell.org/read/
[1] http://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_H...
Haskell is a general purpose programming language.
RWH is well-written and covers some real-world tasks, but some of its examples are outdated enough that they don't even compile anymore (at least, I encountered that scenario a year ago or so) and Haskellers will frequently warn people that parts of it are out of date (see elsewhere in these comments).
I actually think one of the shortcomings of Haskell's approach to new developers is that it _is_ very much a general purpose programming language and sold as such. Other languages have extremely popular frameworks or applications which serve to attract newcomers. People teach Swift or Objective-C to write iOS apps, Java for Android apps, JavaScript to do web apps, Ruby to write web backends in Rails, C# to write games in Unity... hell, people learn Java to make Minecraft mods. The closest thing I can think of for Haskell is Xmonad, which doesn't exactly have mass appeal.
Someone else suggested "Write Yourself A Scheme" as a good practical introduction, and that in itself says a lot about who Haskell appeals to -- people who are interested in programming languages. The MLs and Haskell remind me of Brian Eno's line about how the first Velvet Underground album only sold 30,000 copies, but "everyone who bought one of those 30,000 copies started a band".
> "We store memories by attaching them to previously made memories, so there is going to be a tendency for your brain to just shut off if too many of these new, heavy words show up in a sentence or paragraph."
That has always been my belief. I don't have anything else to back it up, only that my own speed of learning seems to increase for new subjects with time. The more I know, the easier new concepts seem. Very few things are completely new, unless I start delving into subjects I'm completely unfamiliar with. Say, Quantum Mechanics.
With most programming languages, I (and probably many here) can learn enough to start creating something useful in a weekend. Haskell always gave me trouble because it seems to take longer than that.
Then again, so does Prolog. I'll try yet again.
I'm missing Visual Studio. Are there any really good Haskell IDEs out there? For example, ones which allow debugging.
Vim + ghcmod + syntastic has a useful subset of the functionality of an IDE.
A minor wording recommendation:
> better in every measure(lower complexity, speed, readability, extensibility)
Apart from a missing space before the parenthesis, this reads like there was lower complexity, lower speed ...
http://www.reddit.com/r/haskell/comments/33mnlc/becoming_pro...
there are only two hard problems in CS: cache invalidation and naming things - phil karlton
An RSS feed would be great.
Also, does anyone know what colorscheme this is using for the code samples? Looks nice.