The reason they "find that the compiler feels like an annoyance" is that their first exposure to Java / C++ is in school, where they have an assignment due tonight and the compiler won't stop spewing pages of errors about std::__1::basic_string<char, std::__1::allocator<char>> and what the fuck is that shit, I just want to make games !!11!1!
In contrast Haskell is often self-taught which gives a very different set of incentives and motivations.
As a mostly C++ programmer, making sure that I get compiler errors as often as possible, by encoding most preconditions in the type system, is one of the most important parts of my job. It also makes the language very easy to use with an IDE that lets you click on an error and jump to the right place in the code.
Haskell has such situations as well, but they're usually far less verbose. Failing to get something to typecheck because you wrote down something incompatible or uninferable still sucks. But far less than in C++.
For what it's worth, I believe modern C++ compilers give much better error messages than older ones.
In Python, this became easier and one could focus on the data transformations, thinking about the code a level higher.
But then I learned a bit of Haskell 5 years ago, and with type inference, this problem goes away. So it convinced me of the benefits of static typing again. (Although I still feel the most productive in Python; its library APIs are IMHO unmatched in any language. But Haskell is catching up.)
How long ago was this? C++ has officially had tuples and type inference for ten-plus years now; GCC 4.4 had them in 2009.
Nope you don't, that's what typedefs are for. They're underrated for sure though. People don't use them nearly as much as they should. They're incredibly valuable for avoiding precisely this problem.
Well, I can't imagine how much more annoyed they'd be when using an interpreted language which lets the code run just fine but then fails at runtime in mysterious and subtle ways, requiring hours of manually scanning through code and print statements, when the compiler would have caught a decent subset of those errors with helpful messages about the exact line they need to fix.
For the amount people care about it, there isn't much evidence in either direction. And most studies that do exist are limited to small programs typically written by novices. Yale's Singapore campus is going to be running two instances of the same course in parallel soon, one in Python and one in OCaml. Perhaps that will provide a datapoint about learning the languages, but maybe there will just be a lot of selection bias, or library or environment or teacher differences. And how easy it is to learn a language probably isn't the main datapoint to care about anyway.
You cannot get mad at errors you don't know about.
Also letting the user find and report the error allows you to mark your tickets "done" and move on, which makes management happy.
Ex: "Oh, cannot access property x of undefined? Something must be going wrong in y object"
Python definitely feels a lot more helpful than JS though. Can't speak for other interpreted languages like Ruby.
But I've worked with people who saw all compiler errors as things of the devil and wanted to defer as much as possible to runtime.
When selecting a language for a recent project that needed to run correctly, without extensive debugging (which would be hard to simulate: too many states and interactions), I had a couple of important criteria:
0) checked static typing
1) ADTs (that are reasonably easy to use, read and write)
2) pattern matching (no way I'll get all the if/else right)
3) reasonably easy to write static const (pure functional) code
4) memory-safe
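As a sketch of what criteria 1) and 2) buy you (my own toy example, not from the project): with `-Wincomplete-patterns` the compiler flags any case an if/else chain would have silently missed.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- an ADT enumerates the states; pattern matching must cover them all
data ConnState = Idle | Connecting | Connected | Failed String

describe :: ConnState -> String
describe Idle         = "idle"
describe Connecting   = "connecting"
describe Connected    = "connected"
describe (Failed err) = "failed: " ++ err
-- adding a new constructor makes the compiler warn right here

main :: IO ()
main = putStrLn (describe (Failed "timeout"))
```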
I've considered Rust, but settled on Haskell, as I needed it fast and I know Haskell. While technically any Turing-complete language would work, I don't think C++ would be a fit for must-work code, even disregarding (4).
While I haven't used C++ in a while, it seems to me, encoding the constraints would be 3-10x as much code, or even more, with many checks/cracks left, and a lot of readability gone.
Clean and correct functional haskell code took a bit longer to write than say happy-path imperative python, but after fixing 2 or so bugs that manifested on pretty much the first (partial) use (like incorrect "<" vs. ">" and a bad constant), it has been running happily ever since. I didn't even bother simulating a full system configuration before a real-world customer acceptance test, because components worked on 1-2 inputs I tried, setting up a system would take a couple of hours and I couldn't think of reasonable failure scenarios. I haven't experienced similar correctness in other languages.
I know not much of Java, but my sentiments concerning C++ are even worse.
I do not regularly program in Haskell and far more often in Rust.
For what it's worth, C++ has HKTs in the form of template template parameters, making it possible to write, e.g., monad transformers, which cannot be done in Rust, last I checked. Now as for whether you'd actually want to do this in a production codebase...
It is all for good reason, you can't ditch the GC and have control over the memory structure without the compiler complaining about details here and there. But fixing those strings versus slices and iterator type mistakes is really annoying.
Compilers do not offer compensating benefits, like catching bugs or ensuring behavioral correctness, that justify all the extra rigidity, slowness, and especially liability of all the extra code (even in Haskell).
Fast forward a decade and I'm evangelizing statically typed FP at conferences. The value of the compiler is redeemed after self-teaching and learning the "Whys".
Code that always upsets me is something like `Map<String, Object>` where a concrete type would work so much better (and faster).
Using a statically typed language, the most important thing you can do is USE IT. Let the type system save you from problems. Encode whatever you can in the types.
Bypassing it by casting always causes headaches.
This is the key bit. This is called static analysis; you don't need a type system in your language to do this, and you don't need to force it to happen at compile time.
Most developers appear to conflate the two. Decoupling static analysis from type systems would benefit most workflows.
What assembly instructions should the compiler emit if you write sum ["foo", "bar"] ?
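For what it's worth, the answer is: none. GHC rejects the program before emitting any instructions; the error text below is paraphrased from memory:

```haskell
main :: IO ()
main = print (sum [1, 2, 3 :: Int])  -- fine: sum needs a Num instance
-- sum ["foo", "bar"] does not compile; GHC reports something like:
--   No instance for (Num [Char]) arising from a use of ‘sum’
```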
The reason you've chosen Haskell and, by the way, also the TLD ".system", is that you've constructed your identity in such a way that "advanced language with a steep learning curve" is something that fits.
There's absolutely nothing wrong with that. It's a bit emotional, but those exist for a reason. If you consider that idea libellous, you can always cite PR motives for plausible deniability and point at this HN story as evidence.
[0]: Elm, by the way, strikes me as borderline with regards to the bounds of reasonableness, considering the state that community is in. As such, it's more evidence your left (right?) metaphorical hemisphere may have had a finger on the scale.
I think you're projecting too much on them. They found Haskell performant and are promoting it; I don't see any problem with that. How is it any different from all the Rust evangelism HN sees all the time?
I do believe, very very mildly, that there's a strain of thinking among the tech crowd that glorifies this Spock-like emotional detachment and I'm-so-rational mindset. Two issues, actually:
First, such a mindset is neither possible nor would it do much good. There are stroke victims who survive with full mental capabilities with regards to logical reasoning but entirely devoid of emotions. These patients can still ace the SAT, but they fail spectacularly at daily life. As it turns out, you just cannot decide on a doctor's appointment without emotions. They'll spend hours vacillating between two good choices. Emotions are incredibly well-tuned heuristics that cut your mental load down to manageable levels. Like any part of being human, they are sometimes ill-fitting for modern times: there's absolutely no reason to make me flinch when I spill hot coffee over my hand. But mostly they just work.
Second, it's slightly annoying when people deny that they are subject to emotions, and it gets up to Ryanair-levels of discomfort when they announce that it makes them superior to all those emotional social science majors, illogical politicians, women "throwing a fit", superficial designers etc. If I got a KDE theme every time someone accused Apple users of being blinded by eye candy, I'd still be left with only half of what Kubuntu ships.
But Rust is cool.
It's about quality of life and picking the right tool for the job. There are some problems I can solve in Haskell, that I simply could not solve in Java, it would be too hard and too much work. Java is a simple language and therefore it's much easier to reason about the performance and space usage. There are thus many problems where Java would be a better fit.
Even if Haskell was objectively the 'best' language, it'd likely be a poor choice for most teams simply due to familiarity and developer speed.
I'd be extremely hesitant to hire the services of this company entirely because they use Haskell and it'd be a nightmare to maintain after their contract.
for(var post: postList) renderPost(post);
Or
postList.stream().forEach(this::renderPost);
Here, I've fixed the title
This may or may not be the right business strategy. However, if everyone went with the notion of only using popular languages because it's easier to hire for them, by now we'd be using JS as a backend language. Oh, wait...
https://iohk.io/en/team/#team=development
Now if you needed to hire 1000 developers, then you'd have more of a problem, and perhaps Haskell wouldn't be the right approach. But in my experience, Haskell engineers get a multiplier effect over the average non-Haskell engineer because the code is more concise and the average engineer skill is higher. I don't know what the multiplier is, but it's definitely non-zero and positive. This not only pays the obvious direct benefits, but also reduces the communication overhead of your team and the number of managers you need to hire.
It is quite difficult to get a first Haskell job though, because they mostly require production Haskell experience, so there's your chicken and egg problem.
I also believe that to be the case as an employee. Ie, being a generalist is probably -EV for your career but it feels safer so it's kind of a contrarian position to say "be a specialist".
Probably under 100, but not sure though.
I don't think this learning-curve/hiring thing makes sense. Haskell can help attract smart devs, and it makes refactoring so easy that the increased learning curve is offset.
I'd say it's harder to outsource, and there may not be as many libraries. Compare to Ruby/Rails for web, or Python for ML, and you have to write more by yourself in Haskell.
I didn't find the comment rude.
My experience with onboarding non-Haskellers has been pretty good, though I would certainly admit the argument that they were unusually talented individuals.
1. When your company uses an esoteric language, your job offers usually say "we can teach you the language" instead of "we require x years experience in the language and y, z libraries". That is a really great filter for people you want to work with, as they have to accept they will be learning from the start.
2. There is a smaller number of engineers qualified, but the number of companies they can apply to is even more reduced. I think all in all you're winning in this equation as a hiring manager.
3. Your hiring process gets cheaper, since it moves from having to filter candidates heavily to focusing on matching the right person to the team. People who are interested in non-mainstream languages already fit some of the criteria you would otherwise have to select for.
4. As the number of people you can easily hire is smaller, and onboarding even a talented person without experience in the tech takes time, you get much better at selecting the problems you want to tackle, and you grow at a more reasonable pace. That has great benefits for your organisation.
I know for sure that if I am ever starting my own startup, I will use no mainstream technology. If I want JVM, I will use Clojure instead of Java, if I want web, I will use Elixir instead of Rails/Django, if I go low level, it's Rust, no C++ etc.
It's counterintuitive, but it works better. People think about size of the whole market, but they should think about percentage of candidates applying being fit for the job.
Edit: removed ` characters.
Haskell has the type system Java wishes it did, and half of the reason languages like Rust are interesting is because they've learned from Haskell (which is the point of Haskell, a research language, though it happens to also be a pretty darn good language for building practical things). Simple basic data types like `Maybe t` and `Either l r` are such a revelation that you wonder how you lived without them.
I've shared this anecdote before, but Option<T> in Java is an example of the blub paradox[0], and discovering Haskell and finding out about Algebraic Data Types (ADTs) and the Maybe type cured my blub. The crux was this: Option<T> seems to "infect" any codebase you use it on, because you realize that anything can fail and be null. Living in Java land made it seem like Option<T> was out of place, but it's actually Option<T> that is right: if you allow nullable types in your code base, or you do operations that can fail, properly representing that failure is the right decision.
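A tiny sketch of what made this click for me (my own example, not from any particular codebase): the possibility of failure lives in the type, not in a comment, so callers can't forget to handle it.

```haskell
import Text.Read (readMaybe)

-- readMaybe returns Maybe Int: failure is part of the signature
parseAge :: String -> Maybe Int
parseAge = readMaybe

main :: IO ()
main = do
  print (parseAge "42")    -- Just 42
  print (parseAge "oops")  -- Nothing
```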
Without overstating it, some of the best features of Haskell are:
- Compile time type checking (this cannot be overstated) and non-nullable types
- Expressive and simple data type creation via `data`, `type`
- An excellent system for attaching functionality and composing functionality to data types via `typeclass`es and `Constraint`s.
- An emphasis on errors as values (unfortunately exceptions are in the language too, but you can't really stop them from existing)
- Forced delineation between code with side-effects and code without (this results in some complexity if you come from a world with side-effects everywhere and no control)
- Fantastic runtime system with good support for concurrency, parallelism and shared memory management.
- Very easy refactoring (if you're not adding any complexity/abstraction) because you can just change what you want and let the compiler guide you the rest of the way.
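To illustrate the typeclass/Constraint point above, a minimal sketch (names are my own invention):

```haskell
-- a typeclass attaches functionality to a data type after the fact
class Describable a where
  describe :: a -> String

data Light = Red | Green

instance Describable Light where
  describe Red   = "stop"
  describe Green = "go"

-- a Constraint lets generic code require that functionality
announce :: Describable a => a -> String
announce x = "the light says: " ++ describe x

main :: IO ()
main = putStrLn (announce Green)
```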
Haskell has its warts (hard-to-debug space leaks, relatively small ecosystem, the ability to drown yourself and your team in abstraction), but it's just about the most production-ready research language I've seen.
Whether or not you like it, the likelihood it's already improved your life in whatever language you're using is very high.
[0] http://paulgraham.com/avg.html
(Scroll to "The Blub Paradox", about a third of the way down.)
Servant is one of the best, if not the best, examples of how Haskell's higher-level abstractions can benefit practical bread-and-butter programming tasks (which making APIs is these days), and of where type safety is a huge benefit.
Writing servant handlers can also feel mostly imperative depending on how much you use `do`.
Sure, Haskell's type system is nicer, and the error messages are, I'm sure, more helpful (although the Java/C++ ones make sense when you learn what they mean).
Here is an example of domain modelling in Haskell:
type Dollars = Int
data CustomerInvoice = CustomerInvoice
{ invoiceNumber :: Int
, amountDue :: Dollars
, tax :: Dollars
, billableItems :: [String]
, status :: InvoiceStatus
, createdAt :: UTCTime
, dueDate :: Day
}
data InvoiceStatus
= Issued
| Paid
| Canceled
The syntax is nice (ish, CustomerInvoice is a bit ugly), and terse. But I've seen this a million times in Java, and that works fine. Quote:

> Modeling domain rules in the type system like this (e.g. the status of an invoice is either Issued, Paid, or Canceled) results in these rules getting enforced at compile time, as described in the earlier section on static typing. This is a much stronger set of guarantees than encoding similar rules in class methods, as one might do in an object oriented language that does not have sum types. With the type above, it becomes impossible to define a CustomerInvoice that doesn’t have an amount due, for example. It’s also impossible to define an InvoiceStatus that is anything other than one of the three aforementioned values.

All of this is table stakes in Java/C++ too. Other brief rebuttals:

> Haskell has a large number of mature, high-quality libraries

No way this beats Java. I don't know the C++ ecosystem well, but I assume C++ wins too.

> Haskell enables domain-specific languages, which foster expressiveness and reduce boilerplate

Be careful what you wish for.

> Haskell has a large community filled with smart and friendly people

I think at the end of the day Haskell just feels fun to write, if you're the sort of person that likes it. That's fine. But I don't think going all-in on Haskell is the right call for most companies.

Perhaps when Java gets record types, sealed classes, pattern matching and other features. But right now, domain modelling in Java (and C++) is really painful compared to a higher-level language like Haskell.
To be fair, the context of that quote is:
> Many programmers encounter statically typed languages like Java or C++ and find that the compiler feels like an annoyance.
I think this is a fair statement, although it would also be fair to say "many Java and C++ programmers find their compiler errors useful". I'd guess these two camps would remain mostly the same when using Haskell.
You're right that most of the article is roughly comparing a good example of static types (Haskell) against a bad example of dynamic types (PHP).
> I've seen this a million times in Java, and that works fine.
My biggest problem with Java (and the JVM) is the existence of `null`: it completely undermines type signatures. In the above Haskell example we "know" (see caveat below) that a `myInvoice :: CustomerInvoice` is a `CustomerInvoice`, whilst in Java a `CustomerInvoice myInvoice` might be a `CustomerInvoice` or it might be `null`; likewise `myInvoice.billableItems` is a `[String]` in Haskell, whilst in Java it might be a `List<String>` or it might be `null`; in the former case, each element might be a `String` or it might be `null`.
Caveat: Haskell values are lazy by default, so errors may only get triggered when inspecting some deeply nested value; in that sense we might say that a Haskell expression of type `T` might be a `T` or might be an error (known as "bottom"). We certainly need to keep that in mind, but one nice thing about bottom is that it can't affect the behaviour of a pure function (we can't branch on it). In that sense returning a value containing errors, which are later triggered, is practically equivalent to triggering the error up-front (pure expressions have no inherent notion of "time", unlike imperative sequences of instructions). The interesting difference is that we can also use such values without triggering the errors, iff the erroneous part is irrelevant to our result ;)
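A tiny demonstration of that caveat (my own sketch): `length` only forces the list's spine, so the bottom hiding inside never fires.

```haskell
-- the error ("bottom") buried in the list is never evaluated,
-- because length never looks at the elements themselves
main :: IO ()
main = print (length [1 :: Int, error "boom", 3])  -- prints 3
```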
Having all types nullable by default makes 'proper' null-checking incredibly verbose, not to mention tricky; the alternative is to cross our fingers and hope our assumptions are right. What makes this frustrating is that such checks are exactly the sort of thing that computers can help us with, and type systems are particularly well suited for! Hence the presence of `null` cripples Java's type system in a way which can't be worked around (without essentially layering a separate, null-less type system on top to check for nulls!).
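For contrast, a minimal sketch (mine, not from this thread) of how a Maybe-based API makes the "might be missing" case impossible to ignore:

```haskell
-- lookup returns Maybe String, so "might be missing" is visible
-- in the type and the compiler insists we handle both cases
greeting :: [(Int, String)] -> Int -> String
greeting names key =
  case lookup key names of
    Just name -> "hello, " ++ name
    Nothing   -> "hello, stranger"

main :: IO ()
main = putStrLn (greeting [(1, "alice")] 2)
```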
Also note that the presence of null causes every domain model to collapse. Let's say we want to write a conversion method, e.g. from `CustomerInvoice` to `Document`, and we don't want to worry so much about `null`: hence we write in our javadoc that as long as the given CustomerInvoice contains no null values, this method will never return null; let's say we throw a NullPointerException in those invalid cases. Great, our users now have fewer edge-cases to worry about; they don't have to check for null, and they don't have to catch NullPointerException if their input is correct.
Except, once we start implementing our method we find it needs to call some other helper method, e.g. `statusToTable`; if that method returns a null result, we would be unable to construct the `Document` value that we promised. What can we do in that case? We promised we wouldn't return `null`, so maybe we throw a NullPointerException? If we do that, those calling our method might get a NullPointerException even if they gave valid input! We might throw a different exception instead, like AssertionError, but the effect would be the same. Hence we can't guarantee to our callers that we don't return null (or some equivalent that they must deal with, like NullPointerException or AssertionError); that, in turn, means they can't provide such guarantees to their callers, and so on. At any point, we might get a null (or equivalent exception), and the whole house of cards comes crashing down.
Maybe we trust that helper method doesn't return null, but how can we know? Maybe we check its documentation or source code to see whether it might return null; but we find that it calls other methods, so we have to check those, and so on. If we do this, we would also have to pin our requirements to the precise versions of the libraries that we checked. In case you couldn't tell, that process is essentially manual type checking (for a very simple system with two types: 'Null' and 'AnythingElse').
Of course, this is sometimes inherent to the problem, e.g. if a HashMap doesn't contain the entry we need then there's nothing we can do. However, most code doesn't have such constraints (except perhaps out-of-memory), but there's no way to tell that to Java (in mathematical language, Java weakens every statement to admit trivial proofs).
"Haskell's type system is more expressive than X and Y" is a strong claim and can be proven by showing that X and Y need to compose run-time workarounds for a given property that can be checked statically in Haskell.
"Functional Programming reduces the surface area for Bugs" is a strong claim and can be proven by showing that a single mutable reference strictly introduces a set of possible bugs that were not possible before and that these bugs cannot be checked in language X.
It is kind of annoying that these discussions often seem superficial, cultural or partisan, when in fact they could be much more rigorous.
Now, if we assume or find these claims to be true, we can finally proceed with the real discussion: what are the costs and benefits of these properties in a given setting?
Is there really a formal proof or Software Engineering paper that proves this?
I was told this in my FP class in university, but it was pretty much sold to me as gospel.
In practice I agree with the statement - I certainly feel there's an inherent "cleanliness" to FP.
But I also feel that the argument is not only about program correctness; many people ultimately conflate it with developer productivity. And here's where I feel that things fall apart a bit: I feel as if sometimes it's much quicker to do things with state, so maybe the time you save debugging is time you add elsewhere?
Of course there are other parts of Scala, Haskell and similar that require more mental gymnastics than I'd like, such as composing asynchronous operations; flatMap and monad transformers may be "elegant" once you really understand them but damn is async/await easier to just write and move on with your life.
In the end the insecurities and failures of snarky commentators don't matter to others who are in the arena solving real problems in production with an unsexy language.
Really? You're comparing apples with oranges? Why not, if you're at the stage of comparing compiled versus interpreted languages, compare it with Java too?
Now, do the same comparison versus C++, let's see who wins when talking about speed.
Sounds a lot like Haskell with Prolog...
“Curry is a declarative multi-paradigm programming language which combines in a seamless way features from functional programming (nested expressions, higher-order functions, strong typing, lazy evaluation) and logic programming (non-determinism, built-in search, free variables, partial data structures). Compared to the single programming paradigms, Curry provides additional features, like optimal evaluation for logic-oriented computations and flexible, non-deterministic pattern matching with user-defined functions.”
http://chriswarbo.net/git/warbo-packages/git/branches/master...
You might like the Mercury language too: https://mercurylang.org/
Here is the paper by Kiselyov: http://okmij.org/ftp/papers/LogicT.pdf
And then when someone points out that nobody knows who they are or what they've built, we get commenters talking about how company X, Y, and Z are also using Haskell. And those claims also come up short... most of them can't say where or how it is being used, because they don't know... just that at some point in the past, emails were exchanged between someone@bignamecompany.com and someone@haskell.org, and now there is a piece of copy on the Haskell website that disingenuously claims BigNameCompany is powered by Haskell! Who cares how pervasively it is used... if someone writes a config parser in Haskell, all of a sudden we can claim that BigNameCompany would fall flat on its face if Haskell wasn't there protecting it.
Come on. Nobody cares that Haskell is your secret weapon if you've never overcome an opponent with it. Or built an entire profitable company on its back. All these types of posts do is fake an authority so you can jump straight to your fallacious appeal to authority.
If you want to argue the merits of your favorite language, then do it. Don't make us sit through an argument about how your language makes you special when you aren't even noteworthy enough for a 10 sentence Wikipedia blurb. There are a lot of valid and powerful technical arguments in this article, but they're ruined by framing them all around the premise that we care about how it makes you and your startup special.
Now, the lack of skilled haskell programmers on the other hand, that's a pretty scary proposition if you're starting a company and may find yourself riding on a rocket, needing as many able hands as you can possibly find.
But occasionally it pays off in a really big way.
Like WhatsApp cashing out for $19 billion, on a product they never could have scaled with so few engineers without Erlang.
Like Viaweb and Common Lisp, where Paul Graham says the language allowed them to move much faster than their competitors. One anecdote was about talking on the phone to a customer reporting a bug, and actually fixing it on the live system and asking the customer to try again, and the customer was shocked to find it now worked.
Like ITA, who created the best in class flight search system in Lisp and then sold to Google.
Every once in a while, an unpopular but powerful technology really is the secret sauce for a winning product.
It requires a lot of effort to learn, development tools are scarce, and you can't easily hire a new worker, simply because it's a minor language.
Eventually, the original developers leave the company for one reason or another, leaving code and half-baked documents that only those early developers fully understood. Good luck maintaining that software. You can't. It's either abandoned or replaced.
I would prefer a more stable platform designed for engineers, something like Clojure but with types. OCaml has a small community, and Scala brings unnecessary complexity with its support for OOP.
It looks to me like they would satisfy the same points that the article makes.
Edit: just did a quick comparison of the last 2 SO developer surveys, and it looks like Haskell "replaced" F# in their popularity ranking last year.
The result is that "the Haskell way" can seem a little more intimidating than the more "pragmatic" approach of MLs.
(I write this as someone who writes a lot of Haskell, and dabbles in StandardML!)
I see the corporate aspect of F#, but can you elaborate on what you mean by "special" about Haskell?
(Or with many other great options)
I've basically traded away some nice abstractions (functors, monoids etc) for the ability to debug error messages more easily, and not have to convince people to learn/support an obscure language. Good deal IMO.
> Many programmers encounter statically typed languages like Java or C++ and find that the compiler feels like an annoyance. By contrast, Haskell’s static type system, in conjunction with compile-type time checking, acts as an invaluable pair-programming buddy that gives instantaneous feedback during development.
Many programmers find that Java's or C++'s static type system, in conjunction with compile-time checking, feels like an annoyance. Unlike... the very same statement about Haskell? That's... that's quite a weak claim, to say the least.
> a signature like Int -> Int -> Bool indicates that a function takes two integers and returns a boolean value... this allows a programmer reading Haskell code to look only at type signatures when getting a sense of what a certain piece of code does. For example, one would not use the type signature above when looking for a function that manipulates strings, decodes JSON, or queries a database.
So... Type signature `Int -> Int -> Bool` can be used for a function that does any of the following things: manipulates strings, decodes JSON, or queries a database? How does that make it easier to deduce what a function does by "looking only at type signature"?
> Another feature of a pure functional programming paradigm is higher-order functions, which are functions that take functions as parameters.
As in: available in almost any language these days, and not exclusive to a "pure functional programming paradigm".
> One of the common development workflows we employ is relies on a tool called ghcid, a simple command line tool that relies on the Haskell repl to automatically watch code for changes and incrementally recompile. This allows us to see any compiler errors in our code immediately after saving changes to a file. It’s not uncommon for us to open only a terminal with a text editor and ghcid while developing applications in Haskell.
As in: Modern IDEs don't require you to run external tools to monitor your code for changes and highlight errors.
> a common refactoring workflow is to make a desired change in one location and then fix one compiler error at a time until the program compiles again.
As in: Modern IDEs let you do large-scale refactoring in one go, at a press of a button.
> The type system can protect us from making mistakes when changing the rules of our domain.
It can't. The example provided can't stop you from doing `case status of Paid -> delete invoiceNumber`. You have to invest significantly in a type-based DSL to prevent that from happening. But then, who will test your DSL?
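For concreteness, the kind of investment I mean looks something like this sketch (my own example, using a phantom status parameter; all names hypothetical):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data Status = Issued | Paid | Canceled

-- the status lives in the type, so operations can demand it
newtype Invoice (s :: Status) = Invoice { invoiceNumber :: Int }

-- deleting a Paid invoice is now a compile error, not a review item
deleteInvoice :: Invoice 'Issued -> IO ()
deleteInvoice inv = putStrLn ("deleting invoice " ++ show (invoiceNumber inv))

main :: IO ()
main = deleteInvoice (Invoice 7 :: Invoice 'Issued)
```

And now you have to maintain (and trust) every state-transition function that produces `Invoice 'Issued` values in the first place.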
> Haskell enables domain-specific languages, which foster expressiveness and reduce boilerplate
DSLs were all the rage 5-10 years ago. In reality, they are overhyped and are used very sparingly, for obvious reasons: DSLs are languages. They have to be designed, developed, maintained. Errors in your DSL will most likely be harder to find and debug than errors in your regular program.
It can manipulate strings or decode JSON, but both of these would be either (immutable) values from the enclosing scope or values created within the function. And since they can't be output anywhere (because there's no IO in the type), they don't matter.
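E.g., a sketch of mine: this function manipulates strings internally, yet the `Int -> Int -> Bool` signature stays pure because nothing can escape.

```haskell
-- pure despite internal string manipulation: no IO in the type,
-- so nothing is observable except the returned Bool
sameDigitCount :: Int -> Int -> Bool
sameDigitCount a b = length (show a) == length (show b)

main :: IO ()
main = print (sameDigitCount 123 456)  -- True
```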
EDIT:
I'll just add a few more counterpoints.
Agreed that DSLs are difficult, and often not worth the trouble. But if you do want to create a DSL, then Haskell is a good fit because of monads and monad transformer stacks; i.e. you can make a statement mean whatever you want it to mean, and keep the effects in check with types.
The synergy between higher order functions with typed IO is great, better than in other languages.
Agreed that IDE refactoring is convenient, and also that in other languages with static type checking, refactoring follows a process similar to what they describe in the article, so no immediate "pro" there.
It doesn't help if it's `Int -> Int -> IO Bool`, for example. Well, it does do IO, but other than that, who knows. Perhaps it reformats the disk while the CPU is idle :)
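To illustrate with a small sketch (both functions are hypothetical): the two signatures below are identical, so the type alone says "may perform some I/O" but cannot tell the harmless one from the one with an extra, unadvertised effect.

```haskell
-- Both functions share the type Int -> Int -> IO Bool.
-- The type names *that* I/O may happen, not *which* I/O.

-- Harmless: no observable effect at all.
greaterQuiet :: Int -> Int -> IO Bool
greaterQuiet x y = pure (x > y)

-- Same type, but sneaks in a write the signature doesn't mention.
greaterNoisy :: Int -> Int -> IO Bool
greaterNoisy x y = do
  writeFile "/tmp/sneaky.log" "I was here"  -- effect invisible in the type
  pure (x > y)
```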
My main point though is that the article does a very poor job of showing why Haskell is good at, well, anything, compared to, well, anything.
Genuinely curious, not that familiar with Haskell, just thought you could use something like parameter binding or similar to construct functions like that.
It can definitely NOT query a database (as that would be an effect which would be visible in the type).
From this quote, it's easy to see you've not spent any time actually using Haskell (others have explained what is actually going on), so why do you dislike it? I cannot fathom having an opinion either way on a language I don't know.
I recently tried to compile haskell-language-server and stack from source. 157 and 168 (or something) dependencies, full of redundant esoteric bullshit, compat packages, lifted crap, etc. It is even worse than J2EE, where there was the same redundant wrapping and indirection, but at least it was brain-dead straightforward verbose crap.
To use Haskell correctly, like the classic xmonad and similar projects, requires discipline, knowledge and good taste for just right abstractions, like Go stdlib or Scala3 standard library.
Yes, it doubles development time, which must be spent on understanding anyway, but fast-food FP code, full of redundant abstractions, is the worst nightmare to maintain.
This comment reads like someone seeing the worst of J2EE, and going back to C++. I'd characterize Haskell as having the type system that Java wishes it did.
Why are you trying to judge how Haskell should be written for your use case by looking at haskell-language-server, stack, and xmonad? Those are the domains of Haskell experts -- one is a language server, another is one of the pre-eminent build tools, and the third is a tiling window manager... Are any of those your use case?
There are real problems with Haskell, and forcing you into complexity is not one of them -- a steep learning curve (for certain concepts), hard-to-debug space leaks, and a relatively small ecosystem are the biggest issues.
These kinds of arguments are particularly lazy. Of course, Haskell's ecosystem is not so large that it's trivial to find a well-maintained, high-quality version of a library that meets one's other criteria. Programmers of a particular language are at the mercy of that language's ecosystem.
This line of reasoning reminds me of how C++ programmers would deflect criticisms of problematic features by arguing that one could use only the features that one wanted (thereby effectually creating or curating their own sub-language) and only choosing dependencies that were equally written in that sub-language. So easy!
Hum... Knowledge and an acquired taste, yes. You'll need those. Discipline not. Discipline is exactly what Haskell doesn't require.
Really, in order to "push side effects to the edge", people need to avoid using monadic composition as much as possible, which I rarely see Haskell programmers doing in practice.
Haskell never claimed to "get rid of side effects", the idea is make them explicit in the types and to be able to reason about them.
To push side effects to the edge you have to use do notation and monads only when you absolutely have no choice, which is not done in practice with Haskell.
>The presence of monads does not necessarily mean side effects
The presence of a functor does not mean side effects. The presence of a monad implies composition and binding, which does imply a side effect. Even Maybe monads composed have side effects that can produce output that the function itself can never produce on its own.
For example, let's say I have a function in the Maybe monad that will never produce Nothing on its own:

b :: Int -> Maybe Int
b x = Just x

but I can produce a side effect by binding it with Nothing: Nothing >>= b
The above yields Nothing, even though that is not part of the definition of b. It is a contextual side effect passed on through monadic composition. Normal composition, outside of monadic composition, usually does not have this property.

This is overlooking the cost of developers, which greatly outweighs the hardware's, unless you are Facebook.
The "overlooking" part you filled in yourself in bad faith and bad reading comprehension, as the author actually does acknowledge that the observed hardware savings are small compared to the cost of hiring programmers.