Most Haskellers I know are quite fond of Rust :)
Of course Haskell is heavily influenced by ML so it also has a lot of the same features.
Haskell has a garbage collector for a reason.
You can't program naturally with closures without a garbage collector, because closures can introduce cyclic references.
Also, Haskell has a type inference system that allows e.g. the IO monad to work seamlessly with the rest of the language.
EDIT: and laziness of course.
There's a Pareto frontier of "best available language for a given project", and I think Haskell dominates the portion of that frontier where you don't have super tight physical/memory/real-time constraints, while Rust dominates the portion where you do. This is by virtue of their similarity.
Rust just happens to make heavy use of linear types. Haskell lets you use both. If you write your Haskell program using linear types, you get borrow-checker semantics.
The main difference is memory management. However, Haskell could likely be adapted to manual memory management; there are manually memory-managed OCaml implementations.
Haskell is not typically used for systems programming.
> you don't even have strict order of evaluation!)
People say this, but I'm not sure they understand it. Haskell's evaluation order is exotic but deterministic in all cases unless you introduce threads, which bring non-determinism into any language, including Rust.
You do, with `seq`. And a lot of Haskell programming in practice is deciding between lazy vs strict programming.
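A small illustration of that choice, assuming GHC and the standard `Data.List` (`lazySum`/`strictSum` are names made up for this sketch):

```haskell
import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks; foldl' uses seq to
-- force the accumulator at each step, keeping memory usage flat.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0

main :: IO ()
main = do
  let x = 1 + 2 :: Int
  -- seq evaluates its first argument to weak head normal form
  -- before returning its second, here forcing x early:
  x `seq` print (strictSum [1 .. 1000000])
```

Both functions compute the same sum; the difference only shows up in space behaviour on large inputs.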
The only lazy feature commonly used by Rust programmers is iterators, but they are just lazy lists (immediately discarded after being forced).
Well, yes and no. In a way, features such as immutability and algebraic data types are things you should know about as a software developer, even if your current language means you can't use them at the moment.
My 16-year-old son, coming from a Python-at-school background, has learned Rust and is now writing small games in the Bevy game engine.
Having to understand monad transformers or another kind of effect system to get anything working is a heavy load that's unnecessary in other languages.
I'd say knowing how to use monad transformers is the barrier between intermediate and expert Haskell programmers.
Effect systems and monad transformers are advanced topics. They're possible to use in my language yet most people do not and can still write software.
Certainly no need for them in Haskell to be successful. You can just program at the lower levels of abstraction common in other languages.
The only difference is that Haskell's community tends to emphasize abstraction
"The common expression "a steep learning curve" is a misnomer suggesting that an activity is difficult to learn and that expending much effort does not increase proficiency by much, although a learning curve with a steep start actually represents rapid progress."
https://en.wikipedia.org/wiki/Learning_curve
When writing, consider "challenging learning curve" instead (I'd love other suggestions too).
Rust struct and Haskell records work pretty much the same way, too:
// Rust
Item { name, price }
Item { name: name, price: price }
match item {
Item { name, .. } => todo!(),
}
corresponding to -- Haskell
Item { name, price }
Item { name = name, price = price }
case item of
Item { name, .. } -> undefined
Haskell has some opt-in flexibility wrt. packing and unpacking of field names [1].

[1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/reco...
// rust
let Item { name, .. } = item;
-- haskell
let Item { name } = item in …
(Note: the Haskell version requires NamedFieldPuns, or RecordWildCards for something like Rust's version.)

Another possible gotcha is that by default Haskell records introduce a named accessor function into the surrounding scope, so defining two records with a `name` field next to each other is an error.
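A sketch of how this looks in practice, assuming GHC with NamedFieldPuns plus DuplicateRecordFields to resolve the `name` clash (`Item`, `User`, and `describe` are invented for the example):

```haskell
{-# LANGUAGE NamedFieldPuns #-}
{-# LANGUAGE DuplicateRecordFields #-}

-- With DuplicateRecordFields, both records may declare a `name`
-- field in the same module without the accessors clashing.
data Item = Item { name :: String, price :: Int }
data User = User { name :: String }

describe :: Item -> String
describe Item { name, price } =  -- pun: binds fields by their names
  name ++ " costs " ++ show price

main :: IO ()
main = putStrLn (describe Item { name = "apple", price = 3 })
```

Without DuplicateRecordFields, the second `name` field is rejected at the definition site, which is the gotcha described above.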
This is quite aptly expressed by the recent paper on dependent types [1]:
> Previous versions of Haskell, based on System Fω, had a simple kind system with a few kinds (*, * → * and so on). Unfortunately, this was insufficient for kind polymorphism. Thus, recent versions of Haskell were extended to support kind polymorphism, which required extending the core language as well. Indeed, System FC↑ [30] was proposed to support, among other things, kind polymorphism. However, System FC↑ separates expressions into terms, types and kinds, which complicates both the implementation and future extensions.
Later in the paper, they show how some of the most recent "fancy" features of Haskell can be achieved in a more economical framework based on first-class types. Unfortunately, systems programming (as in C/C++) puts strict constraints on a programming language, and currently it's not quite clear what's the best way to integrate dependent types with systems programming. In the coming years, I expect to see some restricted forms of dependent types tuned for systems programming (e.g., types dependent only on indices).
[1] https://www.researchgate.net/publication/308960209_Unified_S...
Dependent types dispense with the phase separation between compile-time and run-time code, which is inherent to system languages. So you can easily have dependent types in a systems language as part of compile-time evaluation, but not really as a general feature. It would work quite similar to the "program extraction" feature in current dependent-language implementations, which does enable improved type checking because you can express arbitrary proofs and check them reliably as part of compilation.
What black magic is this? Is the article just glossing over the cost of a copy or does Haskell do something weird here to avoid the copy while retaining both versions?
Conversely, in the case where something like a list is modified in entirety (e.g. with a `map` function), if the compiler can determine that the original is no longer needed, it can run the map operation in place - much like you might do on an array in C - avoiding the need for a second copy of the structure in-memory.
From https://en.wikipedia.org/wiki/Persistent_data_structure
Clojure leverages the same data structure for (at least) four basic types: list, map, vector and set. The following article explains it well, with a pretty graph/picture too: https://practical.li/clojurescript/clojure-syntax/persistent...
For example, in a basic binary search tree implementation of a map (using C-ish syntax for those who don't know a functional language):
struct Node {
String key;
int value;
Node left;
Node right;
}
Node set(Node n, String key, int value) {
if(n == null) {
return new Node { key = key, value = value, left = null, right = null };
}
if(key < n.key) {
return new Node {
key = n.key,
value = n.value,
left = set(n.left, key, value),
right = n.right
};
}
if(key > n.key) {
return new Node {
key = n.key,
value = n.value,
left = n.left,
right = set(n.right, key, value)
};
}
return new Node { key = key, value = value, left = n.left, right = n.right };
}
A perfectly-balanced tree with a depth of 5 has 1 + 2 + 4 + 8 + 16 = 31 nodes. If you call the above function on such a tree, the worst-case scenario is that the key doesn't exist, so it copies 5 nodes during its search and creates a new sixth node. 26 of the original 31 nodes are reused and referenced by the newly-created map. The percentage of nodes reused only improves as the perfectly-balanced tree gets larger.

Of course, if this is your implementation of set(), the tree won't be perfectly-balanced, so a production implementation of a tree-based map needs tree-rebalancing (as well as memory-reordering and compacting for cache locality). These extra constraints typically mean less of the tree can be re-used, but the percentage of nodes which can be reused remains high.
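The same structure translates almost mechanically to Haskell; a sketch (names are illustrative, and rebalancing is omitted just like in the C-ish version):

```haskell
-- An unbalanced persistent binary search tree from String to Int.
data Node = Leaf | Node String Int Node Node

set :: String -> Int -> Node -> Node
set key value Leaf = Node key value Leaf Leaf
set key value (Node k v l r)
  | key < k   = Node k v (set key value l) r  -- right subtree shared
  | key > k   = Node k v l (set key value r)  -- left subtree shared
  | otherwise = Node key value l r            -- both children shared

get :: String -> Node -> Maybe Int
get _ Leaf = Nothing
get key (Node k v l r)
  | key < k   = get key l
  | key > k   = get key r
  | otherwise = Just v

main :: IO ()
main = do
  let t1 = set "b" 2 (set "a" 1 Leaf)
      t2 = set "c" 3 t1   -- t1 is untouched; t2 shares most of it
  print (get "c" t1, get "c" t2)
```

Since nodes are immutable, the old version `t1` remains valid after the insert, which is exactly the sharing the node-count argument above is about.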
You have an object. When you want to apply a patch, you create a new object that contains just your patch, plus a reference to the old object. DiffArray is the simple common example. It's fast enough when the diffs are small, but terrible when there are many diffs in series, creating a deep stack of references.
It's not obscure. $PATH and the /bin,/usr/bin, /usr/local/bin, $HOME/bin dirs on Linux work the same way.
And GHC exploits that liberty.
I prefer Rust's way of doing this. When you're a beginner it ensures that you're passing and returning the proper type to and from the function, you kind of have the function as a guard which ensures that you're using the correct types.
You don't need it for all function declarations, though; there are many trivial cases where type signatures don't add value. Consider that you probably would prefer most of the variables within a function to be inferred for you by the compiler. A similar thing could be said about most of the functions within a given module. Important interface methods can be defined explicitly though, for extra clarity and self-documenting purposes.
module functions: if I chose, I'd let them be derived, but I'm relatively indifferent
inner functions: should be allowed to be derived, IMO. They're very infrequently used so this choice doesn't have much impact.
It's certainly good practise to write out your types, and often your editor can do it for you, but there are tons of little places where it's a waste.
Lifetime annotations aren't elided based on there being only one unambiguous answer. Rather, there are three simple rules[1] that look at the function signature and, in simple cases, give what you probably want. If those simple rules don't match your use case, you need to annotate manually; the compiler doesn't get clever.
This means that if you want to take a reference to an array and return a reference to an item in an array, for example, elision will work fine. But if you take a reference to a key and look up a reference to the value in a global map you need to write it by hand, even though the compiler could pretty clearly guess the output lifetime you want.
This preserves a crucial feature: you can know the exact signature of a function by only looking at the signature definition, you don't need to look into the body.
Lifetime elision isn't like inferring argument types. It's like defaulting integer literals to i32 unless you specify otherwise.
I do not want abstractions where they aren't needed. I want control, simplicity, a clear correspondence between what I'm writing and what logical assembly I'll be generating (logical, not physical). Most of all I want my code to be stupidly clear to the next person reading it. Systems programming isn't like writing a Java app, the domain is complicated enough that there's no room for abstraction astronauts to create problems where there are none.
I am still very wary of Rust. I have used it and will continue to use it, but it still teeters on being too complicated to be useful in the same way as C.
> and what logical assembly I'll be generating
The problem with Rust and C is not worrying about what assembly I'll get, it is that one has very little control over the layout of the stack. It's the last implicit data structure and that's a PITA for highly resource constrained programming.
I absolutely need to worry about what assembly I get. I am often checking my assembly or looking for optimizations in my assembly. And I'm doing this across multiple architectures.
So it doesn't have the simplicity of C, it tries to give you as many abstractions as possible while still maintaining the zero cost philosophy.
I would say Rust is easier than C++ and easier than Haskell.
- OCaml is definitely tailored for more "high-level" tasks, such as writing a programming language or a theorem prover (there are many of them in the OCaml world, and even Rust was initially written in OCaml, as you've mentioned). OCaml has a GC, which might be a problem under certain circumstances.
- Rust has a far better ecosystem despite being a younger language. You can just compare the number of packages on crates.io and OPAM.
- OCaml _sometimes_ has some more fancy type features, such as functors (modules parameterized by other modules), first-class modules, GADTs (Generalized Algebraic Data Types), algebraic effects, and the list could go on. It doesn't have type classes or an ownership system though.
- OCaml is more ML-like, while Rust quite often feels C-like. For example, you have automatically curried functions in OCaml and the omnipresent HM type inference.
- OCaml has a powerful optimizer called Flambda [1], which is designed to optimize FP-style programs.
Having written some code both in OCaml and Rust, I can say they are in a lot of aspects quite similar though. They both take a more practical POV on programming than Haskell, which affects language design quite evidently.
Also, Rust's idiomatic type system usage is way closer to Haskell's than it is to OCaml's (with typeclasses, no polymorphic variants, no module functorization, etc.)
[1] https://old.reddit.com/r/haskell/comments/zk2u6k/what_do_has...
let nums = take 10000000 naturals
print $ (sum nums, length nums)
> Because the nums list is used for both sum and length computations, the compiler can’t discard list elements until it evaluates both.

Now that makes me wonder, if I write something like
print $ (sum (take 10000000 naturals), length (take 10000000 naturals))
will it run in constant memory? I think it ought to, but are there mechanisms in the GHC optimizer to prevent extraction of common subexpressions that would cause huge increases in memory consumption?

When you say 'algorithms would look better', this is pretty subjective. Graph-based or in-place algorithms don't look too good in Haskell.
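For the constant-memory question above, the usual manual fix is to fuse the two traversals into a single strict fold, so the list is consumed once and never retained; a sketch, assuming GHC (`sumAndLength` is an invented name):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Compute sum and length in one pass. The bang patterns force both
-- components of the accumulator at every step, so no chain of
-- thunks builds up inside the pair and the consumed prefix of the
-- list can be garbage-collected immediately.
sumAndLength :: [Int] -> (Int, Int)
sumAndLength = foldl' step (0, 0)
  where
    step (!s, !n) x = (s + x, n + 1)

main :: IO ()
main = print (sumAndLength (take 10000000 [1 ..]))
```

This sidesteps the question of what GHC's optimizer will do with the two separate `take` expressions: with a single traversal there is nothing for CSE to share in the first place.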
Because its target domain requires reliable efficiency.
> Wouldn't it be better if we could just have the compiler guarantee that functions marked as such
As such what?
> are doing TCO
TCO is worthless, and Rust does not have TCE, nor does it care about TCE.
> so algorithms would look better and had the full benefit of persistent data structures?
Rust does not significantly care for persistent data structures, there are crates which provide them, but not the standard library.
Rust is not a functional language.
Why not? Why this "there can only be one" mentality?
Because you can't simply disallow mutability in an imperative language.
But about annotations, you annotate your code to let the compiler know what you meant to do. If just looking at what you do was enough, C would be a safe language. You improve it by giving more information to the compiler, so it can check if you are doing what you meant to.
(But I don't know what relation you saw with TCO. It's not a related concept.)
Maybe because it's designed to replace existing, familiar imperative and mutable languages.
Well, you can use Rc/Arc (that's what Bodil's im-rs does, for instance), but essentially yes: by design and purpose, a persistent data structure has confused ownership, and thus requires a separate layer handling memory reclamation.
Most important question about Haskell
I participated in teaching students Haskell at my alma mater this year, and "what can Haskell be used for" was a common question, with a genuine expectation that the answer would be that it is limited to only specific use cases. I would answer that it can be used anywhere languages like Java, C#, Go, and similar can be used -> it is a general-purpose programming language that uses a garbage collector! And while somewhat harder to learn due to abstractions we are all not used to, it is a delight to express business logic in once you get to know it well.
The biggest factor for deciding if Haskell is a good fit for the problem is probably ecosystem support -> are there enough libraries and tools to support efficient development in a specific problem domain. In our case, we are building a compiler/transpiler, and Haskell is well-known for great support in that area, so it was a no-brainer. We were actually also considering Rust, but we just had no need for that level of memory control and rather decided to go with language where we don't have to think about that (Haskell).
And hiring right, or being prepared to train/let new hires climb a steeper learning curve than hiring someone with Python experience for a Ruby app, say.
Rust has reached that critical mass, I think: it got past the chicken-and-egg problem of experienced people to hire and companies interested in hiring them (to work on a Rust codebase). Against the odds, I think; there are plenty of similarly up-and-coming languages you hear about that haven't (D, Zig), or have only in a niche (F#, Swift, Kotlin; I include the last two mainly because I'm thinking Go could so easily have gone the same way, just been the one Google pushed for K8s plugins and GAE applications, not used generally as it is despite being a general-purpose language).
How is Wasp more convenient than other frameworks? How do I hire people that know it or can train themselves to use it?