Immutability is a tool, not a rule, and I am free to reject any assertion otherwise when those assertions provide no evidence, or only shitty anecdotes.
Prove your claims.
Certainly, immutability is a foundation... for performance problems.
Another provable rule in computing is that more lines of code = more bugs. Immutability uses more lines of code.
Another demonstrable fact is that Haskell-based programs have just as many bugs as programs in any other language, immutability or not. Therefore, immutability is not a bastion of robustness.
You’re going to have significant difficulty proving to me that immutability = scalability and robustness when both are demonstrably not true, just by taking measurements of the things you would expect to improve from those foundations.
Immutability is not a silver bullet. It is a tool that is sometimes useful, but has significant drawbacks, including shitty performance, and significantly limiting how your data can be managed (without that limitation paying off in any significant way).
Because the rest of your post is pretty LOL-worthy in light of your opening sentence.
1) immutability has performance problems. Source: literally every measurement of immutable vs. mutable data structures ever performed.
Source 2: logic - copying data is slower than not copying it
Source 3: cache lines: modern CPUs rely pretty heavily on cache lines and branch prediction to improve performance. Immutability measurably harms both.
2) immutability requires more code, and LOC is the best predictor of defects
Clarification: runtime immutability requires more code
Source: it takes more lines of code to return deep copies of objects than to not do that.
Source: https://www.researchgate.net/publication/316922118_An_Invest...
Package densities are the best predictors of defects
3) Haskell projects have as many bugs as any other language
Source: the best evidence we have here is “the large scale study of programming languages on GitHub”, but I suggest that you look deeper here, as the authors’ qualification of defects is somewhat questionable (a project that never fixes defects would have a low defect rate in this study, and it additionally doesn’t properly compare project sizes, among other things). Anyway, in responses that do have better controls in place (and hilariously even in this paper itself, where Haskell programs tend to see more defects as projects go on while C projects tend to see fewer), we see that Haskell does absolutely no better than anything else for bugs and defects.
"Table 7: Functional languages have a smaller relationship to defects than other language classes where as procedural languages are either greater than average or similar to the average."
"The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than un-managed."
You got owned by your own source.
As for your un-sourced claim that "copying data is slower than not copying it", I'd suggest learning how immutable-first languages practice data sharing between objects to minimize the amount of copying needed.
So let's disabuse you of your mistrust of immutability in another domain!
Here is some typical "go fast and mutable!" nonsense code:
int foo(int i, int j) {
    while (i < 10) {
        j += i;
        i++;
    }
    return j;
}
Let's compile it with https://godbolt.org/, turn on some optimisations, and inspect the IR (-O2 -emit-llvm). Copying out the part that corresponds to the while loop:
4:
%5 = sub i32 9, %0, !dbg !20
%6 = add nsw i32 %0, 1, !dbg !20
%7 = mul i32 %5, %6, !dbg !20
%8 = zext i32 %5 to i33, !dbg !20
%9 = sub i32 8, %0, !dbg !20
%10 = zext i32 %9 to i33, !dbg !20
%11 = mul i33 %8, %10, !dbg !20
%12 = lshr i33 %11, 1, !dbg !20
%13 = trunc i33 %12 to i32, !dbg !20
tail call void @llvm.dbg.value(metadata i32 poison, metadata !17, metadata !DIExpression()), !dbg !18
tail call void @llvm.dbg.value(metadata i32 poison, metadata !16, metadata !DIExpression()), !dbg !18
%14 = add i32 %1, %0, !dbg !20
%15 = add i32 %14, %7, !dbg !20
%16 = add i32 %15, %13, !dbg !20
br label %17, !dbg !21
17:
%18 = phi i32 [ %1, %2 ], [ %16, %4 ]
Well, would you look at that! Clang decided (even in this hot loop) never to re-assign any of the left-hand-sides, even though my instructions were just: "mutate j in-place. mutate i in-place."

> Source: it takes more lines of code to return deep copies of objects than to not do that.
Defensive copying and deep copying is not a thing you have to do in immutable languages. Even under the covers, it's not happening the way you seem to think it is. If I had a large immutable map in use by some other process, and needed a version of it with an element changed or added, why would I deep copy it when I can just point to that same map instance, and add a pointer to the key-value pair I want to substitute [1]? I think this is a common reservation people have about immutable programming because they come into it with an OO mindset. At least, I know I did.
In a really simplified example, a = (1, 2, 3, ..., 100) and b = (2, 3, ..., 100) are not allocated as two full lists in memory space. a contains 1 followed by a pointer to b. Because you have guarantees that b will never change, the single instance of b can be recycled in other data structures (or passed to many other functions and threads) and you avoid the complexity of managing race conditions, mutexes, semaphores, which are a significant source of bugs in other languages.
See [2] for a more realistic implementation.
You have posted nothing else besides your own assertions.
Not everything people say on a discussion board is a scientific claim, subject to scientific inquiry and in need of a thesis defense. But if you off-the-cuff dismiss Joe Armstrong's opinion on a matter because it hasn't met your criteria of proof, all while thinking you are somehow being the rational scientist here, you are actually revealing your own stupidity.
I reject that claim.
In this comment, you simultaneously agree and disagree with me.
I don’t give a shit what Joe Armstrong says about immutability because the facts are the facts:
1) immutability causes performance problems
2) immutability significantly limits how you can manage data, which is counter to what computers are meant to do
3) immutability measurably does not reduce bugs in programs
I am not dismissing <insert name> off the cuff. I am dismissing them because the metrics you would expect to improve as a result of their claim demonstrably do not.
>it doesn’t have to be a scientific claim
When you are telling people to “make immutability a foundation of their programming”, you 100% are opening yourself to scientific scrutiny. If you cannot back up this claim with actual metrics, and you’re just going to say “hurr durr, just let me make claims without calling me out to providing evidence please”, why should anyone believe you?
Have you ever heard the saying, “assertions made without evidence can be dismissed without evidence?”

My experience differs.
Immutability has been the foundation of many of our large scale programs. It makes safe concurrent programming easier, and languages built around immutable data structures usually optimize memory handling in ways that are not available when simply writing “functional style” code in non-functional languages, i.e., under the hood they’re using persistent data structures, structural sharing, tail call optimization, etc.
I'd be curious to see how you back this claim up. Are you referring to something published that we can all go and read?
For your points:
1) Yes, immutability can cause performance problems in some contexts. However, it can also help on the whole. Mutability in concurrent systems requires all sorts of complications such as mutexes that slow things down considerably. Even in single-threaded systems, mutability leads to defensive copying in practice. Furthermore, persistent data structures[0] exist for lists, dictionaries, etc., that achieve very good space and time performance by mutating internally while exposing an immutable interface.
At any rate, even if it is slower, most of the time the performance difference just doesn't matter.
2) How does it limit how you can manage data? It's still possible to mix immutable and mutable data if necessary, but immutable data can be transformed just as mutable data can.
3) You say it measurably does not reduce bugs in programs, again with no evidence. Immutability eliminates entire classes of commonly-encountered bugs, including many pernicious ones related to concurrency. These are bugs that happen commonly with mutable data, but simply don't for immutable data.
In addition, there is some limited empirical evidence to the contrary, which is rare for this kind of thing. Immutable-first Clojure had the lowest proportion of Github issues labeled as bugs, even beating out static languages. [1]
I'm not GP, but these traditions are usually not backed by any evidence but by cargo-culting and cults of personality. Not to mention people who over-hype their favourite technologies to high heavens, poisoning the well for everyone else (no, most of the telecom industry doesn't run Erlang, Naughty Dog didn't ship Lisp on PlayStation 2, and Prolog didn't lead to fifth-generation computing).
I don't believe in blindly believing things without evidence either, especially if I have never encountered them before, but I also don't believe in blindly dismissing the experience of world-renowned experts in their field because they didn't provide a point-by-point prooftext of every claim they made (again, we aren't sitting here discussing a dissertation or mathematical proof). Their experience and what they've provided to the world is the evidence. We took this 19th-century German ultra-materialist philosophy too far here in the West, and that's what gave us postmodernism/poststructuralism with its disastrous consequences, but it still seems like we haven't learned anything from that.
The ancients had it right that there are different types of knowledge, and different ways of knowing things (and knowing them to be true, at least as far as it mattered). We in the modern era, with the most unfettered access to information, have quite possibly the narrowest definition, ironically.
Bugs happen when you think you can program something correctly, but can't.
If you look at the implementations of transactions in any other language... oh wait, there aren't any!
People keep trying to implement it in their own languages, figure out it's a non-starter (because of uncontrolled mutation), and give up.
- Clojure doesn't enforce purity (it can't), but from what I hear its STM seems to work pretty well (aside from some perf issues possibly? haven't used it). That's because "mostly pure" functional programming is encouraged by both the language itself and the culture and ecosystem around it, so uncontrolled side effects are less likely to be a problem.
- I think STM can work "well enough" in unmanaged languages as long as you don't try to boil the ocean and make it perfectly transparent, safe, and fast under all circumstances (Microsoft, IBM, Intel and several others tried for years and failed). That means there will inevitably be huge footguns for non-expert programmers (e.g., any side effect might be invoked every time a transaction is optimistically and transparently retried). These footguns can be mitigated by affordances like commit/abort handlers and infallible transactions.