Furthermore, I don't see why engines should police what is or isn't acceptable performance. Using functional interfaces (map/forEach/etc.) is slower than using for loops in most cases, but that didn't stop them from implementing those interfaces either.
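For example (a toy comparison; the exact gap varies by engine and workload):

```javascript
// map() produces the same result as the loop, but it invokes a callback per
// element and allocates a fresh result array -- typically slower in
// microbenchmarks, yet the language shipped it anyway.
const src = [1, 2, 3, 4];

const mapped = src.map((n) => n * n);

const looped = [];
for (let i = 0; i < src.length; i++) looped.push(src[i] * src[i]);
// Both are [1, 4, 9, 16]
```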
I don't think there's that much of a performance impact when comparing

```
const x = fun1(abc);
const y = fun2(x);
const z = fun3(y);
fun4(z);
```

and

```
abc |> fun1 |> fun2 |> fun3 |> fun4
```

especially when you end up writing code like

```
fun1(abc).then((x) => fun2(x)).then((y) => fun3(y)).then((z) => fun4(z))
```
when using existing language features.

```
const x = fun1(a, 10)
const y = fun2(x, 20)
const z = fun3(y, 30)
```

In this case the pipeline version would create a bunch of throwaway closures:

```
a |> ((a) => fun1(a, 10))
  |> ((x) => fun2(x, 20))
  |> ((y) => fun3(y, 30))
```

All we're asking for is the ability to rewrite that as `2 |> Math.sqrt`.
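The throwaway-closure version can be simulated in today's JS with a small `pipe` helper (a hypothetical stand-in, since `|>` isn't shipped in any engine; `fun1`/`fun2`/`fun3` are placeholder functions):

```javascript
// Minimal stand-in for an F#-style pipeline: each stage is a unary function.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

// Placeholder functions with an extra argument, as in the example above:
const fun1 = (a, n) => a + n;
const fun2 = (x, n) => x * n;
const fun3 = (y, n) => y - n;

// Each stage that needs an extra argument forces a fresh arrow function,
// allocated every time this code path runs:
const result = pipe(
  5,
  (a) => fun1(a, 10), // throwaway closure
  (x) => fun2(x, 20), // throwaway closure
  (y) => fun3(y, 30), // throwaway closure
);
// result === (5 + 10) * 20 - 30 === 270
```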
What they're afraid of, as I understand it, is that people may hypothetically start leaning more on closures, which themselves perform worse than classes.
However I'm of the opinion that the engine implementors shouldn't really concern themselves to that extent with how people write their code. People can always write slow code, and that's their own responsibility. So I don't know about "silly", but I don't agree with it.
Unless I misunderstood and somehow doing function application a little different is actually a really hard problem. Who knows.
```
foo(1, bar(2, baz(3)), 3)
```

becomes something like

```
(1, (2, (3 |> baz) |> bar), 3) |> foo
```

or

```
(3 |> baz) |> (2, % |> bar) |> (1, %, 3) |> foo
```
That looks like just another way to write a thing in JavaScript, and it is not easier to read. What is the advantage?

The goal is to linearize unary function application, not to make all code look better.
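A sketch of what linearizing unary application buys, simulated with a hypothetical `pipe` helper (the `|>` operator itself doesn't run anywhere):

```javascript
// Nested unary calls read inside-out:
const nested = Math.sqrt(Number(" 16 ".trim())); // 4

// Linearized, the same steps read left-to-right in application order --
// roughly what `" 16 " |> %.trim() |> Number(%) |> Math.sqrt(%)` would give:
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);
const linear = pipe(" 16 ", (s) => s.trim(), Number, Math.sqrt); // 4
```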
Loads of features have been added to JS that have worse performance or theoretically enable worse performance, but that never stopped them before.
Some concrete (not-exhaustive) examples:
* Private class fields are generally 30-50% slower than ordinary properties (and also break proxies).
* let/const are a few percent slower than var.
* Generators are slower than loops.
* Iterators are often slower due to generating garbage for return values.
* Rest/spread operators hide that you're allocating new arrays and objects.
* Proxies cause insane slowdowns of your code.
* Allowing sub-classing of builtins makes everything slow.
* BigInt as designed is almost always slower than the engine's inferred 31-bit small integers.
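To make the spread bullet concrete (a toy illustration with made-up values):

```javascript
// Spread reads declaratively but allocates a brand-new object/array each
// time, copying every key or element:
const base = { a: 1, b: 2 };
const extended = { ...base, c: 3 }; // new object; base's keys copied over

const nums = [1, 2, 3];
const appended = [...nums, 4]; // new 4-element array; nums copied

// Contrast: nums.push(4) would mutate in place with no fresh allocation.
```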
Meanwhile, Google and Mozilla refuse to implement proper tail calls even though they would INCREASE performance for a lot of code. They killed their SIMD projects (despite having them already implemented), which also cost performance for the most performance-sensitive applications.
It seems obvious that performance is a non-issue when it's something they want to add and an easy excuse when it's something they don't want to add.
Record/tuple was killed off despite being the best proposed answer for eliminating hidden class mutation, providing deep O(1) comparisons, and making webworkers/threads/actors worth using because data transfer wouldn't be a bottleneck.
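To illustrate the comparison point (the `#{}` syntax below is from the withdrawn proposal and doesn't run anywhere):

```javascript
// Today, two structurally identical objects compare by identity, so
// structural equality requires walking every key and element (O(n)):
const a = { x: 1, nested: { y: [1, 2, 3] } };
const b = { x: 1, nested: { y: [1, 2, 3] } };
console.log(a === b); // false: identity comparison, not structural

// Under Record/Tuple, deeply-immutable values could compare by content:
//   #{ x: 1 } === #{ x: 1 }   // would be true
```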
Pattern matching, do expressions, for/while/if/else expressions, binary AST, and others have languished for years without the spec committee seemingly caring that these would have real, tangible upsides for devs and/or users without adding much complexity to the JIT.
I'm convinced that most of the committee is completely divorced from the people who actually use JS day-to-day.
Only in function calls, surely? If you're using spread inside [] or {} then you already know that it allocates.
This applies to MOST devs today in my experience, and doubly so to JS and Python devs, largely due to a lack of education. I'm fine with devs who never went to college, but it becomes an issue when they never bothered to study on their own either.
I've worked with a lot of JS devs who have absolutely no understanding of how the system works. Allocation and garbage collection are pure magic. They also have no understanding of pointers or the difference between the stack and the heap. All they know is that it's the magic that makes their code run. For these kinds of devs, spread just makes the object they want, and they don't realize it has a performance impact.
Even among knowledgeable devs, you often get the argument that "it's fast enough" and maybe something about optimizing down the road if we need it. The result is a kind of "slow by a thousand small allocations" where your whole application drags more than it should and there's no obvious hot spot because the whole thing is one giant, unoptimized ball of code.
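One common shape of that problem, sketched with a hypothetical list-building hot path:

```javascript
// "Slow by a thousand small allocations": building an array with spread
// inside reduce copies the accumulator on every iteration -- quadratic
// work and garbage for what should be a linear build.
const items = Array.from({ length: 1000 }, (_, i) => i);

const slow = items.reduce((acc, x) => [...acc, x * 2], []);

// Same result with one array and no per-step copies:
const fast = [];
for (const x of items) fast.push(x * 2);
```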
At the end of the day, ease of use, developer ignorance, and deadline pressure means performance is almost always the dead-last priority.
Most of the more interesting proposals tend to languish these days. When you look at everything that's advanced to Stage 3-4, it's like, "ok, I'm certain this has some amazing perf bump for some feature I don't even use... but do I really care?"