There are some relatively simple heuristics by which you can tell, without escape analysis, that a variable will not be referenced before initialization.
The obviously bad constructions are references in the same scope that happen before the declaration. It'd be nice if these were early errors, but alas, so keep the TDZ check. Next is any closed-over reference that happens before the initializer. These may run before the initializer, so keep the TDZ check. Then you have hoisted closures, even ones that appear after the initializer (e.g., `var` and `function` keyword declarations). These might run before the initializer too.
But everything else that comes after the initializer (access in the same or nested scope, and access in closures from non-hoisted declarations) can't possibly run before the initializer and doesn't need the TDZ check.
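A minimal sketch of those four cases, labeled with whether the TDZ check must stay (the function and variable names here are mine, purely for illustration):

```javascript
function classify() {
  // 1. Same-scope reference before the declaration: always in the TDZ.
  //    Uncommenting the next line throws a ReferenceError.
  // console.log(a);

  // 2. Closure created before the initializer: it might be called early,
  //    so the check must stay.
  const early = () => b;

  // 3. Hoisted declaration, even though it appears after the initializer:
  //    it could still be called before `b` is initialized.
  function hoisted() { return b; }

  let a = 1;
  let b = 2;

  // 4. Access after the initializer, or a closure created after it:
  //    provably safe, no TDZ check needed.
  const late = () => b;
  return [early(), hoisted(), late(), a];
}
console.log(classify()); // [2, 2, 2, 1]
```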
I believe this check is cheap enough to run during parsing. The reason for not pursuing it was that there wasn't a benchmark that showed TDZ checks were a problem. But TypeScript showed they were!
The good news is that we can still write our Scala `val`s and `var`s (`const` and `let`) in the source code, enjoying good scoping and good performance.
Is that still true? Early versions of V8 would do scope checks for things that weren't declared with var, but it doesn't do that any more. I think const and let are lowered to a var-like representation at compile time now anyway, so when the code is running they're the same thing.
Usage of Scala itself is less shiny if you look at market share. But I believe it's still growing in absolute numbers, only quite slowly.
function f() {
  return x; // Syntax parsing fails here.
}
let x = 4;
return f();
you would get the equivalent of a ReferenceError for x when f() tried to use it (well, refer to it) at the commented line. But in JavaScript, this successfully returns 4, because `let` inherits the weird hoisting behavior of `var` and `function`. And it has to, because otherwise this would be really weird:

function f1() { return x; }
let x = 4;
function f2() { return x; }
return Math.random() < 0.5 ? f1() : f2();
Would that have a 50/50 chance of returning the outer x? Would the engine have to swap which x is referred to in f1 when x gets initialized?

TDZ is also terrible because the engines have to look up at runtime whether a lexical variable's binding has been initialized yet. This is one reason (perhaps the main reason?) why they're slower. You can't constant fold even a `const`, because `const v = 7` means at runtime "either 7 or nothing at all, not even null or undefined".
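A small demonstration of that last point — the same `const` binding really is observable in both states (`observe` and `peek` are made-up names):

```javascript
function observe() {
  const peek = () => v; // closure created before v's initializer
  let before;
  try {
    peek(); // v's binding exists, but holds no value yet
  } catch (e) {
    before = e.constructor.name;
  }
  const v = 7;
  return [before, peek()];
}
console.log(observe()); // ["ReferenceError", 7]
```

Since `peek` can legally run both before and after the initializer, the engine cannot simply treat `v` as the constant 7 everywhere.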
In my opinion, TDZ was a mistake. (Not one I could have predicted at the time, so no shade to the designers.) The right thing to do when introducing let/const would have been to make any capture of a lexical variable disable hoisting of the containing function. So the example from the article (trimmed down a little)
return Math.random() < 0.5 ? useX() : 1;
let x = 4;
function useX() { return x; }
would raise a ReferenceError for `useX`, because it has not yet been declared at that point in the syntactic scope. Same with the similar

return Math.random() < 0.5 ? x : 1;
let x = 4;
which in current JavaScript also either returns 1 or throws a ReferenceError. I'm not against hoisting functions, and removing function hoisting would not have been possible anyway. The thing is, that's not "just a function"; that's a closure that is capturing something that doesn't exist yet. It's binding to something not in its lexical scope, an uninitialized slot in its static environment. That's a weird special case that has to be handled in the engine and considered in user code. It would be better to just disallow it. (And no, I don't think it would be a big deal for engines to detect that case. They already have to compute captures and bindings.)

Sadly, it's too late now.
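For reference, the current "either returns or throws" behavior can be pinned down deterministically — here the hypothetical `pick` parameter stands in for the `Math.random()` coin flip:

```javascript
function current(pick) {
  if (pick) return useX(); // useX is hoisted, so it's callable here --
                           // but x is still in its TDZ
  let x = 4;
  function useX() { return x; }
  return useX();
}

console.log(current(false)); // 4
try {
  current(true);
} catch (e) {
  console.log(e instanceof ReferenceError); // true
}
```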
I wish there were explicit support for this, though, maybe with a construct like `<expression> where <antecedent>`, etc., like what Haskell has, instead of having to hack it using functions and var.
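The usual hack looks something like this — an IIFE standing in for a `where` clause (`area` and `r` are made-up names):

```javascript
// `pi * r * r where r = 3`, emulated by evaluating the "where" bindings
// inside an IIFE so they stay private to the expression.
const area = (() => {
  const r = 3;
  return Math.PI * r * r;
})();
console.log(area); // ~28.27
```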
It basically means you can always override anything, which allows for monkey patching and proxying and adapter patterns and circular imports… These are all nasty things to accidentally encounter, but they can also be powerful tools when used deliberately.
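For instance, deliberate monkey patching relies on exactly that overridability (a toy example; `greet` is a made-up function):

```javascript
// Function declarations create mutable bindings, so a later assignment
// can wrap the original implementation.
function greet() { return "hello"; }
const original = greet;
greet = (...args) => original(...args).toUpperCase();
console.log(greet()); // "HELLO"
```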
These hoisting tricks all play an important role in ensuring backwards compatibility. And they’re the reason why JavaScript can have multiple versions of the same package while Python cannot.
function f() {
  return g()
}

function g() {
  return f()
}

It's much easier to reason about when your variables aren't going to escape past the end of the block.
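For example, with `let` a block-scoped helper variable simply stops existing at the closing brace (a sketch; the names are mine):

```javascript
function sum(values) {
  let total = 0;
  {
    let i = 0; // visible only inside this block
    for (; i < values.length; i++) total += values[i];
  }
  // `i` no longer exists here; referencing it would be a ReferenceError.
  return total;
}
console.log(sum([1, 2, 3])); // 6
```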
In non-GC languages going out of scope can also be a trigger to free the contents of the variable. This is useful for situations like locking where you can put the minimal span of code that requires the lock into a scope and take a lock which automatically unlocks at the end of the scope, for example.
JavaScript’s hoisting and scoping feel natural to people who started in JS, but most people who came from other languages find it surprising.
To avoid having to memorize yet one more thing that doesn't have an obvious benefit.
>An explicit construct for scoping would have been so much clearer to me
Having an additional construct for scoping is clearer than having every set of already-existing curly braces be a new scope? That seems backwards.
What would be the advantage over the system used everywhere else?
Yes, maybe we could use something similar to parentheses. Maybe they can look curly. /s
I laud the recent efforts to remove the JS from JS tools (Go in TS compiler, esbuild, etc), as you don't need 100% of your lang utils written in the same interpreted lang, especially slow/expensive tasks like compilation.
> As of TypeScript 5.0, the project's output target was switched from es5 to es2018 as part of a transition to ECMAScript modules. This meant that TypeScript could rely on the emit for native (and often more-succinct) syntax supported between ES2015 and ES2018. One might expect that this would unconditionally make things faster, but one surprise we encountered was a slowdown from using let and const natively!
So they don't transpile to ES5, and that is the issue.
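Roughly what an ES5 downlevel emit does with a captured `let` loop variable — the `_loop` helper pattern below is approximately what Babel and older TypeScript produce, not their exact output:

```javascript
// Native ES2015+: each iteration gets a fresh `i` binding.
const fns = [];
for (let i = 0; i < 3; i++) fns.push(() => i);
console.log(fns.map(f => f())); // [0, 1, 2]

// Approximate ES5 emit: `let` becomes a plain var, with a helper function
// capturing a per-iteration copy -- no TDZ machinery left at runtime.
var fns5 = [];
var _loop = function (_i) {
  fns5.push(function () { return _i; });
};
for (var i5 = 0; i5 < 3; i5++) _loop(i5);
console.log(fns5.map(function (f) { return f(); })); // [0, 1, 2]
```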
local x = 42
do
  print(x) -- 42
  local x = x * 2
  print(x) -- 84
end
print(x) -- 42
Look, ma! No dead zones!

In the article's example:
function example(measurement) {
  console.log(calculation); // undefined - accessible! calculation leaked out
  console.log(i); // undefined - accessible! i leaked out
<snip>
Why does the author say `calculation` and `i` are leaking? They’re not even defined at that point (they come later in the code), and we’re seeing “undefined” which, correct me if I’m wrong, is the JS way of saying “I have no idea what this thing is”. So where’s the leakage?

function example(measurement) {
  console.log(calculation); // undefined - accessible! calculation leaked out
  console.log(i); // undefined - accessible! i leaked out
<snip>
It's "leaking" because the variable is in scope; its associated value is "undefined". This is different than with let/const, where the variable would not be in scope at that point in the function. An undefined value bound to a variable is not the same as "I have no idea what this thing is". That would be the reference errors seen with let/const. Consider
console.log(foo)
let foo

vs

console.log(foo)
var foo
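A quick way to see the difference is to wrap both snippets in functions (`withVar` and `withLet` are made-up names):

```javascript
// var: the binding exists and is initialized to undefined at function entry.
function withVar() {
  const seen = typeof foo; // "undefined", but the binding is in scope
  var foo = 1;
  return seen;
}

// let: the binding is hoisted but uninitialized, so even typeof throws.
function withLet() {
  try {
    return typeof foo; // never returns: this line throws
  } catch (e) {
    return e instanceof ReferenceError;
  }
  let foo = 1; // never reached, but still creates the binding for the scope
}

console.log(withVar()); // "undefined"
console.log(withLet()); // true
```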
I think the article confuses "in scope" with "declared", and "declared and initialised" with "initialised".

I can always rely on FAANGs to make things unnecessarily confusing and ugly.
They cite a surprising 8% performance boost in some cases by using var.
They did a great job of explaining JavaScript variable hoisting, but that's all they have explained.