If we still need to target ES5 4 years later, and transpilation is standard practice, why bother? Is the evolution of JS not directed in practice by the authors of Babel and TypeScript? If no one can confidently ship this stuff for years afterward, what's the incentive to even think about what is official vs. a Babel-supported proposal?
I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.
If you never release new standards you'll never be able to use them. I realize that in JS world 4 years is basically an eternity so it's hard to project that far but I'm sure that if you still have to write javascript code 10 years from now you'll be happy to be able to use the features of ES2019.
Regarding transpilation surely it's not as standard as you make it out to be? It's popular to be sure but handwritten javascript is not that rare nowadays, is it?
The impression I got from working on various projects over the past couple of years was that if the build doesn't include Babel/Webpack/Parcel/etc then you're not doing a 'professional' job.
> ... handwritten javascript is not that rare nowadays, is it?
I love handwriting vanilla javascript, though I only really get the opportunity to do it in my personal projects.
When I made the decision earlier this year to rewrite my canvas library from scratch, I made a deliberate choice to drop support for IE/Edge/legacy browsers. (I can do this because nobody, as far as I know, uses my library for production sites). Being able to use ES6+ features in the code - promises, fat arrows, const/let, etc - has been a liberation and a joy and made me fall in love with Javascript all over again. Especially as the library has zero dependencies so when I'm working on it, it really does feel like working in uncharted territory where reinventing a wheel means getting to reinvent it better than before.
I wish paid work could be such fun.
Long time front end developer here. In the last couple of years I can't recall seeing even a single project without a build pipeline (not that they don't exist, I just haven't encountered them at my day job, first or third party).
Whereas in JS-land, the support for upgrades is even more trailing because it's the end-users who need to upgrade, not just the individual/organization doing the packaging.
I sometimes wonder what's the point of new versions of C, too.
Also, a new feature I want to use needs only be supported by one C compiler: the one I'm using. With JS, I need all of them to support it.
Even if you’re only targeting evergreen browsers, the popular build tools also perform minification and dependency resolution/linking. There’s so much a tool like webpack can do for you that I imagine it will remain hugely popular even as the need for transpilation wanes.
If you are building a modern web "app", not a one off set of web pages, then yes, it is the standard. It would be very weird to not see a compile (transpilation) step.
> Variable Length Arrays are not supported (although these are now officially optional)
> restrict qualifier is not supported, __restrict is supported instead, but it is not exactly the same
> Top-level qualifiers in array declarations in function parameters are not supported (e.g. void foo(int a[const])) as well as keyword static in the same context
They only started seriously working on actual C99 support (aside from the bits of C99 which were part of C++) for VS2013 or so.
Though to be fair it seems both Clang and GCC are still missing bits and bobs:
> The support for standard C in clang is feature-complete except for the C99 floating-point pragmas.
for GCC I found https://gcc.gnu.org/c99status.html, it's unclear how up-to-date it is, and if GCC is still missing any required feature.
I'd say for the last 5 years 90% of my browser JS projects used Webpack and Babel.
Sure, but if C came out with a new standard every year, you'd essentially never be on the latest version. Isn't there at least a valid argument to slowing down a bit to give the implementations a chance to catch up instead of having a new ES 20XX every year?
It is the part of C99 I was most excited about, and I was very disappointed that it is so poorly supported.
For one, not everyone works on the client. I can write for Node and use everything v8 supports without ever touching Babel and Typescript.
>I like the idea of idiomatic JS with powerful modern features, but in practice every project I’ve seen seems to use a pretty arbitrary subset of the language, with different ideas about best practices and what the good parts are.
Good parts/best practices are orthogonal to native features and libs, which is what we're discussing here.
You totally can do that--but you probably shouldn't, because writing TypeScript is better for you and for future you. ;)
- Experimental language proposals can be tested in the wild
- Real non-ivory-tower feedback is raised to TC39
- Everything feeds into the canonical ES spec (~no splintering)
- Us regular folk are able to harness new syntax immediately
- Users continue to have their old runtimes supported
JS is a notoriously quirky and inconsistent programming language. Clearly it's sufficiently usable for writing complex, powerful and reliable programs, but it's error-prone for non-experts and encourages programming patterns that make importing accidental complexity the norm.
For many programming situations it'd be easy to just pick a different language, but obviously this isn't the case for writing browser-based programs.
The best possible scenario for me would somehow involve deprecation and removal of the nasty parts of JS, and a path towards a smaller, simpler, more consistent language. Right now it feels like the cost of backwards forever compatibility is paid every day, in every project, and it's completely wasteful, given that transpile and polyfill is widely considered best practice.
Whether this could be the job of TC39 or some other institution could go either way.
I've recently been working in Electron, and I find having app logic in both browser JS and Node to be more of a frustrating uncanny valley than a help. I suspect I'm in the minority on this one though, at least among people with a workaday skill set in client-side JS.
I agree, this is super important. My inclination whenever I dig into a JS project has been to use lodash/underscore everywhere for everything, assuming that it is popular enough that someone will be able to maintain it without much headache, and I can actually get stuff done without breaking my brain over JavaScript's notorious quirks. I'm curious at what point this stops being a good practice. It certainly was 4 years ago.
It’s similar to all the new tweaks and elements in HTML or DOM. If you’re working on Wikipedia, you will likely never get to use them. But if you work on a more niche app, they become quite useful. Over time, old browsers die out, and the amount of people who can use new features expands; early adopters do the testing for the late majority.
This is a dramatic improvement over the 10-year lifespan of IE6 with ES3. Once the need to target ES5 drops, the next gap gets even shorter - I don't think we'll get below 2 years as a practical matter, but even at a 2-year delay, that's still regular progress.
> a pretty arbitrary subset of the language
Best practices don't come from a mathematical model - programming is about communication and being compatible with user/business demands (which keep changing). Thus, best practices come about from a lot of experimentation and retrospective. That's ongoing. This doesn't mean it won't settle down. Heck, as it is, a lot of dynamic best practices influence the direction of non-dynamic languages - all of which arises from time and experimentation.
Your “we” isn’t everyone else’s. Some places need to support very old browsers but even in places like that usually not every app does. Those people are pushing the state of the art forward since they do real work outside of the standards process and that provides useful feedback to both the standards committees and browser developers.
In practice? I suppose in practice, JavaScript is driven by everyone in aggregate and what people consider to be "JavaScript" rather than "something that can be made to run in traditional JavaScript environments like browsers." I'm not sure if you mean that, or simply who directs actual changes to the traditional JavaScript environments (like browsers) themselves.
But yeah, if transpilation tools are reliable and continue to be well-maintained, you can ask "why bother updating the 'official' language and implementation in browsers?" But I don't understand how this is a bad thing. You're getting the best of both worlds: browsers will implement new JavaScript features and optimizations, and some dev teams can also use build tools to use those new features, and other potential new features, and still make their work available to older browsers.
It doesn't seem like a problem to me, unless you're thinking about all the language development effort in the JavaScript community as a fixed pie such that "non-official" language development like Babel and TypeScript take away effort that otherwise would be allocated to official language development. And I certainly don't think that is the case.
I'm sure there are plenty of build systems out there that still indiscriminately turn things into ES5, because "that's how we wrote it years ago and it still works", but anyone who actually cares about performance will think twice before using babel today to turn nice, clean, concise modern code into incredibly verbose and shimmed legacy code, and will certainly think twice before serving ES5 code to users on a modern browser.
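As a rough illustration of what that downleveling costs, here is a sketch of what an ES5 target does to even trivial modern code (illustrative only, not verbatim Babel output; async/await and classes expand far more dramatically):

```javascript
// Modern source:
const add = (a, b) => a + b;

// Roughly what an ES5-targeting transpile emits for the same thing:
var addES5 = function add(a, b) {
  return a + b;
};

console.log(add(1, 2), addES5(1, 2)); // 3 3
```

For an arrow function the bloat is small; for generators, async functions, or spread, the ES5 output involves runtime helpers and regenerator machinery many times the size of the source.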
1. Node.js greatly benefits from these features. Transpiling is not as prevalent there
2. Some people do exclusively target "evergreen" browsers and don't care about IE support
3. Those that do differential bundling (different bundles per browsers) can see quite a performance boost by not transpiling on newer browsers
This is only true if you're writing JavaScript for the client and you have to support IE11. For many companies, the usage for IE11 is so low now that it can be safely dropped, for instance my SaaS products all just target evergreen browsers.
The primary benefit is that I don't have to rely on hacks.
This is how I build the backend to my CMS and my clients are happy to keep their browsers updated. (nearly trivial to do these days).
This also means that as soon as I see a new JS/CSS feature that will eventually become mainstream, I can use it in my admin as soon as both major browsers (Firefox/Chrome) support it. And even sooner if it's not a critical feature. (eg, I can skip adding a feature like "lazy loading" because browsers will have it built in eventually, etc...)
On the public facing front end, that is a different story though. It's motivation to keep things simple.
Right now, for example, the apps I'm working on require at least async/await support as a minimum test.
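One common way to run that minimum test at runtime is to hand the syntax to the engine's parser (a sketch; the function name is mine):

```javascript
// Feature-detect async/await support by asking the parser to accept the syntax.
// Engines predating ES2017 throw a SyntaxError from the Function constructor.
function supportsAsyncAwait() {
  try {
    new Function("return (async () => 42)()");
    return true;
  } catch (e) {
    return false;
  }
}

console.log(supportsAsyncAwait()); // true on any modern engine
```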
Also about the “subset” thing: the last 10 years or so I have been moving away from OOP, and more towards FP, so stuff like class support or Typescript have been a big yawn, anyway.
We had “C with classes” (I.e. C++ as it was known at the time) shoved down our throats in Uni back in the 80s, since garbage collection was impractical. That “wisdom” turned out to be short lived, and thus my migration towards FP (beyond the Lisp I had in an AI class back in the 80s)
If you only need to support a subset of browsers you can turn off compilation/polyfills for specific features, which sometimes leads to better performance and smaller bundles.
I think of it like TypeScript and Babel running ahead experimenting with new ideas, ES following along turning the good ideas into a spec, and browsers taking up the rear implementing the spec. It’s a pretty decent system.
IMO, anything below that is effectively a custom macro anyway (for which you may want to consider sweet.js or babel.macro to make it clear that this may change and help you find places you use the feature). Real-world feedback may change anything from syntax to behavior (`flatMap -> flat`, `Object.observe -> Proxy`, `EventEmitter -> Observable -> Emitter?`, and the it-feels-like-dozens-of-options pipeline syntax)
What percentage of your users is on IE11? Are you making money from them? Can you serve only them a compiled bundle?
You don't. Unless you care about IE11 (for most consumer products and mobile apps isn't necessary) you can use many of the features up through ES2016 or later without transpilation. My business uses JS classes, arrow and async functions, and new prototype methods without issue.
I would much, much rather that type annotation syntax gets standardised first, because it is comparatively easy to build pattern matching when that's in place but going the opposite direction is difficult. What is a type if not a pattern?
Plus it's a stage 1 proposal, meaning it's far from being finalized.
Could you link the type annotation proposal, I can't find it.
EDIT: here is the confirmation TS 3.7.0 got tagged: https://github.com/microsoft/TypeScript/issues/16#issuecomme...
EDIT 2: wow just noticed, that the issue ID is "16" and it has been open since Jul 15, 2014 (I guess: good things take time ... ;) )
I understand it's based on destructuring; the syntax still just doesn't work for me.
https://dev.to/kayis/pattern-match-your-javascript-with-z-nf...
See Scala[1] or Swifts[2]'s implementations.
[1] https://docs.scala-lang.org/tour/pattern-matching.html
[2] https://docs.swift.org/swift-book/ReferenceManual/Patterns.h...
My idea is that `let`, `var` and `const` return the value(s) being assigned. Basically I miss being able to declare variables in the condition part of `if` blocks that are scoped only to the `if()` block's existence (including `else` blocks).
Something along these lines:
    if (let row = await db.findOne()) {
      // row available here
    }
    // row does not exist here
The current alternative is to declare the variable outside the `if()` block, but I believe that is inelegant and harder to read, and it also requires you to start renaming variables (ie. row1, row2...) due to them going over their intended scope.

As prior art, Golang's:

    if x := foo(); x > 50 {
      // x is here
    } else {
      // x is here too
    }
    // x is not scoped here

And Perl's:

    if ( ( my $x = foo() ) > 50 ) {
      print $x
    }

The workaround today is a bare block:

    {
      let row;
      if (row = await db.findOne()) {
        //
      } else {
        //
      }
    }

Also, having "phantom" scope blocks gets very nasty to read once you have more involved logic, as the block itself has no implied meaning and the programmer has to walk a few lines into it to get what's going on.
    const user = await db.findOne()
    if (user) ... else ...

Typescript can even narrow the type to null vs. User in each branch block. Other than saving a few characters (the variable name), I don't see any benefit of this, while it makes code harder to read.
Too bad the `with` [0] keyword has been reserved for crap, it sounds nice (not for this, but maybe for something else).
>> and also requires you to start renaming variables (ie. row1, row2...) due them going over their intended scope
Variable shadowing [1] is a really bad practice that makes it hard for people to collaborate and keep the code sane. Bad habits are not a reason for language changes.
[0] - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
    int foo(int);
    int bar(int x) {
      if (int y = foo(x)) return 0;
      return x;
    }

    if (foo = bar()) { // syntax error!
    }
    if (let foo = bar()) { // works fine
    }
    if (const foo = bar()) { // also works fine
    }
    if (var foo = bar()) { // also also works fine
    }

But I'm still very nervous about some of the stuff mentioned here with regard to mutation. Taking Rust and Clojure as references, you always know for sure whether or not a call to e.g. `flat` will result in a mutation.
In JS, because of past experience, I'd never be completely confident that I wasn't mutating something by mistake. I don't know if you could retrofit features like const or mut. But, speaking personally, it might create enough safety-net to consider JS again.
(Maybe I'm missing an obvious feature?)
Proper immutable support (or a stronger concept of const) would also help with this.
    const a = [1,2,3]
    a.push(4)

    const b: readonly number[] = [1,2,3]
    b.push(4) // Property 'push' does not exist on type 'readonly number[]'.

I think this is the kind of thing you just have to learn when you use any language. But when you're switching between half a dozen, being able to rely on consistent founding design principles really makes things easier. And when there aren't any, this kind of guide helps.
Mutates: push, pop, shift, unshift, splice, reverse, sort, copyWithin
Does Not Mutate: everything else
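A quick sketch of that split in practice — copying first is the usual way to get non-mutating behaviour out of a mutating method like sort:

```javascript
const arr = [3, 1, 2];

// Does not mutate: map (like filter/slice/concat) returns a new array
const doubled = arr.map(x => x * 2);              // [6, 2, 4]

// sort mutates in place, so copy first if the original must survive
const sorted = arr.slice().sort((a, b) => a - b); // [1, 2, 3]

console.log(arr); // [3, 1, 2] — untouched so far
arr.push(4);      // mutates
console.log(arr); // [3, 1, 2, 4]
```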
In my side project, which is a high performance web app, I was able to get an extra ~20fps by virtually removing all garbage created each frame. And there's a lot of ways to accidentally create garbage.
Prime example is the Iterator protocol, which creates a new object with two keys for every step of the iteration. Changing one for loop from for...of back to old-style made GC pauses happen about half as much. But you can't iterate over a Map or Set without the Iterator protocol, so now all my data structures are hand-built, or simply Arrays.
I would like to see new language features be designed with a GC cost model that isn't "GC is free!" But I doubt that JavaScript is designed for me and my sensibilities....
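The allocation difference described above can be sketched like this (whether an engine really allocates per step for plain arrays depends on its escape analysis; for Map/Set iteration the protocol is unavoidable):

```javascript
const data = [1, 2, 3, 4];

// for...of drives the iterator protocol: conceptually each step produces
// a fresh { value, done } result object.
let sumOf = 0;
for (const v of data) sumOf += v;

// Old-style index loop: no per-step object, nothing for the GC to collect.
let sumIdx = 0;
for (let i = 0; i < data.length; i++) {
  sumIdx += data[i];
}

console.log(sumOf, sumIdx); // 10 10
```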
Array.flat() => flatten
Array.flatMap() => mapcat
String.trimLeft() => triml, trimr
Symbols are great but they’re much more useful when you can write them as (optionally namespaced) literals, which are much faster to work with:

    (= :my-key :your-key) ;; false
    (= :my-key :my-key) ;; true

Object.entries() and Object.fromEntries() are both covered by (into). You can use (map) and other collection-oriented functions directly with a hashmap; it will be converted to a vector of [k v] pairs for you. (into {} your-vector) will turn it back into a new hashmap.

And... all of these things were already in ClojureScript when it was launched back in 2013! Plus efficient immutability by default, it’ll run on IE6, and the syntax is now way more uniform than JS. I’m itching to use it professionally.
In javascript you kind of have to reason backwards and declare your variables as immutable (const). Though there are still some bugaboos; object fields can still be overwritten even if the object was declared with const.
Personally I just use TypeScript which can enforce not mutating at compile time (for the most part).
Part of the immutable value proposition is being able to work with the objects. Based on [0] Freezing feels more like constant than immutable. And the 'frozenness' isn't communicated through the language - I could be passed a frozen or unfrozen object and I wouldn't know without inspecting it.
And freeze isn't recursive against the entire object graph, meaning the nature of freezing is entirely dependent on the implementation of that object.
I really like the language-level expression and type checking of Rust. But it does require intentional language design.
I'm not criticising JS (though I think there are plenty of far better langauges). Just saying that calling `freeze` 'immutable' isn't the full story.
[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
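Since `Object.freeze` stops at the first level, recursive freezing has to be done by hand — a naive sketch (the `isFrozen` check doubles as loose cycle protection; the helper name is mine):

```javascript
// Object.freeze is shallow: nested objects stay mutable unless you recurse.
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
      deepFreeze(value); // recurse into nested objects/arrays
    }
  }
  return Object.freeze(obj);
}

const config = deepFreeze({ db: { host: "localhost" } });
try {
  config.db.host = "evil"; // ignored in sloppy mode, throws in strict mode
} catch (e) {}
console.log(config.db.host); // "localhost"
```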
If so then yeah, that can be annoying and/or confusing.
It becomes especially important in React where you share objects up and down an immutable structure of objects.
    2.4.1 :001 > a = [4,3,5,1,2]
     => [4, 3, 5, 1, 2]
    2.4.1 :002 > a.sort
     => [1, 2, 3, 4, 5]
    2.4.1 :003 > a
     => [4, 3, 5, 1, 2]
    2.4.1 :004 > a.sort!
     => [1, 2, 3, 4, 5]
    2.4.1 :005 > a
     => [1, 2, 3, 4, 5]

It makes chaining things while debugging so much harder:
    let a = a.project();
    let a = debug(a);
    let a = a.eject();

vs

    let a1 = a.project();
    let a1d = debug(a1);
    let a2 = a1d.eject();

Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope, or use a chain.

    let a = obj.project();
    a = debug(a);
    a = a.eject();

This is perfectly legal.

Any var/let statement of the form var a = 1; is interpreted as 2 statements: (1) the declaration of the variable, which is hoisted to the beginning of the variable scope, and (2) the setting of the value, which is done at the location the var statement is at.
Having multiple let statements would mean the same variable is declared and hoisted to the same location multiple times. So it's basically unnecessary and breaks hoisting semantics.
In addition, the downside risk of accidentally redefining a variable is probably far greater than the semantic benefits of making the redefinition clear to a reader (esp since I think that benefit is extremely limited in a loosely typed language like JS anyways).
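The hoisting split described above is directly observable — a sketch:

```javascript
// `var x = 1` behaves as a hoisted declaration plus an in-place assignment:
function demo() {
  console.log(x); // undefined — the declaration was hoisted, the value was not
  var x = 1;
  return x;
}
demo();
```

Note that `let`/`const` declarations are also hoisted but sit in the temporal dead zone, so the same early read would throw a ReferenceError instead of yielding undefined.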
When you're refactoring, you then have to be much more careful when moving lines of code of code around. With unique names, you get more of a safety net (including compile time errors if you're using something like TypeScript).
If you want a variable you can assign successive different values to, it's an entirely different thing, and there have always been var and the assignment operator for that.
    let projected = a.project();
    let debugged = debug(projected);
    let ejected = debugged.eject();

Lodash wasn't necessary.
In general the spread operator should only be used for forwarding arguments not for array operations.
Much rather have the magic word “flat”
It's also confusing that `arr.flatMap()` is not equivalent to `arr.map().flat()`, but to `arr.map().flat(Infinity)`
    x = [[[1, 2]], [[2, 3]], [[3, 4]]]

    x.flatMap(x => x)
    output: [[1,2], [2,3], [3,4]]

    x.map(x => x).flat()
    output: [[1,2], [2,3], [3,4]]

    x.map(x => x).flat(Infinity)
    output: [1, 2, 2, 3, 3, 4]
[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
> It is identical to a map() followed by a flat() of depth 1, but flatMap() is often quite useful, as merging both into one method is slightly more efficient

"I think Array#flatten should be shallow by default because it makes sense to do less work by default, it aligns with existing APIs like the DOM Node#cloneNode which is shallow by default, and it would align with the existing ES3 pattern of using Array#concat for a shallow flatten. Shallow by default would also align with flatMap too."
However, generally you don't want to operate on a list of lists and are trying to process each value one by one -- the nesting doesn't add anything. In this case, we use flatMap, which "flattens" or concatenates the interior lists so we can operate on them like it's just a big stream of values.
This is also the case for another type like `Optional`, which represents either a value `T` or the absence of a value. An optional can be "mapped" so that a function is applied only if there is a value `T` present. flatMap works the same way here, where if you want to call another method that also produces an `Optional`, flatMap will "unwrap" the optional since you never really want to work with the type `Optional<Optional<T>>`.
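JS has no built-in Optional, but a toy Maybe wrapper (hypothetical names, a sketch only) shows why flatMap avoids the `Optional<Optional<T>>` nesting:

```javascript
// Minimal Maybe: map re-wraps the result, flatMap expects the callback
// to return a Maybe itself and passes it through without double-wrapping.
const Maybe = (value) => ({
  map: (fn) => (value == null ? Maybe(null) : Maybe(fn(value))),
  flatMap: (fn) => (value == null ? Maybe(null) : fn(value)),
  get: () => value,
});

const findUser = (id) => Maybe(id === 1 ? { id: 1, email: "a@b.c" } : null);

console.log(findUser(1).flatMap(u => Maybe(u.email)).get()); // "a@b.c"
console.log(findUser(2).flatMap(u => Maybe(u.email)).get()); // null, no crash
```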
'map' as a function name isn't great either, since we have the same name for a data structure. What it has in its favor is being short and traditional.
JS flatMap = C# SelectMany
And also here is a good recap of ES 6/7/8/9 (just in case you missed something) (also not mine): https://medium.com/@madasamy/javascript-brief-history-and-ec...
The last step is pretty annoying without it.
The entire thing is very common in Python, where Object.entries() is spelled `.items()` and `Object.fromEntries(…)` is spelled `dict(…)`
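The ES2019 pair gives JS the same round-trip Python gets from `.items()` and `dict(...)` — transform an object as a list of pairs, then rebuild it:

```javascript
const prices = { apple: 1, banana: 2 };

// entries -> work on [key, value] pairs -> fromEntries
const doubled = Object.fromEntries(
  Object.entries(prices).map(([k, v]) => [k, v * 2])
);

console.log(doubled); // { apple: 2, banana: 4 }
```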
If you're familiar with C#'s linq and it's reliance on SelectMany it's somewhat easier to see the significance.
In C#'s linq you might write something like:
from host in sources
from value in fetch_data_from(host)
select create_record(value, host)
with flatmap (and some abuse of notation) you can more easily implement this as:

    sources.flatMap(host => fetch_data_from(host)
      .flatMap(value => create_record(value, host)))
If you dig even further you'll find that what makes this powerful is that flatMap, together with the function x => [x], turns arrays into a Monad. The separate functions map and flat also work, but this adds more conditions. Haskell folks tend to prefer flatMap because most of the conditions for a Monad can be encoded in its type signature (except [x].flatMap(x => x) == x, but that one is easy enough to check).

You'll find equivalents in all the JS utility libraries and most functional programming language standard libraries (and languages like Ruby with functional-ish subsets), so there's a lot of evidence that people who write code in that style like to have such a function available.
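The conditions in question are the monad laws; with `unit = x => [x]` two of them can be spot-checked directly (a sketch, not a proof):

```javascript
const unit = (x) => [x];
const f = (x) => [x, x * 10];

// Left identity: unit(a).flatMap(f) equals f(a)
console.log(unit(5).flatMap(f)); // [ 5, 50 ]

// Right identity: m.flatMap(unit) equals m
console.log([1, 2, 3].flatMap(unit)); // [ 1, 2, 3 ]
```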
I personally feel flatMap is a much more used method than flat, so if you want to remove one, I would remove flat.
Flat can flatten any level of nesting (it just defaults to 1), so would be difficult to implement in terms of flatMap.
"I think Array#flatten should be shallow by default because it makes sense to do less work by default, it aligns with existing APIs like the DOM Node#cloneNode which is shallow by default, and it would align with the existing ES3 pattern of using Array#concat for a shallow flatten. Shallow by default would also align with flatMap too."
I really hope there could be syntactic sugar like do expression in Haskell, for in Scala, and LinQ in C# for flatMap instead of type limited version like async await.
Another thing is pipe operator seems to be very welcome among the proposals. There will be no awkward .pipe(map(f), tap(g)) in RxJS since then.
1. it all but requires that ES-defined functions stringify to their source code. Pre-ES2019 that's implementation-defined
2. it standardises the placeholder for the case where toString can't or won't create ECMAScript code (e.g. host functions), this could otherwise be an issue as with implementation-defined placeholders subsequent updates to the standard might make the placeholder unexpectedly syntactically valid, by having the placeholder standard future proposals can easily avoid making it valid
3. the stringification should be cross-platform as the algorithm is standardised
https://tc39.es/Function-prototype-toString-revision/
https://github.com/tc39/Function-prototype-toString-revision...
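The difference is easy to see in a REPL: ES-defined functions stringify to their source text, while host/native functions get the standardised placeholder:

```javascript
function add(a, b) { return a + b; }

console.log(add.toString());      // function add(a, b) { return a + b; }
console.log(Math.max.toString()); // function max() { [native code] }
```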
Personally, I am not a fan of languages growing. I think C is awesome, because everyone can understand the code, and doesn’t have to be a language lawyer like with C++. Concepts, Lambdas, crazy Template preprocessing, and more. The team can just work, pick up any module and read it without magic.
In C++ I am not even sure if a copy constructor would run vs an overloaded = operator without looking it up.
How much of ES5.5+ was guided by jQuery?
[1,2,,3]
> (str.match(regexWithGroup) || [, null])[1]
I.e. if the regex matches, then give me the first group (1st index) otherwise give me null.
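Spelled out, the trick relies on the sparse literal `[, null]` having `null` at index 1 (the regex here is my own stand-in):

```javascript
const regexWithGroup = /key=(\w+)/;

// On a miss, match() returns null, so || substitutes [, null],
// whose index 1 is null — no TypeError and no explicit branching.
const hit  = ("key=abc".match(regexWithGroup) || [, null])[1];
const miss = ("nothing".match(regexWithGroup) || [, null])[1];

console.log(hit, miss); // abc null
```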
    var arr = [];
    arr[0] = 1;
    arr[1] = 2;
    arr[3] = 3;

so there's not really much downside to also allowing a literal syntax for the same thing.

I really love languages that force you to handle errors up to the top level.
In those cases, forcing the extra parameter in the catch, even though you are not using it, is slightly annoying. It's literally 3 characters, but in this age of linters encouraging you not to declare arguments you don't use, it just feels unnatural.
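ES2019's optional catch binding addresses exactly this — the parameter can simply be dropped:

```javascript
// Before ES2019 the binding was mandatory: catch (e) { ... } even if unused.
function tryParse(json) {
  try {
    return JSON.parse(json);
  } catch { // ES2019: no unused parameter for the linter to flag
    return null;
  }
}

console.log(tryParse('{"a": 1}')); // { a: 1 }
console.log(tryParse("not json")); // null
```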
I really hate when a system tells me "Unknown error occured" or "Either this or that happened" because the software doesn't care to be specific with the errors.
You should at least log the error message, not ignore it.
In Java it led to lots of exception wrapping and leaky abstractions.
Not sure what the answer is - although my golang experience was better.
Java programmers need to be comfortable letting exceptions have the default behavior until they're sure they have a better idea. Declaring throws is usually enough.
I've always really liked checked exceptions in my own designs. Though I'm not crazy about the syntax.
IMO the three features that would make a much more significant impact in front end work are:
- optional static types
- reactivity
- some way to solve data binding with the DOM at the native level
It seems like an unnecessary change - if the source needs to be accessed then get the source file.
    const test = Symbol("Desc");
    testSymbol.description; // "Desc"

Should testSymbol be replaced with test?
What looks out of place to you in that example?
Would it make more sense to you with a very slightly less arbitrary example, perhaps arr = ['Value for 0', 'Value for 1', , 'Value for 3', 'Value for 4']; instead of simple mapping ints to ints?
Because array contents are mutable [even if the array variable itself is declared const] that third index may be populated at a later point in the code.
Even the trim operations they added fall short of the target. In Python (and tcl, by the way) you can specify which characters to trim.
So close, yet, so far.
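A userland shim is short, which makes the omission more frustrating — a sketch (hypothetical helper; the escaping assumes `chars` is a plain string of characters):

```javascript
// Trim a caller-supplied set of characters from both ends,
// akin to Python's str.strip(chars).
function trimChars(str, chars) {
  const set = chars.replace(/[.*+?^${}()|[\]\\-]/g, "\\$&"); // escape regex metachars
  return str.replace(new RegExp(`^[${set}]+|[${set}]+$`, "g"), "");
}

console.log(trimChars("--hello--", "-"));   // hello
console.log(trimChars("xy_data_yx", "xy")); // _data_
```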
We go on adding fancy new syntax for little or no gain. The whole arrow function notation, for example, buys nothing new compared to the old notation of writing "function(....){}" other than appearing to keep up with functional fashion of the times.
Similarly, python which was resistant to the idea of 20 ways to do the same thing, also seems to be going in the direction of crazy things like the "walrus" operator which seems to be increasing the cognitive load by being a little more terse while not solving any fundamental issues.
Nothing wrong with functional paradigm, but extra syntax should only be added when it brings something substantial valuable to the table.
Also, features should be removed just as aggressively as they are added, otherwise you end up with C++ where you need less of a programmer to be able to tell what a given expression will do and more of a compiler grammar lawyer who can unmangle the legalese.
Incorrect - the main advantage is that fat arrow syntax keeps the lexical `this` of the enclosing context, hence you don't need the that = this antipattern.
Arrow functions bind this to the lexical scope, which is useful. (In a regular function the value of this depends on how it's called.)
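The difference in one sketch — the arrow closes over the enclosing `this`, so the old `var that = this` dance disappears:

```javascript
const counter = {
  count: 0,
  makeIncrementer() {
    // The arrow inherits `this` from makeIncrementer's call (counter),
    // where a plain `function () {}` here would get its own `this`.
    return () => { this.count++; };
  },
};

const inc = counter.makeIncrementer();
inc();
inc();
console.log(counter.count); // 2
```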
> python which was resistant to the idea of 20 ways to do the same thing
This was in comparison to Perl which intentionally has an unusual excess of different ways to do things.
> "walrus" operator
Simplifies a very common pattern.
    m = re.match(r"my_key = (.*)", text)
    if m:
        print(m.group(1))

It allows devs to do the following:
onClick={() => doSomething()}
Without having to worry about binding the function to the correct context.
It's one of the best JS improvements of the last 10 years.
I'm a JS fan but had to admit I chuckled at the implementation of is-even: https://github.com/jonschlinkert/is-even/blob/master/index.j...
Function.toString being more accurate is helpful.
But real progress would be removing dangerous backtracking regular expressions in favor of RE2: https://github.com/google/re2/wiki/Syntax