That cross-platform bit is the weakest link, sigh.
Of course, most developers still have to target older browsers, but for a private Web app where you know your target audience, this shouldn't be a huge issue.
$('p').text('Hello.')
over

var ps = document.getElementsByTagName('p');
for (var i = 0; i < ps.length; i++) { ps[i].innerHTML = 'Hello.'; }

(note that the for...in version you often see is broken anyway: it iterates the collection's keys, not the elements)
any day!

If you really think more people should be using plain vanilla JavaScript (and in a lot of places I think this is actually true; even when a framework is needed, it's good to have the underlying skill), then the way to get them to do that is to educate them on it, not to patronize them.
Aside from not being cross-platform, VanillaJS is just plain ugly.
We finished the exercise assuming the example used jQuery instead.
Did you expect your interviewees to have memorized APIs? There's Google for looking those up when needed.
There's always underscore or backbone or require or dojo or prototype or yui or jquery...
Don't really know if this is good or bad!
Not really. Just replacing document.getElementsByTagName with document.querySelectorAll (the native, browser-implemented version of what jQuery does) will generate a 150-200x perf hit, depending on the browser.
The reason for that is twofold. First, getElementsByTagName doesn't have to parse a selector, figure out what is being asked for, or potentially fall back on an explicit JS implementation (jQuery supports non-native selectors; in fact, I believe you can implement your own). But the parsing overhead should be minor in this particular case.
Second, and the real reason for the difference: getElementsByTagName cheats something fierce. It doesn't return an Array like everybody else, it returns a NodeList, and the NodeList is lazy: it won't do anything before it needs to. Essentially, just calling document.getElementsByTagName is the same as calling an empty function. Serialize the NodeList to an array (using Array.prototype.slice.call) and bam, 150~200x perf hit.
See http://jsperf.com/vanillajs-by-tag/2 for these two alterations added to the original "vanilla JS" perf cases.
There is a significant overhead to jQuery, but it's ~3x compared to the equivalent DOM behavior, not 400x.
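The laziness being described can be sketched outside a browser with a toy stand-in for the lazy collection (makeLazyList and the fake element names here are invented for illustration; they are not real DOM APIs):

```javascript
// Toy model of a lazy collection, illustrating why just calling
// getElementsByTagName measures as almost free: no traversal happens
// until the collection is actually indexed or serialized.
function makeLazyList(walkDom) {
  var cache = null;
  return {
    // Models reading .length or an index on the NodeList.
    materialize: function () {
      if (cache === null) cache = walkDom(); // the expensive part
      return cache;
    }
  };
}

var walks = 0;
var list = makeLazyList(function () {
  walks++;                          // count how often we "walk the DOM"
  return ['p#1', 'p#2', 'p#3'];     // pretend these are <p> elements
});

// Equivalent of `document.getElementsByTagName('p')` alone:
console.log('walks after creation:', walks);                 // 0

// Equivalent of `Array.prototype.slice.call(nodeList)`:
var arr = list.materialize().slice();
console.log('walks after serializing:', walks, arr.length);  // 1 3
```

This is why a benchmark that only *calls* getElementsByTagName measures the cost of allocating the wrapper, not the cost of the query.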
Depends what you mean by "for real". The document.getElementsByTagName comparison is bullshit: gEBTN is lazy, so just calling it basically doesn't do any work; it just allocates a NodeList object and returns.
If you serialize the NodeList to an array, or use document.querySelectorAll instead (it returns a static NodeList rather than a live, lazy one), you get ~3x between the native version and the slowest libraries, not 400x.
What gets me is when people include jQuery and then further bog things down by loading a lot of plug-ins to do things that could easily be accomplished by adding a few lines of code of their own. Even if you do need and include jQuery, it doesn't mean you have to use it for every piece of javascript in your app.
Many times, a plug-in will do a lot more than you need it to. If your primary goal is just to get rid of the 300ms delay translating tap events into click events, you don't need a library with full gesture support; you need half a dozen lines to listen for touch events.
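Those half-dozen lines might look something like this sketch (bindFastTap is a made-up name, and real code would also need to handle touchmove/scrolling, which this deliberately ignores):

```javascript
// Fire the handler on touchend immediately instead of waiting for the
// browser's delayed synthetic click, and swallow that click when it
// arrives so the handler doesn't run twice.
function bindFastTap(el, handler) {
  var tapped = false;
  el.addEventListener('touchend', function (e) {
    tapped = true;
    e.preventDefault();   // ask the browser not to fire the delayed click
    handler(e);
  });
  el.addEventListener('click', function (e) {
    if (tapped) { tapped = false; return; }  // swallow the follow-up click
    handler(e);                              // mouse users still work
  });
}
```

Usage would just be `bindFastTap(button, onActivate)` in place of a click listener.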
If you just need client-side persistence for a few basic things in localStorage, you probably don't need a plug-in with a complex query syntax.
Cannibalize a library if you need to and pull out the bits you need. You don't have to include the whole kitchen sink.
That's what Zepto[0] is for: jQuery's API at 20% of the size (although it drops some features; e.g. $(selector) is pretty directly proxied to document.querySelectorAll, so $('> .foo') works in jQuery but blows up in Zepto).
It can be written as

jQuery.post('path/to/api', {banana: 'yellow'}, function (data) {
  alert("Success: " + data);
});

which is much simpler and easier than

var r = new XMLHttpRequest();
r.open("POST", "path/to/api", true);
r.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
r.onreadystatechange = function () {
  if (r.readyState != 4 || r.status != 200) return;
  alert("Success: " + r.responseText);
};
r.send("banana=yellow");
Never mind, got the joke. But I think jQuery helps you write code faster sometimes.
Heh.
jQuery has to parse the selector and figure out that it's of the form "#id", which requires running a regular expression. A lot more is happening.
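A rough sketch of the kind of fast-path check being described (the regex and all names here are illustrative, not jQuery's actual internals):

```javascript
// If the selector is exactly "#someId", a selector engine can skip its
// full parser and go straight to the cheap id lookup; anything else
// falls through to the general engine.
var idOnly = /^#([\w-]+)$/;

function quickSelect(selector) {
  var m = idOnly.exec(selector);
  if (m) return 'byId:' + m[1];       // would call document.getElementById(m[1])
  return 'full-parse:' + selector;    // would fall back to the full engine
}

console.log(quickSelect('#main'));    // byId:main
console.log(quickSelect('p.note'));   // full-parse:p.note
```

Even with the fast path, running the regex on every call is the "a lot more is happening" overhead compared with calling document.getElementById directly.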
The whole jQuery call from start to finish takes 2.85 microseconds, in what is presumably a real benchmark, but microbenchmarks like this are hard to interpret and basically meaningless. But yes, if your app needs to do a burst of 350,000 jQuery calls in a tight loop and you are bummed that the whole thing takes a full second, you should then optimize using document.getElementById.
The sentiment is borne out by the facts when you dig a little deeper. Just calling document.getElementById without using the return value does not actually dig into the DOM and find the element according to this comment: http://news.ycombinator.com/item?id=4436438
Seems like you might have some other optimization work to do at that point :)