<div id=result></div>
<script>
document.getElementById("result").textContent = "Why do it this way—";
document.querySelector("#result").textContent = "—or even this way—";
result.textContent = "—when you can do it this way?";
</script>
Edit: adding another similar test to this page, window[`test${i}`] takes roughly twice as long as document.querySelector(`#test${i}`) in Firefox, but only half as long in Chromium—which is still a bit slower than document.getElementById(`test${i}`) in Chromium, and than window[`test${i}`] in Firefox.

Also, to avoid any doubt: if something’s called a golfing trick, you almost certainly shouldn’t use it in normal code.
I confess I use this technique on every page on my website, in my carefully-golfed light/dark mode switcher. (https://chrismorgan.info/blog/dark-theme-implementation/ has a slightly-expanded version of it, and uses a more sensible document.querySelector instead.)
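For anyone wanting to reproduce timings like the ones above, a minimal harness along these lines works (a sketch, not a rigorous benchmark: no warmup, no statistics; the DOM calls are browser-only, and the `test${i}` IDs are the ones from the comment above):

```javascript
// Minimal timing harness. performance.now() exists in both browsers and Node.
function time(label, fn, iterations = 100000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn(i);
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)}ms`);
  return ms;
}

// In a browser, after generating elements with ids test0..test99999:
// time("getElementById", i => document.getElementById(`test${i}`));
// time("querySelector",  i => document.querySelector(`#test${i}`));
// time("window[...]",    i => window[`test${i}`]);
```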
Similarly, $0 in the DevTools console gives access to the currently selected element.
querySelector needs the '#' in this case.
https://bugs.chromium.org/p/project-zero/issues/detail?id=12...
FormName.FieldName.value = "foo";

https://portswigger.net/research/dom-clobbering-strikes-back
> window[`test${i}`]
surely you must mean `eval('test'+i)` :)
1. It doesn’t need to be a valid JavaScript identifier, though if it isn’t you’ll need to use subscripting syntax to access it:
> document.body.id = "like-this";
> window["like-this"]
<body id="like-this">
2. It’s not added to the window object; rather, property lookup on a window object falls back to looking up elements by ID if there is no such property:

> typeof example
"undefined"
> document.body.id = "example";
> example
<body id="example">
> example = "no longer an element";
> example
"no longer an element"
> delete example;
true
> example
<body id="example">

One of? You have tricks better than this? This is so fucking spicy, holy cow. How did I not know this? Excuse me while I add this to all my future code for no good reason :).
They are both completely different and almost no one mentions how they differ in these comparison blog articles.
querySelector returns a static node while getElementById returns a live node. If the element returned by getElementById is deleted from the DOM, the variable becomes unavailable, while with querySelector you get a snapshot of the node that lives on.
If you use both of them the same way or don't know the difference, you are gonna have a bad time.
https://developer.mozilla.org/en-US/docs/Web/API/Document_ob...
The difference is whether membership changes in the collection are reflected immediately. Changes to the nodes themselves are reflected as usual either way, and node references do not spookily invalidate or repoint themselves.
What you’re talking about is only applicable or relevant for the methods that return collections. querySelectorAll returns a static NodeList, getElementsByName/getElementsByTagName/getElementsByClassName return live NodeLists or HTMLCollections.
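The live-vs-static distinction for collections can be sketched without the DOM at all (a toy model, not browser internals): a static list is a snapshot taken at query time, while a live collection re-reads the backing store on every access.

```javascript
// Toy model: "store" stands in for the document's elements.
const store = ["a", "b", "c"];

const staticList = [...store]; // snapshot, like querySelectorAll
const liveList = {             // recomputed on access, like getElementsByClassName
  get length() { return store.length; },
  item(i) { return store[i]; },
};

store.push("d"); // a "DOM mutation"

staticList.length; // 3 — the snapshot doesn't see the new element
liveList.length;   // 4 — the live collection does
```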
I'm mostly sure that this is not true: https://imgur.com/a/XaS3b0W
Instead they decided against doing live node lists in future but couldn’t change how the older methods work as that would break websites. In the world of DOM/JS you can’t really make breaking changes.
https://humanwhocodes.com/blog/2010/09/28/why-is-getelements... has more context.
querySelector *does not* take 62ms to run. Both of them take 0.01ms at most, try it yourself. This is the sort of micro optimization you should not concern yourself with.
How often do you need to select unique elements by ID? Don't use IDs in the first place.
This is akin to using `i--` in loops to "speed up your code" — we're past that.
FWIW, JS old timers have known querySelector is slower than getElementById since querySelector became a thing.
[0] https://developer.mozilla.org/en-US/docs/Web/API/Performance...
Wait a minute, does everyone NOT use IDs for unique elements on a page (as opposed to every element)? I'm genuinely interested in knowing why not to use IDs. There are unique elements that need to be selected on a page, like #logout or something.
There are some scenarios you may not expect right away in which this is the case: sometimes you need to duplicate elements for responsive layouts, for example.
Not as an app dev, but if you transpile your code anyway, a transpiler plugin that does this and similar optimizations could be neat.
This would be the correct approach if you're interested in the "sterile laboratory" performance of these APIs. But the average webpage is not going to be doing a bunch of throwaway work before it starts selecting elements.
I think it would actually be much more interesting to see the cold-start results to see if they're comparable to each other. Hypothetically, if e.g. getElementById is only faster after the result has been cached by this simulation, then I think any conclusions about real-world impact here could be misleading.
https://jsbenchit.org/?src=25e097f939f76b559b2515430fb5e459
I'm a little surprised. Sure, I'd have expected getElementById to be faster, but honestly I'd have expected browser implementations of querySelector to do a relatively trivial up-front check: is the selector a simple ID? If so, call getElementById. I suppose that adds overhead to all queries, but that's true of many kinds of "best case" optimizations (in the best case it's faster, but it adds some overhead for any non-best case).
Still, I don't care about this level of optimization. I'll just continue to use querySelector everywhere because it's more flexible. No code I've ever written looks up so many elements in a single interaction that this micro-optimization would ever matter to me.
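That kind of up-front check is easy to sketch in user code (a hypothetical wrapper, not what any engine actually does; the regex deliberately accepts only "safe" plain-ID selectors and sends everything else down the slow path):

```javascript
// Return the bare id if the selector is exactly one simple #id, else null.
function simpleIdOf(selector) {
  const m = /^#([A-Za-z][A-Za-z0-9_-]*)$/.exec(selector);
  return m ? m[1] : null;
}

// Hypothetical wrapper: route plain-ID selectors to getElementById,
// everything else to the full selector engine.
function fastQuerySelector(selector) {
  const id = simpleIdOf(selector);
  return id !== null
    ? document.getElementById(id)       // hash-map lookup
    : document.querySelector(selector); // full selector engine
}
```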
Both getElementById and querySelector are quite fast, down to the level of measuring individual branches in a micro benchmark. querySelector does have to do a bit more work to look up the cached query, plus a handful of extra branches over getElementById; it's not scanning the document for an ID query unless you're in quirks mode, though.
For example now you have to worry about whether there are any characters in the ID that need escaping in a selector (eg `.`), something that may not be easy to verify when formatting an ID out of variables.
So I'd suggest preferring getElementById for its directness, rather than for micro-optimisation reasons.
(In principle the same should be true for getElementsByClassName, but the live NodeLists returned by that method are a trap for the unwary, so neither option is ideal.)
Exactly. This is the type of nano-optimization that is pretty common to see in e.g. SO answers. Sure, selecting _a hundred thousand_ items in a loop might be some milliseconds faster, but so what? How often are you selecting more than a few at most?
Firefox 96 (Nightly): document.getElementById 2–4ms avg 3ms, document.querySelector 25–27ms avg 27ms.
Chromium 96 (stable): document.getElementById 11–37ms avg 19ms, document.querySelector 86–155ms avg 101ms.
I’m also a touch surprised by the difference between getElementById and querySelector, because I vaguely recall querySelector being optimised in browsers for the ID case some years back so that there was negligible difference.
(P.S. seeing Firefox’s version number continuing to creep up on Chromium’s, soon to overtake, I wish browsers would scrap their version numbering systems and switch to YYYY.MM instead, or even YYMM like Windows if they want just one number. Can’t even claim user-agent sniffing hazards any more since they’re slightly killing those off and reaching three digits is going to cause some trouble anyway.)
> around 44ms and 206ms
so around 162ms difference per 100,000 elements. This doesn't concern most of us for anything less than 1,000 elements (1.62ms).
I use querySelector more often simply for aesthetics (consistent with other calls and qsAll).
That said, the fact that escaping is necessary could point to part of the reason why querySelector is slower. There's obviously some additional parsing needed just to work out what the developer is requesting. If you don't need to spend that CPU time, it's certainly better not to.
For getElementById:
This is a map lookup every time.
For querySelector:
Chrome caches the parsed selector, but the benchmark doesn't use the same ID twice in any run, so the cache is not effective within a given run. Chrome also has a 256 query limit (per document) on the cache [1] which means that even though the benchmark runs 105 times, each time the browser is parsing 100,000 selectors since the cache would have the last 256 but it always starts at 0. querySelector does have a fast path [2] that calls getElementById which the benchmark hits, but the parsing cost is dominating.
So the benchmark is really measuring selector parsing vs a map lookup. Firefox might have a separate fast path for ID looking selectors that skips the real css parser. It might also have a larger cache.
Chrome's cache should probably be bigger than 256 for modern web apps, but even so that wouldn't help a benchmark that's parsing 100k selectors repeatedly, since it doesn't make sense to have a cache that size just for micro benchmarks and real apps don't use 100k unique queries.
[1] https://source.chromium.org/chromium/chromium/src/+/main:thi...
[2] https://source.chromium.org/chromium/chromium/src/+/main:thi...
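The cost model described above can be caricatured in plain JS (a toy, not Chrome's actual code; the 256 limit stands in for the cache cap mentioned in the comment):

```javascript
const idMap = new Map();          // stands in for the document's id index
const selectorCache = new Map();  // stands in for the parsed-query cache
const CACHE_LIMIT = 256;

function parseSelector(selector) {
  // Stand-in for the real CSS parser — the cost that dominates the benchmark.
  return { id: selector.startsWith("#") ? selector.slice(1) : null };
}

function getElementByIdModel(id) {
  return idMap.get(id); // one map lookup, no parsing, nothing cached
}

function querySelectorModel(selector) {
  let parsed = selectorCache.get(selector);
  if (parsed === undefined) {
    parsed = parseSelector(selector); // 100k unique selectors => 100k parses
    if (selectorCache.size >= CACHE_LIMIT) {
      // simple eviction; with all-unique selectors the cache never pays off
      selectorCache.delete(selectorCache.keys().next().value);
    }
    selectorCache.set(selector, parsed);
  }
  // fast path: a bare-id query still ends in the same map lookup
  return parsed.id !== null ? idMap.get(parsed.id) : null;
}
```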
If the entire selector is a single ID matcher, execution time for querySelector probably would not be that much longer than for direct calls to getElementById; depending on implementation there might not even be any more stack frames. (Which would be a pain and might not matter, but there are a few ways you could do it if it did.)
In iOS 14.8 on this iPhone 12 mini with about half a battery, the getElementById test took 20ms, and the querySelector test 47. Of course I don't know what the implementation is actually doing, but those times seem awfully close together compared to those the author quotes.
[1] https://developer.mozilla.org/en-US/docs/Web/API/Document/ge...
[2] https://developer.mozilla.org/en-US/docs/Web/API/Document/qu...
This is totally unscientific, but I like to write experimental apps that manipulate the DOM heavily and Firefox is very often the fastest between Chrome and Safari in repainting the DOM (this wasn’t the case ~3 years ago).
This is less of an issue of Chrome being slow and more about measuring different things across the two browsers because of how the micro benchmark is structured.
Somewhat relieving that the results here follow the common-sense expectation (i.e. getElementById is faster than querySelector).