On that note, I do think it's a bit inaccurate to say that Google's efforts are/were primarily focused on data compression. Yes, they introduced brotli, which is essentially a better LZ77 implementation (the primary difference from gzip being that the window size isn't fixed at 32KB), but they also pioneered SPDY (which became HTTP/2 after going through the standards committee) and now QUIC.
(obligatory disclaimer that google gives me money, so I am biased)
Edit: Should note that I'm excited about QUIC as well, just thought I may have been missing some new development where it's (already) more usable outside Google's walls.
More efficient protocols might reduce the disparity, but there should always be one. Right?
Given that Polaris has some downsides, it might reduce it to the point that you wouldn't consider it. In its current state, Polaris requires a lot of pre-work, including dependency analysis via a real browser. It also serves up pages that wouldn't work with JavaScript disabled, and might not be terribly search-engine friendly.
http://blog.httpwatch.com/2015/01/16/a-simple-performance-co...
I don't think server push was used in this benchmark; it's arguably a poor man's way to achieve part of what's discussed in the referenced paper (serving early dependencies... early).
Polaris is also a missile. And a star. And a PowerPC port of Solaris.
We're always going to have name duplication. At least it's fairly easy to disambiguate with a Google search.
the internet is faster than ever, browsers/javascript are faster than ever, cross-browser compat is better than ever, computers & servers are faster than ever, yet websites are slower than ever. i literally cannot consume the internet without uMatrix & uBlock Origin. and even with those i often have to give up my privacy by selectively allowing a bunch of required shit from third-party CDNs.
no website/SPA should take > 2s on a fast connection (or > 4s on 3g) to be fully loaded. it's downright embarrassing. we can and must do better. we have everything we need today.
[1] https://s.ytimg.com/yts/jsbin/player-en_US-vfljAVcXG/base.js
Do you have a source for this? My understanding is that, in real usage, it is cheaper to load common libraries from a public CDN because, for something like jQuery, the library is likely to already be cached from another website, and the browser may even have an open SSL connection to the CDN.
Obviously 60 separate CDNs is excessive, but I don't know if the practice altogether is a bad idea.
When people talk about serving jQuery, or J. Random JavaScript library, from a CDN, they mean the specific version of jQuery (or whatever) that they're using. There's literally no guarantee that the specific version you need will be in any given user's browser cache, and this is exacerbated if you're loading multiple libraries from a CDN, or from different CDNs. If your CDNs serve files with low latency then it may not be a big problem, but not all CDNs do. Slow-responding CDNs will slow your page loads down, not the reverse.
Moreover, if you're serving over HTTP2/SPDY there's even less likely to be a benefit to using a CDN. Again, it's something you need to measure.
One area where a CDN (e.g., Cloudflare) can benefit you is by serving all your static content to offer users a low-latency experience regardless of where they are in the world, but that's rather a different matter from serving half a dozen libraries from half a dozen different CDNs.
As others have pointed out, the version matters. There are a gazillion different versions of jQuery out there and many websites are very slow to update them if ever. So you only benefit by visiting multiple sites that use exactly the same version.
Additionally there is a ton of different public CDNs. Now, jQuery itself has solved this by providing an official jQuery CDN and pushing it to developers. In fact, the first result for "jQuery CDN" is their own website.
I'd say jQuery is probably the least bad example of this. If you visit enough websites there's a non-zero chance you'll have a CDN cache hit for jQuery on a few of them.
The problem is that jQuery rarely comes alone. Even jQuery UI is less widely deployed than jQuery and thus less likely to be cached from a CDN. But once you get into plugin territory or third-party libraries all bets are off.
I'm fairly certain that jQuery is the only library that has some realistic chance of benefiting from a public CDN. Everything else is probably asking for trouble (if only because you're adding unnecessary single points of failure).
another important reason i personally don't use CDNs is the privacy of my users.
A guy I used to post with wrote a new forum for us all to post on (woo, splinter groups). It's pretty cool. One of the things it does is serve a static image of the underlying YouTube video and then load the player on click. When a 'tube might be quoted 7 times on a page, that's a pretty useful trick.
I'd just assumed this was a standard forum feature and then I opened a "Music Megathread" on an ipboard and holy shit loading 30 youtube players was painful.
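The trick described above can be sketched in a few lines. The video ID and class name below are placeholders, though the img.youtube.com thumbnail URL pattern is the one YouTube actually serves:

```html
<!-- Click-to-load embed: show a static thumbnail, swap in the real
     player (and its megabytes of JavaScript) only when clicked. -->
<div class="yt-lazy" data-video-id="VIDEO_ID">
  <img src="https://img.youtube.com/vi/VIDEO_ID/hqdefault.jpg" alt="video">
</div>
<script>
  document.querySelectorAll('.yt-lazy').forEach(function (el) {
    el.addEventListener('click', function () {
      var iframe = document.createElement('iframe');
      iframe.src = 'https://www.youtube.com/embed/' +
                   el.dataset.videoId + '?autoplay=1';
      iframe.allow = 'autoplay';
      el.replaceWith(iframe); // the player only loads at this point
    });
  });
</script>
```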
you're much better off just serving an html5 <video> tag. you can likely fit your entire video in mobile quality in that 1.22MB :D
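For self-hosted clips, the markup involved is minimal (file names here are hypothetical):

```html
<!-- preload="none" keeps the video off the wire until the user hits
     play; poster gives the static-image-until-click behaviour for free. -->
<video controls preload="none" poster="thumb.jpg" width="640" height="360">
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  Sorry, your browser doesn't support embedded video.
</video>
```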
I secretly wish there was some way that allows us (as a community) to collaboratively "pirate" articles, perhaps as a torrent (IPFS perhaps), so we only have to download the ascii text.
(I subscribe to most of the sites which I read regularly so I don't feel guilty about blocking ads but I really hope someone successfully makes a Google Contributor-style network where a trusted browser serves only static image/text ads, as much resistance as that will trigger)
The Internet isn't fast for everyone. I (in the UK) have no 3G signal, let alone 4G, and my broadband speed is pitiful -- but it will do. There is nothing I can do to ramp the pipe speed up. I do end up turning off JS and images a lot of the time, because otherwise it kills me.
As a web dev, I don't care for bloat, so I find it particularly irksome, and currently it's enough to deter me from going mobile. I once dreamed of having a modern smartphone in my pocket with an Internet connection, but the friction today puts me off. The UK was recently slammed for its retrograde networks.
https://twitter.com/xbs/status/626781529054834688
As in "my site homepage is less than 1 Doom" or "that crappy site is more than 3 Dooms" ...
Example 900+ comment page: https://news.ycombinator.com/item?id=11116274
Example 2200+ comment page: https://news.ycombinator.com/item?id=12907201
hear hear. and on mobile it's painful because I can't have those (on windows phone at least). planning on buying a DD-WRT compatible router soon so I can do some kind of router-level ad-blocking and browse on the phone again
PS: opera mobile for android has a built in adblocker
uBlock Origin exists for Firefox Mobile btw, but i prefer opera's speed/ui.
Basically: more power means more resources can be analysed in the same time, not faster answers.
Browser caches should be bigger. They also should be more intelligent. It does not make sense to evict a library from cache if it is the most popular library used. Maybe having two buckets, one for popular libraries and another for the rest.
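As a sketch of that two-bucket idea (the capacities, URLs, and "popular" list are all invented for illustration):

```javascript
// A protected bucket for a short list of very popular libraries, plus an
// ordinary LRU bucket for everything else, so page-by-page churn of images
// and one-off scripts can never evict something like jquery.js.
class TwoBucketCache {
  constructor(popularCapacity, generalCapacity, popularUrls) {
    this.popular = new Map();   // bucket reserved for well-known libraries
    this.general = new Map();   // LRU bucket for everything else
    this.caps = { popular: popularCapacity, general: generalCapacity };
    this.popularUrls = new Set(popularUrls);
  }
  bucketFor(url) {
    return this.popularUrls.has(url)
      ? { map: this.popular, cap: this.caps.popular }
      : { map: this.general, cap: this.caps.general };
  }
  put(url, body) {
    const { map, cap } = this.bucketFor(url);
    map.delete(url);                       // refresh recency
    map.set(url, body);
    while (map.size > cap) {               // evict least recently used
      map.delete(map.keys().next().value);
    }
  }
  get(url) {
    const { map } = this.bucketFor(url);
    if (!map.has(url)) return undefined;
    const body = map.get(url);
    map.delete(url);                       // bump to most recently used
    map.set(url, body);
    return body;
  }
}
```

A real browser would obviously key on more than URL (version, origin, headers), but the point is that the two pools age independently.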
I think it would help if the script tag had a hash attribute; then the cache could become more efficient. But without the first part it would be useless. Example:
<script src="https://cdn.example.com/jquery.js"
sha256=18741e66ea59c430e9a8474dbaf52aa7372ef7ea2cf77580b37b2cfe0dcb3fd7>
</script>
Or a different syntax (whatever, I'm not the W3C): <script src="https://cdn.example.com/jquery.js">
<hash>
<sha256>
18741e66ea59c430e9a8474dbaf52aa7372ef7ea2cf77580b37b2cfe0dcb3fd7
</sha256>
</hash>
</script>
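For what it's worth, this is very close to what eventually shipped as Subresource Integrity: an integrity attribute carrying a base64-encoded digest, plus crossorigin for CDN-hosted files. The hash value below is a placeholder, not jQuery's real digest:

```html
<script src="https://cdn.example.com/jquery.js"
        integrity="sha256-PLACEHOLDER_BASE64_DIGEST"
        crossorigin="anonymous"></script>
```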
I would like to run an experiment, but as I'm not that experienced with webdev, it could take too much time for me. Test all major browsers with a fresh install and default settings: go to reddit or another link aggregator, load several links in the same order in every browser, and check how efficiently the cache was used. I would expect that after the 10th site is loaded, nothing would remain from the 1st one, even though the same version of some library -- and maybe even the same CDN link -- was used. I am amazed how quickly fully static pages load even when I'm on a capped-speed mobile connection (after I've used up my 1.5 GB packet).
EDIT: The most helpful thing would be to have good dead-code removing compilers for JavaScript.
Subresource Integrity Addressable Caching https://hillbrad.github.io/sri-addressable-caching/sri-addre...
I wish that web browsers would use content addressing to load stuff and do SRIs. If I already loaded a javascript file from another url, why load it again?
There's a site I read, really like and financially support, but which has some pretty terrible slowness & UI issues. It's so bad that they've recently started a campaign to fix those issues. But when I check Privacy Badger, NoScript and uBlock, there's a reason it's so terribly slow: they're loading huge amounts of JavaScript and what can only be called cruft.
Honestly, I think that they'd come out ahead of the game if they'd just serve static pages and have a fundraising drive semi-annually.
though not for anything requiring direct hardware access, AAA games included.
You say as if that's a bad thing. Just wondering, for instance, do you use some standalone maps software instead of Google/Bing/Openstreet/... maps?
• The scheduler itself is just inline JavaScript code.
• The Scout dependency graph for the page is represented as a JavaScript variable inside the scheduler.
• DNS prefetch hints indicate to the browser that the scheduler will be contacting certain hostnames in the near future.
• Finally, the stub contains the page’s original HTML, which is broken into chunks as determined by Scout’s fine-grained dependency resolution.
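Putting those four pieces together, a stub might look roughly like this. All names and the graph contents are invented for illustration; the real scheduler is far more involved:

```html
<!-- Hypothetical stub layout per the description above -->
<link rel="dns-prefetch" href="//cdn.example.com">  <!-- DNS prefetch hint -->
<script>
  // Scout's dependency graph, inlined as a JavaScript variable
  var graph = { 'app.js': ['jquery.js'], 'jquery.js': [], 'style.css': [] };
  // inline scheduler: fetches objects in dependency order (details elided)
  schedule(graph);
</script>
<!-- ...original HTML, chunked per Scout's fine-grained dependencies... -->
```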
There are many ways to accelerate page speed and, like everything else, it's a question of costs and benefits. For most things, some level of technical debt is OK and CDNs even for jQuery are good. Of course, good design and setting things up right is always the best - and the other question is where your site traffic comes from.
> Mickens offers the analogy of a travelling businessperson. When you visit one city, you sometimes discover more cities you have to visit before going home. If someone gave you the entire list of cities ahead of time, you could plan the fastest possible route. Without the list, though, you have to discover new cities as you go, which results in unnecessary zig-zagging between far-away cities.
What a terrible analogy. Finding a topological sorting is O(|V|+|E|), while the traveling salesman problem is NP-complete.
It's not a terrible analogy. You request an HTML page and you don't know until after you load it (visit the initial city) exactly what other resources--images, css, js, etc.--you'll need to download (additional cities to visit).
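To make the contrast concrete: once the whole graph is known up front, ordering the fetches is just a depth-first topological sort. The graph contents below are hypothetical, and the sketch assumes the graph is acyclic:

```javascript
// Topological sort of a dependency graph: O(|V| + |E|), no TSP required.
// graph maps each resource to the resources it depends on.
function topoSort(graph) {
  const order = [];
  const seen = new Set();
  function visit(node) {
    if (seen.has(node)) return;
    seen.add(node);
    for (const dep of graph[node] || []) visit(dep); // dependencies first
    order.push(node);
  }
  Object.keys(graph).forEach(visit);
  return order; // each resource appears after all of its dependencies
}

const fetchOrder = topoSort({
  'app.js': ['jquery.js'],
  'jquery.js': [],
  'style.css': [],
});
console.log(fetchOrder); // logs: [ 'jquery.js', 'app.js', 'style.css' ]
```

Without the graph, the browser discovers edges only as responses arrive, which is the zig-zagging the analogy is getting at.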