- The interaction of HTTP/2 Push with browser caches was left mostly unspecified, so browsers implemented different ad-hoc policies.
- Safari in particular was pretty bad.
- Since HTTP/2 Push worked at a different layer than the rest of a web application, our offering centered around reverse-engineering traffic patterns with the help of statistics and machine learning. We would find the resources that were most often uncached, and push those.
- HTTP/2 Push, when well implemented, offered reductions in time to DOMContentLoaded on the order of 5 to 30%. However, web traffic is noisy, and visitors fall into many different buckets by network connection type and latency. Finding that 5% to 30% performance gain required looking at those buckets. Moreover, DOMContentLoaded doesn't include image loading, which dominated overall page load time.
- As the size of, say, the JavaScript payload increases, the gains from using HTTP/2 Push tend asymptotically to zero.
- The PUSH_PROMISE frames could indeed increase loading time because they had to be sent while the TCP connection was still cold. At that point, each byte costs more in latency.
- If a pushed resource was not matched or not needed, load time increased again.
Being a tiny company, we eventually moved on and found other ways of decreasing loading times that were easier for us to implement and maintain and also easier to explain to our customers.
People often claim HTTP/2 is superior due to its multiplexing, but they always reason with a mental model of true stream-based flows and leave TCP out of the argument entirely. That model is just a model: underneath, you have packet-based TCP flows that are abstracted away to mimic a continuous stream behind a socket - and those packets matter.
Chrome's implementation was best, but the design of HTTP/2 push makes it really hard to do the right thing. Not just when it comes to pushing resources unnecessarily, but also delaying the delivery of higher priority resources.
<link rel="preload"> is much simpler to understand and use, and can be optimised by the browser.
Disclaimer: I work on the Chrome team, but I'm not on the networking team, and wasn't involved in this decision.
As someone who implemented HTTP/1.1 almost feature-complete [1], I think the real problem on the web is the lack of a test suite.
There needs to be a test suite for HTTP that both clients and servers can test against, one that is not tied to the internals of any particular web browser.
I say this because even a simple part of the spec, like 206 with multiple ranges, is essentially never supported correctly by any web server. Even nginx, Apache, and Google's own DNS-over-HTTPS servers behave differently when the request headers expect multiple response bodies. And that's before getting into the unpredictability of chunked encoding, which is another nightmare.
I really think there should be an official test suite that is maintained to reflect the specifications, similar in intent to the (now outdated) Acid tests.
Adoption of new HTTP versions will always stay low if you spend months implementing exactly what the spec says, only to discover that no real-world web server implements it the same way.
[1] https://github.com/tholian-network/stealth/blob/X0/stealth/s...
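A test suite like that would mostly consist of small conformance checks. As a minimal sketch (in Python, with a made-up function name and a synthetic response, not from any real suite), here is the kind of check the multi-range 206 case would need:

```python
# Given a response to a multi-range request (e.g. "Range: bytes=0-3,8-11"),
# verify the server actually honored the ranges as the spec requires.

def check_multirange_response(status: int, headers: dict, ranges: list) -> bool:
    """Return True if the response conforms to RFC 7233 for the given ranges."""
    if len(ranges) > 1:
        # Multiple ranges require a 206 with a multipart/byteranges body.
        ctype = headers.get("Content-Type", "")
        return status == 206 and ctype.startswith("multipart/byteranges")
    # A single range may come back as a plain 206 with Content-Range.
    return status == 206 and "Content-Range" in headers

# Many servers answer a multi-range request with a single-range 206
# (or a plain 200), which this check flags as non-conformant:
ok = check_multirange_response(
    206,
    {"Content-Type": "text/plain", "Content-Range": "bytes 0-3/12"},
    [(0, 3), (8, 11)],
)
```

A real suite would of course also parse the multipart body and compare the byte ranges themselves; this only checks the response head.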
The web would be fine if they stopped adding features today for the next 10 years. The massive complexity of the browser will eventually be a detriment to the platform
The post says less than 0.1% of connections in Chrome receive a push event. Some people will always try out the cutting edge, but the fact that it hasn't spread after several years is a pretty good indicator that it isn't producing the expected results.
I don't know why things that are "nice to have, but not essential" and at the same time not really working need to be kept just because they're in a standard. If it were essential I'd view it differently, but in this case I hope it gets dropped.
Server push is different, because it's supposed to be an invisible optimization, so it could be dropped without anyone noticing. But most things are not invisible.
However, IMHO the Internet has mostly degraded into a huge CDN. HTTP/2 is often not even handled by the final endpoint. Decentralized caching and proactive cache control have become a niche.
Having said that, I still dream of a world in which browsers just care about rendering, rather than defacto shaping the future arch of the net on all layers (DoH, https cert policies, quic MTUs, ...)
Of course if your HTML is small it may still be slightly slower than push. However the advantage is that you don't push cached resources over and over again.
With the variety of streaming options available now, it really seems antiquated.
But "Blink doesn't want to do it" isn't consensus on its own, this page suggests other clients implement this and offers no opinion about whether they intend to likewise deprecate the feature.
But it never happened and so made it into the H2 standard
I feel we've made a big mistake following Google's micro service requirements for what should be a ubiquitous hypermedia back bone.
It's bewildering that so many smart people could end up conflating the two for our standard protocols.
Client request:

    GET / HTTP/1.1
    Host: example.com

Server response:

    HTTP/1.1 103 Early Hints
    Link: </style.css>; rel=preload; as=style
    Link: </script.js>; rel=preload; as=script

    HTTP/1.1 200 OK
    Date: Fri, 26 May 2017 10:02:11 GMT
    Content-Length: 1234
    Content-Type: text/html; charset=utf-8
    Link: </style.css>; rel=preload; as=style
    Link: </script.js>; rel=preload; as=script

    <!doctype html>
    [... rest of the response body is omitted from the example ...]
[1]: https://tools.ietf.org/html/rfc8297

What's the reported status code of this response in typical libraries? Usually the status code is a single value, not a list.
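Typical libraries report only the final status (200 here); interim 1xx heads like the 103 are informational and are usually consumed internally. A rough sketch (plain Python over synthetic bytes modeled on the example above, not a production parser) of how the two heads could be split:

```python
# Split a byte stream containing interim and final response heads.
# Bodies are ignored; this only demonstrates the head sequence.

def split_responses(raw: bytes):
    """Return a list of (status, header-dict) for each response head."""
    results = []
    for block in raw.split(b"\r\n\r\n"):
        if not block.startswith(b"HTTP/"):
            continue  # skip body fragments and the trailing empty chunk
        lines = block.decode("ascii").split("\r\n")
        status = int(lines[0].split(" ")[1])
        headers = {}
        for line in lines[1:]:
            name, _, value = line.partition(": ")
            headers.setdefault(name, []).append(value)
        results.append((status, headers))
    return results

raw = (
    b"HTTP/1.1 103 Early Hints\r\n"
    b"Link: </style.css>; rel=preload; as=style\r\n\r\n"
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html; charset=utf-8\r\n\r\n"
)
responses = split_responses(raw)
# responses[0] is the 103 hint head, responses[1] the final 200; an HTTP
# library typically exposes only the latter as "the" status code.
```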
Maybe it is useful outside of the browser context, e.g. in gRPC.
I have tried to use it once, and the hassle of distinguishing between first-time and repeat visits is simply not worth it. Even the hassle of using <link rel="preload"> is usually not worth it in large apps — if you have time for that, it's better spent reducing the size of your assets.
For example, a webpack stage could render inside a sandbox each page of your site, detect which resources get loaded, and add all of those as preload/server push entries. The server itself can keep records of which resources have been pushed to a specific client, and not push them again.
Writing preload lists by hand is never going to scale with today's web apps with hundreds or thousands of requests for a typical page.
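The server-side bookkeeping described above (remembering which resources were already pushed to each client, and pushing only the difference) can be sketched like this; the class name, session IDs, and asset paths are all hypothetical:

```python
# Toy per-session push bookkeeping: track what has already been pushed
# and return only the not-yet-pushed subset on each request.

class PushTracker:
    def __init__(self):
        self._pushed = {}  # session id -> set of already-pushed paths

    def to_push(self, session_id: str, assets: list) -> list:
        """Return the subset of assets not yet pushed to this session."""
        seen = self._pushed.setdefault(session_id, set())
        fresh = [a for a in assets if a not in seen]
        seen.update(fresh)
        return fresh

tracker = PushTracker()
first = tracker.to_push("abc", ["/app.js", "/app.css"])   # both are new
second = tracker.to_push("abc", ["/app.js", "/app.css"])  # nothing left
```

In practice the state would have to live somewhere shared (or in a cookie), since consecutive requests may hit different servers; that is part of why this approach is hard to get right.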
Size is only semi-related to latency. For small resources, latency costs dominate. That's what push addresses.
I use it on my pet project website, and it allows for a remarkable first page load time.
And I don't have to make all these old-school tricks, like inlining CSS & JS.
HTTP/2 Push allows for such pleasant website development. You can have hundreds of images on a page, and normally you'd be latency-bound trying to load them in a reasonable amount of time. The old-school way to solve this is to merge them all into one big sprite image and use CSS to display parts of it instead of separate image URLs. That's an ugly solution to a latency problem. Push is so much better!
The fact that 99% of people are too lazy to learn a new trick shouldn't really hamstring everyone into using 30-year-old tricks to get decent latency!
What amazes me is that in this <1% there is not even Google, which implemented push in its own protocol. Any insights on that?
Server push is most useful in cases where latency is high, i.e. server and client are at different ends of the globe. It helps reduce round trips needed to load a website. Any good CDN has nodes at most important locations so the latency to the server will be low. Thus server push won't be as helpful.
Remember, most sites using CDNs still go to the root server for HTML and other no-cache content. It's only the more optimised sites that figure out how to deliver those resources straight from the CDN without consulting the end server.
Also, this is very Google: "Well, few people have adopted it over five years, time to remove it." HTTPS is almost as old as HTTP and is only now starting to become universal. Google has no patience, seriously.
https://tools.ietf.org/html/draft-ietf-httpbis-cache-digest-...
I even spent the best part of a week back in 2017 trying to build a bloom filter into Chrome's HTTP cache so each connection could send to the server a tiny filter of the resources already cached, and then the server could send back a package of "everything needed to render the page you have requested". Turns out the HTTP cache is complex so I gave up.
If fully implemented, it ought to be able to cut render times dramatically, and to eclipse the performance benefit of CDNs (where the main benefit is reducing latency for static assets).
There are potential privacy concerns, but no more so than first-party cookies.
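The bloom-filter digest idea can be sketched roughly like this (Python; the parameters and hashing scheme are illustrative, not the cache-digest draft's actual encoding):

```python
# Hash each cached URL into a small bit array the client could send along,
# so the server can skip pushing anything that (probably) hits the cache.

import hashlib

def digest(urls, bits=256, hashes=3):
    """Build a tiny bloom filter (as an int bitmask) over cached URLs."""
    filt = 0
    for url in urls:
        for i in range(hashes):
            h = hashlib.sha256(f"{i}:{url}".encode()).digest()
            filt |= 1 << (int.from_bytes(h[:4], "big") % bits)
    return filt

def probably_cached(filt, url, bits=256, hashes=3):
    for i in range(hashes):
        h = hashlib.sha256(f"{i}:{url}".encode()).digest()
        if not filt & (1 << (int.from_bytes(h[:4], "big") % bits)):
            return False  # definitely not cached
    return True  # probably cached (false positives are possible)

filt = digest(["/style.css", "/app.js"])
# The server would push only resources where probably_cached(...) is False;
# a false positive just means a resource isn't pushed, never a wrong push.
```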
The key point for performance is to send relations in parallel in separate HTTP streams. Even without Server Push, Vulcain-like APIs are still faster than APIs relying on compound documents, thanks to Preload links and to HTTP/2 / HTTP/3 multiplexing.
Using Preload links also fixes the over-pushing problem (pushing a relation already in a server-side or client-side cache) and some limitations regarding authorization (by default most servers propagate neither the Authorization HTTP header nor cookies in the push request), and it is easier to implement.
(By the way Preload links were supported from day 1 by the Vulcain Gateway Server.)
However, using Preload links introduces a bit more latency than Server Push. Is the theoretical performance gain worth the added complexity? To be honest, I don't know. I suspect it isn't.
Using Preload links combined with Early Hints (the 103 status code - RFC 8297) may totally remove the need for Server Push. And Early Hints are way easier than Server Push to implement (it's even possible in PHP!).
Unfortunately browsers don't support Early Hints yet.
- Chrome bug: https://bugs.chromium.org/p/chromium/issues/detail?id=671310
- Firefox bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1407355
For the API use case, it would be nice if Blink added support for Early Hints before killing Server Push!
The bad news: for website requests under 2 MB, you spend most of your time waiting for round trips to complete; that is, you spend most of the time warming up the TCP connection. So it's very likely that if you redo your benchmarks clearing the cached congestion-window state between runs (google tcp_no_metrics_save), you will get completely different results.
Here is an analogy: if you want to compare the acceleration of two cars, you would have to race them from point A to point B starting at a velocity of 0 mph at point A, and measure the time it takes to reach point B. In your benchmark, you basically allowed the cars to start 100 meters before point A, and measured the time between passing A and B. Granted, for cars, acceleration decreases with increasing velocity; for TCP it's the other way around: the amount of data allowed per round trip gets larger with every round trip (usually roughly exponentially).
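The slow-start effect is easy to quantify with a toy model (Python; an illustrative initial window of 10 segments of ~1460 bytes, doubling every round trip, ignoring loss and pacing):

```python
# Count round trips needed to deliver a payload over a cold TCP connection
# under an idealized slow-start model: the congestion window starts at
# `initcwnd` segments and doubles each round trip.

def round_trips(payload_bytes, mss=1460, initcwnd=10):
    cwnd, sent, trips = initcwnd, 0, 0
    while sent < payload_bytes:
        sent += cwnd * mss  # bytes deliverable this round trip
        cwnd *= 2           # window doubles while in slow start
        trips += 1
    return trips

small = round_trips(14_000)    # ~14 KB fits in the initial window: 1 RTT
big = round_trips(2_000_000)   # ~2 MB needs several RTTs of ramp-up
```

On a warm connection the window is already large, so the same payload takes far fewer round trips; that is exactly why benchmarks that reuse a warmed connection look so different.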
I'm aware of this "issue" (I must mention it in the repo, and I will). However, I don't think it matters much for a web API: in most cases, inside web browsers, the TCP connection will already be "warmed" when the browser sends the first (and subsequent) requests to the API, because the browser will already have loaded the HTML page, the JS code, etc., usually from the same origin. And even if that isn't the case (mobile apps, APIs served from a third-party origin...), only the first requests will have to "warm" the connection (and then it doesn't matter whether you use compound or atomic documents); all subsequent requests during the lifetime of the TCP connection will use a "warmed" connection.
Or am I missing something?
Anyway, a PR to improve this benchmark (which aims at measuring the difference - if any - between serving atomic documents vs serving compound documents in real-life use cases) and show all cases would be very welcome!
The serverless/edge technologies becoming available at CDNs are making it easy to imagine "automatic push" could come soon.
Any chance there are folks from Vercel or Netlify here and can shed light on why push hasn't been implemented in their platforms (or if it has)? At first glance, it seems like Next.js in particular (server rendering) is ripe for automatic push.
> Chrome currently supports handling push streams over HTTP/2 and gQUIC, and this intent is about removing support over both protocols. Chrome does not support push over HTTP/3 and adding support is not on the roadmap.
I am shocked & terrified that Google would consider not supporting a sizable chunk of HTTP in their user agent. I understand that uptake has been slow, and that this is not popular. But I do not see support as optional. This practice of picking & choosing what to implement of our core standards, of dropping core features that were agreed upon by consensus - because 5 years have passed & we're not sure yet how to use them well - is something I bow my head at & just hope, hope we can keep on through.
That's funny, given the HTTP/2 RFC does see the support as optional.
Philosophically, I am highly opposed to the browser opting not to support this.
Webdevs have been waiting for half a decade for some way to use PUSH in a reactive manner, as I linked further in this thread,
And instead we get this absolute unit of a response.
This is just absolute hogwash, James. I cannot. Truly an epic tragedy that they would do this to the web, to a promising technology, after so little time to try to work things out, after so little support from the browser to try to make it useful.
You're not wrong but I am super disappointed to see such a lukewarm take from someone I respect & expect a far more decent viewpoint from.
If it's not useful, why keep support for it?
As I recall, Google added this push in spdy, so it makes sense for them to be the ones to push for it to be removed.
Adoption has been low because there hasn't been enough time for people to get comfortable with and switch to more modern web servers that support this type of thing, and most frameworks haven't figured out that having these sorts of features on by default could have major benefits. But it could happen.
This also highlights the core missing WTF, which is that the server can PUSH a resource, but there is no way for the client page to tell. 5 years later, we've talked & talked & talked about it[2], but no one has done a damned thing. Useless, biased development. PUSH gets to be used by the high & mighty, but it's useless to regular web operators, because no one cares about making the feature usable.
Now that they can declare it's not useful, they're deleting it. From the regular web. But not from Web Push Notifications, which will keep using PUSH - in a way that websites have never been afforded.
You'd think that if a resource got pushed, we could react to it. These !@#$@#$ have not allowed that to happen. They have denial of serviced the web, made this feature inflexible. Sad. So Sad.
Even without reacting to push, it seems obvious that there's just more work to do. Concluding anything from three years without progress reflects a terrible perception of time and of what an adoption curve looks like. A decade for critical, serious change to start taking root is fine. The expectations for progress & adoption are way out of whack.
So, so sad. Everything about this is wrong. You can't just unship the web like this; you need to support the HTTP features we agreed we wanted to create. None of that is happening. I hate what has happened to PUSH. This was critically, terribly mismanaged: the browser standards groups did so little with what the IETF wanted to do, and it's embarrassing. We screwed the pooch on this so many times, and pulling the plug now is a colossal failure of unbelievable proportions - breaking a very basic promise we ought to have made for the web, the ability to push things, which never got delivered into any useful capability.
This one issue causes me severe doubt in humanity. It is a terrible thing for the web. I can't believe we were so bad at this.
[1] https://developers.google.com/web/fundamentals/push-notifica...
This was driven by poor server + client support at large, and the complexity it introduced. LL-HLS instead uses byte ranges and open-ended ranges over HTTP/2 - almost Comet-style - to handle CMAF chunks.
> It is interesting to note that server push has been used in ways other than originally intended. One prominent example is to stream data from the server to the client, which will be better served by the upcoming WebTransport protocol.
I guess I'll have to go back to putting all the images base64 encoded into the html :-(
Instead of pushing the images just set headers for the images above the fold:
Link: </img/1.jpg>; rel=preload; as=image
The browser will then request the images it doesn't have cached already. The advantage of this method is that the browser can look up the images in its cache first and avoid transferring unnecessary data.
The downside is that it takes at least one round trip for the browser to request them. So if your HTML is short and quick to generate, the connection might go idle before you receive these requests.
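Building that header is trivial; a tiny sketch (Python, with hypothetical image paths):

```python
# Assemble one Link header value preloading the above-the-fold images,
# instead of pushing them outright.

def preload_header(image_paths):
    return ", ".join(f"<{p}>; rel=preload; as=image" for p in image_paths)

header = preload_header(["/img/1.jpg", "/img/2.jpg"])
# Sent as: Link: </img/1.jpg>; rel=preload; as=image, </img/2.jpg>; ...
```

The server framework then emits it as a single `Link:` response header alongside the HTML.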
I once read a technical comment saying that HTTP/2 push was superior to WebSockets, but I can't remember why. Also, what's the difference between push and server-sent events?
It's like your great-great-great-grandparents built a house out of brick. Each new generation there are more people. Everyone wants to live in the same house, but they can't all fit. They try to build the house larger, but it will only go so high with brick. So they start shoving tin and iron and steel into each new floor to reinforce it. Eventually you have a skyscraper where the top floors are built out of titanium, and the bottom floor is several-hundred-years-old brick. But hey, we have a taller building!
You could say this is a perfect example of evolution, like big bright red baboon butts. But if the evolution were conscious, we'd want things to improve over time, not just make the same crap scale larger.
No, it solves a completely different problem.
HTTP/2 Server Push is a mechanism where the server can send HTTP responses for resources it believes the client will need soon. (For example, if a client has just requested a web page with a bunch of images on it, the server could use Push to send those images without having to wait for the client to encounter the image tags and request the associated image resources.)
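As a toy model of that flow (plain Python, not a real HTTP/2 implementation; the manifest is made up):

```python
# On a page request, the server answers with the page plus pushed
# responses for the assets it expects the client to need next.

MANIFEST = {"/index.html": ["/style.css", "/hero.jpg"]}

def respond(path):
    """Return (main_response, pushed_responses) for a request, as
    (path, status) pairs standing in for full HTTP responses."""
    main = (path, 200)
    pushed = [(asset, 200) for asset in MANIFEST.get(path, [])]
    return main, pushed

main, pushed = respond("/index.html")
# The client receives /style.css and /hero.jpg without ever requesting
# them; a request for an unknown path pushes nothing.
```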
You are making a custom request to ask the server to push something to the browser.
    Browser: POST /preload?url=/next-page
    Server:  PUSH /next-page
Just cut out the middleman and make a regular request.
    Browser: GET /next-page
Even better, use the browser's preloading mechanisms, so that the browser knows how best to prioritize them. In fact, if you do it this way the browser can even start downloading subresources and prerendering the page.
Maybe just adding a rel=preload link tag dynamically would be better (do link tags work dynamically? I have no idea). Or just fetching with normal ajax and use a service worker.
[1]: http://instantclick.io/ "InstantClick"
Crafting a fast website is going to be messy and difficult for a good while still.
Two decades ago, hardware was slower, bandwidth was far more constrained, and browsers had fewer features and used fewer resources --- and yet page loads were often much faster than they are today!
Indeed, most of the "problems" web developers complain about are self-inflicted.
IME most people trying to optimize their way out of tag manager, monolithic SPA hell don’t generally bother with these kind of features outside of turning on all the cloudflare rewriting and praying. If performance was important to them and they knew what they were doing, they’d fix those first.
It's just not true that it's super easy to write fast pages. There's a huge amount of background you need to understand to optimize fonts, images, CSS, your server, your CMS, caching, scripts etc. There's multiple approaches for everything with different tradeoffs.
Even if you have the skills to do this solo, you might not have the budget or the time or the approval of management. Big websites also require you to collaborate with different developers with different skillsets with different goals, and you need to keep people in sales, graphic design, SEO, analytics, security etc. roles happy too.
Keep in mind that even without server push, HTTP/2 still fixes head-of-line blocking, which was a major reason the folk wisdom of inlining things and using sprites popped up in the first place.
I'm not saying there's a perfect solution, it's just interesting they're giving up on push.
It's simple, debuggable, inherently avoids cache misses, and scales (if you use non-blocking IO and a concurrency-capable language with OS threads).
It also avoids HTTP/TCP head-of-line blocking, because you're using a separate socket for your pushes.