"Some will expect a major update to the world’s most popular protocol to be a technical masterpiece and textbook example for future students of protocol design. Some will expect that a protocol designed during the Snowden revelations will improve their privacy. Others will more cynically suspect the opposite. There may be a general assumption of "faster." Many will probably also assume it is "greener." And some of us are jaded enough to see the "2.0" and mutter "Uh-oh, Second Systems Syndrome."
The cheat sheet answers are: no, no, probably not, maybe, no and yes.
If that sounds underwhelming, it’s because it is.
HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc. I would flunk students in my (hypothetical) protocol design class if they submitted it. HTTP/2.0 also does not improve your privacy. Wrapping HTTP/2.0 in SSL/TLS may or may not improve your privacy, as would wrapping HTTP/1.1 or any other protocol in SSL/TLS. But HTTP/2.0 itself does nothing to improve your privacy. This is almost triply ironic, because the major drags on HTTP are the cookies, which are such a major privacy problem, that the EU has legislated a notice requirement for them. HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier. That would put users squarely in charge of when they want to be tracked and when they don't want to—a major improvement in privacy. It would also save bandwidth and packets. But the proposed protocol does not do this.
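A rough sketch of what phk's client-controlled session identifier could look like, in Python. Everything here is invented for illustration (the `Session-Id` header name, the `ClientSession` API); the point is just that the client mints the token and decides when it leaves the machine:

```python
import secrets

class ClientSession:
    """The browser, not the server, mints the identifier and can
    rotate or withhold it at any time -- the inverse of cookies."""
    def __init__(self):
        self.token = secrets.token_urlsafe(16)
        self.tracking_allowed = False  # user opt-in, off by default

    def headers_for(self, request_host):
        if self.tracking_allowed:
            return {"Session-Id": self.token}  # hypothetical header name
        return {}  # opted out: no identifier is sent at all

s = ClientSession()
assert s.headers_for("example.com") == {}   # tracking off: nothing sent
s.tracking_allowed = True
assert "Session-Id" in s.headers_for("example.com")
```

The bandwidth argument follows directly: one short opaque token per request instead of an arbitrary pile of server-set cookie bytes.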
[He goes on to tear a strip off the IETF and the politics behind HTTP/2.0 ...]
The discussion on it covered it pretty well: https://news.ycombinator.com/item?id=8850059
edit: it's still in google cache if anyone else wants to read it for themselves: https://webcache.googleusercontent.com/search?q=cache:3i6EwF...
Which, of course, is useless, since any browser supports turning off cookies.
As an EU citizen, my experience of this regulation is simply that I have to click "OK" to accept cookies on all the EU sites I visit.
I apologize if this comes off as a rant, but it really is annoying to constantly be presented with "This site uses cookies. Continue?" when I visit a site. :)
Is it an EU thing, then? Does it appear or not depending on the origin IP address of the request?
Yes, this could not have been rolled out to everyone immediately, but neither can any other addition to JS, HTTP, HTML, CSS, &c. We should help build the future, not simply accommodate the past all the time.
Of course, no process will help you if everyone disagrees with your proposals.
Sounds great!
Just one question: how many versions of the (now flabbergastingly complex) HTTP protocol will I need to support in my applications and libraries? Because, as we all know, once something is deployed on the internet it will never be updated and will need to be supported forever, meaning you can never obsolete that HTTP/1.1 and /2.0 code.
HTTP/1.1 was a fantastic protocol in that it survived for almost 2 decades unchanged. Here we have HTTP/2.0 and people are already talking about what we will need to add to HTTP/3.0.
If HTTP is going to end up being the new MSIE, we can only blame ourselves because we allowed Google to use its dominance to push a protocol the internet didn't need.
This is going to make the web so much faster, particular on mobile devices.
A hugely bloated, binary protocol is better than the simple, text-based one we have today? I greatly disagree. HTTP/1.1 could use an update, but HTTP/2 was not the answer.
The performance benefits are overblown: https://news.ycombinator.com/item?id=8890839
This page confirms that I'm not using Google's trainwreck protocol and tells me the config-parameters I need to ensure I keep my browser this way.
Unfortunately, you're not using HTTP/2 right now. To do so:
Use Firefox Nightly or go to about:config and enable "network.http.spdy.enabled.http2draft"
Use Google Chrome Canary and/or go to chrome://flags/#enable-spdy4 to Enable SPDY/4 (Chrome's name for HTTP/2)
This is quite good, although probably not for the reason the original authors intended. There is a copy on the mailing list: http://lists.w3.org/Archives/Public/ietf-http-wg/2015JanMar/...
Here's a list of common servers' support for SPDY/HTTP2: https://istlsfastyet.com/#server-performance
Another, better way would have been to keep SPDY separate, as there is usefulness there, and then get to HTTP/2 incrementally, using an iteration of something like AS2/EDIINT (https://tools.ietf.org/html/rfc4130), which does encryption, compression, and digital signatures on top of existing HTTP (HTTPS is usable as-is but not required, since it uses the best compression/encryption the server currently supports). That standard still adheres to everything HTTP and hypertext transfer is based on, and does not become a binary file format, but relies on baked-in MIME.
An iteration of that would have been better for interoperability, more secure, and fast. I have implemented it directly from the RFC for an EDI product, and it is used for sending financial EDI/documents by the largest companies in the world (Wal-mart, Target, the DoD) as well as most small and medium businesses with inventory. There are even existing interoperability testing centers set up for testing and certifying products that implement it, so that the standard works for all vendors and customers. An iteration of this would have fit in just as easily and been more flexible on the security, compression, and encryption side, all over HTTP if you want, since it encrypts the body.
Imagine this scenario: two people want to interconnect. Here's the process:
- They insecurely email their public key (self-signed) and URL (no MitM protection)
- You insecurely email your public key (self-signed) and URL
- They have a HTTPS URL
- Now the thing to understand about AS2 is that when you connect to THEM you give them a return URL to confirm receipt (MDN) of the transaction.
- HTTPS becomes a giant clusterfuck in AS2 because people try to use standard popular HTTPS libraries (e.g. that do CA checking, domain checking, and other checks which are fine for typical web-browser-style traffic, but not for specialised AS2 traffic) but in the context of AS2 where certificates are often local self-signed (some even use this for HTTPS), and the URL is rarely correct for the certificate, they fall over all of the time.
- Worse still some sites want to use either HTTP or HTTPS only, so when you connect to a HTTPS URL but give them a HTTP MDN URL sometimes they will work, sometimes they will try the HTTPS version of the URL then fall over and die, and other times they will error just because of the inconsistency.
Honestly, I used AS2 for over five years. Looking back, it would have saved everyone hundreds of man-hours to have just used HTTPS in the standard way and implemented certificate pinning (e.g. "e-mail me the serial number," or heck, just list it in your documentation).
The only major advantage of AS2 is the MDNs. However, even there, there is massive inconsistency: some return bad MDNs for bad data, while others only return bad MDNs for bad transmission of data (i.e. they only check that what you send is what is received 1:1, so you could send them a series of 0s and get a valid MDN, because they check the data later and then email).
To be honest I hate MDN errors. They don't provide human-readable information in an understandable way. They're designed for automation which rarely exists in the wider world (between millions of different companies with hundreds of systems).
Give me an email template for errors any day; that way there can be a brief generic explanation plus formatted data, to better explain things. The only thing MDNs do well is data consistency checking, which is legitimately nice, but almost every EDI format I know already has that built in (i.e. segment counters, end segments, etc.).
If I was to re-invent AS2, I'd just build the entire thing on standard HTTPS. No HTTP allowed, no hard coded certificates (i.e. you receive a public key the same way your web browser does), certificate pinning would be a key part, and scrap MDNs in place of a hash as a standard header in the HTTPS stream. Normal HTTP REST return codes would be used to indicate success (e.g. 200 OK/202 ACCEPTED, 400 Md5Mismatch/InvalidInput/etc).
That way nobody has to deconstruct an MDN to try to figure out the error. And handling a small handful of HTTP codes is much easier to automate than the information barrage an MDN contains anyway; it is both easier to automate and easier for humans.
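A minimal sketch of that "hash header plus plain HTTP status codes" design. To be clear, nothing here is an existing standard: the header name, the reason strings, and the `receive` helper are all invented for illustration of replacing MDNs with a hash check.

```python
import hashlib

# Hypothetical header name; real AS2 uses MDN documents instead.
HASH_HEADER = "X-Content-SHA256"

def receive(body, headers):
    """Return an HTTP-style (status, reason) for an incoming payload,
    verifying a hash header instead of constructing an MDN."""
    claimed = headers.get(HASH_HEADER)
    if claimed is None:
        return (400, "MissingHashHeader")
    actual = hashlib.sha256(body).hexdigest()
    if claimed != actual:
        return (400, "HashMismatch")
    # 202: received intact; business-level validation happens later.
    return (202, "Accepted")

body = b"ISA*00*...~"  # stand-in for an EDI payload
headers = {HASH_HEADER: hashlib.sha256(body).hexdigest()}
assert receive(body, headers) == (202, "Accepted")
assert receive(b"corrupted", headers) == (400, "HashMismatch")
```

A failed check is a three-digit code and a short reason, readable by a human or a script, instead of an MDN to deconstruct.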
The things AS2 got right are that it rides on top of the existing MIME/HTTP infrastructure, and that it does encryption/compression of any type the server/client specify. And there is some benefit to encryption/compression/digital signing over plain HTTP.
HTTP/2 might be the first protocol for the web that isn't based on MIME, for better or for worse. We are headed toward a binary protocol that is still called the Hypertext Transfer Protocol.
HTTP/2 looks more like TCP/UDP, or a small layer on top of them that you might build for a multiplayer game server. Take a look at the spec, at all the binary blocks that look like file formats from '93: https://http2.github.io/http2-spec/. It is a munging of HTTP/HTTPS/encryption into one big binary ball. It will definitely be more CPU intensive, but I guess we are going live either way!
Plus, AS2 was a huge improvement over nightly faxing of orders; large companies were doing that as late as 2003. AS1 (email-based) and AS3 (FTP-based) were available as well, but HTTP with AS2 is what all fulfillment processes use now. And yes, it has tons of problems, but the core idea of encryption/compression/signatures/receipts over current infrastructure is nice. Everything else you mention exists and is definitely among the bad parts, though much of it wouldn't be needed in a new core.
On a serious note: it's nice to see ALPN being used in HTTP/2.
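For the curious, here is a sketch of what ALPN looks like from the client side, using Python's standard `ssl` module (the `h2` token is the registered ALPN identifier for HTTP/2; the function names are my own):

```python
import ssl
import socket

def make_alpn_context():
    # Offer h2 first, then http/1.1; the server picks one during the
    # TLS handshake itself, so no extra round trip is spent upgrading.
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    return ctx

def negotiated_protocol(host, port=443):
    # Network call: returns "h2" if the server agreed to HTTP/2,
    # "http/1.1" otherwise (or None if the server ignored ALPN).
    with socket.create_connection((host, port)) as sock:
        ctx = make_alpn_context()
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```

That handshake-time negotiation is why ALPN is nicer than the older NPN extension or an HTTP-level Upgrade dance.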
Am I missing something? Do some people have so many cookies that this makes a difference or something?
So to answer your question: header compression as employed in HTTP/2.0 helps when you make many requests with similar headers on the same connection.
In general, HTTP/2.0 seems to be about improving things if you do many requests over the same connection.
Google has about 200 different tracking cookies with lots of redundancy, which compress considerably better than that.
Google's aim with SPDY was to be able to track you across all HTTP requests without the bloat of the tracking cookies making you exceed a normal ADSL MTU size, thus causing packet fragmentation and potentially reduced performance.
Possibly good goals for all the wrong reasons. And again it's Google's needs above those of the internet at large.
This would benefit from the fact that in one request headers are not repeated, but over multiple requests they certainly are.
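That repeated-header effect is easy to demonstrate with a toy model. This is not the real HPACK wire format (which adds a static table, dynamic-table eviction, and Huffman coding); it only shows the core indexing idea:

```python
class TinyHeaderTable:
    """Toy HPACK-style compressor: the first time a header pair
    crosses the connection it is sent literally and remembered;
    every repeat is sent as a tiny table index instead."""
    def __init__(self):
        self.table = []

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                out.append(("idx", self.table.index(pair)))  # ~1 byte on the wire
            else:
                self.table.append(pair)
                out.append(("lit", pair))  # full name + value bytes
        return out

enc = TinyHeaderTable()
request = [(":method", "GET"), (":path", "/"), ("cookie", "a=1; b=2; " * 20)]
first = enc.encode(request)   # all literals: the big cookie goes over in full
second = enc.encode(request)  # all indices: the same cookie now costs ~1 byte
assert all(kind == "lit" for kind, _ in first)
assert all(kind == "idx" for kind, _ in second)
```

One request sees no benefit; the second identical request on the same connection sends almost nothing.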
http://www.greenbytes.de/tech/webdav/draft-ietf-httpbis-head...
If it isn't community-driven, you can't expect it to be implemented in the places the big corp doesn't care for.
So in this case Apache, one of the major drivers propelling the WWW, may end up not supporting a "crucial" WWW-related standard, because the community was never invited.
If anyone still has any doubts why letting Google control internet-standards is bad, this is currently my best example.
Technically speaking, the internet is the result of what we come up with, when we all work together. Not working together will quickly end up as not working at all.
What I saw on the HTTP/2 mailing lists was "We have a new standard." "It demands SSL, but we don't want that." Then, SPDY is everywhere, let's use that.
Shortly after it was "Omg, we can't call it spdy, because then Microsoft's interests will be left behind and Google will have won. Let's abandon the mandatory SSL requirement and rename SPDY to HTTP2..."
I feel like we've all lost here.
We implemented SPDY at Twitter - the savings were fantastic and the browser performance amazing. Google and FB did the same. It's nearly like 800M users said it was great; can we move on now?
I found the answer from their blog:
"Part of the service CloudFlare provides is being on top of the latest advances in Internet and web technologies. We've stayed on top of SPDY and will continue to roll out updates as the protocol evolves (and we'll support HTTP/2 just as soon as it is practical)."
The thinking is, I believe, that "SPDY/4 revision is based upon HTTP/2 wholesale" http://http2.github.io/faq/ and nginx already supports SPDY via ngx_http_spdy_module. http://nginx.org/en/docs/http/ngx_http_spdy_module.html Version 3.1 though...
So it's either there or almost there.
Not in terms of the protocol spec, but most major browser vendors have indicated that they only intend to support HTTP/2 in-browser over TLS connections, so in practice for typical, browser-targeting use cases, it looks like it will, at least initially.
It would be the right thing for Google to remove SPDY at this point, otherwise it would be running a nonstandard protocol that other browsers do not, which can lead to fragmentation - as we saw just recently with an API that sadly Google has not removed despite it being nonstandard (FileSystem in the WhatsApp "Web" app).
edit: To clarify, I mean what Google is doing with SPDY sounds like the right thing. I don't mean it should remove it right now, I meant it was the right thing to do, right now, to announce it would be removed after a reasonable delay (and 1 year sounds reasonable).
The "progress to be standardized" for SPDY was SPDY being chosen as the basis for HTTP/2; as I understand it, for a while SPDY has been updated in parallel with the HTTP/2 development work, to continue to reflect the state of HTTP/2 plus new things the Google SPDY team wants to get into the standard, but it's been clear for a long time that the intent was for SPDY as a separate protocol to become unnecessary once HTTP/2 was ready for use.
It's going away, just maybe not soon enough for everyone's tastes. From the blog:
> We plan to remove support for SPDY in early 2016
So effectively they have just announced a long term support edition of SPDY. What an odd time to complain about the lack of long term support.
UPDATE: I noticed somebody wrote websocket support [1], but it hasn't been merged into master yet.
As engineers, the ones that take simple concepts and add complexity are not engineers; those are meddlers.
It could be as long lived as XHTML.
I was hoping for more SCTP rather than a bunch of kludge on top of what is a pretty beautiful protocol in HTTP/1.1. Protocol designers of the past seemed to have a better long view, mixed with the simplicity focused on interoperability that you like to see from engineers.
In binary, if there is one flaw the whole block is bunk: off by one, wrong offset, binary munging/encoding, other things. As an example, if you have a game profile stored in binary, it can be ruined by corruption on a bad save or transmission.
Binary is all or nothing, maybe that is what is needed for better standards but it is a major change.
What is easier, a JSON or YAML file, or a binary block you have to parse? What worked better, HTML5 or XHTML (exactness over interoperability)?
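The off-by-one fragility is easy to show with a sketch: a stream of length-prefixed frames (the same framing shape HTTP/2 uses, though this is a simplified stand-in, not the HTTP/2 frame format) where flipping a single byte in one length prefix desynchronizes every frame after it.

```python
import struct

def pack_frames(payloads):
    # Each frame: 4-byte big-endian length prefix + payload bytes.
    return b"".join(struct.pack(">I", len(p)) + p for p in payloads)

def unpack_frames(blob):
    frames, i = [], 0
    while i < len(blob):
        (n,) = struct.unpack_from(">I", blob, i)
        i += 4
        frames.append(blob[i:i + n])
        i += n
    return frames

msgs = [b"GET /", b"Host: x", b"done"]
blob = bytearray(pack_frames(msgs))
assert unpack_frames(bytes(blob)) == msgs  # clean stream parses fine

# Flip one byte in the first length prefix: every later frame
# boundary is now wrong and the whole stream desynchronizes.
blob[3] ^= 0x01
assert unpack_frames(bytes(blob)) != msgs
```

A text protocol with the same one-byte corruption typically garbles one token a human can spot; here the damage silently cascades through everything that follows.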
Granted, most of us won't be writing HTTP/2 servers all day, but it does expect more of implementations, for better or worse.
The classic rule in network interoperability is be conservative in what you send (exactness) and liberal in what you accept (expect others to not be as exact).
I guess the same thing applies to HTTP/2: sometimes you have to simplify it down a little, because the smartest design, the one that relies on implementers getting everything right, might be a leap too hard to make. The best standards are the simple ones that cannot be messed up even by poor implementations. Maybe the standards for protocols developed in the past looked at adoption more, because their designers had to convince everyone to use them; here, if you can force it, you don't need to listen to everyone or simplify, which is a mistake.
While code and products should be exact, over the wire you need to be conservative in what you send and liberal in what you accept in standards and interoperability land.
In another area, there is a reason things like REST win over things like SOAP, or JSON over XML: it comes down to interoperability and simplicity.
The simpler, more interoperable standard will always win, and as standards creators, we should make each iteration simpler. As engineers, we have accepted the role of taking complexity and making it simple for others, even other engineers, or maybe some junior coder who doesn't understand the complexity. What protocol will the new or junior coder gravitate to? The simple one.
I am sure it saved them lots of money in improved speed, but at the trade-off of complexity and minimal adoption of the standard, because it wasn't beneficial to everyone. HTTP/2 is a continuation of that effort by Google, which I would probably do too if I were them. But in the end, neither is that big an improvement for what it takes away.
Of course I use both, but I don't think they will last very long before the next version; this was too fast, and there are large swaths of engineers who do not like being forced into something with minimal benefits when it could have been a truly nice iteration.
HTTP/2 is really closer to SPDY, and I wish they would have just kept it as SPDY for now, letting a little more time go by to see if it is truly useful enough to merge into HTTP/2. HTTP/2 is essentially Google's SPDY, tweaked and injected into the standard, which has huge benefits for Google, so I understand where the momentum is coming from.
Google also controls the browser, so it is much easier for them to take the lead on web-standards changes now. We will have to use it whether we like it or not. I don't like the heavy hand they are using with their browser share, just like the Microsoft of older days (e.g. plugins killed off, SPDY, HTTP/2, PPAPI, NaCl, etc.).
Please.
Downvoters: although I don't usually do this, I'd ask you to enter into a discussion with me instead of just hitting the down arrow. Do you honestly think my discussion is worth being silenced?
Moreover, it seems like we are collectively getting better at upgrading technologies: IPv6 adoption has finally got some momentum; HTTP/2 is actually happening. With lessons learned from the HTTP => HTTP/2 transition, HTTP/3 could happen in five years instead of in another fifteen.
On the contrary, we are in a desperate need of such attitudes in software. We need for everyone to stop jumping to every new thing with silly promises. We need to start choosing quality over quantity. We need substantial well researched improvements.
HTTP has never been the bottleneck. I think IPv6 is excellent and a needed, massive improvement especially since IPv4 is no longer tenable. HTTP/1.1, however, still works quite well and keeps a larger feature set in some circumstances. It's less insane because it's not made by W3C or IETF or any other hugely bureaucratic group; however, that doesn't mean it's better either.
I can't wait for HTTP/3! Hopefully this time they won't rush it.
On some unrelated note: I found this tidbit of humor in the RFC draft (https://tools.ietf.org/html/draft-ietf-httpbis-http2-16):
ENHANCE_YOUR_CALM (0xb): The endpoint detected that its peer is exhibiting a behavior that might be generating excessive load.

I promise you that I've considered the spec and its implications. Where are we now?
Heavily optimized pages like google.com use data urls or spritesheets for small images, and inline small css/javascript.
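The inlining trick is simple to sketch: encode a small image as a `data:` URL so it ships inside the HTML and costs no extra request (the `data_url` helper is mine, and the PNG bytes below are a stand-in, not a real image).

```python
import base64

def data_url(raw_bytes, mime="image/png"):
    """Inline a small asset as a data: URL -- the HTTP/1.x-era trick
    that HTTP/2's multiplexing is meant to make unnecessary."""
    payload = base64.b64encode(raw_bytes).decode("ascii")
    return f"data:{mime};base64,{payload}"

# Stand-in bytes (just the PNG magic number, not a valid image).
url = data_url(b"\x89PNG\r\n\x1a\n")
assert url.startswith("data:image/png;base64,")
# Would be used as: <img src="data:image/png;base64,...">
```

The downside is the same one that motivates multiplexing: inlined assets can't be cached independently of the page that embeds them.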
On the bright side, reducing the need to minimize request count will make our lives as developers a bit easier :-)
What's sad about this is that if you load this site with pipelining enabled you get the same speed benefits as with HTTP/2 or SPDY, but Google would never know this, since they never tested SPDY against pipelining.
> ENHANCE_YOUR_CALM (0xb):
> Please read the spec and understand the technical implications before criticizing.
Please understand that -- technically -- this protocol is an embarrassment to the profession and to those involved in designing it.
Congrats? Want a cookie or something?
Do you have an actual complaint with the spec or do you just want to be an old man yelling at a cloud?
You can think differently, of course, but after looking at this (https://news.ycombinator.com/item?id=8824789) I reconsidered my previously positive view on it.
(Also, I'd love a cookie)