Some of the coolest stuff I saw was streams and server push. Streams multiplex multiple logical data streams onto one TCP connection. So unlike the graphs you typically see in the Chrome network inspector, where one resource request ends and another begins, frames (the unit of data) from multiple streams are sent in parallel. This means only one connection (connections are persistent by default) is needed between server and client, and there are ways to prioritize streams and control flow, which gives devs more opportunities for performance gains.
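To make the framing concrete, here's a minimal sketch of how frames from different streams can be interleaved on one connection. It assumes the 9-octet frame header from the published spec (24-bit payload length, 8-bit type, 8-bit flags, reserved bit plus 31-bit stream ID); field sizes in earlier drafts differed, so treat this as illustrative rather than a wire-exact implementation.

```python
import struct

# HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags,
# 1 reserved bit + 31-bit stream identifier (9 octets total).
def pack_frame(payload: bytes, frame_type: int, flags: int, stream_id: int) -> bytes:
    header = struct.pack(">I", len(payload))[1:]         # 24-bit length
    header += struct.pack(">BB", frame_type, flags)      # type, flags
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)  # reserved bit zeroed
    return header + payload

# DATA frames (type 0x0) for streams 1 and 3 interleaved on one connection:
wire = pack_frame(b"hello", 0x0, 0x0, 1) + pack_frame(b"world", 0x0, 0x0, 3)
```

Because each frame carries its stream ID, the receiver can reassemble both logical streams even though their bytes arrive interleaved.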
Also, headers are only sent as deltas now. Client and server maintain header tables with previous header values (which persist for the connection), so only updates need to be sent after the first request. I think this will save a consistent 40-50 bytes per request on most connections where headers rarely change.
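The core idea can be sketched in a few lines. This is a toy illustration only, not the real HPACK algorithm (which also has a static table, indexing rules, and Huffman coding): each side keeps a table of headers already seen on the connection and transmits only the fields that changed.

```python
# Toy header-delta encoder illustrating the idea behind HTTP/2 header
# compression (NOT real HPACK): per-connection state lets repeated
# headers cost nothing after the first request.
class HeaderDeltaEncoder:
    def __init__(self):
        self.table = {}  # headers previously sent on this connection

    def encode(self, headers: dict) -> dict:
        delta = {k: v for k, v in headers.items() if self.table.get(k) != v}
        self.table.update(headers)
        return delta

enc = HeaderDeltaEncoder()
first = enc.encode({"user-agent": "demo/1.0", "accept": "text/html", "path": "/"})
second = enc.encode({"user-agent": "demo/1.0", "accept": "text/html", "path": "/about"})
# second contains only the changed field: {"path": "/about"}
```

This is also why the commenter below objects that "we have to keep state to work out what the diffs are": the savings come precisely from that per-connection table.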
[1] http://tools.ietf.org/html/draft-ietf-httpbis-http2-14
[2] http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...
TCP has streams. TCP has connection mux. TCP has flow and congestion control. HTTP has keepalive. Why build another stack on OSI layer 7?
Also now we have to keep state to work out what the diffs are. State is evil.
Whilst I'm sure this will bring some minor performance improvements, I'm not sure they justify the new protocol stack.
Not sending 2 MB of JavaScript and crappy HTML down the connection to display the front page would probably yield bigger gains.
In an ideal world we could switch to using something like the SCTP networking protocol with HTTP that would solve a lot of issues. Unfortunately we are stuck with TCP, so the application protocol (HTTP) now must implement a networking protocol so we can multiplex over a single connection.
At least people won't have to inline resources, sprite images, or concatenate CSS and JavaScript anymore. And header compression is a small upgrade to the spec.
TCP != HTTP
TCP is a transport layer protocol (OSI Layer 4). HTTP is an application layer protocol (OSI Layer 7).

Connections are already processed in parallel whenever they can be. That is, when the browser knows what to request, and it fits in the execution model. If there's a huge number of assets on a single hostname, this has been a limiting factor because browsers have limited the number of requests to a single hostname to avoid overloading the server. But that will remain an issue even if the requests are multiplexed over a single connection.
Most of the time when I see graphs in the network inspector that aren't massively parallel, it's because nobody has spent time optimizing where and how assets are requested, and those pages will be just as bad with connection multiplexing.
There certainly can be benefits to reap from it, but the worst offenders are already ignoring best practices.
A TCP handshake has to take place for each connection, and this isn't cost free; there's also SSL negotiation on top (though techniques like OCSP stapling help).
Going massively parallel isn't free either - Will Chan of Chrome did a good write-up here: https://insouciant.org/tech/network-congestion-and-web-brows...
HTTP/1.x was neatly layered on TCP with an easy-to-parse text format. This in turn ran neatly on IPv4/6, which ran on top of Ethernet and myriad other things. This separation of concerns gave us the benefit of being very easy to understand and implement, while also allowing people to subvert the system, adding things like half-baked transparent proxies to networks that would munge streams and couldn't agree on where HTTP headers started. We ended up having to design WebSockets to XOR packets just to fix other people's broken deployments.
HTTP/1.x also became so pervasive that it was by far the most popular protocol on top of TCP, even to the point where a system administrator could block everything but ports 80 and 443 and probably not hear anything back from their userbase. This is the reason we ended up with earlier monstrosities like SOAP and XML-RPC: by that point HTTP had become so prevalent that it was assumed, incorrectly in many cases, to be the only transport.
Perhaps the IETF should be pushing a parallel version of HTTP that pushes many of these concerns into SCTP. The problem here is that it'll take forever to get that rolled out and we need something to improve things now. Look at how long it's taking to roll out IPv6: something we actually need to fix now.
I was unaware of this and became intrigued. If anyone else is curious, this is the explanation from the RFC: http://tools.ietf.org/html/rfc6455#section-10.3
Basically it's to prevent an attacker from cache-poisoning an HTTP proxy (like one on a corporate network) that doesn't properly support WebSockets. WebSockets look a lot like HTTP over the wire, so without masking the wire data in some way, a proxy could be tricked into believing that a faked "HTTP"-looking request and response are real, and thus cache whatever an attacker supplies.
This would technically be a bug in the proxies, but it's nice to see IETF accounted for this and put in countermeasures before it inevitably became a DEFCON talk.
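The masking scheme itself is tiny: every client-to-server frame payload is XORed with a fresh random 4-byte key carried in the frame (RFC 6455 §5.3). A minimal sketch:

```python
import os

# RFC 6455 client-to-server masking: XOR each payload byte with a
# random 4-byte key, so an attacker cannot control the exact bytes on
# the wire and trick a broken proxy into caching fake "HTTP" traffic.
def mask(payload: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

key = os.urandom(4)  # a new random key per frame
masked = mask(b"GET /evil HTTP/1.1\r\n", key)
# Unmasking is the same XOR operation with the same key:
assert mask(masked, key) == b"GET /evil HTTP/1.1\r\n"
```

Since the key is random per frame, an intermediary that doesn't understand WebSockets never sees attacker-chosen plaintext it might mistake for a cacheable HTTP exchange.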
I disagree. Nothing is needed now; HTTP/1 is not broken and it works well enough.
There should be time enough to come up with a clean design. Even if it requires designing a new transport protocol.
Rolling out a new transport protocol like SCTP takes a lot less time than rolling out a new network protocol like IPv6. Transport protocols only run on the endpoints, not on the routers in the network.
Except for firewalls and NAT'ing home routers, but if HTTP/1 over SCTP resulted in a faster, better browsing experience, the problem would solve itself.
Leading? Firefox and Chrome already support HTTP/2 (and SPDY, the basis for HTTP/2, for a long time now), just not enabled by default.
Their real problem of course is IIS. We'll probably have to wait for IIS9 which I cannot see happening for another two years. IIS8.5 appeared 12 months ago in Windows Server 2012 R2.
It seems unusual for Microsoft to disable SPDY support entirely, at least until support for HTTP/2 is more widely deployed...
So if they leave SPDY in place along with HTTP 2.0, they could wind up with strange incompatibilities, or with site operators feeling like they need to support both SPDY and the HTTP 2.0 standard (rather than just the HTTP 2.0 standard).
Looking at it, it actually seems more progressive to dump SPDY and move to the SPDY-based HTTP 2.0 at this stage. Then ten years down the road hopefully SPDY will be dead and there will just be HTTP/1.1 and HTTP/2.0.
This does not apply to ad code that's implemented as <script src="..."></script>, which will indeed block page loading.
"What does this mean for developers?
HTTP/2 was designed from the beginning to be backwards-compatible with HTTP/1.1. That means developers of HTTP libraries don't have to change APIs, and the developers who use those libraries won't have to change their application code. This is a huge advantage: developers can move forward without spending months bringing previous efforts up to date."
It's probably because IE is really just a UI wrapper around system libraries[0]. The changes for HTTP/2 would be made not in IExplorer.exe, but instead in WinInet.dll (and possibly URLMon.dll).
This is because IE isn't the only application that will use these new features.
EDIT: I should add that you don't just go changing system libraries in a patch Tuesday, you'd wait and throw them in a new version, hence the 10 preview.
I want that so bad. Coding is hard, DDoSing is so easy.
Thank you, architects, for making black hats' lives so easy. HTTPS by default? YEESS, even more leverage.
I love progress.
Next great idea: implementing ICMP, UDP, and routing on top of an OSI layer 7 protocol, because everybody knows religion forbids opening the firewall for protocols that do the job, or we could even create new protocols that are not HTTP. But HTTP, for sure, is the only true protocol, since devs don't know how to write three lines of networking code and sysadmins don't know how to do their jobs.
And HTTP is still stateless \o/ wonderful, we still keep these wonderful hacks alive: cookies, OAuth and all that shitty stuff. Central certificate authorities are now totally discredited, but let's advocate broken stuff even more.
Why not implement a database agnostic layer on top?
When are we gonna stop this cowardly headless rush of stacking poor solutions and begin solving the root problems?
We are stacking the old problems of GUIs (async + maintainability + costs) on top of the new problem of doing it all over HTTP.
I have a good solution that now seems viable: let's all code in vanilla Tk/Tcl. It has a GUI, it can do HTTP and all, it works in every environment, and it is easy to deploy.
Seriously, Tk/Tcl now seems sexy.
Given that the web is becoming more and more real-time this seems pretty interesting.
Is there a risk that cellular data usage will increase from this?
Found this project but nothing live
I'm saddened. The days of good internet protocols are clearly behind us.
At the risk of sounding too blunt: Everything? All of it? Its mere existence?
It fucks up responsibilities by addressing network-layering issues at the application layer. It takes a simple and stateless text-mode protocol and converts it into a binary, stateful mess.
It has weird micro-optimizations designed to ensure that Google's front page, and any Google request with its army of 20,000 privacy-invading tracking cookies, fits within one TCP packet at American ISPs' MTU size, so that people are not inconvenienced while their privacy is being eaten away. Which I'm sure is useful to Google, but pretty much nobody else.
The list goes on.
It does a lot of things that are neither needed nor asked for by the majority of the internet, and yet the rest of the internet is asked to pay the cost of it through a mind-boggling increase in complexity, which I'm sure will be a source of a million future vulnerabilities.
I'm not aware of a single thing in there which I want, and if I'm wrong and find one, I'm unwilling to accept that this is the cost I have to pay for that feature.
Any web-browser I will use in the future will be one where HTTP/2 can be disabled.
However, there were many other bad protocols that died through lack of use. You can still vote with your feet: a vendor will not maintain a protocol stack if people don't use it.
And yeah, I'm with you--I think that a lot of this tail-wags-dog stuff is going to come back and haunt us, but we as an industry fucking suck at being conservative when it makes sense.