The future looks more like this, as the default, with no special effort required: https://http2.golang.org/gophertiles
May nobody else have to suffer through writing an interoperable HTTP/1.1 parser!
Yes, now it'll be much easier than parsing plain text. Now they just have to write a TLS stack (several key-exchange algorithms, block ciphers, stream ciphers, and data-integrity algorithms), then implement the new HPACK compression, then finally a new parser for the HTTP/2 headers themselves.
Now instead of taking maybe one day to write an HTTP/1.1 server, it'll only take a single engineer several years to write an HTTP/2 server (and one mistake will undermine all of its attempts at security.)
If you are going to say, "well use someone else's TLS/HPACK/etc library!", then I'll say the same, "use someone else's HTTP/1.1 header parsing library!"
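For context on the "one day" claim: a naive HTTP/1.1 request parser really is short, which is what makes the protocol feel approachable. The sketch below is hypothetical and deliberately minimal; an *interoperable* parser would also need chunked transfer coding, obsolete line folding, duplicate-header handling, and so on, which is where the hard work actually lives.

```python
def parse_request_head(raw: bytes):
    """Naively parse an HTTP/1.1 request line and headers.

    A sketch only: real interoperability means handling chunked
    bodies, obsolete line folding, duplicate headers, etc.
    """
    head, _, _body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    method, target, version = lines[0].split(b" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return method, target, version, headers

req = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n"
method, target, version, headers = parse_request_head(req)
```

The point cuts both ways, of course: the ten-line version above is exactly the kind of "works on my machine" parser that made interoperable HTTP/1.1 painful in the first place.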
HTTP/2 may turn out to be great for a lot of things. But making things easier/simpler to program is certainly not one of them. This is a massive step back in terms of simplicity.
Then, separately: writing an interoperable HTTP/1.1 implementation is hard because the protocol was designed and taken up ad hoc in a time of relatively immature browsers. I would expect HTTP/2 to increase standardisation in the same way newer HTML/CSS specs have relative to the late nineties. That doesn't mean the initial implementation won't be more difficult, but it's done once every 20 years (per vendor).
Your argument is invalid in my opinion. HTTP/1.1 is not simple to implement to any decent level of completeness and correctness, and HTTP/2 does fix a fair few things.
Anyway, there are already plenty of good tools for debugging HTTP/2 streams (Wireshark filters, etc.), and there's only going to be plenty more as time goes by.
People are good at dealing with a small number of simple things that can be stacked together. Throw in a human-readable data stream, and you're set to understand and use a stack of simple programs.
People are not good at dealing with a single monstrous object of unfathomable proportions; they will try to break it down into things they understand. If the thing is too complex, with too many inputs, too many outputs and too many states, that's a recipe for confusion. This is why overly complicated things always fail in the face of simple things.
One could argue that FTP/SFTP was just as good at transferring bytes over the network, but HTTP/1.0 won because it was simpler.
HTTP/2 was written to tickle the egos of its developers, following the principle that if it was hard to write, it should be hard to read. And its downfall is going to come from this problem.
Also a bonus: no more "Referer" (sic)
http://chimera.labs.oreilly.com/books/1230000000545/ch13.htm...
I'm betting a polyfill will come along to let HTTP/2 servers deliver content to HTTP/1.1-but-HTML5 web browsers in an HTTP/2-idiomatic way: perhaps, for example, delivering the originally-requested page over HTTP/1.1 but having everything else delivered in HTTP/2-ish chunks over a WebSocket.
Or maybe I'm overly optimistic!
https://tools.ietf.org/html/draft-ietf-httpbis-http2-17
"
Abstract
This specification describes an optimized expression of the semantics
of the Hypertext Transfer Protocol (HTTP). HTTP/2 enables a more
efficient use of network resources and a reduced perception of
latency by introducing header field compression and allowing multiple
concurrent exchanges on the same connection. It also introduces
unsolicited push of representations from servers to clients.
This specification is an alternative to, but does not obsolete, the
HTTP/1.1 message syntax. HTTP's existing semantics remain unchanged.
"
All these changes seem good: no large semantic change, just an improved user experience. (The Introduction section is also good: "HTTP/2 addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection". I'd quote more, but why not click through the link at the top of this comment. Basically just some compression of headers, plus the funky stuff: keeping connections alive for server push, prioritizing important requests, etc., all without changing semantics much. Great.)
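To make the "optimized mapping" in the abstract concrete: under HTTP/2, every exchange travels in binary frames, each starting with a fixed 9-octet header (24-bit payload length, 8-bit type, 8-bit flags, then a reserved bit plus a 31-bit stream identifier, per section 4.1 of the linked draft). A sketch of parsing it (the HPACK-compressed header block inside a HEADERS payload is the genuinely hard part, not this):

```python
def parse_frame_header(buf: bytes):
    """Parse the fixed 9-octet HTTP/2 frame header (draft-17, sec. 4.1):
    24-bit payload length, 8-bit type, 8-bit flags, then one reserved
    bit followed by a 31-bit stream identifier."""
    if len(buf) < 9:
        raise ValueError("need at least 9 octets")
    length = int.from_bytes(buf[0:3], "big")
    frame_type, flags = buf[3], buf[4]
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# Example: a HEADERS frame (type 0x1) with the END_HEADERS flag (0x4),
# a 16-byte payload, on stream 1.
hdr = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
```

The multiplexing the abstract mentions falls out of that stream identifier: frames for different exchanges interleave on one connection and are demultiplexed by stream ID.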
Cookies solve real use case problems. Unless we all start building and experiencing and improving the alternatives, progress won't be made.
That said, good luck getting rid of cookies altogether.
I tried searching the net, but it doesn't seem to turn up any concrete results.
Can you give me any pointers?
Edit: I do use OAuth 2.0 on my services and use Mozilla Persona to manage user logins, but I'm not clear on how I can keep sessions between requests if I don't use cookies.
Please, no. The internet works because it's compatible, and installing a local client for everything just prevents use of a service.
One key thing which should make this work is that server push should follow the same origin checks as most other recent web standards:
“All pushed resources are subject to the same-origin policy. As a result, the server cannot push arbitrary third-party content to the client; the server must be authoritative for the provided content.”
(http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...)
Assuming that survives contact with the actual implementations, you should be able to avoid latency-sensitive content going through the CDN while still being able to push out e.g. stylesheets & referenced fonts/images.
At the simplest level this might be just the CSS and JS in the <head>, but obviously, as different UAs behave differently, there's scope for much more granular optimisations.
Naively, it looks to me that server push will mostly be an improvement for small websites that do not use a CDN, but I can't see how it can coexist with a CDN.
Or it would require a new syntax, where the HTML tells the browser to connect to the CDN with a particular URL containing a token, to be downloaded first; that request would tell the CDN which static assets the page will need, and the CDN would then use server push to send them.
Alternatively the CDN would become a proxy for the underlying html page, which would still be generated by the application server. That would probably be simpler.
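One plausible shape for that proxy arrangement (an assumption sketch, not anything in the spec: the header convention and names below are hypothetical) is for the origin to annotate its HTML response with Link headers naming the page's assets, and for a push-capable CDN sitting in front to read them and server-push those assets from its own cache:

```python
# Hypothetical origin-side sketch: the application server emits Link
# headers alongside the HTML. A push-capable CDN proxying the response
# could interpret them and push the named assets from its edge cache,
# so the origin never serves the static files itself.
PUSH_ASSETS = [
    ("/static/site.css", "style"),
    ("/static/app.js", "script"),
]

def link_headers(assets):
    """Build (name, value) header pairs for the assets to push."""
    return [("Link", "<{}>; rel=preload; as={}".format(path, kind))
            for path, kind in assets]
```

This keeps the push decision with the application (which knows what the page needs) while the bytes come from the CDN, which is the split the parent comments are circling around.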
How accurate it is, I'm not sure.
While there has always been some degree of disagreement regarding technological matters, I think we're really seeing a lot more of it these days, especially when it comes to projects that are open source, or standards that are supposedly open. HTTP/2 is a good example. But we've also got GNOME 3, systemd, how systemd has been included in various Linux distros, many of the recent changes to Firefox, and so forth.
Not only is this disagreement more prevalent, it's also much harsher than what we've seen in the past. Instead of seeing compromise, we're seeing marginalization. We're repeatedly seeing a small number of people force their preferences upon increasingly larger masses of unwilling victims. We're seeing consensus being claimed, but this is only an illusion that barely masks the resentment that is building.
What we're seeing goes beyond mere competition between factions with differing situations. We're seeing any sort of competition, or even just dissent, being highly discouraged, suppressed, or even prevented wherever possible. Those whose needs aren't being met end up backed into a corner and shunned, rather than any effort being put into cooperating with them, with helping them, or even just with considering their views.
This isn't a healthy situation for the community to be in, especially when it comes to projects that allegedly pride themselves on openness. We've already seen this kind of polarization severely harm the GNOME 3 project. We're seeing things get pretty bad within the Debian project. And the HTTP/2 situation hasn't been very encouraging, either.
There are a few high-profile critics, e.g. PHK, who puts his points as an eloquent rant, which of course we as a community tend to love. But the reality is that HTTP/2 is going to get rolled out by companies who've tested it and seen the benefits, i.e. not just Google.
I don't have any way to dispute this, but I don't think it's easy to provide evidence for it either. I feel that there may simply be more individuals involved in these kinds of discussions these days.
Obviously at some point you have to stop discussing something and start building it. That's not to say that discussion isn't important or shouldn't be encouraged (quite the contrary), but I find it very difficult to make generalizations about where the line should be drawn.
Could you please elaborate on this point?
This goes both ways though, doesn't it? Much of the arguing about HTTP/2 tended to be Johnny-come-latelies who, if we're being honest, seemed to just want to toss some refuse in the gears. Microsoft, in particular, watched as Google proposed SPDY, and then iterated and shared their findings, and then right as consensus (or as close to consensus as possible) started to be reached, Microsoft tried to upset the cart. In that case wouldn't Microsoft, and the naysayers, be the ones trying to force their preferences? The delay of HTTP/2, or of basic improvements to these technologies, not only causes hassles for developers (image sprites, resource concatenation, many domains, and on and on), it marginalizes the web.
It is going to be pretty rare when any initiative sees complete unanimous agreement, especially given that many of the parties have ulterior motives and agendas that aren't always clear.
The first wheels were probably logs under rocks. Then axles got developed, then spokes, then tyres etc.
Everything from the gyroscope to the LHC can attribute its beginnings to the humble wheel.
Reinvention is, if not always good, always admirable.
I put this in another comment above:
* Better authentication
* More secure caching
* Improved ability to download large files
* Better methods to find alternate download locations
* Making each request contain less information about the sender
* Improved Metadata
I brain-dump a bit here: https://github.com/jimktrains/http_ng
It seems like it was built for the big players to eke out 5% more performance. How about the average website? What is in there to help standardize authentication? What is in there to help protect privacy?
In the end it looks more like HTTP/1.2, with header compression being the only new feature. The rest of what makes up HTTP/2 is basically implementing a new transport-layer protocol at the application level.
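For what that one new feature buys: HPACK keeps a static table of common headers, so a header that matches an entry exactly is sent as a single octet (0x80 OR the table index) instead of a text line. A minimal sketch of that "indexed header field" encoding, using indices from the HPACK static table:

```python
# Minimal sketch of HPACK's "indexed header field" representation: a
# header matching a static-table entry is encoded as one octet,
# 0x80 | index. Indices below are from the HPACK static table.
STATIC = {(":method", "GET"): 2, (":path", "/"): 4, (":scheme", "https"): 7}

def encode_indexed(name, value):
    index = STATIC[(name, value)]
    assert index < 0x7F  # the 7-bit prefix; larger indices need extra octets
    return bytes([0x80 | index])

# "GET / over https" as a header block: three octets instead of
# several dozen bytes of repeated plain text on every request.
block = b"".join(encode_indexed(*h) for h in
                 [(":method", "GET"), (":path", "/"), (":scheme", "https")])
```

The dynamic table (entries added as headers flow) is where the real per-connection savings come from, but the static-table case already shows why repeated request headers compress so well.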