This doesn't invalidate the argument that you're prima facie comparing two things while claiming to compare two others. For this use case HTTP/2 will be faster, with or without TLS (if that could be tested). Claiming that it's the TLS that's speeding up the connection (which is what you mean when you say http vs https) is just plain wrong.
For example, if you terminate your secure traffic on an AWS ELB (or using S3, or CloudFront), you are serving HTTP/1.1 with TLS. And will be for the foreseeable future.
HTTP/2 can be started via an HTTP-over-TLS request, but that doesn't mean it's HTTPS as defined in https://tools.ietf.org/html/rfc2818 and https://tools.ietf.org/html/rfc7230.
But that's the entire argument he's making. It's not what I mean. When I go to my web host's sysadmins and say "I need to use https so I can use service workers," I don't mean "I need to use TLS so I can use service workers." I mean what I said. https was once defined as merely http + TLS (well, SSL), but it has now come to be a protocol/scheme that supports things that http does not. One of those, and certainly the biggest, is TLS. But there are other differences.
This is a comparison between http and https. It investigates why https is faster, and makes it clear that the difference is that https now means other things besides TLS.
It's especially irksome that "httpvshttps.com" complains when your browser doesn't support HTTP/2, saying the results will be inaccurate. If the site were called "http1vshttps2.com" I would agree, but it's not.
Looked at another way, there are huge speed advantages that you can only get if you go with HTTPS.
You aren't guaranteed those advantages if you end up stuck on HTTP/1.1 over TLS, but that just means you're running an old stack which you should (and can) upgrade (unless you're using IIS).
Actually, the whole HTTP/2 name is a massive misnomer (since it doesn't actually support plain HTTP) and is the closest thing to technical newspeak I can think of as far as internet protocols are concerned.
Marketing this as a new HTTP protocol version when it clearly wasn't was just shady tactics and bad propaganda. The whole thing stinks.
One thing I have always wondered about these waterfall comparisons is why HTTP/1.1 is slower. Since HTTP/1.1 has keep-alive, a browser should be able to send multiple requests up front, and the server can then stream the responses back. The lower limit on transfer time should therefore depend only on bandwidth.
Also, most browsers don't enable HTTP pipelining by default: they'll reuse a connection if possible, but won't make multiple requests at once, for compatibility reasons. Chrome even supported it for a while, but had to remove it because it didn't work (bugs in Chrome, bugs in servers, and the head-of-line blocking problem made it not worth keeping) [1].
[1] https://www.chromium.org/developers/design-documents/network...
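The keep-alive behaviour the parent comment describes does work at the socket level: you can write two requests back-to-back on one connection and read both responses, and the server answers them strictly in order. A minimal Python sketch against a throwaway local server (all names here are hypothetical, not from the thread):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoPathHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive on the connection

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoPathHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Pipeline: write BOTH requests before reading any response.
conn = socket.create_connection(("127.0.0.1", port))
conn.sendall(b"GET /first HTTP/1.1\r\nHost: x\r\n\r\n"
             b"GET /second HTTP/1.1\r\nHost: x\r\n\r\n")

data = b""
while b"/second" not in data:  # read until the second response body arrives
    data += conn.recv(4096)
conn.close()
server.shutdown()

# Responses come back in request order: /first strictly before /second.
print(data.index(b"/first") < data.index(b"/second"))
```

That strict ordering is exactly the head-of-line problem: if `/first` were slow to generate, `/second` would sit behind it no matter how fast it was ready.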
Google spread a lot of FUD in their push to get SPDY standardized. For instance they never compared to pipelining, which is relevant because Microsoft found that with pipelining HTTP was essentially just as fast. Google's mobile test where they claimed ~40% speedup used 1 SPDY TCP connection for the entire simulated test run of many sites vs new connections per site for HTTP -- a simple mistake? Maybe, but they didn't take any steps to correct it once they were made aware of it.
"HTTP/1.x has a problem called “head-of-line blocking,” where effectively only one request can be outstanding on a connection at a time."
Does it? Why?
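Because even with pipelining, HTTP/1.1 responses must come back in request order, so one slow response stalls everything queued behind it; HTTP/2 puts each exchange on its own stream. A toy back-of-the-envelope calculation (the millisecond figures are made up purely for illustration):

```python
# Two responses share one connection; the first is slow, the second fast.
slow_first_ms = 300   # assumed server time for the slow response
fast_second_ms = 10   # assumed server time for the fast response

# HTTP/1.1 pipelining: responses are returned in request order, so the
# fast response can't be delivered until the slow one has finished.
h1_second_done = slow_first_ms + fast_second_ms

# HTTP/2: independent streams on the same connection, so the fast
# response isn't held behind the slow one.
h2_second_done = fast_second_ms

print(h1_second_done, h2_second_done)  # 310 10
```

The gap grows with every additional resource queued behind a slow one, which is why waterfalls for many small assets look so different between the two protocols.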
If there was some huge drawback to using HTTP/2, I can see why people might cry foul, but whether they like it or not, HTTP/2 is coming, so we might as well embrace it.
(There's a good argument that users seeing HTTP versus HTTPS was a mistake, too, versus just secure/unsecure markers in browsers. TLS shouldn't have needed a new port number and URL prefix... but it is way too late to fix that now.)
HTTP/2 without TLS would be even faster, had it not been for someone with an agenda deliberately insisting it should not be supported.
As a side note - what is the standard regarding multiplexing for terminating HTTP/2 proxies: i.e. how much multiplexing could make it across that boundary? Or is that a bridge we haven't crossed yet?
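One data point on that question, assuming an nginx terminator: nginx speaks HTTP/2 on the client side but only HTTP/1.1 to its upstreams, so the multiplexing does not make it across the proxy boundary today. A sketch of such a config (server name and upstream are placeholders):

```nginx
server {
    listen 443 ssl http2;                # HTTP/2 terminated at the proxy
    server_name example.com;             # placeholder name

    location / {
        proxy_http_version 1.1;          # upstream leg is HTTP/1.1 only
        proxy_set_header Connection "";  # allow upstream keep-alive
        proxy_pass http://backend;       # placeholder upstream group
    }
}
```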
https://www.google.com/search?num=100&ei=oSGRV9uAHM2OjwOPxZK...
https://www.ssllabs.com/ssltest/analyze.html?d=www.troyhunt....