As someone who had to write a couple of proxy servers, I can't express how sadly accurate it is.
This is far more of a security problem than all of the bad HTTP 1.1 implementations put together. It is built-in corporate control that cannot be bypassed except by not using HTTP/3. It is extremely important that we not let the mega-corp browsers drop HTTP 1.1, and that we continue to write our own projects for it.
Doesn't that imply that HTTP/1 is deceptively complex?
Both of these are only concerned with reducing the latency of doing lots of requests to the same server in parallel.
Which is only needed by web browsers and nothing else.
The initial problem is usually easy to solve for; it’s all the edge cases and other details that make something complex.
Chunked transfer/content encoding problems still give me nightmares...
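For anyone who hasn't hit those nightmares yet, the wire format itself is simple: a hex chunk size, CRLF, that many bytes of data, CRLF, repeated until a zero-size chunk. Here's a minimal sketch of a decoder (assumptions: ignores trailer headers, assumes the full body is already in memory):

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body (sketch only)."""
    out, pos = b"", 0
    while True:
        crlf = body.index(b"\r\n", pos)
        # The size line may carry chunk extensions after ';' -- drop them.
        size = int(body[pos:crlf].split(b";")[0], 16)
        if size == 0:
            return out  # zero-size chunk terminates the body
        start = crlf + 2
        out += body[start:start + size]
        pos = start + size + 2  # skip the CRLF trailing the chunk data

print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"))  # b'Wikipedia'
```

The edge cases the comment alludes to (trailers, extensions, a `Content-Length` header that contradicts `Transfer-Encoding`) are exactly what this toy skips, and exactly where real implementations go wrong.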
LOL, yes same here. Can’t wait for Bluetooths b̶a̶l̶l̶s̶ baggage to be chopped.
I installed a web server on my phone and send files that way much faster (and Android -> Apple works):
https://f-droid.org/en/packages/net.basov.lws.fdroid/
I wish there were a standard for streaming (headphones could connect to your network via WPS, and stream some canonical URL with no configuration needed).
WiFi uses nearly 10x the power Bluetooth does when active (and that’s before factoring in BLE, which cuts that in half). WiFi also has access to the much less crowded 5GHz band.
IIRC WiFi is also a much simpler protocol, it’s just a data channel (its aim being to replace LAN cables).
Plus, in order to support cheap and specialised devices, Bluetooth supports all sorts of profiles and applications. This makes the devices simpler, and means all the configuration can be reduced to pairing, but it makes the generic hosts a lot more complicated.
But mostly the problem is that too much of this complexity fell on hardware vendors and they suck at writing software. There are umpteen bajillion different bluetooth stacks out there and they're all buggy in new and exciting ways. Interoperability testing is hugely neglected by most vendors. The times where Bluetooth works well are typically where the same vendor controls both ends of the link, like Airpods on an iPhone.
In 2020 I tried buying some reputable brand Bluetooth headphones for my kids so they could do home-schooling without disturbing each other. It was a total failure. Every time their computer went to sleep the bluetooth stack would become out of sync and attempts to reconnect would result in just "error connecting" messages, requiring you to fully delete the bluetooth device on the Windows side and redo the entire discovery/association/connection from scratch. The bluetooth stack on Windows would crash halfway through the association process about half of the time forcing you to reboot the computer to start over. Absolutely unusable. I tried the same headphones on a Linux host and they worked slightly better, but were still prone to getting out of sync and requiring a full "forget this device" and add it again cycle every few days for no apparent reason.
That is because HTTP pipelining was and is a mistake, and it is responsible for a ton of HTTP request smuggling vulnerabilities, because the HTTP/1.1 protocol has no framing.
No browser supports it anymore, thankfully.
Anyone that doesn't support this is broken. My own code definitely does not wait for responses before sending more requests, that's just basic usage of TCP.
The problem is that if Request number 1 leads to an error whereby the connection is closed, those latter two requests are discarded entirely. The client would have to retry request number two and three. If the server has already done work in parallel though, it can't send those last two responses because there is no way to specify that the response is for the second or third request.
The only way a server has to signal that it is in a bad state is to return 400 Bad Request and to close the connection because it can't keep parsing the original requests.
There is no support for HTTP pipelining in current browsers.
What you are thinking about is probably HTTP keep alive, where the same TCP/IP channel is used to send a follow-up request once a response to the original request has been received and processed. That is NOT HTTP pipelining.
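To make the distinction concrete, here's a toy sketch of pipelining over a plain socket: both requests hit the wire before any response is read, and the server must answer strictly in order. (Assumptions: the tiny in-process server and its "response N" bodies are made up for the demo, not any real implementation.)

```python
import socket
import threading

def serve_once(listener):
    """Accept one connection, read two pipelined requests, reply in order."""
    conn, _ = listener.accept()
    buf = b""
    for n in (1, 2):
        while b"\r\n\r\n" not in buf:
            buf += conn.recv(4096)
        head, _, buf = buf.partition(b"\r\n\r\n")  # one bodyless request
        body = b"response %d" % n
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                     % (len(body), body))
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
port = listener.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
# Pipelining: send both requests back to back, without waiting.
client.sendall(b"GET /a HTTP/1.1\r\nHost: x\r\n\r\n"
               b"GET /b HTTP/1.1\r\nHost: x\r\n\r\n")
data = b""
while b"response 2" not in data:
    data += client.recv(4096)
client.close()
print(data.count(b"200 OK"))  # prints 2: two responses, in request order
```

Keep-alive, by contrast, would only send the second `GET` after the first response had been fully read. And note the smuggling hazard discussed above: nothing in the two responses says which request each one answers; ordering is the only correlation.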
> Anyone that doesn't support this is broken. My own code definitely does not wait for responses before sending more requests, that's just basic usage of TCP.
Yep.
There is some "support" a server could do, in the form of processing multiple requests in parallel¹, e.g., if it gets two GET requests back to back, it could queue up the second GET's data in memory, or so. The responses still have to be streamed out in the order they came in, of course. Given how complex I imagine such an implementation would be, I'd expect that to be implemented almost never, though; if you're just doing a simple "read request from socket, process request, write response" loop, then like you say, pipelined requests aren't a problem: they're just buffered on the socket or in the read portion's buffers.
¹this seems fraught with peril. I doubt you'd want to parallelize anything that wasn't GET/HEAD for risk of side-effects happening in unexpected orders.
> Host: neverssl.com
> This is actually a requirement for HTTP/1.1, and was one of its big selling points compared to, uh...
> AhAH! Drew yourself into a corner didn't you.
> ...Gopher? I guess?
I feel like the author must know this... HTTP/1.0 supported the Host header but didn't require it; by making it mandatory, HTTP/1.1 enabled consistent name-based virtual hosting on web servers.
I did appreciate the simple natures of the early protocols, although it is hard to argue against the many improvements in newer protocols. It was so easy to use nc to test SMTP and HTTP in particular.
I did enjoy the article's notes on the protocols; however, the huge sections of code snippets lost my attention midway.
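The virtual-hosting point above is easy to show in a few lines: one listener, many sites, and the required Host header is the only thing that tells them apart. (A sketch with invented hostnames and content; real servers obviously do much more.)

```python
# Hypothetical vhost table: one IP/port serving several named sites.
VHOSTS = {
    b"blog.example.com": b"blog content",
    b"shop.example.com": b"shop content",
}

def pick_vhost(raw_request: bytes) -> bytes:
    """Return the content for the site named in the Host header."""
    for line in raw_request.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            return VHOSTS.get(value.strip(), b"404: unknown host")
    # An HTTP/1.0 client may omit Host entirely -- the server can only guess.
    return b"default site"

print(pick_vhost(b"GET / HTTP/1.1\r\nHost: shop.example.com\r\n\r\n"))
```

Without a mandatory Host header, the fallback branch is all a shared-IP server has, which is why HTTP/1.0-era hosting needed one IP address per site.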
The author does know this, it's a reference to a couple paragraphs above:
> [...] and the HTTP protocol version, which is a fixed string which is always set to HTTP/1.1 and nothing else.
> (cool bear) But what ab-
> IT'S SET TO HTTP/1.1 AND NOTHING ELSE.
Since playing with QUIC, I've lost all interest in learning HTTP/2, it feels like something already outdated that we're collectively going to skip over soon.
As far as learning goes, I do think HTTP/2 is interesting as a step towards understanding HTTP/3 better, because a lot of the concepts are refined: HPACK evolves into QPACK, and flow control still exists but is neatly separated into QUIC. I've only taken a cursory look at H3 so far, but it seems like a logical progression that I'm excited to dig into more deeply, after I've gotten a lot more sleep.
Plus, you know, just an awesome dev who knows his stuff. Huge fan.
https://fasterthanli.me/series/reading-files-the-hard-way/pa...
For TLS, I recommend The Illustrated TLS 1.3 Connection (Every byte explained and reproduced): https://tls13.xargs.org/
Does it need to be pointed out that this is complete bullshit?
CRLF was used very heavily and thus got baked into a lot of different places. It conveniently sidesteps the ambiguity of "some systems use CR, others use LF" by just putting both in, and since they are whitespace, there's not much downside other than the extra byte.
Beyond that, there are many other clear and obvious connections between Hypertext Transfer Protocol and teletype machines. Many early web browsers were expected to be teletype machines [0]. So while it might be a bit of a stretch, I'd say this is far from "complete bullshit".
[0] - http://info.cern.ch/hypertext/WWW/Proposal.html#:~:text=it%2...
I agree the two are similar, but the space shuttle story is also bullshit. See e.g. Snopes: https://www.snopes.com/fact-check/railroad-gauge-chariots/
People are suckers for plausible-sounding and amusing stories; that one's classic bait for people's lack of critical thinking skills.
> CRLF was used verily heavily and thus got baked into a lot of different places.
Well, exactly. Which is precisely why it's bullshit to claim that HTTP was "based on teletypes". It was based on technical standards of the time, which originally derived from teletypes, but there was no consideration of teletypes in the development of HTTP that I'm aware of:
> Many early web browsers were expected to be teletype machines [0].
Could you quote a relevant part of your reference? Because I don't see it. Perhaps you're confusing "dumb terminal" with "teletype"? Or confusing the Unix concept of tty, a teletype abstraction, with the electromechanical device known as a teletype - the "remote typewriters" mentioned in the original comment?
By the time that WWW spec was written in 1990, teletypes were decades out of date and not commonly used at all. PCs had existed for over a decade, and video display terminals for mainframes and minicomputers had been around for nearly three decades. No-one was using actual teletypes any more.
> So while it might be a bit of a stretch, I'd say this is far from "complete bullshit".
This conclusion would work if any of your claims had survived scrutiny.
Theoretically yes, but in practice?
I've done my share of nc testing on even simpler protocols than HTTP/1.1.
For some reason the migration to HTTPS scared me despite the security assurances. I could not see anything useful in wireshark anymore. I now had to trust one more layer of abstraction.
> Theoretically yes, but in practice?
Yes, that's the whole point of encapsulation. The protocol is blissfully unaware of encryption and doesn't even have to be. It has no STARTTLS mechanism either.
Your HTTPS traffic consists of a TCP handshake to establish a TCP connection, a TLS handshake across that TCP connection to exchange keys and establish a TLS session, and the exact same HTTP request/response traffic, inside the encrypted/authenticated TLS session.
The wonderful magic of solving a problem by layering/encapsulating.
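That layering is visible directly in code: the HTTP bytes are built identically either way, and the only difference is whether the TCP socket gets wrapped in a TLS session first. (A sketch using Python's stdlib `ssl` module; host and behavior beyond the status line are assumptions.)

```python
import socket
import ssl

def build_request(host: str) -> bytes:
    # The HTTP layer is byte-for-byte the same with or without TLS.
    return (f"GET / HTTP/1.1\r\nHost: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

def fetch_status(host: str, use_tls: bool) -> bytes:
    sock = socket.create_connection((host, 443 if use_tls else 80))
    if use_tls:
        # The only difference: wrap the TCP socket in a TLS session first.
        ctx = ssl.create_default_context()
        sock = ctx.wrap_socket(sock, server_hostname=host)
    sock.sendall(build_request(host))
    status = sock.recv(64).split(b"\r\n")[0]  # first line of the response
    sock.close()
    return status

# e.g. fetch_status("example.com", use_tls=True) returns the status line
# of the same exchange you'd see in cleartext on port 80.
```

Note there's no STARTTLS-style upgrade step anywhere: HTTP never learns it's encrypted, which is the encapsulation point made above.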
> I could not see anything useful in wireshark anymore
Wireshark supports importing private keys for that, see: https://wiki.wireshark.org/TLS
With TLS+SNI, this is redundant to the name from SNI. But we had TLS long before we had SNI, and we had HTTP long before we had TLS, and both of those scenarios need the `Host` header.
GET / HTTP/1.0\r\n\r\n
Still works with many websites.