I think it's important to emphasise that although Tim's toy hypermedia system (the "World Wide Web") didn't come with baked-in security, ordinary users have never really understood that. To them it seems as though http://foo.example/ must be guaranteed to be foo.example, so making that true by upgrading to HTTPS is far easier than somehow teaching billions of people that it wasn't true, and then what they ought to do about that.
I am reminded of the UK's APP scams. "Authorized Push Payment" fraud was a situation where ordinary people thought they were paying, say, "Big Law Firm" but a scammer had persuaded them to send money to an account the scammer controlled. Historically the UK's payment systems didn't care about names: a payment to "Big Law Firm" acct #123456789 was the same as a payment to "Jane Smith" acct #123456789, even though you'd never get a bank to open an account in the name of "Big Law Firm" without documents the scammer doesn't have. To fix this, today's UK payment systems treat the name as a required match, not merely a note for your records. So when you write "Big Law Firm" and try to pay Jane's account because you've been scammed, the software says "Wrong, are you being defrauded?" and you're safe, because you have no reason to fill in "Jane Smith" as that's not who you're intending to give money to.
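That name check (Confirmation of Payee) can be sketched as a simple match function. This is purely illustrative: real CoP implementations do fuzzy matching and also return "close match" outcomes, which this sketch omits.

```python
def confirmation_of_payee(entered_name: str, registered_holder: str) -> str:
    """Simplified Confirmation-of-Payee check: compare the name the payer
    typed against the account's registered holder, ignoring case and
    extra whitespace. Real systems also report near misses."""
    norm = lambda s: " ".join(s.lower().split())
    return "match" if norm(entered_name) == norm(registered_holder) else "no-match"

# The scam scenario above: the payer thinks they're paying the firm,
# but the account actually belongs to the scammer.
print(confirmation_of_payee("Big Law Firm", "Jane Smith"))     # no-match: warn the payer
print(confirmation_of_payee("Big  Law Firm", "big law firm"))  # match
```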
We could have tried to teach all the tens of millions of UK residents that the name was ignored and so they need other safeguards, but that's not practical. Upgrading payment systems to check the name was difficult but possible.
And I noticed that WhatsApp is even worse than Chrome: it opens links over HTTPS even when I share HTTP links.
Equally your preference for HTTP should not stand in the way of a more secure default for the average person.
Honestly I'd prefer that my mom didn't browse any HTTP sites; it's just safer that way. But that doesn't take away your ability to serve unencrypted pages, which can easily be intercepted or modified by an ISP (or worse).
Probably a low-threat security risk for a blog.
There are ways to remove that dependency, but it's going to involve a decentralized DNS replacement like Namecoin or Handshake, many of which include their own built-in alternatives to the CA system too, so if "no third parties" is something you truly care about you can probably kill two birds with one stone here.
Depending on yet another third party to provide what is IMHO a luxury should not be required, and I have been continually confused as to why it is being forced down everyone's throat.
https://multiplayeronlinestandard.com/goto.html (the reason for the domain is I will never waste time on HTTPS but github does it automatically for free up to 100GB/month)
Firefox does this when I type in a URL and the server is down. I absolutely hate this behaviour, because I run a bunch of services inside my network.
If I tell my browser ‘fetch http://site.example,’ I mean for it to connect to site.example over HTTP on port 80, nothing more. If there is a web server running which wants to redirect me to https://site.example, awesome, but my browser should never assume I mean anything I did not say.
What is funny about HTTPS is that early arguments for its existence, IIRC, were often along the lines of protecting credit card numbers and personal information that needed to be sent during e-commerce.
HTTPS may have delivered on this promise. Of course HTTPS is needed for e-commerce. But not all web use is commercial transactions.
Today, it's unclear who or what^2 HTTPS is really protecting anymore.
For example,
- web users' credit card numbers are widely available, sold on black markets to anyone; "data breaches" have become so common that few people ask why the information was being collected and stored in the first place nor do they seek recourse
- web users' personal information is routinely exfiltrated during web use that is not e-commerce, often to be used in association with advertising services; perhaps the third parties conducting this data collection do not want the traffic to be optionally inspected by web users or competitors in the ad services business
- web users' personal information is shared from one third party to another, e.g., to "data brokers", who operate in relative obscurity, working against the interests of the web users
All this despite "widespread use of encryption", at least for data in transit, where the encryption is generally managed by third parties
When the primary use of third-party-mediated HTTPS is to protect data collection, telemetry, surveillance and ad services delivery,^1 it is difficult for me to accept that HTTPS as implemented is primarily for protecting web users. It may benefit some third parties financially, e.g., CA and domain-name profiteers, and it may protect the operations of so-called "tech" companies, though.
Personal information and behavioral data are surreptitiously exfiltrated by so-called "tech" companies whilst the so-called "tech" companies' "secrets", e.g., what data they collect, generally remain protected. These companies deal in information they do not own yet operate in secrecy from its owners, relentlessly defending against any requests for transparency.
1. One frequent argument for the use of HTTPS put forth by HN commenters has been that it prevents injection of ads into web pages by ISPs. Yet the so-called "tech" companies are making a "business" out of essentially the same thing: injecting ads, e.g., via real-time auctions, into web pages. It appears to this reader that in this context HTTPS is protecting the "business" of the so-called "tech" companies from competition by ISPs. Some web users do not want _any_ ads, whether from ISPs or so-called "tech" companies
2. I monitor all HTTPS traffic over the networks I own using a local forward proxy. There is no plaintext HTTP traffic leaving the network unless I permit it for a specific website in the proxy config. The proxy forces all traffic over HTTPS
If HTTPS were optionally under user control, certainly I would be monitoring the HTTPS traffic automatically sent from my own computers on my own network to Google by Chrome, Android, YouTube and so on. As I would for all so-called "tech" companies doing data collection, surveillance and/or ad services as a "business".
Ideally one would be able to make an informed decision whether they want to send certain information to companies like Google. But as it stands, with the traffic sometimes being protected from inspection _by the computer owner_, through use of third party-mediated certificates, the computer owner is prevented from knowing what information is being sent
In my own case, that traffic just gets blocked.
It's not a strawman, it's a real attack that we've seen for decades.
The entire guidance of "don't connect to an open wireless AP"? That's because a malicious actor who controlled the AP could read and modify your HTTP traffic - inject ads, read your passwords, update the account number you requested your money be transferred to. The vast majority of that threat is gone if you're using HTTPS instead of HTTP.
But a browser will not accept a redirect from a domain with an incorrect certificate (and rightly so), so this will start failing if https becomes the default, unless we generate certificates for all those customers, many thousands in our case. And then we need to get those certificates to the AWS load balancer where we terminate https (not even sure if it can handle that many). I think we may need to retire that feature.
Interesting, that hasn't been my experience. There's a certain group of stubborn techies who have active sites lacking HTTPS. One example is Dave Winer's blog:
He's doing some really interesting things over at https://feedland.com, so I'm glad I clicked through the TLS warning on his blog.
I work at a company that also happens to run a CDN, and the sheer number of layers Google forces everyone to put onto their stack (which was once a very simple text-based protocol) is mind-boggling.
First there was simple TCP+HTTP. Then HTTPS came along, adding a lot of CPU load to servers. Then they invented SPDY, which became HTTP/2, because websites exploded in asset use (mostly JS). Then they reinvented layer 4 with QUIC (in-house first), which resulted in HTTP/3. Now this.
Each of them added more complexity and data framing to what used to be a simple message/file exchange protocol.
And you cannot opt out, because customers put their websites into a website checker and want to see all-green traffic lights.
You can't do e-commerce without encryption. You live under capitalism. It's weird to me to see capitalists not wanting to accept payments for goods. As far as the complexity argument goes, wait until you see what goes on in your CPU! Or the codebase of your average website. There is no real simplicity, and simplicity just ties people's hands.
This weird worship of simplicity just doesn't make sense. By this argument we should never have left the mainframe green-screen terminal world. Or the PDP era. Or the abacus era, for that matter. An arbitrary line drawn in the sand is a nearly pure emotional appeal, the libertarian housecat meme applied to technology.
Instead, this is a train with no final destination, and those who think otherwise are just engaging in nostalgia.
Even the most hardcore capitalists refuse to take our money for their services, insisting on giving away free websites--some which don't even have any authentication at all--that have frustrating business models on the backend :(. The reason we encrypt the vast majority (by volume, not weight) of our web content is for integrity (so random other people can't hijack and modify what we render), and somewhat (but not sufficiently, as TLS is broken) for privacy, not because of some attempt to partake in capitalism.
PCI DSS is the data security standard required by credit card processors for you to be able to accept credit card payments online. Since version 1.0 came out in 2004, Requirement 4.1 has been there, requiring encrypted connections when transmitting card holder information.
There certainly was a time when you had two parts to a commerce website: one site with all of the product stuff, catalogs, categories and descriptions, served over HTTP (www.shop.com), and then usually an entirely separate domain (secure.shop.com) where the actual checkout process started, which used SSL/TLS. This was due to the overhead of SSL in the early 2000s and the cost of certificates. It largely went away once Intel processors got hardware-accelerated instructions for things like AES, certificates became more cost-effective, and then Let's Encrypt made it simple.
Occasionally during the 2000s and 2010s you might see HTML forms that were served over HTTP with an HTTPS target URL, but even that was rare, simply because it was a lot of work to make it that complex instead of having the checkout button just take you to an entirely different site.
I don't like people externalizing their security policy preferences. Yes this might be more secure for a class of use-cases, but I as a user should be allowed to decide my threat model. It's not like these initiatives really solve the risks posed by bad actors. We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
> I as a user should be allowed to decide my threat model
Asking you if you want to proceed is allowing you to decide your threat model.
> We have so much compliance theater around email, and we still have exactly the same threats and issues as existed twenty years ago.
...and yet we have largely eliminated entire classes of issue on the web with the shift to HTTPS, to the point where asking users to opt-in to HTTP traffic is actually a practical option, raising the default security posture with minimal downside.
A lot of this discussion is about how the browsers define their security requirements on top of HTTPS/TLS/etc.
Such as what CAs they trust by default, and what's the maximum lifetime of a certificate before they won't trust it. I believe it is currently 398 days (about 13 months), going even lower soon.
You can also still use your own threat model. You can use self-signed certs, import your own CA, etc. The issue is that browsers need to service the mass market, including the figurative grandma who won't otherwise understand fake bank certificates.
As for email, yes...that is a complete shitshow and I'm still surprised it works as well as it does.
It is incredibly common for public wifi captive portals to be built on a stack of hacks, some of which require the inspection of HTTP and DNS requests to function.
*Yes, better tools exist, but they aren't commonly used, and they require Portal, WAP and Client support. Most vendors just tell people to turn the new fancy shit off, disable HTTPS and proceed with HTTP.
What's it intercepting? Apple's detection sends an HTTP/HTTPS request to captive.apple.com. If it fails, it assumes a captive portal. There's also a DHCP option Apple supports.
But even after detection, there's redirection.
Have a look at WAP Vendor options.
Here Powerlynx explicitly requests disabling HTTPS before auth on Cambium, in their user setup guide:
https://docs.powerlynx.app/networking/cambium.html
"Redirect HTTP-only - On"
This guarantees that, upon redirection, you are presented with an HTTP login page for the captive portal. And then any subsequent redirections also have to be HTTP.
Here's Start Hotspot:
https://go.starthotspot.com/help/cambium/
"Redirect: Tick HTTP-only"
Cambium supports more modern methods, but captive portal vendors are not going to shift before letting their customers fall on their face.
(Also, Cambium's guest access whitelist is based on DNS and breaks with DNS over HTTPS/TLS.)
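The Apple-style probe mentioned above can be sketched in a few lines. The probe URL and the "Success" body match what captive.apple.com actually returns; the helper itself is an illustrative sketch, not how any vendor implements it:

```python
from urllib.request import urlopen

PROBE_URL = "http://captive.apple.com/hotspot-detect.html"

def looks_captive(status: int, body: str) -> bool:
    """A captive portal typically intercepts the probe and returns a
    redirect or a login page instead of the expected 200 + "Success"."""
    return not (status == 200 and "Success" in body)

def probe() -> bool:
    # Network call: only meaningful where outbound plain HTTP is allowed.
    with urlopen(PROBE_URL, timeout=5) as resp:
        return looks_captive(resp.status, resp.read().decode("utf-8", "replace"))

# Pure-logic examples (no network needed):
# looks_captive(200, "...Success...")       -> False (open internet)
# looks_captive(302, "<html>Login</html>")  -> True  (probably captive)
```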
HSTS might also interact with this, but I'd expect an HSTS site to just cause Chrome to go for HTTPS (and then that connection would either succeed or fail).
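For reference, HSTS is just a response header the site sends over HTTPS; once the browser has seen it, it rewrites future http:// navigations to https:// before touching the network (the max-age value here is illustrative):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```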
> to force network-level auth flows (which don't always fire correctly when hitting HTTPS)
The whole point of HTTPS is that these shouldn't work. Vendors need to stop implementing weird network-level auth by MitM'ing the connection; DHCP has an option to signal to someone joining a network that they need to go to a URL to do authentication. These MitM-ers are a scourge, and often cause a litany of poor behavior in applications…
My first reaction was along the lines of "What? That can't possibly be right..."
After testing a bit, it looks like you can load https://neverssl.com but it'll just redirect you to a non-https subdomain. OTOH, if the initial load before redirecting is HTTPS then it shouldn't work on hotel wifi or whatever, so still seems like it defeats the purpose.
Huh.
http.rip will probably show a "website unavailable" error at some point unless you manually type in the http:// prefix.
What is the risk exactly? A man-in-the-middle redirect to a malicious https site?
It would be nice to see some way for browsers to indicate when a site has some extra validation so you could immediately see that your bank has a real certificate as is appropriate for a bank and not just Let's Encrypt. Yes, I can click the padlock icon to get that information, but it would be nice if there was some light warning for free certificates to make it more immediately obvious.
Vs. `traceroute` suggests that would-be on-path attackers are up against a vastly smaller attack surface.
I'm not going to drive 35mph without a trusted certificate authority verifying that sign wasn't tampered with by a MITM. My grandma tried to tell me she loved me over an unencrypted and insecure phone line the other day - nice try, hackers!
[1] (Except on the arm subdomain for some reason)
Don't ever view source on slackware.com
Awwww, that's the stuff right there.
That's very 90s looking HTML. Large swathes of blank spaces may also indicate they're rendered somehow. PHP? CGI?
Confusingly it also sets an akamai cookie, `ak_bmsc`. Seems a bit out of place.
Why?
Do you have examples? I’m not sure how to search for this feature.
Maybe everything .local will already be allowed.
On another note I would much prefer to skip https, as the default, and go straight to WSS (TLS WebSockets). WebSockets are superior to HTTP in absolutely every regard except that HTTP is session-less.
Making an exception to allow plain HTTP connections instead of making an exception to allow self-signed certificates, seems like the worse choice to me.
Anyone have a good recipe for setting up HTTPS for one-off experiments on localhost? I generally don't, because there isn't much of a compromise story there, but it's always been a security weakness in how I do tests, and if Chrome is going to start reminding me stridently I should probably bother to fix it.
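One common recipe, sketched under assumptions: generate a locally-trusted certificate with a tool like mkcert (running `mkcert localhost` produces files along the lines of `localhost.pem` / `localhost-key.pem`; the filenames below are assumptions, check mkcert's output), then wrap Python's stdlib server in TLS:

```python
import http.server
import ssl

def make_server(certfile: str, keyfile: str, port: int = 8443):
    """Return an HTTPServer serving the current directory over TLS.

    certfile/keyfile are assumed to come from a local-CA tool such as
    mkcert, so the browser trusts them without a warning."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)  # raises if the files are missing
    httpd = http.server.HTTPServer(("127.0.0.1", port),
                                   http.server.SimpleHTTPRequestHandler)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    return httpd

if __name__ == "__main__":
    # Then browse to https://localhost:8443/
    make_server("localhost.pem", "localhost-key.pem").serve_forever()
```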
Two hosting providers I use only offer HTTP redirects (one being so bad it serves up a self signed cert on the redirect if you attempt HTTPS) so hopefully this kicks them into gear to offer proper secure redirects.
Either way I agree with this update. It's better to put the burden of knowledge on those hosting things locally and tinkering with DNS than on those who have no idea that seeing a domain name doesn't imply they're actually talking to its owner.
Why is HTTPS adoption on Linux at 80% when macOS/Android/Windows are at 95%? Quite unexpected.
The answer is probably that people that run Linux are far more likely to run a homelab intranet that isn't secured by HTTPS, because internal IP addresses and hostnames are a hassle to get certificates for. (Not to mention that it's slightly pointless on most intranets to use HTTPS.)
> If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only.
Sounds like it's just because a large chunk of Linux usage is for web interfaces on the local machine or network, rather than everyday web browsing.
It means that if someone has patched into your local network they can access anything in there, but they have to get in first, right? So how concerned should one be in these scenarios
(a) one has wifi with WPA2 enabled
(b) there's a Verizon-style router to the outside world but everything is wired on the house side?
Wait a minute, how do they know what version Chrome will be at a year from now?
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/p...
>Chrome 154 Stable next year (Oct 7, 2026)
In every case, by the way, we kinda trust the makers of this software. They can easily ship backdoors to specific users. Same with crypto wallets etc.
From having to pay for it in the past to now having to set up Let's Encrypt, certbot, https-ingresses!
God, half my hobbyist and raw non-helm kubernetes config is https related. https-ingress.yaml is gigantic!
Is this really the best devex we could come up with?
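For what it's worth, with cert-manager installed the per-site boilerplate can shrink to roughly this; the hostname, issuer name, and service name are placeholders, and this is a sketch of the common cert-manager annotation pattern, not a drop-in config:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  annotations:
    # cert-manager watches this annotation and provisions the TLS secret
    cert-manager.io/cluster-issuer: letsencrypt   # placeholder issuer name
spec:
  tls:
    - hosts: [blog.example.com]                   # placeholder hostname
      secretName: blog-tls                        # cert-manager fills this in
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog                        # placeholder service
                port:
                  number: 80
```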
Even picking the most dismissive wording you can, you contradict yourself.
If someone doesn't like it, they can stay behind on the old DNS system, or they can launch a new blockchain with their own version of reality... It's absurd that we need to have one version of reality for the entire planet. If someone in China wants to own facebook.com, they should be allowed to. Heck, it could be a separate silo per city. The age of copyright and trademark is over. I don't see AI companies distributing royalties to the people who wrote their training sets...