It certainly feels like the wrong way of solving problems (ramming more into the domain registry always seems like a bad option). Is the technology dead or destined to fail?
Edit: rationale: DNSSEC solves domain validity, but HTTPS/TLS solves almost the same problem and has better backing (Azure said they don't support DNSSEC and recommended TLS as a better alternative). DNSSEC also does not solve BGP hijacking, which, combined with IP-based TLS certificate issuance, moots much of the value DNSSEC has. Sure, you could registrar-lock your domain via DNS (preventing Let's Encrypt from signing things), but if a threat actor has the capability to BGP hijack to perform such an attack and is targeting you, you probably have bigger issues elsewhere.
Both BGP and certificate issuance have bootstrapping problems, which are handled today by imperfect TOFU-like solutions. DNSSEC is, IMHO, perfectly positioned to solve both of those problems. I.e. use certificates all you like, but verify them by looking up the TLSA record in the DNS using DNSSEC. No need to trust CAs. BGP could possibly use the same solution, using the reverse lookup .arpa DNS space.
DNSSEC is the building block from which secure certificates and BGP routes can be built, without the ad-hoc CA system we have today.
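To make the TLSA idea concrete, here's a rough sketch of a DANE-style check in Python, using the third-party dnspython library. The hostname, port, and the single TLSA profile handled are illustrative assumptions, not a complete implementation:

    import hashlib
    import ssl

    import dns.flags
    import dns.resolver  # pip install dnspython

    def tlsa_matches(host: str, port: int = 443) -> bool:
        # Fetch the TLSA records for the service, e.g. _443._tcp.example.com.
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 1232)  # ask for DNSSEC records
        answer = resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

        # The AD flag only says the upstream resolver validated the chain;
        # for real security you would validate locally, or have a trusted
        # path to a validating resolver.
        if not (answer.response.flags & dns.flags.AD):
            raise RuntimeError("TLSA answer was not DNSSEC-validated")

        # Hash the full DER certificate the server actually presents.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        cert_sha256 = hashlib.sha256(der).digest()

        # Handle only one common profile: usage 3 (DANE-EE),
        # selector 0 (full certificate), matching type 1 (SHA-256).
        return any(r.usage == 3 and r.selector == 0 and r.mtype == 1
                   and r.cert == cert_sha256
                   for r in answer)

The point being: the trust anchor is the DNS hierarchy itself, not any of the hundreds of CAs in the browser root store.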
If Comodo knowingly misissues a Google Mail certificate, Google will nuke them from orbit, as it has done in the past with other major CAs. Google can't do anything about .COM mis-signatures.
Thankfully, practically none of .COM is signed.
Compare that to DNSSEC, where each zone is designed to have a single supplier. If a TLD registry goes bad, what is going to happen? Moving to a new TLD is a huge deal that affects everyone, including your customers. And browsers can't really ban an entire TLD the way they ban CAs.
So yes, both CAs and DNSSEC have problems. But one of them is pretty good and getting better all the time (deprecation of old crypto, short-lived certificates), while the other is stuck with ancient crypto, constant technical outages, and no real chance of improvement.
As with any PKI, the RPKI isn't effective if you don't use it, or if you use it in a merely advisory capacity and then routinely ignore its advice. And, as with DNSSEC, if you actually use this technology and people screw up (which will happen), there are outages that would not have happened if you had used no security technology at all.
In addition, though, RPKI signifies business arrangements, so you can imagine real-world policies varying slightly from what RPKI says. For example, suppose you're a Canadian ISP, and Big US ISP A says they're not going to use long-haul provider X any more as of Thursday. Sure enough, the RPKI entries for ISP A via provider X expire after Wednesday. As of 00:05 on Thursday, 40% of the routes for ISP A on your systems transit provider X. Should you kill those? Your customers would perhaps be pretty angry if ISP A's CEO later clarifies that "obviously" they meant from the start of business hours. How about at 12:00 midday? How about the Monday after? What if, two months after this announcement, having left those routes in place, you discover provider X was hijacking ISP A traffic, and it was never merely a mistake but leverage?
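For concreteness, here's a toy sketch of the route-origin-validation logic RPKI enables (RFC 6811 semantics: valid / invalid / unknown). The prefixes and ASNs are made up; a real validator consumes signed ROAs from the RPKI repositories rather than a hard-coded list:

    import ipaddress

    # Each ROA authorizes one origin ASN for a prefix, up to a max length.
    roas = [
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
        (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
    ]

    def validate(prefix: str, origin_asn: int) -> str:
        net = ipaddress.ip_network(prefix)
        covered = False
        for roa_net, max_len, asn in roas:
            if net.subnet_of(roa_net):  # some ROA covers this announcement
                covered = True
                if asn == origin_asn and net.prefixlen <= max_len:
                    return "valid"
        # Covered by a ROA but no match -> invalid; no covering ROA -> unknown.
        return "invalid" if covered else "unknown"

    print(validate("192.0.2.0/24", 64500))    # valid
    print(validate("192.0.2.0/24", 64666))    # invalid (wrong origin AS)
    print(validate("203.0.113.0/24", 64500))  # unknown (no covering ROA)

Note that "unknown" is the default for the vast majority of the routing table, which is exactly why the advisory-vs-enforcement question above has teeth.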
This seems like a pretty unreasonable complaint. DNSSEC also doesn't stop phishing. Or nukes.
So you either accept that TLS is the global maximum for security and world governments can basically permanently compromise the internet, or you build private PKI systems, or you want something like DNSSEC. And DNSSEC is something like DNSSEC.
With DNSSEC, zones are controlled and signed by a single authority, and for ccTLDs that authority is controlled by ... the government. If they wanted to produce a malicious signature and serve it narrowly to a targeted victim ... that's quite doable, with little in the DNSSEC system to prevent it.
While it's true that there are many TLS root cert operators and some probably could be compromised by a government (though I wouldn't say "trivially"), there is also a gigantic mutual-destruction pact in the form of certificate transparency: all issued certs are visible in transparency logs, and there are quite sophisticated technical and social controls in place to detect malicious certs. A cert operator would be imperiling their business and future trust in a way that isn't as true for DNSSEC.
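As a small illustration of how that visibility gets used, here's a sketch of CT-based monitoring via the public crt.sh search front-end (the URL format and JSON field names reflect crt.sh's interface as I understand it; a production monitor would watch the logs directly):

    import json
    import urllib.request

    def ct_issuers_for(domain: str):
        """Return (issuer, common name) pairs logged for a domain."""
        url = f"https://crt.sh/?q={domain}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            entries = json.load(resp)
        # Each entry is one logged certificate; surfacing the issuer lets a
        # domain owner spot a CA they never did business with.
        return {(e["issuer_name"], e.get("common_name") or "")
                for e in entries}

    for issuer, cn in sorted(ct_issuers_for("example.com")):
        print(f"{cn}: issued by {issuer}")

Any cert that shows up from a CA you never used is your cue to start the nuking-from-orbit process.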
Certificate transparency is cool, but it's not clear it really works for many classes of devices (particularly devices that only use one network, like gaming systems or TVs). The global adversary just compromises the channels used to obtain the transparency logs and to report violations. It seems to work for mobile consumer devices like cell phones, because these devices naturally connect to many different networks, of which only some are compromised.
Users who are concerned about a government like the United States can use DNSSEC to prevent a threat like this by using a non-US-based TLD that employs DNSSEC, and by running their client in a mode that requires valid DNSSEC records for their domains. Of course, such services would practically need to be located outside of the country of concern as well.
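A sketch of what "requiring valid DNSSEC records" could look like client-side, again with dnspython. This validates one link (the zone's DNSKEY RRset against its own signature) rather than the full chain to the root; the resolver address and domain are placeholders:

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdatatype

    resolver_ip = "9.9.9.9"                   # placeholder resolver
    zone = dns.name.from_text("example.ch")   # placeholder non-US domain

    # Ask for the zone's DNSKEY RRset with its RRSIGs included; use TCP,
    # since DNSKEY responses are often too big for plain UDP.
    req = dns.message.make_query(zone, dns.rdatatype.DNSKEY,
                                 want_dnssec=True)
    resp = dns.query.tcp(req, resolver_ip, timeout=5)

    rrsets = {r.rdtype: r for r in resp.answer}
    if (dns.rdatatype.DNSKEY not in rrsets
            or dns.rdatatype.RRSIG not in rrsets):
        raise RuntimeError("no signed DNSKEY RRset returned")

    # Verify the DNSKEY RRset against the keys it contains (needs
    # dnspython's crypto support, i.e. the "cryptography" package).
    # Raises dns.dnssec.ValidationFailure if the signature is bad.
    dns.dnssec.validate(rrsets[dns.rdatatype.DNSKEY],
                        rrsets[dns.rdatatype.RRSIG],
                        {zone: rrsets[dns.rdatatype.DNSKEY]})
    print("DNSKEY RRset for", zone, "validated")

A hard-fail client would extend this to walk DS/DNSKEY links all the way up to the root trust anchor and refuse to connect on any validation failure.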
Speaking as someone who most people consider a DNS expert and actually did help develop and deploy something substantially additive that is in widespread use today (DNSCrypt). ¯\_(ツ)_/¯
I'm only half joking.
Google is, it seems, very enthusiastic about things which force users to use Google Chrome, and very unenthusiastic about users doing anything easily from the command line, because the command line has the notable quality of being a place you can't show ads.
And what I note about the whole OAuth ecosystem is that you wind up having to puppet a web browser in order to get through sign-ins and the like. "Oh but you do it infrequently" says every single company implementing their own bespoke way of entering a username, password and TOTP while salivating at all that unused <div> space for ads.
The story of Ethernet is kinda interesting. Invented in 1974 at Xerox PARC; the inventors started a new company called 3Com in 1979 and worked with the IEEE, as well as DEC, Intel, and Xerox (together known as DIX - I'm not kidding), to get all of them to join forces and support one new standard. The IEEE project to standardize it started in 1980, and formal standard publication happened in 1983. International (ISO) publication was in 1989.
Business decides what becomes the new standard, because the biggest businesses are the ones everyone depends upon. So the biggest companies do set the new standards. Google has been doing that for years, using its search-market dominance and its own browser as carte blanche to shape the web as it sees fit. CloudFlare has come in the back door, and doesn't have anywhere near the same influence, but it does control a powerful market segment that is growing. Add in the cloud providers, and that's most of who actually matters in terms of where the web goes.
Where the "internet" (sans web) goes is, I think, more up to the operating systems and ISPs. But since everything has been pushed into the web to avoid the manipulation of the "middle boxes" (ISPs and corporate networks), the end result is the people who control 'the web' (Google and CloudFlare) can now dictate terms.
Some cars have issues with drivers being locked inside if the car goes into the water. That is a bug.
Is the solution to abandon locks on cars, or is the solution to fix the problem of the car doors staying locked when submerged in water? Car security systems are by now fairly advanced, and addressing their failure modes in accidents is a real engineering problem. No one, however, would sell cars without locks.
The natural progression is for hijackers to then carry buckets of water or spray cans and target the sensors that detect a water scenario.
Supplementary question: Why do so many sites these days opt for tiny font-sizes in some shade of pale-grey on white?