Yikes, couldn't disagree with that more. There are a ton of things that ipv6 designers could have done to make the transition much easier. This is a (now quite old) blog post that is my "go to" that explains a lot of the problems with ipv6: https://cr.yp.to/djbdns/ipv6mess.html
FWIW I couldn't find the link to that post until finding it on one of the comments here, https://news.ycombinator.com/item?id=33894933 . That whole thread has lots of good commentary.
I still don't understand how people can defend ipv6. I remember the "we better get ready to switch to ipv6" noise a quarter century ago when I started my career. And yet we're still talking about how v4 addresses are worth billions. Ipv6 has been an unmitigated disaster. The original architects should have "the perfect is the enemy of the good" forcibly tattooed on their foreheads.
Bernstein was certainly part of that discussion, at the later stages, and the document you link to reflected that. It was just one of many counter proposals that influenced what became IPv6.
Some people seem to suggest that Internet standards are written in some ivory tower and dropped down on the network engineers to implement. In that light, such criticism of IPv6 would be valid and important. But the IETF does not work like that. You can take part, and I can take part, and any reasonable criticism is discussed in the open. In general, practical proposals and code is taken more seriously than loose ideas.
There is no central command which decides what you or any other network operator should implement. People all over the world implement what they think is good for their networks, in order to interoperate with other networks. If anything, Internet standards can be criticized for being slow to come to fruition because of this open process. That's the price we pay.
It's not very useful to come 20 years later and re-hash the exact same discussion all over again. All counter proposals turned out to be impossible to deploy, and the consensus and running code we ended up with is what we call IPv6. A dual stack approach was the only solution practical enough to get general deployment. There are certainly problems with any protocol, and let's suggest improvements and new protocols. Just make them relevant today if they should have any chance of deployment.
I'm not complaining as if to say "yeah guys, let's stop the ipv6 rollout and do ipv10" or whatever. But I think it is useful to see why the problems of ipv6 came about. A great comment from one of the linked HN threads said "ipv6 was a product of its time", a time (I put it at mid 90s to mid 00s) when there were a ton of over-complicated, over-engineered specs that were designed by committee. Some examples:
1. XML, and all its complexities and incompatible versions (I still have some PTSD from some Java XML version incompatibilities), vs. what the industry discovered to be the much simpler and now much more widely used JSON.
2. The insanity of SOAP vs. something like REST.
3. The original Enterprise Java Beans spec. Feel like "'nuff said" is good enough here, what a nightmarish shit show that was.
Thankfully I think the industry has largely learned its lesson when it comes to valuing simple even if imperfect specs. But I still totally disagree that "ipv6 is the best we could have done".
You can criticize IPsec and mobile IP, which were tacked on to the spec. But starting from scratch, a core IPv6 stack is easier to implement than a v4 one. The TCP parts are downright nasty in comparison.
Most importantly, IPv6 was developed in the open. IETF is the counter example to design-by-committee. That's true today, and that was doubly true twenty years ago.
Every discussion since then mostly concerns resurrecting old ideas about either impractical extensions (misusing port numbers and flags to extend addressing; none of these schemes have been proven practical), more efficient address space allocations (which would have bought us a couple of years, at most) or various ways to tunnel traffic in backwards compatible ways (which is what we did back when 6bone was a thing, but which is not useful anymore).
The mailing lists are completely open. You can join any one of them today, and people did back then. You can still follow how the discussions went in the archives. Hopefully no one has suggested that IPv6 is the best we can do, only that the process still works and that anyone capable and interested is welcome to attend.
As to whether the industry's obsession with complexity has faded, I bring you Kubernetes.
I do not mean no changes at a binary level, but at an administrative level. An upgraded "A" record for example so that DNS admins could go about their day somewhere between completely and largely unaware that the protocol was undergoing transparent upgrades behind the scenes while preserving administrative compatibility with all existing configuration files, source data, and user interfaces. In such a scheme it would be quite important that there only be one "A" record from an administrative point of view, not different ones like A and AAAA. Admins need to be insulated from the binary protocol changes going on behind the scenes.
That means that the new address format would need to be compatible with the old one, and of course a routable embedding of the current address space be provided in the new address space. That means existing routing prefixes would have to be preserved indefinitely, complete with the routing table explosion that was a major challenge a couple of decades ago.
All routers would need to be upgraded over the transition period to handle a new frame type that supported current addresses, extended addresses, and a single routing table with larger extended address sizes. It is quite important that there be a single routing table, not two of them. Same configuration file, same everything from an administrative point of view for a decade, while older hardware was gradually replaced with new hardware that had extended capabilities that would be dormant on the public net.
Comparable and in some cases less transparent updates would need to be made to programming APIs, and to the Berkeley socket API in particular. Not to require programmers to do everything for two different layer 3 protocols, but rather to allow them to do it once and have it work with current and extended addresses, transparently from an administrative point of view, not doubled configuration for anything.
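The closest thing the Berkeley socket API actually grew in this direction is `getaddrinfo()` with `AF_UNSPEC`: one code path that works for whatever address families the name resolves to. A minimal sketch (the helper name `connect_any` is mine, not from any standard):

```python
import socket

def connect_any(host, port):
    # AF_UNSPEC lets the resolver return every address family that
    # exists for the name; the same loop then tries each candidate,
    # so the application code is written once, not per-protocol.
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses for %r" % host)
```

This is still address-family-aware under the hood, of course; the comment above is asking for something stronger, where the distinction never surfaces administratively at all.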
There are many other things that would need comparable binary behind the scenes upgrades that would be administratively compatible and not affect current network configuration files, and in particular not require anything to be duplicated from an administrative point of view. No one does that and no one wants to for protocol extensions that are not actually usable yet. It doubles their workload with no short term benefit, and does so indefensibly.
Only after all these extensions and capabilities had been designed, implemented, and transparently deployed across essentially the entire Internet without requiring large scale administrative intervention - a process that could easily take a decade - would the first extended addresses with non-zero bits in the extension fields actually become usable and globally routable in both directions. The entire network would be ready for it, it would be a dormant capability unused until that day arrived and requiring no large scale intervention when that day arrived either, because the silently upgraded network would remain administratively compatible with the old one.
The incentives for vendors to implement it are just not there, since the customer is not actually going to use the expanded address-space feature at all for at least a decade, so why bother implementing it and risk breaking the existing stuff in the process? With IPv6 you could at least somewhat use it right away, or at least implement it on the side, where it won't break the existing stuff. And even if you get the vendor to implement it, chances are they are going to do it the same way IPv6 got implemented initially: by routing it in software, so performance is going to absolutely suck. So for the first decade after the proposed flag day, performance is going to suck until everyone has upgraded to hardware that can handle both natively.
Next are the random boxes (firewalls or NAT boxes) that will happily mangle all your option bits in the IPv4 header for no reason. Of course while you haven't used any expanded address space everything will seem fine and might even work fine in the lab, but once your flag day arrives and people are supposed to start using it, you will realize it doesn't actually work, because of all those broken boxes in the wild and fun routing bugs and so forth.
And then you get all the regular bugs that come with making any change that were hidden by no one actually using it. You get all the phases IPv6 went through, but much worse and with a couple decades of delay.
The only way to make things work is by using it. The earlier one gets started the better. Wishing really hard does nothing.
Most of the other criticism is not relevant anymore, since we now have a lot of transition technologies that allow IPv6 clients to interoperate with IPv4 servers (this direction is possible since the IPv4 address space can be embedded in IPv6's). Overall we are now much further into the IPv6 migration than djb ever envisioned.
The only way to remain incentive compatible is to remain administratively compatible, and that is where IPv6 as presently constituted fails dramatically by requiring two independent network configurations to be maintained for the better part of a century, without giving anyone an incremental incentive to maintain the second one, leading to a hold out problem.
The public switched phone network has gone through major upgrades, and yet at no point did someone say we are going to throw out all your existing phone numbers and require you to get new ones, or require you to have two independent and incompatible phone numbers on your business cards, two phones on your desk, or a phone with a mode selection button depending on whether you wanted to call a new-style or an old-style phone number.
And that - from an administrative point of view - is the fundamental problem with the deployment of IPv6 as we know it. Dual stack now and for decades to come. Dual stack anything is not incentive compatible and should never have been done. The proper solution is single stack everything with capabilities that are dormant until they are deployed on a global level as part of the normal upgrade process in an administratively compatible fashion so that no large scale administrative intervention is required now or at any time in the future.
Other changes were made in the early days, like the introduction of dialing, then long distance dialing.
SLAAC and interface-specific link-local addresses both look good on paper but cause lots of real-life headaches.
When it works, it works; when it doesn't, you have to unlearn and relearn everything you know about networking before you can even understand the problem, let alone fix it.
Let me summarize my understanding of what he's saying, because I don't quite see why/how you disagree. I think you (or I) might be misunderstanding his claim.
Imagine this topology: C (client, IPv4-only) <=> R (intermediate router) <=> S (server)
My understanding of djb is he's saying that IPv6 could have been designed such that S could still serve C via only simple software updates -- this means, crucially, without the need for S to separately obtain a public IPv6 address through R, because its IPv4 address would be automatically valid for IPv6.
How can this work? Well, there are two scenarios:
1. If R is IPv4-only, then S could figure that out during some startup/negotiation process, and send only IPv4 packets to R. R only lets IPv4 clients connect to S anyway, so any response (even from IPv6 applications) on S must be going back to an IPv4 address. So the kernel can transparently translate those IPv6 addresses into IPv4 before passing them along to R, and vice-versa.
2. If R supports IPv6, then R can do the same thing S would've done in the previous scenario. (In fact, I think S could become IPv6-communication-only in that case, reserving IPv4 for just address leasing? I'm not sure, but in any case, I don't think that matters here.)
Notice that all of this is almost completely stateless. (I think the only state S needs to track here is 1 bit, indicating whether R supports IPv6 or not.) So, S and R can be independently and (importantly, rather trivially) upgraded to support the IPv6 protocol, without losing the ability to talk to any clients within the IPv4 address range.
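To make the statelessness concrete, here is a sketch of the translation step in scenario 1 using Python's `ipaddress` module. I'm using the standard IPv4-mapped range `::ffff:0:0/96` as a stand-in embedding; djb's actual proposal may have used a different prefix, and the function names are mine:

```python
import ipaddress

# Stand-in embedding of the whole IPv4 space inside IPv6 (assumption:
# djb's scheme would use some fixed /96; here the IPv4-mapped range).
MAPPED = ipaddress.IPv6Network("::ffff:0:0/96")

def to_v6(v4_str):
    # Embed: fixed 96-bit prefix + the 32 IPv4 address bits.
    v4 = ipaddress.IPv4Address(v4_str)
    return ipaddress.IPv6Address(int(MAPPED.network_address) | int(v4))

def to_v4(v6_addr):
    # Inverse: recover the IPv4 address iff it lies in the embedded
    # range. Pure arithmetic -- no per-connection table needed.
    if v6_addr in MAPPED:
        return ipaddress.IPv4Address(int(v6_addr) & 0xFFFFFFFF)
    return None  # a "real" IPv6 address: unreachable from IPv4-only C

assert to_v4(to_v6("192.0.2.1")) == ipaddress.IPv4Address("192.0.2.1")
```

Because both directions are pure functions of the address bits, S (or R) can translate on the fly without tracking connections, which is the 1-bit-of-state property claimed above.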
This is easy and requires no explicit leasing of IPv6 addresses. That step can be implemented and have support for it added later, whenever S is ready to serve clients beyond the IPv4 address space.
Does this make sense? If so, then it seems to show how IPv4-only clients could talk to IPv6 servers without modification. If not, then I'd love to see where I'm mistaken (I very well might be).
The inverse situation (IPv6-only client but IPv4-only server) is not really an issue: NAT64 handles it, since you can embed IPv4 addresses into IPv6.
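That embedding is exactly what deployed NAT64/DNS64 does with the well-known prefix `64:ff9b::/96` from RFC 6052. A small sketch of the synthesis step (helper name is mine):

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052).
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def nat64_synthesize(v4_str):
    # DNS64-style synthesis: put the 32 IPv4 bits in the low 32 bits
    # of the /96, producing an address the v6-only client can reach;
    # the NAT64 box strips the prefix again on the way to the server.
    v4 = ipaddress.IPv4Address(v4_str)
    return ipaddress.IPv6Address(int(WKP.network_address) | int(v4))

print(nat64_synthesize("203.0.113.7"))  # 64:ff9b::cb00:7107
```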
The only way C (IPv4-only) could communicate with S (IPv6-only) is by either allocating a dedicated IPv4 address to S (it doesn't have to be directly connected to S - it can be sent to some translation box that does SIIT) or by upgrading C to support IPv6 and tunneling it (6to4, 6rd, Teredo, etc.).
How are R and S negotiating when R cannot even name S on the network? Its stack only allows for 32-bit addresses and S can't have a 32-bit address.
That post was written 20 years ago. I would hope that the migration would be more than "much further along", I'd have hoped it had been completed, like a decade ago.
> is based on the same fundamental misunderstanding that one somehow can extend IPv4 in a way somehow, but remain compatible with IPv4-only clients
I'm not a network engineer, but I've seen loads of commentary from knowledgeable sources that it would have been quite possible to have extended the ipv4 address space without requiring 2 completely separate network stacks.
I think the simple fact that ipv6 includes so many other parts beyond just extending the address space shows what a foolish endeavor it was in the first place. I'm not saying the other bits aren't good ideas, but the only immovable factor that has people wringing their hands about ipv4 is the address limitation. If they had just focused on that, we probably wouldn't be in a situation where we're still running 2 network stacks virtually everywhere, and will be for the foreseeable future. The famous XKCD "Standards" meme says it best: https://xkcd.com/927/
It may sound like a great plan as long as one doesn't look too closely at the details. IPv4 has fixed 32-bit addresses, and one cannot cram more than 32 bits of information into a fixed 32-bit field. But one would need to do exactly that for it to be forward compatible: how would an IPv4-only client open communications with an expanded-address-space server?
One idea is to only upgrade the client and server and tunnel the expanded address space packets over IPv4. IPv6 has that - that's how it was bootstrapped before native IPv6 connectivity was a thing.
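6to4 (RFC 3056) is the classic example of that bootstrapping trick: a host's public IPv4 address is baked into its IPv6 prefix, so tunnel endpoints can be derived arithmetically, with no coordination beyond upgrading the two ends. A sketch of the derivation (helper name is mine):

```python
import ipaddress

def six_to_four(v4_str):
    # RFC 3056: a host with public IPv4 address V owns the whole
    # prefix 2002:VVVV:VVVV::/48; any packet to 2002::/16 is
    # encapsulated in plain IPv4 and sent to V.
    v4 = int(ipaddress.IPv4Address(v4_str))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(six_to_four("192.0.2.1"))  # 2002:c000:201::/48

# The stdlib can recover the tunnel endpoint from any such address:
print(ipaddress.IPv6Address("2002:c000:201::1").sixtofour)  # 192.0.2.1
```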
https://www.google.com/intl/en/ipv6/statistics.html
Yes, the IPv6 migration has taken much longer than anyone expected. But this argument would have made more sense in 2015, when we were looking at 5% IPv6 deployment and very erratic growth. It isn't 2015: roughly 10% of the market has gained IPv6 support in each of the last 3 years, and we are now at 45%. Now granted, this is likely to be largely "new" devices, e.g. in mobile networks and in countries like India where these were hidden behind CGNAT before. But these are exactly the type of devices that an IPv4 extension header couldn't have reached either.
It's a dumb post, to the point I think it must be a deliberate troll. The parts that are possible don't solve any relevant problems ("my new protocol would allow computers that already have public IPv4 addresses to talk to each other" is not a point in favour of your new protocol), and the parts that solve relevant problems aren't possible.
Lol, I'd like to send this to DJ Bernstein, let him know that a random Internet commenter thinks that one of his most well-known essays "must be a deliberate troll." Glad HN doesn't support emojis, not enough facepalms in the world for this one.
For example his Qmail was conceptually a very well designed email server but the email standards kept evolving and I'm fairly sure at some point he just said "Qmail is feature complete and secure, no more new features and patches". Like, what? It's networked software, that's not how any of this works.
Who's rejecting the good enough in favor of the perfect, now? This thread is full of "IPv6 is not perfect, so we must reject it".