Look at the history of everything that was acquired by either Qwest/CenturyLink or Level3, and then the CenturyLink-Level3 merger itself. You can't tell me that the existence of Lumen, the combined CenturyLink-Level3 entity, is good for anyone except their shareholders.
It's the very definition of too much centralization.
Look at all of the things that have now been jammed together into the modern Verizon, as well.
Look at the sad state of competition in Canada, with Rogers and Shaw trying to merge.
When 90%+ of people have no upload capacity at their house and are behind layers of CGNAT, OneDrive/iCloud/Google Drive become the only solution for storing things you can access later. Same with chat protocols.
BT Retail (the arm that sells to clients) has strict rules that forbid it from sharing with BT Wholesale (the arm that sells to the other ISPs), so BT Retail really can't crush the competitors. I don't know the exact arrangement, but I believe they're treated like any other ISP customer.
BT doesn't own our wires as such. OpenReach does (yes, they were formerly BT but were spun off).
BT or OpenReach - who cares? The important thing is functionality. I'd like to provide you with a novel telephony setup but the lack of ENUM means I am not able to do that.
Wireless is only radically different from a consumer perspective, and the capacity is nothing to write home about. It's the coverage and latency that bring value.
Implementing them takes time and you need many implementations for the protocols themselves to become de-centralized.
This is what breaks most new protocols and languages, combined with diminishing returns (the low-hanging fruit has already been harvested).
Personally I'm going back to HTTP, DNS and SMTP.
And even if DNS is completely centralized, it's the only thing we have for name lookups after 38 years!
Also I never rely on DNS if I can avoid it (I use static IPs and only use the hostname for virtual hosting / load balancing).
And de-centralization by hosting is more important than the protocol itself being p2p, since no p2p protocol can operate entirely without servers, because of discovery.
I have made my own from-scratch implementations of all 3:
DNS and SMTP are soon coming to rupy (HTTP), enabling DNS and SMTP through HTTP; you'll basically be able to control DNS and SMTP via a "Servlet/Filter".
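To make that concrete, here is a minimal sketch of what "controlling DNS via HTTP" could look like. It uses the standard javax.servlet API rather than rupy's actual one, and the endpoint shape, parameter names, and record store are all invented for illustration:

    // Hypothetical sketch: an HTTP endpoint that updates A records in an
    // in-memory zone that a co-hosted DNS responder would read from.
    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DnsRecordServlet extends HttpServlet {
        // name -> IPv4 address; a real setup would persist this and serve it on port 53
        static final ConcurrentHashMap<String, String> RECORDS = new ConcurrentHashMap<>();

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String name = req.getParameter("name"); // e.g. "host.example.com"
            String addr = req.getParameter("a");    // e.g. "203.0.113.7"
            if (name == null || addr == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "need name and a");
                return;
            }
            RECORDS.put(name.toLowerCase(), addr);
            resp.getWriter().println("OK " + name + " -> " + addr);
        }
    }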
Home hosting on fiber with static IP and ports 80, 53 and 25 open is the real challenge.
Making sure your ISP enables those has way higher priority than this document!
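If you want a quick first check before arguing with your ISP, something like this (my sketch, not from the comment above) at least tells you whether you can bind the ports locally. Actual inbound reachability still has to be probed from outside, e.g. from a VPS or a friend's machine; note that DNS on 53 is normally UDP as well, and binding ports below 1024 needs root:

    import java.net.ServerSocket;

    public class PortBindCheck {
        public static void main(String[] args) {
            for (int port : new int[] {80, 53, 25}) {
                // Binding only proves the OS lets you listen; it says nothing
                // about whether the ISP routes inbound traffic to you.
                try (ServerSocket s = new ServerSocket(port)) {
                    System.out.println("port " + port + ": bindable");
                } catch (Exception e) {
                    System.out.println("port " + port + ": " + e.getMessage());
                }
            }
        }
    }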
And the real canary is when you don't get an external IP on your fiber when IPv4 allocations in Africa run out.
It's time to wake up if we want an internet that does not become rent seeking.
Google charges for static IP addresses on GCP, which should not be a thing when they get allocations for free.
IPv4 is a scarce asset, so they have an incentive to slow down IPv6!
Same as in AWS (IIRC), in Google Cloud you don't get billed for static IP addresses if they are in use:
"If you reserve a static external IP address and do not assign it to a resource such as a VM instance or a forwarding rule, you are charged at a higher rate than for static and ephemeral external IP addresses that are in use.
You are not charged for static external IP addresses that are assigned to forwarding rules."
External IP Charge on a Standard VM | Usage 2021-11-01 to 2021-11-30 | 2,XXX.XXX hours | 50.XXXXX SEK
So ~$2 per IP in USE per month!
>gcloud compute addresses list
ERROR: (gcloud.compute.addresses.list) Some requests did not succeed:
- Request had insufficient authentication scopes.
We need to move away from these companies before it's too late... they are incompetent and rent-seeking.
They also do not reinstate your free-instance rebate if you shut the instance down and change its instance type, even if you change it back.
And there is no recourse, no support, no way to get help.
Depending on what you mean, it's not the only thing, or it's not 100% centralized; https://en.wikipedia.org/wiki/Alternative_DNS_root lists the major alternatives in that immediate space.
The problem here is the word "Always". Encryption is good for just the reasons they say. But only encryption, always encryption, not having an option for plain text is highly centralizing in itself. This is because the current status quo for encryption is to use TLS based on certificate authorities. And CAs are always highly centralized and highly centralizing.
If Let's Encrypt ever goes corrupt like dot Org did, it would cause an incredible amount of trouble, and that entity would have power over a large portion of the web, if not the entire internet. There's an easy solution to this, though: don't throw away plain protocols. Plain and TLS-wrapped are synergistic. Use both. There's no need to always encrypt without an option for plain text, and doing so is damaging.
A hypothetical downgrade attack is not an excuse for using only highly centralized TLS CA based protocols in this context.
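The "use both" idea is cheap to implement: one service, two listeners, and the client picks. A minimal sketch, assuming a keystore is already configured via the standard javax.net.ssl.keyStore system properties (ports are arbitrary):

    import java.net.ServerSocket;
    import javax.net.ssl.SSLServerSocketFactory;

    public class DualListener {
        public static void main(String[] args) throws Exception {
            ServerSocket plain = new ServerSocket(8080); // plaintext, no CA involved
            ServerSocket tls = SSLServerSocketFactory.getDefault().createServerSocket(8443);
            System.out.println("listening plain on 8080, TLS on 8443");
            // accept() loops feeding a shared handler would go here; omitted for brevity
        }
    }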
Not everything has to be TLS or even HTTP. Look at messaging apps. Signal is encrypted, but the end-to-end encryption it uses isn't TLS and doesn't use certificate authorities.
> If Let's Encrypt ever goes corrupt like dot Org did, it would cause an incredible amount of trouble, and that entity would have power over a large portion of the web, if not the entire internet.
Not really. Let's Encrypt doesn't have a monopoly over anything. They use an open protocol (ACME) that any other CA could implement. If they went evil, someone else would implement the same protocol and everybody would switch to them. Which also implies that they won't, because why bother if that's what will happen?
This is kind of a problem with the CA system the other way -- if you have one bad CA they can sign any domain even if they shouldn't -- but in this case it prevents what you're worried about.
This is why certificate transparency is a thing and most browsers require it for public internet domains[0,1].
0: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
For reference, many CAs (even paid ones) have implemented it (ACME):
Digicert https://docs.digicert.com/certificate-tools/Certificate-life...
Sectigo (formerly Comodo) https://sectigo.com/resource-library/sectigo-adds-acme-proto...
Technically it is so captured -- IANA is the root of the hierarchy that distributes both IP address assignments and ASN assignments -- and the RIRs are effectively centralized authorities in their regions. Thus far it has not been a problem.
With a larger address space and longer ASNs you could decentralize the entire process. Basically, subnets and ASNs would be hashes of public keys, and you would use a path-vector protocol where the NLRIs contain NIZKs proving knowledge of the secret keys and asserting who the NLRI was sent to at each hop (identified by ASN). It is not currently being considered because (1) it would greatly increase the cost of routers and related infrastructure, and (2) thus far there is no immediate need.
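A toy illustration of the first half of that idea (my reading of it, not a spec): derive the ASN from a hash of a public key, so that ownership can be proven by signing with the matching private key instead of being granted by a registry. The 64-bit ASN width is arbitrary, standing in for the "longer ASNs" above; needs Java 15+ for Ed25519:

    import java.nio.ByteBuffer;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;

    public class HashAsn {
        public static void main(String[] args) throws Exception {
            KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(kp.getPublic().getEncoded());
            long asn = ByteBuffer.wrap(digest).getLong(); // first 8 bytes of the hash
            System.out.printf("ASN: %d (0x%016x)%n", asn, asn);
            // A route announcement would then carry a proof of knowledge of the
            // private key (the NIZK part), verifiable by anyone from the ASN alone.
        }
    }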
> "[A]ny decentralized order requires a centralized substrate, and the more decentralized the approach is the more important it is that you can count on the underlying system."
This somewhat counterintuitive notion is often overlooked. In order to facilitate a healthy decentralization effort you need a heck of a collaborative movement to make it a reality and sustain the initiative.
[0] https://www.thediff.co/p/the-promise-and-paradox-of-decentra...
It does not avoid government control. Even ignoring local legalities, at the very least it will be under the physical control of whichever country hosts the nearest ground-segment station.
https://github.com/fiatjaf/nostr
Kinda like the "fediverse", but improved in the sense that it is neither federated nor P2P (because pure p2p doesn't scale).
It is better to design systems to handle centralization than with the assumption that they will remain decentralized, which would sort of break them.
If it seems like everything that we build is centralized, it might just be that we're bad at building things that last.
2021: AWS us-east-1 is down, which means my coffee maker doesn't work
1970: Early networks suffered from congestive collapse; routing protocols were slow to converge, computed suboptimal routes, and had count-to-infinity problems; only a handful of transit networks existed; domain names were managed by one dude broadcasting a file to everyone; there was little to no security infrastructure; etc.
2021: We have robust congestion control and queue management, scalable routing protocols that find optimal routes and have no count-to-infinity problems, DNS, large numbers of transit networks with a high level of redundancy, and at least some security infrastructure in key places (DNSSEC, RPKI, etc.).
Don't confuse web infrastructure and hosting services with the Internet itself, which is the network and which has never been more distributed or more robust than it is today.
While Wire and Matrix are working on a decentralized version, the IETF is, unfortunately, working towards one based on a central entity.
Source:
https://news.ycombinator.com/item?id=25102916
https://matrix.org/blog/2021/06/25/this-week-in-matrix-2021-...
On the Matrix side we’re working on fully decentralising it (as per https://matrix.uhoreg.ca/mls/ordering.html). There’s also a cool similar project from Matthew Weidner: https://dl.acm.org/doi/10.1145/3460120.3484542
It’s a bit perplexing that mnot’s draft cites XMPP as decentralised, given MUCs are very much centralised to a single provider which entirely controls that conversation, and if that provider goes down the conversation is dead. But I guess that’s because XMPP is submitted to the IETF, and Matrix isn’t yet.
No, nothing makes a centralized DNS system unavoidable.
Suppose the root is a set of public keys, each with a top level domain. Adding one requires a supermajority of the others to agree. Removing one is impossible; it can sign its own successor and that's it. You now have a federated system with no single chokepoint.
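A sketch of that acceptance rule, with all types and the signing format invented for illustration (Ed25519 needs Java 15+): a candidate TLD key is admitted only if strictly more than 2/3 of the existing root keys have signed it.

    import java.security.PublicKey;
    import java.security.Signature;
    import java.util.Map;

    public class FederatedRoot {
        // existing root: TLD name -> that TLD's public key
        static boolean acceptNewTld(Map<String, PublicKey> root,
                                    byte[] newTldKeyBytes,
                                    Map<String, byte[]> endorsements) throws Exception {
            int valid = 0;
            for (Map.Entry<String, PublicKey> e : root.entrySet()) {
                byte[] sig = endorsements.get(e.getKey());
                if (sig == null) continue;           // this TLD didn't endorse
                Signature v = Signature.getInstance("Ed25519");
                v.initVerify(e.getValue());
                v.update(newTldKeyBytes);            // each endorser signs the candidate key
                if (v.verify(sig)) valid++;
            }
            return valid * 3 > root.size() * 2;      // strict 2/3 supermajority
        }
    }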
Compare with systems that don't revolve around making any coherent global view, like Petnames. In the context of Zooko's triangle: do the human-readable name lookup once, as part of a manual process, and then persist the relationship as decentralized/secure but not human-readable.
All voting systems require protection against Sybil attacks. The best methods to protect against Sybil attacks are centralized. The not-best methods use proof of work, which has extreme downsides and only makes Sybil attacks expensive, not impossible.
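To make "expensive, not impossible" concrete, here is a toy proof-of-work admission ticket: each identity must find a nonce whose hash has N leading zero bits, so minting k Sybil identities costs roughly k times the work. The difficulty value is arbitrary:

    import java.security.MessageDigest;

    public class SybilCost {
        public static void main(String[] args) throws Exception {
            String identity = "some-public-key";
            int difficultyBits = 20; // ~1M hash attempts on average
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            long nonce = 0;
            while (true) {
                byte[] h = sha.digest((identity + ":" + nonce).getBytes("UTF-8"));
                if (leadingZeroBits(h) >= difficultyBits) break;
                nonce++;
            }
            System.out.println("admission ticket for " + identity + ": nonce=" + nonce);
        }

        static int leadingZeroBits(byte[] h) {
            int bits = 0;
            for (byte b : h) {
                if (b == 0) { bits += 8; continue; }
                return bits + Integer.numberOfLeadingZeros(b & 0xff) - 24;
            }
            return bits;
        }
    }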
There isn't a single world-wide authority that assigns names to people or companies, or plate numbers to cars or airplanes. It's partially federated and we accept the tradeoffs.