We were kids back then, and those attacking us were kids too, working with just a 5-10 USD budget. Yes, they were relatively small attacks (ranging from 10-60 Gbps) compared to the Tbps attacks hitting some companies, but good god it was annoying that all it took was 5 USD from some idiot to take down your server.
We moved to GCP and got null-routed (or had network bandwidth to the node under attack throttled) every time there was an attack. We bought Azure's 3,000 USD a month anti-DDoS protection, which was worthless for a TCP/UDP service. We tried a network load balancer in the cloud that auto-scaled, but some players still got affected when an attack came in.
Finally we moved over to OVH, placed a few really powerful servers in front of the game server, and applied some ipfilter rules to reduce common attacks. That ended up being the cheapest of all the options. When you have a very small community, it's not like you have the biggest budget to work with. But it was really fun and taught all of us a lot. Looking back it's kinda sad we had to end things. But it was a lot of fun.
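Those filter rules were mostly per-source rate limits. As a rough illustration of the idea only (not our actual ruleset, and the rate/burst numbers are made up), the core logic is a token bucket per source IP:

```python
import time

class TokenBucket:
    """Per-source rate limiter: the idea behind simple ipfilter/iptables
    rate-limit rules. Parameters here are hypothetical."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # maximum tokens (allowed burst size)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # over the limit: drop the packet

# one bucket per source IP
buckets: dict[str, TokenBucket] = {}

def accept_packet(src_ip: str, rate: float = 10.0, burst: int = 20) -> bool:
    bucket = buckets.setdefault(src_ip, TokenBucket(rate, burst))
    return bucket.allow()
```

Real kernel filters do the same thing far faster, of course; the hard part is still having enough upstream bandwidth so the flood even reaches the box doing the filtering.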
DDoS attacks are one of those things that really make me worried about the future of the internet. The only way to win is to throw money at it and cross your fingers that the attacker runs out of resources before you do.
Companies like Cloudflare definitely do an incredibly good job of stopping some insanely big attacks when it comes to HTTP/HTTPS (I recently saw they now support UDP- and TCP-based services too; I've never tried it).
But one thing that's weird is having to rely on some third-party company. Yes, Cloudflare has so far been a company I can trust, but I once loved and trusted a company that said "Don't be evil".
If you are a developer for some IoT device manufacturer, please do your best to make sure someone won't turn your light bulb into part of a botnet. When you guys fuck up, the rest of us have to suffer.
http://www.paulgraham.com/marginal.html
>> Finally we moved over to OVH and placed a few really powerful servers in front of the game server and applied some ipfilter rules to reduce common attacks. That ended up being the cheapest option out of all the options
The cheaper attacks seem to be at a level where machine learning could counter them. Raising the bar for inexpensive attacks would be a huge boon to the internet and human progress. It wouldn't be that expensive to fund, either.
We used to run a game server for a small community of around 400-500 people, and DDoS attacks were something we had to face almost every week. Whenever someone got upset with the admin team, the go-to solution was to DDoS. You get scammed by another player? DDoS. Got banned for saying racist things in-game? DDoS. You figured out a new way to cheat in game and the admins fixed it? DDoS.
I wonder if this sort of thing could be honeypotted? Give perpetrators a way to figure out and target a fake "edge server" of a particular user? (Which only affects about 5% of your user base, let's say.) However, that "edge server" is actually a honeypot that gathers data on the attack, and correlates that to support emails to the admin team, or flame wars in the game's forums.
This is the kind of suckage that holds back the entire network, but which can ultimately be defeated:
What's hard is paying for 100s of gigabits of bandwidth, 24x7, so the incoming packet flood doesn't crowd out the good traffic before it gets to your filtering box.
Basically the only solution there is centralization. Cloudflare can afford to buy 1000s of times more bandwidth than any one of its customers needs, because it has (much more than) 1000s of customers.
I don't think it uses any of the techniques currently considered central to machine learning, but if it works well or catches on, it could be a good place to see how useful those would be.
One method could be to anycast the domain to a bunch of edge servers which all relay traffic to the actual server.
DNS queries for the domain return the closest edge; if that edge gets attacked, the other edge servers can still route.
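The DNS-side failover can be sketched like this (hypothetical data shapes and health-check callback; real anycast failover happens in routing, with DNS just handing out the nearest healthy edge):

```python
def pick_edge(edges, is_healthy):
    """Answer a DNS query with the lowest-latency edge that is still up.

    `edges` is a list of {"name": ..., "latency_ms": ...} dicts and
    `is_healthy` is a health-check callback -- both assumptions of this sketch.
    """
    for edge in sorted(edges, key=lambda e: e["latency_ms"]):
        if is_healthy(edge["name"]):
            return edge["name"]
    return None  # every edge is down (or under attack)

# Example: if the nearest edge is being attacked, queries fail over
# to the next-nearest one while the origin stays hidden behind the relays.
edges = [{"name": "eu-edge", "latency_ms": 20},
         {"name": "us-edge", "latency_ms": 80}]
```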
I think that's historically false. At the time, "Don't be evil" seemed like, more than anything, an acknowledgement that Google wanted to have a corporate culture that was different from Microsoft, which at the time was the 800 pound gorilla in tech and was widely seen as being "evil" (I may be dating myself, but does anyone else remember the Bill Gates/Borg avatar that was the standard for Microsoft stories on Slashdot back in the day?) Google was founded in 1998, right when the US v Microsoft antitrust suit was filed.
One could certainly argue Google now engages in some of the monopolistic tactics that originally got Microsoft in hot water (with MS it was "everything is part of the OS"; with Google it's "everything can be part of the search results page"), but I think you're reading too much into what was originally behind the "Don't be evil" slogan.
That's overthinking it. "Don't be evil" is just the kind of slogan that could emerge in the '90s, when it became clear that good and bad were not linked to a specific organizational form or trait - you could have bad capitalism and good collectivism as well as the other way around. There was a feeling that "big business" was bad but "medium business" could be a force for good, you only had to stay decent and things would work out. And of course the 'net would have rejected any clipper-chip and not replicate the historical corruption of the real world.
Those were very naive times, in retrospect, but I don't blame the original googlers for believing in a simplistic view of the world. I blame Eric Schmidt and his sponsors for hiding their evil behind that line. Modern Google is basically all Schmidt.
CF still requires an Enterprise contract for proxying arbitrary traffic via Spectrum, likely because of the abuse-prevention aspect. SSH and Minecraft are otherwise offered at pay-as-you-go rates, but a lot of people have complained about how expensive it is:
https://community.cloudflare.com/t/what-do-you-think-about-t...
I mean, I guess it's a compelling use case for some customers. Still, it's a weird outlier.
Also there was another kind of attack where you would start thousands of bot clients at once that would spam messages. The hope was that you would (a) shut down the server, or (b) attract the players to the server your bots were advertising.
As for the DDoS tools: after writing the parent comment I did a quick DDG search, and you can still find several websites advertising DDoS-for-hire services. Some I recognize from back then.
On a side note, doing an nslookup shows some of these sites are behind Cloudflare, haha.
>> Also there was another kind of attack where you would start thousands of bot clients at once that would spam messages. The hope was that you would (a) shut down the server, or (b) attract the players to the server your bots were advertising
Oh man.. some people..
Is there a lawyer here who can comment on whether the manufacturers of these horrid devices have any civil liability, either currently or possibly in the future?
My gut tells me the only way this will get better is for there to be rules of negligence applied to the realm of computer security.
I have used Cloudflare Spectrum to prevent attacks. It does work incredibly well but the cost is significant.
As for third-party companies, I do hate to rely on Cloudflare for this. It is by far the worst business relationship I have ever been in, yet we found no good alternatives.
Gaming tends to attract a population that is tech-savvy (means), competitive (motive), and has copious leisure time (opportunity). Combine those three things and you have the kindling.
The spark, I think, is due to the fact that the crowd was historically quite young. That means three things. First, impulsive. Second, nothing/less to lose (someone with real assets they Worked Hard For wouldn't Risk It All over an in-game spat). Third, might not've learned how to handle competition in a healthy way.
A DDoS attack is a crime, but the sort that most law enforcement don't really care about at least in the context of a small-time game server. It's kind of the modern equivalent of knocking down mailboxes or shooting out traffic signs with a shotgun. Both things that cause actual damage that costs actual money, but which teenage males have been doing probably since the advent of mailboxes and shotguns.
Huh, I've only seen that with VPS hosters and thought it was related to game servers causing high CPU load on shared resources.
Like IRC back in the day :P
edit: nvm.
https://maidsafe.net is the best project to come out of the "Web3" space. If you've heard of Freenet, this is like Freenet 2.0.
PS: Why the massive silent downvotes? This platform actually solves this problem and many others that HN constantly (and correctly) complains about. But when it's posted, you prefer to ignore it. (Disclaimer: I am not affiliated with them in any way. In some ways they are a competitor to Qbix and Intercoin, but I give credit where it is due.)
Of all the hosting providers I have used, they are the only ones who don't null-route you as soon as you get attacked, and considering the cost of their service, OVH is a real lifesaver when you really need the help.
----
Just realized this sounds like an advert for OVH, lol I have no affiliation with them whatsoever, just a really happy old customer.
Let's Encrypt asks for at most 1 request per day per certificate: https://letsencrypt.org/docs/integration-guide/
or
"domain xyz's certificate is expiring. If we pay for a ddos, their site won't be able to renew and (customers wont go to the site due to expired cert/API people use wont work/we can take advantage of a compromised cert longer)"
Just some possible but implausible scenarios.
That's why it's so important not to wait until the end of the 90-day expiration period, but to renew every other week or so.
Or, those admins can switch to zerossl.com until the DDoS ends (you basically just need to change the domain in certbot).
Making the DDoS wasted money.
[edit] Unless the attackers identified a bug in certbot (the commonly used auto-renewal tool), e.g. what happens when LE is unavailable at the moment auto-renew is triggered? You'd hope it would retry periodically until LE is restored, but perhaps not. If not, you could time the DDoS just right to ensure a specific cert does not get renewed even after the DDoS stops; then maybe a couple of weeks later it would expire... But that relies on such a bug existing and the site owners not noticing it (LE will also email the registered address eventually, regardless of auto-renewal scripts), so maybe this is too much of a stretch.
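The retry behaviour you'd hope for is simple to sketch (illustrative only, not certbot's actual code):

```python
import time

def renew_with_retry(renew, max_attempts: int = 5, base_delay: float = 3600.0) -> bool:
    """Call a renewal function, retrying with exponential backoff when
    the CA is unreachable. Returns True once renewal succeeds."""
    for attempt in range(max_attempts):
        try:
            renew()  # e.g. an ACME order; raises if the CA is down
            return True
        except Exception:
            time.sleep(base_delay * 2 ** attempt)
    return False  # give up for now; a daily cron job would try again
```

With a 90-day lifetime and renewal attempted at 30 days remaining, a daily cron plus this kind of retry means a DDoS would have to keep the CA down for weeks before any properly configured cert actually expired.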
Just last week our shared hosting provider was attacked and the attacker tried to brute-force their way into a management API. I cannot imagine another reason than "fun" and "just because we can", because there's nothing to get [besides money after encrypting all data].
So I think the attacker just attacks LE because it's on the internet and they can.
Or am I missing something?
I see in their status page that OCSP endpoints are also impacted. There could be any number of motivations including interfering with someone's ability to check if a certificate has been revoked.
Switching to a new provider in case LetsEncrypt goes down is as simple as updating your scripts.
All "usable" HTTPS depends on certs, right? And "usable" certs require a domain, right? And that cert for that domain needs to have been generated by a CA, right? But it's tied to a domain, and IP space. You have to prove to a CA that you both control a domain record and some IP space it points to. Nobody has designed anything to straightforwardly prove that in an unhackable way. We have shitty hacks, like "serve this unique file on this web server that this domain record is pointing to", or "answer an e-mail on one of 20 addresses at this domain", etc.
But none of those address what we actually want to do, which is just to prove that we own/control a domain record. That's the only meaningful thing in having a cert: proving that you actually own the domain record this cert is assigned to. And we have no actual way to do this. Literally the only way to prove definitively that you own a domain is to talk to the registrar, and the only way to prove that you control a domain record is to talk to the nameserver that the registrar is pointing to. The former we don't handle at all, and the latter is highly susceptible to various attacks.
You could remove the reliance on CAs entirely with a different model. You tie a private key to domain ownership, and a private key to a domain record. Then you only have to trust registrars' keys/certs, and you can walk backward along a cryptographically-signed web of trust. Your browser trusts the registrar's key X. The registrar signs your domain key Y. The domain key Y signs a domain record key Z. Your web server generates a cert using domain key Z.
For a client to verify the web server cert, they verify it was created by key Z, and verify that key Z was signed by key Y, and that key Y was signed by key X. Then any webserver can generate its own cert for any domain record, we don't need CAs to generate certs, and we have a solid web of trust that goes back to the actual owner of the domain, but also allows split trust via the domain owner assigning keys to domain records.
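A toy version of that chain walk, using HMAC as a stdlib stand-in for real asymmetric signatures (in practice X, Y, and Z would be public/private key pairs and only the public halves would be distributed; the key names follow the comment above):

```python
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    # HMAC stands in for a real signature scheme in this sketch
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, payload), sig)

# X = registrar key, Y = domain-owner key, Z = domain-record key
X, Y, Z = b"registrar-key", b"owner-key", b"record-key"
sig_Y = sign(X, Y)          # registrar signs the domain owner's key
sig_Z = sign(Y, Z)          # owner signs the domain-record key
cert = b"tls-cert"
sig_cert = sign(Z, cert)    # web server's self-generated cert, signed by Z

def verify_chain() -> bool:
    """Walk the chain back to the registrar key the client already trusts."""
    return (verify(Z, cert, sig_cert)
            and verify(Y, Z, sig_Z)
            and verify(X, Y, sig_Y))
```

Tampering with any link (a forged owner key, a substituted record key) breaks the walk, which is the whole point: trust bottoms out at the registrar rather than at hundreds of CAs.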
This is in fact such a well-understood problem that it has a name and a Wikipedia entry: "bus factor". According to Wikipedia:
> The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel
As for proving that you own a domain, I think the DNS-01 challenge that is used to grant wildcard certificates is a pretty good approximation: if you can create and update TXT records in the domain's zone, you have at least functionally "owned" the domain, even if you don't legally own it.
Afaik, the LE team is distributed across the globe.
> If somebody took them out all at once, would the web's security essentially crumble?
No, there are other CAs, both free and paid.
> We have shitty hacks, like "serve this unique file on this web server that this domain record is pointing to", or "answer an e-mail on one of 20 addresses at this domain", etc.
Yes, but we also have certificate transparency. You can monitor all certificates issued to your domains and revoke them if needed. Not perfect but imo still reasonably safe considering you know that all the issued certs are on your servers.
> You tie a private key to domain ownership, and a private key to a domain record. Then you only have to trust registrars' keys/certs, and you can walk backward along a cryptographically-signed web of trust.
That exists and is called DNSSEC. If you haven't heard of it, you already understand: it isn't widely used. Also, it would require a major rethinking of how we use the internet. Most clients do not validate DNSSEC; only public and maybe ISP resolvers do, and they can (and probably will) tamper with the DNSSEC answers if it lets them better spy on and MITM you.
> Your browser trusts the registrar's key X
Sure, we could do it in browsers, but the internet is wider than the web, and we would need to rewrite a great part of what we use every day (not saying that we can't or shouldn't).
In the meantime, if you use a DNSSEC-compatible TLD and registrar, you can already sign your zones. That way, the current CAs will be able to cryptographically verify that the server asking for a cert also owns the domain/subdomain.
Right. Because of the hundreds of millions of domains out there, every one of them is monitoring the CT logs for their domains....? And once someone does create a false cert, by the time you find out about it, the cyber criminals have already hauled away a bank transfer or personal data, etc.
CT isn't security, it's a broken window.
> That exists and is called DNSSEC.
Every time I propose this, somebody equates it to something else (DNSSEC, DANE, etc), but what I'm proposing intentionally avoids those designs' pitfalls. I'm saying we need a brand new design that does not piggy-back on existing solutions.
> Also, it would require major rethinking of how we use the internet.
It would require rethinking of the workflows between registrars, domain owners, nameservers, and webservers. But in theory, browsers would work exactly the same; they'd just trade their ca-certificates for registrar-certificates. Validating the full chain of certs that they already do should be the same.
LetsEncrypt is great and I am really glad someone stepped up to create a mostly-not-evil non-profit cert authority. But everyone using LE is very bad for the health of the internet. It provides nearly a single point of failure for government/political interference, technical failure, and failure due to corruption from money and scale internally.
Putting certs in DNS with DNSSEC authenticating them might be a more robust design overall, and would eliminate a lot of what is bad about HTTPS-everywhere (namely that LE trusts DNS to begin with, so doesn't add much to the web of trust, and that certificate issuance would be much more straightforward and automated from your TLD).
Unfortunately I have to disagree with you about the end of HTTP. ISPs have historically proven that they can't be trusted (NXDOMAIN interception, ad replacement/injection, DPI), and so for a non-negligible fraction of the world, HTTPS (and DNSSEC or similar, although not enough people realize it yet) is a necessity.
I don't see alternative options except perhaps onion routing everywhere, but that only moves the goalposts to exit nodes without HTTPS and a PKI.
Another possibility for securing the existing PKI is to extend support for Name Constraints, so that root CAs are only given authority to issue for subsets of domains, and to finally make TLS trust only the most specific root CA for a given domain. E.g., if a TLS implementation has a trusted root CA with a Name Constraint of .example.com, then it should not accept a certificate chain for anything under example.com from another root CA; and vice versa, that root CA could not sign certificates for domains not under example.com. This would allow sites with high security needs to get their own CAs accepted by browsers, and would allow breaking root CAs up by TLD, which would match DNSSEC.
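For reference, a name-constrained CA certificate can already be expressed today; in an openssl.cnf extension section it looks something like this (the section name is arbitrary, and browser-side enforcement of "most specific root wins" is the missing piece):

```ini
# X.509v3 extensions for a CA that may only issue under example.com
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:example.com
```

Per RFC 5280, a DNS name constraint of example.com covers that host and all of its subdomains; anything else in a chain below this CA should be rejected.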
You can say this when ISPs stop MITMing ads into documents served over HTTP.
I'd be interested to hear more about this, care to elaborate?
Cloudflare though, we should all talk about more...