re: chat ops vs a web page. It's just a single BGP advertisement -- big whoop. Chatops is just hipster famous right now.
"Hey Jim, Did you go adjust that thing?"
vs.
"10:15am [JIM] /chatbot adjust x to y"
In my day, such back-end services were either simply not connected to the Internet (connected via a private network to the application services), firewalled, or at the very least, configured to listen for and respond exclusively to connections from known front-end or application services.
Is this sort of deployment architecture falling out of favor? My casual observation is that cloud architectures—at least the ones I've seen employed by small organizations—are more comfortable than I am with services running with public IPs. What is going on? Am I misunderstanding this in some way?
When it's easier to just open up a server to the wide world than it is to learn how to connect safely, you'll always get a lot of people doing it.
(I’ll be here all week)
Those are the excuses I dealt with when I took over the current IT department. By now, only haproxy accepts public connections. Everything else is firewalled to the office at most.
Combine this with staying on top of vulnerabilities, and that's really all you can hope for from a host standpoint. What is changing is that the days of pure perimeter defense are numbered. The Zero Trust model is really the best path forward, and the only way to implement security for the IoT.[1][2]
[1]https://www.youtube.com/watch?v=k80jOH2H10U [2]https://www.safaribooksonline.com/library/view/zero-trust-ne...
But when I read that he had found a public facing Jenkins server owned by Google, I figured I must be missing something.
I run a 2 man shop, but I still keep things like Jenkins behind OpenVPN. Why would anyone leave Jenkins open? There must be a reason, right?
https://emtunc.org/blog/01/2018/research-misconfigured-jenki... [0]
To answer your second question: I work for an open-source non-profit software company, and we keep some of our Jenkins servers, which run continuous integration builds, publicly accessible so that community contributors and users can see build failures. Google has a number of open-source projects that probably have similar goals.
A quick Shodan search[1] shows around 90k boxes running publicly accessible memcached. Misconfiguration of firewalls is a serious problem.
I also wonder if you can store something in a memcached cache that looks like a valid request, then reflect that with the source IP of another memcached server and let them burn each other out...
https://github.com/memcached/memcached/commit/dbb7a8af90054b...
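The commit above disables UDP by default; for boxes already deployed, you can check your own server's exposure yourself. A minimal sketch, assuming the standard memcached UDP framing (an 8-byte header of request id, sequence number, datagram count, and a reserved field, followed by the ASCII command) — the host and port here are placeholders:

```python
import socket
import struct

def memcached_udp_frame(command: bytes, request_id: int = 0) -> bytes:
    """Build a memcached UDP datagram: 8-byte frame header
    (request id, sequence number, total datagrams, reserved)
    followed by the ASCII text command."""
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + command

def is_udp_exposed(host: str, port: int = 11211, timeout: float = 2.0) -> bool:
    """Return True if the host answers a harmless 'version' query over UDP,
    i.e. it could be abused as an amplification reflector."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(memcached_udp_frame(b"version\r\n"), (host, port))
        data, _ = sock.recvfrom(4096)
        # Response carries the same 8-byte frame header before the payload.
        return data[8:].startswith(b"VERSION")
    except socket.timeout:
        return False
    finally:
        sock.close()
```

If `is_udp_exposed` returns True for a box you own, firewall port 11211/udp or start memcached with UDP disabled.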
1. It may be difficult/expensive to arrange for the correct set of source subnets to be available at the points where filtering needs to be done. Motivation to perform egress filtering fails to overcome this cost threshold.
2. Fear that some customers are actually (probably without realizing) relying on alien source address traffic being routed. Therefore filtering that traffic would result in unhappy customers and support workload.
In our network over the years I've come across several instances where it turned out we were (erroneously) relying on one of our upstream providers routing traffic with a source IP from another provider's network. Since policy-based source-IP selection on outbound traffic is quite tricky to set up and get right, I can imagine that ISPs would take the easy way out and just pass the traffic.
https://www.internetsociety.org/blog/2014/07/anti-spoofing-b...
If I understand the article's point: carriers pay for the egress traffic that causes DDoSes, and that cost, plus the cost of the ill will generated, outweighs the cost of filtering, whose price has fallen and continues to fall.
Personally, if the author is correct, I wonder whether this is one of those high-level, long-term decisions that companies appear absolutely incapable of making. (In my experience, short-term gains are generally overvalued at the expense of long-term losses, especially when the costs and benefits involved are hard to measure directly.)
It's still possible to restrict it, but simple RPF checks don't always cut it.
There is almost no legitimate reason whatsoever for clients to spoof their public IP address. Obviously, there are reasons to SNAT at the carrier level for load-balancing or routing purposes.
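The check an ISP would apply at the edge (BCP 38 source-address validation) is conceptually simple, whatever the hardware details. A toy sketch — the prefixes below are example documentation ranges, not anyone's real allocation:

```python
import ipaddress

# Prefixes actually allocated to this customer/network (example ranges only).
ALLOWED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def egress_permitted(src_ip: str) -> bool:
    """BCP 38 in one predicate: only forward packets whose source address
    belongs to a prefix we actually originate; drop spoofed sources."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_PREFIXES)
```

A packet claiming to come from some other network's address space simply never leaves — which is exactly what makes reflection attacks impossible to launch from a BCP 38-compliant network.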
How many times are we going to see the HN comment that says "lol why do so many people use Cloudflare? I don't need it for my blog!"
Naive decentralization (naive trust) doesn't work.
- Volumetric attacks like this one, mostly reflection.
- Application-level attacks like SYN floods or protocol-specific attacks.
Defending against both costs a LOT of money.
Volumetric attacks are dealt with at the network edge using rate limits and router ACLs. They're really easy to identify and block, but the catch is that you need more bandwidth than the attacker in order to successfully do so. With attacks in the terabits-per-second range, this gets expensive.
Application-level attacks are harder to execute since there's no amplification and you need more bandwidth to pull it off, but they're much harder to block, too. They exhaust the server software's capacity by mimicking a real client. Common examples are SYN or HTTP floods.
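The "rate limits" half of the edge defense boils down to a per-source token bucket, which routers implement in hardware. A minimal software sketch of the same idea (the rate/burst numbers are arbitrary):

```python
import time

class SourceRateLimiter:
    """Per-source token bucket -- the idea behind a router ACL rate limit:
    each source may send `rate` packets/second plus a small burst."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.flows = {}  # src_ip -> (tokens, last_timestamp)

    def allow(self, src_ip: str, now=None) -> bool:
        if now is None:
            now = time.monotonic()
        tokens, last = self.flows.get(src_ip, (self.burst, now))
        # Refill tokens for the time elapsed since this source's last packet.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.flows[src_ip] = (tokens - 1.0, now)
            return True   # within budget: forward the packet
        self.flows[src_ip] = (tokens, now)
        return False      # over budget: drop
```

This works against dumb floods; it does nothing against application-level attacks that stay under the rate while exhausting server state, which is why those need deeper inspection.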
When you get hit by a DDoS attack, you have two choices:
- Filter the attack and block the offending traffic without affecting legitimate requests. This is hard, and most companies can't do it themselves. They need to have someone like Akamai on retainer and dynamically reroute traffic, like GitHub did.
- Declare bankruptcy and announce a blackhole route to your upstream providers (taking down the host in question, but protecting the rest of your network).
When you host custom applications that can't be scaled out or cached, DDoS mitigation is especially hard since you cannot just throw more servers at it like CloudFlare does.
Most services we host use proprietary binary UDP protocols, which is unfortunate, since UDP is easy to spoof and even experienced DDoS mitigation companies have trouble filtering it. Our customers get hit by DDoS attacks 24/7, so blackholing is not an option.
We had to build our own line-rate filtering appliances in order to handle the ever-increasing number of application-level DDoS attacks, by reverse engineering the binary protocols and building custom filtering and flow tracking.
All of this costs a huge amount of money, and most ISPs simply lack the resources to do this.
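To illustrate what "flow tracking" means for a reverse-engineered UDP protocol: pass packets only from sources that have completed the protocol's handshake. This is a toy sketch under an assumed protocol shape (a client's first packet starts with a magic marker; the marker here is made up), not Nitrado's actual appliance logic:

```python
# Hypothetical handshake marker for an assumed binary game protocol.
MAGIC = b"\x7fGAME"

class FlowFilter:
    """Stateful UDP filter: track flows that performed a valid handshake,
    drop unsolicited traffic from unknown sources (e.g. reflected floods)."""

    def __init__(self):
        self.known_flows = set()  # (src_ip, src_port) tuples

    def accept(self, src, payload: bytes) -> bool:
        if src in self.known_flows:
            return True                   # established flow: pass
        if payload.startswith(MAGIC):
            self.known_flows.add(src)     # valid handshake: start tracking
            return True
        return False                      # no handshake: drop
```

Doing this at line rate for terabit-scale floods is where the real cost lies; the logic itself is the easy part.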
Happy to answer questions, but I'm going home right now, so it may take a few hours :-)
(Nitrado is a leading hosting provider specializing in online gaming, both for businesses/studios and regular customers, so we're dealing with DDoS attacks on a regular basis. We got hit with the same memcached attacks as GitHub and Cloudflare, and it was the largest attack in our company history. Ping me if you want to talk.)
Because of how it's done, you can't check first whether the page hidden behind Cloudflare is something you'd want to enable JavaScript for, because Cloudflare won't let you see the HTML of the page without enabling JavaScript for it first.
That is broken.
We make things more annoying for VPN traffic because it's 99% bad actors. Every time someone is up to no good on our services, they're behind Tor/VPN.
It's simple cost/benefit analysis. If you think a business should bend to every single whim someone might have, then you haven't built much of one.
Making someone run Javascript so they can click on a captcha? Worth the loss of a few pennies because someone's angry about it on HN.
You need to conveniently ignore why people use Cloudflare to say that Cloudflare is breaking the internet. Ideally, nobody would have to use it, but that isn't reality.
Does anyone have an example of this webpage?
Unless I am engaging in e-commerce, I do not run a browser JavaScript engine. I rarely if ever encounter a webpage that truly "requires" one. GitHub certainly does not require JavaScript for me to use it via www.
Edit: The attacker didn't need nearly that kind of bandwidth to execute this attack. See [1]
Edit: 1/50th -> 1/400th (bits vs bytes)
[0] http://www.internetlivestats.com/one-second/#traffic-band
> The vulnerability via misconfiguration described in the post is somewhat unique amongst that class of attacks because the amplification factor is up to 51,000, meaning that for each byte sent by the attacker, up to 51KB is sent toward the target.
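The quoted factor also explains the earlier edit about attacker bandwidth: back-of-the-envelope arithmetic against GitHub's reported ~1.35 Tbps peak shows how little upstream capacity the attacker needed. (The 51,000x figure is the quoted upper bound; real amplification per reflector varies.)

```python
# Back-of-the-envelope math behind the quoted amplification factor.
AMPLIFICATION = 51_000      # bytes out per byte in (upper bound, per the quote)
peak_bps = 1.35e12          # GitHub's reported peak, ~1.35 Tbit/s

# Bandwidth the attacker must actually source to drive that peak:
attacker_bps = peak_bps / AMPLIFICATION
print(f"{attacker_bps / 1e6:.1f} Mbit/s")   # ~26.5 Mbit/s -- one fast uplink
```

In other words, a terabit-scale attack can be launched from a single well-connected machine, which is what makes open reflectors so dangerous.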
From what I understand, the attack originates from publicly exposed memcached servers configured to support UDP and with no authentication requirements:
- put a large object in a key
- construct a memcached "get" request for that key
- forge the source IP address of the UDP request to be that of the target/victim server
- memcached sends the large object to the target/victim
Multiply times thousands of exposed memcached servers.
That about right?
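That's about right, and the request/response size asymmetry is easy to see by just building the payloads from memcached's text protocol (no packets are sent here; the IP spoofing step is deliberately not shown). The key name and object size below are arbitrary:

```python
# Illustrative only: the asymmetry that makes the reflection worthwhile.
KEY = b"a"
VALUE = b"x" * 1_000_000  # the large object the attacker pre-stores (step 1)

# Steps 2-3: the tiny "get" request whose source IP gets forged to the victim.
request = b"get " + KEY + b"\r\n"

# Step 4: the response memcached sends to the victim, per the text protocol:
#   VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n
response = (b"VALUE " + KEY + b" 0 " + str(len(VALUE)).encode() + b"\r\n"
            + VALUE + b"\r\nEND\r\n")

amplification = len(response) / len(request)  # >100,000x for this payload
```

A 7-byte request triggering a ~1 MB response, multiplied across thousands of reflectors, is the whole attack.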
I consider it community service.