HAProxy is not affected by the HTTP/2 Rapid Reset Attack - https://news.ycombinator.com/item?id=37837043 - Oct 2023 (31 comments)
The largest DDoS attack to date, peaking above 398M rps - https://news.ycombinator.com/item?id=37831062 - Oct 2023 (461 comments)
HTTP/2 Rapid Reset: deconstructing the record-breaking attack - https://news.ycombinator.com/item?id=37831004 - Oct 2023 (22 comments)
HTTP/2 zero-day vulnerability results in record-breaking DDoS attacks - https://news.ycombinator.com/item?id=37830998 - Oct 2023 (69 comments)
The novel HTTP/2 'Rapid Reset' DDoS attack - https://news.ycombinator.com/item?id=37830987 - Oct 2023 (103 comments)
https://www.envoyproxy.io/docs/envoy/v1.27.1/version_history...
Systems that require very high cognitive load on their human operators (whether machines, programming languages, etc.) are always destined to fail. Human beings are not good at doing boring, repetitious work that requires them to stay focused: lapses will occur. And hackers are going to find and exploit those gaps. So the best way to avoid those problems is to build solutions or specifications in which those gaps are not even possible.
Modern software systems are some of the most complex systems that humans have ever invented--and they just keep getting more complex over time as we layer new things on top of them. Think of a really high Jenga tower with lots of holes in the base.
That means we need to strive to keep things simple. That may mean making decisions that prevent common or severely impacting failure cases from being possible, even if it is a little more difficult to do, or eliminates an esoteric use-case. This is even more true when writing specifications that others will implement and/or be expected to conform to.
In this case, simplifying the protocol won't help -- the vulnerability rests on four facts:
1. Backend servers can't cancel work immediately (this is not a protocol problem)
2. The client can make concurrent requests on a single connection (this is the whole point of HTTP/2)
3. The concurrency limit is predetermined; there is no way for the server to throttle below it without a user-visible error
4. The client can cancel any request mid-flight (removing this would be equally bad, security-wise)
Unless you remove the concurrency, making the protocol simpler won't fix it. Protocol designers need an adversarial mindset, not a simpler mind.
Curious to see F5 still playing games with their own CVE disclosure on the BIG-IP product though... assigning it MITRE CWE-400 is just lying.
http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_...
http://nginx.org/en/docs/http/ngx_http_core_module.html#keep...
So are limit_conn and limit_req:
https://nginx.org/en/docs/http/ngx_http_limit_conn_module.ht...
https://nginx.org/en/docs/http/ngx_http_limit_req_module.htm...
So it pertains to both NGINX and NGINX+.
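Put together, a hedged sketch of how those directives might be combined in an nginx config (the values here are illustrative placeholders, not recommendations; tune them for your workload):

```nginx
http {
    # Bound how many requests a client may send over one connection.
    keepalive_requests 1000;

    # Shared-memory zones for per-IP connection and request limits.
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=reqs:10m rate=100r/s;

    server {
        listen 443 ssl;
        http2 on;

        # Cap concurrent streams per HTTP/2 connection.
        http2_max_concurrent_streams 128;

        limit_conn perip 20;
        limit_req  zone=reqs burst=200 nodelay;
    }
}
```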
I didn’t find anything relevant so I assumed that Nginx was not affected.
Turns out that was not a good assumption :p
This headline is pretty misleading: as the article outlines, you would need a fairly non-standard config to be exploited like this.
Versus many other products whose stock configs are vulnerable.
Title should contain this info.
What do you guys use? Anything FOSS and not an appliance?
HTTP/2 originated at an online advertising services company and was developed by companies that profit from the sale and delivery of online advertising. According to its proponents, HTTP/2 was designed to "speed up the web".
I respect that opinions on HTTP/2 may differ. If someone loves HTTP/2, then I respect that opinion. In return I ask that others respect opinions that may differ from their own, including mine. NB. This comment speaks only for the web user submitting it. It does not speak for other web users. IMHO, no HN commenter can speak for other web users either. Thank you.
If HTTP/2 speeds up a rich multimedia web experience then it may legitimately be one way to "speed up the web" for someone who expects that level of experience.
I don't think it's fair to criticize a protocol for who designed it. The specification is out there for anyone to interpret, and if there is a specific complaint about its design, then make that.
I don't get all this anti HTTP 2 & 3 sentiment on hacker news. What's wrong with people here? HTTP 1.1 is a quarter century old at this point. This sounds just like a bunch of grumpy old men arguing against progress. Time to move on. Yes HTTP/1.1 works. But it's also a bit limited and slow in various ways that both new HTTP variants address. One little bug in nginx is not going to change anything. Bugs happen all the time. They get fixed and people move on. I'm not hearing a lot of rational arguments here.
It's still HTTP/1.1 everywhere here.
I think it might also require a patched version of Go.
Running the same now, or pulling a new binary, using xcaddy, etc. will get you 2.7.5 which also includes some other small fixes not related to rapid reset.
Nothing Google or Microsoft does will dethrone it.
Forget the browser; use a C or Java client and HTTP.
If they block port 80, just use another port.
They cannot win.
Is there some kind of evil back room conspiracy to make the web faster, using open standards?
Like it was not enough to make HTTPS default, they need to eradicate the opposition.
The list is too long to enumerate but they all have one thing in common; they profit from root certificates.
The web is not faster, it's bloated. It's only open if you can understand it and implement it.
HTTP/1.1 is the fastest, most open, web you'll ever have; because it is small.
Lots of discussion and submissions related to this over the last few days, not to mention this was submitted 2 days ago
Not that it's not important to disseminate the knowledge, but the chosen title here is deliberately sensational