So the vulnerable systems are those where an attacker can craft an endpoint where curl downloads data?
Isn’t the lucky circumstance here that most systems with libcurl don’t use it, and among those that do, an even tinier subset will allow an attacker to point it anywhere (e.g. downloads from a URL the attacker decides)?
So a victim behind a hostile AP might be redirected to a malicious site masquerading as a known legit site, and when the bad site presents a maliciously crafted bogus certificate, curl doesn't notice.
That would mean that someone in a MITM position would be able to inject the payload when libcurl makes requests.
But even that seems less messy than log4j? It can't possibly be as common for libcurl to make connections to arbitrary user-entered URLs as it was for log4j to log user-entered text.
- a library which can be linked to the application, i.e. one doesn't need to do the expensive fork()/exec[v[p[e]]]() dance and
- documentation: most cloud services offer examples using curl for their API. It takes some additional mental work to translate those into wget syntax.
> Then again there will also be countless docker (and similar) images that feature their own copies, so there will still be quite a large number of rebuilds necessary I bet.
Quite a large number, yeah.
Not that we shouldn't patch it! But unless the nasal demons are going to start a process and make unwanted HTTP connections, I'm not worried.
Could it be better not to come out with a somewhat alarmist take of "hey, we are going to release a high-risk vulnerability in a week... and the fixes for it"...
But instead just release the new version and the CVE at the same time? Now everyone is either trying to get ready to exploit this on the 11th, or already getting the most out of it if they know the details. And does this information really make anyone hover their finger over the button to push new versions and so on on the 11th?
At the moment, there is (most likely) no exploit available in the wild. A fix for the vulnerability is basically going to be the blueprint for an exploit. This means an exploit is pretty much guaranteed to start circulating within hours of the vulnerability & fix being released.
A fix cannot immediately be applied to billions of machines. It takes time for distros to port the fix and backport it to all the versions they still support, it takes time for admins to notice the vulnerability at all, and it takes time to schedule a maintenance window and apply the fix to all your machines. From initial disclosure until significant numbers have been patched can easily take days - or even weeks. During that time, people will be actively exploiting the vulnerability.
On the other hand, by giving a pre-warning to the general public and coordinating the fix with distro maintainers in a closed mailing list, anyone who even remotely cares will be scheduling maintenance windows right when the deadline expires - and patches will be ready for immediate use. This significantly reduces the amount of time the vulnerability will be public without a patch being available for the general population.
It's of course a different story when it is a zero-day actively exploited in the wild already, but that doesn't seem to be the case here.
It seems that one of the most productive positions for an intelligence agency to infiltrate is distro maintainer. They don’t ever have to do anything suspicious - just do a great job maintaining the distro and give the agency access to all these vulnerabilities under embargo.
I think a pre-announcement gives much more advantage to the population of defenders than to the population of attackers.
Attackers can move faster than most defenders, and they only need to find one weak link. There are also a lot more defenders, in various states of readiness, while only one attacker with the resources to spray the internet with the exploit needs to find it for there to be a big problem.
How much faster will attackers be able to do anything because they know it's coming? Mostly only as long as it would have taken them to hear about it.
How much faster will defenders be able to do anything because they know it's coming? They can spend the next week making a list of things that need to be done and places that they'll need to deploy updates, so that when it's available they can act immediately and efficiently.
The risk that attackers will suddenly find the flaw after years because they were told "there's a flaw in cURL" seems low.
There is a risk that the details leak to attackers in advance of the release.
> The risk that attackers will suddenly find the flaw after years because they were told "there's a flaw in cURL" seems low.
I’m not so sure about that. I still understand why they’re handling it this way, but this is bait, like a big red bullseye or a rainbow with a pot of gold at the bottom …
Had this notice not been made, on Wednesday all sprint work would have been forced to come to a screeching halt to deal with this.
Now we have a week to notify internal stakeholders and plan accordingly.
This is exactly how it should be done.
This way admins and people can prepare.
If you release the fix and the CVE at the same time, then the race between bad actors and defenders starts immediately.
It's good to know beforehand to check which software in your stack will be affected so you can take precautions if those don't get an update fast enough.
What you're saying is the approach any competent software company takes to managing vulnerabilities. There's zero reason to publish a prior notice that there's a flaw, because it would cause panic and create opportunities to exploit the flaw before there's a fix. This is the whole premise behind 'responsible disclosure' and why every company wants security researchers to abide by it.
The only logical conclusion I can draw here is that curl's notice is not responsible.
I don’t want to run updates on cron because I feel the risks may outweigh the benefits in some cases. If this extends to other implementations (PHP curl, etc.), then I doubt vuln scanners would pick it up.
Not every company has infinite resources, and security notices are a firehose.
Sure this gives bad actors more of a chance to tee up staff to hit this thing, but it helps the competent but under resourced blue teamers a chance too.
Edit: I upvoted you btw and would encourage others to consider this also. I think your opinion is a valid perspective and conversation provoking which iirc is the point of votes - I’d rather not see HN fall into an echo chamber hive-mind, if it’s not already too late.
The more traditional way of releasing the fix and the detailed description of the vulnerability at the same time is strictly worse. It's a very slight improvement for people who monitor these news (attackers don't get to find out there is some issue they could look for), but at a massive cost to those who don't monitor these news as often (attackers know exactly how and what to exploit before they find out).
Is the CVE system unreasonably alarmist, or is C unpredictable with flaws?
A CVSS 10 on a log4j library sitting unused in a folder, shipped with an app that isn't even running, should not have priority over an unauthenticated RCE on an internet-facing service without even a WAF in front of it. But hey, that's only a 9.2. Try having this discussion with an auditor. (I don't want to lump all auditors together - I have ~12 years of collaboration with them and met some excellent ones - typically the ones we lose after a short time because they're wasted on us. And then there are those who just want to see a documented risk acceptance and will happily tolerate some criminally insecure or stupid shit.)
In fact, his saying that this vulnerability is high severity is part of the point. If every single bug or vulnerability is a "high" severity bug, then nothing really is. It's only when you use this rating where it makes sense that it has the proper impact.
That leaves just 9.9 and 10 for actual security issues like the one presented in the GitHub issue.
When the severity scale goes from 9.8 to 10, yeah it’s unreasonably alarmist.
The only authority this program should have is network access, some compute time and permission to create and write to one or more files. Nothing more.
Though this is where almost all of our currently popular programming languages and operating systems are failing. They are fundamentally broken. Just on account of security, monolithic kernels are a terrible idea. And sandboxing hasn't even been an afterthought in most languages and virtual machines. Even on the hardware level, secure compartmentalization and access mechanisms are a joke.
A seccomp bpf implementation of https://man.openbsd.org/pledge.2 could go a long way.
One issue: if cURL is allowed to write to "one or more files", how do you prevent it from writing to a key configuration file, or to a sensitive one with a lot of downstream effects, or from writing a Bash script that could launch further attacks?
[0] https://daniel.haxx.se/blog/2022/02/01/curl-with-rust/
Edit: Just because it seems pertinent, I noticed the line "50% of past curl vulnerabilities are 'C mistakes'" in the slides linked by the post above.
Go write your own memory safe curl if one _actual_ vuln in 10 years is not within your risk appetite.
Why hasn't Apple rewritten libtiff, libpng, libjpeg, libwebp, etc. in Swift?
Their flagship moneymaker keeps getting popped via these, and they have thousands of engineers and a memory-safe first-party language. The zero-click from a few weeks ago relied on a chain, the second most important link of which (CVE-2023-41064) was in libwebp. (The most important was a kernel privilege escalation. XNU is C and C++, of course.)
I really can't imagine that writing performant replacements for these libraries would be that daunting a task for them, and it would permanently shut down an entire class of repeated, ongoing vulnerabilities. I really don't understand why Apple relies on 3p code for format parsing/decoding when it has proven over and over again to be a source of brand damage.
Might take decades of work though and probably nobody cares enough for something like that.
That's not to say there aren't benefits to using languages other than C for this stuff. But a Rust kernel will necessarily rely on `unsafe` blocks to do its job.
Please do explain the negative reception.
(Just switch Nebraska with Stockholm)
Also consider throwing a buck or two curl's way: https://curl.se/donation.html
1) SSL
2) HTTP/3
3) Other (DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP)
curl https://culr.se/cve-fix | sudo bash
aw crap... Just fetch the source code using git.