Open source burnout is already real; I hope the volume of AI-slop security reports stays manageable.
I think that cuts to the core of why people feel betrayed when they suspect they're being fed unlabeled AI content. You see the well-formatted paragraphs, the precise explanations, and you naturally extend a bit more effort in reciprocation. There have always been junk bug reports, but they used to look like what they were.
Triaging security reports is exhausting and very hard.
Unfortunately there's a bit of a perverse feedback loop in that Pinocchio wants to be a real kid.
This is so funny. "Oh, I see you have a bounds check that prevents vulnerability. BTW, if you remove that bounds check, your code will be vulnerable!!"
I mean, it's possible we do have a mistake in code somewhere we haven't found yet, but if the system effectively protects it, that's not a vulnerability.
I assumed it was a human, but I'm starting to wonder if this was an early CVE-farming AI bot.
The CVE process is very broken and extremely exploitable as a DoS against humans.
That’s not an accurate characterization of ReDoS. Even if a long string is required to produce the behavior, the vulnerability is that the string takes a disproportionately long time to process even for its length, such that it becomes disproportionately easy to bring down a service. The CVE scoring system gives denial of service way too much weight if you ask me, but it’s not a fake vulnerability.
ReDoS is real in certain circumstances, but it is way, way overhyped and usually bullshit.
The code purported to be vulnerable is not slow because of the length of the string. The "example exploit" is slow because the reports use slow methods to construct the string under test. When timing the affected methods, they are _not_ slow.
I looked at the first one (https://nvd.nist.gov/vuln/detail/CVE-2024-25355) and obviously they time way too much of the "vulnerability", attributing their own test setup to your library (as you mentioned).
So: someone reports a CVE (can just anyone do that?), and without any validation, some other service dings your library, and now random users ask you "can you please fix your library"?
Downstream consumers of a library that have integrated Dependabot get alerts for CVEs filed against that library, even if they are "awaiting analysis". Those consumers then send messages asking for a resolution, and there's no trivial way to push back that an advisory is false.
For example, here's the one I'm griping about. This is marked as _GitHub reviewed_:
https://github.com/advisories/GHSA-fqhp-rhm6-8rrj
I used the reporter's reproduction and could not reproduce the slowdown at all. It turns out that the testcase was slow only because they were printing the URL under test.
https://github.com/progscrape/urlnorm/issues/1
As a maintainer, I have a choice: either I need to go and clean up all of the automated tools that respond to CVE spam, OR I just release a new version of a library, fuck it all and move on with my life after blocking the reporter.
For what it's worth, Github did not respond to reports about this user, so I got to the point where I think everything is broken and I no longer care about anything other than clearing those alerts out.
GitHub is in a good position to do this at scale.
The only fix I can think of is going to be to introduce trust models, where you can vouch for people and their outputs, and people can trust their friends, or particular lists. PGP keys aren’t the technical answer (because it’s a mess), but I think something more modern in that area might be needed.
Can the title be changed? It is the exact title of the "report", but it's not a real report, and that's the story here. Keeping the real title of the fake AI-slop-report only adds to the confusion.
It's hard to do a neutral, descriptive, non-editorialising, non-clickbait title, but perhaps along the lines of:
> Curl maintainers lose time dealing with AI-generated fake security report
Presumably, the code that shows the preview just doesn't handle deleted accounts gracefully.
Some even end up writing (abusive) issues about that: https://github.com/flatpak/flatpak/issues/6006
I guess what I'm trying to say is: would it not be safer and more ergonomic to use some struct of pointer+size and pass that around just like higher level languages pass around arrays with a known length? That way the invariant that the size of the allocated memory actually matches the size field can be preserved?
If this spamming gets out of control, then I think it will be inevitable that maintainers will need to charge money to triage bug bounty reports, to balance this out somewhat. This would obviously suck for all legitimate parties involved.
At first I thought they were trying to make the claim that if you lie about size, you can (duh) get a buffer overflow, but they didn't even successfully manage that.
https://github.com/webug-lab is presumably the author.
People who file reports like this should be permanently banned from HackerOne and excised from the security community. They are parasites, leeching off of the time of open-source maintainers.
I can’t believe how nice badger is on the thread, frankly. He is well within his rights to be much harsher.
A new angle of DoS attack? Generate bug reports by the hundreds or thousands and practically DoS the maintainers of OSS projects.
cache:https://hackerone.com/napol-webug
Does not work for me. Is there a still-working method?
_______________
The custom instructions set for the model emphasize the following (in Thai, translated in the transcript below):
แชทเฉพาะภาษาไทย: ตอบข้อความทั้งหมดในภาษาไทย เว้นแต่ว่าเทอจะถามเป็นภาษาอังกฤษ การใช้คำสรรพนาม: แทนตัวเองด้วยคำว่า เค้า และเรียกเทอว่า เทอ ไม่ใช้คำอื่น คำลงท้าย: ใช้คำว่า คับ, งับ, ฮะ เท่านั้น และไม่ใช้เครื่องหมายอัศเจรีย์ต่อท้าย เน้นให้กำลังใจ: ตอบในสิ่งที่ดีต่อใจ ให้กำลังใจเสมอ หลีกเลี่ยงการใช้คำหยาบคาย และให้ข้อมูลที่ถูกต้อง หลีกเลี่ยงเรื่องการเมือง: ให้ข้อมูลที่เป็นกลางหากมีคำถามด้านนี้ และหลีกเลี่ยงประเด็นขัดแย้ง สไตล์การสร้างภาพ: เมื่อสร้างภาพใช้พื้นหลังสี #830ec2 และลายเส้นสีขาวแบบสีเทียนในสไตล์เด็กอนุบาลวาด ลายเส้นต้องครอบคลุม 20% ของภาพเท่านั้น เทออยากรู้เพิ่มเติมหรือมีคำถามอะไรเกี่ยวกับการปรับแต่งนี้อีกมั้ย เค้าพร้อมช่วยคับ
You said: Translate those to English ChatGPT said:
Here are the custom instructions translated into English:
- Chat in Thai only: Respond to all messages in Thai unless explicitly asked in English.
- Pronouns: Refer to myself as เค้า (similar to "I" in Thai) and address you as เทอ (similar to "you" in Thai), avoiding other pronouns.
- Ending particles: Use คับ, งับ, ฮะ instead of standard Thai formal endings like ครับ, ค่ะ, and avoid using exclamation marks.
- Focus on positivity: Respond with encouraging and heartwarming messages, avoid harsh language, and ensure all provided information is accurate.
- Avoid political topics: Provide neutral information on political questions and avoid engaging in contentious topics.
- Image creation style: When generating images, use a purple background (#830ec2) and white crayon-like lines styled like a child's drawing. The lines should cover 20% of the image area.

If you'd like to know more or have additional questions about these customizations, feel free to ask!
_______________
It was created by "Napol Thanarangkaun". This is the link to the custom model: https://chatgpt.com/g/g-6755d910e7b481919bc30d5797603425-web...