If you just want to detect loops, keep a "+1" pointer that you use to increment through the data; also keep a "+2" pointer that advances twice each time your "+1" pointer advances. Either your "+2" pointer hits the end (no loop), or it catches up to your "+1" pointer — in which case you have a loop.
https://en.wikipedia.org/wiki/Cycle_detection#Floyd's_Tortoi...
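The two-pointer idea sketched above, applied to a singly linked list (the node shape { value, next } is just an assumption for illustration):

```javascript
function hasCycle(head) {
  let slow = head; // the "+1" pointer
  let fast = head; // the "+2" pointer
  while (fast !== null && fast.next !== null) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) return true; // pointers met: there is a loop
  }
  return false; // the "+2" pointer hit the end: no loop
}

// Usage: build a -> b -> c -> a (a cycle)
const a = { value: 1, next: null };
const b = { value: 2, next: null };
const c = { value: 3, next: a };
a.next = b;
b.next = c;
console.log(hasCycle(a));                        // true
console.log(hasCycle({ value: 0, next: null })); // false
```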
There's a funny plot twist at the end.
3 bug reports by discovering 1 bug. What a bargain! :-D
First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone."
I've been thinking about fuzzing JavaScript code (not attacking V8 or SpiderMonkey, but the JS code itself). While JavaScript might not be vulnerable to buffer overflows and format string vulnerabilities, it certainly can have logic issues, unhandled exceptions, and DoS vulnerabilities that are exposed by fuzzing.
I took a look at the most-depended-on NPM packages. I'll try writing test harnesses on functions that take user input. Does anyone have any ideas for packages that could use some fuzz testing?
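A minimal dumb-fuzzing harness along those lines might look like this — no coverage feedback, just random inputs and an exception filter. The target here is JSON.parse as a stand-in; you'd swap in the NPM package function you actually want to test, and adjust which exception types count as "expected":

```javascript
// Generate a random string where any character can be any byte value.
function randomString(maxLen = 32) {
  const n = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < n; i++) {
    s += String.fromCharCode(Math.floor(Math.random() * 0x100));
  }
  return s;
}

// Hypothetical target: anything that parses untrusted input.
function target(input) {
  return JSON.parse(input); // stand-in for the function under test
}

let findings = 0;
for (let i = 0; i < 10000; i++) {
  const input = randomString();
  try {
    target(input);
  } catch (e) {
    // SyntaxError is JSON.parse's documented failure mode; anything
    // else (TypeError, RangeError, ...) would be a finding worth saving.
    if (!(e instanceof SyntaxError)) {
      findings++;
      console.log("unexpected error on input:", JSON.stringify(input), e);
    }
  }
}
console.log(`done, ${findings} unexpected errors`);
```

For logic bugs and DoS rather than crashes, you'd add checks on the return value (invariants) and a timeout around the call.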
Even better, use the /dev/shm RAM disk if you have memory to spare (although you should probably create an additional RAM disk with a size limit if you don't want a runaway program to accidentally drain your RAM). On a modern development machine, setting aside 2 GiB for test cases is usually not a problem, and the speedup is often significant.
You'd look for code where input would be able to modify Object.prototype (or I guess some other constructor's prototype) unintentionally (and it's basically always unintentional).
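The classic shape of that bug is a naive recursive merge of attacker-controlled JSON — a sketch (this merge function is illustrative, not any particular package's code):

```javascript
// Naive deep-merge: the textbook prototype-pollution pattern.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      merge(target[key], source[key]); // recurses into target["__proto__"],
                                       // i.e. into Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own key, so the loop visits it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

console.log({}.polluted); // true — every object in the process is affected
```

A fuzzer can use exactly that last line as its oracle: after each call into the target with generated input, check whether a fresh `{}` has picked up unexpected properties.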
Example of such a vulnerability found in Minimist: https://snyk.io/vuln/SNYK-JS-MINIMIST-559764
These issues are a constant pain in the JS ecosystem and you wouldn't be the only one using fuzzing to try to find them.
Then the impressive number of writes all go to memory, which should pose no problem.
My problem was how to decide whether a test had failed, because a failure would not be a crash but a failure to find an intersection between the curves. So I compared against an existing library that uses a completely different algorithm, which means the other library fails on different test cases than my own library does. If the results for a test case differed, at least one of the two must have failed, and by checking the reported intersections against the curves I could easily decide which one.
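That differential-oracle idea can be sketched with two deliberately simple intersection finders for curves given as functions y = f(x) — everything here is hypothetical stand-in code, not the libraries in question. Implementation A refines sign changes with bisection; implementation B is a coarse scan that reports unrefined guesses. When the results differ, plugging each candidate back into the curve equations tells you which implementation failed:

```javascript
// Bisection root refinement; assumes h(lo) and h(hi) have opposite signs.
function bisect(h, lo, hi, iters = 60) {
  for (let i = 0; i < iters; i++) {
    const mid = (lo + hi) / 2;
    if (Math.sign(h(mid)) === Math.sign(h(lo))) lo = mid;
    else hi = mid;
  }
  return (lo + hi) / 2;
}

// Implementation A: fine scan + bisection refinement.
function intersectA(f, g, lo, hi, steps = 1000) {
  const h = x => f(x) - g(x);
  const out = [];
  for (let i = 0; i < steps; i++) {
    const a = lo + (i * (hi - lo)) / steps;
    const b = lo + ((i + 1) * (hi - lo)) / steps;
    if (Math.sign(h(a)) !== Math.sign(h(b))) out.push(bisect(h, a, b));
  }
  return out;
}

// Implementation B: coarse scan, no refinement (intentionally weaker).
function intersectB(f, g, lo, hi, steps = 10) {
  const h = x => f(x) - g(x);
  const out = [];
  for (let i = 0; i < steps; i++) {
    const a = lo + (i * (hi - lo)) / steps;
    const b = lo + ((i + 1) * (hi - lo)) / steps;
    if (Math.sign(h(a)) !== Math.sign(h(b))) out.push(a); // unrefined guess
  }
  return out;
}

// Differential oracle: a result set fails if any of its points does not
// actually satisfy f(x) ≈ g(x).
function whichFailed(f, g, xsA, xsB, eps = 1e-6) {
  const bad = xs => xs.some(x => Math.abs(f(x) - g(x)) > eps);
  return { aFailed: bad(xsA), bFailed: bad(xsB) };
}

const f = x => Math.sin(x);
const g = x => x / 2; // the curves cross near x ≈ 1.895 on [0.5, 3]
const xsA = intersectA(f, g, 0.5, 3);
const xsB = intersectB(f, g, 0.5, 3);
console.log(whichFailed(f, g, xsA, xsB)); // { aFailed: false, bFailed: true }
```

The point is the same as in the comment above: neither implementation is trusted, but checking candidates against the original equations is cheap and unambiguous.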
Would it? Maybe it's because I've had a "low-level upbringing", but whenever I'm writing parsing code for a file format, "assume any byte of data you read can have any value" is the norm. The rest of it follows from there.