For what purpose?
Creating an infinite loop that updates a file and commits it is hardly worthy of a job offer.
It's entirely possible that such a load test was considered but deemed unrealistic, so it wasn't prioritised. If I were running the QA team I'd be annoyed if time were spent on abusive, destructive testing rather than on realistic testing that reflects what real-world users might actually do, especially because load testing like this would have to run on an environment identical to PROD, which is rather expensive.
It reminds me of that old QA joke:
A QA engineer walks into a bar and orders a beer. She orders 2 beers.
She orders 0 beers.
She orders -1 beers.
She orders a lizard.
She orders a NULLPTR.
She tries to leave without paying.
Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.
The bar explodes.
GitHub hasn't failed here: it continued to perform at normal levels for other users, as far as I can see, and they had an upstream process which caught the issue without the system failing. Maybe some exploratory testing had previously identified where that process should kick in, without an automated check being added since the scenario was so unlikely to happen.
Not really. GitHub has been around for over a decade. People bother with problems that have a realistic chance of happening. If GitHub didn't bother to rate-limit commits, it means it was a potential issue that hadn't manifested itself in over a decade.
People tend to worry about problems that actually happen. Otherwise everyone would be freaking out about killer asteroids.
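For what it's worth, the rate limiting discussed above is usually implemented as a token bucket: allow short bursts up to some capacity, then throttle to a steady refill rate. This is a generic sketch, not GitHub's actual mechanism; the `TokenBucket` class and its parameters are invented for illustration:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` events, refilling at `rate` per second.
    A commit/push that arrives when the bucket is empty gets rejected."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
# A tight burst of 10 pushes: the first 5 pass, the rest are throttled.
results = [bucket.allow() for _ in range(10)]
```

An infinite commit-and-push loop like the one described would drain the bucket almost immediately and then be rejected on every iteration, while a normal user committing a few times a minute would never notice the limiter.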