My two favorite examples were:
The Therac-25 — it’s just frontend GUI code. Why test it? What could go wrong? (The fatal race condition was, in fact, triggered through the operator’s data-entry screen.)
The Siberian gas pipeline explosion of ’82 — not technically an accident, but it illustrates the problem with trying to test untrusted code into correctness. It was also the largest non-nuclear man-made explosion, at least up to that time.
The Russians had stolen pipeline schematics from US companies, and the theft was discovered before they got to the control software. Instead of stopping that theft, US intelligence modified the software so that an integer would overflow after a year or two of operation. The Soviet economy depended on the pipeline going into service well before that fuse ran out, so no test campaign they could afford to run would ever last long enough to trigger the bug.
When the bug finally triggered, it slammed valves shut across the network, driving pressures beyond what the pipeline could contain and rupturing multiple sections at the same time.
The US military’s seismologists detected the blast and thought the Russians had detonated a new type of nuclear weapon. The military was preparing to escalate until the intelligence service explained that the explosion was its own handiwork.
Here’s a decent list of other incidents:
https://royal.pingdom.com/10-historical-software-bugs-with-e...