A Tesla has ~100,000,000 lines of code [1]. Considering this post, do you think we are sufficiently educated in software security to produce secure self-driving cars?
Elon Musk: "I think one of the biggest risks for autonomous vehicles is somebody achieving a fleet wide hack" [2].
By your logic, we should not fly on any modern commercial or military aircraft or spacecraft, live within a certain radius of any power plant or hazardous chemical plant, place any dependence on any first-world country's health-care network (including life support), or invest in any company or stock.
Like most things in life, it comes down to a risk/benefit trade-off between security and convenience.
If ISIS were able to hack a major fleet through one such bug, do you think for a single moment they wouldn't use it to kill many people?
I imagine a Twilight Zone episode... Go back in time to before cars were invented and imagine some Mephistopheles offering the bargain: "You'll fly like the wind over hills and mountains, making a journey of days in mere hours!" What's the catch? "For each mile traveled, a certain number of people chosen at random must be put to death or maimed."
He would go on about how the chances of someone you love being chosen for sacrifice were infinitesimal, and the benefits to all were so great and so obvious...
("Also, it will poison the air and water, and force you to become dependent on fuel sources that destroy life and engender wars.")
As for human readiness to safely control a tonne of speeding metal, my vantage point as a full-time motorcyclist makes me extremely confident that the average alleged 'driver' (actually: daydreamer, snot-picker, Instagrammer) isn't even in the ballpark of competence.
Firstly, car automation is machine learning, not conventional programming. The two are completely incomparable.
Finally, we've known how to prove the absence of bugs for decades: formal verification has already given us a verified microkernel (seL4) and a verified C compiler (CompCert). It's not a matter of not knowing how; it's a matter of incentives to do it right.
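To make "prove the absence of bugs" concrete, here is a minimal Lean 4 sketch; the clamp function and its bounds property are a toy example of my own, not anything from a vehicle codebase:

    -- Toy illustration of proving, rather than testing, a property:
    -- clamp x into [lo, hi] by saturating at the bounds.
    def clamp (lo hi x : Nat) : Nat :=
      if x < lo then lo
      else if x > hi then hi
      else x

    -- The theorem quantifies over EVERY lo, hi, x with lo ≤ hi,
    -- so an out-of-bounds result is impossible, not merely untested.
    theorem clamp_within_bounds (lo hi x : Nat) (h : lo ≤ hi) :
        lo ≤ clamp lo hi x ∧ clamp lo hi x ≤ hi := by
      unfold clamp
      split              -- case: x < lo, result is lo
      · omega
      · split <;> omega  -- cases: x > hi (result hi) and lo ≤ x ≤ hi (result x)

Unlike a unit test, a change to clamp that breaks the bound will fail to compile until the proof is repaired; scaling that discipline to millions of lines is exactly the incentives problem.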