> Even if our ML systems were meaningfully intelligent, there's still the issue of proper training. You can't teach humans to be safe drivers by showing them a huge slideshow of dash-cam images.
That's an interesting concept to explore:
I would guess that videos could improve people's driving. Imagine new drivers: showing them videos of different situations, the actions taken, and their outcomes may well help. The same videos might not help an experienced driver, who might instead benefit from videos of more complex situations, or from videos tailored to a specific driving skill.
But I'd be interested in the research: when does such training help people, and when does it not? Which aspects of the training are effective, and which are not?
And can any of that be applied to ML? It may be the old fallacy of conceiving of computers as 'thinking' like people, which Dijkstra compared to asking whether submarines can swim.