The machine's ability to recognize that it's about to crash may actually be one of the issues here, since self-driving/driver-assist crashes are often cases where the AI completely misinterpreted the environment and made bad choices.
A human driver is somewhat likely to eventually realize what situation they've gotten themselves into ("oh no, I can't stop in time") because of the multiple feedback loops and information sources they're working with, combined with their experience as a driver. For example, a drunk or very tired driver is operating with impaired decision-making and response time, but they may eventually notice and react - while an AI misclassifying a fire truck as a stop sign may very well keep misclassifying it until impact.
One way to mitigate this is sensor fusion - even if your vision or radar sensing fails, you can fall back on data from the other sensors to do things like apply emergency braking.
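As a toy sketch of that fail-over idea (all sensor names, thresholds, and the voting rule here are made up for illustration, not any vendor's actual logic): treat each sensor as an independent vote, drop the ones that have failed, and brake on a majority, so one misclassifying sensor can't single-handedly veto the braking decision.

```python
BRAKE_DISTANCE_M = 10.0  # hypothetical minimum safe stopping distance

def should_emergency_brake(readings):
    """Brake if a strict majority of *working* sensors report an
    obstacle closer than the threshold. A failed sensor reports None
    and is excluded, so one dead or lying sensor can't dominate."""
    working = {k: v for k, v in readings.items() if v is not None}
    if not working:
        return True  # total sensor failure: fail safe and brake
    votes = sum(1 for dist in working.values() if dist < BRAKE_DISTANCE_M)
    return votes * 2 > len(working)

# Vision has misclassified and reports clear road, but radar and
# lidar both see something ~8 m ahead: the majority says brake.
print(should_emergency_brake({"vision": 120.0, "radar": 8.0, "lidar": 7.5}))
```

Real systems are vastly more involved (confidence weighting, Kalman filters, object tracking), but the core point stands: redundancy only helps if you keep the redundant sensors.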
Unfortunately, at least one vendor has decided to ditch radar, lidar, etc. and just go with vision!