Humans are error-prone, even when doing things they know well and are good at.
Artificial intelligence is marketed as a machine as smart as a human, yet somehow we infer that because AI is a machine it will not make human mistakes. Mistakes are what produce learning.
The question becomes: do we only release AI for public use when it is assigned to a narrow range of problems and trained to 99.9% accuracy? Or does a consumer just throw AI at unknown, or even non-trainable, problems, and we take the result with a grain of salt? (Non-trainable meaning something like predicting the value of the S&P 500 in 24 months.)
Perhaps new words will be coined to describe AI, its behavior, accuracy, and experience? For now there is a lot of "one size fits all" and "holy grail" seeking. Big companies with armies of salespeople seem to prefer this.