It's a bit more nuanced than that. Floating point arithmetic is not associative: "(A+B)+C" is not always equal to "A+(B+C)". Because of that, operations that sum many values in parallel, such as the parallel reductions used all over neural networks, can yield slightly different results each run with the same arguments: the order in which the partial sums are combined depends on thread scheduling, and a different order can round differently.
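A quick Python sketch of this non-associativity; the shuffled-sum part is a stand-in for how a parallel reduction can combine the same values in a different order from run to run:

```python
import random

# Floating-point addition is not associative: the same three numbers
# summed in a different order can round to different results.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.1 + 0.2 rounds to 0.30000000000000004
right = a + (b + c)   # 0.2 + 0.3 happens to round exactly to 0.5
print(left == right)  # False

# The same effect at scale: summing identical values in two different
# orders (as a parallel reduction might) need not give identical totals.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
forward = sum(xs)
shuffled = xs[:]
random.shuffle(shuffled)
reordered = sum(shuffled)
print(forward - reordered)  # tiny, but typically nonzero
```

On real hardware the reduction order isn't chosen by you at all: it falls out of how work is split across threads, which is why the result can change between runs even with identical inputs.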
There are people working hard on making these computations deterministic (e.g. deterministic reduction kernels), but that comes with a performance cost, so I would guess most AI systems will continue to be (slightly) non-deterministic.