Well I'd argue Excel is more obviously deterministic and not at all a black box. Basically I can give it an unlimited number of dates (or other calculations) and as long as my inputs are 'correct' (and I don't encounter any bugs) I can be essentially completely confident that Excel will provide a correct answer. Or even if it won't, it will break in a logical and fixable/workaroundable way. With ChatGPT... well, it doesn't seem very consistent. Minor modifications to the question can produce wildly different results which are not at all obvious.
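To make the determinism point concrete, here's a minimal sketch (the dates and helper are hypothetical, just for illustration): a date calculation is a pure function, so the same inputs produce the same answer no matter how many times you run it.

```python
from datetime import date

# Hypothetical example: date arithmetic is a pure function of its inputs.
def days_between(a, b):
    return (b - a).days

# Run it 1000 times with the same inputs; we always get a single result.
results = {days_between(date(2024, 1, 15), date(2024, 3, 1)) for _ in range(1000)}
print(results)  # exactly one value in the set, every time
```

That repeatability is what makes failures in a spreadsheet debuggable: if the answer is wrong, the formula or the inputs are wrong, and you can inspect them.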
> We attribute this "confident" adjective to the result, not because it outputs a confidence interval, but because it uses language and we associate that with human traits.
I'm not sure. I mean externally/superficially that's true. But internally it's still a statistical model, which can't really be totally accurate by definition. It has a huge number of inputs scraped from the internet etc. While those inputs are generated by humans, language can be more or less perfectly logical as long as we follow a consistent set of rules. It's not obvious ChatGPT is capable of that, due to the quality of its inputs and its nature.
Even if it's accurate 98%+ of the time, that's still problematic (in many but not all applications). I mean, would you use a calculator that (seemingly) randomly produces an obviously incorrect answer 2 times out of a hundred?
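And a 2% per-answer error rate compounds quickly. A rough sketch, assuming errors are independent across queries (a simplifying assumption; the 2% figure is just the hypothetical number from above):

```python
# Probability of at least one wrong answer across n independent queries,
# given a hypothetical 2% per-query error rate.
error_rate = 0.02
for n in (1, 10, 50, 100):
    p_at_least_one_error = 1 - (1 - error_rate) ** n
    print(n, round(p_at_least_one_error, 3))
```

By 100 queries you're very likely to have been silently wrong at least once, which is exactly the property you don't want in a calculator.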