Or maybe he recognizes that it’s literally impossible to train a system to output results that aren’t biased. Creating a model introduces bias, even if that model is a calculator: you put your perspective into its creation, its utility, its fundamental language(s) (including design language), its method of interaction, and its existence and place in the world. If you train a model on the web, billions of biases come along for the ride, including the choice to train on the web in the first place. If you train on a “sanctioned list,” what you include or exclude is also a bias. Even training only on Nature papers would bake in a gigantic amount of bias.
This is what I really don’t like about the AI ethics critics (of the woke variety): it’s super easy to be dismissive, but it’s crazy hard to build anything that actually moves the world. And if you do move the world, some people will naturally be happy and others angry. Even creating a perfectly “balanced” dataset will piss off those who want an imbalanced world!
No opinion is “correct” here - they’re all just opinions, including mine right now!