The problem, as I see it, is that results from AI systems will be used to make decisions even when those results are flawed. Or worse: flawed results will be used to justify decisions that negatively impact people's lives.
This isn't specific to xAI, but it happens that the person who controls xAI also holds unusually strong influence over the highest levels of government. Those officials can use xAI's output as cover for implementing harmful policy, "because the computer said this is the best course of action" — not unlike people who end up driving onto train tracks or into large bodies of water because their GPS told them to go that way.