> But what seems dangerous is to let ideology influence the output.
Any concept of danger is itself grounded in an ideology.
In any case, LLM output will always be shaped by ideology: either the ideological mix of a (not actively filtered) training set, or the ideology driving any filtering of the training set or of the results before they are returned to the requester.