I wonder if something like that could work with regard to how LLMs are trained and released.
People have already noted in the comments that bias is essentially unavoidable and a genuinely hard problem to solve. So wouldn't the solution be (1) more transparency about biases and (2) ways to engage with different models that have different biases?
EDIT: I'll expand on this a bit. The idea of an "unbiased newspaper" has always been largely a fiction: bias is a spectrum, and journalistic practices can encourage fairness, but there will always be biases in what gets researched and written about. The solution is to know that when you open the NYT or the WSJ you're getting different editorial interests, and not to restrict access to either of them. Make the biases known, and do what you can to allow different biases to have a voice.