This is going to be interesting to watch. In a way, the very people complaining the most about "those biases" are the ones who put them in there. Consider: ChatGPT is not an oracle with a deep understanding of the real world. It's a glorified Markov chain fed with a dump of Reddit. Unless there's a large secret cabal of millions of people producing straight-faced racist statements, and their output became the main corpus OpenAI used, there's only one obvious explanation for this result: all the text in "acceptable" discussions and articles that quotes those statements in order to complain about them or call others out over perceived biases, whether real or manufactured.
Or, as an example: imagine you write that some John Doe should've gotten the scholarship instead of some Jane Doe, because you find his essay better. My reply, typical for the modern Internet: "Oh, you must be one of those people who think that the best scientists are men and then use 'meritocracy' to cover your sexism". Now, guess what ChatGPT will learn from this? It doesn't understand the context or the arguments themselves - but it sure does see the affirmative statement "the best scientists are men", and it will update accordingly, despite the intended meaning being the opposite.
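The mechanism is easy to demonstrate at toy scale. Here's a minimal sketch in Python, assuming a pure bigram Markov model (far cruder than ChatGPT, but it illustrates the same failure mode): the model records only which word follows which, so the critical framing of the reply is invisible to it, and the embedded claim comes straight back out. The training text is a made-up paraphrase, not anyone's actual comment.

```python
from collections import defaultdict

# Hypothetical training text: a reply that *criticizes* the quoted claim.
reply = ("oh you must be one of those people who think that "
         "the best scientists are men and then use meritocracy")

# Build a bigram table: the model only sees which word follows which word.
bigrams = defaultdict(list)
words = reply.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev].append(nxt)

# The sarcastic framing is invisible at this level: starting from
# "scientists", the chain happily reproduces the embedded statement.
chain = ["scientists"]
while chain[-1] in bigrams and len(chain) < 3:
    chain.append(bigrams[chain[-1]][0])

print(" ".join(chain))  # prints "scientists are men"
```

A real LLM conditions on far more context than one preceding word, so this is a caricature; but the underlying point stands: the training signal is co-occurrence in text, not the speaker's intent, so a statement quoted in order to condemn it still feeds the statistics for that statement.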