Now, this can be very dangerous, so OpenAI added a final fine-tuning pass to sanitise the responses. That is why we sometimes get the yada-yada disclaimer. But underneath it is the same language model, which the prompt can steer into saying anything, as evidenced by the thousands of jailbreaks.
The model has every bias you can imagine, but you have to tell it which one you want each time. In that sense we cannot say the model itself is biased. It is "just following orders".
As proof that the model really does contain all these biases, look at this:
> GPT-3 has biases that are “fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups.”
What do you think they did? They simulated a political poll with language models standing in for human respondents (a minimal sketch of the idea follows the link below).
https://jack-clark.net/2022/10/11/import-ai-305-gpt3-can-sim...
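To make the trick concrete, here is a minimal sketch in Python of that conditioning idea, assuming the official openai client library. The persona wording, the gpt-4o-mini model name, and the simulate_poll helper are all illustrative assumptions of mine, not the actual methodology of the study Jack Clark describes (which worked with GPT-3).

```python
# Minimal sketch of "silicon sampling": condition the model on a
# demographic persona, then tally its simulated poll answers.
# Assumes the openai Python package (>= 1.0) and an OPENAI_API_KEY
# in the environment; the persona text and model name are
# illustrative placeholders, not the study's actual template.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Interviewer: Tell me about yourself.\n"
    "Respondent: I am a 45-year-old woman from rural Ohio. I did not "
    "attend college, and I go to church every week.\n"
    "Interviewer: In the 2020 presidential election, who did you vote for?\n"
    "Respondent: I voted for"
)

def simulate_poll(n: int = 50) -> Counter:
    """Sample the model n times and count the first word of each answer."""
    tally: Counter = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # any chat model works for the sketch
            messages=[{"role": "user", "content": PERSONA}],
            max_tokens=5,
            temperature=1.0,       # keep sampling noise: we want a distribution
        )
        text = (resp.choices[0].message.content or "").strip()
        tally[text.split()[0] if text else "<empty>"] += 1
    return tally

if __name__ == "__main__":
    # Prints the distribution of simulated answers for this one persona;
    # vary the persona fields to probe different demographic subgroups.
    print(simulate_poll())
```

The point is only that swapping the persona shifts the answer distribution: the model emulates whichever subgroup you condition it on, which is exactly the "fine-grained and demographically correlated" behaviour the quote describes.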