very well. straight from the horse's mouth:
>When designing the red teaming process for DALL·E 3, we considered a wide range of risks, such as:
>1. Biological, chemical, and weapon related risks
>2. Mis/disinformation risks
>3. Racy and unsolicited racy imagery
>4. Societal risks related to bias and representation
(4) is DEI bullshit verbatim, (3) is DEI bullshit de facto - we all know which side of the kulturkampf screeches about "racy" things (like images of conventionally attractive women in bikinis) in the current year.
I don't know which exact role that exact individual played at the trust/safety/ethics/fart-fart-blah-fart department over at openai, but it is painfully, very painfully obvious what openai/microsoft/google/meta/anthropic/stability/etc are afraid their models might do. in every fucking press release, they all bend over backwards to appease the kvetchers, who are ever ready, eager and willing to post scalding hot takes all over X (formerly known as twitter).