While the system card itself has been well received among researchers interested in understanding GPT-4’s risk profile, it appears to have been less successful as a broader signal of OpenAI’s commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous “research preview,” the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5’s prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT’s release was to spark a sense of urgency inside major tech companies.149 To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track “green lane” to allow products to be released more quickly.150 This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated they wish to avoid. OpenAI has also drawn criticism on many other safety and ethics fronts related to the launches of ChatGPT and GPT-4, including copyright issues, labor conditions for data annotators, and the susceptibility of its products to “jailbreaks” that allow users to bypass safety controls.151 This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.