OpenAI has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model.
Before the model's debut, OpenAI engaged an external group of red teamers (security experts who probe a system for weaknesses) to identify key risks, a fairly standard industry practice.
They examined risks such as the possibility that GPT-4o would create unauthorized clones of someone's voice, generate erotic or violent content, or reproduce chunks of copyrighted audio.