OpenAI promised to make its AI safe. Employees say it ‘failed’ its first test.
In this article:
- OpenAI’s Promise: Last summer, OpenAI promised the White House rigorous safety testing of new AI versions.
- Purpose of Testing: To ensure the AI does not cause serious harm, such as teaching users to build bioweapons or helping hackers develop new cyberattacks.
- Incident: This spring, some OpenAI safety team members felt pressured to expedite a new testing protocol.
- Reason for Pressure: To meet a May launch date set by OpenAI’s leaders.
- Source of Information: Three anonymous individuals familiar with the situation.
- Concerns of Retaliation: The sources spoke on the condition of anonymity for fear of retaliation.