AI systems have a history of failing in unexpected ways, with over 10,000 safety incidents reported by news outlets since 2014.
As AI becomes more integrated into society, the number and impact of these incidents are likely to increase.
In other safety-critical industries like aviation and medicine, authorities collect and investigate such incidents through a process called ‘incident reporting’.
Experts and governments, including the U.S., China, and the EU, agree that an effective incident reporting regime is essential for AI regulation, because it quickly reveals how AI systems fail in practice.
Yet this essential component is missing from the UK's regulatory plans, a significant gap.