The Centre for Long-Term Resilience (CLTR) is calling for a comprehensive incident reporting system to urgently address critical gaps in AI regulatory plans.
According to CLTR, AI has a history of failing in unexpected ways, with more than 10,000 safety incidents in deployed AI systems documented by news outlets since 2014. As AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase.
Drawing parallels with safety-critical industries such as aviation and healthcare, the think tank argues that well-functioning incident reporting systems are essential for effective AI regulation — a view supported by broad expert consensus as well as the US and Chinese governments and the European Union.
The report outlines three main benefits of implementing an incident reporting system.
- Monitoring real-world AI safety risks to inform regulatory adjustments
- Coordinating rapid responses to serious incidents and investigating their root causes
- Identifying early warnings of potential large-scale future harms
Currently, UK AI regulation lacks an effective incident reporting framework. This gap prevents the UK Department for Science, Innovation and Technology (DSIT) from learning about a range of significant incidents, including:
- Issues with highly capable foundation models
- The UK Government's own use of AI in public services
- Malicious misuse of AI systems
- Harm caused by AI companions, tutors, and therapists
CLTR warns that without a proper incident reporting system, DSIT may learn of new harms through the news media rather than through established reporting processes.
To close this gap, the think tank recommends three immediate measures to the UK government:
- Government Incident Reporting System: Establish a system for reporting incidents arising from AI used in public services. This could be done simply by extending the Algorithmic Transparency Recording Standard (ATRS) to cover public-sector AI incidents, with reports provided to government bodies and shared with the public in the interest of transparency.
- Engage with regulators and experts: Commission regulators and consult experts to identify the gaps of most concern, ensure effective coverage of priority incidents, and understand what stakeholders need from a functional regime.
- DSIT Capacity Building: Develop DSIT's capacity to monitor, investigate, and respond to incidents through a pilot AI incident database. This would become part of DSIT's core function, initially focusing on the most urgent gaps and eventually expanding to include all reports from UK regulators.
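To make the pilot database recommendation concrete, the sketch below shows what a minimal incident record and triage step might look like. This is purely illustrative: the field names, severity scale, and `triage` helper are assumptions for the sake of example, not anything CLTR or DSIT has specified.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    """Hypothetical severity scale — the report does not prescribe one."""
    LOW = 1
    MODERATE = 2
    SERIOUS = 3
    CRITICAL = 4


@dataclass
class IncidentReport:
    """One record in a pilot AI incident database (illustrative fields only)."""
    incident_id: str
    reported_on: date
    reporting_body: str       # e.g. a UK regulator or public-sector team
    system_description: str   # the AI system involved
    harm_summary: str         # what went wrong and who was affected
    severity: Severity
    root_cause_known: bool = False
    tags: list[str] = field(default_factory=list)


def triage(reports: list[IncidentReport]) -> list[IncidentReport]:
    """Return reports needing urgent follow-up, most severe first."""
    urgent = [r for r in reports if r.severity.value >= Severity.SERIOUS.value]
    return sorted(urgent, key=lambda r: r.severity.value, reverse=True)
```

Even a schema this small would support the three benefits the report identifies: monitoring (aggregate records), rapid response (triage by severity), and early warnings (tags and root-cause fields).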
These recommendations are intended to strengthen the government's capacity to responsibly improve public services, ensure effective coverage of priority incidents, and build the infrastructure needed to collect and respond to AI incident reports.
Veera Siivonen, CCO and Partner at Saidot, commented:
“This report from the Centre for Long-Term Resilience comes at an opportune time. As the UK heads towards a general election, the next Government's AI policy will be fundamental to economic growth. However, this will require precision in striking the right balance between regulation and innovation, providing guardrails without stifling industry experimentation. Implementing a centralised incident reporting system for AI misuse or failures is a commendable first step, but much more needs to be done.

“The new UK administration needs to set clear governance requirements for businesses, providing certainty and understanding while monitoring and mitigating the most likely risks. By integrating a range of AI governance strategies with centralised incident reporting, the UK can harness the economic potential of AI and ensure it delivers benefits to society, while safeguarding democratic processes and public trust.”
As AI advances and permeates various aspects of society, implementing robust incident reporting systems is likely to be critical to mitigating risks and ensuring the safe development and deployment of AI technologies.