{"id":48770,"date":"2024-06-26T15:03:45","date_gmt":"2024-06-26T14:03:45","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=48770"},"modified":"2024-06-26T15:03:45","modified_gmt":"2024-06-26T14:03:45","slug":"ai-incident-reporting-addressing-gap-uk-ai-regulation","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/ai-incident-reporting-addressing-gap-uk-ai-regulation\/48770\/","title":{"rendered":"AI incident reporting: Addressing a gap in UK AI regulation"},"content":{"rendered":"

A new report by the Centre for Long-Term Resilience (CLTR) says that the UK needs an incident reporting system to log the misuse and malfunctions of artificial intelligence (AI).

The CLTR recommends that the government create an incident reporting system for logging AI failures in public services and consider building a hub where all AI-related issues can be collated.

It says such a system is vital if the technology is to be used successfully.

AI incidents are on the rise

AI has a history of failing unexpectedly: news outlets have recorded more than 10,000 safety incidents involving deployed AI systems since 2014.

As AI becomes more deeply integrated into society, incidents are likely to grow in both number and scale of impact.

In other safety-critical industries, such as aviation and medicine, incidents like these are collected and investigated by authorities in a process known as ‘incident reporting’.

The CLTR believes that a well-functioning incident reporting regime is critical for the regulation of AI, as it provides fast insights into how AI is going wrong.

However, the UK’s regulatory plans currently include no such regime – a concerning gap.

The urgent need for incident reporting

Incident reporting is a proven safety mechanism and would support the UK Government’s ‘context-based approach’ to AI regulation by enabling it to: