{"id":30101,"date":"2023-02-17T15:17:05","date_gmt":"2023-02-17T15:17:05","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=30101"},"modified":"2023-02-17T15:17:05","modified_gmt":"2023-02-17T15:17:05","slug":"researchers-develop-framework-mitigate-potential-dangers-ai","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/researchers-develop-framework-mitigate-potential-dangers-ai\/30101\/","title":{"rendered":"Researchers develop a framework to mitigate the potential dangers of AI"},"content":{"rendered":"
The new model, known as Copyleft AI with Trusted Enforcement (CAITE), is intended to provide ethical guidance for the rapidly advancing field of Artificial Intelligence (AI) and to guard against its potential dangers.
The article, 'Leveraging IP for AI governance,' which details the new model, is published in the journal Science.

Artificial Intelligence has the potential to revolutionise almost every aspect of our lives, but misuse of AI-based tools can be harmful, especially to communities that already face discrimination. Although AI is often assumed to be objective, the human-annotated data it is trained on can contain biases.

When it reads through websites, an AI system has no understanding with which to separate useful information from harmful stereotypes. Numerous studies have documented the dangers of AI arising from these biases; exclusionary algorithms, for example, have made racist predictions about an offender's likelihood to re-offend.

Potential dangers of AI are brought about by misuse