{"id":55794,"date":"2025-02-26T08:38:38","date_gmt":"2025-02-26T08:38:38","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=55794"},"modified":"2025-02-26T08:38:38","modified_gmt":"2025-02-26T08:38:38","slug":"council-of-europe-addressing-the-risks-of-artificial-intelligence-to-human-rights","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/council-of-europe-addressing-the-risks-of-artificial-intelligence-to-human-rights\/55794\/","title":{"rendered":"Council of Europe: Addressing the risks of artificial intelligence to human rights"},"content":{"rendered":"
Artificial intelligence (AI) is currently a recurring topic across many areas of society, whether in everyday conversation or in specialised forums. This technology, which has remarkable potential to improve people\u2019s quality of life, has generated a wide spectrum of opinions and debates in sectors such as business, health, economics and law. However, it is important to recognise that, like any other technology, artificial intelligence can bring both benefits and potential risks for society and individuals. In mature societies, regulation emerges as the main tool for ensuring the responsible use of this technology.<\/p>\n
The regulation of artificial intelligence has generated heated debate since the beginning of the second decade of the 21st century. Since the election of President Donald Trump, who has shared positions with prominent business and technology leaders, there has been a resurgence of the view that regulating this technology represents a significant obstacle to its innovation and development. This perspective, while not new, was not the predominant view in the specialised forums where the regulation of artificial intelligence was debated in previous years. Indeed, most companies demanded a regulatory framework that would establish a level playing field, allowing them to invest and innovate with legal and business certainty.<\/p>\n
Therefore, the debate should not focus on whether artificial intelligence should be regulated. From a legal point of view, any legal system has sufficient tools to provide a response if the use of an AI system causes harm to a person. No judge faced with such a dispute, in which compensation is claimed for harm caused by the use of an AI system, can leave it unresolved. A different question is to what extent that answer is satisfactory, and how much judicial activism or law-making the judge must engage in to reach it. Judges, as a public authority, must base their decisions solely on the law. In this way, citizens can know in advance what arguments the arbiter of their dispute is likely to use and adapt their behaviour accordingly; it also prevents arbitrary and abusive behaviour. This requirement applies not only to judges but to all public authorities in any democratic system, and is embodied in the legal principles that constitute the rule of law.<\/p>\n
Without specific rules, people\u2019s behaviour and decisions lack a frame of reference, and the response of the authorities becomes unpredictable. Suitable, specific rules therefore provide certainty for all stakeholders, including companies, protecting their rights and interests and countering arbitrariness, abuse of power, and injustice.<\/p>\n
The debate should therefore focus on other questions that offer the rigour and complexity this issue deserves, avoiding Manichean and simplistic postulates. Firstly, what should be the object of regulation? Secondly, how should it be regulated? Thirdly, who should regulate it? Lastly, from what standpoint: what ethical and\/or legal principles should preside over such regulation, bearing in mind that these principles will decisively condition its object and its addressees?<\/p>\n
Depending on the answers to these questions, a wide variety of artificial intelligence regulations can be observed around the world.<\/p>\n
Until a few years ago, codes of conduct and internal rules created by companies, particularly technology companies, predominated. Their observance was entirely voluntary, and they contained very general ethical principles with little specific content. The flexibility offered by these guiding instruments contrasted with the vagueness of the responses they could offer to specific problems, as well as with the voluntary nature of compliance; their usefulness is beyond doubt, but they produced no definitive results.<\/p>\n
The internet was a technological revolution that raised major cross-border problems. National regulations proved totally ineffective in providing a satisfactory response, so the co-ordination of the approaches and efforts of several countries became essential, leading to international collaboration and agreements.<\/p>\n
This scenario inspires the response that the Council of Europe has adopted to face the multiple challenges that artificial intelligence raises in the areas on which it focuses its interest: the protection and promotion of human rights, democratic functioning and the principles of the rule of law. Various institutions (such as the Parliamentary Assembly or the Commissioner for Human Rights, among others) and sectoral committees (such as the Steering Committee for Human Rights or, most notably, the European Commission for the Efficiency of Justice (CEPEJ)) have developed, and continue to work on, recommendations and other non-binding instruments on the use of AI<\/a>.<\/p>\n But above all of them stands the Artificial Intelligence Committee<\/a> (CAI), constituted in December 2019. It consists of the 46 Council of Europe Member States, the European Union and 11 non-member States (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay), as well as 68 representatives of the private sector, civil society and academia, who participated as observers.<\/p>\n During five years of work, debate and negotiation, it was concluded that the level of technological development of artificial intelligence, and the risks AI poses to human rights, democracy and the rule of law, required a legally binding regulatory response framed from an international perspective, i.e. 
integrating the maximum number of cultures, perspectives and legal traditions (one of the main differences from the European Union AI Act); regulating only those uses that entail a significant risk or impact; always from a perspective that encourages technological development favouring and promoting human beings, their dignity and individual autonomy, in short, human-centred artificial intelligence; and addressing both the public and the private sector, although with regard to the latter a great degree of flexibility is recognised as to the means of regulation, which may take the form of legislative, administrative or other measures.<\/p>\n These five years of work concluded with the drafting and approval of the first legally binding international treaty on artificial intelligence and human rights, democracy and the rule of law.<\/p>\nInternational treaty on artificial intelligence and human rights, democracy and the rule of law<\/h3>\n