Protecting human rights and democracy in the era of AI

Innovation News Network, 30 October 2023
Artificial Intelligence (AI) systems have significantly increased in popularity in recent years, and are particularly in the spotlight in 2023 with the introduction of a particular type of system known as 'Large Language Models' (LLMs). Examples of such systems include ChatGPT and Bard.

Though AI technology, of which LLMs are just one example, is still a work in progress, it is already clear that the use of AI systems will profoundly change our working lives, private lives, and – perhaps most importantly in the bigger picture – how we organise and govern our societies. This is not because algorithms are truly more intelligent than humans, but because they offer scalability and efficiency in the execution of numerous tasks, both basic and advanced, at a level with which most forms of human intelligence cannot compete.

Concerns surrounding AI use

Whereas the public debate generally tends to focus on the economic benefits and downsides of using AI technology, the introduction of AI systems in public administration and the judicial system – but also their use in the provision of certain essential services by private actors – gives rise to serious concerns about how to ensure the continued protection of human rights and democracy, and respect for the rule of law, if AI systems assist or even replace human decision-making. Their use may also significantly impact democratic processes, including elections, the right to assembly and association, and the right to hold opinions and receive or impart information – in short, the very foundations of liberal democracy.

Governing AI technology use

It is thus high time for all States around the world and intergovernmental organisations to address the challenges posed by AI technology and create the necessary legal framework – one that will promote much-needed innovation, but not at the cost of human rights and fundamental freedoms. We need to consider carefully how we can use AI systems to improve the way our societies function, to improve the protection of our environment, and to boost our economies without inadvertently creating a dystopian and undemocratic world governed by the rule of algorithm rather than the rule of law.

In 2019, the Council of Europe – the continent's oldest intergovernmental regional organisation, with 46 Member States, and perhaps most widely known around the world for its European Court of Human Rights – began groundbreaking work on the feasibility of, and need for, an international treaty on AI based on its own and other relevant international legal standards in the area of human rights, democracy, and the rule of law. This pioneering work at the international level led to the creation in 2022 of the Committee on Artificial Intelligence (CAI). The task of the CAI is to elaborate a Framework Convention on AI technology that will set out legally binding requirements, principles, rights, and obligations in relation to the design, development, use, and decommissioning of AI systems from a human rights, democracy, and rule of law point of view.

A global approach

'No man is an island', as the saying goes, and no region of the world can stand entirely on its own. We all form part of a globalised economy and ultimately face the same challenges and policy choices. AI technology knows no borders, and meaningful international standard-setting for the human rights and democracy aspects of AI systems obviously cannot be limited to a specific region of the world. Accordingly, the Committee of Ministers of the Council of Europe has decided to allow the inclusion in the negotiations of interested non-European States sharing the values and aims of the Council of Europe, and a growing number of States from around the globe have already joined, or are in the process of joining, these efforts.

Likewise, it has been important for the Council of Europe to closely involve relevant non-state actors in these negotiations. There are currently 61 civil society and industry representatives participating in the CAI as observers, alongside States and representatives of other international organisations and relevant Council of Europe bodies and committees.

In the European region, the European Union (EU) plays a key role in the regulation of AI systems for its 27 Member States and, for that reason, is also directly involved in the Council of Europe negotiations on their behalf. Once in force, the EU's AI Act and the Council of Europe's Framework Convention are set to mutually reinforce each other, providing an example of how best to make use of the combined strengths and competencies of both European organisations.

Protecting human rights and democracy

The draft Framework Convention (a consolidated 'working draft' is publicly available on the Council of Europe website for the CAI) is focused on ensuring that the use of AI technology does not create a legal vacuum in terms of the protection of human rights, the functioning of democracy and democratic processes, or respect for the rule of law. In line with the findings of the feasibility study prepared by the former Ad Hoc Committee on Artificial Intelligence (CAHAI), which preceded the CAI, its aim is not to create new substantive human rights specific to the AI context, but to guarantee that the existing human rights and fundamental freedoms protected most notably by international law cannot be violated. This will be achieved by requiring parties to oblige regulators, developers, providers, and other AI actors to consider risks to human rights, democracy, and the rule of law from the moment of conception and throughout the lifecycle of these systems. Moreover, the system of legal remedies available to victims of human rights violations should be updated in view of the specific challenges posed by AI technologies, such as their limited transparency and explainability.

Threats to democracy

When it comes to the potential threats to democracy and democratic processes emanating from AI technology, the treaty will address, in particular, the capacity of such systems to be used to manipulate or deceive individuals. This includes the use of so-called 'deep fakes', microtargeting, and more direct interferences with the rights to freedom of expression, to form and hold an opinion, to freedom of assembly and association, and to receive or impart information. The Framework Convention will contain legally binding obligations for its parties to provide effective protection against such practices.

The 'rule of law' is a longstanding legal-philosophical concept encompassing, amongst other things, the ideas that government, as well as private actors, are accountable under the law; that the law should be clear and publicised; that laws are enacted, administered, and enforced in an accessible, fair, and efficient manner; and that access to impartial dispute resolution is guaranteed for everyone. This basic notion of what constitutes a fair, liberal, law-abiding society must clearly be respected when designing and using AI systems deployed in sensitive contexts, such as (but not limited to) the drafting of laws, public administration, and, not least, the administration of justice through the courts of law. The Framework Convention will also set out specific obligations for parties in this regard.

A balancing act: A risk-based approach to AI

The draft Framework Convention – and indeed all the work of the CAI – adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, and in doing so puts a premium on human dignity and agency. It is important that we are not carried away by the obvious possibilities offered by AI technology without carefully considering the potential negative consequences of using AI systems in various contexts. The draft Framework Convention therefore also obliges parties to raise awareness and stimulate an informed public debate on how AI technology should be used.