{"id":28260,"date":"2022-12-14T10:05:40","date_gmt":"2022-12-14T10:05:40","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=28260"},"modified":"2022-12-14T10:05:40","modified_gmt":"2022-12-14T10:05:40","slug":"can-artificial-intelligence-ethically-improve-society","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/can-artificial-intelligence-ethically-improve-society\/28260\/","title":{"rendered":"Can Artificial Intelligence ethically improve society?"},"content":{"rendered":"

A more ethical society can be built if we rethink the way we design technology and Artificial Intelligence.

The Three Laws of Robotics, which state that a robot may not harm a human being or, through inaction, allow a human being to come to harm, must obey orders given by human beings, and must protect its own existence, were set out by Isaac Asimov 80 years ago, long before Artificial Intelligence became a reality. Despite this, they still illustrate how humans have dealt with the ethical challenges of technology: by protecting users.

Ethical challenges associated with technology are not inherently about the technology itself; they are a social problem. Technology, and in particular Artificial Intelligence, could therefore be used to empower users and help us build a more ethical society.

This approach, put forward in the article ‘Ethical Idealism, Technology and Practice: a Manifesto’, will help us utilise technology for the betterment of society.

Is Artificial Intelligence objective?

There has long been a fear that humans would succeed in making machines so intelligent that they would end up rebelling against their creators. However, in the early days of computing and data technologies in the 1960s, this was not a central concern.

“There was a belief that, because the data were objective and scientific, the resulting information was going to be true and of high quality. It was derived from an algorithm in the same way that something is derived from a mathematical calculation. Artificial Intelligence was objective and therefore helped us to eliminate human bias,” explained Joan Casas-Roma, a researcher at the Open University of Catalonia and the study’s author.

This was not the case, and scientists soon realised that data and algorithms replicated the worldview of the person who was using the data or designing the system. Technology did not eliminate human biases; it simply transferred them to a new medium.
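As a purely illustrative sketch (not drawn from the article), the hypothetical screening rule below shows how a designer’s assumptions can be carried directly into an algorithm, so that a bias is automated rather than removed. Every function name, rule, and weight in it is invented for illustration only.

```python
# A minimal, hypothetical sketch of how a designer's assumptions can be
# carried over into a seemingly "objective" algorithm. The scoring rule
# and weights below are invented for illustration only.

def screen_candidate(years_experience: int, employment_gap_years: int) -> float:
    """Score a job application with a hand-written rule.

    The penalty for employment gaps reflects the designer's own worldview
    (e.g. that gaps signal lower commitment). The algorithm does not make
    the decision objective; it merely encodes and automates that bias.
    """
    score = years_experience * 1.0
    score -= employment_gap_years * 2.0  # arbitrary, designer-chosen penalty
    return score

# Two equally experienced candidates: the one with a career gap
# (often correlated with caregiving duties) is systematically ranked lower.
print(screen_candidate(years_experience=8, employment_gap_years=0))  # 8.0
print(screen_candidate(years_experience=8, employment_gap_years=2))  # 4.0
```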

\"\"
\u00a9 iStock\/hh5800<\/figcaption><\/figure>\n

“Over time, we have learned that Artificial Intelligence is not necessarily objective and, therefore, its decisions can be highly biased. The decisions perpetuated inequalities, rather than fixing them,” Casas-Roma explained.

When it was realised that Artificial Intelligence was not objective, actions were taken to contain its harmful effects. “The ethical question of Artificial Intelligence arose from the need to build a shield so that the undesirable effects of technology on users would not continue to be perpetuated. It was necessary to do so,” said Casas-Roma.

This reaction reveals much about the relationship between technology and ethics, and it sheds light on how technology could move society towards a more ethical future.

The relationship between ethics and technology

The EU is currently aiming to move towards a more inclusive, more integrated, and more co-operative society. Technology and Artificial Intelligence could be major obstacles to achieving this, but they equally have the potential to be very useful.

“Depending on how people’s interaction with Artificial Intelligence is designed, a more co-operative society could be promoted,” said Casas-Roma.

In recent years, online education has taken off massively. Digital learning tools have many benefits, but they can also leave users feeling isolated.

“Technology could encourage a greater sense of co-operation and create a greater sense of community. For example, instead of having a system that only automatically corrects exercises, the system could also send a message to another classmate who has solved the problem to make it easier for students to help each other. It’s just one idea to understand how technology can be designed to help us interact in a way that promotes community and cooperation.”
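As a rough illustration of the peer-help idea Casas-Roma describes, the sketch below shows how such a feature might behave: when a student submits a wrong answer, the system does not only auto-correct it, it also suggests a classmate who has already solved that exercise. All names, data structures, and behaviour are hypothetical and are not taken from any real learning platform.

```python
# Hypothetical sketch of a peer-help suggestion in a digital learning tool.
# Class and function names are invented for illustration.

import random
from dataclasses import dataclass, field

@dataclass
class Exercise:
    exercise_id: str
    solved_by: set = field(default_factory=set)  # students who solved it

def handle_submission(exercise: Exercise, student: str, is_correct: bool) -> str:
    """Auto-correct the exercise and, on failure, propose a peer helper."""
    if is_correct:
        exercise.solved_by.add(student)
        return f"{student}: correct, well done."
    helpers = [peer for peer in exercise.solved_by if peer != student]
    if not helpers:
        return f"{student}: incorrect; no classmate has solved this yet."
    helper = random.choice(helpers)
    return f"{student}: incorrect; why not ask {helper}, who has already solved it?"

# Example run
ex = Exercise("algebra-01")
print(handle_submission(ex, "Alice", is_correct=True))
print(handle_submission(ex, "Bob", is_correct=False))  # suggests asking Alice
```

The point of the sketch is the design choice, not the code: the same auto-marking system can either isolate learners or, with one extra step, nudge them towards helping each other.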

Casas-Roma believes that an ethical idealist perspective on technology can create new opportunities to achieve ethical benefits for users and society as a whole. This approach should have the following characteristics.