Is artificial intelligence a threat to human rights? Part 7 of the series ‘Perspectives on Artificial Intelligence’.

Futures Studies

Of all the current technological developments, artificial intelligence is both the most profound and the least understood. We are witnessing impressive new applications, but can hardly foresee their impact on people, organisations and society. In this series of blogs – Perspectives on Artificial Intelligence – we investigate not only the opportunities, but also the intended and unintended consequences.

Fears concerning artificial intelligence are justified

When I give keynotes on robotisation and artificial intelligence, audience questions about the dangers of the technology invariably arise. Will artificial intelligence take power away from people? Can we prevent all the bad things that could be done with artificial intelligence from actually happening? Will artificial intelligence lead to the downfall of mankind?

These are existential questions that go far beyond arguments regarding the impact of robotisation on employment and how to manage the transition towards a labour market in which all routine work is carried out by computers. Debates about the potential negative effects caused by artificial intelligence are very important for guiding technological development in the right direction.

This topic featured prominently on the agenda, for the first time, during the January 2018 World Economic Forum in Davos. Rightly so. One of the forum speakers was Yuval Noah Harari, the author of one of the best books I’ve read in years: Homo Deus. In it, he describes how human authority and free will are increasingly undermined as more and more decisions are left to algorithms, until humanism and liberalism must ultimately make way for dataism. He explains, in a well-substantiated and plausible manner, how those who control the data determine the future of mankind and of life itself. If organisms are biochemical algorithms, the combination of information technology and biotechnology will result in machines that know us better than we know ourselves. This is how new, digital dictatorships are created. His alarmist message is receiving increasing attention.

From narrow to general and self-aware AI

The artificial intelligence applications we currently use most often are primarily intended to assist people within a specific field; this is known as narrow AI. Consider Google Translate, Apple’s voice assistant Siri, self-driving cars, customer service chatbots and the expert systems from IBM Watson developed to assist doctors and lawyers. These applications show the superiority of artificial intelligence over human intelligence within specific areas. The social and economic consequences are great: we are seeing major shifts in the labour market, while on the stock market the five largest companies in the world by market value are American technology companies: Apple, Alphabet, Microsoft, Amazon and Facebook. They are closely followed by their Chinese counterparts, Alibaba and Tencent.

The development of artificial intelligence does not stop here. Narrow AI is expanding into general AI and, eventually, artificial superintelligence. These terms refer to self-aware, autonomous AI systems that can process and reproduce knowledge, learn, reason and plan strategically. When machines become more intelligent than the people who made them and develop their own agenda, it is high time to start worrying. Stephen Hawking was one of the first scientists to warn that machines could manipulate humans and develop weapons against the human race, leading to the extinction of the human species. Elon Musk has likewise labelled AI a fundamental risk to the survival of human civilisation.

The speed with which AI is getting smarter is apparent in the development of AlphaGo by Google DeepMind. In May 2017, this computer beat the world’s top Go player, Ke Jie. Its training was based on games that had been played by humans in the past. The next version, AlphaGo Zero, was given only the rules of the game and had to teach itself the rest by playing exclusively against itself. It needed just 21 days to reach the level of its predecessor.

Fusion of the human brain with AI

The distinction between people and machines seems very clear today, but this line will fade. We can already help patients suffering from Parkinson’s disease or epilepsy by implanting neurostimulators in the brain. The next step will be to upgrade the brains of healthy individuals through brain-computer interfaces. Companies such as Neuralink and Kernel are working on the fusion of biological and artificial intelligence, which could improve your memory, increase your thinking speed and offer you a direct interface with a computer. You would also be able to make a digital copy of yourself. The question is whether, with these brain-computer interfaces, you will still be able to tell whether your thoughts and ideas originated in your own brain or in the computer. Ultimately, our emotions too are biochemical algorithms that AI systems can learn to understand and influence.

Moving towards an ethical framework for AI

Despite the undeniable risks, the development of artificial intelligence continues to move forward. We cannot go back in time or stop development; however, we do have an influence on the way the technology is used. Technology provides various options, and how it is applied will depend on the choices people make. This view is mirrored in Andrew McAfee and Erik Brynjolfsson’s book Machine, Platform, Crowd: “So we should ask not ‘What will technology do to us?’ but rather ‘What do we want to do with technology?’ More than ever before, what matters is thinking deeply about what we want. Having more power and more choices means that our values are more important than ever.”

In other words: how do we maintain human autonomy amidst machines that can hack into our brains and make autonomous decisions? At the World Economic Forum it became apparent that there are no easy answers yet. The ethical debate on the objectives and principles of artificial intelligence has only just begun. So far it has mainly produced suggestions for an ethical framework, with stipulations that AI systems must be designed for the benefit of people, that their decisions must be auditable and reversible, and that all algorithms must be free from prejudice.

In the next blog we will discuss regulations associated with artificial intelligence development and the necessity to limit the power of data monopolies.


Earlier in this series:

  • Part 1: Man or machine: who decides?
  • Part 2: Personal customer contact via intelligent chatbots: how it’s done
  • Part 3: The HR manager or intelligent software: which selects the best people?
  • Part 4: AI increases security at the expense of our privacy
  • Part 5: The impact of AI in every company division
  • Part 6: Disruption of the labour market