Man or machine: who decides? Part 1 of the series ‘Perspectives on Artificial Intelligence’.


Of all the current technological developments, artificial intelligence is both the most profound and the least understood. We are witnessing impressive new applications, but can hardly foresee their impact on people, organisations and society. In this series of blogs – Perspectives on Artificial Intelligence – we investigate not only the opportunities, but also the intended and unintended consequences.

What is artificial intelligence?

Following Gartner, we define artificial intelligence (AI) as technology that imitates human performance by learning, drawing conclusions, understanding complex content, engaging with people in natural dialogue, enhancing people's cognitive performance, or replacing people in the execution of non-routine tasks.

The first applications of AI were based on algorithms designed and programmed by people. The machine carries out specific tasks and responds only to situations that have been defined in advance. It gets far more interesting when the machine has the capability to learn and, on the basis of experience, can make the right decision in new but comparable situations. When a machine derives its own parameters from the data, improving the algorithm as it goes, we speak of machine learning.
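As a minimal sketch of what 'deriving parameters from data' means, the toy example below fits a straight line to synthetic, made-up measurements using gradient descent. Nothing about the underlying rule y = 2x + 1 is programmed in advance; the slope and intercept are learned from the examples.

```python
import numpy as np

# Minimal sketch of machine learning: the parameters (slope, intercept)
# are not programmed in advance but estimated from noisy examples.
# The data are synthetic, made up purely for illustration.
rng = np.random.default_rng(seed=42)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)  # hidden rule + noise

slope, intercept = 0.0, 0.0
learning_rate = 0.01
for _ in range(2000):
    error = slope * x + intercept - y
    # Adjust both parameters a little in the direction that reduces
    # the mean squared error on the observed data.
    slope -= learning_rate * (2 * error * x).mean()
    intercept -= learning_rate * (2 * error).mean()

print(f"learned rule: y = {slope:.2f}x + {intercept:.2f}")  # close to y = 2x + 1
```

The same principle, scaled up to millions of parameters, underlies modern machine-learning systems.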

How AI understands us better than we understand ourselves

Deep learning does not involve top-down programming; the machine autonomously learns its own rules. For a self-driving car, you can program many traffic situations and traffic rules in advance: keeping a safe distance, staying within the road markings, slowing down for a red light. Crossing a busy intersection, however, is much harder to specify by hand. By observing how humans do this, deep-learning systems can derive the rules themselves.
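To make the contrast with hand-written rules concrete, here is a deliberately tiny neural network, sketched from scratch in Python. It learns the XOR function – a rule that cannot be captured by any single linear formula – purely from examples. The network size, learning rate and iteration count are illustrative assumptions, not tuned values.

```python
import numpy as np

# A tiny two-layer neural network that learns XOR from four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: the hidden layer forms its own intermediate features.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge all weights to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

No one tells the network what its intermediate features should mean. Deep learning stacks many such layers, which is precisely why the learned rules are hard for people to inspect.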

The figure below by Nvidia shows the development of artificial intelligence.

The development of deep learning and artificial neural networks makes it increasingly difficult for people to understand what is happening inside the machine. What are the algorithms doing, and why?

Artificial intelligence surpasses human intelligence

In May 2017, AlphaGo – a Google DeepMind system – won a Go match against the Chinese grandmaster Ke Jie. The extraordinary thing was not that the computer won, but how it won. Professional Go players saw the computer make moves and follow strategies they had never seen before; what looked like a mistake turned out to be genius. AlphaGo is a self-learning system: it built its knowledge of the game not only by analysing previously played games, but also by playing against itself to improve.

There are many more applications in which the computer surpasses humans. In healthcare, systems such as IBM Watson are used for medical diagnostics, and automatic pattern recognition has become so good that machines can interpret medical images and scans extremely accurately.

Thanks to the autopilot, the number of aircraft accidents has fallen drastically; the self-driving car is expected to achieve the same on the road. Furthermore, the personal recommendations you receive from Netflix, Facebook or Amazon are based on a profile of you built by artificial intelligence. In subsequent blogs, we will look much more deeply into a wide range of AI applications.

Unintended consequences of AI

AI systems can also make mistakes and have unintended consequences; after all, they are only as good as the data they have access to. Algorithms are no substitute for democratic elections, for example. In the 2016 US presidential election between Donald Trump and Hillary Clinton, algorithms predicted that Trump would win – but only because the dataset contained no examples of female presidents, so Clinton's profile was simply never associated with the presidency.
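A toy calculation, with data made up for this example, shows how such a bias arises mechanically:

```python
import numpy as np

# Made-up historical data illustrating dataset bias: if a feature value
# never co-occurs with the positive outcome in the data, a model that
# learns from frequencies assigns it a probability of zero, regardless
# of how the world may have changed since.
# Columns: [experience_score, is_female]; label: elected (1) or not (0).
history = np.array([
    [0.9, 0], [0.8, 0], [0.7, 0], [0.85, 0],  # elected: all male
    [0.6, 0], [0.5, 1], [0.8, 1], [0.4, 0],   # not elected
])
elected = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Estimate P(elected | female) directly from historical frequencies.
female = history[:, 1] == 1
p = elected[female].mean() if female.any() else 0.0
print(f"P(elected | female) according to the data: {p:.2f}")  # -> 0.00
```

The model is not 'wrong' about its data; the data themselves are an incomplete picture of reality.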

Elon Musk wrote on Twitter on August 12, 2017 that he considers artificial intelligence far more dangerous than the nuclear threat from North Korea. The day before, it had been reported that a bot built by OpenAI – a non-profit AI research organisation co-founded by Musk – had defeated one of the world's top professional players in the game Dota 2.

In an open letter to the United Nations, Musk and 115 other AI and robotics experts warned that the technology can also be used to develop autonomous weapons. With such weapons, armed conflict could take place on a far larger scale, and far faster, than we can currently imagine. They called on the UN to act quickly before these weapons fall into the wrong hands.

Human intervention remains necessary

Artificial intelligence systems are very good at rapidly analysing large amounts of data and recognising patterns in those data. They keep learning and become more valuable over time. But they are not flawless. And even a system that functions well technically can be manipulated, for example by hackers. Human intervention remains necessary to test how the algorithm operates and to verify its outcomes.
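What such human verification can look like in practice is sketched below: model outputs are routinely compared against a sample judged by people, and automation is suspended when agreement drops. The function name, threshold and data are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical human-in-the-loop check: before trusting automated
# decisions, compare them with a human-labelled sample and flag drift.
def verify_outcomes(predictions, human_labels, min_agreement=0.95):
    """Return True if the model agrees with human judgement often enough."""
    agreement = (np.asarray(predictions) == np.asarray(human_labels)).mean()
    if agreement < min_agreement:
        print(f"Agreement {agreement:.0%} below {min_agreement:.0%}: "
              "route these decisions to a human reviewer.")
        return False
    print(f"Agreement {agreement:.0%}: automated decisions allowed.")
    return True

# Example: 2 disagreements in 20 sampled cases -> 90% -> flagged.
model_out = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
human_out = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
verify_outcomes(model_out, human_out)
```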

For the time being, people remain better than machines at setting goals, dreaming about the future and weighing moral considerations. If the human mind no longer determines the context within which AI systems operate, undesirable outcomes are inevitable.

Furthermore, we need people to make decisions about non-operational and non-routine activities. Because these occur less regularly, there is insufficient data on past performance to learn from, which means decision-making cannot be left to a machine.

Checking systems we do not understand

As we rely more and more on AI systems, we also need to spend more time checking those systems. The risk is that we become complacent and place blind trust in systems we do not understand.

To check an AI system, the system must not be a black box: it has to be open and accessible. That is far less obvious than it sounds. Facebook recently halted an AI experiment in which chatbots had developed their own language, one incomprehensible to people. The neural networks behind Google Translate likewise use a self-developed intermediate language to translate between language pairs they have not been trained on.

Preliminary conclusion: let AI work for us

Artificial intelligence can enrich and simplify our lives, but it also has the capacity to take command and take over. The same technology that brings us prosperity could also destroy us. Today, the consequences of self-learning systems are barely imaginable and raise existential questions.

In this series we will discuss both the fascinating new applications of AI and the way we deal with them – not because we think technological developments can or should be stopped, but to help ensure that AI benefits society.

To be continued.
