Usefulness and necessity of regulations for Artificial Intelligence. Part 8 of the series ‘Perspectives on Artificial Intelligence’.

 

Of all the current technological developments, artificial intelligence is both the most profound and the least understood. We are witnessing impressive new applications, but can hardly foresee their impact on people, organisations and society. In this series of blogs – Perspectives on Artificial Intelligence – we investigate not only the opportunities, but also the intended and unintended consequences.

Regulation is necessary, but not easily formulated

In the Fourth Industrial Revolution, in which we are now living, it is primarily the development of robotics and artificial intelligence that will lead us to new prosperity. In this series of blogs we have seen examples of the new products, services and business models that result.

At the same time, we have pointed out the dangers and possible negative consequences of artificial intelligence. Because the technology can also be applied in undesirable ways, we concluded at the end of the previous blog (part 7) that regulation is necessary.

As far as I am concerned, the purpose of regulation is not to reverse or halt the development of artificial intelligence, nor to ban the replacement of jobs by robots. We should not want to stop technological progress, even if that were possible. The goal of regulation must be to steer development in the right direction.

In this blog we discuss the areas in which regulation of artificial intelligence is advisable. We will see that formulating such regulation is anything but simple. These are the issues that need to be considered.

Liability and morality

If you have purchased a product that either does not work properly or causes damage, you can hold the manufacturer liable. The distinction between an error in the product itself and a mistake made by the user is, in principle, clear. If a car accident is caused because a driver forgets to brake, the driver is liable. If the cause is a technical failure (and the car is relatively new and has been maintained and used according to the instructions, etc.), it is the manufacturer who is liable for any damages.

A self-driving car is its own driver, in which case it seems obvious that the manufacturer is liable for a failure of the algorithms that drive the car. Matters become more complicated when the self-driving car's algorithm has to choose between two bad outcomes. If a wrong-way driver suddenly appears on your side of the road, is your car permitted to swerve onto the curb and hit a cyclist in order to prevent a head-on collision? Does the safety of one person prevail over that of another?

How do you teach an algorithm to make a moral decision? At MIT, the Moral Machine was developed with the aim of mapping people's moral decisions via crowdsourcing. What do you do when given the choice between killing the 2 passengers or the 4 pedestrians who are crossing the road against a red light?
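To make the difficulty concrete, here is a deliberately simplified sketch (not taken from the Moral Machine itself) of how such a dilemma might be encoded as a scoring function. All of the weights are hypothetical stand-ins for crowd-sourced preferences; the point is how much moral judgement ends up hidden in those numbers.

```python
# Hypothetical illustration: encoding a crash dilemma as a utility score.
# The weights below are invented; in practice they would have to come from
# crowd-sourced judgements such as those collected by the Moral Machine.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    passengers_harmed: int
    pedestrians_harmed: int
    pedestrians_crossing_on_red: bool

# Assumed weights: every number here is a moral judgement in disguise.
WEIGHT_PASSENGER = 1.0
WEIGHT_PEDESTRIAN = 1.0
DISCOUNT_IF_CROSSING_ON_RED = 0.8   # should rule-breaking matter at all?

def harm_score(outcome: Outcome) -> float:
    """Lower is 'better'. The formula itself is the ethical choice."""
    pedestrian_weight = WEIGHT_PEDESTRIAN
    if outcome.pedestrians_crossing_on_red:
        pedestrian_weight *= DISCOUNT_IF_CROSSING_ON_RED
    return (WEIGHT_PASSENGER * outcome.passengers_harmed
            + pedestrian_weight * outcome.pedestrians_harmed)

options = [
    Outcome("swerve into the wall", 2, 0, False),
    Outcome("continue straight ahead", 0, 4, True),
]

# The car 'decides' by picking the lowest score, which merely shifts
# the moral question into the choice of weights.
print(min(options, key=harm_score).description)
```

Whichever weights the developer picks, someone has made the moral trade-off in advance; the algorithm only executes it.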

Liability is even more difficult to determine for self-learning AI systems. Take an AI system that makes medical diagnoses, for example. Can the system's developer be held liable for an incorrect medical diagnosis if the original algorithms have since been updated through machine learning? For this reason, IBM stresses that Watson functions only in an advisory capacity and that it is the doctor who makes the final decision and remains responsible.

Existential threats

The European Economic and Social Committee (EESC) issued its opinion to the European Commission in 2017, calling for a ‘human-in-command’ approach. The point of departure is that people must always maintain control over machines. The question, however, is whether this is feasible. When cars are self-driving, people lose the ability to drive a car.

In addition, matches such as the one between the Go world champion and AlphaGo from Google DeepMind have shown that artificial intelligence can surpass human intelligence. How do people control self-learning algorithms? How do we prevent algorithms from becoming a black box that no human can understand? At most, we can attempt to verify the results.
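A small illustration of what "verifying the results" amounts to in practice (a hypothetical sketch, assuming an opaque model we can only query): we cannot inspect the reasoning, only compare outputs against cases where the correct answer is already known.

```python
# Hypothetical sketch: with a black-box model, all we can do is query it
# and compare its answers against cases with a known correct outcome.

def black_box_model(features: dict) -> str:
    # Stand-in for an opaque, self-learned model we cannot inspect.
    return "high risk" if features.get("age", 0) > 60 else "low risk"

known_cases = [
    ({"age": 70, "smoker": False}, "high risk"),
    ({"age": 30, "smoker": True}, "high risk"),   # the model gets this one wrong
    ({"age": 25, "smoker": False}, "low risk"),
]

correct = sum(black_box_model(x) == expected for x, expected in known_cases)
print(f"{correct}/{len(known_cases)} verified cases correct")
# We learn how often the model is right, but not why it decides as it does.
```

This is exactly the limitation: testing tells us something about accuracy, but nothing about whether the reasoning behind a decision is acceptable.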

Of all autonomous systems, autonomous weapon systems are by far the most controversial. Remotely controlled drones equipped with machine guns have existed for some time. It is technically feasible to make these drones autonomous, that is, to let them make life-or-death decisions without human intervention. Add facial recognition software and we will have taken long strides down this particular path. The 'Campaign to Stop Killer Robots' has made a video that illustrates what could happen.

Fake news and filter bubbles

Search engines (such as Google) and social media (such as Facebook) use algorithms that select which information you see based, for example, on your location and online surfing behaviour. It is entirely possible that you are only shown information that confirms your own opinion, while counter-arguments are filtered out. Democracy can be undermined should independent and objective journalism be replaced by propagandistic fake news.
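As a rough illustration of the mechanism (a simplified sketch, not how Google or Facebook actually rank content), the snippet below scores articles purely by overlap with what a user has clicked before. Anything that disagrees with the reader's history simply never reaches the top of the feed.

```python
# Simplified sketch of a personalised feed: rank articles by word overlap
# with a user's click history. Real platforms use far richer signals,
# but the self-reinforcing effect is the same.

def keywords(text: str) -> set[str]:
    return set(text.lower().split())

click_history = [
    "new policy is a great success",
    "policy praised as a great step forward",
]
profile = set().union(*(keywords(c) for c in click_history))

candidate_articles = [
    "analysis: the new policy is a great success story",
    "critics argue the policy fails on every measure",
    "opinion: policy a great success, say supporters",
]

def relevance(article: str) -> int:
    # More overlap with past clicks means a higher position in the feed.
    return len(keywords(article) & profile)

feed = sorted(candidate_articles, key=relevance, reverse=True)
print(feed)  # the critical article ends up last, or is cut off entirely
```

The algorithm is not "lying" to the reader; it is simply optimising for engagement, and engagement correlates with confirmation.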

This subject is high on the agenda of the European Commission, which wants to present a plan in April 2018. The question is how you can take effective action against fake news without restricting freedom of speech. If the government evaluates news organisations or news items, that would be a form of censorship. And the self-censorship adopted in order to avoid fines is not much better. Do we protect democracy with undemocratic measures?

Data monopolies and geopolitical relations

AI systems are only as good as the data they have access to. The big technology companies understand that ‘data is the new oil’ and do everything they can to collect as much data as possible. In the digital economy, network effects lead to a winner-takes-all dynamic. The largest five technology companies, Apple, Alphabet, Microsoft, Amazon and Facebook, have become powerful data monopolies.

China effectively protects its data by not giving these American companies access to the Chinese market, where Tencent and Alibaba are superpowers. There are no European companies of a similar size and scope. European data is mainly stored on the servers of US companies. The General Data Protection Regulation (GDPR), which comes into effect in May 2018, gives European citizens more control over their data, but it is doubtful whether this will be sufficient. Just as Standard Oil was broken up at the beginning of the 20th century because it controlled around 90% of the market, we should now break up data monopolies in such a way that European data is in European hands.

Public debate on ethical principles as a precursor to regulation

Some governments wish to promote digitisation further by giving robots civil rights. Saudi Arabia granted citizenship to the humanoid robot Sophia, which resembles a woman, in October 2017. I am personally not in favour of this. We now know enough about the dangers of artificial intelligence to realise that humanism and the power of humans over machines must be actively protected.

Several international organisations, such as the Future of Life Institute and the AI For Good Foundation, as well as universities including TU Delft and Stanford, have published manifestos listing ethical principles for artificial intelligence. The recurring themes are that AI systems must be designed for the benefit of people, that systems must be transparent and verifiable, and that decisions based on algorithms must be reversible and free from prejudice. These are ethical principles that will find their way into new regulations in the coming years.
