AI increases security at the expense of our privacy. Part 4 of the series ‘Perspectives on Artificial Intelligence’.

Futures Studies

Of all the current technological developments, artificial intelligence is both the most profound and the least understood. We are witnessing impressive new applications, but can hardly foresee their impact on people, organisations and society. In this series of blogs – Perspectives on Artificial Intelligence – we investigate not only the opportunities, but also the intended and unintended consequences.

You may not see the camera, but the camera sees you

The smartphone and social media have brought us a lot of good. We have access to all kinds of information at any time we choose and at any place we happen to be, and we can easily connect with family and friends; online services have enriched our lives in many different ways.

In the field of security we have also been offered new and relatively inexpensive options. A few cameras that capture movements around the house and store these images in the cloud help us to protect our possessions. With a camera on the dashboard of your car (dashcam) you can film the road and surrounding traffic so that you can see exactly what happened in the event of a collision. And throughout the country, citizens inform each other about suspicious goings-on in the neighbourhood via WhatsApp.

Governments and businesses are taking advantage of the extensive opportunities artificial intelligence offers in this field. Camera networks combined with facial recognition software can pick individuals out of a crowd, making light work of uncovering disguises. Those who comprehend the possibilities soon realise that being permanently connected also means being permanently traceable.

From ‘predictive policing’ to a totalitarian state

Democratically controlled governments that use big data analyses and artificial intelligence to improve citizen safety tend to meet very little resistance.

This may explain why the Dutch National Police's recent launch of the Crime Anticipation System (CAS) met little objection. This is a predictive policing system, anticipating where and when crimes such as street robbery and burglary will occur. On a grid map of the Netherlands, the system marks 125 by 125 metre areas with the highest probability of crime in certain places (Hot Spots) and at certain times (Hot Times).
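The grid logic behind such a system can be sketched in a few lines. The sketch below simply counts historical incidents per 125-metre cell and hour of day; the real CAS combines many more data sources and a proper statistical model, and its internals are not public, so every name and rule here is illustrative only.

```python
from collections import Counter

CELL = 125  # grid cell size in metres, as in the CAS grid map

def hot_spots(incidents, top_n=3):
    """Rank grid cells by historical incident count.

    `incidents` is a list of (x, y, hour) tuples, with coordinates
    in metres. A toy illustration: the real system predicts, rather
    than merely counts, and uses far richer features.
    """
    counts = Counter()
    for x, y, hour in incidents:
        cell = (int(x // CELL), int(y // CELL), hour)  # Hot Spot + Hot Time
        counts[cell] += 1
    return counts.most_common(top_n)

# Example: three burglaries clustered in one cell around 22:00
data = [(130, 260, 22), (140, 250, 22), (135, 255, 22), (900, 900, 3)]
print(hot_spots(data, top_n=1))  # the cell (1, 2, 22) ranks first, count 3
```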

In Stratumseind, an area in Eindhoven well known for its nightlife, the police have installed cameras with microphones. Images and audio are analysed by software. Upon certain signals, such as shouting, running crowds, or anything which might indicate the possibility of a situation turning ugly, the system automatically notifies the police. Their subsequent arrival can then prevent any escalation. Social media data are also included in the analysis. This project has been developed by Atos, the Dutch Institute of Safety & Security (DITSS) and Intel.
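The alerting step described above amounts to a rule over extracted signals. The fragment below is a minimal sketch of that idea, assuming a pipeline has already turned video and audio into features such as sound level and crowd speed; the feature names and thresholds are invented, as the Stratumseind project's actual models are not public.

```python
def needs_police(signal):
    """Decide whether a street scene warrants an automatic alert.

    `signal` bundles features a video/audio pipeline might extract.
    All names and thresholds are hypothetical, for illustration only.
    """
    return (
        signal.get("shouting", False)
        or signal.get("sound_db", 0) > 85        # sustained loud noise
        or signal.get("crowd_speed_mps", 0) > 3  # people running
    )

calm = {"sound_db": 60, "crowd_speed_mps": 0.8}
brawl = {"sound_db": 92, "shouting": True, "crowd_speed_mps": 4.2}
print(needs_police(calm), needs_police(brawl))  # False True
```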

In Eindhoven, faces are made unrecognisable to protect the privacy of the people on the street. In other countries, privacy is not considered to be a major issue. The Risk Assessment and Horizon Scanning (RAHS) programme in Singapore goes much further. Online and offline behaviour is monitored to determine trends in the population’s mood. The government anticipates these trends with the aim to minimise the risk of social unrest.

In China, Shenzhen Bao'an airport is patrolled by the police robot AnBot. This robot uses facial recognition to identify criminals. It can track suspicious persons until police officers arrive, or temporarily disable them by way of a Taser device operated from the control room.

In Dubai, the police deploy robots for a number of tasks, including report taking and settling fines. They aim to have a 25% robotic police force by 2030.

Surveillance and influencing by companies

Surveillance is government territory, but private companies also know a lot about you. Online, consumers often have to choose between ease of use and privacy. It is convenient and more efficient when your social media accounts, search engine and emails are connected to each other. The disadvantage is that everything you do online supplies data to AI systems, which are then able to build up a detailed client profile of you. This makes personalised advertising possible; social media and search engine business models are based upon advertising revenue. Shoshana Zuboff (Harvard Business School) calls this "surveillance capitalism".

You could, of course, argue that this is nothing new. Businesses have always carried out market research and collected data from customers in order to provide the best possible service. However, this has now become a new question of ethics. Where does the boundary lie between responding to customer needs and the manipulation or misleading of customers?

Take the phenomenon of the filter bubble. If two people enter the same search terms, they may be given different results based on their previous browsing behaviour. An algorithm determines which information is the most relevant to each individual. On the whole, this is simply convenient. However, it can also have adverse side effects; in the United States there is discussion about whether Facebook algorithms influenced the outcome of the presidential elections.
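The mechanism behind a filter bubble can be made concrete with a toy re-ranker: identical results, ordered differently per user by overlap with their browsing history. The data, tags and scoring rule below are invented for illustration and bear no relation to any search engine's actual algorithm.

```python
def personalised_ranking(results, history):
    """Re-rank identical search results per user.

    Each result carries a set of topic tags; results sharing more
    tags with the user's browsing history rise to the top.
    Purely illustrative names and scoring.
    """
    def score(result):
        return len(result["tags"] & history)
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "news-a", "tags": {"politics", "economy"}},
    {"url": "news-b", "tags": {"sport", "celebrity"}},
]
alice = {"politics", "economy", "travel"}   # reads political news
bob = {"sport", "music"}                    # follows sport

print([r["url"] for r in personalised_ranking(results, alice)])  # news-a first
print([r["url"] for r in personalised_ranking(results, bob)])    # news-b first
```

Two users, one query, two orderings: each sees more of what they already read, which is exactly how the bubble forms.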

Democratisation thanks to AI

Cameras and AI can also be powerful instruments for citizens who wish to take a stand against illegal government actions. We often see filmed footage of excessive police behaviour, leading to the police in question being held publicly accountable for their actions. Transparency is on the increase, and the spread of information via the internet is difficult to stop.

In the UK, Joshua Browder has launched the chatbot DoNotPay. This allows you to quickly and easily object to parking fines, send letters to your landlord about the lack of safety measures taken, and file claims for lost airline baggage. The chatbot has already successfully disputed several hundred thousand parking tickets. Formerly, such claims were rarely submitted because people did not know how to, and because lawyers were too expensive.

Our concerns about privacy are justified

We cannot deny that AI and other technologies limit privacy. Anyone who carries a mobile phone can be traced. Search engines know who you are and never forget. Democratic control of AI is a logical goal, but not easily achieved, as technological developments proceed far faster than institutions can respond to them.

For most people, the benefits of AI outweigh the restriction of privacy. The question is whether this will remain so. Businesses would do well to prepare themselves for an intensifying social debate about the boundaries between AI and privacy. Not just because of governmental regulatory requirements, but primarily to maintain a positive relationship with the consumer.

Earlier in this series:

  • Part 1: Man or machine: who decides?
  • Part 2: Personal customer contact via intelligent chatbots: how it’s done
  • Part 3: The HR manager or intelligent software: which selects the best people?