Artificial Intelligence in policing: safeguards needed against mass surveillance

Reproduced with thanks from the European Parliament (EP), published on Tuesday 29 June 2021.

The use of Artificial Intelligence in law enforcement and the judiciary should be subject to strong safeguards and human oversight, says the Civil Liberties Committee.

In a draft report adopted with 36 votes in favour, 24 against and 6 abstentions, MEPs highlight the need for democratic guarantees and accountability for the use of Artificial Intelligence (AI) in law enforcement.

Measures against discrimination

MEPs worry that the use of AI systems in policing could lead to mass surveillance, breaching the key EU principles of proportionality and necessity. The committee also warns that AI applications which are otherwise lawful could be repurposed for mass surveillance.

The draft resolution highlights the potential for bias and discrimination in the algorithms on which AI and machine-learning systems are based. As a system's results depend on its inputs (such as training data), it is important to take algorithmic bias into account. Currently, AI-based identification systems are inaccurate and frequently misidentify minority ethnic groups, LGBTI people, seniors and women, among other groups. In addition, AI-powered predictions can amplify existing discrimination, a particular concern in the context of law enforcement and the judiciary.

Use of facial recognition and other biometric data by the police and the judiciary

Addressing specific techniques available to the police and the judiciary, the committee notes that AI should not be used to predict behaviour based on past actions or group characteristics. On facial recognition, MEPs note that different systems have different implications. They demand a permanent ban on the use of biometric data such as gait, fingerprints, DNA or voice to recognise people in publicly accessible spaces.

The committee wants to ban law enforcement from using private facial recognition databases, such as the Clearview AI database, which is already in use. MEPs also call for a ban on using AI to assign scores to citizens, stressing that this would violate basic human dignity. Finally, MEPs state that facial recognition should not be used for identification until such systems comply with fundamental rights.

The use of biometric data for remote identification is of particular concern to MEPs. For example, automated recognition-based border control gates and the iBorderCtrl project (a "smart lie-detection system" for traveller entry to the EU) are problematic and should be discontinued, say MEPs, who urge the Commission to open infringement procedures against member states if necessary.

Quote

Rapporteur Petar Vitanov (S&D, BG) said: “The use of AI is growing exponentially, and things that we thought possible only in sci-fi books and movies - predictive policing, mass surveillance using biometric data - are a reality in some countries. I am satisfied that the majority of the Civil Liberties Committee recognises the inherent danger of such practices for our democracy. Technical progress should never come at the expense of people’s fundamental rights.”

Next steps

The non-legislative report will be up for debate and vote during the September plenary session (13-16 September).

