The awaited European Parliament resolution on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters


Introduction

On October 6th, 2021, the European Parliament (hereinafter “EP”) adopted a resolution on artificial intelligence (hereinafter “AI”) in criminal law and its use by the police and judicial authorities in criminal matters (hereinafter the “Resolution”).

Indeed, the EP deemed it necessary to call the attention of the other European institutions to the potentially high, and in some cases unacceptable, risks that the use of AI in law enforcement entails for the protection of the fundamental rights of individuals, such as: opaque decision-making; various types of discrimination and error; risks to privacy and personal data protection, freedom of expression and information, the presumption of innocence, and the right to an effective remedy and a fair trial; as well as risks to the freedom and security of individuals.

As such, considering that the European Commission recently presented its Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence, it is fitting that the EP has now decided to address that body, together with the European Council, so that the concerns raised can be properly assessed and resolved.

Indeed, the Resolution fits squarely within the EU framework on the use of AI, as it is consistent with the various measures adopted by the European institutions to make the use of AI as safe and transparent as possible.

The concerns of the European Parliament

In recent years, the use of AI by law enforcement authorities has relied on the premise that AI in the criminal law field would reduce certain types of crime and lead to more objective decisions.

Unfortunately, this assumption, as demonstrated by several well-known cases (e.g., Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016); State of Kansas v. John Keith Walls, No. 116,027 (Kan. Ct. App. 2017); United States v. Curry, 965 F.3d 313 (4th Cir. 2020)), may not hold entirely true.

Indeed, those cases have shown how the use of AI in criminal justice and policing potentially affects several criminal procedure rights and fundamental rights, such as the presumption of innocence, the right to a fair trial, the principle of non-discrimination and equality, the principle of transparency, and the principle of legality.

In light of the above, and considering the recent regulatory and implementation developments related to the use of AI, the Resolution identifies several issues and requires Member States and the European institutions to take them into account when implementing AI systems in criminal law and when allowing their use by judicial and law enforcement authorities in criminal matters.

Some of these concerns are illustrated below, as they constitute key points that a definitive intervention by the European legislator has yet to address.

  1. Transparency

A particularly problematic feature is the inherent opacity of many AI systems: given the inputs, the final algorithmic output is known, but how the algorithm arrived at that specific predictive outcome is often not easily explainable.

This explains the unease about adopting algorithmic decision-making in high-stakes areas such as criminal justice: the technologies in question are poorly understood.

In particular, the AI algorithms used in predictive policing can be so complex that even experts struggle to understand everything that happens in the process and cannot always explain the decisions the system makes.
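To make the point concrete, the following minimal sketch (Python, with entirely synthetic, hypothetical “case features” and a generic off-the-shelf classifier rather than any real predictive policing system) shows how an ensemble model returns a numeric risk score whose rationale is spread across hundreds of decision paths and cannot be reduced to a single human-readable rule.

```python
# Minimal sketch of algorithmic opacity: the model yields a score, but the score is
# an aggregate of hundreds of decision paths, not a single explainable rule.
# All data and features here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fictional "case features" (e.g., age, prior arrests); no real data is used.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

new_case = rng.normal(size=(1, 8))
risk_score = model.predict_proba(new_case)[0, 1]
print(f"predicted 'risk' score: {risk_score:.2f}")

# The score is the average vote of 300 trees, each with many branching conditions;
# there is no single rule that explains why this particular case scored as it did.
print("number of trees contributing to the score:", len(model.estimators_))
```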

This lack of transparency leads to a lack of accountability.

Without explainability, for example, law enforcement authorities cannot effectively be held accountable to the public for their actions.

Additionally, AI models must be explainable and verifiable for people to trust the system and for the judiciary to be able to exercise its authority lawfully. Convicted persons have the right to understand judicial decisions; if decisions made with the support of AI systems are not fully explainable, being the result of an opaque process, then they cannot de facto be lawful.

Furthermore, the private tech companies that design and sell this technology are not required to reveal their algorithms, further contributing to the opacity of the process and raising major concerns about the responsibilities of the private sector involved, power structures and democratic accountability.

Thus, in this scenario, the EP considers it essential, both for the effective exercise of defense rights and for the transparency of national criminal justice systems, that a specific, clear and precise legal framework be created.

Specifically, the EP asks that the conditions, modalities and consequences of the use of AI tools in the field of law enforcement and the judiciary be regulated, that the rights of targeted persons be specified, and that effective and easily available complaint and redress procedures, including judicial redress, be put in place. It also calls for the traceability of AI systems and of their decision-making process, through compulsory documentation that outlines the systems’ functions, defines their capabilities and limitations, and keeps track of where the defining attributes of a decision originate.

Lastly, the EP has underlined the importance of keeping full documentation of the training data, including its context, purpose, accuracy and side effects, as well as of its processing by the manufacturers and developers of the algorithms and of its compliance with fundamental rights.
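Purely by way of illustration, the sketch below suggests what such compulsory, traceable documentation could look like in practice; the field names, the model identifiers and the local JSON-lines log file are all hypothetical assumptions, not requirements drawn from the Resolution itself.

```python
# Hypothetical sketch of an append-only decision audit record: every algorithmic output
# is logged with the model version, the provenance of its training data and the input
# attributes that drove the decision, so it can later be audited and challenged.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_name: str
    model_version: str
    training_data_source: str   # provenance of the dataset the model was trained on
    input_attributes: dict      # the attributes actually used for this decision
    output: float               # the score or recommendation produced
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_audit_log.jsonl") -> None:
    # Append-only, one JSON object per line, so past records cannot be silently rewritten.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    case_id="case-0001",
    model_name="hypothetical-risk-tool",
    model_version="1.4.2",
    training_data_source="internal dataset v3 (context, purpose and accuracy documented)",
    input_attributes={"prior_offences": 2, "age_band": "25-34"},
    output=0.37,
))
```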

  2. Power asymmetry

In addition, the use of AI in criminal law and criminal matters is characterized by a power asymmetry that exists both vis-à-vis the public actors that use AI tools and vis-à-vis the private entities that provide those tools.

In particular, as regards the power exercised by private actors, it should be considered that law enforcement often uses tools and databases created by such actors, even though these are rarely subject to specific and consistent legislation around the world.

Therefore, in many cases it is not possible to ascertain the lawfulness of the personal data collected, and it is difficult to understand how the algorithms used actually work.

Essentially, there is a risk that the administration of justice is left to algorithms created by private parties whose interests may not coincide with the public interest. Moreover, the lack of transparency in the way proprietary companies build their algorithms, and their limited accountability to the public, should be a cause for great concern.

For instance, in February 2020, the media reported, on the basis of leaked documents, that the company Clearview AI had contracts with thousands of law enforcement agencies, companies and individuals around the world, including in Europe.

Clearview AI’s service essentially consists in matching persons to online images scraped from a variety of social platforms. It allows, for instance, an investigating officer to upload a photo of an individual of interest and search the database; the result is sometimes a match with somebody who is not the person concerned, which can generate confusion and cases of mistaken identity.
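Purely by way of illustration (Clearview AI’s actual pipeline is not public), the sketch below shows the kind of nearest-neighbour matching such a service performs: the probe photo is reduced to a numeric embedding and compared against a database of scraped embeddings, and a “best match” is returned even when the person depicted is not in the database at all, which is precisely how mistaken identifications can arise. All vectors and identities here are randomly generated.

```python
# Toy illustration of embedding-based face matching; all embeddings and identities
# are random and hypothetical, and no real biometric data is involved.
import numpy as np

rng = np.random.default_rng(1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Fictional database of embeddings derived from scraped online images (identity -> vector).
database = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}

# Embedding of the uploaded probe photo; assume the depicted person is NOT in the database.
probe = rng.normal(size=128)

best_id, best_score = max(
    ((person, cosine_similarity(probe, emb)) for person, emb in database.items()),
    key=lambda item: item[1],
)

# A nearest neighbour is always returned with some similarity score, which an investigator
# may read as a lead even though, here, it is necessarily a coincidental match.
print(f"closest database entry: {best_id} (similarity {best_score:.2f})")
```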

In this context, the use of such a database raises several issues, first and foremost the origin of the data. Indeed, as the leaked documents also showed, the personal data were collected without any legal basis and without the data subjects being aware that their images and biometric data had been included in the database.

In light of the above, in the Resolution the EP expresses its great concern over the use by law enforcement actors and intelligence services of private facial recognition databases, such as Clearview AI, considering that more than three billion pictures have been collected illegally from social networks and other parts of the internet, including pictures of EU citizens.

Therefore, the EP calls on Member States to oblige law enforcement actors to disclose whether they are using Clearview AI technology, or equivalent technologies from other providers, and recalls the opinion of the European Data Protection Board (EDPB) that the use of a service such as Clearview AI by law enforcement authorities in the European Union would “likely not be consistent with the EU data protection regime”.

Ultimately, the EP calls for a ban on the use of private facial recognition databases in law enforcement.

  3. Biases and discrimination

In criminal matters there are also potential risks of discrimination, given that AI tools can reproduce unjustified, pre-existing inequalities in the relevant criminal justice system.

As already mentioned, the Loomis v. Wisconsin case clearly revealed the discriminatory effects of the algorithm used in COMPAS, which predicted that black defendants were twice as likely as white defendants to re-offend within two years of sentencing, while assessing white defendants as far less likely to repeat the offence.

Moreover, as the separate opinions in the US v. Curry case showed, the continued use of predictive policing tends to “fray community relations, undermine the legitimacy of the police, and lead to disproportionate exposure to police violence.” Recognizing that Curry’s own situation arose from the heightened police monitoring of a predominantly Black neighbourhood, the separate opinions highlighting the effects of such monitoring of specific groups of people indicate that the decision raised serious concerns about the use of predictive tools in the administration of justice.

In this context, it should be underlined that many algorithmically driven identification technologies currently in use disproportionately misidentify and misclassify, and therefore cause harm to, racialized people, individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women. These outcomes also stem from the fact that AI applications are necessarily shaped by the quality of the data used, and that such inherent biases tend to gradually increase, thereby perpetuating and amplifying existing discrimination, in particular against persons belonging to certain ethnic groups or racialized communities.
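The feedback dynamic described above can be illustrated with a toy simulation (entirely hypothetical numbers, two fictional areas with identical true offence rates): when patrols are allocated according to historical records that are already skewed, the recorded disparity between the two areas grows over time even though the underlying behaviour is the same.

```python
# Toy simulation of a predictive policing feedback loop; all figures are hypothetical.
import numpy as np

true_rate = np.array([0.10, 0.10])   # identical underlying offence rates in two areas
recorded = np.array([60.0, 40.0])    # slightly skewed historical records: area 0 is over-policed
patrols_total = 100

for year in range(10):
    # the "predictive" step: send most patrols to whichever area the records flag as hotter
    hot = int(np.argmax(recorded))
    patrols = np.full(2, 0.2 * patrols_total)
    patrols[hot] = 0.8 * patrols_total
    # detections track police presence rather than actual offending, and feed back into the records
    recorded += patrols * true_rate

share = recorded / recorded.sum()
print(f"share of recorded crime after 10 years: area 0 = {share[0]:.2f}, area 1 = {share[1]:.2f}")
```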

In this context, when algorithms are used in a criminal trial it seems essential to fully guarantee respect for the principle of equality and for the presumption of innocence enshrined in Article 6 of the European Convention on Human Rights (ECHR).

In particular, the party concerned should have access to the algorithm and be able to challenge its scientific validity, the weighting given to its various elements and any erroneous conclusions it reaches, whenever a judge indicates that he or she might rely on it before taking a decision.

For all these reasons, in the Resolution the EP stresses the potential for bias and discrimination arising from the use of AI applications such as machine learning, including the algorithms on which such applications are based, and notes that biases can be inherent in the underlying datasets, especially when historical data are used, can be introduced by the developers of the algorithms, or can be generated when the systems are implemented in real-world settings.

Therefore, the EP believes that strong efforts should be made to avoid automated discrimination and bias, and calls for robust additional safeguards where AI systems in law enforcement or the judiciary are used on, or in relation to, minors.

  4. Cybersecurity

Finally, it should be noted that, inter alia, the EP also turns its attention to issues related to cybersecurity.

Indeed, it is now clear that the use of AI systems in criminal law and criminal matters makes the application of cybersecurity measures a central issue. Specifically, as judicial systems increasingly rely on AI decision-making tools in the administration of justice, ensuring that those systems are hard to penetrate and resilient to possible attacks or malfunctions has become crucial.

Against this background, in the Resolution the EP acknowledges that AI systems used by law enforcement and the judiciary are also vulnerable to AI-empowered attacks against information systems or to data poisoning, and that the resulting damage is potentially even more significant and can result in exponentially greater levels of harm to both individuals and groups.

However, although the sensitivity of the EU institutions towards cybersecurity issues is evident, it must be acknowledged that even the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence does not clearly address these issues, including the assessment of cyber threats.

Indeed, regarding the cybersecurity requirements, Recital 51 clarifies that “cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure”.

And again, according to Recital 43 “requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade”.

However, although Recitals 43 and 51 suggest strong attention to cybersecurity issues, it must be noted that the operative provisions of the Proposal deal with these matters only lightly.

Indeed, only Article 15 contains some high-level provisions, requiring, for example, that “high-risk AI systems shall be designed and developed in such a way that they achieve, […], an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle”.

Moreover, the legislator clarifies that “high-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems”.

Furthermore, they “shall be resilient as regards attempts by unauthorized third parties to alter their use or performance by exploiting the system vulnerabilities. The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (“data poisoning”), inputs designed to cause the model to make a mistake (“adversarial examples”), or model flaws”.
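To give a concrete sense of the “data poisoning” scenario the provision refers to, the toy sketch below (synthetic data and a generic classifier, not any system actually used by law enforcement) shows how an attacker with write access to a training set can flip labels in one region of the feature space and thereby shift the decision boundary of a model that downstream users continue to trust.

```python
# Toy illustration of data poisoning: flipping labels in one region of the training data
# shifts the learned decision boundary. Data and model are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_data(n: int):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)   # the "true" rule the model should learn
    return X, y

X_train, y_train = make_data(2000)
X_test, y_test = make_data(1000)

clean_model = LogisticRegression().fit(X_train, y_train)

# Targeted poisoning: force the label to 0 wherever the first feature is large.
y_poisoned = y_train.copy()
y_poisoned[X_train[:, 0] > 1.0] = 0

poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

print(f"test accuracy, clean training data:    {clean_model.score(X_test, y_test):.2f}")
print(f"test accuracy, poisoned training data: {poisoned_model.score(X_test, y_test):.2f}")
```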

Thus, the proposed cybersecurity requirements are clearly rather vague and insufficient to impose high standards, especially considering that in the field of criminal law and criminal matters particularly sensitive and strictly private data are processed.

Conclusion

As shown in this comment, in its Resolution the EP identifies several issues related to the use of AI in criminal law, calling on the other European authorities and the Member States to take them into account in order to make the use of AI tools as safe as possible and to safeguard the fundamental rights of individuals.

Indeed, Member States should cooperate by providing comprehensive information on the tools used by their law enforcement and judicial authorities: the types of tools in use, the purposes for which they are used, the types of crime they are applied to, and the names of the companies or organizations that developed those tools.

Ultimately, law enforcement and judicial authorities should also inform the public and provide sufficient transparency as to their use of AI and related technologies when exercising their powers, including disclosure of the false positive and false negative rates of the technology in question.
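As a purely illustrative note on what such disclosure involves, the short sketch below (hypothetical counts) computes the two error rates the EP refers to from a basic confusion matrix: the false positive rate (people wrongly flagged) and the false negative rate (relevant cases missed).

```python
# Hypothetical confusion-matrix counts for an identification technology.
true_positives = 80      # correctly flagged
false_positives = 40     # wrongly flagged (e.g., misidentified individuals)
true_negatives = 860     # correctly not flagged
false_negatives = 20     # real matches that were missed

false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"false positive rate: {false_positive_rate:.1%}")  # share of non-matches wrongly flagged
print(f"false negative rate: {false_negative_rate:.1%}")  # share of real matches that were missed
```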

 

Bibliography

[1] European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.

[2] Proposal for a regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence.

[3] Aleš Završnik, Criminal justice, artificial intelligence systems, and human rights, ERA Forum, 2020.

[4] Report Artificial Intelligence and Law Enforcement.

[5] European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment.

[6] https://thesecuritydistillery.org/all-articles/ethics-artificial-intelligence-and-predictive-policing.

 

 
