“In our society we have no major crimes,” explains Anderton, the chief of the Precrime agency, in Philip K. Dick’s short story ‘The Minority Report’. For this society to exist, intelligence agencies and police forces apprehend perpetrators before the crime occurs, based on foreknowledge provided by three clairvoyant humans known as ‘Precogs’.
What until recently was strictly confined to the realm of science fiction has become the driving force of many states’ counter-terrorism strategies. Predicting future crimes is viewed by states across the globe as an essential means of effectively preventing terrorism. Recent technological breakthroughs in particular have reinforced the conviction that data-driven learning algorithms can provide states with precisely such an almost ‘magical’ power.
This perspective critically examines some challenges posed to the rule of law and human rights by states’ use of data-driven learning-based artificial intelligence (AI) systems – in particular machine learning and deep learning models – for identifying individuals presenting a threat to national security. A close examination of this practice is warranted in light of the fact that states continue investing in the development of AI prediction systems, that such systems are often tested and used without the general public’s awareness, and that these AI-based predictions may be used to justify the application of a broad range of administrative measures encroaching on fundamental rights.
AI for states’ prevention of terrorism
The use of AI for the prediction of terrorism is part of the move from a reactive to a pre-emptive approach to counter-terrorism. This move is essentially reflected in two closely connected developments since the events of 11 September 2001.
First, states adopted counter-terrorism legislation increasingly criminalising conduct in the preliminary stages of the potential commission of a crime (‘Vorfeldkriminalisierung’) – i.e. before a criminal act has actually taken place. This was done by criminalising either mere endangerment or the purpose for which a certain action is performed. For instance, the United States (US) adopted the so-called ‘material support’ statute, holding individuals criminally liable under US law for providing material support – encompassing a broad range of activities – irrespective of the individual’s intent to support an organisation’s terrorist activities. These statutory provisions have also expanded states’ investigatory powers and the permissibility of pre-trial detention for terrorism-related activities. In 2006, the Dutch government enacted legislation amending the code of criminal procedure by, among other things, replacing the threshold of ‘reasonable suspicion’ of a terrorist crime triggering special investigative powers with the much lower threshold of mere ‘indications’. By the same token, states’ counter-terrorism legislation has also introduced a broad spectrum of administrative measures, ranging from control orders and area restrictions to deprivation of social benefits and citizenship revocation. In France, a law adopted in October 2017 provides for the use of administrative measures as ordinary means for counter-terrorism purposes, including assigned residence orders, house searches, controls on access to certain areas, and the power to close places of worship. Framed as means to prevent crimes, these measures often operate outside the ordinary criminal justice system, since they apply before the actual commission of a terrorist act. They have taken on a quasi-punitive character, imposing more or less severe restrictions on individuals deemed to pose a risk to national security.
This legislative development has attracted strong criticism for moving punishable behaviour or intention forward in time – to the pre-crime stage – in a manner that is likely to undermine the rule of law and fundamental human rights. States have set this criticism aside and have reinforced their move towards a pre-emptive approach to counter-terrorism by introducing AI systems into their prevention activities.
Indeed, to apply these administrative measures effectively, states explored new means to enhance their capacity to detect terrorist threats in advance. As a first step, new data collection capabilities were developed to provide valuable intelligence about such threats. But with the increase in computing power and data availability – owing to the enormous amount of data most of our daily actions generate on the internet and social media platforms – data analysts soon became incapable of manually scrutinising all the information being generated. Manual analysis was also too slow for the resulting intelligence to be of practical use. Hence, the intelligence cycle had become an issue of scale: while data collection tools had brought enormous progress in the ability to gather information, no comparable technical solutions existed to automate the processing of the collected data.
Data-driven AI learning models gained traction precisely to address these limits of human data processing, by automating decision-making tasks such as predicting, profiling, and assessing the risk of potential terrorists. One of the earliest uses of AI systems for terrorist prediction was the US National Security Agency’s Skynet programme in 2015. The programme was reportedly used to collect the cellular network data of 55 million individuals through Pakistan’s mobile phone network and to rate each individual’s risk of being a terrorist using a machine learning algorithm. Since then, states across the globe have begun investing in and testing various AI systems capable of predicting the risk individuals could pose to national security, including at the domestic level.
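To make the mechanics of such risk rating concrete, the following is a minimal, purely illustrative sketch of a supervised risk-scoring pipeline of the kind described above. The features, data, model choice, and threshold are assumptions made for illustration; they are not drawn from any documented programme.

```python
# Illustrative sketch only: a toy supervised risk-scoring pipeline.
# All feature names, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical call-metadata features (e.g. travel frequency, SIM swaps,
# share of calls to flagged numbers). Purely synthetic.
X = rng.normal(size=(n, 3))
# Labels stand in for analysts' past "of interest" designations; in practice
# these labels encode whatever the agency previously treated as suspicious.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The system outputs a continuous "risk score" per individual; an agency then
# picks a cut-off above which administrative measures may follow.
risk_scores = model.predict_proba(X_test)[:, 1]
flagged = risk_scores > 0.8
print(f"{flagged.sum()} of {len(flagged)} individuals flagged as high risk")
```

The essential point of the sketch is that the model learns whatever regularities distinguish past ‘of interest’ labels and then converts them into scores; the choice of the cut-off, and of the measures it triggers, remains a policy decision.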
As a result, the administrative measures taken by law enforcement and intelligence agencies are increasingly informed by these AI systems’ predictions. In other words, AI systems are becoming essential actors in determining the application of administrative measures that may significantly impair the human rights of the individuals concerned. In 2017, for instance, the German Federal Criminal Police Office introduced a nationwide risk assessment tool (RADAR-iTE) to assess the ‘acute risk’ posed by ‘potentially destructive offenders’ of ‘Islamist terrorism’. Depending on the level of risk the system assigns to an individual, German authorities decide on the required measure. For individuals posing a high risk (so-called ‘Gefährder’), these measures may include wiretapping inside and outside homes, surveillance of telecommunications, electronic monitoring of residence, or preventive detention. States defend such developments by drawing on a still dominant narrative that portrays AI systems as capable of performing these predictive decision-making tasks not only more effectively and accurately, but also more objectively and free of the emotional biases to which humans are subject.
Behind the ‘magic’ of AI: Rule of law and human rights concerns
From a rule of law perspective, any administrative measure against a potential terrorist – even at the pre-trial stage – needs to comply with a number of human rights safeguards. Some of these safeguards may be put at considerably greater risk when AI predictions are used to justify the application of administrative measures.
One of the safeguards at risk is the necessity requirement. When AI systems are used to identify individuals presenting a threat to national security, the task of justifying the necessity of a measure to counter that threat shifts, to some extent, from the appropriate state authority to the AI system. Indeed, the human role is limited to collecting adequate training data and deciding on the learning method to be adopted for a particular objective. Under this process, it is the AI system that examines the available data and assesses whether it indicates that an individual poses a threat to national security. Hence, the determination of what constitutes a threat to national security or a terrorist threat – terms for which no consensual normative definition exists at the international level – is essentially left to an AI system. This in turn means that the scope of these terms may be broadened without humans even being aware of it, because data-driven learning algorithms often operate as ‘black boxes’: even those who design the AI systems cannot trace how the data is processed to produce a prediction.
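To give a concrete sense of why such systems are described as ‘black boxes’, here is a minimal sketch, with synthetic data and an assumed architecture, of a small neural classifier: its decision rule consists of thousands of numeric weights rather than explicit, reviewable criteria.

```python
# Illustrative sketch of the "black box" point: even a modest neural network's
# decision rule is spread across thousands of learned weights, with no
# human-readable rationale per prediction. Data and architecture are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(5_000, 20))                       # 20 synthetic features
y = (X @ rng.normal(size=20) + rng.normal(size=5_000) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                      random_state=3).fit(X, y)

n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"learned parameters: {n_weights}")              # several thousand weights
print("prediction for one individual:", model.predict(X[:1])[0])
# The output is a single flag; the 'reasoning' behind it is distributed over
# all these weights and cannot be read off as explicit criteria.
```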
Moreover, the use of AI systems for the prediction of terrorist threats entails the risk of applying administrative measures in a manner that discriminates against particular individuals or groups. As a broad range of studies and reports have shown, AI systems are prone to biases that have led to discriminatory decisions against individuals often already suffering from discrimination. Different means to reduce these biases continue to be explored, yet it is very unlikely that such systems will be completely unbiased any time soon. An AI system can only be as good as its data, the humans involved in its development, and those interpreting its outputs. Thus, as long as human minds are not free of bias, AI systems will not be either.
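A stylised sketch, with entirely synthetic data and assumed variable names, of how bias in past enforcement labels can propagate into a model’s risk scores:

```python
# Stylised sketch: if past "suspect" labels were applied more readily to one
# group, a model trained on those labels reproduces that disparity.
# All data here is synthetic and the variable names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, size=n)          # 0 = majority, 1 = minority group
behaviour = rng.normal(size=n)              # identical behaviour distribution

# Historical labelling: the same behaviour is more likely to have been
# recorded as "suspicious" for the minority group (label bias).
p_label = 1 / (1 + np.exp(-(behaviour - 2.5 + 1.5 * group)))
label = rng.random(n) < p_label

X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, label)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, majority:", scores[group == 0].mean().round(3))
print("mean risk score, minority:", scores[group == 1].mean().round(3))
# The minority group receives systematically higher scores despite identical
# underlying behaviour, because the model learned the bias in the labels.
```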
A further area in which challenges may arise concerns the procedural safeguards applicable to the pre-trial stage. An example of such a safeguard at risk is the requirement that any decision on administrative measures be based on objective information about the individuals concerned. Traditionally, ‘reasonable grounds for suspicion’ depended on objective intelligence or information about an individual and his or her particular behaviour. Yet, with AI predictions, the mere description of a suspect, his or her physical appearance, or the fact that the individual is known to have had a previous connection to a terrorist group become the determining factors (either separately or jointly) for intelligence or law enforcement agencies to act. In more technical terms, with the introduction of machine learning and deep learning-based AI systems we see a move from subject-based data mining – which is merely a traditional police investigation method in digitalised form – towards pattern-based data mining, which consists of algorithms identifying segments “as deviant in a number of central variables in which the deviation is then connected with terrorism”.
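The following is a minimal sketch of what pattern-based data mining of this kind can look like in practice: an unsupervised anomaly detector flags individuals whose records deviate from the bulk of the population on a handful of variables, without any individualised suspicion. The library, features, and contamination rate are assumptions chosen for illustration.

```python
# Minimal sketch of pattern-based data mining: flag records that deviate from
# the population on a few "central variables", absent any individual suspicion.
# Features and parameters are purely illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Hypothetical per-person variables, e.g. travel frequency, money transfers,
# visits to flagged websites. Entirely synthetic.
population = rng.normal(loc=0.0, scale=1.0, size=(50_000, 4))

detector = IsolationForest(contamination=0.001, random_state=2).fit(population)
labels = detector.predict(population)          # -1 marks statistical outliers

flagged = np.flatnonzero(labels == -1)
print(f"{len(flagged)} people flagged purely for deviating from the pattern")
# Everyone flagged here is simply unusual with respect to the chosen variables;
# nothing links the deviation itself to terrorism.
```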
Another procedural safeguard at risk is the right of suspected individuals to effectively challenge the administrative measures of which they are the recipients. For this right to be effective, an individual needs to know which factual grounds prompted the authorities to apply the administrative measures to them. Yet, when AI systems are used to assess the threat, identifying the factual grounds becomes nearly impossible due to the AI models’ ‘black box’ nature. The Chinese government has deployed AI for terrorist prediction in the region of Xinjiang to identify individuals belonging to the Uyghur Muslim minority whom the government considers to pose a terrorism or extremism threat. Those identified by the system are often detained and sent to ‘re-education centres’ where they may be held indefinitely without charge or trial, unable to challenge the authorities’ decisions relying on those systems. In fact, artificial intelligence’s promise of greater objectivity and neutrality may easily prove to be an illusion, and its application may in some cases entail states’ misuse of administrative measures in violation of human rights and the rule of law, without any remedies actually being available to the recipients of those measures.
Finally, and contrary to what the dominant narrative suggests, the use of AI for the prediction of terrorist threats may reduce the accuracy and effectiveness of counter-terrorism measures, ultimately undermining the rule of law. This is because, by their very nature, data-driven learning models are highly sensitive to data quality and quantity. For instance, deep learning methods call for thousands – and, in some cases, millions or even billions – of training examples before models become relatively good at prediction tasks, approaching human-level performance. Consequently, one significant prerequisite for offences to be subject to crime prediction is a sufficiently large number of offences. This also explains why it is generally argued that robust predictions can only be made about high-volume offences. It is thus no coincidence that burglaries are, by far, the most common target of law enforcement’s crime prediction across the globe. For the prediction of low-frequency events – such as terrorist acts – AI systems are often not the ideal solution.
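A short worked example, with assumed numbers, illustrates the base-rate problem behind this point: even a classifier that is right 99% of the time produces overwhelmingly false alarms when the event it predicts is extremely rare.

```python
# Worked example of the base-rate problem with assumed numbers: screening a
# large population for an extremely rare event with a "99% accurate" model.
population = 10_000_000      # people screened (assumed)
actual_positives = 100       # true threats in the population (assumed)
sensitivity = 0.99           # share of true threats correctly flagged
false_positive_rate = 0.01   # share of innocent people wrongly flagged

true_alarms = actual_positives * sensitivity
false_alarms = (population - actual_positives) * false_positive_rate

precision = true_alarms / (true_alarms + false_alarms)
print(f"true alarms:  {true_alarms:,.0f}")
print(f"false alarms: {false_alarms:,.0f}")
print(f"share of flagged people who are actual threats: {precision:.4%}")
# ~99 true alarms against ~100,000 false alarms: fewer than 0.1% of those
# flagged are actual threats, despite the model's nominal 99% accuracy.
```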
Moreover, every single act of terrorism tends to be unique. The problem with AI prediction is that it projects from past events, assuming that present and future ones will mirror them. Unlike crime prediction in Philip K. Dick’s science fiction, there is nothing truly ‘magic’ about present AI predictions of terrorist threats. What present AI systems offer are mere forecasts extrapolated from the past rather than true predictions of terrorist threats.
Overall, this means that for the prediction of events for which data is scarce and for which the future differs from the past, humans are likely to be far more effective and accurate at learning complex rules than any present AI system. Disregarding this fact can lead to risky and inappropriate results, with a detrimental impact on the many individuals falsely suspected of terrorism.
Conclusion
At present, AI capabilities are still a long way from Philip K. Dick’s crime-free society. But the introduction of these technological means in states’ search for more security comes with a significant risk of undermining the rule of law and human rights. Moreover, contrary to states’ still dominant narrative, AI systems’ predictions are neither necessarily more objective and neutral, nor more effective and accurate, than those made by humans. Against this background, it is crucial to reiterate that the mere fact that AI predictions justifying the adoption of administrative measures take place in the crime-prevention phase does not mean that these measures operate in a legal vacuum. The rule of law and human rights provide important safeguards that must be complied with. Moreover, narratives that present technological advances as almost magical solutions to all our contemporary challenges need to be questioned and critically examined.
There is no doubt that technological advances can help us create a safer environment. But this search for security should not undermine the foundations of our society. Anderton’s concluding words in Philip K. Dick’s story ‘The Minority Report’, cited at the outset, sound like a timely warning against the possible misuse of AI in criminal law matters: “In our society we have no major crimes. But we do have a detention camp full of would-be criminals”.