Press release

ICCT Hosted Blue Sky AI Workshop

18 Nov 2024

The Hague, 7 and 8 November 2024: The International Centre for Counter-Terrorism (ICCT), the Institute of Security & Global Affairs (ISGA) at Leiden University, and the Netherlands’ National Coordinator for Counterterrorism and Security (NCTV) partnered to co-host a ‘blue-sky’ workshop on Terrorist Exploitation of Artificial Intelligence (AI) in The Hague. During this closed-door event, 26 experts in terrorism, national security, counter-terrorism, and technology, drawn from academia, industry, law enforcement, computer science, and government and representing eleven countries, came together to discuss and hypothesise how terrorists and violent extremists could adapt to the rise of AI and exploit these technologies. The two-day workshop was facilitated by Dr. Joana Cook (ICCT, Leiden University), Dr. Bàrbara Molas (ICCT), and Dr. Graig R. Klein (Leiden University).

The workshop used a unique format that prompted participants to think in a ‘blue-sky’ manner. Blue-sky thinking refers to the process of generating creative, imaginative, out-of-the-box ideas without limitations or constraints. In short, it is a brainstorming format in which participants are encouraged to think freely about a topic and to explore, anticipate, and create potential scenarios. This proactive approach was intended to overcome the common critique that (counter)terrorism studies are too reactive by enabling participants to creatively anticipate emerging threats.

In preparation for the event, participants were invited to read an introductory concept note, which provided an overview of how terrorists have already exploited AI and ensured a common understanding of key concepts in the field of AI and terrorism. The participants were then divided into three groups, each comprising experts from different sub-fields and professions relating to terrorism and AI, with one workshop organiser moderating the discussions. Each group then independently took part in a series of five roundtable discussions, each focusing on specific aspects of potential terrorist adaptation to and exploitation of AI.

Day one started with participants discussing the potential use of AI for operational purposes. They explored the capability of AI to enhance and enable weapon production, and the risks of decentralised AI providing instructions beyond the reach of content moderation. This naturally led the experts to discuss current legal frameworks and new opportunities to counter AI-enhanced weapons. The second session focused on the misuse of decentralised and open-source AI. Participants discussed the increasing decentralisation of large language models and its impact on content creation, radicalisation processes, and civilian interactions with extremist chatbots.

On the second day, session three explored the lifecycle and trajectory of extremist online content produced with generative AI. Participants compared content created with decentralised, easy-access, and centralised AI to ‘traditional’ online extremist content. Central to this topic were questions on how AI could enable or more easily facilitate cross-platform migration and increase the general staying power of extremist online content. In the fourth session, participants considered how emerging AI technologies could further enable states to support non-state actors, including terrorist and violent extremist groups. Participants also discussed how these groups could adapt their tactics when provided with AI products, weapons, and tools by state sponsors. Key topics in this debate included how AI could assist target identification, enhance groups’ technical capabilities, and affect communication between state sponsors and non-state groups.

The final round of discussions encouraged participants to push the bounds of their creativity and truly think outside the box by considering scenarios in which AI could become a fully independent entity. The wide-ranging debates explored how AI could be used to automate the creation and dissemination of extremist content while circumventing current legal and content-moderation frameworks. Participants were also asked to assess the risks of AI becoming independent to the point at which it could function as an uncontrollable weapon. Drawing on all five discussions, each moderator then presented an overview of their group’s perspective on how terrorists are most likely to adapt to and exploit AI-driven technologies in a final roundtable.

Ultimately, the workshop produced valuable insights, threat assessments, and both hopeful and dystopian scenarios spanning multiple aspects of terrorists’ and violent extremists’ exploitation of AI. Thanks to its open-minded ‘blue-sky’ approach, participants were able to consider a much broader range of topics and scenarios, yielding numerous potential avenues for future investigation and collaboration. The diverse input from experts across fields related to terrorist exploitation of AI further underscored both the need for and the interest in this type of collaboration, demonstrating the success of the two-day event. The organisers aim to use this workshop as a launching point for further work focused on how such scenarios could be countered or mitigated.