Introduction
The spread of harmful online content poses a multilayered challenge to our democratic societies. It can stifle pluralistic views and undermine the trust and cohesion that form the social fabric on which our societies depend. The spread of harmful content online is of growing concern, in particular because of the scope, speed, and sophisticated methods used to produce, disseminate, and amplify it.
The implications are especially profound for teenagers, who are susceptible to online radicalisation, not only in Europe but also increasingly in South and South East Asia. At the core of this concern are the sheer volume of, and easy access to, all forms of harmful content online, and the constant exposure to extremist ideologies. Social media and gaming platforms such as Minecraft, Roblox, Discord, and TikTok are being used to groom minors and expose them to extreme violence and radical beliefs, all of which increases the risk of political violence in both the online and offline world.
Terrorist and illegal content are more clearly defined and regulated, whereas so-called borderline content is shrouded in ambiguity. Borderline content is also described as content that is ‘awful but lawful’, because it appears at first glance to be lawful and thus protected by freedom of speech. Yet borderline content can contribute to the normalisation of violence, foster hostility against specific groups, fuel polarisation, or negatively affect the democratic order. The lack of a clear understanding and definition of what constitutes borderline content makes it difficult to determine its lawfulness and whether it falls under freedom of speech. Before we can detect and moderate borderline content, we need to know what borderline content is. ICCT has conducted a commissioned study examining the feasibility of developing a reliable framework for detecting and moderating extremist and terrorist content online. This analysis is based on that report and seeks to define the scope of borderline content, especially in light of its impact on youth. The analysis explains why the term borderline content is not helpful and proposes the term implicit extremist content instead, as it better assists service providers in detecting and moderating harmful content online in the context of violent extremism.
The impact on youth
Europol’s TE-SAT report 2025 observed a continued increase in the number of minors involved in terrorist or violent extremist activities across the EU, amounting to more than 29 percent of the total number of terrorist arrests. Forthcoming research by one of the authors, conducted in the Netherlands, Germany, and the United Kingdom, has collected nearly 100 cases and reveals that both jihadi and right-wing extremist groups exploit and recruit youth. Data from the United Kingdom reveal that minors are mostly prosecuted for online terrorist offences: of the 87 offences in 53 cases, nearly 50 percent of convictions relate to possession of terrorist material and 29 percent to dissemination of terrorist publications. Similarly, the Five Eyes countries are seeing a rise in the number of minors being investigated and prosecuted for terrorism-related activities and are calling for collective action.
In the last two years, minors have been increasingly targeted by the 764 network, or the Com, which coerces them into acts of extreme violence, self-harm, or the dissemination of child sexual abuse material within their networks. These networks operate at the nexus of violent extremism, child sexual abuse, and other violent crimes. Other misanthropic groups, such as No Lives Matter, produce explicit terrorist content, such as the “Manhunt Guide”, and deliberately target minors. In Sweden, for instance, a 15-year-old was found guilty of committing unprovoked knife attacks linked to 764 and No Lives Matter.
The scale, speed, and profound impact that harmful online content has on youth constitute a legal and moral minefield. Minors across the globe – who are still forming their identities and whose brains are still developing – use social media daily and are consequently exposed to all sorts of harmful content. Notwithstanding the benefits of social media, there is a steep rise in problematic use of social media in Europe. New technologies such as Large Language Models (LLMs), Artificial Intelligence (AI), chatbots, and deepfakes can further amplify harmful messages. The consequences can be far-reaching. In the US, for example, parents claim that their 16-year-old son died by suicide after confiding in and consulting ChatGPT.
Online radicalisation is one of the main concerns for national security and usually begins on widely used, public platforms aimed at a broad audience. Given that minors are in a formative stage of psychological and social development, they are especially susceptible to online radicalisation. According to a survey conducted in the United Kingdom, 70 percent of minors aged 13–17 are exposed to violence online. Borderline content can further trivialise violence and gradually contribute to its normalisation. As a result, in some cases, minors are drawn into violent extremism and become involved in terrorist activities. Notably, minors also move from being passive consumers to active content creators and coordinators owing to their high digital literacy. Considering the high risks for youth and the need to protect them, it is critical to gain a better understanding of what constitutes borderline content. To mitigate susceptibility and vulnerability to online recruitment, it is important to improve the detection and moderation of implicit extremist content.
The use of different types of content and methods to conceal true meaning
Research shows that terrorist and extremist groups make ample use of online services for recruiting, disseminating propaganda, fundraising, plotting, networking, and sharing online manuals with instructions for building weapons and explosives. This can be seen among right-wing extremists, jihadi circles, and anti-government groups. In the Netherlands, ten members of an anti-government extremist group – the Common Law Nederland Earth (CLNE) – who are now standing trial, also frequently used chat groups on Signal and Telegram. Some of the members were willing to resort to offline violence by plotting an attack against a mayor in the east of the Netherlands.
The online content ranges from text-based posts and messages to image-based memes, video clips, and links directing users to external sites. While some content is overtly terrorist or extremist in nature, for example the display of symbols of designated terrorist organisations or calls to ideologically motivated violence, other content is less explicit, although not necessarily less harmful. At the same time, jihadi and right-wing extremists are increasingly relying on coded language, symbols, and humour to conceal their messages and ensure they do not qualify as explicit terrorist content.
Common techniques to conceal meaning include the use of misleading cover images or blurred images. The use of memes and humour is very effective; some memes contain multiple messages for different audiences and serve as a ‘dog whistle’, with the underlying message intended for and understood only by a specific subculture. The use of such memes – Pepe the Frog, for example – is common among right-wing extremist groups, including in the Netherlands, but meme culture is also prevalent in jihadi groups.
Borderline Content
The use of these concealing methods blurs the distinction between legitimate free speech and incitement to violence. Overly broad content moderation risks stifling legitimate debate, while insufficient moderation leaves societies exposed to manipulation and radicalisation. However, if one pierces the veil, some borderline content is in fact terrorist content or constitutes other forms of illegal content, such as hate speech, child sexual abuse material, or incitement to violence.
While terrorist and illegal content can be identified relatively objectively, as it is defined by regulatory frameworks, so-called borderline content is far more subjective. To give an example, in the Netherlands, two members of parliament representing the Party for Freedom (PVV) disseminated a deepfake of Frans Timmermans, the political leader of the Green/Labour Party (GroenLinks/PvdA), depicting him as a man stealing money from hardworking Dutch citizens and giving it to a Muslim family instead. The post received many threatening reactions, which could qualify as incitement to violence. Can this deepfake be considered satire and fall under freedom of speech, or is it incitement to violence?
There is no consensus on what constitutes borderline content in relation to violent extremism and terrorism. Common expressions such as ‘awful but lawful’ or ‘legal but harmful’ only partially capture the complexity of the issue. Policymakers, practitioners, experts, and platforms all have different understandings of what constitutes borderline content. Considering the potential harm implicit extremist content poses and the ability of platforms to amplify, demote, or reduce the visibility of content, platforms play an important role in regulating implicit extremist content. Platforms often refer to borderline content as material that could potentially violate, or comes close to violating, their community guidelines. Meta, for instance, explains that even if certain content has not (yet) been formally confirmed as a violation warranting removal, its visibility can be reduced when its automated systems predict it is likely to violate its Community Standards. Furthermore, platforms frequently adapt their policies on what content is classified as violative. Because the definition of borderline content is highly context-specific and shaped by political and societal dynamics, what is considered borderline today may change over time, and qualifications may differ across platforms. The shifting boundaries of what constitutes borderline content make it difficult to navigate and moderate implicit extremist content consistently.
Implicit Extremist Content that is Harmful
Considering the rise of so-called borderline content, a clear definition will enable both users and, in particular, platforms to determine the lawfulness of harmful content in a rule-of-law-compliant manner. Under international human rights law, any restriction on the right to freedom of expression must have a legal basis, serve a legitimate aim, and meet the substantive criteria of necessity, proportionality, and adequacy. Because borderline content blurs the distinction between lawful and unlawful content, moderation must take freedom of expression carefully into account. We therefore propose the term implicit extremist content that is harmful, which consists of three elements:
Implicit refers to the fact that the meaning of the content is concealed. When this is done intentionally, it aims to disguise the illegality or harmfulness of the content.
Extremist refers to the fact that the content disseminates exclusionary and hateful narratives that may contribute to radicalisation towards terrorism and (violent) extremism.
Harmful refers to the fact that the content could cause serious harm to an individual, a group of people, institutions, or the democratic legal order, and is not protected under international human rights law.
Extremists not only use technologies to spread hateful content but also increasingly employ tactics such as coded language, humour, and irony to conceal its meaning. In doing so, they try to avoid detection as overtly terrorist content and tailor their messages to their audience, triggering more engagement. An example from our research shows how a post simplifies the idea of so-called race-mixing by using the analogy of mixing colours in a washing machine. As a result, such content can go undetected, producing so-called false negatives.
While we deliberately choose the term extremist, there is no internationally agreed-upon definition of terrorism or violent extremism. The UN Special Rapporteur on the promotion and protection of human rights while countering terrorism has expressed her concern on numerous occasions that the lack of a definition of violent extremism opens the door to abuse and can have a chilling effect on human rights. Some countries, such as the United Kingdom, have adopted a definition of violent extremism, but extremism in itself is neither illegal nor criminalised, although some activities, such as vandalism and incitement to violence, constitute criminal offences. It is important that there is a direct link between the activities and the undermining of the democratic legal order, which can be observed in some forms of anti-institutional extremism. While all acts of terrorism constitute criminal offences, not all acts of extremism do. The term extremist should therefore be interpreted narrowly to avoid violations of freedom of expression, which would produce false positives.
Finally, harm and harmful content are defined differently across national jurisdictions. Some content may be tasteless or inappropriate, but not harmful. In Australia, harm ‘means serious physical harm or serious harm to a person’s mental health, whether temporary or permanent’, and the definition of harm was one of the core issues debated during the drafting of the UK Online Safety Act. The EU Digital Services Act explicitly mentions harmful content, includes harm to both individuals and society, and treats harm as stemming from systemic issues rather than isolated individual behaviours. Harmful content is a container term and, according to research by the Rathenau Institute, encompasses six categories of harmful behaviour. The fact that different approaches have been adopted regarding what constitutes harmful content will also affect how such content is moderated. The term harm is context-specific; if it is not interpreted properly, it could lead to too many false negatives as well as false positives.
Conclusion
Considering how harmful implicit extremist content can be for individuals, specific vulnerable groups, and society at large, it is crucial to develop a comprehensive approach. To strengthen effective regulation, early detection, and proportionate moderation, it is vital to have a proper and clear definition of implicit extremist content. This will also help avoid unnecessary violations of freedom of expression, while insufficient removal of content can likewise be detrimental, particularly for youth. The proposed definition is not perfect and leaves room for improvement. Perceptions of what is harmful may differ over time, but also from one society to another. The underlying dilemma in moderating harmful content is determining when implicit extremist content falls within the scope of freedom of expression.
Under international human rights law, restrictions on the right to freedom of expression are permitted only under certain conditions: they must have a legal basis, serve a legitimate aim, and meet the substantive criteria of necessity, proportionality, and adequacy. When platforms adopt more transparent and human-rights-compliant content moderation, this should reduce both harmful content online and infringements of freedom of expression. Moreover, content moderation not only affects freedom of expression but can also directly impact other human rights, such as the freedom of religion and the right to privacy.
More fundamental is the question of whether platforms can and should uphold and protect human rights. Given the ability of platforms to shape public opinion, amplify certain messages, and influence human rights outcomes, they bear a responsibility to respect and protect human rights. The UN Guiding Principles on Business and Human Rights (UNGPs) affirm that companies must mitigate risks and take proactive measures to prevent violations. Yet, at the same time, reports suggest that platforms are struggling to strike an appropriate balance between safeguarding freedom of expression and enforcing content moderation policies. Our research reveals that platforms have violated freedom of expression as a result of vague definitions, often followed by disproportionate moderation decisions.
The right to freedom of expression takes on a different dimension online, but human rights apply equally offline and online. The built-in flexibilities of the international human rights framework allow it to adapt to the online dimension. The notion of ‘free speech versus free reach’ underlines that the right to express one’s opinion is not the same as the right to have it widely promoted. Platforms play a crucial role as they control the means to amplify, demote, or reduce the visibility of content, in particular implicit extremist content. Ultimately, a clear definition will provide guidance to platforms – and to users – on how to navigate harmful content online in the context of violent extremism.