In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this paper, we explore the creation and distribution of synthetic media by malign actors, specifically a product of artificial intelligence/machine learning (AI/ML) known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes, particularly deepfake video content, to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far-right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups’. We also expect FREs to deploy deepfakes to create compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning the use of harmful synthetic media and prosecuting those who deploy it. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves to “call off the dogs” and undermine the conditions that allow extremist groups to thrive.
Policy Brief
13 Dec 2023