AI disinformation: lessons from the UK’s election

The year of elections was also feared to be the year that deepfakes would be weaponised to manipulate election results or undermine trust in democracy.

The record-breaking 2024 figure of about 4 billion voters eligible to go to the polls across more than 60 countries coincided with the full-fledged arrival and widespread uptake of multimodal generative artificial intelligence (AI), which enables almost anyone to make fake images, videos and sound.

Have these fears been realised? Our centre has analysed the incidence of AI-generated disinformation around the UK election held on July 4 and found some reasons for reassurance, but also grounds for concern over long-term, democracy-eroding trends that these threats exacerbate.

In contrast to fears of a tsunami of AI fakes targeting political candidates, the UK saw only a handful of examples of such content going viral during the campaign period.

While there’s no evidence these examples swayed voters in significant numbers, we did see spikes in online harassment against the people targeted by the fakes. We also observed confusion among audiences over whether the content was authentic.

These early signals point to longer-term trends that could damage the democratic system itself: online harassment creating a ‘chilling’ effect on the willingness of political candidates to participate in future elections, and an erosion of trust in the online information space as audiences become increasingly unsure which content is AI-generated and therefore which sources can be trusted.

Similar findings on the impact of generative AI misuse in 18 other elections since January 2023 are reported in a recent CETaS briefing paper.

There has of course been a sensible case for heightened vigilance this year. From India to the UK, and from France to the US, the outcomes of many of 2024’s elections have had, or will have, enormous geopolitical implications, giving malicious actors strong incentives to interfere.

The capability that generative AI gives users to create highly realistic content at scale using simple keyboard prompts has enhanced the disruptive powers of sophisticated state actors. But it has also dramatically lowered the barriers to access, such that even individual members of the public can pose risks to the integrity of democratic processes – including elections.

The latter threat was underscored by comments last week from Australia’s Director-General of Security, Mike Burgess, when he helped announce the raising of the country’s terrorism threat level. The basis for the increase was in part, Burgess said, that people with violent intent were ‘motivated by a diversity of grievances and personal narratives’ and were ‘interacting in ways we have not seen before’.

As a result, the risk of mis- and disinformation influencing election outcomes is much more serious.

Looking at the UK general election, however, generative AI turned out to play a lesser role than traditional automated threats. For instance, several investigations into election-related content on online platforms found hallmarks of bot accounts seeking to sow division over controversial campaign issues such as immigration.

Some had possible links to Russia, and pushed pro-Kremlin narratives about the war in Ukraine. While these bot activities did include a few instances of AI-generated election material being circulated, the majority used a well-established tactic known as ‘astroturfing’, in which many automated accounts are used to increase perceived popular support for a particular policy stance or political candidate by spamming thousands of fake comments on relevant social media posts.

Alongside these bot incidents, the UK was targeted by a fake news operation with strong connections to a Russian-affiliated disinformation network called Doppelganger. Known as ‘CopyCop’, the operation involved spreading fictitious articles about the war in Ukraine to confuse the UK public and reduce support for military aid. As part of CopyCop, real news stories were pasted into AI chatbots and rewritten to align them with the network’s strategic aims.

However, many of these articles had the chatbot prompts left in, betraying obvious signs of AI editing, and therefore failed to attract much engagement. That said, some were picked up by Russian media influencers and spread across their channels to tens of thousands of users. Often, the real origins of the articles were concealed in a tactic called ‘information laundering’, an effort to trick users into assuming the content came from a credible news outlet.

While these disinformation activities can be connected to hostile foreign states, most viral misleading AI content in the UK election came from members of the public. This content included deepfakes that implicated political candidates in controversial statements that they never made. Interestingly, many users behind the content claimed they were doing it for satirical or ‘trolling’ purposes. Others may have pushed the content to increase support for their political party or because they were disillusioned with conventional political campaigns. This range of motives across different users highlights the new sources of risk and the expanded threat landscape that stem from such wide access to generative AI systems.

Taken together, the most prominent disinformation problems during the UK election did not arise from novel AI technology, but from longstanding issues tied to social media platforms – including the role of influencer accounts and recommender algorithms.

As we look ahead to the US election in November, it is vital that these platforms co-ordinate with different sectors to invest in measures to protect users.

These include conducting red-teaming exercises, requiring clear labels on AI-generated political adverts, and engaging with fact-checking organisations to detect malicious content before it goes viral.

And with Australia facing its own federal election in the next nine months, continued scrutiny of the risks, the malicious perpetrators and the emerging measures to combat them is also vital to the country’s interests.

  • This article is part of a short series The Strategist is running in the lead up to ASPI’s Sydney Dialogue on September 2 and 3. The event will cover key topics in critical, emerging and cyber technologies, including disinformation, electoral interference, artificial intelligence, hybrid warfare, clean technologies and more.