Path to Power: 2024 - Democracy Disrupted?
Digital Technologies Enable Artificial Armies and Artificial Candidates
As has long been the case, digital technologies are often used in elections to create the illusion of a silent majority, though their influence remains shaped by digital divides and levels of social media penetration. These techniques are used most frequently by parties and players seeking to undermine liberal democratic institutions, both from within and from outside the country.
The main impact of advances in generative AI technology appears to have been its ability to create artificial representations of key players. This happened partly as misinformation, but several elections also included notable instances of clearly labelled AI usage to create videos of dead or detained candidates. This transparent and sometimes democracy-supporting use of AI in elections deserves further attention.
The United Kingdom
McLoughlin (2024) identifies four different varieties of AI-generated synthetic media in the UK election. Firstly, AI was used to create humorous or satirical content (Cashell, 2024). Secondly, AI tools and AI-generated media were incorporated into campaign materials, with parties using AI to generate scenes or backgrounds to be combined with other assets. Thirdly, AI candidates, such as AI Steve in Brighton and Hove, served as stand-ins or representations for actual candidates. These AI candidates were presented as highly accessible and controlled by citizens, though they garnered more media attention than votes. AI was also used by paper candidates to present more individualized campaign materials despite limited resources. Finally, AI-driven disinformation used deepfake images, videos and audio to spread misleading content. Examples included deepfaked audio clips of Labour candidates, which gained significant traction on social media and blurred the line between satirical in-jokes and intentional deception.
Pakistan
Pakistan Tehreek-e-Insaf (PTI) leader Imran Khan has been incarcerated on corruption charges since 2023, rendering him unable to participate in traditional campaign activities. To address this challenge, the PTI used an AI voice clone to simulate Khan’s voice during the election campaign. According to the party, Khan provided new speech scripts through his legal representatives, which were then handed over to party officials. These scripts were transformed into audio by ElevenLabs, a U.S.-based AI firm, which analysed Khan’s previous speeches to replicate his distinctive speaking style. The resulting audio was released to the public as audiovisual presentations, in which Khan’s AI-generated voice was synchronized with archival photos and videos. The PTI, however, clearly noted that the voice had been generated by artificial intelligence tools.
Georgia
Evidence collected by media monitors, and described in Meta’s report on closing inauthentic accounts before the election, showed that Georgia’s social media was the target of concerted computational propaganda from both domestic and foreign sources. This greatly expanded the reach of anti-European and pro-Georgian Dream narratives, with coordinated bot networks seeking to delegitimise the opposition and protest movements.
Mexico
In several instances, generative AI was used to create artificial videos, particularly targeting Sheinbaum as the leading candidate. Two of the most widely shared AI-generated videos against her sought to exploit her Jewish heritage and left-wing affiliations. In one, Sheinbaum appears to advocate for closing Catholic churches, with a satanic symbol in the background. Just days after she won the election, another deepfake surfaced, depicting Sheinbaum speaking Russian with communist propaganda in the room. Although the significant gap between the candidates suggests that this type of manipulation had little impact, in a much tighter race, the consequences could have been more substantial.
During the campaign, millions of alleged bots amplified hashtags like #narcopresidente and #narcocandidata, targeting López Obrador and Sheinbaum, respectively. While the electoral authority dismissed MORENA’s request to investigate what the party referred to as a dirty war, some studies identified over 12 million attacks purportedly involving bots. These attacks were traced to approximately 160,000 accounts originating from Colombia, Mexico, Spain, and, most notably, Argentina (Radio Fórmula, 2024).
India
In the 2024 general elections, the DMK party – a regional party based mostly in the state of Tamil Nadu – used AI to create videos of deceased leader M. Karunanidhi, which were shown at campaign rallies. In one of the speeches, for example, Karunanidhi was seen as recollecting his achievements as the Chief Minister of Tamil Nadu, while simultaneously praising his son’s ability to govern and the efforts of party workers. Other leaders such as Prime Minister Narendra Modi have also deployed these technologies, including AI clones and holograms for campaigning.
South Africa
The MK Party deployed Russian-linked bot networks to circulate targeted disinformation on social media. They posted fabricated evidence alleging that senior ANC officials, including cabinet ministers, were diverting public funds to offshore accounts in Dubai. They amplified hashtags like #ANCCorruptionExposed and #VoteMK2024, ensuring they trended by reposting thousands of identical messages. This sustained operation also included sharing manipulated videos and falsified WhatsApp screenshots to create the impression of insider leaks, eroding trust in the ANC while portraying the MK Party as a transparent and anti-corruption alternative.
Mozambique
Mondlane used AI-generated images of himself online, including on his Facebook page, where he often live streamed. These digitally generated portrayals presented him at the center of a group of adoring supporters. However, given his popularity and regular attendance at rallies prior to the election, these images may be seen as augmented, synthetic supplements to his authentic reputation as a rousing speaker rather than attempts to wilfully deceive. Social media, while rapidly rising in popularity in recent years, has very low penetration in the country by regional comparison, so social media strategies are often aimed at a young, tech-savvy audience in Maputo or in the broader diaspora, with the expectation that they both understand and recognize intentional uses of AI.
Indonesia
Buzzers, typically young, tech-savvy individuals, are often hired to generate online buzz by promoting political messages, products, or ideologies. They are often employed by political figures, businesses, or organizations seeking to influence public opinion, and they operate by amplifying narratives across platforms like Twitter and Facebook, blurring the line between genuine discourse and paid manipulation. Buzzers are often deployed to artificially boost certain topics and scandals higher up the algorithmic agenda, although the effectiveness of buzzers alone at shaping discourse at the level of national politics is dubious.
The USA
Musk’s policy changes to X (formerly Twitter), purportedly in support of an absolute free speech model, have substantially deregulated the platform, enabling greater anonymity and accelerating bot activity. These bots frequently amplified pro-Trump talking points while slandering progressive politicians, further polarizing political discourse. They contributed to inflating and distorting the perception that Trump-style populism represents the will of the silent majority, shaping public opinion through coordinated manipulation. Bots amplifying Trump-style populism often rally around hashtags such as #MAGA, #AmericaFirst, #StopTheSteal, #SaveAmerica, and #DrainTheSwamp. They typically focus on attacking the legitimacy of the 2020 election, promoting nationalism and anti-immigration rhetoric, and vilifying political elites and the media. The proliferation of far-right bots on X amplifies extremist narratives, creating an illusion of widespread support that desensitises audiences to, and normalises, increasingly extreme social and political attitudes.