Path to Power: 2024 - Democracy Disrupted?
State-Platform Cooperation is Needed to Address Online Toxicity
To address online toxicity and its effects, there needs to be cooperation between states and platforms grounded in a shared democratic commitment. However, even where states are willing to work with platforms on structural issues, such as content moderation or algorithm adjustments, platforms show limited willingness to cooperate.
In cases where the state actively spreads, condones or tolerates online toxicity, attempts at cooperation between external states and online platforms are limited. Platforms show little willingness to address these issues domestically without concerted and robust state intervention. Within domestic elections, illiberal external states and actors utilise core functions of platforms to advance their geopolitical interests in ways the platforms and target countries struggle to control.
The United Kingdom
The government’s efforts to curb online toxicity and misinformation, such as through the Online Safety Bill, have largely failed to address the deeper societal divisions that fuel these issues. The Conservative government under Rishi Sunak implemented stricter regulations on technology platforms to tackle harmful content, yet disinformation continues to thrive, particularly around topics like immigration, economic policies, and the ongoing effects of Brexit. Furthermore, given the number of individuals arrested for their social media posts, especially in contexts where misinformation may be intended to escalate incidents of violence, civil liberties groups wish to emphasize the difference between misinformation spread deliberately for profit and unintentional communications (Spring, 2024).
Pakistan
During the 2024 elections, the Election Commission of Pakistan collaborated with Meta to counter mis/disinformation. This was in addition to Meta’s collaboration on the issue with local and international civil society groups and factchecking organizations, including Agence France-Presse, PakVoter and Shehri Pakistan. Despite these collaborations, a report by Pakistan’s Digital Rights Foundation found that harmful content was rampant during the elections across digital platforms including Facebook, Instagram, TikTok and X (Digital Rights Foundation, 2024).
South Africa
Government efforts to regulate social media platforms to curb disinformation have failed to mitigate the deep-rooted societal issues contributing to political polarization. Despite campaigns to combat fake news, online misinformation around topics like corruption, land reform, and race relations remains pervasive. The ANC has faced significant disinformation attacks, particularly surrounding its handling of corruption cases, which have been amplified on platforms by opposition parties like the EFF and the DA. For instance, the #ZumaMustFall movement and ongoing coverage of state capture were undermined by online misinformation from pro-Zuma accounts. The IEC partnered with major tech companies, including Google, Meta, and TikTok, in an attempt to curb the spread of disinformation. For the first time, these companies signed a Framework of Cooperation with the IEC, aiming to safeguard the integrity of information during elections; although instances of disinformation persisted, public awareness campaigns across traditional media provided citizens with the digital tools necessary to distinguish fact from fiction.
Indonesia
In recent years, instances such as the widespread dissemination of false information during the 2019 presidential election, including doctored videos and fake news, have highlighted the scale of the issue. While the government has made efforts, such as establishing the Information Ministry’s Hoaks task force in 2018 to combat disinformation and introducing the ITE (Information and Electronic Transactions) Law in 2008, these measures have faced criticism for being either ineffective or overly broad, often curbing free expression rather than addressing the root causes of misinformation (International Commission of Jurists, 2020; France 24, 2024).
Mexico
Efforts to combat disinformation during the 2024 Mexican election were primarily led by the electoral authority, the Instituto Nacional Electoral (INE). These initiatives included collaboration with Meta and civil society organizations to promote digital literacy. However, these efforts fell short, as evidenced by the prevalent polarization and hate speech on social media during the campaign. Moreover, many of these initiatives reached only a few thousand users, and the potential ripple effect of online misinformation on less connected individuals remains unaddressed. The less educated and the elderly are particularly vulnerable to misinformation, whether they encounter it online or offline, as both groups often lack the tools to identify misinformation or take additional steps to fact-check it. Finally, the Mexican government and independent agencies lack a clear and effective strategy to comprehensively and sustainably address existing disparities in access, particularly in rural areas, and to ensure the full exercise of user rights.
Mozambique
Efforts to combat online misinformation have been met with limited success. The government introduced measures like the 2022 Anti-Terrorism Law, which penalizes spreading false information about terrorism, a particularly sensitive issue given the ongoing conflict with extremist forces in Cabo Delgado. It has also proposed revisions to its Social Communications Law, which could regulate journalism more strictly, and has blocked social media platforms during political unrest, as seen after the 2024 elections, citing public order concerns. While supposedly aimed at curbing disinformation, these actions have raised fears of suppressing dissent and restricting freedom of expression.
Georgia
The Georgian government actively supported online toxicity and misinformation, with official and unofficial accounts regularly engaging in adversarial attacks on the opposition and misleading the public with false narratives during this election. These strategies were effective in distracting voters from key systemic societal problems in Georgia, such as unemployment and poverty, which survey data shows were the most important issues for the electorate. Within this context, and in the absence of effective state-platform cooperation, internal efforts from civil society groups, including media monitors and factcheckers, were most effective in exposing these narratives and priming the Georgian population against them. As such groups have been increasingly targeted through restrictive legislation, the bottom-up accountability of both the state and platforms to Georgian civil society remains under significant threat.
The USA
Efforts to combat disinformation during the 2024 presidential election, such as the Biden administration’s collaboration with social media platforms and the creation of disinformation task forces, have done little to heal the deep political polarization. Misinformation about the legitimacy of the previous election continued to dominate platforms, particularly among Trump supporters, and spread rapidly through fringe media and social networks. For instance, false claims about voter fraud, despite being debunked, remain central to Trump’s rhetoric, galvanizing his base and undermining confidence in the democratic process. During the 2022 midterms, the Biden administration worked with platforms like Facebook, Twitter, and YouTube, alongside factchecking organisations such as the Center for an Informed Public in Seattle, Washington, to flag and counter election-related misinformation (Leingang, 2024).
Following the Republicans’ takeover of the House, figures like Jim Jordan and James Comer led efforts to politicise this collaboration, alleging censorship of conservative voices and initiating probes through the House Judiciary and Oversight Committees. After Trump’s election victory, the tech companies expressed willingness to work with Republicans. Zuckerberg went as far as releasing a video announcing that Meta would no longer subject user-uploaded content to full content moderation review. Instead, users would bear the burden of flagging and complaining about content that causes harm or spreads dangerous misinformation, turning the public into its own fact checkers and absolving the company of responsibility (Vaidhyanathan, 2025).
India
Meta’s platforms, including Facebook, WhatsApp, and Instagram, are central to political campaigning in India, where parties, including the BJP, the INC and others, increasingly rely on digital outreach. This reliance on digital platforms for political campaigning exacerbates the problems of misinformation and disinformation, especially during electoral cycles (Klepper and Pathi, 2024). In May 2024, Ekō, in collaboration with India Civil Watch, ran an experiment submitting AI-generated ads that violated Meta’s policies on hate speech and misinformation; 14 out of 22 ads were approved. Despite Meta’s prior commitments to combat misinformation during the election, the experiment revealed the company’s failure to prevent harmful content and highlighted the limits of relying on platform collaboration to mitigate online toxicity.