Democracy Reporting International

Report March 2025

Executive summary 
Democracy Reporting International's (DRI) Digital Democracy Programme Unit focuses on identifying trends in online discourse and online harms during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring and formulates policy recommendations for various stakeholders in the technology and society ecosystem, including lawmakers, tech platforms, and civil society organisations. 

Key Findings and Actions during the Reporting Period: 

1. Research into Murky Accounts: DRI’s Digital Democracy Programme Unit researched how very large online platforms (VLOPs) and very large online search engines (VLOSEs) address tactics such as inauthentic accounts, fake followers, and political impersonation. We published eight reports on "Murky Accounts": accounts of questionable affiliation that present themselves as official government, politician, or party accounts when, in fact, they are not. Murky accounts do not declare themselves as fan or parody pages and can be interpreted as attempts to promote, amplify, or advertise political content.
We identified the systematic use of murky accounts in the 2024 European Parliament, French, and Romanian elections. We recommended that TikTok strengthen its policies to prevent fan account abuse, improve enforcement against impersonation, require verified badges for political accounts, and apply consistent guidelines, including pre-election reviews. 

2. Social Media Monitoring (SMM): DRI also conducted detailed analyses of online discourse during the EP elections in eight member states, uncovering instances of toxic speech and disinformation threats targeting historically marginalised groups and the integrity of elections. Our techniques included keyword searches, sentiment analysis, and advanced computational methods, giving us a nuanced understanding of online discourse throughout the electoral period. 

3. AI System Analysis and Recommendations: DRI continued its monitoring of generative AI risks, particularly from LLM-powered chatbots, through regular audits assessing their impact on elections. While some genAI systems (e.g., Gemini) implemented safeguards, others (e.g., Copilot, ChatGPT-4) still generated misleading electoral information, highlighting the need for consistent safeguards. We also tracked the use of AI-generated content during the 2024 EP elections and formulated policy recommendations to address potential misuses. During the reporting period, we also published a guide on auditing approaches for LLM risks and a report analysing chatbot alignment with human rights-based pluralism. 

4. Policy Recommendations, Engagement and Advocacy: DRI actively participated in the Rapid Response System under the Code of Conduct on Disinformation, advocating for the robust implementation of the DSA’s risk mitigation framework and data access provisions. We worked directly with platforms to develop strategies for minimising online harms and pushed for greater transparency in content recommendation and moderation practices. Additionally, we engaged with EU stakeholders through roundtables, workshops, and conferences, fostering awareness and action on the DSA and broader digital governance issues. 


Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Threats observed or anticipated at time of reporting: 

  1. Impersonation and inauthentic TikTok political accounts and political ads violating TikTok’s policies 

European Parliament and French snap Elections:
DRI submitted the first RRS notification under the newly created system, drawing TikTok’s attention to “murky accounts”, a term we coined for accounts with unclear affiliations and questionable authenticity that actively distribute and promote politician and party content. Between May and July, we sent four RRS notifications flagging 116 TikTok accounts linked to 31 candidates/political parties across 15 EU member states. We looked at accounts supporting parties from across the political spectrum; 79.31% of the flagged accounts supported far-right candidates and political parties. As of September 2024, TikTok had removed 53 of the 116 accounts we flagged.  


Romanian Elections:
Across our two reports, we identified a total of 114 TikTok accounts violating the platform’s rules on impersonation and/or displaying signs of coordinated inauthentic behaviour or fake engagement, all linked to candidate Călin Georgescu. 

We also analysed TikTok’s Commercial Content Library, which is intended to list all active ads on the platform. Despite TikTok’s policy prohibiting political advertising—including ads that reference, promote, or oppose candidates or solicit votes—we uncovered 49 political ads supporting Călin Georgescu. 
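For illustration, the sketch below shows how exported ad-library records could be screened for mentions of a candidate before manual review. It is a minimal example only: the JSON export format, the field names (ad_text, advertiser), and the keyword list are hypothetical and do not reflect the actual structure of TikTok's Commercial Content Library data or the review criteria we applied.

```python
"""Minimal sketch: screening exported ad-library records for candidate mentions.

Assumptions (illustrative, not from this report): the export is a JSON list of
records with 'ad_text' and 'advertiser' fields; real Commercial Content Library
data and the review criteria used by researchers may differ.
"""
import json

# Illustrative keyword list; an actual review would use a broader, curated set.
CANDIDATE_TERMS = ["călin georgescu", "calin georgescu", "georgescu"]


def find_candidate_ads(export_path: str) -> list[dict]:
    """Return ad records whose text or advertiser name mentions the candidate.

    Keyword hits are only a first filter: each flagged record still needs
    manual review, since matches can also include news or commentary content.
    """
    with open(export_path, encoding="utf-8") as f:
        ads = json.load(f)  # assumed format: a list of ad records (dicts)

    hits = []
    for ad in ads:
        haystack = f"{ad.get('ad_text', '')} {ad.get('advertiser', '')}".lower()
        if any(term in haystack for term in CANDIDATE_TERMS):
            hits.append(ad)
    return hits
```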

 
2. Chatbots misinforming about the EP Elections, and prevalence of generative AI in the campaign 

After asking four chatbots ten questions in ten different EU languages (400 questions in total), DRI showed that chatbots are less reliable than search engines in providing users with electoral information. We asked ChatGPT 3.5 and 4, Copilot, and Gemini common questions about the European elections, and all of them provided some totally or partially incorrect answers. This research brief was covered by Politico and Euronews, as well as other outlets. A follow-up study showed that Google had adapted its Gemini chatbot, which then refused to answer election-related questions rather than giving wrong responses. The chatbots of OpenAI and Microsoft continued to provide wrong responses. We also examined the use of generative AI in political campaigns across eight EU countries. 
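To illustrate how such an audit can be run, the minimal sketch below loops over every model, language, and question and logs the raw answers for manual rating. The model identifiers, the language selection, the question structure, and the ask() helper are illustrative placeholders; DRI's actual prompts, scoring rubric, and API integrations are not reproduced here.

```python
"""Minimal sketch of a chatbot election-information audit loop.

Assumptions (illustrative, not from this report): model names, the language
selection, the question structure, and the ask() helper are placeholders for
vendor-specific API calls and the auditors' own question set.
"""
import csv
import itertools

MODELS = ["chatgpt-3.5", "chatgpt-4", "copilot", "gemini"]                 # audited chatbots
LANGUAGES = ["bg", "de", "el", "en", "es", "fr", "it", "pl", "pt", "ro"]   # ten EU languages (illustrative pick)


def ask(model: str, prompt: str) -> str:
    """Placeholder for a vendor-specific API call (OpenAI, Copilot, Gemini, ...)."""
    raise NotImplementedError


def run_audit(questions: dict[str, list[str]], out_path: str = "audit_responses.csv") -> None:
    """Send every question, in every language, to every chatbot and log the raw answers.

    `questions` maps a language code to its ten translated election questions,
    so 4 models x 10 languages x 10 questions = 400 prompts in total.
    """
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["model", "language", "question", "response"])
        for model, lang in itertools.product(MODELS, LANGUAGES):
            for question in questions[lang]:
                writer.writerow([model, lang, question, ask(model, question)])
    # Responses are then rated manually (e.g., correct / partially incorrect / incorrect / refusal).
```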


3. Toxicity in political speech, disinformation narratives, and far-right online campaigning 

DRI monitored social media ahead of the European elections to track trends in online debates and identify country-specific instances of toxic content, disinformation, and manipulation. Through our Social Media Monitoring Hub, eight researchers analysed posts from political figures across Facebook, Instagram, X, and TikTok. We also explored WhatsApp activity across German public groups during the months leading up to the elections and examined political campaigns by AfD and RN on X. Our findings were shared via an interactive dashboard, three election briefs, and a final report. 
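A minimal sketch of the kind of keyword filtering and toxicity scoring used in this monitoring is shown below. The keyword list, the threshold, and the use of the open-source Detoxify model are illustrative stand-ins; the actual keyword lists, models, and thresholds applied by the Hub's researchers are not specified in this report.

```python
"""Minimal sketch: keyword filtering plus toxicity scoring of collected posts.

Assumptions (illustrative, not from this report): the keyword list, the review
threshold, and the open-source Detoxify model stand in for the Hub's own
keyword lists, models, and thresholds.
"""
from detoxify import Detoxify  # pip install detoxify

ELECTION_KEYWORDS = ["european parliament", "ep2024", "élections", "wahlen"]  # illustrative
TOXICITY_THRESHOLD = 0.7  # illustrative cut-off for sending posts to manual review


def flag_posts(posts: list[dict]) -> list[dict]:
    """Keep election-related posts and attach a toxicity score for researcher review.

    Each post is a dict with at least 'id' and 'text' keys, e.g. exported from
    platform research APIs or third-party collection tools.
    """
    model = Detoxify("multilingual")  # multilingual model covers several EU languages
    relevant = [p for p in posts if any(k in p["text"].lower() for k in ELECTION_KEYWORDS)]
    for post in relevant:
        post["toxicity"] = float(model.predict(post["text"])["toxicity"])
    # Automated scores only prioritise content; final classification is done by researchers.
    return [p for p in relevant if p["toxicity"] >= TOXICITY_THRESHOLD]
```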

Mitigations in place
Mitigations in place – or planned – at time of reporting: 

  1. Provided evidence to enforcement authorities on identified threats 

We provided the European Commission with our research and findings as evidence for ongoing enforcement processes. We shared our chatbot audits to support investigations into VLOPs and VLOSEs regarding their policies and enforcement of regulations on generative AI.
 
2. Urged platforms to revise and strengthen their terms and conditions to effectively combat the identified threats 

DRI published a brief elaborating on the policy implications of impersonation and inauthentic political accounts on TikTok, highlighting their threat to civic discourse and EU elections by misleading voters, distorting perceptions of support, and bypassing TikTok’s stricter policies on political accounts. We recommended that TikTok and other VLOPs update their policies to prevent fan account abuse, implement features to stop impersonation, mandate verified badges for political accounts in the EU, conduct pre-election reviews, and ensure consistent enforcement of guidelines. Following the brief, we met with TikTok representatives in Berlin on 12 August to discuss our findings and recommendations. 
The big loophole (and how to close it): How TikTok's policy and practice invites murky political accounts | 22.07.2024

3. Raised awareness about threats and built networks with relevant stakeholders through webinars and roundtables 

Throughout our monitoring of the EP elections, we worked closely with key stakeholders. A central initiative was establishing the Social Media Monitoring Hub, staffed by eight researchers from France, Germany, Spain, Poland, Hungary, Italy, Sweden, and Romania, to track country-specific issues and online threats in the months leading up to the election. We also raised public awareness about these threats through webinars and events. In April, DRI organised a 90-minute online exchange with 34 participants from EU institutions, platforms, and NGOs to assess risks ahead of the European Parliament elections. Breakout groups shared insights on emerging online threats and identified potential risks. We continued these efforts with a post-election webinar to review how campaigns unfolded, identify opinion leaders, and explore the use of generative AI in the elections.