TikTok

Report March 2025

TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate on a shared set of facts and reality when it comes to topics that impact people's safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals; this includes making sure our users have a trustworthy experience on TikTok. As part of creating a trustworthy environment, transparency is essential to enable online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our report, which can be downloaded by following the link below.

Download PDF

Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment:
Measure 2.1, Measure 2.2, Measure 2.3, Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • In order to improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.
  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
  • Expanded the functionality (including choice and availability) of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA.
  • Upgraded our IAB Sweden Gold Standard certification to version 2.0.
  • Continued to engage in the Task-force and its working groups and subgroups, such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further developments in the next report.
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media and Dangerous Conspiracy Theories) which advertisers also need to comply with. Towards the end of 2024, we launched a fifth granular policy covering climate misinformation.
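
As a purely illustrative sketch, the pre-publication review described above can be modelled as evaluating each ad against a set of policy checks before it is allowed to run. Every function, name, and heuristic below is hypothetical and stands in for TikTok's actual review, which combines automated and human moderation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Ad:
    ad_id: str
    text: str

# Hypothetical placeholder checks; each returns True if the ad violates
# the named policy. Real enforcement is far richer than keyword matching.
def violates_political_content(ad: Ad) -> bool:
    return "vote for" in ad.text.lower()

def violates_medical_misinformation(ad: Ad) -> bool:
    return "miracle cure" in ad.text.lower()

POLICIES: list[tuple[str, Callable[[Ad], bool]]] = [
    ("political_content", violates_political_content),
    ("medical_misinformation", violates_medical_misinformation),
    # ... dangerous misinformation, synthetic and manipulated media,
    # dangerous conspiracy theories, climate misinformation, etc.
]

def review_ad(ad: Ad) -> tuple[bool, list[str]]:
    """Pre-publication review: approve only if no policy is violated."""
    violations = [name for name, check in POLICIES if check(ad)]
    return (not violations, violations)

approved, reasons = review_ad(Ad("a1", "This miracle cure works!"))
print(approved, reasons)  # False ['medical_misinformation']
```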
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Methodology of data measurement:

We have set out the number of ads removed from our platform for violating our political content policies, as well as our four granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media, and dangerous conspiracy theories. We launched our climate misinformation policy towards the end of the reporting period, and we look forward to sharing data on it, alongside our four other granular misinformation ad policies, once we have a full reporting period of data.

The majority of ads that violate our newly launched misinformation policies would have been removed under our existing policies. In cases where an ad is found to violate both other policies and these additional misinformation policies, the removal is counted under the older policy. The table below therefore shows, in its final column, only the number of ads removed where the sole reason was one of these four additional misinformation policies; it does not include ads already removed under our existing policies or where the misinformation policies were not the driving factor for the removal.
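
To make this counting rule concrete, the following minimal sketch (our own illustration, not TikTok's actual reporting code; all policy names are hypothetical) attributes each removal to a single reporting bucket, with pre-existing policies taking precedence when both apply:

```python
from collections import Counter

# Hypothetical policy groupings for illustration.
EXISTING_POLICIES = {"political_content"}
MISINFO_POLICIES = {
    "medical_misinformation",
    "dangerous_misinformation",
    "synthetic_and_manipulated_media",
    "dangerous_conspiracy_theories",
}

def attribute_removal(violated: set[str]) -> str:
    """Count each removed ad in exactly one reporting bucket."""
    if violated & EXISTING_POLICIES:
        # Older policies take precedence when both apply.
        return "existing_policy"
    if violated & MISINFO_POLICIES:
        # Counted in the misinformation column only when it was the sole reason.
        return "granular_misinformation_only"
    return "other"

removals = [
    {"political_content", "medical_misinformation"},  # counted under existing policy
    {"medical_misinformation"},                       # counted in misinformation column
]
print(Counter(attribute_removal(v) for v in removals))
# Counter({'existing_policy': 1, 'granular_misinformation_only': 1})
```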

The data below suggests that our existing policies (such as our political content policy) already cover the majority of harmful misinformation ads, due to their expansive coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. 

Country | Ad removals under the political content ad policy | Ad removals under the four granular misinformation ad policies
Austria | 746 | 3
Belgium | 1152 | 1
Bulgaria | 328 | 7
Croatia | 3 | 0
Cyprus | 128 | 0
Czech Republic | 111 | 0
Denmark | 409 | 0
Estonia | 90 | 0
Finland | 235 | 0
France | 4621 | 7
Germany | 6498 | 63
Greece | 911 | 8
Hungary | 512 | 2
Ireland | 565 | 1
Italy | 2781 | 8
Latvia | 131 | 4
Lithuania | 19 | 0
Luxembourg | 86 | 0
Netherlands | 1179 | 3
Poland | 1118 | 4
Portugal | 438 | 1
Romania | 10698 | 2
Slovakia | 145 | 4
Slovenia | 52 | 0
Spain | 2558 | 17
Sweden | 752 | 0
Norway | 474 | 2
Total EU | 36266 | 135
Total EEA | 36740 | 137