TikTok

Report September 2025

Submitted

TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes.

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to improve and enforce our five granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2024 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
    • Climate Misinformation
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

N/A

SLI 1.1.2

Please insert the relevant data

N/A

Country Methodology of data measurement Euro value of ads demonetised
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

N/A

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

We partner with a number of industry leaders to provide controls and transparency tools to advertising buyers with regard to the placement of their ads:

Controls: We offer pre-campaign solutions to advertisers so they can put additional safeguards in place before their campaign goes live to mitigate the risk of their advertising being displayed adjacent to certain types of user-generated content. These measures are in addition to the Community Guidelines, which provide overarching rules around the types of content that can appear on TikTok and are eligible for the For You feed:

  • TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 jurisdictions in the EEA, is embedded directly in TikTok Ads Manager (the main system through which advertisers purchase ads), and its functionality has been expanded in various EEA countries. More details can be found here. The Inventory Filter is informed by Industry Standards and policies, which include topics that may be susceptible to disinformation. Additionally, advertisers can (see the illustrative sketch after this list):
    • Selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
    • Exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
  • TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks across all risk levels (available in France and Germany). Some misinformation content may be captured and filtered out by these industry-standard categories, such as “Sensitive Social Issues”.
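To illustrate how list-based exclusion controls of this general kind work, here is a minimal sketch in Python. The function and parameter names are hypothetical illustrations for this report, not TikTok's implementation or API.

```python
# Conceptual sketch of list-based placement controls: an ad is not served
# adjacent to videos, or on profile feeds, that the advertiser has excluded.
# All names are hypothetical; this is not TikTok's implementation.
def is_placement_allowed(video_id: str, profile_id: str,
                         video_exclusions: set[str],
                         profile_exclusions: set[str]) -> bool:
    """Return True if an ad may run adjacent to this video / profile feed."""
    return (video_id not in video_exclusions
            and profile_id not in profile_exclusions)

# Example: the advertiser has excluded one video and one profile.
print(is_placement_allowed("v123", "p456",
                           video_exclusions={"v999"},
                           profile_exclusions={"p456"}))  # False
```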

Transparency: We have partnered with third parties to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in the For You feed, against their chosen brand suitability parameters:

  • Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.
  • IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post-campaign measurement is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the Industry Standards.
  • DoubleVerify: We partner with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is actively working with us to expand its suite of brand suitability and media quality solutions on the platform, and is available in 27 EU countries.

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.


QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

N/A

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.


QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

N/A

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

N/A

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.


QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

N/A

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

N/A

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

N/A

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

N/A

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to enforce and improve our five granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2024 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
    • Climate Misinformation
  • Enabled advertisers to selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
  • Enabled advertisers to exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic, and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, and Dangerous Conspiracy Theories), which advertisers also need to comply with. In December 2024, we launched a fifth granular policy covering Climate Misinformation.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

We have set out the number of ads that have been removed from our platform for violation of our political content policies, as well as our five granular policies on Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, Dangerous Conspiracy Theories, and Climate Misinformation. We launched our Climate Misinformation policy in December 2024.

The majority of ads that violate our newly launched misinformation policies would have been removed under our existing policies. In cases where an ad violates both another policy and one of these additional misinformation policies, the removal is counted under the older policy. The second column below therefore shows only the number of ads removed where the sole reason was one of these five additional misinformation policies; it does not include ads already removed under our existing policies or ads where the misinformation policies were not the driving factor for the removal.

The data below suggests that our existing policies (such as Political Content) already cover the majority of harmful misinformation ads due to their expansive coverage.
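To make the counting rule above concrete, the following is a minimal sketch (in Python, with illustrative policy names; it makes no claim to reflect TikTok's internal systems) of single attribution: an ad that violates both an existing policy and a granular misinformation policy is counted once, under the existing policy.

```python
# Illustrative sketch of the single-attribution counting rule described above.
from collections import Counter

EXISTING_POLICIES = {"Political Content"}  # illustrative subset of older policies
MISINFO_POLICIES = {
    "Medical Misinformation", "Dangerous Misinformation",
    "Synthetic and Manipulated Media", "Dangerous Conspiracy Theories",
    "Climate Misinformation",
}

def attribute_removal(violated: set[str]) -> str | None:
    """Return the single policy a removal is counted under (older policies win)."""
    existing = violated & EXISTING_POLICIES
    if existing:
        return sorted(existing)[0]
    misinfo = violated & MISINFO_POLICIES
    if misinfo:
        return sorted(misinfo)[0]  # counted here only if misinfo was the sole reason
    return None

removals = Counter()
for ad in [{"Political Content", "Medical Misinformation"}, {"Climate Misinformation"}]:
    policy = attribute_removal(ad)
    if policy:
        removals[policy] += 1
print(removals)  # Counter({'Political Content': 1, 'Climate Misinformation': 1})
```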

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. 

Country Number of ad removals under the political content ad policy Number of ad removals under the five granular misinformation ad policies
Austria 1,634 11
Belgium 2,447 4
Bulgaria 880 9
Croatia 705 0
Cyprus 585 0
Czech Republic 859 0
Denmark 796 2
Estonia 307 0
Finland 1,033 2
France 16,026 46
Germany 18,041 72
Greece 2,420 20
Hungary 1,647 111
Ireland 1,263 8
Italy 8,150 27
Latvia 795 2
Lithuania 521 4
Luxembourg 250 1
Malta 0 0
Netherlands 3,028 30
Poland 5,699 19
Portugal 1,430 1
Romania 13,989 23
Slovakia 500 2
Slovenia 230 2
Spain 6,526 54
Sweden 1,659 8
Iceland 3 0
Liechtenstein 0 0
Norway 1,071 3
EU Level 91,420 458
EEA Level 92,494 461

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:


  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Synthetic and Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary. 

TikTok also operates a "recall" process whereby ads already on TikTok will undergo an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews on random samples of ads to ensure its processes are functioning as expected.
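For illustration only, the review flow described above can be sketched as follows. The function names, threshold, and sampling rate are assumptions made for the sketch, not TikTok's actual systems or values.

```python
# Illustrative sketch of the ad review flow described above: pre-posting review,
# user reports, an impression-threshold "recall" re-review, and random sampling.
import random

RECALL_IMPRESSION_THRESHOLD = 100_000  # assumed value, for illustration only
SAMPLE_RATE = 0.01                     # assumed value, for illustration only

def automated_review(ad: dict) -> bool:
    """Placeholder for automated checks against the Advertising Policies."""
    return True  # stub

def human_review(ad: dict) -> bool:
    """Placeholder for review by a human moderator."""
    return True  # stub

def pre_posting_review(ad: dict) -> bool:
    # Ads are reviewed before going "live", via automated and human moderation.
    return automated_review(ad) and human_review(ad)

def post_posting_check(ad: dict, impressions: int, was_reported: bool) -> bool:
    # A live ad is re-reviewed if reported, if it crosses an impression
    # threshold (the "recall" process), or if it falls into a random sample.
    needs_rereview = (was_reported
                      or impressions >= RECALL_IMPRESSION_THRESHOLD
                      or random.random() < SAMPLE_RATE)
    return human_review(ad) if needs_rereview else True
```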

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:

  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Synthetic and Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary.

TikTok also operates a "recall" process whereby ads already on TikTok will go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews on random samples of ads to ensure its processes are functioning as expected.

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

We are clear with advertisers that their ads must comply with our strict ad policies (see TikTok Business Help Centre). We explain that all ads are reviewed before they go live on our platform - usually within 24 hours. Ads already on TikTok may go through an additional stage of review if they are reported, if certain conditions are met (e.g., reaching certain impression thresholds), or because of random sampling conducted at TikTok’s own initiative.

Where an advertiser has violated an ad policy, they are informed by way of a notification. This is visible in their TikTok Ads Manager account and/or sent by email (if they have provided a valid email address), or where an advertiser has booked their ad through a TikTok representative, then the representative will inform the advertiser of any violations. Advertisers are able to make use of functionality to appeal rejections of their ads in certain circumstances. 

As part of our overarching DSA compliance programme, we have improved how we notify advertisers and increased transparency. Notifications of restrictions include the restriction itself, the reason for the restriction, whether we made that decision by automated means, how we came to detect the violation (e.g. as a result of a user report or proactive TikTok initiatives), and what their rights of redress are. Advertisers can access online functionality to appeal restrictions on their account or ads. These appeals are then also reviewed against our ad policies, and additional information may be provided to advertisers to help them understand the violation and what to do about it.
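Purely as an illustration of the elements such a notification contains, the structure below is a hypothetical sketch in Python; the field names are ours, not TikTok's actual schema.

```python
# Hypothetical sketch of the information included in a restriction notification,
# mirroring the elements listed above. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class RestrictionNotification:
    restriction: str          # e.g. "ad rejected"
    reason: str               # which ad policy was violated
    automated_decision: bool  # whether the decision was made by automated means
    detection_source: str     # e.g. "user report" or "proactive TikTok initiative"
    redress: str              # the advertiser's rights of redress / appeal route

notice = RestrictionNotification(
    restriction="ad rejected",
    reason="Climate Misinformation ad policy",
    automated_decision=False,
    detection_source="user report",
    redress="appeal via TikTok Ads Manager",
)
```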

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage in the Task-force and all its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

As set out later in this report, we cooperate with a number of third parties to facilitate the flow of information that may be relevant for tackling purveyors of harmful misinformation. This information is shared internally to help ensure a consistent approach across our platform.

We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chaired. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation, and we collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

We work with industry partners, in appropriate fora, to discuss common standards and definitions that support consistency in categorising content, adjacency, and measurement-related topics. We work closely with IAB Sweden, IAB Ireland, and other organisations such as TAG in the EEA and globally. We are also on the board of the Brand Safety Institute.

We continue to share relevant insights and metrics within our quarterly transparency reports, which aim to inform industry peers and the research community. We continue to engage in the subgroups set up for insights sharing between signatories and the Commission.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

We continue to work closely with IAB Sweden, IAB Ireland, and other organisations such as TAG in the EEA and globally.

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?



If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 13

Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.

We signed up to the following measures of this commitment

Measure 13.1 Measure 13.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 13.1

Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.

N/A



Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our AI-generated content label for creators, and our implementation of C2PA Content Credentials, we completed our AIGC media literacy campaign series in Mexico and the UK. These campaigns in Brazil, Germany, France, Mexico and the UK, which ran across H2 2024 and H1 2025, were developed with guidance from expert organisations like Mediawise and WITNESS to teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
  • Continued to join industry partners as a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • Continued to participate in the working groups on the integrity of services and Generative AI.
  • We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
  • We continue to update and refine our policies around Covert Influence Operations in order to stay agile to changing behaviours and tactics on the platform and to ensure more granular detail is enshrined in our policy rationales. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Our Integrity & Authenticity policies in our Community Guidelines safeguard against harmful misinformation (see QRE 18.2.1) and also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our Integrity & Authenticity policies, which address Spam and Deceptive Account Behaviours, expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.

We also do not allow impersonation, including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our Integrity & Authenticity policies, which address fake engagement, do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics) is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora, or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
As noted above, our policies prohibit impersonation: accounts that pose as another real person or entity, or that present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner.
In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions, for example) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts on our For You Feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.  

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

  • Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation);
  • Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers);
  • Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and
  • Coordinated mass reporting of non-violative opposing content or accounts.

We fight against CIOs, as our policies prohibit attempts to sway public opinion while also misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose.

When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop appropriate product and policy solutions as appropriate in the long term. We have published details of all the CIO networks we identified and removed in H1 2025 in a dedicated monthly report within our Transparency Centre here.

In H1 2025, under our Deceptive Behaviours policies, we worked on a number of initiatives to continue developing and adapting our strategies for combating manipulative behaviours and practices, and we continue to make progress through several updates and development schemes.

Use of “hack and leak” operations (which may or may not include doctored content)

We have a number of policies that address hack-and-leak related threats (some examples are below):
  • Our hack-and-leak policy aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
  • Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation.
  • Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure.
  • Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation.
  • Our Trade of Regulated Goods and Services policy prohibits the trading of hacked goods.

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...) 

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes or use a particular artistic style, such as painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC that features:
  • The likeness of young people or realistic-appearing people under the age of 18.
  • The likeness of adult private figures, if we become aware that it was used without their permission.
  • Misleading AIGC or edited media that falsely show:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle we make available to users. We also provide functionality for users to report suspected undisclosed branded content; this reminds the user who posted the content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer in our Commercial Disclosures and Paid Promotion policy in our March 2023 Community Guidelines refresh, adding more information about how we police this policy and providing specific examples.

We also don't allow paid political advertising. This includes creators being compensated for making branded political content, and the use of other promotional tools on the platform, such as Promote. We prohibit advertising of any kind by political figures and entities, and suspected paid political advertising is ineligible for the For You feed.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our Platform before they are reported to us by users or third parties. 

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To that end, we have created and use detection models and rule engines that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters or similar inauthentic behaviours.
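For illustration, a rule engine of the general kind described above might combine simple registration-time signals as in the sketch below. The signal names and thresholds are assumptions made for the example, not TikTok's detection logic.

```python
# Minimal sketch of a registration-time rule engine that flags likely
# inauthentic accounts. Signals and thresholds are assumed, for illustration.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    accounts_from_same_device: int   # device re-use across registrations
    signups_per_minute: float        # burstiness of registrations from one source
    matches_malicious_pattern: bool  # e.g. a known bad naming or traffic pattern

def flag_inauthentic(s: SignupSignals) -> bool:
    """Return True if a registration should be blocked or queued for review."""
    if s.matches_malicious_pattern:
        return True
    # Illustrative thresholds only.
    return s.accounts_from_same_device > 5 or s.signups_per_minute > 10

print(flag_inauthentic(SignupSignals(8, 0.2, False)))  # True
```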

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the Integrity & Authenticity policies in our Community Guidelines, including providing case banks of harmful misinformation claims to support their moderation work, and allowing them to route new or evolving content to our fact-checking partners for assessment. 

In addition, where content reaches certain popularity levels in terms of the number of video views, it will be flagged for further review. Such a review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our Community Guidelines including our Integrity & Authenticity policies.

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, increasing the number of videos removed for policy violations. This also decreased the number of views per violative video over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We have also set up specially trained teams focused on investigating and detecting CIOs on our platform. We have built international trust & safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources, and collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would be violative of platforms' guidelines by itself. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
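As a purely conceptual sketch of this behaviour-and-linkage analysis, the example below groups accounts that share coordination signals of the kind listed above (shared devices, shared narratives, concealed locations). The scoring and threshold are assumptions made for illustration, not TikTok's method.

```python
# Conceptual sketch: pairing accounts by shared technical and behavioural links
# to surface candidate coordinated clusters for human investigation.
from itertools import combinations

accounts = {
    "a1": {"device": "d1", "narrative": "n1", "conceals_location": True},
    "a2": {"device": "d1", "narrative": "n1", "conceals_location": True},
    "a3": {"device": "d2", "narrative": "n2", "conceals_location": False},
}

def linkage_score(x: dict, y: dict) -> int:
    """Count shared coordination signals between two accounts."""
    score = 0
    score += x["device"] == y["device"]        # technical link: same device
    score += x["narrative"] == y["narrative"]  # behavioural link: same narrative
    score += x["conceals_location"] and y["conceals_location"]  # deception signal
    return score

# Pairs sharing at least two signals become candidates for human review.
candidates = [(a, b) for (a, x), (b, y) in combinations(accounts.items(), 2)
              if linkage_score(x, y) >= 2]
print(candidates)  # [('a1', 'a2')]
```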

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

The implementation of our policies is ensured by different means, including specifically-designed tools (such as toggles to disclose branded content - see QRE 14.1.1) or human investigations to detect deceptive behaviours (for CIO activities - see QRE 14.1.2).

The implementation of these policies is also ensured through enforcement measures applied in all Member States. 

CIO investigations are resource-intensive and require in-depth analysis to ensure high confidence in proposed actions. Where our teams have the necessary high degree of confidence that an account is engaged in CIO or is connected to networks we took down in the past as part of a CIO, it is removed from our Platform.

Similarly, where our teams have a high degree of confidence that specific content violates one of our TTP-related policies (see QRE 14.1.1), such content is removed from TikTok.

Lastly, we may reduce the discoverability of some content, including by making videos ineligible for recommendation in the For You feed section of our platform. This is, for example, the case for content that tricks or manipulates users in order to inauthentically increase followers, likes, or views.

Full metrics from this QRE (and QREs 14.2.2 and 14.2.4) can be found in our full report, linked at the top of this page. 

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

We collaborated as part of the Integrity of Services working group to set up the first list of TTPs. We continue to provide updates on observed TTPs through our monthly CIO transparency reporting, including observations on novel and emerging tradecraft.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our AI-generated content label for creators, and our implementation of C2PA Content Credentials, we completed our AIGC media literacy campaign series in Mexico and the UK. These campaigns in Brazil, Germany, France, Mexico and the UK, which ran across H2 2024 and H1 2025, were developed with guidance from expert organisations like Mediawise and WITNESS to teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
  • Continued to join industry partners as a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Our Edited Media and AI-Generated Content (AIGC) policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

While we welcome the creativity that new AI may unlock, in line with our updated policy, users must proactively disclose when their content is AI-generated or manipulated but shows realistic scenes (i.e. fake people, places, or events that look like they are real). We launched an AI toggle in September 2023, which allows users to self-disclose AI-generated content when posting. When this has been turned on, a tag “Creator labelled as AI-generated” is displayed to users. Alternatively, this can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’. 

We also automatically label content made with TikTok effects if they use AI. TikTok may automatically apply the "AI-generated" label to content we identify as completely generated or significantly edited with AI. This may happen when a creator uses TikTok AI effects or uploads AI-generated content that has Content Credentials attached, a technology from the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials attach metadata to content that we can use to recognise and label AIGC instantly. Once content is auto-labelled as AI-generated, users are unable to remove the label from the post.
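A simplified sketch of the labelling logic just described follows; the parameter names are illustrative, and real Content Credentials detection involves parsing C2PA metadata rather than a boolean flag.

```python
# Simplified sketch of the AIGC labelling decision described above: an
# auto-label applies when TikTok AI effects were used or C2PA Content
# Credentials mark the upload as AI-generated; otherwise the creator's
# self-disclosure toggle applies. Names are illustrative, not TikTok's schema.
def aigc_label(used_tiktok_ai_effect: bool,
               has_c2pa_ai_credential: bool,
               creator_toggle_on: bool) -> str | None:
    if used_tiktok_ai_effect or has_c2pa_ai_credential:
        return "AI-generated"  # auto-label; the user cannot remove it
    if creator_toggle_on:
        return "Creator labelled as AI-generated"  # self-disclosed
    return None  # no label applied

print(aigc_label(False, True, False))  # AI-generated
```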

We do not allow: 
  • AIGC that shows the likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying or privacy concerns, including those related to personally identifiable information or likeness to private individuals. 
  • AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission.
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:
  • We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
  • We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.
We are also proud to be a launch partner of the Partnership on AI's Responsible Practices for Synthetic Media.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Actively engaged with the Crisis Response working group, sharing insights and learnings about relevant areas, including covert influence operations (CIOs).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

N/A

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

N/A

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We publish all of the CIO networks we identify and remove in our transparency reports here. As new deceptive behaviours emerge, we’ll continue to evolve our response, strengthen enforcement capabilities, and publish our findings.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Rolled out three new ongoing general media literacy and critical thinking skills campaigns in the EU in collaboration with our fact-checking and media literacy partners:
    • Germany: Deutsche Presse-Agentur (dpa)
    • Romania: Funky Citizens, Digi Media, and Libertatea
    • Poland: Demagog, FakeNews.pl, Radio Zet, and Orientuj.sie

This brings the number of general media literacy and critical thinking skills campaigns in Europe to 14 (Denmark, Finland, France, Georgia, Germany, Ireland, Italy, Romania, Spain, Sweden, Moldova, Netherlands, Poland and Portugal).

  • We ran 9 temporary media literacy election integrity campaigns in advance of elections in the region, most in collaboration with our fact-checking and media literacy partners:
    • 7 in the EU 
      • Croatia (local election): Faktograf
      • Croatia (presidential election): Faktograf
      • Germany: Deutsche Presse-Agentur (dpa)
      • Latvia: Lead Stories
      • Poland: Demagog & FakeNews.pl
      • Portugal: Poligrafo
      • Romania: Funky Citizens 
    • 2 in wider European/regionally relevant countries 
      • Albania: Internews Kosova (Kallxo)
      • Greenland: Logically Facts
  • During the reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo. 
  1. Albania: Internews Kosova (Kallxo)
  2. Belarus: Belarusian Investigative Center
  3. Germany: Deutsche Presse-Agentur (dpa)
  4. Greenland: Logically Facts
  5. Kosovo: Internews Kosova (Kallxo)
  6. Poland: Demagog
  7. Portugal: Poligrafo
  • Launched a revamped version of our Holocaust Education Campaign, providing a dedicated hub within the app, developed in partnership with the World Jewish Congress and UNESCO, with new videos from our partners designed to inform our community about the Holocaust. This includes first-hand witness accounts from Holocaust survivors, videos of users visiting Holocaust memorial sites, testimonials from curators sharing stories about Holocaust victims, and more. Our community can access the hub through TikTok searches related to the Holocaust and on relevant videos.
  • Launched 2 new temporary search guides to provide users with guidance about interacting with sensitive content, and authoritative information sources, when events are unfolding rapidly.
    • Italy & Portugal: Pope Francis, Health Status, 14 Mar 2025 - 12 May 2025
    • Ireland & UK: Ballymena Riots, 13 Jun 2025 - 24 June 2025
  • Launched a new temporary in-app natural disaster media literacy search guide for the Reunion Cyclone Garance between 4 March and 4 April 2025 and continued our temporary search guide for the Mayotte Cyclone until 14 Feb 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third party information about aid and relief support. 
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

In addition to actioning content that violates our Integrity & Authenticity policies, we continue to dedicate resources to: expanding our in-app measures that show users additional context on certain content (e.g., natural disasters and rapidly unfolding events); redirecting them to authoritative information; and making these tools available in 23 EU official languages (plus, for EEA users, Norwegian & Icelandic).

We work with external experts to combat harmful misinformation. For example, we work with the World Health Organisation (WHO) on medical information, and our global fact-checking partners, taking into account their feedback, as well as user feedback, to continually identify new topics and consider which tools may be best suited for raising awareness around that topic.

We deploy a combination of in-app user intervention tools on topical issues such as elections, the Israel-Hamas Conflict, Holocaust Education, Mpox and the War in Ukraine.

Video notice tags. 

A video notice tag is an information bar at the bottom of a video, applied automatically to videos containing a specific word or hashtag (or set of hashtags). The information bar is clickable and invites users to “Learn more about [the topic]”. Users are directed to an in-app guide or a reliable third-party resource, as appropriate.
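As an illustration of this trigger mechanism, here is a minimal sketch in Python. The mapping, trigger terms, and destinations are all hypothetical; it shows only the general shape of matching a video's caption and hashtags against configured triggers.

```python
# Hypothetical sketch of hashtag-triggered notice tags: a mapping from
# trigger terms to a banner and destination. All names are illustrative.

NOTICE_TAGS = {
    # trigger term (lowercased) -> (banner text, destination)
    "#climatechange": ("Learn more about climate change", "in_app_guide:climate"),
    "holocaust": ("Learn more about the Holocaust", "in_app_guide:holocaust"),
}

def notice_tag_for(caption: str, hashtags: list[str]):
    """Return the first matching notice tag for a video, if any."""
    searchable = [caption.lower()] + [h.lower() for h in hashtags]
    for trigger, tag in NOTICE_TAGS.items():
        if any(trigger in text for text in searchable):
            return tag  # rendered as a clickable information bar
    return None

print(notice_tag_for("My trip to the memorial", ["#holocaust", "#history"]))
```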

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

In order to raise awareness among our users about specific topics and empower them, we run a variety of on- and off-platform media literacy campaigns. Our approach may differ depending on the topic. We localise certain campaigns (e.g., for elections), meaning we collaborate with national partners to develop an approach that best resonates with the local audience. For other campaigns, such as the War in Ukraine, our emphasis is on scalability and connecting users to accurate and trusted resources.

Below are examples of the campaigns we have most recently run in-app which have leveraged a number of the intervention tools we have outlined in our response to QRE 17.1.1 (e.g. search interventions and video notice tags).

(I) Promoting election integrity. In addition to the election integrity pages on TikTok's Safety Center and Transparency Center, we maintain a dedicated Global Elections Hub, which provides an overview of our overall approach to protecting TikTok through elections, including the most relevant policies that we use to protect the platform during elections, our media literacy features, and the continuous updates we make to support our community in real time. Along with the hub, we launched media literacy campaigns in advance of several elections in the EU and wider Europe.

  • Croatia Presidential Election 2024: From 6 Dec 2024 - 14 Jan 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Croatia presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Faktograf.
  • German Federal Election 2025: From 16 Dec 2024 - 3 Mar 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).
  • Greenland General Election 2025: From 18 Feb 2025 - 12 Mar 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Greenland general election. The page contained a section about following our Community Guidelines, with a link to our Danish fact-checking partner, Logically Facts, for digital literacy resources.
  • Finland Local & Municipal Elections 2025: From 4 Apr 2025 - 14 Apr 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Finnish elections and a link to a government website with election information. The page contained a section about following our Community Guidelines, with a link to the Finnish National Agency for Education (EDUFI) for digital literacy resources.
  • Romania Presidential Election 2025: From 11 Apr 2025 - 23 May 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Romanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Funky Citizens and media agencies Digi Media and Libertatea.
  • Albania General Election 2025: From 14 Apr 2025 - 12 May 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Albanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Kallxo.
  • Croatia Local Elections 2025: From 17 Apr 2025 - 5 Jun 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Croatian local election. The page contained a section about following our Community Guidelines, with a link to our Croatian fact-checking partner, Faktograf, for digital literacy resources.
  • Portugal Legislative Election 2025: From 18 Apr 2025 - 2 Jun 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Portuguese election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Poligrafo.
  • Poland Presidential Election 2025: From 18 Apr 2025 - 6 Jun 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Polish election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Demagog, fact checker FakeNews.pl, and media partners Radio Zet and Orientuj.sie.
  • Latvia Local & Municipal Elections 2025: From 9 May 2025 (ongoing at date of publication), we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Latvian elections. The page contained a section about following our Community Guidelines, with a link to our Latvian fact-checking partner, Lead Stories, for digital literacy resources.

(II) Election Speaker Series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States, and 4 in Albania, Belarus, Greenland, and Kosovo.
  1. Albania: Internews Kosova (Kallxo)
  2. Belarus: Belarusian Investigative Center
  3. Germany: dpa
  4. Greenland: Logically Facts
  5. Kosovo: Kallxo
  6. Poland: Demagog
  7. Portugal: Poligrafo

(III) Media literacy (General). We rolled out 3 new ongoing general media literacy and critical thinking skills campaigns in the EU in collaboration with our fact-checking and media literacy partners:
  • Germany: Deutsche Presse-Agentur (dpa)
  • Romania: Funky Citizens, Digi Media, and Libertatea
  • Poland: Demagog, FakeNews.pl, Radio Zet, and Orientuj.sie

This brings the number of general media literacy and critical thinking skills campaigns in Europe to 14 (Denmark, Finland, France, Georgia, Germany, Ireland, Italy, Romania, Spain, Sweden, Moldova, Netherlands, Poland and Portugal).

(IV) Media literacy (War in Ukraine). We continue to serve 17 localised media literacy campaigns specific to the war in Ukraine in: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania, Czechia, Poland, Croatia, Slovenia, Bulgaria, Germany, Austria, Bosnia, Montenegro, and Serbia.
  • Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
  • Partnered with fakenews.pl: Poland.
  • Partnered with Correctiv: Germany, Austria.

Through these media literacy campaigns, users searching for keywords relating to the war in Ukraine on TikTok are directed to tips prepared in partnership with local media literacy bodies and our trusted fact-checking partners, to help them identify misinformation and prevent its spread on the platform.

(V) Israel-Hamas conflict. To help raise awareness and to protect our users, we have search interventions which are triggered when users search for neutral terms related to this topic (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources, and also direct them to well-being resources.

(VI) Climate literacy.
  • Our climate change search intervention tool is available in 23 official EU languages (plus Norwegian and Icelandic for EEA users). It redirects users looking for climate change-related content to authoritative information and encourages them to report any potential misinformation they see.
  • As of August 2024, popular hashtags #ClimateChange, #SustainableLiving, and #ClimateAction have more than 1.2 million associated posts on TikTok, combined.

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

As documented in the TikTok Safety Center Safety Partners page and TikTok’s Advisory Councils, we work with an array of industry experts, non-governmental organisations, and industry associations around the world in our commitment to building a safe platform for our community. These include media literacy bodies, who help us develop campaigns that educate users and redirect them to authoritative resources, as well as our fact-checking partners. Specific examples of partnerships within the campaigns and projects set out in QRE 17.2.1 are:

(I) Promoting election integrity. We partner with various media organisations and fact-checkers to promote election integrity on TikTok. For more detail about the input our fact-checking partners provide please refer to QRE 30.1.3.
  • Outside of our fact-checking program, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 9 temporary media literacy election integrity campaigns, in advance of elections in the region, through our in-app Election Centers:
    • 7 in the EU
      • Croatia (local election): Faktograf
      • Croatia (presidential election): Faktograf
      • Germany: Deutsche Presse-Agentur (dpa)
      • Latvia: Lead Stories
      • Poland: Demagog & FakeNews.pl
      • Portugal: Poligrafo
      • Romania: Funky Citizens
    • 2 in wider European/regionally relevant countries
      • Albania: Internews Kosova (Kallxo)
      • Greenland: Logically Facts
  • Election speaker series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations: 
  1. Albania: Internews Kosova (Kallxo)
  2. Belarus: Belarusian Investigative Center
  3. Germany: Deutsche Presse-Agentur (dpa)
  4. Greenland: Logically Facts
  5. Kosovo: Kallxo
  6. Poland: Demagog
  7. Portugal: Poligrafo

(II) War in Ukraine.
We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners Correctiv in Austria and Germany, Fakenews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia, and Bulgaria.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models. 
  • Began testing large language models (LLMs) to further support proactive moderation at scale. Because LLMs can comprehend human language and perform highly specific, complex tasks, we are better able to moderate nuanced areas like misinformation by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners (see the sketch after this list).
  • Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, as well as attendance at external events to share expertise and support continued professional learning. For example:

  • In the lead-up to certain elections, we invite suitably qualified external local/regional experts to present as part of our Election Speaker Series. Sharing their market expertise with our internal teams provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election. During the reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States, and 4 in Albania, Belarus, Greenland, and Kosovo.
    • Albania: Internews Kosova (Kallxo)
    • Belarus: Belarusian Investigative Center
    • Germany: Deutsche Presse-Agentur (dpa)
    • Greenland: Logically Facts
    • Kosovo: Internews Kosova (Kallxo)
    • Poland: Demagog
    • Portugal: Poligrafo
  • In June 2025, 14 members of our Trust & Safety team (including leaders of our fact-checking program) attended GlobalFact12. In addition to a breakout session on Footnotes, TikTok hosted a networking event with more than 80 people from our partner organisations, including staff from fact-checking partners, media literacy organisations, and Safety Advisory Councils.
  • TikTok teams and personnel also regularly participate in research-focused events. In H1 2025, we presented at the Political Tech Summit in Berlin (January), hosted Research Tools demos in Warsaw (April), presented at the GNET Annual Conference (May), hosted Research Tools demos in Prague (June), presented at the Warsaw Women in Tech Summit (June), briefed a small group of academic researchers from UCD (Dublin) (June), and attended the ICWSM conference in Copenhagen (June).
  • Continued to participate in, and co-chair, the working group on Elections.
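To illustrate the LLM-assisted claim extraction mentioned in the list above, here is a hedged sketch. The `llm_complete` helper is a hypothetical stand-in for a model call and is stubbed so the example runs end to end; the prompt, the JSON contract, and the routing are assumptions, not TikTok's actual system.

```python
# Hedged sketch of LLM-assisted claim extraction. `llm_complete` is a
# hypothetical stand-in for a moderation-tuned model endpoint.
import json

PROMPT = (
    "Extract every distinct, checkable factual claim from the transcript "
    "below. Respond with a JSON list of short claim strings only.\n\n"
    "Transcript:\n{transcript}"
)

def llm_complete(prompt: str) -> str:
    # Stub: a real system would call a model here.
    return '["the city water supply has been shut off"]'

def extract_claims(transcript: str) -> list[str]:
    """Ask the model for checkable claims; return none on malformed output."""
    raw = llm_complete(PROMPT.format(transcript=transcript))
    try:
        claims = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed model output -> route to human review instead
    return [c for c in claims if isinstance(c, str) and c.strip()]

if __name__ == "__main__":
    for claim in extract_claims("They shut off the whole city's water!"):
        print("route to moderator/fact-checker:", claim)
```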

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

N/A

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

N/A

QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

We take action against misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent. We do this by removing content and accounts that violate our rules, by investing in media literacy and connecting our community to authoritative information, and by partnering with experts.

Our Terms of Service and Integrity & Authenticity policies under our Community Guidelines are the first line of defence in combating harmful misinformation and (as outlined in more detail in QRE 14.1.1) deceptive behaviours on our platform. These rules make clear to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users and our community.

Specifically, our policies do not allow:

  • Misinformation 
    • Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
    • Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
    • Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
    • Conspiracy theories that name and attack individual people.
    • Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.

  • Civic and Election Integrity
    • Election misinformation, including:
      • How, when, and where to vote or register to vote;
      • Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
      • Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
      • Final results or outcome of an election.

  • Edited Media and AI-Generated Content (AIGC)
    • The likeness of young people or realistic-appearing people under the age of 18.
    • The likeness of adult private figures, if we become aware it was used without their permission.
    • Misleading AIGC or edited media that falsely shows:
      • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
      • A crisis event, such as a conflict or natural disaster.
      • A public figure who is:
        • being degraded or harassed, or engaging in criminal or antisocial behaviour;
        • taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
        • being politically endorsed or condemned by an individual or group.
  • Fake Engagement
    • Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
    • Providing instructions on how to artificially increase engagement on TikTok.

We have made it even clearer to our users here that the following content is ineligible for the For You feed:

  • Misinformation 
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
    • Unverified claims related to an emergency or unfolding event
    • Potential high-harm misinformation while it is undergoing a fact-checking review

  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content

As outlined in QRE 14, we also remove accounts that seek to mislead people or use TikTok to deceptively sway public opinion. These activities range from inauthentic or fake account creation to more sophisticated efforts to undermine public trust.

We have policy experts within our Trust and Safety team dedicated to the topic of integrity and authenticity. They continually keep these policies under review and collaborate with external partners and experts to understand whether updates or new policies are required, and to ensure they are informed by a diversity of perspectives, expertise, and lived experiences. In particular, our Safety Advisory Council for Europe brings together independent leaders from academia and civil society who represent a diverse array of backgrounds and perspectives and are experts in free expression, misinformation, and other safety topics. They work collaboratively with us to inform and strengthen our policies, product features, and safety processes.

Enforcing our policies. We remove content – including video, audio, livestream, images, comments, links, or other text – that violates our Integrity & Authenticity policies. Individuals are notified of our decisions and can appeal them if they believe no violation has occurred. We also make clear in our Community Guidelines that we will temporarily or permanently ban accounts and/or users that are involved in serious or repeated violations, including violations of our Integrity & Authenticity policies.

We enforce our Community Guidelines policies, including our Integrity & Authenticity policies, through a mix of technology and human moderation. To do this effectively at scale, we continue to invest in our automated review process as well as in people and training. At TikTok, we place considerable emphasis on proactive content moderation. This means our teams work to detect and remove harmful material before it is reported to us.

However, misinformation is different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. So while we use machine learning models to help detect potential misinformation, ultimately our approach today is having our moderation team assess, confirm, and remove misinformation violations. We have misinformation moderators who have enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions and direct access to our fact-checking partners who help assess the accuracy of new content.
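As an illustration of how a repository of previously fact-checked claims might speed up moderator decisions, here is a minimal sketch using simple string similarity from the Python standard library. The repository entries, verdict labels, and similarity threshold are all assumptions for illustration, not TikTok's actual tooling.

```python
# Illustrative sketch: match a new claim against previously fact-checked
# claims so a moderator can reuse an existing verdict. Entries, verdicts,
# and the threshold are invented for illustration.
from difflib import SequenceMatcher

FACT_CHECKED = {
    # claim text -> verdict from fact-checking partners
    "drinking bleach cures covid-19": "false",
    "all ballots have been counted": "unverified",
}

def best_match(claim: str, threshold: float = 0.85):
    """Return (known_claim, verdict) for the closest repository entry above
    the similarity threshold, or None if nothing is close enough."""
    if not FACT_CHECKED:
        return None
    claim = claim.lower().strip()
    score, known, verdict = max(
        (SequenceMatcher(None, claim, k).ratio(), k, v)
        for k, v in FACT_CHECKED.items()
    )
    return (known, verdict) if score >= threshold else None

print(best_match("Drinking bleach cures COVID-19"))  # reuses the 'false' verdict
```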

We strive to maintain a balance between freedom of expression and protecting our users and the wider public from harmful content. Our approach to combating harmful misinformation, as stated in our Community Guidelines, is to remove content that is both false and can cause harm to individuals or the wider public. This does not include simply inaccurate information which does not pose a risk of harm. Additionally, in cases where fact-checks are inconclusive, especially during emergency or unfolding events, content may not be removed and may instead become ineligible for recommendation in the For You feed and labelled with the “unverified content” label to limit the spread of potentially misleading information. 

We are pleased to include in this report the number of videos made ineligible for the For You feed under the relevant Integrity & Authenticity policies as explained to users here.

Note that, in relation to the metrics we have shared at SLI 18.2.1 below, of all the views from users in the EEA recorded in H1 2025, fewer than 1 in 10,000 views were of content identified and removed for violating our policies around harmful misinformation.

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

N/A

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • At TikTok, we strive to bring more transparency to how we protect our platform. We continue to increase the reports we voluntarily publish, the depth of data we disclose, and the frequency with which we publish.
  • In H1 2025, we published updates to our transparency reports, including:  
  • We also worked to make it easier for people to independently study our data and platform. For example through: 
    • our Research Tools, which empower over 900 research teams to independently study our platform.
    • adding additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.
    • the downloadable data file in the Community Guidelines Enforcement Report offering access to aggregated data, including removal data by policy category, for the 50 markets with the highest volumes of removed content. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It's central to the TikTok experience and where most of our users spend their time exploring the platform.

We make clear to users in our Terms of Service and Community Guidelines (and also provide more context in our Help Center article and Transparency Center page, and Safety Center guide) that each account holder’s For You feed is based on a personalised recommendation system. The For You feed is curated to each user. Safety is built into our recommendations. As well as removing harmful misinformation content that violates our Community Guidelines, we take steps to avoid recommending certain categories of content that may not be appropriate for a broad audience including general conspiracy theories and unverified information related to an emergency or unfolding event. We may also make some of this content harder to find in search. 

Main parameters. The system recommends content by ranking videos based on a combination of factors, including:

  • User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts of followers that users follow back);
  • Content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and
  • User information (e.g. device settings, language preferences, location, time zone and day, and device types).


The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
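The weighting and similar-user prediction described above can be illustrated with a toy sketch. The signal names, weights, and overlap threshold are invented for illustration and are not TikTok's actual parameters; the final line reproduces the videos 4 and 5 example from the text.

```python
# Toy sketch of weighted scoring plus candidates drawn from users with
# overlapping likes. All names, weights, and thresholds are illustrative.

WEIGHTS = {"watched_in_full": 3.0, "share": 2.5, "like": 2.0,
           "comment": 1.5, "skip": -2.0}

def score(signals: list[str]) -> float:
    """Combine a user's interaction signals for one video into one score;
    watch time carries the heaviest weight, as described above."""
    return sum(WEIGHTS.get(s, 0.0) for s in signals)

def predict_from_similar_users(my_likes: set, others: list[set]) -> set:
    """Videos liked by users who share most of my likes become candidates
    (videos 4 and 5 in the worked example above)."""
    candidates = set()
    for their_likes in others:
        overlap = my_likes & their_likes
        if len(overlap) >= 0.6 * len(my_likes):  # illustrative threshold
            candidates |= their_likes - my_likes
    return candidates

# User A likes {1, 2, 3}; user B likes {1, 2, 3, 4, 5} -> predict {4, 5}.
print(predict_from_similar_users({1, 2, 3}, [{1, 2, 3, 4, 5}]))
```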
Users can also access the “Why this video” feature, which allows them to see, for any particular video that appears in their For You feed, the factors that influenced why it was recommended. This feature provides added transparency in relation to how our ranking system works and empowers our users to better understand why a particular video has been recommended to them. In essence, it explains to users how their past interactions on the platform have influenced the video they have been recommended.

User preferences. Together with the safeguards we build into our platform by design, we also empower our users to customise their experience to their preferences and comfort. These include a number of features to help shape the content they see (a simple filtering sketch follows this list). For example, in the For You feed:
  • Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
  • Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here).
  • Users are able to refresh their For You feed if they no longer feel like recommendations are relevant to them or are too similar. When the For You feed is refreshed, users view a number of new videos which include popular videos (e.g., they have a high view count or a high like rate). Their interaction with these new videos will inform future recommendations.
  • Users can also personalise their "For You" page through our new Manage Topics feature (June 2025). This allows users to adjust the frequency of content they see related to particular topics. The settings don't eliminate topics entirely but can influence how often they're recommended as people's interests evolve over time. It adds to the many ways people shape their feed every day, including liking or sharing videos, searching for topics, or simply watching videos for longer.
  • As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to personalised recommendations. Users are able to turn off personalisation so that feeds show non-personalised content. For example, the For You feed will instead show popular videos from their region and internationally. See here.
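The sketch referenced above shows, under assumed field and setting names, how such preferences might be applied as a filter over ranked candidates; it is illustrative only, not TikTok's actual implementation.

```python
# Simple sketch: apply user feed controls as a filter over candidate videos.
# The settings structure and field names are assumptions.

def apply_user_preferences(candidates: list[dict], settings: dict) -> list[dict]:
    """Drop candidates matching filtered keywords/hashtags or topics the
    user marked 'not interested'."""
    blocked_words = {w.lower() for w in settings.get("filtered_keywords", [])}
    blocked_topics = set(settings.get("not_interested_topics", []))
    kept = []
    for video in candidates:
        text = (video["caption"] + " " + " ".join(video["hashtags"])).lower()
        if any(word in text for word in blocked_words):
            continue  # matches a filtered word or hashtag
        if video.get("topic") in blocked_topics:
            continue  # 'not interested' feedback
        kept.append(video)
    return kept
```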


Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

Methodology of data measurement:

The number of users who filtered hashtags or keywords to set preferences for the For You feed, the number of times users clicked “not interested” in relation to the For You feed, and the number of times users clicked on the For You feed refresh are all based on the approximate location of the users that engaged with these tools.

The number of videos tagged with the AIGC label includes both automatic labelling and creator-applied labels.

Country Number of users that filtered hashtags and keywords Number of users that clicked on "not interested" Number of For You feed refreshes Number of videos tagged with AIGC label
Austria 71,042 952,721 60,429 216,782
Belgium 109,999 1,428,998 109,438 310,518
Bulgaria 61,967 838,603 50,767 325,280
Croatia 36,204 396,069 25,456 62,728
Cyprus 15,655 199,425 17,429 105,231
Czech Republic 63,390 811,437 77,494 248,842
Denmark 47,704 585,499 32,565 103,602
Estonia 17,219 162,805 13,428 27,463
Finland 64,531 641,392 59,140 151,632
France 621,904 8,623,045 621,611 2,631,307
Germany 714,270 8,678,005 708,174 2,923,297
Greece 95,344 1,267,887 87,742 289,830
Hungary 60,520 1,056,004 34,031 242,598
Ireland 77,782 894,686 71,318 85,518
Italy 407,290 6,719,765 305,942 1,606,752
Latvia 25,337 298,797 29,270 73,324
Lithuania 31,173 339,592 27,527 74,036
Luxembourg 6,249 83,357 5,752 36,563
Malta 6,356 79,651 7,349 21,483
Netherlands 225,595 2,327,551 188,048 440,107
Poland 277,460 3,572,508 201,086 789,871
Portugal 97,779 1,208,681 73,846 354,910
Romania 149,926 2,827,115 268,322 685,318
Slovakia 26,822 363,060 16,471 112,814
Slovenia 13,155 174,113 17,172 26,794
Spain 475,525 7,262,327 430,715 1,837,668
Sweden 112,446 1,467,000 141,965 324,255
Iceland 6,330 60,021 3,572 9,200
Liechtenstein 180 3,636 295 211
Norway 65,219 733,515 53,304 118,623
Total EU 3,912,644 53,260,093 3,682,487 14,108,523
Total EEA 3,984,373 54,057,265 3,739,658 14,236,557

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We ran 9 temporary media literacy election integrity campaigns in advance of elections in the region, most in collaboration with our fact-checking and media literacy partners:
    • 7 in the EU
      • Croatia (local election): Faktograf
      • Croatia (presidential election): Faktograf
      • Germany: Deutsche Presse-Agentur (dpa)
      • Latvia: Lead Stories
      • Poland: Demagog & FakeNews.pl
      • Portugal: Poligrafo
      • Romania: Funky Citizens
    • 2 in wider European/regionally relevant countries
      • Albania: Internews Kosova (Kallxo)
      • Greenland: Logically Facts
  • Continued our temporary in-app natural disaster media literacy search guide for the Mayotte Cyclone until 14 Feb 2025, and launched a new search guide for the Reunion Cyclone Garance between 4 March and 4 April 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third party information about aid and relief support. 
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
  • We partner with fact-checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
  • Building on our new AI-generated content label for creators, and implementation of C2PA Content Credentials, we launched a number of media literacy campaigns with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 21.1

Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.

QRE 21.1.1

Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.

We currently have 12 IFCN-accredited fact-checking partners across the EU, EEA, and wider Europe:

  1. Agence France-Presse (AFP)
  2. dpa Deutsche Presse-Agentur
  3. Demagog
  4. Facta
  5. Fact Check Georgia
  6. Faktograf
  7. Internews Kosova
  8. Lead Stories
  9. Newtral
  10. Poligrafo
  11. Reuters
  12. Teyit

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

We ensure that our users benefit from the context and insights provided by the fact-checking organisations we partner with in the following ways:

  • Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, we have our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims, and they are able to provide content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators take measures to assess and remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.
  • Unverified content labelling. As mentioned above, we partner with fact-checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility. In these circumstances, the content creator is also notified that their video was flagged as unsubstantiated content and the video will become ineligible for recommendation in the For You feed.

  • In-app tools related to specific topics:
    • Election integrity. We have launched campaigns in advance of several major elections aimed at educating the public about the voting process, which encourage users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of the French legislative elections in June 2024 included a search intervention and in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 9 temporary media literacy election integrity campaigns in advance of elections in the region.
    • Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information. 
    • Natural disasters. We launched a new temporary in-app natural disaster media literacy search guide for the Reunion Cyclone Garance between 4 March and 4 April 2025 and continued our temporary search guide for the Mayotte Cyclone until 14 Feb 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third-party information about aid and relief support.
  • User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners. 

SLI 21.1.1

Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.

Methodology of data measurement:

The share of removals under our harmful misinformation policy, share of proactive removals, share of removals before any views, and share of removals within 24h are relative to the total removals under each policy.

The cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who chose not to share a video after seeing the pop-up. This metric is based on the approximate location of the users that engaged with these tools.
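As a worked example of this metric, under the assumption that the cancel rate is simply the share of pop-up viewers who then chose not to share (the numbers below are invented for illustration):

```python
# Worked example (invented numbers): cancel rate = share of users who saw
# the unverified-content share warning pop-up and then did not share.
def cancel_rate(saw_popup: int, cancelled: int) -> float:
    """Percentage of pop-up viewers who did not go on to share the video."""
    return 100.0 * cancelled / saw_popup if saw_popup else 0.0

print(f"{cancel_rate(saw_popup=10_000, cancelled=2_641):.2f}%")  # 26.41%
```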

Country % video removals under Misinformation policy % proactive video removals under Misinformation policy % video removals before any views under Misinformation policy % video removals within 24h under Misinformation policy % video removals under Civic and Election Integrity policy % proactive video removals under Civic and Election Integrity policy % video removals before any views under Civic and Election Integrity policy % video removals within 24h under Civic and Election Integrity policy % video removals under Synthetic Media policy % proactive video removals under Synthetic Media policy % video removals before any views under Synthetic Media policy % video removals within 24h under Synthetic Media policy Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 18.55% 98.56% 80.73% 84.72% 6.53% 99.06% 93.18% 92.13% 4.01% 96.75% 43.59% 45.30% 26.41%
Belgium 28.82% 98.73% 77.11% 83.00% 8.79% 99.57% 95.91% 95.66% 5.17% 96.82% 19.68% 16.64% 37.10%
Bulgaria 45.82% 97.98% 55.05% 84.53% 5.05% 99.73% 97.04% 97.84% 3.55% 99.23% 22.61% 24.14% 35.71%
Croatia 25.45% 97.78% 72.32% 86.06% 4.83% 96.81% 87.23% 91.49% 5.60% 93.58% 22.02% 40.37% 26.38%
Cyprus 24.09% 97.28% 72.11% 80.27% 5.30% 97.94% 88.66% 87.63% 8.47% 96.77% 30.32% 28.39% 34.36%
Czech Republic 38.46% 98.26% 55.52% 94.22% 6.14% 99.70% 97.27% 97.73% 3.31% 94.94% 37.64% 57.58% 28.55%
Denmark 16.32% 98.90% 74.19% 86.90% 6.69% 99.81% 98.27% 98.07% 4.26% 96.37% 30.21% 50.15% 31.84%
Estonia 2.12% 98.69% 63.61% 80.00% 0.25% 100.00% 97.22% 97.22% 0.62% 98.88% 59.55% 68.54% 30.33%
Finland 27.42% 94.45% 70.50% 91.43% 5.78% 99.81% 95.94% 98.84% 1.59% 97.89% 27.46% 41.55% 33.60%
France 25.23% 99.10% 84.59% 91.27% 3.99% 99.52% 95.00% 95.77% 2.98% 96.16% 22.30% 23.12% 37.30%
Germany 27.90% 98.10% 79.18% 90.84% 9.17% 98.25% 89.13% 91.11% 3.18% 93.75% 35.63% 44.09% 26.78%
Greece 23.99% 98.97% 73.88% 89.04% 8.93% 99.91% 96.18% 98.70% 5.02% 96.45% 17.75% 27.78% 30.79%
Hungary 2.84% 97.39% 74.59% 92.61% 2.44% 99.06% 89.01% 99.48% 0.24% 90.53% 32.63% 31.58% 32.45%
Ireland 27.15% 97.54% 73.16% 80.88% 6.80% 99.30% 77.17% 98.04% 2.38% 98.00% 30.00% 30.00% 29.28%
Italy 28.38% 98.55% 78.93% 84.12% 9.76% 99.42% 94.08% 92.75% 3.62% 96.61% 20.04% 13.75% 36.90%
Latvia 17.22% 97.82% 83.01% 91.26% 2.59% 100.00% 95.16% 95.16% 9.53% 98.25% 64.04% 78.95% 33.33%
Lithuania 20.17% 99.19% 80.97% 87.04% 2.08% 98.04% 96.08% 96.08% 6.86% 97.02% 59.52% 66.67% 29.57%
Luxembourg 25.02% 89.36% 69.17% 92.11% 2.62% 100.00% 87.72% 87.72% 1.70% 91.89% 27.03% 27.03% 28.85%
Malta 50.81% 90.08% 70.63% 94.44% 2.82% 100.00% 89.29% 89.29% 3.13% 100.00% 9.68% 25.81% 39.10%
Netherlands 25.49% 99.16% 81.22% 87.35% 4.96% 99.04% 97.04% 97.12% 3.92% 95.85% 22.27% 33.60% 29.46%
Poland 37.85% 98.77% 65.49% 93.13% 3.86% 97.34% 85.94% 84.66% 1.46% 94.35% 39.69% 46.26% 30.81%
Portugal 27.48% 98.57% 84.09% 91.37% 8.42% 98.13% 92.28% 94.85% 6.52% 99.40% 25.68% 18.28% 28.31%
Romania 44.26% 96.03% 67.42% 86.84% 12.41% 94.39% 80.65% 71.79% 4.59% 95.44% 32.38% 25.04% 35.42%
Slovakia 62.64% 94.27% 67.80% 95.33% 1.01% 100.00% 92.45% 92.45% 2.14% 97.35% 49.56% 69.03% 28.82%
Slovenia 32.84% 93.76% 77.23% 98.71% 0.34% 100.00% 100.00% 100.00% 3.13% 97.67% 77.91% 78.29% 25.24%
Spain 30.23% 99.46% 87.87% 90.63% 5.09% 99.31% 92.75% 89.88% 3.73% 97.03% 21.44% 20.00% 34.09%
Sweden 15.83% 98.65% 78.99% 82.53% 6.94% 99.67% 96.92% 96.84% 3.17% 97.63% 19.13% 21.86% 31.25%
Iceland 8.01% 98.63% 89.04% 90.41% 1.10% 100.00% 100.00% 100.00% 1.54% 100.00% 28.57% 71.43% 22.73%
Liechtenstein 8.00% 100.00% 66.67% 66.67% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 11.76%
Norway 16.73% 98.33% 69.09% 87.01% 4.98% 98.78% 93.41% 94.39% 3.30% 95.96% 24.26% 43.01% 32.93%
Total EU 27.42% 98.11% 76.94% 89.47% 6.64% 98.29% 90.43% 90.17% 3.20% 95.84% 28.85% 31.14% 30.95%
Total EEA 27.28% 98.11% 76.90% 89.45% 6.62% 98.30% 90.46% 90.21% 3.20% 95.84% 28.80% 31.30% 30.95%

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated ‘Report Illegal Content’ channel, together with an appeals process for users who disagree with the outcome, enabling users to alert us to content they believe breaches the law.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

We provide users with simple, intuitive ways to report/flag content in-app for any breach of our Terms of Service or Community Guidelines, including for harmful misinformation, in each EU Member State and in an official language of the European Union.
  • By long-pressing (i.e., pressing and holding for around three seconds) on the video content and selecting the “Report” option. 
  • By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.
The user is then shown categories of reporting reasons from which to select (which align with the harms our Community Guidelines seek to address). In 2024, we updated this feature to make the “Misinformation” categories more intuitive and to allow users to report with increased granularity. 

In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated ‘Report Illegal Content’ channel, together with an appeals process, enabling users to alert us to content they believe breaches the law.

People can report TikTok content or accounts without needing to sign in or have an account by accessing the Report function using the “More options (…)” menu on videos or profiles in their browser, or through our “Report Inappropriate content” webform, which is available in our Help Centre. Harmful misinformation can be reported across content features such as video, comment, search, hashtag, sound, or account.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Reporting system

To ensure the integrity of our reporting system, we deploy a combination of automated review and human moderation.

Videos uploaded to TikTok are initially reviewed by our automated moderation technology, which aims to identify content that violates our Community Guidelines. If a potential violation of our Community Guidelines is found, the automated review system will either pass it on to our moderation teams for further review or, if there is a high degree of confidence that the content violates our Community Guidelines, remove it automatically. Automated removal is only applied when violations are clear-cut, such as where the content contains nudity or pertains to youth safety. We are constantly working to improve the precision of our automated moderation technology so we can more effectively remove violative content at scale, while also reducing the number of incorrect removals.
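As an illustration of this routing logic, the following is a minimal sketch. The threshold, policy labels and function name are hypothetical, with clear-cut categories routed as described above.

```python
# Minimal sketch of the automated-review routing described above. The
# threshold, policy labels and function name are hypothetical illustrations.

AUTO_REMOVE_THRESHOLD = 0.98  # automated removal only for clear-cut violations
CLEAR_CUT_POLICIES = {"nudity", "youth_safety"}  # per the examples given above

def route_upload(violation_probability: float, policy: str) -> str:
    """Decide the next step for a newly uploaded video."""
    if violation_probability >= AUTO_REMOVE_THRESHOLD and policy in CLEAR_CUT_POLICIES:
        return "auto_remove"   # high confidence in a clear-cut violation
    if violation_probability >= 0.5:
        return "human_review"  # potential violation; context and nuance needed
    return "publish"           # no potential violation identified

print(route_upload(0.99, "nudity"))          # auto_remove
print(route_upload(0.99, "misinformation"))  # human_review
print(route_upload(0.10, "misinformation"))  # publish
```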

To support the fair and consistent review of potentially violative content, where violations are less clear-cut, content is passed to our human moderation teams for further review. Human moderators can take into account additional context and nuance that cannot always be picked up by technology. In the context of harmful misinformation, for example, our moderators have access to a repository of previously fact-checked claims, and direct access to our fact-checking partners who help assess the accuracy of new content, to help them make swift and accurate decisions.

We have sought to make our Community Guidelines as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as review of moderation cases, flows, appeals and undertaking Root Cause Analyses).

As part of our requirements under the DSA, we have introduced an additional reporting channel for our community in the European Union to ‘Report Illegal Content,’ which enables users to alert us to content they believe breaches the law. TikTok will review the content against our Community Guidelines and, where a violation is detected, the content may be removed globally. If it is not removed, our illegal content moderation team will further review the content to assess whether it is unlawful in the relevant jurisdiction; this assessment is undertaken by human reviewers. If the content is unlawful, access to it will be restricted in that country. Those who report suspected illegal content will be notified of our decision, including if we consider that the content is not illegal. Users who disagree can appeal those decisions using the appeals process.

We also note that whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content. We are proud that the vast majority of removed content is identified proactively before it is reported to us.


Appeals system

We are transparent with users in relation to appeals. We set out the options that may be available, both to the user who reported the content and to the creator of the affected content, where they disagree with the decision we have taken.

The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our Community Guidelines. 

Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts and assess them against our policies. To ensure consistency within this process and its overall integrity, we have sought to make our policies as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as auditing appeals and undertaking Root Cause Analyses).

If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.



QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Users in all EU member states are notified by an in-app notification in their relevant local language where the following action is taken:
  • removal of, or restriction of access to, their content;
  • a ban of the account;
  • restriction of their access to a feature (such as LIVE); or
  • restriction of their ability to monetise. 

Such notifications are provided in near real time after action has been taken (i.e. generally within several seconds or up to a few minutes at most). 

Where we have taken any of these decisions, an in-app inbox notification sets out the violation deemed to have taken place, along with an option for users to “disagree” and submit an appeal. Users can submit appeals within 180 days of being notified of the decision they want to appeal. Further information, including about how to appeal a decision is set out here.

All such appeals raised will be queued for review by our specialised human moderators so as to ensure that context is adequately taken into account in reaching a determination. Users can monitor the status and view the results of their appeal within their in-app inbox. 

As mentioned above, our users have the ability to share feedback with us to the extent that they don't agree with the result of their appeal. They can do so by using the in-app function which allows them to "report a problem". We are continuously taking user feedback into consideration in order to identify areas of improvement within the appeals process.

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

Methodology of data measurement:

The number of appeals/overturns is based on the country in which the video being appealed/overturned was posted. These numbers relate only to our Misinformation, Civic and Election Integrity, and Edited Media and AIGC policies.
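The appeal success rate columns below are simply overturns divided by appeals. A minimal sketch, reproducing the Total EU row for the misinformation policy:

```python
# The appeal success rate reported below is overturns divided by appeals.
def appeal_success_rate(appeals: int, overturns: int) -> float:
    """Percentage of appeals overturned, guarding against zero appeals."""
    return round(100 * overturns / appeals, 1) if appeals else 0.0

# Reproducing the Total EU misinformation row from the table below:
print(appeal_success_rate(47_496, 31_688))  # 66.7
```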

Country Number of Appeals of videos removed for violation of misinformation policy Number of overturns of appeals for violation of misinformation policy Appeal success rate of videos removed for violation of misinformation policy Number of Appeals of videos removed for violation of Civic and Election Integrity policy Number of overturns of appeals for violation of Civic and Election Integrity policy Appeal success rate of videos removed for violation of Civic and Election Integrity policy Number of Appeals of videos removed for violation of Synthetic and Manipulated Media Number of overturns of appeals for violation of Synthetic and Manipulated Media Appeal success rate of videos removed for violation of Synthetic and Manipulated Media
Austria 609 422 69.30% 160 124 77.50% 27 24 88.90%
Belgium 809 674 83.30% 246 196 79.70% 55 48 87.30%
Bulgaria 582 283 48.60% 58 46 79.30% 21 21 100.00%
Croatia 91 55 60.40% 14 11 78.60% 7 2 28.60%
Cyprus 92 59 64.10% 20 15 75.00% 17 11 64.70%
Czech Republic 1,453 468 32.20% 162 137 84.60% 72 39 54.20%
Denmark 311 226 72.70% 102 84 82.40% 40 32 80.00%
Estonia 84 49 58.30% 15 10 66.70% 8 7 87.50%
Finland 207 139 67.10% 72 58 80.60% 27 21 77.80%
France 6,935 6,296 90.80% 709 639 90.10% 421 396 94.10%
Germany 12,837 8,939 69.60% 2,844 2,327 81.80% 716 542 75.70%
Greece 705 425 60.30% 173 139 80.30% 55 37 67.30%
Hungary 228 131 57.50% 133 102 76.70% 6 4 66.70%
Ireland 948 765 80.70% 108 97 89.80% 36 32 88.90%
Italy 4,266 3,523 82.60% 1,188 1,048 88.20% 143 132 92.30%
Latvia 110 77 70.00% 20 13 65.00% 42 19 45.20%
Lithuania 101 84 83.20% 16 15 93.80% 22 14 63.60%
Luxembourg 35 29 82.90% 9 7 77.80% 5 3 60.00%
Malta 28 24 85.70% 0 0 0.00% 0 0 0.00%
Netherlands 1,732 1,441 83.20% 290 236 81.40% 92 77 83.70%
Poland 5,004 2,065 41.30% 423 332 78.50% 126 87 69.00%
Portugal 600 393 65.50% 154 129 83.80% 18 14 77.80%
Romania 5,175 1,539 29.70% 1,066 855 80.20% 158 78 49.40%
Slovakia 569 140 24.60% 20 17 85.00% 27 19 70.40%
Slovenia 96 48 50.00% 7 6 85.70% 12 10 83.30%
Spain 3,231 2,844 88.00% 464 416 89.70% 143 130 90.90%
Sweden 658 550 83.60% 231 176 76.20% 48 40 83.30%
Iceland 13 11 84.60% 4 4 100.00% 2 2 100.00%
Liechtenstein 2 2 100.00% 0 0 0.00% 0 0 0.00%
Norway 278 228 82.00% 80 68 85.00% 32 28 87.50%
Total EU 47,496 31,688 66.70% 8,704 7,235 83.10% 2,344 1,839 78.50%
Total EEA 47,789 31,929 66.80% 8,788 7,307 83.10% 2,378 1,869 78.60%

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Supported new independent research through TikTok’s Research Tools (Research API and VCE). 
  • Further enriched the data available to include more information on stickers and effects (January) and video tags (April), and reached full parity in the data available across the API and VCE (May).
  • Added additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.
  • Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
  • Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad was active, the main parameters used for targeting (e.g. age, gender), and the number of people who were served the ad.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

We have a dedicated TikTok Transparency Centre, available in a number of EU languages, which hosts our:
  • COPD Transparency Reports: as part of our commitments to the Code, we publish a transparency report every six months providing granular data for EU/EEA countries about our efforts to combat online misinformation. 
  • TikTok Community Guidelines Enforcement Reports: proactive quarterly insights into the volume and nature of content and accounts removed from our platform for violating our Community Guidelines, Terms of Service or Advertising Policies, published since 2019.
  • DSA Transparency Reports: building on the proactive approach of our quarterly Community Guidelines Enforcement Reports and our obligations under the Digital Services Act (“DSA”), we publish a transparency report every six months providing granular data for EU countries about our content moderation activities.
  • Monthly Covert Influence Operations Reports: more frequent and granular detail about the covert influence operations we have disrupted.
  • Global Elections Integrity Hub: launched in H1 2025, with dedicated coverage of elections across Europe, the Middle East, and Africa. The Hub outlines our policies, product features, and moderation practices that help protect platform integrity during elections. Throughout this reporting period, we regularly updated the Hub with information on our safety efforts in markets with active elections, including Croatia, Kosovo, Germany, Romania, Portugal, and Poland.

As part of our commitment to regulatory transparency and accountability, we launched the European Online Safety Hub, which serves as a 'one-stop-shop' for our community to learn more about how we're complying with the DSA. The Hub is currently available in 22 EU languages, covering at least one official language of each EU Member State. Our dedicated TikTok for Developers website hosts our Research Tools and Commercial Content APIs (detailed below).

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

In this H1 2025 report, TikTok has shared more than 3,000 data points across 30 EU/EEA countries. 

We provide access to researchers to data that is publicly available on our platform through our Research Tools and through our Commercial Content API for commercial content (detailed below).

We also provide ongoing insights into the action we take against content and accounts that violate our Community Guidelines, Terms of Service, or Advertising Policies, in our quarterly TikTok Community Guideline Enforcement Reports. The report includes a variety of data visualisations, which are designed with transparency and accessibility in mind, including for people with colour vision deficiency.

As part of our continued efforts to make it easy to study the TikTok platform, the report also offers access to aggregated data, including removal data by policy category, for the 50 markets with the highest volumes of removed content.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Research Tools, Commercial Content API, and the Commercial Content Library
During this reporting period we received:
  • 173 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
  • 74 applications to access the TikTok Commercial Content API.

Country Number of applications received for Research API Number of applications accepted for Research API Number of applications rejected for Research API Number of applications received for TikTok Commercial Content API Number of applications accepted for TikTok Commercial Content API Number of applications rejected for TikTok Commercial Content API
Austria 6 6 4 1 1 0
Belgium 2 1 0 1 1 0
Bulgaria 0 0 0 0 0 0
Croatia 0 0 1 0 0 0
Cyprus 0 0 0 0 0 0
Czech Republic 5 3 1 2 2 0
Denmark 5 5 2 1 1 0
Estonia 0 0 0 0 0 0
Finland 2 1 0 0 0 0
France 16 11 11 24 19 3
Germany 48 50 21 11 10 1
Greece 1 2 0 2 2 0
Hungary 0 0 0 2 1 0
Ireland 3 1 3 1 0 1
Italy 21 16 6 1 1 0
Latvia 0 0 0 3 3 0
Lithuania 0 0 0 0 0 0
Luxembourg 0 0 0 0 0 0
Malta 0 0 0 0 0 0
Netherlands 13 9 12 3 2 0
Poland 4 3 1 4 4 0
Portugal 0 0 0 3 3 0
Romania 4 3 3 2 2 0
Slovakia 1 0 0 2 1 1
Slovenia 2 1 1 1 1 0
Spain 32 12 18 7 5 2
Sweden 6 5 2 3 3 0
Iceland 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0
Norway 2 2 1 0 0 0
Total EU 171 129 86 74 62 8
Total EEA 173 131 87 74 62 8

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected official, news outlets and government accounts subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.


(I) Research API

To make it easier to independently research our platform and bring transparency to TikTok content, we built a Research API that provides researchers in the US, EEA, UK and Switzerland with access to public data on accounts and content, including comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here. We carefully consider feedback from researchers who have used the API and continue to make improvements, such as additional data fields, streamlining the application process, and enabling collaboration through Lab Access, which allows up to 10 researchers to work together on a shared research project.
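As a hedged illustration, a video query against the Research API might look like the sketch below. The endpoint and field names reflect the public TikTok for Developers documentation as we understand it, and the bearer token is a placeholder; details should be verified against the live documentation before use.

```python
# Hedged sketch of a Research API video query. The endpoint and field names
# reflect the public TikTok for Developers documentation as we understand it;
# the bearer token is a placeholder. Verify details against the live docs.
import requests

URL = "https://open.tiktokapis.com/v2/research/video/query/"
HEADERS = {
    "Authorization": "Bearer <client_access_token>",  # placeholder, not a real token
    "Content-Type": "application/json",
}
body = {
    "query": {
        "and": [
            {"operation": "IN", "field_name": "region_code", "field_values": ["DE", "FR"]},
            {"operation": "EQ", "field_name": "hashtag_name", "field_values": ["climatechange"]},
        ]
    },
    "start_date": "20250101",  # YYYYMMDD
    "end_date": "20250131",
    "max_count": 100,
}
resp = requests.post(
    URL,
    headers=HEADERS,
    params={"fields": "id,video_description,like_count,share_count"},
    json=body,
)
resp.raise_for_status()
for video in resp.json()["data"]["videos"]:
    print(video["id"], video["like_count"])
```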

(II) Virtual Compute Environment (VCE)

The VCE allows qualifying not-for-profit researchers in the EU to access and analyse TikTok's public data, while ensuring robust security and privacy protections. Public data can be accessed and analysed in two stages:

  1. Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE will return random sample data based on your query, limited to 5,000 records per day.
  2. Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok will review the results file to make sure the output is aggregated.
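The stated limits can be illustrated with the stand-alone sketch below; the helper is an illustration only and not part of TikTok's query SDK.

```python
# Stand-alone illustration of the VCE limits described above; this helper is
# not part of TikTok's query SDK.

DAILY_SAMPLE_LIMIT = 5_000   # Test Stage: random sample records per day
PAGE_SIZE_LIMIT = 100_000    # Execution Stage: max records per page

def pages(total_records: int, page_size: int = PAGE_SIZE_LIMIT):
    """Yield (offset, count) pages for an Execution Stage result set."""
    offset = 0
    while offset < total_records:
        count = min(page_size, total_records - offset)
        yield offset, count
        offset += count

# A 250,000-record result set is fetched in three pages:
print(list(pages(250_000)))  # [(0, 100000), (100000, 100000), (200000, 50000)]
```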

(III) Commercial Content API  

As required under the DSA, and to enhance transparency on advertisements presented on our platform, we have built a commercial content API that includes ads, ad and advertiser metadata, and targeting information. Researchers and professionals are required to create a TikTok for Developers account and submit an application to access the Commercial Content API which we review to help prevent malicious actors from misusing this data. 
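A hedged sketch of a keyword-based ad query follows; the endpoint path, request fields and token below are assumptions to be checked against the Commercial Content API documentation on TikTok for Developers.

```python
# Hedged sketch of a keyword-based Commercial Content API ad query. The
# endpoint path, filter fields and token are assumptions to be verified
# against the TikTok for Developers documentation.
import requests

URL = "https://open.tiktokapis.com/v2/research/adlib/ad/query/"  # assumed path
HEADERS = {
    "Authorization": "Bearer <client_access_token>",  # placeholder token
    "Content-Type": "application/json",
}
body = {
    "search_term": "energy",            # keyword-based search
    "filters": {"country_code": "DE"},  # assumed filter field
    "max_count": 20,
}
resp = requests.post(URL, headers=HEADERS, json=body)
resp.raise_for_status()
for ad in resp.json().get("data", {}).get("ads", []):
    print(ad)
```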

(IV) Commercial Content Library

The Commercial Content Library is a publicly searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that's commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad. 

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

(I) Research API
Through our Research API, academic researchers from non-profit academic institutions in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here.

(II) Virtual Compute Environment (VCE)
Through our VCE, qualifying not-for-profit researchers and academic researchers from non-profit academic institutions in the EU can query and analyse TikTok's public data. To protect the security and privacy of our users, the VCE is designed to ensure that TikTok data is processed within confined parameters. TikTok reviews the results only to ensure that no identifiable individual information is extracted from the platform. All aggregated results are shared as a downloadable link sent to the approved primary researcher's email.

(III) Commercial Content API 
Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can request public data about commercial content including ads, ad and advertiser metadata, and targeting information. To date, the Commercial Content API only includes data from EU countries.

(IV) Commercial Content Library
TikTok's Commercial Content Library is a repository of ads and other types of commercial content posted to users in the European Economic Area (EEA), Switzerland, and the UK only, but can be accessed by members of the public located in any country. Each ad and its details remain available in the library for one year after the ad was last viewed by any user. Through the Commercial Content Library, the public can access information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that is commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad.

We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access. Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data. Similarly, with the Commercial Content API, we provide participants with detailed information on how to query ad data and fetch public advertiser data.

QRE 26.2.3

Relevant Signatories will describe the application process in place to in order to gain the access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access. Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data. Similarly, with the Commercial Content API, we provide participants with detailed information on how to query ad data and fetch public advertiser data.

Commitment 27

Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.

We signed up to the following measures of this commitment

Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Supported new independent research through TikTok’s Research Tools (Research API and VCE).
  • Enriched the data available to include more information on stickers and effects (January) and video tags (April), and reached full parity in the data available across the API and VCE (May).
  • Added additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.
  • Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
  • Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad was active, the main parameters used for targeting (e.g. age, gender), and the number of people who were served the ad.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

TikTok is committed to facilitating research and engaging with the research community.

As set out above, TikTok is committed to facilitating research through our Research Tools, Commercial Content APIs and Commercial Content Library, full details of which are available on our TikTok for Developers and Commercial Content Library websites.

We have many teams and individuals across product, policy, data science, outreach and legal working to facilitate research. We believe transparency and accountability are essential to fostering trust with our community. We are committed to transparency in how we operate, moderate and recommend content, empower users, and secure our platform. That's why we opened our global Transparency and Accountability Centers (TACs) for invited guests to see first-hand our work to protect the safety and security of the TikTok platform.

Our TACs are located in Dublin, Los Angeles, Singapore, and Washington, DC. They provide an opportunity for invited academics, businesses, policymakers, politicians, regulators, researchers and many other expert audiences from Europe and around the world to see first-hand how teams at TikTok go about the critically important work of securing our community's safety, data, and privacy. During the reporting period, DubTAC hosted 24 external tours, welcoming over 180 visitors. Notable attendees included Ofcom, the EU Commission, and representatives from the Irish Parliament and the French, Danish, German, and UAE governments. We also welcomed mental health organisations and brand clients, including Coca-Cola and Zalando. In March, we launched Mobile TAC in Brussels during Global Marketing Week and delivered 5 Mobile TAC tours across the EU.

We work closely with our ten regional Advisory Councils, including our European Safety Advisory Council and US Content Advisory Council, and our global Youth Advisory Council, which bring together a diverse array of independent experts from academia and civil society as well as youth perspectives. Advisory Council members provide subject matter expertise and advice on issues relating to user safety, content policy, and emerging issues that affect TikTok and our community, including in the development of our AI-generated content label and a recent campaign to raise awareness around AI labeling and potentially misleading AIGC. These councils are an important way to bring outside perspectives into our company and onto our platform.

In addition to these efforts, we engage with the research community in a number of other ways in the course of our work.

Our Outreach & Partnerships Management (OPM) Team is dedicated to establishing partnerships and regularly engaging with civil society stakeholders and external experts, including the academic and research community, to ensure their perspectives inform our policy creation, feature development, risk mitigation, and safety strategies. For example, we engaged with global experts, including numerous academics in Europe, in the development of our state-affiliated media policy, Election Misinformation policies, and AI-generated content labels. OPM also plays an important role in our efforts to counter misinformation by identifying, onboarding and managing new partners to our fact-checking programme. In the lead-up to certain elections, we invite suitably qualified external local/regional experts to present as part of our Election Speaker Series. Sharing their market expertise with our internal teams provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.

During this reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo. 
  1. Albania: Internews Kosova (Kallxo)
  2. Belarus: Belarusian Investigative Center
  3. Germany: Deutsche Presse-Agentur (dpa)
  4. Greenland: Logically Facts
  5. Kosovo: Internews Kosova (Kallxo)
  6. Poland: Demagog
  7. Portugal: Poligrafo

TikTok teams and personnel also regularly participate in research-focused events. In H1 2025, we presented at the Political Tech Summit in Berlin (January), hosted Research Tools demos in Warsaw (April), presented at GNET Annual Conference (May), hosted Research Tools demos in Prague (June), presented at the Warsaw Women in Tech Summit (June), briefed a small group of Irish academic researchers (June), and attended the ICWSM conference in Copenhagen (June).

At the end of June 2025, we sent a 14-strong delegation to GlobalFact12 in Rio de Janeiro, Brazil, where TikTok was a top-tier sponsor. Sponsorship money supports the IFCN's work serving the fact-checking community and, by funding travel scholarships, makes it possible for fact-checking organisations to attend the conference. The annual conference is the most important industry event for TikTok's Global Fact-Checking Program and covers a broad set of topics related to mis- and disinformation, discussed in main-stage sessions and break-out rooms. In addition to a breakout session on Footnotes, TikTok hosted a networking event with more than 80 people from our partner organisations, including staff from fact-checking partners, media literacy organisations, and TikTok's Safety Advisory Councils. 

As well as giving us opportunities to share context about our approach and research interests, and to explore collaboration, these events enable us to learn from the important work being done by the research community on various topics, including aspects related to harmful misinformation.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

We have a dedicated TikTok for Developers website which hosts our Research Tools and Commercial Content APIs. 

With the Research API, researchers can access:

  • Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
  • Public content data, such as comments, captions, subtitles, and the number of comments, shares and likes that a video receives.

Through the VCE, qualifying not-for-profit researchers in the EU can access and analyse TikTok's public data, including public U18 data, in a secure environment that is subject to strict security controls.

Our commercial content-related APIs include ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches - by advertiser name or keyword - on ads and other commercial content data stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. 
 

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

The data we make available and the application criteria for our Research Tools (Research API and VCE) and Commercial Content API are research-topic agnostic and are clearly set out on our dedicated TikTok for Developers website. In August 2024, we introduced a due diligence process with an external vendor to confirm the eligibility of NGO applicants. 

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Updated fact-checking agreements to include the requirement that fact-checking partners provide regular proactive Insights Reports about general misinformation trends observed on our platform and across the industry generally, including new or changing industry or market trends, and events or topics that generate particular misinformation or disinformation.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.

Within Europe, we work with 12 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

Our partners have teams of fact-checkers who review and verify reported content. Our moderators then use that independent feedback to take action and where appropriate, remove or make ineligible for recommendation false or misleading content or label unverified content. 

Our agreements with our partners are standardised, meaning they are based on our template master services agreements and consist of common standards and conditions. We reviewed and updated our template standard agreements as part of our annual contract renewal process.

The terms of the agreements describe:
  • The service the fact-checking partner will provide, namely, that their team of fact-checkers review, assess and rate video content uploaded to their fact-checking queue, and provide regular proactive Insights Reports about general misinformation trends observed on our platform and across the industry generally, including new or changing industry or market trends, and events or topics that generate particular misinformation or disinformation. 
  • The expected results e.g., the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories. 
  • An option to receive proactive flagging of potentially harmful misinformation from our partners.
  • The languages in which they will provide fact-checking services.
  • The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
  • All other key terms including the applicable term and fees and payment arrangements.

QRE 30.1.2

Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).

We currently have 12 IFCN accredited fact-checking partners across the EU, EEA, and wider Europe: 

  1. Agence France-Presse (AFP)
  2. Deutsche Presse-Agentur (dpa)
  3. Demagog
  4. Facta
  5. Geofacts
  6. Faktograf
  7. Internews Kosova (Kallxo)
  8. Lead Stories
  9. Newtral
  10. Poligrafo
  11. Reuters
  12. Teyit

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

We can, and have, put in place temporary agreements with these fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis. 

Outside of our fact-checking program, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 9 temporary media literacy campaigns, in advance of regional elections, through our in-app Election Centers:
  • 7 in the EU
    • Croatia (local election): Faktograf
    • Croatia (presidential election): Faktograf
    • Germany: Deutsche Presse-Agentur (dpa)
    • Latvia: Lead Stories
    • Poland: Demagog & FakeNews.pl
    • Portugal: Poligrafo
    • Romania: Funky Citizens
  • 2 in wider European/regionally relevant countries
    • Albania: Internews Kosova (Kallxo)
    • Greenland: Logically Facts

We also rolled out three new ongoing general media literacy and critical thinking skills campaigns in the EU in collaboration with our fact-checking and media literacy partners:
  • Germany: Deutsche Presse-Agentur (dpa)
  • Romania: Funky Citizens, Digi Media, and Libertatea
  • Poland: Demagog, FakeNews.pl, Radio Zet, and Orientuj.sie

Globally, we have 21 IFCN-accredited fact-checking partners. We are continuously working to expand our fact-checking network and we keep users updated here.

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

We have fact-checking coverage in 23 official EEA languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. 

We have fact-checking coverage in a number of other European languages or languages which affect European users, including Georgian, Russian, Turkish, and Ukrainian and we can request additional support in Azeri, Armenian, and Belarusian. 

In terms of global fact-checking initiatives, we currently cover more than 60 languages and 130 markets across the world, thereby improving the overall integrity of the service and benefiting European users. 

In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.

  • Fact-checking repository. We have built a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions.
  • Insights reports. Our fact-checking partners provide regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.  
  • Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content that may constitute harmful misinformation on our platform and suggest prominent misinformation that is circulating online that may benefit from verification. 
  • Fact-checking guidelines. Where relevant, we create guidelines and trending topic reminders for our moderators which are informed by previous fact-checking assessments. This helps our moderation teams leverage the insights from our fact-checking partners and supports swift and accurate decisions on flagged content regardless of the language in which the original claim was made.
  • Election Speaker Series. To further promote election integrity, and inform our approach to country-level EU and regionally relevant elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations:
    • Albania: Internews Kosova (Kallxo)
    • Belarus: Belarusian Investigative Center
    • Germany: Deutsche Presse-Agentur (dpa)
    • Greenland: Logically Facts
    • Kosovo: Internews Kosova (Kallxo)
    • Poland: Demagog
    • Portugal: Poligrafo

Moderation teams working in dedicated misinformation queues receive enhanced training on our misinformation policies and have access to the above-mentioned tools and measures, which enables them to make accurate content decisions across Europe and globally.

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behavior that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.

Some of the methods and technologies that support these efforts include:
  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines—like weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our Community Guidelines, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously violating audio.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text (a simplified sketch follows this list). "Natural language processing"—a type of Artificial Intelligence (AI) that can interpret the context surrounding content—helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLM-based: We're starting to use a kind of AI called "large language models" (LLMs) to scale and improve content moderation. LLMs can comprehend human language and perform highly specific, complex tasks. This can make it possible to moderate content with a higher degree of precision, consistency and speed than human moderation.
  • Multi-modal LLM-based: "Multi-modal LLMs" can also perform complex, highly specific tasks related to other types of content, such as visual content. For example, we can use this technology to make misinformation moderation easier by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.
  • Content Credentials: We launched the ability to read Content Credentials that attach metadata to content, which we can use to automatically label AI-generated content that originated on other major platforms.
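To illustrate the text-based approach named in the list above, here is a minimal, self-contained sketch of keyword-list matching with simple normalisation. The keyword list and character substitutions are hypothetical illustrations, not TikTok's actual lists or models.

```python
# Minimal sketch of keyword-list matching with simple normalisation, in the
# spirit of the text-based detection described above. The keyword list and
# character substitutions are hypothetical illustrations.
import re
import unicodedata

KEYWORDS = {"miracle cure", "fakecure"}  # hypothetical violative terms

# Undo a few common character substitutions (leetspeak-style variations).
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a"})

def normalise(text: str) -> str:
    """Lowercase, strip accents, undo substitutions, collapse punctuation."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]+", " ", text).strip()

def needs_review(comment: str) -> bool:
    """True if the comment matches a keyword variant and should be escalated."""
    cleaned = normalise(comment)
    return any(keyword in cleaned for keyword in KEYWORDS)

print(needs_review("Try this M1racle-Cure today!"))  # True
print(needs_review("Lovely video, thanks!"))         # False
```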

Continuing to leverage the fact-checking output in this way enables us to further increase the positive impact of our fact checking programme.


SLI 30.1.1

Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.

Total EU Languages: 22
Total EEA Languages: 23

Country Member States and languages covered by agreements with the fact-checking organisations
Austria Fact-checking coverage implemented
Belgium Fact-checking coverage implemented
Bulgaria Fact-checking coverage implemented
Croatia Fact-checking coverage implemented
Cyprus Fact-checking coverage implemented
Czech Republic Fact-checking coverage implemented
Denmark Fact-checking coverage implemented
Estonia Fact-checking coverage implemented
Finland Fact-checking coverage implemented
France Fact-checking coverage implemented
Germany Fact-checking coverage implemented
Greece Fact-checking coverage implemented
Hungary Fact-checking coverage implemented
Ireland Fact-checking coverage implemented
Italy Fact-checking coverage implemented
Latvia Fact-checking coverage implemented
Lithuania Fact-checking coverage implemented
Luxembourg Fact-checking coverage implemented
Malta No permanent fact-checking coverage. We can, and have, put in place temporary agreements with fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis. Meanwhile, our fact-checking repository and other initiatives benefit all European users and ensure the overall integrity of our platform.
Netherlands Fact-checking coverage implemented
Poland Fact-checking coverage implemented
Portugal Fact-checking coverage implemented
Romania Fact-checking coverage implemented
Slovakia Fact-checking coverage implemented
Slovenia Fact-checking coverage implemented
Spain Fact-checking coverage implemented
Sweden Fact-checking coverage implemented
Iceland No permanent fact-checking coverage. We can, and have, put in place temporary agreements with fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis. Meanwhile, our fact-checking repository and other initiatives benefit all European users and ensure the overall integrity of our platform.
Liechtenstein Fact-checking coverage implemented
Norway Fact-checking coverage implemented

Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

Our agreements with our fact-checking partners are standardised, meaning the agreements are based on our template master services agreements and consist of common standards and conditions. These agreements, as with all of our agreements, must meet the ethical and professional standards we set internally, including containing anti-bribery and corruption provisions.

Our partners are compensated in a fair, transparent way based on the work done by them using standardised rates. Our fact-checking partners then invoice us on a monthly basis based on work done.

All of our fact-checking partners are independent organisations, which are certified through the non-partisan International Fact-Checking Network (IFCN). Our agreements with them explicitly state that the fact-checkers are non-exclusive, independent contractors of TikTok who retain editorial independence in relation to the fact-checking, and that the services shall be performed in a professional manner and in line with the highest standards in the industry. Our processes are also set up to ensure our fact-checking partners' independence. Our partners access flagged content through a dashboard for their exclusive use and provide their assessment of the accuracy of the content by providing a rating. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data or authenticating videos and images.

To facilitate transparency and openness with our fact-checking partners, we meet with them regularly, share data relating to their feedback, and conduct surveys with them.

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

We meet regularly with our fact-checking partners and have an ongoing dialogue with them about how our partnership is working and evolving. We survey our fact-checking partners to encourage feedback about what we are doing well and how we could improve.

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

This provision is not relevant to TikTok, only to fact-checking organisations.

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

Given that our fact-checking partners are all IFCN-accredited, they already engage in some informal cross-border collaboration through that network.

In addition, we continue to collaborate with our partners to understand how we may be able to facilitate further collaboration through individual feedback sessions and active participation in global fact-checking events, such as GlobalFact12 (June 2025), where we hosted a networking event with more than 80 people from our partner organisations, including staff from fact-checking partners, media literacy organisations, and TikTok's Safety Advisory Councils.

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

We are in regular dialogue with EDMO and the EFCSN on these and other issues. We continue to be open to discussing and exploring what further progress can be made on these points.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 and 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Expanded our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
  • Conducted feedback sessions with our partners to further enhance the efficiency of the fact-checking program.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Continued to explore ways to improve data sharing in connection with our pilot scheme to share enforcement data with our fact-checking partners on the claims they have provided feedback on.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

Our fact-checking partners access content which has been flagged for review through a dashboard made available for their exclusive use. The dashboard shows our fact-checkers certain quantitative information about the services they provide, including the number of videos queued for assessment at any one time, as well as the time the review has taken. Fact-checkers can also use the dashboard to see the rating they applied to videos they have previously assessed.
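
As a rough illustration of the kind of per-item record such a dashboard could expose, here is a minimal sketch; the field names, rating values, and metric are hypothetical and do not reflect TikTok's actual schema.

# Illustrative sketch only: a hypothetical per-video record for a
# fact-checking review dashboard. Names and the rating scale are invented.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Rating(Enum):
    TRUE = "true"
    FALSE = "false"
    PARTLY_FALSE = "partly_false"
    UNVERIFIABLE = "unverifiable"

@dataclass
class QueueItem:
    video_id: str
    flagged_at: datetime            # when the video entered the review queue
    claim_text: str                 # the claim to be assessed
    rating: Rating | None = None    # set once the fact-checker assesses it

    def review_time_hours(self, assessed_at: datetime) -> float:
        # Review duration, one of the quantitative metrics described above.
        return (assessed_at - self.flagged_at).total_seconds() / 3600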

Going forward, we plan to continue to explore ways to further increase the quality of our methods of data sharing with fact-checking partners.

We continue to participate in the Task-force made up of the relevant signatories' representatives that is being set up for this purpose. Meanwhile, we are also engaging proactively with EDMO on this commitment.

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

We have meaningfully engaged in the Task-force / Plenaries and all working groups.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

We have meaningfully engaged in the Task-force and all of its working groups by attending and participating in meetings and engaging in any relevant discussions, in particular regarding elections and further developing/activating the Rapid Response System (RRS). 

We will continue to engage in the Task-force and all of its working groups and subgroups.

Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

TikTok will continue to have appropriate resources in place to meet our commitments and ensure compliance.

Given the breadth of the Code and the commitments therein, our work spans multiple teams, including Trust and Safety, Legal, Monetisation Integrity, Product and Public Policy. Teams across the globe are deployed to ensure that we meet our commitments, with the notable involvement of our Trust and Safety leadership.

Across the European Union, we have thousands of trust and safety professionals dedicated to keeping our platform safe. We also recognise the importance of local knowledge and expertise as we work to ensure online safety for our users. We take a similar approach to our third-party partnerships.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Polish Elections:
The 2025 Polish Presidential Election was a high-risk election with significant negative exposure potential. Round 1 took place on 18 May, the run-off was held on 1 June, and official results were announced on 2 June. Because of its significance for Poland's domestic policies and international relations, we activated our Mission Control Centre (MCC) in advance of the election, which enabled us to identify and contain threats early and quickly. Regulators publicly praised TikTok's collaboration, and national media highlighted TikTok's "more ambitious" safety posture compared to rival platforms. Some examples of the violative content we successfully disrupted include:
  • Content removals: We proactively removed more than 3,300 pieces of election-related content in Poland for violating our policies on synthetic and manipulated media, misinformation, and civic and election integrity.
  • Covert influence disruption: We removed three new domestic CIO networks (totalling 77 accounts and 36,419 followers) that were identified as specifically targeting a Polish audience to manipulate election discourse using fake news accounts and personas. More information relating to these network disruptions is published in our dedicated Covert Influence Operations reports.

German Elections:
We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the German federal election held on 23 February 2025. In advance of the election, a core election team was formed, and consultations between cross-functional teams helped to identify and design response strategies.

TikTok did not observe major threats during the German election. Some examples of the violative content we successfully disrupted in Germany during January 2025:
  • We removed more than 862,000 pieces of content for violating our Community Guidelines, which includes our policies on civic and election integrity and misinformation.
  • We also removed 712 accounts for impersonating German election candidates and elected officials.
  • We proactively prevented more than 24 million fake likes and more than 18.9 million fake follow requests. We also blocked more than 293,000 spam accounts from being created.
  • We also removed more than 700,000 fake accounts, more than 17 million fake likes, and more than 5.7 million fake followers.

Portuguese Elections:
We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Portuguese legislative election held on 18 May 2025. In advance of the election, a core election team was formed, and consultations between cross-functional teams helped to identify and design response strategies.

TikTok did not observe major threats during the Portuguese election. Through the election, we monitored for and actioned inauthentic behavior, and removed content that violated our Community Guidelines. As part of these efforts:
  • Between 12 and 25 May, we removed more than 300 pieces of content for violating our policies on civic and election integrity, misinformation and AI-generated content. More than 94% of this content was removed before it was reported to us.
  • Between 12 and 25 May, we proactively prevented more than 1,800,000 fake likes and more than 671,000 fake follow requests, and blocked more than 5,400 spam accounts from being created in Portugal. We also removed more than 5,400 fake accounts, more than 880,000 fake likes, and more than 154,000 fake followers.
  • Between 15 and 29 May, we also removed 28 accounts for impersonating Portuguese election candidates and elected officials.

Romanian Election:
As co-chair of the Code of Conduct on Disinformation's Working Group on Elections, TikTok takes its role in protecting the integrity of elections on our platform very seriously. We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Romanian Presidential Election, which took place on 4 May 2025, with a second round on 18 May 2025. Following the unprecedented annulment of the 2024 results, this was one of the most closely monitored electoral cycles for TikTok to date.

From March to May 2025, TikTok deployed robust detection models, automated moderation, and local partnerships to safeguard its Romanian user base of over 8 million. The following are examples of some of the threats TikTok observed in relation to both election rounds:

  • Covert influence disruption: We removed two new domestic covert networks (totalling 87 accounts and 33,296 followers) in April 2025 for manipulating election discourse using fake news accounts and personas. More information relating to these network disruptions is published on our dedicated Covert Influence Operations transparency page.
  • Content removals: We removed over 13,100 pieces of election-related content in Romania for violating our policies on misinformation, civic integrity, and synthetic media - over 93% were taken down before any user report.
  • We received 57 submissions through the Code of Conduct on Disinformation (COCD) Rapid Response System in relation to the Romanian Presidential Election, all of which were rapidly addressed. Actions included banning or geo-blocking accounts and removing content for violations of our Community Guidelines.

(V) Deterring covert influence operations 

We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.
 
(VI) Tackling misleading AI-generated content 

Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.

(VII) Government, Politician, and Political Party Accounts (GPPPAs)

We classify presidential candidate accounts as a Government, Politician, and Political Party Account (GPPPA). We then apply designated policies to GPPPAs to ensure the right experience, given their important role in civic processes. This includes disabling monetisation features.

We strongly recommend that GPPPAs be verified. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.

In advance of the elections, TikTok's Government Relations team organised dedicated sessions with every political group in Romania to explain our policies and to educate political actors about safety measures. TikTok also requested that a list of candidates be provided by the Romanian authorities to ensure the GPPPA label could be correctly applied where relevant.
 
Directing people to trusted sources

(I) Investing in media literacy

We invest in media literacy campaigns as a counter-misinformation strategy. TikTok has partnered with the local NGO Funky Citizens in Romania to help the community safely navigate the platform and protect themselves against potential misinformation during the election. Funky Citizens developed a series of educational videos explaining how users could identify and avoid misinformation, use TikTok’s safety features, and critically evaluate content related to the electoral process. The Romanian community could find the video series with practical advice and useful information about the electoral process on Funky Citizens' official TikTok account and the in-app Election Center dedicated to Romania’s elections. These videos were viewed over 45 million times between March 2024 and February 2025.

External engagement at the national and EU levels

(I) Rapid Response System: external collaboration with COCD Signatories

The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 57 notifications through the RRS in relation to the Romanian Election, all of which were addressed and actioned; enforcement included banning or geo-blocking accounts and removing content for violations of our Community Guidelines.


(II) Engagement with local experts

To further promote election integrity, and to inform our approach to the Romanian Presidential Election, we organised an Election Speaker Series with Funky Citizens, who shared their insights and market expertise with our internal teams.

(III) Engagement with national authorities pre-election

Our Government Relations team proactively organised an election-dedicated meeting on 7 February 2025 with ANCOM, the Permanent Electoral Authority, and the Ministry of Research, Innovation and Digitalization to establish points of contact before the elections and to offer access to our reporting tools, including the Romanian Election Centre. On 27 February 2025, we engaged in an online meeting with ANCOM and Autoritatea Electorală Permanentă (the Permanent Electoral Authority in Romania) on new Romanian regulation.

On 3 March 2025, we participated in an ANCOM roundtable in Bucharest, as well as a series of meetings including an in-person tabletop exercise on the Romanian election.

In the run-up to the 2025 election, and during the election period, we continued to engage with ANCOM and promptly responded to ongoing questions and correspondence.


Mitigations in place

Polish Elections:
(I) Moderation capabilities

We have thousands of trust and safety professionals dedicated to keeping our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection and regular monitoring of enriched keywords and accounts.

(II) Mission Control Centre: internal cross-functional collaboration

TikTok established a Mission Control Centre (MCC) in advance of the election, developed risk scenario mapping (covering focused Russian influence operations, AI-generated content (AIGC), misinformation/disinformation, scaled inauthentic behaviour, and hate speech surges), and implemented regular content trend clustering with a rolling containment, correction, and prevention cycle covering key features. As a result, all identified threats were contained or mitigated early, with no credible or substantiated election interference claims emerging.

(III) Countering misinformation

Our misinformation moderators receive enhanced training and tools to detect and remove misinformation and other violative content. We also have teams on the ground who partner with experts to ensure local context and nuance is reflected in our approach.

In the weeks leading up to and including the run-off, we removed 530 videos for violating our civic and election integrity policies, and 2,772 videos for violating our misinformation policies.

(IV) Fact-checking

Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking programme is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.

Within Europe, we partnered with 12 fact-checking organisations that provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Demagog serves as the fact-checking partner for Poland.

(V) Deterring covert influence operations

We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.

(VI) Tackling misleading AI-generated content

Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.

(VII) Government, Politician, and Political Party Accounts (GPPPAs)

Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.

We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.

Directing people to trusted sources

(I) Investing in media literacy

We invest in media literacy campaigns as a counter-misinformation strategy, working with fact-checkers as part of our Election Centre for Poland. TikTok partnered with Demagog and FakeNews.pl in Poland to help the community safely navigate the platform and protect themselves against potential misinformation during the elections. We also worked with fact-checkers to launch an Evergreen Media Literacy Campaign.

External engagement at the national and EU levels

(I) Rapid Response System: external collaboration with COCD Signatories

The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 23 reports through the RRS before the Polish Election, including NASK and DSA cases, all of which were rapidly addressed. Actions included banning accounts and removing content for violations of our Community Guidelines.

(II) Engagement with local experts

To further promote election integrity, and to inform our approach to the Polish Election, we organised an Election Speaker Series with Demagog, who shared their insights and market expertise with our internal teams.

German Elections:

Enforcing our policies

(I) Moderation capabilities

We have thousands of trust and safety professionals dedicated to keeping our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection and regular monitoring of enriched keywords and accounts.

(II) Mission Control Centre: internal cross-functional collaboration


On 18 November 2024, ahead of the German election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.


(III) Countering misinformation


Our misinformation moderators receive enhanced training and tools to detect and remove misinformation and other violative content. We also have teams on the ground who partner with experts to ensure local context and nuance is reflected in our approach.


In January 2025, we removed more than 862,000 pieces of content for violating our Community Guidelines, which includes our policies on civic and election integrity and misinformation.


In the weeks leading up to and including the election, we removed 3,283 videos for violating our civic and election integrity policies, and 12,781 videos for violating our misinformation policies. 


(IV) Fact-checking


Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking programme is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.


TikTok collaborates with 12 fact-checking organisations across Europe to evaluate the accuracy of content in most European languages, including German. Deutsche Presse-Agentur (dpa) serves as the fact-checking partner for Germany.


(V) Deterring covert influence operations


We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website, as well as in our monthly Covert Influence Operations reports.


(VI) Tackling misleading AI-generated content


Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.


(VII) Government, Politician, and Political Party Accounts (GPPPAs)


Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.


We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.


Before the German election, we provided all parties represented in federal and state parliaments with written information about our election integrity policies and measures, and offered virtual information sessions for the parties and their candidates. We presented at a security-focused webinar for candidates and parties organised by the Federal Office for Information Security (BSI). We also offered all parties represented in federal and state parliaments verification support for their candidates.


Directing people to trusted sources


(I) Investing in media literacy


We invest in media literacy campaigns as a counter-misinformation strategy. From 16 December 2024 to 3 March 2025, we ran an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa). The Election Centre was visited more than 5.7 million times.


External engagement at the national and EU levels


(I) Rapid Response System: external collaboration with COCD Signatories


The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received four reports through the RRS before the German election, all of which were rapidly addressed. Actions included banning accounts and removing content for violations of our Community Guidelines.


(II) Engagement with local experts


To further promote election integrity, and to inform our approach to the German Election, we organised an Election Speaker Series with dpa, who shared their insights and market expertise with our internal teams.


(III) Engagement with national authorities and stakeholders


We participated in the two election roundtables hosted by the Federal Ministry of the Interior (BMI), one before and one after the election.


We participated in the election roundtable as well as the stress test hosted by the Federal Network Agency (BNetzA), the German Digital Service Coordinator (DSC). In addition, we held three separate virtual meetings between TikTok and the BNetzA, also attended by the European Commission, and answered a set of written questions.


We met with the domestic intelligence service (BfV) and the BMI state secretary.


We attended two election-focused virtual meetings with BzKJ (Federal Agency for Child and Youth Protection) and other platforms.


We engaged with the electoral commissioner ("Bundeswahlleiterin") and onboarded them to TikTok. In our Election Centre, we included two videos from the electoral commissioner and linked to their website.


We provided all parties represented in federal and state parliaments with information about our election integrity measures and what they/their candidates can and cannot do on the platform in written form and also offered virtual info sessions for the parties and their candidates. We also offered all parties represented in federal and state parliaments verification support for their candidates.


We presented at a security-focused webinar for candidates and parties organised by the Federal Office for Information Security (BSI).

Portuguese Elections:

(I) Moderation capabilities


We have thousands of trust and safety professionals dedicated to keeping our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of enriched keywords and accounts.


(II) Mission Control Centre: internal cross-functional collaboration


On 13 May, ahead of the Portuguese election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.


(III) Countering misinformation

Our misinformation moderators receive enhanced training and tools to detect and remove misinformation and other violative content. We also have teams on the ground who partner with experts to ensure local context and nuance is reflected in our approach.

In the weeks leading up to and including the election (21 April to 18 May), we removed 821 pieces of content for violating our policies on civic and election integrity, misinformation, and AI-generated content. In this same period, we removed over 99% of violative misinformation content before it was reported to us.


(IV) Fact-checking


Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking programme is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.


TikTok collaborates with 12 fact-checking organisations across Europe to evaluate the accuracy of content in most European languages, including Portuguese. Poligrafo serves as the fact-checking partner for Portugal.


(V) Deterring covert influence operations


We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website as well as monthly Covert Influence Operations reports.


(VI) Tackling misleading AI-generated content


Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.


(VII) Government, Politician, and Political Party Accounts (GPPPAs)


Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.


We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.


Before the election we met with the main Portuguese regulatory bodies and political parties' Heads of Communication to (i) provide an overview of TikTok's policies for political accounts, (ii) outline TikTok's approach to election integrity and to data security, (iii) encourage account verification and (iv) enable direct contact to respond to their specific requests.


Directing people to trusted sources


(I) Investing in media literacy


We invest in media literacy campaigns as a counter-misinformation strategy. From 18 April 2025 to 2 June 2025, we ran an in-app Election Centre to provide users with up-to-date information about the 2025 Portuguese election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Poligrafo. TikTok partnered with Poligrafo in Portugal to help the community safely navigate the platform and protect themselves against potential misinformation during the elections; Poligrafo developed a series of educational videos explaining how users could identify and avoid misinformation, use TikTok's safety features, and critically evaluate content related to the electoral process. The Portuguese community could find this video series, with practical advice and useful information about the electoral process, in the relevant Election Centre.


External engagement at the national and EU levels


(I) Rapid Response System: external collaboration with COCD Signatories


The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received one report through the RRS during the Portuguese election, which was quickly addressed and resulted in the reported content being deemed ineligible for the For You feed ("FYF Ineligible").


(II) Engagement with local experts


To further promote election integrity, and to inform our approach to the Portuguese election, we organised an Election Speaker Series with Poligrafo, who shared their insights and market expertise with our internal teams.


(III) Engagement with national authorities and stakeholders


Ahead of the election, our Government Relations team represented TikTok at an official meeting organised by ANACOM with the Portuguese Regulatory Authority for the Media (ERC) and the National Election Commission (CNE). The team also met with the Organization for Security and Cooperation in Europe’s Office of Democratic Institutions and Human Rights (OSCE/ODIHR) and in particular, their Election Expert Team (EET) deployed for these elections.

As previously referenced, we also met with Portuguese political parties’ Heads of Communication to (i) provide an overview of TikTok's policies for political accounts, (ii) outline TikTok's approach to election integrity and to data security, (iii) encourage account verification and (iv) enable direct contact to respond to their specific requests.

Romanian Elections:
(I) Moderation capabilities
We supported the 2025 Romanian elections by preparing moderators, updating policy, and escalating hate organisation content in time for both election rounds. Our teams worked alongside technology to ensure that we consistently enforced our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. We continue to prioritise and enhance TikTok's automated moderation technology, as such technology enables faster and more consistent removal of content that violates our rules. We invest in technologies that improve content understanding and predict potential risks so that we can take action on violative content before it's viewed.
We have thousands of trust and safety professionals dedicated to keeping our platform safe. We have 95 Romanian-speaking moderators, which is the largest such team among digital platforms in the country, both in absolute terms and relative to the number of users. We increased resources on our Romanian elections task force by adding more than 120 subject matter experts across multiple teams, including Deceptive Behaviour (which includes Covert Influence Operations analysts), Security, and Ads Integrity.


(II) Mission Control Centre: internal cross-functional collaboration  

In advance of the official campaign period for the Romanian Presidential Election, we established a dedicated Mission Control Centre (MCC), including employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the Romanian Presidential Election.

(III) Countering misinformation

Our misinformation moderators receive enhanced training and tools to detect and remove misinformation and other violative content. We also have teams on the ground who partner with experts to ensure local context and nuance are reflected in our approach. We also integrated the most recent insights from our expert partners into our policies and guidelines on misinformation and impersonation. We removed more than 5,500 pieces of election-related content in Romania for violating our policies on misinformation, harassment, and hate speech between March and May 2025. 

(IV) Fact-checking 

Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking programme is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify. TikTok collaborates with 12 fact-checking organisations across Europe to evaluate the accuracy of content in most European languages, including Romanian. LeadStories, a verified member of the International Fact-Checking Network and the European Fact-Checking Standards Network, serves as the fact-checking partner for Romania and provided coverage for the platform, including across weekends.

 

Policies and Terms and Conditions

Outline any changes to your policies

N/A

Policy - 50.1.1

N/A

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2

N/A

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Scrutiny of Ad Placements, including prohibition on monetisation and fundraising campaigns for GPPPAs

(Commitment 1 and Measure 1.1) 


Specific Action applied - 50.2.1

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

Description of intervention - 50.2.2

N/A

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.




Specific Action applied - 50.3.1

Prohibition on Political Advertising
(Commitment 5, Measure 5.1)

Description of intervention - 50.3.2

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.


Indication of impact - 50.3.3

N/A

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.4.1

Identifying and removing CIO networks

(Commitment 14, Measure 14.1)

Description of intervention - 50.4.2

Polish Elections:
During the Polish Election we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing we introduced dedicated covert influence operations reports.

Accounts targeting political discourse in Poland
We assess that this network operated from Poland and targeted a Polish audience. The individuals behind this network created inauthentic accounts in order to promote nationalistic viewpoints that criticized Poland's engagement with the EU and aid to Ukraine, within the context of the 2025 Polish presidential election. The network systematically recycled content throughout its accounts in order to further spread its messaging.

  • Removed accounts in network: 16
  • Followers of Network: 14,743

We assess that this network operated from Poland and targeted a Polish audience. The individuals behind this network created inauthentic accounts in order to make coordinated and directed posts supporting a Polish politician. The network was found to strategically synchronise activity/content across multiple platforms through hashtags and the timing of posts.
  • Removed accounts in network: 12
  • Followers of Network: 10,252

We assess that this network operated from Poland and targeted a Polish audience. The individuals behind this network created inauthentic accounts in order to discredit the current government within the context of the 2025 Polish presidential election. The network was found to post videos that exploited the Volhynia Massacre and other sensitive historical topics to promote Eurosceptic, anti-Ukrainian, and anti-Semitic narratives.

  • Removed accounts in network: 49
  • Followers of Network: 11,424

More information relating to the above detailed network disruptions is published on our dedicated Covert Influence Operations transparency page.

German Elections:
During the German election we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing, we introduced dedicated covert influence operations reports.

Portuguese Elections:
During the Portuguese election we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing we introduced dedicated covert influence operations reports.

Romanian Elections:
During the Romanian Presidential Election, we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform.

Indication of impact - 50.4.3

Polish Elections:
See above.

German Elections:
In February 2025, we disrupted three small-scale covert influence operations targeting the German market within the context of the federal election:
  1. A network of 40 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to amplify content supporting the political party "Alternative for Germany" (AfD). A large proportion of the network's accounts were found to use the word "news" or "nachricht" in their handle or nickname (a pattern illustrated in the sketch below).
  2. A network of 17 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to promote the "Bündnis Sahra Wagenknecht" (BSW) party within the context of the 2025 German federal elections. The network was found to alternate between posting apolitical and political content in order to drive engagement.
  3. A network of 14 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to promote the political party "Alternative for Germany" (AfD). The accounts used Smurf avatars and were observed to rebrand their accounts and alternate content in order to gain engagement.

In addition to these network disruptions, we continued to remove accounts associated with previously disrupted networks attempting to re-establish their presence within this reporting period.
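
As a rough illustration of the handle-token and timing patterns described above, here is a minimal sketch of two coordination signals of the kind an activity-based system might compute; the field names, thresholds, and example data are hypothetical, and real covert influence operation detection combines many more behavioural signals with human analysis.

# Illustrative sketch only: two simple coordination signals (shared handle
# tokens, synchronised posting). Example data and thresholds are invented.
from collections import Counter
from datetime import datetime

def handle_token_signal(handles: list[str], token: str = "news") -> float:
    # Fraction of accounts whose handle contains a shared token.
    return sum(token in handle.lower() for handle in handles) / len(handles)

def burst_signal(post_times: list[datetime], window_minutes: int = 10) -> float:
    # Fraction of posts falling in the single busiest time bucket.
    start = min(post_times)
    bucket_seconds = window_minutes * 60
    buckets = Counter(int((t - start).total_seconds() // bucket_seconds)
                      for t in post_times)
    return max(buckets.values()) / len(post_times)

handles = ["daily_news_de", "nachricht24", "berlin-news", "kat.mueller"]
times = [datetime(2025, 2, 10, 18, minute) for minute in (1, 3, 4, 7)]
print(handle_token_signal(handles))  # 0.5 -> elevated shared-token share
print(burst_signal(times))           # 1.0 -> highly synchronised posting

Signals like these would only ever be inputs to further investigation, not grounds for removal on their own.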

Portuguese Elections:
In May 2025, we disrupted one small-scale covert influence operation targeting the Portuguese market within the context of the legislative election:
  • We assess that this network targeted a Portuguese audience. The individuals behind this network created inauthentic accounts in order to promote the Socialist Party and undermine the Social Democratic Party, within the context of the 2025 Portuguese election. This network masked its operating location through advanced operational security.

Romanian Election:
TikTok has scaled mitigations against deceptive behaviours, including spam, impersonation, and fake engagement. As examples of our efforts in this area, from March to May 2025:
  • We proactively prevented more than 21.5 million fake likes and more than 8.09 million fake follow requests, and we blocked 38,000 spam accounts from being created in Romania. We also removed:
    • 48,300 fake accounts
    • more than 8.2 million fake likes
    • more than 1.83 million fake followers
  • From 1 September 2024 to 26 May 2025, we prevented more than 120 million fake likes and more than 53 million fake follow requests, and we blocked more than 707,670 spam accounts from being created in Romania. We also removed over 2,000 accounts impersonating Romanian Government, Politician, or Political Party Accounts, 379,324 fake accounts, more than 28.9 million fake likes, and more than 15.6 million fake followers.

As set out above, we reported removing two CIO networks in 2025 that were identified as specifically targeting a Romanian audience, including:

  • A network of 27 accounts that had 9,474 cumulative followers as at the date of removal, operating from Romania, that targeted Romanian audiences in order to amplify certain narratives and manipulate Romanian election discourse. The network was found to create accounts with generic handles and avatars which it presented as news accounts.
  • A network of 60 accounts that had 23,822 cumulative followers as at the date of removal, operating from Romania, that targeted Romanian audiences in order to amplify certain narratives and manipulate Romanian election discourse. The network was found to create fictitious personas in order to post comments and content aligned with its strategic goal.
More information relating to the above detailed network disruptions is published on our dedicated Covert Influence Operations transparency page.


Specific Action applied - 50.4.4

Tackling misleading AIGC and edited media 

(Commitment 15, Measures 15.1 and 15.2)

Description of intervention - 50.4.5

Our Edited Media and AI-Generated Content (AIGC) policy makes it clear that we do not want our users to be misled about political issues. For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime.

We do not allow misleading AIGC or edited media that falsely shows:
  • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation,
  • A crisis event, such as a conflict or natural disaster,
  • A public figure who is:
    • being degraded or harassed, or engaging in criminal or anti-social behaviour
    • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
    • being politically endorsed or condemned by an individual or group.

In addition, all AIGC or edited media, including that which depicts public figures, such as politicians, must be clearly labelled as AI-generated, and cannot be used for endorsements.

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

TikTok has invested in labeling technologies and tools, including the implementation of Content Credentials technology from the Coalition for Content Provenance and Authenticity (C2PA), which enables the automatic recognition and labeling of AI-generated content. This is complemented by a TikTok-developed tool that allows creators to easily label AI-generated content, already used by 37 million creators. TikTok’s commitment to AIGC transparency ensures a safe environment for users, who can easily identify synthetic content and understand its context.

TikTok is a member of the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity, and was the first video sharing platform to put Content Credentials into practice. We have the ability to read Content Credentials that attach metadata to content, which we can use to instantly recognise and label AIGC. This helped us to expand auto-labelling to AIGC created on some other platforms. 
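
For illustration, a minimal sketch of how Content Credentials metadata can signal AI generation is shown below. It assumes a manifest already parsed into a dictionary; the field names follow the public C2PA conventions, and the helper function is hypothetical rather than a description of TikTok's implementation.

```python
# Illustrative sketch only: decides whether an upload should be auto-labelled
# as AI-generated based on a (pre-parsed) C2PA-style Content Credentials
# manifest. Field names follow public C2PA conventions; this is an assumption
# for the example, not TikTok's implementation.

from typing import Any

# IPTC digital source type that C2PA uses to mark fully AI-generated media.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def should_auto_label_aigc(manifest: dict[str, Any]) -> bool:
    """Return True if a c2pa.actions assertion marks the media as AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Example: a minimal manifest as a hypothetical generative-AI tool might emit.
example = {
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{
            "action": "c2pa.created",
            "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
        }]},
    }]
}
assert should_auto_label_aigc(example)
```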

Indication of impact - 50.4.6

Polish Elections:
Number of videos removed for violating our Edited Media and AIGC policy from 21 April to 1 June 2025, covering both rounds of the Polish election and the four complete weeks preceding Round 1: 75

German Election:
Number of videos removed for violating our Edited Media and AIGC policy during the 4 weeks leading up to and including the day of the German federal election on 23 February 2025: 574 

Portuguese Elections:
Number of videos removed for violating our Edited Media and AIGC policy during the 4 weeks leading up to and including the day of the Portuguese election on 18 May 2025: 11

Romanian Election:
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy from 31 March to 18 May 2025, covering both rounds of the Romanian Presidential election and the four weeks leading up to Round 1: 657



Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.5.1

Rolling out Media literacy campaigns (Commitment 17, Measure 17.2) 

Description of intervention - 50.5.2

Polish Elections:
From 18 April 2025, TikTok launched an in-app Election Centre to provide users with up-to-date information about the 2025 Polish election. Working with electoral commissions and civil society organisations, the Election Centre connected people with reliable voting information, including when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results.
The Election Centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Demagog, fact checker FakeNews.pl, and media partners Radio Zet and Orientuj.sie.
We directed people to the Election Centre through prompts on videos, LIVEs and searches related to elections.

German Election:
In advance of EU and select regional elections, TikTok works with electoral commissions, civil society organisations, and fact-checking bodies to establish in-app Election Centres that connect people with reliable voting information, including: when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results. We direct people to the Election Centres through prompts on videos, LIVEs and searches related to elections.
From 16 December 2024 to 3 March 2025, we had a dedicated in-app Election Centre providing users with up-to-date information about the German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Deutsche Presse-Agentur (dpa). On 21 February 2025, we also launched a new permanent general media literacy and critical thinking skills campaign in Germany in collaboration with dpa.

During H1 2025, we further enhanced awareness and visibility of how we tackle election misinformation and covert influence operations on our platform through the launch of our Global Elections Hub. This evergreen resource provides users and external stakeholders with timely updates on our election integrity efforts throughout each election cycle.

Portuguese Elections:
In advance of EU and select regional elections, TikTok works with electoral commissions, civil society organisations, and fact-checking bodies to establish in-app Election Centres that connect people with reliable voting information, including: when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results. We direct people to the Election Centres through prompts on videos, LIVEs and searches related to elections.
From 18 Apr 2025 to 2 June 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Portuguese election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Poligrafo.
We directed people to the Election Centres through prompts on videos, LIVEs and searches related to elections.

Romanian Election:
TikTok had an in-app Election Centre dedicated to Romania's Presidential election. We updated the Election Centre to link directly to the Electoral Commission's website, making it even easier for people to access authoritative election information. In line with media literacy best practices, we also added a reminder to verify the accuracy of election information people see online and offline.

TikTok also partnered with local NGO Funky Citizens to help the community safely navigate the platform and protect themselves against potential misinformation during the election. The Centre featured authoritative information from Funky Citizens, Libertatea, and Digi FM, and was promoted to users through search guides and automatic content labels.

Funky Citizens developed a series of educational videos explaining how users could identify and avoid misinformation, use TikTok's safety features, and critically evaluate content related to the electoral process. The Romanian community could find the video series, with practical advice and useful information about the electoral process, on Funky Citizens' official TikTok account and in the Election Centre.

Indication of impact - 50.5.3

Polish Elections:
The Election Centre launched before the Polish Election was visited 1,968,010 times.

German Election:
The Election Centre launched in advance of the German federal election was visited 5,708,749 times, and search banners were viewed 712,652 times. This localised approach helped to ensure that messaging in relation to the election was relevant to our community and encouraged more engagement.

Portuguese Elections:
The Election Centre, which launched in advance of the Portuguese election, was visited 371,857 times. This localised approach helped to ensure that messaging in relation to the election was relevant to our community and encouraged more engagement.

Romanian Election:
The in-app Election Centre launched before the Presidential election was visited 2,018,869 times between 31 March and 23 May 2025.

Funky Citizens videos were viewed over 45 million times between March 2024 and February 2025.


Specific Action applied - 50.5.4

Engagement with local and regional experts (Commitment 17, Measure 17.2)

Description of intervention - 50.5.5

Polish Elections:
To further promote election integrity, and inform our approach to the Polish election, we organised an Election Speaker Series with Demagog, who shared their insights and market expertise with our internal teams.

Our fact-checking partners and local media literacy bodies have also supported TikTok in our launch of the Election Centres, which featured videos from them. This localised approach helped to ensure that messaging in relation to the Polish election was relevant to our community and encouraged more engagement. 

German Election:
To further promote election integrity, and inform our approach to the election, we organised an Election Speaker Series on 14 January 2025 with Deutsche Presse-Agentur (dpa), who shared their insights and market expertise with our internal teams.

Portuguese Elections:
To further promote election integrity, and inform our approach to the Portuguese election, we organised an Election Speaker Series with Poligrafo, who shared their insights and market expertise with our internal teams.

Romanian Election:
To further promote civic awareness, TikTok introduced a permanent media literacy hub on 14 May 2025, surfacing critical thinking tools via keyword-triggered notices. Additionally, Romanian influencers and marketing agencies were briefed on TikTok's strict rules against political advertising and branded content.


Indication of impact - 50.5.6

All elections:
This engagement with external regional and local experts allowed us to inform our country-level approach to these elections.


Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.6.1

Providing access to our Research API (Commitment 26 and Measures 26.1 and 26.2)

Description of intervention - 50.6.2

Through our Research API, academic researchers from non-profit universities in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, and the number of comments, shares, likes, and favourites that a video receives. More information is available here.

We conduct regular workshops for researchers, both online and in person, to facilitate successful applications, provide hands-on demonstrations of our research tools, and address questions. These sessions are designed to maximise researcher success. From January to June 2025, we delivered more than 9 workshops, engaging over 150 researchers, including in Germany, Romania, Poland, and the Czech Republic.
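
For illustration, the sketch below shows how an approved researcher might query public video data, based on the publicly documented Research API. The endpoint, query fields, and token handling shown here are assumptions for this example and should be checked against the current developer documentation.

```python
# Minimal sketch of querying public video data through the Research API,
# assuming the researcher already holds an approved access token. The endpoint
# and body shape follow TikTok's public Research API documentation, but treat
# them as illustrative rather than authoritative.

import requests

ACCESS_TOKEN = "..."  # obtained via the approved research client credentials

resp = requests.post(
    "https://open.tiktokapis.com/v2/research/video/query/",
    params={"fields": "id,like_count,comment_count,share_count,voice_to_text"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "query": {
            "and": [
                {"operation": "EQ", "field_name": "region_code", "field_values": ["DE"]},
                {"operation": "IN", "field_name": "hashtag_name", "field_values": ["bundestagswahl"]},
            ]
        },
        "start_date": "20250101",  # date windows are limited to 30 days
        "end_date": "20250130",
        "max_count": 100,
    },
    timeout=30,
)
resp.raise_for_status()
for video in resp.json().get("data", {}).get("videos", []):
    print(video["id"], video.get("like_count"))
```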

Indication of impact - 50.6.3

Number of Research API applications related to the Polish election received from January to June 2025: one

Number of Research API applications related to the German federal election approved in H1 2025: 15

Number of Research API applications related to the Portuguese election received from January to June 2025: none

Number of Research API applications related to the Romanian Presidential election received from January to June 2025: 7


Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.7.1

Ensuring fact-checking coverage (Commitment 30, Measure 30.1)

Description of intervention - 50.7.2

Polish Elections:
Demagog serves as our fact-checking partner for Poland and provided coverage for the platform.

German Elections:
Deutsche Presse-Agentur (dpa) serves as our fact-checking partner for Germany and provides coverage for the platform.

Portuguese Elections:
Poligrafo serves as our fact-checking partner for Portugal and provided coverage for the platform.

Romanian Election:
LeadStories serves as our fact-checking partner for Romania and provided coverage for the platform, including across weekends in advance of the Romanian Presidential election.

Indication of impact - 50.7.3

German, Portuguese, and Polish elections:
Please refer to Chapter 7 - Empowering the Fact-Checking Community for metrics.

Romanian Election:
In May 2025, LeadStories provided 77 misinformation leads and submitted an Insights Report focused on the Romanian election. Please refer to Chapter 7 - Empowering the Fact-Checking Community for comprehensive metrics.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War of Aggression by Russia on Ukraine


The war of aggression by Russia on Ukraine (hereinafter, “War in Ukraine”) continues to challenge us to confront an incredibly complex and continually evolving environment. At TikTok, the safety of our people and community is of paramount importance and we work continuously to safeguard our platform.

We have set out below some of the main threats we have observed on our platform in relation to the spread of harmful misinformation and covert influence operations (CIO) related to the War in Ukraine in the reporting period. We remain committed to preventing such content from being shared in this context.

(I) Spread of harmful misinformation

We observe and take action where appropriate under our policies. Since the War in Ukraine began, we have seen false or unconfirmed claims about specific attacks and events, the development or use of weapons, the involvement of specific countries in the conflict, and statements about specific military activities, such as the direction of troop movement. We have also seen footage repurposed in a misleading way, including video game footage or unrelated footage from past events presented as current.

TikTok adopts a dynamic approach to understanding and removing misleading stories. When addressing harmful misinformation, we apply the Integrity & Authenticity policies in our Community Guidelines and take action against offending content on our platform. Our moderation teams are provided with detailed policy guidance and direction when moderating crisis-related misinformation under our misinformation policies; this includes case banks of harmful misinformation claims to support their moderation work.
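
For illustration only, the sketch below shows one simple way a case bank could support moderation: matching an extracted claim against previously fact-checked claims. The data structure, threshold, and matching method are assumptions for this example, not TikTok's implementation.

```python
# Illustrative sketch (not TikTok's systems): how a "case bank" of previously
# fact-checked claims might support moderators, by matching a video's
# extracted claim text against known harmful-misinformation claims.

from difflib import SequenceMatcher

# Hypothetical case bank: claim text -> verdict supplied by fact-checkers.
CASE_BANK = {
    "video game footage presented as a real strike": "misleading-repurposed",
    "footage from a past conflict presented as current": "misleading-repurposed",
}

def match_case_bank(claim: str, threshold: float = 0.75) -> str | None:
    """Return the verdict of the closest known claim, or None if no match."""
    claim = claim.lower().strip()
    best_verdict, best_score = None, 0.0
    for known, verdict in CASE_BANK.items():
        score = SequenceMatcher(None, claim, known).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    return best_verdict if best_score >= threshold else None

print(match_case_bank("Video game footage presented as a real strike!"))
```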

(II) CIOs

We continuously work to detect and disrupt covert influence operations that attempt to establish themselves on TikTok and undermine the integrity of our platform. Our Integrity & Authenticity policies prohibit attempts to sway public opinion while also misleading our systems or users about the identity, origin, approximate location, popularity or overall purpose. We have specifically-trained teams that are on high alert to investigate and detect CIOs on our platform.  We ban accounts that try to engage in such behavior, take action on others that we assess as part of the network, and report them regularly in our Transparency Center. When we ban these accounts, any content they posted is also removed.

In the period from January to June 2025, we took action to remove a total of 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our new dedicated CIO transparency report here.  

CIO will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform. To counter these emerging threats and stay ahead of evolving challenges, we have expert teams who focus entirely on detecting, investigating, and disrupting covert influence operations.

Israel-Hamas Conflict


TikTok acknowledges both the significance and sensitivity of the Israel-Hamas conflict (referred to as the “Conflict” throughout this section).  We understand this remains a difficult, fearful, and polarizing time for many people around the world and on TikTok. TikTok continues to recognise the need to engage in content moderation of violative content at scale while ensuring that the fundamental rights and freedoms of European citizens are respected and protected. We remain dedicated to supporting free expression, upholding our commitment to human rights, and maintaining the safety of our community and integrity of our platform during the Conflict.   

We have set out below some of the main threats both observed and considered in relation to the Conflict and the actions we have taken to address these during the reporting period. 

(I) Spread of harmful misinformation

Trust forms the foundation of our community, and we strive to keep TikTok a safe and authentic space where genuine interactions and content can thrive. TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes the Integrity & Authenticity policies in our Community Guidelines; our products and practices; and external partnerships with fact-checkers, media literacy bodies, and researchers. We support our moderation teams with detailed misinformation policy guidance, enhanced training, and access to tools like our global database of previously fact-checked claims from our IFCN-accredited fact-checking partners, who help assess the accuracy of content.

We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the Conflict.


(II) CIOs

TikTok's integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account's identity, approximate location, relationships, popularity, or purpose.

We have specifically-trained teams on high alert to investigate CIO, and disrupting CIO networks has been a high priority for us in the context of the Conflict. We now provide regular updates on the CIO networks we detect and remove from our platform, including those we identify relating to the Conflict, in our dedicated CIO transparency report. Between January and June 2025, we reported one new CIO network disruption that was found to post content relating to the Conflict as a dominant theme.

We know that CIO will continue to evolve in response to our detection and networks may attempt to reestablish a presence on our platform, which is why we continually seek to strengthen our policies and enforcement actions in order to protect our community against new types of harmful misinformation and inauthentic behaviours. 

Mitigations in place

War of Aggression by Russia on Ukraine


We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.

(I) Investment in our fact-checking programme

We employ a layered approach to detecting harmful misinformation that is in violation of our Community Guidelines. 

Working closely with our fact-checking partners is a crucial part of our approach to tackling harmful misinformation on our platform. Our fact-checking programme includes coverage of content in Russian, Ukrainian, and Belarusian. We also partner with Reuters, which is dedicated to helping us accurately fact-check content in Russian and Ukrainian.

We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This has facilitated proactive responses against high-harm trends and has ensured that our moderation teams have up-to-date guidance.

(II) Disruption of CIOs

As set out above, disrupting CIO networks has been a high priority for us in the context of the crisis. We published a list of the networks we disrupted in the relevant period within our most recently published transparency report here.

Between January and June 2025, we took action to remove a total of 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our dedicated CIO transparency report here.  

Countering influence operations is an industry-wide effort, in part because these operations often spread their activity across multiple platforms. We regularly consult with third-party experts, including our global Content and Safety Advisory Councils, whose guidance helps us improve our policies and understand regional context. 

(III) Restricting access to content for state-affiliated media

Since the early stages of the war, we have restricted access to content from a number of Russian state-affiliated media entities in the EU, Iceland and Liechtenstein. Our state-affiliated media policy is used to help users understand the context of certain content and to help them evaluate the content they consume on our platform. Labels have since been applied to content posted by the state-affiliated accounts of such entities in Russia, Ukraine and Belarus.

We continue the detection and labeling of state-controlled media accounts in accordance with our state-controlled media label policy globally. 

(IV) Mitigating the risk of monetisation of harmful misinformation

Political advertising has been prohibited on our platform for many years but, as an additional mitigation against the risk of profiteering from the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We have also suspended TikTok in the Donetsk and Luhansk regions and removed Livestream videos originating in Ukraine from the For You feed of users located in the EU. In addition, the ability to add new video content or Livestream videos to the platform in Russia remains suspended.

(V) Launching localised media literacy campaigns

Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We have thirteen localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia, run in close collaboration with our fact-checking partners. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners, to help them identify misinformation and prevent its spread on the platform. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.

Israel-Hamas Conflict


We are continually working hard to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis. As part of our crisis management process, we launched a command centre that brings together key members of our global team of thousands of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we respond to this fast-evolving crisis. Since the beginning of the Conflict, we have been:

(I) Upholding TikTok's Community Guidelines

Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas or otherwise supports the attacks or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. In H1 2025, we removed 7,589 videos relating to the Conflict that violated our misinformation policies.

Evolving our proactive automated detection systems in real-time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.

(II) Leveraging our Fact-Checking Program

We employ a layered approach to detecting harmful misinformation that violates our Community Guidelines and our global fact-checking program is a critical part of this. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of harmful and difficult-to-verify claims. 

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content related to unfolding or emergency events that has been assessed by our fact-checkers but cannot be verified as accurate (i.e., 'unverified content'). Mindful of how evolving events may impact the assessment of sensitive Conflict-related claims from day to day, we have implemented a process that allows our fact-checking partners to update us quickly if claims previously assessed as 'unverified' become verified with additional context and/or at a later stage.
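
For illustration, the sketch below models the assessment lifecycle described above as a small state machine. The state names and effect flags are assumptions made for this example, not TikTok's systems.

```python
# Illustrative sketch (assumed names, not TikTok's systems): the assessment
# lifecycle described above, including the path where fact-checkers later
# confirm a previously 'unverified' claim, and the effect each state implies.

from enum import Enum, auto

class Assessment(Enum):
    UNVERIFIED = auto()  # reviewed, but cannot be confirmed at this time
    VERIFIED = auto()    # later confirmed accurate with additional context
    VIOLATING = auto()   # confirmed harmful misinformation

def apply_effects(assessment: Assessment) -> dict[str, bool]:
    """Map an assessment to the interventions described in this section."""
    if assessment is Assessment.UNVERIFIED:
        return {"warning_label": True, "share_prompt": True,
                "recommendable": False, "remove": False}
    if assessment is Assessment.VIOLATING:
        return {"warning_label": False, "share_prompt": False,
                "recommendable": False, "remove": True}
    return {"warning_label": False, "share_prompt": False,
            "recommendable": True, "remove": False}

print(apply_effects(Assessment.UNVERIFIED))  # labelled and not recommended
print(apply_effects(Assessment.VERIFIED))    # a later update lifts the limits
```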

(III) Scaling up our content moderation capabilities

TikTok has Arabic and Hebrew speaking moderators in the content moderation teams who review content and assist with Conflict-related translations. As we continue to focus on moderator care, we have also deployed additional well-being resources for our human moderation teams during this time. 

(IV) Disruption of CIOs

Disrupting CIO networks has also been high-priority work for us in tackling deceptive behaviour that may cause harm to our community or society at large. As noted above, between January and June 2025, we took action to remove one network (consisting of twelve accounts in total) that was found to be related to the Conflict. We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report, here.

(V) Mitigating the risk of monetisation of harmful misinformation

Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we’ve added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation. Our existing political ads policy, GPPPA labelling, and safety and civility policies help to mitigate the risk of monetisation of harmful misinformation.  

(VI) Deploying search interventions to raise awareness of potential misinformation 

To help raise awareness and to protect our users, we previously launched search interventions, which are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources and also direct them to well-being resources. In H2 2024 we continued to refine this process; in particular, we focused on improving keywords to ensure they are relevant and effective.

(VII) Adding opt-in screens over content that could be shocking or graphic

We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. Opt-in screens help prevent people from unexpectedly viewing shocking or graphic content as we continue to make public interest exceptions for some content. 

In addition, we are committed to engagement with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, and cooperation with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during these difficult times.  

Policies and Terms and Conditions

Outline any changes to your policies

Russia-Ukraine: No relevant updates in the reporting period.
In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.

Israel-Hamas:
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests.

Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.

Policy - 51.1.1

Russia-Ukraine:
No relevant updates in the reporting period.

Israel-Hamas:
Policy updates


Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2

Russia-Ukraine:
In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.

Israel-Hamas:
We continue to rely on our existing, robust Integrity & Authenticity policies, which are an effective basis for tackling content related to the Conflict. As such, we have not needed to introduce any new misinformation policies for the purposes of addressing the crisis. In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.

Rationale - 51.1.3

Russia-Ukraine:
Our Integrity & Authenticity policies are our first line of defence in combating harmful misinformation and deceptive behaviours on our platform.

Our Community Guidelines make clear to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. Our moderation teams are provided with detailed policy guidance and direction when moderating on war-related harmful misinformation using existing policies.

We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.

When situations such as the War in Ukraine arise, our teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied effectively in respect of content relating to the relevant crisis (in this case, the war). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.

Israel-Hamas: 
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests. Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.

In the context of the Conflict, we rely on our robust Integrity & Authenticity policies as our first line of defence in combating harmful misinformation and deceptive behaviours on our platform.

Our Community Guidelines clearly identify to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. We have also supported our moderation teams with detailed policy guidance and direction when moderating on Conflict-related harmful misinformation using existing policies.

We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.

When situations such as the Conflict arise, these teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied effectively in respect of content relating to the relevant crisis (in this case, the Conflict). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.

Policy - 51.1.4

Russia-Ukraine: No relevant updates in the reporting period.

Israel-Hamas: 
Feature policies

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5

Israel-Hamas:
In addition to being able to rely on our Integrity & Authenticity policies, we have made temporary adjustments to existing policies which govern certain TikTok features. For example, we have added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.

Rationale - 51.1.6

Israel-Hamas:
Temporary adjustments have been introduced in an effort to proactively prevent certain features from being used for hateful or violent behaviour in the region. 

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.



Specific Action applied - 51.3.1

Preventing misuse of our monetisation features
(Commitment 1, Measure 1.1)

Description of intervention - 51.3.2

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

Indication of impact - 51.3.3

N/A

Specific Action applied - 51.3.4

Content moderation

(Commitment 2, Measure 2.2)

Description of intervention - 51.3.5

Russia-Ukraine:
We use a combination of automated and human moderation to identify content that breaches our ad policies. 

We enforce our strict ad policies and have expert teams focused on investigating and responding to any attempts to circumvent them.

Israel-Hamas:
We use a combination of automated and human moderation in order to identify content that breaches our ad policies. Among other things, these policies prohibit ad content and landing pages from displaying negative content regarding military or police symbols, sensitive military events, militarism, the advocacy or whitewashing of war, terrorism, illegal organisations, or unlawful elements.

We've continued to invest both in automated moderation technology, which now takes down 80% of the content removed from TikTok, and in moderators. We've continued to update and expand our hate speech policy refreshers, trainings, and course materials, including implicit bias training addressing antisemitism and Islamophobia. We also received additional training from the Anti-Defamation League and the American Jewish Committee to further our understanding of new threats facing the Jewish community.

Our Monetisation Integrity department has moderation teams in multiple locations that speak Arabic and Hebrew.

Indication of impact - 51.3.6

Russia-Ukraine:
Our efforts on ad moderation practices help to ensure that ads that breach our policies are rejected or removed, both in the context of the War in Ukraine and more broadly on our platform. 

Israel-Hamas:
Given the range of potential policy violations that could be engaged, we are currently unable to provide metrics specific to this issue.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.



Specific Action applied - 51.4.1

Identifying and removing CIO networks

(Commitment 14, Measure 14.1)

Description of intervention - 51.4.2

Russia-Ukraine:
We fight against CIO, as our Integrity & Authenticity policies prohibit attempts to sway public opinion while also misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose. We prohibit, and constantly work to disrupt, attempts to engage in covert influence operations that manipulate our platform and/or harmfully mislead our community. Our expert teams, which focus entirely on detecting, investigating, and disrupting CIO networks, have removed numerous networks targeting discourse about the War in Ukraine.

Countering covert influence operations is a particular challenge because the adversarial actors behind them continuously evolve the ways they hide the linkage between their accounts. Our experts work to counter covert influence operations by studying the many layers of techniques, tactics, and procedures that deceptive actors use to try to manipulate platforms, drawing from a variety of disciplines, including threat intelligence and data science.

Israel-Hamas:
We have assigned dedicated resourcing within our specialist teams to proactively monitor for CIO in connection with the Conflict.

We fight against CIO as our Integrity & Authenticity policies prohibit attempts to sway public opinion while also misleading our systems or users about the identity, origin, approximate location, popularity or overall purpose. We have specifically-trained and dedicated teams that are on high alert to investigate and detect CIO networks on our platform and have removed networks targeting discourse about the Conflict, in accordance with our Integrity & Authenticity policies, which prohibit deceptive behaviours. 

We know that CIO will continue to evolve in response to our detection and networks may attempt to reestablish a presence on our platform, which is why we continually seek to strengthen our policies and enforcement actions in order to protect our community against new types of harmful misinformation and inauthentic behaviours.

Indication of impact - 51.4.3


Between January and June 2025, we took action to remove the following 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the Russia-Ukraine war while also misleading individuals, our community, or our systems:

Network Origin: Moldova
Description: We assess that this network operated from Moldova and targeted a Russian-speaking audience in Moldova and Ukraine. The individuals behind this network created inauthentic accounts in order to discredit the government of Moldova within the context of the 2024 Moldovan presidential elections. The network was also found, to a lesser extent, to post content undermining the mobilization processes in Ukraine.
Accounts Removed: 38
Followers: 41,305

Network Origin: Poland
Description: We assess that this network operated from Poland and targeted a Polish audience. The individuals behind this network created inauthentic accounts in order to make coordinated and directed posts supporting a Polish politician. The network was found to strategically synchronise activity/content across multiple platforms through hashtags and the timing of posts.
Accounts Removed: 12
Followers: 10,252

Network Origin: Ukraine
Description: We assess that this network operated from Ukraine and targeted audiences in Russia, Georgia, Croatia, and Belarus. The individuals behind this network created inauthentic accounts to undermine political candidates favoring Russian-aligned agendas, amplify anti-government protests, and incite ethnic hatred. We assess that the network used off-platform generative artificial intelligence tools in order to create fictitious user avatars.
Accounts Removed: 28,713
Followers: 300,456

Network Origin: Ukraine
Description: We assess that this network operated from Ukraine and targeted a Russian audience. The individuals behind this network created inauthentic accounts in order to demoralize the Russian side in the context of the Kursk and Belgorod offensives during the ongoing Russia-Ukraine war. The network was observed to create fictitious personas in order to amplify the reach of its content.
Accounts Removed: 32
Followers: 13,940

Network Origin: Ukraine
Description: We assess that this network operated from Ukraine and targeted audiences in Germany and Ukraine. The individuals behind this network created inauthentic accounts in order to promote anti-Russian viewpoints, within the context of the war between Russia and Ukraine. The network started by targeting a domestic Ukrainian audience but then changed the language used in its videos in order to target a German audience.
Accounts Removed: 20
Followers: 200,048

Network Origin: Russia
Description: We assess that this network operated from Russia and targeted Moldovan audiences. The individuals behind the network created inauthentic accounts to deliver content that criticized incumbent Moldovan officials and promoted political figures sympathetic to Russian foreign policy on Moldova. The network was found to be using location obfuscation services in order to hide their true location.
Accounts Removed: 314
Followers: 108,823

Network Origin: Russia
Description: We assess that this network operated from Russia and targeted a European audience. The individuals behind this network created inauthentic accounts posing as journalists from established European news agencies in order to amplify narratives undermining Moldova’s government and Moldova's European Union candidate status. The network was found to be using location obfuscation services in order to hide its true operating location.
Accounts Removed: 116
Followers: 4,372

We published this information within our most recently published transparency report here.

Israel-Hamas:
Between January and June 2025, we took action to remove the following network (consisting of 12 accounts in total) that was found to be related to the Conflict:

Network Origin: US
Description: We assess that this network operated from the US and targeted a domestic US audience. The individuals behind the network created inauthentic accounts in order to artificially amplify narratives critical of Israel and US support of Israel. The network inflated its reach by frequently reposting, liking, and commenting on content published by other network accounts.
Accounts in network: 12
Followers of network: 26,647

We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report, here.

Specific Action applied - 51.4.4

Tackling synthetic and manipulated media

(Commitments 14 and 15, Measures 14.1, 15.1 and 15.2). 

Description of intervention - 51.4.5

Tackling synthetic and manipulated media

Russia-Ukraine:
Artificial intelligence (AI) enables incredible creative opportunities, but can potentially confuse or mislead users if they’re not aware content was generated or edited with AI.

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy became effective in May 2024. In this policy we prohibit AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. TikTok has also started to automatically label AIGC when it's uploaded from certain other platforms.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, show highly realistic-appearing scenes, or use a particular artistic style, such as painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify.
 
In accordance with our policy, we prohibit AIGC that features:
  • The likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualization, bullying, or privacy concerns, including those related to personally identifiable information or likeness to private individuals
  • The likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Prohibited practices are set out in our Integrity & Authenticity policies here.

Israel-Hamas:
Our Edited Media and AI-Generated Content (AIGC) policy makes it clear that we do not want our users to be misled about crisis events. For the purposes of our policy, AIGC refers to content created or modified by AI technology or machine-learning processes. It includes images of real people and may show highly realistic-appearing scenes.

We do not allow misleading AIGC or edited media that falsely shows:
  • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation,
  • A crisis event, such as a conflict or natural disaster,
  • A public figure who is:
    • being degraded or harassed, or engaging in criminal or anti-social behavior
    • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
    • spreading misinformation about matters of public importance

In addition, all AIGC or edited media, including that which depicts public figures, such as politicians, must be clearly labelled as AI-generated and cannot be used for endorsements.

We have an AI-generated content label that allows users to easily inform their community when they post AIGC. The label can be applied to any content that has been completely generated or significantly edited by AI, which makes it easier to comply with the obligation to disclose AIGC that shows realistic scenes. Creators can disclose through this label or through other types of disclosure, such as a sticker, watermark, or caption.

TikTok is also proud to be a part of the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA), and was the first video-sharing platform to put Content Credentials into practice. TikTok has the ability to read Content Credentials that attach metadata to content, which we can use to instantly recognise and label AIGC. This helps our auto-labelling functionality for AIGC created on some other platforms.

Indication of impact - 51.4.6

Russia-Ukraine:
Our efforts support transparent and responsible content creation practices, both in the context of the War in Ukraine and more broadly on our platform. 

Israel-Hamas:
Our efforts support transparent and responsible content creation practices, which are relevant both in the context of the Conflict and more broadly on our platform. 

Specific Action applied - 51.4.7

Removing harmful misinformation from our platform 

(Commitment 14, Measure 14.1)

Description of intervention - 51.4.8

Russia-Ukraine:
We take action to remove accounts or content that contain inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. In conflict environments, such information may include content that is repurposed from past conflicts, content that makes false and harmful claims about specific events, or incites panic. In certain circumstances, we may also reduce the prominence of such content where it does not warrant removal.

We employ a layered approach to misinformation detection, leveraging multiple overlapping strategies to ensure comprehensive and responsive coverage. We place significant emphasis on proactive content moderation and are proud that we remove the vast majority of violative videos before they are reported to us by users or third parties.

Israel-Hamas:
We employ a dynamic approach to misinformation detection, leveraging multiple overlapping strategies to ensure comprehensive and responsive coverage. We place considerable emphasis on proactive content moderation strategies in order to remove harmful misinformation that violates our policies before it is reported to us by users or third parties.

We take action to remove accounts or content that contain inaccurate, misleading, or false information that may cause significant harm to individuals or society, regardless of intent. In conflict environments, such information may include content that is repurposed from past conflicts, content that makes false and harmful claims about specific events, or incites panic. In certain circumstances, we may reduce the prominence of such content.


Indication of impact - 51.4.9

Russia-Ukraine:
In the context of the crisis, we are proud to have proactively removed thousands of videos containing harmful misinformation related to the War in Ukraine. We have been able to do this through a combination of automated review, human content moderation, and targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists), as well as by working closely with our fact-checking partners and responding to emerging trends they identify.
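
For illustration only, the sketch below shows one simple way the targeted keyword/hashtag sweeps mentioned above could queue content for review. The term list and video fields are hypothetical; this is not a description of TikTok's production systems.

```python
# Illustrative sketch (assumed structures, not TikTok's systems): a targeted
# "sweep" that pulls recently posted videos matching a sensitive keyword or
# hashtag list into a human review queue, complementing automated detection.

SENSITIVE_TERMS = {"#fakeoffensive", "troop movements leaked"}  # hypothetical

def sweep(videos: list[dict]) -> list[dict]:
    """Queue videos whose caption or hashtags hit the sensitive-term list."""
    queue = []
    for video in videos:
        haystack = {video.get("caption", "").lower(), *video.get("hashtags", [])}
        if any(term in item for term in SENSITIVE_TERMS for item in haystack):
            queue.append(video)
    return queue

recent = [
    {"id": 1, "caption": "Troop movements leaked near the border", "hashtags": []},
    {"id": 2, "caption": "cooking tutorial", "hashtags": ["#dinner"]},
]
print([v["id"] for v in sweep(recent)])  # -> [1]
```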

Relevant metrics:

  • Number of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 3,405
  • Number of videos not recommended because of violation of misinformation policy with a proxy (only focusing on RU/UA) - 5,299
  • Number of proactive removals of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 3,110

Israel-Hamas:
In the context of the crisis, we proactively removed 7,177 videos in H1 2025 containing harmful misinformation related to the Conflict. We have been able to do this through a combination of automation and human moderation. We carry out targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists), as well as working closely with our fact-checking partners and responding to emerging trends they identify.

We have Arabic- and Hebrew-speaking content moderators, as we recognise the importance of language and cultural context in the misinformation moderation process.

Relevant metrics: 
  • Number of videos removed because of violation of misinformation policy with a proxy (IL/Hamas) - 7,589
  • Number of videos not recommended because of violation of misinformation policy with a proxy (IL/Hamas) - 14,103
  • Number of proactive removals of videos removed because of violation of misinformation policy with a proxy (IL/Hamas): 7,177



Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.5.1

Not proactively promoting news-type content to our users 
(Commitment 18, Measure 18.1)

Description of intervention - 51.5.2

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

Indication of impact - 51.5.3

N/A

Specific Action applied - 51.5.4

Applying our state-affiliated media label

(Commitment 17, Measure 17.1)

Description of intervention - 51.5.5

Russia-Ukraine
We have restricted access to certain state-affiliated media entities and strengthened our state-affiliated media policy in order to provide context to users to evaluate content shared by such Russian, Ukrainian, and Belarusian entities.

In the EU, Iceland, and Liechtenstein, we have taken steps to restrict access to content from media outlets and accounts subject to sanctions.

We continue to strive to update our state-affiliated media policy in order to strengthen our approach to countering influence attempts. Recent updates included:

  • Prohibiting state-affiliated media accounts attempting to engage in foreign influence campaigns from advertising outside of the country with which they are primarily affiliated, including in the EU; and
  • Investing in our detection capabilities for state-affiliated media accounts.

We have also worked with third-party external experts to shape our state-affiliated media policy and assessment of state-controlled media labels.

Where our state-affiliated media label is applied to content posted by the accounts of such entities in Russia, Ukraine, and Belarus, users across the EEA are automatically shown a full-screen pop-up containing information about what the label means and inviting the user to click on “learn more” to be redirected to an in-app page, which explains why the content has been labelled as state-controlled media.

In addition to the above, we continue to invest in automation and scaled detection of state-affiliated media accounts. We also continue to work with third-party experts who help shape our state-affiliated media policies and who help inform our assessments of accounts that have been labelled as state-controlled. We continue to improve our existing processes for applying our state-affiliated media label, such as looking to automate where possible, and aiming to streamline all communications to ensure maximum efficiency. We also continue our efforts in developing an additional layer of intervention for state-affiliated accounts that engage in harmful behaviours.
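
For illustration, the sketch below models the gating logic implied by the full-screen pop-up described above: it is shown to EEA viewers of labelled accounts from the three countries concerned. The field names and country lists are assumptions for this example, not TikTok's implementation.

```python
# Illustrative sketch (assumed account/user fields, not TikTok's code):
# deciding when the full-screen state-affiliated media pop-up is shown.

EEA = {"AT", "BE", "DE", "FR", "IS", "LI", "NO", "..."}  # abbreviated list

def show_state_media_popup(account: dict, viewer_country: str) -> bool:
    """Show the explainer pop-up to EEA viewers of labelled RU/UA/BY accounts."""
    return (
        account.get("state_affiliated_label") is True
        and account.get("country") in {"RU", "UA", "BY"}
        and viewer_country in EEA
    )

print(show_state_media_popup({"state_affiliated_label": True, "country": "RU"}, "DE"))
```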

Indication of impact - 51.5.6

Russia-Ukraine:
We continue the detection and labelling of state-controlled media accounts in accordance with our state-controlled media label policy globally.

Relevant metrics:
  • Number of videos tagged with the state affiliated media label for Russia, Belarus, and Ukraine - 13,847
  • Number of impressions of the state-affiliated media label for Russia, Belarus, and Ukraine - 100,813,065

Specific Action applied - 51.5.7

Creating localised media literacy campaigns

(Commitment 17, Measures 17.2 and 17.3)

Description of intervention - 51.5.8

Russia-Ukraine
We recognise the importance of proactive measures that are aimed at improving our users' digital literacy and increasing the prominence of authoritative information.

We have localised media literacy campaigns related to the crisis to raise awareness amongst our users, and we promote them through a combination of our in-app intervention tools to ensure that authoritative information is surfaced to our users. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.

Users searching for keywords related to the War in Ukraine are directed to tips, prepared in partnership with our fact checking partners. These tips help users identify misinformation and prevent its spread on the platform.

Indication of impact - 51.5.9

Russia-Ukraine
Working with our fact-checking partners, we have 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bosnia, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine. 

Relevant metrics for the media literacy campaigns (EEA total numbers):

  • Total Number of impressions of the search intervention - 30,442,000
  • Total Number of clicks on the search intervention - 155,726
  • Click through rate of the search intervention - 0.51%
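For reference, the click-through rate reported above follows directly from the first two figures:

\[
\text{CTR} = \frac{155{,}726}{30{,}442{,}000} \approx 0.0051 \approx 0.51\%
\]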

Specific Action applied - 51.5.10

Deploying search interventions to raise awareness of potential misinformation

(Commitment 21, Measure 21.1) 

Description of intervention - 51.5.11

Israel-Hamas:
To minimise the discoverability of misinformation and help to protect our users, we have launched search interventions which are triggered when users search for neutral terms related to the Conflict (e.g., Israel, Palestine). We continuously evaluate the effectiveness of our keywords, adding or removing terms based on their relevance.
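Purely for illustration, the trigger logic can be thought of as a keyword match over the search query. The terms and matching rule in this sketch are assumptions, not TikTok's actual system:

```python
# Hypothetical sketch of a keyword-triggered search intervention.
# Trigger terms and the matching rule are illustrative assumptions only.
INTERVENTION_TERMS = {"israel", "palestine"}  # continuously re-evaluated

def should_show_intervention(search_query: str) -> bool:
    """Return True when the query contains any trigger term."""
    tokens = set(search_query.lower().split())
    return not tokens.isdisjoint(INTERVENTION_TERMS)

# Terms can be added or removed as their relevance changes.
INTERVENTION_TERMS.add("gaza")
```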

Indication of impact - 51.5.12

Israel-Hamas:
These search interventions remind users to pause and check their sources and also direct them to well-being resources. 

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.6.1


Measures taken to support research into crisis-related misinformation and disinformation

(Commitment 26, Measure 26.1 and 26.2)

Description of intervention - 51.6.2

Through our Research API, academic researchers from non-profit universities in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, and the number of comments, shares, likes, and favourites that a video receives on our platform. More information is available here.
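For illustration, a researcher granted access might query the Research API along the following lines. This is a minimal sketch: the endpoint paths, field names, and query structure are assumptions based on TikTok's public Research API documentation and may differ from the current specification.

```python
# Illustrative sketch only: endpoint paths, field names, and query
# operators are assumptions based on TikTok's public Research API
# documentation and may not match the current specification.
import requests

TOKEN_URL = "https://open.tiktokapis.com/v2/oauth/token/"
QUERY_URL = "https://open.tiktokapis.com/v2/research/video/query/"

def get_access_token(client_key: str, client_secret: str) -> str:
    """Exchange approved researcher credentials for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "client_key": client_key,
            "client_secret": client_secret,
            "grant_type": "client_credentials",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def query_videos(token: str, keyword: str, start: str, end: str) -> dict:
    """Query public video metadata matching a keyword in a date window."""
    resp = requests.post(
        QUERY_URL,
        params={"fields": "id,video_description,create_time,"
                          "like_count,comment_count,share_count"},
        headers={"Authorization": f"Bearer {token}"},
        json={
            "query": {
                "and": [
                    {"operation": "IN",
                     "field_name": "keyword",
                     "field_values": [keyword]},
                ]
            },
            "start_date": start,   # YYYYMMDD
            "end_date": end,       # YYYYMMDD
            "max_count": 100,
        },
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_access_token("YOUR_CLIENT_KEY", "YOUR_CLIENT_SECRET")
    print(query_videos(token, "example topic", "20250101", "20250131"))
```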

Indication of impact - 51.6.3

Russia-Ukraine:
During the period of this COCD report, we approved 2 applications through the Research API, with an express focus on the War in Ukraine.

Israel-Hamas:
Between January and June 2025, 2 Research API applications related to the Conflict were approved.

Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.7.1

Applying our unverified content label and making content ineligible for recommendation

(Commitment 31, Measure 31.2)

Description of intervention - 51.7.2

Russia-Ukraine, Israel-Hamas:
Where our misinformation moderators or fact-checking partners determine that content cannot be verified at the given time (which is common during an unfolding event), we apply our unverified content label to encourage users to consider the reliability and source of the content. Applying the label also makes the content ineligible for recommendation, in order to limit the spread of potentially misleading information.

Indication of impact - 51.7.3

Russia-Ukraine, Israel-Hamas:
Verifying certain information during dynamic, fast-moving events such as a war can be challenging, and our moderators and fact-checkers cannot always conclusively determine whether content is harmful misinformation in violation of our Community Guidelines.

Therefore, in order to minimise risk, where our fact-checkers or our trained moderators do not have enough information to verify content that may potentially be misleading, we apply our unverified content label to inform users that the content has been reviewed but cannot be conclusively validated. The goal is to raise users' awareness of the credibility of the content and to reduce sharing (see screenshots here). Our unverified content label is available to users in 23 official EU languages (plus, for EEA users, Norwegian and Icelandic).

Where the banner is applied, the content also becomes ineligible for recommendation into anyone's For You feed, limiting the spread of potentially misleading information about unfolding events whose details are still developing.
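A minimal sketch of the label-and-demote flow described above; the names, states, and structure are hypothetical, not TikTok's implementation:

```python
# Hypothetical sketch of the unverified-content flow described above;
# names, states, and structure are illustrative, not TikTok's system.
from dataclasses import dataclass, field
from enum import Enum

class FactCheckVerdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNVERIFIED = "unverified"  # could not be confirmed at review time

@dataclass
class VideoState:
    video_id: str
    labels: list[str] = field(default_factory=list)
    eligible_for_recommendation: bool = True
    removed: bool = False

def apply_verdict(state: VideoState, verdict: FactCheckVerdict) -> VideoState:
    """Apply the moderation consequences of a fact-check verdict."""
    if verdict is FactCheckVerdict.FALSE:
        state.removed = True  # violating content is removed
    elif verdict is FactCheckVerdict.UNVERIFIED:
        state.labels.append("unverified_content")
        state.eligible_for_recommendation = False  # excluded from For You feeds
    return state
```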

Specific Action applied - 51.7.4

Ensuring fact-checking coverage

(Commitment 30, Measure 30.1) 

Description of intervention - 51.7.5

Russia-Ukraine:
Our fact-checking efforts cover Russian, Ukrainian, Belarusian, and all major European languages (including 18 official European languages, as well as a number of other languages relevant to European users).

Israel-Hamas:
As part of our fact-checking programme, TikTok works with more than 20 IFCN-accredited fact-checking organisations that support more than 60 languages, including Hebrew and Arabic, to help assess the accuracy of content in this rapidly changing environment. In the context of the Conflict, our independent fact-checking partners follow our standard practice: they do not moderate content directly on TikTok, but assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. Fact-checker input is then incorporated into our broader content moderation efforts in a number of different ways, as further outlined in the ‘Indication of impact’ section below.

In the context of the Conflict, we have also adjusted our information consolidation process so that Conflict-related claims are tracked and stored separately from our global repository of previously fact-checked claims. This facilitates quick and effective access to relevant assessments, which in turn increases the effectiveness of our moderation efforts. We also continue to improve our hate speech detection with an improved audio hash bank, which helps detect hateful sounds, as well as updated machine learning models that recognise emerging hateful content.
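As a sketch of the general hash-bank technique referred to here, fingerprints of known hateful sounds are stored and new audio is matched against them within a small tolerance. The fingerprinting scheme and threshold below are assumptions, not TikTok's actual system:

```python
# Illustrative sketch of an audio hash bank; the 64-bit fingerprint
# scheme and distance threshold are assumptions, not TikTok's system.
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

class AudioHashBank:
    """Bank of perceptual fingerprints of known hateful sounds."""

    def __init__(self, known_fingerprints: set[int]):
        self.known_fingerprints = known_fingerprints

    def matches(self, fingerprint: int, max_distance: int = 4) -> bool:
        # A small Hamming tolerance treats re-encoded or slightly
        # altered audio as the same underlying sound.
        return any(hamming_distance(fingerprint, known) <= max_distance
                   for known in self.known_fingerprints)
```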

Indication of impact - 51.7.6

Russia-Ukraine:
Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the crisis, our fact-checking programme covers Russian, Ukrainian and Belarusian. 

More generally, we work with 12 fact-checking partners in Europe, covering 25 languages (22 official EU languages plus Russian, Ukrainian, and Turkish). One of our fact-checking partners, Reuters, is dedicated to helping us accurately fact-check content in Russian and Ukrainian. To further support our fact-checking efforts in Ukraine specifically, we have also been leveraging additional Ukrainian-speaking reporters connected with some of our existing fact-checking partners.

Relevant metrics:
  • Number of fact-checked videos with a proxy related to the War in Ukraine - 881
  • Number of videos removed as a result of a fact-checking assessment with words related to the War in Ukraine - 144
  • Number of videos not recommended in the For You feed as a result of a fact-checking assessment with words related to the War in Ukraine - 323

Israel-Hamas:
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the crisis, our fact-checking programme covers Arabic and Hebrew. 

As noted above, we also incorporate fact-checker input into our broader content moderation efforts in different ways: 

  • Proactive insight reports from our fact-checking partners that flag new and evolving claims they are seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
  • Collaborating with our fact-checking partners to receive advance warning of emerging misinformation narratives has facilitated proactive responses against high-harm trends and has helped to ensure that our moderation teams have up-to-date guidance.
  • A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions. 
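On the last point above, a minimal sketch of how such a repository lookup could support swift decisions; the normalisation and fuzzy-matching scheme are hypothetical, not TikTok's implementation:

```python
# Hypothetical sketch of a fact-checked-claims repository lookup; the
# normalisation and fuzzy matching are illustrative assumptions only.
from difflib import get_close_matches

class ClaimRepository:
    """Maps previously fact-checked claims to their verdicts."""

    def __init__(self) -> None:
        self._claims: dict[str, str] = {}  # normalised claim -> verdict

    @staticmethod
    def _normalise(text: str) -> str:
        return " ".join(text.lower().split())

    def add(self, claim: str, verdict: str) -> None:
        self._claims[self._normalise(claim)] = verdict

    def lookup(self, claim: str, cutoff: float = 0.85) -> str | None:
        """Return the verdict of the closest previously checked claim, if any."""
        matches = get_close_matches(self._normalise(claim),
                                    list(self._claims), n=1, cutoff=cutoff)
        return self._claims[matches[0]] if matches else None

# Usage: a moderator query against earlier assessments.
repo = ClaimRepository()
repo.add("Video shows event X happening in city Y", "false")
print(repo.lookup("video shows event X happening in city Y!"))  # -> "false"
```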

Relevant metrics:
  • Number of fact-checked tasks related to the Israel-Hamas Conflict - 1,913
  • Number of videos removed as a result of a fact-checking assessment with words related to the Israel-Hamas Conflict - 242
  • Number of videos demoted (made ineligible for recommendation) as a result of a fact-checking assessment with words related to the Israel-Hamas Conflict - 323

Specific Action applied - 51.7.7

Collaborating with our fact-checking partners in relation to emerging trends

(Commitment 31, Measure 31.1)

Description of intervention - 51.7.8

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

Indication of impact - 51.7.9

N/A