
Report September 2025
TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to improve and enforce our five granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2024 report, the policies cover:
- Medical Misinformation
- Dangerous Misinformation
- Synthetic and Manipulated Media
- Dangerous Conspiracy Theories
- Climate Misinformation
- We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
- first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
- second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
- third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
SLI 1.1.2
Country | Euro value of ads demonetised |
---|---|
Austria | 0 |
Belgium | 0 |
Bulgaria | 0 |
Croatia | 0 |
Cyprus | 0 |
Czech Republic | 0 |
Denmark | 0 |
Estonia | 0 |
Finland | 0 |
France | 0 |
Germany | 0 |
Greece | 0 |
Hungary | 0 |
Ireland | 0 |
Italy | 0 |
Latvia | 0 |
Lithuania | 0 |
Luxembourg | 0 |
Malta | 0 |
Netherlands | 0 |
Poland | 0 |
Portugal | 0 |
Romania | 0 |
Slovakia | 0 |
Slovenia | 0 |
Spain | 0 |
Sweden | 0 |
Iceland | 0 |
Liechtenstein | 0 |
Norway | 0 |
Measure 1.2
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.
QRE 1.2.1
Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
- TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 jurisdictions in the EEA, is embedded directly in TikTok Ads Manager (the main system through which advertisers purchase ads), and we have expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by Industry Standards and policies, which include topics that may be susceptible to disinformation. Additionally, we enable advertisers to:
- Selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
- Exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
- TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks, across all risk levels (available in France and Germany). Some misinformation content may be captured and filtered out by these industry-standard categories, such as “Sensitive Social Issues”.
- Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.
- IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post campaign is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the Industry Standards.
- DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand its suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 EU countries.
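To make the interplay of these controls concrete, the following is a minimal, purely illustrative sketch of how tier-based inventory filtering and exclusion lists could combine on the buy side. All identifiers, tier names, and risk labels here are hypothetical assumptions for illustration; this is not TikTok Ads Manager's actual API.

```python
# Illustrative sketch only: a simplified model of advertiser-side brand
# safety controls (inventory tiers plus exclusion lists). All names are
# hypothetical and do not reflect TikTok Ads Manager's actual API.
from dataclasses import dataclass, field

@dataclass
class BrandSafetyConfig:
    inventory_tier: str = "standard"                            # hypothetical tier label
    video_exclusions: set[str] = field(default_factory=set)    # excluded video IDs
    profile_exclusions: set[str] = field(default_factory=set)  # excluded profile IDs

def eligible_for_adjacency(config: BrandSafetyConfig,
                           video_id: str,
                           profile_id: str,
                           content_risk: str) -> bool:
    """Return True if an ad may serve next to this video/profile."""
    # Exclusion lists take precedence over tier-based filtering.
    if video_id in config.video_exclusions or profile_id in config.profile_exclusions:
        return False
    # Map each tier to the content risk levels it tolerates (hypothetical).
    allowed = {
        "expanded": {"low", "medium", "high"},
        "standard": {"low", "medium"},
        "limited":  {"low"},
    }[config.inventory_tier]
    return content_risk in allowed

config = BrandSafetyConfig(inventory_tier="limited", video_exclusions={"v123"})
print(eligible_for_adjacency(config, "v123", "p9", "low"))     # False: on exclusion list
print(eligible_for_adjacency(config, "v456", "p9", "medium"))  # False: tier too strict
```

The design point the sketch captures is that exclusion lists are absolute (they override everything), while the inventory tier applies a graduated adjacency threshold.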
Measure 1.4
Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.
QRE 1.4.1
Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities, inclusive of media platforms, ad networks, and ad verification companies, will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- first, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA; and
- second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information, e.g. on the sources of Disinformation campaigns, to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar as they do not release commercially sensitive information or divulge trade secrets, and will establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to enforce and improve our five granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2024 report, the policies cover:
- Medical Misinformation
- Dangerous Misinformation
- Synthetic and Manipulated Media
- Dangerous Conspiracy Theories
- Climate Misinformation
- Enabled advertisers to selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
- Enabled advertisers to exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic, and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, and Dangerous Conspiracy Theories), which advertisers also need to comply with. In December 2024, we launched a fifth granular policy covering Climate Misinformation.
- We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Country | Number of ad removals under the political content ad policy | Number of ad removals under the five granular misinformation ad policies |
---|---|---|
Austria | 1,634 | 11 |
Belgium | 2,447 | 4 |
Bulgaria | 880 | 9 |
Croatia | 705 | 0 |
Cyprus | 585 | 0 |
Czech Republic | 859 | 0 |
Denmark | 796 | 2 |
Estonia | 307 | 0 |
Finland | 1,033 | 2 |
France | 16,026 | 46 |
Germany | 18,041 | 72 |
Greece | 2,420 | 20 |
Hungary | 1,647 | 111 |
Ireland | 1,263 | 8 |
Italy | 8,150 | 27 |
Latvia | 795 | 2 |
Lithuania | 521 | 4 |
Luxembourg | 250 | 1 |
Malta | 0 | 0 |
Netherlands | 3,028 | 30 |
Poland | 5,699 | 19 |
Portugal | 1,430 | 1 |
Romania | 13,989 | 23 |
Slovakia | 500 | 2 |
Slovenia | 230 | 2 |
Spain | 6,526 | 54 |
Sweden | 1,659 | 8 |
Iceland | 3 | 0 |
Liechtenstein | 0 | 0 |
Norway | 1,071 | 3 |
EU Level | 91,420 | 458 |
EEA Level | 92,494 | 461 |
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Synthetic and Manipulated Media
- Climate Misinformation
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Synthetic and Manipulated Media
- Climate Misinformation
After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary.
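As a simplified illustration of these two review touchpoints (pre-serve review against policy, then report-triggered re-review once an ad is live), here is a hedged sketch. The policy labels, data structures, and function names are hypothetical stand-ins, not our production system.

```python
# Hypothetical sketch of the two ad review touchpoints described above:
# ads are reviewed against policy before serving, and user reports send
# a live ad back through review. Policy labels here are stand-ins.
VIOLATION_POLICIES = ("medical_misinfo", "dangerous_misinfo",
                      "synthetic_media", "conspiracy", "climate_misinfo")

def pre_serve_review(ad: dict) -> bool:
    """Return True if the ad may go live (no policy hits)."""
    return not any(p in ad.get("policy_hits", ()) for p in VIOLATION_POLICIES)

def handle_user_report(ad: dict) -> str:
    # A report triggers a re-review; action is taken only if the
    # re-review confirms a violation.
    if not pre_serve_review(ad):
        return "removed"
    return "no_violation_found"

ad = {"id": "ad_1", "policy_hits": ["climate_misinfo"]}
print(pre_serve_review(ad))    # False: blocked before serving
print(handle_user_report(ad))  # removed
```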
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1 Measure 3.2 Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chair. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation, and we collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Political Advertising
Commitment 4
Relevant Signatories commit to adopt a common definition of "political and issue advertising".
We signed up to the following measures of this commitment
Measure 4.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 13
Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.
We signed up to the following measures of this commitment
Measure 13.1 Measure 13.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 13.1
Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification;
- Hack-and-leak operations;
- Impersonation;
- Malicious deep fakes;
- The purchase of fake engagements;
- Non-transparent paid messages or promotion by influencers;
- The creation and use of accounts that participate in coordinated inauthentic behaviour;
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Building on our AI-generated content label for creators, and implementation of C2PA Content Credentials, we completed our AIGC media literacy campaign series in Mexico and the UK. These campaigns in Brazil, Germany, France, Mexico and the UK, which ran across H2 2024 and H1 2025, were developed with guidance from expert organisations like MediaWise and WITNESS to teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
- Continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- Continued to participate in the working groups on the integrity of services and Generative AI.
- We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
- We continue to update and refine our policies around Covert Influence Operations in order to stay agile to changing behaviours and tactics on the platform and to ensure more granular detail is enshrined in our policy rationales.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to the ways these assets are made to seem credible, include:
- Operating large networks of accounts controlled by a single entity, or through automation;
- Bulk distribution of a high volume of spam;
- Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes;
- Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
- Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.
Use of fake / inauthentic reactions (e.g. likes, upvotes, comments) and use of fake followers or subscribers. Our policies do not allow content that attempts to:
- facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
- provide instructions on how to artificially increase engagement on TikTok.
We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.
When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop product and policy solutions as appropriate in the long term. We have published details of all the CIO networks we identified and removed in H1 2025 in a dedicated monthly report within our Transparency Centre here.
The following policies also address hack-and-leak operations and related threats:
- Our hack-and-leak policy aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
- Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation.
- Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure.
- Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation.
- Our Trade of Regulated Goods and Services policy prohibits the trading of hacked goods.
Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)
Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.
For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or altering their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
In accordance with our policy, we prohibit AIGC that features:
- The likeness of young people or realistic-appearing people under the age of 18.
- The likeness of adult private figures, if we become aware that it was used without their permission.
- Misleading AIGC or edited media that falsely show:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.
Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose their content by enabling the branded content toggle, which we make available for users. We also provide functionality to enable users to report suspected undisclosed branded content, which reminds the user who posted it of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 Community Guidelines refresh, by expanding the information about how we enforce this policy and providing specific examples.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
We deploy proactive detection measures to:
- prevent inauthentic accounts from being created, based on malicious patterns; and
- remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).
We have also set up specially trained teams focused on investigating and detecting CIO on our platform. We have built international Trust & Safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources, and collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. Specifically, we look for signs that:
- They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
- They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
- They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
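To illustrate the kind of linkage analysis described above (shared technical signals, shared narratives), here is a deliberately simplified, hypothetical sketch that clusters accounts by such signals and surfaces large clusters for human review. The data, signal names, and threshold are assumptions for illustration only; this is not our detection system.

```python
# Minimal, hypothetical sketch: flag clusters of accounts linked by a
# shared device or an identical narrative fingerprint, using union-find.
from collections import defaultdict
from itertools import combinations

accounts = {
    "acct_a": {"device": "dev1", "narrative_hash": "n42"},
    "acct_b": {"device": "dev1", "narrative_hash": "n42"},
    "acct_c": {"device": "dev2", "narrative_hash": "n42"},
    "acct_d": {"device": "dev9", "narrative_hash": "n07"},
}

parent = {a: a for a in accounts}

def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(x: str, y: str) -> None:
    parent[find(x)] = find(y)

# Link any two accounts that share a device or a narrative fingerprint.
for a, b in combinations(accounts, 2):
    if (accounts[a]["device"] == accounts[b]["device"]
            or accounts[a]["narrative_hash"] == accounts[b]["narrative_hash"]):
        union(a, b)

clusters = defaultdict(list)
for a in accounts:
    clusters[find(a)].append(a)

# Clusters at or above a (hypothetical) size threshold go to human analysts.
MIN_CLUSTER = 3
for members in clusters.values():
    if len(members) >= MIN_CLUSTER:
        print("review candidate CIO cluster:", sorted(members))
```

The point of the toy is the shape of the analysis, not the signals themselves: coordination is inferred from linkage between accounts, and automated clustering only queues candidates for human investigation.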
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
Full metrics from this QRE (and QREs 14.2.2 and 14.2.4) can be found in our full report, linked at the top of this page.
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Building on our AI-generated content label for creators, and implementation of C2PA Content Credentials, we completed our AIGC media literacy campaign series in Mexico and the UK. These campaigns in Brazil, Germany, France, Mexico and the UK, which ran across H2 2024 and H1 2025, were developed with guidance from expert organisations like MediaWise and WITNESS to teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
- Continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
- AIGC that shows the likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying or privacy concerns, including those related to personally identifiable information or likeness to private individuals.
- AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
- We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
- We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
SLI 16.1.1
Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States were affected (including information about the content being detected and acted upon due to this collaboration).
Measure 16.2
Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.
QRE 16.2.1
As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Rolled out three new ongoing general media literacy and critical thinking skills campaigns in the EU in collaboration with our fact-checking and media literacy partners:
- Germany: Deutsche Presse-Agentur (dpa)
- Romania: Funky Citizens, Digi Media, and Libertatea
- Poland: Demagog, FakeNews.pl, Radio Zet, and Orientuj.sie
- We ran 9 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 7 in the EU
- Croatia (local election): Faktograf
- Croatia (presidential election): Faktograf
- Germany: Deutsche Presse-Agentur (dpa)
- Latvia: Lead Stories
- Poland: Demagog.pl & FakeNews.pl
- Portugal: Poligrafo
- Romania: Funky Citizens
- 2 in wider European/regionally relevant countries
- Albania: Internews Kosova (Kallxo)
- Greenland: Logically Facts
- During the reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo.
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Internews Kosova (Kallxo)
- Poland: Demagog
- Portugal: Poligrafo
- Launched a revamped version of our Holocaust Education Campaign, providing a dedicated hub within the app in partnership with the World Jewish Congress and UNESCO, with new videos from our partners designed to inform our community about the Holocaust. This includes first-hand witness accounts from Holocaust survivors, videos of users visiting Holocaust memorial sites, testimonials from curators sharing stories about Holocaust victims, and more. Our community can access the hub through TikTok searches related to the Holocaust and on relevant videos.
- Launched 2 new temporary search guides to provide users with guidance about interacting with sensitive content, and authoritative information sources, when events are unfolding rapidly.
- Italy & Portugal: Pope Francis, Health Status, 14 Mar 2025 - 12 May 2025
- Ireland & UK: Ballymena Riots, 13 Jun 2025 - 24 June 2025
- Launched a new temporary in-app natural disaster media literacy search guide for Cyclone Garance in Réunion between 4 March and 4 April 2025, and continued our temporary search guide for the Mayotte cyclone until 14 Feb 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third party information about aid and relief support.
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
(I) In-app Election Centres and Search Guides.
- Croatia Presidential Election 2024: From 6 Dec 2024 to 14 Jan 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Croatian presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Faktograf.
- German Federal Election 2025: From 16 Dec 2024 to 3 Mar 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).
- Greenland General Election 2025: From 18 Feb 2025 to 12 Mar 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Greenland general election. The page contained a section about following our Community Guidelines, with a link to our Danish fact-checking partner, Logically Facts, for digital literacy resources.
- Finland Local & Municipal Elections 2025: From 4 Apr 2025 to 14 Apr 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Finnish elections and a link to a government website with election information. The page contained a section about following our Community Guidelines, with a link to the Finnish National Agency for Education (EDUFI) for digital literacy resources.
- Romania Presidential Election 2025: From 11 Apr 2025 to 23 May 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Romanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Funky Citizens and media agencies Digi Media and Libertatea.
- Albania General Election 2025: From 14 Apr 2025 to 12 May 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Albanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Kallxo.
- Croatia Local Elections 2025: From 17 Apr 2025 to 5 Jun 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Croatian local elections. The page contained a section about following our Community Guidelines, with a link to our Croatian fact-checking partner, Faktograf, for digital literacy resources.
- Portugal Legislative Election 2025: From 18 Apr 2025 to 2 Jun 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Portuguese election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Poligrafo.
- Poland Presidential Election 2025: From 18 Apr 2025 to 6 Jun 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Polish election. The centre contained a section about spotting misinformation, which included videos created in partnership with our fact-checking partner Demagog, fact-checker FakeNews.pl, and media partners Radio Zet and Orientuj.sie.
- Latvia Local & Municipal Elections 2025: From 9 May 2025 (ongoing at date of publication), we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Latvian elections. The page contained a section about following our Community Guidelines, with a link to our Latvian fact-checking partner, Lead Stories, for digital literacy resources.
(II) Election Speaker Series. To further promote election integrity and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 7 Election Speaker Series sessions: 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo.
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: dpa
- Greenland: Logically Facts
- Kosovo: Kallxo
- Poland: Demagog.pl
- Portugal: Poligrafo
(III) Ongoing general media literacy campaigns. We run ongoing media literacy and critical thinking campaigns in collaboration with our fact-checking and media literacy partners:
- Germany: Deutsche Presse-Agentur (dpa)
- Romania: Funky Citizens, Digi Media, and Libertatea
- Poland: Demagog.pl, FakeNews.pl, Radio Zet, and Orientuj.sie
(IV) War in Ukraine media literacy campaigns:
- Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
- Partnered with FakeNews.pl: Poland.
- Partnered with Correctiv: Germany, Austria.
(V) Climate change.
- Our climate change search intervention tool is available in 23 official EU languages (plus Norwegian and Icelandic for EEA users). It redirects users searching for climate change-related content to authoritative information and encourages them to report any potential misinformation they see.
- As of August 2024, popular hashtags #ClimateChange, #SustainableLiving, and #ClimateAction have more than 1.2 million associated posts on TikTok, combined.
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
(I) Elections.
- Outside of our fact-checking programme, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 9 temporary media literacy election integrity campaigns, in advance of regional elections, through our in-app Election Centres:
- 7 in the EU
- Croatia (local election): Faktograf
- Croatia (presidential election): Faktograf
- Germany: Deutsche Presse-Agentur (dpa)
- Latvia: Lead Stories
- Poland: Demagog & FakeNews.pl
- Portugal: Poligrafo
- Romania: Funky Citizens
- 2 in wider European/regionally relevant countries
- Albania: Internews Kosova (Kallxo)
- Greenland: Logically Facts
- Election Speaker Series. To further promote election integrity and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations:
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Kallxo
- Poland: Demagog
- Portugal: Poligrafo
(II) War in Ukraine.
We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners: Correctiv in Austria and Germany, FakeNews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia, and Bulgaria.
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models.
- Began testing large language models (LLMs) to further support proactive moderation at scale. Because LLMs can comprehend human language and perform highly specific, complex tasks, we are better able to moderate nuanced areas like misinformation by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners (see the illustrative sketch after this list).
- Invested in training and development for our Trust & Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, and attendance at external events where team members share their expertise and support continued professional learning. For example:
In the lead-up to certain elections, we invite suitably qualified external local and regional experts to share their market expertise with our internal teams as part of our Election Speaker Series. These sessions give us insights into areas that could potentially amount to election manipulation and inform our approach to the upcoming election. During the reporting period, we ran 7 Election Speaker Series sessions: 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo.
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Internews Kosova (Kallxo)
- Poland: Demagog
- Portugal: Poligrafo
- In June 2025, 14 members of our Trust & Safety team (including leaders of our fact-checking programme) attended GlobalFact 12. In addition to a breakout session on Footnotes, TikTok hosted a networking event with more than 80 people from our partner organisations, including staff from fact-checking partners, media literacy organisations, and Safety Advisory Councils.
- TikTok teams and personnel also regularly participate in research-focused events. In H1 2025, we presented at the Political Tech Summit in Berlin (January), hosted Research Tools demos in Warsaw (April), presented at the GNET Annual Conference (May), hosted Research Tools demos in Prague (June), presented at the Warsaw Women in Tech Summit (June), briefed a small group of academic researchers from UCD in Dublin (June), and attended the ICWSM conference in Copenhagen (June).
- Continued to participate in, and co-chair, the working group on Elections.
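To illustrate the LLM-assisted claim-extraction step described in the second measure above, here is a minimal, hypothetical sketch. The prompt wording, the `llm` callable, and the routing rule are illustrative assumptions only; they do not represent TikTok's actual models or moderation pipeline.

```python
import json
from typing import Callable, List

def extract_claims(transcript: str, llm: Callable[[str], str]) -> List[str]:
    """Ask a text-in/text-out model to list the discrete, checkable claims in a transcript."""
    prompt = (
        "List every discrete, verifiable factual claim in the transcript below "
        "as a JSON array of strings. Ignore opinions, jokes, and questions.\n\n"
        f"Transcript:\n{transcript}"
    )
    raw = llm(prompt)  # `llm` is any callable wrapping a model -- an assumption, not TikTok's stack
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return []  # malformed model output: fall back to standard human review
    return [c.strip() for c in parsed if isinstance(c, str) and c.strip()]

def route_video(claims: List[str]) -> str:
    """Toy routing rule: videos with extracted claims go to misinformation moderators."""
    return "misinformation_review" if claims else "standard_review"
```

In a pipeline like the one described, the extracted claims, rather than the full video, would be what a moderator assesses directly or forwards to a fact-checking partner.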
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
Our Terms of Service and Integrity & Authenticity policies under our Community Guidelines are the first line of defence in combating harmful misinformation and (as outlined in more detail in QRE 14.1.1) deceptive behaviours on our platform. These rules make clear to our users what content we remove, or make ineligible for the For You feed, because it poses a risk of harm to our users and our community.
- Misinformation
- Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
- Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
- Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
- Conspiracy theories that name and attack individual people.
- Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.
- Civic and Election Integrity
- Election misinformation, including:
- How, when, and where to vote or register to vote;
- Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
- Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
- Final results or outcome of an election.
- Edited Media and AI-Generated Content (AIGC)
- The likeness of young people or realistic-appearing people under the age of 18.
- The likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour;
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
- being politically endorsed or condemned by an individual or group.
- Fake Engagement
- Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
- Providing instructions on how to artificially increase engagement on TikTok.
We also make clear to our users that the following content is ineligible for the For You feed:
- Misinformation
- Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
- Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
- Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
- Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
- Unverified claims related to an emergency or unfolding event
- Potential high-harm misinformation while it is undergoing a fact-checking review
- Civic and Election Integrity
- Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
- Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill
- Fake Engagement
- Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content
We have policy experts within our Trust and Safety team dedicated to the topic of integrity and authenticity. They keep these policies under continual review and collaborate with external partners and experts to understand whether updates or new policies are required, and to ensure our policies are informed by a diversity of perspectives, expertise, and lived experiences. In particular, our Safety Advisory Council for Europe brings together independent leaders from academia and civil society who represent a diverse array of backgrounds and perspectives and are experts in free expression, misinformation, and other safety topics. They work collaboratively with us to inform and strengthen our policies, product features, and safety processes.
Enforcing our policies. We remove content – including video, audio, livestream, images, comments, links, or other text – that violates our Integrity & Authenticity policies. Individuals are notified of our decisions and can appeal them if they believe no violation has occurred. We also make clear in our Community Guidelines that we will temporarily or permanently ban accounts and/or users that are involved in serious or repeated violations, including violations of our Integrity & Authenticity policies.
We enforce our Community Guidelines policies, including our Integrity & Authenticity policies, through a mix of technology and human moderation. To do this effectively at scale, we continue to invest in our automated review process as well as in people and training. At TikTok we place a considerable emphasis on proactive content moderation. This means our teams work to detect and remove harmful material before it is reported to us.
However, misinformation is different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. So, while we use machine learning models to help detect potential misinformation, our approach today is to have our moderation team assess, confirm, and remove misinformation violations. Our misinformation moderators have enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims, which helps misinformation moderators make swift and accurate decisions, and direct access to our fact-checking partners, who help assess the accuracy of new content.
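As a rough illustration of how a repository of previously fact-checked claims can speed up decisions, the sketch below matches an incoming claim against stored verdicts using simple string similarity, escalating to fact-checking partners when no close match exists. The example claims, the threshold, and the matching method are illustrative assumptions, not TikTok's actual tooling.

```python
from difflib import SequenceMatcher

# Illustrative repository: claim text -> verdict from a prior fact-check.
FACT_CHECKED = {
    "drinking hot water cures the flu": "false",
    "all ballots have already been counted": "false",
}

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; a production system would use semantic matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def triage(claim: str, threshold: float = 0.85) -> str:
    """Reuse a cached verdict when a close match exists; otherwise escalate."""
    best = max(FACT_CHECKED, key=lambda known: similarity(claim, known), default=None)
    if best and similarity(claim, best) >= threshold:
        return f"prior verdict: {FACT_CHECKED[best]}"
    return "route to fact-checking partner"

print(triage("Drinking hot water cures the flu!"))  # close match -> cached verdict
print(triage("a brand-new, never-checked claim"))   # no match -> escalate
```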
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- At TikTok, we strive to bring more transparency to how we protect our platform. We continue to increase the reports we voluntarily publish, the depth of data we disclose, and the frequency with which we publish.
- In H1 2025, we published updates to our transparency reports, including:
- Community Guidelines Enforcement Report (January-March 2025)
- Covert Influence Operations Reports, where we shared information about the influence networks we disrupted from January-June 2025.
- Platform Security Report (January-March 2025)
- We also worked to make it easier for people to independently study our data and platform. For example through:
- our Research Tools, which empower over 900 research teams to independently study our platform.
- adding additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API (a sketch of a researcher-side refresh job follows this list).
- the downloadable data file in the Community Guidelines Enforcement Report offering access to aggregated data, including removal data by policy category, for the 50 markets with the highest volumes of removed content.
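To illustrate the data-refresh obligation that the compliance API mentioned above supports, the sketch below shows what a refresh job can look like on the researcher's side: stored records are periodically re-checked, and anything no longer publicly available is dropped. The `check_still_public` callable, the batch size, and the deletion rule are assumptions for illustration, not the actual Research API contract.

```python
from typing import Callable, Dict, Iterable

def refresh_dataset(
    records: Dict[str, dict],
    check_still_public: Callable[[Iterable[str]], Dict[str, bool]],
    batch_size: int = 100,
) -> Dict[str, dict]:
    """Keep only records whose source videos are still publicly available."""
    ids = list(records)
    kept: Dict[str, dict] = {}
    for i in range(0, len(ids), batch_size):
        batch = ids[i : i + batch_size]
        status = check_still_public(batch)  # hypothetical compliance lookup
        for video_id in batch:
            if status.get(video_id, False):
                kept[video_id] = records[video_id]
    return kept  # records for removed or newly private videos are deleted
```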
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
We make clear to users in our Terms of Service and Community Guidelines (and provide more context in our Help Center article, Transparency Center page, and Safety Center guide) that each account holder’s For You feed is curated by a personalised recommendation system. Safety is built into our recommendations: as well as removing harmful misinformation content that violates our Community Guidelines, we take steps to avoid recommending certain categories of content that may not be appropriate for a broad audience, including general conspiracy theories and unverified information related to an emergency or unfolding event. We may also make some of this content harder to find in search. The main parameters of the recommendation system are:
- User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts that users follow back);
- Content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and
- User information (e.g. device settings, language preferences, location, time zone and day, and device types).
The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
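The worked example above is essentially neighbourhood-style collaborative filtering. Below is a minimal, textbook sketch of that idea, scoring unseen videos by the overlap in liked videos between users; it illustrates the principle only and is not TikTok's recommendation system, which weights many more signals.

```python
from typing import Dict, List, Set

def jaccard(a: Set[int], b: Set[int]) -> float:
    """Overlap between two users' liked-video sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user: str, likes: Dict[str, Set[int]], top_n: int = 2) -> List[int]:
    """Score videos the user hasn't seen by the similarity of the users who liked them."""
    scores: Dict[int, float] = {}
    for other, their_likes in likes.items():
        if other == user:
            continue
        sim = jaccard(likes[user], their_likes)
        for video in their_likes - likes[user]:
            scores[video] = scores.get(video, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Mirrors the example in the text: the first user likes videos 1-3,
# the second likes 1-5, so videos 4 and 5 are predicted for the first.
likes = {"user_a": {1, 2, 3}, "user_b": {1, 2, 3, 4, 5}}
print(recommend("user_a", likes))  # -> [4, 5]
```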
User preferences. Together with the safeguards we build into our platform by design, we empower our users to customise their experience to their preferences and comfort, including through a number of features that help shape the content they see. For example, in the For You feed:
- Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
- Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here).
- Users are able to refresh their For You feed if they no longer feel like recommendations are relevant to them or are too similar. When the For You feed is refreshed, users view a number of new videos which include popular videos (e.g., they have a high view count or a high like rate). Their interaction with these new videos will inform future recommendations.
- Users can also personalise their For You feed through our new Manage Topics feature (June 2025). This allows users to adjust the frequency of content they see related to particular topics. The settings don't eliminate topics entirely but can influence how often they're recommended as people's interests evolve over time. It adds to the many ways people shape their feed every day – including liking or sharing videos, searching for topics, or simply watching videos for longer.
- As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to recommender systems. Users are able to turn off personalisation so that feeds show non-personalised content. For example, the For You feed will instead show popular videos from their region and internationally. See here.
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
The number of users who have filtered hashtags or a keyword to set preferences for For You feed, the number of times users clicked “not interested” in relation to the For You feed, and the number of times users clicked on the For You Feed Refresh are all based on the approximate location of the users that engaged with these tools.
The number of videos tagged with the AIGC label includes both automatically applied and creator-applied labels.
Country | Number of users that filtered hashtags and keywords | Number of users that clicked on "not interested" | Number of For You feed refreshes | Number of videos tagged with the AIGC label |
---|---|---|---|---|
Austria | 71,042 | 952,721 | 60,429 | 216,782 |
Belgium | 109,999 | 1,428,998 | 109,438 | 310,518 |
Bulgaria | 61,967 | 838,603 | 50,767 | 325,280 |
Croatia | 36,204 | 396,069 | 25,456 | 62,728 |
Cyprus | 15,655 | 199,425 | 17,429 | 105,231 |
Czech Republic | 63,390 | 811,437 | 77,494 | 248,842 |
Denmark | 47,704 | 585,499 | 32,565 | 103,602 |
Estonia | 17,219 | 162,805 | 13,428 | 27,463 |
Finland | 64,531 | 641,392 | 59,140 | 151,632 |
France | 621,904 | 8,623,045 | 621,611 | 2,631,307 |
Germany | 714,270 | 8,678,005 | 708,174 | 2,923,297 |
Greece | 95,344 | 1,267,887 | 87,742 | 289,830 |
Hungary | 60,520 | 1,056,004 | 34,031 | 242,598 |
Ireland | 77,782 | 894,686 | 71,318 | 85,518 |
Italy | 407,290 | 6,719,765 | 305,942 | 1,606,752 |
Latvia | 25,337 | 298,797 | 29,270 | 73,324 |
Lithuania | 31,173 | 339,592 | 27,527 | 74,036 |
Luxembourg | 6,249 | 83,357 | 5,752 | 36,563 |
Malta | 6,356 | 79,651 | 7,349 | 21,483 |
Netherlands | 225,595 | 2,327,551 | 188,048 | 440,107 |
Poland | 277,460 | 3,572,508 | 201,086 | 789,871 |
Portugal | 97,779 | 1,208,681 | 73,846 | 354,910 |
Romania | 149,926 | 2,827,115 | 268,322 | 685,318 |
Slovakia | 26,822 | 363,060 | 16,471 | 112,814 |
Slovenia | 13,155 | 174,113 | 17,172 | 26,794 |
Spain | 475,525 | 7,262,327 | 430,715 | 1,837,668 |
Sweden | 112,446 | 1,467,000 | 141,965 | 324,255 |
Iceland | 6,330 | 60,021 | 3,572 | 9,200 |
Liechtenstein | 180 | 3,636 | 295 | 211 |
Norway | 65,219 | 733,515 | 53,304 | 118,623 |
Total EU | 3,912,644 | 53,260,093 | 3,682,487 | 14,108,523 |
Total EEA | 3,984,373 | 54,057,265 | 3,739,658 | 14,236,557 |
Commitment 21
Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.
We signed up to the following measures of this commitment
Measure 21.1 Measure 21.2 Measure 21.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We ran 9 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 7 in the EU
- Croatia (local election): Faktograf
- Croatia (presidential election): Faktograf
- Germany: Deutsche Presse-Agentur (dpa)
- Latvia: Lead Stories
- Poland: Demagog & FakeNews.pl
- Portugal: Poligrafo
- Romania: Funky Citizens
- 2 in wider European/regionally relevant countries
- Albania: Internews Kosova (Kallxo)
- Greenland: Logically Facts
- Continued our temporary in-app natural disaster media literacy search guide for the Mayotte Cyclone until 14 Feb 2025, and launched a new search guide for the Reunion Cyclone Garance between 4 March and 4 April 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third party information about aid and relief support.
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
- We partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
- Building on our AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, in Brazil, Germany, France, Mexico and the UK that teach our community how to spot and label AI-generated content. These campaigns reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 21.1
Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.
QRE 21.1.1
Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.
Our fact-checking partners covering European markets and languages include:
- Agence France-Presse (AFP)
- dpa Deutsche Presse-Agentur
- Demagog
- Facta
- Fact Check Georgia
- Faktograf
- Internews Kosova
- Lead Stories
- Newtral
- Poligrafo
- Reuters
- Teyit
- Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, we have our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims, and they are able to provide content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators take measures to assess and remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.
- Unverified content labelling. As mentioned above, we partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility. In these circumstances, the content creator is also notified that their video was flagged as unsubstantiated content and the video will become ineligible for recommendation in the For You feed.
- In-app tools related to specific topics:
- Election integrity. We have launched campaigns in advance of several major elections that aim to educate the public about the voting process and encourage users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of France's legislative elections in June 2024 included a search intervention and an in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 14 temporary media literacy election integrity campaigns in advance of regional elections.
- Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information.
- Natural disasters: Launched a new temporary in-app natural disaster media literacy search guide for the Reunion Cyclone Garance between 4 March and 4 April 2025 and continued our temporary search guide for the Mayotte Cyclone until 14 Feb 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third party information about aid and relief support.
- User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners.
SLI 21.1.1
Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.
The share of removals under our harmful misinformation policy, the share of proactive removals, the share of removals before any views, and the share of removals within 24 hours are each relative to the total removals under the relevant policy.
Country | % video removals under Misinformation policy | % proactive video removals under Misinformation policy | % video removals before any views under Misinformation policy | % video removals within 24h under Misinformation policy | % video removals under Civic and Election Integrity policy | % proactive video removals under Civic and Election Integrity policy | % video removals before any views under Civic and Election Integrity policy | % video removals within 24h under Civic and Election Integrity policy | % video removals under Synthetic Media policy | % proactive video removals under Synthetic Media policy | % video removals before any views under Synthetic Media policy | % video removals within 24h under Synthetic Media policy | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Austria | 18.55% | 98.56% | 80.73% | 84.72% | 6.53% | 99.06% | 93.18% | 92.13% | 4.01% | 96.75% | 43.59% | 45.30% | 26.41% |
Belgium | 28.82% | 98.73% | 77.11% | 83.00% | 8.79% | 99.57% | 95.91% | 95.66% | 5.17% | 96.82% | 19.68% | 16.64% | 37.10% |
Bulgaria | 45.82% | 97.98% | 55.05% | 84.53% | 5.05% | 99.73% | 97.04% | 97.84% | 3.55% | 99.23% | 22.61% | 24.14% | 35.71% |
Croatia | 25.45% | 97.78% | 72.32% | 86.06% | 4.83% | 96.81% | 87.23% | 91.49% | 5.60% | 93.58% | 22.02% | 40.37% | 26.38% |
Cyprus | 24.09% | 97.28% | 72.11% | 80.27% | 5.30% | 97.94% | 88.66% | 87.63% | 8.47% | 96.77% | 30.32% | 28.39% | 34.36% |
Czech Republic | 38.46% | 98.26% | 55.52% | 94.22% | 6.14% | 99.70% | 97.27% | 97.73% | 3.31% | 94.94% | 37.64% | 57.58% | 28.55% |
Denmark | 16.32% | 98.90% | 74.19% | 86.90% | 6.69% | 99.81% | 98.27% | 98.07% | 4.26% | 96.37% | 30.21% | 50.15% | 31.84% |
Estonia | 2.12% | 98.69% | 63.61% | 80.00% | 0.25% | 100.00% | 97.22% | 97.22% | 0.62% | 98.88% | 59.55% | 68.54% | 30.33% |
Finland | 27.42% | 94.45% | 70.50% | 91.43% | 5.78% | 99.81% | 95.94% | 98.84% | 1.59% | 97.89% | 27.46% | 41.55% | 33.60% |
France | 25.23% | 99.10% | 84.59% | 91.27% | 3.99% | 99.52% | 95.00% | 95.77% | 2.98% | 96.16% | 22.30% | 23.12% | 37.30% |
Germany | 27.90% | 98.10% | 79.18% | 90.84% | 9.17% | 98.25% | 89.13% | 91.11% | 3.18% | 93.75% | 35.63% | 44.09% | 26.78% |
Greece | 23.99% | 98.97% | 73.88% | 89.04% | 8.93% | 99.91% | 96.18% | 98.70% | 5.02% | 96.45% | 17.75% | 27.78% | 30.79% |
Hungary | 2.84% | 97.39% | 74.59% | 92.61% | 2.44% | 99.06% | 89.01% | 99.48% | 0.24% | 90.53% | 32.63% | 31.58% | 32.45% |
Ireland | 27.15% | 97.54% | 73.16% | 80.88% | 6.80% | 99.30% | 77.17% | 98.04% | 2.38% | 98.00% | 30.00% | 30.00% | 29.28% |
Italy | 28.38% | 98.55% | 78.93% | 84.12% | 9.76% | 99.42% | 94.08% | 92.75% | 3.62% | 96.61% | 20.04% | 13.75% | 36.90% |
Latvia | 17.22% | 97.82% | 83.01% | 91.26% | 2.59% | 100.00% | 95.16% | 95.16% | 9.53% | 98.25% | 64.04% | 78.95% | 33.33% |
Lithuania | 20.17% | 99.19% | 80.97% | 87.04% | 2.08% | 98.04% | 96.08% | 96.08% | 6.86% | 97.02% | 59.52% | 66.67% | 29.57% |
Luxembourg | 25.02% | 89.36% | 69.17% | 92.11% | 2.62% | 100.00% | 87.72% | 87.72% | 1.70% | 91.89% | 27.03% | 27.03% | 28.85% |
Malta | 50.81% | 90.08% | 70.63% | 94.44% | 2.82% | 100.00% | 89.29% | 89.29% | 3.13% | 100.00% | 9.68% | 25.81% | 39.10% |
Netherlands | 25.49% | 99.16% | 81.22% | 87.35% | 4.96% | 99.04% | 97.04% | 97.12% | 3.92% | 95.85% | 22.27% | 33.60% | 29.46% |
Poland | 37.85% | 98.77% | 65.49% | 93.13% | 3.86% | 97.34% | 85.94% | 84.66% | 1.46% | 94.35% | 39.69% | 46.26% | 30.81% |
Portugal | 27.48% | 98.57% | 84.09% | 91.37% | 8.42% | 98.13% | 92.28% | 94.85% | 6.52% | 99.40% | 25.68% | 18.28% | 28.31% |
Romania | 44.26% | 96.03% | 67.42% | 86.84% | 12.41% | 94.39% | 80.65% | 71.79% | 4.59% | 95.44% | 32.38% | 25.04% | 35.42% |
Slovakia | 62.64% | 94.27% | 67.80% | 95.33% | 1.01% | 100.00% | 92.45% | 92.45% | 2.14% | 97.35% | 49.56% | 69.03% | 28.82% |
Slovenia | 32.84% | 93.76% | 77.23% | 98.71% | 0.34% | 100.00% | 100.00% | 100.00% | 3.13% | 97.67% | 77.91% | 78.29% | 25.24% |
Spain | 30.23% | 99.46% | 87.87% | 90.63% | 5.09% | 99.31% | 92.75% | 89.88% | 3.73% | 97.03% | 21.44% | 20.00% | 34.09% |
Sweden | 15.83% | 98.65% | 78.99% | 82.53% | 6.94% | 99.67% | 96.92% | 96.84% | 3.17% | 97.63% | 19.13% | 21.86% | 31.25% |
Iceland | 8.01% | 98.63% | 89.04% | 90.41% | 1.10% | 100.00% | 100.00% | 100.00% | 1.54% | 100.00% | 28.57% | 71.43% | 22.73% |
Liechtenstein | 8.00% | 100.00% | 66.67% | 66.67% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 11.76% |
Norway | 16.73% | 98.33% | 69.09% | 87.01% | 4.98% | 98.78% | 93.41% | 94.39% | 3.30% | 95.96% | 24.26% | 43.01% | 32.93% |
Total EU | 27.42% | 98.11% | 76.94% | 89.47% | 6.64% | 98.29% | 90.43% | 90.17% | 3.20% | 95.84% | 28.85% | 31.14% | 30.95% |
Total EEA | 27.28% | 98.11% | 76.90% | 89.45% | 6.62% | 98.30% | 90.46% | 90.21% | 3.20% | 95.84% | 28.80% | 31.30% | 30.95% |
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated 'Report Illegal Content' reporting channel, enabling users to alert us to content they believe breaches the law, together with an appeals process for users who disagree with the outcome.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
Users can report content in-app in two ways:
- By ‘long-pressing’ (i.e., pressing and holding for around 3 seconds) on the video content and selecting the “Report” option.
- By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.
People can report TikTok content or accounts without needing to sign in or have an account by accessing the Report function using the “More options (…)” menu on videos or profiles in their browser, or through our “Report Inappropriate content” webform which is available in our Help Centre. Harmful misinformation can be reported across content features such as video, comment, search, hashtag, sound, or account.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
We have sought to make our Community Guidelines as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as reviewing moderation cases, flows, and appeals, and undertaking Root Cause Analyses).
We also note that whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content. We are proud that the vast majority of removed content is identified proactively before it is reported to us.
Appeals system.
We are transparent with users in relation to appeals. We set out the options that may be available both to the user who reported the content and the creator of the affected content, where they disagree with the decision we have taken.
The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our Community Guidelines.
Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts and assess them against our policies. To ensure consistency within this process and its overall integrity, we have sought to make our policies as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as auditing appeals and undertaking Root Cause Analyses).
If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.
Commitment 24
Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.
We signed up to the following measures of this commitment
Measure 24.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 24.1
Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.
QRE 24.1.1
Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.
We notify users when we take enforcement action against their content or account, including:
- removal of, or other restrictions on access to, their content;
- a ban of the account;
- restriction of their access to a feature (such as LIVE); or
- restriction of their ability to monetise.
Such notifications are provided in near real time after action has been taken (i.e. generally within several seconds or up to a few minutes at most).
All such appeals are queued for review by our specialised human moderators, to ensure that context is adequately taken into account in reaching a determination. Users can monitor the status and view the results of their appeal within their in-app inbox.
SLI 24.1.1
Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.
Country | Number of Appeals of videos removed for violation of misinformation policy | Number of overturns of appeals for violation of misinformation policy | Appeal success rate of videos removed for violation of misinformation policy | Number of Appeals of videos removed for violation of Civic and Election Integrity policy | Number of overturns of appeals for violation of Civic and Election Integrity policy | Appeal success rate of videos removed for violation of Civic and Election Integrity policy | Number of Appeals of videos removed for violation of Synthetic and Manipulated Media | Number of overturns of appeals for violation of Synthetic and Manipulated Media | Appeal success rate of videos removed for violation of Synthetic and Manipulated Media |
---|---|---|---|---|---|---|---|---|---|
Austria | 609 | 422 | 69.30% | 160 | 124 | 77.50% | 27 | 24 | 88.90% |
Belgium | 809 | 674 | 83.30% | 246 | 196 | 79.70% | 55 | 48 | 87.30% |
Bulgaria | 582 | 283 | 48.60% | 58 | 46 | 79.30% | 21 | 21 | 100.00% |
Croatia | 91 | 55 | 60.40% | 14 | 11 | 78.60% | 7 | 2 | 28.60% |
Cyprus | 92 | 59 | 64.10% | 20 | 15 | 75.00% | 17 | 11 | 64.70% |
Czech Republic | 1,453 | 468 | 32.20% | 162 | 137 | 84.60% | 72 | 39 | 54.20% |
Denmark | 311 | 226 | 72.70% | 102 | 84 | 82.40% | 40 | 32 | 80.00% |
Estonia | 84 | 49 | 58.30% | 15 | 10 | 66.70% | 8 | 7 | 87.50% |
Finland | 207 | 139 | 67.10% | 72 | 58 | 80.60% | 27 | 21 | 77.80% |
France | 6,935 | 6,296 | 90.80% | 709 | 639 | 90.10% | 421 | 396 | 94.10% |
Germany | 12,837 | 8,939 | 69.60% | 2,844 | 2,327 | 81.80% | 716 | 542 | 75.70% |
Greece | 705 | 425 | 60.30% | 173 | 139 | 80.30% | 55 | 37 | 67.30% |
Hungary | 228 | 131 | 57.50% | 133 | 102 | 76.70% | 6 | 4 | 66.70% |
Ireland | 948 | 765 | 80.70% | 108 | 97 | 89.80% | 36 | 32 | 88.90% |
Italy | 4,266 | 3,523 | 82.60% | 1,188 | 1,048 | 88.20% | 143 | 132 | 92.30% |
Latvia | 110 | 77 | 70.00% | 20 | 13 | 65.00% | 42 | 19 | 45.20% |
Lithuania | 101 | 84 | 83.20% | 16 | 15 | 93.80% | 22 | 14 | 63.60% |
Luxembourg | 35 | 29 | 82.90% | 9 | 7 | 77.80% | 5 | 3 | 60.00% |
Malta | 28 | 24 | 85.70% | 0 | 0 | 0.00% | 0 | 0 | 0.00% |
Netherlands | 1,732 | 1,441 | 83.20% | 290 | 236 | 81.40% | 92 | 77 | 83.70% |
Poland | 5,004 | 2,065 | 41.30% | 423 | 332 | 78.50% | 126 | 87 | 69.00% |
Portugal | 600 | 393 | 65.50% | 154 | 129 | 83.80% | 18 | 14 | 77.80% |
Romania | 5,175 | 1,539 | 29.70% | 1,066 | 855 | 80.20% | 158 | 78 | 49.40% |
Slovakia | 569 | 140 | 24.60% | 20 | 17 | 85.00% | 27 | 19 | 70.40% |
Slovenia | 96 | 48 | 50.00% | 7 | 6 | 85.70% | 12 | 10 | 83.30% |
Spain | 3,231 | 2,844 | 88.00% | 464 | 416 | 89.70% | 143 | 130 | 90.90% |
Sweden | 658 | 550 | 83.60% | 231 | 176 | 76.20% | 48 | 40 | 83.30% |
Iceland | 13 | 11 | 84.60% | 4 | 4 | 100.00% | 2 | 2 | 100.00% |
Liechtenstein | 2 | 2 | 100.00% | 0 | 0 | 0.00% | 0 | 0 | 0.00% |
Norway | 278 | 228 | 82.00% | 80 | 68 | 85.00% | 32 | 28 | 87.50% |
Total EU | 47,496 | 31,688 | 66.70% | 8,704 | 7,235 | 83.10% | 2,344 | 1,839 | 78.50% |
Total EEA | 47,789 | 31,929 | 66.80% | 8,788 | 7,307 | 83.10% | 2,378 | 1,869 | 78.60% |
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.2 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Supported new independent research through TikTok’s Research Tools (Research API and VCE).
- Further enriched the data available to include more information on stickers and effects (January) and video tags (April), and reached full parity in the data available across the API and VCE (May).
- Added additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.
- Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
- Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, dates the ad was active for, the main parameters used for targeting (e.g. age, gender), the number of people who were served the ad.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
- Code of Practice on Disinformation (CoPD) Transparency Reports: as part of our commitments to the Code, we publish a transparency report every six months providing granular data for EU/EEA countries about our efforts to combat online misinformation.
- Our TikTok Community Guidelines Enforcement Reports, providing proactive quarterly insights into the volume and nature of content and accounts removed from our platform for violating our Community Guidelines, Terms of Service or Advertising Policies since 2019.
- DSA Transparency Reports, building on our proactive approach to transparency in our quarterly TikTok Community Guidelines Enforcement Reports and our obligations under the Digital Services Act (“DSA”), we publish a transparency report every six months to provide granular data for EU countries about our content moderation activities.
- We publish monthly Covert Influence Operations Reports, providing more frequent and granular detail about the covert influence operations we have disrupted.
- In H1 2025, we launched a new Global Elections Integrity Hub, including dedicated coverage of elections across Europe, the Middle East, and Africa. The Hub outlines our policies, product features, and moderation practices that help protect platform integrity during elections. Throughout this reporting period, we regularly updated the Hub with information on our safety efforts in markets with active elections, including Croatia, Kosovo, Germany, Romania, Portugal, and Poland.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
During this reporting period we received:
- 173 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
- 74 applications to access the TikTok Commercial Content API.
Country | Number of applications received for Research Tools (Research API and VCE) | Number of applications accepted for Research Tools | Number of applications rejected for Research Tools | Number of applications received for Commercial Content API | Number of applications accepted for Commercial Content API | Number of applications rejected for Commercial Content API |
---|---|---|---|---|---|---|
Austria | 6 | 6 | 4 | 1 | 1 | 0 |
Belgium | 2 | 1 | 0 | 1 | 1 | 0 |
Bulgaria | 0 | 0 | 0 | 0 | 0 | 0 |
Croatia | 0 | 0 | 1 | 0 | 0 | 0 |
Cyprus | 0 | 0 | 0 | 0 | 0 | 0 |
Czech Republic | 5 | 3 | 1 | 2 | 2 | 0 |
Denmark | 5 | 5 | 2 | 1 | 1 | 0 |
Estonia | 0 | 0 | 0 | 0 | 0 | 0 |
Finland | 2 | 1 | 0 | 0 | 0 | 0 |
France | 16 | 11 | 11 | 24 | 19 | 3 |
Germany | 48 | 50 | 21 | 11 | 10 | 1 |
Greece | 1 | 2 | 0 | 2 | 2 | 0 |
Hungary | 0 | 0 | 0 | 2 | 1 | 0 |
Ireland | 3 | 1 | 3 | 1 | 0 | 1 |
Italy | 21 | 16 | 6 | 1 | 1 | 0 |
Latvia | 0 | 0 | 0 | 3 | 3 | 0 |
Lithuania | 0 | 0 | 0 | 0 | 0 | 0 |
Luxembourg | 0 | 0 | 0 | 0 | 0 | 0 |
Malta | 0 | 0 | 0 | 0 | 0 | 0 |
Netherlands | 13 | 9 | 12 | 3 | 2 | 0 |
Poland | 4 | 3 | 1 | 4 | 4 | 0 |
Portugal | 0 | 0 | 0 | 3 | 3 | 0 |
Romania | 4 | 3 | 3 | 2 | 2 | 0 |
Slovakia | 1 | 0 | 0 | 2 | 1 | 1 |
Slovenia | 2 | 1 | 1 | 1 | 1 | 0 |
Spain | 32 | 12 | 18 | 7 | 5 | 2 |
Sweden | 6 | 5 | 2 | 3 | 3 | 0 |
Iceland | 0 | 0 | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 |
Norway | 2 | 2 | 1 | 0 | 0 | 0 |
Total EU | 171 | 129 | 86 | 74 | 62 | 8 |
Total EEA | 173 | 131 | 87 | 74 | 62 | 8 |
Measure 26.2
Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.
QRE 26.2.1
Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.
(I) Research API
Through our Research API, qualifying researchers can query public data about accounts and content (such as videos and comments) on TikTok.
(II) Virtual Compute Environment (VCE)
- Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE will return random sample data based on your query, limited to 5,000 records per day.
- Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok will review the results file to make sure the output is aggregated.
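To make the two-stage workflow above concrete, here is a schematic sketch of the researcher's side: a capped test query followed by a paginated full run. The `run_query` callable, the cursor handling, and the record shapes are illustrative assumptions; the real query SDK and its interfaces are documented on the TikTok for Developers site.

```python
from typing import Callable, List, Optional, Tuple

TEST_DAILY_CAP = 5_000   # test stage: random sample data, capped per day
PAGE_LIMIT = 100_000     # execution stage: records per page, up to this increment

# Hypothetical runner: (query, cursor, limit) -> (rows, next_cursor or None).
QueryFn = Callable[[str, int, int], Tuple[List[dict], Optional[int]]]

def test_stage(run_query: QueryFn, query: str) -> List[dict]:
    """Dry-run a query against sample data, respecting the daily test cap."""
    rows, _ = run_query(query, 0, TEST_DAILY_CAP)
    return rows

def execution_stage(run_query: QueryFn, query: str) -> List[dict]:
    """Page through all public data in increments of up to PAGE_LIMIT records."""
    results: List[dict] = []
    cursor: Optional[int] = 0
    while cursor is not None:
        rows, cursor = run_query(query, cursor, PAGE_LIMIT)
        results.extend(rows)
    return results  # in the VCE, only aggregated outputs leave the environment
```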
(III) Commercial Content Library
The Commercial Content Library is a publicly searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that's commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad.
QRE 26.2.2
Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.
(II) Virtual Compute Environment (VCE)
Through our VCE, qualifying not-for-profit researchers and academic researchers from non-profit academic institutions in the EU can query and analyse TikTok's public data. To protect the security and privacy of our users, the VCE is designed to ensure that TikTok data is processed within confined parameters. TikTok reviews the results only to ensure that no identifiable individual information is extracted from the platform. All aggregated results are shared as a downloadable link sent to the approved primary researcher's email.
(III) Commercial Content API
Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can request public data about commercial content including ads, ad and advertiser metadata, and targeting information. To date, the Commercial Content API only includes data from EU countries.
(IV) Commercial Content Library
TikTok's Commercial Content Library is a repository of ads and other types of commercial content posted to users in the European Economic Area (EEA), Switzerland, and the UK only, but can be accessed by members of the public located in any country. Each ad and ad details will be available in the library for one year after the advertisement was last viewed by any user. Through the Commercial Content Library, the public can access information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that is commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad.
We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access. Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data. Similarly, for the Commercial Content API, we provide participants with detailed information on how to query ad data and fetch public advertiser data.
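As a schematic of what querying ad data through an API like this can look like, the sketch below posts a filtered search and walks the paginated results. The endpoint URL, request and response field names, and auth header are placeholders invented for illustration, not TikTok's documented interface; the TikTok for Developers site describes the real contract.

```python
import requests

# Placeholder endpoint -- not the real Commercial Content API URL.
API_URL = "https://example.invalid/commercial_content/ads/query"

def query_ads(token: str, country: str, search_term: str, max_count: int = 50) -> list:
    """Schematic paginated ad search; request/response shapes are illustrative only."""
    ads, cursor = [], 0
    while True:
        payload = {
            "filters": {"country": country},
            "search_term": search_term,
            "max_count": max_count,
            "cursor": cursor,
        }
        resp = requests.post(API_URL, json=payload,
                             headers={"Authorization": f"Bearer {token}"}, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        ads.extend(data.get("ads", []))
        cursor = data.get("next_cursor")
        if not cursor:  # no further pages
            return ads
```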
QRE 26.2.3
Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.
Commitment 27
Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.
We signed up to the following measures of this commitment
Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Supported new independent research through TikTok’s Research Tools (Research API and VCE).
- Enriched the data available to include more information on stickers and effects (January) and video tags (April) and reached full parity in data available across the API and VCE (May).
- Added additional functionality to the Research API, including a compliance API (launched in June) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.
- Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
- Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, dates the ad was active for, the main parameters used for targeting (e.g. age, gender), the number of people who were served the ad.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
As set out above, TikTok is committed to facilitating research through our Research Tools, Commercial Content APIs and Commercial Content Library, full details of which are available on our TikTok for Developers and Commercial Content Library websites.
We have many teams and individuals across product, policy, data science, outreach, and legal working to facilitate research. We believe transparency and accountability are essential to fostering trust with our community. We are committed to transparency in how we operate, moderate and recommend content, empower users, and secure our platform. That's why we opened our global Transparency and Accountability Centers (TACs), where invited guests can see first-hand our work to protect the safety and security of the TikTok platform.
Our TACs are located in Dublin, Los Angeles, Singapore, and Washington, DC. They provide an opportunity for invited academics, businesses, policymakers, politicians, regulators, researchers and many other expert audiences from Europe and around the world to see first-hand how teams at TikTok go about the critically important work of securing our community's safety, data, and privacy. During the reporting period, DubTAC hosted 24 external tours, welcoming over 180 visitors. Notable attendees included Ofcom, the European Commission, and representatives from the Irish Parliament and the French, Danish, German, and UAE governments. We also welcomed mental health organisations and brand clients, including Coca-Cola and Zalando. In March, we launched Mobile TAC in Brussels during Global Marketing Week and delivered 5 Mobile TAC tours across the EU.
We work closely with our ten regional Advisory Councils, including our European Safety Advisory Council and US Content Advisory Council, and our global Youth Advisory Council, which bring together a diverse array of independent experts from academia and civil society as well as youth perspectives. Advisory Council members provide subject matter expertise and advice on issues relating to user safety, content policy, and emerging issues that affect TikTok and our community, including in the development of our AI-generated content label and a recent campaign to raise awareness around AI labeling and potentially misleading AIGC. These councils are an important way to bring outside perspectives into our company and onto our platform.
In addition to these efforts, we engage with the research community in many other ways in the course of our work.
Our Outreach & Partnerships Management (OPM) Team is dedicated to establishing partnerships and regularly engaging with civil society stakeholders and external experts, including the academic and research community, to ensure their perspectives inform our policy creation, feature development, risk mitigation, and safety strategies. For example, we engaged with global experts, including numerous academics in Europe, in the development of our state-affiliated media policy, Election Misinformation policies, and AI-generated content labels. OPM also plays an important role in our efforts to counter misinformation by identifying, onboarding and managing new partners to our fact-checking programme. In the lead-up to certain elections, we invite suitably qualified external local and regional experts to share their market expertise with our internal teams as part of our Election Speaker Series. These sessions provide insights that help us better understand areas that could potentially amount to election manipulation, and they inform our approach to the upcoming election.
During this reporting period, we ran 7 Election Speaker Series sessions, 3 in EU Member States and 4 in Albania, Belarus, Greenland, and Kosovo.
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Internews Kosova (Kallxo)
- Poland: Demagog
- Portugal: Poligrafo
At the end of June 2025, we sent a 14-strong delegation to GlobalFact12 in Rio de Janeiro, Brazil. TikTok was a top-tier sponsor of GlobalFact. Sponsorship money supports IFCN's work serving the fact-checking community and, by funding travel scholarships, makes it possible for fact-checking organizations to attend the conference. The annual conference is the most important industry event for TikTok's Global Fact-Checking Program and covers a broad set of topics related to mis- and disinformation, discussed in main-stage sessions and break-out rooms. In addition to a breakout session on Footnotes, TikTok hosted a networking event with more than 80 people from our partner organizations, including staff from fact-checking partners, media literacy organizations, and TikTok's Safety Advisory Councils.
As well as providing opportunities to share context about our approach, research interests, and potential collaborations, these events enable us to learn from the important work being done by the research community on various topics, including aspects related to harmful misinformation.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
Through our Research Tools, European researchers can currently access:
- Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
- Public content data, such as comments, captions, subtitles, and number of comments, shares and likes that a video receives.
- Through the VCE, qualifying not-for-profit researchers in the EU can access and analyse TikTok's public data, including public U18 data, in a secure environment that is subject to strict security controls.
Our commercial content APIs include ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches, by advertiser name or keyword, on ads and other commercial content data stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad ran, the main parameters used for targeting (e.g. age, gender), the number of people who were served the ad, and more.
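As an illustration of the keyword-based searches described above, the following is a minimal Python sketch of a Commercial Content API ad query. The endpoint path, filter names, and field list are assumptions modelled on TikTok's public developer documentation, not details taken from this report.

```python
# Illustrative sketch: a keyword search against the Commercial Content API.
# The endpoint path, filter names, and field list are assumptions modelled
# on TikTok's public developer documentation, not on this report.
import requests

ACCESS_TOKEN = "..."  # placeholder for a client token from TikTok for Developers

def search_ads(search_term: str, country_code: str) -> dict:
    """Search the Commercial Content Library for ads matching a keyword."""
    url = "https://open.tiktokapis.com/v2/research/adlib/ad/query/"
    params = {"fields": "ad.id,ad.first_shown_date,ad.last_shown_date,"
                        "advertiser.business_name"}
    body = {
        "filters": {"country_code": country_code},
        "search_term": search_term,
        "max_count": 20,
    }
    resp = requests.post(url, params=params, json=body,
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example: ads shown in Ireland matching the keyword "election".
# ads = search_ads("election", "IE")
```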
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
Empowering fact-checkers
Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Updated fact-checking agreements to include the requirement that fact-checking partners provide regular proactive Insights Reports about general misinformation trends observed on our platform and across the industry generally, including new or changing industry or market trends, and events or topics that generate particular misinformation or disinformation.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.
Our agreements with fact-checking organisations set out:
- The service the fact-checking partner will provide: their team of fact-checkers review, assess and rate video content uploaded to their fact-checking queue, and provide regular proactive Insights Reports about general misinformation trends observed on our platform and across the industry generally, including new or changing industry or market trends, and events or topics that generate particular misinformation or disinformation.
- The expected results, e.g. the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories.
- An option to receive proactive flagging of potentially harmful misinformation from our partners.
- The languages in which they will provide fact-checking services.
- The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
- All other key terms including the applicable term and fees and payment arrangements.
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
- Agence France-Presse (AFP)
- Deutsche Presse-Agentur (dpa)
- Demagog
- Facta
- Geofacts
- Faktograf
- Internews Kosova (Kallxo)
- Lead Stories
- Newtral
- Poligrafo
- Reuters
- Teyit
During the reporting period, fact-checking partners provided dedicated coverage for the following elections:
- 7 in the EU:
  - Croatia (local election): Faktograf
  - Croatia (presidential election): Faktograf
  - Germany: Deutsche Presse-Agentur (dpa)
  - Latvia: Lead Stories
  - Poland: Demagog & FakeNews.pl
  - Portugal: Poligrafo
  - Romania: Funky Citizens
- 2 in wider European/regionally relevant countries:
  - Albania: Internews Kosova (Kallxo)
  - Greenland: Logically Facts
In connection with these elections, we also worked with the following local partners:
- Germany: Deutsche Presse-Agentur (dpa)
- Romania: Funky Citizens, Digi Media, and Libertatea
- Poland: Demagog, FakeNews.pl, Radio Zet, and Orientuj.sie
QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
- Fact-checking repository. We have built a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions (an illustrative lookup sketch follows this list).
- Insights reports. Our fact-checking partners provide regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.
- Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content that may constitute harmful misinformation on our platform and suggest prominent misinformation that is circulating online that may benefit from verification.
- Election Speaker Series. To further promote election integrity, and inform our approach to country-level EU and regionally relevant elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations:
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Internews Kosova (Kallxo)
- Poland: Demagog
- Portugal: Poligrafo
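As a concrete illustration of the repository idea above, the following hypothetical Python sketch indexes previously fact-checked claims so that new content can be matched against existing verdicts. The schema, the example entry, and the crude lexical-overlap matching rule are illustrative assumptions, not TikTok's actual implementation.

```python
# Hypothetical sketch of a fact-check repository lookup. The schema and the
# lexical-overlap matching rule are illustrative, not TikTok's systems.
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim: str    # the claim as rated by a fact-checking partner
    rating: str   # e.g. "false", "misleading", "unverified"
    partner: str  # organisation that produced the rating

# Illustrative placeholder entry; a real repository would hold many verdicts.
REPOSITORY = [
    FactCheck("drinking X cures illness Y", "false", "ExamplePartner"),
]

def lookup(claim_text: str) -> FactCheck | None:
    """Return a prior verdict whose claim shares enough words with the input."""
    words = set(claim_text.lower().split())
    for entry in REPOSITORY:
        entry_words = set(entry.claim.lower().split())
        overlap = len(words & entry_words) / max(len(entry_words), 1)
        if overlap >= 0.6:  # crude threshold chosen for the sketch
            return entry
    return None

# A paraphrase of the stored claim is matched to the existing "false" rating.
assert lookup("drinking X cures illness Y fast").rating == "false"
```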
Some of the methods and technologies that support these efforts are listed below; a minimal illustrative sketch of the text-based approach follows the list:
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines—like weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our Community Guidelines, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previous violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. "Natural language processing"—a type of Artificial Intelligence (AI) that can interpret the context surrounding content—helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLM-based: We're starting to use a kind of AI called "large language models" (LLMs) to scale and improve content moderation. LLMs can comprehend human language and perform highly specific, complex tasks, which can make it possible to moderate content with a higher degree of precision, consistency and speed than human moderation alone.
- Multi-modal LLM-based: "Multi-modal LLMs" can also perform complex, highly specific tasks related to other types of content, such as visual content. For example, we can use this technology to make misinformation moderation easier by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.
- Content Credentials: We launched the ability to read Content Credentials, which attach metadata to content and which we can use to automatically label AI-generated content that originated on other major platforms.
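The following hypothetical Python sketch illustrates the text-based approach above: a foundational keyword list expanded to catch common character-substitution variants. The keyword list, normalisation table, and matching rule are illustrative assumptions, not TikTok's production systems.

```python
# Hypothetical sketch of keyword-variant detection: fold look-alike characters
# and separators so listed keywords also match their obfuscated variants.
# The keyword list and substitution table are illustrative only.
import re

# Map common look-alike substitutions back to their base characters.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})

FOUNDATIONAL_KEYWORDS = {"fakecure", "hoaxremedy"}  # placeholder terms

def normalise(text: str) -> str:
    """Lower-case, fold look-alike characters, and strip separators."""
    folded = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[\s.\-_*]+", "", folded)

def flag_for_review(comment: str) -> bool:
    """True when the normalised comment contains a listed keyword variant."""
    cleaned = normalise(comment)
    return any(keyword in cleaned for keyword in FOUNDATIONAL_KEYWORDS)

# "f4ke-cure" and "h0ax remedy" both normalise onto listed keywords.
assert flag_for_review("Try this f4ke-cure today!")
assert flag_for_review("the h0ax remedy works")
```

Matching on normalised text is deliberately simple here; as the list above notes, production systems layer this kind of signal with NLP models that weigh surrounding context before any enforcement decision.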
SLI 30.1.1
Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.
Total EEA Languages: 23
Country | Member States and languages covered by agreements with the fact-checking organisations |
---|---|
Austria | Fact-checking coverage implemented |
Belgium | Fact-checking coverage implemented |
Bulgaria | Fact-checking coverage implemented |
Croatia | Fact-checking coverage implemented |
Cyprus | Fact-checking coverage implemented |
Czech Republic | Fact-checking coverage implemented |
Denmark | Fact-checking coverage implemented |
Estonia | Fact-checking coverage implemented |
Finland | Fact-checking coverage implemented |
France | Fact-checking coverage implemented |
Germany | Fact-checking coverage implemented |
Greece | Fact-checking coverage implemented |
Hungary | Fact-checking coverage implemented |
Ireland | Fact-checking coverage implemented |
Italy | Fact-checking coverage implemented |
Latvia | Fact-checking coverage implemented |
Lithuania | Fact-checking coverage implemented |
Luxembourg | Fact-checking coverage implemented |
Malta | No permanent fact-checking coverage. We can, and have, put in place temporary agreements with fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis. Meanwhile, our fact-checking repository and other initiatives benefit all European users and ensure the overall integrity of our platform. |
Netherlands | Fact-checking coverage implemented |
Poland | Fact-checking coverage implemented |
Portugal | Fact-checking coverage implemented |
Romania | Fact-checking coverage implemented |
Slovakia | Fact-checking coverage implemented |
Slovenia | Fact-checking coverage implemented |
Spain | Fact-checking coverage implemented |
Sweden | Fact-checking coverage implemented |
Iceland | No permanent fact-checking coverage. We can, and have, put in place temporary agreements with fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis. Meanwhile, our fact-checking repository and other initiatives benefit all European users and ensure the overall integrity of our platform. |
Liechtenstein | Fact-checking coverage implemented |
Norway | Fact-checking coverage implemented |
Measure 30.2
Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.
QRE 30.2.1
Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.
Our partners are compensated in a fair, transparent way for the work they do, using standardised rates, and they invoice us on a monthly basis.
All of our fact-checking partners are independent organisations, certified through the non-partisan IFCN. Our agreements with them explicitly state that the fact-checkers are non-exclusive, independent contractors of TikTok who retain editorial independence in relation to the fact-checking, and that the services shall be performed in a professional manner and in line with the highest standards in the industry. Our processes are also set up to ensure our fact-checking partners' independence: our partners access flagged content through a dashboard for their exclusive use and provide their assessment of the accuracy of the content by assigning a rating. Fact-checkers do so independently of us, and their review may include calling sources, consulting public data or authenticating videos and images.
To facilitate transparency and openness with our fact-checking partners, we meet with them regularly, provide data regarding their feedback, and conduct surveys with them.
QRE 30.2.2
Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.
QRE 30.2.3
European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.
Measure 30.3
Relevant Signatories will contribute to cross-border cooperation between fact-checkers.
QRE 30.3.1
Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.
We continue to collaborate with our partners to understand how we may be able to facilitate further cross-border collaboration, through individual feedback sessions and active participation in global fact-checking events such as GlobalFact12 (June 2025), where we hosted a networking event with more than 80 people from our partner organizations, including staff from fact-checking partners, media literacy organizations, and TikTok's Safety Advisory Councils.
Measure 30.4
To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.
QRE 30.4.1
Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.
Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measure 31.1 and 31.2 Measure 31.3 Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Expanded our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
- Conducted feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 32
Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.
We signed up to the following measures of this commitment
Measure 32.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 32.3
Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.
QRE 32.3.1
Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.
We continue to participate in the taskforce of relevant signatories' representatives set up for this purpose. We are also engaging proactively with EDMO on this commitment.
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
We continue to engage in the Task-force and all of its working groups and subgroups.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Across the European Union, we have thousands of trust and safety professionals dedicated to keeping our platform safe. We also recognise the importance of local knowledge and expertise as we work to ensure online safety for our users, and we take a similar approach to our third-party partnerships.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
The 2025 Polish Presidential Election was a high-risk election with significant negative exposure potential. Round 1 took place on 18 May, the run-off was held on 1 June, and official results were announced on 2 June. Because of its significance for Poland's domestic policies and international relations, we activated our Mission Control Centre (MCC) in advance of the election, which enabled us to identify and contain threats early and quickly. Regulators publicly praised TikTok's collaboration, and national media highlighted TikTok's "more ambitious" safety posture compared to rival platforms. Some examples of the violative content we successfully disrupted include:
- Content removals: We proactively removed more than 3,300 pieces of election-related content in Poland for violating our policies on synthetic and manipulated media, misinformation, and civic and election integrity.
- Covert influence disruption: We removed three new domestic CIO networks (totaling 77 accounts and 36,419 followers) that were identified as specifically targeting a Polish audience to manipulate election discourse using fake news accounts and personas. More information relating to network disruptions is published in our dedicated Covert Influence Operations reports.
German Elections:
We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the German federal election held on 23 February 2025. In advance of the election, a core election team was formed, and consultations between cross-functional teams helped to identify and design response strategies.
TikTok did not observe major threats during the German election. Some examples of the violative content we successfully disrupted in Germany during January 2025:
- We removed more than 862,000 pieces of content for violating our Community Guidelines, which includes our policies on civic and election integrity and misinformation.
- We also removed 712 accounts for impersonating German election candidates and elected officials.
- We proactively prevented more than 24 million fake likes and more than 18.9 million fake follow requests. We also blocked more than 293,000 spam accounts from being created.
- We also removed more than 700,000 fake accounts, more than 17 million fake likes, and more than 5.7 million fake followers.
We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Portugal legislative election held on 18 May 2025. In advance of the election, a core election team was formed and consultations between cross function teams helped to identify and design response strategies.
TikTok did not observe major threats during the Portuguese election. Through the election, we monitored for and actioned inauthentic behavior, and removed content that violated our Community Guidelines. As part of these efforts:
- Between May 12 and May 25, we removed more than 300 pieces of content for violating our policies on civic and election integrity, misinformation and AI-generated content. More than 94% of this content was removed before it was reported to us.
- Between May 12 and 25, we proactively prevented more than 1,800,000 fake likes and more than 671,000 fake follow requests, and blocked more than 5,400 spam accounts from being created in Portugal. We also removed more than 5,400 fake accounts, more than 880,000 fake likes, and more than 154,000 fake followers.
- Between May 15 and May 29, we also removed 28 accounts for impersonating Portuguese election candidates and elected officials.
As co-chair of the Code of Conduct on Disinformation's Working Group on elections, TikTok takes its role in protecting the integrity of elections on our platform very seriously. We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Romanian Presidential Election, which took place on 4 May 2025 with a second round on 18 May 2025. Following the unprecedented annulment of the 2024 results, this was one of the most closely monitored electoral cycles for TikTok to date.
From March to May 2025, TikTok deployed robust detection models, automated moderation, and local partnerships to safeguard its Romanian user base of over 8 million. The following are examples of some of the threats TikTok observed in relation to both election rounds:
- Covert influence disruption: In April 2025, TikTok removed two new domestic covert networks (totaling 87 accounts and 33,296 followers) for manipulating election discourse using fake news accounts and personas. More information relating to the network disruptions is published on our dedicated Covert Influence Operations transparency page.
- Content removals: We removed over 13,100 pieces of election-related content in Romania for violating our policies on misinformation, civic integrity, and synthetic media - over 93% were taken down before any user report.
- We received 57 submissions through the COCD Rapid Response System in relation to the Romanian Presidential Election, which were rapidly addressed. Actions included banning or geo-blocking of accounts and content removals for violations of our Community Guidelines.
Mitigations in place
(I) Moderation capabilities
(III) Countering misinformation
In the weeks leading up to and including the run-off, we removed 530 videos for violating our civic and election integrity policies, and 2,772 videos for violating our misinformation policies.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Directing people to trusted sources
The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 23 reports through the RRS before the Polish Election, including NASK and DSA cases, which were rapidly addressed. Actions included banning of accounts and content removals for violations of our Community Guidelines.
To further promote election integrity, and inform our approach to the Polish Election, we organised an Election Speaker Series with Demagog who shared their insights and market expertise with our internal teams.
German Elections:
Enforcing our policies
(II) Mission Control Centre: internal cross-functional collaboration
On 18 November, ahead of the German election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.
(III) Countering misinformation
Our misinformation moderators receive enhanced training and tools to detect and remove misinformation and other violative content. We also have teams on the ground who partner with experts to ensure local context and nuance is reflected in our approach.
In January 2025, we removed more than 862,000 pieces of content for violating our Community Guidelines, which includes our policies on civic and election integrity and misinformation.
In the weeks leading up to and including the election, we removed 3,283 videos for violating our civic and election integrity policies, and 12,781 videos for violating our misinformation policies.
(IV) Fact-checking
Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.
TikTok collaborates with 12 fact-checking organizations across Europe to evaluate the accuracy of content in most European languages, including German. Deutsche Presse-Agentur (dpa) serves as the fact-checking partner providing coverage for Germany.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website as well as monthly Covert Influence Operations reports.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.
We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.
Before the German election, we provided all parties represented in federal and state parliaments with written information about our election integrity policies and measures, and offered virtual information sessions for the parties and their candidates. We presented at a security-focused webinar for candidates and parties organised by the Federal Office for Information Security (BSI). We also offered all parties represented in federal and state parliaments verification support for their candidates.
Directing people to trusted sources
(I) Investing in media literacy
We invest in media literacy campaigns as a counter-misinformation strategy. From 16 Dec 2024 to 3 Mar 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa). The Election Center was visited more than 5.7 million times.
External engagement at the national and EU levels
(I) Rapid Response System: external collaboration with COCD Signatories
The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 4 reports through the RRS before the German election, which were rapidly addressed. Actions included banning of accounts and content removals for violations of our Community Guidelines.
(II) Engagement with local experts
To further promote election integrity, and inform our approach to the German Election, we organised an Election Speaker Series with dpa, who shared their insights and market expertise with our internal teams.
(III) Engagement with national authorities and stakeholders
We participated in the two election roundtables hosted by the Federal Ministry of the Interior (BMI), one before and one after the election.
We participated in the election roundtable as well as the stress test hosted by the Federal Network Agency (BNetzA), the German Digital Service Coordinator (DSC). In addition, we held three separate virtual meetings between TikTok and the BNetzA, also attended by the European Commission, and answered a set of written questions.
We met with the domestic intelligence service (BfV) and the BMI state secretary.
We attended two election-focused virtual meetings with BzKJ (Federal Agency for Child and Youth Protection) and other platforms.
We engaged with the electoral commissioner ("Bundeswahlleiterin") and onboarded them to TikTok. In our election center, we included 2 videos from the electoral commissioner and linked to their website.
We provided all parties represented in federal and state parliaments with information about our election integrity measures and what they/their candidates can and cannot do on the platform in written form and also offered virtual info sessions for the parties and their candidates. We also offered all parties represented in federal and state parliaments verification support for their candidates.
We presented at a security-focused webinar for candidates and parties organised by the Federal Office for Information Security (BSI).
Portuguese Elections:
(I) Moderation capabilities
We have thousands of trust and safety professionals dedicated to keeping our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of enriched keywords and accounts.
(II) Mission Control Centre: internal cross-functional collaboration
On 13 May, ahead of the Portuguese election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.
(III) Countering misinformation
(IV) Fact-checking
Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.
TikTok collaborates with 12 fact-checking organizations across Europe to evaluate the accuracy of content in most European languages, including Portuguese. Poligrafo serves as the fact-checking partner providing coverage for Portugal.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website as well as monthly Covert Influence Operations reports.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.
We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.
Before the election we met with the main Portuguese regulatory bodies and political parties' Heads of Communication to (i) provide an overview of TikTok's policies for political accounts, (ii) outline TikTok's approach to election integrity and to data security, (iii) encourage account verification and (iv) enable direct contact to respond to their specific requests.
Directing people to trusted sources
(I) Investing in media literacy
We invest in media literacy campaigns as a counter-misinformation strategy. From 18 Apr 2025 to 2 June 2025, we ran an in-app Election Centre to provide users with up-to-date information about the 2025 Portuguese election. The centre contained a section about spotting misinformation, which included a series of educational videos created in partnership with our fact-checking partner Poligrafo to help the community safely navigate the platform and protect themselves against potential misinformation during the elections. The videos explained how users could identify and avoid misinformation, use TikTok's safety features, and critically evaluate content related to the electoral process, and offered practical advice and useful information about the electoral process.
External engagement at the national and EU levels
(I) Rapid Response System: external collaboration with COCD Signatories
The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 1 report through the RRS during the Portuguese election, which was quickly addressed and resulted in the reported content being deemed ineligible for the For You feed ("FYF Ineligible").
(II) Engagement with local experts
To further promote election integrity, and inform our approach to the Portuguese election, we organised an Election Speaker Series with Poligrafo who shared their insights and market expertise with our internal teams.
(III) Engagement with national authorities and stakeholders
Ahead of the election, our Government Relations team represented TikTok at an official meeting organised by ANACOM with the Portuguese Regulatory Authority for the Media (ERC) and the National Election Commission (CNE). The team also met with the Organization for Security and Co-operation in Europe's Office for Democratic Institutions and Human Rights (OSCE/ODIHR) and, in particular, their Election Expert Team (EET) deployed for these elections.
Romanian Elections:
(I) Moderation capabilities
(II) Mission Control Centre: internal cross-functional collaboration
Policies and Terms and Conditions
Outline any changes to your policies
Policy - 50.1.1
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.2.1
Description of intervention - 50.2.2
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.3.1
Description of intervention - 50.3.2
Indication of impact - 50.3.3
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.4.1
(Commitment 14, Measure 14.1)
Description of intervention - 50.4.2
During the Polish Election, we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing, we introduced dedicated covert influence operations reports.
The three networks removed comprised:
- Removed accounts in network: 16; Followers of network: 14,743
- Removed accounts in network: 12; Followers of network: 10,252
- Removed accounts in network: 49; Followers of network: 11,424
German Elections:
During the German election we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing, we introduced dedicated covert influence operations reports.
Portuguese Elections:
During the Portuguese election, we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform. To further increase transparency, accountability, and cross-industry sharing, we introduced dedicated covert influence operations reports.
Romanian Elections:
During the Romanian Presidential Election, we continued our work to detect and disrupt covert influence operations (CIOs) that attempt to establish themselves on TikTok and undermine the integrity of our platform.
Indication of impact - 50.4.3
See above.
German Elections:
In February 2025, we disrupted three small scale covert influence operations targeting the German market within the context of the federal election:
- A network of 40 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to amplify content supporting the political party "Alternative for Germany (AfD).” A large proportion of the network's accounts were found to use the word "news" or "nachricht" in their handle or nickname.
- A network of 17 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to promote the "Bündnis Sahra Wagenknecht (BSW)" Party within the context of the 2025 German federal elections. The network was found to alternate between posting apolitical and political content in order to drive engagement.
- A network of 14 accounts operated from Germany and targeted a German audience. The individuals behind this network created inauthentic accounts in order to promote the political party "Alternative for Germany (AfD)". The accounts used Smurf avatars and were observed to rebrand their accounts and alternate content in order to gain engagement.
Portuguese Elections:
In May 2025, we disrupted one small scale covert influence operation targeting the Portuguese market within the context of the legislative election:
- We assess that this network targeted a Portuguese audience. The individuals behind this network created inauthentic accounts in order to promote the Socialist Party and undermine the Social Democratic Party, within the context of the 2025 Portuguese election. This network masked its operating location through advanced operational security.
TikTok has scaled mitigations against deceptive behaviours including spam, impersonation, and activities in relation to fake engagement. As examples of our efforts in this area, from March to May 2025:
- We proactively prevented more than 21.5 million fake likes and more than 8.09 million fake follow requests, and we blocked 38,000 spam accounts from being created in Romania. We also removed:
- 48,300 fake accounts
- more than 8.2 million fake likes
- more than 1.83 million fake followers
- From 1 September 2024 to 26 May 2025, we prevented more than 120 million fake likes and more than 53 million fake follow requests, and we blocked more than 707,670 spam accounts from being created in Romania. We also removed over 2,000 accounts impersonating Romanian Government, Politician, or Political Party Accounts; 379,324 fake accounts; more than 28.9 million fake likes; and more than 15.6 million fake followers.
The two domestic covert networks removed in April 2025 (referenced above) comprised:
- A network of 27 accounts that had 9,474 cumulative followers as at the date of removal, operating from Romania, that attempted to target Romanian audiences in order to amplify certain narratives, attempting to manipulate Romanian election discourse. The network was found to create accounts with generic handles and avatars which it presented as news accounts.
- A network of 60 accounts that had 23,822 cumulative followers as at the date of removal, operating from Romania that attempted to target Romanian audiences in order to amplify certain narratives, attempting to manipulate Romanian elections discourse. The network was found to create fictitious personas in order to post comments and content aligned with its strategic goal.
- More information relating to the above detailed network disruptions is published on our dedicated Covert Influence Operations transparency page.
Specific Action applied - 50.4.4
Description of intervention - 50.4.5
We do not allow misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation,
- A crisis event, such as a conflict or natural disaster,
- A public figure who is:
- being degraded or harassed, or engaging in criminal or anti-social behaviour
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- being politically endorsed or condemned by an individual or group.
Indication of impact - 50.4.6
Number of videos removed for violating our Edited Media and AIGC policy from the week of 21-27 April to the week of 26 May-1 June, covering both rounds of the Polish Elections and the four complete weeks preceding Round 1: 75
German Election:
Number of videos removed for violating our Edited Media and AIGC policy during the 4 weeks leading up to and including the day of the German federal election on 23 February 2025: 574
Portuguese Elections:
Number of videos removed for violating our Edited Media and AIGC policy during the 4 weeks leading up to and including the day of the Portuguese election on 18 May 2025: 11
Romanian Election:
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy from the week of 31 March-6 April 2025 to the week of 12-18 May 2025, covering both rounds of the Romanian Presidential Election and the four weeks leading up to Round 1: 657
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.5.1
Description of intervention - 50.5.2
From 18 April 2025, TikTok launched an in-app Election Centre to provide users with up-to-date information about the 2025 Polish election. Working with electoral commissions and civil society organisations, the Election Centre connected people with reliable voting information, including when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results.
German Election:
In advance of EU and select regional elections, TikTok works with electoral commissions, civil society organisations, and fact-checking bodies to establish in-app Election Centres that connect people with reliable voting information, including: when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results. We direct people to the Election Centres through prompts on videos, LIVEs and searches related to elections.
Portuguese Elections:
In advance of EU and select regional elections, TikTok works with electoral commissions, civil society organisations, and fact-checking bodies to establish in-app Election Centres that connect people with reliable voting information, including: when, where, and how to vote; eligibility requirements for candidates; and, ultimately, the election results. We direct people to the Election Centres through prompts on videos, LIVEs and searches related to elections.
Romanian Election:
TikTok had an in-app Election Centre dedicated to Romania’s Presidential election. We updated the Election Centre to link directly to the Electoral Commission's website so it is even easier for people to access authoritative election information. In line with media literacy best practices, we also added a reminder to verify the accuracy of election information people see online and off.
Indication of impact - 50.5.3
The Election Centre launched in advance of the Polish election was visited 1,968,010 times.
German Election:
The Election Centre launched in advance of the German federal election was visited 5,708,749 times, and search banners were viewed 712,652 times. This localised approach helped to ensure that messaging in relation to the election was relevant to our community and encouraged more engagement.
Portuguese Elections:
The Election Centre, which launched in advance of the Portuguese election, was visited 371,857 times. This localised approach helped to ensure that messaging in relation to the election was relevant to our community and encouraged more engagement.
Romanian Election:
The in-app Election Centre launched before the Presidential Election was visited 2,018,869 times between 31 March and 23 May 2025.
Specific Action applied - 50.5.4
Description of intervention - 50.5.5
To further promote election integrity, and inform our approach to the Polish election, we organised an Election Speaker Series with Demagog, who shared their insights and market expertise with our internal teams.
German Election:
To further promote election integrity, and inform our approach to the election, we organised an Election Speaker Series on 14 January 2025 with Deutsche Presse-Agentur (dpa), who shared their insights and market expertise with our internal teams.
Portuguese Elections:
To further promote election integrity, and inform our approach to the Portuguese election, we organised an Election Speaker Series with Poligrafo, who shared their insights and market expertise with our internal teams.
Romanian Election:
To further promote civic awareness, TikTok introduced a permanent media literacy hub on 14 May 2025, surfacing critical thinking tools via keyword-triggered notices. Additionally, Romanian influencers and marketing agencies were briefed on TikTok's strict rules against political advertising and branded content.
Indication of impact - 50.5.6
This engagement with external regional and local experts allowed us to inform our country-level approach to these elections.
Description of intervention - 50.5.8
Indication of impact - 50.5.9
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.6.1
Description of intervention - 50.6.2
Indication of impact - 50.6.3
Number of Research API applications related to the German federal election that were approved in H1 2025: 15
Number of Research API applications related to the Portuguese election that have been approved from January to June 2025: No applications received.
Number of Research API applications related to the Romanian Presidential Election received January to June 2025: 7
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.7.1
Description of intervention - 50.7.2
Demagog serves as the fact-checking partner for Poland and provided coverage for the platform.
German Election:
Deutsche Presse-Agentur (dpa) serves as the fact-checking partner for Germany and provides coverage for the platform.
Portuguese Elections:
Poligrafo serves as the fact-checking partner for Portugal and provided coverage for the platform.
Romanian Election:
LeadStories serves as the fact-checking partner for Romania and provided coverage for the platform, including across weekends in advance of the Romanian Presidential Election.
Indication of impact - 50.7.3
Please refer to Chapter 7 - Empowering the Fact-Checking Community for metrics.
Romanian Election:
In May 2025, LeadStories provided 77 misinformation leads and submitted an Insights Report focused on the Romanian election. Please refer to Chapter 7 - Empowering the Fact-Checking Community for comprehensive metrics.
Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
War of Aggression by Russia on Ukraine
Covert influence operations (CIO) will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform. To counter these emerging threats and stay ahead of evolving challenges, we have expert teams who focus entirely on detecting, investigating, and disrupting covert influence operations.
Israel-Hamas Conflict
TikTok acknowledges both the significance and sensitivity of the Israel-Hamas conflict (referred to as the “Conflict” throughout this section). We understand this remains a difficult, fearful, and polarizing time for many people around the world and on TikTok. TikTok continues to recognise the need to engage in content moderation of violative content at scale while ensuring that the fundamental rights and freedoms of European citizens are respected and protected. We remain dedicated to supporting free expression, upholding our commitment to human rights, and maintaining the safety of our community and integrity of our platform during the Conflict.
Mitigations in place
War of Aggression by Russia on Ukraine
We aim to ensure that TikTok is a source of reliable and safe information, and we recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.
We continue the detection and labeling of state-controlled media accounts in accordance with our state-controlled media label policy globally.
Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We have thirteen localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia, developed in close collaboration with our fact-checking partners. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners, to help them identify misinformation and prevent its spread on the platform. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.
Israel-Hamas Conflict
We are continually working hard to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis. As part of our crisis management process, we launched a command centre that brings together key members of our global team of thousands of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis. Since the beginning of the Conflict, we have taken the following steps:
To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content related to unfolding or emergency events that has been assessed by our fact-checkers but cannot be verified as accurate (i.e., ‘unverified content’). Mindful of how evolving events may impact the assessment of sensitive Conflict-related claims day-to-day, we have implemented a process that allows our fact-checking partners to update us quickly if claims previously assessed as ‘unverified’ become verified with additional context and/or at a later stage.
TikTok has Arabic and Hebrew speaking moderators in the content moderation teams who review content and assist with Conflict-related translations. As we continue to focus on moderator care, we have also deployed additional well-being resources for our human moderation teams during this time.
In addition, we are committed to engagement with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, and cooperation with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during these difficult times.
Policies and Terms and Conditions
Outline any changes to your policies
In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.
Israel-Hamas:
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests.
Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.
Policy - 51.1.1
No relevant updates in the reporting period.
Israel-Hamas:
Policy updates
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2
In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.
We continue to rely on our existing, robust Integrity & Authenticity policies, which are an effective basis for tackling content related to the Conflict. As such, we have not needed to introduce any new misinformation policies for the purposes of addressing the crisis.
Rationale - 51.1.3
Our Integrity & Authenticity policies are our first line of defense in combating harmful misinformation and deceptive behaviours on our platform.
Our Community Guidelines make clear to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. Our moderation teams are provided with detailed policy guidance and direction when moderating on war-related harmful misinformation using existing policies.
We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.
When situations such as the War in Ukraine arise, our teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied effectively in respect of content relating to the relevant crisis (in this case, the war). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.
Israel-Hamas:
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests. Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.
In the context of the Conflict, we rely on our robust Integrity & Authenticity policies as our first line of defence in combating harmful misinformation and deceptive behaviours on our platform.
Our Community Guidelines clearly identify to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. We have also supported our moderation teams with detailed policy guidance and direction when moderating on Conflict-related harmful misinformation using existing policies.
We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.
When situations such as the Conflict arise, these teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied in an effective manner in respect of content relating to the relevant crisis (in this case, the Conflict). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.
Policy - 51.1.4
Israel-Hamas:
Feature policies
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5
In addition to being able to rely on our Integrity & Authenticity policies, we have made temporary adjustments to existing policies which govern certain TikTok features. For example, we have added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.
Rationale - 51.1.6
Temporary adjustments have been introduced in an effort to proactively prevent certain features from being used for hateful or violent behaviour in the region.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.3.1
Description of intervention - 51.3.2
Indication of impact - 51.3.3
Specific Action applied - 51.3.4
Description of intervention - 51.3.5
We use a combination of automated and human moderation to identify content that breaches our ad policies.
Israel-Hamas:
We use a combination of automated and human moderation to identify content that breaches our ad policies. These policies prohibit, among other things, ad content and landing pages from displaying negative content regarding military or police symbols, sensitive military events, militarism, the advocacy or whitewashing of war, terrorism, illegal organisations, or unlawful elements.
Indication of impact - 51.3.6
Our efforts on ad moderation practices help to ensure that ads that breach our policies are rejected or removed, both in the context of the War in Ukraine and more broadly on our platform.
Israel-Hamas:
Given the range of potential policy violations that could be engaged, we are currently unable to provide metrics specific to this issue.
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.4.1
Description of intervention - 51.4.2
We fight against CIO through our Integrity & Authenticity policies, which prohibit attempts to sway public opinion while misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose. We prohibit, and constantly work to disrupt, attempts to engage in covert influence operations that manipulate our platform and/or harmfully mislead our community. Our expert teams, which focus entirely on detecting, investigating, and disrupting CIO networks, have removed numerous networks targeting discourse about the War in Ukraine.
Countering covert influence operations is a particular challenge because the adversarial actors behind them continuously evolve the ways they hide the linkage between their accounts. Our experts work to counter covert influence operations by studying the many layers of techniques, tactics, and procedures that deceptive actors use to try to manipulate platforms, drawing from a variety of disciplines, including threat intelligence and data science.
Israel-Hamas:
We have assigned dedicated resourcing within our specialist teams to proactively monitor for CIO in connection with the Conflict.
Indication of impact - 51.4.3
Between January and June 2025, we took action to remove the following 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the Russia-Ukraine war while also misleading individuals, our community, or our systems:
- Network origin: Ukraine
- Network origin: Ukraine
We published this information within our most recently published transparency report here.
Israel-Hamas:
Between January and June 2025, we took action to remove the following network (consisting of 12 accounts in total) that was found to be related to the Conflict:
We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report, here.
Specific Action applied - 51.4.4
Description of intervention - 51.4.5
Russia-Ukraine:
Artificial intelligence (AI) enables incredible creative opportunities, but can potentially confuse or mislead users if they’re not aware content was generated or edited with AI.
Our ‘Edited Media and AI-Generated Content (AIGC)’ policy became effective in May 2024. TikTok has also started to automatically label AIGC when it is uploaded from certain other platforms (a sketch of this metadata-based approach follows the list below). Under this policy, we prohibit:
- The likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualization, bullying, or privacy concerns, including those related to personally identifiable information or likeness to private individuals
- The likeness of adult private figures, if we become aware it was used without their permission
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
- A crisis event, such as a conflict or natural disaster
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- being politically endorsed or condemned by an individual or group
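To make the auto-labelling mechanism concrete, here is a minimal sketch. It is not TikTok's actual implementation: it assumes C2PA-style "Content Credentials" provenance metadata attached to an upload, represented as a plain mapping, and the `content_credentials` key, `has_aigc_credential`, `on_upload`, and `apply_label` names are hypothetical.

```python
# Minimal sketch of metadata-based AIGC auto-labelling (hypothetical names).
# Assumption: uploads from partner platforms carry C2PA-style "Content
# Credentials" provenance metadata, represented here as a plain mapping.

def has_aigc_credential(metadata: dict) -> bool:
    """Return True if provenance metadata marks the media as AI-generated."""
    creds = metadata.get("content_credentials") or {}
    return bool(creds.get("generated_by_ai"))

def on_upload(video_id: str, metadata: dict, apply_label) -> None:
    """Auto-apply the AIGC label when provenance metadata indicates AI use,
    without waiting for the creator to self-disclose."""
    if has_aigc_credential(metadata):
        apply_label(video_id)

# Usage with a stub labelling callback:
if __name__ == "__main__":
    labelled = []
    on_upload(
        "vid-123",
        {"content_credentials": {"generated_by_ai": True, "issuer": "some-tool"}},
        labelled.append,
    )
    print(labelled)  # ['vid-123']
```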
Israel-Hamas:
Our Edited Media and AI-Generated Content (AIGC) policy makes it clear that we do not want our users to be misled about crisis events. For the purposes of our policy, AIGC refers to content created or modified by AI technology or machine-learning processes. It includes images of real people and may show highly realistic-appearing scenes. As noted above, we do not allow misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation,
- A crisis event, such as a conflict or natural disaster,
- A public figure who is:
- being degraded or harassed, or engaging in criminal or anti-social behaviour
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- spreading misinformation about matters of public importance
Indication of impact - 51.4.6
Our efforts support transparent and responsible content creation practices, both in the context of the War in Ukraine and more broadly on our platform.
Israel-Hamas:
Our efforts support transparent and responsible content creation practices, which are relevant both in the context of the Conflict and more broadly on our platform.
Specific Action applied - 51.4.7
Description of intervention - 51.4.8
We take action to remove accounts or content that contain inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. In conflict environments, such information may include content that is repurposed from past conflicts, content that makes false and harmful claims about specific events, or incites panic. In certain circumstances, we may also reduce the prominence of such content where it does not warrant removal.
Israel-Hamas:
We employ a dynamic approach to misinformation detection, leveraging multiple overlapping strategies to ensure comprehensive and responsive coverage. We place considerable emphasis on proactive content moderation strategies in order to remove harmful misinformation that violates our policies before it is reported to us by users or third parties.
Indication of impact - 51.4.9
In the context of the crisis, we are proud to have proactively removed thousands of videos containing harmful misinformation related to the War in Ukraine. We have been able to do this through a combination of automated review, human-level content moderation, carrying out targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists) as well as working closely with our fact-checking partners and responding to emerging trends they identify.
- Number of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 3,405
- Number of videos not recommended because of violation of misinformation policy with a proxy (only focusing on RU/UA) - 5,299
- Number of proactive removals of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 3,110
Israel-Hamas:
In the context of the crisis, we have proactively removed 7,177 videos in H1 2025 containing harmful misinformation related to the Conflict. We have been able to do this through a combination of automation and human moderation. We carry out targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists) as well as working closely with our fact-checking partners and responding to emerging trends they identify (a minimal sketch of such a keyword sweep follows the figures below).
- Number of videos removed because of violation of misinformation policy with a proxy (IL/Hamas) - 7,589
- Number of videos not recommended because of violation of misinformation policy with a proxy (IL/Hamas) - 14,103
- Number of proactive removals of videos removed because of violation of misinformation policy with a proxy (IL/Hamas): 7,177
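The targeted sweeps referenced above can be illustrated with a minimal sketch. This is a simplified illustration, not TikTok's production pipeline; the term list, the `matches_sweep`/`run_sweep` names, and the review-queue callback are hypothetical.

```python
# Minimal sketch of a targeted content sweep driven by hashtag / sensitive
# keyword lists (hypothetical names; not a production moderation pipeline).

SENSITIVE_TERMS = {"#fakeclaim", "crisis-hoax"}  # illustrative placeholder list

def matches_sweep(caption: str) -> bool:
    """Case-insensitive check of a video caption against the sweep list."""
    text = caption.lower()
    return any(term in text for term in SENSITIVE_TERMS)

def run_sweep(videos, enqueue_for_review) -> int:
    """Queue every matching video for human misinformation review."""
    queued = 0
    for video_id, caption in videos:
        if matches_sweep(caption):
            enqueue_for_review(video_id)
            queued += 1
    return queued

# Usage: sweep a small batch and collect flagged IDs.
if __name__ == "__main__":
    flagged = []
    n = run_sweep(
        [("v1", "Breaking: crisis-hoax footage"), ("v2", "cat video")],
        flagged.append,
    )
    print(n, flagged)  # 1 ['v1']
```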
Description of intervention - 51.4.11
Indication of impact - 51.4.12
Description of intervention - 51.4.17
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.5.1
Description of intervention - 51.5.2
Indication of impact - 51.5.3
Specific Action applied - 51.5.4
Description of intervention - 51.5.5
We have restricted access to certain state-affiliated media entities and strengthened our state-affiliated media policy in order to provide context that helps users evaluate content shared by such Russian, Ukrainian, and Belarusian entities. This includes:
- Prohibiting state-affiliated media accounts attempting to engage in foreign influence campaigns from advertising outside of the country with which they are primarily affiliated, including in the EU; and
- Investing in our detection capabilities for state-affiliated media accounts.
In addition to the above, we continue to invest in automation and scaled detection of state-affiliated media accounts. We also continue to work with third-party experts who help shape our state-affiliated media policies and who help inform our assessments of accounts that have been labelled as state-controlled. We continue to improve our existing processes for applying our state-affiliated media label, such as looking to automate where possible, and aiming to streamline all communications to ensure maximum efficiency. We also continue our efforts in developing an additional layer of intervention for state-affiliated accounts that engage in harmful behaviours.
Indication of impact - 51.5.6
- Number of videos tagged with the state affiliated media label for Russia, Belarus, and Ukraine - 13,847
- Number of impressions of the state-affiliated media label for Russia, Belarus, and Ukraine - 100,813,065
Specific Action applied - 51.5.7
Description of intervention - 51.5.8
We recognise the importance of proactive measures that are aimed at improving our users' digital literacy and increasing the prominence of authoritative information.
Indication of impact - 51.5.9
Relevant metrics for the media literacy campaigns (EEA total numbers):
- Total Number of impressions of the search intervention - 30,442,000
- Total Number of clicks on the search intervention - 155,726
- Click through rate of the search intervention - 0.51%
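As a consistency check, the click-through rate reported above is simply clicks divided by impressions:

$$\mathrm{CTR} = \frac{\text{clicks}}{\text{impressions}} = \frac{155{,}726}{30{,}442{,}000} \approx 0.0051 \approx 0.51\%$$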
Specific Action applied - 51.5.10
Description of intervention - 51.5.11
To minimise the discoverability of misinformation and help to protect our users, we have launched search interventions which are triggered when users search for neutral terms related to the Conflict (e.g., Israel, Palestine). We continuously evaluate the effectiveness of our keywords, adding or removing terms based on their relevance.
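The keyword-triggered mechanism described above can be sketched as follows. This is an illustration only, not TikTok's actual implementation; the term list, the `intervention_for` name, the banner payload shape, and the link paths are all hypothetical.

```python
# Minimal sketch of a keyword-triggered search intervention (hypothetical
# names). Neutral Conflict-related queries surface a reminder banner that
# links to source-checking tips and well-being resources.

INTERVENTION_TERMS = {"israel", "palestine"}  # continuously re-evaluated list

def intervention_for(query: str):
    """Return a banner payload if the query matches the intervention list."""
    tokens = set(query.lower().split())
    if tokens & INTERVENTION_TERMS:
        return {
            "type": "search_banner",
            "message": "Pause and check your sources.",
            "links": ["/safety/source-checking-tips", "/safety/well-being"],
        }
    return None  # no intervention; show ordinary results only

# Usage:
if __name__ == "__main__":
    print(intervention_for("latest Israel updates") is not None)  # True
    print(intervention_for("cooking recipes"))                    # None
```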
Indication of impact - 51.5.12
These search interventions remind users to pause and check their sources and also direct them to well-being resources.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.6.1
Measures taken to support research into crisis related misinformation and disinformation
Description of intervention - 51.6.2
Indication of impact - 51.6.3
During the period of this COCD report, we approved 2 applications through the Research API, with an express focus on the War in Ukraine.
Israel-Hamas:
Between January and June 2025, 2 Research API applications related to the Conflict have been approved.
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.7.1
Description of intervention - 51.7.2
Where our misinformation moderators or fact-checking partners determine that content is not able to be verified at the given time (which is common during an unfolding event), we apply our unverified content label to the content to encourage users to consider the reliability or source of the content. The application of the label will also result in the content becoming ineligible for recommendation in order to limit the spread of potentially misleading information.
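A minimal sketch of this labelling flow is shown below, under the assumption of a simple fact-check verdict enum; the `Verdict` and `handle_fact_check` names and the dict-based video record are hypothetical illustrations, not TikTok's implementation.

```python
# Minimal sketch of the unverified-content flow (hypothetical names).
# A verdict of "unverified" both applies the user-facing label and makes
# the video ineligible for For You feed recommendation.

from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNVERIFIED = "unverified"

def handle_fact_check(video: dict, verdict: Verdict) -> None:
    if verdict is Verdict.FALSE:
        video["removed"] = True                # violates misinformation policy
    elif verdict is Verdict.UNVERIFIED:
        video["label"] = "unverified_content"  # prompts users to weigh sources
        video["fyf_eligible"] = False          # limit spread while unverified

# Usage: an unverified claim gets labelled and demoted, not removed.
if __name__ == "__main__":
    v = {"id": "v1", "fyf_eligible": True}
    handle_fact_check(v, Verdict.UNVERIFIED)
    print(v)  # {'id': 'v1', 'fyf_eligible': False, 'label': 'unverified_content'}
```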
Indication of impact - 51.7.3
Verifying certain information during dynamic and fast moving events such as a war can be challenging and our moderators and fact-checkers cannot always conclusively determine whether content is indeed harmful misinformation, in violation of our Community Guidelines.
Where the label is applied, the content also becomes ineligible for recommendation into anyone's For You feed, to limit the spread of information relating to unfolding events where details are still developing and which may potentially be misleading.
Specific Action applied - 51.7.4
Description of intervention - 51.7.5
Our fact-checking efforts cover Russian, Ukrainian, Belarusian, and all major European languages (including 18 official European languages as well as a number of other languages which affect European users).
Israel-Hamas:
As part of our fact-checking program, TikTok works with more than 20 IFCN-accredited fact-checking organisations that support more than 60 languages, including Hebrew and Arabic, to help assess the accuracy of content in this rapidly-changing environment. In the context of the Conflict, our independent fact-checking partners are following our standard practice, whereby they do not moderate content directly on TikTok, but assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. Fact-checker input is then incorporated into our broader content moderation efforts in a number of different ways, as further outlined in the ‘indication of impact’ section below.
In the context of the Conflict, we have also adjusted our information consolidation process to allow us to track and store Conflict related claims separately from our global repository of previously fact-checked claims. This facilitates quick and effective access to relevant assessments, which, in turn, increases the effectiveness of our moderation efforts. We also continue to improve our hate speech detection with an improved audio hash bank to help detect hateful sounds as well as updated machine learning models to recognize emerging hateful content.
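The audio hash bank mentioned above can be illustrated with a minimal sketch. Real systems use perceptual audio fingerprints robust to re-encoding; the exact byte hash and the class/method names here are stand-in assumptions for illustration only.

```python
# Minimal sketch of an audio hash bank (hypothetical names). Production
# systems use perceptual fingerprints robust to re-encoding; an exact
# SHA-256 over normalised audio bytes stands in for that here.

import hashlib

class AudioHashBank:
    def __init__(self) -> None:
        self._known_hateful: set = set()

    @staticmethod
    def _digest(audio_bytes: bytes) -> str:
        return hashlib.sha256(audio_bytes).hexdigest()

    def add_known_hateful(self, audio_bytes: bytes) -> None:
        """Register a confirmed hateful sound in the bank."""
        self._known_hateful.add(self._digest(audio_bytes))

    def matches(self, audio_bytes: bytes) -> bool:
        """Check new uploads against the bank for known hateful audio."""
        return self._digest(audio_bytes) in self._known_hateful

# Usage:
if __name__ == "__main__":
    bank = AudioHashBank()
    bank.add_known_hateful(b"known-hateful-sound")
    print(bank.matches(b"known-hateful-sound"))  # True
    print(bank.matches(b"harmless-sound"))       # False
```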
Indication of impact - 51.7.6
Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the crisis, our fact-checking programme covers Russian, Ukrainian and Belarusian.
- Number of fact-checked videos with a proxy related to the War in Ukraine - 881
- Number of videos removed as a result of a fact-checking assessment with words related to the War in Ukraine - 144
- Number of videos not recommended in the For You feed as a result of a fact-checking assessment with words related to the War in Ukraine - 323
Israel-Hamas:
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the crisis, our fact-checking programme covers Arabic and Hebrew.
As noted above, we also incorporate fact-checker input into our broader content moderation efforts in different ways:
- Proactive insight reports that flag new and evolving claims they’re seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
- Advance warning of emerging misinformation narratives from our fact-checking partners, which has facilitated proactive responses against high-harm trends and helped to ensure that our moderation teams have up-to-date guidance.
- A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions (a minimal sketch of such a lookup follows this list).
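The claim-repository lookup referenced above might look like the following sketch. It is a simplified illustration under assumptions, not TikTok's implementation; the `Assessment` and `ClaimRepository` names and the lower-cased normalisation are hypothetical.

```python
# Minimal sketch of a repository of previously fact-checked claims
# (hypothetical names). Moderators look up prior assessments so repeat
# claims receive swift, consistent decisions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    claim: str
    verdict: str   # e.g. "true", "false", "unsubstantiated"
    source: str    # which fact-checking partner made the assessment

class ClaimRepository:
    def __init__(self) -> None:
        self._by_claim: dict = {}

    def record(self, assessment: Assessment) -> None:
        """Store an assessment keyed by a normalised form of the claim."""
        self._by_claim[assessment.claim.lower()] = assessment

    def lookup(self, claim: str) -> Optional[Assessment]:
        """Return a prior assessment of this claim, if one exists."""
        return self._by_claim.get(claim.lower())

# Usage: a repeat claim resolves instantly from the repository.
if __name__ == "__main__":
    repo = ClaimRepository()
    repo.record(Assessment("example hoax claim", "false", "partner-x"))
    hit = repo.lookup("Example hoax claim")
    print(hit.verdict if hit else "needs fact-check")  # false
```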
Relevant metrics:
- Number of fact-checked tasks related to IL/Hamas - 1,913
- Number of videos removed as a result of a fact-checking assessment with words related to IL/Hamas - 242
- Number of videos demoted (not recommended) as a result of a fact-checking assessment with words related to IL/Hamas - 323