YouTube

Report September 2025

Submitted

Your organisation description

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable; however, see QRE 14.1.2.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube’s systems are designed to connect people with high-quality content.

In addition, YouTube has various policies which set out what is not allowed on YouTube. These policies, which can be accessed in YouTube’s Help Centre, address relevant TTPs. Notably, YouTube’s policies tend to be broader than the identified TTPs. As such, the related SLIs reporting actions taken against each TTP may be overinclusive.

YouTube’s Community Guidelines, its commitment to promote high-quality content and curb the spread of harmful misinformation, its disclosure requirements for paid product placements, sponsorships and endorsements, and its ongoing work with Google’s Threat Analysis Group (TAG) broadly address TTPs 1, 2, 3, 5, 7, 8, 9, 10, and 11, and notably go beyond them.

In this report, YouTube has provided data relating to TTPs 1, 5, 7 and 9. Removals relating to the remaining TTPs are included, in part or in whole, in the Community Guidelines enforcement report, but YouTube does not have more detailed removal reporting at this time. A TTP does not necessarily map to a single Community Guideline, which makes more granular per-TTP reporting challenging.

YouTube continues to assess, evaluate, and update its policies on a regular basis. The latest policies, including the Community Guidelines, can be found here.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube’s approach to combating misinformation involves removing content that violates YouTube’s policies, raising high-quality information in rankings and recommendations, curbing the spread of harmful misinformation, and rewarding trusted, eligible creators and artists. YouTube applies these principles globally, including across the EU.

YouTube uses a combination of people and machine learning to detect problematic content automatically and at scale. Machine learning is well-suited to detect patterns, including harmful misinformation, which helps YouTube find content similar to other content that YouTube has already removed, even before it is viewed. Every quarter, YouTube publishes data in the Community Guidelines enforcement report about removals that were first detected by automated means. 

YouTube’s Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure YouTube’s teams are prepared to address them before they can become a larger issue.

In addition, Google’s Threat Analysis Group (TAG) and Google and YouTube’s Trust and Safety Teams are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.

YouTube continues to invest in automated detection systems, and relies on both human evaluators and machine learning to train those systems on new data. YouTube’s engineering teams also continue to update and improve their detection systems regularly.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube enforces a broad range of policies to help build a safer community. These policies include, but are not limited to, YouTube’s Community Guidelines, which include policies covering Spam, Deceptive Practices, and Scams, Impersonation Policy and Fake Engagement Policy. YouTube applies these policies globally, including across the EEA Member States.

Implementing and enforcing YouTube policies
In general, enforcement of YouTube’s policies is a joint effort between people and machine learning technology. YouTube starts by giving a team of experienced content moderators enforcement guidelines (detailed explanations of what makes content violative and non-violative), and asks them to differentiate between violative and non-violative material. If the new guidelines allow them to achieve a very high level of accuracy, YouTube expands the testing group to include moderators across different backgrounds, languages and experience levels. 

Then YouTube may begin revising the guidelines so that they can be accurately interpreted across a larger, more diverse set of moderators. These findings then help train YouTube’s machine learning technology to detect potentially violative content at scale. As done with its content moderators, YouTube also tests its models to understand whether it has provided enough context for them to make accurate assessments about what to surface for people to review.

Once models are trained to identify potentially violative content, the role of content moderators remains essential throughout the enforcement process. Machine learning helps identify potentially violative content at scale and content moderators may then help assess whether the content should be removed.

This collaborative approach helps improve the accuracy of YouTube’s models over time, as models continuously learn and adapt based on content moderator feedback. It also means YouTube’s enforcement systems can manage the sheer scale of content that is uploaded to YouTube, while still digging into the nuances that determine whether a piece of content is violative.
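The feedback loop described above can be illustrated with a minimal sketch. This is not YouTube's actual system: the class, threshold, and scoring function are all assumptions made for illustration. A classifier scores content, high-confidence flags are queued for human review, and reviewer decisions are collected as labeled examples for the next model update.

```python
# Illustrative sketch only (assumed names and thresholds, not YouTube's
# actual pipeline): a classifier routes likely-violative content to human
# review, and reviewer decisions become training feedback for the model.
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    review_threshold: float = 0.8                       # assumed confidence cutoff
    pending_review: list = field(default_factory=list)  # items awaiting a moderator
    training_feedback: list = field(default_factory=list)

    def score(self, item: str) -> float:
        # Stand-in for a trained classifier returning a violation probability.
        return 0.9 if "spam" in item else 0.1

    def triage(self, item: str) -> str:
        # Route high-confidence flags to a human moderator for final assessment.
        if self.score(item) >= self.review_threshold:
            self.pending_review.append(item)
            return "flagged_for_human_review"
        return "no_action"

    def record_decision(self, item: str, violative: bool) -> None:
        # Moderator feedback becomes labeled training data, closing the loop.
        self.training_feedback.append((item, violative))

queue = ModerationQueue()
print(queue.triage("spam giveaway link"))  # flagged_for_human_review
print(queue.triage("cooking tutorial"))    # no_action
```

The design choice worth noting is that the model only surfaces candidates; the human decision is authoritative and is what feeds back into retraining.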

For TTPs 1, 5, 7 and 9, YouTube provides details around mapping to its policies. To learn more about these methodologies, refer to SLI 14.2.1, SLI 14.2.2, and SLI 14.2.4.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

Where possible, each TTP has been mapped to the relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content may violate more than one of YouTube’s Community Guidelines and so may be counted under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1

(1) Number of channels for TTP 1, identified for potential removal by EEA Member State for reporting period H1 2025 (1 January 2025 to 30 June 2025);
(2) Number of removals of channels for TTP 1 by EEA Member State for reporting period H1 2025.


TTP 5
(3) Number of channels for TTP 5, identified for potential removal by EEA Member State for reporting period H1 2025 (1 January 2025 to 30 June 2025);
(4) Number of removals of channels for TTP 5 by EEA Member State for reporting period H1 2025;
(5) Number of videos for TTP 5, identified for potential removal by EEA Member State for reporting period H1 2025;
(6) Number of removals of videos for TTP 5 by EEA Member State for reporting period H1 2025.


TTP 7
(7) Number of videos for TTP 7, identified for potential removal, by EEA Member State for reporting period H1 2025 (1 January 2025 to 30 June 2025);
(8) Number of removals of videos for TTP 7, by EEA Member State for reporting period H1 2025.


TTP 9
(9) Number of channels for TTP 9, identified for potential removal by EEA Member State for reporting period H1 2025 (1 January 2025 to 30 June 2025);
(10) Number of removals of channels for TTP 9 by EEA Member State for reporting period H1 2025;
(11) Number of videos for TTP 9, identified for potential removal by EEA Member State for reporting period H1 2025;
(12) Number of removals of videos for TTP 9 by EEA Member State for reporting period H1 2025.


The number of removals may be an overcount, as the respective Community Guidelines may cover more policy-violative activity than the TTP alone.

Country TTP OR ACTION 1 - Number of channels identified TTP OR ACTION 1 - Number of channels removed TTP OR ACTION 5 - Number of channels identified TTP OR ACTION 5 - Number of channels removed TTP OR ACTION 5 - Number of videos identified TTP OR ACTION 5 - Number of videos removed TTP OR ACTION 7 - Number of videos identified TTP OR ACTION 7 - Number of videos removed TTP OR ACTION 9 - Number of channels identified TTP OR ACTION 9 - Number of channels removed TTP OR ACTION 9 - Number of videos identified TTP OR ACTION 9 - Number of videos removed
Austria 838 838 78 78 24 24 29 29 71 71 15 15
Belgium 758 758 123 123 53 53 28 28 99 99 20 20
Bulgaria 629 629 82 82 115 115 34 34 48 48 9 9
Croatia 176 176 34 34 14 14 4 4 31 31 3 3
Cyprus 605 605 36 36 0 0 10 10 59 59 28 28
Czech Republic 1,234 1,234 90 90 71 71 41 41 119 119 26 26
Denmark 466 466 48 48 27 27 27 27 69 69 11 11
Estonia 195 195 20 20 3 3 7 7 13 13 2 2
Finland 531 531 38 38 83 83 12 12 58 58 7 7
France 9,483 9,483 708 708 798 798 365 365 579 579 121 121
Germany 115,514 115,514 863 863 4,523 4,523 512 512 769 769 265 265
Greece 8,851 8,851 85 85 8 8 35 35 104 104 16 16
Hungary 695 695 57 57 4 4 11 11 79 79 3 3
Ireland 1,082 1,082 78 78 71 71 42 42 65 65 17 17
Italy 3,127 3,127 402 402 332 332 129 129 281 281 48 48
Latvia 432 432 28 28 51 51 3 3 31 31 2 2
Lithuania 1,220 1,220 35 35 15 15 6 6 116 116 7 7
Luxembourg 119 119 9 9 5 5 2 2 11 11 2 2
Malta 62 62 6 6 0 0 5 5 11 11 0 0
Netherlands 14,499 14,499 224 224 447 447 180 180 436 436 120 120
Poland 7,480 7,480 391 391 264 264 84 84 581 581 99 99
Portugal 1,048 1,048 106 106 52 52 29 29 73 73 3 3
Romania 1,724 1,724 307 307 379 379 41 41 141 141 45 45
Slovakia 264 264 45 45 0 0 6 6 40 40 5 5
Slovenia 153 153 19 19 9 9 2 2 16 16 5 5
Spain 4,827 4,827 462 462 1,656 1,656 99 99 293 293 37 37
Sweden 1,260 1,260 133 133 210 210 33 33 91 91 19 19
Iceland 27 27 9 9 0 0 1 1 7 7 1 1
Liechtenstein 5 5 0 0 0 0 0 0 0 0 0 0
Norway 551 551 53 53 11 11 18 18 82 82 9 9
Total EU 177,272 177,272 4,507 4,507 9,214 9,214 1,776 1,776 4,284 4,284 935 935
Total EEA 177,855 177,855 4,569 4,569 9,225 9,225 1,795 1,795 4,373 4,373 945 945
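The overcount caveat noted above can be made concrete with a toy example, using invented data: when one removal violates two mapped policies, summing per-policy counts exceeds the number of distinct removals.

```python
# Hypothetical illustration of the overcount caveat (invented data, not
# YouTube's): a removal that violates two Community Guidelines can appear
# under more than one TTP mapping, so per-TTP totals can exceed the
# number of distinct removals.
from collections import Counter

# Assumed example mapping of removed videos to the policies they violated.
removals = {
    "video_a": ["spam", "impersonation"],  # counted under two TTP mappings
    "video_b": ["spam"],
    "video_c": ["fake_engagement"],
}

per_policy = Counter(p for policies in removals.values() for p in policies)

print(sum(per_policy.values()))  # 4 -> sum of per-policy counts
print(len(removals))             # 3 -> distinct removals
```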

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

Where possible, each TTP has been mapped to the relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content may violate more than one of YouTube’s Community Guidelines and so may be counted under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1
Methodology
(1) Views threshold on video removals for TTP 1 by EEA Member State for reporting period H1 2025;
(2) Interaction/engagement before action for TTP 1 by EEA Member State for reporting period H1 2025;
(3) Views/impressions after action for TTP 1 by video by EEA Member State for reporting period H1 2025;
(4) Interaction/engagement after action for TTP 1 by EEA Member State for reporting period H1 2025.

For SLI 14.2.2 (3): Actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) N/A;
(2) N/A;
(3) N/A;
(4) N/A. 


TTP 5
Methodology
(1) Views threshold on video removals for TTP 5 by EEA Member State for reporting period H1 2025;
(2) Interaction/engagement before action for TTP 5 by EEA Member State for reporting period H1 2025;
(3) Views/impressions after action for TTP 5 by video by EEA Member State for reporting period H1 2025;
(4) Interaction/engagement after action for TTP 5 by EEA Member State for reporting period H1 2025.

For SLI 14.2.2 (1): Starting March 2025, YouTube updated the terminology used for Shorts view counts. This terminology change does not apply to YouTube’s transparency reporting view-related metrics, which remain the same in name and methodology. Learn more here.

For SLI 14.2.2 (3): Actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below;
(2) N/A;
(3) Please see table below;
(4) N/A. 


TTP 7
Methodology
(1) Views threshold on video removals for TTP 7 by EEA Member State for reporting period H1 2025;
(2) Interaction/engagement before action for TTP 7 by EEA Member State for reporting period H1 2025;
(3) Views/impressions after action for TTP 7 by video by EEA Member State for reporting period H1 2025;
(4) Interaction/engagement after action for TTP 7 by EEA Member State for reporting period H1 2025.

For SLI 14.2.2 (1): Starting March 2025, YouTube updated the terminology used for Shorts view counts. This terminology change does not apply to YouTube’s transparency reporting view-related metrics, which remain the same in name and methodology. Learn more here.

For SLI 14.2.2 (3): Actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below;
(2) N/A;
(3) Please see table below;
(4) N/A. 


TTP 9
Methodology
(1) Views threshold on video removals for TTP 9 by EEA Member State for reporting period H1 2025;
(2) Interaction/engagement before action for TTP 9 by EEA Member State for reporting period H1 2025;
(3) Views/impressions after action for TTP 9 by video by EEA Member State for reporting period H1 2025;
(4) Interaction/engagement after action for TTP 9 by EEA Member State for reporting period H1 2025.

For SLI 14.2.2 (1): Starting March 2025, YouTube updated the terminology used for Shorts view counts. This terminology change does not apply to YouTube’s transparency reporting view-related metrics, which remain the same in name and methodology. Learn more here.

For SLI 14.2.2 (3): Actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below;
(2) N/A;
(3) Please see table below;
(4) N/A. 

Country TTP OR ACTION 5 - Number of videos removed with 0 views TTP OR ACTION 5 - Number of videos removed with 1-10 views TTP OR ACTION 5 - Number of videos removed with 11-100 views TTP OR ACTION 5 - Number of videos removed with 101-1,000 views TTP OR ACTION 5 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 5 - Number of videos removed with >10,000 views TTP OR ACTION 5 - Views after action TTP OR ACTION 7 - Number of videos removed with 0 views TTP OR ACTION 7 - Number of videos removed with 1-10 views TTP OR ACTION 7 - Number of videos removed with 11-100 views TTP OR ACTION 7 - Number of videos removed with 101-1,000 views TTP OR ACTION 7 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 7 - Number of videos removed with >10,000 views TTP OR ACTION 7 - Views after action TTP OR ACTION 9 - Number of videos removed with 0 views TTP OR ACTION 9 - Number of videos removed with 1-10 views TTP OR ACTION 9 - Number of videos removed with 11-100 views TTP OR ACTION 9 - Number of videos removed with 101-1,000 views TTP OR ACTION 9 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 9 - Number of videos removed with >10,000 views TTP OR ACTION 9 - Views after action
Austria 0 2 0 1 1 20 0 3 9 9 6 1 1 0 0 0 3 5 2 5 0
Belgium 13 2 14 20 2 2 0 1 18 6 2 0 1 0 1 1 2 6 4 6 0
Bulgaria 7 2 1 13 37 55 0 7 12 6 4 4 1 0 0 1 1 4 2 1 0
Croatia 9 0 2 3 0 0 0 0 3 0 0 0 1 0 0 0 0 2 0 1 0
Cyprus 0 0 0 0 0 0 0 4 3 3 0 0 0 0 0 1 2 5 13 7 0
Czech Republic 5 4 27 31 4 0 0 11 20 3 5 2 0 0 0 4 0 9 7 6 0
Denmark 3 0 1 21 2 0 0 2 12 7 4 1 1 0 0 2 2 5 1 1 0
Estonia 1 0 0 2 0 0 0 0 2 3 0 1 1 0 0 0 0 2 0 0 0
Finland 18 5 10 28 19 3 0 3 3 4 0 2 0 0 0 1 3 1 2 0 0
France 99 58 33 123 220 265 0 53 159 80 49 13 11 0 1 11 17 47 23 22 0
Germany 813 275 730 1,221 668 816 0 52 219 100 66 41 34 0 1 15 21 86 99 43 0
Greece 1 0 3 4 0 0 0 3 11 5 9 5 2 0 1 1 4 6 3 1 0
Hungary 2 0 0 0 0 2 0 0 7 4 0 0 0 0 0 0 0 1 1 1 0
Ireland 3 1 24 28 8 7 0 2 13 14 9 3 1 0 0 0 4 5 4 4 0
Italy 19 7 18 108 175 5 0 15 51 28 24 7 4 0 0 1 8 11 18 10 0
Latvia 14 6 16 15 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 2 0
Lithuania 1 0 1 1 8 4 0 1 3 0 0 1 1 0 0 2 2 1 2 0 0
Luxembourg 0 0 3 2 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0
Malta 0 0 0 0 0 0 0 1 2 0 1 1 0 0 0 0 0 0 0 0 0
Netherlands 44 20 72 163 105 43 0 28 86 34 15 12 5 0 2 2 16 34 46 20 0
Poland 14 22 35 48 145 0 0 12 31 14 13 9 5 0 5 8 34 33 11 8 0
Portugal 6 1 9 36 0 0 0 6 12 6 3 2 0 0 0 0 1 1 1 0 0
Romania 16 1 12 32 8 310 0 6 11 10 8 4 2 0 3 2 12 11 8 9 0
Slovakia 0 0 0 0 0 0 0 0 2 2 1 0 1 0 0 0 1 2 1 1 0
Slovenia 2 0 4 2 1 0 0 0 2 0 0 0 0 0 0 0 0 1 4 0 0
Spain 79 35 233 366 307 636 0 17 38 13 14 10 7 0 2 2 6 9 13 5 0
Sweden 44 12 51 71 28 4 0 3 15 6 4 5 0 0 0 2 5 4 3 5 0
Iceland 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 1 1 2 7 0 0 0 0 5 9 1 2 1 0 1 1 1 4 2 0 0
Total EU 1,213 453 1,299 2,339 1,738 2,172 0 230 745 359 238 124 80 0 17 56 144 291 268 159 0
Total EEA 1,214 454 1,301 2,346 1,738 2,172 0 230 751 368 239 126 81 0 18 58 145 295 270 159 0
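The view-count bucketing used in the table above can be sketched as follows. The thresholds are taken from the column headers; the sample view counts are invented for illustration.

```python
# Minimal sketch (not YouTube's code) of bucketing removed videos by view
# count into the thresholds used in the SLI 14.2.2 table.
import bisect
from collections import Counter

BOUNDS = [0, 10, 100, 1_000, 10_000]  # upper bound of each bucket
LABELS = ["0", "1-10", "11-100", "101-1,000", "1,001-10,000", ">10,000"]

def bucket(views: int) -> str:
    """Map a view count to its threshold bucket label."""
    return LABELS[bisect.bisect_left(BOUNDS, views)]

# Invented view counts for six removed videos.
sample_views = [0, 7, 54, 310, 2_500, 48_000]
histogram = Counter(bucket(v) for v in sample_views)
print(histogram[">10,000"])  # 1
```

`bisect_left` places each count into the first bucket whose upper bound is not below it, so boundary values (e.g. exactly 10 views) land in the lower bucket, matching the inclusive ranges in the headers.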

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

Where possible, each TTP has been mapped to the relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content may violate more than one of YouTube’s Community Guidelines and so may be counted under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1
Views are a measure of penetration / impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and view / impressions on the platform after action has been taken.
 
TTP 5
Views are a measure of penetration / impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and view / impressions on the platform after action has been taken.
 
TTP 7
Views are a measure of penetration / impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and view / impressions on the platform after action has been taken.
 
TTP 9
Views are a measure of penetration / impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and view / impressions on the platform after action has been taken.

Country TTP OR ACTION1 - Penetration and impact on genuine users TTP OR ACTION1 - Trends on targeted audiences TTP OR ACTION1 - Trends on narratives used TTP OR ACTION2 - Penetration and impact on genuine users TTP OR ACTION2 - Trends on targeted audiences TTP OR ACTION2 - Trends on narratives used TTP OR ACTION3 - Penetration and impact on genuine users TTP OR ACTION3 - Trends on targeted audiences TTP OR ACTION3 - Trends on narratives used TTP OR ACTION4 - Penetration and impact on genuine users TTP OR ACTION4 - Trends on targeted audiences TTP OR ACTION4 - Trends on narratives used TTP OR ACTION5 - Penetration and impact on genuine users TTP OR ACTION5 - Trends on targeted audiences TTP OR ACTION5 - Trends on narratives used TTP OR ACTION6 - Penetration and impact on genuine users TTP OR ACTION6 - Trends on targeted audiences TTP OR ACTION6 - Trends on narratives used TTP OR ACTION7 - Penetration and impact on genuine users TTP OR ACTION7 - Trends on targeted audiences TTP OR ACTION7 - Trends on narratives used TTP OR ACTION8 - Penetration and impact on genuine users TTP OR ACTION8 - Trends on targeted audiences TTP OR ACTION8 - Trends on narratives used TTP OR ACTION9 - Penetration and impact on genuine users TTP OR ACTION9 - Trends on targeted audiences TTP OR ACTION9 - Trends on narratives used TTP OR ACTION10 - Penetration and impact on genuine users TTP OR ACTION10 - Trends on targeted audiences TTP OR ACTION10 - Trends on narratives used TTP OR ACTION11 - Penetration and impact on genuine users TTP OR ACTION11 - Trends on targeted audiences TTP OR ACTION11 - Trends on narratives used TTP OR ACTION12 - Penetration and impact on genuine users TTP OR ACTION12 - Trends on targeted audiences TTP OR ACTION12 - Trends on narratives used
Austria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Belgium 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Bulgaria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Croatia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cyprus 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Czech Republic 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Denmark 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Estonia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Finland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
France 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Germany 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Greece 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Hungary 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Ireland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Italy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Latvia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Lithuania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Luxembourg 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Malta 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Netherlands 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Poland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Portugal 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Romania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovakia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovenia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Spain 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Sweden 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Iceland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

Where possible, each TTP has been mapped to the relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content may violate more than one of YouTube’s Community Guidelines and so may be counted under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1
Methodology
(1) Percentage of TTP 1 channel removals out of all related channel removals by EEA Member State for reporting period H1 2025;
(2) N/A;
(3) N/A.

Response
(1) Please see table below;
(2, 3) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.


TTP 5
Methodology
(1) Percentage of TTP 5 channel removals out of all related channel removals by EEA Member State for reporting period H1 2025;
(2) Percentage of TTP 5 video removals out of all related video removals by EEA Member State for reporting period H1 2025;
(3) N/A;
(4) N/A.

Response
(1) Please see table below;
(2) Please see table below;
(3, 4) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.


TTP 7
Methodology
(1) Percentage of TTP 7 video removals out of all related video removals by EEA Member State for reporting period H1 2025;
(2) N/A;
(3) N/A.

Response
(1) Please see table below;
(2, 3) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.


TTP 9
Methodology
(1) Percentage of TTP 9 channel removals out of all related channel removals by EEA Member State for reporting period H1 2025;
(2) Percentage of TTP 9 video removals out of all related video removals by EEA Member State for reporting period H1 2025;
(3) N/A;
(4) N/A.

Response
(1) Please see table below;
(2) Please see table below;
(3, 4) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.

Country TTP OR ACTION 1 - Percentage of TTP 1 channel removals out of all related channel removals TTP OR ACTION 5 - Percentage of TTP 5 channel removals out of all related channel removals TTP OR ACTION 5 - Percentage of TTP 5 video removals out of all related video removals TTP OR ACTION 7 - Percentage of TTP 7 video removals out of all related video removals TTP OR ACTION 9 - Percentage of TTP 9 channel removals out of all related channel removals TTP OR ACTION 9 - Percentage of TTP 9 video removals out of all related video removals
Austria 34.57% 3.22% 0.16% 0.20% 2.93% 0.10%
Belgium 26.03% 4.22% 0.21% 0.11% 3.40% 0.08%
Bulgaria 28.03% 3.65% 0.56% 0.17% 2.14% 0.04%
Croatia 20.39% 3.94% 0.23% 0.07% 3.59% 0.05%
Cyprus 45.52% 2.71% 0.00% 0.17% 4.44% 0.48%
Czech Republic 29.87% 2.18% 0.13% 0.08% 2.88% 0.05%
Denmark 27.27% 2.81% 0.07% 0.07% 4.04% 0.03%
Estonia 35.01% 3.59% 0.06% 0.13% 2.33% 0.04%
Finland 27.00% 1.93% 0.68% 0.10% 2.95% 0.06%
France 39.29% 2.93% 0.57% 0.26% 2.40% 0.09%
Germany 81.51% 0.61% 2.43% 0.28% 0.54% 0.14%
Greece 46.36% 0.45% 0.06% 0.26% 0.54% 0.12%
Hungary 30.96% 2.54% 0.03% 0.07% 3.52% 0.02%
Ireland 46.28% 3.34% 0.40% 0.24% 2.78% 0.10%
Italy 30.74% 3.95% 0.39% 0.15% 2.76% 0.06%
Latvia 37.66% 2.44% 0.58% 0.03% 2.70% 0.02%
Lithuania 41.17% 1.18% 0.18% 0.07% 3.91% 0.08%
Luxembourg 23.61% 1.79% 0.29% 0.12% 2.18% 0.12%
Malta 33.33% 3.23% 0.00% 0.40% 5.91% 0.00%
Netherlands 49.78% 0.77% 0.59% 0.24% 1.50% 0.16%
Poland 43.86% 2.29% 0.30% 0.10% 3.41% 0.11%
Portugal 33.04% 3.34% 0.23% 0.13% 2.30% 0.01%
Romania 23.48% 4.18% 0.54% 0.06% 1.92% 0.06%
Slovakia 21.34% 3.64% 0.00% 0.05% 3.23% 0.04%
Slovenia 32.01% 3.97% 0.35% 0.08% 3.35% 0.20%
Spain 34.29% 3.28% 1.69% 0.10% 2.08% 0.04%
Sweden 34.85% 3.68% 0.66% 0.10% 2.52% 0.06%
Iceland 20.45% 6.82% 0.00% 0.10% 5.30% 0.10%
Liechtenstein 41.67% 0.00% 0.00% 0.00% 0.00% 0.00%
Norway 29.95% 2.88% 0.07% 0.12% 4.46% 0.06%
Total EU 59.34% 1.51% 0.87% 0.17% 1.43% 0.09%
Total EEA 59.14% 1.52% 0.86% 0.17% 1.45% 0.09%
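The metric reported in the table above is a simple share: TTP-mapped removals divided by all related removals, formatted to two decimal places. The sketch below uses Austria's reported TTP 1 removal count as the numerator; the denominator is an invented figure chosen only to illustrate the arithmetic.

```python
# Sketch of the SLI 14.2.4 percentage metric. The 838 figure is Austria's
# reported TTP 1 channel removals; the 2,424 denominator is a hypothetical
# "all related channel removals" total, not a figure from the report.

def removal_share(ttp_removals: int, all_removals: int) -> str:
    """TTP-mapped removals as a percentage of all related removals."""
    return f"{100 * ttp_removals / all_removals:.2f}%"

print(removal_share(838, 2424))  # 34.57%
```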

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

The final list of TTPs agreed within the Permanent Task-force in H2 2022 was used by Signatories as part of their reports from then on, as intended. The Permanent Task-force will continue to examine and update the list as necessary in light of the state of the art. 

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for the Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

But, see QRE 15.2.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

All content uploaded to YouTube is subject to its Community Guidelines—regardless of how it is generated.

YouTube’s long-standing Misinformation Policies prohibit content that has been technically manipulated or doctored in a way that misleads users (usually beyond clips taken out of context) and may pose a serious risk of egregious harm. YouTube detects content that violates Community Guidelines using a combination of machine learning and human review. YouTube also maintains additional related policies.

Refer to QRE 18.2.1 for how YouTube enforces these policies.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

All content uploaded to YouTube is subject to its Community Guidelines—regardless of how it is generated. 

YouTube requires creators to disclose when they have created altered or synthetic content that is realistic, including using AI tools. YouTube also informs viewers that content may be altered or synthetic in two ways. A label may be added to the description panel indicating that some of the content was altered or synthetic. For certain types of content about sensitive topics, YouTube will apply a more prominent label to the video player. Examples of content that requires disclosure can be found here.

YouTube has noted feedback from its community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them. YouTube makes it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using its privacy request process. Not all content will be removed from YouTube, and YouTube will consider a variety of factors when evaluating these requests; some examples can be found here.

Additionally, YouTube has highlighted how it will build responsibility into its AI tools and features for creators. This includes significant, ongoing work to develop guardrails that will prevent its AI tools from generating the type of content that does not belong on YouTube.

YouTube works to continuously improve protections, and within YouTube, dedicated teams like the intelligence desk are specifically focused on adversarial testing and threat detection to ensure YouTube’s systems meet new challenges as they emerge. Content generated by YouTube’s AI tools includes a SynthID watermark; SynthID is a tool for watermarking and identifying AI-generated images. Across the industry, Google, including YouTube, continues to help increase transparency around digital content. This includes its work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).

Deploying AI technology to power content moderation
YouTube has always used a combination of people and machine learning technologies to enforce its Community Guidelines. AI helps YouTube detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is continuously increasing both the speed and accuracy of YouTube’s content moderation systems.

Improved speed and accuracy of YouTube’s systems also allows it to reduce the amount of harmful content human reviewers are exposed to.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
Google’s Threat Analysis Group (TAG) published its Q1 2025 and Q2 2025 Quarterly Bulletins, which provide updates on coordinated influence operation campaigns terminated on Google’s platforms.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Analysis Group (TAG) and Trust & Safety Teams work to monitor malicious actors around the globe, disable their accounts, and remove the content that they post, including but not limited to coordinated information operations and other operations that may affect EEA Member States.
 
One of TAG’s missions is to understand and disrupt coordinated information operations threat actors. TAG’s work enables Google teams to make enforcement decisions backed by rigorous analysis. TAG’s investigations do not focus on making judgements about the content on Google platforms, but rather on examining technical signals, heuristics, and behavioural patterns to assess whether activity constitutes coordinated inauthentic behaviour.

TAG regularly publishes its TAG Bulletin, updated quarterly here, which provides updates around coordinated influence operation campaigns terminated on Google’s platforms, as well as additional periodic blog posts. TAG also engages with other platform Signatories to receive and, when strictly necessary for security purposes, share information related to threat actor activity – in compliance with applicable laws. To learn more, refer to SLI 16.1.1.

See Google’s disclosure policies about handling security vulnerabilities for developers and security professionals.

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

Google’s Threat Analysis Group (TAG) posts a quarterly Bulletin, which includes disclosure of coordinated influence operation campaigns terminated on Google’s products and services, as well as additional periodic blog posts. In the Bulletin, TAG often notes when findings are similar to or supported by those reported by other platforms.

YouTube
The publicly available H1 2025 TAG Bulletins (1 January 2025 - 30 June 2025) show that 34,177 YouTube channels, across 52 separate actions, were involved in coordinated influence operation campaigns. Industry partners supported 4 of those actions by providing leads. The TAG Bulletin and periodic blog posts are Google’s, including YouTube’s, primary public source of information on coordinated influence operations and TTP-related issues.

As reported in the Bulletin, some channels YouTube took action on were part of campaigns that uploaded content in some EEA languages, specifically: Romanian (80 channels), German (77 channels), Polish (68 channels), French (51 channels), Spanish (34 channels), Italian (16 channels), and Greek (4 channels). Certain campaigns may have uploaded content in multiple languages, or in countries outside the EEA region using EEA languages. Please note that there may be many languages for any one coordinated influence campaign and that the presence of content in an EEA Member State language does not necessarily entail a particular focus on that Member State. For more information, please see the TAG Bulletin.


The EU Code of Conduct on Disinformation Rapid Response System (RRS) is a collaborative initiative involving both non-platform and platform Signatories of the Code of Conduct, providing a means for cooperation and communication between them ahead of, during, and after election periods.

The RRS allows non-platform Signatories of the Code of Conduct to report time-sensitive content or accounts that they deem may present serious or systemic concerns to the integrity of the electoral process, and enables discussion with the platform Signatories in light of their respective policies.

The disclosures below also include reporting through the RRS of allegedly illegal content. Although the Article 16 Digital Services Act (DSA) mechanism should be used by non-platform Signatories to report allegedly illegal content, Google also reviews such notifications as part of the RRS, provided the non-platform Signatory has already used the Article 16 DSA mechanism to submit them and shares the appropriate notification reference with Google through the RRS.

Search
  • Germany - No notifications were received through RRS.
  • Poland - One notification was received through RRS. 252 URLs were flagged as allegedly illegal content, of which 213 URLs were removed. 
  • Portugal - No notifications were received through RRS.
  • Romania - No notifications were received through RRS.

YouTube
  • Germany - Two notifications were received through RRS. Two videos were flagged of which one was removed and one was found to be non-violative of policies.
  • Poland - Two notifications were received through RRS. One video was flagged and found to be non-violative of policies. One channel was flagged and found to be non-violative of policies.
  • Portugal - No notifications were received through RRS.
  • Romania - No notifications were received through RRS.

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Analysis Group (TAG) and Trust & Safety Teams work to monitor malicious actors around the globe, disable their accounts, and remove the content that they posted, including but not limited to coordinated information operations and other operations that may affect EU Member States. 

Refer to the TAG Bulletin articles that cover the reporting period to learn more about the number of YouTube channels terminated as part of TAG’s investigation into coordinated influence operations linked to Russia, Poland, and other countries around the world. 

The most recent examples of specific tactics, techniques, and procedures (TTPs) used to lure victims, as well as how Google collaborates and shares information, can be found in Google’s TAG Blog.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

But, see QRE 17.1.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube takes its responsibility seriously, outlining clear policies used to moderate content on the platform and providing tools that users can leverage to improve their media literacy and better evaluate which content and sources to trust.

Information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about the content they are viewing. For example, topics that are more prone to misinformation may have information panels that show basic background info, sourced from independent, third-party partners, to give more context on the topic. If a user wants to learn more, the panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. 

During election periods, text-based information panels about a candidate, how to vote, and election results may also be displayed to users.

Further EEA Member State coverage can be found in SLI 17.1.1.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

(1) Impressions of information panels (excluding fact-check panels, crisis resource panel, non-COVID medical panels) in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State.

(2) Impressions on labels indicating altered or synthetic content.

Note: YouTube relies on a number of systems to calculate this metric and makes a best effort to be as accurate as possible. Since the last report, YouTube has moved to reporting this metric via sampling, which estimates the number of impressions from a random subset of the data.
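As an illustration only (not YouTube's actual pipeline, which is not publicly documented), the sampling approach described in the note can be sketched as a standard inverse-probability estimate: count events in a uniform random sample and scale by the inverse of the sampling rate. All numbers below are hypothetical.

```python
import random

def estimate_impressions(events: int, sample_rate: float, seed: int = 0) -> float:
    """Estimate a total count from a uniform random sample.

    Each event is independently kept with probability `sample_rate`;
    the total is then estimated as kept_count / sample_rate
    (a Horvitz-Thompson style inverse-probability estimate).
    """
    rng = random.Random(seed)
    kept = sum(1 for _ in range(events) if rng.random() < sample_rate)
    return kept / sample_rate

# With 1,000,000 simulated impression events sampled at 1%, the
# estimate lands close to the true total (relative error ~1%).
est = estimate_impressions(1_000_000, 0.01)
```

The trade-off is the usual one for sampled metrics: lower sampling rates reduce processing cost but increase the variance of the estimate, which matters most for small countries with few impressions.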

Country Impressions of information panels Impressions on labels indicating altered or synthetic content
Austria 36,950,757 35,908,138
Belgium 167,407,873 32,192,311
Bulgaria 49,748,085 22,197,544
Croatia 54,222,976 13,074,600
Cyprus 4,183,263 6,346,362
Czech Republic 157,675,234 43,550,449
Denmark 22,016,705 31,720,890
Estonia 16,418,581 5,646,988
Finland 15,279,046 16,681,096
France 1,000,634,704 212,319,334
Germany 2,552,766,596 413,944,130
Greece 25,349,565 36,600,994
Hungary 51,006,178 17,389,712
Ireland 72,559,534 27,200,214
Italy 758,249,496 255,118,514
Latvia 50,972,400 11,553,365
Lithuania 47,908,078 12,661,456
Luxembourg 2,630,439 2,743,446
Malta 2,356,838 2,594,124
Netherlands 458,307,918 84,857,904
Poland 454,115,580 159,350,791
Portugal 28,842,733 45,132,552
Romania 89,583,459 46,238,625
Slovakia 27,063,094 11,456,529
Slovenia 16,569,288 6,249,370
Spain 451,036,417 277,140,219
Sweden 121,980,070 41,023,290
Iceland 1,058,138 1,515,108
Liechtenstein 210,543 217,258
Norway 21,105,606 20,788,623
Total EU 6,735,834,907 1,870,892,947
Total EEA 6,758,209,194 1,893,413,936

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

Grants
In H1 2025 (1 January 2025 to 30 June 2025), Google supported a number of organisations that seek to help build a safer online world. This includes:

  • A $103,220 grant to GLOBSEC, supporting a platform that combines prebunking techniques, AI content recognition, and personalised learning to educate users on civic topics.

  • A $102,640 grant to the nonprofit organisation Social Incubator for an AI-powered civic education platform that offers a suite of tools, including interactive chatbots, personalised learning paths, gamified content, and comprehensive digital literacy training.

  • A $102,640 grant to Parlons Démocratie, which helps teachers deliver better civic education through a professionally designed massive open online course (MOOC) combined with an AI-powered chatbot that provides personalised, real-time support.

  • A $103,060 grant to CyberPeace Institute to support European civil society organisations (CSO) to combat cyber attacks and disinformation by using AI for actionable insights and collective intelligence.

Search
To raise awareness of its features and build literacy across society, Google Search works with information literacy experts to help design tools in a way that allows users to feel confident and in control of the information they consume and the choices they make. 

In addition, Google, in partnership with Public Libraries 2030, launched Super Searchers in 2022. The ongoing program has trained thousands of library staff in community and school libraries in the EU to increase the search and information literacy skills of tens of thousands of library patrons. 

YouTube
YouTube remains committed to supporting efforts that deepen users’ collective understanding of misinformation. To empower users to think critically and use YouTube’s products safely and responsibly, YouTube invests in media literacy campaigns to improve users’ experiences on YouTube. In 2022, YouTube launched ‘Hit Pause’, a global media literacy campaign that is live in all EEA Member States and in all official EU languages, and has run in more than 40 additional countries around the world.

The program seeks to teach viewers critical media literacy skills through engaging, educational public service announcements (PSAs) shown in the YouTube home feed, in pre-roll ads, and on a dedicated YouTube channel. That channel hosts videos from the YouTube Trust & Safety team explaining how YouTube protects the YouTube community from misinformation and other harmful content, as well as additional campaign content that helps members of the YouTube community build critical thinking skills around identifying the manipulation tactics used to spread misinformation, from emotional language to cherry-picking information.

EEA Member State coverage of 'Hit Pause' media literacy impressions can be found in SLI 17.2.1.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

Media Literacy campaign impressions in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State.

Note: Due to an operational issue, media literacy campaign impressions were undercounted for Malta in the H2 2024 Report. The corrected number for SLI 17.2.1 for Malta in H2 2024 is 362,838. The error has been corrected starting with this H1 2025 Report.

Country Impressions from YouTube's media literacy campaigns
Austria 4,650,619
Belgium 5,581,901
Bulgaria 4,856,246
Croatia 2,924,101
Cyprus 633,498
Czech Republic 10,183,474
Denmark 5,266,332
Estonia 514,536
Finland 3,755,871
France 40,742,901
Germany 44,797,416
Greece 10,489,944
Hungary 7,398,316
Ireland 3,252,276
Italy 36,945,233
Latvia 881,253
Lithuania 2,100,449
Luxembourg 318,473
Malta 490,705
Netherlands 11,133,841
Poland 38,145,203
Portugal 8,590,420
Romania 11,863,927
Slovakia 5,141,025
Slovenia 1,454,433
Spain 23,225,987
Sweden 7,026,252
Iceland 250,719
Liechtenstein 27,918
Norway 2,333,149
Total EU 292,364,632
Total EEA 294,976,418

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube partners with media literacy experts and researchers to identify unique and engaging ways to build up the YouTube Community’s media literacy. For example, to inform the ‘Hit Pause’ global campaign, YouTube partnered with the National Association for Media Literacy Education (NAMLE), a U.S.-based organisation, to identify which competency areas the campaign should focus on. 

For additional information about YouTube’s ‘Hit Pause’ campaign, please refer to QRE 17.2.1.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

See response to QRE 14.1.1 to see how YouTube’s Community Guidelines map to the TTPs. These policies seek to, among other things, limit the spread of misleading or deceptive content that poses a serious risk of egregious harm. 

Community Guidelines Enforcement
After a creator’s first Community Guidelines violation, they will typically get a warning with no penalty to their channel. They will have the chance to take a policy training to allow the warning to expire after 90 days. Creators will also get the chance to receive a warning in another policy category. If the same policy is violated within that 90-day window, the creator’s channel will be given a strike.

If the creator receives three strikes in the same 90-day period, their channel may be removed from YouTube. In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. YouTube may also remove content for reasons other than Community Guidelines violations, such as a first-party privacy complaint or a court order. In these cases, creators will not be issued a strike.
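The escalation described above can be modelled, purely as an illustrative sketch and not as YouTube's actual implementation, as a small state machine: first violation yields a warning, subsequent violations within a rolling 90-day window yield strikes, and a third active strike may lead to channel removal. The `Channel` class and its behaviour are assumptions for illustration; nuances such as warning expiry after policy training are simplified away.

```python
from datetime import date, timedelta

# Hypothetical model of the escalation policy described above.
WINDOW = timedelta(days=90)  # strikes expire after 90 days

class Channel:
    def __init__(self):
        self.warned = False   # has the one-time warning been issued?
        self.strikes = []     # dates of currently active strikes
        self.removed = False

    def record_violation(self, when: date) -> str:
        # Drop strikes that have aged out of the 90-day window.
        self.strikes = [d for d in self.strikes if when - d < WINDOW]
        if not self.warned:
            self.warned = True
            return "warning"
        self.strikes.append(when)
        if len(self.strikes) >= 3:
            self.removed = True
            return "channel removed"
        return f"strike {len(self.strikes)}"
```

For example, a warning followed by three violations of the same policy within 90 days would walk the channel through "warning", "strike 1", "strike 2", and "channel removed", whereas strikes spaced more than 90 days apart would expire before accumulating.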

If a creator’s channel gets a strike, they will receive an email, and can have notifications sent to them through their mobile and desktop notifications. The emails and notifications received by the creator explain the action taken on their content and which of YouTube’s policies the content violated. More detailed guidelines of YouTube’s processes and policies on strikes can be found here.

YouTube also reserves the right to restrict a creator's ability to create content on YouTube at its discretion. A channel may be turned off or restricted from using any YouTube features. If this happens, users are prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on the YouTube channel. A violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all existing YouTube channels of the user, any new channels created or acquired, and channels in which the user is repeatedly or prominently featured.

Refer to SLI 18.2.1 on YouTube’s enforcement at an EEA Member State level.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

(1) Number of videos removed for violations of YouTube’s Misinformation Policies in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State;

(2) Views threshold on videos removed for violations of YouTube’s Misinformation Policies in H1 2025 broken down by EEA Member State.

For SLI 18.2.1 (2): Starting March 2025, YouTube updated the terminology used for Shorts view counts. This terminology change does not apply to YouTube’s transparency reporting view-related metrics, which remain the same in name and methodology. Learn more here.

Country Number of videos removed Number of videos removed with 0 views Number of videos removed with 1-10 views Number of videos removed with 11-100 views Number of videos removed with 101-1,000 views Number of videos removed with 1,001- 10,000 views Number of videos removed with >10,000 views
Austria 62 6 26 16 12 1 1
Belgium 49 3 29 8 7 1 1
Bulgaria 90 28 23 14 15 7 3
Croatia 18 2 6 1 3 4 2
Cyprus 39 6 6 9 12 5 1
Czech Republic 70 17 27 7 10 5 4
Denmark 53 4 20 12 12 3 2
Estonia 30 2 9 4 10 4 1
Finland 41 8 7 17 5 3 1
France 528 70 209 124 74 29 22
Germany 902 108 339 194 138 74 49
Greece 76 4 14 15 17 21 5
Hungary 37 3 20 9 3 1 1
Ireland 136 19 52 26 23 13 3
Italy 311 30 119 67 54 24 17
Latvia 44 4 10 10 11 6 3
Lithuania 30 6 7 10 4 2 1
Luxembourg 3 0 2 1 0 0 0
Malta 6 1 3 0 1 1 0
Netherlands 320 46 134 72 43 17 8
Poland 155 26 47 31 24 21 6
Portugal 65 11 26 12 9 6 1
Romania 95 19 31 18 21 4 2
Slovakia 13 1 5 4 1 1 1
Slovenia 48 10 9 6 13 9 1
Spain 747 89 215 141 130 128 44
Sweden 92 8 34 15 12 17 6
Iceland 3 0 2 0 0 1 0
Liechtenstein 0 0 0 0 0 0 0
Norway 47 4 15 11 13 2 2
Total EU 4,060 531 1,429 843 664 407 186
Total EEA 4,110 535 1,446 854 677 410 188

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google, including YouTube, works with stakeholders across the technology sector, government, and civil society to set good policies, remain abreast of emerging challenges, and establish, share, and learn from industry best practices and research. 

Described below are examples that demonstrate Google’s, including YouTube, commitment to these actions:

Jigsaw-led Research
Jigsaw is a unit within Google that explores threats to open societies and builds technology that inspires scalable solutions. Jigsaw began conducting research on 'information interventions' more than 10 years ago. Jigsaw has since contributed research and technology on ways to make people more resilient to disinformation. Their research efforts are based on behavioural science and ethnographic studies that examine when people might be vulnerable to specific messages and how to provide helpful information when people need it most. These interventions provide a methodology for proactively addressing a range of threats to people online, as a complement to approaches that focus on removing or downranking material online.

An example of a notable research effort by Jigsaw run on and with YouTube is:
  • Accuracy Prompts (APs): APs remind users to think about accuracy. The prompts work by serving users bite-sized digital literacy tips at a moment when it might matter. Lab studies conducted across 16 countries with over 30,000 participants suggest that APs increase engagement with accurate information and decrease engagement with less accurate information.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

On YouTube, recommendations help users discover more of the videos they love, whether it is a great new recipe to try or finding their next favourite song. 

Users can find recommendations across the platform, including the homepage, the ‘Up Next’ panel, and the Shorts tab:

  • Homepage: A user’s homepage is what they typically see when they first open YouTube.
  • Up Next: The Up Next panel appears when a user is watching a video. It suggests additional content based on what they are currently watching and personalised signals (details below).
  • Shorts: Shorts are ranked based on their performance and relevance to that individual viewer.

YouTube understands that individuals have unique viewing habits and uses signals to recommend content. YouTube’s system compares a user’s viewing habits with those of similar users, and uses that information to suggest other content.

YouTube’s recommendation system is constantly evolving, learning every day from over 80 billion pieces of information or 'signals,' the primary ones being:
  • Watch history: YouTube’s system uses the videos a user watches to give better recommendations, remember where a user left off, and more.
  • Search history: YouTube’s system uses what a user searches for on YouTube to influence future recommendations.
  • Channel subscriptions: YouTube’s system uses information about the channels a user subscribes to in order to recommend videos they may like.
  • Likes: YouTube’s system uses a user’s likes information to try to predict the likelihood that they will be interested in similar videos in the future.
  • Dislikes: YouTube’s system uses videos a user dislikes to inform what to avoid recommending in the future.
  • 'Not interested' feedback selections: YouTube’s system uses videos a user marks as 'Not interested' to inform what to avoid recommending in the future.
  • 'Don’t recommend channel' feedback selections: YouTube’s system uses 'Don’t recommend channel' feedback selections as a signal that the channel content likely is not something a user enjoyed watching.
  • Satisfaction surveys: YouTube’s system uses user surveys that ask a user to rate videos that they watched, which helps the system understand satisfaction, not just watch time.

Different YouTube features rely on certain recommendation signals more than others. For example, YouTube uses the video a user is currently watching as an important signal when suggesting a video to play next. The influence of each signal on recommendations can vary based on many variables, including but not limited to the user’s device type and the type of content they are watching. This is why the same user will see different recommendations on a mobile phone vs. a television. 

Recommendations
Recommendations connect viewers to high-quality information and complement the work done by the Community Guidelines that define what is and is not allowed on YouTube. YouTube raises up videos in search and recommendations to viewers on certain topics where quality is key. Human evaluators, trained using publicly available guidelines, assess the quality of information from a variety of channels and videos. 

These human evaluations are used to train YouTube’s system to model their decisions, and YouTube then scales those assessments to all videos across the platform. Learn more about how YouTube elevates high-quality information on the How YouTube Works website and the YouTube Blog.

Controls to personalise recommendations 
YouTube has built controls that help users decide how much data they want to provide. Users can view, delete, or turn on or off their YouTube watch and search history whenever they want. And, if users do not want to see recommendations at all on the homepage or on the Shorts tab, they can turn off and clear their YouTube watch history. For users with YouTube watch history off and no significant prior watch history, the homepage will show the search bar and the Guide menu, with no feed of recommended videos.

Users can also tell YouTube when it is recommending something a user is not interested in. For example, buttons on the homepage and in the 'Up next' section allow users to filter and choose recommendations by specific topics. Users can also click on 'Not interested' and/or 'Don’t recommend channel' to tell YouTube that a video or channel is not what a user wanted to see at that time, and YouTube will consider that when generating recommendations for that viewer in the future.

Additional information about how a user can manage their recommendation settings is outlined here in YouTube’s Help Centre.

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

YouTube is sharing the percentage of Daily Active Users that are signed in to the platform; the remainder are signed out. Signed-in users are able to amend their settings in their YouTube or Google Accounts.

The average percentage of signed-in Daily Active Users over H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State.

Country Percentage of daily active users that are signed in
Austria 73%
Belgium 76%
Bulgaria 76%
Croatia 79%
Cyprus 79%
Czech Republic 80%
Denmark 76%
Estonia 77%
Finland 76%
France 76%
Germany 73%
Greece 77%
Hungary 77%
Ireland 72%
Italy 80%
Latvia 78%
Lithuania 78%
Luxembourg 71%
Malta 77%
Netherlands 77%
Poland 78%
Portugal 80%
Romania 79%
Slovakia 76%
Slovenia 76%
Spain 80%
Sweden 71%
Iceland 72%
Liechtenstein 62%
Norway 68%
Total EU 77%
Total EEA 77%

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

But, see QRE 22.7.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 22.7

Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.

QRE 22.7.1

Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube highlights information from high-quality, third-party sources using information panels. As users navigate YouTube, they might see a variety of different information panels. These panels provide additional context, with each designed to help users make their own decisions about the content they find. 

These information panels will show regardless of what opinions or perspectives are expressed in a video. If users want to learn more, most panels also link to the third-party partner’s website.

Information panels on YouTube include, but are not limited to:
  • Panels on topics prone to misinformation: Topics that are prone to misinformation, such as the moon landing, may display an information panel at the top of search results or under a video. These information panels show basic background information, sourced from independent, third-party partners, to give more context on a topic. The panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. More details found here.
  • Election information panels: The election-related features are only available in select countries/regions during election cycles. Users may see candidate information panels, voting information panels, election integrity information panels, or election results information panels. More details found here.
  • Health-related information panels: Health-related topics, such as cancer treatment misinformation, may have a health information panel in search results. These panels show information such as symptoms, prevention, and treatment options. More details found here.
  • Crisis resource panels: These panels let users connect with live support, 24/7, from recognised service partners. The panels may surface on the Watch page, when a user watches videos on topics related to suicide or self-harm, or in search results, when a user searches for topics related to certain health crises or emotional distress. More details found here.

Additional data points and EEA Member State coverage are provided in SLI 22.7.1.

SLI 22.7.1

Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).

(1) Impressions of information panels (excluding fact-check panels, crisis resource panel, non-COVID medical panels) in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State.

(2) Impressions on labels indicating altered or synthetic content.

Note: Due to a technical issue, some info panel impressions were undercounted. YouTube relies on a number of systems to calculate this metric and makes a best effort to be as accurate as possible. Since the last report, YouTube has moved to reporting the metric via sampling, which derives the metric from a subset of the data, using random sampling to produce a better estimate of the number of impressions.
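To illustrate the sampling approach described in the note above, the sketch below uses hypothetical numbers (not YouTube’s actual pipeline, sampling rate, or data): each impression record is kept with a fixed probability, and the total is estimated by scaling the sample count by the inverse of the sampling fraction.

```python
import random

def estimate_total(population, sampling_fraction, seed=42):
    """Estimate a population total from a uniform random sample.

    Each record is included with probability `sampling_fraction`;
    the sample count is then scaled by 1 / sampling_fraction
    (a Horvitz-Thompson style estimator).
    """
    rng = random.Random(seed)
    sample_count = sum(1 for _ in population if rng.random() < sampling_fraction)
    return sample_count / sampling_fraction

# Hypothetical stream of 1,000,000 impression events, sampled at 1%.
estimate = estimate_total(range(1_000_000), sampling_fraction=0.01)
```

With a 1% uniform sample of a million events, the estimate typically lands within a few percent of the true total, which is why sampling can trade a small amount of precision for much faster computation.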

Country Impressions of information panels Impressions on labels indicating altered or synthetic content
Austria 36,950,757 35,908,138
Belgium 167,407,873 32,192,311
Bulgaria 49,748,085 22,197,544
Croatia 54,222,976 13,074,600
Cyprus 4,183,263 6,346,362
Czech Republic 157,675,234 43,550,449
Denmark 22,016,705 31,720,890
Estonia 16,418,581 5,646,988
Finland 15,279,046 16,681,096
France 1,000,634,704 212,319,334
Germany 2,552,766,596 413,944,130
Greece 25,349,565 36,600,994
Hungary 51,006,178 17,389,712
Ireland 72,559,534 27,200,214
Italy 758,249,496 255,118,514
Latvia 50,972,400 11,553,365
Lithuania 47,908,078 12,661,456
Luxembourg 2,630,439 2,743,446
Malta 2,356,838 2,594,124
Netherlands 458,307,918 84,857,904
Poland 454,115,580 159,350,791
Portugal 28,842,733 45,132,552
Romania 89,583,459 46,238,625
Slovakia 27,063,094 11,456,529
Slovenia 16,569,288 6,249,370
Spain 451,036,417 277,140,219
Sweden 121,980,070 41,023,290
Iceland 1,058,138 1,515,108
Liechtenstein 210,543 217,258
Norway 21,105,606 20,788,623
Total EU 6,735,834,907 1,870,892,947
Total EEA 6,758,209,194 1,893,413,936

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube's approach to combating misinformation involves removing content that violates YouTube’s policies, and surfacing high-quality information in ranking and recommendations. YouTube applies these principles globally, including across the EU.

Implementing and enforcing YouTube policies
Each of YouTube’s policies is carefully thought through so that it is consistent, well informed, and can be applied to content from around the world. Policies are developed in consultation with a wide range of external experts, as well as YouTube Creators. New policies go through testing before they go live to ensure YouTube’s global team of content reviewers can apply them accurately and consistently.

Flagging inappropriate or harmful content on YouTube
YouTube offers users the ability to report or flag content that they believe violates YouTube’s Community Guidelines or other policies. Users can report content using YouTube’s flagging feature, which is available to signed-in users in all EU Member States via computer (desktop or laptop), mobile devices, and other surfaces. Details on how to report different types of content using YouTube’s flagging feature are outlined in YouTube’s Help Centre.

In addition to user flagging, YouTube uses machine learning technology to flag videos for review. YouTube has developed machine learning that detects content that may violate YouTube’s policies and sends it for human review. In some cases, that same machine learning automatically takes action when there is high confidence that content is violative, based on information about similar or related content that has previously been removed.

YouTube relies on this combination of people and machine learning technology to flag inappropriate content and enforce YouTube’s Community Guidelines.

Information about YouTube’s content moderation efforts across the official EU Member State languages can be found in the Human Resources involved in Content Moderation section of the VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Reporting illegal content
While YouTube’s Community Guidelines are policies that apply globally, YouTube is available in more than 100 different countries; therefore, processes are in place to review and appropriately act on requests from users, courts, and governments about content that violates local laws. Users can report illegal content using webforms dedicated to specific legal issues such as trademark, copyright, counterfeit and defamation. Webforms may also be accessed via the flagging feature after selecting Legal Issue as the report reason. To expedite the review, users should report content that violates the legal policies outlined here in YouTube’s Help Centre.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Content can be flagged by YouTube users, YouTube’s machine learning technology, and human content moderators. All users agree to not 'misuse any reporting, flagging, complaint, dispute, or appeals process, including by making groundless, vexatious, or frivolous submissions' in YouTube’s Terms of Service.

Additionally, YouTube ensures integrity of its systems through: 
  • Having a dedicated team to identify and mitigate the impact of sophisticated bad actors on YouTube at scale, while protecting the broader community;
  • Partnering with Google’s Threat Analysis Group (TAG) and Trust & Safety Teams to monitor malicious actors around the globe, disable their accounts, and remove the content that they post (See QRE 16.1.1 and QRE 16.2.1);
  • Legal protections, such as those found in the Digital Services Act;
  • Educating users about Community Guidelines violations through its guided policy experience;
  • Providing clear communication on appeals processes and notifications, and regular policy updates on its Help Centre; and, 
  • Investing in automated systems to provide efficient detection of content to be evaluated by human reviewers.

Where appropriate, YouTube makes it clear to users that it has taken action on their content and provides them the opportunity to appeal that decision.

For more detailed information about YouTube’s complaint handling systems (i.e. appeals), please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

As noted in QRE 18.2.1, if a creator’s channel gets a strike, they will receive an email, and can have notifications sent to them through their mobile and desktop notifications. The emails and notifications received by the creator explain what content was removed or age restricted, which policies the content violated, how it affects the user’s channel, and what the creator can do next. More detailed guidance on YouTube’s strike processes and policies is available here.

Sometimes a single case of severe abuse will result in channel termination without warning.

The below appeals processes are available in all Member States, which are outlined in the YouTube Help Centre: 

After a creator submits an appeal
After a creator submits an appeal, they will get an email from YouTube letting them know the appeal outcome. One of the following will happen:

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, YouTube will reinstate it and remove the strike from their channel. If a user appeals a warning and the appeal is granted, the next offence will be a warning.

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, but is not appropriate for all audiences, YouTube will apply an age-restriction. If it is a video, it will not be visible to users who are signed out, are under 18 years of age, or have Restricted Mode turned on. If it is a custom thumbnail, it will be removed.

  • If YouTube finds that a user’s content was in violation of YouTube’s Community Guidelines, the strike will stay and the video will remain down from the site. There is no additional penalty for appeals that are rejected.

For a more granular Member State level breakdown, refer to SLI 24.1.1.

For more information about YouTube’s median time needed to action a complaint, please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

(1) Appeals following video removal for violations of YouTube’s Misinformation Policies in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State;

(2) Video reinstatements following a successful appeal against content removals for violations of YouTube’s Misinformation Policies in H1 2025, broken down by EEA Member State.

Country Number of videos removed that were subsequently appealed Number of videos removed that were then reinstated following a creator's appeal
Austria 21 3
Belgium 13 3
Bulgaria 14 0
Croatia 4 1
Cyprus 4 0
Czech Republic 12 1
Denmark 9 2
Estonia 3 2
Finland 12 1
France 95 14
Germany 177 29
Greece 12 1
Hungary 12 2
Ireland 39 8
Italy 72 10
Latvia 5 0
Lithuania 6 2
Luxembourg 0 0
Malta 2 1
Netherlands 58 9
Poland 45 5
Portugal 16 5
Romania 20 1
Slovakia 3 1
Slovenia 2 0
Spain 175 19
Sweden 16 2
Iceland 0 0
Liechtenstein 0 0
Norway 9 2
Total EU 847 122
Total EEA 856 124

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

Google Trends
Google Search and YouTube provide publicly available data via Google Trends, which offers access to a largely unfiltered sample of actual search requests made to Google Search and YouTube’s search function. The data is anonymised (no one is personally identified), categorised (determined by the topic of a search query), and aggregated (grouped together). This allows Google Trends to display interest in a particular topic from around the globe or down to city-level geography. See the Trends Help Centre for details.

Google Researcher Program
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics who meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact. For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API for eligible academic researchers from around the world, who are affiliated with an accredited, higher-learning institution. Learn more about the data available in the YouTube API reference.

Transparency into paid content on YouTube
YouTube provides users a bespoke front-end search page to access publicly available data containing organic content with paid product placements, sponsorships, and endorsements as disclosed by creators. This enables users to understand that creators may receive goods or services in exchange for promotion. This search page complements YouTube’s existing process of displaying a disclosure message when creators disclose to YouTube that their content contains paid promotions. Learn more about adding paid product placements, sponsorships, and endorsements here.

Users can also query the same set of results using the YouTube Data API. Use is subject to YouTube’s API Terms of Service.
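As a minimal sketch of composing such a Data API query, the snippet below only builds the request URL and sends nothing over the network. The API key is a placeholder, and the `videoPaidProductPlacement` filter parameter is an assumption drawn from the public `search.list` reference and should be verified against the current API documentation.

```python
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3/search"

def paid_placement_query(query, api_key, max_results=25):
    """Compose a search.list request URL filtered to videos that
    creators have disclosed as containing paid product placements.

    `videoPaidProductPlacement` is assumed to be the relevant filter;
    `api_key` is a placeholder credential.
    """
    params = {
        "part": "snippet",
        "type": "video",  # the placement filter applies to videos only
        "q": query,
        "videoPaidProductPlacement": "any",
        "maxResults": max_results,
        "key": api_key,
    }
    return f"{API_BASE}?{urlencode(params)}"

url = paid_placement_query("tech reviews", api_key="YOUR_API_KEY")
```

Executing the resulting URL requires valid credentials and is subject to YouTube’s API Terms of Service.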

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Google Trends
The information provided via Google Trends is a sample of all Google Search and YouTube search activity. The two samples of Google Trends data that can be accessed are:
  • Real-time data - a sample covering the last seven days;
  • Non real-time data - a separate sample from real-time data that goes as far back as 2004 and up to 72 hours before one’s search.

Only a sample of Google Search and YouTube searches is used in Google Trends (a publicly available research tool), because Google, including YouTube, handles billions of searches per day, and the entire data set would be too large to process quickly. By sampling data, Google can look at a dataset representative of all searches on Google, including YouTube, while finding insights that can be processed within minutes of an event happening in the real world. See the Trends Help Centre for details.

Google Researcher Program
Approved researchers will receive permissions and access to public data for Search and YouTube in the following ways: 
  • Search: Access to an API for limited scraping with a budget for quota;
  • YouTube: Permission for scraping limited to metadata.

For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. The program allows eligible academic researchers around the world to independently analyse the data they collect, including generating new/derived metrics for their research. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data.
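As a hedged sketch of how the metadata fields listed above map onto the Data API, the snippet below composes a `videos.list` request URL asking for the `snippet` and `statistics` resource parts (which carry title, description, channel metadata, and view/like/comment counts). The video ID and API key are placeholders, and no request is actually sent.

```python
from urllib.parse import urlencode

def videos_list_url(video_ids, api_key):
    """Compose a videos.list request URL for public video metadata.

    part=snippet returns title, description, and channel metadata;
    part=statistics returns viewCount, likeCount, and commentCount.
    `api_key` is a placeholder credential.
    """
    params = {
        "part": "snippet,statistics",
        "id": ",".join(video_ids),  # up to 50 IDs per request
        "key": api_key,
    }
    return "https://www.googleapis.com/youtube/v3/videos?" + urlencode(params)

url = videos_list_url(["VIDEO_ID"], api_key="YOUR_API_KEY")
```

In practice, access under the YouTube Researcher Program is granted after approval, and the same endpoint can be called with an official client library rather than a hand-built URL.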

Transparency into paid content on YouTube
The information provided via the bespoke front end search page allows users to view videos with active paid product placements, sponsorships, and endorsements that have been declared on YouTube.
  • Paid product placements
    • Videos about a product or service because there is a connection between the creator and the maker of the product or service;
    • Videos created for a company or business in exchange for compensation or free of charge products/services; 
    • Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free of charge products to make the video.
  • Endorsements - Videos created for an advertiser or marketer that contains a message that reflects the opinions, beliefs, or experiences of the creator.
  • Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.

Definitions can be found on the YouTube Help Centre.

Additional data points are provided in SLI 26.1.1 and 26.2.1.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Number of users of the Google Trends online tool to research information relating to YouTube in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State (see table below).

Country Number of Google Trends users researching YouTube
Austria 796
Belgium 1,183
Bulgaria 1,767
Croatia 487
Cyprus 392
Czech Republic 1,227
Denmark 722
Estonia 241
Finland 549
France 5,270
Germany 9,618
Greece 2,548
Hungary 2,071
Ireland 1,134
Italy 7,908
Latvia 426
Lithuania 550
Luxembourg 71
Malta 100
Netherlands 2,326
Poland 3,767
Portugal 1,748
Romania 2,212
Slovakia 601
Slovenia 279
Spain 8,124
Sweden 1,352
Iceland 11
Liechtenstein 7
Norway 881
Total EU 57,469
Total EEA 58,368

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share at this time.

Google Researcher Program
The Google Researcher Program, which includes YouTube, has a 3-step application process:

  1. Review and confirm the applicant’s eligibility;
  2. Submit an application, which requires a Google account;
  3. If approved, the applicant gains permission to access public data relevant to their research.

Once an application has been submitted, accepted researchers will be notified via email. 

YouTube Researcher Program
The YouTube Researcher Program has a 3-step application process: 

  1. YouTube verifies the applicant is an academic researcher affiliated with an accredited, higher-learning institution;
  2. The Researcher creates an API project in the Google Cloud Console and enables the relevant YouTube APIs. They can learn more by visiting the enabled APIs page;
  3. The Researcher applies with their institutional email (e.g. with a .edu suffix), includes as much detail as possible, and confirms that all of their information is accurate.

Once an application has been submitted, YouTube’s operations team will conduct a review and let applicants know if they are accepted into the program. 

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

(1-4) Applications received, approved, rejected or under review for the YouTube Researcher Program in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member States (* indicates applications that were rejected on the basis of incorrect/incomplete application);

(5) Total number of unique researchers accessing the YouTube Researcher Program API in H1 2025, broken down by EEA Member States;

(6) Median application resolution time in days in H1 2025, reported at the EU and EEA level.

Please note the following:
  • Cells with '0' under applications received signify that no applications were submitted by researchers from that country; similarly, cells with '0' in the other columns signify that no applications were approved, rejected, or under review for that country.

  • Applications under review reflect those applications still being processed at the end of the reporting period. The outcomes of these applications will be included in the next reporting period.

  • Researchers accessing the Researcher Program API from 1 January 2025 to 30 June 2025 may have been approved before H1 2025. There can be more than one researcher per application.

  • Median Application Resolution time is the median number of days from application creation to application resolution. Applications may go back and forth between the applicant and API Ops Agents throughout the approval process. This metric does not reflect YouTube’s first response back to the applicant.

Country Applications Received Applications Approved Applications Rejected Applications under Review Number of unique researchers accessing the API Median application resolution time
Austria 1 0 1 0 0 -
Belgium 0 0 0 0 2 -
Bulgaria 0 0 0 0 0 -
Croatia 0 0 0 0 0 -
Cyprus 0 0 0 0 0 -
Czech Republic 0 0 0 0 1 -
Denmark 2 1 1 0 1 -
Estonia 0 0 0 0 0 -
Finland 1 1 0 0 1 -
France 1 0 1 0 3 -
Germany 7 5 2 0 22 -
Greece 1 0 1 0 0 -
Hungary 1 1 0 0 1 -
Ireland 0 0 0 0 0 -
Italy 1 1 0 0 6 -
Latvia 0 0 0 0 0 -
Lithuania 0 0 0 0 0 -
Luxembourg 0 0 0 0 0 -
Malta 0 0 0 0 0 -
Netherlands 4 2 2 0 1 -
Poland 1 1 0 0 1 -
Portugal 1 1 0 0 0 -
Romania 0 0 0 0 1 -
Slovakia 0 0 0 0 0 -
Slovenia 0 0 0 0 0 -
Spain 9 6 3 0 8 -
Sweden 1 0 1 0 0 -
Iceland 0 0 0 0 0 -
Liechtenstein 0 0 0 0 0 -
Norway 0 0 0 0 0 -
Total EU 31 19 12 0 48 10.0 days
Total EEA 31 19 12 0 48 10.0 days

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

Google Trends
For Google Trends, users can report an issue by taking a screenshot of the malfunctioning area and submitting it via the Send Feedback option on the Google Trends page. Additionally, users can access the Trends Help Centre to troubleshoot any issues they may be experiencing.

Google Researcher Program
For the Google Researcher Program, the most up-to-date information is captured in the Program description on the Transparency Centre and on the Acceptable Use Policy page. Google Search has additional Help Centre support via its Search Researcher Result API guidelines.

YouTube Researcher Program
For the YouTube Researcher Program, support is available via email. Researchers can contact YouTube with questions, and to report technical issues or other suspected faults, via a unique email alias provided upon acceptance into the program. Questions are answered by YouTube’s Developer Support team and by other relevant internal parties as needed.

Google is not aware of any malfunctions during the reporting period that would have prevented access to these reporting systems.

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
  • In May 2025, Google hosted a research workshop with over 30 attendees in Tokyo, Japan, adjacent to the Conference on Human Factors in Computing Systems (CHI 2025).
  • In June 2025, Google announced the 3 areas of primary interest for this year's Google Academic Research Award (GARA). This cycle, the program will focus on Trust, Safety, Security, & Privacy Research.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

Google has a longstanding commitment to transparency and has led the way in transparency reporting of content removals and government requests for user data for more than a decade.
 
Tools such as the Lumen Database and Google Trends illustrate some of the ways Google supports not only researchers, but also journalists and others, in understanding more about Google and YouTube’s products, processes, and practices.
 
Please refer to QRE 26.1.1, QRE 26.1.2, and QRE 26.3.1 for further information about Google Trends.

Google
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics who meet predefined eligibility criteria) with access to limited metadata scraping of public data. This program aims to enhance the public’s understanding of Google’s services and their impact.

Google has teams that operate the Google Researcher Program. They manage the researcher application process and evaluate potential updates and developments for the Google Researcher Program. Additional information can be found on the Google Transparency Centre. Google Search has additional Help Centre support via their Search Researcher Result API guidelines.

Additionally, Google’s partnership with Lumen is an independent research project managed by the Berkman Klein Centre for Internet & Society at Harvard Law School. The Lumen database houses millions of content takedown requests that have been voluntarily shared by various companies, including Google. Its purpose is to facilitate academic and industry research concerning the availability of online content. As part of Google’s partnership with Lumen, information about the legal notices Google receives may be sent to the Lumen project for publication. Google informs users about its Lumen practices under the 'Transparency at our core' section of the Legal Removals Help Centre. Additional information on Lumen can be found here.

YouTube
The YouTube Researcher Program provides eligible academic researchers from around the world with scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data. (See YouTube API reference for more information).

YouTube has teams that operate the YouTube Researcher Program. They manage the researcher application process and provide technical support throughout the research project. They also evaluate potential updates and developments for the YouTube Researcher Program. Researchers can contact these teams to obtain support.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

See response to QRE 28.1.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstrably public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged from running such research.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share at this time.

Google continues to engage constructively with the Code of Conduct’s Permanent Task-force and with the European Digital Media Observatory (EDMO). As of the time of this report, no annual consultation has yet taken place, but Google stands ready to collaborate with EDMO to that end in 2025. 

Additionally, refer to QRE 26.1.1 to learn more about how Google, including YouTube, provides opportunities for researchers on its platforms.

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

In 2021, Google committed €25M to help launch the European Media & Information Fund (EMIF), of which €22.5M has been provided to date. Overall, 121 projects related to information quality have now received grants across 28 countries (including 26 EEA Member States).

The EMIF was established by the European University Institute and the Calouste Gulbenkian Foundation. The European Digital Media Observatory (EDMO) agreed to play a scientific advisory role in the evaluation and selection of projects that will receive the fund’s support, but does not receive Google funding. Google has no role in the assessment of applications. 

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in H1 2025 (1 January 2025 to 30 June 2025), Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training – with a strong focus on helping people navigate AI-generated content.

Mitigations in place

Across Google, various teams support democratic processes by connecting people to election information, like practical tips on how to register to vote, or by providing high-quality information about candidates. In 2025, a number of key elections took place around the world and across the EU in particular. In H1 2025, voters cast their votes in Germany, Poland, Portugal and Romania. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training. Across its efforts, Google also had an increased focus on the role of artificial intelligence (AI) and the part it can play in the disinformation landscape — while also leveraging AI models to augment Google’s abuse-fighting efforts.

Safeguarding Google platforms and disrupting the spread of disinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat disinformation. 
  • Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
  • Working with the wider ecosystem: Since Google’s inaugural commitment of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 121 projects have been funded across 28 countries so far.

Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include: 
  • Ads disclosures: Google expanded its Political Content Policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content.
  • Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. YouTube requires creators to disclose when they have created realistic altered or synthetic content, and displays a label indicating when the content being watched is synthetic. For sensitive content, including election-related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
  • Additional context for users: 'About This Image' in Search helps people assess the credibility and context of images found online.
  • Industry collaboration: Google is a member of the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content. 

Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways Google makes it easy for people to find what they need, and which were deployed during elections that took place across the EU in 2025: 
  • High-quality information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
  • Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on Google’s platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in Google’s Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. Google also limits how advertisers can target election ads. Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025. 

Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services. 
  • Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
  • Tackling coordinated influence operations: Google’s Threat Intelligence Group helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share threat information and suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.

Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Conduct on Disinformation. 

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 50.1.1

N/A

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2

N/A

Rationale - 50.1.3

N/A

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.4.1

YouTube works quickly to remove content that violates its policies. These policies apply to all forms of content, including videos, livestreams and comments, and YouTube’s policies are enforced across languages and locales.

Description of intervention - 50.4.2

YouTube continues to assess, evaluate, and update its policies on a regular basis; the latest policy updates, including to the Community Guidelines, can be found here.

Indication of impact - 50.4.3

See Commitment 14 in the EU Code of Conduct Transparency Report for more details on this effort.

Specific Action applied - 50.4.4

YouTube creators are required to disclose when they upload a video that contains realistic altered or synthetic content, after which YouTube adds a transparency label so that viewers have this important context. 

Description of intervention - 50.4.5

See Commitment 15 in the EU Code of Conduct Transparency Report for details on how YouTube approaches responsible AI innovation, an approach applied to elections held in 2025, such as those in Germany, Poland, Portugal, and Romania.

Indication of impact - 50.4.6

See Commitment 17 in the EU Code of Conduct Transparency Report for more details on this effort.

Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.5.1

YouTube’s systems prioritise connecting viewers with high-quality information, including on events such as elections in the EU.

Description of intervention - 50.5.2

YouTube’s recommendation system prominently surfaces news from high-quality sources on the homepage, in search results and the 'Up Next' panel. YouTube’s systems do this across every country where YouTube operates.

YouTube’s Top News and Breaking News shelves surface at the top of search results, prominently featuring content from high-quality news sources, which may include information about EU elections.

Indication of impact - 50.5.3

See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 50.5.4

Election information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about election related content they are viewing.

Description of intervention - 50.5.5

Information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about the content they are viewing. During election periods, text-based information panels about a candidate, how to vote, and election results may also be displayed to users.

Indication of impact - 50.5.6

See Commitment 17 in the EU Code of Conduct Transparency Report for more details on this effort.

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.6.1

YouTube provides publicly available data via Google Trends. YouTube also established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 50.6.2

See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides publicly available data via Google Trends, and provides eligible academic researchers access to global video metadata, which may be applied to EU elections in 2025.

Indication of impact - 50.6.3

See Commitment 26 for metrics on these efforts.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War in Ukraine

Overview
The ongoing war in Ukraine has continued into 2025, and Google continues to help by providing cybersecurity and humanitarian assistance and by surfacing high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:

  1. Continued online services manipulation and coordinated influence operations;
  2. Advertising and monetisation linked to state-backed Russia and Ukraine disinformation;
  3. Threats to security and protection of digital infrastructure.


Israel-Gaza conflict

Overview
In response to the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure its platforms and partnerships are responsive to the current crisis, and counter the threat of disinformation. Google identified a few areas of focus for addressing the ongoing crisis:

  • Humanitarian and relief efforts;
  • Platforms and partnerships to protect Google’s services from coordinated influence operations, hate speech, and graphic and terrorist content.

Mitigations in place

War in Ukraine

The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.

1. Online services manipulation and malign influence operations
Google’s Threat Analysis Group (TAG) is helping Ukraine by monitoring the threat landscape in Eastern Europe and disrupting coordinated influence operations from Russian threat actors. 

2. Advertising and monetisation linked to Russia and Ukraine disinformation
In H1 2025 (1 January 2025 to 30 June 2025), Google continued to pause the majority of commercial activities in Russia – including ads serving in Russia via Google demand and third-party bidding, ads on Google’s properties and networks globally for all Russian-based advertisers, AdSense ads on state-funded media sites, and monetisation features for YouTube viewers in Russia. Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube are still operating in Russia. Google will continue to closely monitor developments.

3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats. 

TAG has been tracking threat actors, both before and during the war, and sharing their findings publicly and with law enforcement. TAG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, nonprofit organisations, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns. 

Google aims to apply the following approach when responding to future crisis situations: 
  • Elevate access to high-quality information across Google services;
  • Protect Google users from harmful disinformation;
  • Continue to monitor and disrupt cyber threats;
  • Explore ways to provide assistance to support the affected areas more broadly.

Future measures
Google will continue to monitor the situation and take additional action as needed.


Israel-Gaza conflict

Humanitarian and relief efforts
Google.org has provided more than $18 million to nonprofits providing relief to civilians affected in Israel and Gaza. This includes more than $11 million raised globally by Google employees with company match and $1 million in donated Search Ads to nonprofits so they can better connect with people in need and provide information to those looking to help. We also provided $6 million in Google.org grant funding, including $3 million provided to Natal, an apolitical nonprofit organisation focused on psychological treatment of victims of trauma. The remaining funds were provided to organisations focused on humanitarian aid and relief in Gaza, including $1 million to Save the Children, $1 million to the Palestinian Red Crescent, and $1 million to International Medical Corps.

Specifically, Google’s humanitarian and relief efforts with these organisations include:

  • Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline rose from around 300 a day to 8,000 a day. With Google’s funding, Natal was able to scale its support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support for direct and indirect victims of trauma due to terror and war in Israel. 
  • International Medical Corps: As of mid-April, the International Medical Corps has provided care to more than 433,000 civilians, delivered more than 5,400 babies, performed more than 11,800 surgeries, and supplied safe drinking water to more than 302,000 people. The organisation continues to care for some 800 patients per day, responding to mass-casualty events and performing an average of 15 surgeries per day. 

Platforms and partnerships
As the conflict continues, Google is committed to tackling disinformation, hate speech, graphic content and terrorist content, and to finding ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts, including emergency translations and localising Google content to help users, businesses and nonprofit organisations. Google has also pledged to help its partners in these extraordinary circumstances. For example, when schools closed in October 2023, the Ministry of Education in Israel used Meet as its core teach-from-home platform, and Google provided support. Google has been in touch with Gaza-based partners and participants in its Palestine Launchpad program, a digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 51.1.1

War in Ukraine: N/A

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2

War in Ukraine: N/A

Rationale - 51.1.3

War in Ukraine: N/A

Policy - 51.1.4

Israel-Gaza conflict: Enforcement of existing policies, including YouTube’s Hate Speech Policy

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5

Israel-Gaza conflict: YouTube’s Hate Speech Policy prohibits content denying, trivialising, or minimising violent historical events, including the 7 October Hamas attacks in Israel. YouTube relies on a variety of factors to determine whether a major violent event is covered, using guidance from outside experts and governing bodies to inform its approach.

Rationale - 51.1.6

Israel-Gaza conflict: No changes to YouTube Community Guidelines and to Terms and Conditions were made as a result of the Israel-Gaza conflict. YouTube continues to enforce all policies, including the ones mentioned in this report.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.4.1

War in Ukraine: YouTube continues to enforce its Community Guidelines, including but not limited to misinformation policies, which establish what type of content and behaviour is not allowed on the platform.

Description of intervention - 51.4.2

War in Ukraine: See Commitment 14 in the EU Code of Conduct Transparency Report for information on how YouTube enforces its Community Guidelines.

Indication of impact - 51.4.3

War in Ukraine: Since 24 February 2022, in connection with the ongoing war in Ukraine, YouTube has:
  • Removed over 12,000 channels and over 160,000 videos for violating its content policies, including those pertaining to misinformation, hate speech, and graphic violence;
  • Blocked over 1,000 channels and over 5.9 million videos.

Specific Action applied - 51.4.4

Israel-Gaza conflict: YouTube’s teams have been working quickly to remove content that violates its policies including those pertaining to hate speech, violent extremism, violent or graphic content, harassment, and misinformation. These policies apply to all forms of content, including videos, livestreams and comments, and YouTube’s policies are enforced across languages and locales.

Description of intervention - 51.4.5

Israel-Gaza conflict:

  • Per YouTube’s Hate Speech Policy, content that promotes violence or hatred against groups based on their ethnicity, nationality, race or religion is not allowed on YouTube. This includes Jewish, Muslim, and other religious or ethnic communities.
  • Per YouTube’s Violent Extremist Policy, content that praises, promotes or in any way aids violent criminal organisations is prohibited. Additionally, content produced by designated terrorist organisations, such as Foreign Terrorist Organisations (U.S.) or organisations identified by the United Nations, is not allowed on YouTube. This includes content produced by Hamas and Palestinian Islamic Jihad (PIJ). 
    • In addition, YouTube provides a dedicated reporting button underneath every video, with the option to flag content as 'promotes terrorism'. 
  • Per YouTube’s Violent or Graphic Content Policies, YouTube prohibits violent or gory content intended to shock or disgust viewers. Additionally, content encouraging others to commit violent acts against individuals or a defined group of people, including the Jewish, Muslim and other religious communities, is not allowed on YouTube.
  • Per YouTube’s Harassment Policies, content that promotes harmful conspiracy theories or targets individuals based on their protected group status is not allowed on YouTube. Additionally, content that realistically simulates deceased minors, or victims of deadly or well-documented major violent events, describing their death or the violence they experienced is not allowed on YouTube.
  • Per YouTube’s Misinformation Policies, content containing certain types of misinformation that can cause real-world harm, including certain types of misattributed content, is not allowed on YouTube.

Indication of impact - 51.4.6

Israel-Gaza conflict: As of 30 June 2025, following the terrorist attack by Hamas in Israel and the escalated conflict now underway in Israel and Gaza, YouTube has globally: 

  • Removed over 140,000 videos;
  • Terminated over 6,000 channels; and
  • Removed over 500 million comments.

Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.5.1

War in Ukraine: YouTube continues its ‘Hit Pause’ global media literacy campaign, which teaches viewers critical media literacy skills and improves users’ experiences on YouTube.

Description of intervention - 51.5.2

War in Ukraine: See Commitment 17 in the EU Code of Conduct Transparency Report for details on how YouTube’s ‘Hit Pause’ campaign has been teaching viewers critical media literacy skills. These skills are important in all crisis situations, including the war in Ukraine. 

Indication of impact - 51.5.3

War in Ukraine: See Commitment 17 for metrics on these efforts.

Specific Action applied - 51.5.4

War in Ukraine: YouTube continues to surface videos from high-quality sources in search results and recommendations.

Description of intervention - 51.5.5

War in Ukraine: See Commitments 17 and 18 in the EU Code of Conduct Transparency Report for details on how YouTube surfaces videos from high-quality sources in search results and recommendations. These high-quality sources are important in all crisis situations, including the war in Ukraine.

Indication of impact - 51.5.6

War in Ukraine: See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 51.5.7

War in Ukraine: YouTube continues to provide features to enhance access to high-quality information, including Information Panels, on YouTube. 

Description of intervention - 51.5.8

War in Ukraine: See Commitments 17 and 18 in the EU Code of Conduct Transparency Report for details on how YouTube enhances access to high-quality information, including information panels on topics prone to misinformation. 

Indication of impact - 51.5.9

War in Ukraine: See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 51.5.10

Israel-Gaza conflict: YouTube is continuing to actively surface high-quality news content in search results for queries about Israel and Gaza, including through its breaking news and top news shelves. 

Description of intervention - 51.5.11

Israel-Gaza conflict: YouTube’s recommendation system is prominently surfacing news from high-quality sources on the homepage, in search results and the 'Up Next' panel. YouTube’s systems do this across every country where YouTube operates.

YouTube’s Top News and Breaking News shelves are surfacing at the top of search results related to the attacks in Israel and on the homepage, prominently featuring content from high-quality news sources.

Indication of impact - 51.5.12

Israel-Gaza conflict: See Commitments 17 and 18 for metrics on these efforts.

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.6.1

War in Ukraine: YouTube provides publicly available data via Google Trends. YouTube also established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 51.6.2

War in Ukraine: See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides publicly available data via Google Trends and provides eligible academic researchers access to global video metadata, which may include content about the ongoing war in Ukraine. 

Indication of impact - 51.6.3

War in Ukraine: See Commitment 26 for metrics on these efforts.

Specific Action applied - 51.6.4

Israel-Gaza conflict: YouTube provides publicly available data via Google Trends. YouTube also established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 51.6.5

Israel-Gaza conflict: See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides publicly available data via Google Trends, and provides eligible academic researchers access to global video metadata, which may be applied to the ongoing conflict in Israel and Gaza.

Indication of impact - 51.6.6

Israel-Gaza conflict: See Commitment 26 for metrics on these efforts.

Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.7.1

War in Ukraine: N/A

Description of intervention - 51.7.2

War in Ukraine: N/A

Indication of impact - 51.7.3

War in Ukraine: N/A

Specific Action applied - 51.7.4

Israel-Gaza conflict: N/A

Description of intervention - 51.7.5

Israel-Gaza conflict: N/A

Indication of impact - 51.7.6

Israel-Gaza conflict: N/A