Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment: Measure 1.3, Measure 1.5, Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
Google sets a high bar for information quality on services that involve advertising and content monetisation. Given that many bad actors may seek to make money by spreading harmful content, raising the bar for monetisation can also diminish their incentives to misuse Google services. For example, Google prohibits deceptive behaviour on Google advertising products.
Google Ads also provides advertisers with additional controls and helps them exclude types of content that, while in compliance with AdSense policies, may not fit their brand or business. These controls let advertisers apply content filters or exclude certain types of content or terms from their video, display, and search ad campaigns. Advertisers can exclude content such as politics, news, sports, beauty, fashion, and many other categories. These categories are listed in the Google Ads Help Centre.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA.
- Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
Google partakes in audits including those conducted by independent accreditation organisations such as the Media Rating Council (MRC) and maintains this accreditation via participation in annual audit cycles conducted by the MRC.
The current MRC accreditation certifies that:
- Google Ads display and search clicks measurement methodology and AdSense ad serving technologies adhere to the industry standards for click measurement.
- Google Ads video impression and video viewability measurement as reported in the Video Viewability Report adheres to the industry standards for video impression and viewability measurement.
- The processes supporting these technologies are accurate. This applies to Google’s measurement technology which is used across all device types: desktop, mobile, and tablet, in both browser and mobile apps environments.
For more information about what this accreditation means, please see this help page.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
See response to QRE 1.5.1.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
Google Ads provides its advertising partners with features that enable them to maintain control over where their ads appear, the format in which their ads run, and their intended audience.
Since April 2021, advertisers have the ability to use dynamic exclusion lists that can be updated seamlessly and continuously over time. These lists can be created by advertisers themselves or by a third party they trust, such as brand safety organisations and industry groups. Once advertisers upload a dynamic exclusion list to their Google Ads account, they can schedule automatic updates as new web pages or domains are added, ensuring that their exclusion lists remain effective and up to date.
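The merge-and-update workflow behind a dynamic exclusion list can be sketched roughly as follows. This is a hypothetical illustration only, not the Google Ads API or Google's actual implementation; the function name and domains are invented for the example.

```python
def merge_exclusion_lists(own_entries, third_party_feed):
    """Merge an advertiser's own exclusions with a trusted third-party feed.

    Entries are domains or page URLs; duplicates are removed and the
    result is sorted so repeated uploads are stable and easy to diff.
    """
    merged = set(e.strip().lower() for e in own_entries)
    merged.update(e.strip().lower() for e in third_party_feed)
    return sorted(merged)

# Hypothetical data: the advertiser's own list plus a brand-safety feed update.
own = ["example-lowquality.test", "spam-news.test"]
feed = ["spam-news.test", "disinfo-site.test"]

print(merge_exclusion_lists(own, feed))
# → ['disinfo-site.test', 'example-lowquality.test', 'spam-news.test']
```

On a scheduled update, the same merge would be re-run against the latest third-party feed, which is what keeps the uploaded list current as new pages or domains are added.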
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
Not relevant for Google Ads (intended for Signatories that purchase ads).
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
Not relevant for Google Ads (intended for Signatories that provide brand safety tools).
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
Not relevant for Google Ads (intended for Signatories that rate sources).
SLI 1.6.1
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
Not relevant for Google Ads (intended for Signatories that purchase ads).
Country | In view of steps taken to integrate brand safety tools: % of advertising/media investment protected by such tools
Austria | 0
Belgium | 0
Bulgaria | 0
Croatia | 0
Cyprus | 0
Czech Republic | 0
Denmark | 0
Estonia | 0
Finland | 0
France | 0
Germany | 0
Greece | 0
Hungary | 0
Ireland | 0
Italy | 0
Latvia | 0
Lithuania | 0
Luxembourg | 0
Malta | 0
Netherlands | 0
Poland | 0
Portugal | 0
Romania | 0
Slovakia | 0
Slovenia | 0
Spain | 0
Sweden | 0
Iceland | 0
Liechtenstein | 0
Norway | 0
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment: Measure 2.2, Measure 2.3, Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
In H1 2025 (1 January 2025 to 30 June 2025), Google updated the Ads Transparency Policy to include the display of additional information about the entity that pays for the ads. This update was implemented in two phases:
- In May 2025, Google began displaying the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. The payer name will be visible in the ‘My Ad Center’ panel and the Ads Transparency Centre.
- Since June 2025, Google Ads advertisers have been able to edit the displayed payer name by navigating to the advertiser verification page under billing. When such an edit is made, the revised payer name will display instead of the payment profile name.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
All newly created ads, and ads that are edited by users, are reviewed for policy violations. The review of new ads is performed by one of the following, or a combination of both:
- Automated mechanisms; and
- Manual reviews performed by human reviewers.
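A combined automated/manual review flow of this kind is often implemented as a triage step: clear-cut cases are decided automatically, and the uncertain middle band is escalated to human reviewers. The sketch below is an assumption-laden illustration (the thresholds, score, and function name are invented), not a description of Google's actual systems.

```python
def route_ad_review(policy_score, approve_below=0.2, reject_above=0.9):
    """Route a newly created or edited ad based on an automated policy score.

    Scores near the extremes are decided automatically; anything in the
    uncertain middle band is escalated to a human reviewer. The
    thresholds here are illustrative only.
    """
    if policy_score >= reject_above:
        return "auto_disapprove"
    if policy_score <= approve_below:
        return "auto_approve"
    return "manual_review"

# Three hypothetical ads with different automated risk scores.
for score in (0.05, 0.5, 0.95):
    print(score, route_ad_review(score))
```

The design choice this sketch captures is that automation handles volume while human review handles ambiguity, which matches the report's description of review "by one of the following, or a combination of both".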
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
See response to QRE 2.2.1.
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
Number of own-initiative actions taken on advertisements that affect the availability, visibility, and accessibility of information provided by recipients of Google Ads services, by EEA Member State billing country and policy in H1 2025 (1 January 2025 to 30 June 2025). These actions taken include enforcement against ads and ad assets that violate any of the policy topics in scope for reporting.
Google takes content moderation actions on content which violates or may be shown to violate Google Ads policies, or where the content is illegal. These can encompass both proactive and reactive enforcement actions. Proactive enforcement takes place when Google employees, algorithms, or contractors flag potentially policy-violating content. Reactive enforcement takes place in response to external notifications, such as user policy flags or legal complaints (e.g. an Article 9 order or an Article 16 notice under the Digital Services Act).
To ensure a safe and positive experience for users, Google requires that advertisers comply with all applicable laws and regulations in addition to the Google Ads policies. Ads, assets, destinations, and other content that violates Google Ads policies can be blocked on the Google Ads platform and associated networks.
Ad or asset disapproval
Ads and assets that do not follow Google Ads policies will be disapproved. A disapproved ad will not be able to run until the policy violation is fixed and the ad is reviewed.
Account suspension
Google Ads Accounts may be suspended if Google finds violations of its policies or the Terms and Conditions.
Policies in scope:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
Country | Number of actions taken, for Destination Requirements | Number of actions taken, for Inappropriate Content | Number of actions taken, for Misrepresentation
Austria | 5,679,742 | 152,315 | 306,606
Belgium | 6,771,548 | 121,570 | 555,205
Bulgaria | 7,203,929 | 18,865 | 355,403
Croatia | 1,904,898 | 20,974 | 94,461
Cyprus | 5,382,607 | 290,609 | 1,142,090
Czech Republic | 13,608,158 | 359,832 | 4,235,014
Denmark | 6,847,768 | 201,918 | 1,406,770
Estonia | 1,384,422 | 12,757 | 210,660
Finland | 2,080,150 | 32,778 | 327,317
France | 391,100,341 | 794,427 | 2,887,710
Germany | 106,072,196 | 1,148,272 | 2,402,796
Greece | 1,875,045 | 36,299 | 128,314
Hungary | 5,503,166 | 140,868 | 252,077
Ireland | 19,331,043 | 1,579,743 | 1,899,322
Italy | 47,864,574 | 265,087 | 3,049,594
Latvia | 1,637,301 | 30,738 | 1,933,507
Lithuania | 8,357,484 | 93,765 | 193,107
Luxembourg | 1,075,313 | 29,661 | 442,666
Malta | 5,866,262 | 15,759 | 3,687,084
Netherlands | 71,198,546 | 658,263 | 2,131,593
Poland | 23,028,128 | 747,208 | 2,382,988
Portugal | 2,648,319 | 65,962 | 279,512
Romania | 7,698,926 | 303,699 | 422,070
Slovakia | 4,211,053 | 76,015 | 1,226,400
Slovenia | 3,003,559 | 28,852 | 138,664
Spain | 95,254,245 | 556,151 | 2,567,776
Sweden | 12,044,118 | 176,689 | 256,526
Iceland | 158,051 | 2,202 | 86,432
Liechtenstein | 322,224 | 676 | 7,245
Norway | 3,439,066 | 22,644 | 206,561
Total EU | 858,632,841 | 7,959,076 | 34,915,232
Total EEA | 862,552,182 | 7,984,598 | 35,215,470
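The reported totals are internally consistent: for each policy column, the EEA total equals the EU total plus the three EEA/EFTA countries (Iceland, Liechtenstein, Norway). This can be checked directly with the figures from the table:

```python
# Totals and EEA/EFTA country rows from the SLI 2.3.1 table (H1 2025).
eu_totals = {"Destination": 858_632_841, "Inappropriate": 7_959_076,
             "Misrepresentation": 34_915_232}
eea_totals = {"Destination": 862_552_182, "Inappropriate": 7_984_598,
              "Misrepresentation": 35_215_470}
efta = {
    "Iceland":       {"Destination": 158_051,   "Inappropriate": 2_202,  "Misrepresentation": 86_432},
    "Liechtenstein": {"Destination": 322_224,   "Inappropriate": 676,    "Misrepresentation": 7_245},
    "Norway":        {"Destination": 3_439_066, "Inappropriate": 22_644, "Misrepresentation": 206_561},
}

# For every policy column, EU total + EFTA rows must equal the EEA total.
for policy in eu_totals:
    derived = eu_totals[policy] + sum(row[policy] for row in efta.values())
    assert derived == eea_totals[policy], policy
print("EEA totals match EU + Iceland + Liechtenstein + Norway")
```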
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
Notification
Ads that do not follow Google Ads policies will be disapproved or (if appropriate) limited in where and when they can show. This will be shown in the ‘Status’ column as ‘Disapproved’ or ‘Eligible (limited)’, and the ad may not be able to run until the policy violation is fixed and the ad is re-reviewed. Hovering the cursor over the status of the ad shows additional information, including the policy violation impacting the ad. For more information on how to fix a disapproved ad, see the external Help Centre page.
Appeal process
Advertisers have multiple options and pathways to appeal a policy decision directly from their Google Ads account, for instance via the ‘ads and assets’ table, the Policy Manager, or the Disapproved Ads and Policy Questions form. For more information about the appeal process, see the Help Centre page. For account suspensions, advertisers can also appeal by following the submit an appeal process.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
Number of content moderation complaints received from advertisers located in EEA Member States during H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State and by complaint outcome. Advertiser complaints were received via Google Ads standardised path for appealing policy decisions.
Complaint outcomes include initial decision upheld and initial decision reversed. An ‘initial decision’ refers to the first enforcement of Google’s terms of service or product policies. These decisions may be reversed in light of additional information provided by the appellant as part of an appeal or additional automatic, manual review of the content.
Policies in scope:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
Country | Number of Ads Appeals | Number of Successful Appeals | Number of Failed Appeals
Austria | 68,158 | 52,996 | 15,162
Belgium | 25,671 | 13,280 | 12,391
Bulgaria | 42,739 | 13,207 | 29,532
Croatia | 9,568 | 4,423 | 5,145
Cyprus | 143,815 | 44,042 | 99,773
Czech Republic | 142,345 | 62,184 | 80,161
Denmark | 49,175 | 23,477 | 25,698
Estonia | 29,176 | 10,144 | 19,032
Finland | 24,350 | 11,387 | 12,963
France | 91,582 | 34,113 | 57,469
Germany | 242,262 | 99,494 | 142,768
Greece | 10,853 | 6,585 | 4,268
Hungary | 23,583 | 14,451 | 9,132
Ireland | 12,186 | 6,225 | 5,961
Italy | 168,738 | 33,998 | 134,740
Latvia | 16,239 | 6,463 | 9,776
Lithuania | 147,455 | 26,493 | 120,962
Luxembourg | 551 | 356 | 195
Malta | 46,200 | 12,897 | 33,303
Netherlands | 246,757 | 88,441 | 158,316
Poland | 137,049 | 62,249 | 74,800
Portugal | 16,924 | 8,183 | 8,741
Romania | 40,741 | 14,509 | 26,232
Slovakia | 15,751 | 9,896 | 5,855
Slovenia | 25,667 | 7,143 | 18,524
Spain | 251,087 | 121,197 | 129,890
Sweden | 15,145 | 10,308 | 4,837
Iceland | 419 | 88 | 331
Liechtenstein | 1,252 | 313 | 939
Norway | 5,407 | 1,554 | 3,853
Total EU | 2,043,767 | 798,141 | 1,245,626
Total EEA | 2,050,845 | 800,096 | 1,250,749
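From the figures in the table, the share of appeals that reversed the initial decision can be derived per country as successful appeals divided by total appeals. A few representative rows:

```python
# Selected rows from the SLI 2.4.1 table: (total appeals, successful appeals).
appeals = {
    "Austria":   (68_158, 52_996),
    "Italy":     (168_738, 33_998),
    "Total EEA": (2_050_845, 800_096),
}

for region, (total, successful) in appeals.items():
    rate = successful / total
    print(f"{region}: {rate:.1%} of appeals reversed the initial decision")
```

Reversal rates vary widely by Member State: roughly 78% of Austrian appeals succeeded against about 20% in Italy, with the EEA-wide rate at about 39%.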
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment: Measure 3.1, Measure 3.2, Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
Google Advertising works across industry and with civil society to facilitate the flow of information relevant to tackling disinformation. For example, Google participates in the EU Code of Conduct on Disinformation Permanent Task-force’s dedicated Working Groups, such as the elections working group, which involve civil society and Industry Signatories.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Google takes part in the EU Code of Conduct on Disinformation Permanent Task-force’s Working Group on elections - as mentioned in response to QRE 3.1.1. In addition, Google’s Threat Analysis Group (TAG) continues to engage with other Industry Signatories to the Code in order to stay abreast of cross-platform deceptive practices, such as operations leveraging fake or impersonated accounts.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.
Google Advertising frequently engages with third-party organisations in order to explain, collect feedback on, and improve Google Advertising policies. Google Advertising has also exchanged views with experts at numerous policy roundtables, conferences, and workshops - both in Brussels and in the EU capitals.
Please also see QRE 3.1.1 for additional information on the collaboration with third party organisations and government entities.
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in H1 2025 (1 January 2025 to 30 June 2025), Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training – with a strong focus on helping people navigate AI-generated content.
Mitigations in place
Across Google, various teams support democratic processes by connecting people to election information, like practical tips on how to register to vote, or by providing high-quality information about candidates. A number of key elections took place around the world in 2025, and across the EU in particular: in H1 2025, voters cast their votes in Germany, Poland, Portugal and Romania. As noted above, Google supported these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training. Across its efforts, Google also had an increased focus on the role of artificial intelligence (AI) and the part it can play in the disinformation landscape — while also leveraging AI models to augment its abuse-fighting efforts.
Safeguarding Google platforms and disrupting the spread of disinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat disinformation.
- Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
- Working with the wider ecosystem: Since Google’s inaugural commitment of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 121 projects have been funded across 28 countries so far.
Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that have helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include:
- Ads disclosures: Google expanded its Political Content Policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content.
- Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm — and YouTube requires creators to disclose when they have created realistic altered or synthetic content, and will display a label that indicates for people when the content they are watching is synthetic. For sensitive content, including election related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
- Additional context for users: 'About This Image' in Search helps people assess the credibility and context of images found online.
- Industry collaboration: Google is a member of the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to develop a standard that helps provide more transparency and context for people on AI-generated content.
Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways Google makes it easy for people to find what they need, and which were deployed during elections that took place across the EU in 2025:
- High-quality Information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content, on the YouTube homepage, in search results and the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
- Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on Google’s platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in Google’s Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. Google also limits how advertisers can target election ads. Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025.
Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services.
- Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
- Tackling coordinated influence operations: Google’s Threat Intelligence Group helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share threat information and suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.
Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Conduct on Disinformation.
Policies and Terms and Conditions
Outline any changes to your policies
Policy - 50.1.1
Please see the ‘Scrutiny of Ads Placement’ section below.
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2
Please see the ‘Scrutiny of Ads Placement’ section below.
Rationale - 50.1.3
Please see the ‘Scrutiny of Ads Placement’ section below.
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.2.1
Political Content Policy
Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025.
Description of intervention - 50.2.2
Once an advertiser has completed EU Election Ads verification, all their EU Election Ads must contain a disclosure that identifies who paid for the ad. For most ad formats, Google will automatically generate a ‘Paid for by’ disclosure, using the information provided during the verification process.
All EU Election Ads run by verified EU election advertisers in the EU are also subject to targeting restrictions, whereby only the following criteria may be used to target election ads:
- Geographic location (except radius around a location);
- Age, gender;
- Contextual targeting options such as: ad placements, topics, keywords against sites, apps, pages and videos.
All other types of targeting are not allowed for use in election ads.
To provide transparency for users, Google publishes a Political Ads Transparency Report and a political ads library. Only ads that are in scope of the Election Ads Policy, and that are run by verified election advertisers, will be included in the report at this time. For example, EU Election Ads run by a verified EU election advertiser that serve in the EU will be included in the report, while US Election Ads run by a verified EU election advertiser that serve in the EU will not.
In July 2024, Google updated the Disclosure requirements for synthetic content under the Political Content Policy, requiring advertisers to disclose election ads that contain synthetic or digitally altered content that inauthentically depicts real or realistic-looking people or events by selecting the checkbox in the ‘Altered or synthetic content’ section of their campaign settings. For certain ad formats, Google will generate an in-ad disclosure based on that checkbox. This is because Google believes that users should have the information they need to make informed decisions when viewing election ads that contain synthetic content that has been digitally altered or generated. Accordingly, verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events. This disclosure must be clear and conspicuous, and must be placed where it is likely to be noticed by users. The policy applies to image, video and audio content.
In June 2024, Google updated the policy for EU Election Ads to include restrictions in Italy: ‘Advertisers must comply with applicable local electoral laws, including pausing ads as required during periods defined by law as silence periods. Google does not allow EU Election Ads, as defined by Ads’ policies, to serve in Italy during a silence period.’ This is because Google supports responsible political advertising and has consistently expected all political ads and destinations to comply with local legal requirements, including campaign and election laws and election ‘silence periods’, for any geographic areas they target. Google requires all advertisers to fully comply with applicable laws and regulations, and advertisers are expected to familiarise themselves with the local laws and regulations of any location their ads target.
Indication of impact - 50.2.3
No applicable metrics to report at this time.
Specific Action applied - 50.2.4
Misrepresentation Policy
Description of intervention - 50.2.5
AdSense policies that publishers must adhere to, and that disrupt the monetisation incentives of malicious and misrepresentative actors related to politics in the AdSense ecosystem, include the Manipulated Media and Deceptive Practices policies.
Google Ads provides a way for advertisers and businesses to reach new customers as they search on Google for words related to an advertiser’s business, or browse websites with related themes. However, Google Ads enforces policies that do not allow political ads or destinations that display Inappropriate Content or Misrepresentation. Policies that prohibit political ads and destinations displaying Inappropriate Content include the Sensitive Events Policy and the Hacked Political Materials Policy. Policies that prohibit political ads and destinations displaying Misrepresentation include the Coordinated Deceptive Practices and Manipulated Media policies.
In March 2024, Google Advertising updated the Unacceptable business practices portion of the Misrepresentation Policy to include enticing users to part with money or information by impersonating, or falsely implying affiliation with or endorsement by, a public figure, brand or organisation. Google Advertising began enforcing this policy in March 2024 for advertisers outside of France, and in April 2024 for advertisers in France. The reason for this was that, toward the end of 2023 and into 2024, Google Advertising faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes. When Google Advertising detected this threat, it created a dedicated team to respond immediately. It also pinpointed patterns in the bad actors’ behaviour, trained its automated enforcement models to detect similar ads and began removing them at scale. Google Advertising also updated its Misrepresentation Policy to better enable it to rapidly suspend the accounts of bad actors.
Indication of impact - 50.2.6
Please refer to SLI 2.3.1 for metrics related to these policies.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.3.1
Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025. Additionally, paid political promotions, where they qualify as political ads under the TTPA, will no longer be permitted on YouTube in the EU.
Description of intervention - 50.3.2
N/A
Indication of impact - 50.3.3
N/A
Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
War in Ukraine
Overview
The ongoing war in Ukraine has continued into 2025, and Google continues to help by providing cybersecurity and humanitarian assistance, and high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:
- Continued online services manipulation and coordinated influence operations;
- Advertising and monetisation linked to state-backed disinformation about Russia and Ukraine;
- Threats to security and protection of digital infrastructure.
Israel-Gaza conflict
Overview
Since the outbreak of the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure its platforms and partnerships are responsive to the crisis, and counter the threat of disinformation. Google identified the following areas of focus for addressing the ongoing crisis:
- Humanitarian and relief efforts;
- Platforms and partnerships to protect our services from coordinated influence operations, hate speech, and graphic and terrorist content.
Mitigations in place
War in Ukraine
The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.
1. Online services manipulation and malign influence operations
2. Advertising and monetisation linked to Russia and Ukraine disinformation
In H1 2025 (1 January 2025 to 30 June 2025), Google continued to pause the majority of commercial activities in Russia – including ads serving in Russia via Google demand and third-party bidding, ads on Google’s properties and networks globally for all Russian-based advertisers, AdSense ads on state-funded media sites, and monetisation features for YouTube viewers in Russia. Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube are still operating in Russia. Google will continue to closely monitor developments.
3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats.
Google’s Threat Analysis Group (TAG) has been tracking threat actors, both before and during the war, and sharing its findings publicly and with law enforcement. TAG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, nonprofit organisations, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns.
Google aims to continue to apply the following approach when responding to future crisis situations:
- Elevate access to high-quality information across Google services;
- Protect Google users from harmful disinformation;
- Continue to monitor and disrupt cyber threats;
- Explore ways to provide assistance to support the affected areas more broadly.
Future measures
Google will continue to monitor the situation and take additional action as needed.
Israel-Gaza conflict
Humanitarian and relief efforts
Google.org has provided more than $18 million to nonprofits providing relief to civilians affected in Israel and Gaza. This includes more than $11 million raised globally by Google employees with company match, and $1 million in donated Search Ads to nonprofits so they can better connect with people in need and provide information to those looking to help. We also provided $6 million in Google.org grant funding, including $3 million to Natal, an apolitical nonprofit organisation focused on psychological treatment of victims of trauma. The remaining funds were provided to organisations focused on humanitarian aid and relief in Gaza, including $1 million to Save the Children, $1 million to the Palestinian Red Crescent and $1 million to the International Medical Corps.
Specifically, Google’s humanitarian and relief efforts with these organisations include:
- Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With our funding, Natal was able to scale its support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support for direct and indirect victims of trauma due to terror and war in Israel.
- International Medical Corps: As of mid-April, the International Medical Corps has provided care to more than 433,000 civilians, delivered more than 5,400 babies, performed more than 11,800 surgeries and supplied safe drinking water to more than 302,000 people. The organisation continues to care for some 800 patients per day, responding to mass-casualty events and performing an average of 15 surgeries per day.
Platforms and partnerships
As the conflict continues, Google is committed to tackling disinformation, hate speech, graphic content and terrorist content by continuing to find ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts, including emergency translations and localising Google content to help users, businesses and nonprofit organisations. Google has also pledged to help its partners in these extraordinary circumstances. For example, when schools closed in October 2023, the Ministry of Education in Israel used Meet as its core teach-from-home platform, with Google providing support. Google has been in touch with Gaza-based partners and participants in Palestine Launchpad, its digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.
Policies and Terms and Conditions
Outline any changes to your policies
Policy - 51.1.1
War in Ukraine: Enforcement of existing policies
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2
War in Ukraine: Google Ads continued to enforce all Google Ads policies during the war in Ukraine, including its Sensitive Events Policy.
Rationale - 51.1.3
War in Ukraine: No changes to Ads policies and to Terms and Conditions were made as a result of the war in Ukraine during this reporting period. Google Ads continues to enforce all Google Ads policies, including the ones mentioned in this report.
Policy - 51.1.4
Israel-Gaza conflict: Enforcement of existing policies
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5
Israel-Gaza conflict: Google Ads continued to enforce all Google Ads policies during the Israel-Gaza conflict.
Rationale - 51.1.6
Israel-Gaza conflict: No changes to Ads policies or to Terms and Conditions were made as a result of the Israel-Gaza conflict. Google Ads continues to enforce all Google Ads policies, including the ones mentioned in this report.
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.2.1
War in Ukraine: Enforces the Google Ads Misleading Representation Policy, focusing on ensuring ads are honest and transparent and provide users with the information needed to make informed decisions. This policy covers various forms of deception, including unacceptable business practices and misleading representations.
Description of intervention - 51.2.2
War in Ukraine: Specifically for the war in Ukraine, Google Ads focused on the Manipulated Media sub-category in the Misleading Representation Policy which disallows the practice of deceptively doctoring media related to politics, social issues or matters of public concern.
Google Ads also enforced the Clickbait Ads Policy which is a sub-category under the Misleading Representation Policy. This policy prohibits ads that use clickbait tactics or sensationalist text or imagery to drive traffic.
Indication of impact - 51.2.3
War in Ukraine: Please refer to SLI 2.3.1 for more details on Google Ads Misrepresentation Policy, including Manipulated Media and Clickbait Ads sub-categories.
Specific Action applied - 51.2.4
War in Ukraine: As noted above, Google Ads enforces the Sensitive Events Policy, which does not allow ads that potentially profit from or exploit a sensitive event with significant social, cultural, or political impact, such as civil emergencies, natural disasters, public health emergencies, terrorism and related activities, conflict, or mass acts of violence.
Description of intervention - 51.2.5
War in Ukraine: Due to the war in Ukraine, Google Ads enforced the Sensitive Events Policy and, in March 2022, paused ads on pages containing content that exploits, dismisses, or condones the invasion. This is in addition to the pausing of ads from and on Russian Federation state-funded media in February 2022.
Indication of impact - 51.2.6
War in Ukraine: Google Advertising continues to remain vigilant in enforcing all relevant policies, including the Sensitive Events Policy, related to the war in Ukraine.
Specific Action applied - 51.2.7
War in Ukraine: Enforces the Inappropriate Content Policy, which does not allow ads or destinations that display shocking content or that promote hatred, intolerance, discrimination, or violence.
Description of intervention - 51.2.8
War in Ukraine: Due to the war in Ukraine, Google Ads focused on enforcing the Dangerous or Derogatory and Shocking Content sub-categories of the Inappropriate Content Policy. The Dangerous or Derogatory sub-category does not allow content that incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalisation. The Shocking Content sub-category does not allow promotions containing violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma.
Indication of impact - 51.2.9
War in Ukraine: Please refer to SLI 2.3.1 for more details on Google Ads Inappropriate Content Policy.
Specific Action applied - 51.2.10
War in Ukraine: Enforces the Other Restricted Businesses Policy, which restricts certain kinds of businesses from advertising with Google Ads in order to prevent users from being exploited, even if individual businesses appear to comply with other policies.
Description of intervention - 51.2.11
War in Ukraine: In order to protect users, Google Ads specifically focused on enforcing the Government Documents and Official Services Policy which disallows the promotion of documents and/or services that facilitate the acquisition, renewal, replacement or lookup of official documents or information that are available directly from a government or government delegated provider.
Indication of impact - 51.2.12
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.13
Description of intervention - 51.2.14
War in Ukraine: Google AdSense will continue to monitor and prevent monetisation of content that violates these policies.
Indication of impact - 51.2.15
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.16
War in Ukraine: Paused Google AdSense’s monetisation of Russian Federation state-funded media.
Description of intervention - 51.2.17
War in Ukraine: Beginning in February 2022, Google AdSense prohibited the monetisation of any Russian Federation state-funded media (i.e. sites, apps, YouTube channels). It is important to note that Google’s current Publisher Policies and advertiser-friendly guidelines already prohibited many forms of content related to the war in Ukraine from monetising. In addition, Google Advertising paused the monetisation of content that exploits, dismisses, or condones the invasion across services.
Indication of impact - 51.2.18
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.19
War in Ukraine: Paused the ability of Russian-based publishers to monetise with AdSense, AdMob, and Ad Manager.
Description of intervention - 51.2.20
War in Ukraine: In August 2024, due to ongoing developments in Russia, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager.
Indication of impact - 51.2.21
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.22
War in Ukraine: Paused ads from and for Russian Federation state-funded media since February 2022.
Description of intervention - 51.2.23
War in Ukraine: Google also paused ads from and for Russian Federation state-funded media.
Indication of impact - 51.2.24
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.25
War in Ukraine: Enforced the Coordinated Deceptive Practices Policy, which prohibits advertisers from promoting content related to matters of public concern while misrepresenting or concealing their identity or country of origin.
Description of intervention - 51.2.26
War in Ukraine: Accounts found to be engaging in Coordinated Deceptive Practices are suspended immediately and without prior warning.
Clickbait ads are disapproved upon detection. Repeated violations of this policy can lead to an account suspension.
Indication of impact - 51.2.27
War in Ukraine: No applicable metrics to report at this time.
Specific Action applied - 51.2.28
Israel-Gaza conflict: Google AdSense enforces the Dangerous or Derogatory Content Policy, which does not allow monetisation of content that incites hatred against, promotes discrimination of, or disparages an individual or group of people on the basis of their race or ethnic origin, religion, or nationality.
Description of intervention - 51.2.29
Israel-Gaza conflict: In order to protect users and advertisers, Google requires that all publishers comply with Google Publisher Policies in order to monetise on AdSense.
Due to the Israel-Gaza conflict, Google AdSense focused on enforcing the Dangerous or Derogatory Content Policy. Under this policy, Google AdSense does not allow monetisation of content that incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalisation. Nor is content allowed that harasses, intimidates, or bullies an individual or group of individuals. In addition, content that threatens or advocates for physical or mental harm to oneself or others is not allowed, nor is content that seeks to exploit others, such as through extortion or blackmail.
Indication of impact - 51.2.30
Israel-Gaza conflict: No applicable metrics to report at this time.
Specific Action applied - 51.2.31
Description of intervention - 51.2.32
Israel-Gaza conflict: Since 7 October 2023, Google Ads has taken several measures across its platforms in response to the Israel-Gaza conflict, including declaring a sensitive event to help prevent exploitative ads around this conflict. Google’s mission to elevate high-quality information across its services is of utmost importance, and Google Ads has rigorously enforced, and will continue to rigorously enforce, its policies.
Google Ads often institutes sensitive events following natural disasters or other tragic events. When a sensitive event is declared, Google Ads does not allow ads that exploit or capitalise on these tragedies.
Google does not allow ads that potentially profit from or exploit a sensitive event with significant social, cultural, or political impact, such as civil emergencies, natural disasters, public health emergencies, terrorism and related activities, conflict, or mass acts of violence. Google also does not allow ads that claim victims of a sensitive event were responsible for their own tragedy, or similar instances of victim blaming, or ads that claim victims of a sensitive event are not deserving of remedy or support.
Indication of impact - 51.2.33
Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy.
Specific Action applied - 51.2.34
Israel-Gaza conflict: Within the Inappropriate Content Policy, Google Advertising does not allow Shocking Content.
Description of intervention - 51.2.35
Israel-Gaza conflict: Google does not allow promotions containing violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma.
Google does not allow promotions containing gratuitous portrayals of bodily fluids or waste.
Google does not allow promotions containing obscene or profane language.
Google does not allow promotions that are likely to shock or scare.
Indication of impact - 51.2.36
Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy.
Specific Action applied - 51.2.37
Israel-Gaza conflict: Google Advertising enforces the Misrepresentation Policy, which includes Clickbait ads.
Description of intervention - 51.2.38
Israel-Gaza conflict: Google does not allow ads that use clickbait tactics or sensationalist text or imagery to drive traffic. Google also does not allow ads that use negative life events such as death, accidents, illness, arrests or bankruptcy to induce fear, guilt or other strong negative emotions to pressure the viewer to take immediate action.
Indication of impact - 51.2.39
Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy.
Specific Action applied - 51.2.40
Israel-Gaza conflict: No changes to the enforcement of Ads Policies as a result of the Israel-Gaza conflict.
Description of intervention - 51.2.41
Israel-Gaza conflict: To ensure a safe and positive experience for users, Google requires that advertisers comply with all applicable laws and regulations in addition to the Google Ads policies. Ads, assets, destinations, and other content that violate these policies can be blocked on the Google Ads platform and associated networks. Google Ads policy violations can lead to ad or asset disapproval, or account suspension.
Indication of impact - 51.2.42
Israel-Gaza conflict: No applicable metrics to report at this time.
Specific Action applied - 51.2.43
Israel-Gaza conflict: Teams across the company are dedicating resources as part of an urgent escalations workforce to respond to the Israel-Gaza conflict and take quick measures as needed.
Description of intervention - 51.2.44
Israel-Gaza conflict: Google Advertising invests heavily in the enforcement of its policies. Google Advertising has a team of thousands working around the clock to create and enforce its policies at scale.
Indication of impact - 51.2.45
Israel-Gaza conflict: No applicable metrics to report at this time.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.3.1
War in Ukraine: Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025. Additionally, paid political promotions, where they qualify as political ads under the TTPA, will no longer be permitted on YouTube in the EU.
Description of intervention - 51.3.2
War in Ukraine: N/A
Indication of impact - 51.3.3
War in Ukraine: N/A
Specific Action applied - 51.3.4
Israel-Gaza conflict: Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025. Additionally, paid political promotions, where they qualify as political ads under the TTPA, will no longer be permitted on YouTube in the EU.
Description of intervention - 51.3.5
Israel-Gaza conflict: N/A
Indication of impact - 51.3.6
Israel-Gaza conflict: N/A