
Report September 2025
Your organisation description
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them.]
Threats observed or anticipated
Our Community Standards set out strict rules for what content can and cannot be posted on our platforms. These policies cover voter interference, voter fraud, electoral violence, and misinformation, among other categories such as hateful conduct, coordinating harm and promoting crime, and bullying and harassment. Our policies have been refined over many years, in partnership with academics, civil society, and third-party fact-checkers, to find the appropriate balance between protecting people and protecting freedom of expression and information. These policies are regularly reviewed, and they are made available to the public through our Transparency Centre.
Our comprehensive approach to elections continued for European elections held between 1 January and 30 June 2025. The election responses covered in this report include:
- Germany (Parliamentary), 23 February 2025
- Romania (Presidential), 4 May 2025
- Romania (Presidential Runoff), 18 May 2025
- Portugal (Parliamentary), 18 May 2025
- Poland (Presidential), 18 May 2025
Mitigations in place
Building on our experience of the 2024 European Parliament (EP) elections, we continued to conduct in-depth preparations and risk assessments for the elections covered in this reporting period, deploy mitigation measures, and utilise the Election Operations Centres we established to address risks in real time ahead of election day.
Overview of Cooperation with External Stakeholders and Election Integrity Efforts
Germany
Overview of partners and notifications received during the Rapid Response Implementation period (6 February to 5 March):
- Number of onboarded non-platform signatories to our direct reporting channels: 6.
- Number of reports received during the election period through the rapid response system: 18.
Romania (First Round and Run Off)
Overview of partners and notifications received during the Rapid Response Implementation period (7 April to 25 May):
- Number of onboarded non-platform signatories to our direct reporting channels: 7.
- Number of reports received during the election period: 60.
Portugal
Overview of partners and notifications received during the Rapid Response Implementation period (7 April to 25 May):
- Number of onboarded non-platform signatories to our direct reporting channels: 2.
- Number of reports received during the election period: 2.
Poland
Overview of partners and notifications received during the Rapid Response Implementation period (22 April to 24 June 2025):
- Number of onboarded non-platform signatories to our direct reporting channels: 4.
- Number of reports received during the election period: 14.
Responsible Approach to Gen AI
Meta’s approach to responsible AI is another way that we are safeguarding the integrity of elections globally, including EU national elections.
Community Standards, Fact-Checking, and AI Labelling:
Meta’s Community Standards and Advertising Standards apply to all content, including content generated by AI. AI-generated content is also eligible to be reviewed and rated by Meta’s third-party fact-checking partners, whose rating options allow them to address various ways in which media content may mislead people, including but not limited to media that is created or edited by AI.
Meta labels photorealistic images created using Meta AI, as well as AI-generated images from certain content creation tools.
Meta has begun labelling a wider range of video, audio, and image content when we detect industry-standard AI image indicators or when users disclose that they are uploading AI-generated content. Meta requires people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and may apply penalties if they fail to do so. If Meta determines that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so that people have more information and context.
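The “industry-standard AI image indicators” referenced above generally take the form of embedded provenance metadata, such as the IPTC DigitalSourceType values used by the C2PA standard to mark synthetic media. Purely as an illustration of the general mechanism, and not a description of Meta’s actual detection pipeline, a minimal check for such markers might look like the sketch below; a production system would parse the XMP packet or C2PA manifest properly rather than byte-scanning.

    # Illustrative sketch only: one way a service could check for an
    # industry-standard AI provenance marker in an image file. The IPTC
    # DigitalSourceType values below are the published vocabulary terms
    # for AI-generated and AI-edited media; everything else is assumed.
    AI_MARKERS = (
        b"trainedAlgorithmicMedia",               # fully AI-generated media
        b"compositeWithTrainedAlgorithmicMedia",  # media partly edited with AI
    )

    def has_ai_provenance_marker(path: str) -> bool:
        """Heuristic: scan raw file bytes for known AI-provenance strings
        embedded in XMP/IPTC metadata."""
        with open(path, "rb") as f:
            data = f.read()
        return any(marker in data for marker in AI_MARKERS)

    if __name__ == "__main__":
        import sys
        for image_path in sys.argv[1:]:
            found = has_ai_provenance_marker(image_path)
            print(f"{image_path}: {'AI marker found' if found else 'no marker'}")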
Continuing to Foster AI Transparency through Industry Collaboration:
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them.]
Threats observed or anticipated
As outlined in our baseline report, we took a variety of actions with the objectives of:
- Helping to keep people in Ukraine and Russia safe: We’ve added several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.
- Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.
- Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts.
- Transparency around state-controlled media: We have been working hard to tackle disinformation coming from Russian state-controlled media. Since March 2022, we have been globally demoting content from the Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites (see the illustrative sketch after this list).
- In addition to these global actions, in Ukraine, the EU and UK, we have restricted access to Russia Today (globally), Sputnik, NTV/NTV Mir, Rossiya 1, REN TV and Perviy Kanal and others.
- On 15 June 2024, we added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included: Voice of Europe, RIA Novosti, Izvestia, Rossiyskaya Gazeta.
- On 17 September 2024, we expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.
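Purely as an illustration of the link-based demotion and labelling described in the list above, the minimal sketch below shows how posts linking to a listed state-controlled media domain could be down-ranked and labelled. The domain list, penalty multiplier and function names are assumptions for illustration, not Meta’s actual implementation.

    # Hypothetical sketch of link-based demotion and labelling. The
    # domain list and the ranking penalty are illustrative assumptions.
    from urllib.parse import urlparse

    STATE_MEDIA_DOMAINS = {"rt.com", "sputnikglobe.com"}  # illustrative subset
    DEMOTION_MULTIPLIER = 0.5                             # assumed score penalty

    def apply_state_media_policy(post_links, base_score):
        """Return (adjusted_score, label) for a post based on its links."""
        for url in post_links:
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            if domain in STATE_MEDIA_DOMAINS:
                # Demote the post in ranking and attach a transparency label.
                return base_score * DEMOTION_MULTIPLIER, "Russia state-controlled media"
        return base_score, None

    # Example: a post linking to a listed outlet is halved in score and labelled.
    print(apply_state_media_policy(["https://www.rt.com/news/1"], 1.0))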
[Israel - Hamas War]
In the spirit of transparency and cooperation, we share below the details of some of the specific steps we are taking to respond to the Israel - Hamas War.
Mitigations in place
Our main strategies are in line with what we outlined in our baseline report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), tools to help our community access crucial resources, transparency around state-controlled media, and monitoring and taking action against any coordinated inauthentic behaviour.
This means (as outlined in previous reports) we will continue to:
- Monitor for coordinated inauthentic behaviour and other adversarial networks (see commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period).
- Enforce our Community Standards
- Work with fact-checkers
- Strengthen our engagement with local experts and governments in the Central and Eastern Europe region
[Israel - Hamas War]
In the wake of the 7 October 2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as the need to balance those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom, as well as in our 2023 Annual Human Rights report. The following are some examples of the specific steps we have taken:
- We quickly established a dedicated crisis response operation staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation in real time. This allows us to remove content that violates our Community Standards faster, and serves as another line of defence against misinformation.
- We continue to enforce our policies around Dangerous Organisations and Individuals, Violent and Graphic Content, Hate Speech, Violence and Incitement, Bullying and Harassment, and Coordinating Harm.
- In addition to this, our teams detected and removed a cluster of Coordinated Inauthentic Behaviour (CIB) activity attributed to a Hamas-linked network we first removed in 2021, as these fake accounts attempted to re-establish their presence on our platforms.
- In early 2025, we removed 17 accounts on Facebook, 22 Facebook Pages and 21 accounts on Instagram for violating our CIB policy. This network originated in Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey. Fake accounts – some of which were detected and disabled by our automated systems prior to our investigation – were used to post content, including in Groups, to manage Pages, and to comment on the network’s own content, likely to make it appear more popular than it was. Many of these accounts posed as female journalists and pro-Palestine activists. The operation also used popular hashtags like #palestine, #gaza, #starbucks and #instagram in its posts, as part of its spammy tactics and in an attempt to insert itself into existing public discourse.
- We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.
- We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. When they rate something as false, we move this content lower in Feed so fewer people see it.
- We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
- We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers.
- We also have limits on message forwarding, and we label messages that did not originate with the sender so people are aware that the information comes from a third party.
- Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
- Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
- Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.
- Show More, Show Less: This gives people direct control over the content they see on Facebook.
- Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed.
- Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations.
Policies and Terms and Conditions
Outline any changes to your policies
Policy
No further policy updates since our baseline report.
Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes. There are no substantial changes to report on for this period.
Israel - Hamas War
For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. This includes, inter alia, under the Dangerous Organisations and Individuals policy, removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call to release, in which case we allow it with a Mark as Disturbing (MAD) interstitial, and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta also has further discretionary policies which may be applied when content is escalated to us.
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Measures taken to demonetise disinformation related to the crisis (Commitment 1 and Commitment 2)
- As mentioned in our baseline report, our Advertising Standards prohibit ads that include content debunked by third-party fact-checkers, and advertisers that repeatedly attempt to post content rated false by fact-checkers may also face restrictions on advertising across Meta technologies. For the monetisation of initially organic content: (1) per our Content Monetisation Policies, any content that is labelled as false by our third-party fact-checkers is ineligible for monetisation; and (2) any actor found in violation of our Community Standards, including our misinformation policies, may lose the right to monetise their content, per our Partner Monetisation Policies.
- As mentioned in our baseline report, we prohibited ads or monetisation from Russian state-controlled media. Before Russian authorities blocked access to Facebook and Instagram, we paused ads targeting people in Russia, and advertisers in Russia are no longer able to create or run ads anywhere in the world.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
AI-generated or altered SIEP ads disclosure (Commitment 3)
Meta announced an AI Disclosure policy in November 2023 to help people understand when a social issue, electoral, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy went into effect in early 2024 and applies globally.
Advertisers now have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:
- Depict a real person as saying or doing something they did not say or do; or
- Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
- Depict a realistic event that allegedly occurred, but that is not a true image, video or audio recording of the event.
Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If it is determined that an advertiser did not disclose as required, Meta will reject the ad. Repeated failure to disclose may result in penalties against the advertiser.
The AI Disclosure policy helps inform people about digitally created or altered ads. This way, people will be more aware of the authenticity of the messaging, which helps combat disinformation.
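To make the disclosure rule concrete, the decision logic described above can be summarised as a short sketch. The field names and outcomes are assumptions for illustration; the real enforcement flow is internal to Meta’s ad review systems.

    # Sketch of the SIEP AI-disclosure rule described above. All names
    # are illustrative assumptions, not Meta's internal systems or API.
    from dataclasses import dataclass

    @dataclass
    class Ad:
        is_siep: bool                        # social issue, electoral or political
        has_realistic_synthetic_media: bool  # photorealistic AI-created/altered media
        advertiser_disclosed: bool           # disclosure made in the advertising flow

    def review_ad(ad: Ad) -> str:
        """Return the outcome the policy describes for a submitted ad."""
        if ad.is_siep and ad.has_realistic_synthetic_media:
            if ad.advertiser_disclosed:
                # Disclosure made: the ad runs with an info label, which
                # also appears in the Ad Library.
                return "approve_with_ai_label"
            # Required disclosure missing: the ad is rejected; repeated
            # failures may lead to penalties against the advertiser.
            return "reject"
        return "approve"

    assert review_ad(Ad(True, True, False)) == "reject"
    assert review_ad(Ad(True, True, True)) == "approve_with_ai_label"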
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Measures taken in the context of the crisis to counter manipulative behaviours/TTCs (Commitment 14)
- As mentioned in our baseline report, we have technical teams building scaled solutions to detect and prevent these behaviours, and we are partnering with civil society organisations, researchers, and governments to strengthen our defences. We also improved our detection systems to more effectively identify and block fake accounts, which are the source of much of the inauthentic activity.
- Since the invasion began, we shared what measures we’ve taken to help keep Ukrainians and Russians safe, our approach to misinformation and state-controlled media, and how we are ensuring reliable access to trusted information.
- As mentioned in our baseline report, our security teams took down three distinct networks in Russia targeting discourse on the war (announced here, here, and here) and have continued to monitor and enforce against Russian threat actors engaged in coordinated inauthentic behaviour (CIB). The Q4 2024 Adversarial Threat Report shared information on the continued low efficacy of the Doppelganger operation’s efforts on our apps, with most attempts to acquire fake accounts or run ads being quickly detected and blocked.
Relevant changes to working practices to respond to the demands of the crisis situation and/or additional human resources procured for the mitigation of the crisis (Commitments 14-16)
- As mentioned in the baseline report, throughout the war we have mobilised our teams, technologies and resources to combat the spread of harmful content, especially disinformation and misinformation, as well as adversarial threat activities such as influence operations and cyber-espionage.
- We continue to work with a cross-functional team of experts from across the company, including native Ukrainian and Russian speakers, who are monitoring the platform around the clock, allowing us to respond to issues in real time.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Removing a Coordinated Inauthentic Behaviour Network (Commitment 14, Commitment 16)
In early 2025, we removed 17 accounts on Facebook, 22 Facebook Pages and 21 accounts on Instagram for violating our policy against coordinated inauthentic behaviour. This network originated in Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey. Fake accounts – some of which were detected and disabled by our automated systems prior to our investigation – were used to post content, including in Groups, to manage Pages, and to comment on the network’s own content, likely to make it appear more popular than it was. Many of these accounts posed as female journalists and pro-Palestine activists. The operation also used popular hashtags like #palestine, #gaza, #starbucks and #instagram in its posts, as part of its spammy tactics and in an attempt to insert itself into existing public discourse.
We removed this network before it was able to build authentic audiences on our apps.
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Actions taken against dis- and misinformation content (for example deamplification, labelling, removal etc.) (Commitment 17)
- State-controlled media: We continue to take the actions we outlined in our baseline report. We have taken further action to limit the impact of state-controlled media, as described above.
- Escalation channel: This channel continues to operate as outlined in our baseline report.
- Covert influence campaigns: We have continued to monitor for and remove recidivist attempts by coordinated inauthentic behaviour (CIB) networks that target discourse about the war in Ukraine. This covert activity is aggressive and persistent, constantly probing for weak spots across the internet, including setting up hundreds of new spoof news organisation domains.
Promotion of authoritative information, including via recommender systems and products and features such as banners and panels (Commitment 19)
As mentioned in our baseline report, we provided tools to help our community access crucial resources and take action to support people in need.
We continued supporting the Halo Trust and the State Emergency Service of Ukraine to spread authoritative, factual information about the risks in contaminated areas, risks related to unexploded ordnance, and life-saving information about shelters. Notably, we sponsored the targeted ad campaigns of the Halo Trust and improved the WhatsApp chatbot run by the State Emergency Service of Ukraine to ensure a safe and secure infoline. In addition, we provided an ad credits budget to 'Ty Yak?', a national mental health awareness campaign, to promote mental health resources for people affected by the war.
We continue to see funds raised on Facebook and Instagram for nonprofits in support of humanitarian efforts for Ukraine.
We continue to work through our Data for Good program, which empowers humanitarian organisations, researchers, UN agencies, and European policymakers to make more informed decisions on how to support the people of Ukraine.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Warning Screens on sensitive content, Sensitive Content Control and Facebook Reduce: (Commitment 17)
The 7 October 2023 attack by Hamas was designated as a terrorist attack under Meta’s Dangerous Organisations and Individuals policy. Consistent with that designation, we removed all content showing identifiable victims at the moment of the attack. Subsequently, people began sharing this type of footage in order to raise awareness of and condemn the attacks. Meta’s goal is to allow people to express themselves while still removing harmful content. We therefore began allowing people to post this type of footage within that context only, with the addition of a warning screen to inform users that it may be disturbing. If the user’s intent in sharing the content is unclear, we err on the side of safety and remove it.
However, there are additional protections in place to ensure people have choices when it comes to this content.
Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. We try not to recommend sensitive content in these places by default, but people can also choose to see less, to further reduce the possibility of seeing this content from accounts they don’t follow.
Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. Or if preferred, they can turn many of these demotions off entirely. They can also choose to maintain Meta’s current demotions.
These actions ensure that we balance the protection of voice with removing harmful content. In this context, it has allowed for important discussion and condemnation of violence, while also empowering people to make choices in reaction to the content they see on Facebook and Instagram.
Hidden Words filter (Commitment 18, Commitment 19)
When turned on, Hidden Words filters offensive terms and phrases from DM requests and comments, so people never have to see them. People can customise this list, to make sure the terms they find offensive are hidden.
Hidden Words helps people choose the offensive terms and phrases they want hidden, so they are protected from seeing them.
Limits (Commitment 18, Commitment 19)
When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
This tool gives people choice about the DMs and requests they receive, which may be important when engaging online around sensitive topics.
Comment Controls (Commitment 18, Commitment 19)
People can control who can comment on their posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.
This tool gives people control over engagement with what they post on Facebook and Instagram.
Show More, Show Less (Commitment 18, Commitment 19)
Show More, Show Less gives people direct control over the content they see on Facebook. Selecting “Show More” will temporarily increase the amount of content similar to the post a user gave feedback on, while selecting “Show Less” means the user will temporarily see fewer posts like it.
This tool provides people with more direct control over what they see, which is important for protecting people's well-being during high profile crisis events.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Measures taken to support research into crisis related misinformation and disinformation (Commitment 17-25)
As mentioned in our baseline report, the Data for Good program shares privacy-protected data externally to help tackle social issues like disasters, pandemics, poverty and climate change. In support of the Ukraine humanitarian response, the program’s maps have been used to help plan and deliver assistance.
As mentioned in our baseline report, we continued providing baseline population density maps (the high resolution settlement layer) of Ukraine and surrounding countries to humanitarian organisations for supply-chain planning and to aid demining efforts. Built by combining updated census estimates with satellite imagery (i.e., no Facebook user data), these maps offer 30-metre resolution and demographic breakdowns, making them among the most accurate in the world.
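As a hypothetical illustration of how these maps support planning: the settlement-layer data is published in tabular form, and a planner could aggregate estimated population over an area of interest with a few lines of code. The file name and column names below are assumptions for illustration.

    # Minimal sketch of querying an HRSL-style population table
    # (assumed columns: latitude, longitude, population).
    import csv

    def population_in_bbox(csv_path, min_lat, max_lat, min_lon, max_lon):
        """Sum estimated population for ~30-metre cells inside a bounding box."""
        total = 0.0
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                lat, lon = float(row["latitude"]), float(row["longitude"])
                if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
                    total += float(row["population"])
        return total

    # Example: estimate how many people live inside a planning area
    # (the file name is a placeholder).
    # print(population_in_bbox("ukr_population.csv", 49.0, 50.0, 30.0, 31.5))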
Our Social Connectedness Index has been used by leading researchers, including the European Commission’s Joint Research Centre unit on Demography, Migration and Governance, to quantify the rate at which Ukrainian refugees seek shelter in European regions with existing Ukrainian diaspora.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Content Library and API tools (Commitment 26)
As we previously reported, Meta has opened access to the Meta Content Library and Content Library API, tools that provide access to near real-time public content from Pages, Posts, Groups and Events on Facebook and public content on Instagram. Details about the content, such as the number of reactions, shares, comments and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content through both a graphical user interface (UI) and a programmatic API. Together, these tools provide the most comprehensive access to publicly-accessible content across Facebook and Instagram of any research tool built to date.
Individuals from qualified institutions, including journalists, who are pursuing scientific or public interest research topics are able to apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR’s Social Media Archives (SOMAR) Virtual Data Enclave.
Qualified individuals pursuing scientific or public interest research, including journalists, can gain access to the tools if they meet all the requirements.
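The Content Library API’s actual interface is documented only to approved researchers inside the secure environments, so the sketch below is a hypothetical illustration of the search-and-filter workflow described above; the endpoint URL, parameter names and response fields are placeholders, not the real API surface.

    # Hypothetical illustration of a search/filter workflow. Endpoint,
    # parameters and fields are placeholders, NOT the real API.
    import requests  # third-party: pip install requests

    BASE_URL = "https://example.invalid/content-library/search"  # placeholder
    ACCESS_TOKEN = "RESEARCHER_TOKEN"                            # placeholder

    def search_public_posts(query, since, until, limit=100):
        """Fetch public posts matching a keyword within a date range."""
        params = {
            "q": query,
            "since": since,   # e.g. "2025-01-01"
            "until": until,   # e.g. "2025-06-30"
            "limit": limit,
            "access_token": ACCESS_TOKEN,
        }
        resp = requests.get(BASE_URL, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("data", [])

    # Filter client-side for widely viewed posts, using the post view
    # counts the report says are now exposed (field name is an assumption).
    posts = search_public_posts("election", "2025-01-01", "2025-06-30")
    widely_viewed = [p for p in posts if p.get("view_count", 0) > 10_000]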
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Cooperation with independent fact-checkers in the crisis context, including coverage in the EU (Commitment 30-33)
As mentioned in our baseline report, for misinformation that does not violate our Community Standards, but undermines the authenticity and integrity of our platform, we work with our network of independent third-party fact-checking partners.
The details of the network are outlined under the Empowering Fact-Checkers chapter above.
As mentioned in our baseline report, our cooperation with fact-checkers is as outlined in the Fact-Checkers’ Empowerment chapter above.
In Europe, we partner with 46 fact-checking organisations, covering 36 languages. This includes 29 partners covering 26 countries and 23 different languages in the EU.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Working with fact-checkers in the region and deploying keyword detection (Commitment 30)
Meta is working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
When they rate something as false, we move this content lower in Feed so fewer people see it.
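Purely to illustrate the mechanism: keyword detection of this kind can be as simple as matching a curated term list and grouping the matching posts for fact-checker review. The keyword list and matching rule below are illustrative assumptions; a production system would use multilingual matching and ranking.

    # Illustrative keyword-based grouping: surface posts that mention
    # crisis-related terms so fact-checkers can review them in one place.
    import re
    from collections import defaultdict

    CRISIS_KEYWORDS = ["hostage", "ceasefire", "airstrike", "evacuation"]  # illustrative

    def group_posts_by_keyword(posts):
        """Map each keyword to the posts that mention it (case-insensitive)."""
        groups = defaultdict(list)
        for post in posts:
            for kw in CRISIS_KEYWORDS:
                if re.search(rf"\b{re.escape(kw)}\b", post, re.IGNORECASE):
                    groups[kw].append(post)
        return dict(groups)

    posts = [
        "Reports of an evacuation corridor opening today",
        "Video claims to show hostage release - unverified",
    ]
    print(group_posts_by_keyword(posts))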
Content Warning Labels (Commitment 31)
Meta is adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. We also have limits on message forwarding, and we label messages that did not originate with the sender so people are aware that the information comes from a third party.
Meta is supporting people in the region by giving them more information to decide what to read, trust and share by adding warning labels onto relevant content.
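To illustrate the forwarding limits and third-party labels described above, the sketch below shows one plausible client-side scheme. The thresholds and names are assumptions for illustration, not Meta’s published parameters.

    # Hypothetical sketch of forwarding limits and labels. Thresholds
    # and names are assumptions, not Meta's published parameters.
    from dataclasses import dataclass
    from typing import Optional

    MAX_FORWARD_CHATS = 5      # assumed per-forward fan-out limit
    HIGHLY_FORWARDED_HOPS = 5  # assumed hop count for a stronger label

    @dataclass
    class Message:
        text: str
        forward_hops: int = 0  # how many times it has been forwarded

    def label_for(msg: Message) -> Optional[str]:
        """Return the transparency label a client might show, if any."""
        if msg.forward_hops >= HIGHLY_FORWARDED_HOPS:
            return "Forwarded many times"
        if msg.forward_hops >= 1:
            return "Forwarded"  # the message did not originate with the sender
        return None

    def forward(msg: Message, chat_ids: list) -> Message:
        """Enforce the fan-out limit, then return the forwarded copy."""
        if len(chat_ids) > MAX_FORWARD_CHATS:
            raise ValueError(f"Can forward to at most {MAX_FORWARD_CHATS} chats")
        return Message(text=msg.text, forward_hops=msg.forward_hops + 1)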