Microsoft Bing

Report March 2025

Submitted
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1, Measure 14.2, Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
During the reporting period, Microsoft continued piloting Content Integrity Tools, which allow users to add Content Credentials to their own authentic content. The tools were designed as a pilot program primarily to support the 2024 election cycle and to gather feedback about Content Credentials-enabled tools; during the reporting period of this report, they were available to political campaigns in the EU, as well as to election authorities and select news media organizations in the EU and globally. These tools include a partnership and collaboration with fellow Tech Accord signatory TruePic. Announced in April 2024, this collaboration leverages TruePic’s mobile camera SDK, enabling campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App” (an app that makes it easy to capture images directly with C2PA-enabled signing) launched for both Android and iOS and can be used by participants in the Content Integrity Tools pilot program.
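To illustrate the kind of provenance workflow these tools support, below is a minimal, hypothetical Python sketch of attaching and verifying a signed content credential. It is a simplification: real Content Credentials follow the C2PA specification, which uses X.509 certificates and COSE signatures rather than the shared-key HMAC shown here, and all names in the sketch are invented for illustration.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared key for illustration only; real Content Credentials
# use X.509 certificates and COSE signatures per the C2PA specification.
SIGNING_KEY = b"demo-signing-key"

def attach_content_credential(image_bytes: bytes, source: str) -> dict:
    """Build a simplified, signed provenance manifest for a captured image."""
    manifest = {
        "claim_generator": "Content Integrity Capture App (illustrative)",
        "source": source,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset hash matches and the manifest was not altered."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest["signature"], expected)
            and claims["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

The property the sketch illustrates is that any alteration of the image bytes or of the manifest invalidates the signature, which is what allows downstream viewers to trust the recorded source and capture time.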
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Bing Search is an online search engine, the primary purpose of which is to provide a searchable index of webpages available on the internet to help users find the content they are looking for. Bing Search does not host the content or control the operation, content, or design of indexed websites. Users come to Bing Search with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Bing Search does not allow users to post and share content or otherwise enable content to go “viral” through user-to-user exchanges of information on Bing. 

As such, addressing misinformation in organic search results often requires a different approach than may be appropriate for other types of online services. The majority of the TTPs (namely, TTPs 1-9 and 11-12) are more pertinent to social media or account-driven services in that they specifically relate to user accounts, subscribers/followers, inauthentic coordination, influencers, account hijacking, or the targeting of a service’s users, and thus are not relevant to search engines.

The highest potential for abuse in web search arises under TTP 10, which involves “use of deceptive practices to deceive/manipulate platform algorithms, such as by exploiting data voids, spam tactics, or keyword stuffing.” Therefore, the relevant Bing Search policies and practices that help combat manipulative behaviors primarily address TTP 10.

Although as a search engine Bing does not have any control over the third-party websites appearing in search results, Bing’s ranking algorithms, spam policies, and other safeguards described below can also address and mitigate the risks arising from malicious websites that use other TTPs in attempts to manipulate our search engine rankings. For example, pages employing social media schemes (e.g., fake followers – TTP 3), using inauthentic domains (TTP 4), or keyword stuffing (TTP 9) are considered abusive practices that are addressed in Bing’s ranking system and Webmaster Guidelines. In addition, in connection with generative AI features, Microsoft has implemented measures intended to address TTP 7 (related to deceptive deepfakes), which are discussed in more detail below.

Bing’s primary mechanism for combatting manipulative behaviors in search results is its ranking algorithms, together with systems designed to identify and counter attempts to abuse search engine optimization techniques (i.e., spam). Bing Search describes the main parameters of its ranking systems in depth in How Bing Delivers Search Results. Abusive techniques and examples of prohibited SEO activities are described in more detail in the Bing Webmaster Guidelines.

As described in these documents, Bing’s ranking algorithms are designed to identify and prioritize high quality, highly authoritative content available online that is relevant to the user’s query and to prevent abusive search engine optimization techniques (spam).  

One of the key ranking techniques Bing uses to prevent low-quality or deceptive websites from ranking high in search results is the “quality and credibility” (QC) score. Determining the quality and credibility of a website includes evaluating the clarity of purpose of the site, its usability, and its presentation. QC also includes an evaluation of the page’s “authority”, which covers factors such as the following (a simplified illustration of how these factors might combine appears after the list):

§   Reputation: What types of other websites link to the site? A well-known news site is considered to have a higher reputation than a brand-new blog. 

§   Level of discourse: Is the purpose of the content solely to cause harm to individuals or groups of people? For example, a site that promotes violence or resorts to name-calling or bullying will be considered to have a low level of discourse, and therefore lower authority, than a balanced news article.

§   Level of distortion: How well does the site differentiate fact from opinion? A site that is clearly labeled as satire or parody will have more authority than one that tries to obscure its intent.

§   Origination and transparency of the ownership: Is the site reporting first-hand information, or does it summarize or republish content from others? If the site doesn’t publish original content, do they attribute the source? A first-hand account published on a personal blog could have more authority than unsourced content.
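Bing does not publish its actual QC formula, but as a purely hypothetical sketch of how factors like those above could be combined into a single score, consider the Python fragment below. The factor names, weights, and the averaging step are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AuthoritySignals:
    reputation: float   # 0-1: quality of sites linking to the page
    discourse: float    # 0-1: level of discourse (1 = balanced, civil)
    distortion: float   # 0-1: fact/opinion transparency (1 = clearly labeled)
    origination: float  # 0-1: originality and attribution of sources

# Hypothetical weights; Bing does not disclose its actual formula.
AUTHORITY_WEIGHTS = {
    "reputation": 0.4,
    "discourse": 0.2,
    "distortion": 0.2,
    "origination": 0.2,
}

def qc_score(clarity: float, usability: float, signals: AuthoritySignals) -> float:
    """Blend presentation factors and authority signals into one 0-1 QC score."""
    authority = sum(getattr(signals, name) * weight
                    for name, weight in AUTHORITY_WEIGHTS.items())
    return (clarity + usability + authority) / 3.0
```

A real system would derive these inputs from machine-learned models over link graphs and page content rather than hand-set values.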

In addition to its ranking algorithms, Bing Search’s general abuse/spam policies prohibit certain practices intended to manipulate or deceive the Bing Search algorithms, including those that could be employed by malicious actors in the spread of disinformation. Pursuant to the Bing Webmaster Guidelines, Bing may take action against websites employing spam tactics (such as social media schemes, keyword stuffing, malicious behavior, cloaking, link schemes, or misleading structured data markup) or otherwise violating those guidelines, including by applying ranking penalties (such as demoting a website) or delisting a website from the index.
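As one concrete illustration of how a spam tactic like keyword stuffing can be surfaced mechanically, the following hypothetical Python sketch computes a coarse repetition signal. It is not Bing’s detector; the tokenization and scoring are invented, and production systems combine many such signals with human review.

```python
import re
from collections import Counter

def keyword_stuffing_score(page_text: str, top_n: int = 3) -> float:
    """Fraction of all tokens taken up by the top-N most repeated terms.

    A value near 1.0 means a few repeated keywords dominate the page --
    one coarse signal, among many, that could feed into a ranking-penalty
    or delisting decision after further (including human) review.
    """
    tokens = re.findall(r"[a-z0-9]+", page_text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    top = sum(count for _, count in counts.most_common(top_n))
    return top / len(tokens)
```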

Note that it is not feasible to distinguish between general spam tactics and spam tactics employed by malicious actors specifically for the purpose of spreading disinformation. Bing Search has therefore not presented data on the amount of spam detected and actioned under its policies, since these figures reflect actions taken against spam overall and presently cannot be used to accurately assess whether a given action pertains to spam used in connection with disinformation campaigns or spam used for another purpose (e.g., phishing).

Generative AI Features
During the Reporting Period, the nature of Bing generative AI experiences evolved. In October 2024, Microsoft launched a separate, standalone consumer service known as Microsoft Copilot at copilot.microsoft.com, which offers conversational experiences powered by generative AI, and the Copilot in Bing (formerly known as Bing Chat) generative AI experience was phased out. Bing continues to offer generative AI experiences, such as Bing Image Creator and Bing Generative Search, the latter of which launched this Reporting Period. Bing Generative Search utilizes AI to deliver a unique experience by not only optimizing search results but also presenting information in a user-friendly, cohesive layout. Results also include citations and links that enable users to explore further and evaluate websites for themselves. For AI-powered experiences, Bing has partnered closely with Microsoft’s Responsible AI team to proactively address AI-related risks and continues to evolve these features based on user and external stakeholder feedback. Bing generative AI experiences continue to rely on the same infrastructure and mitigations discussed in Microsoft’s last report.

Bing Generative Search’s primary functionality is, like traditional Bing search, to provide users with links to third-party content responsive to their search queries. As such, the ranking algorithms and spam/abuse policies described above continue to be Bing’s primary defense against manipulation and abuse, supplemented by interventions designed specifically to address manipulation in generative AI features. For answers that trigger creative inspiration, Microsoft has worked continuously to improve and adjust safety mitigations, policies, and user experiences within Bing’s generative AI experiences to minimize the risk that they may be used for manipulative purposes. Additional information on how Microsoft approaches responsible AI in Bing’s generative AI experiences is available in How Bing Delivers Search Results.

TTP 10 remains the most relevant TTP to Bing’s generative AI experiences, as users cannot post or share content directly on the Bing service. In addition, Microsoft undertakes specific mitigations to address TTP 7, given the risk that users may attempt to use generative AI to create deepfakes or manipulated media to spread disinformation. Although Bing does not have the ability to monitor third-party platforms for publication of content created through Bing’s services, Bing has implemented safeguards to help minimize the risk that bad actors can use Bing generative AI experiences to create mis/disinformation.

Microsoft’s Copilot AI Experiences Terms (applicable to Copilot in Bing through October 2024) and Bing’s Image Creator Terms of Use (referred to here as “Supplemental Terms”) advise users on prohibited conduct and content. These Supplemental Terms primarily address TTPs 10 and 7 by restricting attempts to create or spread mis/disinformation or deceptive images using Bing’s generative AI experiences. Users who violate the Supplemental Terms and Code of Conduct may be suspended from the service. In addition, Bing’s generative AI experiences work to prevent generation of problematic text or images by blocking user prompts that (i) violate the Code of Conduct or (ii) are likely to lead to creation of material that violates the Code of Conduct. Repeated attempts to produce prohibited content or other violations of the Code of Conduct may also result in service or account suspension.
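As a purely illustrative sketch of this kind of enforcement flow (block a violating prompt, escalate on repeat offenses), consider the Python fragment below. The pattern list, threshold, and string matching are invented stand-ins; the production systems rely on ML classifiers rather than keyword matching.

```python
from collections import defaultdict

# Invented pattern list; production systems use ML classifiers,
# not keyword matching.
BLOCKED_PATTERNS = ["fake ballot", "forged document"]
SUSPENSION_THRESHOLD = 3  # hypothetical strike limit

strikes: defaultdict[str, int] = defaultdict(int)

def gate_prompt(user_id: str, prompt: str) -> str:
    """Block prompts likely to yield prohibited content; escalate repeats."""
    if any(pattern in prompt.lower() for pattern in BLOCKED_PATTERNS):
        strikes[user_id] += 1
        if strikes[user_id] >= SUSPENSION_THRESHOLD:
            return "suspend"  # repeated violations -> service/account action
        return "block"        # single violation -> refuse generation
    return "allow"
```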

For further information as to how Bing Search and Bing’s generative AI experiences implement these policies see QRE 14.1.2. 
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
As discussed under QRE 14.1.1, TTP 10 tends to be the primary mechanism for manipulation and abuse in the context of search engines and is addressed through Bing’s ranking systems and abuse policies (for both traditional search and Bing’s generative AI experiences).

Blocking content in organic search results based solely on the truth or falsity of the content can raise significant concerns relating to the fundamental rights of freedom of expression and the freedom to receive and impart information. Instead of blocking access to content to address these TTPs, Bing Search focuses on ranking its organic search results so that trusted, authoritative news and information appear first, and it provides tools to help its users evaluate the trustworthiness of certain sites and ensure they are not misled or harmed by the content that appears in search results. Bing presumes the user seeks high-quality, authoritative content unless the user clearly indicates an intent to research low-quality content. Bing Search takes actions to promote high-authority, high-quality content and thereby reduce the impact of misinformation appearing in Bing Search results. These actions include continued improvement of its ranking algorithms to ensure that authoritative, relevant content is returned at the top of search results; regular review and actioning of disinformation threat intelligence; partnership with third-party information intelligence and media literacy organizations; contributions to and support of the research community; and enforcement of clear policies concerning the use of manipulative tactics on Bing Search, among other initiatives described elsewhere in this report.

Although the Bing Search algorithm endeavors to prioritize relevance, quality, and credibility, in some cases Bing Search identifies threats arising from emerging or evolving world events and/or activities by external actors that attempt to undermine the efficacy of its algorithms. When this happens, Bing Search employs “defensive search” strategies and interventions to counteract threats and TTPs in accordance with its trustworthy search principles (discussed in further detail in How Bing Delivers Search Results).

“Defensive search interventions” may include algorithmic interventions (such as boosting authority signals in ranking or demoting a website), restricting autosuggest or related search terms to avoid directing users to problematic queries, prioritizing additional features promoting high-authority information (e.g., Answers or Public Service Announcements), and, in limited cases, manual interventions for individual reported issues or broader areas more prone to misinformation or disinformation. Bing actively monitors manipulation trends in identified high-risk areas and deploys mitigation methods as needed to ensure users are provided with high-quality, high-authority search results.
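To make the shape of such interventions concrete, here is a hypothetical Python sketch of how a flagged query might trigger an intervention plan. The query keys, plan fields, and re-ranking step are invented for illustration and do not reflect Bing’s actual implementation.

```python
# Hypothetical intervention plans keyed by flagged query topics; the
# real system combines many signals and includes manual review.
INTERVENTIONS = {
    "example election rumor": {
        "boost_authority": True,       # weight authority more heavily in ranking
        "suppress_autosuggest": True,  # consumed by the autosuggest layer
        "show_psa": True,              # surface a Public Service Announcement
    },
}

def apply_defensive_search(query: str, ranked_results: list[dict]) -> list[dict]:
    """Re-rank results when the query has an active intervention plan."""
    plan = INTERVENTIONS.get(query.lower())
    if plan is None:
        return ranked_results
    if plan["boost_authority"]:
        # Each result is assumed to carry an 'authority' score, such as the
        # QC score sketched earlier; promote high-authority pages.
        ranked_results = sorted(ranked_results,
                                key=lambda r: r["authority"], reverse=True)
    return ranked_results
```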

In addition to defensive search, Bing Search regularly monitors for other violations of its Webmaster Guidelines, including attempts to manipulate the Bing Search algorithm through prohibited practices such as cloaking, link spamming, keyword stuffing, and phishing. Bing Search dedicates meaningful resources to maintaining the integrity of the platform, promoting high authority, relevant results, and reducing spam (including spam aimed at distributing low authority information and manipulative content). Bing Search utilizes a combination of human intervention and AI-driven analysis to regularly review, detect, and address spam tactics occurring on Bing Search. When Bing Search detects websites deploying manipulative techniques or engaging in spam tactics, those websites may incur ranking penalties or be removed from the Bing Search index altogether. 

Microsoft also works to identify and track nation-state information operations targeting democracies across the world and works with a number of trusted third-party partners to obtain early indicators of narratives, hashtags, or information operations that can inform early detection and defensive search strategies. Through Microsoft’s Democracy Forward team and the Microsoft Threat Analysis Center (MTAC), Microsoft also offers channels for election authorities, including in EU and EEA Member States, to communicate with Microsoft to identify possible foreign information operations targeting elections.

The above measures also apply to Bing’s generative AI experiences. Responses to user prompts are “grounded” in high-authority content from the web (except in certain creative use cases), which is retrieved using the same ranking algorithms and moderation infrastructure as Bing’s traditional web search and, as such, benefits from Bing’s longstanding safety infrastructure described above. Nonetheless, Microsoft recognizes that generative AI technology may also raise novel risks and possibilities of harm that are not present in traditional web search and has supplemented its existing threat identification and mitigation processes with additional risk assessments and mitigation processes based on Microsoft’s Responsible AI program.
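As a highly simplified sketch of what “grounding” means in practice, the Python fragment below filters retrieved results by an authority score and instructs the model to answer only from those sources. The `search` and `llm` callables, the `authority` field, and the threshold are hypothetical stand-ins for the retrieval stack and the underlying model.

```python
def grounded_answer(prompt: str, search, llm, min_authority: float = 0.7) -> str:
    """Ground a generated answer in high-authority web results.

    `search` and `llm` are hypothetical callables standing in for the
    retrieval stack and the underlying model; `authority` is an assumed
    per-result score such as the QC score sketched earlier.
    """
    results = [r for r in search(prompt) if r["authority"] >= min_authority]
    context = "\n".join(f"[{i + 1}] {r['snippet']}" for i, r in enumerate(results))
    # Metaprompt-style instruction: answer only from the cited sources.
    system = ("Answer using only the numbered sources below and cite them; "
              "if they are insufficient, say so.\n" + context)
    return llm(system=system, user=prompt)
```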

Microsoft’s Responsible AI program is designed to identify potential harms, measure their propensity to occur, and build mitigations to address them. Guided by its Responsible AI Standard, Microsoft identifies, measures, and mitigates potential harms and misuse of new generative AI experiences while securing the transformative and beneficial uses that these tools provide. As part of this work, Microsoft has implemented a range of safety mitigations to help address, among other things, impermissible content, behaviours, and other TTPs that could potentially be used to create or spread misinformation.

Below are several examples of Microsoft’s iterative approach to identify, measure, and mitigate potential harms, including the spread of misinformation. 

- Pre-launch and ongoing testing. Before launching Bing’s generative AI experiences, Microsoft conducted “red team” testing. A multidisciplinary team of experts evaluated how the system responded when pressed to produce harmful responses, surfaced potential avenues for misuse, and identified the system’s capabilities and limitations. Post-release, generative AI experiences are integrated into Microsoft engineering organizations’ existing production measurement and testing infrastructure. More information on Microsoft’s approach to red-team testing is available at Microsoft AI Red Team building future of safer AI | Microsoft Security Blog.

- Classifiers, Metaprompting, and Filtering Interventions: Microsoft has created special mitigations in the form of “classifiers” and “metaprompting” to help reduce the risk of certain harms and misuse of generative AI features. Classifiers flag different types of potentially harmful content in search queries, chat prompts, or generated responses. Microsoft uses AI-based classifiers and content filters, which apply to all search results and relevant features; it has also designed additional prompt classifiers and content filters specifically to address possible harms raised by new generative AI features. Flags lead to potential mitigations, such as not returning generated content to the user, diverting the user to a different topic, or redirecting the user to traditional search. Metaprompting involves giving instructions to the model to guide its behavior so that, among other things, the system behaves in accordance with Microsoft’s AI Principles and user expectations. Microsoft has also implemented additional filtering and classifiers to prevent chat responses from returning what Bing considers “low authority” content as part of an answer and to help address impermissible content, behaviours, and other TTPs (e.g., TTP 7) that could potentially be used to create or spread misinformation. (A simplified sketch of this classifier-and-metaprompt flow appears after this list.)

- Content Provenance Tools. Microsoft also makes it clear that images created in Bing Image Creator (and Copilot in Bing prior to its phase-out) are AI-generated by including content provenance information in each image. These content provenance features use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. The invisible digital watermark shows the source, time, and date of original creation, and this information cannot be altered. Providing clear indications of image provenance helps reduce the risk of deepfakes (e.g., TTP 7) and helps users identify when an image was generated with the assistance of Microsoft generative AI tools. Microsoft has partnered with other industry leaders to create the Coalition for Content Provenance and Authenticity (C2PA) standards body to help develop and apply content provenance standards across the industry.


- Expanded and Prominent Reporting Functionality. Bing’s generative AI experiences allow users to submit feedback and report their concerns, which are then reviewed by Microsoft’s operations teams. Microsoft has made it easy for users to report problematic content they encounter while using generative AI features in Bing by including a “Feedback” portal on the footer of every Bing page, with direct links to its “Report a Concern” tool. 

- Regular Improvements Based on Real World Usage. Microsoft continues to make changes to Bing generative AI experiences regularly to improve product performance, update existing mitigations, and implement new mitigations in response to our learnings based on real-world usage of the product.

- Operations and incident response. Bing also uses Microsoft’s ongoing monitoring and operational processes to address when Bing’s generative AI features receive signals or a report indicating possible misuse or violations of the terms of use.

- Cooperation with Industry Partners. The third-party content that grounds Bing’s generative AI experiences relies on the same ranking algorithms and defensive interventions that power traditional Bing search, including reliance on signals of page authority that Bing receives from its third-party partners and fact-checks published using the ClaimReview protocol (an illustrative example of ClaimReview markup follows this list).
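As referenced in the classifiers bullet above, here is a minimal, hypothetical Python sketch of a classifier-gated generation flow with a metaprompt. The `classify` and `generate` callables, the labels, and the metaprompt text are invented stand-ins; the production pipeline is an ML system, not simple string routing.

```python
METAPROMPT = ("Follow the provider's AI principles; do not produce "
              "deceptive, harmful, or low-authority content.")

def moderate_turn(prompt: str, classify, generate) -> str:
    """Classifier-gated generation with a metaprompted model.

    `classify` and `generate` are hypothetical stand-ins for the
    production classifiers and model; the labels are invented.
    """
    if classify(prompt) == "harmful":
        return "redirect_to_search"      # divert rather than generate
    response = generate(METAPROMPT, prompt)
    label = classify(response)
    if label == "harmful":
        return "withhold_response"       # do not return flagged output
    if label == "low_authority":
        return "remove_low_authority"    # filter low-authority content out
    return response
```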
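And as referenced in the industry partners bullet, ClaimReview is the schema.org markup that fact-checking organizations publish so that search engines can associate a checked claim with a verdict. Below is an illustrative example expressed as a Python dictionary in JSON-LD form; the URL, organization, claim, and rating values are invented.

```python
# Illustrative schema.org ClaimReview markup, expressed as a Python dict
# in JSON-LD form; all values here are invented for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/review/123",
    "datePublished": "2025-01-15",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "claimReviewed": "An example claim circulating online",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}
```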

Our approach to identifying, measuring, and mitigating harms will continue to evolve as we learn more, and we continue to make improvements based on feedback from users, civil society groups, and other third-party stakeholders.  

Microsoft also maintains a web page – Microsoft-2024 Elections – where political candidates and election authorities can report to Microsoft alleged deepfakes of themselves or of the election process appearing on Microsoft platforms.

See also response to QRE 14.1.1.