Google Leverages Gemini AI to Transform Ad Safety Enforcement in 2025 Transparency Report

Google has officially integrated its Gemini large language model into the core of its advertising moderation infrastructure, marking a significant shift in how the search giant battles fraudulent activity and policy violations across its global network. According to the newly released 2025 Ads Safety Report, this artificial intelligence upgrade has enabled the company to block or remove 8.3 billion advertisements and suspend nearly 25 million advertiser accounts over the past year. The report highlights a growing "AI arms race" in the digital advertising space, where sophisticated bad actors are increasingly using generative tools to create deceptive content, forcing platforms like Google to deploy equally advanced AI to maintain ecosystem integrity.
The scale of Google’s enforcement actions in 2025 reflects a massive escalation in the volume of digital threats. Removing 8.3 billion ads works out to an average of nearly 700 million interventions every month. Perhaps more critical than the total volume is the efficiency of the detection: Google reports that more than 99% of policy-violating ads were intercepted and blocked before they ever reached a user’s screen. This proactive approach is a cornerstone of Google’s strategy to prevent financial loss and misinformation from reaching the general public.
The Gemini Advantage: From Keywords to Contextual Intent
The transition to Gemini represents a fundamental change in the methodology of ad policing. For years, ad safety systems relied heavily on keyword-based filters and relatively rigid machine-learning models that looked for specific red flags, such as prohibited terms or known malicious URLs. While effective for their time, these systems often struggled with nuance, sarcasm, or sophisticated "cloaking" techniques where an ad appears legitimate to a bot but redirects a human user to a scam site.

With the integration of Gemini, Google’s systems are now capable of analyzing hundreds of billions of signals simultaneously. These signals include the age of the advertiser’s account, historical behavior patterns, campaign activity, and the semantic content of the ad copy itself. Gemini’s ability to understand context allows it to identify malicious intent even when a scammer uses "clean" language that might bypass traditional filters. For example, Gemini can better distinguish between a legitimate financial service and a predatory high-interest loan scheme that uses similar terminology but exhibits different structural patterns in its landing pages and user flows.
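Conceptually, a multi-signal review of the kind described above can be pictured as a weighted risk score over account and content features. The sketch below is purely illustrative: every field name, weight, and threshold is hypothetical, and Google has not disclosed how its actual system combines signals.

```python
from dataclasses import dataclass

@dataclass
class AdSignals:
    account_age_days: int     # newer accounts carry more risk
    prior_violations: int     # historical behavior patterns
    semantic_risk: float      # 0..1 score from a language model on the ad copy
    landing_page_risk: float  # 0..1 score from analyzing the destination page

def risk_score(s: AdSignals) -> float:
    """Blend heterogeneous signals into a single 0..1 risk estimate."""
    age_risk = 1.0 / (1.0 + s.account_age_days / 30)  # decays as the account ages
    history_risk = min(1.0, s.prior_violations / 5)
    # Weights are illustrative, not calibrated against any real data.
    return (0.25 * age_risk
            + 0.25 * history_risk
            + 0.30 * s.semantic_risk
            + 0.20 * s.landing_page_risk)

def review(s: AdSignals, block_threshold: float = 0.6) -> str:
    """Route an ad to block, human review, or serving based on its score."""
    score = risk_score(s)
    if score >= block_threshold:
        return "block"
    if score >= 0.4:
        return "human_review"
    return "serve"

# A long-standing account with clean copy is served; a day-old account
# pushing high-risk copy toward a suspicious landing page is blocked.
print(review(AdSignals(900, 0, 0.05, 0.10)))  # → serve
print(review(AdSignals(1, 0, 0.95, 0.90)))    # → block
```

The point of the sketch is the architecture, not the numbers: combining account history with semantic content scores is what lets a system flag "clean" language attached to a suspicious account, which keyword filters alone cannot do.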
Furthermore, Google noted that by the end of 2025, the vast majority of Responsive Search Ads (RSAs) were being reviewed instantly upon submission. This real-time enforcement is vital for stopping "flash" scams—campaigns that go live for a few hours to harvest data or money before disappearing. The company has expressed intentions to expand this instant-review capability to video and display formats throughout the remainder of the year.
A Statistical Breakdown of Global and Domestic Enforcement
The 2025 report provides a granular look at where the enforcement actions were most concentrated. Globally, the 24.9 million suspended accounts represent a significant increase in the removal of "bad actors" at the source, rather than just treating the symptoms of individual bad ads.
In the United States specifically, the numbers tell a story of high-stakes digital combat. Google removed 1.7 billion ads and suspended 3.3 million advertiser accounts in the U.S. alone. The top three policy violations cited in the American market were:

- Misrepresentation: Scams that trick users into sharing personal information or money by posing as legitimate entities or making false promises.
- Trademark Infringement: Unauthorized use of brand names to sell counterfeit goods or siphon traffic from established businesses.
- Unreliable Claims: Ads promoting "miracle" cures, get-rich-quick schemes, or other content that lacks scientific or factual backing.
These categories highlight the dual nature of modern ad fraud: it is both a financial threat to consumers and an intellectual property threat to legitimate brands. The rise in trademark violations, in particular, has been exacerbated by AI tools that allow scammers to quickly generate convincing logos and brand-aligned copy.
The Evolution of Ad Safety: A Chronology of Progress
To understand the significance of the 2025 report, it is necessary to look at the timeline of Google’s ad safety efforts over the last decade. In the mid-2010s, Google’s primary focus was on "bad ads" related to malware and phishing. As the digital landscape matured, the focus shifted toward "bad actors" and the networks behind them.
- 2018-2020: Google introduced more rigorous advertiser identity verification programs, requiring many advertisers to submit government-issued documentation to prove their legitimacy.
- 2021-2023: The company began leaning into standard machine learning to automate the flagging of inappropriate content and COVID-19-related misinformation.
- 2024: Initial pilot programs for Gemini-based moderation began, focusing on high-risk categories like financial services and political advertising.
- 2025: Full-scale deployment of Gemini across the ad ecosystem, resulting in the record-breaking 8.3 billion removals.
This chronology demonstrates a clear trajectory toward near-total automation. While human reviewers still play a role in training models and handling complex appeals, the sheer volume of the internet makes manual review impossible at scale.
The "Double-Edged Sword" of Automated Enforcement
Despite the technological triumphs highlighted in the report, the aggressive move toward AI-driven enforcement has not been without controversy. While Google asserts that Gemini has reduced false suspensions for legitimate advertisers by better understanding nuance, the reality on the ground for some businesses has been more complicated.

Throughout late 2024 and early 2025, several reports emerged from advertisers in the United Kingdom and the United States regarding "bulk ad disapproval alerts." In many of these cases, legitimate brands found their entire campaign libraries flagged for policy violations that did not exist. These "false positives" can be devastating for small and medium-sized businesses that rely on consistent ad traffic for their daily revenue.
Industry analysts suggest that as AI models become more "eager" to catch scams, the threshold for what constitutes a "suspicious signal" may become too sensitive. This creates a friction point where Google must balance the safety of its users with the stability of its revenue-generating partners. The company has acknowledged these challenges, stating that Gemini’s iterative learning process is designed to refine these thresholds over time, ensuring that legitimate brands are not caught in the crossfire of the war on scams.
Industry Reactions and Broader Implications
The response from the digital marketing community has been a mix of cautious optimism and a call for greater transparency. Trade organizations representing advertisers have praised the reduction in fraudulent competition, noting that every scam ad removed is a win for the integrity of the digital auction. However, there is a growing demand for a more robust appeals process.
"While we applaud the use of Gemini to scrub the ecosystem of malicious actors, the ‘black box’ nature of AI enforcement remains a concern," says one industry consultant specializing in Google Ads policy. "When an account is suspended by an AI, getting a human to understand the nuance of a legitimate business model can still be a bureaucratic nightmare."

From a regulatory perspective, Google’s 2025 report serves as a defensive shield against increasing pressure from global governments. With the European Union’s Digital Services Act (DSA) and similar legislative frameworks in the U.S. and Asia demanding more accountability from "Big Tech," these transparency reports are essential evidence that platforms are taking their "duty of care" seriously.
Analysis: The Future of the AI Arms Race
The bottom line of the 2025 Ads Safety Report is that the battle for a clean internet is no longer a human-led endeavor; it is a software-versus-software conflict. Scammers are now using generative AI to create thousands of ad variations in seconds, each slightly different to evade detection. Google’s response—using Gemini to analyze those variations in real-time—is the only viable counter-strategy.
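The evasion tactic described above — thousands of slightly mutated copies of the same scam ad — is the classic setting for near-duplicate detection. A minimal sketch using character shingles and Jaccard similarity follows; this is a standard textbook technique, not Google's disclosed method, and the sample ad strings are invented for illustration.

```python
def shingles(text: str, k: int = 4) -> set[str]:
    """Break text into overlapping character k-grams after normalizing case/whitespace."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(len(t) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets (0 = disjoint, 1 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

known_scam = "Guaranteed 10x returns!! Claim your free crypto bonus today"
variant    = "Guaranteed 10x returns! Claim your FREE crypto bonus now"
unrelated  = "Fresh organic vegetables delivered weekly to your door"

# A mutated variant scores far closer to the known scam than unrelated copy does,
# so small edits alone are not enough to evade a similarity-based filter.
print(jaccard(known_scam, variant) > jaccard(known_scam, unrelated))
```

At real scale, systems typically replace the exact set comparison with locality-sensitive hashing (e.g. MinHash) so that candidate matches can be found without comparing every pair, but the underlying idea — small edits leave most shingles intact — is the same.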
Looking forward, the digital advertising landscape will likely see even tighter integration of AI. We can expect to see "predictive enforcement," where Google’s systems analyze the "vibe" or intent of a new advertiser before they even upload their first creative asset. While this may lead to a safer experience for users, it also places a significant burden on advertisers to maintain impeccable account health and transparency.
As Google continues to refine Gemini’s role, the focus will likely shift from merely "blocking" bad ads to "rehabilitating" the ecosystem by providing advertisers with clearer, AI-generated feedback on why an ad was flagged. This would bridge the gap between automated enforcement and user education, potentially reducing the friction that currently exists in the system. For now, the 8.3 billion blocked ads stand as a testament to the staggering scale of the challenge and the unprecedented power of the tools being used to meet it.