A massive network of more than 15,000 domains is actively promoting AI investment scams by exploiting commercial ad trackers from Google, Microsoft, and Oracle, according to research from Netskope Threat Labs. The operation, which has been active since at least March 2024, represents a sophisticated evolution in how cybercriminals blend fraudulent content with legitimate traffic to avoid detection and scale their theft across the globe.
Key Takeaways
- Netskope identified roughly 15,500 scam domains using Google Analytics, Microsoft Clarity, and Oracle Advertising trackers to evade security detection.
- Scammers use cloaking techniques to show benign content to security scanners while directing real users to fraudulent investment pages.
- The network generated over 10 million daily ad impressions via Google Ads, YouTube, and social platforms in Q4 2024.
- Victims lose crypto through fake AI trading bots promising 300-500% returns; deposits sent to attacker-controlled wallets are rarely recovered.
- An estimated $50 million in cryptocurrency has been stolen since the campaign’s detection, based on on-chain analysis.
How AI investment scams leverage mainstream ad infrastructure
The core innovation behind this AI investment scams network is its abuse of trusted ad trackers. By embedding Google Analytics, Google Tag Manager, Microsoft Clarity, and Oracle Advertising into their fraudulent domains, scammers make their traffic appear legitimate to automated security systems. Eighty-seven percent of the identified domains use at least one of these trackers, creating a veneer of legitimacy that shields the operation from detection. Ray Canzanese, Technical Director at Netskope Threat Labs, described the scale bluntly: “This isn’t a fringe operation—it’s a foundational block of modern cybercrime, hiding in plain sight behind the same trackers every legit business uses.”
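Because the trackers are loaded from well-known public endpoints, their presence on a page is easy to check. Below is a minimal, illustrative sketch of the kind of fingerprinting a defender (or researcher) might use to note which mainstream trackers a page embeds; the URL patterns are the publicly documented loader endpoints for each product, and the pattern set is deliberately not exhaustive.

```python
import re

# Publicly documented loader endpoints for the trackers named in the report.
# Illustrative patterns only, not a complete fingerprint set.
TRACKER_PATTERNS = {
    "Google Analytics / Tag Manager": r"googletagmanager\.com/(gtag/js|gtm\.js)",
    "Microsoft Clarity": r"clarity\.ms/tag",
    "Oracle Advertising (Moat)": r"moatads\.com",
}

def trackers_present(html: str) -> list[str]:
    """Return the names of known ad trackers referenced in a page's HTML."""
    return [name for name, pattern in TRACKER_PATTERNS.items()
            if re.search(pattern, html, re.IGNORECASE)]

page = '<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXX"></script>'
print(trackers_present(page))  # ['Google Analytics / Tag Manager']
```

Note the asymmetry this creates for defenders: a tracker hit says nothing by itself, since legitimate sites produce the same signal, which is exactly why scammers embed them.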
The scam delivery chain operates with mechanical precision. Attackers purchase keywords like “AI stocks” and “crypto AI bot” through Google Ads, bidding competitively to appear alongside legitimate investment content. When users click, they land on cloaked domains that detect the visitor’s browser and device. Security scanners see benign content; real users see polished fake investment platforms with testimonials, AI bot demos, and countdown timers claiming “limited spots available.” The cloaking layer is essential—it allows domains to survive longer on ad networks before being flagged, multiplying the campaign’s reach before suspension.
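One common way researchers surface this kind of cloaking is to request the same URL twice, once presenting a crawler-like User-Agent and once a browser-like one, then compare the responses. The sketch below illustrates that idea under simple assumptions (User-Agent-based cloaking, a crude text-similarity threshold); real cloaking kits also key on IP ranges, JavaScript capabilities, and referrers, which this does not cover.

```python
import urllib.request
from difflib import SequenceMatcher

BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")
SCANNER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def fetch(url: str, user_agent: str) -> str:
    """Fetch a page while presenting a specific User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def looks_cloaked(scanner_body: str, browser_body: str,
                  threshold: float = 0.5) -> bool:
    """Flag pages whose scanner-facing and browser-facing content diverge sharply.

    The 0.5 similarity threshold is illustrative, not calibrated.
    """
    return SequenceMatcher(None, scanner_body, browser_body).ratio() < threshold

# Usage (network required):
# scanner_view = fetch("https://example.com", SCANNER_UA)
# browser_view = fetch("https://example.com", BROWSER_UA)
# print(looks_cloaked(scanner_view, browser_view))
```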
Domains are registered across 1,200+ registrars worldwide, with heavy concentrations in the US, Netherlands, and Germany. Seventy-two percent of detected domains were under 30 days old at discovery, indicating rapid domain rotation to evade bans. Bulk registration costs only $10-15 per domain annually, making the economics of throwaway infrastructure trivial for attackers operating at scale.
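The 72% figure suggests registration age alone is a cheap, if weak, triage signal. A minimal sketch of that check is below; in practice the creation date would come from a WHOIS or RDAP lookup (e.g. an RDAP "registration" event), which this sketch assumes has already been done.

```python
from datetime import date, timedelta

def is_suspiciously_young(created: date, today: date,
                          max_age_days: int = 30) -> bool:
    """Flag domains under 30 days old at observation time.

    Per the report, 72% of detected scam domains fell in this window.
    A young domain is a weak signal on its own; new legitimate sites
    trip it too, so it should only feed a broader score.
    """
    return (today - created) < timedelta(days=max_age_days)

print(is_suspiciously_young(date(2024, 11, 20), date(2024, 12, 1)))  # True: 11 days old
print(is_suspiciously_young(date(2020, 1, 1), date(2024, 12, 1)))    # False
```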
The victim extraction funnel behind AI investment scams
Once a user lands on a fake investment site, the extraction process follows a predictable sequence. Registration requires only an email address and phone number—no real verification. The site then requests wallet connection for a “demo trade,” establishing initial trust. Attackers promise small guaranteed returns (20-50% daily), designed to lower victim resistance. After the victim deposits even modest amounts in Bitcoin or Ethereum, the pressure escalates. Support agents (often operating via Telegram or Discord) encourage larger deposits, claiming exclusive AI trading opportunities. When victims attempt withdrawal, excuses multiply: trading fees, KYC delays, platform “maintenance.” By then, the crypto has been transferred to attacker-controlled wallets and tumbled through mixers, making recovery nearly impossible.
The fake investment sites mimic legitimate brokers with professional branding, fake regulatory badges, and fabricated client testimonials. Some promise returns as high as 300-500% on AI trading bots—claims that should trigger immediate skepticism but often work because they arrive alongside ads on trusted platforms like YouTube and news websites. Malvertising campaigns place scam ads in high-traffic contexts, exploiting the assumption that ads on major sites must be vetted.
Why mainstream ad platforms struggle to stop AI investment scams
Google Ads and similar platforms face a cat-and-mouse game with sophisticated scammers. The cloaking technique defeats automated ad review systems, which see legitimate landing pages during the approval process but cannot detect the bait-and-switch that happens after approval. By the time a domain accumulates enough user reports to trigger manual review, it has already generated significant traffic and revenue. Leaked scammer communications analyzed by Netskope researchers suggest individual campaigns can net $1 million monthly before suspension—a return that justifies rapid domain cycling and ad spend across multiple accounts.
The use of legitimate ad trackers compounds the detection problem. Security teams cannot simply block all traffic from domains using Google Analytics; that would flag thousands of legitimate sites. The trackers themselves are not compromised—they function exactly as intended. Scammers simply weaponize their presence as a trust signal, exploiting the fact that legitimate businesses use the same tools.
Comparing AI investment scams to older fraud models
This operation differs from earlier investment fraud in its scale and sophistication. Older forex scams relied on direct phishing emails and lacked the infrastructure to handle millions of daily impressions. “Pig butchering” romance scams, which also extract crypto, operate through slower social engineering on dating apps—effective but labor-intensive. The Finiko pyramid scheme of 2021, which stole an estimated $3 billion, operated with less evasion, relying on word-of-mouth and direct recruitment rather than ad tech. Real AI trading platforms like Trade Ideas and Kavout disclose actual risks, do not use cloaking, and operate with regulatory oversight—a stark contrast to the anonymous, rotating infrastructure of this scam network.
The scalability advantage comes from automation. Once the WordPress template and cloaking plugin are deployed, domain rotation and ad placement can run semi-autonomously, requiring minimal human oversight for the support side. This allows a small team to manage thousands of victim interactions simultaneously across multiple Telegram and Discord channels.
What happens after detection and what victims can do
Netskope shared its takedown list with major registrars and ad platforms, triggering the first widespread disruptions to the network in late 2024. However, the speed of domain rotation means new scam sites appear faster than platforms can disable them. Cryptocurrency stolen through these schemes is nearly impossible to recover once tumbled through mixers. Law enforcement can trace on-chain transactions but cannot easily identify the wallets’ owners, especially when funds move through privacy-focused protocols.
Victims are urged to report suspected scams to the FBI’s Internet Crime Complaint Center and their local law enforcement, though recovery prospects remain poor. The best defense remains skepticism: any platform promising guaranteed returns of 20-50% daily, requesting crypto deposits, or using aggressive urgency tactics is almost certainly a scam, regardless of where the ad appears.
Is there a way to identify AI investment scams before losing money?
Yes. Check for red flags: guaranteed high returns, pressure to deposit quickly, requests for wallet connection before any legitimate trading activity, and difficulty reaching verified customer support. Legitimate investment platforms disclose regulatory status (SEC, FCA, etc.), allow fiat deposits, and never promise guaranteed returns. If a platform’s website is less than 30 days old or uses a privacy-protected domain registration, treat it with extreme caution.
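That checklist can be thought of as a simple weighted score. The sketch below is a hypothetical rule-based scorer: the flag names and weights are illustrative choices mapping one red flag each from the list above, not a calibrated model.

```python
# Hypothetical flag names and weights; each entry maps one red flag
# from the checklist above. Weights are illustrative, not calibrated.
RED_FLAGS = {
    "guaranteed_high_returns": 3,
    "deposit_pressure": 2,
    "wallet_connect_before_trading": 2,
    "no_verified_support": 1,
    "no_regulator_disclosed": 2,
    "domain_under_30_days": 1,
    "private_whois": 1,
}

def scam_score(observed: set[str]) -> int:
    """Sum the weights of observed red flags; higher means walk away."""
    return sum(weight for flag, weight in RED_FLAGS.items() if flag in observed)

site = {"guaranteed_high_returns", "deposit_pressure",
        "wallet_connect_before_trading"}
print(scam_score(site))  # 7
```

Any single flag can have an innocent explanation; it is the combination, especially guaranteed returns plus deposit pressure, that is decisive.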
How much money have victims lost to these AI investment scams?
Netskope estimates $50 million in cryptocurrency stolen since detection based on on-chain analysis of attacker wallets. The true figure may be higher, as not all victims report losses and some transactions remain untraced. Individual losses typically range from $250 to tens of thousands, depending on how aggressively scammers escalated deposit requests.
Why do Google Ads and Microsoft Clarity allow these domains to advertise?
The cloaking technique defeats automated review systems by showing legitimate content during the approval process. Ad platforms review thousands of new domains daily; manual verification of every site is impractical. Scammers exploit this scale by cycling domains quickly, staying ahead of suspension waves. Once reported, platforms disable domains, but new ones launch within hours, creating an endless cycle.
The discovery of this network of more than 15,000 domains exposes a critical vulnerability in how ad ecosystems and security infrastructure interact. Scammers have weaponized the same tools that legitimate businesses rely on, turning trust signals into camouflage. Until ad platforms implement more aggressive verification of investment-related ads and security vendors develop better detection for cloaking techniques, AI investment scams will remain a scalable, profitable operation. For users, the lesson is simple: skepticism beats hype, especially when money is involved.
This article was written with AI assistance and editorially reviewed.
Source: TechRadar

