Urgent Alert: Big Tech Fails to Stop AI Scams Targeting Users

UPDATE: A new report from consumer group Which? reveals that Big Tech is failing to protect internet users from a surge of AI-driven scams, including deepfake videos impersonating trusted figures such as financial journalist Martin Lewis and UK Prime Minister Keir Starmer. These deceptive videos are designed to lure viewers into fraudulent investment schemes, falsely presenting them as safe and government-backed.

The investigation highlights an alarming rise in AI impersonation scams in 2025, making it increasingly difficult for consumers to tell genuine content from fakes. The findings have prompted calls for the government to introduce stricter regulation holding tech companies accountable for the spread of this harmful content.

Which? has specifically urged platforms like YouTube, X (formerly Twitter), and Meta to take immediate action against misleading materials. Rocio Concha, Director of Policy and Advocacy at Which?, emphasizes, “AI is making it much harder to detect what’s real and what’s not. Fraudsters know this and are exploiting it ruthlessly.”

The Financial Conduct Authority warns that around 20% of people making investment decisions trust online influencers, a particular danger now that those influencers can be convincingly mimicked with AI. The rise of deepfakes complicates consumer protection, as criminals create convincing fake websites and content that mimic reputable sources such as Which? and the BBC.

Authorities stress the importance of verifying information by checking that content originates from official channels and legitimate websites. The UK government's upcoming fraud strategy is expected to include tough measures aimed at holding Big Tech accountable for its role in the proliferation of scams.

In a positive step, YouTube has introduced a new tool allowing creators to flag AI-generated video clones, but experts argue that more needs to be done to tackle deepfake financial fraud directly.

The need for action is urgent: consumers remain exposed to increasingly sophisticated scams, and as the technology improves, so does the quality of the fakes. With fraudsters continuing to exploit AI capabilities, both the government and tech companies must step up and put stronger protections in place for the public.

As investigations continue, consumers are urged to stay vigilant and critical of the content they encounter online. The consequences of these scams can be devastating, affecting financial security and trust in online platforms. The time to act is now.