In March 2026, British presenter Ashley James, known for her body-positive advocacy, discovered an advertisement showing her promoting weight-loss pills on the set of This Morning. The set, the ITV logo, her voice, her expressions: everything was perfectly imitated. Yet she never said those words. The content was entirely generated by artificial intelligence.
Ad Deepfakes Exploit Trust in Familiar Faces
Ad deepfakes rely on a well-oiled mechanism. Scammers gather hours of public footage of a celebrity: interviews, broadcasts, social media posts. They then train an artificial intelligence model on this data to analyze facial features, expressions, voice, and lip movements. In minutes, the model generates a video in which the personality appears to say anything. These systems no longer merely replicate a person's appearance; they also mimic how that person behaves over time.
According to Metro, advertising deepfakes, AI-generated manipulated videos, are now flooding social networks faster than platforms can contain them. Scammers dress their videos in the branding of well-known media outlets: a television set, a channel logo, or a newspaper backdrop reinforces the illusion of legitimacy. In Ashley James's case, the fake advertisement replicated the set and graphics of This Morning. Human trust relies heavily on visual recognition: seeing a familiar face speaking on camera is enough to trigger a sense of credibility, even among people aware of the risks.
Ad Deepfakes Cause Massive Financial Losses and Impact All Targets
The phenomenon extends well beyond Britain. According to a study by cybersecurity company Surfshark, deepfake-related scams caused over a billion dollars in losses worldwide in 2025, three times more than in 2024. In France, an 82-year-old man lost 350,000 euros after watching a fake advertisement in which actor Jean Reno appeared to endorse a trading platform. The video looked authentic: scammers had faithfully replicated his voice and expressions.
In response, Meta filed several lawsuits in March 2026 against advertisers based in Brazil, China, and Vietnam. The technique at issue, known as celeb-bait, exploits the image of public figures through hyper-realistic montages to redirect victims to fraudulent pages. The group claims its program now covers the likenesses of more than 500,000 public figures. Detection tools, however, struggle to keep pace: generation technologies evolve faster than moderation systems.
Ad Deepfakes Raise Fundamental Questions about Our Relationship with Images
Beyond the financial losses, these videos raise a deeper issue. When any video can be fabricated, visual proof loses its value, yet our brains continue to treat images as evidence of reality. According to cybersecurity experts at McAfee, only 29% of people feel capable of identifying a deepfake. Some clues can still help: mismatched lip movements, abnormal eye blinking, or an overly smooth skin texture can betray the deception. But these flaws are disappearing as generative AI tools advance.
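The clues listed above can be turned into simple heuristics. The sketch below is purely illustrative, assuming the per-frame measurements (blink count, a lip-sync error score, a skin-texture variance) have already been extracted by some upstream video-analysis step; the `ClipMetrics` type, the `deepfake_clues` function, and all thresholds are hypothetical, not part of any real detection product.

```python
# Hypothetical sketch: combining the visual clues mentioned in the
# article (blink rate, lip-sync mismatch, skin smoothness) into a
# list of red flags. Metrics and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class ClipMetrics:
    duration_s: float        # clip length in seconds
    blink_count: int         # blinks detected in the clip
    lip_sync_error: float    # 0.0 (perfect sync) .. 1.0 (no correlation)
    texture_variance: float  # low values = unnaturally smooth skin

def deepfake_clues(m: ClipMetrics) -> list[str]:
    """Return the heuristic red flags raised by a clip."""
    clues = []
    blinks_per_min = m.blink_count / (m.duration_s / 60)
    # People typically blink around 15-20 times per minute on camera;
    # many deepfakes blink far less (or far more) than that.
    if blinks_per_min < 5 or blinks_per_min > 40:
        clues.append("abnormal blink rate")
    if m.lip_sync_error > 0.4:
        clues.append("mismatched lip movements")
    if m.texture_variance < 0.01:
        clues.append("overly smooth skin texture")
    return clues

# Example: a 30-second clip with almost no blinking and poor lip sync
suspect = ClipMetrics(duration_s=30, blink_count=1,
                      lip_sync_error=0.6, texture_variance=0.05)
print(deepfake_clues(suspect))
# → ['abnormal blink rate', 'mismatched lip movements']
```

As the article notes, such rule-of-thumb checks lose their power as generators improve, which is why platforms pair them with learned detectors and provenance signals rather than relying on any single clue.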
For Ashley James, the lesson is a bitter one. Her face and voice were used to spread exactly the message she has spent years fighting: the one that tells women they must lose weight. Her case has at least raised public awareness of a reality the statistics have long described. In a world where seeing is no longer believing, digital vigilance is becoming a skill as essential as reading or arithmetic.