
Late Dr. Michael Mosley Used in AI Deepfake Health Scams

Trust can evaporate in a moment when technology is turned against us. In the latest from the wild world of AI, fraudsters are using deepfake videos of Dr. Michael Mosley, a trusted face from health broadcasting, to promote supplements such as ashwagandha and beetroot gummies.

These clips circulate on social media, showing Mosley enthusiastically advising viewers with false claims about menopause, inflammation and other health conditions, claims he never actually endorsed.

When a Familiar Face Sells Fiction

Scrolling through Instagram or TikTok, you might stumble on one of these videos and think, "Wait, is that Mosley?" You'd be right... sort of. These AI creations stitch together clips from his well-known appearances to imitate his voice, expressions and cadence.

It is frighteningly convincing, until you stop to remember: Mosley died last year.
A researcher from the Turing Institute has warned that the technology is advancing so quickly that reliably spotting fake content by sight alone is becoming nearly impossible.

The Fallout: Health Misinformation in Overdrive

Here's where things get sticky. These deepfake videos are not harmless illusions. They push unverified claims, such as beetroot gummies that improve blood vessel dilation or balance hormones, that are dangerously detached from reality.

Nutritionists have warned that such sensational content seriously undermines the public's understanding of nutrition. Supplements are not a shortcut, and hype of this kind breeds confusion, not wellness. The UK's medicines regulator, the MHRA, is looking into these claims, while public health experts continue to urge people to rely on trusted sources, such as the NHS and their GP, rather than AI-generated promotions.

Platforms in the Hot Seat

Social media platforms find themselves at a crossroads. Despite policies against deceptive content, experts say tech giants like Meta are struggling to keep pace with the sheer volume and virality of these deepfakes.

Under the UK's Online Safety Act, platforms are now legally required to tackle illegal content, including fraud and impersonation. Ofcom monitors enforcement, but so far harmful content often reappears as fast as it is taken down.

Echoes Around the World: A Disturbing Trend

This is not an isolated hiccup; it is part of a growing pattern. A recent CBS News report uncovered dozens of deepfake videos impersonating real doctors dispensing medical advice around the world, reaching millions of viewers.

In one example, a doctor discovered a deepfake promoting a product he had never endorsed, and the likeness was chilling. Viewers were fooled, and comments rolled in praising the doctor, all based on a fabrication.

My Take: When Technology Misleads

What strikes me hardest here is not just that the technology can imitate reality, but that it is believed. We build our confidence on experts, on voices that sound calm and knowledgeable. When that trust is weaponized, it erodes the very foundation of science communication.

The real battle here is not just detecting AI; it is rebuilding trust. Platforms need stronger vetting, clearer labelling, and perhaps, just perhaps, users pausing for a reality check before hitting "share".

2025-08-19 11:27:00
