
Miscreants are using AI to create fake sexually explicit images, which they then employ in sextortion schemes.
Scams of this sort used to see crims steal intimate images – or convince victims to share them – before demanding payment to prevent their wide release.
But scammers are now harvesting benign, publicly available images from social media sites and other sources, using AI techniques to render explicit videos or pictures from them, then demanding money – even though the material is not real. The FBI this week issued an advisory about the threat, warning people to be cautious when posting or sending any images of themselves, or identifying information, on social media, dating apps, or other online sites.
Thomas Uhlemann, security specialist at ESET, commented:
Deepfakes have recently seen a revival, thanks to innovation in and the availability of AI-powered tools that can alter any kind of image, video, and sound. This technology was used predominantly by nation-states in war scenarios, but it has recently become available to almost everyone. About five years ago, faked videos and images could still be spotted easily – that has become harder and harder over time.
In the past, deepfakes were mostly employed against high-value targets, such as politicians, or against big corporations in CEO-fraud attacks, because creating the fakes used to be relatively costly. The situation has changed dramatically. AI can take openly available data – for example, from anybody’s social media account – and create hyper-realistic but threatening media that can be used in blackmail attacks on the average Joe and Jane.
It is therefore crucial to bolster basic privacy measures, such as setting social media profiles to “private,” and to think carefully before posting any content to the open world. Parents are advised to talk to their (grown-up) children about the potential dangers of “revenge porn” attacks via deepfakes, and grandparents need to be shown how to deal with forged phone calls and messages.
