Earlier this week, ConsumerAffairs reported that "voice cloning" has elbowed its way onto the scam landscape, an artificial intelligence (AI)-driven ruse that can convincingly trick someone into believing their boss or a relative is asking them for money.
But where there's smoke, there's fire. And when we interviewed AI and scam experts, it became clear that the fire is getting hot, too.
The most wicked AI tool that bad actors can employ is the deepfake ("deep learning" + "fake"): synthetic media in which a photo of a person can be integrated into a video or image, virtually turning someone into someone else. With software similar to a Snapchat or TikTok filter, creative scammers can spin all sorts of deepfakes to deceive the average person, too.
Who better to validate that than DeepMedia CEO and co-founder Rijul Gupta, whose company specializes in both detecting and creating deepfakes? Gupta told ConsumerAffairs that phone and email phishing is just a start and that scammers are quickly moving toward deepfake videos. He said there's even an open-source program that allows anyone to deepfake a live video call, and all they need is a 10-second clip of a TikTok video.
Holding your kid hostage
On top of other family emergency scams starting to trend, Gupta says parents need to be on the alert for AI scams using their children as a decoy.
"Imagine you're a parent and you get a FaceTime request from your kid's principal. You've seen them and spoken to them before and recognize the person you're talking to on FaceTime, so you assume the video call is real," Gupta said.
"The person on the other end could scam you out of thousands, saying something like, 'Your kid broke something at school, and it costs $10,000 to replace. Usually we'd have to file a police report, but if you make a bank transfer today [for $500] we can resolve this issue quickly.'"
Gupta said that a scammer could even use this technology to kidnap your child for ransom, or worse.
"'There's a problem at school, you'll need to drop off your kid at this location instead!' There are a lot of frightening scenarios that could happen when someone deepfakes a live video call."
They've already found a place in the office
Deepfakes are showing up in virtual meetings, too, says the FBI. In a recent internet crime report, the agency said it has seen scammers compromise a CEO or CFO's email account, then use it to invite employees to virtual meetings where the scam unfolds.
"In those meetings, the fraudster would insert a still picture of the CEO with no audio, or a 'deepfake' audio through which fraudsters, acting as business executives, would then claim their audio/video was not working properly," the agency wrote.
"The fraudsters would then use the virtual meeting platforms to directly instruct employees to initiate wire transfers or use the executives' compromised email to provide wiring instructions."
ChatGPT can turn amateur hackers into geniuses
ChatGPT is the current belle of the AI ball, but one privacy expert says it's quickly turning into a beast. Dimitri Shelest, CEO and founder of OneRep, a company that automates the removal of unauthorized private listings from the web to help people restore their privacy, told ConsumerAffairs that the latest version, GPT-4, has generated a new wave of criticism from privacy experts.
"In the same way it writes or improves useful code, ChatGPT can be used to write code that steals data. More specifically, it can help build websites and bots that trick users into sharing their information at a scale that can potentially take social engineering scams to the industrial level," Shelest told us.
He added that with ChatGPT, a poor command of English is no longer a barrier for cybercrooks, because AI will help them create credible, highly targeted phishing campaigns that are difficult for the average person to detect as fake.
How to spot a fake
The AI deepfakes ConsumerAffairs found floating around on the web are impressive, but Gupta says there are some wrinkles you can look for that might give you a clue.
"You can recognize fake AI-generated images if the background is blurry or warped, if the face is asymmetrical and the shadowing doesn't make sense, or if the teeth don't look as sharp as they should," he said.
As far as voice cloning is concerned, he suggests that listening for the subtleties in emotions and accents can help spot a deepfake.
However, in his experience, those suggestions may only be Band-Aid fixes.
"AI is evolving and getting more sophisticated every day, which makes it really difficult to determine what is real and what is fake," he warned. "But as deepfake faces and voices become more advanced, these techniques will no longer be enough to spot a fake, making detection AI the only viable solution to keeping people safe."