These People Don’t Exist, but They Could Still Be Committing Fraud
Deepfakes have grown from glitchy YouTube sensations into political, social and criminal weapons. What are the implications for fraud prevention?
Do you recognise these people? They look familiar, no? But they’re not real. Using artificial intelligence and deepfake technology, anyone with a computer and an internet connection can create authentic-looking videos, photos and audio clips of people doing and saying things they never actually did or said. Using a generative adversarial network (GAN), thousands of recordings of a person’s voice can be analysed, and from this a completely new, fictional audio file can be created that sounds the same and uses the same speech patterns (Panda Media Centre). Through this technology, a fake version of the past, present and future can be created and become the new accepted truth. Experts predict that in the near future deepfakes will be completely indistinguishable from real images. This is a huge benefit to criminals, and a gaping vulnerability for crime prevention.
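Real voice deepfakes use deep neural networks trained on spectrograms of speech, but the adversarial idea behind a GAN can be sketched in a few lines. The toy below, written for illustration only, pits a one-parameter-pair "generator" against a logistic "discriminator" on one-dimensional stand-in data: the discriminator learns to tell real samples from fakes, and the generator learns to fool it, which drags the fake distribution towards the real one. All numbers and names here are illustrative assumptions, not part of any real deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b, fed with random noise z.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs "probability x is real".
w, c = 0.1, 0.0
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # "real" data: mean 4
    z = rng.normal(size=32)
    fake = a * z + b                        # generator's forgeries

    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator ascends log D(fake): it wants its fakes scored as real.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator's output distribution has shifted
# towards the real data's mean of 4.
samples = a * rng.normal(size=1000) + b
```

Replace the scalars with deep networks and the 1-D numbers with audio spectrograms, and this same tug-of-war is what lets a GAN clone a voice from recorded speech.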
State Farm’s debut TV advert is a good, benign example of how the nation can be fooled. It aired footage supposedly from 1998 of an ESPN analyst predicting the events of 2020. Viewers were astounded, only to later realise it was fake. Other examples have gone viral recently, such as a clip of Mark Zuckerberg appearing to admit that Facebook is exploiting its users. The technology is spreading at an incredible rate: over the course of 2019 the number of deepfake videos online almost doubled, from 7,964 to 14,678 (Deeptrace).
But what happens when deepfakes transcend the realm of fun YouTube sensations and enter the criminal sphere? Imagine the harm that will be done if the world believes a fabricated video, created to suit someone’s personal agenda, is the truth. Imagine a scenario where your company is defrauded of £200,000 because a scammer was able to imitate your CEO’s voice and ask for an urgent transfer, even replicating his slight German accent. It has already happened, in March 2019. By the time the employee realised it was a fraud, the funds had been transferred to Mexico via Hungary.
We Fight Fraud have seen ID documents created with deepfake photos. The fraudster combines stolen personal details from a real person with the fabricated photo. Because they control the image, they can be sure that anti-fraud systems will not link it back to the real identity through facial recognition.
Systems are being developed to identify deepfake images, the latest from Microsoft. But such systems can only give a probability score, which leaves a very large margin of error to be exploited by automated fraud networks. The real challenge is for this new threat to be taken seriously by the fraud prevention industry faster than it is being seized upon by fraudsters. We cannot be complacent and need to stay on top of this technology, because one thing is for sure: criminals won’t be missing this opportunity.
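A probability score only becomes a decision once someone picks a threshold, and every threshold trades missed fakes against genuine customers wrongly flagged. The sketch below uses entirely hypothetical detector scores, not output from any real product, to show why that margin of error matters to an automated pipeline.

```python
# Hypothetical detector scores (probability an image is fake) for a small
# labelled evaluation set; every number here is invented for illustration.
real_scores = [0.05, 0.12, 0.30, 0.45, 0.08]   # genuine images
fake_scores = [0.55, 0.70, 0.40, 0.92, 0.85]   # deepfakes

def error_rates(threshold):
    """Return (share of genuine images flagged, share of fakes missed)."""
    false_positives = sum(s >= threshold for s in real_scores) / len(real_scores)
    false_negatives = sum(s < threshold for s in fake_scores) / len(fake_scores)
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold {t}: {fp:.0%} genuine flagged, {fn:.0%} fakes missed")
```

Lower the threshold and you inconvenience legitimate customers; raise it and deepfakes like the borderline 0.40 score slip straight through, which is exactly the gap an automated fraud network would probe for.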
We Fight Fraud provides training which is constantly updated to address current and emerging threats. More information at wefightfraud.org
Panda Media Centre: https://www.pandasecurity.com/mediacenter/news/deepfake-voice-