


Malicious actors and deepfakes

Lina Kolesnikova describes the implications of deepfakes being used by criminals to trigger civil unrest, influence political decisions and for large-scale fraud.


For a typical deepfake video, AI learns what a chosen face looks like at different angles in order to transpose that face onto an actor, as if it were a mask. Image: Valex113/123rf

In February 2021, the French newspaper Le Figaro named deepfakes as one of the threats to the 2022 presidential campaign. The proliferation of ‘fakes’ that have become almost undetectable (Le Figaro, 23/02/2021) earned them this dubious distinction.

No one doubts that deepfakes can cause significant problems and pose a serious threat to privacy, democracy and national security. Deepfakes can be employed by malicious actors to influence decision-making and election results, or to start civil unrest. They also have strong potential as a weapon of psychological warfare. As deepfakes proliferate, they will become increasingly difficult to distinguish from legitimate evidence of wrongdoing, thus compromising the latter. More generally, they undermine public trust in audio-visual content.

The term deepfake is a portmanteau of ‘deep learning’ and ‘fake’; as the name suggests, deepfakes typically rely on artificial intelligence (AI) and machine learning.

The story started in 2017, when a Reddit user named ‘deepfakes’ posted doctored pornographic clips on the site, swapping actors’ faces with those of several Hollywood celebrities.

In addition to fake pornography, some of the more harmful uses of such false content include fake news and hoaxes, as well as the opportunity for large-scale financial fraud. Among the most disturbing phenomena in the latter group is so-called CEO fraud, which could prove extremely dangerous in the current pandemic situation: most people work remotely, while still having access to many critical systems and capabilities.

When in the office, employees work alongside their manager or colleagues, but when remote, especially if working unusual hours, they might be confronted with a lack of peer review of their actions, which could allow them to be easily manipulated or to be manipulative themselves.

The state of the art in deepfake technology allows not only videos to be faked, but also photo and audio material, using imaginary voices or clones of real people’s voices. Scams using faked WhatsApp voice messages have already been reported. When coupled with the reach and speed of social media, deepfakes can appear credible and convincing, quickly reaching millions of people and bringing negative impacts upon society in no time.

Deepfake technology is gradually maturing. Although, according to a recent report from the company Deeptrace, much of the deepfake content online is still pornographic and disproportionately victimises women, the real danger is just around the corner. There is growing concern about potential growth in the use of deepfakes for other purposes, particularly disinformation of all sorts and for various reasons. While in the past, partial truths mixed with lies, or the partial hiding of truth, were the main instruments of disinformation, deepfakes take this to new heights: they present themselves as explicit, credible and reliable evidence, and can thereby trigger opinions and actions.

For a typical deepfake video, AI learns what a chosen face looks like at different angles in order to transpose that face onto an actor, as if it were a mask. The resulting video shows a targeted individual saying and doing things that never actually happened.

The breakthrough in the technology came with generative adversarial networks (GANs), which set two AI algorithms against each other: one creates the fakes, while the other grades its efforts, teaching the synthesis engine to create better fakes and leading to hyper-realistic fake videos.
Creating a deepfake no longer requires long videos of a targeted person: a few Instagram stories will suffice.
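The two-player loop behind GANs can be sketched in a few lines. The toy below is purely illustrative: a one-parameter-pair generator tries to mimic samples from a target Gaussian, while a logistic-regression discriminator grades its output. Real deepfake systems use deep neural networks for both players, but the adversarial training dynamic is the same.

```python
# Toy GAN in NumPy: generator vs discriminator, trained in alternation.
# All architectures and hyperparameters here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.0          # generator: x_fake = a*z + b
w, c = 0.1, 0.0          # discriminator: p(x is real) = sigmoid(w*x + c)
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(3.0, 1.0, 64)          # target data: N(3, 1)
    z = rng.normal(0.0, 1.0, 64)             # latent noise
    fake = a * z + b

    # Discriminator step: push p(real) up and p(fake) down.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - p_real) * real + p_fake * fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: adjust (a, b) to fool the updated discriminator.
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    dx = -(1 - p_fake) * w                   # d(-log p_fake) / d(fake)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(a, b)  # the generator's output distribution drifts towards the target
```

Each round, the discriminator’s grading signal is exactly what the generator trains against, which is why the fakes keep improving as long as the discriminator can still tell them apart.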

Technology has reached the stage where anyone with basic computer skills and a home computer can create a deepfake. Relevant computer applications are openly available on the Internet, often accompanied by tutorials explaining how to create such videos. Still, to develop a somewhat realistic deepfake, these applications generally demand hundreds or thousands of training images of the faces to be swapped or manipulated. This explains why celebrities and government leaders are, so far, the most common subjects.

On the other hand, if less material is available, creating convincing deepfakes – with GANs, for instance – requires more advanced technical skills and resources. As artificial neural network technologies have advanced rapidly, in parallel with more powerful and abundant computing, so has the ability to produce realistic deepfakes. It is becoming easier and easier to create impactful, trustworthy-looking deepfakes with fewer and fewer skills.

All of this said, the situation is not completely hopeless. While deepfakes are a significant threat to our society, political systems and businesses, they can be combatted in several ways: through new legislation and regulations, policies and voluntary actions, and education and training. Many of these measures will serve as deterrent, preventive and response controls.

At the same time, technologies for deepfake detection and content authentication are improving. Researchers and internet companies have already experimented with several methods to detect deepfakes. These typically use AI to analyse videos for digital artefacts or details that deepfakes, so far, fail to imitate realistically, such as blinking or facial tics.
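One such tell-tale detail can be checked with a simple heuristic: early deepfakes often failed to reproduce natural blinking. The sketch below assumes a per-frame eye-aspect-ratio (EAR) signal is already available from a facial-landmark detector; the function names and all thresholds are illustrative, not taken from any particular detection tool.

```python
# Illustrative blink-rate check: flag footage whose subject blinks far
# less often than humans do. EAR values and thresholds are assumed/toy.

def count_blinks(ear_values, closed_threshold=0.2):
    """Count blinks as contiguous runs of frames where the eye is closed."""
    blinks, eye_closed = 0, False
    for ear in ear_values:
        if ear < closed_threshold:
            if not eye_closed:       # a new closed run begins: one blink
                blinks += 1
            eye_closed = True
        else:
            eye_closed = False
    return blinks

def looks_suspicious(ear_values, fps=25, min_blinks_per_minute=5):
    """Flag footage whose blink rate is implausibly low for a human."""
    minutes = len(ear_values) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_minute

# A person typically blinks roughly 15-20 times per minute.
normal = ([0.3] * 70 + [0.1] * 5) * 20   # ~20 blinks in one minute at 25 fps
fake = [0.3] * 1500                       # one minute with no blinks at all
print(looks_suspicious(normal), looks_suspicious(fake))  # prints: False True
```

Production detectors are far more sophisticated, but the principle is the same: look for a physiological or photometric signal that the synthesis engine did not learn to reproduce.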

A new competition is starting. As in the conventional and cyber security arms races, where malicious actors constantly compete with defensive activities, deepfake capability is likely to compete continuously with our ability to contextualise and to build situational awareness into our protection, detection, prevention and response techniques.

Further reading around deepfakes and fake news

News: Deep fake videos could 'spark' violent social unrest (2019, first reported by the BBC)

Blog: Deep learning to detect the language patterns of fake and real news - A study uncovers language patterns that AI models link to factual and false articles

Blog: The Amazon Apocalypse? Elton Cunha, based in Brazil, unpicks the tangle of misinformation surrounding the fires of 2018, finding that the facts are serious enough not to need embellishment 

Patrick Meschenmoser wrote about deep fakes and false flag campaigns in CRJ 14.2. Read his article for free here

