'Politicians fear this like fire': The rise of the deepfake and the threat to democracy

On 4 May 2016, Jimmy Fallon, the host of NBC’s The Tonight Show, appeared in a sketch dressed as Donald Trump, then the presumptive Republican presidential nominee. Wearing a blond wig and three coats of bronzer, he pretended to phone Barack Obama – played by Dion Flynn – to brag about his latest primary win in Indiana. Both men appeared side by side in split screen, facing the camera. Flynn’s straight-man impression of Obama, particularly his soothing, expectant voice, was convincing, while Fallon played the exaggerated caricature that all of Trump’s mimics – and the man himself – settle into.

Three years later, on 5 March 2019, footage of the sketch was posted on the YouTube channel derpfakes under the title The Presidents. The first half of the clip shows the opening 10 seconds or so of the sketch as it originally aired. Then the footage is replayed, except the faces of Fallon and Flynn have been transformed into, seemingly, the real Trump and Obama, delivering the same lines in the same voices, but with features rendered almost indistinguishable from those of the presidents.

The video, uploaded to YouTube by the founder of derpfakes, a 28-year-old Englishman called James (he asked us not to use his surname), is a forgery created by a neural network, a type of “deep” machine-learning model that analyses video footage until it is able algorithmically to transpose the “skin” of one human face on to the movements of another – as if applying a latex mask. The result is known as a deepfake.
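
In outline, the hobbyist technique is an autoencoder trick: one shared encoder learns a compact “face code” from footage of both people, and each person gets a private decoder that learns to paint their own face back from that code. To swap, footage of person A is encoded, then decoded with person B’s decoder. Below is a minimal sketch of the idea, assuming PyTorch; the architecture, sizes and names are illustrative inventions, not the code of any tool mentioned in this piece.

```python
# Toy version of the shared-encoder, twin-decoder face swap (illustrative).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # the shared "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (not shown): each decoder is taught to reconstruct its own
# person's cropped, aligned face from the shared code.
frame_a = torch.rand(1, 3, 64, 64)    # stand-in for a face crop of person A
fake_b = decoder_b(encoder(frame_a))  # A's pose and expression, B's face
```

The “latex mask” effect falls out of the design: decoder B has only ever learned to draw B’s face, so whatever pose and expression the encoder extracts from A’s frame is rendered in B’s skin.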

A deepfake Donald Trump (right) based on talkshow host Jimmy Fallon’s impression of the president (left). Credit: DerpFakes

James’s video wasn’t intended to fool anyone – it was, he says, created “purely for laughs”. But the lifelike rendering of the presidents, along with thousands of similar deepfakes posted on the internet in the past two years, has alarmed many observers, who believe the technology could be used to disgrace politicians and even swing elections. Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.

An early derpfakes clip, which recreated a scene from a Star Wars film, helped to kickstart a community of meme-creating film fans around the world, who use deepfake technology to place actors in films in which they never appeared, often to comic or meaningful effect. A popular subgenre of deepfakes places Nicolas Cage into films such as Terminator 2 and The Sound of Music, or recasts him as every character in Friends. One deepfake convincingly transposes Heath Ledger’s Joker on to the actor’s role in A Knight’s Tale. In February, a video grafting the face of one of China’s best-known actors, Yang Mi, into a 25-year-old Hong Kong television drama, The Legend of the Condor Heroes, went viral, picking up an estimated 240m views before it was removed by Chinese authorities. Its creator wrote on the video-sharing platform Bilibili that he had made the video as a warning.

Since then, deepfake technology has continued to gain momentum. In May, researchers at Samsung’s AI lab in Moscow published “footage” of Marilyn Monroe, Salvador Dalí and the Mona Lisa, each clip generated from one still image. While it is still fairly easy to discern a deepfake from genuine footage, foolproof fabrications appear to be disconcertingly close. Recent electoral upsets have demonstrated the unprecedented power of political entities to microtarget individuals with news and content that confirms their biases. The incentive to use deepfakes to injure political opponents is great.

'Footage' of Salvador Dalí and the Mona Lisa, each clip generated from one still image by Samsung’s AI lab in Moscow. Credit: Samsung AI

There is only one confirmed attempt by a political party to use a deepfake video to influence an election (although a deepfake may also have played a role in a political crisis in Gabon in December). In May 2018, a Flemish socialist party called sp.a posted a deepfake video to its Twitter and Facebook pages showing Trump appearing to taunt Belgium for remaining in the Paris climate agreement. The video, which remains on the party’s social media, is a poor forgery: Trump’s hair is curiously soft-focus, while his mouth moves with a Muppet-like elasticity. Indeed, the video concludes with Trump saying: “We all know that climate change is fake, just like this video,” although this sentence alone is not subtitled in Flemish Dutch. (The party declined to comment, but a spokesperson previously told the site Politico that it commissioned the video to “draw attention to the necessity to act on climate change”.)

But James believes forgeries may have gone undetected. “The idea that deepfakes have already been used politically isn’t so farfetched,” he says. “It could be the case that deepfakes have already been widely used for propaganda.”

At a US Senate intelligence committee hearing in May last year, the Republican senator Marco Rubio warned that deepfakes would be used in “the next wave of attacks against America and western democracies”. Rubio imagined a scenario in which a provocative clip could go viral on the eve of an election, before analysts were able to determine it was a fake. A report in the Washington Times in December claimed that policy insiders and Democratic and Republican senators believe “the Russian president or other actors hostile to the US will rely on deepfakes to throw the 2020 presidential election cycle into chaos”.

Some question the scale of this threat. Russell Brandom, policy editor at the Verge, the US tech news site, argued recently that deepfake propaganda is “a crisis that doesn’t exist”, while the New York Times has called deepfakes “emerging, long-range threats” that “pale in comparison” with established peddlers of political falsity, such as Fox News. But many experts disagree. Eileen Donahoe, the director of the Transatlantic Commission on Election Integrity (TCEI) and an adjunct professor at Stanford University, has been studying the deepfake threat to democracy for the past year. “There is little to no doubt that Russia’s digital disinformation conglomerate has people working on deepfakes,” she says. So far, the TCEI has not seen evidence that the Russians have tried to deploy deepfakes in a political context. “But that doesn’t mean it’s not coming, or that Russia-generated deepfakes haven’t already been tried elsewhere.”

'Those who seek to undermine democracy won’t be deterred by the law'

Ivan is a 33-year-old Russian programmer who, having earned a fortune in the video-game industry, is enjoying an extended sabbatical spent cycling, running and camping near where he lives, on the banks of the Volga. He is the creator of DeepFaceLab, one of the most popular pieces of software used by the public to create forged videos. Ivan, who claims to be an “ordinary programmer” and not a political activist, discovered the technology on Reddit in 2017. The software he used to create his first deepfake left a watermark on his video, which irritated him. After the creator of the software rejected a number of changes Ivan suggested, he decided to create his own program.

In the past 12 months, DeepFaceLab’s popularity has brought Ivan numerous offers of work, including regular approaches from Chinese TV companies. “This is not interesting to me,” he says, via email. For Ivan, creating deepfake software is like solving an intellectual puzzle. Currently, DeepFaceLab can only replace the target’s face below the forehead. Ivan is working to get to the stage where an entire head can be grafted from one body to another. This will allow deepfake makers to assume “full control of another person”, he says, an evolutionary step that “all politicians fear like fire”. But while such technology exists behind closed doors, there is no source code in the public domain. (Ivan cites a 2018 presentation, Deep Video Portraits, delivered at a conference by Stanford researchers, as the gold standard towards which he is working.)

The most sophisticated deepfakes require advanced machine-learning skills and their development is computationally intensive and expensive. One expert estimates the cost to be about £1,000 a day. For an amateur creating fake celebrity pornography, this is a major barrier to entry. But for a government or a well-funded political organisation, the cost is insignificant – and falling every month. Ivan flip-flops in his assessment of the threat. “I do not think that so many stupid rulers… are capable of such complicated schemes as deepfakes,” he says. Then, when asked if politicians and journalists have overestimated the risk of deepfake propaganda, he says: “Did the gods overestimate the risk of giving people fire?”

A training preview from DeepFaceLab, one of the most popular pieces of software used to create forged videos. Nicolas Cage is a popular choice for fan fakes.

James, the founder of derpfakes, uses Ivan’s software to create his fakes. He says it is only a matter of time before “truly convincing” forgeries are created by amateurs, but he believes public awareness of the technology will prevent such footage from being able to “significantly disrupt or interfere” politically. “If I show you the latest Transformers film, you fully understand the world isn’t being attacked by robot aliens and that [the film] has been created using computers,” he says. “But show the same footage to a person from 1900 and the reaction would likely be very different.”

Not everyone shares James’s optimism. In December, the Republican senator Ben Sasse introduced the US’s first bill to criminalise the malicious creation and distribution of deepfakes, describing the threat as “something that keeps our intelligence community up at night”. A similar bill is being debated in New York state, while last month a Chinese law to regulate the use of deepfakes reached its second review before the country’s legislative body. For James, however, legislation cannot halt the rising tide: “Those who seek to undermine democracy or the rights of others won’t be deterred by the laws in another country, or even their own.”

'There will always be an arms race between detection and generation'

Just north of Oxford Circus in central London, 80-odd data analysts work in a four-storey mansion, the lofty rooms of which each contain a blackboard, giving it the feel of a Victorian schoolhouse. Unlike most of London’s tech startups, Faculty chose an office in Marylebone, rather than the industry hub of Shoreditch, due to its proximity to University College London, where many of the company’s employees studied.

For the past year, one of Faculty’s teams has focused exclusively on generating thousands of deepfakes, of varying quality, using all the main deepfake algorithms on the market. The idea is not to sow disinformation, but to compile a library that will help train systems to distinguish real video or audio from fakes. While politicians scrabble to write laws that may protect societies from weaponised deepfakes, startups such as Faculty, whose clients include the Home Office and numerous police forces, hope to inoculate the internet-going public against their effects.

The theory is that a machine-learning detective will adapt quickly as new deepfake technology emerges, whereas human forensics experts will take much longer to get up to speed. The results of Faculty’s deepfake experiments are improving at a pace that has startled the company’s co-founder and CEO, Marc Warner. Earlier this year, the company created an AI-generated audio deepfake, trained on clips of Trump’s speeches, that sounded more like him than some of the best human impersonators. The company’s latest version, Warner says, is almost impossible to distinguish from Trump. “We’re trying to work on this before it’s a large problem, to ensure that we’re prepared,” says Warner, who has tousled hair, tortoiseshell glasses, a dusting of startup founder’s stubble and a PhD in quantum computing. If anything, he argues, the danger posed by this new form of lying has been underestimated. “It’s an extremely challenging problem and it’s likely there will always be an arms race between detection and generation.”
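
Warner won’t detail Faculty’s models, but the broad recipe is conventional supervised learning: frames from genuine footage are labelled 0, frames from the library of generated fakes 1, and a classifier is trained to separate them across as many generation methods as possible. Below is a minimal sketch of that recipe, again assuming PyTorch; the network and numbers are illustrative, not Faculty’s system.

```python
# Toy real-vs-fake frame classifier (illustrative).
import torch
import torch.nn as nn

class FakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 1)  # one logit: how "fake" the frame looks

    def forward(self, x):
        return self.head(self.features(x))

detector = FakeDetector()
optimiser = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Each batch mixes genuine frames (label 0) with fakes made by many
# different generators (label 1), so the detector learns artefacts
# shared across methods rather than the quirks of one algorithm.
frames = torch.rand(8, 3, 128, 128)  # stand-in batch of face crops
labels = torch.tensor([[0.], [0.], [0.], [0.], [1.], [1.], [1.], [1.]])
loss = loss_fn(detector(frames), labels)
loss.backward()
optimiser.step()
```

The arms race Warner describes lives in the training set: every time a new generator appears, its output has to be added to the “fake” pile and the detector retrained.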

Faculty, which is working with the TCEI, is not the only tech company aiming to fight fire with fire. In May 2018, the Pentagon’s Defense Advanced Research Projects Agency awarded three contracts to a nonprofit group called SRI International to work on its “media forensics” research programme. Then there is Amber, a company in New York with an even bolder vision for cleaning up the internet: the creation of a ubiquitous “truth layer” – software embedded in smartphone cameras to act as a kind of watermark, used to verify a video’s authenticity in perpetuity. The technology works by creating a fingerprint at the moment of a film’s recording. It then compares any “playback” of the footage with the original fingerprint to check for a match and provides the viewer with a score that indicates the likelihood of tampering.
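
Amber has not published its design, but the record-then-verify idea can be sketched with ordinary cryptographic hashes. In the toy version below, the chunking, hash-chaining and scoring are all assumptions made for illustration; a production system would also need perceptual hashing, so that routine compression or resizing of honest footage doesn’t register as tampering.

```python
# Toy record-time fingerprint and playback check (illustrative).
import hashlib

def fingerprint(chunks):
    """Hash each chunk of footage, chaining the hashes so that
    reordering or dropping chunks is also detectable."""
    prints, prev = [], b""
    for chunk in chunks:
        digest = hashlib.sha256(prev + chunk).hexdigest()
        prints.append(digest)
        prev = digest.encode()
    return prints

def tamper_score(original_prints, playback_chunks):
    """Fraction of the recording whose fingerprint no longer matches."""
    playback_prints = fingerprint(playback_chunks)
    mismatches = sum(a != b for a, b in zip(original_prints, playback_prints))
    mismatches += abs(len(original_prints) - len(playback_prints))
    return mismatches / max(len(original_prints), 1)

recording = [b"frame-group-1", b"frame-group-2", b"frame-group-3"]
prints = fingerprint(recording)  # stored at the moment of capture

doctored = [b"frame-group-1", b"SWAPPED-FACE!", b"frame-group-3"]
print(tamper_score(prints, recording))  # 0.0  -> untouched
print(tamper_score(prints, doctored))   # ~0.67 -> chunk 2, plus all chained after it
```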

Amber’s CEO, Shamir Allibhai, is driven by a moral belief in the importance of his work. “Society is increasingly in an unfair fight against bad actors wielding powerful AI tools for ill intent,” he says. “A post-fact world could undo much of the last century’s progress toward peace, stability and prosperity, driven in part by a belief in evidence-based conclusions.” Without detection tools powerful enough to match the deepfakes, Allibhai believes society will be forced to become more cynical. But that cynicism presents an additional risk, enabling powerful people to discredit authentic video – dismissing potentially damaging footage as fakery.

For derpfakes’ James, however, cynicism is the perfect protection. “Erosion of public trust in everything people see on the internet is surely a positive for society,” he says. “Far better than the assumption of everything as truth as the default.” James is also sceptical of companies such as Faculty and Amber, claiming that authentication would detect only amateur deepfakes. “Authentication hasn’t completely stopped any other sort of crime or nefarious activity. I have little reason to believe it would stop anyone working at a serious enough level.”

There is also the issue that the original disinformation can have a much greater effect than its subsequent debunking. In March, the Conservative political activist Theodora Dickinson posted a video alongside the tweet: “In response to the New Zealand mosque attacks, Islamists have burned down a Christian church in Pakistan. Why is this not being shown on @BBCNews?!” In fact, the video showed an attack on a church in Egypt in 2013. Despite scores of Twitter users pointing out the error, Dickinson left her tweet uncorrected (it has since been deleted) and continued to use the site. Even once she knew the claim wasn’t true, she apparently still considered it a point worth making. She did not respond to a request for comment.

“Once a political narrative is shifted, it’s almost impossible to bring it back to its original trajectory,” says Donahoe of the TCEI. This, for her, is the limitation of authentication tools such as those being built by Faculty and Amber. “Claiming a deepfake is not real or true can’t completely erase its impact.” Whatever the creators of deepfakes and the software that builds them may say, the loss of citizen confidence in the trustworthiness of information is destructive for democracy.

Media literacy can only go so far; humans often believe first, then look for things that support those beliefs. As elections loom in Israel, Canada, Europe and the US, Donahoe wants political leaders and candidates of all stripes to pledge not to use deepfakes against their opponents and to disavow any deepfakes put out on their behalf, even if their campaigns had nothing to do with them.

“We have to inoculate the public before deepfakes affect elections,” she says. “People have a right to choose their government and representatives. We all need to stand up to protect this right from interference.”

How to spot a deepfake

A lack of blinking: Many older deepfake methods failed to mimic the rate at which a person blinks – a problem recent programs have fixed. (A sketch of this cue follows the list.)

Face wobble: Shimmer or distortion is a giveaway. Also look for abnormal movements from fixed objects in the frame – a microphone stand or a lamp, for example.

Strange behaviour: An individual doing something implausible or out of character should always be a red flag.

But obvious fakes may not be what they seem: it is easy to sow doubt about real footage by adding an inconsistency.

Source: Alexander Adam, data scientist, Faculty
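
The blink cue, in particular, reduces to a well-known heuristic from facial-landmark research: the eye aspect ratio (EAR) collapses when an eye closes, so a long clip with almost no EAR dips is suspect. The sketch below is a rough illustration of that idea rather than anyone’s production detector; the six-landmark ordering follows the common 68-point convention.

```python
# Toy blink counter built on the eye aspect ratio (illustrative).
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered corner, top,
    top, corner, bottom, bottom (the usual 68-point scheme)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # two vertical gaps over the horizontal width: low when the eye is shut
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count dips of the eye aspect ratio below the 'closed' threshold."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# A real face blinks roughly 15-20 times a minute; far fewer dips over a
# long clip was a red flag for early deepfakes (newer ones fixed this).
ears = [0.31, 0.30, 0.12, 0.10, 0.29, 0.32, 0.30, 0.11, 0.30]  # stand-in EAR series
print(count_blinks(ears))  # 2
```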
