Artificial intelligence

The year deepfakes went mainstream

In 2020, AI-synthesized media began moving out of the darker corners of the internet.
A collage of the best deepfakes of 2020. (Ms Tech)

In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user by the name of “deepfakes” was posting nonconsensual fake porn videos, using an AI algorithm to swap celebrities’ faces into real porn. Cole sounded the alarm on the phenomenon right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” the clothes off any woman in a photo.

Since then, deepfakes have had a bad rap, and rightly so. The vast majority of them are still used for fake pornography. A female investigative journalist was severely harassed and temporarily silenced in this way, and more recently, a female poet and novelist was frightened and shamed. There’s also the risk that political deepfakes will generate convincing fake news that could wreak havoc in unstable political environments.

But as the algorithms for manipulating and synthesizing media have grown more powerful, they’ve also given rise to positive applications—as well as some that are humorous or mundane. Here is a roundup of some of our favorites, in rough chronological order, and why we think they’re a sign of what’s to come.

Whistleblower shielding

Left: a photo grid of Maxim shot at many angles. Right: a photo grid of his deepfake cover shot at the same angles. (Teus Media)

In June, Welcome to Chechnya, an investigative film about the persecution of LGBTQ individuals in the Russian republic, became the first documentary to use deepfakes to protect its subjects’ identities. The activists fighting the persecution, who served as the main characters of the story, lived in hiding to avoid being tortured or killed. After exploring many methods to conceal their identities, director David France settled on giving them deepfake “covers.” He asked other LGBTQ activists from around the world to lend their faces, which were then grafted onto the faces of the people in his film. The technique allowed France to preserve the integrity of his subjects’ facial expressions, and thus their pain, fear, and humanity. In total, the film shielded 23 individuals, pioneering a new form of whistleblower protection.

Revisionist history

A split screen: actor Lewis D. Wheeler on the left, deepfake Nixon on the right. (Panetta and Burgund)

In July, two MIT researchers, Francesca Panetta and Halsey Burgund, released a project to create an alternative history of the 1969 Apollo moon landing. Called In Event of Moon Disaster, it centers on a deepfake of President Richard Nixon delivering the speech he would have given had the momentous occasion not gone according to plan. The researchers partnered with two separate companies for deepfake audio and video, and hired an actor to provide the “base” performance. They then ran his voice and face through the two types of software, and stitched them together into a final deepfake Nixon.

While this project demonstrates how deepfakes could create powerful alternative histories, another one hints at how deepfakes could bring real history to life. In February, Time magazine re-created Martin Luther King Jr.’s March on Washington for virtual reality to immerse viewers in the scene. The project didn’t use deepfake technology, but Chinese tech giant Tencent later cited it in a white paper about its plans for AI, saying deepfakes could be used for similar purposes in the future.

Memes


In late summer, the memersphere got its hands on simple-to-make deepfakes and unleashed the results into the digital universe. One viral meme in particular, called “Baka Mitai,” quickly surged as people learned to use the technology to create their own versions. The specific algorithm powering the madness came from a 2019 research paper describing a method for animating a photo of one person’s face with a video of someone else’s. The effect isn’t high quality by any stretch of the imagination, but it sure produces quality fun. The phenomenon is not entirely surprising; play and parody have been a driving force in the popularization of deepfakes and other media manipulation tools. It’s also why some experts emphasize the need for guardrails to prevent satire from blurring into abuse.

Sports ads

Busy schedules make it hard to get sports stars in the same room at the best of times. In the middle of a lockdown, it’s impossible. So when you need to film a commercial in LA featuring people in quarantine bubbles across the country, the only option is to fake it. In August, the streaming service Hulu ran an ad to promote the return of sports to its platform, starring NBA player Damian Lillard, WNBA player Skylar Diggins-Smith, and Canadian hockey player Sidney Crosby. We see these stars giving up their sourdough baking and returning to their sports, wielding basketballs and hockey sticks. Except we don’t: the faces of those stars were superimposed onto body doubles using deepfake tech. The algorithm was trained on footage of the players captured over Zoom. Computer trickery has been used to fake this kind of thing for years, but deepfakes make it easier and cheaper than ever, and this year of remote everything has given the tech a boost. Hulu wasn’t alone: other advertisers, including ESPN, experimented with deepfakes as well.

Political campaigns

In September, during the lead-up to the US presidential election, the nonpartisan advocacy group RepresentUs released a pair of deepfake ads. They featured fake versions of Russian president Vladimir Putin and North Korean leader Kim Jong-un delivering the same message: that neither needed to interfere with US elections, because America would ruin its democracy by itself. This wasn’t the first use of deepfakes during a political campaign. In February, Indian politician Manoj Tiwari used deepfakes in a campaign video to make it appear as if he were speaking Haryanvi, the Hindi dialect spoken by his target voters. But RepresentUs notably flipped the script on the typical narrative around political deepfakes. While experts often worry about the technology’s ability to sow confusion and disrupt elections, the group sought to do the exact opposite: raise awareness of voter suppression to protect voting rights and increase turnout.

TV shows

If deepfake commercials and one-off stunts are starting to feel familiar, trust the makers of South Park to take it to extremes. In October, Trey Parker and Matt Stone debuted their new creation, Sassy Justice, the first deepfake TV show. The weekly satirical show revolves around the character Sassy Justice, a local news reporter with a deepfaked Trump face. Sassy interviews deepfaked figures such as Jared Kushner (with Kushner’s face superimposed on a child) and Al Gore. With Sassy Justice, deepfakes have gone beyond marketing gimmick or malicious deception to hit the cultural mainstream. Not only is the technology used to create the characters, but it is the subject of satire itself. In the first episode, Sassy “Trump” Justice, playing a consumer advocate, investigates the truth behind “deepfake news.”
