A deepfake bot is being used to “undress” underage girls
A technology similar to DeepNude, the 2019 app that shut down shortly after launch, is now spreading unfettered on Telegram.
Update 10/28: Since the publication of this article, the deepfake bot on Telegram has been blocked on iOS for violating App Store guidelines, according to the researchers. The main Telegram channel that hosted the bot and an affiliated channel for sharing its creations have also been removed.
In June 2019, Vice uncovered the existence of a disturbing app that used AI to “undress” women. Called DeepNude, it allowed users to upload a photo of a clothed woman for $50 and get back an image in which she appeared to be naked. In actuality, the software was using generative adversarial networks, the algorithm behind deepfakes, to swap the woman’s clothes for a highly realistic nude body. The more scantily clad the victim, the better the results. It didn’t work on men.
Within 24 hours, the Vice article had inspired such a backlash that the creators of the app quickly took it down. The DeepNude Twitter account announced that no other versions would be released, and no one else would get access to the technology.
But a new investigation from Sensity AI (previously Deeptrace Labs), a cybersecurity company focused on detecting the abuse of manipulated media, has now found very similar technology being used by a publicly available bot on the messaging app Telegram. This time it has an even simpler user interface: anyone can send the bot a photo through the Telegram mobile or web app and receive a nude back within minutes. The service is also completely free, though users can pay a starting fee of 100 rubles (approximately $1.50) for perks such as removing the watermark from the “stripped” photos or skipping the processing queue.
As of July 2020, the bot had already been used to target and “strip” at least 100,000 women, most of whom likely had no idea the images existed. “Usually it’s young girls,” says Giorgio Patrini, the CEO and chief scientist of Sensity, who coauthored the report. “Unfortunately, sometimes it’s also quite obvious that some of these people are underage.”
The gamification of harassment
The deepfake bot, launched on July 11, 2019, is connected to seven Telegram channels with a combined total of over 100,000 members. (This number doesn’t account for duplicate membership across channels, but the main group alone has more than 45,000 unique members.)
The central channel is dedicated to hosting the bot itself, while the others are used for functions like technical support and image sharing. The image-sharing channels include interfaces that people can use to post and judge their nude creations. The more a photo gets liked, the more its creator is rewarded with tokens to access the bot’s premium features. “The creator will receive an incentive as if he’s playing a game,” Patrini says.
The community, which is easily discoverable via search and social media, has steadily grown in membership over the last year. A poll of 7,200 users showed that roughly 70% of them are from Russia or other Russian-speaking countries. The victims, however, seem to come from a broader range of countries, including Argentina, Italy, Russia, and the US. The majority of them are private individuals whom the bot’s users say they know in real life or whom they found on Instagram. The researchers were able to identify only a small handful of the women and tried to contact them to understand their experiences. None of the women responded, Patrini says.
The researchers also reached out to Telegram and to relevant law enforcement agencies, including the FBI. Telegram did not respond to either their note or MIT Technology Review’s follow-up request for comment. Patrini says they also haven’t seen “any tangible effect on these communities” since contacting the authorities.
Deepfake revenge porn
Abusers have been using pornographic imagery to harass women for some time. In 2019, a study from the American Psychological Association found that one in 12 women ends up a victim of revenge porn at some point in her life. A study from the Australian government, looking at Australia, the UK, and New Zealand, found that figure to be as high as one in three. Deepfake revenge porn adds a whole new dimension to the harassment, because victims often don’t even realize such images exist.
There are also many cases in which deepfakes have been used to target celebrities and other high-profile individuals. The technology first grew popular in the deep recesses of the internet as a way to face-swap celebrities into porn videos, and it has since been used in harassment campaigns to silence female journalists. Patrini says he’s spoken with influencers and YouTubers, too, who’ve had deepfaked pornographic images of them sent directly to their sponsors, causing them immense emotional distress and financial strain.
Patrini suspects these targeted attacks could get a whole lot worse. He and his fellow researchers have already seen the technology advance and spread. For example, they discovered yet another ecosystem of over 380 pages dedicated to the creation and sharing of explicit deepfakes on the Russian social-media platform VK. (After the publication of this article, a spokesperson from VK sent MIT Technology Review a statement: “VK doesn’t tolerate such content or links on the platform and blocks communities that distribute them. We will run an additional check and block inappropriate content and communities.”) The researchers also found that the “undressing” algorithm is starting to be applied to videos, such as footage of bikini models walking down a runway. Right now, the algorithm must be applied frame by frame. “It’s very rudimentary at the moment,” Patrini says. “But I’m sure people will perfect it and also offer it as a licensed service.”
Unfortunately, there are still few ways to stop this kind of activity, but awareness of the issue is growing. Companies like Facebook and Google, and researchers who produce tools for deepfake creation, have begun to invest more seriously in countermeasures like automated deepfake detection. Last year, the US Congress also introduced a bill that would create a mechanism for victims to seek legal recourse for reputational damage.
In the meantime, Patrini says, Sensity will continue to track and report these types of malicious deepfakes, and seek to understand more about the motivations of those who create them and the impacts on victims’ lives. “Indeed, the data we share in this report is only the tip of the iceberg,” he says.
Update: An official statement from the Russian social media platform VK has been added to the article.