People are hiring out their faces to become deepfake-style marketing clones
AI-powered characters based on real people can star in thousands of videos and say anything, in any language.
Like many students, Liri has had several part-time jobs. A 23-year-old in Israel, she does waitressing and bartending gigs in Tel Aviv, where she goes to university.
She also sells cars, works in retail, and conducts job interviews and onboarding sessions for new employees as a corporate HR rep. In Germany.
Liri can juggle so many jobs, in multiple countries, because she has hired out her face to Hour One, a startup that uses people’s likenesses to create AI-voiced characters that then appear in marketing and educational videos for organizations around the world. It is part of a wave of companies overhauling the way digital content is produced. And it has big implications for the human workforce.
Liri does her waitressing and bar work in person, but she has little idea what her digital clones are up to. “It is definitely a bit strange to think that my face can appear in videos or ads for different companies,” she says.
Hour One is not the only company taking deepfake tech mainstream, using it to produce mash-ups of real footage and AI-generated video. Some have used professional actors to add life to deepfaked personas. But Hour One doesn’t ask for any particular skills. You just need to be willing to hand over the rights to your face.
Character driven
Hour One is building up a pool of what it calls “characters.” It says it has around 100 on its books so far, with more being added each week. “We’ve got a queue of people that are dying to become these characters,” says Natalie Monbiot, the company’s head of strategy.
Anyone can apply to become a character. Like a modeling agency, Hour One filters through applicants, selecting those it wants on its books. The company is aiming for a broad sample of characters that reflect the ages, genders, and racial backgrounds of people in the real world, says Monbiot. (Currently, around 80% of its characters are under 50 years old, 70% are female, and 25% are white.)
To create a character, Hour One uses a high-resolution 4K camera to film a person talking and making different facial expressions in front of a green screen. And that’s it for the human part of the performance. Plugging the resulting data into AI software that works in a similar way to deepfake tech, Hour One can generate an endless amount of footage of that person saying whatever it wants, in any language.
Hour One’s clients pay the company to use its characters in promotional or commercial video. They select a face, upload the text they want it to say, and get back a video of what looks like a real person delivering that script to a camera. The quickest service uses text-to-speech software to generate synthetic voices, which are synced with the characters’ mouth movements and facial expressions. Hour One also offers a premium service in which the audio is recorded by professional voice actors and then fitted to the character’s movements in the video.

Hour One says it has more than 40 clients, including real estate, e-commerce, digital health, and entertainment firms. One major client is Berlitz, an international language school that provides teacher-led video courses in dozens of languages.
According to Monbiot, Berlitz wanted to increase the number of videos it offered but struggled to do so using real human actors. They had to have production crews creating the same setup with the same actor over and over again, she says: “They found it really unsustainable. We’re talking about thousands of videos.”
Berlitz now works with Hour One to generate hundreds of videos in minutes. “We’re replacing the studio,” says Monbiot. “A human being doesn’t need to waste their time filming.”
Another early example of the technology in action is Alice Receptionist, a company that provides firms with an avatar on a screen to handle visitors’ queries, replacing the role of a human receptionist in a range of physical locations in the US. Hour One is working with Alice Receptionist to update its video footage of human actors so that the digital receptionists can be made to say different things in different languages without having to reshoot hours of video.
Liri, like everyone on Hour One’s books, receives a micropayment every time a client licenses a video that uses her face. Monbiot won’t say exactly how large these payments are except that it’s dollars, not cents. “I can’t say that anyone today is making a living doing this,” she says. “But we think if all goes well it will be a viable way to make an income.”
By removing the need for film crews, studio technicians, and—for all but a few minutes—actors, Hour One’s technology is a boon to companies wanting to scale up video production, even as it offers a bit of easy money to a handful of people like Liri. But some are troubled by the implications for the future of work.
Changing roles
“This looks like a fairly extreme case of technology scaling back the human’s role in a particular work process,” says Jessie Hammerling at the Center for Labor Research and Education at the University of California, Berkeley, who studies the impact of new technologies on work. Automation doesn’t always eliminate human roles entirely, but it does change those roles in ways that affect people’s ability to earn a fair wage or turn a job into a long-term career, she says.
Hammerling notes that letting companies reuse footage of an actor, shot once, across many video projects will reduce the availability of this kind of acting work. According to SAG-AFTRA, a union for US movie, television, and radio performers, many actors do promotional and marketing work for clients like those now working with Hour One.
SAG-AFTRA says it is important that people hiring out their likeness to firms like Hour One be able to maintain control over how that likeness is used.
“For a lot of talent, their likenesses are valuable assets that warrant proper protection and compensation for their use,” says a union spokesperson. “There is a risk of being put into content they may object to or that may conflict with other work.”
Hour One says it is mindful of these concerns. The firm does not give people a say in how their likeness will be used or what words will be put into their mouths, but it has an ethics policy specifying that it will not work with certain industries. “We’re pretty conservative about the types of businesses that we work with,” says Monbiot. That means no gambling, no sex, and no politics.
Liri doesn’t worry too much. She says she trusts Hour One not to use her face for anything that might make her feel uncomfortable. She even recommended the gig to her friends. “I’ve had friends send me videos they’ve seen my face in, which felt very strange,” she says. “All of a sudden, I realized this thing is for real.”