
We are all AI’s free data workers

Plus: DeepMind’s game-playing AI just found another way to make code faster.

Contractors sit over a larger-than-life keyboard while checking streams of colored data. (Illustration: Anna Sorokina)

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

This week I’ve been thinking a lot about the human labor behind fancy AI models. 

The secret to making AI chatbots sound smart and spew less toxic nonsense is a technique called reinforcement learning from human feedback (RLHF), which uses input from people to improve the model’s answers. 

It relies on a small army of human data annotators who evaluate whether a string of text makes sense and sounds fluent and natural. They decide whether a response should be kept in the AI model’s database or removed. 
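To make that concrete, here is a minimal sketch of the preference-learning step at the heart of RLHF. Everything in it is illustrative: reward_model is a hypothetical stand-in for whatever network a lab actually uses to score a response, and the loss simply pushes the annotator-preferred answer to score higher than the rejected one.

```python
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    # Annotators compared two candidate responses and picked "chosen".
    # reward_model is a hypothetical stand-in that returns a scalar
    # score (a torch tensor) for a (prompt, response) pair.
    r_chosen = reward_model(prompt, chosen)
    r_rejected = reward_model(prompt, rejected)
    # Bradley-Terry-style objective: make the preferred response
    # score higher, i.e. maximize sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The chatbot is then fine-tuned, typically with a reinforcement-learning method such as PPO, to produce the kinds of answers this learned reward model rates highly.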

Even the most impressive AI chatbots require thousands of human work hours to behave in a way their creators want them to, and even then they do it unreliably. The work can be brutal and upsetting, as we will hear this week when the ACM Conference on Fairness, Accountability, and Transparency (FAccT) gets underway. It’s a conference that brings together research on things I like to write about, such as how to make AI systems more accountable and ethical.

One panel I am looking forward to features AI ethics pioneer Timnit Gebru, who co-led Google’s ethical AI team before she was fired. Gebru will be speaking about how data workers in Ethiopia, Eritrea, and Kenya are exploited to clean up online hate and misinformation. Data annotators in Kenya, for example, were paid less than $2 an hour to sift through reams of unsettling content on violence and sexual abuse in order to make ChatGPT less toxic. These workers are now unionizing to gain better working conditions. 

In an MIT Technology Review series last year, we explored how AI is creating a new colonial world order, and data workers are bearing the brunt of it. Shining a light on exploitative labor practices around AI has become even more urgent and important with the rise of popular AI chatbots such as ChatGPT, Bing, and Bard and image-generating AI such as DALL-E 2 and Stable Diffusion. 

Data annotators are involved in every stage of AI development, from training models to verifying their outputs to offering feedback that makes it possible to fine-tune a model after it has been launched. They are often forced to work at an incredibly rapid pace to meet high targets and tight deadlines, says Srravya Chandhiramowuli, a PhD researcher studying labor practices in data work at City, University of London.

“This notion that you can build these large-scale systems without human intervention is an absolute fallacy,” says Chandhiramowuli.

Data annotators give AI models the important context they need to make decisions at scale and to seem sophisticated. 

Chandhiramowuli tells me of one case in which a data annotator in India had to differentiate between images of soda bottles and pick out the ones that looked like Dr. Pepper. But Dr. Pepper is not sold in India, and the onus was on the annotator to figure out what it was. 

The expectation is that annotators figure out the values that are important to the company, says Chandhiramowuli. “They’re not just learning these distant faraway things that are absolutely meaningless to them—they’re also figuring out not only what those other contexts are, but what the priorities of the system they’re building are,” she says.

In fact, we are all data laborers for big technology companies, whether we are aware of it or not, argue researchers at the University of California, Berkeley, the University of California, Davis, the University of Minnesota, and Northwestern University in a new paper presented at FAccT.

Text and image AI models are trained using huge data sets that have been scraped from the internet. Those data sets include our personal data and copyrighted works by artists, and the data we have created is now forever part of an AI model built to make a company money. We unwittingly contribute our labor for free by uploading photos to public sites, upvoting comments on Reddit, labeling images on reCAPTCHA, and performing online searches.

At the moment, the power imbalance is heavily skewed in favor of some of the biggest technology companies in the world. 

To change that, we need nothing short of a data revolution and regulation. The researchers argue that one way people can take back control of their online existence is to advocate for transparency about how data is used and for mechanisms that give people the right to offer feedback and to share in the revenue generated from their data. 

Even though this labor forms the backbone of modern AI, data work remains chronically underappreciated and invisible around the world, and annotators’ wages remain low. 

“There is absolutely no recognition of what the contribution of data work is,” says Chandhiramowuli. 

Deeper Learning

The future of generative AI and business

What are you doing on Wednesday? Why not join me and Will Douglas Heaven, MIT Technology Review’s senior editor for AI, at EmTech Next, where a great panel of experts will help us analyze how the AI revolution will change business? 

My sessions will look at AI in cybersecurity, the importance of data, and the new rules we need for AI. Tickets are still available here.

To whet your appetite, my colleague David Rotman has a deep dive on generative AI and how it is going to change the economy. Read it here.

Even Deeper Learning

DeepMind’s game-playing AI just found another way to make code faster

Using AlphaDev, a new version of its game-playing AI AlphaZero, the UK-based firm (recently renamed Google DeepMind after an April merger with its sister company’s AI lab) has discovered a way to sort items in a list up to 70% faster than the best existing method. It has also found a way to speed up a key algorithm used in cryptography by 30%. 

Why this matters: As computer chips powering AI models are approaching their physical limits, computer scientists are having to find new and innovative ways of optimizing computing. These algorithms are among the most common building blocks in software. Small speed-ups can make a huge difference, cutting costs and saving energy. Read more from Will Douglas Heaven here.
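AlphaDev’s discoveries were tiny, fixed-size sorting routines (contributed to LLVM’s libc++ as low-level assembly) that gain speed partly by avoiding unpredictable branches. As a rough sketch of the idea rather than DeepMind’s actual code, here is a branch-free three-element sorting network in Python, where each min/max pair plays the role of the conditional-move instructions the real routines rely on.

```python
def sort3(a, b, c):
    """A fixed sequence of three compare-exchanges sorts any three
    values, with no data-dependent branches for the CPU to mispredict."""
    a, b = min(a, b), max(a, b)  # compare-exchange (a, b)
    b, c = min(b, c), max(b, c)  # compare-exchange (b, c)
    a, b = min(a, b), max(a, b)  # compare-exchange (a, b)
    return a, b, c

assert sort3(3, 1, 2) == (1, 2, 3)
```

Because the sequence of comparisons never changes, routines like this can be unrolled into a handful of straight-line instructions, which is where shaving even one instruction, as AlphaDev did, pays off at scale.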

Bits and Bytes

Ron DeSantis ad uses AI-generated photos of Donald Trump and Anthony Fauci
The US presidential election is going to get messy. Exhibit A: A campaign backing Ron DeSantis as the Republican presidential nominee in 2024 has used an AI-generated deepfake to attack rival Donald Trump. The image depicts Trump kissing Anthony Fauci, a former White House chief medical advisor loathed by many on the right. (AFP)

Humans are biased, but generative AI is worse 
This visual investigation shows how the open-source text-to-image model Stable Diffusion amplifies stereotypes about race and gender. The piece is a great visualization of research showing that the AI model presents a more biased worldview than reality. For example, women made up just 3% of the images generated for the keyword “judge,” when in reality 34% of US judges are women. (Bloomberg)
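The method behind audits like this one is easy to sketch: generate a large batch of images for an occupation prompt, label the perceived gender in each, and compare the tally with real-world statistics. Below is a hypothetical sketch using the open-source diffusers library; classify_gender is a made-up stand-in for the classifier (or human raters) a real audit would use, and the checkpoint name is just one publicly available Stable Diffusion release.

```python
from collections import Counter
from diffusers import StableDiffusionPipeline

def classify_gender(image):
    # Hypothetical helper: a real audit would use a trained classifier
    # or human raters to label the perceived gender in each image.
    raise NotImplementedError

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
counts = Counter()
for _ in range(100):
    image = pipe("a photo of a judge").images[0]  # one generated image
    counts[classify_gender(image)] += 1
print(counts)  # e.g. compare counts["woman"] / 100 with the real 34%
```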

Meta is throwing generative AI at everything
After a rocky year of layoffs, Meta’s CEO, Mark Zuckerberg, told staff that the company intends to integrate generative AI into its flagship products, such as Facebook and Instagram. People will, for example, be able to use text prompts to edit photos and share them on Instagram Stories. The company is also developing AI assistants or coaches that people can interact with. (The New York Times)

A satisfying use of generative AI
Watch someone fixing things using generative AI in photo-editing software. 

