What if we could just ask AI to be less biased?
Plus: ChatGPT is about to revolutionize the economy. We need to decide what that looks like.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it’s a white man with glasses.
Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities.
Although I’ve written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”
And the bias problem runs even deeper than you might think, extending into the broader world created by AI. These models are built by American companies and trained on North American data, so when they’re asked to generate even mundane everyday items, from doors to houses, they produce objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.
As the world becomes increasingly filled with AI-generated imagery, we are going to mostly see images that reflect America’s biases, culture, and values. Who knew AI could end up being a major instrument of American soft power?
So how do we address these problems? A lot of work has gone into fixing biases in the data sets AI models are trained on. But two recent research papers propose interesting new approaches.
What if, instead of making the training data less biased, you could simply ask the model to give you less biased answers?
A team of researchers at the Technical University of Darmstadt, Germany, and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the types of images you want. For example, you can generate stock photos of CEOs in different settings and then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.
As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which allows users to guide how the AI system generates images of people and edit the results.
The AI system stays very close to the original image, says Kristian Kersting, a computer science professor at TU Darmstadt who participated in the work.
This method lets people create the images they want without having to undertake the cumbersome and time-consuming task of trying to improve the biased data set that was used to train the AI model, says Felix Friedrich, a PhD student at TU Darmstadt who worked on the tool.
However, the tool is not perfect. Changing the images for some occupations, such as “dishwasher,” didn’t work as well because the word means both a machine and a job. The tool also only works with two genders. And ultimately, the diversity of the people the model can generate is still limited by the images in the AI system’s training set. Still, while more research is needed, this tool could be an important step in mitigating biases.
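If you’re curious how semantic guidance looks in practice, here is a minimal sketch built on the semantic-guidance pipeline the Darmstadt group contributed to Hugging Face’s diffusers library. It illustrates the underlying technique rather than the Fair Diffusion tool itself, and the model name, parameter names, and values are assumptions that may vary between library versions.

```python
# A minimal sketch of semantic guidance, not the Fair Diffusion tool itself.
# Assumes the SemanticStableDiffusionPipeline shipped with Hugging Face's
# diffusers library; exact parameter names may differ across versions.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Left alone, a prompt like this tends to produce a very homogeneous set of faces.
prompt = "a photo of the face of a CEO"

out = pipe(
    prompt=prompt,
    guidance_scale=7.5,
    # Editing concepts steer the generation along semantic directions while
    # keeping the image close to what the plain prompt would have produced.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],  # away from the first, toward the second
    edit_guidance_scale=[4.0, 4.0],
    edit_warmup_steps=[10, 10],
    edit_threshold=[0.95, 0.95],
)
out.images[0].save("ceo_edited.png")
```

The key design choice is that the edit happens at generation time: instead of retraining the model or cleaning its training data, the editing prompts push the diffusion process toward or away from concepts while everything else in the image stays close to the original.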
A similar technique also seems to work for language models. Research from the AI lab Anthropic shows how simple instructions can steer large language models to produce less toxic content, as my colleague Niall Firth reported recently. The Anthropic team tested different language models of varying sizes and found that if the models are large enough, they self-correct for some biases after simply being asked to.
Researchers don’t know why text- and image-generating AI models do this. The Anthropic team thinks it might be because larger models have larger training data sets, which include lots of examples of biased or stereotypical behavior—but also examples of people pushing back against this biased behavior.
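For language models, the intervention Anthropic studied amounts to asking nicely in the prompt. Here is a minimal sketch of that idea using a small, openly available instruction-tuned model from Hugging Face; the model choice and the wording of the instruction are illustrative assumptions, and Anthropic’s finding is that only much larger models reliably self-correct this way.

```python
# A minimal sketch of instruction-based debiasing for a language model.
# Anthropic's result applies to very large models; the small model here
# is only meant to show the shape of the technique.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

question = "Describe a typical CEO."

# Plain prompt versus a prompt that explicitly asks the model to avoid stereotypes.
plain = generator(question, max_new_tokens=60)[0]["generated_text"]
debiased = generator(
    "Please answer in a way that does not rely on stereotypes about "
    "gender, race, or age. " + question,
    max_new_tokens=60,
)[0]["generated_text"]

print("Plain:   ", plain)
print("Debiased:", debiased)
```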
AI tools are becoming increasingly popular for generating stock images. Tools like Fair Diffusion could be useful for companies that want their promotional pictures to reflect society’s diversity, says Kersting.
These methods of combating AI bias are welcome—and raise the obvious question of whether they should be baked into the models from the start. At the moment, the best generative AI tools we have amplify harmful stereotypes on a large scale.
It’s worth remembering that bias isn’t something that can be fixed with clever engineering alone. As researchers at the US National Institute of Standards and Technology (NIST) pointed out in a report last year, there’s more to bias than data and algorithms. We need to investigate the way humans use AI tools and the broader societal context in which they are used, all of which can contribute to the problem of bias.
Effective bias mitigation will require a lot more auditing, evaluation, and transparency about how AI models are built and what data has gone into them, according to NIST. But in this frothy generative AI gold rush we’re in, I fear that might take a back seat to making money.
Deeper Learning
ChatGPT is about to revolutionize the economy. We need to decide what that looks like.
Since OpenAI released its sensational text-generating chatbot ChatGPT last November, app developers, venture-backed startups, and some of the world’s largest corporations have been scrambling to make sense of the technology and mine the anticipated business opportunities.
Productivity boom or bust: While companies and executives see a clear chance to cash in, the likely impact of the technology on workers and the economy as a whole is far less obvious.
In this story, my colleague David Rotman explores one of the biggest questions surrounding the new tech: Will ChatGPT make the already troubling income and wealth inequality in the US and many other countries even worse? Or could it in fact help? Read more here.
Bits and Bytes
Google just launched Bard, its answer to ChatGPT—and it wants you to make it better
Google has entered the chatroom. (MIT Technology Review)
The bearable mediocrity of Baidu’s ChatGPT competitor
Ernie Bot, Baidu’s new chatbot, is okay. Not mind-blowing, but good enough. In China Report, our weekly newsletter on Chinese tech, my colleague Zeyi Yang reviews the chatbot and looks at what’s next for it. (MIT Technology Review)
OpenAI had to shut down ChatGPT to fix a bug that exposed user chat titles
It was only a matter of time before this happened. The popular chatbot was temporarily disabled as OpenAI tried to fix a bug that came from open-source code. (Bloomberg)
Adobe has entered the generative AI game
Adobe, the company behind photo editing software Photoshop, announced it has made an AI image generator that doesn’t use artists’ copyrighted work. Artists say AI companies have stolen their intellectual property to train generative AI models and are suing them to prove it, so this is a big development.
Conservatives want to build a chatbot of their own
Conservatives in the US have accused OpenAI of giving ChatGPT a liberal bias. While it’s unclear whether that’s a fair accusation, OpenAI told The Algorithm last month that it is working on building an AI system that better reflects different political ideologies. Others have beaten it to the punch. (The New York Times)
The case for slowing down AI
This story pushes back against common arguments for the fast pace of AI development: that technological progress is inevitable, that we need to beat China, and that making AI more capable will also make it safer. Instead, it makes a case that sounds radical amid today’s AI boom: we need to slow down development in order to get the technology right and minimize harm. (Vox)
The swagged-out pope is an AI fake—and an early glimpse of a new reality
No, the Pope is not wearing Prada. Viral images of the “Balenciaga pope” wearing a white puffer jacket were generated using the AI image generator Midjourney. As AI image generators edge closer to generating realistic images of people, we’re going to see more and more images of real people that will fool us. (The Verge)