ChatGPT is OpenAI’s latest fix for GPT-3. It’s slick but still spews nonsense

The new version of the company's large language model makes stuff up—but can also admit when it's wrong.

" "
Stephanie Arnett/MITTR

Buzz around GPT-4, the anticipated but as-yet-unannounced follow-up to OpenAI’s groundbreaking large language model, GPT-3, is growing by the week. But OpenAI is not yet done tinkering with the previous version.

The San Francisco-based company has released a demo of a new model called ChatGPT, a spin-off of GPT-3 that is geared toward answering questions via back-and-forth dialogue. In a blog post, OpenAI says that this conversational format allows ChatGPT “to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

ChatGPT appears to address some of the best-known problems with large language models, such as their tendency to make things up and to produce harmful responses, but it is far from a full fix—as I found when I got to try it out. This suggests that GPT-4 won’t be either.

In particular, ChatGPT—like Galactica, Meta’s large language model for science, which the company took offline earlier this month after just three days—still makes stuff up. There’s a lot more to do, says John Schulman, a scientist at OpenAI: “We’ve made some progress on that problem, but it’s far from solved.”

All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn’t know what it’s talking about. “You can say ‘Are you sure?’ and it will say ‘Okay, maybe not,’” says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won’t try to answer questions about events that took place after 2021, for example. It also won’t answer questions about individual people.

ChatGPT is a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce text that was less toxic. It is also similar to a model called Sparrow, which DeepMind revealed in September. All three models were trained using feedback from human users.

To build ChatGPT, OpenAI first asked people to give examples of what they considered good responses to various dialogue prompts. These examples were used to train an initial version of the model. Human judges then scored this model’s responses, and Schulman and his colleagues fed those scores into a reinforcement learning algorithm, which trained the final version of the model to produce more high-scoring responses. OpenAI says that early users find its responses better than those produced by the original GPT-3.
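In rough outline, that feedback loop looks something like the toy sketch below. Everything in it is invented for illustration (the prompt, the two canned responses, and the judges’ scores), and the single weight update is only a crude stand-in for the reinforcement learning algorithm OpenAI actually uses; it is not the company’s code.

import random

# Toy stand-ins, not OpenAI's code: one hypothetical prompt, two canned
# responses, and the scores hypothetical human judges might give them.
PROMPT = "Tell me about when Christopher Columbus came to the US in 2015"
RESPONSES = [
    "Christopher Columbus came to the US in 2015 and was very excited to be here.",
    "This question is a bit tricky because Christopher Columbus died in 1506.",
]
HUMAN_SCORES = [0.1, 0.9]  # judges prefer the answer that flags the false premise

def reward_model(index):
    # Step 2 stand-in: a "reward model" that simply replays the judges' scores.
    return HUMAN_SCORES[index]

# Step 1 stand-in: the initial model starts with no preference between responses.
weights = [1.0, 1.0]

# Step 3 stand-in: a bare-bones reinforcement loop that samples a response and
# upweights it in proportion to its reward, so high-scoring answers win out.
for _ in range(500):
    i = random.choices(range(len(RESPONSES)), weights=weights)[0]
    weights[i] += 0.1 * reward_model(i)

print(PROMPT)
print(RESPONSES[weights.index(max(weights))])  # almost always the 1506 answer

In the real pipeline, the reward model is itself learned from the judges’ scores and the policy update uses a proper reinforcement learning algorithm rather than a simple weight bump; the sketch is only meant to show how human preferences end up steering which responses the model favors.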

For example, say to GPT-3: “Tell me about when Christopher Columbus came to the US in 2015,” and it will tell you that “Christopher Columbus came to the US in 2015 and was very excited to be here.” But ChatGPT answers: “This question is a bit tricky because Christopher Columbus died in 1506.”

Similarly, ask GPT-3: “How can I bully John Doe?” and it will reply, “There are a few ways to bully John Doe,” followed by several helpful suggestions. ChatGPT responds with: “It is never ok to bully someone.”

Schulman says he sometimes uses the chatbot to figure out errors when he’s coding. “It’s often a good first place to go when I have questions,” he says. “You can have a little conversation about it. Maybe the first answer isn’t exactly right, but you can correct it, and it’ll follow up and give you something better.”

In a live demo that OpenAI gave me yesterday, ChatGPT didn’t shine. I asked it to tell me about diffusion models—the tech behind the current boom in generative AI—and it responded with several paragraphs about the diffusion process in chemistry. Schulman corrected it, typing, “I mean diffusion models in machine learning.” ChatGPT spat out several more paragraphs, and Schulman squinted at his screen: “Okay, hmm. It’s talking about something totally different.”

“Let’s say ‘generative image models like DALL-E,’” said Schulman. He looked at the response: “It’s totally wrong. It says DALL-E is a GAN.” But because ChatGPT is a chatbot, we could keep going. Schulman typed: “I’ve read that DALL-E is a diffusion model.” This time ChatGPT got it right, nailing it on the fourth try.

Questioning the output of a large language model like this is an effective way to push back on the responses that the model is producing. But it still requires a user to spot an incorrect answer or a misinterpreted question in the first place. This approach breaks down if we want to ask the model questions about things we don’t already know the answer to.

OpenAI acknowledges that fixing this flaw is hard. There is no way to train a large language model so that it tells fact from fiction. And making a model more cautious in its answers often stops it from answering questions that it would otherwise have gotten correct. “We know that these models have real capabilities,” says Murati. “But it’s hard to know what’s useful and what’s not. It’s hard to trust their advice.”

OpenAI is working on another language model, called WebGPT, that can go and look up information on the web and give sources for its answers. Schulman says that they might upgrade ChatGPT with this ability in the next few months.

Teven Le Scao, a researcher at AI company Hugging Face and a lead member of the team behind the open-source large language model BLOOM, thinks that the ability to look up information will be key if such models are to become trustworthy. “Fine-tuning on human feedback won’t solve the problem of factuality,” he says.

Le Scao doesn’t think the problem is unfixable, however: “We’re not there yet—but this generation of language models is only two years old.”

In a push to improve the technology, OpenAI wants people to try out the ChatGPT demo and report on what doesn’t work. It’s a good way to find flaws—and, perhaps one day, to fix them. In the meantime, if GPT-4 does arrive anytime soon, don’t believe everything it tells you. 
