
Google is throwing generative AI at everything

But experts say that releasing these models into the wild before fixing their flaws could prove extremely risky for the company.


Google is stuffing powerful new AI tools into tons of its existing products and launching a slew of new ones, including a coding assistant, it announced at its annual I/O conference today. 

Billions of users will soon see Google’s latest AI language model, PaLM 2, integrated into over 25 products, including Maps, Docs, Gmail, Sheets, and the company’s chatbot, Bard. For example, people will be able to simply type a request such as “Write a job description” into a text box that appears in Google Docs, and the AI language model will generate a text template that users can customize. 

Because of safety and reputational risks, Google has been slower than its competitors to launch AI-powered products. But fierce competition from Microsoft, OpenAI, and others has left it no choice but to start shipping, says Chirag Shah, a computer science professor at the University of Washington.

It’s a high-risk strategy, given that AI language models have numerous flaws with no known fixes. Embedding them into its products could backfire and run afoul of increasingly hawkish regulators, experts warn.

Google is also expanding access to its ChatGPT competitor, Bard, from a select group in the US and the UK to the general public in over 180 countries. Bard will “soon” allow people to prompt it using images as well as words, Google said, and the chatbot will be able to reply to queries with pictures. Google is also launching AI tools that let people generate and debug code.

Google has been using AI technology for years in products like text translation and speech recognition. But this is the company’s biggest push yet to integrate the latest wave of AI technology into a variety of products. 

“[AI language models’] capabilities are getting better. We’re finding more and more places where we can integrate them into our existing products, and we’re also finding real opportunities to provide value to people in a bold but responsible way,” Zoubin Ghahramani, vice president of Google DeepMind, told MIT Technology Review. 

“This moment for Google is really a moment where we are seeing the power of putting AI in people’s hands,” he says.

The hope, Ghahramani says, is that people will get so used to these tools that they will become an unremarkable part of day-to-day life.  

One-stop shop

Google’s announcement comes as rivals like Microsoft, OpenAI, and Meta and open-source groups like Stability.AI compete to launch impressive AI tools that can summarize text, fluently answer people’s queries, and even produce images and videos from word prompts. 

With this updated suite of AI-powered products and features, Google is targeting not only individuals but also startups, developers, and companies that might be willing to pay for access to models, coding assistance, and enterprise software, says Shah.

“It’s very important for Google to be that one-stop shop,” he says. 

Google is also releasing new features and models that harness its AI language technology as a coding assistant, letting people generate and complete code and converse with a chatbot for help with debugging and code-related questions. 

The trouble is that the sorts of large language models Google is embedding in its products are prone to making things up. Google experienced this firsthand when it originally announced it was launching Bard as a trial in the US and the UK. Its own advertising for the bot contained a factual error, an embarrassment that wiped billions off the company’s stock price. 

Google faces a trade-off between releasing new, exciting AI products and doing scientific research that would make its technology reproducible and allow external researchers to audit it and test it for safety, says Sasha Luccioni, an AI researcher at AI startup Hugging Face. 

In the past, Google has taken a more open approach and has open-sourced its language models, such as BERT in 2018. “But because of the pressure from the market and from OpenAI, they’re shifting all that,” Luccioni says.

The risk with code generation is that users will not be skilled enough at programming to spot any errors introduced by AI, says Luccioni. That could lead to buggy code and broken software. There is also a risk of things going wrong when AI language models start giving advice on life in the real world, she adds.

Even Ghahramani warns that businesses should be careful about what they choose to use these tools for, and he urges them to check the results thoroughly rather than just blindly trusting them. 

“These models are very powerful. If they generate things that are flawed, then with software you have to be concerned about whether you just take the generated output and incorporate it into your mission-critical software,” he says. 

But there are risks associated with AI language models that even the most up-to-date and tech-savvy people have barely begun to understand. It is hard to detect when text and, increasingly, images are AI generated, which could allow these tools to be used for disinformation or scamming on a large scale. 

The models are easy to “jailbreak” so that they violate their own policies against, for example, giving people instructions to do something illegal. They are also vulnerable to attacks from hackers when integrated into products that browse the web, and there is no known fix for that problem. 

Ghahramani says Google does regular tests to improve the safety of its models and has built in controls to prevent people from generating toxic content. But he admits that it still hasn’t solved that vulnerability—nor the problem of “hallucination,” in which chatbots confidently generate nonsense. 

Hard launch

Going all in on generative AI could backfire on Google. Tech companies are currently facing heightened scrutiny from regulators over their AI products. The EU is finalizing its first AI regulation, the AI Act, while in the US, the White House recently summoned leaders from Google, Microsoft, and OpenAI to discuss the need to develop AI responsibly. US federal agencies, such as the Federal Trade Commission, have signaled that they are paying more attention to the harm AI can cause.

Shah says that if some of the AI-related fears do end up panning out, it could give regulators or lawmakers grounds for action with the teeth to actually hold Google accountable. 

But in a fight to retain its grip on the enterprise software market, Google feels it can’t risk losing out to its rivals, Shah believes. “This is the war they created,” he says. And at the moment, “there’s very little to nothing to stop them.”
