Are we ready to trust AI with our bodies?
Over the next few years, artificial intelligence is going to have a bigger and bigger effect on the way we live.
I hate going to the gym. Last year I hired a personal trainer for six months in the hope she would brainwash me into adopting healthy exercise habits longer-term. It was great, but personal trainers are prohibitively expensive, and I haven’t set foot in a gym once since those six months came to an end.
That’s why I was intrigued when I read my colleague Rhiannon Williams’s latest piece about AI gym trainers.
Lumin Fitness is a gym in Texas staffed pretty much entirely by virtual AI coaches designed to guide gym goers through workouts (there’s one human employee on hand—to switch everything off and on, perhaps).
Patrons can complete a solo workout program with the help of a virtual coach in their own designated station, or participate in a high-intensity functional training class with others. Sensors in both the equipment and the floor-to-ceiling LED screens that line the walls of the gym track users’ movements, and Lumin uses machine learning to tailor advice.
The gym owners are confident that these new AI trainers will encourage people like me, who feel intimidated or unmotivated, to work out. Read more from Rhiannon here.
Over the next few years, artificial intelligence is going to have a bigger and bigger effect on us and the way we live. We’re already pretty used to tracking our bodies through wearables like smart watches. Getting a pep talk from an AI avatar doesn’t feel like much of a stretch. People are also using ChatGPT to come up with workout plans, as Rhiannon reported earlier this year.
And it’s not just AI for working out. Waitrose, a posh chain of grocery stores in the UK, used generative AI to create recipes for its range of Japanese food. Others are using it to generate books, which are flooding Amazon, including instruction manuals for mushroom foraging. For my birthday last year, a dear friend gave me a perfume with notes that were AI-generated. It smells citrusy and cinnamony, a bit floral and spicy, and I haven’t used it much. (Sorry, Roosa.)
Even the White House wants us to use AI to help with our health. In a readout from a meeting between Biden administration officials and AI and health-care experts last week, Arati Prabhakar, director of the White House Office of Science and Technology Policy, called on the health sector to “seize the powerful tools of AI to improve health outcomes for more Americans” in clinical settings, drug development, and public health challenges.
This makes sense. Neural networks are excellent at analyzing data and recognizing patterns. They could help speed up diagnoses, spot things humans might have missed, or help us come up with new ideas. And AI personal trainers that gamify exercise can help people feel good about their achievements and encourage them to do more exercise, Andy Lane, a professor of sport psychology at the University of Wolverhampton, told Rhiannon.
But as AI enters ever more sensitive areas, we need to keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don’t have a grasp on the wider context and meaning of what they are generating. Neural networks are competent pattern seekers and can help us make new connections between things, but they are also easy to trick and break and prone to biases.
The biases of AI systems in settings such as health care are well documented. But as AI enters new arenas, I am on the lookout for the inevitable weird failures that will crop up. Will the foods that AI systems recommend skew American? How healthy will the recipes be? And will the workout plans take into account physiological differences between male and female bodies, or will they default to male-oriented patterns?
Most important of all, we need to remember that these systems have no knowledge of what exercise feels like, what food tastes like, or what we mean by “high quality.” AI workout programs might come up with dull, robotic exercises. AI recipe makers tend to suggest combinations that taste horrible, or are even poisonous. Mushroom foraging books are likely riddled with incorrect information about which varieties are toxic and which are not, which could have catastrophic consequences.
Humans also have a tendency to place too much trust in computers. It’s only a matter of time before “death by GPS” is replaced by “death by AI-generated mushroom foraging book.” Including labels on AI-generated content is a good place to start. In this new age of AI-powered products, it will be more important than ever for the wider population to understand how these powerful systems work and don’t work. And to take what they say with a pinch of salt.
Deeper Learning
How generative AI is boosting the spread of disinformation and propaganda
Governments and political actors around the world are using AI to create propaganda and censor online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”
Downward spiral: The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. Read more from Tate Ryan-Mosley in her weekly newsletter on tech policy, The Technocrat.
Bits and Bytes
Predictive policing software is terrible at predicting crimes
A New Jersey police department used an algorithm called Geolitica that was right less than 1% of the time, according to a new investigation. We’ve known for years how deeply flawed and racist these systems are. It’s incredibly frustrating that public money is still being wasted on them. (The Markup and Wired)
The G7 plans to ask AI companies to agree to watermarks and audits
There is a real push to come up with cross-border guidelines for governing AI. Canada, France, Germany, Italy, Japan, the UK, and the US are proposing voluntary rules for AI companies that would require them to run more tests before and after launching products, and label AI-generated content using watermarks, among other requirements. (Bloomberg)
Could AI “constitutions” lead to safer AI systems?
This story looks at “AI constitutions”—a set of values and principles, such as honesty and respect, that AI models must follow, as part of an effort by researchers to prevent failures. The idea is being developed by the likes of Google DeepMind and Anthropic, but it’s unclear if it will work in practice. (FT)
OpenAI is considering making its own AI chips
Training and running massive AI models takes up a lot of computing power, and OpenAI is limited by the global chip shortage. The company is considering developing its own chips in an effort to cut down on the cost of developing new AI models and improving existing ones, such as ChatGPT. (Reuters)
Facebook’s new AI-generated stickers are lewd, rude, and occasionally nude
Meta rolled out a new suite of generative AI features with only basic content filters, which allowed users to generate nude Trudeau stickers and Karl Marx with boobs. Quelle surprise. (The Verge)