Get ready to fight misinformation in 2024. Eric Schmidt has advice.
Next year brings more than 40 national elections worldwide amid big changes to social media platforms.
This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
We’re already at that time of year when we start looking ahead to what’s coming in 2024. For Technocrat readers (and the rest of the globe!), next year is going to be a doozy, with over 40 national elections worldwide and a landscape of constantly evolving information technologies.
One of the biggest areas to watch, of course, will be generative AI, particularly how it changes social media, political campaigning, and the fight over election misinformation. This confluence of new tech and big elections is also happening while the social media industry is going through major changes, including shifts in moderation approaches, legal battles, cuts to trust and safety teams, and platform shake-ups.
This is all poised to make the future of the fight against misinformation murky, to say the least. It’s a topic my colleagues and I take very seriously and have covered extensively in the past. And recently in MIT Technology Review, former Google CEO Eric Schmidt penned an op-ed that lays out what he calls “a paradigm shift for social media platforms”:
The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—indicate that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds.
But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens … Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy.
Schmidt goes on to lay out a six-point plan social media companies can follow to meet the moment. One thing I was happy to see him mention is the importance of provenance information, which I have written about a few times previously. It’s an insightful and useful piece that I’d definitely urge you to read!
This is the last Technocrat of 2023, and I’ll be back in your inbox in January. In the meantime, over the next few weeks we’ll be publishing more stories about what’s to come in technology in 2024, so be on the lookout for those. And if you want to catch up on some past stories that you may have missed, here are just a few of my favorites from my colleagues in 2023:
- This new tool could give artists an edge over AI from Melissa Heikkilä
- ChatGPT is going to change education, not destroy it from Will Douglas Heaven
- ChatGPT is about to revolutionize the economy. We need to decide what that looks like from David Rotman
- Why the dream of fusion power isn’t going away from Casey Crownhart
- Deepfakes of Chinese influencers are livestreaming 24/7 from Zeyi Yang
What I am reading this week
- Well, the EU AI Act has now been agreed on, setting the global standard for AI regulation! Here are five things you need to know about it, from my colleague Melissa. And if you want to know more about why this was so hard to get across the finish line, read my Technocrat from last week.
- I found this Vox story on the potential usefulness of chatbot therapy very enlightening.
- This investigation into the Yahoo Human Rights Fund and an ongoing lawsuit, which claims very little of the money went where it was supposed to go, raises really interesting questions about how tech companies deal with political pressure and messaging.
What I learned this week
Microsoft’s Bing AI chatbot, renamed Microsoft Copilot, got election information wrong one-third of the time, according to a new study from the nonprofits AI Forensics and AlgorithmWatch. Will Oremus in the Washington Post writes that the study results “reinforce concerns that today’s AI chatbots could contribute to confusion and misinformation around future elections as Microsoft and other tech giants race to integrate them into everyday products, including internet search.” Here’s a reminder not to rely on generative AI for news!