AI could deliver us a superpowered future. But first we must navigate AI technology’s many risks

Hello and welcome to Eye on AI.

I’ve got some AI news of my own this week. My book Mastering AI: A Survival Guide to Our Superpowered Future is officially being published today by Simon & Schuster in the U.S.

When ChatGPT debuted in November 2022, it was a light bulb moment—one that suddenly awakened people to the possibilities of AI technology. But it was also a vertigo-inducing moment, one that prompted a lot of anxious questions: Was AI about to take people's jobs? Would entire companies be put out of business? Was the ability of AI to write cogent essays and analyses about to blow up our education system? Were we about to be hit with a tsunami of AI-crafted misinformation? Might AI soon develop consciousness and decide to kill or enslave us?

Mastering AI is my attempt to explain how we arrived at this moment and answer these questions. It is intended to serve as an essential primer on how to think through the impacts AI is poised to have on our personal and professional lives, our economy, and our society. In the book, I have tried to illuminate a path—a narrow one, but a path nonetheless—that can ensure that the good AI does outweighs the harm it might cause.

In researching the book, I interviewed people at the forefront of developing AI, thinking through its impacts, and putting new AI tools to use. I spoke to OpenAI cofounders Sam Altman and Greg Brockman, as well as its former chief scientist Ilya Sutskever; Google DeepMind cofounders Demis Hassabis and Shane Legg; and Anthropic cofounder Dario Amodei. I also talked to dozens of startup founders, economists, and philosophers, as well as writers and artists, entrepreneurs, and executives inside some of America's largest corporations.

If we design AI software carefully and regulate it vigilantly, it will have tremendous positive impacts. It will boost labor productivity and economic growth, something developed economies desperately need. It will give every student a personal tutor. It will help us find new treatments for disease and usher in an era of more personalized medicine. It could even enhance our democracy and public discourse, helping to break down filter bubbles and persuade people to abandon conspiracy theories.

But, as it stands, we are too often not designing this technology carefully and deliberately. And regulation is, for the moment, lacking. This should scare us. For all its opportunities, AI presents grave dangers too. In Mastering AI, I detail many of these risks, some of which have not received the attention they deserve. Dependence on AI software could diminish essential human cognitive skills, including memory, critical thinking, and writing; reliance on AI chatbots and assistants could damage important social skills, making it harder to form human relationships. If we don't get the development and regulation of this technology right, AI will depress wages, concentrate corporate power, and make inequality worse. It will boost fraud, cybercrime, and misinformation. It will erode societal trust and hurt democracy. AI could exacerbate geopolitical tensions, particularly between the U.S. and China. All of these risks are present with AI technology that exists today. There is also a remote—but not completely nonexistent—chance that a superintelligent AI system could pose an existential risk to humanity. It would be wise to devote some effort to taking this last risk off the table, but we should not let that work distract us from, or crowd out, the work we need to do to solve AI's more immediate challenges.

In Mastering AI, I recommend a series of steps we can take to avoid these dangers. The most important is to ensure we don’t allow AI to displace the central role that human decision-making and empathy should play in high-consequence domains, from law enforcement and military affairs to lending and social welfare decisions. Beyond this, we need to encourage the development of AI as a complement to human intelligence and skills, rather than a replacement. This requires us to reframe how we think about AI and how we assess its capabilities. Benchmarking that evaluates how well humans can perform when paired with AI software—as opposed to constantly pitting AI’s abilities against those of people—would be a good place to start. Policies such as a targeted robot tax could also help companies see AI as a way to boost the productivity of existing workers, not as a way to eliminate jobs. Mastering AI contains many more insights about AI’s likely impacts.

Today, Fortune has published an excerpt from the book about how AI could make filter bubbles worse, but also how—with the right design choices—the same technology could help pop these bubbles and combat polarization. You can read that excerpt here. I hope you’ll also consider reading the rest of the book, which is now available at your favorite bookstore and can be purchased online here. (If you are in the U.K., you’ll have to wait a few more weeks for the release of the U.K. edition, which can be preordered here.)

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news… If you want a better understanding of how AI can transform your business and hear from some of Asia's top business leaders about AI's impact across industries, please join me at Fortune Brainstorm AI Singapore. The event takes place July 30-31 at the Ritz-Carlton in Singapore. We've got Ola Electric CEO Bhavish Aggarwal discussing his effort to build an LLM for India, Alation CEO Satyen Sangani talking about AI's impact on the digital transformation of Singapore's GXS Bank, Grab CTO Sutten Thomas Pradatheth speaking on how quickly AI can be rolled out across the APAC region, Josephine Teo, Singapore's minister for communications and information, talking about that island nation's quest to be an AI superpower, and much, much more. You can apply to attend here. Just for Eye on AI readers, I've got a special code that will get you a 50% discount on the registration fee. It is BAI50JeremyK.

The Eye on AI News, Eye on AI Research, Fortune on AI, and Brain Food sections of this edition of the newsletter were curated and written by Fortune’s Sharon Goldman.

AI IN THE NEWS

Republican Party's new anti-AI-regulation stance aligns with AI ‘accelerationists.’ This week the GOP laid out its 2024 platform—the first since 2016—which largely restates Donald Trump's former platform. But there is one notable change: In a sign that AI regulation has become politicized, the platform pledges to champion innovation in artificial intelligence by repealing Joe Biden's “dangerous Executive Order” that “hinders innovation, and imposes Radical Leftwing ideas on the development of this technology.” In its place, it says that Republicans support “AI development rooted in Free Speech and Human Flourishing.” The policy, which also emphasizes an end to a “crypto crackdown,” aligns with the pro-technology, anti-regulation “e/acc” (effective accelerationism) crowd. Investor Julie Frederickson, for example, claimed on X that “a coalition of e/acc and crypto and El Segundo hardware and deep tech autists changed a political party’s platform.”

Leaders of major world religions gather to sign the Rome Call for AI Ethics. In 2020, the Vatican, along with Microsoft, IBM, the UN Food and Agriculture Organization (FAO), and the Italian government, released the Rome Call for AI Ethics. Now, leaders of major world religions are gathering today and tomorrow to sign the Rome Call in Hiroshima, Japan, a city that a press release called “a powerful testament to the consequences of destructive technology and the enduring quest for peace.” The event will emphasize the “vital importance of guiding the development of artificial intelligence with ethical principles to ensure it serves the good of humanity.”

Nicolas Cage is ‘terrified’ of AI using his likeness. In an interview with the New Yorker, actor Nicolas Cage said that he was on his way to get a digital scan for his next movie—and he wondered whether the filmmakers were using AI. “They have to put me in a computer and match my eye color and change—I don’t know,” he said. “They’re just going to steal my body and do whatever they want with it via digital AI… God, I hope not AI. I’m terrified of that. I’ve been very vocal about it.” He said he worried about whether artists would be replaced: “Is it going to be transmogrified? Where’s the heartbeat going to be? I mean, what are you going to do with my body and my face when I’m dead? I don’t want you to do anything with it!” Others, however, apparently don’t have that concern: The estates of deceased celebrities like Judy Garland and James Dean recently gave AI company ElevenLabs permission to use the stars’ voices in audiobook voiceovers.

EYE ON AI RESEARCH

AI researchers consider evaluations and benchmarking critical for assessing the performance and robustness of AI models—that is, for ensuring systems meet certain standards before they are deployed in real-world applications. But a new research paper by five Princeton University researchers argues that the current evaluation and benchmarking processes for AI agents—systems that combine large language models with other tools, such as web search, to take actions like booking flight tickets or fixing software bugs—may encourage the development of agents that do well on benchmarks but not in practice.

“The North Star of this field is to build assistants like Siri or Alexa and get them to actually work—handle complex tasks, accurately interpret users’ requests, and perform reliably,” said a blog post about the paper by two of its authors, Sayash Kapoor and Arvind Narayanan, who are also the authors of AI Snake Oil. “But this is far from a reality, and even the research direction is fairly new.”
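To make the paper's subject concrete, here is a minimal sketch of the kind of tool-using agent loop described above: a language model repeatedly chooses between calling a tool and giving a final answer. This is a generic illustration, not code from the Princeton paper or any particular framework; call_llm and search_web are hypothetical stand-ins for a real model API and a real search tool.

```python
# A minimal, hypothetical agent loop: an LLM alternates between using a tool
# and answering. call_llm and search_web are stand-ins for real APIs.

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model API call."""
    raise NotImplementedError("plug in a real model client here")

def search_web(query: str) -> str:
    """Stand-in for a real web-search tool."""
    raise NotImplementedError("plug in a real search client here")

def run_agent(task: str, max_steps: int = 5) -> str:
    """Ask the model for the next action, execute it, and feed results back."""
    transcript = (
        f"Task: {task}\n"
        "Reply 'SEARCH: <query>' to look something up, "
        "or 'ANSWER: <answer>' when you are done."
    )
    for _ in range(max_steps):
        decision = call_llm(transcript)
        if decision.startswith("ANSWER:"):
            return decision.removeprefix("ANSWER:").strip()
        if decision.startswith("SEARCH:"):
            query = decision.removeprefix("SEARCH:").strip()
            transcript += f"\nSearch results for '{query}': {search_web(query)}"
    return "No answer produced within the step budget."
```

A benchmark that scores only the final return value of a loop like this reveals little about how reliably, or at what expense, the agent behaves across many real tasks, which is the gap between benchmark scores and practical usefulness that the researchers flag.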

FORTUNE ON AI

China is still a decade behind the U.S. in chip technology—but the world still needs the mature chips it’s making, says ASML’s CEO —by Lionel Lim

Instacart’s AI-powered smart carts, which offer real-time recommendations and ‘gamified’ shopping, are coming to more U.S. grocery stores —by Sasha Rogelberg

Two self-driving car guys take on OpenAI’s Sora, Kling, and Runway to be Hollywood’s favorite AI —by Jeremy Kahn

Chinese self-driving cars have quietly traveled 1.8 million miles on U.S. roads, collecting detailed data with cameras and lasers —by Rachyl Jones

AI CALENDAR

July 15-17: Fortune Brainstorm Tech in Park City, Utah (register here)

July 21-27: International Conference on Machine Learning (ICML), Vienna, Austria

July 30-31: Fortune Brainstorm AI Singapore (register here)

Aug. 12-14: Ai4 2024 in Las Vegas

BRAIN FOOD

Small language models hit the big time

The phrase "large language model," or LLM, became part of the public discourse when OpenAI's ChatGPT launched in 2022 and showed that giant models trained on massive datasets could mimic human-level "intelligence." But over the past few months, "small language model," or SLM, has begun to make regular appearances in my email inbox. A recent Wall Street Journal article tackled this trend, with a deep dive into these mini-models, which are trained on far less data than LLMs and typically designed for specific tasks. Big pros of SLMs include their lower training costs: LLMs cost hundreds of millions of dollars to train, while smaller models can be had for $10 million or less. They also use less computing power, making it less costly to generate each query response (a process called inference). Microsoft and Google, as well as AI startups including Anthropic, Cohere, and Mistral, have all released small models. Meta has also released SLM versions of its Llama family of models.
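For readers who want to see what cheap inference looks like in practice, here is a minimal sketch of running a small open model locally with the Hugging Face transformers library. The model name distilgpt2 is just an illustrative tiny checkpoint chosen for the example, not one of the specific SLMs named above; any small-model identifier would slot in the same way.

```python
# A minimal sketch: local text generation with a small model via Hugging Face
# transformers. distilgpt2 is an illustrative tiny checkpoint; swap in any
# small-model identifier you prefer.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

output = generator(
    "Small language models are attractive because",
    max_new_tokens=40,  # short completions keep per-query (inference) cost low
    do_sample=True,     # sample for variety rather than greedy decoding
)
print(output[0]["generated_text"])
```

A model this size downloads in seconds and runs comfortably on a laptop CPU, which is the inference-cost advantage of small models writ small.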

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.