What Nvidia’s earnings say about the future of generative AI

Companies and governments alike are using Nvidia's GPUs to power their own AI efforts

Photo: Startup Cerebras Systems' AI supercomputer Andromeda at a data center in Santa Clara, California, October 2022. The data center business is booming thanks to generative AI. (Rebecca Lewington/Cerebras Systems via Reuters)

Sure, there’s a lot of drama at OpenAI, the leading company in the generative artificial intelligence space. But there’s a lot of other AI activity happening elsewhere: In the three months ending on Oct. 29, Nvidia’s revenue more than tripled from the same period last year, reaffirming that generative AI adoption is very much underway.

Company applications

Consumer internet companies are building generative AI applications for consumers; Meta, for instance, is investing in generative AI to help advertisers optimize images and text. Enterprise software companies like Adobe and Microsoft are adding AI copilots and assistants to their platforms for business customers. Tesla and other autonomous-driving companies continue to work on their own AI applications.

“The enterprise wave of AI adoption is now beginning,” Nvidia chief financial officer Colette Kress said on a conference call with investors and analysts.


Together, consumer internet and enterprise software companies are driving half the revenue now coming from Nvidia’s data centers, which are equipped with graphics processing units (GPUs) to handle the large amounts of information powering AI applications.

Cloud service providers like Amazon and Google are driving the other half of the company’s data center revenue, Kress said.

Chip supply is healing

Nvidia’s H100 Tensor Core GPU, which was in short supply earlier this year, is now generally available to every major cloud service provider, according to the company. “We have significantly increased supply every quarter this year to meet strong demand and expect to continue to do so next year,” Kress said. “We will also have a broader and faster product launch cadence to meet a growing and diverse set of AI opportunities.”


The H100 remains the top-performing and most versatile chip for AI training by a wide margin, Kress said. But tech companies such as Microsoft are now designing their own AI chips to become less dependent on Nvidia.


Countries investing in AI infrastructure

Nvidia is working with India’s government and some of the country’s largest companies, like Infosys and Reliance, to boost public infrastructure that “supports economic growth and industrial innovation,” Kress said. And French private cloud provider Scaleway is using Nvidia’s H100 to build a regional AI cloud across Europe. “National investment in compute capacity is a new economic imperative,” Kress said.

“Surely, every major country will have their own AI cloud,” Nvidia CEO Jensen Huang added.

Generative AI is already being used daily

The way people access data is changing, too. Instead of writing explicit queries, users can now retrieve data with natural-language questions or instructions, written or spoken aloud, Huang noted.


The retrieval of information from storage can now be augmented with a generative step, whether that’s text-to-text, text-to-image, text-to-video, text-to-3D, or even text-to-protein, where a text-based input is used to generate or predict a protein sequence. “These are things that were processed and typed in by humans in the past, and these are now generative approaches,” Huang said.