Nvidia Grace Hopper Superchip
Source: Nvidia

Quarterly earnings reports are usually a bore. Profits, losses, whatever; it doesn't really matter to most people how a giant corporation's bank accounts are faring. But sometimes there's a result that's surprising in a way that says something about the whole industry. Last week that role was played by Nvidia, which posted shockingly high revenue on the back of a surge in demand for chips to power AI.

The kind of computational work needed to make generative AI like ChatGPT run is best suited to high-throughput processing units, something Nvidia has deep expertise in. In fact, while Nvidia is widely known for its high-powered gaming GPUs, its biggest revenue segment is actually data center products.

The rush to AI, fueled by excitement over the potential of tools like ChatGPT, was a massive boon for Nvidia's data center business. While there are plenty of huge data centers around the globe, most are set up for less processing-intensive work. The thirst for AI tools has led to demand for new server farms that can run the neural networks behind large language models and deliver results to users in a reasonable time. And it is a hugely resource-intensive process: the reason ChatGPT puts out one word at a time is that that's literally how it assembles the response. Each new word requires another full pass of massive computation, over and over. If you tried running this on your home PC, it would take ages while running at full power.
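That word-at-a-time behavior comes from the autoregressive loop at the heart of these models. Here's a rough sketch of the loop structure, using a toy lookup table in place of a real neural network (the table and function names are illustrative, not anything from an actual model):

```python
import random

# Toy stand-in for a language model's "predict the next word" step.
# In a real LLM, each prediction is an expensive forward pass over
# billions of parameters; here it's just a bigram lookup.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["down"],
    "ran": ["away"],
}

def predict_next(words):
    """One (pretend) expensive forward pass, conditioned on all prior words."""
    candidates = BIGRAMS.get(words[-1])
    if not candidates:
        return None  # nothing to predict; stop generating
    return random.choice(candidates)

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        next_word = predict_next(words)  # one full "pass" per output word
        if next_word is None:
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the cat"))  # -> "the cat sat down"
```

The point is the shape of the loop: every single output word triggers another full run of the predictor. Swap the dictionary lookup for a multi-billion-parameter network and you can see why a long response adds up to an enormous amount of compute.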

That demand for processing units like Nvidia's is also a bottleneck in the development of new AI tools. Not only do you need a lot of them (and a lot of money), but they're in short supply, since manufacturers sell them as fast as they can make them. There's only so much that can be done to increase manufacturing capacity in the short term; in the long term, supply and demand will eventually find a balance.

It's a little funny to think that the physical constraints of needing enough of the right servers is part of the problem with AI, but here we are.
