Nvidia has introduced its newest GPU architecture, Blackwell, and it's packed with upgrades for AI inference, plus some hints at what's likely in store for next-gen gaming graphics cards. In the same breath as its announcement at GTC, major tech companies were also announcing the many thousands of systems they've just snapped up with Blackwell on board.
AWS, Amazon's datacenter arm, announced it is bringing Nvidia's Grace Blackwell Superchips (two Blackwell GPUs and a Grace CPU on a single board) to EC2, which is effectively on-demand compute in the cloud. It has also already agreed to deliver 20,736 GB200 Superchips (that's 41,472 Blackwell GPUs in total) to Project Ceiba, an AI supercomputer on AWS used by Nvidia for its own in-house AI R&D. Sure, that's a roundabout case of Nvidia buying its own product, but there are other examples that show just how big demand is for these sorts of chips right now.
Google says it's jumping on the Blackwell train, too. It will offer Blackwell on its cloud services, including GB200 NVL72 systems, each comprised of 72 Blackwell GPUs and 36 Grace CPUs. Flush with cash, eh, Google? Though we don't know how many Blackwell GPUs Google has signed up for just yet, with the company racing to stay competitive with OpenAI in AI systems, it's probably a lot.
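To put those configurations in context, here's a minimal back-of-envelope sketch in Python, assuming the ratios stated above: two Blackwell GPUs and one Grace CPU per GB200 Superchip, and 36 Superchips per NVL72 rack.

```python
# Back-of-envelope math for GB200-based systems, assuming the
# ratios stated above: each GB200 Superchip pairs two Blackwell
# GPUs with one Grace CPU, and an NVL72 rack holds 36 Superchips.

GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1
SUPERCHIPS_PER_NVL72 = 36

def gpu_count(superchips: int) -> int:
    """Total Blackwell GPUs for a given number of GB200 Superchips."""
    return superchips * GPUS_PER_SUPERCHIP

# One GB200 NVL72 rack: 72 GPUs and 36 CPUs, matching the name.
print(gpu_count(SUPERCHIPS_PER_NVL72))            # 72
print(SUPERCHIPS_PER_NVL72 * CPUS_PER_SUPERCHIP)  # 36

# AWS's Project Ceiba order: 20,736 Superchips -> 41,472 GPUs.
print(gpu_count(20_736))                          # 41472
```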
Oracle, of Java fame to folks of a certain age, or more recently of Oracle Cloud fame, has put a number on exactly how many Blackwell GPUs it's buying from Nvidia: 20,000 GB200 Superchips will be headed Oracle's way, to start. That's 40,000 Blackwell GPUs in total. Some of Oracle's order will be committed to Oracle's OCI Supercluster or OCI Compute, two lumps of superconnected silicon for AI workloads.
Microsoft is playing coy with the exact number of Blackwell chips it's buying, but it's throwing heaps of cash behind OpenAI and its own AI efforts on Windows (to a mixed reception), so I expect big money to be changing hands here. Microsoft is bringing Blackwell to Azure, though we have no exact timelines.
That's the thing: we don't have all the details about Blackwell's rollout or availability. Nvidia has sold heaps of these chips, that we know of, right as the announcement of the architecture dropped. The exact launch timing and even the specs of the chips are still up in the air. That's fairly typical for enterprise chips such as these, but we don't even have the white paper in our hands, and tens of thousands of GPUs (if not hundreds of thousands, if you include the other companies cited in Nvidia's GTC announcement, including Meta, OpenAI, and xAI) have already been sold to the highest bidder.
This level of demand for Nvidia's chips isn't at all surprising: it's fueled by the market for AI chips. As just one example of many, Meta announced earlier in the year that it aims to have 350,000 H100s by the end of the year, which could cost as much as $40,000 each by some estimations. Though I wonder how Blackwell and the B200 will factor into Meta's plans. It will take some time for Nvidia to ramp up to full production of Blackwell, and even longer to satiate the needs of the exponentially growing AI market, so it's likely that ownership of H100s will remain the marker of whether or not you're a big fish in AI for a while longer.
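For a sense of the scale of that spending, here's a rough, hypothetical calculation using the figures above. Both numbers are estimates rather than confirmed pricing, and it assumes Meta paid full price for every unit.

```python
# Rough, hypothetical estimate of Meta's H100 spend, using the
# figures cited above: 350,000 GPUs at an estimated $40,000 each.
# Neither figure is confirmed; this is purely back-of-envelope.
h100_target = 350_000
estimated_unit_price_usd = 40_000

total_usd = h100_target * estimated_unit_price_usd
print(f"${total_usd / 1e9:.0f} billion")  # -> $14 billion
```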