Micron Technology on Monday said that it had initiated volume production of its HBM3E memory. The company's HBM3E known good stack dies (KGSDs) will be used in Nvidia's H200 compute GPU for artificial intelligence (AI) and high-performance computing (HPC) applications, which will ship in the second quarter of 2024.
Micron has announced it is mass-producing 24 GB 8-Hi HBM3E devices with a data transfer rate of 9.2 GT/s and a peak memory bandwidth of over 1.2 TB/s per device. Compared to HBM3, HBM3E increases the data transfer rate and peak memory bandwidth by a whopping 44%, which is particularly important for bandwidth-hungry processors like Nvidia's H200.
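Those figures are easy to sanity-check. The sketch below is a minimal back-of-the-envelope calculation: the 1024-bit interface width per HBM stack and the 6.4 GT/s HBM3 top speed are standard JEDEC figures, not numbers quoted in Micron's announcement.

```python
# Per-stack HBM peak bandwidth = transfer rate (GT/s) x interface width (bits) / 8 bits per byte
HBM_INTERFACE_BITS = 1024  # standard interface width of one HBM stack (JEDEC)

def stack_bandwidth_tbps(rate_gtps: float) -> float:
    """Peak bandwidth of a single HBM stack, in TB/s."""
    return rate_gtps * HBM_INTERFACE_BITS / 8 / 1000

hbm3e = stack_bandwidth_tbps(9.2)  # ~1.18 TB/s at exactly 9.2 GT/s; Micron's "over
                                   # 1.2 TB/s" implies pin speeds slightly above that
hbm3 = stack_bandwidth_tbps(6.4)   # ~0.82 TB/s at the HBM3 spec maximum
print(f"HBM3E: {hbm3e:.2f} TB/s, HBM3: {hbm3:.2f} TB/s, uplift: {hbm3e / hbm3 - 1:.0%}")
# -> HBM3E: 1.18 TB/s, HBM3: 0.82 TB/s, uplift: 44%
```

The 44% uplift quoted above falls straight out of the pin-speed ratio, since the interface width is unchanged between HBM3 and HBM3E.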
Nvidia's H200 is based on the Hopper architecture and offers the same compute performance as the H100. Meanwhile, it is equipped with 141 GB of HBM3E memory featuring bandwidth of up to 4.8 TB/s, a significant upgrade from the H100's 80 GB of HBM3 with up to 3.35 TB/s of bandwidth.
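Put side by side, the quoted figures work out to roughly 76% more capacity and 43% more bandwidth (simple arithmetic on the numbers above):

```python
# H200 vs. H100 memory subsystem, using only the figures quoted in this article
h100 = {"capacity_gb": 80, "bandwidth_tbps": 3.35}   # HBM3
h200 = {"capacity_gb": 141, "bandwidth_tbps": 4.8}   # HBM3E

cap_gain = h200["capacity_gb"] / h100["capacity_gb"] - 1        # ~76% more memory
bw_gain = h200["bandwidth_tbps"] / h100["bandwidth_tbps"] - 1   # ~43% more bandwidth
print(f"Capacity: +{cap_gain:.0%}, Bandwidth: +{bw_gain:.0%}")
# -> Capacity: +76%, Bandwidth: +43%
```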
Micron's memory roadmap for AI is further solidified with the upcoming launch of a 36 GB 12-Hi HBM3E product in March 2024. Meanwhile, it remains to be seen where those devices will be used.
Micron uses its 1β (1-beta) process technology to produce its HBM3E, a significant achievement for the company as it deploys its latest manufacturing node for data center-grade products, and a testament to its manufacturing prowess.
Starting mass production of HBM3E memory ahead of rivals SK Hynix and Samsung is a significant achievement for Micron, which currently holds a 10% market share in the HBM sector. The move is crucial for the company, as it allows Micron to introduce a premium product earlier than its competitors, potentially increasing its revenue and profit margins while gaining a larger market share.
"Micron is delivering a trifecta with this HBM3E milestone: time-to-market leadership, best-in-class industry performance, and a differentiated power efficiency profile," said Sumit Sadana, executive vice president and chief business officer at Micron Technology. "AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well-positioned to support the significant AI growth ahead through our industry-leading HBM3E and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications."
Source: Micron