South Korean memory giant SK Hynix has announced that it has begun mass production of the world's first 12-layer HBM3E, featuring a total memory capacity of 36GB, a significant step up from the previous 24GB offered by the 8-layer configuration.
The new design was made possible by reducing the thickness of each DRAM chip by 40%, allowing more layers to be stacked while keeping the same overall package size. The company plans to begin volume shipments by the end of 2024.
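For illustration, the short sketch below works out the per-die capacity implied by those totals; the 3GB-per-die figure is an inference from the stated stack capacities rather than a number quoted by SK Hynix.

```python
# Back-of-the-envelope check of the stated HBM3E stack capacities.
# Per-die capacity is inferred from the announced totals, not quoted by SK Hynix.

def per_die_capacity_gb(total_gb: float, layers: int) -> float:
    """Capacity contributed by each DRAM die in the stack."""
    return total_gb / layers

old_stack = per_die_capacity_gb(24, 8)    # 8-layer HBM3E
new_stack = per_die_capacity_gb(36, 12)   # 12-layer HBM3E

print(f"8-layer stack:  {old_stack:.1f} GB per die")   # ~3.0 GB per die
print(f"12-layer stack: {new_stack:.1f} GB per die")   # ~3.0 GB per die
# Same die capacity in both cases: the extra 12GB comes from stacking four more
# (40% thinner) dies within the same package height, not from denser DRAM.
```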
The HBM3E memory supports a data rate of 9,600 MT/s, which translates to roughly 1.22 TB/s of bandwidth per stack, or close to 10 TB/s in an eight-stack configuration. That throughput makes it well suited to LLMs and other AI workloads that demand both speed and high capacity, since the ability to move more data at faster rates allows AI models to run more efficiently.
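To see where those headline figures come from, here is a minimal sketch of the bandwidth arithmetic, assuming the standard 1,024-bit interface per HBM stack (an assumption on our part, not a figure stated in the announcement):

```python
# Rough bandwidth arithmetic for HBM3E at 9,600 MT/s.
# Assumes the standard 1,024-bit interface per stack; results are approximate.

DATA_RATE_MTS = 9600      # transfers per second per pin, in millions
BUS_WIDTH_BITS = 1024     # assumed interface width of a single HBM stack

bytes_per_transfer = BUS_WIDTH_BITS / 8                      # 128 bytes
per_stack_gbs = DATA_RATE_MTS * bytes_per_transfer / 1000    # GB/s per stack

print(f"Per stack:    {per_stack_gbs / 1000:.2f} TB/s")      # ~1.23 TB/s
print(f"Eight stacks: {8 * per_stack_gbs / 1000:.2f} TB/s")  # ~9.83 TB/s aggregate
```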
Nvidia and AMD hardware
For advanced memory stacking, SK Hynix employs innovative packaging technologies, including Through-Silicon Via (TSV) and its Mass Reflow Molded Underfill (MR-MUF) process. These techniques are essential for maintaining the structural integrity and heat dissipation required for stable, high-performance operation in the new HBM3E. The improvements in heat dissipation are particularly important for maintaining reliability during intensive AI processing workloads.
In addition to its increased speed and capacity, the HBM3E is designed to offer enhanced stability, with SK Hynix's proprietary packaging processes ensuring minimal warpage during stacking. The company's MR-MUF technology allows for better management of internal pressure, reducing the chances of mechanical failure and ensuring long-term durability.
Early sampling of the 12-layer HBM3E product began in March 2024, and Nvidia's Blackwell Ultra GPUs and AMD's Instinct MI325X accelerators are expected to be among the first to use the enhanced memory, taking advantage of up to 288GB of HBM3E to support complex AI computations. SK Hynix recently rejected a $374 million advance payment from an unknown company in order to ensure it could supply Nvidia with enough HBM for its in-demand AI hardware.
"SK Hynix has once again broken through technological limits, demonstrating our industry leadership in AI memory," said Justin Kim, President (Head of AI Infra) at SK Hynix. "We will continue our position as the No.1 global AI memory provider as we steadily prepare next-generation memory products to overcome the challenges of the AI era."