The previous highest-capacity HBM3E was 24GB, built from eight stacked 3GB DRAM dies.
‘Hynix has increased the capacity by 50% by stacking 12 layers of 3GB DRAM chips at the same thickness as the previous eight-layer product,’ says Hynix. ‘To achieve this, the company made each DRAM chip 40% thinner than before and stacked them vertically using TSV (Through-Silicon Via) technology.’
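The quoted figures imply a 36GB stack, though the article never states the total outright. A minimal Python sketch of the arithmetic, using only the per-die capacity and layer counts from the quote above:

    # Sanity check of the stated capacity figures.
    # Inputs are taken from the Hynix quote: 3GB per DRAM die,
    # 8 layers previously, 12 layers in the new product.
    die_capacity_gb = 3
    old_stack_gb = 8 * die_capacity_gb    # previous product: 24GB
    new_stack_gb = 12 * die_capacity_gb   # 12-layer product: 36GB
    increase = (new_stack_gb - old_stack_gb) / old_stack_gb
    print(old_stack_gb, new_stack_gb, f"{increase:.0%}")  # 24 36 50%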
‘The company also solved structural issues that arise from stacking thinner chips higher by applying its core technology, the Advanced MR-MUF (Mass Reflow Molded Underfill) process,’ adds Hynix. ‘This allows the company to provide 10% higher heat dissipation performance compared to the previous generation, and to secure the stability and reliability of the product through enhanced warpage control.’
According to the company, the 12-layer HBM3E product outperforms other HBMs in all areas that are essential for AI memory – speed, capacity and stability.
Hynix has increased the memory's operating speed to 9.6 Gbps per pin, the highest memory speed available today.
If ‘Llama 3 70B’, a Large Language Model (LLM), is run on a single GPU equipped with four of these HBM3E products, the GPU can read the model's full 70 billion parameters 35 times per second.
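That ‘35 times per second’ figure is consistent with the aggregate memory bandwidth, if one assumes the standard 1024-bit interface per HBM stack and 2-byte (FP16) parameters; neither assumption is stated in the article. A rough Python sketch:

    # Rough check of the "35 full-model reads per second" claim.
    # Assumptions (not stated in the article): 1024-bit bus per
    # HBM3E stack, and FP16 weights at 2 bytes per parameter.
    pin_rate_gbps = 9.6            # per-pin data rate quoted above
    bus_width_bits = 1024          # pins per HBM stack (assumption)
    stacks = 4                     # four HBM3E products per GPU
    bytes_per_param = 2            # FP16 weights (assumption)
    params = 70e9                  # Llama 3 70B

    bandwidth_gbits = pin_rate_gbps * bus_width_bits * stacks  # gigabits/s
    bandwidth_bytes = bandwidth_gbits * 1e9 / 8                # bytes/s
    model_bytes = params * bytes_per_param
    print(f"{bandwidth_bytes / model_bytes:.1f} reads/s")      # ~35.1

Under those assumptions the four stacks deliver about 4.9 TB/s combined, and a 140GB model can be streamed from memory roughly 35 times each second, matching the company's claim.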