GPUs dominate with $100 billion in revenue in 2024 and are set to more than double by 2030. Yole Group, a market research and strategy consulting company, forecasts GPU revenue will grow from $100 billion in 2024 to $215 billion by 2030. Despite their high average selling prices (ASPs), GPUs are indispensable for AI training and are increasingly used in inference. Nvidia holds a 93% share of the server GPU market, while AMD, Intel, and a wave of startups look to gain footholds.
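As a back-of-the-envelope check on the forecast above, the implied compound annual growth rate can be derived in a few lines. The ~13.6% figure below is computed from the quoted endpoints, not stated by Yole Group:

```python
# Implied CAGR for the GPU forecast: $100B in 2024 growing to $215B by 2030.
start_revenue = 100.0   # 2024 GPU revenue, $B
end_revenue = 215.0     # 2030 forecast, $B
years = 2030 - 2024     # 6-year horizon

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied GPU CAGR 2024-2030: {cagr:.1%}")  # roughly 13.6% per year
```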
AI ASICs are growing rapidly as hyperscalers pursue vertical integration and cost control. Google, Amazon, and Microsoft are investing in domain-specific silicon to optimize performance and reduce dependence on Nvidia. Driven by the entrance of these leading companies, AI ASIC revenue is expected to skyrocket to $84.5 billion by 2030.
R&D is focused on resolving bottlenecks in the memory and interconnect segments: the use of DDR5, HBM, and CXL solutions is increasing to address memory bandwidth and capacity challenges, while photonics technologies (optical I/O and co-packaged optics, CPO) will contribute on the interconnect side.



Compute is not the only bottleneck: memory architecture is also evolving rapidly. DDR5 adoption continues, HBM is seeing exceptional demand, especially for AI training, and CXL is gaining traction to solve memory disaggregation and latency challenges in new server architectures.
Leadership in data center silicon is also shifting. US players remain dominant, especially Nvidia, AMD, and Intel. But Yole Group’s analysts point out that China is scaling up its domestic capabilities through strategic investment and policy. Export controls continue to impact supply chains but also reinforce sovereign development goals in China and beyond.
Startups and newcomers are also shaping the market. From Groq to Cerebras and Tenstorrent, innovation in chip design is pushing the frontier of what AI inference hardware can do, and novel solutions sometimes challenge established players on cost, performance, or energy efficiency.