Extended lead times for key components weigh on server growth
Extended lead times for key semiconductor and hardware components are increasingly shaping supply conditions in the server market, according to TrendForce.
TrendForce now expects global server shipments to grow by around 13% YoY in 2026, a downward revision from its earlier forecast of close to 20% growth. The cut reflects shipment volumes being constrained by longer lead times rather than by underlying demand weakness.
Demand for general-purpose servers remains stable. However, suppliers are increasingly allocating capacity toward higher-margin AI server products, lengthening lead times across multiple component categories.
According to TrendForce, lead times for components such as PCBs and CPUs have already extended to nearly one year. More recently, power management ICs (PMICs) and baseboard management controllers (BMCs) have also seen significantly longer lead times.
On the PMIC side, AI servers require substantially higher power density, leading suppliers to prioritise AI-related demand. As a result, 8-inch wafer Bipolar-CMOS-DMOS (BCD) capacity is increasingly being allocated to AI-focused PMIC production. Planned 8-inch fab closures, including by Samsung Electronics, are further tightening available capacity. Lead times for PMICs are now expected to stretch from 21–26 weeks to 35–40 weeks.
BMC chips manufactured on mature process nodes face similar pressure. Limited foundry capacity, combined with the prioritisation of higher-margin and time-sensitive AI orders, is pushing lead times from 11–16 weeks to 21–26 weeks.
On the AI server side, TrendForce expects shipments to grow by around 28% in 2026, driven by demand from cloud service providers (CSPs). ASIC-based AI servers are expected to grow faster than GPU-based systems.
However, TrendForce notes that chip validation and tuning processes at companies such as Meta Platforms and Amazon Web Services may affect shipment timing. As a result, the share of ASIC-based AI servers has been slightly revised down to around 27%, with GPU-based systems continuing to account for the majority.