Why the next semiconductor shortage could be worse than t... - NTS News

The global semiconductor industry hit $791.7 billion in revenue in 2025 and is on track to cross $1 trillion in 2026, the first time in its history. NVIDIA alone surpassed $100 billion in chip sales last year, accounting for over 35% of the industry's total growth. But behind the record-breaking revenue is a structural supply crisis that's already rippling through every corner of the technology market.

High-bandwidth memory is fully sold out through 2026 across all three major producers. TSMC’s advanced packaging capacity is oversubscribed through at least mid-2026. DDR5 spot prices have quadrupled since September 2025. And unlike the pandemic-era chip shortage, this one isn’t caused by supply chain disruptions — it’s caused by AI devouring a disproportionate share of the world’s silicon manufacturing capacity.

The semiconductor shortage of 2020 to 2023 was a logistics problem — factories shut down, shipping routes clogged, demand spiked as the world went remote. The shortage emerging in 2026 is fundamentally different. It’s a structural reallocation problem, driven by the fact that the most profitable use of limited wafer capacity has shifted decisively toward AI infrastructure. Every wafer allocated to a high-bandwidth memory stack for an NVIDIA GPU is a wafer denied to the DDR5 module in a consumer laptop.

That zero-sum math is now playing out across the entire memory and advanced packaging supply chain. High-bandwidth memory has become the single most constrained component in the semiconductor supply chain. The HBM market reached an estimated $30 billion in 2025, representing 23% of total DRAM revenue, and Bank of America projects it will hit $54.6 billion in 2026 — a 58% year-over-year increase.

But the demand curve is steeper than the supply curve, and it’s not close. SK Hynix, which controls roughly 62% of HBM shipments, has told investors its entire 2026 supply is sold out. Micron’s CEO confirmed the same — HBM capacity for both 2025 and 2026 is fully booked. Samsung is raising HBM contract prices by high-teens to low-twenties percentages for 2026. And OpenAI’s landmark $71 billion order for Korean HBM chips in late 2025 illustrates the scale at which hyperscalers are locking up supply regardless of cost.

The core problem is physics. Manufacturing one gigabyte of HBM consumes roughly three times the wafer capacity of standard DDR5. AI workloads are projected to consume nearly 20% of global DRAM wafer capacity in 2026, yet HBM and AI-related chips represent less than 0.2% of total chip volume while generating roughly 50% of semiconductor revenue. That concentration — enormous revenue from tiny volume requiring outsized manufacturing resources — is what makes this shortage structurally different from anything the industry has faced before.
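The zero-sum wafer math above can be sketched in a few lines. The 3x multiplier and the 20% wafer share are the article's figures; everything else is illustrative:

```python
# Back-of-envelope sketch of the zero-sum wafer math described above.
# The 3x multiplier and 20% share come from the article; this is an
# illustration of the trade-off, not a production model.

HBM_WAFER_MULTIPLIER = 3    # wafer capacity per GB of HBM vs. standard DDR5

def ddr5_gb_displaced(hbm_gb: float) -> float:
    """DDR5 gigabytes forgone by building `hbm_gb` gigabytes of HBM instead.

    Since a gigabyte of HBM consumes ~3x the wafer capacity of DDR5,
    the same wafers would otherwise have yielded 3x as many DDR5 gigabytes.
    """
    return hbm_gb * HBM_WAFER_MULTIPLIER

# Building 1 GB of HBM costs the market 3 GB of DDR5 output.
print(ddr5_gb_displaced(1))  # 3

# And at the projected 20% wafer share, AI removes a fifth of potential
# commodity-DRAM supply even before the 3x density penalty is counted.
ai_wafer_share = 0.20
print(f"Commodity wafer supply remaining: {1 - ai_wafer_share:.0%}")  # 80%
```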

Even if memory supply were unlimited, AI chip production would still be constrained by advanced packaging. TSMC’s CoWoS (Chip-on-Wafer-on-Substrate) technology is the critical process for assembling the most advanced AI processors, and the scale of investment flowing into AI infrastructure has overwhelmed available capacity. TSMC’s CoWoS capacity grew from roughly 13,000 wafers per month at the end of 2023 to 75,000 to 80,000 by late 2025 — a sixfold increase in two years.
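As a quick consistency check on the ramp figures above, using only the article's numbers:

```python
# Consistency check on the CoWoS ramp described above (article's figures).
start_2023 = 13_000             # wafers/month, end of 2023
late_2025 = (75_000, 80_000)    # wafers/month range, late 2025

growth = tuple(x / start_2023 for x in late_2025)
print(f"Growth over two years: {growth[0]:.1f}x to {growth[1]:.1f}x")
```

The result, roughly 5.8x to 6.2x, is consistent with the "sixfold increase in two years" the article describes.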

The company is targeting 120,000 to 130,000 wafers per month by the end of 2026, with additional outsourcing of 240,000 to 270,000 wafers annually to packaging partners Amkor and SPIL. But even this aggressive ramp isn’t enough. TSMC executives have been unusually direct in stating that CoWoS capacity remains sold out through 2026, and NVIDIA — which has secured an estimated 60% of total CoWoS allocation — confirmed the oversubscription extends through at least mid-2026.

The allocation hierarchy tells the story. NVIDIA takes roughly 60% of CoWoS capacity for its Blackwell and upcoming Rubin architectures. Broadcom takes approximately 15% for custom AI ASICs it builds for Google and Meta. AMD secures about 11% for its Instinct MI350 and MI400 series. That leaves less than 15% for every other customer — including the growing number of companies designing custom AI silicon.
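That hierarchy can be made concrete with a simple allocation table. The customer shares are the article's; the 125,000 wafers/month figure below is the midpoint of TSMC's stated end-of-2026 target, and how the residual is divided among remaining customers is unknown:

```python
# Sketch of the CoWoS allocation split described above, applied to the
# midpoint of TSMC's end-of-2026 capacity target. Shares are the
# article's figures; the residual's internal split is unknown.

monthly_capacity = 125_000  # midpoint of the 120k-130k wafers/month target

allocation = {
    "NVIDIA": 0.60,     # Blackwell and upcoming Rubin architectures
    "Broadcom": 0.15,   # custom AI ASICs for Google and Meta
    "AMD": 0.11,        # Instinct MI350 / MI400 series
}
allocation["everyone else"] = 1.0 - sum(allocation.values())  # ~14%

for customer, share in allocation.items():
    print(f"{customer:14s} {share:5.0%}  ~{share * monthly_capacity:,.0f} wafers/month")
```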

For second-tier clients, getting CoWoS allocation has become as much a strategic procurement challenge as a technical one.

The structural reallocation toward AI is now visibly affecting consumer electronics in ways that most coverage of the AI boom hasn't adequately captured. DRAM prices have risen 171% year-over-year. DDR5 spot prices have quadrupled since September 2025. ADATA described the situation in late 2025 as a simultaneous shortage of DRAM, NAND, and HDDs, something the company said hadn't happened in 30 years.

The downstream effects are concrete. Dell, Lenovo, and HP have signaled PC price increases of 15% to 20% in early 2026 as memory costs surge. Memory now accounts for roughly 18% of a new PC’s bill of materials — approximately double its 2024 share. IDC projects that the global PC market could contract by 5% to 9% as a result, with the smartphone market facing a 3% to 5% contraction under moderate scenarios.
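These figures hang together on a rough cross-check, assuming memory's 2024 share of the bill of materials was about 9% (inferred from "approximately double") and that costs pass through to prices proportionally, which is an assumption, not the article's claim:

```python
# Rough cross-check of the PC-pricing figures above. The ~9% 2024
# memory share is inferred from "approximately double" today's ~18%;
# proportional cost pass-through is an assumption for illustration.

memory_share_2024 = 0.09   # memory as a fraction of PC bill of materials, 2024
memory_price_rise = 1.71   # DRAM up 171% year-over-year

bom_increase = memory_share_2024 * memory_price_rise
print(f"Implied BOM increase: {bom_increase:.1%}")  # Implied BOM increase: 15.4%
```

An implied increase of roughly 15% is in line with the 15% to 20% price rises Dell, Lenovo, and HP have signaled.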

Lenovo’s CFO described the cost surge as “unprecedented” and disclosed that the company had built memory inventories to roughly 50% above normal levels in anticipation of further price increases. Morgan Stanley downgraded Dell from overweight to underweight in late 2025, citing the company’s heavy exposure to rising server memory costs. The warning was blunt: skyrocketing memory prices could significantly erode margins for server and PC OEMs that lack the pricing power to pass costs through to enterprise customers with existing contracts.

In previous semiconductor cycles, shortages resolved themselves through a predictable mechanism: high prices attracted investment, new capacity came online, and supply eventually caught up with demand. That mechanism is breaking down for three reasons. First, building new semiconductor fabrication capacity takes three to five years from groundbreaking to production. The US push to boost domestic chip production through the CHIPS Act is real, but the new fabs won't materially affect supply until 2028 at the earliest.

The physics of semiconductor manufacturing don't compress to match the urgency of AI deployment timelines.

Second, the economics of HBM and advanced packaging create perverse incentives for memory manufacturers. Why expand DDR5 capacity at commodity margins when HBM sells at a five-times premium? Samsung, SK Hynix, and Micron are all rational actors, and they're all making the same calculation: invest in AI memory where margins are highest and let consumer memory pricing float upward.

The AI-driven surge in data center demand has permanently altered the investment calculus for memory manufacturers.

Third, the demand side shows no sign of moderating. Hyperscaler capital expenditure is approaching $600 billion in 2026, a 36% year-over-year increase. Technology companies including Google, Amazon, Microsoft, and Meta have placed what amount to open-ended memory orders: they'll take as much supply as is available, regardless of cost.

When the buyers with the deepest pockets are price-insensitive, the market-clearing mechanism that normally resolves shortages simply doesn't function.

For investors, the semiconductor shortage creates a bifurcated landscape. Companies positioned on the supply side of AI infrastructure, such as TSMC, SK Hynix, and NVIDIA, benefit directly from scarcity pricing. The enterprise technology landscape in 2026 is being reshaped by who can secure allocation and who can't.

For enterprises, the implications are more immediately painful. Server lead times are extending. Memory costs for enterprise infrastructure are rising at rates that Deloitte’s 2026 semiconductor outlook describes as creating genuine budget pressure for IT departments. Companies that delayed hardware refresh cycles are now facing both higher prices and longer wait times — the worst combination for capital planning.

The most concerning signal is what’s happening at the margin. Memory module retailers in Japan and Europe are rationing stock. Taiwanese distributors are bundling DRAM with motherboards to control allocations. Team Group’s general manager expects availability to worsen through the first half of 2026 as distribution stockpiles are exhausted, warning that obtaining allocation could become difficult regardless of willingness to pay.

The semiconductor industry is entering what some analysts call a “gigacycle” — a multi-year structural expansion driven by AI that could push annual semiconductor revenue from $792 billion in 2025 to more than $1 trillion by year-end 2026 and potentially $2 trillion by 2036. But record revenue doesn’t mean balanced supply. It means a market where AI infrastructure consumes an ever-larger share of finite manufacturing capacity, and everyone else — from PC makers to smartphone manufacturers to enterprise IT departments — competes for what’s left.

The shortage isn’t a glitch in the system. It is the system, reshaped by AI’s insatiable demand for silicon.


Original Source: Techpinions.com | Author: David Graff | Published: March 6, 2026, 3:00 pm
