AI Energy
Summary of Nvidia's "AI Power Crisis" Closed-Door Summit
An unverified summary of Nvidia's closed-door summit on the "AI Power Crisis" has been leaked.
GFM
Date: December 17, 2025
Location: Nvidia Headquarters, Santa Clara, California, USA
Conference Topic: AI Data Center Power Shortage Solutions
Host: Jensen Huang
Attendees: Approximately 25 top startups in the power sector (mostly NVIDIA portfolio companies), covering areas such as liquid cooling, solid-state transformers, fuel cells, on-site hydrogen production, microgrids, and energy storage.
Meeting format: Strictly closed doors, no media, no recording.
⸻

Image caption: A leaked NVIDIA closed-door meeting summary circulating in private channels. Its authenticity cannot be independently verified, but the enterprise-level urgency about power shortages it conveys is a structural signal consistent with the energy bottleneck currently facing US AI data centers.
I. The Core Bombshells (Selected Quotes from Jensen Huang)
1. By 2027, NVIDIA's GPU clusters alone will consume 150–200 GW of electricity globally, equivalent to 1.5–2 times the total electricity consumption of France—and this is just the tip of the iceberg of computing power demand. The power shortage has become the ultimate bottleneck for AI development.
2. If the power problem cannot be solved, all the advanced chips and powerful models will just be "empty shells that cannot start," and the AI revolution will be directly stuck at the energy level.
3. China's installed power capacity is twice that of the United States, a key advantage in AI infrastructure. The United States, meanwhile, will face a 47 GW power shortfall for data centers by 2028, equivalent to the output of 44 standard nuclear power plants, and will lose its computing-power dominance if it does not act.
4. We cannot just be chip suppliers; we must become full-stack solution providers of "computing power + energy"—800V HV-DC is not an option, but a matter of survival for AI data centers.
5. Nuclear energy, green electricity, and energy storage are not substitutes for each other, but rather a "trinity": AI needs stable, carbon-free, and scalable energy, which is the core logic behind our investment in TerraPower.
⸻
II. Crisis Consensus and Key Data
• Extreme power consumption of a single cluster:
Currently, a large multi-GPU cluster consumes roughly 300 million kWh per year, on par with a small city; in 2027, a single Kyber rack will draw more than 1 MW, over five times a traditional rack.
• Global Gap Warning:
The global power shortage for AI data centers is projected to surge from 47 GW to over 100 GW between 2025 and 2028. Even if all projects under construction in the United States come online, there will still be a 5–15 GW shortfall that cannot be filled.
• Traditional architecture fails:
Traditional 54 V power supply architecture requires more than 18 kA of current to carry 1 MW of power, with copper cable consumption exceeding 1 ton per rack and an efficiency of only 90%, which is completely unsuitable for megawatt-level computing power requirements.
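The cluster-level figure quoted earlier can be cross-checked with basic unit arithmetic; a minimal sketch, using the summary's own 300 million kWh number as input:

```python
# Convert the quoted annual energy figure into an implied continuous power draw.
HOURS_PER_YEAR = 8760

def average_power_mw(annual_kwh: float) -> float:
    """Average continuous draw (MW) implied by an annual energy total (kWh)."""
    return annual_kwh / HOURS_PER_YEAR / 1000  # kWh/yr -> kW -> MW

cluster_annual_kwh = 300e6  # "300 million kWh" per the summary
print(f"Implied average draw: {average_power_mw(cluster_annual_kwh):.1f} MW")
```

At roughly 34 MW of continuous draw, the summary's "small city" comparison is at least plausible.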
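The 18 kA figure follows directly from I = P / V; a minimal sketch comparing the 54 V and 800 V buses for a 1 MW rack:

```python
def current_amps(power_w: float, voltage_v: float) -> float:
    """DC bus current needed to deliver a given power, ignoring conversion losses."""
    return power_w / voltage_v

RACK_POWER_W = 1_000_000  # a 1 MW Kyber-class rack

i_54 = current_amps(RACK_POWER_W, 54)    # ~18.5 kA, matching "more than 18 kA"
i_800 = current_amps(RACK_POWER_W, 800)  # 1.25 kA
print(f"54 V bus:  {i_54 / 1000:.1f} kA")
print(f"800 V bus: {i_800 / 1000:.2f} kA ({i_54 / i_800:.0f}x less current)")
```

Since resistive loss scales with the square of current, this roughly 15x reduction in bus current is what drives the copper and efficiency savings claimed for the 800V architecture.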
⸻
III. Core Solutions (Defining Implementation Path)
(I) Revolution in power supply architecture: Full-scale implementation of 800V HV-DC
• Target:
The transition to 400V will be completed in 2026, and mass production of 800V HV-DC will be achieved simultaneously with Kyber racks in 2027, with end-to-end efficiency improved to over 98%.
• Core advantages:
By using a single-step conversion from "13.8kV power grid to 800V DC direct supply", intermediate steps are reduced, power loss is reduced by 40%, and copper cable demand is reduced by 45%.
• Standard Release:
The white paper "800V DC Architecture for Next Generation AI Infrastructure" has been released to unify industry technology interface specifications.
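Why a single-step conversion helps can be seen from the fact that cascaded stage efficiencies multiply. A minimal sketch; the per-stage efficiencies below are illustrative assumptions, not figures from the summary:

```python
from math import prod

def chain_efficiency(stage_effs: list[float]) -> float:
    """End-to-end efficiency of cascaded conversion stages (they multiply)."""
    return prod(stage_effs)

# Hypothetical legacy AC path: several transformer/rectifier/DC-DC stages.
legacy = chain_efficiency([0.98, 0.97, 0.96, 0.98])  # ~89%, near the ~90% cited
# Single 13.8 kV grid -> 800 V DC step.
direct = chain_efficiency([0.985])
print(f"legacy chain: {legacy:.1%}  direct: {direct:.1%}")
```

Every stage removed eliminates its loss outright, which is how collapsing the chain into one conversion step can push end-to-end efficiency past 98%.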
(II) Breakthrough in key technologies: hardware and energy synergy
1. Solid-state transformers (SST):
The 10kV/800V SST conversion efficiency reaches 98.5%, and the volume is reduced by 60%–90%. The Kaohsiung K-1 Data Center is the first demonstration project.
2. Liquid cooling as a mandatory standard:
By 2025, liquid cooling penetration will reach 40%, and with the collaboration of the 800V architecture, heat dissipation energy consumption will be reduced by another 25%, with the PUE target set below 1.1.
3. Chip-level power management:
Blackwell series GPUs will feature Max-Q (high energy efficiency) / Max-P (high performance) intelligent modes, saving 15% energy while maintaining over 97% performance.
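PUE (Power Usage Effectiveness) is total facility power divided by IT power, so a sub-1.1 target caps the non-IT overhead. A minimal sketch with illustrative numbers:

```python
def pue(it_power_kw: float, overhead_kw: float) -> float:
    """PUE = (IT power + cooling/distribution overhead) / IT power."""
    return (it_power_kw + overhead_kw) / it_power_kw

# For a 1 MW rack, PUE below 1.1 means under ~100 kW of total overhead:
print(pue(1000, 100))  # exactly at the 1.1 threshold
print(pue(1000, 90))   # comfortably below it
```

Put differently, the target allows at most about 10 kW of cooling and distribution overhead per 100 kW of IT load.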
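The Max-Q figures quoted above imply a concrete perf-per-watt gain: keeping 97% of performance on 85% of the energy. A minimal sketch of that arithmetic:

```python
def perf_per_watt_gain(perf_kept: float, energy_saved: float) -> float:
    """Relative perf/W versus the full-power (Max-P) baseline of 1.0."""
    return perf_kept / (1.0 - energy_saved)

gain = perf_per_watt_gain(0.97, 0.15)  # 15% energy saved, >97% perf kept
print(f"Max-Q perf/W vs Max-P: {gain:.2f}x")
```

That works out to roughly a 14% improvement in performance per watt, which matters when the binding constraint is megawatts rather than chips.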
(III) Energy Supply Combination: Phased Energy Supplementation
• Short term (0–2 years):
Gas turbines combined with lithium iron phosphate energy storage are used for emergency backup, and Bitcoin mining farms are retrofitted (time-to-power cut to 6–12 months) to quickly fill the gap.