AI Energy
How can the US solve its power shortage?
Engineering Practice and Path Exploration by Power Expert Cheng Maiyue and His Team
Editor-in-Chief of GFM / Jeff Morgan
Mr. Cheng Maiyue's Profile:
Mr. Cheng Maiyue is a leading expert in the field of global energy transition and energy system restructuring, with over thirty years of practical and research experience in international energy, infrastructure, and energy finance. He serves as a special advisor and core research leader for the GFM "AI Energy" column. He is currently the Managing Partner of CT Green Capital, a Founding Director of the Wuzhen Think Tank, and a former Partner of the Rocky Mountain Institute (RMI).
Mr. Cheng began his career at the World Bank headquarters in 1991. He has worked for leading global energy and technology companies such as Southern Company, CLP, AES, Qwest, and Cisco. He has also participated in strategic research and practice on energy transition at Kearney and RMI. He has long focused on feasible solutions for power systems, demand-side management, and new energy systems.
In GFM's "AI Energy Sovereignty" column, Mr. Cheng Maiyue will focus on "Energy × AI × Industrial Security," systematically analyzing the electricity consumption for computing power in the AI era, energy bottlenecks, and decarbonization paths. He will also explore feasible models for zero-carbon energy, virtual power plants, and city-level and industrial-level energy transformation, providing verifiable and replicable energy solutions for global AI and industrial upgrading.
⸻
When AI becomes a national strategic asset, electricity will no longer be just an energy issue, but a race against time in terms of systems and engineering.
With the rapid expansion of computing power and the dense deployment of data centers, the US power system is facing structural pressure. Drawing on the engineering practice of power expert Cheng Maiyue and his team, this article dissects the power-shortage reality confronting tech giants like Nvidia, and examines this emerging AI energy constraint from three perspectives: return on investment, national security, and institutional efficiency.
Introduction | What if "computing power hegemony" hits the "electricity ceiling"?
In 2025, Washington's anxiety did not begin with the White House energy report, but rather emanated from Silicon Valley.
When Nvidia CEO Jensen Huang repeatedly noted on various occasions that "electricity is becoming the biggest bottleneck to the development of AI," the statement sounded, to outsiders, like a forward-looking reminder from a technology leader. Within the US energy and national security establishment, however, it was read as a warning sign: computing power is still advancing at high speed, while the infrastructure supporting it is approaching its limits.
AI has no shortage of chips, capital, or market demand.
What it lacks is the most basic, realistic, and difficult-to-replicate thing: a stable, predictable, and scalable power supply.
It is against this backdrop that Professor Cheng Maiyue, a power systems expert and former Partner of the Rocky Mountain Institute (RMI), and his team are increasingly being mentioned in technology and policy circles. Their focus is not on "how fast AI is," but on a more fundamental question:
If the power system itself cannot adapt to the computing power era, then even the most advanced AI will only remain in the computer room.
(Image caption) The rapid expansion of data centers across the United States is pushing power grids, originally designed for traditional industry and cities, to their limits. As the demand for AI computing power grows exponentially, the pace of power system construction and dispatch is becoming a key bottleneck restricting the deployment of computing power.
⸻
Part 1 | Power Shortages: Not a Sudden Crisis
In his interview with GFM, Professor Cheng Maiyue's first sentence was not sensational:
"This should have happened a long time ago."
In his view, the United States is not facing a sudden energy accident, but a structural mismatch that has been neglected for a long time.
Over the past two decades, the core tasks of the U.S. energy system have been threefold:
Reduce costs, improve reliability, and promote carbon reduction.
However, no system was designed for the AI era.
The power consumption characteristics of AI data centers are completely different from those of traditional industries:
• Extremely high power density
• Severe load fluctuations
• Extremely demanding on power stability
Once a large GPU cluster goes online, its daily power consumption doesn't "increase gradually," but rather directly locks up the power grid capacity of an entire region.
"If the GPU runs out of power, it's just expensive metal," Cheng Maiyue said.
"For Nvidia, this is not an energy issue, but an asset risk issue."
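The scale of that asset risk is easy to see with rough arithmetic. Below is a minimal back-of-envelope sketch; the cluster size, per-GPU board power, server overhead, and PUE are all illustrative assumptions, not figures from the article or any vendor:

```python
# Back-of-envelope estimate of the grid capacity a large GPU cluster
# locks up the moment it comes online. All figures are illustrative
# assumptions, not vendor specifications.

GPU_COUNT = 100_000   # assumed cluster size
WATTS_PER_GPU = 700   # assumed accelerator board power, W
OVERHEAD = 1.10       # assumed non-GPU server overhead (CPU, network, etc.)
PUE = 1.3             # assumed power usage effectiveness (cooling, losses)

it_load_mw = GPU_COUNT * WATTS_PER_GPU * OVERHEAD / 1e6
facility_mw = it_load_mw * PUE
annual_gwh = facility_mw * 8760 / 1000  # if run around the clock

print(f"IT load:       {it_load_mw:.0f} MW")   # ~77 MW
print(f"Facility load: {facility_mw:.0f} MW")  # ~100 MW
print(f"Annual energy: {annual_gwh:.0f} GWh")  # ~877 GWh
```

Under these assumptions a single cluster demands on the order of 100 MW of firm capacity, which is why one deployment can lock up the headroom of an entire regional grid rather than "ramping up gradually."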
⸻
Part 2 | Nvidia isn't actually a major electricity consumer, but it's been hampered by power shortages
A fact that is easily misunderstood is:
Nvidia itself does not directly consume this power.
It sells chips, not electricity.
But that's precisely the problem—
If downstream data centers cannot get power, then chips cannot be deployed, and computing power cannot be monetized.
Cheng Maiyue described this relationship as:
"You're selling an F1 car, but the track isn't finished."
From a supply chain perspective, this is a typical systemic blockage:
• Chip technology is racing forward.
• Data centers are queuing up for electricity.
• Local power grids, approval systems, and construction cycles remain stuck in the previous era.
This is why the power shortage problem is escalating from an "energy issue" to an "institutional bottleneck in technological competition."
⸻
Part 3 | What Cheng Maiyue's team actually does: not slogans, but engineering practice
Unlike many experts who remain at the level of policy discussions, Cheng Maiyue's team has long been doing something "unglamorous but crucial":
Break down macro-level energy issues into feasible engineering solutions.
Their core judgment is clear:
The United States cannot "produce more electricity" in the short term.
However, the generation, scheduling, and use of electricity can be reconfigured at different time scales.
① Short-term: Use existing electricity faster
The team assessed that a large number of existing industrial sites (including some decommissioned mines and energy facilities) could provide usable power to data centers within 6–12 months if paired with rapidly deployable gas and energy storage systems, rather than waiting for the traditional power grid to undergo a three to five-year expansion cycle.
② Mid-term: Let the data center "absorb part of the load"
By combining photovoltaics, energy storage, liquid cooling, and power electronics optimization, data centers can significantly reduce the instantaneous impact on the external power grid.
This is not just a "green slogan," but a practical strategy to reduce the risk of grid rejection.
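A minimal sketch of the idea: on-site storage discharges during load spikes so that draw from the external grid never exceeds an interconnection limit, and recharges when there is headroom. The hourly load profile, grid cap, and battery size below are invented for illustration, not taken from the team's projects:

```python
# Minimal peak-shaving sketch: on-site storage keeps a data center's
# grid draw under an interconnection cap. All numbers are invented.

GRID_LIMIT_MW = 80.0   # assumed interconnection cap
BATTERY_MWH = 60.0     # assumed storage capacity

# hypothetical hourly facility load (MW): training bursts exceed the cap
load = [70, 75, 95, 100, 90, 72, 68, 85]

soc = BATTERY_MWH      # state of charge, start full
grid_draw = []
for mw in load:
    if mw > GRID_LIMIT_MW:                        # discharge to cover the spike
        discharge = min(mw - GRID_LIMIT_MW, soc)
        soc -= discharge
        grid_draw.append(mw - discharge)
    else:                                         # recharge with spare headroom
        charge = min(GRID_LIMIT_MW - mw, BATTERY_MWH - soc)
        soc += charge
        grid_draw.append(mw + charge)

print("peak grid draw:", max(grid_draw), "MW")  # → peak grid draw: 80.0 MW
```

Seen from the grid operator's side, the facility behaves like a flat 80 MW load instead of one that spikes to 100 MW, which is exactly the kind of smoothing that reduces the risk of interconnection rejection.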
③ Long-term: Structural solutions can only come from nuclear energy and system integration
Cheng Maiyue does not shy away from acknowledging that small modular nuclear reactors (SMRs) are currently one of the only options that simultaneously meet the requirements of scale, stability, and carbon reduction. However, he also emphasizes that the real challenge lies not in the technology, but in:
"Is the United States willing to let engineering efficiency outpace its systems?"
(Image caption) In the U.S. national security and energy assessment system, the power grid is no longer just infrastructure, but a potential amplifier of strategic risks. When computing power, military capabilities, and critical communications are highly concentrated in a few power nodes, the length of construction and repair cycles directly affects national resilience.
⸻
Part 4 | This is no longer an energy issue, but a national security constraint
When AI was formally incorporated into the U.S. national security strategy, the nature of the electricity problem was completely changed.
Internal assessments by the Pentagon and national security think tanks have identified power grid vulnerability as a potential amplifier of strategic risks.
• Highly concentrated data centers
• Reliance on a small number of critical power nodes
• Overly long construction and repair cycles
This means that in the event of a conflict or extreme event, the advantage in computing power may become invalid within hours.
Cheng Maiyue spoke frankly:
"If the power supply can't keep up, America's AI advantage won't be overtaken; it will trip over itself."
⸻
Conclusion | The real problem isn't a lack of power, but a lack of "engineering time"
Returning to the opening question: who can solve the United States' power shortage?
The answer may not be romantic.
It's not a particular company, nor a particular technology.
Rather, it is whether this country is willing to admit that the era of computing power demands engineering pace, institutional patience, and long-term infrastructure investment.
The value of Professor Cheng Maiyue and his team lies not in "solving the power shortage," but in clearly pointing out:
If we don't start adjusting the engineering path today, all we'll be left with tomorrow is strategic anxiety.
In the AI era, what is truly scarce is never algorithms.
Rather, it is the real world that supports the operation of algorithms.