AI Energy
Nvidia's "AI Power Crisis": A Systemic Risk That Has Reached the Core Technology and National Security Decision-Making Levels in the United States
GFM / Special Research Group
(Image caption) Jensen Huang, founder and CEO of Nvidia. As a key figure in the global AI computing power ecosystem, the "power anxiety" he faces is no longer just an operational issue for a single company, but is gradually evolving into an institutional signal regarding the continued expansion of AI infrastructure.
I. This Is Not Just One Company's Problem, but the Harbinger of an Era
In every major technological wave, the real bottleneck often lies not in the technology itself, but in the social system that supports the technology.
The Industrial Revolution of the 19th century was constrained by urban sanitation, labor systems, and public governance;
The electrification and automotive revolution of the 20th century was hampered by issues related to power grids, roads, and standards systems.
The digital revolution of the 21st century is constantly colliding with data governance, privacy, and institutional boundaries.
Today, AI is repeating this historical pattern.
The current development of AI is not slowing down due to insufficient model capabilities, nor is it stagnating due to limitations in chip manufacturing processes. Rather, it is gradually revealing a more fundamental and unavoidable structural constraint—electricity.
Nvidia isn't the first tech company to recognize this problem, but it's likely the first to be forced to confront it publicly and institutionally. The reason is simple: it's positioned with the highest global computing density, the fastest deployment pace, and the most aggressive technological approach.
When structural constraints emerge, the most cutting-edge companies are always the first to feel the pressure.
Therefore, Nvidia's "power anxiety" should not be understood as an operational challenge for a single company, but rather as a systemic warning signal—indicating that the entire AI ecosystem is not far from reaching its institutional ceiling.
II. AI Is Changing the "Nature" of Electricity Demand, Not Just the "Quantity"
Most discussions about AI and energy still focus on "how much electricity consumption will increase," but this actually underestimates the essence of the problem.
The International Energy Agency (IEA) and several energy research institutions have repeatedly pointed out in recent years that:
The impact of AI on the power system is not a linear expansion, but rather a structural transformation.
Traditional industrial and commercial electricity use typically has three characteristics:
• Dispersed
• Predictable
• Adjustable
AI's electricity consumption pattern is almost the opposite in every dimension.
First, there is a sharp increase in power density.
The new generation of AI servers compresses extremely high computing power into a limited space, making the power consumption per unit area far exceed that of traditional data centers, and placing unprecedented demands on power distribution, heat dissipation and stability.
Secondly, there is a shift in load patterns.
AI training and inference are not short-term peaks, but rather long-term, high-intensity, and uninterrupted loads, which transforms the power system from a "demand management" problem to a "continuous and stable supply" problem.
Third, there is a trend toward geographical centralization.
Computing power is no longer evenly distributed, but highly concentrated in a few nodes with network, talent, capital and policy conditions, putting disproportionate pressure on local power grids.
Against this backdrop, the IEA predicts that global data center electricity consumption will approach 945 TWh by 2030. This is no longer just an energy statistics forecast, but a stress test of the carrying capacity of existing systems.
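To make the 945 TWh figure concrete, a quick sketch can convert annual consumption into the continuous power draw it implies. Only the 945 TWh headline comes from the IEA projection cited above; the function name and the standard 8,760-hour year are our own framing.

```python
# Back-of-envelope: what does 945 TWh/year mean as a sustained load?
# 1 TWh = 1,000 GWh, and a non-leap year has 365 * 24 = 8,760 hours.

HOURS_PER_YEAR = 8760

def avg_continuous_power_gw(annual_twh: float) -> float:
    """Average power (in GW) implied by an annual consumption in TWh."""
    return annual_twh * 1000 / HOURS_PER_YEAR

# The IEA's 2030 projection for global data centers:
print(f"{avg_continuous_power_gw(945):.0f} GW")  # ≈ 108 GW, around the clock
```

Roughly 108 GW of round-the-clock demand is on the order of a hundred large power plants running continuously, which is why the figure reads as a stress test of grid carrying capacity rather than a routine statistic.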
III. The U.S. Electricity Dilemma: Not a Lack of Resources, but Excessive Institutional Friction
From a resource perspective, the United States is not a country that "lacks electricity".
The problem lies not in the total amount, but in the speed, coordination, and governance structure.
In the AI era, three long-standing characteristics of the US energy system are simultaneously transforming into structural weaknesses:
First, a highly fragmented governance structure.
The multi-layered approval process at the federal, state, and local levels means that the construction cycle for power and grid infrastructure is often measured in years, making it difficult to match the quarterly or even monthly pace of AI infrastructure development.
Second, the power grid is aging and designed for a low-density era.
The existing power grid in the United States was mostly built in an era of dispersed demand and flat load, and it lacks sufficient resilience to the concentrated, high-density impact brought by AI.
Third, the tension between energy transition and immediate demand.
Driven by decarbonization goals, the retirement of some stable base-load energy sources may outpace the replacement by new-generation stable energy sources and energy storage systems, leading to institutional friction during the transition period.
Several investment banks and energy research institutions estimate that the United States may face a power shortage of tens of gigawatts for data centers in the coming years. This is not a sudden crisis, but a chronic, accumulating institutional friction—which is why it is particularly easy to underestimate.
IV. The Return of Engineering Reality: When the Laws of Physics Reclaim the Technological Narrative
In the AI narrative, the public and media often focus on model capabilities, parameter scale, and chip manufacturing processes. But as computing power density approaches physical limits, engineering realities are reclaiming the narrative.

(Image caption) Advanced chip manufacturing and packaging lines. Although Nvidia's AI GPUs are primarily manufactured by foundry partners, their computing density and power requirements are being translated into enormous pressure on power, heat dissipation, and infrastructure through these highly sophisticated manufacturing systems, and are spilling over into the entire energy system.
When faced with megawatt-level server racks, current, thermodynamics, and materials science no longer allow for any romanticized narratives:
• Low supply voltages drive currents to unmanageable levels
• Multi-stage energy conversion erodes overall efficiency
• Heat-dissipation demands, in turn, add to overall energy consumption
This is a moment when technological civilization "returns to physical reality"—
No matter how intelligent the software is, the hardware must ultimately obey the laws of physics.
It is against this backdrop that solutions such as high-voltage direct current (HVDC), liquid cooling, and localized energy utilization have gradually become the mainstream focus of discussion. This is not due to companies' technological preferences, but rather a choice forced by engineering realities.
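The arithmetic behind the push toward HVDC can be sketched in a few lines. For a fixed power P, current scales as I = P / V, and resistive losses scale with the square of the current, so higher distribution voltage means smaller conductors and lower loss; likewise, each conversion stage multiplies in its own efficiency, so chains with fewer stages keep more of the input. The rack size, voltages, and stage efficiencies below are illustrative assumptions, not Nvidia specifications.

```python
# Why megawatt racks push designs toward higher voltage and fewer
# conversion stages. All numbers are illustrative assumptions.

def current_amps(power_w: float, volts: float) -> float:
    """Current drawn at a given voltage for a fixed power (I = P / V)."""
    return power_w / volts

RACK_W = 1_000_000  # a hypothetical 1 MW rack
for v in (54, 400, 800):
    print(f"{v:>4} V -> {current_amps(RACK_W, v):>8,.0f} A")
#   54 V ->   18,519 A   (impractical conductor sizes)
#  800 V ->    1,250 A

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power-delivery chain (stages multiply)."""
    eff = 1.0
    for s in stage_efficiencies:
        eff *= s
    return eff

ac_chain = [0.98, 0.96, 0.95, 0.94]   # e.g. UPS, transformer, PSU, VRM
hvdc_chain = [0.98, 0.97]             # fewer stages in an HVDC design
print(f"AC chain:   {chain_efficiency(ac_chain):.1%}")    # ≈ 84.0%
print(f"HVDC chain: {chain_efficiency(hvdc_chain):.1%}")  # ≈ 95.1%
```

At data-center scale, the gap between those two chains is itself measured in megawatts of waste heat, which then demands still more cooling energy; this is the compounding loop the article describes.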
V. Why Did Nvidia Become the "Developing Agent" of This Crisis?
Nvidia has become a symbol of the AI power problem not because it is more pessimistic than other companies, but because it happens to stand at the intersection of three pressure lines:
• At the forefront of global computing power demand
• At the boundary where technology density meets power density
• At the core hub of the AI ecosystem
When Nvidia encounters power constraints, it means that this constraint is no longer a "future problem," but is approaching the institutional boundaries of the entire AI ecosystem.
For this reason, this issue has begun to be seen by international media, policy circles, and security communities as a structural risk, rather than just an industrial challenge.
Nvidia's role here is more like that of a photographic developing agent:
It brings problems that were hidden deep within the system into full public view.
VI. When Electricity Enters the National Security Context: A Strategic Redefinition of AI Infrastructure
Before AI was formally incorporated into the national security narrative, it was primarily an efficiency tool—enhancing enterprise productivity, optimizing decision-making processes, and reducing marginal costs. However, its nature has fundamentally changed as computing power has become deeply embedded in military command, intelligence analysis, cyber defense, and critical infrastructure management.
Recent public and semi-public documents from the United States have repeatedly emphasized that AI capabilities themselves constitute a strategic asset. However, a premise that is often overlooked is that the security attributes of any strategic asset will rapidly collapse if it lacks a stable and controllable energy supply.
During the Cold War, the core of national security discussions was "whether oil could be cut off."
In the AI era, the question becomes "Can computing power be continuously supplied?"
A power outage at a data center is no longer just a business loss, but could mean:
• A decline in intelligence-processing capability
• Delays in the response speed of autonomous systems
• Critical algorithms unable to run in real time under high-pressure conditions
In this context, power stability has been redefined as part of digital sovereignty. This is why energy and power grid issues are increasingly appearing at the intersection of technology, defense, and domestic policy.
VII. The Structural Paradox of Energy Transition: AI is Amplifying Institutional Friction
The energy transition itself is not the problem; the real challenge lies in the mismatch between the pace of the transition and the technological needs.
On the one hand, the United States and most developed economies are accelerating the phase-out of high-carbon energy sources; on the other hand, the electricity demand brought about by AI is characterized by being "immediate, centralized, and uninterrupted." This puts the already complex energy dispatch system under unprecedented stress.
More importantly, the energy transition has not reduced system complexity; on the contrary, it has amplified it during the transition period.
• Intermittent energy increases backup demand
• Energy storage technology is still constrained by cost and material supply.
• Upgrading the power grid requires interstate and interagency coordination.
The emergence of AI acts like a magnifying glass, prematurely exposing structural problems that could have been delayed. This is not a technological failure, but rather a stress test of the system's capacity to withstand pressure.

(Image caption) High-density AI data center operation scenario. AI applications are moving from the software layer to the infrastructure layer. Their long-term, high-power, and uninterrupted power consumption characteristics are redefining the nature of electricity demand and posing structural challenges to the existing power grid and energy governance system.
VIII. Spillover Effects: When the Demand for Computing Power Reshapes Local Economic and Social Structures
The impact of power bottlenecks will eventually penetrate the technology industry itself and enter a wider social sphere.
In areas with a high concentration of data centers, local governments are beginning to face a dilemma:
• Is the priority given to high-value-added computing power industries?
• Or should we maintain stable electricity consumption and prices for residents and traditional industries?
This choice itself is political. Rising electricity prices, power rationing, and infrastructure priorities could raise questions about the unequal distribution of the benefits of technology.
Furthermore, the high concentration of capital and computing power may exacerbate regional imbalances in development. AI hub cities absorb resources and investment, while peripheral regions bear the burden of power grid pressures but may not necessarily share in the benefits.
These spillover effects make AI-powered electricity a social governance issue, rather than just an engineering or business issue.
IX. Comparative Perspective: The True Watershed in AI Competition is Emerging
From a global comparative perspective, AI competition is gradually shifting from "whose model is more advanced" to "whose system is more capable of supporting it".
The differences in power and computing power deployment among different systems are creating observable gaps in results:
• Countries with a high degree of concentration in energy and infrastructure planning are more likely to quickly deploy computing power.
• A fragmented system with lengthy approval cycles faces challenges in actual implementation.
This does not imply the superiority or inferiority of a single system, but rather demonstrates that the technological characteristics of AI are favoring certain governance capabilities. In this process, the speed of power construction, cross-departmental coordination capabilities, and long-term planning capabilities are gradually becoming implicit competitive advantages.
X. The Electricity Problem in the AI Era Is a Long-Term Governance Challenge
What Nvidia’s “AI power crisis” reveals is not a short-term crisis that is about to erupt, but a systemic risk that is accumulating.
Its danger lies not in sudden collapse, but in continuous friction, loss of efficiency and strategic stagnation.
In the AI era, electricity is no longer just a factor of production; it is also:
• A constraint on the speed at which technology is deployed
• A lever in the transformation of national competitiveness
• A pressure point in the societal consensus on science and technology
The real question is no longer "whether there will be a power outage," but whether the system can adapt to this change in advance.
In this sense, electricity is becoming one of the most underestimated yet most critical sovereign resources in the AI era.