A nuclear power plant and transmission lines at sunset. The grid, not the reactor, is the real story.
The real innovation was not the nuclear reactor itself, but the development of the grid that supports it.
Discussions about artificial intelligence (AI) often draw comparisons to nuclear power, focusing on risks and regulatory challenges. The parallels are real, but they miss the more important lesson.
Fundamentally, a nuclear power plant operates with a steam turbine, a technology that predates modern nuclear energy. The key advancement was the introduction of nuclear reactions as a new energy source, enabling unprecedented power generation. However, the true impact of this power was realized only when the surrounding infrastructure advanced.
To enable nuclear power’s viability, the entire system had to be redesigned. This included expanding transmission networks, improving load balancing, and establishing new safety protocols. These changes required not just engineering feats but also governance, regulation, accountability, and trust to ensure safety and usability.
Similarly, artificial intelligence is on a parallel path. AI models serve as the new power source, but they do not constitute the entire system.
Efstathia Andrikopoulou, MD, MBA, a cardiologist, highlights an issue in healthcare AI. She states, “Detection is not an outcome. We need detection, but detection means nothing unless there are clearly defined actions and a system designed to absorb the follow-up.”
This statement underscores the core issue in healthcare AI, where the technology itself is not the limiting factor. The challenge lies in what follows after detection.
AI models may identify diseases or risks, but without well-defined workflows, ownership, and follow-up, patient care remains unchanged. Results may linger unnoticed in an inbox, or patients might receive contextless information. While detection and model accuracy are often celebrated, the critical question remains: what follows?
In many instances, the answer is either nothing or inconsistency.
This is why AI resembles a new power source, much like nuclear energy. Its value depends on the system's ability to safely and effectively use its output.
The Constraint Is The Grid
Healthcare systems are not equipped to handle the influx of output from AI. Fragmented workflows, siloed data, and unclear lines of responsibility leave clinicians without a reliable way to act on what the models surface, especially in critical situations.
AI generates more signals than ever, but the systems designed to act on them have not kept up. Without effective transmission, signals turn into noise. In healthcare, noise translates to inconsistency, which poses risks.
This is not merely a deployment issue; it is a systemic problem. AI needs to be integrated into real workflows, with clear accountability and success measured by outcomes rather than model performance.
Nuclear power's less visible requirements are easy to overlook: new safety systems, regulations, incident-response procedures, and professional roles all had to be created. These were needed not because nuclear energy was flawed, but because it was so powerful.
The healthcare sector has not undergone this transformation.
While AI tools are introduced, they are not fully integrated, and performance assessments are sporadic. When risks are flagged, ownership is often ambiguous.
We have developed the reactor, but the grid remains unbuilt.
Our limitations stem not from AI’s capabilities but from our systems’ capacity to absorb its output.
Regulate The Source, Enable The System
The nuclear analogy also clarifies governance issues.
Nuclear energy is strictly regulated due to its inherent risks. However, regulation aims to ensure safe usage, not prohibit it.
Artificial intelligence requires a similar approach.
We must rigorously test, approve, and monitor AI models, but the goal should be safe usage, not containment.
In the United States, technology regulation already considers how tools are used in practical settings, especially when safety is a concern. The objective is responsible use with clear accountability, rather than banning the technology.
AI should follow the same path.
History offers a warning. Early failures, most visibly Three Mile Island and Chernobyl, shaped public perception of nuclear power in ways that still constrain its adoption today.
We should avoid repeating this mistake with AI.
Infrastructure Is Policy
The most critical policy decisions regarding AI are not about the models themselves but the infrastructure supporting them.
The bipartisan AI-Ready Data Act, introduced by Senators Ted Budd and Andy Kim, emphasizes this by focusing on data quality, interoperability, and accessibility—fundamental elements for AI functionality in the real world.
This legislation does not regulate the models but invests in the foundational conditions necessary for them to operate effectively.
In healthcare, the question is seldom whether a model can generate insights but whether the data exists in a reliable and actionable form. Fragmented or inconsistent data can hinder even the most accurate outputs.
The bipartisan nature of the bill highlights that infrastructure is one area in AI where consensus is achievable, focusing on responsible use rather than limiting capabilities.
If nuclear power required the development of the electrical grid, AI necessitates building the data layer. This includes not only development but also validation, monitoring, and real-world application.
More Power Will Not Fix A Broken System
Artificial intelligence represents a significant advancement, but breakthroughs alone do not drive impact. Systems do.
We produce more intelligence than ever, yet the systems responsible for acting on it remain outdated. Adding more power to a broken system does not yield better results.
More power will not repair a flawed system; it will reveal its weaknesses.

