In an April 2 Wall Street Journal op-ed, Bernie Sanders expressed concern that artificial intelligence poses a significant threat to cherished American values. This sentiment echoes a widespread worry about how AI might impact jobs, power structures, misinformation, and interpersonal relationships. However, framing AI merely as a threat can lead to stagnation at a time when active engagement is crucial.
A paradox emerges: many Americans use AI even as they express distrust of it. More than half of Americans use AI for tasks like research, writing, and analysis, yet only about one in five trust AI-generated information. Left unaddressed, this skepticism can harden into disengagement.
This issue is particularly evident in public health, a field where caution is essential due to high stakes and sensitive data. Yet, excessive caution can lead to avoidance. While public health debates the implications of AI, other sectors are already integrating it into decision-making processes. Waiting for certainty means public health may end up inheriting systems rather than shaping them.
The real question lies not in whether AI poses risks but in how prepared we are to use it effectively. In practical terms, AI can enhance public health efforts by simplifying complex information, tailoring messages for diverse audiences, and identifying feedback patterns. This does not replace expertise but extends it, which is vital in an under-resourced field.
Some institutions are moving forward. The Centers for Disease Control and Prevention’s recent guidance on AI indicates a shift towards using AI with caution, emphasizing human oversight, privacy, and scientific integrity. This forward-thinking approach suggests starting with small steps, using AI responsibly, and learning through experience.
The debate often gets stuck between establishing guardrails and avoiding engagement altogether. Sanders rightly highlights concerns about bias and the concentration of power, but guardrails are not walls. Guardrails enable safe use; walls delay participation until others set the terms. Public health should focus on the former.
Similar clarity is needed regarding employment. Seven in ten Americans believe AI will decrease job opportunities. While this concern is genuine, new tools often reshape rather than eliminate work. The immediate challenge is whether the field is equipping itself to adapt. Are agencies training staff to use these tools? Are leaders fostering experimentation, or are they avoiding engagement altogether?
The notable aspect of this moment is not the prevalence of fear but how often the conversation halts there. Fear is natural but incomplete. In a results-driven field, AI can be unsettling. The key question is whether we can navigate this discomfort to influence its application.
For the public health sector, the choice is not about accepting or rejecting AI but whether to play a role in shaping its use or adapting once others have set the parameters.