I read an article in the Journal of Public Health Management and Practice the other day about preparing our public health workforce for the future of AI. The article begins with a strategy called “promoting a leadership mindset,” which encourages public health organizational leaders to adopt “a mindset that encourages exploration, innovation, and optimism in considering AI’s potential.”

Yes, leadership is crucial in adopting AI across public health. So, how would a leader do that?

If you look through the academic and grey literature, as I have, you’ll find some key principles on how to “promote a leadership mindset”: invest in AI training, conduct prompt engineering workshops, and ensure workforce development before allowing staff to use AI. That, we’re told, is the safest approach. (Note that the authors of the paper I mentioned are all employed by institutions that offer such services.)

They’re not wrong to note that AI carries real risks. With great power comes great potential for hallucinated public health guidance. However, an educational and workforce development approach has several significant flaws.

I served on the front lines of public health, working in a local health department from 2018 to 2022. If you told me that, to use a tool that could save me time and effort, I’d have to spend countless hours learning about that tool to really benefit from it… well, I might have done it.

But that’s the problem.

This is how we’ve always done things in public health. We spend endless hours learning to use new tools that probably only save us a small fraction of that time. I was able to do this because I worked at a highly resourced health department. If it were just me and three other professionals trying to keep the public healthy, I wouldn’t be able to invest the time needed to learn everything about AI to make it “safe.”

Which brings me to the second major flaw. The only reason I would agree to “hours of training” to learn how to use AI safely is that I worked at one of the largest health departments in the state. Claiming you need extensive training before safely using AI creates a huge equity gap between well-resourced departments and under-resourced ones. That gap isn’t new, but AI will likely widen it.

Even if everyone had unlimited time and resources to follow the authors’ advice and upskill before using AI, it would only be a temporary fix. AI simply evolves too fast for training to keep pace. In just the past three weeks, I’ve seen the AI I use and experiment with change in ways I never expected. There’s no way anyone in the field could do enough upskilling to stay fully current.

We’ve Been Here Before

In my fifteen years in public health informatics, I’ve heard suggestions like these again and again; they’re par for the course. The technology sector has repeatedly asked public health professionals to adapt to its tools. Inspection systems. Electronic Health Records. Communicable Disease Surveillance Systems. You name it, we’ve had to adapt to it. Each time, the message was the same: “learn our system.”

So why would it be any different with AI?

Because we can build it differently.

We can say to technology vendors trying to push the latest AI solution on us: “No more.” AI is a paradigm shift that puts public health in the driver’s seat. We can demand technology built so that we don’t have to absorb its complexity, and refuse the expectation that we spend countless hours learning for the privilege of using a vendor’s tool.

What “Building Differently” Actually Looks Like

Let me introduce you to my philosophy of AI system design.

The way AI works is by passing information between the user and a foundational Large Language Model (think Claude, Gemini, GPT). When we say we need to “train” public health professionals on how to use AI safely, what we really mean is that we need to teach them how to provide the correct information in the right order to the LLM to achieve the correct result. Or at least what we mere humans would consider the “correct” result.

But here’s the thing: that’s a system design problem, not a workforce development problem.

Instead of training people to navigate the risks of AI, we should be building layers of protection between the user and the foundational model. Every concern the workforce development crowd raises, whether it is hallucinations, HIPAA violations, inconsistent guidance, or audit requirements, can be addressed through system design rather than user training.
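
To make “system design rather than user training” concrete, here’s a deliberately simplified Python sketch of the idea. None of these names come from a real product; each stub stands in for one of the protection layers described below.

```python
# Simplified sketch: "layers of protection" as a pipeline.
# Every function here is a hypothetical stub, fleshed out in the
# layer descriptions that follow.

def retrieve_permitted_sources(user, question):    # Layers 1-2
    return ["(permitted, curated source text)"]

def build_tested_prompt(question, docs):           # Layer 3
    return f"Context: {docs}\nQuestion: {question}"

def route_to_best_model(question):                 # Layer 4
    return lambda prompt: "(model answer)"

def run_with_coordination(model, prompt):          # Layer 5
    return model(prompt)

def validate_response(question, raw):              # Layer 6
    return raw

def record_audit_trail(*event):                    # Layer 7
    pass

def answer(user, question):
    """The user asks a plain question; every safety step is the system's job."""
    docs = retrieve_permitted_sources(user, question)
    prompt = build_tested_prompt(question, docs)
    model = route_to_best_model(question)
    raw = run_with_coordination(model, prompt)
    checked = validate_response(question, raw)
    record_audit_trail(user, question, docs, checked)
    return checked

print(answer("epi@county.gov", "What is the current measles isolation guidance?"))
```

The user types one question. Everything else is the system’s responsibility.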

In building our public health AI platform, we identified seven layers of protection that should exist between a public health professional’s question and the response they receive. These aren’t theoretical. We built them. They work.


Layer 1: Secure Data Connection. Data never leaves the control of the organization that owns it. The system automatically enforces access permissions, so users only see what they’re authorized to see. No training is required; the system handles it automatically.
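
Here’s a toy illustration of that filter (the roles and documents are invented, not our actual schema). The key design choice: the permission check lives in the retrieval code itself, so nothing a user isn’t cleared for can ever reach the model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    title: str
    allowed_roles: frozenset

# Hypothetical corpus with per-document permissions.
CORPUS = [
    Document("County TB case line list", frozenset({"epidemiologist"})),
    Document("Public measles fact sheet",
             frozenset({"epidemiologist", "health_educator"})),
]

def retrieve_permitted_sources(role, corpus=CORPUS):
    # Filtering happens before the LLM sees anything: a user can't
    # leak what the system never retrieves.
    return [d for d in corpus if role in d.allowed_roles]

print([d.title for d in retrieve_permitted_sources("health_educator")])
# -> ['Public measles fact sheet']
```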

Layer 2: Smart Information Retrieval. Instead of relying on whatever the LLM “remembers” from its training data (which may be outdated or wrong), the system finds the most relevant information from curated, authoritative sources. It ranks sources by authority and relevance. Responses are based on actual policies and the latest guidance.
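
A toy version of the ranking idea, using keyword overlap where a production system would use embeddings; the sources and authority scores are invented for illustration.

```python
# Hypothetical curated sources, each with a hand-assigned authority score.
SOURCES = [
    {"name": "CDC measles guidance (2024)", "authority": 1.0,
     "text": "measles isolation exclusion period school guidance"},
    {"name": "Archived 2015 memo", "authority": 0.3,
     "text": "measles isolation older guidance"},
]

def rank_sources(question, sources=SOURCES):
    q_terms = set(question.lower().split())
    def score(src):
        # Relevance (term overlap) weighted by source authority.
        overlap = len(q_terms & set(src["text"].split()))
        return overlap * src["authority"]
    return sorted(sources, key=score, reverse=True)

top = rank_sources("what is the measles isolation guidance for schools")[0]
print(top["name"])  # -> CDC measles guidance (2024)
```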

Layer 3: Tested Instruction Templates. The system utilizes pre-approved instruction formats that have been tested and validated for public health scenarios. When guidance changes, we test new instructions against historical requests to ensure accuracy before implementing them. This provides consistent, reliable responses without requiring users to craft the perfect prompt.
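
A simplified picture of a tested template, plus the kind of regression check that runs before a new version ships (the template text and historical requests are made up):

```python
# Pre-approved instruction template, version-controlled like code.
TEMPLATE_V2 = (
    "You are answering for a local health department.\n"
    "Use ONLY the sources provided below. If the sources do not cover\n"
    "the question, say so instead of guessing.\n\n"
    "Sources:\n{sources}\n\nQuestion: {question}\n"
)

def build_tested_prompt(question, sources):
    return TEMPLATE_V2.format(sources="\n".join(sources), question=question)

# Before a template ships, replay saved historical requests and
# confirm the guardrail language survives in every prompt.
HISTORICAL_REQUESTS = ["measles exclusion period?", "rabies PEP timing?"]
for q in HISTORICAL_REQUESTS:
    assert "Use ONLY the sources provided" in build_tested_prompt(q, ["(source)"])
print("template v2 passed regression checks")
```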

Layer 4: Right Tool for the Job. Not every request needs to go to the same foundational LLM (or any foundational LLM for that matter). The system routes requests to the most appropriate resource, using trusted techniques and foundational LLMs specialized in the task at hand. Fast answers when speed matters. Deep analysis when the question is complex. The user doesn’t need to know which tool to use; the system automatically determines the correct one.
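
A minimal routing sketch, with invented models and thresholds, just to show the shape of the decision the system makes so the user doesn’t have to:

```python
def lookup_table(q):
    return "Office hours: 8am-5pm weekdays."          # no LLM needed at all

def fast_model(prompt):
    return f"[fast model] answering: {prompt}"

def deep_model(prompt):
    return f"[deep model, multi-step reasoning] answering: {prompt}"

def route_to_best_model(question):
    q = question.lower()
    if "office hours" in q:
        return lookup_table                           # trusted non-LLM shortcut
    if len(q.split()) > 15 or "compare" in q or "analyze" in q:
        return deep_model                             # deep analysis when complex
    return fast_model                                 # fast answers when speed matters

for q in ["What are your office hours?",
          "Measles exclusion period?",
          "Analyze our last three outbreak responses and compare staffing."]:
    print(route_to_best_model(q)(q))
```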

Layer 5: Multi-Step Coordination. Complex requests often require searching multiple systems in a specific sequence. The system coordinates these steps automatically, remembering context throughout. It mimics the public health thought process when order of operations matters.
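
A stripped-down illustration of that coordination, with a made-up three-step plan. The point is the shared context and the enforced order of operations:

```python
def find_case_definition(ctx):
    ctx["case_definition"] = "confirmed pertussis (CSTE)"

def query_surveillance(ctx):
    # Depends on the previous step's output.
    ctx["case_count"] = f"12 cases matching {ctx['case_definition']}"

def draft_summary(ctx):
    ctx["summary"] = f"Summary: {ctx['case_count']} this month."

PLAN = [find_case_definition, query_surveillance, draft_summary]

def run_with_coordination(plan=PLAN):
    context = {}           # remembered across every step
    for step in plan:
        step(context)      # the system enforces the sequence
    return context["summary"]

print(run_with_coordination())
# -> Summary: 12 cases matching confirmed pertussis (CSTE) this month.
```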

Layer 6: Safety and Accuracy Checks. Every request and response undergoes multiple validation checks. The system detects and obfuscates protected health information. It verifies public health facts. It checks for bias. It ensures regulatory compliance. Problems are identified and addressed before they reach the user. If bias runs too high or the facts can’t be verified, the response comes back with a warning attached.
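
A toy version of where those checks live. Real PHI detection goes far beyond two regular expressions; this only shows that the check belongs to the system, not the user:

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
DOB = re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b")

def validate_response(text, facts_verified):
    # Obfuscate obvious PHI patterns before anything leaves the system.
    text = SSN.sub("[REDACTED-SSN]", text)
    text = DOB.sub("[REDACTED-DOB]", text)
    # facts_verified would come from an upstream fact-check step.
    if not facts_verified:
        text = "WARNING: could not verify against current guidance.\n" + text
    return text

print(validate_response("Patient 123-45-6789, DOB 1/2/1980, cleared.", False))
```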

Layer 7: Complete Audit Trail. Every request, every source consulted, and every response generated is recorded. A complete, searchable record for public records requests, quality improvement, and accountability. Records are retained for at least seven years.
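
To illustrate, here’s a sketch of an append-only audit record with a seven-year retention date. Chaining each entry’s hash to the previous one makes tampering evident; this is the concept, not our production logger.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

LOG = []  # append-only; in reality backed by durable storage

def record_audit_trail(user, question, sources, response):
    now = datetime.now(timezone.utc)
    entry = {
        "at": now.isoformat(),
        "retain_until": (now + timedelta(days=7 * 365)).isoformat(),
        "user": user,
        "question": question,
        "sources": sources,
        "response": response,
        "prev": LOG[-1]["hash"] if LOG else "genesis",  # hash chain
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    LOG.append(entry)

record_audit_trail("epi@county.gov", "measles exclusion?", ["CDC 2024"], "(answer)")
print(LOG[-1]["hash"][:12], LOG[-1]["retain_until"])
```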

The Point Is Public Health Should Not Carry the Burden

I’m not suggesting every AI system needs exactly these seven layers. What I am suggesting is a fundamental shift in how we think about AI safety in public health.

The current consensus puts the burden on the user. Learn prompt engineering. Understand how LLMs hallucinate. Know when to trust the output and when to verify. Attend the workshop. Get certified.

That approach protects vendors, not public health workers.

A well-designed system puts the burden where it belongs: on those of us who build these tools. We should be the ones losing sleep over hallucinations, not the epidemiologist trying to answer a community member’s question. We should be the ones ensuring HIPAA compliance is baked into every interaction, not the health educator who just wants to draft an outreach message.

Public health professionals already carry enough. They don’t need to become AI experts. They need AI that was built by people who understand public health and took the time to build safety into the system itself.

A Different Call to Action

The article I mentioned ends with a call to action: invest in leadership development, management training, and online AI learning resources. More training. More upskilling. More burden on an already overwhelmed workforce.

I have a different call to action.

If you’re a public health leader evaluating AI tools, stop asking “how will we train our staff?” Start asking vendors: “How does your system handle these concerns so my staff doesn’t have to?”

Ask them how they prevent hallucinations. Ask them how they ensure responses reflect current guidance. Ask them how they protect PHI. Ask them about their audit trail. If they can’t answer, or if their response is “well, that’s what the training is for,” they’re asking you to be the safety layer.

You deserve better than that.

We can build AI that carries its own weight. We’ve proven it can be done. Now it’s time to demand it.
