Developing an AI Policy for Your Local Health Department

By Juliana McMillan-Wilhoit

Your staff are already using AI. They are revising emails with it, summarizing meeting notes, writing Excel formulas, analyzing survey data, and in some cases, using it to assist with clinical decisions. Some are on paid subscriptions. Others are using free tools you may not even know about. And in most health departments, there is no policy, no training, and no monitoring around any of it.

This is not a hypothetical scenario. It is the reality we hear from health departments across the country. And it is exactly the situation one local health department found themselves in when they reached out to Flourish & Thrive Labs for help.

At F&T Labs, we view AI as a tool for augmenting intelligence: a powerful asset that works alongside your current staff, enhancing their capabilities and productivity. The question is not whether your team will use AI. The question is whether they will use it with guardrails in place.

The Problem: AI Adoption Without a Safety Net

The numbers tell a clear story. According to Pew Research Center, 23% of American adults have used ChatGPT. A Business.com study found that 57% of American workers have tried it, with 16% using it regularly at work. And only 17% of workers say their employer has communicated clear AI policies.

For local health departments, the stakes are especially high. Staff handle protected health information, serve vulnerable populations, and operate under strict regulatory requirements. When someone pastes client data into a free AI tool to draft a case summary, or uses a model to help identify a clinical diagnosis without understanding how that model was trained, the risks are real: disclosure of personal information, potential security breaches, ethical dilemmas in decision-making, and bias in algorithms affecting the people you serve.

One local health department described this exact situation. They knew staff were using AI across the organization for everything from low-risk tasks like writing formulas to higher-stakes work like assisting with diagnoses. They had no scaffolding around any of it.

“We knew we had a handful of folks using paid subscriptions to AI models, and we knew that inevitably there were team members using free versions of similar tools. We had no scaffolding or monitoring or training for any acceptable use of any of it.”

IT Application Specialist, Local Health Department

They did what many departments do first: they looked around. They checked with their county IT department to see if an AI policy existed or was in development. It was not. They reached out to nearby health departments and their state networks. No one had built one yet, though many were interested in learning from whoever did first.

“We really didn’t know where to start, and we didn’t feel knowledgeable enough in AI technology to build a policy ourselves from ground zero.”

IT Application Specialist, Local Health Department

This is the gap we see repeatedly. Health departments recognize the need for an AI policy, but the technology feels complex, the landscape is evolving quickly, and there is no template sitting on a shelf that accounts for the specific realities of public health work.

What a Comprehensive AI Policy Actually Covers

A meaningful AI policy is more than a one-page memo telling staff to “use AI responsibly.” It needs to address the specific decisions your organization has to make about how AI tools are approved, who can access them, what data can and cannot be shared with them, and what happens when someone violates the policy.

We have identified 12 key areas that a comprehensive AI policy for public health needs to cover, including scope and coverage, how your organization defines AI, training requirements, tool approval processes, permitted and prohibited uses, data privacy and security, ethical considerations, and governance structures. You can read the full breakdown of all 12 areas in our post, 12 Things Your AI for Public Health Policy Needs.

The important thing to understand is that these are not abstract categories. Each one requires your leadership team to make concrete decisions. What is your organization’s risk tolerance? Which tools will you approve? What training will you require before someone can use AI in their work? These decisions need to reflect how your organization actually operates, not how a generic template assumes you do.

The Solution: A Collaborative, Facilitated Process

Jefferson McMillan-Wilhoit facilitating an AI policy session with a health department team

When this health department came to F&T Labs, we designed a process that respected two realities: their team did not have the AI expertise to build a policy from scratch, and they absolutely did not have weeks of extra time to dedicate to the project.

Our process centers on three one-hour facilitated sessions. In each session, we work through a subset of the 12 policy areas with your team. We bring the structure, the questions, and the knowledge of what other health departments have decided. Your team brings the context: what your workflows look like, what tools people are already using, what your IT infrastructure supports, and what level of risk your leadership is comfortable with.

You make all the decisions. We guide the discussion and capture what you decide. This is a critical distinction. We are not handing you a generic policy and asking you to sign off. We are facilitating a conversation where your leadership team works through each area and lands on decisions that reflect your organization’s specific reality.

Juliana McMillan-Wilhoit leading a workshop with public health professionals

After the three sessions, we take everything your team decided and draft the complete policy document. Then you review it, give us one round of feedback, and we finalize it. The whole process typically takes four to six weeks from start to finish.

One of the details that matters more than you might expect: we review your existing policies before we start. That includes your information security policy, HIPAA policies, acceptable use policies, and anything else that touches adjacent topics. We use your existing policy format as a template so the AI policy is ready to go through your adoption process immediately, with no reformatting required. For health departments where every additional step is a barrier to getting something done, this saves real time.

“A huge advantage to working with F&T was that they were all from the public health sector, so they understood the constraints in budget and time and the requisition requirements.”

IT Application Specialist, Local Health Department

What We Need from You

Duration
4 to 6 weeks from start to finish
Sessions
Three one-hour facilitated sessions
Your Team
A core working group of 4 to 8 people representing IT, privacy/legal, programs, and leadership
Your Commitment
Participate in the three sessions and provide one round of feedback on the draft

The Impact: From Uncertainty to Opportunity

For the health department that went through this process, the immediate result was a complete, ready-to-adopt AI policy. It was formatted to match their existing policy templates, it had been reviewed against their current policies for alignment, and it was ready to enter their formal adoption process without any additional preparation.

“It was such an easy process for us that netted a pretty substantial deliverable.”

IT Application Specialist, Local Health Department

The policy gave their staff clear guidelines on acceptable AI use, protecting both the organization and the individuals they serve. It established training expectations so staff understand what a learning model is, how to verify AI-generated responses, and which types of data should never be shared with an AI tool. It created an approval process for new tools so the department can evaluate security and appropriateness before staff start using them.

The less obvious impact was what happened after the policy was in place. With clear guardrails established, the department’s leadership could shift their thinking from risk mitigation to opportunity. Instead of worrying about what staff might be doing wrong with AI, they could start asking a more productive question: how can we deliberately leverage AI to handle the tasks a computer can do well, so our staff can focus their time on the work that requires a human?

“Now that we have a policy written, it’s opened up the opportunity for us to think bigger and consider how we can leverage AI to do some of those tasks that a computer can do, so our staff can focus their time on those tasks that a computer can’t.”

IT Application Specialist, Local Health Department

That shift from defensive to strategic is the real value of getting a policy in place. The policy itself is essential. It protects your organization, your staff, and the people you serve. It is the floor, not the ceiling. Once you have it, you can start building.

“Your staff are absolutely using AI tools whether you realize it or not, so this is a chance for us to be proactive.”

IT Application Specialist, Local Health Department

Where to Go from Here

If your health department does not have an AI policy yet, you are not behind. Many departments are in the same position. The important thing is to start. Your staff are already making decisions every day about how to use AI in their work. A policy gives them the clarity they need to use these tools confidently and safely.

We have put together resources to help you get started. Our post on the 12 things your AI policy needs to cover walks through each area in detail. We also conducted a webinar on “Why Your Health Department Needs an AI Policy,” and you can view the recording here.

Ready to Develop Your AI Policy?

Three sessions. Four to six weeks. A policy your team actually built together. We will walk you through the entire process.

Schedule a Conversation

30 minutes. No pitch. No slides.
