A practical checklist for health department leaders who want to move forward without creating new risk

Your board is asking about AI. Your legal counsel is starting to forward articles. And somewhere in your department, staff are already using it: not in a pilot, not with formal approval, but right now, in the middle of their workday.

This is the pattern we see in nearly every health department we talk to. The question isn’t whether AI is entering your organization. It already has. The question is whether you have any say in how.

You don’t need to have all the answers yet. You do need to ask the right questions before any AI tool goes live in a formal capacity. This is that list.

Data Security

1. Where does the data go when someone types into this tool?

Most off-the-shelf AI tools, including the free tiers of popular chatbots, send your inputs to third-party servers, where vendors may store them, review them for model training, or retain them indefinitely. If a staff member pastes a case note, a client name, or an address into one of these tools, you may have already created a compliance problem. Ask every vendor: where does input data go, who can access it, and is it ever used to train models?

2. Is this tool covered under a Business Associate Agreement?

If your department handles Protected Health Information (and most do), any tool that processes that data needs a BAA under HIPAA. Many AI vendors don’t offer BAAs at all. Others only offer them on enterprise contracts.

Under HIPAA, a disclosure happens at the moment of input, not at the moment of intent. Whether the staff member meant harm is irrelevant; what triggers a potential violation is whether the disclosure was authorized and covered. Find out before deployment, not after an audit.

3. What happens to our data if we stop using the tool?

Vendor relationships end. Contracts expire. Companies get acquired. Make sure you understand what data the vendor retains after you offboard, how long they keep it, and whether you can request deletion. Ideally, your contract spells this out before you sign.

Staff Usage

4. Do we know what AI tools our staff are already using?

Shadow AI (staff using tools without formal approval) is one of the most common and underappreciated risks in local government right now. Before you write a policy, survey your team, and make it safe for them to answer honestly. In our experience, you will find that staff are already using AI to draft reports, answer emails, or summarize meetings. That may not be a problem on its own, but you need to know what you’re working with. “We didn’t know our staff were using it” is not a defense the HHS Office for Civil Rights (OCR) recognizes when investigating HIPAA violations.

5. Have we told staff what they can and can’t put into these tools?

Even with an approved tool and a BAA in place, staff need clear guidance on acceptable input. PHI, personally identifiable information (PII) about clients, internal strategy documents, and legal communications all require explicit policy. “Use good judgment” is not sufficient instruction when a staff member is under deadline pressure and a free tool is one tab away.
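If your IT team wants a technical backstop for that guidance, one option is a lightweight pre-submission screen that flags obvious identifiers before text leaves the building. Here is a minimal Python sketch; the patterns and the screen_input helper are illustrative names of our own, not part of any product, and regex matching catches only mechanical identifiers. Names, addresses, and case narratives will slip right through, so a screen like this supplements policy and training, never replaces them.

```python
import re

# Minimal sketch of a pre-submission screen for obvious identifiers.
# These patterns catch only mechanical cases (SSNs, phone numbers,
# email addresses, labeled dates of birth); free-text identifiers
# like names and addresses will not be caught.
FLAG_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date of birth": re.compile(
        r"\b(?:DOB|date of birth)\b[:\s]*\d{1,2}/\d{1,2}/\d{2,4}",
        re.IGNORECASE,
    ),
}

def screen_input(text: str) -> list[str]:
    """Return labels for any flagged patterns found in the text."""
    return [label for label, pattern in FLAG_PATTERNS.items()
            if pattern.search(text)]

flags = screen_input("Client DOB: 04/12/1987, call 555-867-5309 to confirm.")
if flags:
    print("Hold on: this text appears to contain " + ", ".join(flags)
          + ". Remove identifiers before pasting it into an AI tool.")
```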

6. Who is responsible when an AI output is wrong?

AI tools make mistakes. They hallucinate facts, misread context, and produce outputs that sound plausible yet miss the mark. In a government context, a wrong AI answer becomes a decision record. If a staff member relies on AI-generated information to make a call about a client, a program, or a policy, and that information is wrong, who owns that outcome? Your policy needs to answer the accountability question before it becomes a grievance or a legal matter.

Governance

7. Who in our department has authority to approve a new AI tool?

Right now, many departments have no formal approval pathway for AI tools. Program-level staff are making purchasing decisions, sometimes without the full compliance picture. Designating a clear approval authority (whether that’s IT, legal, the health officer, or a small committee) is one of the highest-leverage governance decisions you can make.

8. How will we update our policy as the technology changes?

AI is not a stable landscape. Tools that appear safe today may change their data practices next quarter. A policy written in 2026 without a built-in review cycle will be outdated by 2027. Build in a cadence: at minimum annually, and triggered by any significant vendor change or new tool deployment.

This also matters for accreditation: PHAB is moving toward requiring documented AI policies as part of the re-accreditation process, with expectations building toward the 2027 cycle. Departments that start this work now will be ahead of the requirement.

Public Records

9. Are AI-generated documents and chat logs subject to public records law?

In most states, yes. And this is catching departments off guard. If staff are using AI to draft correspondence, generate reports, or answer program questions, those outputs may be discoverable under FOIA or your state’s public records statutes. Conversation logs (the actual back-and-forth with an AI tool) may also qualify as public records. Commercial AI platforms were not built to meet government records retention requirements. Check with your county attorney before deploying any tool that generates text on behalf of the department.

10. How do we retain and produce AI-generated records on request?

If AI-generated content is a public record, you need to be able to find it, preserve it, and produce it on request. That means knowing where it is stored, whether the vendor retains it after the conversation ends, and whether your records management system can capture it. Most departments can’t answer these questions yet. Getting there is core to responsible deployment.
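As one illustration of what capturing these records can look like, here is a minimal Python sketch of an append-only conversation log. It assumes (and this is an assumption, not a feature every vendor offers) that your approved tool lets your own systems see each prompt and response; the file name and field names are hypothetical, and the real schema belongs in a conversation with your records officer and county attorney.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical local log; a production system would write to your
# records management platform, not a loose file on a workstation.
LOG_PATH = Path("ai_records_log.jsonl")

def log_ai_exchange(staff_id: str, tool: str, prompt: str, response: str) -> None:
    """Append one AI exchange to an append-only JSON Lines log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "staff_id": staff_id,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def find_records(keyword: str) -> list[dict]:
    """Return logged exchanges mentioning the keyword, e.g. for a records request."""
    if not LOG_PATH.exists():
        return []
    with LOG_PATH.open(encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records
            if keyword.lower() in (r["prompt"] + " " + r["response"]).lower()]
```

Even a sketch this small surfaces the governance questions: who can read the log, how long it is retained, and whether the log itself is a public record.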

Equity and Accuracy

11. Does this tool work accurately for the communities we serve?

General-purpose AI doesn’t account for jurisdiction-specific public health work. Your protocols are not the same as the next county’s. A model trained on the open internet knows nothing about your referral network, your disease burden, or your population context. When a tool gives a confident, fluent, incorrect answer about your harm reduction protocols or your safe messaging guidelines, that’s not a neutral error. Ask vendors specifically about performance for your use cases. Build in a human review step for any output that touches clinical guidance or client communication.

12. How will we catch errors before they cause harm?

A wrong answer delivered with confidence is worse than no answer at all. The person receiving it may not know to doubt it. Build in structured checkpoints: who reviews AI-generated content before it goes to a client or the public, what the escalation path is when something looks wrong, and how staff report errors so you can identify patterns. The goal is to use the technology in a way that keeps humans accountable.


This list doesn’t have to feel overwhelming. Most health departments we work with haven’t answered all twelve of these questions yet. The value is in the structure, not in completeness. It gives you a framework for a conversation that too many departments are avoiding because it feels too large to start.

Start with four: What tools are staff using right now? Is there a BAA? Who approves new tools? Do we have a policy review on the calendar? Everything else follows from there.

If you want help working through these questions with your team, we do this every day. Schedule a conversation.
