Your staff are already using AI—60% of county workers use it monthly. The problem? Few health departments have official policies in place. This gap creates real risks around data privacy, HIPAA compliance, and ethical use. As former public health officials and NACCHO plenary speakers, we created this step-by-step workshop guide to help you close that gap. Download it below to build an AI policy that actually works for your department.

As former public health officials who served in large health departments, we’ve been where you are. We understand the pressure to innovate while protecting sensitive data. We know the budget constraints, the staffing challenges, and the reality that your team is already using these tools whether you have a policy or not.

That’s why we built this workshop guide—a practical, step-by-step framework that walks you through building an AI policy that fits your department’s capacity and risk tolerance. This isn’t generic corporate guidance. It’s a practitioner-built toolkit that addresses the unique challenges of government health departments, from HIPAA compliance to public records laws to workforce development.

Whether you’re starting from scratch or refining an existing draft, this guide gives you the questions to ask, the language to use, and the structure to implement. Download the complete workshop guide below and start closing the gap between unofficial AI use and official AI governance.


PART 1: SCOPE & COVERAGE
Define who your policy applies to—all staff, specific departments, contractors, or interns. This foundational section ensures everyone knows whether they’re covered by your AI guidelines.

PART 2: DEFINING AI FOR YOUR DEPARTMENT
Establish what “AI” means in your context. Decide which tools (ChatGPT, Claude, Microsoft Copilot) fall under your policy and create clear definitions that your staff will actually understand.

PART 3: TRAINING & AWARENESS
Set training requirements before staff can use AI tools. Cover AI literacy basics, approved-tool training, privacy best practices, and bias recognition, and determine retraining frequency.

PART 4: AI TOOL APPROVAL PROCESS
Create your vetting process. Identify who approves tools (IT, Privacy Officer, AI Committee), establish evaluation criteria (HIPAA compliance, data security, bias safeguards), and determine how to communicate approved tools to staff.

PART 5: ACCESS & REQUEST PROCESS
Control who can access which tools and how they request permission. Decide whether staff need to submit IT tickets, get supervisor approval, complete training, or use an online request form.

PART 6: PERMITTED & PROHIBITED USES
Draw clear lines around AI use. Define what’s allowed (drafting communications, summarizing research) and what’s prohibited (clinical decisions, hiring decisions, processing PHI/PII).

PART 7: DATA PRIVACY & SECURITY
Protect sensitive information and ensure HIPAA compliance. Determine whether PHI or PII can enter AI tools, establish security requirements, and list absolutely prohibited data categories.

PART 8: ETHICAL CONSIDERATIONS
Address bias, fairness, and responsible use. Assign responsibility for ethical AI use, establish guidelines for recognizing and addressing bias, and require staff to evaluate outputs for accuracy and cultural humility.

PART 9: MONITORING & ACCOUNTABILITY
Track usage and create reporting channels. Decide how to monitor AI tool usage (logs, audits, self-reporting) and establish clear processes for reporting concerns or incidents.

PART 10: POLICY GOVERNANCE
Ensure your policy stays current and integrates with existing policies. Set review frequency, identify who keeps the policy updated, and connect it with your data privacy, information security, and acceptable use policies.

PART 11: VIOLATIONS & ENFORCEMENT
Establish consequences for policy violations. Define proportionate responses that distinguish between first-time mistakes and serious or repeated non-compliance.

PART 12: GOVERNANCE FRAMEWORK
Coordinate with parent government entities. Align with county-wide or state-level AI policies, identify oversight authority, build relationships with government AI leadership, and establish ongoing collaboration mechanisms.

Ready to Build Your AI Policy?

The 55% gap between AI usage and AI policy isn’t going to close itself. Every day without clear guidelines creates additional risk for your department and confusion for your staff. The good news? You don’t have to start from scratch.

This 12-part workshop guide gives you everything you need to facilitate productive policy discussions with your team, make informed decisions about tool approval and data security, and create policy language you can implement immediately. Hundreds of health departments have used this framework to move from policy paralysis to practical governance.

Download the complete Building Your AI Policy Workshop Guide and get:

  • 12 comprehensive sections covering scope to enforcement
  • Discussion questions designed for collaborative policy development
  • Ready-to-use policy language you can customize for your department
  • Real-world examples from health departments nationwide
  • A framework built by former health officials who understand your constraints

The download is completely free. No lengthy forms, no sales calls—just practical tools to help you govern AI responsibly in your health department.

Need More Than Just a Guide?

If you’re looking for hands-on support implementing your AI policy, developing a comprehensive training program, or exploring how PH360 can provide secure AI tools purpose-built for public health, we’re here to help. As NACCHO plenary speakers and NNPHI training partners, we bring both practitioner experience and technical expertise to support your AI journey.

Subscribe To Flourish Notes

Sign up to receive our monthly dose of public health analysis, joy, and favorite things.
