Your Staff Are Already Using AI. Now You Need a Policy.
A comprehensive AI governance framework for public health needs to address 12 key areas. We facilitate collaborative sessions where your team makes all the decisions. The policy reflects your organization, not ours.
Woman-owned. 8(a) certified. Trusted by NACCHO, NNPHI, and local health departments nationwide.
- Scope & Coverage
- Defining AI for Your Organization
- Training & Awareness Requirements
- AI Tool Approval Process
- Access & Request Process
- Permitted & Prohibited Uses
- Data Privacy & Security
- Ethical Considerations
- Monitoring & Accountability
- Policy Governance
- Violations & Enforcement
- Communication & Transparency
The AI Policy Gap Is a Growing Risk
Your staff are using ChatGPT to draft communications, summarize reports, and answer public inquiries. They're doing it without guidance, without guardrails, and often without your knowledge.
This isn't a technology problem. It's a governance problem.
- Privacy Violations: Staff inputting protected health information into consumer AI tools
- Inconsistent Messaging: AI-generated content that doesn't reflect your department's voice or accuracy standards
- Legal Exposure: No defensible position if something goes wrong with AI-assisted decisions
- Staff Confusion: Well-meaning employees unsure what's acceptable and what's not
Collaborative Policy Development
We don't hand you a 50-page document that sits in a drawer. We facilitate sessions where your team makes the decisions. Your risk tolerance, your capacity, your needs.
Discovery Sessions
Three one-hour sessions where we work through policy sections together. We tackle a few areas each session. You make all the decisions about what works for your organization.
Draft Development
We take everything your team decided and draft the complete policy document. Every decision is documented with rationale that reflects your organization's context.
Review & Finalization
You review the draft, provide feedback, and we finalize. One round of revisions included. The result is a policy that actually reflects how your organization operates.
Building an AI Policy for Public Health
Download our comprehensive guide to developing AI governance frameworks specifically for health departments. Covers 12 key policy areas with real examples, practical templates, and implementation guidance.
Framework informed by 50+ conversations with public health leaders about what actually works in practice.
What We Need From You
Successful policy development requires the right people at the table. We'll guide the process, but your team provides the organizational knowledge.
- Core Working Group (4-8 people): Representatives from IT, privacy/legal, programs, and leadership who can make decisions
- Three Hours of Session Time: Three one-hour facilitated sessions spread over 2-3 weeks
- One Round of Feedback: Review the draft policy and provide consolidated feedback
Ottawa County Department of Public Health
When Ottawa County needed AI governance, they chose a collaborative approach. Working with their cross-functional team, we developed a policy that addressed their specific compliance requirements while enabling innovation.
Read the Case Study →
Frequently Asked Questions
How should a local health department create an AI policy?
The best AI policies for local health departments are developed collaboratively, not handed down from IT. Start by assembling a cross-functional team (IT, legal, programs, leadership), then work through key decision areas: scope, permitted uses, data privacy, training requirements, and accountability. We recommend facilitated sessions where your team makes decisions based on your specific risk tolerance and capacity - that's exactly how our AI policy development process works.
What are the risks of AI in public health?
The primary risks of AI in public health include privacy violations (staff inputting PHI into consumer AI tools), inconsistent messaging, legal exposure from AI-assisted decisions, and equity concerns from biased algorithms. A comprehensive AI governance framework addresses each of these through clear guidelines on permitted and prohibited uses, data handling requirements, human oversight protocols, and regular monitoring. The biggest risk, though, is having no policy at all - your staff are using AI whether you have guidance or not.
What should be in a public health AI governance framework?
A complete AI governance framework for government health agencies should cover 12 key areas: scope and coverage, defining AI for your organization, training requirements, tool approval process, access protocols, permitted and prohibited uses, data privacy and security, ethical considerations, monitoring and accountability, policy governance, violations and enforcement, and communication and transparency. Our framework is informed by 50+ conversations with public health leaders about what actually works in practice.
How long does AI policy development take?
Most engagements run 4-6 weeks from kickoff to adopted policy, depending on your internal approval processes. The collaborative sessions happen in the first 2-3 weeks, followed by drafting and review. Some organizations move faster if they have urgency; others take longer due to board approval cycles.
Do health departments need AI training for staff?
Absolutely. AI policy without AI training for government employees is just paper. Staff need to understand what the policy means in practice - what tools are approved, how to use them safely, what to avoid. We offer AI training programs specifically designed for public health staff, and many clients pair policy development with workforce development to ensure adoption.
What AI tools are safe for government use?
Safety depends on your specific use case and data sensitivity. Generally, enterprise versions of AI tools (like Microsoft Copilot with proper data handling agreements) are safer than consumer versions (like the free ChatGPT). Your AI policy should establish a tool approval process that evaluates security, privacy compliance, and appropriate use cases. We help health departments build approval frameworks that balance innovation with risk management.
What makes F&T Labs different from other AI consultants?
We're former public health officials who've navigated the same political pressures, budget constraints, and compliance requirements you face. We don't theorize about public health constraints - we've lived them. As an 8(a) certified, woman-owned small business, we understand government contracting requirements and can work within your procurement processes.
Ready to Close the AI Policy Gap?
A 30-minute conversation to understand your situation. No pitch deck. No pressure. We'll discuss your current AI landscape and whether a formal policy makes sense for your timeline.
Schedule a Conversation →