In our recent analysis of 50 conversations with public health professionals, we discovered a striking reality: your staff are already using generative AI tools like ChatGPT, Gemini, and Claude in their daily work. They’re using these tools whether you have a policy or not. Whether you know about it or not. Whether it’s allowed or not.
This isn’t a future risk—it’s a possible liability today. And every day without a clear AI policy is another day your organization operates with significant legal, ethical, and operational vulnerabilities.
When we talk about AI in this context, we’re primarily referring to generative AI tools—platforms like ChatGPT, Gemini, and Claude. This includes both consumer-grade technology accessed directly through web interfaces and generative AI capabilities embedded within other software tools. While generative AI can be implemented in ways that pose significant data risks, it can also be deployed in ways that are HIPAA-compliant (like our PH360 product).
Recent webinars and conference presentations from national public health organizations have started discussing AI policies. But when we analyzed the landscape from the perspective of state and local public health, we found that these conversations remain surface level. They often miss the practical complexities we uncovered in our first blog post—the reality of unofficial usage, the specific legal risks beyond general privacy concerns, and the urgent need for actionable guidance.
Consider this: You wouldn’t let staff handle sensitive information via personal email accounts without clear policies and consequences. You have an email policy. You have a social media policy. Why is AI any different?
The answer is simple: it shouldn’t be. Just like email, generative AI is now a ubiquitous tool in the workplace (surveys show that at least half of government employees use AI in their work). And just like email, it needs clear guardrails to be used safely and effectively.
This post is the second in our series exploring AI in public health, building on the insights from our field research.
Table of Contents
What We Found: Five Uncomfortable Truths About AI Risk in Public Health
Staff Don’t Know If You Have a Policy
And they’re using AI anyway. Most professionals have no idea if their organization has guidelines.
Free AI Tools Are a Security Nightmare
Personal ChatGPT accounts used for work create data security risks you can’t ignore.
Legal Compliance Is a Ticking Time Bomb
Sunshine Laws, discovery requests, and HIPAA violations are waiting to happen without clear policies.
The Absence of Consequences Creates Chaos
Without clear disciplinary guidelines, underground AI usage continues without accountability.
The “Tool Agnostic” Approach Is Failing
Vague policies provide no real guidance, pushing staff toward risky consumer-grade tools.
Truth #1: Your Staff Don’t Know If You Have an AI Policy (And They’re Using AI Anyway)
The most alarming finding from our conversations was how many public health professionals simply don’t know if their organization has an AI policy. A medical director at a county public health department captured this perfectly:
“I’ve just started to get really into it lately… I’m finding new ways to try to use it to see what happens… I don’t know if [my organization has an AI policy]. We probably don’t.”
This isn’t an isolated case. Across our conversations, we heard variations of “I’m not sure” or “I don’t think so” when we asked about AI policies. Meanwhile, these same professionals described using ChatGPT to draft grant proposals, summarize community feedback, and even analyze health data.
The risk compounds daily. As AI tools become more integrated into people’s work lives—helping them write emails, draft meeting notes, or organize their thoughts—the likelihood of accidentally inputting sensitive information increases exponentially. One slip and protected health information could end up training a public AI model.
Truth #2: Free AI Tools Are Creating a Data Security Nightmare
There is a disconnect that should keep every health department leader awake at night: many of the same leaders who would immediately flag staff drafting sensitive documents on personal computers or sending them through personal email see no issue with those same staff using free AI tools tied to personal accounts.
An expert we interviewed described using free AI models for work as “incredibly, incredibly unsafe,” explaining that users aren’t sufficiently filtering what they input. But the practice is widespread. Staff are logging into ChatGPT with personal accounts, uploading documents, asking questions about cases, and drafting communications—all without institutional oversight or security measures.
The risk isn’t theoretical. When you input data into free consumer AI tools, that information can be used to train future versions of the model. The New York Times’ ongoing lawsuit against OpenAI shows how long that data can persist: under a court preservation order, OpenAI has had to retain user conversations, including prompts from users who opted out of training. Your staff’s “quick question” to ChatGPT about a disease outbreak could become part of the AI’s permanent training data, and if that question includes PHI, you could quickly be looking at a hefty fine.
Truth #3: Legal Compliance Is a Ticking Time Bomb
The legal implications of ungoverned AI use extend far beyond HIPAA. Most public health organizations haven’t grappled with fundamental questions:
- Are AI chat logs subject to Sunshine Laws and public records requests?
- What happens when an AI-generated report contains errors that lead to public health decisions?
- How do you handle discovery requests for AI interactions in legal proceedings?
- Who is liable when AI-generated content violates privacy laws or contains misinformation?
We heard from multiple organizations struggling with these questions. One surveillance coordinator worried that even internal brainstorming sessions with AI could inadvertently become public record, while many others noted their organization’s complete lack of guidance on record retention for AI interactions.
Without clear policies, every AI interaction is a potential legal liability. And unlike email, where decades of case law provide guidance, AI use exists in a legal gray area that’s still being defined.
Truth #4: The Absence of Consequences Creates Chaos
When we asked about disciplinary guidelines for AI misuse, the responses were telling. Many organizations that have clear consequences for mishandling email or accidentally disclosing PHI have nothing in place for AI violations.
A former public health officer from a large California county described how their organization’s “cultural bias” against AI led to complete “stasis”—no adoption, no innovation, but also no guidelines for the staff who were inevitably using these tools anyway. This creates the worst of both worlds: underground usage without accountability.
Organizations are essentially “punting the decision down the road,” but the game is already in play. Your staff are making decisions about AI use every day, just without your input or protection.
Truth #5: The “Tool Agnostic” Approach Is Failing
We’ve heard many organizations proudly declare their AI policies are “tool agnostic” to remain flexible. But in practice, this often translates to policies so vague they provide no real guidance. A public health strategist from a large metro health department told us their citywide policy was “very vague,” leaving individual departments and staff to interpret what’s acceptable.
This vagueness pushes staff toward the path of least resistance: free, consumer-grade tools that seem harmless but carry significant risks. Being tool agnostic shouldn’t mean being guidance absent.
Building Your AI Policy: A Practical Framework
An effective AI policy isn’t about prohibition—it’s about protection. It’s a guardrail that enables safe, responsible use while protecting your organization, your staff, and your community’s data. Based on our work with health departments and insights from our field research, we’ve identified twelve essential components that comprehensive AI governance should include.
For a detailed breakdown of all essential components of a comprehensive AI policy, read our guide: The 12 Essential Elements of an Effective AI Policy. This resource walks through each component in detail and provides practical examples from our work with health departments nationwide.
From Policy to Practice: The Need for Secure Tools
Even the best policy fails if it doesn’t provide practical alternatives. Telling staff they can’t use ChatGPT without offering secure options is like banning cars without providing public transportation—people will find a way to meet their needs, just less safely.
This is why effective AI governance requires two components: comprehensive policies AND purpose-built, secure tools.
Commercial off-the-shelf AI tools, even those marketed as “enterprise” solutions, often fall short of public health’s unique requirements:
- They may not be truly HIPAA-compliant
- They lack proper audit trails for public sector accountability
- They don’t understand public health workflows and terminology
- They can’t guarantee your data won’t train future models
That’s why we developed PH360, our HIPAA-compliant chatbot builder designed specifically for public health organizations. It provides the AI capabilities your staff need while meeting the security requirements your policy demands:
- Secure document upload without data training risks
- Complete audit trails for compliance and accountability
- Purpose-built for public health workflows
- Designed to handle everything from routine tasks to sensitive communications
Learn more about how PH360 can help your organization move from risky unofficial AI use to secure, governed adoption.
The Time for Action Is Now
Every day without an AI policy is another day of accumulated risk. Your staff are already using AI—the question is whether they’re doing it safely and compliantly, or in ways that expose your organization to legal, ethical, and operational vulnerabilities.
The Path Forward
Five essential steps to move from risky AI experimentation to secure, strategic adoption
Acknowledge the Reality
AI use is happening in your organization right now. Recognize that staff are already using these tools and address it proactively.
Develop Comprehensive Policies
Create clear, practical guidelines that enable safe use while protecting your organization and community data.
Provide Secure Tools
Ensure staff have approved, HIPAA-compliant alternatives to consumer-grade AI tools that meet your security requirements.
Implement Ongoing Training
Build AI literacy across your organization so staff understand not just what’s allowed, but why certain boundaries exist.
Monitor and Adapt
Regularly review and update your approach as technology evolves. AI governance is an ongoing process, not a one-time task.
At Flourish & Thrive Labs, we’ve helped health departments navigate this transition from ungoverned AI experimentation to secure, strategic adoption. We understand the unique challenges public health faces—from HIPAA compliance to Sunshine Laws, from equity concerns to resource constraints.
The future of AI in public health isn’t about if—it’s about how. And that how starts with a comprehensive AI policy that protects your organization while empowering your team.
Don’t wait for a data breach, a public records nightmare, or a compliance violation to force your hand. The time to act is now.
Ready to develop your AI policy and implement secure AI tools? Schedule a consultation with our team. For more insights on AI adoption in public health, read our first post in this series on how public health is really using AI. Our next post will tackle a fundamental question: Is AI actually necessary for public health? And if so, what does responsible adoption look like?
