Artificial intelligence dominates public health conversations, from conferences to vendor pitches promising “AI-powered” solutions. But much of the current literature and conversation about AI in public health remains surface-level, offering little actionable insight into how professionals are actually using these tools.

The official statistics tell one story. According to NACCHO’s 2024 Public Health Informatics Profile, only 5% of local health departments report currently using AI, and 78% perceive significant threats related to AI adoption in public health. But a recent survey from the National Association of Counties paints a strikingly different picture: 60% of county employees are already using generative AI tools at work. This gap suggests that people within public health organizations are using AI tools even when their organizations have not officially adopted them and may have no policies governing their use.

As the team at Flourish & Thrive Labs was thinking about what it means to build an AI tool for public health, we wanted to move beyond the statistics, conference presentations, and marketing hype. Over the past few months, our team conducted nearly 50 conversations with public health professionals from state and local departments across the country. Our goal was simple: to listen and understand how AI is really being used, what concerns shape adoption patterns, and what professionals actually want from these tools.

This post is the first in a series exploring AI in public health. We will update this post with links once those other posts are published.

What We Found: Five Key Insights About AI Implementation in Public Health

In this post, we’ll explore:

  • People are using AI tools regardless of official policies, driven by workforce pressures and efficiency needs
  • Most users rely on free, consumer-grade generative AI tools, with few enterprise solutions available
  • A clear pattern emerged between nimble smaller departments and more cautious larger organizations
  • Professionals want purpose-built AI tools for specific public health challenges, not generic AI assistance
  • Concerns about accuracy, privacy, environmental impact, and human connection significantly shape AI adoption in public health

The quotes throughout this piece come from our conversations with public health professionals, with titles generalized to protect anonymity while providing context for their perspectives.

Finding #1: AI Adoption Regardless of Official Technology Policies

The most striking finding from our conversations is that public health staff are using AI tools regardless of official policies—and in some cases, even when policies explicitly prohibit such use. This workforce is under immense pressure. According to the 2024 Public Health Workforce Interests and Needs Survey (PH WINS), 21% of the public health workforce is considering leaving their job within one year.

The combination of budget cuts, staff reductions, and increased demands in the wake of the COVID-19 pandemic and recent communicable disease outbreaks, such as measles, has created an environment where efficiency is paramount. Public health department staff feel mounting pressure to communicate effectively with their communities and find the realities of budgetary constraints particularly difficult.

A former health officer explained the reality:

“With budget cuts and staff cuts, if you still want to do the right thing for your people, you have to be more efficient. You have to do the same with less.”

This pressure is driving remarkable ingenuity in public health AI implementation. We heard from professionals using AI for practical tasks: turning technical reports into digestible summaries for county commissioners, analyzing grant reports they never had time for, converting grant summaries from one format to another, and drafting communications. When there was an urgent need to develop a shared leave policy, a director at a small health agency described using an AI tool:

“I gave a prompt that I need a shared leave policy or donated leave policy [for my agency]… And I got it and it had everything I wanted in it… in 5 minutes I had my policy or it would have taken me probably two or three days before.”

Importantly, this was still a draft requiring expert review and iteration. The human remained firmly in the loop, but the process was dramatically accelerated.

For many, there’s a real desire to use these tools in their work, and access may increasingly become a retention and recruiting issue. Younger professionals in particular, including those just out of school, already use AI and large language models as part of their workflow and are skeptical of jobs where AI use is restricted. One epidemiologist told us that the policy inertia around AI was so frustrating, “it’s actually pushing me to look for other opportunities right now.”

Finding #2: Consumer-Grade Generative AI Tools Dominate Public Health AI Usage

Our conversations revealed that most public health professionals using AI are turning to free, consumer-grade tools, with ChatGPT the most commonly mentioned platform. Other commonly used tools include Google Gemini for note-taking and transcription. One IT Director noted that “Google Gemini for me has been wonderful… It’s been a great, great, great note taker.”

Some professionals have experimented with uploading documents to AI platforms to create specialized assistance for specific topics. A common practice among health department staff is uploading a variety of documents and using them as a base for answering questions or doing their work. However, most people are not creating replicable custom GPTs or using more advanced features like knowledge bases; they’re simply uploading documents for one-time conversations, with no way to share or systematize these tools across their organization. The sketch below illustrates the difference.
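
To make that distinction concrete, here is a minimal sketch of what a shared, reusable knowledge base could look like, as opposed to pasting documents into a one-off chat. It is an illustration under assumptions, not a description of any department’s setup: it assumes the OpenAI Python SDK (v1.x) with an API key in the environment, and the documents, model names, and question are invented placeholders.

```python
# A minimal sketch of a shared "knowledge base" workflow, as opposed to
# one-off document uploads in a personal chat session. Assumes the OpenAI
# Python SDK (v1.x) and an OPENAI_API_KEY in the environment; the documents
# and model names below are illustrative placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical department documents. In practice these would live in a
# shared, access-controlled store rather than on one person's account.
documents = [
    "Shared leave policy: employees may donate accrued leave to colleagues...",
    "Food code section 3-301: bare-hand contact with ready-to-eat food...",
]

def embed(texts):
    """Embed a list of strings into vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Embedded once, reusable by the whole team -- this is the "knowledge base."
doc_vectors = embed(documents)

def answer(question: str) -> str:
    """Retrieve the most relevant document, then answer grounded in it."""
    q_vec = embed([question])[0]
    best = documents[int(np.argmax(doc_vectors @ q_vec))]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer using only this department document:\n{best}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("Can staff donate unused leave to a coworker?"))
```

The point is the shape of the workflow: documents are embedded once and reused by everyone, rather than re-uploaded into each individual’s chat with no record of what was shared.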

The sophistication of use varies widely, but the reliance on consumer-grade tools was nearly universal in our conversations. Some use AI for basic administrative tasks: drafting emails, summarizing reports, or generating policy templates. Others have found more complex applications, such as strategic planning, work plan development, or even code review to identify technical errors.

What’s notable is that most organizations don’t have enterprise-wide AI tools available. Instead, people are individually logging into ChatGPT or similar platforms, often with personal accounts tied to either personal or work email addresses and with no monitoring or centralized oversight of that usage. This creates a patchwork of unofficial AI implementation across departments, with varying levels of security awareness. Some organizations have blocked common chatbot URLs like ChatGPT’s, which has pushed usage further underground.

Finding #3: Organizational Patterns in Public Health AI Technology Adoption

A clear pattern emerged from our conversations around organizational size and AI adoption in public health. Smaller health departments appear significantly more nimble and adaptive when it comes to AI tools. They’re generally more willing to experiment with solutions, even if they’re not using enterprise-grade tools.

This nimbleness comes with trade-offs. While smaller departments may move faster, they often lack the resources for comprehensive security infrastructure. Meanwhile, larger departments face what one former health officer described as “cultural bias really against artificial intelligence… because it was seen as potentially not trustworthy” and struggle with developing cohesive policies across diverse service lines and data sets.

The result is a paradox: the departments with the greatest capacity for secure, well-governed AI implementation are often the slowest to adopt, while those most willing to experiment may be using the least secure solutions.

Finding #4: Demand for Purpose-Built AI Tools

Perhaps the most important finding is that public health professionals aren’t asking for generic AI assistance. They want tools built for their specific challenges. The demand isn’t for another ChatGPT; it’s for AI that understands public health language, workflows, and requirements.

A public health strategist explained the limitation of current tools:

“ChatGPT really lacks a public health voice. When you ask it to rewrite things or help me write for community, it gets almost very removed from that person-to-person conversation… it doesn’t feel authentic or very public health driven.”

Instead, professionals described wanting AI that could handle specific public health functions: tools that understand food codes for environmental health inspections, systems that can summarize hundred-page medical records for disease investigation, or platforms that can generate reports at appropriate reading levels without being insulting to communities.

This need for specificity extends beyond language to functionality. A surveillance coordinator described the challenge of manually reviewing hundreds of pages of medical records, calling it “very time consuming and very frustrating.” The desire isn’t for AI to make medical decisions, but for tools that can quickly surface relevant information for expert review.
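
As a hedged illustration of that “surface, don’t decide” pattern, the sketch below chunks a long record and asks a model only to quote passages relevant to an investigation, leaving every judgment to the human reviewer. It again assumes the OpenAI Python SDK; the field list, chunk size, and prompt are placeholders, and a real disease-investigation workflow would require far stronger safeguards than this.

```python
# A sketch of "surface, don't decide": extract candidate excerpts from a
# long record and queue them for expert review. The field list and chunk
# size are illustrative assumptions, not a real surveillance workflow.
from openai import OpenAI

client = OpenAI()

FIELDS = ["symptom onset date", "lab results", "exposure history"]

def chunks(text: str, size: int = 4000):
    """Split a long record into model-sized pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def surface_relevant(record_text: str) -> list[dict]:
    """Return verbatim candidate excerpts per chunk for human review."""
    findings = []
    for i, piece in enumerate(chunks(record_text)):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": ("Quote verbatim any passages mentioning: "
                             + ", ".join(FIELDS)
                             + ". If none, reply 'none'. Do not interpret.")},
                {"role": "user", "content": piece},
            ],
        )
        findings.append({"chunk": i, "excerpts": resp.choices[0].message.content})
    # The investigator reviews these excerpts; nothing is acted on directly.
    return findings
```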

At Flourish & Thrive Labs, we’ve been working to address this need through our AI solutions for public health teams, building purpose-built tools that understand the specific requirements of public health work.

Finding #5: Public Health AI Implementation Challenges and Concerns

While enthusiasm for AI’s potential was widespread, so were sophisticated concerns that help explain adoption patterns. These concerns fall into several key categories:

AI Accuracy and Reliability in Public Health Settings

Professionals expressed deep concerns about AI “hallucinations” and the generation of false information. The recent “Make America Healthy Again” report, which included AI-generated citations to studies that don’t actually exist, exemplified these fears. As one small agency administrator put it, “No doubt the accuracy… I still always want that human reviewing it.”

Beyond factual errors, professionals worry about AI systems that lack transparency about their sources and methodology. There is also broader unease about security and about how the public views AI, raising questions about public health’s role as a steward of the community’s trust.

AI Security and Privacy Concerns

The use of free AI models was described as “incredibly, incredibly unsafe” by multiple professionals, particularly given the sensitive nature of public health data. A surveillance coordinator explained:

“We work with people’s private health information and keeping that protected is becoming increasingly challenging… I would want to be able to rest easy at night knowing that whatever AI package we implemented… was secure.”

This concern extends beyond technical security to questions about data governance and organizational responsibility for protecting community health information. When pressed about concerns regarding HIPAA compliance, people often said, “Well, I know that I’m not uploading anything that has sensitive information.” However, experts worry about the reliability of such self-regulation, particularly with free tools where data may be used to train models.
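
For illustration only, the toy sketch below shows the kind of pre-submission redaction step that this self-regulation implicitly relies on, and why experts doubt it. A few regex patterns catch only the most obvious identifiers; genuine de-identification of protected health information requires named-entity recognition, human review, and a properly governed HIPAA-compliant pipeline, none of which a script like this provides.

```python
# A toy redaction pass over obvious identifiers. This is NOT sufficient
# for HIPAA de-identification; it only illustrates how narrow "I'm not
# uploading anything sensitive" really is.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before any upload."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient DOB 04/17/1962, phone 555-867-5309, SSN 123-45-6789"))
# -> Patient DOB [DOB], phone [PHONE], SSN [SSN]
```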

AI Equity Concerns in Public Health Technology

Professionals expressed concerns about AI’s impact on equity in multiple dimensions:

Access to AI Tools: There’s worry that unequal access to AI technology could widen existing gaps between well-resourced and under-resourced health departments.

Bias Within AI Tools: Professionals noted that large language models are often trained on data from “privileged populations” and may produce outputs that are “skewed towards white, educated audiences.” This can perpetuate existing biases in public health communications and interventions.

Environmental Justice: The environmental impact of AI data centers raises concerns about whether these burdens disproportionately affect vulnerable communities, creating another dimension of inequity.

Some professionals also see AI’s potential to support equity. For example, one food code AI chatbot was developed specifically to “support racial equity” by helping BIPOC-owned businesses better understand regulations and prevent violations.

Human Connection in Public Health AI Applications

A community program lead worried about AI’s impact on the relational aspects of public health work: “I don’t want it to take the place of that one-on-one connection… We will lose sight of really the one-on-one connection, being out in the community and talking to people and listening to people.”

This concern reflects a deeper anxiety about whether AI might undermine the trust and personal relationships that are fundamental to effective public health practice.

Environmental Impact of AI Technology

Environmental concerns emerged as significant, particularly among environmentally conscious professionals. A public health strategist asked: “Are we contributing to this mass climate change by being able to use AI?” This concern is particularly acute for a field that often champions environmental health.

Even when the actual environmental impact may be manageable, professionals noted that the perception of environmental harm among the public could undermine trust in public health agencies that use AI tools extensively.

Moving Forward: Building AI Technology That Serves Public Health

These conversations revealed a field caught between opportunity and caution, between urgent needs and legitimate concerns. Perhaps most striking was a disconnect we encountered repeatedly: health department leaders who saw no issue with staff using free AI tools tied to personal email accounts, yet who would have had significant concerns if someone were doing work in a personal Google Doc instead of the organization’s word processor.

This disconnect highlights a critical gap in understanding about AI security and data governance. The current approach of blocking access to AI tools entirely or waiting for perfect policies before any experimentation isn’t preventing AI use. It’s simply pushing it underground, creating risks while missing opportunities to shape how these public health AI tools develop.

What’s clear is that there’s an absolute need for AI tools built specifically with public health in mind. These tools must address real workflow challenges while respecting the privacy, security, and ethical requirements that define this field. They must augment human expertise rather than replace it, and they must be designed by people who understand both the technical possibilities and the practical realities of public health work.

At Flourish & Thrive Labs, we’re grateful for these conversations and feedback. We’ve taken some of the insights from these discussions and built a HIPAA-compliant chatbot builder that enables people to upload documents in a secure manner. We believe the future of AI in public health isn’t about replacement or automation—it’s about creating technology that serves the people who serve communities.

This is the first in a series exploring AI in public health. Our next post will examine a fundamental question: Is AI actually necessary for public health? And if so, what does responsible AI adoption look like, particularly regarding the need for secure AI tools? For those ready to start developing AI policies, you can find our case study on AI policy development for local public health.