And why that’s a feature, not a limitation

When people first see PH360, some of them ask: can I choose which AI model it uses? Can I switch to the latest release when it drops? Can I use the model I prefer personally?

The answer is no. And that’s intentional.

Several AI platforms — including some that market specifically to public health — offer model selection as a feature. It looks like transparency. It feels like control. But for workflows that touch disease surveillance, case investigation, and regulated health data, model choice isn’t empowering. It’s a quality control problem dressed up as a user preference.

We Ran the Benchmarks. That’s How We Chose.

PH360 isn’t built on a default model. It’s built on the models that won — for specific tasks.

Before deploying any AI capability in PH360, we ran structured evaluations across multiple leading models against the specific tasks the platform performs: parsing eCR documents, extracting structured data from clinical notes, drafting case follow-up letters, answering food code questions. We measured accuracy, consistency, latency, and output structure for each task, on each model, under reproducible conditions.

Different tasks have different winners. The model that performs best on eCR extraction may not be the best model for drafting a provider letter. There is no single ‘best model’ — there are best models for specific tasks, and the only way to know which is which is to measure. Health department staff don’t have time to run those evals. They shouldn’t have to. That’s our job.
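To make the idea concrete, here is a minimal sketch of what "run structured evaluations per task, per model" can look like. Everything here is hypothetical — the task names, the `run_model` stub, and the gold answers are illustrative stand-ins, not PH360's actual harness.

```python
"""Sketch: score each model on each task, then pick a winner per task.
All names and data are hypothetical illustrations."""
import time

# Hypothetical evaluation cases: task -> list of (input, expected output).
CASES = {
    "ecr_extraction": [({"note": "Pt positive for measles"}, "measles")],
    "food_code_qa":   [({"question": "Max cold-hold temp?"}, "41F")],
}

MODELS = ["model_a", "model_b"]

def run_model(model: str, task: str, payload: dict) -> str:
    """Stand-in for a real model API call (deterministic toy)."""
    return "measles" if task == "ecr_extraction" else "41F"

def evaluate(models, cases):
    """Measure accuracy and mean latency for every (model, task) pair."""
    results = {}
    for model in models:
        for task, examples in cases.items():
            correct, latencies = 0, []
            for payload, expected in examples:
                start = time.perf_counter()
                output = run_model(model, task, payload)
                latencies.append(time.perf_counter() - start)
                correct += int(output == expected)
            results[(model, task)] = {
                "accuracy": correct / len(examples),
                "mean_latency_s": sum(latencies) / len(latencies),
            }
    return results

def winners(results):
    """Best model per task: highest accuracy, lowest latency as tiebreak."""
    best = {}
    for (model, task), score in results.items():
        key = (score["accuracy"], -score["mean_latency_s"])
        if task not in best or key > best[task][0]:
            best[task] = (key, model)
    return {task: model for task, (_, model) in best.items()}
```

The point the sketch makes is the same one the text makes: the output is a winner *per task*, not a single winner overall.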

Prompts Are Written for Specific Models

Here’s the part that’s easy to miss: every workflow in PH360 that produces consistent, reliable output does so because the prompts behind it were engineered and validated against a specific model’s behavior.

No two models think alike. One is more literal; another more associative. One handles structured data extraction cleanly; another drifts toward narrative. Those differences aren’t stylistic — in a workflow where outputs need to be structured, traceable, and accurate, they’re functional failures.

PH360 maintains a library of validated prompts, each tested against the model it was designed for, on the specific task it performs. When a disease investigator uses PH360 to process an eCR, the system is running a prompt that was designed for that model, tested on that task, and validated against the case definition criteria for that condition.

Swap the model, and that validation no longer holds. The outputs may look fine. But the consistency guarantee is gone — and in a setting where a wrong output could mean a missed case or an incorrect follow-up action, ‘looks fine’ is not good enough.
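One way to picture why a model swap voids validation: treat the prompt library as a registry keyed by *(task, model)*, where a prompt simply does not exist for a pairing it was never validated on. This is an illustrative sketch — the prompt text, model names, and validation labels are hypothetical, not PH360's actual library.

```python
"""Sketch: a prompt registry keyed by (task, model). A prompt validated
for one model is never silently reused on another. All values hypothetical."""
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidatedPrompt:
    task: str
    model: str              # the exact model the prompt was validated on
    template: str
    validated_against: str  # e.g. the case-definition set used in testing

REGISTRY = {
    ("ecr_extraction", "model_a-2024-06"): ValidatedPrompt(
        task="ecr_extraction",
        model="model_a-2024-06",
        template="Extract condition, onset date, and jurisdiction from: {ecr}",
        validated_against="case definition set, 2024 (hypothetical)",
    ),
}

def get_prompt(task: str, model: str) -> ValidatedPrompt:
    """Refuse to run a prompt on a model it was never validated for."""
    try:
        return REGISTRY[(task, model)]
    except KeyError:
        raise LookupError(
            f"No validated prompt for task={task!r} on model={model!r}; "
            "swapping models would void the validation."
        )
```

Under this design, "just switch the model" is not a dropdown change — it is a missing registry entry until re-validation produces one.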

Consistency Is a Public Health Requirement

In most consumer AI contexts, variance in output is acceptable. If your writing assistant produces a slightly different tone today than last week, nothing breaks.

Public health is different.

A disease investigator using PH360 to process eCRs needs to trust that the system extracts the same fields, applies the same case definition logic, and flags the same gaps — every time, for every case. A sanitarian using the food code chatbot needs an answer traceable to a specific section of their jurisdiction’s actual code. A health officer reviewing an AI-drafted provider letter needs output that meets the same standard of accuracy every time it’s generated.

That kind of consistency requires knowing exactly which model is running, how it behaves, and how it was tested. Model-switching breaks that chain. In a regulated environment, a broken chain isn’t just a quality issue. It’s an accountability issue.

What We Do Instead

Rather than a dropdown, PH360 uses model orchestration: the right model for the right task, chosen based on our evaluation data.

Here’s what that looks like in practice: when a new model is released, we don’t flip a switch. We run it against our benchmark tasks — eCR parsing, letter drafting, food code Q&A — and measure whether it outperforms the current model on accuracy, consistency, and latency for each one. If it does, we route that task’s traffic to the new model, with updated prompt validation. If it doesn’t, we don’t.
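The gating logic above can be sketched in a few lines. The model names, score fields, and thresholds are hypothetical — the point is only the shape of the decision: a candidate takes over a task's traffic when it wins on every measured axis, and not otherwise.

```python
"""Sketch: benchmark-gated routing. A candidate model replaces the
incumbent for a task only if it beats it on accuracy, consistency,
and latency. All names and scores are hypothetical."""

# Current routing: task -> model serving it.
ROUTES = {"ecr_parsing": "model_a", "letter_drafting": "model_b"}

def beats(candidate: dict, incumbent: dict) -> bool:
    """Candidate must be at least as accurate and consistent, and no slower."""
    return (candidate["accuracy"] >= incumbent["accuracy"]
            and candidate["consistency"] >= incumbent["consistency"]
            and candidate["latency_s"] <= incumbent["latency_s"]
            and candidate != incumbent)

def maybe_route(task, candidate_model, candidate_scores,
                incumbent_scores, routes):
    """Re-route one task's traffic only when the candidate wins."""
    if beats(candidate_scores, incumbent_scores):
        routes = dict(routes)
        routes[task] = candidate_model  # prompt re-validation would follow
    return routes
```

Note that the routing change is per task: a new model can win eCR parsing while the incumbent keeps letter drafting.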

Users never see this process. They don’t manage it, configure it, or think about it. What they see is that PH360 keeps working — correctly, consistently, and without requiring them to evaluate AI infrastructure to get reliable outputs.

That’s the experience model choice can’t deliver.

A Note on Compliance

One more thing that should be simple but often isn’t: the Business Associate Agreement.

Under HIPAA, any vendor processing protected health information on your behalf needs a BAA. This is not a technicality — it’s a legal requirement. And yet we hear the same story repeatedly: a department asks a vendor for a BAA, the salesperson says ‘yes, no problem,’ and then weeks or months go by with nothing to show for it. The salesperson said yes without understanding what they were agreeing to. The legal team gets involved. The department is already using the tool.

PH360’s BAA is available immediately. No waiting on a sales process, no back-and-forth with a vendor legal team that has never seen a public health contract. You can find it at trust.fandtlabs.com — along with our security documentation, privacy practices, and subprocessor list.

Getting a BAA shouldn’t be an obstacle. If it is, that’s a signal worth paying attention to.

The Deeper Point

There’s a version of AI product design that prioritizes the feeling of control over the reality of outcomes. Offering a model dropdown is that kind of design. It looks like a feature. It feels empowering. But it quietly transfers the consequences of a complex engineering decision onto people who didn’t sign up to make it.

Public health practitioners signed up to protect their communities. They should be able to use AI tools that are optimized for that work — validated, consistent, compliant, and traceable — without having to become AI engineers to use them safely.

Model choice isn’t a feature we’re withholding. It’s a responsibility we’re keeping — so that what comes out of PH360 is something you can stand behind.
