Hi all, Nate here.

A few months ago I was facilitating a WHO training on AI for public health professionals. Right after the privacy session, a Ministry of Health official raised his hand. He'd been uploading ministry data into ChatGPT to analyze it, he said. Had he exposed that data?

Yes.

He wasn't being careless. He was trying to do his job better with the tools he had. And he's far from alone. A January 2026 Wolters Kluwer survey found that nearly 1 in 5 healthcare workers admitted to using unauthorized AI tools at work. A separate 2025 survey of 6,500 people across seven countries found that 43% of employed AI users had pasted sensitive work information into AI tools without telling their employer.

Banning AI won't fix this. People use it because it makes them better at their jobs. The real question is how to use it safely. Here's how I'd walk through it.

1. Is your conversation private?

By default, the major LLM providers train their models on the conversations you have with them. On Anthropic's Claude consumer plans (Free, Pro at $20/month, Max at $100-200/month) and OpenAI's ChatGPT consumer plans (Free, Plus at $20/month), training is on unless you turn it off. That means your inputs can be reviewed by company staff, used to improve future models, and retained for up to five years. If your sensitive information is used to train the model, it can reach other users in two ways: as verbatim fragments the model has memorized and reproduces when prompted, or as knowledge the model has absorbed and repackages when someone asks a related question.

Every major provider lets you turn this off. On Claude: "Help improve Claude" in Privacy Settings. On ChatGPT: "Improve the model for everyone" in Data Controls.

I often encounter suspicion of the major AI companies among friends and colleagues in global health. They don't trust that the companies will honor their commitment not to train on your data. In my view, a quiet breach of that commitment is unlikely. The terms you clicked are a contract, even on consumer plans, and a public no-training commitment is a binding promise. Violating it would be a breach of contract and, in the US, the kind of "deceptive practice" the FTC has repeatedly fined companies for. The legal and reputational consequences would be severe.

Even with training turned off, what's left is not trivial. The consumer privacy policy lets the company collect your inputs and outputs, retain them for safety review, and share them with third parties, including government authorities. The company can also change consumer terms unilaterally. Anthropic did exactly that in September 2025, when it flipped the training default from off to on.

If consumer use is all you need (personal tasks, your own content, nothing sensitive), consumer with training off is reasonable.

2. Are you handling data or documents that aren't yours?

This is where consumer plans stop being enough. There are two issues you should consider.

Personal data (privacy law). If you paste information about identifiable individuals (beneficiary records, patient notes, staff lists), you're processing what privacy laws in most countries call "personal data." This includes the laws that apply across the EU and UK, as well as national privacy laws in Rwanda, Nigeria, Kenya, and many other countries where global health work happens. These laws require a written agreement between you and any vendor that touches the data. It's called a data processing addendum, or DPA. Consumer plans don't include one. Business plans do.

Confidential organizational data (contract and duty). Internal strategy memos, draft proposals, program reports, budgets: none of that is "personal data," but almost all of it is confidential to the organization that owns it. Employees are bound by their handbooks and contracts, consultants by their NDAs. Pasting confidential material into a consumer AI tool generally counts as disclosure to a third party, regardless of training settings.

If you're an employee or a consultant, it's your responsibility to understand the terms of your contract and your organization's AI guidelines, and to have a conversation with your organization about what AI tools you can use, with what data, and under what conditions. Don't assume. Ask.

Here's where privacy policy vs. commercial agreement matters. A consumer plan gives you a privacy policy (a public statement the provider can modify) plus terms that cap damages at nominal amounts and force disputes into individual arbitration. A Business plan gives you commercial terms: no-training commitment written into the contract, a DPA attached, stronger confidentiality obligations, and terms that can't be changed unilaterally. Both are contracts, but a business contract is stronger and more enforceable.

Claude Team and ChatGPT Business are the stronger-contract tiers. Claude Team is $20-100/seat/month on annual billing with a 5-seat minimum. ChatGPT Business is similar with a 2-seat minimum.

Who should pay for Business? The test isn't your job title; it's what data you put in front of the AI. Consumer is fine for your own content, as long as you'd be comfortable with it being disclosed. Business is the right tier if you're a staff member who pastes internal documents or non-public program data, or a consultant handling clients' confidential material. Many large organizations already provide this; if yours does, use it.

3. Is any of that data health data?

Business plans at both Anthropic and OpenAI do not include the health data agreement HIPAA requires. For protected health information, you need Enterprise or the API with a separately negotiated Business Associate Agreement. You can also reach the same models through AWS, Google Cloud, or Microsoft Azure, which have their own health data agreements.
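To make the cloud route concrete, here is a minimal sketch of what calling Claude through AWS Bedrock might look like. It assumes your organization already has an AWS account with Bedrock model access and the appropriate agreements (such as a BAA) in place; the region and model ID below are illustrative, not a recommendation.

```python
import boto3

# Minimal sketch: reaching a Claude model through AWS Bedrock rather than
# the consumer Claude or ChatGPT apps. Assumes an AWS account with Bedrock
# access and the relevant agreements (e.g., a BAA) already signed.
client = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is illustrative

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID; use what your account enables
    messages=[
        {"role": "user", "content": [{"text": "Summarize this de-identified program report: ..."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```

The point isn't the specific code; it's that the data flows under your organization's cloud contract rather than a consumer privacy policy.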

For global health professionals outside the US health system, HIPAA may not apply, but equivalent local rules and organizational policies often will. When in doubt, treat health data as requiring the Enterprise or API tier.

One note: OpenAI launched "ChatGPT Health" in January 2026, a consumer feature for connecting medical records and health apps. Despite the name, it is not HIPAA-compliant and is not the same as their enterprise "ChatGPT for Healthcare."

Quick reference

| | Consumer | Business | Enterprise or API |
| --- | --- | --- | --- |
| No-training commitment | Privacy policy (opt-out) | Commercial terms | Commercial terms |
| Data processing addendum | No | Yes | Yes |
| Health data agreement (HIPAA/BAA) | No | No | On request |
| Use for | Your own content, non-sensitive information | Internal org data, confidential docs, personal data | Health data, regulated data |

4. Where does your data physically go?

When someone in Kigali enters program data into ChatGPT, it goes to servers in the United States. No privacy setting changes that. The training toggle controls what the AI company does with your data, not where the data lives.

Countries like Rwanda and Nigeria restrict or require authorization for cross-border transfers of personal data. If your organization works across borders, you may face rules a US-hosted AI tool can't satisfy, regardless of plan.

If data genuinely can't leave your infrastructure, locally-hosted open-source models are the answer. But for organizations without strong IT security, cloud providers may actually be safer; major AI companies invest more in security than most organizations can match.
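For readers who want to see what "locally hosted" means in practice, here is a minimal sketch, assuming a local model server such as Ollama is already running on your machine with an open-source model pulled. The endpoint, model name, and helper function are illustrative.

```python
import requests

def ask_local_model(prompt: str) -> str:
    """Send a prompt to a model running on this machine; nothing leaves your infrastructure."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # default local Ollama endpoint (illustrative)
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Example: summarizing a fictional internal figure without touching any cloud service
print(ask_local_model("Summarize: vaccination coverage rose from 61% to 74% in Q2."))
```

The trade-off is the one just mentioned: you now own the security of that machine.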

5. What about hackers?

Privacy settings protect you from the AI company. They don't protect you from someone breaking in. Big tech companies and AI providers have both been hacked. User accounts are compromised all the time through stolen credentials.

The threat is about to get worse. In April 2026, Anthropic announced a model called Mythos Preview that discovered thousands of previously unknown security flaws across every major operating system and web browser, some decades old. They're holding it back because of what it could do in the wrong hands. Comparable models from other companies will follow.

What You Can Do Today

  1. Turn off training. On Claude: "Help improve Claude" in Privacy Settings. On ChatGPT: "Improve the model for everyone" in Data Controls.

  2. Don't paste sensitive information into consumer AI tools. Personal data, patient information, internal strategy documents, unpublished research. If you wouldn't want it disclosed to a third party, don't enter it on a consumer plan.

  3. Match your plan to your data. Your own non-sensitive content is fine on consumer. Internal org data, third-party personal data, and other sensitive material belong on a Business account. Health or regulated data belongs on Enterprise or the API.

  4. Find out your organization's AI policy and push for the right account. If yours has a business or enterprise account, learn the policy and follow it. If not, advocate for one.

  5. Check the data protection laws where you work. Data localization rules can apply regardless of your plan or settings. Ask your legal team.

Disclaimer: I'm not a lawyer. This is based on publicly available information and my own experience using AI tools every day for global health work. For guidance specific to your situation, consult your legal and compliance team.
