AI Compliance · 5 min read · Updated Apr 2, 2026

What to Disclose When You Use OpenAI (or Any LLM)

If your SaaS uses OpenAI, Anthropic, or another LLM API, here's exactly what you need to disclose to users, what your privacy policy must say, and what compliance requirements apply in 2026.

By the Smolde Editorial Team · Published Jan 16, 2026

Most founders using the OpenAI API (or Claude, Gemini, or any LLM) have no idea what they're legally required to disclose to users. The answer depends on how you're using it, but in almost every case, you need to say something.

Here's what's required and why.

The Core Rule: Disclose AI Use in Your Privacy Policy

If you're passing user data to an LLM API (even just user-typed input), that's a data processing activity that must be disclosed in your privacy policy.

At minimum, your privacy policy must state:

  • That you use AI/ML technology
  • Which vendor(s) (OpenAI, Anthropic, etc.)
  • What user data is sent to the AI
  • Whether that data is used to train models (and if so, with what consent)

This isn't optional. GDPR, CCPA, and the EU AI Act all require it. Failing to disclose a data processor is a compliance violation regardless of jurisdiction.

What OpenAI's Terms Actually Say

OpenAI's API terms (as of 2026) state that they do not use API inputs/outputs to train their models by default. This is good. It means you don't need to disclose "your data trains OpenAI" unless you've opted into fine-tuning.

However: you still need to disclose that you use OpenAI as a subprocessor. You must accept their data processing terms, and OpenAI should appear in your subprocessor list.

Anthropic, Google (Gemini), and most other API providers have similar policies. Inputs aren't used for training by default. Check your specific provider's data processing terms to confirm.

The Training Data Question

If you are using user inputs to fine-tune or train your own models, different rules apply:

  • CCPA (California): You must disclose this in your Terms of Use before collecting the data. Users have a right to opt out of having their data used for model training.
  • Connecticut (effective July 1, 2026): Must explicitly disclose if you collect data for the purpose of training large language models.
  • COPPA: You cannot use data from children under 13 to train AI models without separate verifiable parental consent (effective April 2026).
  • EU GDPR: Requires a valid legal basis for training use. Consent is safest. Legitimate interest requires a documented balancing test.

The practical rule: If users' inputs feed your training pipeline in any way, disclose it clearly before they submit anything.
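That practical rule can be enforced in code with a consent gate in your ingestion path. The sketch below is illustrative only: the `User` flag and `record_training_example` helper are made-up names, not part of any real SDK, and the "dataset" is just an in-memory list standing in for your real pipeline.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    training_opt_out: bool = False  # hypothetical consent flag, set from your settings UI

# Stand-in for your real fine-tuning dataset or queue
training_dataset: list[dict] = []

def record_training_example(user: User, prompt: str, completion: str) -> bool:
    """Store a fine-tuning example only if the user has not opted out.

    Returns True if the example was stored, False if the opt-out blocked it.
    """
    if user.training_opt_out:
        return False  # CCPA-style opt-out: this user's inputs never enter training
    training_dataset.append({"prompt": prompt, "completion": completion})
    return True
```

The point of gating at the ingestion step (rather than filtering later) is that opted-out data never lands in the pipeline at all, which is much easier to defend in an audit.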

Automated Decision-Making Disclosure

If your LLM integration makes decisions that affect users (approvals, recommendations, filtering, scoring), you have additional disclosure requirements:

CCPA Automated Decision-Making Technology (ADMT) rules (effective Jan 1, 2027 for existing systems):

  • Must disclose that automated decision-making is used
  • Must explain what data it uses
  • Must provide an opt-out mechanism
  • Must provide an appeals process for significant decisions

EU AI Act (in force 2026):

  • Chatbots and AI assistants = "limited risk" → must inform users they're interacting with AI
  • Systems making significant decisions about people = "high risk" → requires formal risk assessment, documentation, human oversight

"Significant decisions" means decisions that affect livelihood, opportunities, financial status, or access to services. If your AI recommends, approves, or rejects things that matter to users, it probably qualifies.

What Your Privacy Policy Must Say

Here's a template section you can adapt:

Artificial Intelligence

Our product uses artificial intelligence technology, including the [OpenAI / Anthropic Claude] API, to [describe specific function: e.g., "generate responses to your queries," "analyze your inputs," "provide compliance recommendations"].

When you use [feature name], your inputs may be transmitted to [OpenAI / Anthropic] for processing. We have a Data Processing Agreement with [OpenAI / Anthropic] and your data is not used to train their models.

[If applicable:] We may use anonymized, aggregated inputs to improve our own AI models. You can opt out of this at any time by [opt-out method].

Our AI features [do / do not] make automated decisions that significantly affect you. [If they do: describe the decision, the opt-out, and appeals process.]

The Chatbot Disclosure (EU AI Act)

If your product has an AI chat interface (even a simple one), EU law requires that users know they're talking to AI, not a human.

This needs to be:

  • Visible at the start of the conversation
  • In plain language
  • Not buried in fine print

Something as simple as: "You're chatting with an AI assistant. [Switch to human support]" satisfies this.
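In code, the cheapest way to guarantee the disclosure is "visible at the start" is to bake it into the opening transcript rather than rely on a separate UI banner. A minimal sketch, assuming a hypothetical `start_chat_session` helper in your chat backend:

```python
def start_chat_session(assistant_name: str = "AI assistant") -> list[dict]:
    """Return the opening transcript for a chat UI, with the AI disclosure
    as the first visible message (hypothetical helper, not a real SDK call)."""
    disclosure = (
        f"You're chatting with an {assistant_name}. [Switch to human support]"
    )
    # "system_notice" is a made-up role for messages the UI renders but the model ignores
    return [{"role": "system_notice", "content": disclosure, "visible": True}]
```

Because every session starts from this function, the disclosure can't be accidentally dropped when someone redesigns the chat window.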

What Breaks Your Compliance Position

  • Adding a new LLM provider: add it to your privacy policy and subprocessor list
  • Starting to fine-tune on user data: disclose before collecting; requires an opt-out
  • New AI feature that makes recommendations: may trigger ADMT disclosure requirements
  • Adding AI to a children's product: requires separate parental consent for data use
  • Generating AI content in marketing: must be labeled as AI-generated (FTC)

The Practical Checklist

  • Privacy policy mentions AI/LLM use by vendor name
  • Data Processing Agreement signed with LLM provider (OpenAI, Anthropic, etc.)
  • LLM provider listed as subprocessor
  • Clear statement on whether user data trains any models
  • If training on user data: opt-out mechanism in place
  • If automated decisions: ADMT disclosure + opt-out + appeals process
  • If chat interface: AI disclosure at start of interaction (EU users)
  • If using AI in marketing content: FTC disclosure in place

The disclosure requirements aren't designed to stop you from using AI. They're designed to make sure your users understand what's happening with their data. Most of this is a few sentences in your privacy policy and one line in your UI.

Smolde

Know when your docs go stale.

Smolde monitors your product and tells you when a change (a new vendor, a new feature, a new market) means your compliance docs need updating.

Run a free compliance check

No account required. Not legal advice.
