Compliance Hotline — AI Policy

Last updated: May 5, 2026

This AI Policy explains how the Compliance Hotline service operated by Exclusion Screening, LLC ("Exclusion Screening," "we," "us," "our") uses artificial intelligence ("AI"), what data is sent to our AI provider, what that provider may and may not do with the data, and the limitations Reporters, Customers, and Authorized Users should keep in mind. It is incorporated by reference into the Subscription Agreement, the Terms of Use, and the Privacy Policy.

We wrote this Policy in plain language because AI behavior in a compliance hotline matters to real people who are reporting real concerns. If anything here is unclear, email privacy@compliancehotline.com.


1. Where AI is used in the Service

There are three places in the Service where AI processes content:

1.1 AI intake chatbot

Reporters can submit a report by chatting with an AI assistant on a Customer's reporting page. The chatbot:

  • greets the Reporter and explains confidentiality;
  • asks the questions needed to capture a usable report (concern type, severity, department, location, name or anonymous, contact email, and the narrative);
  • gently prompts for missing specifics (date, location, people involved); and
  • summarizes the report and asks for confirmation before submission.

The chatbot is supportive and non-judgmental by design. It does not advise the Reporter on whether their concern is meritorious, what their legal rights are, or what action to take outside of submitting the report.

1.2 AI completeness review (web form)

When a Reporter uses the standard web form, the system runs a brief AI check on the narrative to flag whether key specifics (a date or timeframe, a location, the people involved, a clear description of what happened) are missing. The Reporter sees the suggestions and can choose to add detail before submitting. The AI does not judge or score the credibility of the report.

1.3 AI summary of telephone calls

If a Customer has enabled the telephone hotline, the AI generates a short summary of each inbound call from the Dialpad transcription so an administrator can triage the call faster. The original transcript and recording remain available; the AI summary is a convenience, not a substitute.

1.4 What AI is not used for

  • We do not use AI to decide which reports are real or fake.
  • We do not use AI to score, rank, or prioritize Reporters or reports based on identity or demographic factors.
  • We do not use AI to identify or unmask anonymous Reporters.
  • We do not use AI to monitor an Authorized User's investigative work or evaluate their performance.
  • We do not use AI to make any employment, disciplinary, legal, or regulatory decision about anyone.

2. The AI provider

The AI features are powered by Anthropic, PBC ("Anthropic"), using the Claude family of models, accessed through the Anthropic API.

2.1 What Anthropic does with the data

When we send a prompt to Anthropic (for example, the chatbot conversation, or the report narrative for completeness review):

  • Anthropic processes the input only to generate the response we requested;
  • Anthropic does not use the input or the output to train, fine-tune, or improve any of its models, by default and by contract under the Anthropic API; and
  • Anthropic may retain the input and output for a limited period for safety, abuse-prevention, and operational reasons under its standard API terms.

If Anthropic ever changes those defaults in a way that materially affects this commitment, we will update this Policy and provide reasonable notice to Customers.

2.2 Where the data goes

Anthropic processes API requests in the United States. We do not currently route Compliance Hotline AI requests through any other AI provider.


3. What data is sent to the AI

3.1 What goes to Anthropic

For the chatbot and the completeness review, we send to Anthropic only what is needed to generate the response:

  • the Reporter's chat messages or the narrative they typed into the form;
  • a system prompt that includes the Customer's tenant name and the Customer's configured case types, departments, and locations (so the AI can offer the right options); and
  • the conversation history within the same intake session.

For telephone summaries, we send the call transcript provided by Dialpad to Anthropic to produce the summary, and we save the summary alongside the call record.

3.2 What does not go to Anthropic

  • Customer Authorized User credentials, passwords, or tokens.
  • Any other tenant's data — each Reporter's session is fully isolated.
  • The Reporter's name and email when the Reporter has chosen to be anonymous. The encrypted identity fields are stored only in our database; they are never included in an AI prompt.
  • The encryption keys used to protect anonymous Reporter identities. Those are managed in AWS KMS and are not accessible to the application logic that calls the AI provider.
  • Bulk historical case data. We do not send our case archive to the AI provider for any purpose.

4. Anonymous Reporters and the AI

A Reporter who chooses to be anonymous can use the AI chatbot. The narrative the Reporter types is sent to the AI. The Reporter's name and email — if provided — are used only by the system to generate access links and notifications; they are not included in the AI prompt.

A Reporter who is concerned about leaving identifying information in the narrative itself should not include details (specific job titles, schedules, or unique events) that could uniquely identify them. The AI cannot strip or anonymize those details after the fact.


5. Limitations of AI — please read

AI models, including the ones we use, can:

  • produce inaccurate, incomplete, or misleading output;
  • misunderstand context, tone, or nuance;
  • pick up on patterns in the input that do not reflect the user's intent; and
  • behave inconsistently between sessions.

Because of that:

  • AI output is informational, not authoritative. The Customer's investigators are responsible for reading the underlying report and making their own judgments. AI summaries and completeness suggestions are convenience tools.
  • The AI is not a counselor, lawyer, or investigator. It cannot give legal, medical, or psychological advice. Reporters in crisis should contact emergency services or qualified professionals.
  • The AI does not gate submission. A Reporter can always submit a report regardless of what the AI suggests.
  • Do not rely on the AI to redact sensitive data. It will not consistently catch SSNs, payment card numbers, or PHI, and the Service is not designed to store those data sets.

6. How Customers can control AI features

A Customer with admin privileges can:

  • choose whether to expose the AI chatbot intake option on its public reporting page (by configuring available reporting modalities);
  • view and edit AI-generated call summaries on the Customer's call records; and
  • contact support@compliancehotline.com to request that AI features be disabled for the Customer's tenant. We will accommodate reasonable requests; some features (notably the Dialpad call summary, which is generated centrally) may need to be disabled across the whole queue rather than per tenant.

The AI completeness review on the web form is a lightweight prompt-and-respond check. We currently treat it as part of the core intake experience; if a Customer needs it disabled, contact support.


7. Human oversight

Every AI-touched artifact in the Service is reviewable by a human:

  • a Reporter sees and confirms the AI chatbot's summary before submission;
  • a Reporter sees and chooses whether to act on the AI completeness suggestions;
  • an Authorized User reads the actual report content, not just the AI summary; and
  • AI-generated call summaries sit alongside the original transcript and recording, not in place of them.

8. Errors, harms, and feedback

If the AI has produced output that is inaccurate, biased, harmful, or that you believe should not have been generated, please email privacy@compliancehotline.com so we can investigate, log the feedback, and adjust prompts or guardrails.

We log and review feedback, and we update system prompts and feature behavior accordingly. We do not retaliate against anyone who reports an AI concern.


9. Changes to AI features and to this Policy

The AI landscape is moving quickly. We may add, change, or retire AI features as the underlying models, the law, and our experience evolve. We may also change AI providers. Material changes to AI features or to this Policy will be communicated to Customer admins by email and will be reflected in the "Last updated" date above. Continued use of the Service after the effective date constitutes acceptance.


10. Contact

Questions about this Policy or about how AI is used in the Service:

Exclusion Screening, LLC
Email: privacy@compliancehotline.com
Web: compliancehotline.com