
AI Chat

Provider choice, key encryption, model selection, budgets, rate limits, Turnstile, and what visitors see at /trust-center/ask/.

The AI chat is fully optional. The plugin works as a static trust center without ever adding an API key. When you do enable it, visitors get a dedicated /trust-center/ask/ page where they can ask questions and the AI answers from your published corpus only, with inline citations back to the source policy or page.

Why an AI chat at all

Procurement teams ask the same 30 questions in different shapes: "Where do you store data?", "Who are your subprocessors?", "Are you SOC 2?", "Do you encrypt at rest?". Most of those answers already exist in your published policies. The AI chat lets a visitor ask in their own words and get an answer grounded in your real corpus, with a clickable citation. It saves them a procurement cycle and saves you a deflected support email.

If the corpus does not cover a question, the chat refuses cleanly and points the visitor at your contact CTA. It does not hallucinate, and it does not answer from outside your trust center.

Choose a provider

Use Anthropic. The other two providers are supported but not recommended; they exist for users with hard contractual or procurement constraints that block Anthropic.

Anthropic (recommended)

Anthropic is the only provider where citations are a first-class feature of the API. Claude returns search_result_location blocks that anchor each claim to a specific document and a specific character range inside that document. OpenTrust validates those blocks against the corpus before rendering them, so a citation cannot point at content the model did not actually read.

For a trust-center chatbot, the citation is the product. Buyers and auditors need to verify the answer against the source. Anthropic is the only provider that gives you provenance you can defend in a procurement review:

  • Source-anchored. Every citation is tied to a real document by ID and offset, not pattern-matched out of the response text.
  • Verifiable. A reviewer clicking the citation lands on the exact passage the model based the claim on.
  • Hard to hallucinate. The Citations API returns the citation block as structured output; the model cannot fabricate a citation for content it did not retrieve, the way it can fabricate [[cite:…]] text in a free-form completion.

Endpoint: https://api.anthropic.com/v1/messages. Default suggested model: Claude Sonnet 4. Get a key from console.anthropic.com.
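The validation step above can be sketched as follows. This is a minimal illustration, not OpenTrust's actual code: the field names (doc_id, start, end, cited_text) are assumptions standing in for the structured citation block the Anthropic API returns.

```python
# Sketch: accept a citation only if it points at a real corpus document
# and the quoted span actually matches that document's text.
# Field names are illustrative, not OpenTrust's real schema.

corpus = {
    "policy-encryption": "All customer data is encrypted at rest with AES-256.",
}

def citation_is_valid(block: dict) -> bool:
    doc = corpus.get(block.get("doc_id"))
    if doc is None:
        return False                      # unknown document: reject
    start, end = block.get("start", -1), block.get("end", -1)
    if not (0 <= start < end <= len(doc)):
        return False                      # offsets outside the document
    # The cited text must be exactly what sits at that offset.
    return doc[start:end] == block.get("cited_text")

good = {"doc_id": "policy-encryption", "start": 21, "end": 30,
        "cited_text": "encrypted"}
bad = {"doc_id": "policy-encryption", "start": 0, "end": 9,
       "cited_text": "encrypted"}
```

A citation that fails any of these checks is dropped before the answer reaches the visitor.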

OpenAI

With OpenAI, OpenTrust falls back to inline [[cite:doc-id]] markers embedded in the model's prose. OpenTrust parses those markers out of the response and renders them as citations after the fact.

The marker convention is fragile in a way the Citations API is not: a model that drops, mis-matches, or invents a marker produces a citation that looks valid in the UI but does not actually correspond to a passage the model read. Server-side allowlist validation catches obviously fabricated doc-ids, but it cannot catch a marker placed in the wrong place in a paragraph.

Use this only if your organisation cannot procure an Anthropic key. Expect occasional citation drift on edge-case questions and budget for closer review of refusals and citation accuracy in the Questions log.

Endpoint: https://api.openai.com/v1/chat/completions.
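The inline-marker fallback can be sketched like this. The [[cite:doc-id]] syntax is from the docs above; the function name and allowlist contents are illustrative.

```python
import re

# Sketch: parse [[cite:doc-id]] markers out of the model's prose and keep
# only IDs present in the corpus allowlist. Fabricated doc-ids are dropped.

ALLOWLIST = {"policy-encryption", "subprocessors"}
MARKER = re.compile(r"\[\[cite:([a-z0-9-]+)\]\]")

def extract_citations(answer: str):
    """Return (clean_text, valid_ids)."""
    ids = [m for m in MARKER.findall(answer) if m in ALLOWLIST]
    clean = MARKER.sub("", answer)
    return clean, ids

text = ("Data is encrypted at rest.[[cite:policy-encryption]] "
        "Trust me.[[cite:made-up-doc]]")
clean, ids = extract_citations(text)
```

Note what this cannot catch: a marker with a valid doc-id placed after the wrong sentence passes every check here, which is exactly the fragility described above.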

OpenRouter

OpenRouter uses the same inline-marker citation mode as OpenAI, brokered across a wider model catalogue (Claude, GPT, Llama, Gemini, Mistral). The same caveats apply, plus you are now reasoning about citation reliability across whichever model OpenRouter routes you to.

If you must use OpenRouter, route to an Anthropic model through it. You'll get the inline-marker fragility (because that's how OpenTrust talks to OpenRouter) but at least the underlying model has been trained for high-fidelity citation behaviour.

Endpoint: https://openrouter.ai/api/v1/chat/completions.

Set up an API key

In OpenTrust → Settings → AI Chat, paste your provider API key into the field and click Save. The plugin:

  1. Encrypts the key with libsodium secretbox, using a key derived from wp_salt('auth').
  2. Stores the ciphertext (prefix ot_enc_v1:) under opentrust_provider_keys, autoload off.
  3. Refreshes the model list from the provider so you can pick a model.

The plaintext key never lives on disk and never appears anywhere in wp-admin after save. The settings UI shows only a masked fingerprint (sk-…f3e2).
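The masked fingerprint can be sketched in a few lines. The exact masking rule is an assumption; the docs only show the "sk-…f3e2" shape.

```python
# Sketch: show a short prefix and the last four characters, hide the rest.
# The precise rule OpenTrust uses may differ.

def mask_key(key: str) -> str:
    return key[:3] + "…" + key[-4:]

masked = mask_key("sk-abc123def456f3e2")  # sk-…f3e2
```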

Rotating the WordPress AUTH_KEY constant in wp-config.php invalidates every encrypted secret OpenTrust has stored. After a rotation, the chat will refuse to start until you re-enter the API key (and the Turnstile secret, if used). This is a feature, not a bug. It means a database leak alone does not leak your AI keys.

Forget a key

The Forget key button in the AI Chat tab deletes the ciphertext from opentrust_provider_keys and disables the chat. Useful when rotating credentials or moving providers.

Pick a model

After saving the key, Refresh model list queries the provider's /models endpoint and caches the result. Pick from the dropdown. The cached list expires after 24 hours; click Refresh again to re-fetch.

OpenTrust does not curate the model list. Anything the provider exposes shows up. Pick something with reliable instruction-following at small context sizes (2-8K tokens is plenty for trust-center queries).
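The 24-hour cache behaviour can be sketched as follows. fetch_models is a stand-in for the provider's /models call; OpenTrust's internals may differ.

```python
import time

# Sketch: cache the provider's model list for 24 hours; re-fetch on expiry.

CACHE_TTL = 24 * 3600
_cache = {"models": None, "fetched_at": 0.0}

def get_models(fetch_models, now=None):
    now = time.time() if now is None else now
    if _cache["models"] is None or now - _cache["fetched_at"] > CACHE_TTL:
        _cache["models"] = fetch_models()   # provider round-trip
        _cache["fetched_at"] = now
    return _cache["models"]

calls = []
fake = lambda: calls.append(1) or ["claude-sonnet-4"]
get_models(fake, now=0)                      # cold cache: fetches
get_models(fake, now=3600)                   # within TTL: cached
models = get_models(fake, now=CACHE_TTL + 1) # expired: re-fetches
```

Clicking Refresh model list is equivalent to forcing the expiry branch.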

Set budgets and limits

The chat has three independent cost-control layers. They stack: a request must pass all of them.

Daily and monthly token budgets

Hard ceilings on tokens consumed per day and per month, enforced via reserve-commit-release accounting. The budget reserves enough tokens for the worst-case response, runs the request, then releases the unused portion back when the response finishes.

Field                  Default      Notes
Daily token budget     500 000      Resets at site-local midnight.
Monthly token budget   10 000 000   Resets on the first of the month.

Setting a budget to 0 removes that ceiling. Setting both to 0 is admin opt-out from cost ceilings entirely.

When a budget is exhausted, the chat surfaces a graceful "come back later" state with a button pointing at your Contact CTA URL. Visitors never see an internal error and never see a surprise bill.
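Reserve-commit-release accounting can be sketched like this. Class and method names are illustrative, not OpenTrust's actual implementation.

```python
# Sketch: reserve the worst case up front, then commit actual usage and
# release the unused reservation when the response finishes.

class TokenBudget:
    def __init__(self, ceiling: int):
        self.ceiling = ceiling   # 0 means no ceiling (admin opt-out)
        self.used = 0
        self.reserved = 0

    def reserve(self, worst_case: int) -> bool:
        if self.ceiling and self.used + self.reserved + worst_case > self.ceiling:
            return False         # budget exhausted: show "come back later"
        self.reserved += worst_case
        return True

    def commit(self, worst_case: int, actual: int):
        self.reserved -= worst_case
        self.used += actual      # unused portion is released implicitly

budget = TokenBudget(ceiling=500_000)
ok = budget.reserve(4_000)           # worst case for one request
budget.commit(4_000, actual=1_250)   # response used far less
```

Reserving the worst case means two concurrent requests can never overshoot the ceiling between them, even before either response finishes.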

Per-IP and per-session rate limits

Sliding-window rate limits on chat requests, hashed to avoid storing raw identifiers.

Field         Default       Notes
Per-IP        10 / 60 s     Sliding window. Range 0 to 1000.
Per-session   50 / 60 min   Sliding window. Range 0 to 10000.

Per-IP catches a single-source flood. Per-session catches a chatty bot that rotates IPs. Setting either limit to 0 disables it (fails open), which is reasonable if you're behind your own DDoS protection that already covers this.
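A hashed sliding-window limiter can be sketched like this. It is illustrative, not the plugin's actual code; the salt stands in for the WordPress-derived salt, and identifiers are hashed so raw IPs are never stored.

```python
import hashlib
from collections import defaultdict, deque

SALT = "wp-salt-stand-in"   # OpenTrust derives its salt from WordPress

def hashed(identifier: str) -> str:
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:16]

class SlidingWindow:
    def __init__(self, limit: int, window_s: int):
        self.limit, self.window_s = limit, window_s
        self.hits = defaultdict(deque)

    def allow(self, identifier: str, now: float) -> bool:
        if self.limit == 0:              # 0 disables the limit (fails open)
            return True
        q = self.hits[hashed(identifier)]
        while q and now - q[0] >= self.window_s:
            q.popleft()                  # drop hits outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

per_ip = SlidingWindow(limit=10, window_s=60)
allowed = [per_ip.allow("203.0.113.7", now=t) for t in range(11)]
```

With the default 10 / 60 s limit, the eleventh request inside the window is refused; the window then slides forward rather than resetting on a fixed boundary.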

Cloudflare Turnstile (optional)

For an additional bot defence layer, enable Turnstile by entering site and secret keys in the AI Chat tab. The chat page loads Turnstile's challenge widget, the resulting token is verified server-side on the first message of each visitor session, and the visitor gets a 1-hour bypass transient on success so repeat readers are not pestered.

The secret key is stored libsodium-encrypted, same as the provider API key.
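The 1-hour bypass can be sketched as follows. The transient naming is an assumption; in WordPress this would be a transient keyed by the session hash.

```python
# Sketch: verify Turnstile once per visitor session, then cache success
# for an hour so repeat messages skip the challenge.

BYPASS_TTL = 3600
_bypass = {}   # session_hash -> expiry timestamp (stand-in for a transient)

def needs_challenge(session_hash: str, now: float) -> bool:
    return _bypass.get(session_hash, 0) <= now

def record_success(session_hash: str, now: float):
    _bypass[session_hash] = now + BYPASS_TTL

now = 1_000.0
first = needs_challenge("abc123", now)          # no bypass yet: challenge
record_success("abc123", now)                   # server-side verify passed
second = needs_challenge("abc123", now + 600)   # inside the hour: skip
third = needs_challenge("abc123", now + 7200)   # bypass expired: challenge
```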

Logging

When Logging is on, every chat request writes a row to wp_opentrust_chat_log:

Column                   Stores
created_at               Timestamp
session_hash             16-char salted hash of the visitor session
ip_hash                  16-char salted hash of the visitor IP
question                 The visitor's text, capped at 1000 chars
model / provider         What answered
tokens_in / tokens_out   Aggregate token counts
citation_count           How many sources the answer cited
response_ms              End-to-end latency
refused                  Boolean: did the model refuse?
tool_turns / tool_names  Retrieval steps taken

There is no column capable of holding a raw IP, email, user agent, or referer. The privacy posture is enforced by the schema itself, not by good intentions.

A daily cron (opentrust_chat_log_purge) removes rows older than 90 days. To disable logging entirely, untick the checkbox. The chat continues to work; you just lose the audit trail.
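The retention rule is simple enough to sketch. The real opentrust_chat_log_purge job runs a DELETE against wp_opentrust_chat_log; this stand-in just filters rows.

```python
# Sketch of the 90-day purge: keep only rows younger than the cutoff.

NINETY_DAYS = 90 * 86400

def purge(rows, now):
    return [r for r in rows if now - r["created_at"] < NINETY_DAYS]

now = 100 * 86400
rows = [{"created_at": now - 5 * 86400, "question": "Are you SOC 2?"},
        {"created_at": now - 91 * 86400, "question": "old"}]
kept = purge(rows, now)
```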

Questions log screen

When the chat is enabled, an extra Questions menu item appears under OpenTrust. It lists the most recent logged rows with question text, model, tokens, citation count, and refused flag. This is useful for spotting repeated unanswerable questions (a sign your corpus has a gap).

Three actions on the Questions screen:

  • Export CSV for offline analysis.
  • Clear all truncates wp_opentrust_chat_log.
  • Toggle logging flips the master setting.

Auto-summarize policies

When Auto-summarize is on (default), saving a policy schedules a single-event cron that asks your configured AI provider for a 2-3 sentence routing summary of the policy's content. The summary is stored in _ot_policy_chat_summary postmeta and used when the model is browsing the corpus index for "which document should I read?".

Per-locale: a translated policy gets a separate summary in its own language, keyed by the WPML/Polylang-resolved locale.

The summarizer is debounced. Saving the same policy three times in a minute results in one summary regeneration, not three. If a summary fails to generate (key invalid, provider down), the chat falls back to using the policy excerpt as the routing hint.
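The debounce can be sketched like this. Names are illustrative; OpenTrust schedules a WordPress single-event cron rather than appending to a job list.

```python
# Sketch: three saves of the same policy within the window schedule one
# summary regeneration, not three.

DEBOUNCE_S = 60
_scheduled = {}   # policy_id -> timestamp the job was queued
jobs = []

def on_policy_save(policy_id: int, now: float):
    queued_at = _scheduled.get(policy_id)
    if queued_at is not None and now - queued_at < DEBOUNCE_S:
        return                   # a regeneration is already pending
    _scheduled[policy_id] = now
    jobs.append(policy_id)       # stand-in for wp_schedule_single_event

for t in (0, 10, 20):            # three rapid saves of policy 42
    on_policy_save(42, now=t)
on_policy_save(42, now=120)      # a later save schedules again
```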

Summarize sweep

A Summarize sweep button on the AI Chat tab regenerates summaries for every published policy at once. Useful after the first install, after switching providers, or after re-importing content.

What visitors see

The /ask page

A clean, branded chat surface. Single text input, suggested starter questions pulled from your FAQ, a streaming reply area, and a citation rail on the right.

The page respects your accent colour and logo. It uses the same standalone-rendering approach as the rest of the trust center: theme isolation, inlined CSS, no jQuery, no framework runtime.

Citations

When the model cites a source, the chat shows a numbered marker [1] after the relevant claim and a card on the right rail with the source title and a deep link. Clicking the marker scrolls the citation card into view.

Citations are validated server-side against an allowlist of corpus document URLs before being rendered. The model cannot fabricate a citation pointing at an external URL.
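The allowlist check is the last gate before render and can be sketched in a few lines. URLs here are illustrative placeholders, not real corpus entries.

```python
# Sketch: a citation renders only if its URL is one of the corpus
# document URLs; anything external is dropped.

CORPUS_URLS = {
    "https://example.com/trust-center/policies/encryption/",
    "https://example.com/trust-center/subprocessors/",
}

def renderable(citation_url: str) -> bool:
    return citation_url in CORPUS_URLS

ok = renderable("https://example.com/trust-center/subprocessors/")
blocked = renderable("https://evil.example.net/fake-policy/")
```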

No-JS fallback

Visitors with JavaScript disabled get a plain HTML form: POST submission, server-side render, full answer with inline citations on a single page reload. This path serves accessibility tooling, locked-down corporate browsers, and older mobile clients. Same backend, same corpus, same citation validation.

Refusals and escalation

If the model decides the corpus does not cover a question, it returns a soft refusal that the chat detects via a sentinel marker. The UI shows the refusal text plus a CTA button that links to your Contact CTA URL (or your contact email / form URL as fallback). The visitor reads "I can't answer that from the published trust center; here's how to reach the team."

Refusals are logged with refused = 1 so you can see which questions are escaping the corpus.
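Sentinel-based refusal detection can be sketched as follows. The actual sentinel string is internal to OpenTrust; "[[refusal]]" here is a made-up placeholder.

```python
# Sketch: detect a soft refusal by its sentinel prefix, strip the sentinel
# before showing the text, and flag the row for logging (refused = 1).

SENTINEL = "[[refusal]]"

def detect_refusal(answer: str):
    """Return (refused, visitor_text)."""
    if answer.startswith(SENTINEL):
        return True, answer[len(SENTINEL):].strip()
    return False, answer

refused, text = detect_refusal(
    "[[refusal]] I can't answer that from the published trust center."
)
```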

Operator actions

Three buttons on the AI Chat tab beyond Save:

Button              What it does
Refresh model list  Re-queries the provider for available models. Cache TTL is 24 hours.
Forget key          Deletes the encrypted key and disables the chat.
Summarize sweep     Regenerates _ot_policy_chat_summary for every published policy.
