Ettic Docs

REST API

POST /wp-json/opentrust/v1/chat. Request, four-gate auth, SSE protocol, and error codes.

OpenTrust exposes one REST route. It powers the visitor-facing AI chat at /trust-center/ask/, and you can call it directly if you want to build a custom chat surface (a Slack bot, a different page layout, an embedded widget).

Endpoint

POST /wp-json/opentrust/v1/chat
| Header | Required | Notes |
| --- | --- | --- |
| Content-Type: application/json | Yes | Request body is JSON. |
| X-WP-Nonce | Yes | Action wp_rest, the standard WordPress REST nonce. Generate via wp_create_nonce('wp_rest') (or wpApiSettings.nonce if you've localized it). |
| Accept: text/event-stream | Optional | Switches the response to SSE streaming. Without this header you get a synchronous JSON response. |
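As a sketch, these headers can be assembled with a small helper (the helper name is ours; the nonce must come from WordPress, e.g. wpApiSettings.nonce):

```javascript
// Sketch: assemble the headers for a chat request. The nonce is
// generated server-side with wp_create_nonce('wp_rest') and exposed
// to the page (commonly as wpApiSettings.nonce).
function buildChatHeaders(nonce, { streaming = false } = {}) {
  const headers = {
    'Content-Type': 'application/json',
    'X-WP-Nonce': nonce,
  };
  if (streaming) headers['Accept'] = 'text/event-stream'; // opt in to SSE
  return headers;
}
```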

Request body

{
  "messages": [
    { "role": "user", "content": "Where is your data stored?" }
  ],
  "turnstile_token": "0x..."
}
| Field | Type | Notes |
| --- | --- | --- |
| messages | {role, content}[] | Conversation history including the new user turn. role is 'user' or 'assistant'. The full history is forwarded to the provider for context. |
| turnstile_token | string | Required only when Turnstile is enabled and the session has no bypass transient yet. Obtain from the Turnstile widget on the page. |

The plugin enforces ai_max_message_length (default 1000 chars) on the latest user message. Messages over the cap are rejected with 400 Bad Request (ai_message_too_long).
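To fail fast instead of round-tripping to a 400, a client can mirror the cap before sending. A sketch; the constant mirrors the documented default, but the authoritative value is the server-side setting:

```javascript
// Sketch: client-side guard mirroring ai_max_message_length.
// 1000 is the documented default; the actual limit lives server-side
// and may have been changed by the site admin.
const DEFAULT_MAX_MESSAGE_LENGTH = 1000;

function isMessageWithinCap(content, max = DEFAULT_MAX_MESSAGE_LENGTH) {
  return content.length <= max;
}
```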

Authentication and rate-limit gates

Every request passes through four gates in order. A failure at any gate aborts the request with the corresponding error code.

| Gate | Failure code | When it fires |
| --- | --- | --- |
| 1. WP REST nonce | rest_forbidden | Missing or invalid X-WP-Nonce. |
| 2. Turnstile | ai_turnstile_required | Turnstile is enabled, no valid turnstile_token in the body, and the session has no 1-hour bypass transient. |
| 3. Per-IP rate limit | ai_rate_limited_ip | Visitor's hashed IP exceeded ai_rate_limit_per_ip requests in the last 60 seconds. |
| 4. Per-session rate limit | ai_rate_limited_session | Visitor's hashed session token exceeded ai_rate_limit_per_session requests in the last hour. |

Token-budget reservation happens inside the handler after all four gates pass. A budget rejection returns code ai_budget_exhausted.

Response: streaming (SSE)

When the request includes Accept: text/event-stream, the response is a server-sent-events stream. Each event is a JSON object with a type field.

| Event type | Payload | Notes |
| --- | --- | --- |
| token | {text: "…"} | A delta of the model's response. Append to the rendered answer. |
| tool_call | {name: "…", summary: "…"} | The model invoked a retrieval tool. summary is a user-facing label (e.g. "Reading Privacy Policy"). |
| citation | {id: "…", title: "…", url: "…"} | A source citation. id is the corpus document ID. url is validated against the corpus URL allowlist; the model cannot fabricate external URLs. |
| error | {code: "…", message: "…"} | A recoverable provider-side error during streaming. The stream may continue or terminate. |
| done | {usage: {tokens_in, tokens_out}, refused: bool, citations: […], …} | Final event. Always emitted at end of stream. refused is true if the model issued a soft refusal. |

Wire format: standard SSE, data: {…}\n\n.

Streaming example

const res = await fetch('/wp-json/opentrust/v1/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-WP-Nonce': window.wpApiSettings.nonce,
    'Accept': 'text/event-stream',
  },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'Where is your data stored?' }] }),
});

const reader = res.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let answer = '';

while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  // Complete frames end in \n\n; keep any trailing partial frame buffered.
  const frames = buffer.split('\n\n');
  buffer = frames.pop();
  for (const frame of frames) {
    if (!frame.startsWith('data: ')) continue;
    const data = JSON.parse(frame.slice(6));
    if (data.type === 'token') answer += data.text;
    if (data.type === 'citation') console.log('cite:', data.title, data.url);
    if (data.type === 'done') console.log('done', data.usage);
  }
}
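The frame-splitting logic above can be factored into a pure helper, which makes the parsing unit-testable (the helper name is ours):

```javascript
// Sketch: split an accumulated SSE buffer into parsed events plus the
// trailing partial frame to carry over into the next read() call.
function parseSSEBuffer(buffer) {
  const frames = buffer.split('\n\n');
  const rest = frames.pop(); // possibly incomplete final frame
  const events = frames
    .filter((f) => f.startsWith('data: '))
    .map((f) => JSON.parse(f.slice(6)));
  return { events, rest };
}
```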

Response: synchronous (JSON)

When Accept: text/event-stream is absent, the handler buffers the full response and returns:

{
  "answer": "Your data is stored in our EU-West-1 region…",
  "citations": [
    { "id": "policy-data-hosting", "title": "Data Hosting Policy", "url": "https://…" }
  ],
  "refused": false,
  "usage": { "tokens_in": 1842, "tokens_out": 327 }
}

The synchronous mode is what the no-JS fallback uses internally. It's also the easiest path for server-side integrations (CLI scripts, Slack bots).
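A minimal synchronous call might look like this (a sketch: askChat and the injectable fetchImpl are our names, the latter added so the function can be exercised without a live site):

```javascript
// Sketch: synchronous (non-streaming) call. Omitting the
// Accept: text/event-stream header yields one buffered JSON response.
async function askChat(question, nonce, fetchImpl = fetch) {
  const res = await fetchImpl('/wp-json/opentrust/v1/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-WP-Nonce': nonce },
    body: JSON.stringify({ messages: [{ role: 'user', content: question }] }),
  });
  const data = await res.json();
  // Surface the answer plus citation titles for rendering.
  return {
    text: data.answer,
    refused: data.refused,
    sources: data.citations.map((c) => c.title),
  };
}
```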

Error codes

| Code | HTTP | Cause |
| --- | --- | --- |
| rest_forbidden | 403 | Nonce missing or invalid. |
| ai_not_configured | 400 | Chat is disabled in settings. |
| ai_no_key | 400 | No API key for the configured provider. |
| ai_turnstile_required | 401 | Turnstile is enabled and no valid token was supplied. |
| ai_rate_limited_ip | 429 | Per-IP sliding-window cap. Response includes a Retry-After header. |
| ai_rate_limited_session | 429 | Per-session sliding-window cap. Response includes a Retry-After header. |
| ai_budget_exhausted | 429 | Daily or monthly token cap. |
| ai_provider_error | 502 | Upstream provider returned an error. Body includes the provider's error code where possible. |
| ai_message_too_long | 400 | Latest user message exceeds ai_max_message_length. |
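Only the two rate-limit codes are worth automatic retries, since they carry a Retry-After header. A sketch of client-side handling (the names and the fallback delay are our assumptions):

```javascript
// Sketch: decide whether a failed request should be retried, and when.
// The rate-limit codes (429) carry a Retry-After header in seconds.
const RETRYABLE_CODES = new Set(['ai_rate_limited_ip', 'ai_rate_limited_session']);

function retryDelaySeconds(code, retryAfterHeader) {
  if (!RETRYABLE_CODES.has(code)) return null; // not retryable
  const secs = parseInt(retryAfterHeader ?? '', 10);
  return Number.isNaN(secs) ? 60 : secs; // 60 s fallback is an assumption
}
```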

Tool surface (for context)

Internally, the model has access to two tools every turn:

  • search_documents(query: string, limit?: int): pure-PHP BM25 search over the corpus inverted index.
  • get_document(id: string): fetch a single document's full content (truncated at 30K tokens).

MAX_TOOL_TURNS is 8 per request. Loop detection injects a guidance message when the model repeats an identical call; hitting the turn cap triggers a soft refusal that routes through the existing contact CTA UI.

You can't call these tools directly from outside the plugin; they exist only as the model's working set.

What the chat does NOT expose

  • No multi-tenancy. The endpoint serves the single trust center installed on the WordPress site.
  • No long-term conversation history. The endpoint is stateless: every request includes the full message history. Session identity is only used for rate-limit accounting.
  • No file uploads. The chat accepts text questions only.
  • No alternate models per request. The model is configured globally in AI Chat → Settings.
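Because the endpoint is stateless, the client owns conversation state: keep the full history and resend it every turn. A sketch (appendTurn is our name):

```javascript
// Sketch: accumulate conversation history client-side across turns.
function appendTurn(history, role, content) {
  return [...history, { role, content }]; // immutable append
}

let history = [];
history = appendTurn(history, 'user', 'Where is your data stored?');
// …POST { messages: history }, then record the assistant reply:
history = appendTurn(history, 'assistant', 'Your data is stored in EU-West-1…');
```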
