Blog article

AI chat assistant — ask the platform, not the docs

Published on March 1, 2026

Even power users forget things. Which view had that one filter configured the right way? Which automation triggers when a customer is marked inactive, and what side effects cascade from it? How do you bulk-update a hundred records whose statuses need to shift? Where's the setting that controls whether tag names are free-form or drawn from a controlled list? Every mature implementation accumulates enough detail that nobody holds all of it in their head, and the usual recourse — asking a colleague, hunting through menus, searching documentation — costs minutes that add up to hours across a team over a week. A grounded AI assistant built into the platform short-circuits that cycle: the user types a question, the assistant gives a useful answer based on the specific configuration of the specific tenant, and the user gets back to their actual work.

Grounded in the tenant's own model is what distinguishes this assistant from a generic chatbot. The assistant sees the tenant's types, properties, queries, views, automations, scripts, and the relationships between them. A question like "which types reference the Customer type?" gets a concrete answer based on the real schema. A question like "what changed in the order processing automation last week?" gets an answer based on the real version history. A question like "show me all open tickets tagged urgent" gets a working query as an answer, not a suggestion that the user write one. The assistant isn't making things up; it's reading the tenant's own data.
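To make the "which types reference the Customer type?" example concrete, here is a toy sketch of a schema walk. The schema shape, type names, and property names are all invented for illustration; they are not the platform's actual data model.

```python
# Hypothetical sketch: answer "which types reference the Customer type?"
# by walking a tenant schema. The schema layout here is invented.

def referencing_types(schema: dict, target: str) -> list[str]:
    """Return the names of types that have a reference property to `target`."""
    hits = []
    for type_name, props in schema.items():
        if any(p.get("ref") == target for p in props.values()):
            hits.append(type_name)
    return sorted(hits)

tenant_schema = {
    "Order":   {"customer": {"ref": "Customer"}, "total": {"kind": "number"}},
    "Ticket":  {"opened_by": {"ref": "Customer"}, "status": {"kind": "enum"}},
    "Product": {"name": {"kind": "string"}},
}

print(referencing_types(tenant_schema, "Customer"))  # → ['Order', 'Ticket']
```

The point is that the answer is computed from the live schema, not generated from patterns in training data — the same question against a different tenant yields a different, equally grounded answer.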

Grounded in platform documentation covers the other half of the grounding. For questions about how the platform itself works — "how do I set up a webhook," "what happens if I change a property type," "how are tenant backups retained" — the assistant pulls from the current documentation rather than from potentially stale training data. The answers reflect the platform's actual current behavior, not whatever the general-purpose language model last absorbed from the internet. For a customer asking how a specific feature works today, "today" is what they get.

Source citations make the assistant's answers verifiable rather than taken on faith. Every answer includes links back to what it's based on — specific documentation sections, specific configuration objects in the tenant, specific records or queries. A user skeptical of an answer can click through and read the source; a user confident in it can move on. That citation discipline is what turns the assistant from a black box into a transparent tool: it's not making claims, it's surfacing what the platform already knows.
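One way to make that citation discipline structural is to carry the sources with the answer itself, so an answer without sources is visibly ungrounded. A minimal sketch — the field names and reference formats are illustrative, not the platform's API:

```python
# Hypothetical sketch: every answer carries its sources. The kinds and
# reference strings here are invented examples.

from dataclasses import dataclass, field

@dataclass
class Citation:
    kind: str   # e.g. "doc-section", "config-object", "query"
    ref: str    # link or identifier of the source

@dataclass
class Answer:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def verifiable(self) -> bool:
        # An answer with no sources should be treated as ungrounded.
        return bool(self.citations)

a = Answer(
    "Inactive customers are archived by the nightly cleanup automation.",
    [Citation("config-object", "automation/nightly-cleanup"),
     Citation("doc-section", "docs/automations#archiving")],
)
print(a.verifiable())  # → True
```

Encoding citations as a required part of the answer type, rather than an afterthought, is what lets the UI render a "sources" link on every response.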

Scoped by role ensures the assistant respects the same access rules as the rest of the platform. A user asking about data they can't access gets told they can't access it, rather than getting a helpful answer that leaks information they shouldn't see. A user asking about types they don't have permission to view gets answers that exclude those types. The grounding respects the asker's permissions end-to-end, which means the assistant is safe to deploy to users with varying access levels. The same question from two different users may get two different answers, because the two users have different legitimate views of the tenant — and that's correct.
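The "same question, two different answers" behavior falls out naturally when grounding is filtered by the asker's permissions before any answer is composed. A toy sketch, with invented type names and roles:

```python
# Hypothetical sketch: scope the assistant's view of the schema by the
# asker's roles. The ACL shape and role names are invented.

def visible_types(acl: dict[str, set[str]], user_roles: set[str]) -> set[str]:
    """A type is visible if the user holds at least one role allowed to view it."""
    return {t for t, allowed in acl.items() if allowed & user_roles}

acl = {
    "Customer": {"sales", "support", "admin"},
    "Invoice":  {"finance", "admin"},
    "Ticket":   {"support", "admin"},
}

print(sorted(visible_types(acl, {"support"})))  # → ['Customer', 'Ticket']
print(sorted(visible_types(acl, {"finance"})))  # → ['Invoice']
```

Filtering the grounding set up front, rather than censoring a finished answer, is the safer design: information the user can't see never enters the assistant's context in the first place.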

Task-aware answers handle the "how do I..." questions with concrete guidance. A user asking "how do I bulk-update records to change their status" gets not just an explanation but a pointer to the right place in the UI — the bulk operations action, the status filter, the specific clicks. Where feasible, the assistant offers to take the user directly to the relevant screen rather than just describing the path. For users unfamiliar with a corner of the platform they don't use often, that concrete navigation is often more valuable than the conceptual explanation.

Data queries in natural language turn the assistant into an ad-hoc analysis tool. "How many open orders from last month?" returns a real number, computed from the real data. "What were the top five customers by revenue this quarter?" returns a real ranked list. The assistant translates the natural-language question into a query against the tenant's data, runs it, and returns the result — with the query itself visible so the user can see exactly what was computed. For users who know what they want to ask but aren't sure how to express it as a query, this natural-language path is a major accessibility improvement.

Stays in context across follow-up questions. The conversation is a conversation, not a series of unrelated queries. A question about "customers with outstanding invoices" can be followed by "now sort by total outstanding" without repeating the customers-with-outstanding-invoices context; the assistant remembers the prior turn and refines the previous answer. For the exploratory patterns users naturally fall into — "show me X, now narrow to Y, now add Z" — the contextual memory is what makes the interaction feel like a dialogue rather than a brittle prompt-engineering exercise.
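The "show me X, now narrow to Y" pattern amounts to keeping the previous turn's query around and refining it in place. A minimal sketch, where the query representation and the single refinement rule are invented for illustration:

```python
# Hypothetical sketch: a follow-up refines the remembered query from the
# prior turn instead of starting over. Query shape is invented.

class Conversation:
    def __init__(self):
        self.current_query = None

    def ask(self, question: str) -> dict:
        q = question.lower()
        if q.startswith("now ") and self.current_query:
            # Follow-up turn: refine the remembered query.
            if "sort by" in q:
                self.current_query["sort"] = q.split("sort by ", 1)[1]
        else:
            # Fresh question: start a new query (toy parse).
            self.current_query = {"filter": q}
        return dict(self.current_query)

c = Conversation()
c.ask("customers with outstanding invoices")
print(c.ask("now sort by total outstanding"))
# → {'filter': 'customers with outstanding invoices', 'sort': 'total outstanding'}
```

The second turn never restates the filter; the remembered query carries it, which is what makes the exchange feel like a dialogue rather than a series of one-shot prompts.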

Conversation history is retrievable, so users can revisit a useful chat later. A user who asked the assistant for help setting up a workflow yesterday can reopen the conversation today, see the exchange, and continue from where they left off. For longer pieces of work that span multiple sessions, that persistence turns the assistant from an ephemeral helper into a durable collaborator. Conversations are private to the user; nothing bleeds between users.
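Per-user privacy can be made structural by keying the conversation store by user, so one user's history is simply unreachable from another's. A toy sketch with an invented storage shape:

```python
# Hypothetical sketch: conversations persisted per user, private by
# construction because the store is keyed by user id. Shape is invented.

from collections import defaultdict

store: dict[str, list[list[str]]] = defaultdict(list)

def save(user: str, transcript: list[str]) -> None:
    store[user].append(transcript)

def history(user: str) -> list[list[str]]:
    # A user can only reach their own key; nothing bleeds between users.
    return store[user]

save("alice", ["how do I set up the workflow?", "Here are the steps..."])
print(len(history("alice")), len(history("bob")))  # → 1 0
```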

Feedback loop lets users flag bad answers and operators review them. When the assistant gets something wrong — a stale citation, a misread of the model, an answer that's confidently incorrect — the user can flag it, and the feedback accumulates for review. Operators can see patterns in the flags, identify systematic issues, and guide improvements over time. The assistant is never claimed to be perfect; the feedback path is how it keeps getting better.
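Aggregating flags is what turns individual complaints into the patterns operators act on. A minimal sketch — the reason labels and flag shape are invented examples:

```python
# Hypothetical sketch: users flag bad answers; operators review the most
# common failure modes first. Reason labels are invented examples.

from collections import Counter

flags: list[dict] = []

def flag(answer_id: str, reason: str) -> None:
    flags.append({"answer": answer_id, "reason": reason})

def review() -> list[tuple[str, int]]:
    # Most frequent failure modes first, so patterns stand out from noise.
    return Counter(f["reason"] for f in flags).most_common()

flag("a1", "stale citation")
flag("a2", "stale citation")
flag("a3", "misread model")
print(review())  # → [('stale citation', 2), ('misread model', 1)]
```

A spike in one reason — say, stale citations after a documentation reorganization — points at a systematic fix rather than a one-off correction.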

Private by default. Conversations stay within the tenant; content is not shared across tenants; data discussed in the chat is subject to the tenant's own data handling policies. For enterprise customers with data sensitivity requirements, this isolation is what makes the assistant viable at all — the alternative of shipping tenant data off to a general-purpose external service would be a non-starter for most of them.

Where the assistant declines gracefully matters as much as where it answers. A question outside the tenant's model or the platform's documentation is declined with an explanation rather than a hallucinated guess. A question that requires permissions the user doesn't have is declined with a clear reason. A question that needs clarification is answered with a clarifying question of its own. The assistant is confident where it has grounding and honest where it doesn't — which is more useful than a confident bluff on every question.
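The decline logic described above can be read as a gate that runs before any answer is composed: permission first, then clarity, then grounding. A toy sketch, where the checks are passed in as flags because the real grounding and permission machinery is out of scope here:

```python
# Hypothetical sketch: decline with a reason, ask for clarification, or
# answer — in that order. The boolean inputs stand in for real checks.

def respond(question: str, *, grounded: bool, permitted: bool, ambiguous: bool) -> str:
    if not permitted:
        return "Declined: you don't have permission to view this data."
    if ambiguous:
        return "Clarify: can you say which object you mean?"
    if not grounded:
        return "Declined: this is outside the tenant's model and the platform docs."
    return "Answer: ... (with citations)"

print(respond("what changed last week?",
              grounded=True, permitted=True, ambiguous=False))
# → Answer: ... (with citations)
```

The ordering matters: checking permission before anything else ensures even the shape of a decline can't leak what a user isn't allowed to see.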

For users who are building their intuition for the platform, for implementers who want answers specific to their own tenant, and for admins who need to understand what their system is currently doing, the AI chat assistant is the interactive layer that ties together everything the platform already knows. For the surrounding topics, the AI in data model and automations article covers the AI features built into authoring surfaces, the AI agents and bundles article covers the shareable packages of AI-assisted functionality, and the AI interoperability article covers how the platform integrates with external AI services. Ask the platform, not the docs — that's the interaction the assistant is designed to make possible.