Users increasingly live inside an AI assistant of their own choosing. They spend substantial portions of their working day asking a general-purpose chat assistant for help — drafting documents, analyzing data, brainstorming, navigating their other tools, planning their schedule. That assistant is where more and more of the user's work happens, and the work flows constantly between the assistant and the other systems the user relies on. For a business platform to stay relevant in that world, it has to be reachable from the user's chosen AI — not just through the platform's own chat surface, useful as that is, but through whatever AI tool the user is already using. Otherwise, the platform becomes a silo in an environment where silos are being connected. The solution is a well-defined interoperability layer: the tenant exposes itself as a set of tools that external AI assistants can call, and those calls are governed by the same permissions and audit mechanisms that govern every other kind of access.
Tool interface for external AI is what the platform exposes to the outside. Tool definitions — discoverable, self-describing, and machine-readable — cover the operations an external AI might want to perform: list the types in the tenant, inspect a type's properties, run a query, read a record, create or update records, invoke an automation, trigger an agent. The definitions are structured so that any tool-calling AI can discover what's available and choose the right operation for the user's request. The platform's functionality becomes addressable from outside, in the same vocabulary the rest of the AI ecosystem is converging on.
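As a sketch of what "self-describing and machine-readable" means in practice, a tool definition in the JSON-Schema style that tool-calling protocols have converged on might look like the following. Every name and field here is illustrative, not the platform's actual schema:

```python
# Hypothetical, self-describing tool definitions. An external AI reads the
# name, description, and input schema to choose the right operation.
list_types_tool = {
    "name": "list_types",
    "description": "List the record types defined in the current tenant.",
    "input_schema": {"type": "object", "properties": {}, "required": []},
}

query_tool = {
    "name": "run_query",
    "description": "Run a named query with optional parameters and return rows.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query_name": {"type": "string", "description": "Defined query to run"},
            "parameters": {"type": "object", "description": "Query parameters"},
        },
        "required": ["query_name"],
    },
}

# The catalog is what a tool-calling AI discovers and chooses from.
tool_catalog = [list_types_tool, query_tool]
```

The schemas carry enough structure that a generic tool-calling client needs no platform-specific code to pick an operation and build valid arguments for it.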
Per-user authentication is the access mechanism. The external AI doesn't act with elevated platform privileges; it acts as a specific user. Authentication happens through tokens that the user generates for their chosen AI tool, scoped to their own account. Anything the external AI does through the interoperability layer is attributable to the user whose token is in use, which means the audit trail is fully coherent with the rest of the platform's logging. The external AI isn't a mysterious robot acting on the platform; it's just another client of the user's account, same as the user's browser or mobile app.
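A minimal sketch of the token mechanics, assuming an in-memory store for illustration (a real implementation would persist and hash tokens; all names here are hypothetical):

```python
import secrets

# token -> user_id; every token is scoped to exactly one user's account.
_tokens: dict[str, str] = {}

def issue_token(user_id: str) -> str:
    """Generate a token the user hands to their chosen AI tool."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = user_id
    return token

def resolve_caller(token: str) -> str:
    """Every tool call is attributed to the user behind the token."""
    try:
        return _tokens[token]
    except KeyError:
        raise PermissionError("unknown or revoked token")

t = issue_token("user-42")
caller = resolve_caller(t)  # the external AI acts as this user, nothing more
```

Because every call resolves to a concrete user before anything else happens, the audit trail records a person, not an anonymous integration.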
Role-scoped access keeps the external AI from being a privilege escalator. The external AI can only do what the underlying user could do themselves. A user with read-only access to a type can have an external AI read data from that type; they can't have the AI magically gain write access. A user without access to certain records doesn't suddenly get access through the AI layer. The same permission boundary applies, because the external AI is literally using the user's own authorization context. For enterprise customers concerned about the security implications of AI access, this consistency is what makes the feature adoptable.
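The boundary can be sketched as a lookup against the user's own role grants; the roles and types below are hypothetical, and the point is only that the AI call passes through the same authorization check as any other client:

```python
# Hypothetical role grants: role -> record type -> allowed operations.
ROLE_GRANTS = {
    "analyst": {"Invoice": {"read"}, "Customer": {"read"}},
    "manager": {"Invoice": {"read", "write"}, "Customer": {"read", "write"}},
}

def authorize(user_role: str, record_type: str, operation: str) -> bool:
    """True only if the underlying user could perform the operation themselves."""
    return operation in ROLE_GRANTS.get(user_role, {}).get(record_type, set())

# A read-only analyst's AI can read invoices but cannot gain write access:
analyst_can_read = authorize("analyst", "Invoice", "read")    # True
analyst_can_write = authorize("analyst", "Invoice", "write")  # False
```

There is no separate "AI role" to misconfigure: the external AI simply inherits whatever the token's user was already granted.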
Everything logged closes the loop on observability. Every tool call initiated by an external AI lands in the event log with the user, the tool invoked, the arguments, the result, and the timestamp. Admins can see how external AI tools are being used across the tenant, investigate any specific interaction after the fact, and audit patterns of behavior over time. For compliance-sensitive environments, the AI access path isn't a blind spot — it's one more source of structured events flowing into the same log everything else flows into.
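The shape of such an event might look like the following sketch. Field names are assumptions; the substance is that AI-initiated calls produce the same structured events as everything else in the tenant:

```python
from datetime import datetime, timezone

def log_tool_call(user_id: str, tool: str, arguments: dict, result_status: str) -> dict:
    """Build one structured audit event for an AI-initiated tool call."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": user_id,           # the user behind the token, never "the AI"
        "source": "external_ai",    # distinguishes this access path for filtering
        "tool": tool,
        "arguments": arguments,
        "result": result_status,
    }

event = log_tool_call("user-42", "run_query", {"query_name": "open_invoices"}, "ok")
```

Because the `source` field is just another attribute, admins can filter for AI traffic specifically or audit it alongside everything else.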
Read, query, and action tools cover the span of operations that external AI assistants plausibly need. Read tools expose individual records by identifier or by query. Query tools run defined queries, possibly with parameters, and return result sets. Action tools invoke automations, trigger agents, create or update records, or send emails — whatever the platform supports internally, with the same governance applied externally. That breadth of coverage is what makes the interoperability layer genuinely useful rather than merely announced: a user's external AI can do the full range of things the user would do themselves, not just a small symbolic subset of operations.
Schema discovery lets the AI inspect the tenant's structure dynamically. Rather than requiring the AI to be hard-coded against a specific tenant's types and properties, it can ask the platform what types exist, what properties each type has, what queries are available, what automations can be invoked. The AI then uses that information to fulfill the user's request against the actual structure of the tenant, rather than against a generic assumption of what "the platform" looks like. For tenants with substantial and distinctive implementations, this dynamic discovery is what lets external AI assistants interact meaningfully with each tenant's specific shape.
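A discovery exchange could be sketched like this, with a hypothetical tenant schema standing in for whatever structure a real tenant actually has:

```python
# Illustrative tenant schema: types, their properties, and available queries.
TENANT_SCHEMA = {
    "Invoice": {
        "properties": {"number": "string", "amount": "decimal", "due": "date"},
        "queries": ["open_invoices", "overdue_invoices"],
    },
    "Customer": {
        "properties": {"name": "string", "email": "string"},
        "queries": ["active_customers"],
    },
}

def describe_types() -> list:
    """First discovery step: what types exist in this tenant?"""
    return sorted(TENANT_SCHEMA)

def describe_type(name: str) -> dict:
    """Second step: what properties and queries does this type offer?"""
    return TENANT_SCHEMA[name]

types = describe_types()                 # ["Customer", "Invoice"]
invoice = describe_type("Invoice")       # properties and queries for Invoice
```

The AI calls these discovery tools first, then shapes its subsequent read, query, or action calls to the tenant's actual structure rather than a generic guess.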
Rate limiting and cost controls keep automated usage from running away. An AI tool that misbehaves, whether through a bad loop or a user's overly enthusiastic usage, encounters the same rate limits that protect the rest of the API surface. Operators can configure per-tenant limits specifically for AI-initiated traffic if that pattern of usage differs from human-interactive usage. The API protection article covers the general rate-limiting machinery; the interoperability layer plugs into it rather than running outside it.
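As an illustration of the mechanism rather than the platform's actual implementation, a token bucket is one common way such a limit works:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: burst up to capacity, then refill over time."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the call."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# No refill here, purely to make the cutoff visible: a runaway loop gets
# its first three calls through and is then throttled.
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
results = [bucket.allow() for _ in range(5)]  # [True, True, True, False, False]
```

A per-tenant instance of something like this, tuned separately for AI-initiated traffic, is all "configure limits for that pattern of usage" amounts to.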
Revocable tokens give users and admins granular control over what external tools are connected. A token issued for a specific AI client on a specific device can be revoked independently; losing a device doesn't mean revoking the user's entire platform access, just the tokens tied to the compromised environment. An admin concerned about a specific AI tool can revoke the tokens that connect to it. For the operational reality that external AI tools are going to come and go as users experiment with different providers, this revocability is what keeps the access surface manageable.
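A sketch of per-client revocation, with illustrative token records (field names are assumptions):

```python
# Each token remembers which user and which AI client it was issued for,
# so revocation can be as narrow or as broad as the situation demands.
tokens = {
    "tok-aaa": {"user": "user-42", "client": "chat-assistant", "revoked": False},
    "tok-bbb": {"user": "user-42", "client": "agent-runner", "revoked": False},
}

def revoke_client(client: str) -> None:
    """An admin revokes every token tied to one AI tool, leaving others intact."""
    for rec in tokens.values():
        if rec["client"] == client:
            rec["revoked"] = True

revoke_client("agent-runner")
# tok-bbb is dead; tok-aaa, and the user's own platform access, are untouched.
```

The same per-record granularity covers the lost-device case: revoke the tokens issued to that device's clients and nothing else changes.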
Support for both chat and agent systems means the same interoperability layer serves multiple patterns. A user in a conversational chat client uses it to ask questions and trigger actions interactively. An autonomous agent system uses it to perform operations on the user's behalf as part of a longer workflow. Both patterns are valid clients of the tool interface; the tool interface doesn't care whether the caller is human-in-the-loop or fully automated, as long as the authentication and authorization check out.
Operator controls round out the feature for tenants with governance requirements. Operators can enable or disable the interoperability layer per tenant; they can restrict which tools are exposed externally; they can require administrative approval before users generate their first token. For tenants with strict external-integration policies, those controls are what make the feature adoptable at all; for tenants with looser policies, the defaults work without additional configuration.
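A hypothetical per-tenant policy might look like the following; the keys are assumptions standing in for whatever controls the platform actually exposes:

```python
# Default posture: the layer is on, all tools are exposed, and users can
# generate their own first token.
DEFAULT_POLICY = {
    "enabled": True,
    "exposed_tools": "all",           # or an explicit allowlist
    "require_admin_approval": False,  # gate first-token generation
}

# A tenant with strict external-integration policies tightens each knob:
strict_tenant = {
    **DEFAULT_POLICY,
    "exposed_tools": ["list_types", "run_query"],  # read/query only, no actions
    "require_admin_approval": True,
}
```

The default dict is the "works without additional configuration" case; the strict variant shows that tightening is a per-tenant override, not a platform-wide switch.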
The platform stays the system of record. The interoperability layer is a way to interact with the platform from outside; it's not a way to move the platform's data somewhere else. Records stay in the tenant; queries run against the tenant's data; audit events accumulate in the tenant's event log. The external AI is a client; the platform is still the source of truth. That separation matters because the alternative — shipping the platform's data to external AI services wholesale — would be a non-starter for the data-sensitive organizations the platform serves.
For users whose workflow spans both the platform and an external AI tool, for organizations adopting AI assistants as part of their standard working environment, and for implementers wanting to expose the platform's capabilities to the broader AI ecosystem without custom integrations, the interoperability layer is how the platform participates in that ecosystem. Among the adjacent topics, the AI chat assistant article covers the in-platform chat surface that serves a complementary role, the AI agents and bundles article covers the internal agent platform, the REST API article covers the broader programmatic access surface, and the event log article covers the audit trail that governs all of it. Your AI, your data — that's the relationship the interoperability layer is designed to make practical.