A raw chat box is general-purpose. You can ask it anything, and it will try to answer; that's useful for exploratory work and for one-off questions, but it's not infrastructure. Infrastructure is the thing that runs reliably in the background, does one job well, and produces predictable outputs. "Draft the weekly sales summary." "Classify this invoice by type and urgency." "Screen incoming support tickets and flag the ones that look urgent." These are repeatable tasks. Each benefits from a fixed prompt, a fixed set of tools, a defined grounding, and outputs that fit into a downstream workflow. They're not chat sessions; they're specialists. A platform that lets implementers package AI behavior into named, versioned, permissioned agents — and lets operators ship pre-built bundles of agents that solve standard business problems — turns AI from a novelty feature into a genuine part of the implementation toolkit.
An agent is a first-class platform object. It has a name, a version history, a permission model, an audit trail, and a place in the tenant's implementation the same way a query, a view, or an automation does. It's editable in the same in-browser editor as everything else. It's exportable in the data-model-export format the same way everything else is. An agent that works in the staging tenant can be moved to production along with the rest of the implementation; an agent that needs to change is versioned as its prompt and tools evolve. That integration is what makes agents a sustainable part of an implementation rather than a scattered set of experimental scripts.
A fixed prompt and tool set is what makes an agent a specialist. The prompt is part of the agent's definition — curated, tested, and tuned for the specific task the agent exists to do. The tool set is the list of operations the agent is allowed to invoke: read records of certain types, write to certain properties, send email through the tenant's engine, call specific HTTP endpoints, run specific scripts. That constraint is deliberate. An unconstrained agent can do anything and therefore can't be trusted with anything; a constrained agent does exactly what it's configured to do and therefore can be relied on. Specialists beat generalists for the repeatable work that makes up the bulk of a business.
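The shape of such a definition can be sketched as a small, versioned object with a closed tool allow-list. This is an illustrative sketch only; the names (`AgentDefinition`, the tool identifiers, `can_invoke`) are assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass

# Hypothetical agent definition: a named, versioned object whose prompt
# and tool allow-list are part of the definition itself.
@dataclass(frozen=True)
class AgentDefinition:
    name: str                  # stable identifier within the tenant
    version: int               # bumped as the prompt and tools evolve
    prompt: str                # curated, task-specific prompt
    allowed_tools: frozenset   # closed allow-list of operations

def can_invoke(agent: AgentDefinition, tool: str) -> bool:
    """A constrained agent may call only tools in its allow-list."""
    return tool in agent.allowed_tools

classifier = AgentDefinition(
    name="invoice-classifier",
    version=2,
    prompt="Classify the invoice by type and urgency...",
    allowed_tools=frozenset({"read_record", "update_property"}),
)

assert can_invoke(classifier, "read_record")
assert not can_invoke(classifier, "send_email")  # not in the allow-list
```

The allow-list being part of the frozen definition is the point: the specialist cannot quietly grow new capabilities without a version bump.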
The tools an agent can call cover the operations that turn the AI's reasoning into real effects in the platform. Model operations — read a record, update a property, create a related record — let the agent act on the data it's reasoning about. HTTP tools let it reach external services for enrichment or posting. The send-email tool lets it issue communications through the tenant's email engine. The run-script tool lets it delegate complex computation to server-side code where appropriate. Each tool call is logged, attributable, and governed by the agent's permission scope. The agent is effectively an automation step that reasons before acting, with all the governance that implies.
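A minimal sketch of that governance: every tool call passes through a gate that checks the agent's permission scope and writes an attributable audit entry, whether the call is allowed or denied. The tool names, log shape, and `call_tool` function are assumptions for illustration.

```python
import datetime

AUDIT_LOG = []  # illustrative stand-in for the platform's audit trail

# Stand-in tool implementations; the real ones act on platform data.
TOOLS = {
    "read_record": lambda args: {"record": args["id"], "data": "..."},
    "update_property": lambda args: {"updated": args["id"]},
}

def call_tool(agent_name, allowed_tools, tool, args):
    """Gate a tool call: check scope, log the outcome, then dispatch."""
    entry = {
        "agent": agent_name,
        "tool": tool,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if tool not in allowed_tools:
        entry["outcome"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"{agent_name} may not call {tool}")
    result = TOOLS[tool](args)
    entry["outcome"] = "ok"
    AUDIT_LOG.append(entry)
    return result

result = call_tool("invoice-classifier", {"read_record"}, "read_record", {"id": "INV-1"})
assert AUDIT_LOG[-1]["outcome"] == "ok"
```

Denied calls are logged too, which is what makes the audit trail useful for answering "what did this agent try to do" rather than only "what did it succeed at".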
Grounding sources are scoped per agent. The invoice-classifier agent sees invoice templates and the tenant's historical classifications but doesn't need access to payroll data. The weekly-summary agent sees sales records and the executive team but doesn't need to see support tickets. Narrow grounding improves both accuracy (less noise for the agent to reason around) and safety (the agent can't leak data it never saw). Agents are configured with only the knowledge they need for their specific job.
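The scoping can be pictured as a lookup that only ever searches the sources named in the agent's configuration; from the agent's point of view, everything else doesn't exist. The store, source names, and `grounded_lookup` function below are illustrative assumptions.

```python
# Hypothetical knowledge store keyed by grounding source.
KNOWLEDGE_STORE = {
    "invoice_templates": ["net-30 template", "utilities template"],
    "historical_classifications": ["INV-001 classified utilities/low"],
    "payroll": ["salary table"],  # never granted to the classifier
}

def grounded_lookup(agent_sources, query):
    """Search only the agent's scoped sources; narrow grounding means
    the agent can't retrieve (or leak) data it was never granted."""
    hits = []
    for source in agent_sources:
        hits.extend(d for d in KNOWLEDGE_STORE.get(source, []) if query in d)
    return hits

classifier_sources = ["invoice_templates", "historical_classifications"]
assert grounded_lookup(classifier_sources, "template") == [
    "net-30 template", "utilities template"
]
assert grounded_lookup(classifier_sources, "salary") == []  # payroll out of scope
```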
Invocation from automations is the most common integration pattern. An automation that fires when an invoice arrives can include an agent step that classifies the invoice, and use the classification in subsequent steps to route the invoice appropriately. An automation that fires when a support ticket is created can ask a triage agent whether the ticket needs urgent attention, and page the on-call team if it does. The agent lives inside the automation flow the same way any other action lives there, with inputs from upstream steps and outputs available to downstream steps.
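In code, the pattern looks like an ordinary step in the flow: upstream data in, classification out, downstream routing on the result. The `classify_invoice` function below is a stand-in for the real agent invocation, and all names are hypothetical.

```python
def classify_invoice(invoice):
    # Stand-in for the agent step; in reality the classification comes
    # from the model, not this threshold rule.
    urgency = "high" if invoice["amount"] > 10_000 else "normal"
    return {"type": "utilities", "urgency": urgency}

def on_invoice_received(invoice):
    """Automation fired on invoice arrival: agent step, then routing."""
    classification = classify_invoice(invoice)          # agent step
    if classification["urgency"] == "high":
        return {"route": "urgent-queue", **classification}
    return {"route": "standard-queue", **classification}

result = on_invoice_received({"id": "INV-42", "amount": 18_500})
assert result["route"] == "urgent-queue"
```

The agent's output is just another step output: downstream steps consume it the same way they'd consume the result of a query or a script.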
Invocation from the UI exposes agents as user-triggered actions. A button on a record — "generate a draft response" — invokes an agent that drafts a response against the context of the current record. A button on a view — "summarize this group" — invokes an agent that reads the filtered set and produces a summary. The user sees a button; the implementation is an agent; the agent's configured prompt, tools, and grounding determine what happens. For the "take this AI-assisted action now" use case, this path is the right shape.
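The wiring behind such a button can be sketched as a binding from button id to agent name, with the current record passed as context. The binding table, handler, and `invoke_agent` callable are illustrative assumptions, not the platform's actual API.

```python
# Hypothetical binding: a record-level button id mapped to an agent name.
BUTTONS = {
    "generate-draft-response": "response-drafter",
}

def on_button_click(button_id, current_record, invoke_agent):
    """The UI supplies only the record as context; the agent's configured
    prompt, tools, and grounding determine everything else."""
    agent_name = BUTTONS[button_id]
    return invoke_agent(agent_name, context={"record": current_record})

draft = on_button_click(
    "generate-draft-response",
    {"id": "TICKET-7", "subject": "Billing question"},
    invoke_agent=lambda name, context: f"[{name}] draft for {context['record']['id']}",
)
assert draft == "[response-drafter] draft for TICKET-7"
```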
Scheduled agents run on a cron-like schedule, handling the recurring-task patterns that benefit from reliability more than from spontaneity. A weekly-report agent runs every Monday morning, generates a summary, emails it to the executive team, and stores a copy in the file repository. A nightly-reconciliation agent runs after hours, reviews the day's unmatched transactions, attempts to pair them, and flags the ones it couldn't match. That unattended operation is what turns an agent from a tool-used-sometimes into a permanent part of the tenant's workflow.
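A schedule entry for an unattended run might look like the following. The five-field cron expressions and the matcher are illustrative; the actual configuration syntax is an assumption, and the matcher handles only the "minute hour * * weekday" subset these examples use.

```python
import datetime

# Hypothetical schedule configuration for unattended agent runs.
SCHEDULED_AGENTS = [
    {"agent": "weekly-report", "cron": "0 7 * * MON"},          # Mondays 07:00
    {"agent": "nightly-reconciliation", "cron": "30 1 * * *"},  # nightly 01:30
]

def due_agents(now, schedules):
    """Minimal matcher for the 'minute hour * * weekday' subset above."""
    days = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]
    due = []
    for s in schedules:
        minute, hour, _, _, dow = s["cron"].split()
        if (int(minute) == now.minute and int(hour) == now.hour
                and dow in ("*", days[now.weekday()])):
            due.append(s["agent"])
    return due

monday_7am = datetime.datetime(2024, 1, 8, 7, 0)  # 2024-01-08 is a Monday
assert due_agents(monday_7am, SCHEDULED_AGENTS) == ["weekly-report"]
```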
Bundles are the way packaged agent functionality ships to tenants without every tenant rebuilding the same thing. A bundle is a pre-packaged set of agents along with the types, views, automations, and templates they depend on — an installable unit that adds a coherent capability to a tenant in one operation. An operator serving customers in a specific industry can publish a bundle with agents tuned for that industry's common processes; customers install the bundle and get a working implementation of those processes instantly, with the agents ready to run.
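A bundle manifest, then, is one unit carrying the agents plus everything they depend on, applied to a tenant in a single install operation. The manifest shape and the `install` function are illustrative assumptions, not the platform's actual export format.

```python
# Hypothetical bundle manifest: agents plus their dependencies.
BUNDLE = {
    "name": "accounts-payable-starter",
    "version": "2.1.0",
    "agents": ["invoice-classifier", "payment-reminder"],
    "types": ["Invoice", "Vendor"],
    "views": ["Unpaid invoices"],
    "automations": ["on-invoice-received"],
    "templates": ["invoice-email"],
}

def install(tenant, bundle):
    """Installing applies every object in the bundle in one operation."""
    for kind in ("agents", "types", "views", "automations", "templates"):
        tenant.setdefault(kind, []).extend(bundle[kind])
    tenant.setdefault("installed_bundles", []).append(
        (bundle["name"], bundle["version"])
    )
    return tenant

tenant = install({}, BUNDLE)
assert "invoice-classifier" in tenant["agents"]
assert ("accounts-payable-starter", "2.1.0") in tenant["installed_bundles"]
```

Recording the installed version per tenant is what later makes a fleet-wide update (say, to version 2.2) a tractable operation rather than a manual audit.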
Bundle authoring is the operator-side workflow. Operators build bundles in a template tenant, test them against representative data, version them, and publish them to the tenant fleet. Tenants see available bundles, review what each one installs, and opt in. Published bundles can be updated — an improvement to the invoice-classifier agent in version 2.1 reaches every installation when the operator pushes it — with the same version management story the rest of the platform has. Bundles are how operators turn their own implementation experience into portable value for their customer base.
Usage and cost visibility keep AI infrastructure from being a financial surprise. Every agent run is logged with the cost and latency of the invocation; tenants see their agent usage in a dashboard; operators see fleet-wide usage for capacity planning. Cost limits can be configured per agent or per tenant to prevent runaway expenses. For the legitimate operational concern that AI features can be expensive if left unbounded, this visibility is what turns the uncertainty into a managed operational cost.
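The per-run accounting can be sketched as a log entry per invocation plus a running check against a configured cap. The field names, the `record_run` function, and the dollar figures are illustrative assumptions.

```python
USAGE_LOG = []  # illustrative per-run usage log

def record_run(agent, cost_usd, latency_ms, limits):
    """Log one agent run; return False once the agent's cap is exceeded."""
    USAGE_LOG.append({"agent": agent, "cost_usd": cost_usd,
                      "latency_ms": latency_ms})
    spent = sum(e["cost_usd"] for e in USAGE_LOG if e["agent"] == agent)
    # Per-agent cap; a real system would also enforce per-tenant budgets.
    return spent <= limits.get(agent, float("inf"))

limits = {"invoice-classifier": 0.05}
assert record_run("invoice-classifier", 0.02, 800, limits)      # under budget
assert not record_run("invoice-classifier", 0.04, 750, limits)  # cap exceeded
```

The same log rolls up into the tenant dashboard and the operator's fleet-wide view; the cap is just the enforcement edge of the same data.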
Failure modes are first-class concerns. An agent that encounters an error logs it as a structured entry in the event log. An agent that produces output that fails downstream validation flags the issue and doesn't corrupt data. An agent that exceeds its time or cost limits is stopped cleanly. For the operational reality that AI services sometimes return bad output, the platform treats those failures the same way it treats any other kind — observable, recoverable, and bounded in their damage.
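Those three outcomes can be sketched in one run wrapper: errors become structured event-log entries, invalid output is flagged and never written, and over-limit runs stop cleanly. The event shapes and the `run_agent` function are illustrative assumptions.

```python
EVENT_LOG = []  # illustrative structured event log

def run_agent(agent, produce, validate, max_cost_usd):
    """One bounded agent run: every failure mode ends in a log entry
    and a None result, never in corrupted data."""
    try:
        output, cost = produce()
        if cost > max_cost_usd:
            EVENT_LOG.append({"agent": agent, "event": "stopped",
                              "reason": "cost_limit"})
            return None
        if not validate(output):
            EVENT_LOG.append({"agent": agent, "event": "flagged",
                              "reason": "validation_failed"})
            return None  # bad output is flagged, never written downstream
        return output
    except Exception as exc:
        EVENT_LOG.append({"agent": agent, "event": "error", "reason": str(exc)})
        return None

ok = run_agent("triage", lambda: ({"urgency": "high"}, 0.01),
               lambda o: o["urgency"] in {"low", "high"}, max_cost_usd=0.10)
assert ok == {"urgency": "high"}

bad = run_agent("triage", lambda: ({"urgency": "???"}, 0.01),
                lambda o: o["urgency"] in {"low", "high"}, max_cost_usd=0.10)
assert bad is None and EVENT_LOG[-1]["reason"] == "validation_failed"
```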
From chat to agents is the conceptual progression the AI features encourage. Exploratory work happens in the chat assistant. Repeatable work gets packaged as an agent. Portable bundled work ships in bundles. Each layer builds on the one below, and each layer is the right place for a different kind of AI use. For implementers developing their intuition for when to reach for which tool, this progression is the rough guide; in practice, a mature implementation uses all three.
For the adjacent topics, the AI chat assistant article covers the exploratory conversational surface, the AI in data model and automations article covers the AI features that accelerate implementation work, the AI interoperability article covers how the platform integrates with external AI services, and the automations article covers the workflow layer agents integrate into. Reusable assistants for your processes — that's what agents and bundles are for, and it's how AI earns its place as part of the platform's long-term infrastructure rather than as a shiny feature that never quite graduated from demo.