Designing a data model is the slowest part of getting a new implementation off the ground. The conceptual work — deciding which types exist, which properties live on each, what references what, what constraints apply — is real intellectual labor, but there's also a substantial amount of pure mechanical labor wrapped around it: typing in property names, picking types from dropdowns, configuring default values, setting up views, scaffolding the queries everyone will need. The mechanical part is where implementation time gets consumed without producing much intellectual value, and it's exactly where a well-grounded AI assistant can help. The implementer describes what they want; the AI drafts the shape of it; the implementer reviews, edits, and accepts. Days of tedious configuration collapse into a conversation, and the implementer stays fully in control of the outcome.
The describe-generate-review loop is the core interaction. The implementer types a description of what they're trying to model — "track orders with line items, customers, and shipment events" or "an applicant tracking system where candidates move through stages tied to open roles" — and the AI proposes a concrete draft: the types it inferred, the properties on each, the relationships between them, the queries and views that would obviously apply. The implementer reads the draft, adjusts what's wrong, accepts what's right, and the platform commits the changes. The AI is doing the drafting; the implementer is doing the judging. That division of labor is the right one, because the judging is where human expertise matters and the drafting is where it doesn't.
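The loop above can be sketched as a small contract between drafting and judging. Everything here is illustrative: `ModelProposal`, `draftModel`, and `review` are hypothetical names, not the platform's actual API, and the fixed draft stands in for a real AI call.

```typescript
// Minimal sketch of the describe-generate-review loop.
// draftModel stands in for the AI call; nothing is committed until review approves.

type PropertyKind = "text" | "currency" | "date" | "choice" | "reference";

interface PropertyDraft { name: string; kind: PropertyKind; }
interface TypeDraft { name: string; properties: PropertyDraft[]; }
interface ModelProposal { types: TypeDraft[]; approved: boolean; }

function draftModel(description: string): ModelProposal {
  // A real implementation would call the AI; this fixed draft is for illustration only.
  return {
    types: [
      { name: "Customer", properties: [{ name: "name", kind: "text" }] },
      {
        name: "Order",
        properties: [
          { name: "placedAt", kind: "date" },
          { name: "total", kind: "currency" },
          { name: "customer", kind: "reference" }, // points at the Customer type
        ],
      },
    ],
    approved: false, // every draft starts unapproved
  };
}

// The implementer judges the draft; only an approved proposal may be committed.
function review(proposal: ModelProposal, approve: boolean): ModelProposal {
  return { ...proposal, approved: approve };
}

const draft = draftModel("track orders with line items, customers, and shipment events");
const accepted = review(draft, true);
```

The key property of the design is that approval is a separate, explicit step: the drafting function cannot produce a committed change on its own.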
Types and properties generated from a description are where the AI's work is most obviously useful. A natural-language description of a business process gets translated into a set of types with sensibly typed properties: names are text, prices are currency, dates are dates, statuses are choice properties with inferred options, references to other types become reference properties. The AI uses the context of the current tenant's existing types when relevant — proposed additions tend to reference existing types rather than creating redundant ones where an existing type already fits. The result is a first draft that's usually close enough to be refined rather than rewritten.
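As a rough illustration, the kind of naming heuristic such inference might apply can be sketched like this. The rules and names below are invented for the example; the actual inference is done by the AI, not by pattern matching.

```typescript
// Toy heuristic: infer a property's kind from its name, preferring a
// reference to an existing tenant type over creating a duplicate.

type Kind = "text" | "currency" | "date" | "choice" | "reference";

function inferKind(propertyName: string, existingTypes: string[]): Kind {
  const n = propertyName.toLowerCase();
  // A name that matches an existing type becomes a reference, not a new type.
  if (existingTypes.some(t => t.toLowerCase() === n)) return "reference";
  if (n.includes("price") || n.includes("total")) return "currency";
  if (n.includes("date") || n.endsWith("at")) return "date";
  if (n === "status" || n === "stage") return "choice";
  return "text";
}

const tenantTypes = ["Customer", "Product"];
inferKind("customer", tenantTypes);  // "reference": the Customer type already exists
inferKind("unitPrice", tenantTypes); // "currency"
inferKind("shippedAt", tenantTypes); // "date"
```

The first branch is the one the paragraph emphasizes: existing tenant context wins over inventing a redundant type.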
Scaffolded queries and views take model generation a step further. Alongside the types and properties, the AI drafts the obvious queries a real implementation would need — active records, records by status, recent records, records assigned to the current user — and the obvious views to show them. A model for customer orders ships with a draft view of all open orders, a draft view of orders by status, a draft view of today's shipments. The implementer reviews the drafts, keeps what fits, and moves on. For the "standard views that everyone configures anyway" portion of an implementation, this scaffolding removes the most repetitive part of the setup.
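One plausible shape for that scaffolding, with every identifier and filter string hypothetical, is a function that derives the "obvious" query drafts from a type's properties:

```typescript
// Sketch: derive standard query drafts for a type. The filter strings are
// placeholders, not a real query language.

interface SProperty { name: string; kind: string; }
interface STypeDef { name: string; properties: SProperty[]; }
interface QueryDraft { name: string; filter: string; }

function scaffoldQueries(type: STypeDef): QueryDraft[] {
  const drafts: QueryDraft[] = [
    { name: `All ${type.name}s`, filter: "true" },
    { name: `Recent ${type.name}s`, filter: "createdAt >= now() - 7d" },
  ];
  if (type.properties.some(p => p.name === "status")) {
    drafts.push({ name: `${type.name}s by status`, filter: "group by status" });
  }
  if (type.properties.some(p => p.name === "assignee")) {
    drafts.push({ name: `My ${type.name}s`, filter: "assignee = currentUser()" });
  }
  return drafts;
}

const order: STypeDef = {
  name: "Order",
  properties: [{ name: "status", kind: "choice" }, { name: "total", kind: "currency" }],
};
scaffoldQueries(order); // 3 drafts: all, recent, by status
```

Each draft is still just a draft: the implementer keeps the ones that fit and discards the rest.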
Automations generated from intent turn plain-language business rules into working workflows. "When an order ships, email the customer with tracking information" becomes an automation with the right trigger, the right conditions, and the right email action — with the email body drafted against the relevant templates and the customer-email property correctly referenced. "Once a week, send a summary of new applicants to the hiring manager" becomes a scheduled automation with a query, a template, and a send action configured. The implementer reviews the generated automation, adjusts the details, and activates it. Descriptions that used to require picking through the automation builder's options one at a time are now rendered directly into working automations.
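The kind of record that intent-to-automation step might produce can be sketched as follows. The trigger, condition, and action fields here are invented for illustration; the platform's real automation schema is not shown in this article.

```typescript
// Sketch of a drafted automation: trigger, conditions, action, and an
// active flag that stays false until the implementer reviews it.

interface Automation {
  trigger: { event: string; type: string };
  conditions: string[];
  action: { kind: "email" | "notify"; to: string; template: string };
  active: boolean;
}

// "When an order ships, email the customer with tracking information"
const shipNotice: Automation = {
  trigger: { event: "property-changed", type: "Order" },
  conditions: ["status = 'shipped'"],
  action: {
    kind: "email",
    to: "{{order.customer.email}}", // customer-email property reference, drafted by the AI
    template: "Your order has shipped. Tracking: {{order.trackingNumber}}",
  },
  active: false, // inactive until the implementer reviews and activates it
};

function activate(a: Automation): Automation {
  return { ...a, active: true };
}
```

The `active: false` default mirrors the review step in the text: nothing runs until a human switches it on.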
Validation before commit is the safety mechanism that keeps AI-generated changes from corrupting the model. Every proposal is validated against the current tenant's state: referential integrity is checked, property type compatibility is verified, role and permission constraints are respected, any constraints on the existing model are honored. Proposals that would violate the model's consistency are flagged rather than silently applied. That validation layer is what makes it safe to accept the AI's drafts with confidence — the platform itself vets the proposal before any of it becomes real.
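A minimal sketch of one such check, referential integrity, assuming invented names throughout: a reference property must point at a type that either already exists in the tenant or is created by the same proposal.

```typescript
// Sketch of pre-commit validation: flag unresolved references instead of
// silently applying a proposal that would break the model.

interface DraftProperty { name: string; kind: string; refersTo?: string; }
interface DraftType { name: string; properties: DraftProperty[]; }

function validate(proposal: DraftType[], existingTypes: string[]): string[] {
  // A reference may target an existing type or one created by this proposal.
  const known = new Set([...existingTypes, ...proposal.map(t => t.name)]);
  const problems: string[] = [];
  for (const t of proposal) {
    for (const p of t.properties) {
      if (p.kind === "reference" && (!p.refersTo || !known.has(p.refersTo))) {
        problems.push(`${t.name}.${p.name}: unresolved reference`);
      }
    }
  }
  return problems; // an empty list means the proposal may be committed
}

const ok = validate(
  [{ name: "Order", properties: [{ name: "customer", kind: "reference", refersTo: "Customer" }] }],
  ["Customer"],
);
const bad = validate(
  [{ name: "Order", properties: [{ name: "supplier", kind: "reference", refersTo: "Supplier" }] }],
  ["Customer"],
);
```

A real validator would also cover type compatibility and permission constraints, as the paragraph lists; the shape is the same — check, collect problems, refuse to commit if any exist.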
Diff preview shows the implementer exactly what will change before they confirm. Types being added are listed; properties being added to existing types are listed; automations being created or modified are shown; views that will appear are previewed. The implementer sees the full scope of the proposed change in a single screen, not as an opaque "apply this AI output" button. For the "no surprises" expectation that implementers rightly have about their tenant, the diff preview is the interface that delivers it.
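The diff itself can be sketched as a comparison between the current model state and the proposed one. The field names below are hypothetical; a real preview would also cover modified properties and automations.

```typescript
// Sketch: compute what a diff-preview screen would list before confirmation.

interface ModelState { types: string[]; views: string[]; }

interface ModelDiff {
  typesAdded: string[];
  viewsAdded: string[];
}

function diff(current: ModelState, proposed: ModelState): ModelDiff {
  return {
    typesAdded: proposed.types.filter(t => !current.types.includes(t)),
    viewsAdded: proposed.views.filter(v => !current.views.includes(v)),
  };
}

const preview = diff(
  { types: ["Customer"], views: [] },
  { types: ["Customer", "Order", "Shipment"], views: ["Open orders"] },
);
// preview.typesAdded -> ["Order", "Shipment"]; preview.viewsAdded -> ["Open orders"]
```

The point is that the full scope of the change is a computed, inspectable value, not an opaque apply button.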
Iterative refinement lets the implementer refine the proposal in conversation rather than having to accept or reject it wholesale. "Add a priority property" extends the current proposal. "Make the address property nullable" revises the current proposal. "Rename the Item type to Product" adjusts the current proposal. Each refinement turn updates the proposal in place, the diff preview reflects the new shape, and the implementer converges on a draft that actually fits before committing. That iterative loop is usually faster than either writing the whole thing by hand or fighting a single-shot AI output until it happens to match what the implementer wanted.
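The three example refinements in the paragraph can be modeled as small operations applied to the current proposal. The operation names and proposal shape below are invented for the sketch:

```typescript
// Sketch: each refinement turn is a small operation that updates the
// proposal; the diff preview would re-render after every turn.

interface Prop { name: string; nullable: boolean; }
interface Proposal { typeName: string; props: Prop[]; }

type Refinement =
  | { op: "add-property"; name: string }    // "Add a priority property"
  | { op: "make-nullable"; name: string }   // "Make the address property nullable"
  | { op: "rename-type"; to: string };      // "Rename the Item type to Product"

function refine(p: Proposal, r: Refinement): Proposal {
  switch (r.op) {
    case "add-property":
      return { ...p, props: [...p.props, { name: r.name, nullable: false }] };
    case "make-nullable":
      return { ...p, props: p.props.map(x => x.name === r.name ? { ...x, nullable: true } : x) };
    case "rename-type":
      return { ...p, typeName: r.to };
  }
}

let proposal: Proposal = { typeName: "Item", props: [{ name: "address", nullable: false }] };
proposal = refine(proposal, { op: "add-property", name: "priority" });
proposal = refine(proposal, { op: "make-nullable", name: "address" });
proposal = refine(proposal, { op: "rename-type", to: "Product" });
```

Each turn produces a new proposal state, which is why the implementer can converge step by step instead of accepting or rejecting wholesale.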
Direct extension of the existing model is the everyday pattern once a tenant has some implementation in place. The AI proposes additions that compose with the current model rather than replacing it. When the implementer asks for a new feature — "add a reviews relationship between customers and products" — the AI extends what's there, preserving every existing configuration and adding the new shape on top. The tenant's implementation evolves additively; past work isn't wasted by future AI-assisted expansions.
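The additive guarantee can be sketched as a pure extension function: every existing type and property survives untouched, and only new shapes are layered on top. The model representation here is deliberately simplified and hypothetical.

```typescript
// Sketch: additive model extension. Existing configuration is preserved;
// only genuinely new properties are appended.

interface Model { types: Record<string, string[]>; } // type name -> property names

function extend(model: Model, typeName: string, newProps: string[]): Model {
  const existing = model.types[typeName] ?? [];
  return {
    types: {
      ...model.types, // every existing type survives as-is
      [typeName]: [...existing, ...newProps.filter(p => !existing.includes(p))],
    },
  };
}

const before: Model = { types: { Customer: ["name"], Product: ["price"] } };
const after = extend(before, "Customer", ["reviews"]);
// before is untouched; after has Customer with ["name", "reviews"]
```

Returning a new model rather than mutating the old one makes the "past work isn't wasted" property easy to verify.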
Scoping by role and tenant ensures the AI only proposes changes the asking user could make by hand. A user without permission to create new types can use the AI to draft changes that affect existing ones, but can't circumvent their role by asking the AI to do something they couldn't do themselves. A user working in tenant A can't use the AI to reach into tenant B, because the AI is scoped to the tenant's own context. Every permission the platform enforces for manual work applies to AI-assisted work in the same shape.
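The rule reduces to the same permission check gating both manual and AI-assisted changes. The roles, permissions, and change kinds below are invented for the example:

```typescript
// Sketch: one check for both manual and AI-assisted changes.

interface AppUser { tenant: string; canCreateTypes: boolean; }
interface Change { tenant: string; kind: "create-type" | "edit-property"; }

function allowed(user: AppUser, change: Change): boolean {
  if (change.tenant !== user.tenant) return false; // no cross-tenant reach
  if (change.kind === "create-type" && !user.canCreateTypes) return false;
  return true;
}

const user: AppUser = { tenant: "A", canCreateTypes: false };
allowed(user, { tenant: "A", kind: "edit-property" }); // true
allowed(user, { tenant: "A", kind: "create-type" });   // false: role forbids it
allowed(user, { tenant: "B", kind: "edit-property" }); // false: wrong tenant
```

Because the AI's proposals pass through the same function as manual edits, there is no separate AI permission surface to get wrong.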
A full audit trail tags every AI-authored change as such in the event log. When a type is created by an AI-assisted flow, the event log records both the fact of the creation and the fact that it came from the AI builder, along with the description that prompted it. For compliance audits that need to account for how the current state of the implementation came to be, the AI-authored changes are distinguishable from human-authored ones and fully traceable. That traceability is what makes AI-assisted model work acceptable in regulated environments, not just permissible in exploratory ones.
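An event-log entry carrying that provenance could look like the sketch below. The field names are illustrative; the real log schema is not described in this article.

```typescript
// Sketch: audit events tag AI-authored changes with their origin and the
// description that prompted them, so audits can separate the two.

interface AuditEvent {
  action: string;
  target: string;
  origin: "manual" | "ai-builder";
  description?: string; // the prompting description, when AI-authored
}

const eventLog: AuditEvent[] = [];

function record(action: string, target: string, description?: string): void {
  eventLog.push({
    action,
    target,
    origin: description === undefined ? "manual" : "ai-builder",
    description,
  });
}

record("create-type", "Shipment", "track orders with shipment events"); // AI-authored
record("edit-property", "Order.total");                                 // manual

const aiAuthored = eventLog.filter(e => e.origin === "ai-builder");
```

Filtering on `origin` is exactly the query a compliance audit would run to account for how the current state came to be.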
The implementer stays in control. That's the deliberate design principle: the AI drafts, the implementer decides. Nothing is committed without explicit approval; every change is visible in the diff; every refinement is in the implementer's hands. The AI is a productive collaborator that handles the mechanical work quickly, not an autonomous agent that makes decisions without oversight. For the implementers who will actually use this day to day, that control is what makes the feature trustworthy.
For implementers accelerating the setup of new implementations, for teams iterating on their data model as their business evolves, and for anyone who wants to spend less time clicking through configuration screens and more time thinking about what to build, AI-assisted model and automation drafting is the feature that shifts the cost curve. For the adjacent topics, the AI chat assistant article covers the conversational interface, the AI agents and bundles article covers packaged AI-authored assemblies, the automations article covers what the AI-drafted automations ultimately become, and the object model article covers the primitives the AI is drafting against. The model builds itself — or closer to "builds the first draft of itself" — and the implementer does the rest.