Every serious implementation makes two moves that the day-to-day UI isn't built for. Data comes in — from a legacy system that's being retired, from a spreadsheet that somebody's been maintaining for five years, from a partner's export, from a one-time data-cleaning exercise that produced a CSV somebody needs to load. Configuration goes out — from the staging tenant where implementers have been building, to the production tenant where end-users will live; or from a "template" tenant that represents a standard rollout, to a new-customer tenant being spun up from that template. Platforms that don't make both moves first-class end up with brittle hand-written scripts doing the job, which is exactly where data-migration projects go to die. We've built both as features.
Record import starts with a spreadsheet. CSV and XLSX are the two formats users already have — whether they came from an old system, a partner, or a colleague who lives in a spreadsheet. The importer accepts either, reads the columns, and presents a mapping step where the user tells the platform which source column feeds which target property. The mapping UI is designed for the common case where column names are close but not identical — "Customer Name" maps to the Name property, "Email Addr" maps to Email, and so on — with sensible defaults suggested where the mapping is obvious.
Column mapping is deliberately flexible because source data rarely matches target structure exactly. Not every column in the source has to map somewhere; columns that aren't relevant can be ignored. Not every target property has to be sourced; properties without a mapped column take their defaults or remain empty. Missing-column handling is graceful — if the source is missing a column that the target expects, the import doesn't fail outright; it proceeds with the columns it has, and the user decides whether to backfill later. That tolerance matters because real-world data is never as clean as the target model assumes.
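To make the mapping step concrete, here is a minimal sketch of how suggested defaults and ignored columns could work. Everything here is illustrative: `suggest_mappings`, `normalize`, and the alias table are hypothetical names, not the platform's actual API.

```python
import re

def normalize(label: str) -> str:
    """Lowercase and strip punctuation/whitespace so near-matches compare equal."""
    return re.sub(r"[^a-z0-9]", "", label.lower())

# A few hand-written aliases for the common "close but not identical" cases.
ALIASES = {"emailaddr": "email", "customername": "name"}

def suggest_mappings(source_columns, target_properties):
    """Return {source column: target property} defaults; the rest stay unmapped."""
    targets = {normalize(p): p for p in target_properties}
    mappings = {}
    for col in source_columns:
        key = ALIASES.get(normalize(col), normalize(col))
        if key in targets:
            mappings[col] = targets[key]  # obvious match: pre-fill it
        # no match: column stays unmapped and is simply ignored on import
    return mappings

print(suggest_mappings(["Customer Name", "Email Addr", "Internal ID"],
                       ["Name", "Email", "Phone"]))
# → {'Customer Name': 'Name', 'Email Addr': 'Email'}
```

Note that "Internal ID" is dropped without error, and the target's "Phone" property is left unsourced: both tolerances described above fall out of the same loop.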
Type safety keeps imports from corrupting the target. The importer respects property types: numeric properties only accept values that parse as numbers; date properties only accept valid dates; choice properties only accept values from the configured options. Rows with values that don't fit are flagged rather than silently coerced into something wrong. The validation machinery is the same machinery that guards the normal form submissions, so the import and the UI share one definition of "valid," which means data imported from a spreadsheet is as trustworthy as data entered by hand.
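The flag-don't-coerce behavior can be sketched as a single checker that either accepts a value or returns a reason. The function name and type labels are assumptions for illustration; the point is that one checker can back both the import path and the form path.

```python
from datetime import date

def check_value(value, prop_type, options=None):
    """Return None if the value fits the property type, else a reason string.
    Sketch of a validator shared between form submission and import."""
    try:
        if prop_type == "number":
            float(value)
        elif prop_type == "date":
            date.fromisoformat(value)
        elif prop_type == "choice" and value not in (options or []):
            return f"{value!r} is not one of the configured options"
        return None
    except ValueError:
        return f"{value!r} does not parse as a {prop_type}"

print(check_value("42.5", "number"))                   # → None (valid)
print(check_value("not-a-date", "date"))               # flagged with a reason
print(check_value("Warm", "choice", ["Hot", "Cold"]))  # flagged with a reason
```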
Preview before import is the safety mechanism that keeps mistakes from becoming crises. Before any data is written, the platform shows a preview: this many rows will be created, this many will be updated, this many were flagged and won't be imported at all. The user reviews the preview, confirms, and the import runs for real. For large imports where a mistake could affect thousands of records, that preview step is what turns the import from a scary one-shot into a confident decision.
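A preview is essentially a dry run: classify every row without writing anything. The sketch below assumes a hypothetical match key and summary shape, not the platform's real internals.

```python
def preview_import(rows, existing_keys, key="email"):
    """Dry run: count creates, updates, and flagged rows without writing."""
    summary = {"create": 0, "update": 0, "flagged": 0}
    for row in rows:
        if not row.get(key):
            summary["flagged"] += 1      # e.g. failed validation upstream
        elif row[key] in existing_keys:
            summary["update"] += 1       # a matching record already exists
        else:
            summary["create"] += 1       # no match: this row creates a record
    return summary

rows = [{"email": "a@x.com"}, {"email": "b@x.com"}, {"email": ""}]
print(preview_import(rows, existing_keys={"a@x.com"}))
# → {'create': 1, 'update': 1, 'flagged': 1}
```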
Bulk create or update behavior is configurable per import. Some imports should always create new records (a fresh batch of leads from a campaign). Some should always update existing records by match on an identifier (a nightly sync from a source of truth). Some should be "upsert" — create where no match exists, update where a match does. The user picks the right mode for the task, and the matching key can be any unique property on the target type.
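The three modes reduce to a small per-row decision once matching has been done. This is an illustrative sketch with invented names; the mode strings and actions are assumptions.

```python
def decide_action(mode, match_found):
    """Map the configured import mode to a per-row action (sketch)."""
    if mode == "create":
        return "create"  # always create, even if a near-duplicate exists
    if mode == "update":
        return "update" if match_found else "skip"  # only touch existing rows
    if mode == "upsert":
        return "update" if match_found else "create"
    raise ValueError(f"unknown import mode: {mode!r}")

print(decide_action("upsert", match_found=False))  # → create
print(decide_action("update", match_found=False))  # → skip
```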
Partial failure handling keeps large imports from being all-or-nothing. If row 1,247 out of 10,000 contains a value that doesn't pass validation, rows 1 through 1,246 aren't rolled back and rows 1,248 onward aren't skipped. The bad row is logged with a clear reason; the rest of the import proceeds; at the end the user gets a report listing which rows succeeded and which were flagged for review. That's how import actually works when it works well: most data is fine, some data needs attention, and the user wants to handle the exceptions without redoing the whole job.
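The collect-and-continue pattern described above might look like this. `run_import`, `validate`, and `write` are hypothetical stand-ins for whatever the real pipeline uses.

```python
def run_import(rows, validate, write):
    """Process every row; collect failures instead of aborting (sketch)."""
    report = {"succeeded": [], "flagged": []}
    for i, row in enumerate(rows, start=1):
        error = validate(row)
        if error:
            report["flagged"].append((i, error))  # log the reason, keep going
        else:
            write(row)
            report["succeeded"].append(i)
    return report

stored = []
report = run_import(
    [{"qty": "3"}, {"qty": "x"}, {"qty": "7"}],
    validate=lambda r: None if r["qty"].isdigit() else "qty is not a number",
    write=stored.append,
)
print(report["flagged"])    # → [(2, 'qty is not a number')]
print(report["succeeded"])  # → [1, 3]
```

Rows 1 and 3 land; row 2 is flagged with its reason; nothing is rolled back.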
Data-model export is the companion feature for the configuration side. The entire implementation — types, properties, queries, views, automations, email templates, homepage widget arrangements — can be exported as a single portable file. That file is a serialized representation of the model, not the data: it captures "what the application looks like" rather than "what the application contains." An implementer who has built a working configuration in a staging tenant can export it, hand the file to a production tenant, and re-import it to bring the same configuration to life there.
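To make "model, not data" concrete, here is one plausible shape for such a file, shown as JSON. The field names and structure are invented for illustration; only the principle (configuration serialized, records absent) comes from the text above.

```python
import json

# Hypothetical shape of a model export: configuration only, no record data.
export = {
    "version": 1,
    "exported_at": "2024-01-01T00:00:00Z",
    "types": [{"name": "Lead", "properties": [
        {"name": "Name", "type": "text"},
        {"name": "Status", "type": "choice", "options": ["New", "Won"]},
    ]}],
    "views": [{"name": "Open Leads", "type": "Lead", "filter": "Status = New"}],
    "automations": [],  # queries, email templates, widgets would sit alongside
}

portable = json.dumps(export, indent=2)  # the single portable file
restored = json.loads(portable)          # re-import parses the same format
print(restored["types"][0]["name"])      # → Lead
```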
Export is available as a UI action and as an automation action. The UI path is what a human implementer uses: open the export dialog, select the scope, download the file. The automation path is what automated deployment pipelines use: a scheduled workflow snapshots the current model, attaches the export to an email, or pushes it to the file repository for archival. Both paths produce the same format, so the same file works for a manual re-import and for a programmatic one.

Versioning and point-in-time capture make model exports suitable as a configuration archive, not just a one-shot transfer. An export taken today represents the model as of today; an export taken next month represents the model as of next month. Comparing the two tells the story of what changed in the interim. For regulated implementations where configuration changes need to be documented, that sequence of exports is a genuine audit trail.
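Comparing two point-in-time exports is a straightforward set difference over the model's contents. A minimal sketch, diffing only by type name, with all data invented:

```python
def diff_models(old, new):
    """Compare two point-in-time exports keyed by type name (illustrative)."""
    old_types, new_types = set(old), set(new)
    return {
        "added": sorted(new_types - old_types),
        "removed": sorted(old_types - new_types),
        "changed": sorted(t for t in old_types & new_types if old[t] != new[t]),
    }

january = {"Lead": ["Name"], "Order": ["Total"]}
february = {"Lead": ["Name", "Email"], "Invoice": ["Amount"]}
print(diff_models(january, february))
# → {'added': ['Invoice'], 'removed': ['Order'], 'changed': ['Lead']}
```

The output reads as a change log between the two snapshots, which is exactly the audit-trail property described above.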
Re-importing an implementation handles the conflict cases carefully. An import into an empty tenant is the easy case — everything gets created fresh. An import into a tenant that already has some configuration is where conflict resolution matters: the importer detects collisions, shows them to the user, and lets them choose whether to overwrite, skip, or rename. That flexibility is what makes model export and import actually usable in real-world scenarios where the target isn't a clean slate.
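The three per-collision choices reduce to a small merge routine. This is a sketch under invented names; the real importer's behavior is described only in the prose above.

```python
def resolve(incoming, existing, choice):
    """Merge an imported model into a target, applying the user's choice
    for every collision (sketch; names are hypothetical)."""
    result = dict(existing)
    for name, config in incoming.items():
        if name not in result:
            result[name] = config                  # empty-slot case: create fresh
        elif choice == "overwrite":
            result[name] = config                  # imported version wins
        elif choice == "skip":
            pass                                   # target's version wins
        elif choice == "rename":
            result[f"{name} (imported)"] = config  # keep both
    return result

existing = {"Lead": "v1"}
print(resolve({"Lead": "v2", "Order": "v1"}, existing, "rename"))
# → {'Lead': 'v1', 'Lead (imported)': 'v2', 'Order': 'v1'}
```

An import into an empty tenant never hits the collision branches, which is why that case is easy.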
Tenant sync is the related feature for keeping multiple tenants in step continuously, rather than as point-in-time exports. That feature is covered in the tenant customizations article; the short version is that where model export is "take a snapshot, move it, re-import it," tenant sync is "keep these tenants aligned automatically." Different tools for different jobs: model export is right for one-time promotions and template rollouts, tenant sync is right for ongoing fleet management.
The two moves — data in, configuration out — are where a lot of implementation projects reveal whether the platform takes them seriously. We do. For the adjacent topics, the spreadsheet exports article covers the data-out direction, the tenant customizations article covers the continuous sync alternative, and the file repository article covers where import sources and export archives live.