Storing files is rarely a single decision. Some files should live close to the application for speed — the images a busy customer-facing page loads dozens of times a minute. Some should live on cheap cold storage because they're accessed rarely but must be kept for compliance. Some should be served through a content-delivery network because they're public and latency matters. Some need to live in a specific jurisdiction for regulatory reasons. A platform that forces every customer to pick one storage backend and accept the compromises is a platform that every customer eventually outgrows. A pluggable storage layer lets each file type — and each tenant — land where it should.
The storage layer supports multiple pluggable backends. Object storage in several providers. Traditional file transfer protocols. Local disk for development and small deployments. The rest of the platform — forms, views, automations, the file repository — works with files without knowing which backend they're actually sitting on. That abstraction is the whole point: implementers think in terms of files, not storage vendors, and the storage vendor is a configuration choice rather than an architectural commitment.
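One way that abstraction could look, as a minimal sketch: a small backend interface the rest of the platform codes against, with the concrete vendor behind it. The class and method names here are illustrative assumptions, not the platform's actual API.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Minimal interface every backend implements; callers never see the vendor."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskBackend(StorageBackend):
    """In-memory stand-in for a local-disk backend, for illustration only."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

# Forms, views, and automations work against StorageBackend only;
# swapping LocalDiskBackend for an object-store adapter changes nothing upstream.
backend: StorageBackend = LocalDiskBackend()
backend.put("invoices/2024-001.pdf", b"%PDF-1.7 ...")
```

Because every backend satisfies the same interface, the vendor really is a configuration choice: construct a different `StorageBackend` subclass at startup and nothing above it changes.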
Per-type routing is where the pluggability pays off. A tenant can configure different storage backends for different file types. Customer photos go to one place — fast, CDN-fronted, regionally distributed. Invoice PDFs go to another — cheaper, archival-grade, retained per compliance policy. Raw data files from a scientific-instrument integration go to a third — high-capacity, low-cost, designed for bulk read-once workloads. That routing is transparent to users and to most of the platform; it's a decision made once at configuration time and reflected in where the files physically live.
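A routing table like that could be as simple as a type-to-backend map with a default. The type and backend names below are hypothetical, chosen to mirror the examples above.

```python
# Hypothetical routing table: file type -> configured backend name.
ROUTING = {
    "customer_photo":  "cdn_object_store",   # fast, CDN-fronted
    "invoice_pdf":     "archival_store",     # cheap, compliance-retained
    "instrument_raw":  "bulk_cold_store",    # high-capacity, read-once
}
DEFAULT_BACKEND = "default_store"

def backend_for(file_type: str) -> str:
    """Resolve which configured backend a file of this type should land on."""
    return ROUTING.get(file_type, DEFAULT_BACKEND)

assert backend_for("invoice_pdf") == "archival_store"
assert backend_for("misc_note") == "default_store"   # unrouted types fall through
```

The lookup happens once per upload, inside the storage layer, which is why nothing upstream has to know the routing exists.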
CDN fronting is supported where the backend offers it. Public files — those that end up on public pages, in partner portals, or in email attachments with signed URLs — can be served through a content-delivery network so latency stays low regardless of the viewer's location. For international tenants or public-facing deployments, CDN fronting is usually the difference between a pleasant experience and an actively painful one. The file repository and the record attachments both benefit.
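Mechanically, CDN fronting often amounts to serving the same object path from a CDN hostname instead of the backend's origin hostname. A sketch of that rewrite, with an assumed CDN host (in practice this would come from per-tenant configuration):

```python
from urllib.parse import urlsplit, urlunsplit

CDN_HOST = "cdn.example.net"  # assumed; per-tenant configuration in practice

def cdn_url(origin_url: str) -> str:
    """Swap the backend's origin host for the CDN host; the path stays stable,
    so the CDN can pull from the origin on a cache miss."""
    parts = urlsplit(origin_url)
    return urlunsplit((parts.scheme, CDN_HOST, parts.path, parts.query, parts.fragment))

assert cdn_url("https://bucket.s3.example.com/public/logo.png") == \
    "https://cdn.example.net/public/logo.png"
```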
Local fallback handles the case where remote storage is temporarily unavailable. If the configured backend is slow, degraded, or completely offline for a few minutes, the platform degrades gracefully: uploads are cached locally and queued for retry; reads fall back to the last known state where possible; the user sees something other than a hard failure. That's important because storage backends, like any remote system, do occasionally misbehave, and a platform that hard-fails every file operation during a five-minute backend hiccup is not a platform businesses want to rely on.
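On the read side, graceful degradation could look like this sketch: try the remote backend, refresh a local cache on success, and serve the last known copy when the backend misbehaves. The helper and its shape are assumptions for illustration.

```python
def read_with_fallback(key, remote_get, cache):
    """Try the remote backend; on failure, fall back to the last cached copy."""
    try:
        data = remote_get(key)
        cache[key] = data          # refresh the local copy on success
        return data
    except Exception:
        if key in cache:
            return cache[key]      # degraded but not a hard failure
        raise                      # nothing cached: surface the error

# Simulate a five-minute backend hiccup.
cache = {"report.csv": b"last known bytes"}

def flaky_get(key):
    raise TimeoutError("backend unavailable")

assert read_with_fallback("report.csv", flaky_get, cache) == b"last known bytes"
```

The user sees a slightly stale file instead of an error page, which is exactly the trade a temporary outage should make.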
Failed-upload caching is the related feature on the write side. A file that was uploaded successfully to the platform but couldn't be pushed to the configured backend — because the backend timed out, or threw a transient error, or rejected the upload for a reason that has since resolved — isn't lost. The platform keeps the bytes locally, keeps a retryable record of the failed push, and retries on a sensible schedule until the backend accepts it. That's what turns storage reliability into a platform concern rather than a user concern: users don't see "your file couldn't be saved, please try again later" because the retry happens automatically.
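The write side can be sketched as a local retry queue: keep the bytes, record the failed push, and retry on a backoff schedule until the backend accepts it. This is an illustrative minimal implementation, not the platform's actual retry machinery.

```python
import time

class RetryQueue:
    """Keeps bytes locally and retries the push on an exponential schedule."""

    def __init__(self, push, base_delay=1.0):
        self.push = push              # callable that sends bytes to the backend
        self.base_delay = base_delay
        self.pending = []             # (key, data, attempts, next_try_at)

    def enqueue(self, key, data):
        self.pending.append((key, data, 0, time.monotonic()))

    def drain(self):
        """Attempt every due push; re-queue failures with a longer delay."""
        now = time.monotonic()
        still_pending = []
        for key, data, attempts, next_try_at in self.pending:
            if now < next_try_at:
                still_pending.append((key, data, attempts, next_try_at))
                continue
            try:
                self.push(key, data)                       # accepted: drop it
            except Exception:
                delay = self.base_delay * (2 ** attempts)  # 1s, 2s, 4s, ...
                still_pending.append((key, data, attempts + 1, now + delay))
        self.pending = still_pending
```

A scheduler calling `drain()` periodically is what turns "your file couldn't be saved" into an invisible, automatic retry.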
Signed URLs are the mechanism for granting time-limited access to private files. A user who needs to share a document with an external party — without creating an account for that party, and without making the document permanently public — generates a signed URL with a configurable expiry. The URL is valid only for the chosen window; after that it becomes inactive. For one-off sharing of confidential content, this is the friction-free option that would otherwise push users toward the "email them the file" pattern.
Per-tenant configuration is what makes the storage layer work in multi-tenant deployments. Each tenant picks its own backend set — its own CDN, its own buckets, its own routing rules — without those choices leaking across tenants. An operator can offer tenants a few preset configurations (a shared default for small tenants, a per-tenant dedicated backend for larger ones, a specific regional option for customers with data-residency requirements) and tenants pick what fits.
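The preset model described above could be sketched as a two-level lookup: tenants map to presets, presets map to concrete backend settings. All names here are hypothetical.

```python
# Illustrative operator-defined presets; structure and names are assumptions.
PRESETS = {
    "shared_default": {"backend": "shared_bucket", "region": "us-east"},
    "dedicated":      {"backend": "tenant_bucket", "region": "us-east"},
    "eu_resident":    {"backend": "eu_bucket",     "region": "eu-west"},
}

TENANT_PRESET = {
    "acme":    "dedicated",
    "kleinco": "shared_default",
    "gmbh":    "eu_resident",     # data-residency requirement
}

def storage_config(tenant: str) -> dict:
    """Each tenant resolves to its own preset; choices never leak across tenants."""
    return PRESETS[TENANT_PRESET.get(tenant, "shared_default")]

assert storage_config("gmbh")["region"] == "eu-west"
assert storage_config("brand_new_tenant") == PRESETS["shared_default"]
```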
Transparency to the rest of the platform is the feature worth emphasizing. Forms that upload files, views that display them, automations that manipulate them, the API that exposes them, the file repository that organizes them — none of these see the storage backend. They see files, with URLs and metadata. The storage layer handles the rest. That separation is what makes it reasonable to migrate between backends later: the platform's behavior doesn't change when you do, and existing files can be moved across without affecting any upstream feature.
Migrating between backends is a supported operation. When a tenant outgrows their initial backend — their data volume crossed a threshold, their regional footprint changed, their cost profile shifted — files can be moved from one backend to another in a controlled migration, with URL mappings updated transparently. The platform's references don't have to be rewritten, because the references were never tied to a specific backend in the first place.
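Because references are opaque keys rather than backend-specific URLs, a migration reduces to copy-then-repoint. A sketch under that assumption, with stand-in backends and a hypothetical URL-mapping table:

```python
class DictBackend:
    """Stand-in backend for illustration."""

    def __init__(self):
        self.blobs = {}

    def put(self, key, data):
        self.blobs[key] = data

    def get(self, key):
        return self.blobs[key]

    def delete(self, key):
        self.blobs.pop(key, None)

def migrate(keys, source, target, url_map):
    """Copy each file to the new backend, then repoint its URL mapping.
    Platform references stay as opaque keys, so none of them are rewritten."""
    for key in keys:
        target.put(key, source.get(key))
        url_map[key] = f"target://{key}"   # mapping updated transparently
        source.delete(key)                 # reclaim the old copy last

src, dst, urls = DictBackend(), DictBackend(), {}
src.put("a.txt", b"hello")
migrate(["a.txt"], src, dst, urls)
```

Copying before repointing, and deleting last, keeps every key readable at every point in the migration — important when it runs against a live tenant.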
File promise handling — covered in more detail in the file uploads article — is tightly integrated with the storage layer. The flow from "user selected a file" to "file is safely stored in the right backend" goes through the uploader, through the promise layer, through the routing configuration, and into the backend, with resilience and retries at each stage. Users don't see any of this. They see a file that uploaded successfully and a record that references it.
For tenants whose data-storage needs are genuinely non-trivial — regulated industries, international operations, high-volume attachments, specific cost constraints — the cloud-storage layer is often the feature that makes the platform viable where a single-backend alternative wouldn't be. For tenants whose storage needs are simple, it's invisible: they pick a default, files end up somewhere reasonable, and they never have to think about it again. Both outcomes are intentional.