Real-time delivery — updates arrive the moment they happen

Published on October 1, 2023

Nothing says "modern" quite like seeing the data change the instant someone else saves it. Nothing says "brittle" quite like aggressive polling, stale caches, and UI that requires a refresh to show what happened five seconds ago. In the years since we built the platform's real-time delivery layer, the expectation has shifted decisively: users now notice when a list fails to update after a colleague's change, when a notification arrives late, when a kanban board shows stale state. A modern business platform either ships real-time as part of its foundation or it pays for the absence every day in confused users and polling-generated server load. We shipped it as part of the foundation, and every subsequent feature — lists, kanbans, notifications, comments, collaborative presence — has benefited from the infrastructure underneath.

Push-based updates are the core mechanism. The server maintains a live connection to each connected client and pushes relevant events as they happen: a record was updated, a notification was created, a comment was posted, a status was changed. The client receives these events immediately and updates the visible UI without user intervention. No polling interval defines how fresh the data can be; the data is as fresh as the network round-trip. For the features where currency genuinely matters — active collaboration, operational dashboards, notification inboxes — this immediacy is what separates a modern feel from a clunky one.
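The client side of this can be sketched as a small event dispatcher: the server's push channel hands every incoming event to it, and it routes each one to whatever UI handlers care about that event type. This is an illustrative sketch, not the platform's actual API; the names `DeliveryEvent` and `PushClient` are assumptions, and the transport (WebSocket, SSE, or otherwise) is left abstract.

```typescript
// Minimal sketch of push-based delivery on the client. The transport is
// abstract: wire receive() to something like socket.onmessage in practice.
// All type and class names here are illustrative, not the platform's API.

type DeliveryEvent = {
  type: "record.updated" | "notification.created" | "comment.posted";
  payload: Record<string, unknown>;
};

type EventHandler = (e: DeliveryEvent) => void;

class PushClient {
  private handlers = new Map<DeliveryEvent["type"], EventHandler[]>();

  // Register a UI handler for one event type.
  on(type: DeliveryEvent["type"], handler: EventHandler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  // Called for every frame the server pushes; dispatches immediately,
  // so freshness is bounded only by the network round-trip.
  receive(event: DeliveryEvent): void {
    for (const h of this.handlers.get(event.type) ?? []) h(event);
  }
}
```

The point of the shape is that no polling loop appears anywhere: the UI reacts when `receive` fires, and not before.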

Scoped subscriptions keep the push channel efficient at scale. A user viewing a specific record subscribes to events relevant to that record; a user watching a kanban board subscribes to events relevant to the board's filter; a user viewing the notification inbox subscribes to notification events for their account. Clients only receive the events that apply to what they're currently viewing, which means the push channel doesn't flood anyone's browser with irrelevant traffic and the server doesn't waste bandwidth on deliveries nobody will use. The same push infrastructure can serve thousands of users concurrently because each user's subscription set is narrow.
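On the server side, scoped fan-out amounts to a registry mapping scope keys to the clients currently subscribed to them. The sketch below assumes hypothetical scope keys like `"record:42"` or `"board:ops"`; the real platform's identifiers and wire format are not specified here.

```typescript
// Sketch of server-side scoped fan-out. Scope keys such as "record:42" or
// "inbox:user-7" are assumptions for illustration.

type ClientId = string;

class SubscriptionRegistry {
  private byScope = new Map<string, Set<ClientId>>();

  // A client declares interest in a scope when its view changes.
  subscribe(client: ClientId, scope: string): void {
    const set = this.byScope.get(scope) ?? new Set<ClientId>();
    set.add(client);
    this.byScope.set(scope, set);
  }

  unsubscribe(client: ClientId, scope: string): void {
    this.byScope.get(scope)?.delete(client);
  }

  // Only clients whose current view overlaps the event's scopes receive it;
  // everyone else's browser never sees the traffic.
  recipients(eventScopes: string[]): Set<ClientId> {
    const out = new Set<ClientId>();
    for (const s of eventScopes) {
      for (const c of this.byScope.get(s) ?? []) out.add(c);
    }
    return out;
  }
}
```

Because each user's subscription set is narrow, the per-event work is proportional to the audience that actually cares, not to the total connection count.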

Automatic reconnect handles the reality of real-world networks. Connections drop — because Wi-Fi stuttered, because a tunnel closed and reopened, because a laptop went to sleep and woke up, because the user's browser tab was briefly backgrounded. The client detects the disconnection, attempts to reconnect on a sensible backoff schedule, and — crucially — catches up on any events that occurred during the outage window once the connection is re-established. Users don't have to refresh; the UI stitches itself back together automatically, and the data they're looking at is accurate again within moments.
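A common shape for that reconnect policy is exponential backoff with jitter, paired with a catch-up request keyed by the last event the client saw before the drop. The sketch below is a plausible implementation under those assumptions; the parameter values and the `since` field are illustrative, not the platform's actual tuning or wire format.

```typescript
// Sketch of a reconnect policy: exponential backoff with full jitter, plus
// a catch-up request keyed by the last delivered event id. Base delay, cap,
// and field names are assumptions for illustration.

function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  // Exponential growth, capped so a long outage doesn't grow unbounded.
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  // Full jitter spreads reconnects out, avoiding a thundering herd when a
  // shared outage (e.g. an office Wi-Fi blip) ends for everyone at once.
  return Math.floor(Math.random() * ceiling);
}

// On reconnect, ask the server for everything since the last delivered
// event, then resume streaming; the gap is what the UI "stitches" back.
function catchUpRequest(lastEventId: string | null): { since: string | null } {
  return { since: lastEventId };
}
```

The jitter matters more than it looks: without it, every client that lost the same network reconnects on the same schedule, and the recovery itself becomes a load spike.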

Backpressure and batching keep bursts of activity from flooding clients. A bulk update that changes a hundred records in five seconds would, on a naive implementation, send a hundred separate events to every connected client viewing the affected view — each event triggering a UI update, each update costing CPU, all of it accumulating into a storm of redraws that brings the browser to its knees. The delivery layer coalesces bursts into batched events, delivering the hundred changes as a handful of consolidated updates that each redraw the UI once. Users see a smooth update even under unusual load; the browser stays responsive.
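The coalescing step can be sketched as a pure function: within a batching window, multiple updates to the same record collapse to the newest one, so a hundred-record bulk write produces a handful of redraws instead of a hundred. The window length and event shape here are assumptions; only the collapse-to-latest logic is shown.

```typescript
// Sketch of burst coalescing: events buffered within a short batching
// window are merged, and repeated updates to the same record collapse to
// the latest version. The RecordEvent shape is an assumption.

type RecordEvent = { recordId: string; version: number };

function coalesce(burst: RecordEvent[]): RecordEvent[] {
  // Keep only the newest event per record; later versions win.
  const latest = new Map<string, RecordEvent>();
  for (const e of burst) {
    const prev = latest.get(e.recordId);
    if (!prev || e.version > prev.version) latest.set(e.recordId, e);
  }
  return Array.from(latest.values());
}
```

A client applying the coalesced batch redraws once per batch rather than once per raw event, which is exactly the difference between a smooth update and a frozen tab during a bulk operation.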

Consistency with cached state matters because the client has cached data from earlier interactions, and pushed events have to reconcile with that cache coherently. The delivery layer includes enough metadata — version numbers, causality information — that the client can detect when an incoming event is ordered after a local write the server hasn't yet acknowledged, when an event is a duplicate of one already processed, and when an event conflicts with local state and needs to be resolved. The end result is a client whose visible data is reliably consistent, not a client that shows transient bizarre states because two information sources disagreed.
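The simplest form of that reconciliation uses the version numbers alone: an incoming event is applied only if it is newer than the cached row, an equal version is a duplicate, and an older version is stale. This sketch shows that minimal policy under assumed field names; the platform's full causality handling is richer than a single version counter.

```typescript
// Sketch of cache reconciliation by version number. Field names and the
// single-counter model are simplifying assumptions for illustration.

type CachedRow = { id: string; version: number; data: unknown };

type Verdict = "apply" | "duplicate" | "stale";

function reconcile(cache: Map<string, CachedRow>, incoming: CachedRow): Verdict {
  const current = cache.get(incoming.id);
  if (current && incoming.version === current.version) return "duplicate";
  if (current && incoming.version < current.version) return "stale";
  cache.set(incoming.id, incoming); // newer, or first sighting: apply
  return "apply";
}
```

Duplicates arise naturally from reconnect catch-up (an event may arrive both in the gap replay and on the live stream), so detecting them cheaply is not an edge case but routine.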

Live notification delivery means the notification inbox updates the moment a new notification is created. A user working in one tab sees a badge appear on their inbox tab the instant the notification arrives; clicking through takes them to the latest item without a refresh. For notifications that matter operationally — a ticket assignment, a comment mention, an approval request — the live delivery is what keeps the notification system from being a delayed-email substitute.

View updates apply the same liveness to data views. A table view reflects updates to rows as they happen: a colleague changes a status, the cell changes. A kanban view moves a card between columns when someone transitions the underlying record. A calendar view updates events as they're scheduled or rescheduled. For the views that multiple people share — operational dashboards, team task boards, shared schedules — seeing the current state without refreshing is essential.

Collaborative presence is the social layer that the delivery infrastructure makes possible. A user viewing a record sees avatars of colleagues who are currently viewing the same record. Two users editing the same record can see that they're both there and coordinate accordingly rather than accidentally overwriting each other's work. For teams that collaborate in real time, presence is what makes the platform feel like a shared workspace rather than a set of parallel individual sessions.
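Presence rides on the same delivery stream: join and leave events maintain a per-record set of current viewers, and the avatar row is just a rendering of that set. The sketch below assumes hypothetical event semantics; heartbeats and timeout-based eviction for crashed clients are omitted.

```typescript
// Sketch of a presence roster fed by join/leave events on the delivery
// stream. Event names are assumptions; stale-session eviction is omitted.

class PresenceRoster {
  private viewers = new Map<string, Set<string>>(); // recordId -> userIds

  join(recordId: string, userId: string): void {
    const set = this.viewers.get(recordId) ?? new Set<string>();
    set.add(userId);
    this.viewers.set(recordId, set);
  }

  leave(recordId: string, userId: string): void {
    this.viewers.get(recordId)?.delete(userId);
  }

  // The users whose avatars should appear on this record's view right now.
  viewing(recordId: string): string[] {
    return Array.from(this.viewers.get(recordId) ?? []);
  }
}
```

In practice a roster like this also needs a liveness signal (a heartbeat or the transport's own disconnect notification) so a crashed tab doesn't leave a ghost avatar behind.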

Mentions arrive instantly. When a comment tags a user, that user's notification appears immediately, not on the next scheduled poll. For workflows that depend on rapid back-and-forth — "hey, can you review this before the meeting in twenty minutes?" — the difference between instant and "up to sixty seconds" is substantial. The comment, the notification, the badge on the inbox all update in concert, and the mentioned user sees the interaction as quickly as a colleague tapping them on the shoulder.

Graceful degradation covers the unusual cases where the live channel isn't available — a corporate firewall that blocks long-lived connections, a particularly hostile network intermediary, a client configuration that prevents the push channel from working. In those cases, the UI falls back to periodic refresh: slower updates, but the application still works. Users on healthy networks get the live experience; users in degraded environments get a slower but functional experience; nobody gets a broken one. That fallback is what makes the platform usable in the full range of environments its customers actually deploy it in.
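The fallback decision itself is small: probe whether the push channel works, and if it doesn't, run the same views off a periodic refresh. The probe mechanism and the 30-second interval below are assumptions for illustration, not the platform's actual values.

```typescript
// Sketch of the transport fallback. How the push channel is probed, and the
// polling interval, are assumptions; only the decision shape is shown.

type Transport =
  | { kind: "push" }
  | { kind: "poll"; intervalMs: number };

function chooseTransport(pushChannelAvailable: boolean): Transport {
  // Healthy networks get live delivery; hostile ones get a slower but
  // functional periodic refresh; nobody gets a broken experience.
  return pushChannelAvailable
    ? { kind: "push" }
    : { kind: "poll", intervalMs: 30_000 };
}
```

Because both transports feed the same event-handling path on the client, the rest of the UI code doesn't branch on which one is active; only freshness differs.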

For implementers, custom canvas pages can tap into the delivery stream to make their own custom surfaces live as naturally as the built-in views. A canvas-built dashboard that shows aggregate metrics can subscribe to the types that feed it and update the metrics as the underlying data changes. A custom view of a multi-record workflow can update as workflow steps are completed. The infrastructure is available to custom pages, which means implementer-built interfaces get the same modern feel as the platform's native ones without bespoke real-time plumbing.

For users who want their tools to feel as fast and fluid as the collaborative applications they use in the rest of their digital lives, for teams whose work depends on knowing the current state without asking, and for implementers who want their custom surfaces to benefit from the same liveness the platform provides everywhere else, real-time delivery is the quiet infrastructure that makes it happen. For the adjacent topics, the views article covers how different viewers use the live updates, the notifications article covers the delivery of the notification stream, the mentions and comments article covers the collaborative communication that depends on live delivery, and the canvas page builder article covers how custom pages subscribe to the same stream. Updates arrive the moment they happen — that's the promise of modern software, and the delivery layer is how the platform keeps it.