An API worth integrating with is also an API worth attacking, misusing, or accidentally hammering. A runaway script that polls every two seconds instead of every two minutes. A brute-force attempt against an administrative login. A newly launched partner integration whose author forgot to add a back-off. An honest mistake that, without protection, turns into a real incident. The point of protection isn't to say no — it's to say "not this fast" in a way well-behaved clients can cope with automatically, and to flag trouble before it becomes an outage.
The core mechanism is per-tenant rate limiting. Each tenant has its own quotas — not a shared global pool — which means a busy tenant can't starve quieter ones, and a misbehaving integration on one tenant doesn't affect API availability for others. Limits are configured per tenant and can be tuned for the tenant's real usage patterns rather than a one-size-fits-all default. Tenants with heavy, well-understood integration traffic get higher ceilings; tenants with minimal integration usage get tighter defaults that catch surprises early.
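The per-tenant isolation described above can be sketched as a token bucket per tenant, with optional per-tenant overrides of the default rate and burst ceiling. This is a minimal illustration, not the platform's implementation; all names and parameter values here are assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """One tenant's quota: `rate` tokens/sec refill, `capacity` burst ceiling."""
    rate: float
    capacity: float
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class TenantLimiter:
    """Independent buckets per tenant: one busy tenant can't drain another's quota."""
    def __init__(self, default_rate: float = 5.0, default_capacity: float = 10.0):
        self.defaults = (default_rate, default_capacity)
        self.overrides: dict[str, tuple[float, float]] = {}  # per-tenant tuning
        self.buckets: dict[str, TokenBucket] = {}

    def allow(self, tenant: str) -> bool:
        if tenant not in self.buckets:
            rate, cap = self.overrides.get(tenant, self.defaults)
            # Start the bucket full so a tenant gets its burst allowance immediately.
            self.buckets[tenant] = TokenBucket(rate, cap, tokens=cap)
        return self.buckets[tenant].allow()
```

Because each tenant gets its own bucket, exhausting one tenant's quota has no effect on any other tenant's — which is the whole point of per-tenant rather than global limits.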
Retry-after headers are the industry-standard signal that lets integrations back off gracefully. When a client exceeds a limit, the platform returns an HTTP 429 with a Retry-After header specifying how long to wait. Well-behaved integrations — and almost all modern HTTP clients — respect this automatically. No custom logic required from the integrator; a standard client library handles the right back-off and retry without intervention. That's important: protection is only as useful as integrators' ability to live with it, and using the standard mechanism means they don't have to learn ours.
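From the client side, honoring the 429/Retry-After contract can be as small as the loop below. `send` stands in for any HTTP client call and is a placeholder of this sketch, not part of the platform's API; the sketch also assumes the seconds form of Retry-After (the header may alternatively carry an HTTP date).

```python
import time

def call_with_backoff(send, max_retries: int = 3, sleep=time.sleep):
    """Retry on HTTP 429, waiting the server-specified Retry-After interval.

    `send()` returns (status, headers, body); `sleep` is injectable for testing.
    """
    for attempt in range(max_retries + 1):
        status, headers, body = send()
        if status != 429 or attempt == max_retries:
            return status, body
        # Wait exactly as long as the server asked, defaulting to 1 second.
        sleep(float(headers.get("Retry-After", 1)))
```

Most mature HTTP client libraries ship equivalent behavior out of the box, which is why returning the standard header is enough for well-behaved integrations.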
Per-endpoint limits recognize that not all API calls cost the same. A lightweight read of a single record deserves a different ceiling than a bulk create, a complex nested-filter query, or a file upload. Different endpoints carry different limits, so the overall quota is a weighted picture of actual resource use rather than a flat counter that punishes cheap requests alongside expensive ones.
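One way to picture weighted per-endpoint limits is a single budget that different endpoints draw down at different rates. The cost table and numbers below are purely illustrative assumptions, not the platform's actual weights.

```python
# Hypothetical cost table: cheap reads cost 1 unit, expensive calls cost more.
ENDPOINT_COST = {
    "GET /records/{id}": 1,
    "GET /records?filter": 5,
    "POST /files": 10,
    "POST /records/bulk": 25,
}

class WeightedQuota:
    """One counter per window; each endpoint charges its own weight against it."""
    def __init__(self, budget: int):
        self.remaining = budget

    def charge(self, endpoint: str) -> bool:
        cost = ENDPOINT_COST.get(endpoint, 1)  # unknown endpoints count as cheap
        if cost > self.remaining:
            return False  # over quota -> the caller replies 429 upstream
        self.remaining -= cost
        return True
```

Under this scheme a tenant can make many cheap reads or a few expensive bulk operations in the same window — the quota reflects resource use, not raw request count.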
Login-attempt throttling sits at the other end of the protection spectrum. Repeated failed logins against the same account or from the same source get progressively slower, which is what turns a brute-force attempt into a multi-year proposition rather than a fifteen-minute one. The mechanism applies to the web UI, to API-key authentication, and to any other entry point that challenges the user for credentials — a consistent policy across every authentication surface.
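A common shape for progressive login throttling is an exponentially growing delay per (account, source) pair, reset on success. The base delay, cap, and keying below are assumptions for illustration; exponential growth is what makes the "multi-year proposition" claim arithmetic rather than rhetoric.

```python
class LoginThrottle:
    """Each failed attempt doubles the required wait; success clears the count."""
    def __init__(self, base_delay: float = 1.0, cap: float = 3600.0):
        self.base, self.cap = base_delay, cap
        self.failures: dict[tuple[str, str], int] = {}

    def required_delay(self, account: str, source: str) -> float:
        n = self.failures.get((account, source), 0)
        # 0 failures -> no wait; n failures -> base * 2^(n-1), capped.
        return 0.0 if n == 0 else min(self.cap, self.base * 2 ** (n - 1))

    def record(self, account: str, source: str, success: bool) -> None:
        key = (account, source)
        if success:
            self.failures.pop(key, None)
        else:
            self.failures[key] = self.failures.get(key, 0) + 1
```

Applying the same policy object at every credential-challenging entry point — web UI, API-key auth, anything else — is what gives the consistent surface the paragraph above describes.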
Challenge logic handles the cases where a rate limit isn't the right tool but you still want a friction step. Sensitive operations — changing a payment method, approving a large transfer, granting administrative permissions — can require a challenge (a confirmation, a second factor) before they complete. That's configured at the action level, so operators choose which operations warrant the extra step without adding friction to normal use. It's the quiet feature that makes compromised-session scenarios less catastrophic.
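Action-level challenge configuration can be sketched as a set of flagged operations checked before execution. The action names and session shape are hypothetical; the point is that the policy lives in configuration, not in each operation's code.

```python
# Hypothetical operator-configured set of operations that require a challenge.
CHALLENGE_REQUIRED = {"payment_method.update", "transfer.approve_large", "admin.grant"}

def execute(action: str, session: dict, operation):
    """Run `operation` only if the action is unflagged or the session already
    passed a challenge (confirmation or second factor)."""
    if action in CHALLENGE_REQUIRED and not session.get("challenge_passed"):
        return ("challenge_required", None)  # caller must complete the challenge first
    return ("ok", operation())
```

Ordinary actions flow through untouched, so the friction lands only where operators decided it belongs.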
API-key authentication is the primary way external integrations authenticate. Keys are issued per integration and can be revoked independently, so rotating the key for one partner doesn't break the others. A dedicated validity-check endpoint lets a client verify its key without performing a real operation — useful for integration health checks and for partner portals that want to show a clear "your connection is healthy" indicator. The REST API article covers the authentication side in more detail; the relevant point here is that protection and authentication are layered, not intertwined, so each can be adjusted without affecting the other.
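A client-side health check against a validity endpoint might look like the sketch below. The endpoint path and the bearer-token header are assumptions, not the platform's documented contract; `opener` is injectable so the check can be exercised without a network.

```python
import urllib.request
import urllib.error

def key_is_valid(base_url: str, api_key: str, opener=urllib.request.urlopen) -> bool:
    """Probe a hypothetical validity-check endpoint; no real operation runs.

    A 200 means the key is live; an HTTP error (401/403) means revoked or unknown.
    """
    req = urllib.request.Request(
        f"{base_url}/v1/auth/validate",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with opener(req, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A partner portal can poll this periodically to drive its "your connection is healthy" indicator without ever touching real data.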
Approaching-limit notifications are what keep tenant admins ahead of problems rather than behind them. When a tenant's usage is climbing toward its limit, the platform emails the administrators with enough notice to act — to review the integration that's driving the traffic, to raise the ceiling if it's a legitimate workload, or to identify a runaway process. Most tenants never hit their limits in practice; the ones that do usually do so because something changed — a new partner integration, an accidental infinite loop, a spike in customer traffic — and the notification catches that change before it becomes a 429 storm.
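The trigger for such notifications reduces to threshold crossings on the usage counter. A minimal sketch, with illustrative thresholds of 80% and 95% (the platform's actual thresholds are not specified here):

```python
def usage_alerts(used: int, limit: int,
                 thresholds=(0.8, 0.95), notified=frozenset()):
    """Return thresholds newly crossed this check; the caller emails admins
    once per crossed threshold and records it in `notified`."""
    crossed = {t for t in thresholds if used >= t * limit}
    return crossed - set(notified)
```

Tracking which thresholds have already fired keeps a climbing counter from producing a flood of duplicate warnings.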
Rate-limit clearing gives administrators a recovery tool when a limit has been hit for a legitimate reason. A sudden, unexpected spike — a product launch, a migration, a bulk import — may legitimately exceed the tenant's normal allowance. An admin can clear the current counter to restore immediate capacity, and separately raise the limit if the new usage represents a lasting change. Clearing the counter is an explicit operation with a clear audit trail, not a quiet override — which keeps the mechanism honest.
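The "explicit operation with an audit trail" property can be made concrete as a reset that always records who acted, why, and what was cleared. The record shape below is an assumption for illustration.

```python
import time

def clear_rate_limit(counters: dict, tenant: str, actor: str,
                     reason: str, audit: list) -> int:
    """Reset a tenant's usage counter and append an audit record.

    Returns the value that was cleared, so callers can surface it to the admin.
    """
    previous = counters.get(tenant, 0)
    counters[tenant] = 0
    audit.append({
        "at": time.time(),
        "actor": actor,
        "tenant": tenant,
        "reason": reason,
        "cleared_from": previous,
    })
    return previous
```

Making `actor` and `reason` required arguments is what keeps this a deliberate, attributable action rather than a quiet override.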
CORS handling is the practical detail that matters for browser-based integrations. Front-end code running on a customer's own domain can call the API directly when configured to do so, with a controlled origin whitelist on the tenant side. For simple public-facing integrations — a widget on a marketing site, a small dashboard, a partner-portal front end — this is what makes direct API use possible without a backend proxy.
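On the server side, a tenant-scoped origin whitelist boils down to echoing the origin back only when it's on the tenant's list. The tenant names, origins, and allowed methods below are hypothetical.

```python
# Hypothetical per-tenant origin whitelist.
TENANT_ORIGINS = {
    "acme": {"https://shop.acme.example", "https://www.acme.example"},
}

def cors_headers(tenant: str, origin: str) -> dict:
    """Build CORS response headers: echo the origin only if whitelisted."""
    if origin not in TENANT_ORIGINS.get(tenant, set()):
        return {}  # no CORS headers -> the browser blocks cross-origin access
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
        "Access-Control-Allow-Headers": "Authorization, Content-Type",
        "Vary": "Origin",  # caches must not reuse one origin's response for another
    }
```

Returning no CORS headers at all for an unlisted origin — rather than an error body — lets the browser's own same-origin policy do the enforcement.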
Protection is one of those features that's most valuable when it's invisible. Well-tuned limits that integrators never hit. Throttling that quietly slows brute-force attempts to nothing. Notifications that catch unusual usage before it becomes a problem. The goal is an API that stays available and responsive for every legitimate caller, without either losing its teeth for misbehavior or being painful for the integrations doing things right. That balance is the whole design, and it's what makes the API genuinely safe to open up — not just safe to document.