Rate Limit
Also known as: API throttle, Quota, Request limit
Quick definition
A rate limit is the maximum number of API requests a client can make in a given time period — typically expressed as requests per minute, per hour, or per day. Social media APIs enforce rate limits to prevent abuse, ensure fair usage, and protect platform infrastructure. Exceeding a rate limit typically returns HTTP 429 Too Many Requests, often with a Retry-After header indicating when to retry.
What is a rate limit?
A rate limit is a cap on how many API requests a client can make to a service in a given time window. Every modern API enforces some form of rate limiting — typically expressed as 'X requests per Y seconds' (60/minute, 1000/hour, etc.). The purpose is twofold: prevent abuse (a misconfigured script or attacker can't hammer the service indefinitely) and ensure fair usage (one tenant can't starve others by consuming all capacity).
When a client exceeds the rate limit, the API returns HTTP 429 Too Many Requests. The response usually includes a Retry-After header (delta-seconds or an HTTP date) telling the client when it is safe to retry. Well-designed clients respect this header; naive clients retry immediately and aggressively, which extends the lockout and can eventually get the client banned for abusive traffic patterns.
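The Retry-After header described above comes in two forms, delta-seconds or an HTTP date, and a robust client handles both. A minimal Python sketch (the function name `retry_after_seconds` is illustrative, not from any particular SDK):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

def retry_after_seconds(header_value: str, now: Optional[datetime] = None) -> float:
    """Convert a Retry-After header value into seconds to wait.

    Accepts both forms defined by HTTP: delta-seconds ("120") and
    an HTTP-date ("Wed, 21 Oct 2026 07:28:00 GMT").
    """
    now = now or datetime.now(timezone.utc)
    try:
        # Delta-seconds form, e.g. "120"
        return max(0.0, float(header_value))
    except ValueError:
        # HTTP-date form; parse and compute the remaining wait
        when = parsedate_to_datetime(header_value)
        return max(0.0, (when - now).total_seconds())
```

A client would sleep for `retry_after_seconds(response.headers["Retry-After"])` before retrying; clamping to zero guards against dates already in the past.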
Common rate limit patterns
Three common implementations. (1) Token bucket — the client gets a 'bucket' of N tokens; each request consumes one; the bucket refills at rate R per second. Allows burst usage up to bucket size, sustained usage at refill rate. (2) Fixed window — count of requests resets at the start of each minute or hour. Simple but allows boundary-spike abuse. (3) Sliding window — rolling count over the trailing N seconds. More accurate, slightly more expensive to compute.
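The token bucket from pattern (1) is simple enough to sketch in a few lines. This is an illustrative single-threaded version (a production limiter would add locking and likely live in Redis for multi-process services); the `TokenBucket` name and the injectable clock are assumptions for testability, not any platform's API:

```python
import time
from typing import Optional

class TokenBucket:
    """Token bucket rate limiter: holds up to `capacity` tokens,
    refilled continuously at `refill_rate` tokens per second."""

    def __init__(self, capacity: float, refill_rate: float,
                 now: Optional[float] = None):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity              # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def allow(self, now: Optional[float] = None) -> bool:
        """Consume one token if available; return False if rate-limited."""
        now = time.monotonic() if now is None else now
        # Refill based on elapsed time, never exceeding capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Note how this reproduces the behavior described above: a burst can drain the bucket up to `capacity` instantly, after which requests are admitted only at the sustained `refill_rate`.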
Most social media APIs publish their limits in documentation (e.g., 'X API: 300 requests/15-min for posting'). Some platforms have multi-tier limits — global per-app (across all users), per-user (one user can't dominate), and per-endpoint (different operations have different costs).
How to handle rate limits gracefully
Five practices for production code. (1) Always honor Retry-After — sleep for the indicated time, then retry. (2) Use exponential backoff for 429s without Retry-After — start at 1s, double on each retry, cap at 60s. (3) Spread requests evenly — a burst packed into a single second is far more likely to hit limits than the same volume spread over a minute. (4) Cache where possible — repeated identical requests should hit your cache, not the API. (5) Monitor your 429 rate — a climbing 429 rate is an early warning that you are approaching capacity, not just hitting transient spikes.
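Practices (1) and (2) combine naturally into one retry loop: prefer the server's Retry-After, fall back to capped exponential backoff. A hedged sketch — the `Response` dataclass and `request_with_retry` helper are stand-ins for whatever HTTP client you actually use:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Response:
    """Minimal stand-in for an HTTP client response."""
    status: int
    headers: dict = field(default_factory=dict)

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff: base * 2^attempt seconds, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def request_with_retry(send: Callable[[], Response], max_attempts: int = 5,
                       sleep: Callable[[float], None] = time.sleep) -> Response:
    """Retry on 429, honoring Retry-After when present."""
    for attempt in range(max_attempts):
        resp = send()
        if resp.status != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        # Prefer the server's hint; otherwise back off exponentially
        delay = float(retry_after) if retry_after else backoff_delay(attempt)
        sleep(delay)
    raise RuntimeError("still rate-limited after all retries")
```

In practice you would also add random jitter to the backoff delay so that many clients rate-limited at the same moment do not all retry in lockstep.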
For scheduled posting specifically: schedulers usually batch requests internally and stay well under platform limits. If you are building your own integration, the most common rate-limit footgun is bulk-posting in a tight loop; leaving 1-2 seconds between requests keeps you safely under most write limits.
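Spacing out a bulk loop takes only a few lines. A minimal pacing generator (the name `paced` and the 1.5-second default are illustrative, matching the 1-2 second guidance above):

```python
import time
from typing import Callable, Iterable, Iterator, TypeVar

T = TypeVar("T")

def paced(items: Iterable[T], interval_s: float = 1.5,
          sleep: Callable[[float], None] = time.sleep) -> Iterator[T]:
    """Yield items with a fixed gap between them to avoid burst 429s."""
    for i, item in enumerate(items):
        if i:  # no delay before the first item
            sleep(interval_s)
        yield item
```

Usage: `for post in paced(drafts): publish(post)` — 100 posts go out over about two and a half minutes instead of one burst.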
Approximate rate limits for major social platforms (2026, subject to change)
| Platform | Posting / write rate limit | Notes |
|---|---|---|
| X / Twitter (Free) | 1,500 tweets/24h, ~17 reads/min | Heavily restricted post-2023 monetization |
| X / Twitter (Basic $100/mo) | 10,000 tweets/24h, much higher reads | Realistic baseline for production tools |
| Instagram Graph API | 200 posts/24h per user | Strict; bulk users hit ceiling on heavy days |
| TikTok Content Posting API | Limited; varies by app tier | Approval required; published limits not always documented |
| YouTube Data API | 10,000 quota units/day (default) | Each upload = ~1,600 units; ~6 uploads/day on default |
| LinkedIn Marketing API | 100 reads/sec, lower for writes | Per-user and per-app tiers |
| Facebook Graph API | Per-user-per-token tiers | Complex tiered system; consult their docs |
| Pinterest API | 1,000 requests/hour per user | Modest but reasonable for posting workflows |
| Bluesky AT Protocol | 5,000 writes/hour per session | Generous for now while platform is small |
Common pitfalls
- × Ignoring Retry-After — hammering after a 429 extends the lockout and risks a permanent ban
- × Linear retry without backoff — exponential backoff is the industry standard for a reason
- × Polling APIs at high frequency — the primary cause of accidental rate-limit exhaustion in social schedulers
- × Sharing API keys across multiple environments — the combined load hits limits faster than each environment alone would
Tips
- ✓ Always implement Retry-After handling plus exponential backoff — the single biggest reliability win
- ✓ Use webhooks instead of polling — webhooks consume no rate limit; polling burns it constantly
- ✓ Spread bulk operations evenly — 100 posts over 5 minutes works; 100 posts in 1 second hits limits
- ✓ Monitor your 429 rate weekly — climbing 429s are a capacity warning; act before they become outages
Frequently asked questions
What's the HTTP 429 status code?
Too Many Requests — the standard HTTP response when a server enforces a rate limit. Most APIs return 429 with a Retry-After header indicating when it's safe to retry. The well-designed pattern: parse Retry-After, sleep that long, retry.
How do I avoid rate limits when using a social scheduling API?
Most scheduling APIs (CodivUpload, Buffer, Postiz, Ayrshare) handle rate limits internally — they batch requests, respect platform limits, and queue work. You as the user almost never see 429s from the scheduler itself. The exception is when you're directly hitting the underlying platform API yourself.
Can I increase my rate limit?
Sometimes. Most platforms offer paid tiers (X Basic $100/mo, X Pro $5K/mo) with substantially higher limits. Some allow case-by-case quota expansion requests if your use case is legitimate and well-documented (YouTube Data API quota requests, for example).
Are rate limits per user or per app?
Often both. Most APIs have per-app global limits (across all users of your app), per-user limits (one user can't dominate), and sometimes per-endpoint limits (writes more constrained than reads). Hit any of the three and you get 429.
Rate-limit-respecting scheduling for all 11 platforms
CodivUpload's API handles rate limits internally — batching, queuing, and respecting platform-specific Retry-After. You don't need to implement rate-limit logic yourself.