API Rate Limits

Learn how we implement rate limits and how they affect you.

We use rate limits to ensure system stability and protect against DoS attacks. Rate limiting is applied on a requests-per-minute basis. This means we limit the number of requests you can make each minute. If you exceed the limit, you must wait until the end of the current minute before you can make another request.

The following scenario illustrates how rate limiting works.

Let us assume you make 100 API calls between 12:00:00 and 12:00:45. All subsequent API calls until 12:01:00 return a 429 error.
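This behavior can be modeled as a fixed-window counter keyed on the current minute. The sketch below is purely illustrative (it is not our server implementation); the limit of 100 matches the Live environment described in the next section.

import time

LIMIT_PER_MINUTE = 100  # illustrative; Live environment limit

# Maps the start of each minute window to the number of requests seen in it.
request_counts = {}

def allow_request(now=None):
    """Return True if a request made at `now` is within the per-minute limit."""
    now = time.time() if now is None else now
    window = int(now // 60)  # e.g. all of 12:00:00-12:00:59 share one window
    count = request_counts.get(window, 0)
    if count >= LIMIT_PER_MINUTE:
        return False  # caller should expect a 429 until the next minute starts
    request_counts[window] = count + 1
    return True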

Rate Limits in Different Environments

The table below lists the rate limits for each environment.

Environment | Write + Read API Calls (per minute)
Sandbox     | 25
Live        | 100

Rate Limit Error Message

When the rate limit is breached, subsequent API calls return HTTP status code 429 with the following error message.

{
  "error": {
    "type": "resource_limit_exhausted",
    "message": "Too many requests hit the API too quickly."
  }
}

Handle Rate Limits

When integrating with our APIs, build a retry mechanism to handle 429 errors. The retry mechanism should use an exponential backoff schedule to reduce request volume when necessary, and it should add some randomness (jitter) to the backoff intervals to avoid the thundering herd effect. A sketch of this approach follows.
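The sketch below shows one way to retry on 429 responses with exponential backoff and jitter. It assumes the Python requests library and a hypothetical endpoint URL; tune the base delay, cap, and maximum attempts to suit your integration.

import random
import time

import requests

def call_with_retries(url, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """POST to `url`, retrying on 429 with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        response = requests.post(url)
        if response.status_code != 429:
            return response  # success or a non-rate-limit error; handle upstream
        # Exponential backoff: 1s, 2s, 4s, ... capped at max_delay,
        # with random jitter so concurrent clients do not retry in lockstep.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    raise RuntimeError("Rate limit still exceeded after retries")

# Example usage (hypothetical endpoint):
# response = call_with_retries("https://api.example.com/v1/payments")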

A more sophisticated approach is to limit your own outgoing request rate on the client side, for example with a token bucket rate-limiting algorithm. Ready-made and mature token bucket implementations are available in most programming languages; a minimal sketch is shown below.
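The following is an illustrative, minimal token bucket in Python; in practice you would typically use an existing library rather than rolling your own. The rate of 100 calls per minute in the usage note matches the Live environment limit.

import time

class TokenBucket:
    """Client-side limiter that refills tokens at a steady rate."""

    def __init__(self, rate_per_minute, capacity=None):
        self.rate = rate_per_minute / 60.0        # tokens added per second
        self.capacity = capacity or rate_per_minute
        self.tokens = self.capacity
        self.last_refill = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)  # wait for the next token

# Example: stay under the Live limit of 100 calls per minute.
# bucket = TokenBucket(rate_per_minute=100)
# bucket.acquire()  # call before each API request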

Increase Rate Limits

Contact our integrations team to increase your rate limits.