API Rate Limit Decorator

Overview


APIs are a very common way to interact with web services. As the need to consume data grows, so does the number of API calls necessary to remain up to date with data sources. However, many API providers constrain developers from making too many API calls. This is known as rate limiting, and in the worst case your application can be banned from making further API calls if it abuses these limits.

This package introduces a function decorator that prevents a function from being called more often than the API provider allows. This should prevent API providers from banning your application, since it conforms to their rate limits.

Installation

PyPI

Add this line to your application's requirements.txt:

ratelimit

And then execute:

$ pip install -r requirements.txt

Or install it yourself:

$ pip install ratelimit

GitHub

Installing the latest version from GitHub:

$ git clone https://github.com/tomasbasham/ratelimit
$ cd ratelimit
$ python setup.py install

Usage

To use this package simply decorate any function that makes an API call:

from ratelimit import limits

import requests

FIFTEEN_MINUTES = 900

@limits(calls=15, period=FIFTEEN_MINUTES)
def call_api(url):
    response = requests.get(url)

    if response.status_code != 200:
        raise Exception('API response: {}'.format(response.status_code))
    return response

This function will not be able to make more than 15 API calls within a 15 minute period.

The arguments passed into the decorator describe the number of function invocations allowed over a specified time period (in seconds). If no time period is specified then it defaults to 15 minutes (the time window imposed by Twitter).
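To make these semantics concrete, here is a minimal from-scratch sketch of a fixed-window limiter with the same calls/period interface. This is an illustration only, not the library's actual implementation; the names simple_limits and RateLimitExceeded are hypothetical.

```python
import time
from functools import wraps

class RateLimitExceeded(Exception):
    """Raised when the call budget for the current window is spent."""

def simple_limits(calls, period):
    """Allow at most `calls` invocations per `period`-second window."""
    window_start = [time.monotonic()]
    count = [0]

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if now - window_start[0] > period:
                # Start a fresh window and reset the budget.
                window_start[0] = now
                count[0] = 0
            if count[0] >= calls:
                raise RateLimitExceeded('too many calls in this window')
            count[0] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@simple_limits(calls=2, period=60)
def ping():
    return 'pong'

ping()  # first call succeeds
ping()  # second call succeeds
try:
    ping()  # third call within the window raises
except RateLimitExceeded:
    pass
```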

If a decorated function is called more times than allowed within the specified time period then a ratelimit.RateLimitException is raised. This may be used to implement a retry strategy such as exponential backoff:

from ratelimit import limits, RateLimitException
from backoff import on_exception, expo

import requests

FIFTEEN_MINUTES = 900

@on_exception(expo, RateLimitException, max_tries=8)
@limits(calls=15, period=FIFTEEN_MINUTES)
def call_api(url):
    response = requests.get(url)

    if response.status_code != 200:
        raise Exception('API response: {}'.format(response.status_code))
    return response

Alternatively, to make the current thread sleep until the specified time period has elapsed and then retry the function, use the sleep_and_retry decorator. This ensures that every function invocation eventually succeeds, at the cost of halting the thread:

from ratelimit import limits, sleep_and_retry

import requests

FIFTEEN_MINUTES = 900

@sleep_and_retry
@limits(calls=15, period=FIFTEEN_MINUTES)
def call_api(url):
    response = requests.get(url)

    if response.status_code != 200:
        raise Exception('API response: {}'.format(response.status_code))
    return response

License

This project is licensed under the MIT License.

Comments
  • Running once in x time instead of up to y times in x time

    Just to clarify, let me show an example: when you set it up for 10 calls in 100 seconds, it lets you call it once every ten seconds, which is not exactly the same. See the following example:

    >>> from ratelimit import rate_limited
    >>> import datetime
    >>> import time
    >>> @rate_limited(10,100)
    ... def test(i=1):
    ...     ts = time.time()
    ...     st = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
    ...     print("{}::  {}".format(i, st))
    ... 
    >>> for i in range(12):
    ...     test()
    ... 
    1::  12:14:05
    1::  12:14:15
    1::  12:14:25
    1::  12:14:35
    1::  12:14:45
    1::  12:14:55
    1::  12:15:05
    1::  12:15:15
    1::  12:15:25
    1::  12:15:35
    1::  12:15:45
    1::  12:15:55
    >>>
    
    
    

    I was expecting test() function to be called 10 times and then be blocked, but instead, there is only one call in 10 seconds. Is there a way to get the behaviour I'm expecting?

    Thanks in advance
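    For what it's worth, the burst behaviour being asked for can be sketched with a sliding window of timestamps. This is an illustration only, not this library's implementation; burst_limited is a hypothetical name.

```python
import time
from collections import deque
from functools import wraps

def burst_limited(calls, period):
    """Allow up to `calls` invocations in any `period`-second sliding
    window; only sleep once the window is full."""
    stamps = deque()

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            while stamps and now - stamps[0] >= period:
                stamps.popleft()
            if len(stamps) >= calls:
                # Window is full: wait for the oldest call to expire.
                time.sleep(period - (now - stamps[0]))
                stamps.popleft()
            stamps.append(time.monotonic())
            return func(*args, **kwargs)
        return wrapper
    return decorator
```

    With @burst_limited(10, 100) the first 10 calls go through immediately, and only the 11th call has to wait.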

    opened by rola93 5
  • Return error instead of waiting

    This is a feature request.

    It would be great, if there was a way to return an error, if there are too many calls of the limited function, instead of simply waiting and still doing the requested thing later. Perhaps a third parameter in the decorator, which takes an error class?
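    As the README documents, the limits decorator raises ratelimit.RateLimitException once the limit is hit rather than waiting. The custom-error variant requested here could be sketched like this (hypothetical names, not part of the library):

```python
import time
from functools import wraps

def limits_or_raise(calls, period, error_cls=RuntimeError):
    """Sketch: raise `error_cls` instead of waiting when the limit is hit."""
    window_start = [time.monotonic()]
    count = [0]

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if now - window_start[0] > period:
                # New window: reset the call budget.
                window_start[0], count[0] = now, 0
            if count[0] >= calls:
                raise error_cls('rate limit exceeded')
            count[0] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@limits_or_raise(calls=1, period=60, error_cls=KeyError)
def fetch():
    return 'ok'
```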

    opened by ZelphirKaltstahl 4
  • Feature/asyncio compatible decorators

    Hi! This PR implements idea from #26 to make limits decorator compatible with async functions and asyncio coroutines. It uses simple check inspect.iscoroutinefunction to dispatch work to sync/async implementation.

    opened by evemorgen 3
  • BUG? sleep_and_retry only retries once

    If you're really hammering a rate limited call with multiple threads, the retry can also raise RateLimitException. I'm using this now, and might add a max_retries parameter:

    import time
    from functools import wraps

    from ratelimit import RateLimitException

    def sleep_and_retry_forever(func):
        '''
        Return a wrapped function that retries rate limit exceptions, sleeping the
        current thread until the rate limit resets. Continues to retry until the
        call makes it through.

        :param function func: The function to decorate.
        :return: Decorated function.
        :rtype: function
        '''
        @wraps(func)
        def wrapper(*args, **kargs):
            '''
            Call the rate limited function. If the function raises a rate limit
            exception, sleep for the remaining time period and retry the function
            until it succeeds.

            :param args: non-keyword variable length argument list to the decorated function.
            :param kargs: keyworded variable length argument list to the decorated function.
            '''
            while True:
                try:
                    return func(*args, **kargs)
                except RateLimitException as exception:
                    time.sleep(exception.period_remaining)
        return wrapper
    
    opened by aarcro 3
  • The rate limit `period` appears to be a `frequency` (inverted logic)

    Hi, I just scratched my head over this for a while then I made the following simple test:

    import datetime
    
    from ratelimit import rate_limited
    
    
    @rate_limited(2)
    def rate_limited_function():
        print("function called at: {}".format(datetime.datetime.now()))
    
    if __name__ == '__main__':
        while True:
            rate_limited_function()
    

    which produces the following output:

    function called at: 2017-02-02 12:51:00.969778
    function called at: 2017-02-02 12:51:01.470701
    function called at: 2017-02-02 12:51:01.971523
    function called at: 2017-02-02 12:51:02.472208
    function called at: 2017-02-02 12:51:02.972889
    function called at: 2017-02-02 12:51:03.473723
    

    i.e. it's rate limiting the function call to once every 0.5 seconds, not once every 2 seconds.

    Conversely, passing a period of 0.5 runs the function every 2 seconds (and curiously it doesn't make the first call for 2 seconds):

    function called at: 2017-02-02 12:58:56.494749
    function called at: 2017-02-02 12:58:58.496957
    function called at: 2017-02-02 12:59:00.499353
    function called at: 2017-02-02 12:59:02.501679
    

    The README says "The argument passed into the decorator imposes the time that must elapse before a method can be called again." and the argument is called period so I believe passing period=2 is supposed to mean "no more than every 2 seconds". But seemingly the logic is inverted.

    There's a very simple, thread-safe, timer-based approach you might like to consider at http://stackoverflow.com/questions/30918772/rate-limiting-python-decorator/30918773#30918773

    opened by fawkesley 3
  • Multiprocess aware rate limit - Redis or Django DB backend ?

    It would be good if ratelimit could store its state somewhere like Redis or a database; this would make it more useful under Django, where there can be more than one process running.

    opened by stuaxo 2
  • Shimon/ratelimit/add condition to rate limiter

    This PR includes the additions below:

    1. Adding a condition to RateLimitDecorator that will allow 'bypassing' the limit in exceptional cases.
    2. Unit tests for the conditional rate limiter.
    3. Add the ability to avoid raising an exception when the limit is reached while also avoiding calling the function.
    opened by shimon-lb 2
  • Reinstate backwards-compatible `rate_limited` function?

    It seems the decorator is now called limits and rate_limited has disappeared, breaking old code.

    Is it possible to reinstate the old decorator for backwards compatibility?

    Thanks!

    opened by fawkesley 2
  • Improve example in README

    Following our discussion on https://github.com/tomasbasham/ratelimit/issues/1 I thought it would be helpful to update the README with an example using both parameters.

    (I use this library all the time, and I keep having to go back to that issue to remember what the arguments mean!)

    opened by fawkesley 2
  • Concurrency issues

    I noticed the code here has been changed since my last commit.

    I do realize that my version will probably block 2 or more threads from accessing the API call at the same time, thus compromising my intention of multithreading. However, it seems that the new code might have issues on that subject too.

            with lock:
                elapsed = time.time() - last_called[0]
            left_to_wait = frequency - elapsed
            if left_to_wait > 0:
                time.sleep(left_to_wait)
            ret = func(*args, **kargs)
            last_called[0] = time.time()
            return ret
    

    The above code only locks the operation elapsed = time.time() - last_called[0], but not the operation last_called[0] = time.time(). Now, for example, what happens if the func call blocks for 20 seconds (maybe due to slow API communications), my frequency is 1 second, and I have 10 threads running at the same time?

    The first thread will acquire the lock, calculate the elapsed time, release the lock, and run func, which will block for 20 seconds. Note that last_called[0] will not be updated during those 20 seconds! Therefore, during the 20 seconds, all of my other 9 threads will be able to acquire and release the lock and call func. If more than 2 threads call func in one second, the effect of the rate limiter is compromised.

    Therefore, I propose the change.

    Note that I moved the last_called[0] within the lock structure. The advantage is that if func blocks for a bit, the threads won't block.
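    The proposed change could be sketched as follows (hypothetical names; single-interval version for clarity, not the PR's exact diff). The key point is that last_called[0] is updated before the lock is released, so each thread reserves its own slot, while func itself still runs outside the lock:

```python
import time
import threading
from functools import wraps

def rate_limited_fixed(min_interval):
    """Sketch of the fix: reserve the next call slot under the lock."""
    lock = threading.Lock()
    last_called = [0.0]

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                elapsed = time.monotonic() - last_called[0]
                left_to_wait = min_interval - elapsed
                if left_to_wait > 0:
                    time.sleep(left_to_wait)
                # Updated while the lock is still held.
                last_called[0] = time.monotonic()
            # The (possibly slow) call runs outside the lock.
            return func(*args, **kwargs)
        return wrapper
    return decorator
```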

    I tested the code with the following code:

    import time
    import threading

    from ratelimit import rate_limited

    @rate_limited(2, 1)
    def increment():
        '''
        Increment the counter at most twice every second.
        '''
        time.sleep(5)
        print(time.time())
    
    thread1 = threading.Thread(target = increment)
    thread2 = threading.Thread(target = increment)
    thread3 = threading.Thread(target = increment)
    
    thread1.start()
    thread2.start()
    thread3.start()
    

    The output of the old commit is:

    1502028965.591502028965.591502028965.59

    The output of my new commit is:

    1502029318.34
    1502029318.84
    1502029319.35
    

    Apparently, all three threads called the function in the same second under the old commit, which broke the limit, while the new commit enforces it.

    opened by ginward 2
  • ProcessPoolExecutor support

    Hi there,

    Thanks for this work, really helpful.

    However, I am running into an issue. I need to use ratelimit with concurrent.futures.ProcessPoolExecutor.

    Sadly, I am observing that the decorator is not aware of the function invocations going on in these parallel processes.

    Do you have plans to support such a use case? Or could you maybe point me to related work that supports concurrent processes?

    Thanks,

    Alex

    opened by alexcombessie 1
  • Does not work for me

    When calling the function, it throws the exception and dies. What am I doing wrong?

    
    @on_exception(expo, RateLimitException, max_tries=8)
    @limits(calls=15, period=300)
    def mint_nft_to_address(cust_address):
        global contract_address
        assert contract_address, "Unable to mint. Empty contract address given"
        assert cust_address, "Unable to mint. Empty customer address given"
        assert metadata_directory_ipfs_uri, "Unable to mint. Empty metadata directory url given"
        url = "https://api.nftport.xyz/v0/mints/customizable"
    
        payload = {
            "chain": "polygon",
            "contract_address": contract_address,
            "metadata_uri": metadata_directory_ipfs_uri + "/1",
            "mint_to_address": cust_address
        }
        headers = {
            "Content-Type": "application/json",
            "Authorization": API_KEY
        }
    
        response = requests.post(url, json=payload, headers=headers)
    
        if response.status_code != 200:
            raise Exception('API response: {}'.format(response.status_code))
        print(response.text)
        return response
    
    opened by mikklaos 0
  • Add a “Usage note”?

    Sometimes you need to limit API calls made from another package that you do not control. In this case, monkey patching the API may be the best option. But where? In the requests package, calls to .get, .head, and the other convenience methods of a Session object are funneled through Session.request(), so that is one possibility. The module-level functions also create a temporary Session object and send all invocations through it. As a result, the best way to limit all API calls is:

    from ratelimit import limits
    from requests import Session

    FIFTEEN_MINUTES = 900

    throttled = limits(calls=15, period=FIFTEEN_MINUTES)
    Session.request = throttled(Session.request)
    
    opened by samwyse 0
  • New repo maintainer(s) / alternative projects?

    Hi @tomasbasham, first of all thanks for this handy package. It solves a small but common enough problem such that this is currently the top Google result for "python rate limit" and similar searches.

    It appears that the repo has accumulated a number of unanswered issues and PRs over the last couple years. It's understandable if you don't have time to maintain this, but since there are others who are willing and able to make improvements to it, would you be willing to either:

    • Transfer the repo to a different owner, or
    • Add some users with 'Collaborator' level permissions so they can help maintain the project?

    Plan B

    Otherwise, would anyone else like to volunteer to do the following?

    • Start a fork (if you haven't done so already) that will be a recommended replacement for ratelimit
    • Be available to respond to issues and PRs within a reasonable amount of time
    • Add at least one or two Collaborators to help with this
    • Be willing to transfer the repo to a different owner if at some point in the future you will no longer have time to maintain it

    I would suggest @deckar01's fork, containing changes described in #31, as a good starting point. I would also like to see @evemorgen and @Jude188's changes from issue #26 / PR #39 included. I believe the changes from these two forks could be integrated with the use of aiosqlite to support both sliding log persistence and async calls.

    opened by JWCook 7
  • Support for requests-toolbelt

    I'm using the requests extension called requests-toolbelt, which allows for multi-threaded requests. I believe the algorithm feeds multiple URLs to a requests.Session object and threads them up in a pool.

    Can this be adapted to do rate limit/throttling for APIs that limit requests to something like 20 requests per second?

    https://toolbelt.readthedocs.io/en/latest/

    opened by leeprevost 1
Owner
Tomas Basham
Software Developer at @statuscake, hobbyist maker, baker and food lover. Trying to learn Rust in my spare time.