CacheControl

CacheControl is a port of the caching algorithms in httplib2 for use with the requests Session object.

It was written because httplib2's strong caching support is often undermined by its lack of thread safety; requests has the opposite problem, being thread safe but lacking caching.

Quickstart

import requests

from cachecontrol import CacheControl


sess = requests.Session()
cached_sess = CacheControl(sess)

response = cached_sess.get('http://google.com')

If the response includes any caching headers, the result is cached in a simple dictionary.
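
That in-memory dictionary is the default storage; for persistence across runs you can pass a file-backed cache instead. A minimal sketch, using the FileCache that appears later on this page:

import requests

from cachecontrol import CacheControl
from cachecontrol.caches import FileCache

# Store cached responses on disk in .web_cache instead of the default
# in-memory dictionary, so the cache survives interpreter restarts.
sess = CacheControl(requests.Session(), cache=FileCache('.web_cache'))
response = sess.get('http://google.com')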

For more info, check out the docs

Comments
  • Use msgpack for cache serialization

    This is a pull request meant to improve the efficiency of wheel download caching in pip (https://github.com/pypa/pip/issues/3515).

    Msgpack is fast, supports all major Python versions, and does not add overhead for the serialization of large binary values (as commonly handled by pip).

    Benchmark results:

    # Before
    $ python ./examples/benchmark.py
    Total time for 1000 requests: 0:00:00.670020
    
    # After
    $ python ./examples/benchmark.py
    Total time for 1000 requests: 0:00:00.574051
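
    The win comes from msgpack's native bin type: JSON-based serialization has to base64-encode binary bodies, while msgpack stores them as-is. A quick illustration (a sketch, not cachecontrol's serializer; assumes the msgpack package is installed):

    import base64
    import json

    import msgpack  # pip install msgpack

    body = b"\x00" * (1 << 20)  # a 1 MiB binary blob, like a cached wheel

    # JSON has no bytes type, so the body must be base64-encoded first,
    # inflating it by about a third and adding an encode/decode pass.
    as_json = json.dumps({"body": base64.b64encode(body).decode("ascii")})

    # msgpack's bin type stores the blob verbatim, plus a few header bytes.
    as_msgpack = msgpack.dumps({"body": body}, use_bin_type=True)

    print(len(as_json), len(as_msgpack))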
    
    opened by StephanErb 20
  • Implementing caching of chunked/streamed responses

    Resolves #105, #81. This takes a different approach from #82 in that it doesn't get rid of CallbackFileWrapper, so there is still the same delay actually caching the response until the calling application itself reads the data.

    On the other hand, this requires dynamically monkeypatching urllib3, so ¯\_(ツ)_/¯.
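
    For readers unfamiliar with it, the CallbackFileWrapper approach works roughly like this (a simplified sketch, not cachecontrol's actual class): reads are teed into a buffer, and a callback fires once the stream is exhausted.

    from io import BytesIO

    class CallbackWrapper:
        """Tee reads from a file-like object; fire a callback at EOF."""

        def __init__(self, fp, callback):
            self._fp = fp
            self._buf = BytesIO()
            self._callback = callback

        def read(self, amt=None):
            data = self._fp.read(amt)
            self._buf.write(data)
            if not data and self._callback is not None:
                # An empty read means the stream is exhausted: hand the
                # complete body to the callback (e.g. the cache) once.
                self._callback(self._buf.getvalue())
                self._callback = None
            return data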

    opened by rmcgibbo 19
  • Make msgpack optional?

    We use CacheControl in the Blender Cloud add-on for Blender. Due to the wide range of platforms Blender can run on, binary Python packages are very impractical to use. So, in order to bundle CacheControl with a Blender add-on, we need to restrict ourselves to pure-Python packages.

    Would it be an option to make msgpack an optional feature? If not, we're bound to limit ourselves to older versions of CacheControl.
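
    A common shape for this kind of optional dependency is an import-time fallback (an illustrative sketch, not a proposal for cachecontrol's exact API):

    import pickle

    try:
        import msgpack  # compiled extension; unavailable on some platforms
    except ImportError:
        msgpack = None

    def dumps(obj):
        # Prefer msgpack when it is importable; otherwise fall back to
        # the stdlib so pure-Python deployments keep working.
        if msgpack is not None:
            return msgpack.dumps(obj, use_bin_type=True)
        return pickle.dumps(obj)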

    opened by sybrenstuvel 17
  • Large responses cause increased memory usage.

    When downloading large files, memory usage is not constant when using CacheControl.

    I believe this is due to the FileWrapper that buffers the response in memory.

    If using requests directly:

    import shutil
    import requests

    url = 'http://example.com/big-file'  # any large download

    response = requests.get(url, stream=True)
    with open('/var/tmp/out.bin', 'wb') as fh:  # binary write mode
        shutil.copyfileobj(response.raw, fh)
    

    Yields constant memory usage. If you throw CacheControl into the mix, memory shoots up based on the size of the downloaded object.
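
    One general way to bound that kind of buffering (a sketch of the pattern, not cachecontrol's code) is to spool the body to disk past a size threshold:

    import tempfile
    import requests

    response = requests.get('http://example.com/big-file', stream=True)

    # Keep at most 5 MB in memory; beyond that, SpooledTemporaryFile
    # transparently rolls the buffer over to a real file on disk.
    buf = tempfile.SpooledTemporaryFile(max_size=5 * 1024 * 1024)
    for chunk in response.iter_content(chunk_size=64 * 1024):
        buf.write(chunk)
    buf.seek(0)  # rewind before handing the body to a cache or consumer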

    opened by dsully 17
  • avoid infinite regress in __getattr__

    After a large amount of debugging a deadlock in pip's test suite, I found an infinite recursion in the below __getattr__ method. I've fixed it and regression-tested it as best I could.

    It's not clear to me how an attribute of this class is referenced after its __fp is deleted, but it did happen.

    For posterity's sake, this is the stack that caused the infinite recursion:

      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/workspace/venv/bin/pip", line 9, in <module>
        load_entry_point('pip==1.6.dev1', 'console_scripts', 'pip')()
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/__init__.py", line 198, in main
        return command.main(cmd_args)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/basecommand.py", line 212, in main
        status = self.run(options, args)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/commands/install.py", line 317, in run
        requirement_set.prepare_files(finder)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/req/req_set.py", line 238, in prepare_files
        req_to_install, self.upgrade)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/index.py", line 284, in find_requirement
        for page in self._get_pages(url_locations, req):
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/index.py", line 391, in _get_pages
        page = self._get_page(location, req)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/index.py", line 611, in _get_page
        result = HTMLPage.get_page(link, req, session=self.session)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/index.py", line 693, in get_page
        "Cache-Control": "max-age=600",
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/sessions.py", line 463, in get
        return self.request('GET', url, **kwargs)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/download.py", line 286, in request
        result = super(PipSession, self).request(method, url, *args, **kwargs)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/sessions.py", line 451, in request
        resp = self.send(prep, **send_kwargs)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/sessions.py", line 557, in send
        r = adapter.send(request, **kwargs)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/cachecontrol/adapter.py", line 38, in send
        return self.build_response(request, cached_response, from_cache=True)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/cachecontrol/adapter.py", line 95, in build_response
        request, response
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/adapters.py", line 203, in build_response
        response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {}))
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/structures.py", line 46, in __init__
        self.update(data, **kwargs)
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/workspace/venv/lib/python2.7/_abcoll.py", line 541, in update
        for key in other:
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/requests/packages/urllib3/response.py", line 304, in closed
        elif hasattr(self._fp, 'isclosed'):  # Python 2
      File "/tmp/pytest-29/test_upgrade_user_conflict_in_globalsite0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 27, in __getattr__
        f.write(''.join(traceback.format_stack()))
    

    I believe that gc is activating during Mapping.update and jumping to response.closed. Why it would touch a property, I don't know. Further, this is happening after something (the gc, I assume) has deleted __fp from the filewrapper. Because filewrapper references self.__fp in __getattr__, and __fp isn't present, it recurses there. Why I get a busy deadlock rather than a recursion error, I also don't know.

    But, this fixes it :D
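
    The underlying trap, for the record: __getattr__ is only called when normal attribute lookup fails, so referencing a possibly-missing attribute via self.x inside __getattr__ re-enters __getattr__ forever. Fetching through __getattribute__ raises AttributeError cleanly instead. A sketch of the pattern (the mangled name reflects the double-underscore prefix used in filewrapper.py):

    class CallbackFileWrapper(object):
        def __init__(self, fp, callback):
            self.__fp = fp
            self.__callback = callback

        def __getattr__(self, name):
            # self.__fp would re-enter __getattr__ if __fp has been
            # deleted; __getattribute__ raises AttributeError instead,
            # which getattr callers can handle normally.
            fp = self.__getattribute__('_CallbackFileWrapper__fp')
            return getattr(fp, name)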

    opened by bukzor 17
  • UnicodeDecodeError raised on some cache max-age headers

    I'm encountering this exception when fetching some URLs:

    UnicodeDecodeError
     'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
    

    The function that is raising is this:

    def _b64_encode_str(s):
        return _b64_encode_bytes(s.encode("utf8"))
    

    Here is example data that some HTTP servers apparently send: '\u201cmax-age=31536000\u2033' (note the non-ASCII curly quotes).

    It would be great if cache-control could handle these edge cases without dying.
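
    For context, on Python 2 calling .encode("utf8") on a byte string implicitly decodes it with the ASCII codec first, which is exactly the failure above. A defensive variant (a sketch, not necessarily the fix that landed) only encodes when it actually holds text:

    def _b64_encode_str(s):
        # On Python 2, str.encode("utf8") does an implicit
        # s.decode("ascii") first, which raises UnicodeDecodeError on
        # bytes like 0xe2 (the start of a UTF-8 curly quote).
        if isinstance(s, bytes):
            return _b64_encode_bytes(s)
        return _b64_encode_bytes(s.encode("utf-8"))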

    Thanks for a great package!

    opened by hakanw 16
  • Not getting the caching I expected

    Here's my SSCCE (short, self-contained, correct example):

    import github3
    import cachecontrol
    g = github3.GitHub()
    cachecontrol.CacheControl(g._session)
    print(g.rate_limit()['resources']['core']) # initial rate limit
    print(g.rate_limit()['resources']['core']) # rate_limit - should not count
    repository = g.repository('sigmavirus24', 'github3.py')
    print(g.rate_limit()['resources']['core']) # get repo data - should count
    repository = g.repository('sigmavirus24', 'github3.py')
    print(g.rate_limit()['resources']['core']) # get repo again - should be served from cache
    

    In the output, I'm seeing that the rate limit is being ticked off for each g.repository call.

    {u'reset': 1438863563, u'limit': 60, u'remaining': 53}
    {u'reset': 1438863563, u'limit': 60, u'remaining': 53}
    {u'reset': 1438863563, u'limit': 60, u'remaining': 52}
    {u'reset': 1438863563, u'limit': 60, u'remaining': 51}
    

    With logging cranked up to the max and tons of logging added (see #93 for the exact code, 50a76aa to be specific), I'm seeing the long trace below. Conspicuously, there's no Updating cache with response from "http://..." line, which is what I added at https://github.com/toolforger/cachecontrol/blob/50a76aa0f022a47d34c65c13c4c813ecb1f2c086/cachecontrol/controller.py#L228, so it seems cachecontrol.controller.CacheController.cache_response is never called. Since that call is stashed away in a functools.partial, I have no idea where or when it should have happened, and I've hit a dead end.

    INFO:github3:Building a url from ('https://api.github.com', 'rate_limit')
    INFO:github3:Missed the cache building the url
    DEBUG:github3:GET https://api.github.com/rate_limit with {}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/rate_limit" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.github.com
    DEBUG:requests.packages.urllib3.connectionpool:"GET /rate_limit HTTP/1.1" 200 None
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    INFO:github3:Building a url from ('https://api.github.com', 'rate_limit')
    DEBUG:github3:GET https://api.github.com/rate_limit with {}
    {u'reset': 1438867921, u'limit': 60, u'remaining': 60}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/rate_limit" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    DEBUG:requests.packages.urllib3.connectionpool:"GET /rate_limit HTTP/1.1" 200 None
    {u'reset': 1438867922, u'limit': 60, u'remaining': 60}
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    INFO:github3:Building a url from ('https://api.github.com', 'repos', 'sigmavirus24', 'github3.py')
    INFO:github3:Missed the cache building the url
    DEBUG:github3:GET https://api.github.com/repos/sigmavirus24/github3.py with {}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/repos/sigmavirus24/github3.py" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    DEBUG:requests.packages.urllib3.connectionpool:"GET /repos/sigmavirus24/github3.py HTTP/1.1" 200 None
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    INFO:github3:Building a url from ('https://api.github.com', 'rate_limit')
    DEBUG:github3:GET https://api.github.com/rate_limit with {}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/rate_limit" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    DEBUG:requests.packages.urllib3.connectionpool:"GET /rate_limit HTTP/1.1" 200 None
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    INFO:github3:Building a url from ('https://api.github.com', 'repos', 'sigmavirus24', 'github3.py')
    DEBUG:github3:GET https://api.github.com/repos/sigmavirus24/github3.py with {}
    {u'reset': 1438867922, u'limit': 60, u'remaining': 59}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/repos/sigmavirus24/github3.py" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    DEBUG:requests.packages.urllib3.connectionpool:"GET /repos/sigmavirus24/github3.py HTTP/1.1" 200 None
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    INFO:github3:Building a url from ('https://api.github.com', 'rate_limit')
    DEBUG:github3:GET https://api.github.com/rate_limit with {}
    DEBUG:cachecontrol.controller:Looking up "https://api.github.com/rate_limit" in the cache
    DEBUG:cachecontrol.controller:No cache entry available
    DEBUG:requests.packages.urllib3.connectionpool:"GET /rate_limit HTTP/1.1" 200 None
    INFO:github3:Attempting to get JSON information from a Response with status code 200 expecting 200
    INFO:github3:JSON was returned
    {u'reset': 1438867922, u'limit': 60, u'remaining': 58}
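
    For orientation, the cache_response call being hunted above is deferred roughly like this (a simplified, self-contained sketch of the hand-off, not the exact code): the controller method is frozen into a functools.partial and only fires once the response body has been read to the end.

    import functools

    def cache_response(request_desc, body):
        print("caching %s (%d bytes)" % (request_desc, len(body)))

    # The adapter stashes the call with its arguments pre-bound; the
    # file wrapper invokes it only after the stream is exhausted, so a
    # body that is never fully read is never cached.
    callback = functools.partial(cache_response, "GET /rate_limit")
    callback(b"{}")  # what the wrapper does at EOF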
    
    opened by toolforger 14
  • Excessive memory usage, part 2

    I'm testing new cachecontrol 0.12.8 with pip, and it only partially solves the excess memory usage: memory usage goes down from 1300MB to 900MB when doing pip install tensorflow.

    The next bottleneck is the fact that msgpack doesn't have a streaming interface, so packing always makes a copy of the data in memory. (I believe something like this was mentioned as a potential issue in the previous issue #145, but unfortunately the test script I was using didn't exercise that code path.)

    Some potential approaches:

    1. Modify msgpack upstream to support streaming writes, perhaps coupled with a streaming API on the cachecontrol side.
    2. Switch to a new data format where, instead of one giant bytestring, the msgpack payload is a series of bytestrings. This requires no changes to msgpack, and perhaps no public API changes, assuming more mmap hackery.
    3. Re-implement msgpack serialization just for this use case; essentially the code only packs a byte string, so it just needs to write the appropriate header beforehand (see the sketch after this list). Depending on which APIs are public this may be the least intrusive option; hacky, but it'll work.
    4. No doubt others (e.g. some other data format).
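
    Option 3 is smaller than it sounds: in the msgpack format, a bin 32 value is just a one-byte tag plus a big-endian length, after which the raw payload can be streamed in chunks. A sketch based on the msgpack spec:

    import struct

    def write_bin32_header(out, length):
        # msgpack "bin 32": the 0xc6 tag, a 4-byte big-endian byte
        # count, then the raw bytes themselves, which can be copied
        # file-to-file without ever materialising them in memory.
        out.write(b"\xc6" + struct.pack(">I", length))
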
    opened by itamarst 13
  • FileCache does not work

    I tried to test for the existence of the cache directory, similar to the included test test_storage_filecache.py, but it never gets created. The forever=True flag does not help, and neither does changing the .web_cache directory to something else.

    import os.path
    import logging
    logging.basicConfig(level=logging.DEBUG)
    
    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache
    
    webcache_dir = ".web_cache"
    cache = FileCache(webcache_dir)
    sess = CacheControl(requests.Session(), cache=cache)
    response = sess.get("http://google.com")
    
    print()
    print(cache)
    print("%s exists?" % webcache_dir, os.path.exists(webcache_dir))
    

    Attached log:

    INFO:urllib3.connectionpool:Starting new HTTP connection (1): google.com
    DEBUG:urllib3.connectionpool:Setting read timeout to None
    DEBUG:urllib3.connectionpool:"GET / HTTP/1.1" 302 258
    INFO:urllib3.connectionpool:Starting new HTTP connection (1): www.google.cz
    DEBUG:urllib3.connectionpool:Setting read timeout to None
    DEBUG:urllib3.connectionpool:"GET /?gfe_rd=cr&ei=DnKeVs2tOOWI8QfDyYbwDw HTTP/1.1" 200 7699
    
    <cachecontrol.caches.file_cache.FileCache object at 0x7f72120f4b00>
    .web_cache exists? False
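
    For what it's worth, this is likely because the google.com redirect chain ends in a response without cache-friendly headers, so nothing is ever stored and the directory is never created. One way to check whether storage works at all is to request a URL that definitely sends caching headers and inspect the from_cache flag CacheControl sets on responses (a sketch; httpbin.org/cache/60 is just a convenient endpoint that replies with Cache-Control: max-age=60):

    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache

    sess = CacheControl(requests.Session(), cache=FileCache(".web_cache"))

    url = "http://httpbin.org/cache/60"  # replies with max-age=60
    r1 = sess.get(url)
    r2 = sess.get(url)

    # Expected: False True, and .web_cache should now exist on disk.
    print(getattr(r1, "from_cache", False), getattr(r2, "from_cache", False))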
    
    opened by burtgulash 13
  • recent change to urllib3's is_fp_closed broke cachecontrol for Python 3 and PyPy

    FileCache stopped working for me with Python 3 and PyPy, and I think I tracked down the cause to a recent change to urllib3's is_fp_closed utility.

    From https://github.com/ionrock/cachecontrol/blob/master/cachecontrol/filewrapper.py#L32

            # Is this the best way to figure out if the file has been completely
            #   consumed?
            if is_fp_closed(self.__fp):
                self.__callback(self.__buf.getvalue())
    

    In my Python 3 and PyPy environments, is_fp_closed was never returning True. Reverting the changes in shazow/urllib3#435 fixed it.
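
    For reference, a duck-typed closed check of the kind urllib3 provides looks roughly like this (a sketch of the pattern, not urllib3's exact code at any given version):

    def is_fp_closed(obj):
        # Probe the interfaces a body object might expose, in order.
        if hasattr(obj, 'isclosed'):  # httplib.HTTPResponse (Python 2)
            return obj.isclosed()
        if hasattr(obj, 'closed'):    # io.IOBase and friends
            return obj.closed
        if hasattr(obj, 'fp'):        # wrappers that set fp = None at EOF
            return obj.fp is None
        raise ValueError("Unable to determine whether fp is closed.")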

    I tried cloning urllib3 and running the tox tests to dig in further but couldn't get the tests to run, and thought my best next step would be reporting here.

    It may be that cachecontrol's code is just fine and the issue is in urllib3, but I figured I'd confirm here first. Does that look like the problem?

    Thanks in advance for taking a look!

    opened by requiredfield 12
  • No cache when no internet connection - even with forever set to True

    Hello,

    I tried this code with my internet connection enabled:

    import requests
    from cachecontrol import CacheControl
    from cachecontrol.caches import FileCache
    
    req_session = requests.session()
    cache = FileCache('web_cache', forever=True)
    session = CacheControl(req_session, cache=cache)
    response = session.get('http://www.google.com')
    print(response.status_code)
    

    Then I disabled my internet connection and ran the code again.

    It raised ConnectionError: ('Connection aborted.', gaierror(8, 'nodename nor servname provided, or not known'))

    That's probably a misunderstanding on my side, but I thought that if both the request and the response were stored in a file, I could retrieve them when my connection was disabled.

    I also don't understand why this forever flag exists. In my understanding, we should pass a custom caching strategy (a.k.a. a caching heuristic) to CacheControl:

    class Forever(BaseHeuristic):
        pass
    

    and use it like

    req_session = requests.session()
    cache = FileCache('web_cache')
    session = CacheControl(req_session, cache=cache, heuristic=Forever())
    response = session.get('http://www.google.com')
    print(response.status_code)
    
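    For what it's worth, a working heuristic has to override update_headers rather than pass. A minimal sketch of a long-lived variant, modelled on the BaseHeuristic interface:

    import calendar
    from datetime import datetime, timedelta
    from email.utils import formatdate

    from cachecontrol.heuristics import BaseHeuristic

    class Forever(BaseHeuristic):
        """Sketch: pretend every response is fresh for a year."""

        def update_headers(self, response):
            expires = datetime.utcnow() + timedelta(days=365)
            return {
                'expires': formatdate(calendar.timegm(expires.timetuple())),
                'cache-control': 'public',
            }

        def warning(self, response):
            return '110 - "Response is Stale"'

    If the first fetch happens online with a heuristic like this in place, later lookups can be answered from the cache without touching the network.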

    Any ideas? As I said, it's probably a misunderstanding on my side.

    Kind regards

    bug 
    opened by femtotrader 11
  • New release with SeparateBody cache fixes?

    Hey!

    I just spent an hour trying to get split body/metadata to work for a cache until I realized that (a) it's broken due to the body not being loaded when updating the cache and (b) it's already fixed here. Unfortunately it seems like this was in the now yanked 0.12.12, which was slated to be re-released as 0.13 as I understand it. Is there a schedule for this happening? Any help needed? Would be much appreciated!

    Thanks! :-)

    opened by tgolsson 0
  • Packaging the tests, or not

    Fixes #281, #282

    • first commit explicitly excludes tests from the binary distribution, it is indeed unconventional to include them
    • second commit explicitly re-adds them to the source distribution
      • I don't think there's universal consensus about whether this is desirable, e.g. here is a recent discussion
      • but it seems harmless, and seeing as how #281 explicitly asks for them, well, why not?
    opened by dimbleby 1
  • No tests in pypi sdist tarball

    Hi! I'm packaging this project for Arch Linux.

    We usually try to rely on pypi.org for sdist tarballs to build from. In a packaging context we also run tests to ensure that the given project is compatible with our own Python ecosystem. Unfortunately the tests are not included in the sdist tarball on pypi.org, so it would be awesome if you could add them there. That would allow anyone to also rely on the tests when they build a wheel from source! :)

    opened by dvzrv 1
  • Add type annotations

    I probably should have checked first whether this is something you even want, but if the answer is "no thanks" then fair enough: it wasn't so much work that either of us need feel bad about that.

    I've added type annotations everywhere, and mypy configuration to check them. I haven't tried to update your GitHub workflows, but all you should need to do is run it like this:

    $ mypy cachecontrol
    Success: no issues found in 13 source files
    
    opened by dimbleby 2
Owner: Eric Larson