No effort, no worry, maximum performance.

Overview

Django Cachalot

Caches your Django ORM queries and automatically invalidates them.

Documentation: http://django-cachalot.readthedocs.io



Table of Contents:

  • Quickstart
  • Usage
  • Hacking
  • Benchmark
  • Third-Party Cache Comparison
  • Discussion

Quickstart

Cachalot officially supports Python 3.5-3.9 and Django 2.0-2.2, 3.0-3.1 with the databases PostgreSQL, SQLite, and MySQL.

Usage

  1. pip install django-cachalot
  2. Add 'cachalot', to your INSTALLED_APPS
  3. If you use multiple servers with a common cache server, double check their clock synchronisation
  4. If you modify data outside Django (typically after restoring a SQL database), use the ./manage.py invalidate_cachalot command
  5. Be aware of the few other limits
  6. If you use django-debug-toolbar, you can add 'cachalot.panels.CachalotPanel', to your DEBUG_TOOLBAR_PANELS
  7. Enjoy!
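
For steps 1, 2, and 6, a minimal settings sketch (the django-redis backend and its LOCATION below are assumptions; any cache backend supported by cachalot works):

```python
# settings.py (sketch; the django-redis backend and LOCATION are assumptions)
INSTALLED_APPS = [
    # ...
    'cachalot',
]

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/0',
    },
}

# Step 6 (optional): django-debug-toolbar integration
DEBUG_TOOLBAR_PANELS = [
    # ... the default panels, then:
    'cachalot.panels.CachalotPanel',
]
```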

Hacking

To start developing, install the requirements and run the tests via tox.

Make sure you have the following services:

  • Memcached
  • Redis
  • PostgreSQL
  • MySQL

For setup:

  1. Install: pip install -r requirements/hacking.txt
  2. For PostgreSQL: CREATE ROLE cachalot LOGIN SUPERUSER;
  3. Run: tox --current-env to run the test suite on your current Python version.
  4. You can also run specific databases and Django versions: tox -e py38-django3.1-postgresql-redis

Benchmark

Currently, benchmarks are supported on Linux and macOS (Darwin). You will need a database called "cachalot" on both MySQL and PostgreSQL; on PostgreSQL, you will also need to create a role called "cachalot". Alternatively, you can just run the benchmark, and it will raise errors with specific instructions on how to fix them.

  1. Install: pip install -r requirements/benchmark.txt
  2. Run: python benchmark.py

The output will be in benchmark/TODAY'S_DATE/

TODO: Create a docker-compose file to make running the benchmarks easier.

Third-Party Cache Comparison

There are three main third-party caches: cachalot, cache-machine, and cacheops. Which should you use? We suggest a mix:

TL;DR Use cachalot for cold caches or tables modified fewer than 50 times per minute (most people should stick with cachalot alone, since you most likely won't need to scale to the point of adding cache-machine to the mix). If you're an enterprise that already has detailed statistics, then mixing cachalot for your cold caches with cache-machine for your hot caches is the best combination. Either way, when performing joins with select_related and prefetch_related, you can get a nearly 100x speed-up for your initial deployment.

Recall that cachalot caches THE ENTIRE TABLE. That's where its inefficiency stems from: if you keep updating the records, cachalot constantly invalidates the table and re-caches it. Luckily, caching itself is very efficient; it's the cache invalidation part that kills all our systems. Look at Note 1 below to see how Reddit deals with it.

Cachalot is more-or-less intended for cold caches or "just-right" conditions. If you add a partitioning library for Django (one is being written by Andrew Chen Wang, but it is still a work in progress), the caching will work even better, since the sharded cold, least-accessed records aren't invalidated as much.

Cachalot works well when a hot cached table sees fewer than 50 modifications per minute. This is mostly due to cache invalidation, and it's the same with any cache, which is why we suggest cache-machine for hot caches. Cache-machine caches individual objects, taking up more space in the memory store, but it invalidates those individual objects instead of the entire table like cachalot does.

Yes, the bane of our entire existence lies in cache invalidation and naming variables. Why does cachalot suffer with a huge table that's modified rapidly? Since your cold records (90% of the table) are mixed in with your hot records (10%), you cache and invalidate the entire table at once. It's like boiling one ton of noodles in ONE pot instead of spreading it across 100 pots: splitting them up is more efficient.
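
The trade-off can be illustrated with a toy sketch (this is not cachalot's or cache-machine's real code, just the two invalidation strategies side by side):

```python
# A toy sketch (not cachalot's or cache-machine's real code) contrasting
# table-level invalidation with per-object invalidation.

class TableCache:
    """Cachalot-style: cache query results per table; any write to the
    table drops every cached query for it, hot or cold."""
    def __init__(self):
        self.store = {}  # (table, sql) -> rows

    def get_or_set(self, table, sql, compute):
        key = (table, sql)
        if key not in self.store:
            self.store[key] = compute()
        return self.store[key]

    def on_write(self, table):
        self.store = {k: v for k, v in self.store.items() if k[0] != table}


class ObjectCache:
    """Cache-machine-style: cache individual objects; a write evicts
    only the object that changed."""
    def __init__(self):
        self.store = {}  # (table, pk) -> object

    def get_or_set(self, table, pk, compute):
        key = (table, pk)
        if key not in self.store:
            self.store[key] = compute()
        return self.store[key]

    def on_write(self, table, pk):
        self.store.pop((table, pk), None)


table_cache = TableCache()
table_cache.get_or_set("menu", "SELECT * FROM menu", lambda: ["all rows"])
table_cache.get_or_set("menu", "SELECT ... WHERE id=1", lambda: ["row 1"])
table_cache.on_write("menu")     # one write evicts both cached queries
print(len(table_cache.store))    # 0

obj_cache = ObjectCache()
obj_cache.get_or_set("menu", 1, lambda: {"id": 1})
obj_cache.get_or_set("menu", 2, lambda: {"id": 2})
obj_cache.on_write("menu", 1)    # only object 1 is evicted
print(len(obj_cache.store))      # 1
```

With many writes per minute, the table-level strategy re-caches constantly, which is why hot tables fit cache-machine better.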

Note 1: My personal experience with caches stems from Reddit's: https://redditblog.com/2017/01/17/caching-at-reddit/

Note 2: Technical comparison: https://django-cachalot.readthedocs.io/en/latest/introduction.html#comparison-with-similar-tools

Discussion

Help? Technical chat? It's here on Discord.


Comments
  • Django 1.8 support

    Just wanted to get some clarity on whether django-cachalot in its current state supports Django 1.8 or not. I tried running the package on Django 1.8, and while it's not crashing anything, I see that the django1.8 branch hasn't been merged to master and is failing the build. So an update for the community about this would be good. Thanks.

    enhancement 
    opened by maryokhin 38
  • Add final SQL check when looking up involved tables

    Description

    • Add a final SQL check to include potentially overlooked tables when looking up involved tables.
    • Add unit tests showing queries which do "order by" using a field of a referenced table. These tests would fail without the final SQL check.

    Rationale

    Changing the referenced object should also invalidate the query as calling the query again might lead to another result.

    "Order by" allows expressions such as Coalesce as well: https://docs.djangoproject.com/en/3.2/ref/models/querysets/#order-by

    Discussion

    Initially I thought of adding the final SQL check as configuration option. After having looked at all the queries, I believe that it should be the default behavior. Thus I did not make it an option for now.

    opened by dbartenstein 23
  • Cachalot error

    Getting this error with cachalot enabled when running unit test cases. The cache backend is 'BACKEND': 'django_redis.cache.RedisCache'.

    File "/home/ubuntu/virtualenvs/venv-system/lib/python2.7/site-packages/django/db/migrations/executor.py", line 198, in apply_migration
      state = migration.apply(state, schema_editor)
    File "/home/ubuntu/virtualenvs/venv-system/lib/python2.7/site-packages/django/db/backends/base/schema.py", line 92, in __exit__
      self.atomic.__exit__(exc_type, exc_value, traceback)
    File "/home/ubuntu/virtualenvs/venv-system/lib/python2.7/site-packages/cachalot/monkey_patch.py", line 146, in inner
      self.using, exc_type is None and not needs_rollback)
    File "/home/ubuntu/virtualenvs/venv-system/lib/python2.7/site-packages/cachalot/cache.py", line 47, in exit_atomic
      atomic_caches = self.atomic_caches[db_alias].pop().values()
    IndexError: pop from empty list

    can you please help with this?

    needs info 
    opened by gokulrajridecell 20
  • Huge SQL query reaches memcached size limit per key

    First, I'm using Django 1.8, Python 3.4.3, Postgres 9.3.x, and memcached 1.4.4 with pylibmc.

    I have a form on my site that has a jQuery autocomplete box. This is used for selecting locations (we have roughly 13k locations in our database: continents, countries, states, and cities). Here's the view:

    def location_query(request):
        # first handle the location autocomplete
        if request.is_ajax():
            term = request.GET['term']
    
            # I want to explicitly order matching countries at the front of the list
            matching_countries = Location.get_countries().filter(full_name__icontains=term)
            matching_states = Location.get_states().filter(full_name__icontains=term)
            matching_cities = Location.get_cities().filter(full_name__icontains=term)
    
            matching_locations = list()
            matching_locations.extend(matching_countries)
            matching_locations.extend(matching_states)
            matching_locations.extend(matching_cities)
    
            locations_json = list()
            for matching_location in matching_locations[:10]:
                location_json = dict()
                location_json['id'] = matching_location.pk
                location_json['label'] = '%s (%s)' % (matching_location.full_name, matching_location.admin_level)
                location_json['value'] = matching_location.pk
                locations_json.append(location_json)
    
            return JsonResponse(locations_json, safe=False)
    

    And here's the Location model:

    class Location(models.Model):
        name = models.CharField(max_length=255)
        full_name = models.CharField(max_length=255, blank=True)  # the name might be "Paris", but full_name would be "Paris, Texas, United States of America"; allowed to be blank only because the script that populates this table will fill it in after all locations are added
        imported_from = models.CharField(max_length=255)
        admin_level = models.CharField(max_length=255, blank=True)
        geometry = models.MultiPolygonField(blank=True, default=None, null=True)
        objects = models.GeoManager()  # override the default manager with a GeoManager instance
        parent = models.ForeignKey('self', blank=True, default=None, null=True)
    
        def __str__(self):
            return self.full_name
    
        def get_full_name(self, include_continent=False):
            """
                Get the full name of a location. This includes the entire hierarchy, optionally including the continent.
                    e.g., Paris, Texas, United States of America
            """
            full_name = self.name
            current_parent = self.parent
            while current_parent is not None and (include_continent or (not include_continent and current_parent.parent is not None)):
                full_name += ', ' + current_parent.name
                current_parent = current_parent.parent
            return full_name
    
        def get_country(self):
            if self.admin_level == 'Country':
                return self.name
            return self.parent.get_country()
    
        @staticmethod
        def get_continents():
            return Location.objects.filter(parent=None).order_by('name')
    
        @staticmethod
        def get_countries(continent=None):
            if continent:
                # return a single continent's countries, sorted
                return Location.objects.filter(parent=continent).order_by('name')
            else:
                # return all countries, sorted
                return Location.objects.filter(admin_level='Country').order_by('name')
    
        @staticmethod
        def get_states(country=None):
            if country:
                # return a single country's states, sorted
                return Location.objects.filter(parent=country).order_by('name')
            else:
                # return all states, sorted
                return Location.objects.filter(admin_level='State').order_by('name')
    
        @staticmethod
        def get_cities(state=None):
            if state:
                # return a single state's cities, sorted
                return Location.objects.filter(parent=state).order_by('name')
            else:
                # return all cities, sorted
                return Location.objects.filter(admin_level='City').order_by('name')
    
        @staticmethod
        def get_non_continents():
            return Location.objects.exclude(parent=None).order_by('name')
    
        class Meta:
            ordering = ['full_name']
    

    When I disable cachalot by commenting out the line in INSTALLED_APPS, the autocomplete works. When I enable it, it doesn't work. Other things on my site do indeed work, and the DDT panel shows that cachalot is doing its job. Can it deal with ajax calls like this?

    documentation 
    opened by gfairchild 20
  • Make it possible to disable cachalot on per-query basis

    Description

    This patch makes it possible to disable caching on a per-query basis by setting the attribute cachalot_do_not_cache to True on the query.

    Rationale

    Sometimes the programmer knows that a query will return a large response which will not be reused (in our case it is an export which can have hundreds of megabytes of data, streamed in chunks directly into a compressed file). This can potentially even lead to memory errors when the maximum size of the cache is reached (this is exactly what we see with our Redis cache). In such cases, it would be great to be able to mark a specific query as excluded from caching, which would save the work required to store it and also prevent possible memory errors. The method I implemented here is not super-nice, but it is simple and works with minimal changes to the cachalot source code.

    opened by beda42 17
  • Exception 'Exists' object has no attribute 'rhs'

    What happened?

    We are running django-cachalot in production and keep getting exceptions "'Exists' object has no attribute 'rhs'". This is captured in sentry, so we can see some of the details.

    It's coming from the following line in cachalot/utils.py: rhs = child.rhs

    Here are some of the details set at this point:

    • child - django.db.models.expressions.Exists object
    • child_class - django.db.models.expressions.Exists
    • children - [django.db.models.lookups.Exact,django.db.models.expressions.Exists]
    • rhs - uuid.UUID
    • rhs_class - uuid.UUID

    What should've happened instead?

    No exception

    Steps to reproduce

    Seems to happen on DB queries that make use of django.db.models.expressions.Exists.

    • Django==3.0.6
    • Postgres DB (12.x)

    opened by shield007 16
  • Invalidation of data stored in a primary/replica configured DB invalidates only for primary instance

    I have a Django app with two MariaDB databases configured as primary/replica with the following configuration:

    DATABASE_ROUTERS = ['app.db_replica.PrimaryReplicaRouter', ]
    
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'app',
            'USER': 'app',
            'PASSWORD': '*******************',
            'HOST': 'mysql-master',
        },
        'replica1': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'app',
            'USER': 'app',
            'PASSWORD': '*******************',
            'HOST': 'mysql-slave',
        },
    }
    

    The router is configured to write on primary and read on replica:

    class PrimaryReplicaRouter(object):
        def db_for_read(self, model, **hints):
            return 'replica1'
    
        def db_for_write(self, model, **hints):
            return 'default'
    
        def allow_relation(self, obj1, obj2, **hints):
            db_list = ('default', 'replica1')
            if obj1._state.db in db_list and obj2._state.db in db_list:
                return True
            return None
    
        def allow_migrate(self, db, app_label, model=None, **hints):
            return True
    

    When Django writes to a table, the cache gets invalidated only on the primary. You can test this by configuring the router to randomly return the primary or the replica; you will see old and new values alternate when refreshing the page.

    Calling ./manage.py invalidate_cachalot works, I think because it invalidates all the cache regardless of the database instance.
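
    A toy model of the reported behavior (this is not cachalot's actual code; it just shows why invalidation keyed by database alias misses entries cached under another alias):

```python
# Toy model (not cachalot's actual implementation) of table-level caching
# where invalidation is tracked per database alias.

class AliasKeyedCache:
    def __init__(self):
        self.clock = 0           # logical clock, bumped on every write
        self.results = {}        # (alias, table, sql) -> (cached_at, rows)
        self.invalidated = {}    # (alias, table) -> clock of the last write

    def read(self, alias, table, sql, fetch):
        key = (alias, table, sql)
        entry = self.results.get(key)
        stale_after = self.invalidated.get((alias, table), -1)
        if entry is None or entry[0] <= stale_after:
            entry = (self.clock, fetch())
            self.results[key] = entry
        return entry[1]

    def write(self, alias, table):
        # Only the alias that performed the write gets invalidated.
        self.clock += 1
        self.invalidated[(alias, table)] = self.clock


cache = AliasKeyedCache()
cache.read("replica1", "app_model", "SELECT *", lambda: ["old value"])
cache.write("default", "app_model")   # Django routes the write to the primary
rows = cache.read("replica1", "app_model", "SELECT *", lambda: ["new value"])
print(rows)  # ['old value'] -- the replica's cached entry was never invalidated
```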

    documentation 
    opened by micku 14
  • Queryset with annotated Now() is cached

    What happened?

    An annotated query including Now() is cached.

    What should've happened instead?

    An annotated query containing Now() must not be cached.

    Steps to reproduce

    from django.db.models.functions import Now
    # Replace Vehicle with some sort of cachable model
    Vehicle.objects.all().annotate(now=Now())
    

    • Debian Linux
    • Django 3.2 LTS
    • PostgreSQL 10

    bug 
    opened by dbartenstein 12
  • Also support django_prometheus wrapped caches

    Description

    Add the cache backends wrapped by django_prometheus as supported backends.

    Rationale

    cachalot already explicitly supports django_prometheus' wrappers for database backends. django_prometheus does the same wrapping for cache backends, so they should be supported as well.

    opened by Natureshadow 12
  • Race condition?

    What happened?

    I have an app called menus which contains two models, Menu(models.Model) and MenuItem(MP_Node). Both tables are enabled for caching by cachalot. Note that MenuItem implements a tree structure from django-treebeard.

    There is an edit form that is responsible for updating a particular Menu instance by creating or moving MenuItem's around. The code responsible for that is a little bit complicated, but nonetheless I'm adding it below:

            root = anytree.AnyNode()
                nodes = {}
                node_ids = [form.cleaned_data.get("node_id") for form in form.menu_items.ordered_forms]
                node_ids = set(instance.root_item.get_descendants().filter(id__in=node_ids).values_list("id", flat=True))
    
                for form in form.menu_items.ordered_forms:
                    node_id = form.cleaned_data.get("node_id")
                    page = form.cleaned_data.get("page")
                    page_id = None
                    if isinstance(page, Page):
                        page_id = page.pk
    
                    item = {
                        "id": node_id if node_id in node_ids else None,
                        "parent_id": form.cleaned_data.get("parent_id"),
                        "data": {
                            "title": form.cleaned_data.get("title"),
                            "icon": form.cleaned_data.get("icon"),
                            "classes": form.cleaned_data.get("classes"),
                            "type": form.cleaned_data.get("type"),
                            "url": form.cleaned_data.get("url"),
                            "named_url": form.cleaned_data.get("named_url"),
                            "page_id": page_id,
                        },
                    }
                    if item["data"]["type"] in [
                        MenuItem.TEXT_LABEL,
                        MenuItem.CUSTOM_URL,
                        MenuItem.NAMED_URL,
                    ]:
                        item["data"]["page_id"] = None
    
                    nodes[node_id] = anytree.AnyNode(**item)
                for node in nodes.values():
                    if node.parent_id == -1:
                        node.parent = root
                        continue
                    node.parent = nodes[node.parent_id]
    
                exporter = DictExporter(
                    dictcls=OrderedDict,
                    attriter=lambda attrs: [(k, v) for k, v in attrs if k != "parent_id"],
                )
                bulk_data = exporter.export(root)["children"]
    
                # tree, iterative preorder
                stack = [(instance.root_item, node) for node in bulk_data[::-1]]
                while stack:
                    parent, node_struct = stack.pop()
                    parent.refresh_from_db()
                    node_data = node_struct["data"].copy()
                    node_data["id"] = node_struct["id"]
                    if node_data["id"] is None:
                        node_obj = parent.add_child(**node_data)
                        node_ids.add(node_obj.pk)
                    else:
                        node_obj = MenuItem.objects.get(pk=node_data["id"])
                        for attr, value in node_data.items():
                            if attr != "id":
                                setattr(node_obj, attr, value)
                        node_obj.save()
                        node_obj.move(parent, pos="last-child")
    
                    if "children" in node_struct:
                        # extending the stack with the current node as the parent of
                        # the new nodes
                        stack.extend([(node_obj, node) for node in node_struct["children"][::-1]])
    
                # Delete outdated items
                instance.root_item.get_descendants().exclude(id__in=node_ids).delete()
    

    What is happening is that while saving a big menu (300+ items) with live traffic at the same time, I'm experiencing outdated data being stored in the cache. I'm seeing changes at the db level but the cached data remains the same (loading the edit interface shows data that does not match the db-level data). To confirm that theory, running invalidate_cachalot menus solves the issue.

    I guess it may be the result of a race condition (?) because lots of changes are made in a loop:

    while stack:
      ..
      node_obj = MenuItem.objects.get(pk=node_data["id"])
      for attr, value in node_data.items():
          if attr != "id":
              setattr(node_obj, attr, value)
      node_obj.save()
      node_obj.move(parent, pos="last-child")
    

    Am I missing something? Should I simply use the api.invalidate method in such cases?

    What should've happened instead?

    Cached data should be changed.

    needs info 
    opened by pySilver 11
  • Adds django_db_geventpool.backends.postgresql_psycopg2 as supported database

    Description

    We've been using django-cachalot in a production environment for a while, and after asking about it on the Slack help page, it was recommended that we add django_db_geventpool.backends.postgresql_psycopg2 as a supported database.

    Adding django_db_geventpool.backends.postgresql_psycopg2 as supported database.

    Rationale

    I'm not sure how to add a test for it, but it has been used for at least 5 months in a production environment.

    It would remove the warning from deployments using the django_db_geventpool.

    opened by aemitos 11
  • Does django-cachalot lib support the new django.core.cache.backends.redis.RedisCache cache backend in Django version 4.0?

    Question

    The new django.core.cache.backends.redis.RedisCache cache backend provides built-in support for caching with Redis in Django version 4.0, and I used it in my application. I've used the latest version of django-cachalot and got this warning: Cache backend 'django.core.cache.backends.redis.RedisCache' is not supported by django-cachalot. My question is: does django-cachalot support the new django.core.cache.backends.redis.RedisCache cache backend in Django 4.0? Please help answer my question, thanks!

    opened by vuphan-agilityio 4
  • UncachableQuery raised when exporting models of table even if listed in `CACHALOT_UNCACHABLE_APPS`.

    What happened?

    An UncachableQuery exception is raised when exporting the model accounts.Account, even though the app accounts is listed in CACHALOT_UNCACHABLE_APPS:

    Settings

    CACHALOT_CACHE = "cachalot"
    CACHALOT_ENABLED = True
    CACHALOT_ONLY_CACHABLE_APPS = [
        # ...
        "website",
    ]
    CACHALOT_UNCACHABLE_APPS = [
        "accounts",
        # ...
    ]
    

    Error

    UncachableQuery: 
      File "cachalot/monkey_patch.py", line 92, in inner
        table_cache_keys = _get_table_cache_keys(compiler)
      File "cachalot/utils.py", line 276, in _get_table_cache_keys
        for t in _get_tables(db_alias, compiler.query, compiler)]
      File "cachalot/utils.py", line 212, in _get_tables
        raise UncachableQuery
    

    What should've happened instead?

    No exception should have been raised by django-cachalot.

    Steps to reproduce

    Env

    • Django version: Django==3.2.16
    • Cachalot version: django-cachalot==2.5.2
    • Database: MySQL
    • Cache backend: django.core.cache.backends.memcached.PyMemcacheCache

    Context

    • I have a table accounts with ~20k entries; in the admin I try to export almost all rows as .csv, and I end up with the reported error.
    • The Account model has no foreign keys to any app.model listed in CACHALOT_ONLY_CACHABLE_APPS.
    • The export works fine if I select just a few rows.
    • The export works fine if I don't use django-cachalot at all.
    opened by fabiocaccamo 3
  • Tables defined in CACHALOT_UNCACHABLE_APPS are still being cached

    Question

    I have a structure like this:

    App: AppA
    class ModelA(models.Model):
       some fields ....
    
    App: AppB
    class ModelB(models.Model):
       linked_field = models.Foreignkey(ModelA, ...)
    

    My settings are:

    CACHALOT_UNCACHABLE_APPS = INSTALLED_APPS
    
    CACHALOT_ONLY_CACHABLE_APPS = [
        "AppA"
    ]
    

    Why are my settings like this? I've noticed that tables of apps that were not defined to be cached are being cached anyway. That's why I am excluding all apps and whitelisting them afterwards. When I only defined CACHALOT_ONLY_CACHABLE_APPS without setting CACHALOT_UNCACHABLE_APPS, even more apps were cached even though they shouldn't be (e.g. the celery beat table).

    When using my application, I've noticed in the debug toolbar that the tables of AppB are also being cached even though they should not be. I am also not accessing ModelB.

    Is this behaviour as expected?

    What have you tried so far?

    opened by hendrikschneider 0
  • Allow enum-likes in CACHABLE_PARAM_TYPES

    Description

    Currently, the __class__ attribute of Django's TextChoices members indicates that they are enum-like:

    >>> class MockTextChoices(models.TextChoices):
    ...     foo="foo"
    ...     bar="bar"
    ... 
    >>> MockTextChoices.foo
    MockTextChoices.foo
    >>> MockTextChoices.foo.__class__
    <enum 'MockTextChoices'>
    

    And it's perfectly fine to do something like this, when constructing queryset:

    queryset.filter(
                Q(
                    mock__in=(
                        MockTextChoices.foo,
                        MockTextChoices.bar
                    )
                )
            )
    

    Yet such a query, even though valid, can't be cached. This happens because the value of MockTextChoices.foo.__class__ is not present in CACHABLE_PARAM_TYPES, which is used in the check_parameter_types method. So, long story short, it will throw UncachableQuery and make a DB call anyway.

    If you replace MockTextChoices.foo with "foo" or MockTextChoices.foo.value, it works just fine for obvious reasons. My suggestion would be to modify check_parameter_types to allow the syntax above. I don't know right away how to implement that, but I'll have a think, and I'll edit this ticket with a proposed answer :^)
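
    The failing check can be sketched in plain Python (CACHABLE_PARAM_TYPES and is_cachable below are simplified stand-ins for cachalot's internals, and MockTextChoices mimics Django's str-based TextChoices enum):

```python
from enum import Enum

class MockTextChoices(str, Enum):  # stand-in for Django's models.TextChoices
    foo = "foo"
    bar = "bar"

# Simplified stand-in for cachalot's CACHABLE_PARAM_TYPES set
CACHABLE_PARAM_TYPES = {str, int, float, bytes, type(None)}

def is_cachable(param):
    # cachalot-style check: the parameter's exact class must be whitelisted
    return param.__class__ in CACHABLE_PARAM_TYPES

print(is_cachable(MockTextChoices.foo))        # False: the class is the enum itself
print(is_cachable(MockTextChoices.foo.value))  # True: .value is a plain str
```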

    Rationale

    I think such behavior might lead to unexpected queries that are hard to catch without the debug toolbar and pure luck. I mean, if Django accepts such syntax as a valid query, django-cachalot should be able to cache it, right? 👀

    enhancement 
    opened by rythm-of-the-red-man 1
  • Unexpected behavior when using DjangoDebugToolbar

    What happened?

    When setting CACHALOT_ENABLED = False but adding the cachalot.panels.CachalotPanel panel to the DEBUG_TOOLBAR_PANELS variable, it acts as if CACHALOT_ENABLED were True.

    What should've happened instead?

    Adding cachalot.panels.CachalotPanel should not override the CACHALOT_ENABLED setting

    Steps to reproduce

    • django 2.2
    • cachalot 2.5
    • django-debug-toolbar 3.0.0

    bug 
    opened by Vyko 0
  • caching before the first request

    Description

    I have a use case where I need to cache a particular request before the user makes it. This is a list page paginated by PageNumberPagination, and the table has 30M records. Can you please suggest a way to cache the queryset before the user hits the page?

    Rationale

    opened by thesealednectar22 1
Releases(v2.5.2)
  • v2.5.2(Aug 27, 2022)

    What's Changed

    • Add Django 4.1 support by @dmkoch in https://github.com/noripyt/django-cachalot/pull/219

    New Contributors

    • @dmkoch made their first contribution in https://github.com/noripyt/django-cachalot/pull/219

    Full Changelog: https://github.com/noripyt/django-cachalot/compare/v2.5.1...v2.5.2

  • v2.5.1(Feb 24, 2022)

    What's Changed

    • table invalidation condition enhanced by @JanoValaska in https://github.com/noripyt/django-cachalot/pull/213
    • Include docs in sdist. Closes #201 by @debdolph in https://github.com/noripyt/django-cachalot/pull/202
    • Add test settings to sdist. Closes #200. by @debdolph in https://github.com/noripyt/django-cachalot/pull/203

    New Contributors

    • @JanoValaska made their first contribution in https://github.com/noripyt/django-cachalot/pull/213
    • @debdolph made their first contribution in https://github.com/noripyt/django-cachalot/pull/202

    Full Changelog: https://github.com/noripyt/django-cachalot/compare/v2.5.0...v2.5.1

  • v2.5.0(Jan 14, 2022)

    What's Changed

    • Add final SQL check when looking up involved tables by @dbartenstein in https://github.com/noripyt/django-cachalot/pull/199

    Full Changelog: https://github.com/noripyt/django-cachalot/compare/v2.4.5...v2.5.0

  • v2.4.5(Dec 8, 2021)

    What's Changed

    • Add Django 4.0 support, drop Python 3.6 and Django 3.1 by @Andrew-Chen-Wang in https://github.com/noripyt/django-cachalot/pull/208

    Full Changelog: https://github.com/noripyt/django-cachalot/compare/v2.4.4...v2.4.5

  • v2.4.4(Nov 3, 2021)

    What's Changed

    • Handle queryset implementations without lhs/rhs attribute by @sumpfralle in https://github.com/noripyt/django-cachalot/pull/204
    • Add Python 3.10 Support (fixes #205) by @Andrew-Chen-Wang in https://github.com/noripyt/django-cachalot/pull/206

    New Contributors

    • @sumpfralle made their first contribution in https://github.com/noripyt/django-cachalot/pull/204

    Full Changelog: https://github.com/noripyt/django-cachalot/compare/v2.4.3...v2.4.4

  • v2.4.3(Aug 23, 2021)

    • Fix annotated Now being cached (#195)
    • Fix conditional annotated expressions not being cached (#196)
    • Simplify annotation handling by using the flatten method (#197)
    • Fix Django 3.2 default_app_config deprecation (#198)
    • (Internal) Pinned psycopg2 to <2.9 due to Django 2.2 incompatibility
  • v2.3.2(Sep 16, 2020)

  • v2.3.1(Aug 10, 2020)

  • v2.3.0(Jul 29, 2020)

  • v2.2.2(Jun 25, 2020)

  • 2.2.0(Feb 14, 2020)

This library is an ongoing effort towards bringing the data exchanging ability between Java/Scala and Python

PyJava This library is an ongoing effort towards bringing the data exchanging ability between Java/Scala and Python

Byzer 6 Oct 17, 2022
Docker image with Uvicorn managed by Gunicorn for high-performance FastAPI web applications in Python 3.6 and above with performance auto-tuning. Optionally with Alpine Linux.

Supported tags and respective Dockerfile links python3.8, latest (Dockerfile) python3.7, (Dockerfile) python3.6 (Dockerfile) python3.8-slim (Dockerfil

Sebastián Ramírez 2.1k Dec 31, 2022
peace-performance (Rust) binding for python. To calculate star ratings and performance points for all osu! gamemodes

peace-performance-python Fast, To calculate star ratings and performance points for all osu! gamemodes peace-performance (Rust) binding for python bas

null 9 Sep 19, 2022
PyTorch implementation of Algorithm 1 of "On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models"

Code for On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models This repository will reproduce the main results from our pape

Mitch Hill 32 Nov 25, 2022
Implementation of fast algorithms for Maximum Spanning Tree (MST) parsing that includes fast ArcMax+Reweighting+Tarjan algorithm for single-root dependency parsing.

Fast MST Algorithm Implementation of fast algorithms for (Maximum Spanning Tree) MST parsing that includes fast ArcMax+Reweighting+Tarjan algorithm fo

Miloš Stanojević 11 Oct 14, 2022
grungegirl is the hacker's drug encyclopedia. programmed in python for maximum modularity and ease of configuration.

grungegirl. cli-based drug search for girls. welcome. grungegirl is aiming to be the premier drug culture application. it is the hacker's encyclopedia

Eristava 10 Oct 2, 2022
PyTorch code accompanying our paper on Maximum Entropy Generators for Energy-Based Models

Maximum Entropy Generators for Energy-Based Models All experiments have tensorboard visualizations for samples / density / train curves etc. To run th

Rithesh Kumar 135 Oct 27, 2022
The Multi-Mission Maximum Likelihood framework (3ML)

PyPi Conda The Multi-Mission Maximum Likelihood framework (3ML) A framework for multi-wavelength/multi-messenger analysis for astronomy/astrophysics.

The Multi-Mission Maximum Likelihood (3ML) 62 Dec 30, 2022
Deep Reinforcement Learning by using an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO)

V-MPO Simple code to demonstrate Deep Reinforcement Learning by using an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) in Pyt

Nugroho Dewantoro 9 Jun 6, 2022