A simple app that provides Django integration for RQ (Redis Queue)

Overview

Django-RQ


Django integration with RQ, a Redis-based Python queuing library. Django-RQ is a simple app that allows you to configure your queues in Django's settings.py and easily use them in your project.

Support Django-RQ

If you find django-rq useful, please consider supporting its development via Tidelift.

Requirements

  • Django (>= 2.0)
  • RQ (>= 1.2)
  • redis (>= 3)

Installation

  • Install django-rq (or download from PyPI):

pip install django-rq
  • Add django_rq to INSTALLED_APPS in settings.py:
INSTALLED_APPS = (
    # other apps
    "django_rq",
)
  • Configure your queues in Django's settings.py (syntax based on Django's database config):
import os

RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
        'PASSWORD': 'some-password',
        'DEFAULT_TIMEOUT': 360,
    },
    'with-sentinel': {
        'SENTINELS': [('localhost', 26736), ('localhost', 26737)],
        'MASTER_NAME': 'redismaster',
        'DB': 0,
        'PASSWORD': 'secret',
        'SOCKET_TIMEOUT': None,
        'CONNECTION_KWARGS': {
            'socket_connect_timeout': 0.3
        },
    },
    'high': {
        'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379/0'), # If you're on Heroku
        'DEFAULT_TIMEOUT': 500,
    },
    'low': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
    }
}

RQ_EXCEPTION_HANDLERS = ['path.to.my.handler'] # If you need custom exception handlers
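Each entry in RQ_EXCEPTION_HANDLERS should be a dotted path to a callable with RQ's exception handler signature. A minimal sketch matching the path above (the handler body is illustrative):

# path/to/my.py
def handler(job, exc_type, exc_value, traceback):
    # Inspect or log the failed job here.
    print('Job %s failed: %s' % (job.id, exc_value))
    # Return True to fall through to the next handler (or RQ's default);
    # return False to stop the handler chain.
    return True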
  • Include django_rq.urls in your urls.py:
# For Django < 2.0
from django.conf.urls import include, url

urlpatterns += [
    url(r'^django-rq/', include('django_rq.urls')),
]

# For Django >= 2.0
from django.urls import include, path

urlpatterns += [
    path('django-rq/', include('django_rq.urls')),
]

Usage

Putting jobs in the queue

Django-RQ allows you to easily put jobs into any of the queues defined in settings.py. It comes with a few utility functions:

  • enqueue - push a job to the default queue:
import django_rq
django_rq.enqueue(func, foo, bar=baz)
  • get_queue - returns a Queue instance:
import django_rq
queue = django_rq.get_queue('high')
queue.enqueue(func, foo, bar=baz)

In addition to the name argument, get_queue also accepts default_timeout, is_async, autocommit, connection and queue_class arguments. For example:

queue = django_rq.get_queue('default', autocommit=True, is_async=True, default_timeout=360)
queue.enqueue(func, foo, bar=baz)

You can provide your own singleton Redis connection object to this function so that it will not create a new connection object for each queue definition. This will help you limit the number of connections to your Redis server. For example:

import django_rq
import redis
redis_cursor = redis.StrictRedis(host='localhost', port=6379, db=0, password='some-password')
high_queue = django_rq.get_queue('high', connection=redis_cursor)
low_queue = django_rq.get_queue('low', connection=redis_cursor)
  • get_connection - accepts a single queue name argument (defaults to "default") and returns a connection to the queue's Redis server:
import django_rq
redis_conn = django_rq.get_connection('high')
  • get_worker - accepts optional queue names and returns a new RQ Worker instance for specified queues (or default queue):
import django_rq
worker = django_rq.get_worker() # Returns a worker for "default" queue
worker.work()
worker = django_rq.get_worker('low', 'high') # Returns a worker for "low" and "high"

@job decorator

To easily turn a callable into an RQ task, you can also use the @job decorator that comes with django_rq:

from django_rq import job

@job
def long_running_func():
    pass
long_running_func.delay() # Enqueue function in "default" queue

@job('high')
def long_running_func():
    pass
long_running_func.delay() # Enqueue function in "high" queue

You can pass in any arguments that RQ's job decorator accepts:

@job('default', timeout=3600)
def long_running_func():
    pass
long_running_func.delay() # Enqueue function with a timeout of 3600 seconds.

It's possible to specify a default value for the result_ttl decorator keyword argument via the DEFAULT_RESULT_TTL setting:

RQ = {
    'DEFAULT_RESULT_TTL': 5000,
}

With this setting, the job decorator will set result_ttl to 5000 unless it's specified explicitly.
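For example, with the setting above (following the same pattern as the earlier examples):

@job('default')
def long_running_func():
    pass
long_running_func.delay() # Enqueued with result_ttl=5000 (the configured default)

@job('default', result_ttl=600)
def long_running_func():
    pass
long_running_func.delay() # Enqueued with result_ttl=600 (explicit value wins)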

Running workers

django_rq provides a management command that starts a worker serving the queues specified as arguments:

python manage.py rqworker high default low

If you want to run rqworker in burst mode, you can pass in the --burst flag:

python manage.py rqworker high default low --burst

If you need to use a custom worker, job or queue class, it is best to use the global settings (see Custom Queue Classes and Custom Job and Worker Classes below). However, it is also possible to override such settings with command line options, as follows.

To use a custom worker class, you can pass in the --worker-class flag with the path to your worker:

python manage.py rqworker high default low --worker-class 'path.to.GeventWorker'

To use a custom queue class, you can pass in the --queue-class flag with the path to your queue class:

python manage.py rqworker high default low --queue-class 'path.to.CustomQueue'

To use a custom job class, provide the --job-class flag with the path to your job class.
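For example (the module path is a placeholder, following the pattern above):

python manage.py rqworker high default low --job-class 'path.to.CustomJob'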

Support for RQ Scheduler

If you have RQ Scheduler installed, you can also use the get_scheduler function to return a Scheduler instance for queues defined in settings.py's RQ_QUEUES. For example:

import django_rq
from datetime import datetime

scheduler = django_rq.get_scheduler('default')
job = scheduler.enqueue_at(datetime(2020, 10, 10), func)

You can also use the management command rqscheduler to start the scheduler:

python manage.py rqscheduler
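The same scheduler instance can also be used for recurring jobs; a minimal sketch, assuming rq-scheduler's schedule() and cron() APIs:

import django_rq
from datetime import datetime

scheduler = django_rq.get_scheduler('default')

# Run func every 60 seconds, repeating indefinitely
scheduler.schedule(scheduled_time=datetime.utcnow(), func=func, interval=60)

# Run func at midnight every day
scheduler.cron('0 0 * * *', func=func)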

Support for django-redis and django-redis-cache

If you have django-redis or django-redis-cache installed, you can instruct django_rq to use the same connection information as your Redis cache. This has two advantages: it's DRY, and it takes advantage of any optimization in your cache setup (like connection pooling or Hiredis).

To configure it, use a dict with the key USE_REDIS_CACHE pointing to the name of the desired cache in your RQ_QUEUES dict. It goes without saying that the chosen cache must exist and use the Redis backend. See your respective Redis cache package docs for configuration instructions. Note that the django-redis-cache ShardedClient is not supported, since it splits the cache over multiple Redis connections.

Here is an example settings fragment for django-redis:

CACHES = {
    'redis-cache': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': 'localhost:6379:1',
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'MAX_ENTRIES': 5000,
        },
    },
}

RQ_QUEUES = {
    'high': {
        'USE_REDIS_CACHE': 'redis-cache',
    },
    'low': {
        'USE_REDIS_CACHE': 'redis-cache',
    },
}

Queue Statistics

django_rq also provides a dashboard to monitor the status of your queues at /django-rq/ (or whatever URL you set in your urls.py during installation).

You can also add a link to this dashboard in /admin by adding RQ_SHOW_ADMIN_LINK = True in settings.py. Be careful, though: this overrides the default admin template, so it may interfere with other apps that modify the default admin template.

These statistics are also available in JSON format via /django-rq/stats.json, which is accessible to staff members. If you need to access this view via other HTTP clients (for monitoring purposes), you can define RQ_API_TOKEN and access it via /django-rq/stats.json/<API_TOKEN>.
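For example, a monitoring script could poll the endpoint like this (the host is a placeholder, and the requests library is assumed to be installed):

import requests

response = requests.get('https://example.com/django-rq/stats.json/<API_TOKEN>')
print(response.json())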

[Screenshot: demo-django-rq-json-dashboard.png]

Additionally, these statistics are accessible from the command line:

python manage.py rqstats
python manage.py rqstats --interval=1  # Refreshes every second
python manage.py rqstats --json  # Output as JSON
python manage.py rqstats --yaml  # Output as YAML

[Screenshot: demo-django-rq-cli-dashboard.gif]

Configuring Sentry

Django-RQ >= 2.0 uses sentry-sdk instead of the deprecated raven library. Sentry should be configured within the Django settings.py as described in the Sentry docs.

You can override the default Django Sentry configuration when running the rqworker command by passing the --sentry-dsn option:

./manage.py rqworker --sentry-dsn=https://*****@sentry.io/222222

This will override any existing Django configuration and reinitialise Sentry, setting the following Sentry options:

{
    'debug': options.get('sentry_debug'),
    'ca_certs': options.get('sentry_ca_certs'),
    'integrations': [RedisIntegration(), RqIntegration(), DjangoIntegration()]
}

Configuring Logging

Starting from version 0.3.3, RQ uses Python's logging module. This means you can easily configure rqworker's logging mechanism in Django's settings.py. For example:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "rq_console": {
            "format": "%(asctime)s %(message)s",
            "datefmt": "%H:%M:%S",
        },
    },
    "handlers": {
        "rq_console": {
            "level": "DEBUG",
            "class": "rq.utils.ColorizingStreamHandler",
            "formatter": "rq_console",
            "exclude": ["%(asctime)s"],
        },
        # If you use sentry for logging
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.handlers.SentryHandler',
        },
    },
    'loggers': {
        "rq.worker": {
            "handlers": ["rq_console", "sentry"],
            "level": "DEBUG"
        },
    }
}

Note: error logging to Sentry is known to be unreliable with RQ when using async transports (the default transport). Please configure Raven to use sync+https:// or requests+https:// transport in settings.py:

RAVEN_CONFIG = {
    'dsn': 'sync+https://public:[email protected]/1',
}

For more info, refer to Raven's documentation.

Custom Queue Classes

By default, every queue will use the DjangoRQ class. If you want to use a custom queue class, you can do so by adding a QUEUE_CLASS option on a per-queue basis in RQ_QUEUES:

RQ_QUEUES = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
        'QUEUE_CLASS': 'module.path.CustomClass',
    }
}

or you can specify a custom class to be used for all your queues in the RQ setting:

RQ = {
    'QUEUE_CLASS': 'module.path.CustomClass',
}

Custom queue classes should inherit from django_rq.queues.DjangoRQ.
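A minimal sketch of such a class (the hook shown is illustrative, not part of django-rq):

# module/path.py
from django_rq.queues import DjangoRQ

class CustomClass(DjangoRQ):
    def enqueue_call(self, *args, **kwargs):
        # Hook point: add logging, metrics, etc. before deferring
        # to the normal enqueue behaviour.
        return super().enqueue_call(*args, **kwargs)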

If you are using more than one queue class (not recommended), be sure to only run workers on queues with the same queue class. For example, if you have two queues defined in RQ_QUEUES and one of them has a custom class specified, you would have to run at least two separate workers, one for each queue.

Custom Job and Worker Classes

Similarly to custom queue classes, global custom job and worker classes can be configured using JOB_CLASS and WORKER_CLASS settings:

RQ = {
    'JOB_CLASS': 'module.path.CustomJobClass',
    'WORKER_CLASS': 'module.path.CustomWorkerClass',
}

A custom job class should inherit from rq.job.Job. It will be used for all jobs if configured.

A custom worker class should inherit from rq.worker.Worker. It will be used for running all workers unless overridden by the rqworker management command's --worker-class option.
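A minimal sketch of a custom worker class (the hook shown is illustrative):

# module/path.py
from rq.worker import Worker

class CustomWorkerClass(Worker):
    def execute_job(self, job, queue):
        # Hook point: run per-job setup/teardown around execution.
        return super().execute_job(job, queue)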

Testing Tip

For an easier testing process, you can run a worker synchronously this way:

from django.test import TestCase
from django_rq import get_worker

class MyTest(TestCase):
    def test_something_that_creates_jobs(self):
        ...                            # Code that queues jobs.
        get_worker().work(burst=True)  # Process all queued jobs, then stop.
        ...                            # Assert that the jobs' effects are present.

Synchronous Mode

You can set the option ASYNC to False to make synchronous operation the default for a given queue. This will cause jobs to execute immediately and on the same thread as they are dispatched, which is useful for testing and debugging. For example, you might add the following after your queue configuration in your settings file:

# ... Logic to set DEBUG and TESTING settings to True or False ...

# ... Regular RQ_QUEUES setup code ...

if DEBUG or TESTING:
    for queue_config in RQ_QUEUES.values():
        queue_config['ASYNC'] = False

Note that setting the is_async parameter explicitly when calling get_queue will override this setting.
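For example, to force synchronous execution for a single call regardless of the queue's ASYNC setting:

import django_rq

queue = django_rq.get_queue('default', is_async=False)
queue.enqueue(func)  # func is executed immediately, in the current thread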

Running Tests

To run django_rq's test suite:

`which django-admin.py` test django_rq --settings=django_rq.tests.settings --pythonpath=.

Deploying on Ubuntu

Create an rqworker service that runs the high, default, and low queues.

sudo vi /etc/systemd/system/rqworker.service

[Unit]
Description=Django-RQ Worker
After=network.target

[Service]
WorkingDirectory=<<path_to_your_project_folder>>
ExecStart=/home/ubuntu/.virtualenv/<<your_virtualenv>>/bin/python \
    <<path_to_your_project_folder>>/manage.py \
    rqworker high default low

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl enable rqworker
sudo systemctl start rqworker
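You can then verify that the worker is running with:

sudo systemctl status rqworker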

Deploying on Heroku

Add django-rq to your requirements.txt file with:

pip freeze > requirements.txt

Update your Procfile to:

web: gunicorn --pythonpath="$PWD/your_app_name" config.wsgi:application

worker: python your_app_name/manage.py rqworker high default low

Commit and re-deploy. Then add your new worker with:

heroku scale worker=1

Django Suit Integration

You can use django-suit-rq to make your admin fit in with the django-suit styles.

Changelog

See CHANGELOG.md.

Comments
  • Passing timeout to queued job

    Is it possible to pass a timeout value to an enqueued job? I believe the default is 180 seconds -- which is short for some long-running jobs.

    Thanks for the great tool!

    opened by ErikEvenson 18
  • AttributeError: 'module' object has no attribute

    I am trying to create a background job with RQ:

    import django_rq

    def _send_password_reset_email_async(email):
        print(email)

    # Django admin action to send reset password emails
    def send_password_reset_email(modeladmin, request, queryset):
        for user in queryset:
            django_rq.enqueue(_send_password_reset_email_async, user.email)
    send_password_reset_email.short_description = 'Send password reset email'


    I keep getting this error:

    Traceback (most recent call last):
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/worker.py", line 568, in perform_job
        rv = job.perform()
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/job.py", line 
    
    495, in perform
        self._result = self.func(*self.args, **self.kwargs)
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/job.py", line 206, in func
        return import_attribute(self.func_name)
      File "/home/lee/Code/cas/venv/lib/python3.4/site-packages/rq/utils.py", line 151, in import_attribute
        return getattr(module, attribute)
    AttributeError: 'module' object has no attribute '_send_password_reset_email_async'
    

    I also posted it to SO earlier http://stackoverflow.com/questions/32733934/rq-attributeerror-module-object-has-no-attribute

    opened by lee-kagiso 16
  • OperationalError: SSL error: decryption failed or bad record mac

    Not sure if this is a django-rq issue or python-rq, so I figured I'd start here...

    My application was working perfectly under Django 1.7.x.

    I updated to Django 1.8.x and my workers blew up.

    WARNING 11:29:35 worker 16516 140021590959936 Moving job to u'failed' queue
    WARNING 2015-08-30 11:29:35,394 worker 16516 140021590959936 Moving job to u'failed' queue
    ERROR 11:29:35 worker 16518 140021590959936 OperationalError: SSL error: decryption failed or bad record mac
    
    Traceback (most recent call last):
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/rq/worker.py", line 568, in perform_job
        rv = job.perform()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/rq/job.py", line 495, in perform
        self._result = self.func(*self.args, **self.kwargs)
      File "/tank/code/uitintranet/intranet/tasks.py", line 131, in backup_router
        router = Router.objects.get(pk=router_pk)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/manager.py", line 127, in manager_method
        return getattr(self.get_queryset(), name)(*args, **kwargs)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 328, in get
        num = len(clone)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 144, in __len__
        self._fetch_all()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 965, in _fetch_all
        self._result_cache = list(self.iterator())
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/query.py", line 238, in iterator
        results = compiler.execute_sql()
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 840, in execute_sql
        cursor.execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
        return super(CursorDebugWrapper, self).execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
        return self.cursor.execute(sql, params)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
        six.reraise(dj_exc_type, dj_exc_value, traceback)
      File "/home/aaron/.virtualenvs/uitintranet/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
        return self.cursor.execute(sql, params)
    OperationalError: SSL error: decryption failed or bad record mac
    

    I checked, and updated rq from 0.5.4 to 0.5.5 and the error continues. If I run a command from my Django shell_plus without '.delay()', it runs fine. If I run a command with '.delay()' or I run a command using the scheduler, I get the error above.

    I noticed a post on StackOverflow that basically says 'close the DB connection at the beginning of each job'. (http://stackoverflow.com/questions/17523912/django-python-rq-databaseerror-ssl-error-decryption-failed-or-bad-record-mac)

    I tested with one of my jobs, and it does fix the problem. But I'm worried about possible side effects, and having to add a few lines of code to hundreds of job definitions.

    opened by darkpixel 16
  • Adding scheduled jobs to UI

    Adds a Scheduled Jobs image and an associated page to show scheduled jobs

    Most of the work was already done in: https://github.com/rq/django-rq/pull/162 so thank you to @sburns

    opened by quantumlink 13
  • If the worker loses the DB connection it is not able to reconnect and you must restart it

    Workers should adopt the same DB connection/reconnection policy that Django uses in its request/response cycle

    Every time a django-rq worker performs a job, the worker should ensure the database ORM connections are in a valid state. This policy is enforced by Django in its request/response cycle via the function close_old_connections

    Look at: https://github.com/django/django/blob/master/django/db/__init__.py

    Similarly, in the django-rq worker, close_old_connections should be called before and after the execution of each job

    (see also #49 for some preliminary thoughts)

    opened by depaolim 12
  • Add view for scheduled jobs

    Attempt on #120

    This will probably break with a huge list of scheduled jobs since there's no pagination. Not sure how to approach that properly though.

    opened by marksteve 12
  • Added DEFAULT_TIMEOUT and RESULT_TTL in settings.RQ_QUEUES

    As stated in #57, this adds support for code like:

    RQ_QUEUES = {
        'default': {
            'HOST': 'localhost',
            'PORT': 6379,
            'DB': 0,
            'PASSWORD': 'some-password',
            'DEFAULT_TIMEOUT': 60,
            'RESULT_TTL': 30,
        },
        'high': {
            'URL': os.getenv('REDISTOGO_URL', 'redis://localhost:6379'), # If you're on Heroku
            'DB': 0,
        },
    }
    

    You can close #57 and try to think of tests...

    opened by lechup 12
  • Test django-rq

    Hello,

    I'm trying to test an app which has some asynchronous tasks using django-rq, and it's not always as easy as it could/should be.

    For the moment I have two use cases - mail tests and signal tests - that work with synchronous jobs but don't work with asynchronous tasks.

    Mail test:

    # in jobs.py for example

    from django.core.mail import get_connection, send_mail
    from django_rq import job

    @job
    def send_async_mail():
        print get_connection()  # returns an instance of locmem.EmailBackend
        send_mail('foo', 'bar', '[email protected]', ['[email protected]'], fail_silently=False)


    def send_custom_mail():
        send_async_mail.delay()

    # in tests.py

    import unittest

    from django.core import mail
    from django.core.mail import get_connection
    from django_rq import get_worker
    from jobs import send_custom_mail

    class AsyncSendMailTestCase(unittest.TestCase):
        def test_async_mail(self):
            send_custom_mail()
            get_worker().work(burst=True)
            print get_connection()  # returns an OTHER instance of locmem.EmailBackend
            print len(mail.outbox)  # returns 0, should return 1
    

    As you see above, the "problem" is that the instance of get_connection() (https://docs.djangoproject.com/en/dev/topics/email/?from=olddocs#django.core.mail.get_connection) is not the same.

    Also, I'm facing (I think) the same problem using mock_signal_receiver https://github.com/dcramer/mock-django/blob/master/mock_django/signals.py in asynchronous jobs. A kind of reference to the receiver is lost with asynchronous jobs. I'm quite sure of it because, when I remove the asynchronous part of my code, it works.

    Any ideas on how to deal with that kind of problem?

    Regards,

    opened by ouhouhsami 12
  • Add support for Django 3.0, drop Django 1.x, Python 2

    Fixes #382


    Summary

    • Add support for Django 3.0
    • Drop support for Django 1.x
    • Drop support for Python 2.x
    • Add Python 3.8 to test matrix

    Version support

    | Django     | 1.x | 2.x | 3.x |
    | :--------- | :-- | :-- | :-- |
    | Python 2.7 | ✖️  | ✖️  | ✖️  |
    | Python 3.5 | ✖️  | 👍 | ✖️^ |
    | Python 3.6 | ✖️  | 👍 | 👍 |
    | Python 3.7 | ✖️  | 👍 | 👍 |
    | Python 3.8 | ✖️  | 👍 | 👍 |

    ^ Django 3.0 does not support Python 3.5 (https://docs.djangoproject.com/en/3.0/faq/install/#what-python-version-can-i-use-with-django)

    Changes

    Aside from supporting Django 3.0, the dropping of Django 1.x and Python 2.x enables a number of changes across the project, including:

    • Replace django.conf.urls with django.urls (and use path, re_path instead of url)
    -from django.conf.urls import url
    +from django.urls import path
    ...
     urlpatterns = [
    -    url(r'^$', views.home, name='home'),
    -    url(r'^admin/', admin.site.urls),
    +    path('', views.home, name='home'),
    +    path('admin/', admin.site.urls),
     ]
    
    • Use of unittest.mock instead of importing mock project
    • Replace admin_static template tags with static

    Additional changes (open to debate):

    1. Consolidate all local imports as relative
    2. Add a Pipfile for aiding with local development
    opened by hugorodgerbrown 11
  • add argument to disable sentry integration

    allow disabling sentry integration in the case where the project uses sentry-sdk, as SENTRY_DSN is not compatible between sentry-sdk and raven (missing secret key, a.k.a. password)

    opened by Bolayniuss 11
  • More consistent usage of queue class / worker class overriding

    What has been changed:

    • Added RQ.WORKER_CLASS setting to complement RQ.QUEUE_CLASS;
    • Changed rqworker management command to use defaults via get_worker_class/get_queue_class instead of defining its own;
    • in queues.py get_queue and get_queue_by_index now have **kwargs to allow passing any additional kwargs to queue class constructor;
    • in worker.py add get_worker_class and use it in other functions;
    • isort in all changed files.
    opened by skirsdeda 10
  • broken state after redis reset (restart)

    hi there, apologies in advance if this is not appropriate or not related to this project itself.

    today we had a minor outage caused by a redis restart: all workers kept running after this disconnect (redis went down for a patch upgrade), but the workers seemingly did not re-register and stopped processing any entries from the queue ever since the restart. killing it all and starting the workers again got things back on track

    is that by design? I'm not very familiar with this project TBH, and I found it very strange that this reconnection wasn't handled gracefully. I'm currently on django-rq = "==2.3.1" rq = "==1.2.2"

    should I be specifically handling this situation with a custom exception handler? I don't have any decent logs, but I assume the worker successfully reconnected to redis; it just did not process the queue anymore. thanks in advance

    edit: before posting it, I found this https://github.com/rq/rq/pull/1387 would that fix the issue reported above? I'm keeping this issue because I think it might help others facing the same problem.

    opened by enapupe 0
  • Worker sometimes loses connection and needs to be restarted with psycopg2

    Similar to #216, my worker sometimes loses the connection and must be restarted

    [...]
    cursor = self.connection.cursor()
    psycopg2.InterfaceError: connection already closed
    

    My env:

    python==3.8.13
    Django==3.2.16
    django-rq==2.6.0
    psycopg2==2.9.5
    redis==4.4.0rc4
    rq==1.11.1
    
    opened by Jeanbouvatt 2
  • “python_requires” should be set with “>=3.4”, as django-rq is not compatible with all Python versions.

    Currently, the keyword argument python_requires of setup() is not set, and thus it is assumed that this distribution is compatible with all Python versions. However, I found it is not compatible with Python 2. My local Python version is 2.7, and I encounter the following error when executing “pip install django-rq”

    Collecting django-rq
      Downloading django_rq-2.5.1-py2.py3-none-any.whl (48 kB)
         |████████████████████████████████| 48 kB 424 kB/s 
    Collecting rq>=1.2
      Downloading rq-1.3.0-py2.py3-none-any.whl (59 kB)
         |████████████████████████████████| 59 kB 594 kB/s 
    ERROR: Could not find a version that satisfies the requirement django>=2.0 (from django-rq) (from versions: 1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1.11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1.11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29)
    ERROR: No matching distribution found for django>=2.0 (from django-rq)
    
    

    Dependencies of this distribution are listed as follows:

    'django>=2.0', 
    'rq>=1.2',
    'redis>=3'
    

    I found that 'django>=2.0' requires Python>=3.4, which results in installation failure of django-rq in Python 2.7.

    Way to fix: modify setup() in setup.py, add python_requires keyword argument:

    setup(…
         python_requires=">=3.4",
         …)
    

    Thanks for your attention. Best regards, PyVCEchecker

    opened by PyVCEchecker 0
  • Sporadic test failures with sqlite: no such table: django_rq_queue

    We have a big test suite in our django project. We use django-rq. Sometimes, when we run the tests (usually on our gitlab ci), the tests fail with the error: django.db.utils.OperationalError: no such table: django_rq_queue

    They don't always fail; it's more a 10% to 20% chance of failure.

    Any ideas what's going on?

    opened by finsterwalder 0
  • Job decorator ignored when using enqueue_in

    Hi! I have a little problem with the @job decorator: it seems that the decorator is ignored when using enqueue_in. Below is a little example tasks.py

    import time
    from datetime import timedelta

    from django_rq import get_scheduler, job

    @job("default", timeout=185)
    def test():
        time.sleep(182)
        print("Slept 182s")
        get_scheduler().enqueue_in(timedelta(seconds=1), test)
    

    Logs from the worker after using delay() on the test job. TL;DR: after the first delay everything works fine and the timeout works, but when the next job is queued, the timeout param declared in the job decorator is ignored.

    worker-1 | [2022-09-26 19:05:33,422] default: tasks.test() (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | default: tasks.test() (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | Sleeped 190s
    worker-1 | [2022-09-26 19:08:35,533] default: Job OK (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | default: Job OK (bcb6b0e2-f30b-4de4-a2fb-219d05d38da5)
    worker-1 | [2022-09-26 19:08:35,534] Result is kept for 500 seconds
    worker-1 | Result is kept for 500 seconds
    worker-1 | [2022-09-26 19:08:36,601] default: tasks.test() (ef8e854f-b412-4b50-8ba0-5d650b721cf0)
    worker-1 | default: tasks.test() (ef8e854f-b412-4b50-8ba0-5d650b721cf0)
    worker-1 | [2022-09-26 19:11:36,620] Traceback (most recent call last):
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/worker.py", line 1068, in perform_job
    worker-1 |     rv = job.perform()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 847, in perform
    worker-1 |     self._result = self._execute()  
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/job.py", line 870, in _execute     
    worker-1 |     result = self.func(*self.args, **self.kwargs)    
    worker-1 |   File "/code/app/pipelines/tasks.py", line 21, in test
    worker-1 |     time.sleep(182)
    worker-1 |   File "/venv/lib/python3.10/site-packages/rq/timeouts.py", line 61, in handle_death_penalty
    worker-1 |     raise self._exception('Task exceeded maximum timeout value '   
    worker-1 | rq.timeouts.JobTimeoutException: Task exceeded maximum timeout value (180 seconds)  
    worker-1 | (the same traceback repeats three more times)
    
    opened by krzysieqq 0
Releases (v2.6.0)
  • v2.6.0(Nov 5, 2022)

    • Added --max-jobs argument to rqworker management command. Thanks @arpit-goel!
    • Remove job from ScheduledJobRegistry if a scheduled job is enqueued from admin. Thanks @robertaistleitner!
    • Minor code cleanup. Thanks @reybog90!
  • v2.5.1(Nov 22, 2021)

  • v2.5.0(Nov 17, 2021)

    • Better integration with Django admin, along with a new Access admin page permission that you can selectively grant to users. Thanks @haakenlid!
    • Worker count is now updated every time you view workers for that specific queue. Thanks @cgl!
    • Add the capability to pass arbitrary Redis client kwargs. Thanks @juanjgarcia!
    • Always escape text when rendering job arguments. Thanks @rhenanbartels!
    • Add @never_cache decorator to all Django-RQ views. Thanks @Cybernisk!
    • SSL_CERT_REQS argument should also be passed to Redis client even when Redis URL is used. Thanks @paltman!
  • v2.4.0(Nov 8, 2020)

  • v2.3.2(May 14, 2020)

  • v2.3.1(Apr 10, 2020)

    • Added --with-scheduler argument to rqworker management command. Thanks @stlk!
    • Fixed a bug where opening job detail would crash if job.dependency no longer exists. Thanks @selwin!
  • v2.3.0(Feb 9, 2020)

    • Support for RQ's new ScheduledJobRegistry. Thanks @Yolley!
    • Improve performance when displaying pages showing a large number of jobs by using Job.fetch_many(). Thanks @selwin!
    • django-rq will now automatically cleanup orphaned worker keys in job registries. Thanks @selwin!
    • Site name now properly displayed in Django-RQ admin pages. Thanks @tom-price!
    • NoSuchJobErrors are now handled properly when requeuing all jobs. Thanks @thomasmatecki!
    • Support for displaying jobs with names containing $. Thanks @gowthamk63!