Mr. Queue - A distributed worker task queue in Python using Redis & gevent

Overview

MRQ

MRQ is a distributed task queue for Python built on top of MongoDB, Redis and gevent.

Full documentation is available on readthedocs

Why?

MRQ is an opinionated task queue. It aims to be simple and beautiful like RQ while having performance close to Celery's.

MRQ was first developed at Pricing Assistant and its initial feature set matches the needs of worker queues with heterogeneous jobs (IO-bound & CPU-bound, lots of small tasks & a few large ones).

Main Features

  • Simple code: We originally switched from Celery to RQ because Celery's code was incredibly complex and obscure (Slides). MRQ should be as easy to understand as RQ and even easier to extend.
  • Great dashboard: Get visibility into and control over everything: queued jobs, current jobs, worker status, ...
  • Per-job logs: Get the log output of each task separately in the dashboard
  • Gevent worker: IO-bound tasks can be done in parallel in the same UNIX process for maximum throughput
  • Supervisord integration: CPU-bound tasks can be split across several UNIX processes with a single command-line flag
  • Job management: You can retry, requeue, cancel jobs from the code or the dashboard.
  • Performance: Bulk job queueing, easy job profiling
  • Easy configuration: Every aspect of MRQ is configurable through command-line flags or a configuration file (see the sketch after this list)
  • Job routing: Like Celery, jobs can have default queues, timeout and ttl values.
  • Builtin scheduler: Schedule tasks by interval or by time of day.
  • Strategies: Sequential or parallel dequeue order, as well as a burst mode for one-time or periodic batch jobs.
  • Subqueues: Simple command-line pattern for dequeuing multiple subqueues, with auto-discovery on the worker side.
  • Thorough testing: Edge-cases like worker interrupts, Redis failures, ... are tested inside a Docker container.
  • Greenlet tracing: See how much time was spent in each greenlet to debug CPU-intensive jobs.
  • Integrated memory leak debugger: Track down jobs leaking memory and find the leaks with objgraph.
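
The configuration file is plain Python. As a rough illustration of the "Easy configuration" and "Builtin scheduler" items above, here is a minimal sketch of what an mrq-config.py could look like; the setting names used here (GREENLETS, QUEUES, SCHEDULER_TASKS) and the scheduler entry format are assumptions to double-check against the documentation on readthedocs:

# mrq-config.py -- illustrative sketch only; verify setting names against the docs.

# Assumed setting: number of greenlets each worker runs in parallel.
GREENLETS = 10

# Assumed setting: queues the worker dequeues from by default.
QUEUES = ["fetches"]

# Assumed setting: built-in scheduler, queueing a task at a fixed interval.
SCHEDULER_TASKS = [
    {
        "path": "tasks.Fetch",
        "params": {"url": "http://www.google.com"},
        "interval": 3600  # seconds
    }
]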

Dashboard Screenshots

Job view

Worker view

Get Started

This 5-minute tutorial will show you how to run your first jobs with MRQ.

Installation

  • Make sure you have installed the dependencies: Redis and MongoDB
  • Install MRQ with pip install mrq
  • Start a mongo server with mongod &
  • Start a redis server with redis-server &

Write your first task

Create a new directory and write a simple task in a file called tasks.py:

$ mkdir test-mrq && cd test-mrq
$ touch __init__.py
$ vim tasks.py
from contextlib import closing

from mrq.task import Task
import urllib2


class Fetch(Task):

    def run(self, params):
        # urllib2 responses are not context managers in Python 2,
        # so wrap the call in contextlib.closing before using "with".
        with closing(urllib2.urlopen(params["url"])) as f:
            return len(f.read())

Run it synchronously

You can now run it from the command line using mrq-run:

$ mrq-run tasks.Fetch url http://www.google.com

2014-12-18 15:44:37.869029 [DEBUG] mongodb_jobs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:44:37.880115 [DEBUG] mongodb_jobs: ... connected.
2014-12-18 15:44:37.880305 [DEBUG] Starting tasks.Fetch({'url': 'http://www.google.com'})
2014-12-18 15:44:38.158572 [DEBUG] Job None success: 0.278229s total
17655

Run it asynchronously

Let's queue the same task 3 times with different parameters:

$ mrq-run --queue fetches tasks.Fetch url http://www.google.com &&
  mrq-run --queue fetches tasks.Fetch url http://www.yahoo.com &&
  mrq-run --queue fetches tasks.Fetch url http://www.wordpress.com

2014-12-18 15:49:05.688627 [DEBUG] mongodb_jobs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:49:05.705400 [DEBUG] mongodb_jobs: ... connected.
2014-12-18 15:49:05.729364 [INFO] redis: Connecting to Redis at 127.0.0.1...
5492f771520d1887bfdf4b0f
2014-12-18 15:49:05.957912 [DEBUG] mongodb_jobs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:49:05.967419 [DEBUG] mongodb_jobs: ... connected.
2014-12-18 15:49:05.983925 [INFO] redis: Connecting to Redis at 127.0.0.1...
5492f771520d1887c2d7d2db
2014-12-18 15:49:06.182351 [DEBUG] mongodb_jobs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:49:06.193314 [DEBUG] mongodb_jobs: ... connected.
2014-12-18 15:49:06.209336 [INFO] redis: Connecting to Redis at 127.0.0.1...
5492f772520d1887c5b32881

You can see that instead of executing the tasks and returning their results right away, mrq-run has added them to the queue named fetches and printed their IDs.

Now start MRQ's dashboard with mrq-dashboard & and go check your newly created queue and jobs on localhost:5555.

They are ready to be dequeued by a worker. Start one with mrq-worker and follow it on the dashboard as it executes the queued jobs in parallel.

$ mrq-worker fetches

2014-12-18 15:52:57.362209 [INFO] Starting Gevent pool with 10 worker greenlets (+ report, logs, adminhttp)
2014-12-18 15:52:57.388033 [INFO] redis: Connecting to Redis at 127.0.0.1...
2014-12-18 15:52:57.389488 [DEBUG] mongodb_jobs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:52:57.390996 [DEBUG] mongodb_jobs: ... connected.
2014-12-18 15:52:57.391336 [DEBUG] mongodb_logs: Connecting to MongoDB at 127.0.0.1:27017/mrq...
2014-12-18 15:52:57.392430 [DEBUG] mongodb_logs: ... connected.
2014-12-18 15:52:57.523329 [INFO] Fetching 1 jobs from ['fetches']
2014-12-18 15:52:57.567311 [DEBUG] Starting tasks.Fetch({u'url': u'http://www.google.com'})
2014-12-18 15:52:58.670492 [DEBUG] Job 5492f771520d1887bfdf4b0f success: 1.135268s total
2014-12-18 15:52:57.523329 [INFO] Fetching 1 jobs from ['fetches']
2014-12-18 15:52:57.567747 [DEBUG] Starting tasks.Fetch({u'url': u'http://www.yahoo.com'})
2014-12-18 15:53:01.897873 [DEBUG] Job 5492f771520d1887c2d7d2db success: 4.361895s total
2014-12-18 15:52:57.523329 [INFO] Fetching 1 jobs from ['fetches']
2014-12-18 15:52:57.568080 [DEBUG] Starting tasks.Fetch({u'url': u'http://www.wordpress.com'})
2014-12-18 15:53:00.685727 [DEBUG] Job 5492f772520d1887c5b32881 success: 3.149119s total
2014-12-18 15:52:57.523329 [INFO] Fetching 1 jobs from ['fetches']
2014-12-18 15:52:57.523329 [INFO] Fetching 1 jobs from ['fetches']

You can interrupt the worker with Ctrl-C once it is finished.

Going further

This was a preview of the very basic features of MRQ. What makes it actually useful is that:

  • You can run multiple workers in parallel. Each worker can also run multiple greenlets in parallel.
  • Workers can dequeue from multiple queues
  • You can queue jobs from your Python code to avoid using mrq-run from the command-line (see the sketch below).

These features will be demonstrated in a future example of a simple web crawler.
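
Here is the sketch referenced above for queueing jobs from Python code. It is based on the approach described in the comments further down; the set_current_config() setup and the exact signatures of queue_job / queue_jobs (including the queue keyword argument) are assumptions to verify against the documentation:

import mrq.config
import mrq.context
import mrq.job

# Set up the MRQ context once per process before queueing jobs
# (the same setup mrq-run performs, as noted in the comments further down).
mrq.context.set_current_config(mrq.config.get_config())

# Queue a single job on the "fetches" queue; its id is returned immediately.
job_id = mrq.job.queue_job("tasks.Fetch", {"url": "http://www.google.com"}, queue="fetches")

# Bulk queueing works the same way with a list of params dicts (assumed signature).
job_ids = mrq.job.queue_jobs("tasks.Fetch", [{"url": "http://www.yahoo.com"}], queue="fetches")

# Poll a job's status and result by id.
print(mrq.job.get_job_result(job_id)["status"])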

More

Full documentation is available on readthedocs

Comments
  • Experimental python3 support

    First try on implementing Python3 support. This was done using futurize (http://python-future.org/) and then manually fixing a few issues.

    Changes made:

    Adds python-future as a dependency, which only supports Python >= 2.6. Most of the small changes should be pretty straightforward, e.g. the Python 3 style print function and no .iteritems() in Python 3.

    Probably the most problematic area is strings. Unicode strings were converted to use str from the python-future library, and now that Python 3 has a separate bytes type there are a few places where those are converted. There are also a few places that explicitly check whether the Python version is 3 and do different things, e.g. the logger just skips encode/decode in Python 3; there is probably a better way to do it.

    Redis is now using the decode_responses flag, which may or may not be a good solution, but at least it allowed me to get it working without decoding all the responses manually.

    Also changed urllib2 to use urllib through python-future. I also needed to add monkey.patch_all(subprocess=False) to mrq_worker to get that running fine. I didn't dig too deep into this monkeypatching stuff, so I'm not sure if it is a problem.

    There are a few test cases failing, but I couldn't get them to pass even with master. There is also no automated test runner for Python 3 (I was testing locally with a modified Dockerfile).

    requirements.txt also now contains different packages based on the Python version, so installing with a recent enough pip works, but I have no idea about setuptools.

    Any feedback on how I could improve this would be great.

    (I have been running this with some "real" Python 3 jobs which seem to work, but there are most likely some corner cases I might be missing; I also didn't run any benchmarks, so I have no idea about the performance.)

    opened by tume 25
  • Started jobs but no workers running

    Hi - we're seeing a condition where we have some number of jobs in the "started" state, but no workers are running (they have all exited due to "burst mode" and the input queue being empty).

    Any ideas what could cause this? For instance, what if the worker consumes a job but then quits unexpectedly? And any suggestions on how to defend against it, like requeuing a job after some timeout, or via some manual call from the top-level script?

    Thanks!

    opened by mark-99 12
  • Ease run of jobs

    Instead of queue_job(path, data), we should consider using something similar to the Celery style: task.delay(data) or task.enqueue(data).

    Also, since a task is intended to be just one function, maybe we should do it the Celery way: create a decorator and write a simple function, just as a shortcut. It saves one indent and makes the code easier to understand.

    opened by iorlas 12
  • Why mongo?

    Not an issue really, just wondering: why does mrq use mongodb?

    I mean, e.g. celery and python-rq use only one broker (redis or rmq) without additional databases.

    I believe there is a pretty good reason for using two storage backends, so I wanted to ask about it :)

    Thanks in advance.

    opened by DataGreed 10
  • mrq-dashboard initialization fails

    Hi all, I installed mrq as the README says and everything went well, but trying to launch the dashboard results in the following error. Does anyone know how to fix it? I'd like to use the dashboard alone. The Redis server and MongoDB server are running. I'm using the latest Python 2.7 interpreter.

    Thanks in advance...

    Traceback (most recent call last):
      File "/home/tom3kk/.conda/envs/py27_mrq/bin/mrq-dashboard", line 7, in <module>
        from mrq.dashboard.app import main
      File "/home/tom3kk/.conda/envs/py27_mrq/lib/python2.7/site-packages/mrq/dashboard/app.py", line 29, in <module>
        set_current_config(cfg)
      File "/home/tom3kk/.conda/envs/py27_mrq/lib/python2.7/site-packages/mrq/context.py", line 82, in set_current_config
        patch_import()
      File "/home/tom3kk/.conda/envs/py27_mrq/lib/python2.7/site-packages/mrq/monkey.py", line 127, in patch_import
        import gevent.coros
      File "/home/tom3kk/.conda/envs/py27_mrq/lib/python2.7/site-packages/gevent/builtins.py", line 93, in __import__
        result = _import(*args, **kwargs)
    ImportError: No module named coros

    opened by ThetomekK 8
  • Dashboard should have settings for host and port

    Since I use docker containers for development via boot2docker, it is common to force the application host to 0.0.0.0. But mrq-dashboard runs just like...

    run_simple('', int(os.environ.get("PORT", 5555)), app)
    

    It is impossible for me to even look at the dashboard. Yes, I can create a container with a proxy setup like nginx, but really.

    Dashboard 
    opened by iorlas 8
  • Information about the MrQ status

    Hello PricingAssistant,

    We are implementing task-based processing for our infrastructure management API requests and are looking for Python queue frameworks. At first, we were interested in using Celery+Redis for it. Then we remembered that RQ was a lighter alternative. After digging into the subject, we discovered your project.

    From what your README says, it looks like the proper solution for us: simple like RQ, stores results in a MongoDB instance, and comes with a beautiful dashboard.

    So my question is: can we try to use MrQ for our infrastructure API? Is it easy to start with?

    opened by frankrousseau 8
  • First impressions good, but fails under load

    So first the good: I've ported the basic functionality of our job system from RQ to MRQ and generally it went well, at least so far. The dashboard is certainly a huge improvement. The burst mode you added also works well (note that as such I'm using the latest code from GitHub, rather than the pip install version).

    Some observations/suggestions:

    • I couldn't actually find any example of queueing jobs from Python (vs the command line). Eventually I reverse-engineered mrq-run.py and figured out I had to do this to get job.queue_jobs() to work: mrq.context.set_current_config(mrq.config.get_config())
    • Python being untyped, it would be good to document params better. E.g. in RQ enqueue() takes a callable, but MRQ job.queue_job(s) takes a string. This isn't immediately obvious.
    • Dashboard frequently reads negative jobs/second.

    However, the big issue is that the system starts to fail above a certain number of workers, somewhere in the region of 1000. It seems like database updates fail or time out and aren't retried. In some cases the job fails with an exception, but mostly just the status is wrong, i.e. it gets permanently stuck in either the "queued" or "started" state in the DB. Note that workers are still running (variously in "wait" or "full" status) but the queues are not consumed, i.e. there's an inconsistency between the worker process and/or mongo and/or redis which never resolves (I've just been wiping mongo+redis between tests).

    'Failed' is perhaps fixable with the retry operations, but the stuck jobs where presumably some other part of the socket communications failed/timed out are more of an issue.

    Here's a code snippet:

        for job_id in job_ids:
            job_res = mrq.job.get_job_result(job_id)
            if job_res["status"] == "queued": num_queued += 1
            elif job_res["status"] == "started": num_started += 1
            elif job_res["status"] == "success": num_finished += 1
            else: num_failed += 1
    
        log("num_queued: %d, num_started: %d, num_finished:%d, num_failed: %d" % 
                (num_queued, num_started, num_finished, num_failed))
    

    Here's the final output for 50k jobs (number of jobs is less important than number of workers):

        09:19:26.906279: num_queued: 32, num_started: 8, num_finished:49957, num_failed: 3

    Different runs give different numbers of stuck or failed jobs (sometimes it completes OK). With fewer workers (<500) it's always OK: it goes to zero queued+started, and my script exits. Note the stats also agree with the dashboard "Statuses" tab.

    For the failed jobs, here's the backtrace:

    Traceback (most recent call last): 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/mrq/worker.py", line 540, in perform_job 
        job.perform() 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/mrq/job.py", line 279, in perform 
        self.save_success(result) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/mrq/job.py", line 364, in save_success 
        self._save_status("success", updates) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/mrq/job.py", line 437, in _save_status 
        }, {"$set": db_updates}, w=w, j=j, manipulate=False) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/collection.py", line 1956, in update 
        with self._socket_for_writes() as sock_info: 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/contextlib.py", line 17, in __enter__ 
        return self.gen.next() 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/mongo_client.py", line 665, in _get_socket 
        with server.get_socket(self.__all_credentials) as sock_info: 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/contextlib.py", line 17, in __enter__ 
        return self.gen.next() 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/server.py", line 102, in get_socket 
        with self.pool.get_socket(all_credentials, checkout) as sock_info: 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/contextlib.py", line 17, in __enter__ 
        return self.gen.next() 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/pool.py", line 509, in get_socket 
        sock_info = self._get_socket_no_auth() 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/pool.py", line 543, in _get_socket_no_auth 
        sock_info, from_pool = self.connect(), False 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/pool.py", line 475, in connect 
        DEFAULT_CODEC_OPTIONS)) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/network.py", line 48, in command 
        response = receive_message(sock, 1, request_id) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/network.py", line 60, in receive_message 
        header = _receive_data_on_socket(sock, 16) 
      File "/apps/infrafs1/matkinson/venv-infra/lib/python2.7/site-packages/pymongo/network.py", line 84, in _receive_data_on_socket 
        raise AutoReconnect("connection closed") 
    AutoReconnect: connection closed 
    

    Any ideas? Thanks.

    opened by mark-99 7
  • Workers failing to report Queues

    Whilst upgrading to 0.1.10 we noticed an issue in the dashboard where queues are not showing in the workers tab.

    I can confirm this information isn't in the database:

    db.mrq_workers.find({}, {name:1, config:1})
    { "_id" : ObjectId("54e5d55f2431ff5cc80a00d7"), "config" : { "max_jobs" : 0, "processes" : 0, "name" : "", "queues" : [  "st_queue_a" ], "greenlets" : 20, "scheduler" : false }, "name" : "ip-" }
    { "_id" : ObjectId("54e5d5682431ff5cf648adb2"), "config" : { "max_jobs" : 0, "processes" : 0, "name" : "", "queues" : [  "st_queue_b" ], "greenlets" : 8, "scheduler" : false }, "name" : "ip-" }
    { "_id" : ObjectId("54e5d5882431ff5d3f2ba1c9"), "config" : { "max_jobs" : 0, "processes" : 0, "name" : "", "queues" : [  "st_queue_c" ], "greenlets" : 8, "scheduler" : false }, "name" : "ip-" }
    { "_id" : ObjectId("54e5d55c2431ff5cb9bc719a"), "config" : { "max_jobs" : 0, "processes" : 0, "scheduler" : false, "greenlets" : 50, "name" : "" }, "name" : "ip-" }
    { "_id" : ObjectId("54e5d5642431ff5cdef8ce38"), "config" : { "max_jobs" : 0, "processes" : 0, "scheduler" : false, "greenlets" : 30, "name" : "" }, "name" : "ip-" }
    

    We run a custom worker script instead of mrq-worker as it allows us to work with our own config setup. It's almost identical to mrq-worker with a couple of tweaks:

    #!/usr/bin/env python
    # -*- coding:utf-8 -*-
    import os
    
    # Needed to make getaddrinfo() work in pymongo on Mac OS X
    # Docs mention it's a better choice for Linux as well.
    # This must be done asap in the worker
    if "GEVENT_RESOLVER" not in os.environ:
        os.environ["GEVENT_RESOLVER"] = "ares"
    
    from gevent import monkey
    monkey.patch_all()
    
    import config
    import sys
    
    sys.path.insert(0, os.getcwd())
    
    from mrq.utils import load_class_by_path
    from mrq.context import set_current_config
    
    ....
    
        worker_class = load_class_by_path(cfg["worker_class"])
        set_current_config(cfg)
    
        w = worker_class()
        exitcode = w.work_loop()
        sys.exit(exitcode)
    

    I have verified cfg is valid and contains 'queues', i.e. cfg['queues'] = ["queue_names"]. We have also changed the redis_prefix and mongo db name to ensure a fresh test for this, but the problem persists.


    Any help would be appreciated on this :)

    Update: From tests, it looks like the workers that are the busiest don't get updated... Will keep looking

    opened by eddie 7
  • Adding jobs to queue from code

    Hi! The docs say:

    You can queue jobs from your Python code to avoid using mrq-run from the command-line.

    But in the crawling example it uses the command-line.

    How could I add jobs to a queue without the command-line?

    Thanks!

    bug 
    opened by pabloriera 6
  • LuaLock name error

    I get this error:

    D:\myproject>mrq-run a.Fetch url http://www.google.com
    Monkey-patching MongoDB methods...
    Traceback (most recent call last):
      File "c:\miniconda3\envs\dgpy-dev\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "c:\miniconda3\envs\dgpy-dev\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "C:\miniconda3\envs\dgpy-dev\Scripts\mrq-run.exe\__main__.py", line 9, in <module>
      File "c:\miniconda3\envs\dgpy-dev\lib\site-packages\mrq\bin\mrq_run.py", line 56, in main
        worker_class = load_class_by_path(cfg["worker_class"])
      File "c:\miniconda3\envs\dgpy-dev\lib\site-packages\mrq\utils.py", line 99, in __missing__
        ret = self[key] = f(key)
      File "c:\miniconda3\envs\dgpy-dev\lib\site-packages\mrq\utils.py", line 113, in load_class_by_path
        taskpath)),
      File "c:\miniconda3\envs\dgpy-dev\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "c:\miniconda3\envs\dgpy-dev\lib\site-packages\mrq\worker.py", line 18, in <module>
        from redis.lock import LuaLock
    ImportError: cannot import name 'LuaLock' from 'redis.lock' (c:\miniconda3\envs\dgpy-dev\lib\site-packages\redis\lock.py)
    
    opened by jessekrubin 5
  • docs: fix simple typo, instanciate -> instantiate

    There is a small typo in docs/jobs.md.

    Should read instantiate rather than instanciate.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Bump lxml from 3.4.2 to 4.9.1 in /examples/simple_crawler

    Bumps lxml from 3.4.2 to 4.9.1.

    Changelog

    Sourced from lxml's changelog.

    4.9.1 (2022-07-01)

    Bugs fixed

    • A crash was resolved when using iterwalk() (or canonicalize()) after parsing certain incorrect input. Note that iterwalk() can crash on valid input parsed with the same parser after failing to parse the incorrect input.

    4.9.0 (2022-06-01)

    Bugs fixed

    • GH#341: The mixin inheritance order in lxml.html was corrected. Patch by xmo-odoo.

    Other changes

    • Built with Cython 0.29.30 to adapt to changes in Python 3.11 and 3.12.

    • Wheels include zlib 1.2.12, libxml2 2.9.14 and libxslt 1.1.35 (libxml2 2.9.12+ and libxslt 1.1.34 on Windows).

    • GH#343: Windows-AArch64 build support in Visual Studio. Patch by Steve Dower.

    4.8.0 (2022-02-17)

    Features added

    • GH#337: Path-like objects are now supported throughout the API instead of just strings. Patch by Henning Janssen.

    • The ElementMaker now supports QName values as tags, which always override the default namespace of the factory.

    Bugs fixed

    • GH#338: In lxml.objectify, the XSI float annotation "nan" and "inf" were spelled in lower case, whereas XML Schema datatypes define them as "NaN" and "INF" respectively.

    ... (truncated)

    Commits
    • d01872c Prevent parse failure in new test from leaking into later test runs.
    • d65e632 Prepare release of lxml 4.9.1.
    • 86368e9 Fix a crash when incorrect parser input occurs together with usages of iterwa...
    • 50c2764 Delete unused Travis CI config and reference in docs (GH-345)
    • 8f0bf2d Try to speed up the musllinux AArch64 build by splitting the different CPytho...
    • b9f7074 Remove debug print from test.
    • b224e0f Try to install 'xz' in wheel builds, if available, since it's now needed to e...
    • 897ebfa Update macOS deployment target version from 10.14 to 10.15 since 10.14 starts...
    • 853c9e9 Prepare release of 4.9.0.
    • d3f77e6 Add a test for https://bugs.launchpad.net/lxml/+bug/1965070 leaving out the a...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Corrected spelling issues in docs

    While reading the documents, noticed a spelling typo.

    Did a sweep of the rest of the markdown documents and made a couple of other corrections.

    Please let me know if there are additional actions to be taken as part of this pull request. There's not really any contribution process I could find in the docs.

    opened by jdonboch 0
  • Dashboard: pymongo.errors.OperationFailure: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.

    I'm getting this error when I try to browse the "statuses" page in the dashboard when I have a large number of jobs (~200,000).

    172.17.0.1 - - [17/Feb/2021 17:13:11] "GET /api/datatables/status?sEcho=1&iColumns=4&sColumns=&iDisplayStart=0&iDisplayLength=20&mDataProp_0=function&mDataProp_1=function&mDataProp_2=function&mDataProp_3=function&sSearch=&bRegex=false&sSearch_0=&bRegex_0=false&bSearchable_0=true&sSearch_1=&bRegex_1=false&bSearchable_1=true&sSearch_2=&bRegex_2=false&bSearchable_2=true&sSearch_3=&bRegex_3=false&bSearchable_3=true&iSortCol_0=0&sSortDir_0=asc&iSortingCols=1&bSortable_0=true&bSortable_1=true&bSortable_2=true&bSortable_3=true HTTP/1.1" 500 -
    Error on request:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/werkzeug/serving.py", line 304, in run_wsgi
        execute(self.server.app)
      File "/usr/local/lib/python3.6/dist-packages/werkzeug/serving.py", line 292, in execute
        application_iter = app(environ, start_response)
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2463, in __call__
        return self.wsgi_app(environ, start_response)
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2449, in wsgi_app
        response = self.handle_exception(e)
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1866, in handle_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2446, in wsgi_app
        response = self.full_dispatch_request()
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1951, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1820, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1949, in full_dispatch_request
        rv = self.dispatch_request()
      File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1935, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/usr/local/lib/python3.6/dist-packages/mrq/dashboard/app.py", line 82, in api_jobstatuses
        {"$group": {"_id": "$status", "jobs": {"$sum": 1}}}
      File "/usr/local/lib/python3.6/dist-packages/mrq/monkey.py", line 99, in mrq_monkey_patched
        ret = base_method(self, *args, **kwargs)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/collection.py", line 2458, in aggregate
        **kwargs)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/collection.py", line 2377, in _aggregate
        retryable=not cmd._performs_write)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/mongo_client.py", line 1471, in _retryable_read
        return func(session, server, sock_info, slave_ok)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/aggregation.py", line 148, in get_cursor
        user_fields=self._user_fields)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/pool.py", line 694, in command
        exhaust_allowed=exhaust_allowed)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/network.py", line 161, in command
        parse_write_concern_error=parse_write_concern_error)
      File "/home/thomas/.local/lib/python3.6/site-packages/pymongo/helpers.py", line 160, in _check_command_response
        raise OperationFailure(errmsg, code, response, max_wire_version)
    pymongo.errors.OperationFailure: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting., full error: {'ok': 0.0, 'errmsg': 'Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.', 'code': 16819, 'codeName': 'Location16819'}
    
    opened by tfriedel 1
  • add support to delay a job and enqueue it N seconds later

    Add support to delay a job and enqueue it N seconds later. Those jobs will have the delayed status. E.g., to delay a job by 1 hour:

    queue_job(main_task_path, params, delay=3600)
    
    opened by orlandobcrra 4
Releases(0.9.10)
Owner
Pricing Assistant
We optimize the pricing of online stores