Overview

Flower

Flower is a web-based tool for monitoring and administering Celery clusters.

Features

  • Real-time monitoring using Celery Events

    • Task progress and history
    • Ability to show task details (arguments, start time, runtime, and more)
    • Graphs and statistics
  • Remote Control

    • View worker status and statistics
    • Shutdown and restart worker instances
    • Control worker pool size and autoscale settings
    • View and modify the queues a worker instance consumes from
    • View currently running tasks
    • View scheduled tasks (ETA/countdown)
    • View reserved and revoked tasks
    • Apply time and rate limits
    • Configuration viewer
    • Revoke or terminate tasks
  • Broker monitoring

    • View statistics for all Celery queues
    • Queue length graphs
  • HTTP API

  • Basic Auth and Google, GitHub, GitLab, and Okta OAuth

  • Prometheus integration

Installation

PyPI version:

$ pip install flower

Development version:

$ pip install https://github.com/mher/flower/zipball/master

Usage

Launch the server and open http://localhost:5555:

$ flower --port=5555

Launch from celery:

$ celery flower -A proj --address=127.0.0.1 --port=5555

Launch using Docker:

$ docker run -p 5555:5555 mher/flower

Launch with a Unix socket file:

$ flower --unix-socket=/tmp/flower.sock

Broker URL and other configuration options can be passed through the standard Celery options:

$ celery flower -A proj --broker=amqp://guest:guest@localhost:5672//
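Options can also be kept in a configuration file; by default Flower looks for flowerconfig.py (the --conf option points it elsewhere). A minimal sketch, assuming variable names that mirror the command-line flags:

```python
# flowerconfig.py -- a minimal sketch; variable names mirror the CLI flags
port = 5555
address = "127.0.0.1"
basic_auth = ["user:password"]  # enables HTTP Basic Auth (user:password pairs)
```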

API

The Flower API lets you manage the cluster via a REST API, call tasks, and receive task events in real time via WebSockets.

For example, you can restart a worker's pool:

$ curl -X POST http://localhost:5555/api/worker/pool/restart/myworker

Or call a task:

$ curl -X POST -d '{"args":[1,2]}' http://localhost:5555/api/task/async-apply/tasks.add

Or terminate an executing task:

$ curl -X POST -d 'terminate=True' http://localhost:5555/api/task/revoke/8a4da87b-e12b-4547-b89a-e92e4d1f8efd

Or receive task completion events in real-time:

var ws = new WebSocket("ws://localhost:5555/api/task/events/task-succeeded/");
ws.onmessage = function (event) {
    console.log(event.data);
}
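Since the REST endpoints are plain HTTP, any client works. A minimal Python sketch using only the standard library, assuming a local Flower instance on port 5555 and a registered tasks.add task; it only builds the request, sending it is left to the caller:

```python
import json
from urllib import request

API = "http://localhost:5555/api"  # assumes a local Flower instance

def async_apply(task_name, args):
    """Build the POST request for Flower's async-apply endpoint."""
    body = json.dumps({"args": args}).encode()
    return request.Request(f"{API}/task/async-apply/{task_name}",
                           data=body, method="POST")

req = async_apply("tasks.add", [1, 2])
# request.urlopen(req) would submit the task and return JSON with its id.
```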

For more information, check out the API Reference and examples.

Documentation

Documentation is available on Read the Docs and IPython Notebook Viewer.

License

Flower is licensed under the BSD 3-Clause License. See the LICENSE file in the top distribution directory for the full license text.

Comments
  • Upgrading to Celery 5.0.0 breaks flower due to change in celery.bin.base module.

    If you run flower with Celery 5.0.0 or if you use the docker image, it will say it cannot import "Command".

    Steps to reproduce the behavior:

    1. pip install celery==5.0.0 THEN
    2. flower --version OR
    3. flower -A proj --broker=redis://redis OR
    4. flower

    The Command class is no longer part of the base module: https://docs.celeryproject.org/en/v5.0.0/reference/celery.bin.base.html

    Which means it can no longer import and inherit Command:

    from __future__ import absolute_import
    from __future__ import print_function
    
    import os
    import sys
    import atexit
    import signal
    import logging
    
    from pprint import pformat
    
    from logging import NullHandler
    
    from tornado.options import options
    from tornado.options import parse_command_line, parse_config_file
    from tornado.log import enable_pretty_logging
    from celery.bin.base import Command
    
    from . import __version__
    from .app import Flower
    from .urls import settings
    from .utils import abs_path, prepend_url
    from .options import DEFAULT_CONFIG_FILE, default_options
    
    
    logger = logging.getLogger(__name__)
    
    
    class FlowerCommand(Command):
        # ...
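The breakage can be demonstrated with a hypothetical import guard (illustrative only; this is not Flower's actual fix, which moved to Celery 5's Click-based CLI):

```python
# Hypothetical compatibility shim: Celery 5 removed celery.bin.base.Command,
# so guard the import (names illustrative, not Flower's actual fix).
try:
    from celery.bin.base import Command  # works on Celery < 5
except ImportError:
    Command = object  # placeholder base; Celery 5 uses a Click-based CLI
```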
    
    bug 
    opened by mbayabo 73
  • --url_prefix not supported anymore, so flower can only run on root now?

    Using the --url_prefix parameter does not produce an error when running Flower, but it no longer has any effect, and the parameter is marked as deprecated in the latest source code. Is there an alternative to this parameter, or can Flower now only be run at the root path?

    enhancement 
    opened by mtahirtariq 40
  • Web client hanging forever

    I'm trying to run Flower on my local Celery deployment. When I try to connect via web browser to localhost:5555 or via curl, it just hangs forever (waiting for localhost). I can telnet to port 5555, but other than that there is no response. Below is some debug info:

    celery -A myproject flower --broker=redis://localhost:6379/0 --broker_api=redis://localhost:6379/0 --debug --inspect_timeout=10
    [I 161120 20:08:01 command:136] Visit me at http://localhost:5555
    [I 161120 20:08:01 command:141] Broker: redis://localhost:6379/0
    [I 161120 20:08:01 command:144] Registered tasks:
        [u'app.tasks.generate_scheduled', u'celery.accumulate', u'celery.backend_cleanup',
         u'celery.chain', u'celery.chord', u'celery.chord_unlock', u'celery.chunks',
         u'celery.group', u'celery.map', u'celery.starmap', u'app.tasks.generate_result',
         'normalize_time_task', u'myplatform.celery.debug_task', u'tasks.bake_block',
         'tasks.baker', 'tasks.mailman']
    [D 161120 20:08:01 command:146] Settings:
        {'cookie_secret': 'pnKuJkmuRPW43p1N4KYmvRAZjwFJtE+dvDWUNGrHTl8=', 'debug': True,
         'login_url': '/login',
         'static_path': '/usr/local/lib/python2.7/site-packages/flower-0.9.1-py2.7.egg/flower/static',
         'static_url_prefix': '/static/',
         'template_path': '/usr/local/lib/python2.7/site-packages/flower-0.9.1-py2.7.egg/flower/templates'}
    [D 161120 20:08:01 control:29] Updating all worker's cache...
    [I 161120 20:08:01 mixins:224] Connected to redis://localhost:6379/0

    opened by yadid 39
  • Celery 5

    Celery 5 is a refactor of the CLI framework: the celery command and its sub-commands are now powered by Click. In version 5.0.5, the ability to extend the base command was restored, so to invoke Flower on Celery >=5,<5.0.5 using this change you would have to run:

    flower -A app flower 
    

    For the newest patch, it can be executed as any other celery subcommand:

    celery -A app flower
    

    This change probably requires a major version upgrade as it has major version dependency upgrade for celery.

    opened by avikam 35
  • Allow to remove offline workers from dashboard

    Offline workers create a lot of noise in the flower dashboard when running Celery in a containerised environment like kubernetes. See also #840.

    I've added a new option purge_offline_workers (--purge_offline_workers / FLOWER_PURGE_OFFLINE_WORKERS) that removes offline workers from the flower dashboard.

    purge_offline_workers is optional and defaults to False so that it does not impact default behaviour.

    opened by bstiel 34
  • Monitor not Drawing/Displaying Traffic for Remote Worker

    On the Monitor tab, the graph is only displaying the traffic for my local (same server that runs rabbitMQ and flower) celery worker. It is not displaying the traffic for my remote (running on different server) celery worker.

    I am sure I have started the remote worker correctly and have it configured correctly, because the other Flower tabs do show that the remote tasks are being executed successfully. And, my app is working fine.

    Not sure if it is a factor or not, but each worker is assigned its own distinct queue. Also I should mention that the local worker task uses apply_async to send a task to the remote worker, in case this is unusual.

    I'm new to python, Celery, and Flower. If you can offer suggestions on what I might be doing wrong, I'll sure appreciate it.

    Thanks, Steve

    monitor 
    opened by sterrell 32
  • Data table error appears when trying to open tasks page on flower 0.9.0

    Error appears in an alert dialog containing this text: DataTables warning: table id=tasks-table - Ajax error. For more information about this error, please see http://datatables.net/tn/7

    opened by mikeengland 31
  • Tornado no longer has asynchronous decorator

    Looks like Tornado dropped this decorator in the latest release v6.0.0b1.

    When trying to run 'celery flower', the last message in the stack trace is:

    AttributeError: module 'tornado.web' has no attribute 'asynchronous'

    opened by 2stacks 29
  • There is no data when using redis broker.

    celery report:

        software -> celery:3.1.13 (Cipater) kombu:3.0.21 py:3.4.0 billiard:3.3.0.18 py-amqp:1.4.5
        platform -> system:Linux arch:64bit, ELF imp:CPython
        loader   -> celery.loaders.default.Loader
        settings -> transport:amqp results:disabled

    redis-server --version:

        Redis server v=2.8.9 sha=00000000:0 malloc=jemalloc-3.2.0 bits=64 build=6183a7cf6dbec67f

    pip list:

        amqp (1.4.5) anyjson (0.3.3) billiard (3.3.0.18) celery (3.1.13) cement (2.2.2)
        certifi (14.05.14) flower (0.7.2) hiredis (0.1.4) humanize (0.5) kombu (3.0.21)
        msgpack-python (0.4.2) Pillow (2.5.1) pip (1.5.4) pycket (0.3.0) pymongo (2.7.2)
        pytz (2014.4) redis (2.10.1) requests (2.3.0) setuptools (2.1) tornado (4.0)
        WTForms (2.0.1) wtforms-tornado (0.0.1)

    supervisord.conf:

        [program:lusir-server-celery-worker-storage]
        command=celery worker -A celeryapp.celeryapp -l info -E -Q storage -I lusir.storage.task -n storage@%%n
        directory=%(ENV_HOME)s/projects/longyuan/lusir-server/src
        autostart=true
        autorestart=true
        startsecs=10
        stopwaitsecs=30

        [program:lusir-server-celery-worker-message]
        command=celery worker -A celeryapp.celeryapp -l info -E -Q message -I lusir.message.task -n message@%%n
        directory=%(ENV_HOME)s/projects/longyuan/lusir-server/src
        autostart=true
        autorestart=true
        startsecs=10
        stopwaitsecs=30

        [program:lusir-server-celery-worker-face]
        command=celery worker -A celeryapp.celeryapp -l info -E -Q face -I lusir.face.task -n face@%%n
        directory=%(ENV_HOME)s/projects/longyuan/lusir-server/src
        autostart=true
        autorestart=true
        startsecs=10
        stopwaitsecs=30

        [program:lusir-server-celery-worker-star]
        command=celery worker -A celeryapp.celeryapp -l info -E -Q star -I lusir.star.task -n star@%%n
        directory=%(ENV_HOME)s/projects/longyuan/lusir-server/src
        autostart=true
        autorestart=true
        startsecs=10
        stopwaitsecs=30

        [program:lusir-server-celery-flower]
        command=flower --address=0.0.0.0 --port=8110 --broker=redis://lusir-redis-test:6379/12 --broker_api=redis://lusir-redis-test:6379/12 --url_prefix=flower
        directory=%(ENV_HOME)s/projects/longyuan/lusir-server/src
        autostart=true
        autorestart=true
        startsecs=10
        stopwaitsecs=30

    opened by jaggerwang 29
  • Option to store events inside a Postgres database

    This backward-compatible change introduces a new persistence mode that is more reliable for a long-running process.

    With the shelve based persistence, data is written to disk only when cleanly stopping Flower. If for any reason the process is abnormally terminated, the in-memory data is lost. This unfortunately happens quite often when running Flower inside a Docker container.

    Postgres persistence implemented here works by adding an event callback to the EventsState instance that inserts events into the database as they happen (excluding worker heartbeat events). On process start, these events are replayed to recreate the State instance as it was.
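The callback-plus-replay idea can be sketched with an in-memory stand-in for the database (names hypothetical; the actual change writes to Postgres):

```python
import json

class EventStore:
    """Hypothetical stand-in for the Postgres-backed event store."""

    def __init__(self):
        self.rows = []  # stand-in for a database table

    def on_event(self, event):
        # Persist every event except worker heartbeats, as it happens.
        if event.get("type") != "worker-heartbeat":
            self.rows.append(json.dumps(event))

    def replay(self, apply_event):
        # On startup, feed stored events back to rebuild the State instance.
        for row in self.rows:
            apply_event(json.loads(row))
```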

    Note that migrating data persisted in shelve databases is not possible (or at least not easy), as such a migration would need to recreate events from the aggregated state.

    Please review and comment; if this feature is accepted, I will add documentation for it.

    opened by bosonogi 27
  • Flower not finding queue or workers when monitoring apps not on the same instance

    I am deploying Flower with Chef on a sentinel node.

    I've been trying to start the flower service with path/to/bin/celery --broker=redis://brokerurl/0 --port=5555 flower

    I can navigate to the Flower page just fine, but no tasks appear. When I go to the broker tab, no queues are listed.

    What could be wrong? What else is needed?

    opened by rschwiebert 25
  • Missing documentation on Metrics

    The following Flower metrics show up in Prometheus but aren't documented, and I do not know what they represent.

    flower_task_runtime_seconds_sum
    flower_task_runtime_seconds_count
    

    The expression rate(flower_task_runtime_seconds_sum[5m]) / rate(flower_task_runtime_seconds_count[5m]) is used on the example dashboard in this repo, but no documentation is provided.
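These appear to follow the standard Prometheus _sum/_count pair convention: _sum accumulates total task runtime in seconds, _count the number of completed tasks, so the ratio of their rates is the mean task runtime over the window. The same arithmetic on two consecutive scrapes:

```python
def mean_runtime(sum_prev, sum_now, count_prev, count_now):
    """Mean task runtime between two scrapes, mirroring
    rate(flower_task_runtime_seconds_sum[5m]) / rate(flower_task_runtime_seconds_count[5m])."""
    completed = count_now - count_prev
    return (sum_now - sum_prev) / completed if completed else 0.0

mean_runtime(100.0, 130.0, 50, 60)  # -> 3.0 seconds per task
```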

    opened by kaizendae 0
  • Unable to start flower when redis is protected by password

    $ celery --broker=redis://:$REDIS_PASSWORD@localhost:6379/0 flower

    System information:

        Traceback (most recent call last):
          File "/venv/lib/python3.8/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
            yield
          File "/venv/lib/python3.8/site-packages/kombu/connection.py", line 433, in _ensure_connection
            return retry_over_time(
          File "/venv/lib/python3.8/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
            return fun(*args, **kwargs)
          File "/venv/lib/python3.8/site-packages/kombu/connection.py", line 877, in _connection_factory
            self._connection = self._establish_connection()
          File "/venv/lib/python3.8/site-packages/kombu/connection.py", line 812, in _establish_connection
            conn = self.transport.establish_connection()
          File "/venv/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 949, in establish_connection
            self._avail_channels.append(self.create_channel(self))
          File "/venv/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 927, in create_channel
            channel = self.Channel(connection)
          File "/venv/lib/python3.8/site-packages/kombu/transport/redis.py", line 737, in __init__
            self.client.ping()
          File "/venv/lib/python3.8/site-packages/redis/commands/core.py", line 1132, in ping
            return self.execute_command("PING", **kwargs)
          File "/venv/lib/python3.8/site-packages/redis/client.py", line 1235, in execute_command
            conn = self.connection or pool.get_connection(command_name, **options)
          File "/venv/lib/python3.8/site-packages/redis/connection.py", line 1387, in get_connection
            connection.connect()
          File "/venv/lib/python3.8/site-packages/redis/connection.py", line 623, in connect
            self.on_connect()
          File "/venv/lib/python3.8/site-packages/redis/connection.py", line 713, in on_connect
            auth_response = self.read_response()
          File "/venv/lib/python3.8/site-packages/redis/connection.py", line 839, in read_response
            raise response
        redis.exceptions.ResponseError: AUTH called without any password configured for the default user. Are you sure your configuration is correct?

    The above exception was the direct cause of the following exception
    
    bug 
    opened by supreme-core 0
  • Flower Exceptions in GoogleAuth2LoginHandler without explicit usage

    Describe the bug

    My flower is up and running. Nevertheless, my Sentry instance is collecting those exceptions from time to time. I'd like to keep my Sentry clean and tidy and therefore like to find a solution to avoid those errors.

    To Reproduce

    Command:

    celery -A apps.config.celery_settings flower --port=5555 --basic_auth=my_user:my_password

    Install flower and wait for some time 😞

    Actually, I don't use the Google feature that is apparently causing the error.

    Expected behavior

    We don't get these exceptions or can disable them.

    I am using flower v1.2.0.

    bug 
    opened by GitRon 1
  • Use SPDX license expression in project metadata

    As a downstream user it is desirable to be able to programmatically determine the precise licenses used by our dependencies. The emerging convention for this is the SPDX License List. SPDX License Expressions are already used in the JavaScript (npm) and Rust (crates.io) ecosystems, for example.

    In Python, there is PEP 639 which would add a standard 'License-Expression' metadata field. The discussion around this PEP seems to have gone stale in 2021.

    This merge request is in lieu of that PEP coming soon. Unless you as project maintainers think the proposed change in this PR has downsides (which you are entitled to!), I would ask that you accept this change so that it's possible for users to attempt to parse your license metadata as an SPDX License Expression. In particular, BSD is ambiguous as there are multiple BSD licenses.

    If PEP 639 is accepted, this package would be ready for the change just by changing the license key to license-expression or whatever key the PEP lands on.

    opened by RazerM 0
  • Ability to record / persist only failed tasks

    I could set max_tasks to a much smaller number if this was possible, thus wasting less memory / database space storing successful tasks that I'm not interested in.

    (If this is already possible, please enlighten me - I looked through the docs and didn't find anything.)

    enhancement 
    opened by austinbravodev 0