Overview

Clearly logo

Clearly see and debug your celery cluster in real time!

Do you use celery, and monitor your tasks with flower? You'll probably like Clearly! 👍

Clearly is a real time monitor for your celery tasks and workers!

While I do like flower, to me it hasn't been totally up to the task (pun intended :).
Why is that? flower needs page refreshes, filters only one task type at a time, displays results as plain strings without any formatting or syntax highlighting, and on top of that also truncates them!

Clearly, on the other hand, is actually real time, has multiple simultaneous filters, a beautiful syntax highlighting system, an advanced formatting system that shows parameters, results and tracebacks just as IPython would, has complete un-truncated results, and is very easy to use! 😃
You can also install it very easily with a docker image!

It's great to actually see, in real time, what's going on in your celery workers! It's very nice for inspecting, debugging, and even demonstrating your company's async superpowers (put Clearly on a big screen TV showing all tasks of your production environment 😜)!

Clearly is composed of a server, which collects real time events from the celery cluster, generates missing states, and streams filtered data to connected clients; and a client, which you use to send filter commands and display both real time and stored data. They communicate with each other via gRPC and ProtocolBuffers.

See what Clearly looks like:

📌 New version!

Clearly has received a major revamp in version 0.9, since it's very near 1.0! \o/

All code has been revisited and several new features were implemented, which took ~170 commits, ~70 files changed and ~2700 additions! Clearly is now more mature, more reliable and way more polished in general, with beautiful colors and complete error handling, making this new Clearly way more pleasant to use!

The unit tests have also been greatly streamlined: the suite has gone from ~2600 tests down to less than 700, while keeping 100% code coverage (branch coverage)! The PR is in https://github.com/rsalmei/clearly/pull/52 if you'd like to see it.

This endeavor took five weeks of full-time work and a great deal of effort. If you appreciate my work, you could buy me a coffee 😊, I would really appreciate that! (the button is on the top-right corner)

Also please help Clearly gain more momentum! Tweet about it, write a blog post about it or just recommend it!

Enjoy!


Features

Clearly enables you to:

  • Be informed of any and all tasks running, failing or just being enqueued, both in real time and stored;
    • if you enable task_send_sent_event in your code, you can track tasks even before they get to a worker (see the sketch after this list)!
  • Be informed of workers' availability, knowing immediately if any goes down or comes back up;
  • Filter tasks any way you want, by several fields, both in real time and stored;
  • Debug the actual parameters the tasks were called with, and analyze their outcomes, such as success results or failure tracebacks and retries;
  • See all types and representations of those parameters and outcomes clearly, with the advanced formatting system and syntax highlighting;
  • Analyze metrics of your system.
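
For the task-sent tracking mentioned above, here is a minimal sketch of the publisher-side setting, assuming celery 4+ lowercase setting names ('proj' and the broker URL are placeholders):

    # a minimal sketch: enable task-sent events so Clearly can track
    # tasks even before a worker picks them up ('proj' is hypothetical)
    from celery import Celery

    app = Celery('proj', broker='amqp://localhost')
    app.conf.task_send_sent_event = True  # publishers emit task-sent events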

Clearly is

  • compatible with any version of celery, from 3.1 to 4.4+;
  • aware of your result backend, using it if available to retrieve tasks' results;
  • running inside docker, so anyone with any version of python can use it! 👍

Clearly is not

  • an administration tool for celery clusters; it specializes in monitoring.

Get clearly the docker way

Requirements

To use clearly, you just have to:

  • enable Events in your celery workers (celery worker -E)

and you're good to go!
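
If you prefer configuration over command line flags, the -E switch can also be set in your celery app; a minimal sketch, assuming celery 4+ lowercase setting names ('proj' and the broker URL are placeholders):

    # same effect as running `celery worker -E`: make workers emit events
    from celery import Celery

    app = Celery('proj', broker='amqp://localhost')
    app.conf.worker_send_task_events = True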

Start the server

$ docker run --rm --name clearly-server -p <clearly_port>:12223 \
      rsalmei/clearly server <broker_url> [--backend backend_url]

You should see something like:

server

Start the client

$ docker run --rm -ti --name clearly-client -v ipythonhist:/root/.ipython/profile_default/ \
      rsalmei/clearly client [clearly_server [clearly_port]]

Note: The volume (-v) above is not strictly necessary, but it's very nice not to lose your clearlycli history every time you leave it. I recommend it 👍.

That's it, you're good to go! \o/

What, you really need to use it inside your own REPL?

Ok, you can use clearly the pip way, but this is not recommended anymore. The docker way is much more portable, and avoids coupling between your code and clearly's. It also enables me to use internally any python version I see fit, without fear of breaking users' code. See the changelog for the currently supported python versions.

$ pip install clearly

Start the server

$ clearly server <broker_url> [--backend backend_url] [--port 12223]

Just a quickie debug?

Clearly Client used to need no server, which was convenient but had several shortcomings: all history was lost when it closed, and each new client received every single event from the celery cluster, which stressed the broker way more. But if you'd like to use it quickly like that, be it to just assert something or to try Clearly out before committing, you still can. Just start the server in-process:

    from clearly.server import ClearlyServer
    clearly_server = ClearlyServer('<broker_url>', '<backend>')
    clearly_server.start_server(<port>)

Then you can simply start the client without arguments: clearlycli = ClearlyClient()

Start the client

Enter any REPL like python or IPython:

from clearly.client import ClearlyClient
clearlycli = ClearlyClient('<clearly_server>', <12223>)

That's it, you're good to go!

Constrained to Python 2.7?

Use the docker way! :)
If you still really want this, maybe to embed it in your legacy project's shell, please use version 0.4.2 as in:

$ pip install clearly==0.4.2

This version is prior to the server split, so it connects directly to the broker, but should still work.

You could also use 0.6.4, which is newer and already has a server, but please do mind that gRPC and Python 2.7 have a very annoying bug in this case: you can't CTRL+C out of clearly client's capture method (or any streaming method), so you have to kill the process itself with CTRL+\ when needed.


How to use it

So, are you ready to see tasks popping up on your screen faster than you can read them? (Remember to filter them!)

Grab them

In real time ⚡️

clearlycli.capture()
clearlycli.capture_tasks()
clearlycli.capture_workers()

Past, stored events 🗄

clearlycli.tasks()
clearlycli.workers()

An example using the capture() method, which shows all real time activity in the celery cluster, including both tasks and workers:

capture

The real time method variants block, receiving streaming events from the server.
At any moment you can CTRL+C out of them, and rest assured the server will continue to gather all events seamlessly; it's just this client that stops receiving them. The capture_tasks() and capture_workers() methods receive only their respective real time events.
The tasks() and workers() methods operate similarly, but retrieve only stored events, without blocking.

The client will display those events in a format configured by the corresponding Display Mode.

Display modes

clearlycli.display_modes()

Display modes specify the level of detail you want to see: whether to show the arguments tasks were sent with, whether to show exceptions (with or without their arguments), whether to show tasks' results, etc.

display modes

To change a display mode, just call the same method with either the constant number shown beside it or the enum constant itself.

clearlycli.display_modes(ModeTask.RESULT, ModeWorker.STATS)  # the enums are automatically imported
clearlycli.display_modes(2, 13)  # has the same effect, but easier on the fingers

You can also change only one display mode at a time (just call with one argument).

clearlycli.display_modes(ModeTask.SUCCESS)  # the same as calling with (5)
clearlycli.display_modes(12)  # sets the worker display mode to ModeWorker.WORKER

You can even configure the default directly in the docker run env: just include -e CLI_DISPLAY_MODES="...", with one or two constant numbers (the enum constants are not accepted here).

Seen tasks, metrics and reset tasks

clearlycli.seen_tasks()  # prints all seen task types
clearlycli.metrics()  # prints some metrics about the celery cluster and Clearly itself
clearlycli.reset_tasks()  # resets stored tasks

This section should be pretty self-explanatory.

seen metrics


How clearly works

Clearly started as a fully contained tool, one you'd just run in a python REPL to filter tasks! It used to connect directly to the broker, which had several shortcomings, like stressing the broker a lot (repeating all events to every connected Clearly) and losing all history whenever it was closed.

Nowadays, its software architecture is quite different. Since version 0.5 it has been split into two components: a server, which extracts events and shares data, and a client, which interfaces with it, sending commands and displaying results.

Clearly server has three subsystems: the listener, the dispatcher and the RPC server. The listener runs on a background thread that captures raw celery events in real time directly from the broker, updates their tasks in an LRU cache, and dynamically generates missing states if needed (so the events seem perfectly ordered to the user). The dispatcher runs two other threads that handle connected clients, filtering and dispatching selected events to the interested parties. The RPC server implements the gRPC communication mechanism.

The events may (and usually will) arrive at the server in a chaotic order, so it must dynamically generate missing states and ignore late ones; you should never see a STARTED task before it is even RECEIVED, which would be weird. In a real time monitoring system this is very important, as it's not only about displaying the current state, but the complete train of events that led up to that state.
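
To illustrate that gap-filling, here is a simplified sketch (not Clearly's actual expected_state implementation), assuming a strictly linear life cycle:

    # a simplified illustration of missing-state generation: given a known
    # linear life cycle, emit any states skipped between the last seen
    # state and the incoming one, and ignore late or duplicate events
    LIFECYCLE = ['PENDING', 'RECEIVED', 'STARTED', 'SUCCESS']

    def states_through(pre: str, post: str):
        i, j = LIFECYCLE.index(pre), LIFECYCLE.index(post)
        if j <= i:
            return  # late or duplicate event: ignore it
        yield from LIFECYCLE[i + 1:j + 1]

    # a STARTED event arriving right after PENDING first yields RECEIVED
    assert list(states_through('PENDING', 'STARTED')) == ['RECEIVED', 'STARTED']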

Clearly client can filter real time and persisted tasks by name, uuid, routing key and state, and workers by hostname, including brief metrics about the clearly server itself. It has advanced formatting and syntax highlighting abilities, and configurable display modes to exhibit selected data. It does not use any threads and is totally broker and celery agnostic, relying only on the server via gRPC.

The arguments of the tasks are safely compiled into an AST (Abstract Syntax Tree), and beautifully syntax highlighted. Successful tasks get their results directly from the result backend, if available, to overcome the problem of truncated results (the broker events have a size limit).
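
The idea behind that safe compilation can be pictured with Python's own ast module; this is a rough sketch only (the function name here is hypothetical, and Clearly's real safe_compiler handles far more cases):

    # a rough sketch of safely compiling a task argument repr: evaluate
    # only literal nodes, never executing any code
    import ast

    def compile_literal(txt: str):
        try:
            return ast.literal_eval(txt)  # literals only: no calls, no names
        except (ValueError, SyntaxError):
            return txt  # leave unparseable input as the raw string

    print(compile_literal("{'id': 123, 'ok': True}"))  # -> an actual dict
    print(compile_literal("open('/etc/passwd')"))      # -> raw string, not executed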

All async workers' life cycles are also processed and displayed, beautifully syntax highlighted too. If you enable task_send_sent_event, all triggered tasks show up immediately on screen (it's quite nice to see!).

The memory consumed by server-persisted tasks, although very optimized, must of course be limited. By default the server stores 10,000 tasks and 100 workers at a time, and both limits are configurable.
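
To picture that bounded storage, here is an illustrative LRU sketch (not Clearly's actual cache):

    # an illustrative bounded LRU store, showing how the memory for
    # persisted tasks can stay capped at a fixed number of entries
    from collections import OrderedDict

    class LRU(OrderedDict):
        def __init__(self, maxlen: int):
            super().__init__()
            self.maxlen = maxlen

        def __setitem__(self, key, value):
            super().__setitem__(key, value)
            self.move_to_end(key)  # mark as most recently used
            if len(self) > self.maxlen:
                self.popitem(last=False)  # evict the oldest entry

    tasks = LRU(10000)  # mirrors the default max_tasks setting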


Client commands Reference

def capture_tasks(self, tasks: Optional[str] = None,
                  mode: Union[None, int, ModeTask] = None) -> None:
    """Start capturing task events in real time, so you can instantly see exactly
    what your publishers and workers are doing. Filter as much as you can to find
    what you need, and don't worry as the Clearly Server will still seamlessly
    handle all tasks updates.

    Currently, you can filter tasks by name, uuid, routing key or state.
    Insert an '!' in the first position to select those that do not match criteria.

    This runs in the foreground. Press CTRL+C at any time to stop it.

    Args:
        tasks: a simple pattern to filter tasks
            ex.: 'email' to find values containing that word anywhere
                 'failure|rejected|revoked' to find tasks with problems
                 '^trigger|^email' to find values starting with any of those words
                 'trigger.*123456' to find values with those words in that sequence
                 '!^trigger|^email' to filter values not starting with either of those words
        mode: an optional display mode to present data

    See Also:
        ClearlyClient#display_modes()

    """

def capture_workers(self, workers: Optional[str] = None,
                    mode: Union[None, int, ModeWorker] = None) -> None:
    """Start capturing worker events in real time, so you can instantly see exactly
    what your workers' states are. Filter as much as you can to find
    what you need, and don't worry as the Clearly Server will still seamlessly
    handle all tasks and workers updates.

    Currently, you can filter workers by hostname.
    Insert an '!' in the first position to select those that do not match criteria.

    This runs in the foreground. Press CTRL+C at any time to stop it.

    Args:
        workers: a simple pattern to filter workers
            ex.: 'email' to find values containing that word anywhere
                 'service|priority' to find values containing any of those words
                 '!service|priority' to find values not containing either of those words
        mode: an optional display mode to present data

    See Also:
        ClearlyClient#display_modes()

    """

def capture(self, tasks: Optional[str] = None, workers: Optional[str] = None,
            modes: Union[None, int, ModeTask, ModeWorker, Tuple] = None) -> None:
    """Start capturing all events in real time, so you can instantly see exactly
    what your publishers and workers are doing. Filter as much as you can to find
    what you need, and don't worry as the Clearly Server will still seamlessly
    handle all tasks and workers updates.

    This runs in the foreground. Press CTRL+C at any time to stop it.

    Args:
        tasks: the pattern to filter tasks
        workers: the pattern to filter workers
        modes: optional display modes to present data
            send one or a tuple, as described in display_modes()

    See Also:
        ClearlyClient#capture_tasks()
        ClearlyClient#capture_workers()
        ClearlyClient#display_modes()

    """

def tasks(self, tasks: Optional[str] = None, mode: Union[None, int, ModeTask] = None,
          limit: Optional[int] = None, reverse: bool = True) -> None:
    """Fetch current data from past tasks.

    Note that the `limit` field is just a hint, it may not be accurate.
    Also, the total number of tasks fetched may be slightly different from
    the server `max_tasks` setting.

    Args:
        tasks: the pattern to filter tasks
        mode: an optional display mode to present data
        limit: the maximum number of events to fetch, fetches all if None or 0 (default)
        reverse: if True (default), shows the most recent first

    See Also:
        ClearlyClient#capture_tasks()
        ClearlyClient#display_modes()

    """

def workers(self, workers: Optional[str] = None,
            mode: Union[None, int, ModeWorker] = None) -> None:
    """Fetch current data from known workers.
    
    Args:
        workers: the pattern to filter workers
        mode: an optional display mode to present data

    See Also:
        ClearlyClient#capture_workers()
        ClearlyClient#display_modes()

    """

def seen_tasks(self) -> None:
    """Fetch a list of seen task types."""

def reset_tasks(self) -> None:
    """Reset stored tasks."""

def metrics(self) -> None:
    """List some metrics about the celery cluster and Clearly itself.

    Shows:
        Tasks processed: actual number of tasks processed, including retries
        Events processed: total number of events processed
        Tasks stored: number of currently stored tasks
        Workers stored: number of workers seen, including offline

    """

Hints to extract the most out of it

  • write a small celery router and generate dynamic routing keys, based on the actual arguments of the async call (a sketch follows this list). That way, you'll be able to filter tasks by any of those, like the id of an entity or the name of a product! Remember that args and kwargs are not used in the filtering.
  • if you're using django and django-extensions, put a SHELL_PLUS_POST_IMPORT in your settings to auto import clearly! Just create a module that initializes a clearlycli instance for the django shell_plus and you're good to go. It's really nice to have Clearly always ready to be used, without importing or configuring anything, to quickly see what's going on with your tasks, even in production :)
  • the more you filter, the less you'll have to analyze and debug! A production environment can have an overwhelmingly high number of tasks, like thousands per minute, so filter wisely.
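
Here is the router hint as a hedged sketch; the task argument name is hypothetical, and the celery 4+ router-function signature is assumed:

    # a sketch of a celery router that embeds a call argument, like an
    # entity id, into the routing key, making it filterable in Clearly
    def route_for_clearly(name, args, kwargs, options, task=None, **kw):
        entity_id = kwargs.get('entity_id')  # 'entity_id' is hypothetical
        if entity_id is not None:
            return {'routing_key': '{}.{}'.format(name, entity_id)}
        return None  # fall back to the default routing

    # app.conf.task_routes = (route_for_clearly,)

With that in place, something like clearlycli.capture_tasks('123456') would match the tasks triggered for that entity, since routing keys are among the filterable fields.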

Maybe some day

  • support python 3;
  • split Clearly client and server, to allow an always-on server to run, with multiple clients connecting, without any of the shortcomings;
  • remove python 2 support;
  • dockerize the server, to make its deploy way easier;
  • improve the client command line, to be able to call capture() and others right from the shell;
  • make the search pattern apply to args and kwargs too;
  • support to constrain search pattern to only some fields;
  • ability to hide sensitive information, directly in event listener;
  • secure connection between server and client;
  • any other ideas welcome!

Changelog:

  • 0.9.2: fix a warning on client startup
  • 0.9.1: fix reset() breaking protobuf serialization; also rename it to reset_tasks()
  • 0.9.0: major code revamp with new internal flow of data, in preparation for the 1.0 milestone! Now there are two StreamingDispatcher instances, each with its own thread, to handle tasks and workers separately (reducing ifs, complexity and latency); include type annotations in all code; several clearlycli improvements: introduced the "!" instead of "negate", introduced display modes instead of "params, success and error", renamed stats() to metrics(), removed task() and improved tasks() to also retrieve tasks by uuid, general polish in all commands and error handling; server log filters passwords from both broker and result backend; worker states reengineered, and heartbeats now get through to the client; unified the streaming and stored events filtering; refactored some code with high cyclomatic complexity; included the all-important version number in both server and client initializations; added a new stylish logo; friendlier errors in general; fixed a connection leak, where streaming clients were not being disconnected; included an env var to configure default display modes; streamlined the test system, going from ~2600 tests down to less than 700, while keeping the same 100% branch coverage (removed fixture combinations which didn't make sense); requires Python 3.6+
  • 0.8.3: extended user friendliness of gRPC errors to all client rpc methods; last version to support Python 3.5
  • 0.8.2: reduce docker image size; user friendlier gRPC errors on client capture (with --debug to raise actual exception); nicer client autocomplete (no clearly package or clearly dir are shown)
  • 0.8.1: keep un-truncate engine from breaking when very distant celery versions are used in publisher and server sides
  • 0.8.0: clearly is dockerized! both server and client are now available in the rsalmei/clearly docker hub image; include new capture_tasks and capture_workers methods; fix task result being displayed in RETRY state; detect and break if it can't connect to the broker
  • 0.7.0: code cleanup, supporting only Python 3.5+
  • 0.6.4: last version to support Python 2.7
  • 0.6.0: supports again standalone mode, in addition to client/server mode
  • 0.5.0: independent client and server, connected by gRPC (supports a server with multiple clients)
  • 0.4.2: last version with client and server combined
  • 0.4.0: supports both py2.7 and py3.4+
  • 0.3.6: last version to support py2 only
  • 0.3.0: result backend is not mandatory anymore
  • 0.2.6: gets production quality
  • 0.2.0: support standard celery events
  • 0.1.4: last version that doesn't use events

License

This software is licensed under the MIT License. See the LICENSE file in the top distribution directory for the full license text.


Did you like it?

Thank you for your interest!

I've put much ❤️ and effort into this.
If you appreciate my work you can sponsor me, buy me a coffee! The button is at the top right of the page (the big orange one, it's hard to miss 😊)

Comments
  • Unable to see tasks with clearly client

    Hello - I have been trying out the docker version of clearly with our instance of celery, backed by rabbitmq. I made the recommended change to the worker invocation to enable events. The server starts up with the following output

    2020-04-13 14:54:17,376 clearly.event_core.event_listener INFO Creating EventListener: max_tasks=10000; max_workers=100
    Celery broker=amqp://production:**@169.254.255.254:5672; backend=None; using_result_backend=False
    2020-04-13 14:54:17,378 clearly.event_core.event_listener INFO Starting listener: <Thread(clearly-listener, started daemon 140432189708032)>
    2020-04-13 14:54:17,415 clearly.event_core.streaming_dispatcher INFO Creating StreamingDispatcher
    2020-04-13 14:54:17,418 clearly.event_core.streaming_dispatcher INFO Starting dispatcher: <Thread(clearly-dispatcher, started daemon 140432180209408)>
    2020-04-13 14:54:17,419 clearly.server INFO Creating ClearlyServer
    2020-04-13 14:54:17,419 clearly.server INFO Initiating gRPC server: port=12223
    2020-04-13 14:54:17,428 clearly.server INFO gRPC server ok

    After I start the client, I can query the workers and the output seems legit

    In [3]: clearlycli.workers()
    celery@10-65-70-179-useast1aprod 6 sw: Linux py-celery 3.1.20 load: [1.34, 1.22, 1.25] processed: 84998
    fetched: 1 in 974.91us (1025.73/s)

    However, when I try to just clearlycli.capture() it seems to hang and never provides any output. When I try to query tasks already captured by the server, I don't get anything, even though I can check our application logs and see new output and new results of processing:

    In [4]: clearlycli.tasks()
    fetched: 0 in 431.52us (-)

    Also the server output doesn't indicate any connection by the client, which seems strange to me, but maybe that's normal.

    I'd be grateful for any advice you have. Clearly looks like it could be helpful.

    Best regards,

    Steve

    opened by stevetho-wpa 9
  • Got this Attribute Error

    Hello,

    I can't manage to start the clearly server on ubuntu 16.04 / python 3.6.8. I got this error:

     from google.protobuf import symbol_database as _symbol_database
      File "/var/www/deddd/lib/python3.6/site-packages/google/protobuf/symbol_database.py", line 193, in <module>
        _DEFAULT = SymbolDatabase(pool=descriptor_pool.Default())
    AttributeError: module 'google.protobuf.descriptor_pool' has no attribute 'Default'
    

    I used this command to start the server:

    clearly server amqp://localhost

    3rd party error 
    opened by bobozar 8
  • TypeError: expected string or buffer on txt = NON_PRINTABLE_PATTERN.sub(_encode_to_hex, txt)

    Hey bud, found a bug with latest version :)

    # clearly server $BROKER_URL 
    2020-02-04 21:58:57,871 clearly.core.event_listener INFO Creating EventListener: max_tasks=10000; max_workers=100
    2020-02-04 21:58:57,871 clearly.core.event_listener INFO Celery broker=<url>; backend=None; using_result_backend=False
    2020-02-04 21:58:57,872 clearly.core.event_listener INFO Starting listener: <Thread(clearly-listener, started daemon 140312796280576)>
    2020-02-04 21:58:57,898 clearly.core.streaming_dispatcher INFO Creating StreamingDispatcher
    2020-02-04 21:58:57,899 clearly.core.streaming_dispatcher INFO Starting dispatcher: <Thread(clearly-dispatcher, started daemon 140312785483520)>
    2020-02-04 21:58:57,900 clearly.server INFO Creating ClearlyServer
    2020-02-04 21:58:57,900 clearly.server INFO Initiating gRPC server: port=12223
    2020-02-04 21:58:57,911 clearly.server INFO gRPC server ok
    2020-02-04 21:58:58,058 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10801 seconds.  [orig: 2020-02-04 18:58:57.925612 recv: 2020-02-04 21:58:58.058112]
    
    2020-02-04 21:58:58,060 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10801 seconds.  [orig: 2020-02-04 18:58:57.944250 recv: 2020-02-04 21:58:58.060834]
    
    2020-02-04 21:58:58,062 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10801 seconds.  [orig: 2020-02-04 18:58:57.945517 recv: 2020-02-04 21:58:58.061910]
    
    2020-02-04 21:58:58,064 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10801 seconds.  [orig: 2020-02-04 18:58:57.971172 recv: 2020-02-04 21:58:58.064185]
    
    2020-02-04 21:58:58,065 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10801 seconds.  [orig: 2020-02-04 18:58:57.998381 recv: 2020-02-04 21:58:58.065216]
    
    2020-02-04 21:58:58,065 celery.events.state WARNING Substantial drift from <serverX> may mean clocks are out of sync.  Current drift is
    10800 seconds.  [orig: 2020-02-04 18:58:58.000780 recv: 2020-02-04 21:58:58.065810]
    
    2020-02-04 21:58:58,066 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,066 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,066 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,067 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,067 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,067 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,067 amqp WARNING Received method (60, 60) during closing channel 1. This method will be ignored
    2020-02-04 21:58:58,068 amqp WARNING Received method (60, 31) during closing channel 1. This method will be ignored
    Exception in thread clearly-listener:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "/usr/local/lib/python2.7/dist-packages/clearly/event_core/event_listener.py", line 112, in __run_listener
        self._celery_receiver.capture(limit=None, timeout=None, wakeup=True)
      File "/usr/local/lib/python2.7/dist-packages/celery/events/receiver.py", line 93, in capture
        return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
      File "/usr/local/lib/python2.7/dist-packages/kombu/mixins.py", line 197, in consume
        conn.drain_events(timeout=safety_interval)
      File "/usr/local/lib/python2.7/dist-packages/kombu/connection.py", line 323, in drain_events
        return self.transport.drain_events(self.connection, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/kombu/transport/pyamqp.py", line 103, in drain_events
        return connection.drain_events(**kwargs)
      File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 505, in drain_events
        while not self.blocking_read(timeout):
      File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 511, in blocking_read
        return self.on_inbound_frame(frame)
      File "/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 79, in on_frame
        callback(channel, msg.frame_method, msg.frame_args, msg)
      File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 518, in on_inbound_method
        method_sig, payload, content,
      File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 145, in dispatch_method
        listener(*args)
      File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1615, in _on_basic_deliver
        fun(msg)
      File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 624, in _receive_callback
        return on_m(message) if on_m else self.receive(decoded, message)
      File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 590, in receive
        [callback(body, message) for callback in callbacks]
      File "/usr/local/lib/python2.7/dist-packages/celery/events/receiver.py", line 130, in _receive
        [process(*from_message(event)) for event in body]
      File "/usr/local/lib/python2.7/dist-packages/celery/events/receiver.py", line 71, in process
        handler and handler(event)
      File "/usr/local/lib/python2.7/dist-packages/clearly/event_core/event_listener.py", line 119, in _process_event
        data = self._process_task_event(event)
      File "/usr/local/lib/python2.7/dist-packages/clearly/event_core/event_listener.py", line 135, in _process_task_event
        task.result = EventListener.compile_task_result(task)
      File "/usr/local/lib/python2.7/dist-packages/clearly/event_core/event_listener.py", line 157, in compile_task_result
        safe_compile_text(result, raises=True)
      File "/usr/local/lib/python2.7/dist-packages/clearly/safe_compiler.py", line 77, in safe_compile_text
        txt = NON_PRINTABLE_PATTERN.sub(_encode_to_hex, txt)
    TypeError: expected string or buffer
    

    Hugs :)

    3rd party error 
    opened by chronossc 6
  • Exception in thread clearly-listener: task.worker.sw_ver is None

    Hi,

    I'm hitting an error in the clearly listener that stops new tasks from being processed. When using cli.workers(), some workers show an empty sw: field.

    Here's the traceback:

      File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
        self.run()
      File "/usr/local/lib/python3.7/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/local/lib/python3.7/site-packages/clearly/event_core/event_listener.py", line 103, in __run_listener
        self._celery_receiver.capture(limit=None, timeout=None, wakeup=True)
      File "/usr/local/lib/python3.7/site-packages/celery/events/receiver.py", line 93, in capture
        return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
      File "/usr/local/lib/python3.7/site-packages/kombu/mixins.py", line 197, in consume
        conn.drain_events(timeout=safety_interval)
      File "/usr/local/lib/python3.7/site-packages/kombu/connection.py", line 321, in drain_events
        return self.transport.drain_events(self.connection, **kwargs)
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/virtual/base.py", line 979, in drain_events
        get(self._deliver, timeout=timeout)
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/redis.py", line 370, in get
        ret = self.handle_event(fileno, event)
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/redis.py", line 352, in handle_event
        return self.on_readable(fileno), self
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/redis.py", line 348, in on_readable
        chan.handlers[type]()
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/redis.py", line 679, in _receive
        ret.append(self._receive_one(c))
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/redis.py", line 709, in _receive_one
        message, self._fanout_to_queue[exchange])
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/virtual/base.py", line 999, in _deliver
        callback(message)
      File "/usr/local/lib/python3.7/site-packages/kombu/transport/virtual/base.py", line 633, in _callback
        return callback(message)
      File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 624, in _receive_callback
        return on_m(message) if on_m else self.receive(decoded, message)
      File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 590, in receive
        [callback(body, message) for callback in callbacks]
      File "/usr/local/lib/python3.7/site-packages/kombu/messaging.py", line 590, in <listcomp>
        [callback(body, message) for callback in callbacks]
      File "/usr/local/lib/python3.7/site-packages/celery/events/receiver.py", line 132, in _receive
        self.process(*self.event_from_message(body))
      File "/usr/local/lib/python3.7/site-packages/celery/events/receiver.py", line 71, in process
        handler and handler(event)
      File "/usr/local/lib/python3.7/site-packages/clearly/event_core/event_listener.py", line 110, in _process_event
        data = self._process_task_event(event)
      File "/usr/local/lib/python3.7/site-packages/clearly/event_core/event_listener.py", line 126, in _process_task_event
        task.result = EventListener.compile_task_result(task)
      File "/usr/local/lib/python3.7/site-packages/clearly/event_core/event_listener.py", line 144, in compile_task_result
        if task.worker.sw_ver < '4':
    TypeError: '<' not supported between instances of 'NoneType' and 'str'
    

    Do you know why our workers might be reporting an empty sw_ver?

    I'm using: clearly: 0.7.0 celery: 4.4.0rc3 python: 3.7

    Thanks.

    opened by OmegaDroid 4
  • Support custom Event exchange / multiple exchanges

    Hi, we are using a custom named event queue (set by the event_exchange option in celery), but from what I can see, there is no way of configuring event_exchange in clearly at this point. Can you support configuring the event_exchange?

    Also, we have multiple subapplications, and we are considering splitting the event exchange on a per-subapplication basis. Would it be possible to support multiple event exchanges in clearly (so that we only need one clearly instance)?

    Thank you,

    opened by tjader 3
  • Code revamp toward 1.0

    Major code revamp with new internal flow of data, in preparation for the 1.0 milestone! Now there are two StreamingDispatcher instances, each with its own thread, to handle tasks and workers separately (reducing ifs, complexity and latency).

    • include type annotation in all code;
    • several clearlycli improvements: introduced the "!" instead of "negate", introduced display modes instead of "params, success and error", renamed stats() to metrics(), removed task() and improved tasks() to also retrieve tasks by uuid, general polish in all commands and error handling;
    • worker states reengineered, and heartbeats now get through to the client;
    • unified the streaming and persisted events filtering;
    • refactor some code with high cyclomatic complexity;
    • include the all-important version number in both server and client initializations;
    • adds a new stylish logo;
    • friendlier errors in general;
    • fix a connection leak, where streaming clients were not being disconnected;
    • included an env var to configure default display modes;
    • streamlined the test system, going from ~2600 tests down to less than 700, while keeping the same 100% branch coverage (removed fixtures combinations which didn't make sense);
    • requires Python 3.6+.
    opened by rsalmei 3
  • 3.7 Support

    I'm working on some polishing of clearly in my fork and noticed errors with 3.7 in the tests?

    e.g.

    ___________________________________________________ test_expected_states_task[STARTED-STARTED-expected21] ____________________________________________________
    
    self = <clearly.expected_state.ExpectedStateHandler object at 0x7f5b41078a20>, pre = 'STARTED', post = 'STARTED'
    
        def states_through(self, pre, post):
            if pre == post:
    >           raise StopIteration
    E           StopIteration
    
    clearly/expected_state.py:28: StopIteration
    
    opened by jrabbit 3
  • Fix about_time unpinned

    Yesterday I did a major refactor in about_time, making it much simpler to use! And it wasn't pinned here, so my next push pulled the new version and broke. I like to always pin up to the minor version, so the project always gets bug fixes, so I've pinned it to ~=3.0 😉

    If you don't know about_time, please do! It ended up pretty nice, and more people should know it: https://github.com/rsalmei/about-time

    opened by rsalmei 1
  • Makes it simpler to use a not installed server

    Also organizes code and uses about_time (also a framework of mine) to clean up the code and report duration and throughput in client operations that are finite (that is, do not block in a stream indefinitely).

    opened by rsalmei 0
  • Independent client and server, connected by gRPC, 100% CODE COVERAGE

    Finally we'll be able to leave a server running all the time, collecting celery events, independent of any connected clients! This will allow using Clearly as an actual flower substitute.

    The software architecture is quite different now. We have a threaded EventListener that captures raw celery events, storing tasks and workers in the default LRU memory, and converting to an immutable format before passing up to the next phase via a Queue.

    Then there's the StreamingDispatcher, another threaded processor, that maintains the connected interested parties and shares events with them. For each new event, it tests whether a client would like to see it, and if so, it generates the missing gaps in states and sends them to the client Queue.

    Finally there's the ClearlyServerServicer, a gRPC server in a ThreadPoolExecutor, that accepts connections and redirects them: to the streaming dispatcher if a real time capture was requested, or to the listener memory for already persisted events. Of course there is the new ClearlyClient, which does not use threads anymore or any server resources like the broker or a celery app; instead it uses a stub to connect to the server host:port via gRPC.

    The code is way better, way more testable, and now all modules have 100% code coverage! There are also a few small bugs fixed, and better tools (makefile) to support the project.

    Achievement unlocked: 100% CODE COVERAGE

    pytest --cov=clearly --cov-branch --cov-report=term-missing
    Test session starts (platform: darwin, Python 3.6.6, pytest 3.7.1, pytest-sugar 0.9.1)
    rootdir: /Users/rogerio/Documents/projects/clearly, inifile:
    plugins: xdist-1.22.5, sugar-0.9.1, forked-0.2, cov-2.5.1, celery-4.2.1
    
     ... (per-test progress output elided: all tests passing) ...
    
    ---------- coverage: platform darwin, python 3.6.6-final-0 -----------
    Name                                         Stmts   Miss Branch BrPart  Cover   Missing
    ----------------------------------------------------------------------------------------
    clearly/__init__.py                              6      0      0      0   100%
    clearly/client.py                               98      0     34      0   100%
    clearly/code_highlighter.py                     63      0     36      0   100%
    clearly/event_core/__init__.py                   0      0      0      0   100%
    clearly/event_core/event_listener.py            56      0      6      0   100%
    clearly/event_core/events.py                    12      0      4      0   100%
    clearly/event_core/streaming_dispatcher.py      45      0     12      0   100%
    clearly/expected_state.py                       60      0     24      0   100%
    clearly/safe_compiler.py                        46      0     26      0   100%
    clearly/server.py                               80      0     17      0   100%
    clearly/utils/__init__.py                        0      0      0      0   100%
    clearly/utils/colors.py                         18      0      2      0   100%
    clearly/utils/data.py                            9      0      4      0   100%
    clearly/utils/text.py                            3      0      0      0   100%
    clearly/utils/worker_states.py                   4      0      0      0   100%
    ----------------------------------------------------------------------------------------
    TOTAL                                          500      0    165      0   100%
    
    
    Results (8.11s):
        2132 passed
    
    
    opened by rsalmei 0
  • Send logs to Logstash?

    Hello,

    Is there any way to send these logs to Logstash with Clearly container?

    I mean using something like these : https://logz.io/blog/python-logs-elk-elastic-stack/ https://python-logstash-async.readthedocs.io/en/stable/

    opened by Seb012007 1
  • Possible to gracefully recover from StatusCode.RESOURCE_EXHAUSTED?

    To give a little context, I have a couple of Celery tasks. One that processes and aggregates chunks of data and another that gets called when all of the data has been aggregated (my callback task). When my callback task gets called with all of the data as the task payload, Clearly chokes and stops listening for events. Are there any settings or flags I can pass in to gracefully ignore or recover and continue monitoring the tasks?

    19:52:52.926  SUCCESS 0 v2.tasks.process_data.create_data_processors d4add4a5-1396-4697-8ff2-52cc3c21e681
    19:52:52.926  STARTED 0 v2.tasks.process_data.chunk_processor ba383a3d-a7be-4044-b2bd-e0f63bd510b4
    19:52:52.928 RECEIVED 0 v2.tasks.process_data.chunk_processor ab7741dc-cd3e-494d-929d-169bbc6c164b
    19:52:52.929 RECEIVED 0 v2.tasks.process_data.chunk_processor ddae671f-ccbf-4b18-9b29-c69c20e9617a
    19:52:53.012  STARTED 0 v2.tasks.process_data.chunk_processor 405ffbb1-4316-4107-8386-4cf6f53879fc
    19:52:53.014 RECEIVED 0 v2.tasks.process_data.chunk_processor d4cc2c42-e5ca-4309-b533-5e65a5980ee7
    19:52:53.114  STARTED 0 v2.tasks.process_data.chunk_processor 1bb463af-74de-46d7-ba25-a85bca290914
    19:52:53.116 RECEIVED 0 v2.tasks.process_data.chunk_processor b7b0ddd3-a767-4cc2-ad76-a08c1a969147
    19:52:53.222  STARTED 0 v2.tasks.process_data.chunk_processor ab7741dc-cd3e-494d-929d-169bbc6c164b
    19:52:53.223  STARTED 0 v2.tasks.process_data.chunk_processor ddae671f-ccbf-4b18-9b29-c69c20e9617a
    19:52:53.224  STARTED 0 v2.tasks.process_data.chunk_processor d4cc2c42-e5ca-4309-b533-5e65a5980ee7
    19:52:53.228  STARTED 0 v2.tasks.process_data.chunk_processor b7b0ddd3-a767-4cc2-ad76-a08c1a969147
    19:52:56.302  SUCCESS 0 v2.tasks.process_data.chunk_processor b7b0ddd3-a767-4cc2-ad76-a08c1a969147
    19:52:59.080  SUCCESS 0 v2.tasks.process_data.chunk_processor ba383a3d-a7be-4044-b2bd-e0f63bd510b4
    19:52:59.136  SUCCESS 0 v2.tasks.process_data.chunk_processor 405ffbb1-4316-4107-8386-4cf6f53879fc
    
    # --> my callback task happened here
    
    Server communication error: Received message larger than max (4764885 vs. 4194304) (StatusCode.RESOURCE_EXHAUSTED)
    
    3rd party error 
    opened by cold-logic 1
  • Frontend?

    Any interest in making a front end for this? I tend to be a very visual person, and would really like a better alternative to flower. Would that make sense?

    new feature 
    opened by lowercase00 13
  • Tox, Pipenv, Docker (-compose) and a sketch of a client cli

    Hey, how's it going? I moved a bunch of stuff into a new workflow, let me know how you feel about it! The client isn't done yet, but I'd like some feedback first. I still need to add the dist/dev tools to the pipfile (or you can, as an exercise 😉). I also had some issues with pytest-sugar.

    opened by jrabbit 5