A Python package for easy multiprocessing, but faster than multiprocessing

Overview

MPIRE (MultiProcessing Is Really Easy)


MPIRE, short for MultiProcessing Is Really Easy, is a Python package for multiprocessing, but faster and more user-friendly than the default multiprocessing package. It combines the convenient map-like functions of multiprocessing.Pool with the benefits of using copy-on-write shared objects of multiprocessing.Process, together with easy-to-use worker state, worker insights, and progress bar functionality.

Full documentation is available at https://slimmer-ai.github.io/mpire/.

Features

  • Faster execution than other multiprocessing libraries. See benchmarks.
  • Intuitive, Pythonic syntax
  • Multiprocessing with map/map_unordered/imap/imap_unordered functions
  • Easy use of copy-on-write shared objects with a pool of workers
  • Each worker can have its own state and with convenient worker init and exit functionality this state can be easily manipulated (e.g., to load a memory-intensive model only once for each worker without the need of sending it through a queue)
  • Progress bar support using tqdm
  • Progress dashboard support
  • Worker insights to provide insight into your multiprocessing efficiency
  • Graceful and user-friendly exception handling
  • Automatic task chunking for all available map functions to speed up processing of small task queues (including numpy arrays)
  • Adjustable maximum number of active tasks to avoid memory problems
  • Automatic restarting of workers after a specified number of tasks to reduce memory footprint
  • Nested pools of workers are allowed when setting the daemon option
  • Child processes can be pinned to specific CPUs or a range of CPUs (several of these pool and map options are shown in the sketch after this list)
  • Optionally utilizes dill as serialization backend through multiprocess, enabling parallelizing more exotic objects, lambdas, and functions in iPython and Jupyter notebooks.
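
A minimal sketch combining several of the options listed above. The parameter names (cpu_ids and daemon on WorkerPool; chunk_size, max_tasks_active, and worker_lifespan on map) are assumed to match the MPIRE API; check the documentation for the authoritative signatures:

from mpire import WorkerPool

def square(x):
    return x * x

with WorkerPool(n_jobs=4, daemon=False, cpu_ids=[0, 1, 2, 3]) as pool:
    results = pool.map(square, range(100),
                       chunk_size=10,        # override automatic task chunking
                       max_tasks_active=8,   # cap the number of queued tasks
                       worker_lifespan=50)   # restart each worker after 50 tasks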

Installation

Note

MPIRE currently only supports Linux-based operating systems that support 'fork' as a start method. Support for Windows is coming soon.

Through pip (PyPi):

pip install mpire

From source:

python setup.py install

Getting started

Suppose you have a time-consuming function that receives some input and returns its results. Simple functions like these are known as embarrassingly parallel problems, functions that require little to no effort to turn into a parallel task. Parallelizing a simple function like this can be as easy as importing multiprocessing and using the multiprocessing.Pool class:

import time
from multiprocessing import Pool

def time_consuming_function(x):
    time.sleep(1)  # Simulate that this function takes long to complete
    return ...

with Pool(processes=5) as pool:
    results = pool.map(time_consuming_function, range(10))

MPIRE can be used almost as a drop-in replacement for multiprocessing. We use the mpire.WorkerPool class and call one of the available map functions:

from mpire import WorkerPool

with WorkerPool(n_jobs=5) as pool:
    results = pool.map(time_consuming_function, range(10))

The differences in code are small: if you're used to vanilla multiprocessing, there's no need to learn a completely new multiprocessing syntax. The additional available functionality, though, is what sets MPIRE apart.

Progress bar

Suppose we want to know the status of the current task: how many tasks are completed, how long before the work is ready? It's as simple as setting the progress_bar parameter to True:

with WorkerPool(n_jobs=5) as pool:
    results = pool.map(time_consuming_function, range(10), progress_bar=True)

And it will output a nicely formatted tqdm progress bar. In case you're running your code inside a notebook it will automatically switch to a widget.

MPIRE also offers a dashboard, for which you need to install additional dependencies. See Dashboard for more information.
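
As a minimal sketch (assuming the optional dependencies are installed with pip install mpire[dashboard] and that a start_dashboard helper is available in mpire.dashboard; see the Dashboard documentation for the authoritative API), starting the dashboard could look like this:

from mpire import WorkerPool
from mpire.dashboard import start_dashboard

# Start the dashboard and print its connection details (e.g., the port it listens on)
dashboard_details = start_dashboard()
print(dashboard_details)

with WorkerPool(n_jobs=5) as pool:
    results = pool.map(time_consuming_function, range(10), progress_bar=True)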

Shared objects

If you have one or more objects that you want to share between all workers, you can make use of the copy-on-write shared_objects option of MPIRE. MPIRE will pass on these objects only once per worker, without copying or serialization. Only when you alter the object in the worker function will it start copying it for that worker.

def time_consuming_function(some_object, x):
    time.sleep(1)  # Simulate that this function takes long to complete
    return ...

def main():
    some_object = ...
    with WorkerPool(n_jobs=5, shared_objects=some_object) as pool:
        results = pool.map(time_consuming_function, range(10), progress_bar=True)

See shared_objects for more details.

Worker initialization

Workers can be initialized using the worker_init feature. Together with worker_state you can load a model, or set up a database connection, etc.:

def init(worker_state):
    # Load a big dataset or model and store it in a worker specific worker_state
    worker_state['dataset'] = ...
    worker_state['model'] = ...

def task(worker_state, idx):
    # Let the model predict a specific instance of the dataset
    return worker_state['model'].predict(worker_state['dataset'][idx])

with WorkerPool(n_jobs=5, use_worker_state=True) as pool:
    results = pool.map(task, range(10), worker_init=init)

Similarly, you can use the worker_exit feature to let MPIRE call a function whenever a worker terminates. You can even let this exit function return results, which can be obtained later on. See the worker_init and worker_exit section for more information.
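
A minimal sketch of how this could look, assuming a worker_exit parameter on map and a get_exit_results method on WorkerPool as described in the linked section:

def exit_fun(worker_state):
    # Whatever the exit function returns can be collected after the map call
    return worker_state.get('n_tasks', 0)

def task(worker_state, x):
    worker_state['n_tasks'] = worker_state.get('n_tasks', 0) + 1
    return x * 2

with WorkerPool(n_jobs=5, use_worker_state=True) as pool:
    results = pool.map(task, range(10), worker_exit=exit_fun)
    exit_results = pool.get_exit_results()  # one entry per worker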

Worker insights

When your multiprocessing setup isn't performing as you want it to and you have no clue what's causing it, there's the worker insights functionality. This will give you insight into your setup, but it will not profile the function you're running (there are other libraries for that). Instead, it profiles the worker start-up time, waiting time, and working time. When worker init and exit functions are provided, it will time those as well.

Perhaps you're sending a lot of data over the task queue, which makes the waiting time go up. Whatever the case, you can enable and grab the insights using the enable_insights flag and mpire.WorkerPool.get_insights function, respectively:

with WorkerPool(n_jobs=5) as pool:
    results = pool.map(time_consuming_function, range(10), enable_insights=True)
    insights = pool.get_insights()

See worker insights for a more detailed example and expected output.

Documentation

See the full documentation at https://slimmer-ai.github.io/mpire/ for information on all the other features of MPIRE.

If you want to build the documentation yourself, please install the documentation dependencies by executing:

pip install mpire[docs]

or

pip install .[docs]

Documentation can then be built by executing:

python setup.py build_docs

Documentation can also be built from the docs folder directly. In that case, MPIRE should be installed and available in your current working environment. Then execute:

make html

in the docs folder.

Comments
  • worker_state is lost between map calls if the input is too large


    This is very related to #15.

    Since your awesome release v2.3.0 (which fixes #15), I've been using mpire a lot, and I love it :)


    But I'm having a problem, very similar to #15.

    In the following script, each worker gets to deal with several numbers i, which it keeps in its state. Then I retrieve these values in another call.

    from mpire import WorkerPool
    
    
    N = 12
    W = 4
    
    
    def set_state(w_state, i):
        w_state[i] = 2 * i + 1
        return None
    
    
    def get_state(w_state, i):
        return w_state[i]
    
    
    if __name__ == "__main__":
        pool = WorkerPool(n_jobs=W, use_worker_state=True, keep_alive=True)
        s = N // W
    
        pool.map(set_state, list(range(N)), iterable_len=N, n_splits=s)
        r = pool.map(get_state, list(range(N)), iterable_len=N, n_splits=s)
    
        print(r)
    

    If I run this script, everything works perfectly and I get my expected output:

    [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23]


    Now, if I change N (the number of tasks) to a higher number (like 128) and run the script again, I get the following error:

    KeyError: '\n\nException occurred in Worker-2 with the following arguments:\nArg 0: 32\nTraceback (most recent call last):\n File "/root/miniconda3/envs/housing_sb3/lib/python3.8/site-packages/mpire/worker.py", line 322, in _run_safely\n results = func()\n File "/root/miniconda3/envs/housing_sb3/lib/python3.8/site-packages/mpire/worker.py", line 276, in _func\n results = func(args)\n File "/root/miniconda3/envs/housing_sb3/lib/python3.8/site-packages/mpire/worker.py", line 415, in _helper_func_with_idx\n return args[0], self._call_func(func, args[1])\n File "/root/miniconda3/envs/housing_sb3/lib/python3.8/site-packages/mpire/worker.py", line 442, in _call_func\n return func(args)\n File "housing_drl/sb3/swag.py", line 14, in get_state\n return w_state[i]\nKeyError: 32\n'

    It's the exact same error as in #15, so it seems the worker state is somehow erased?


    @sybrenjansen Do you have any idea what's the problem ? Did I do something wrong in my script ?

    enhancement 
    opened by ghost 9
  • How to handle defunct processes in child processes?


    This issue is not brought up by mpire, but I thought I'd discuss how to deal with it or improve it here.

    When I start multiple child processes with the Pool module and call exit() in one of them, or just use the kill command to kill the child process, it will become a defunct process and cause the parent process to fail to exit.

    The code to reproduce this problem is very simple:

    from time import sleep
    from mpire.pool import WorkerPool
    
    def main_naive(i):
        if i == 0:
            sleep(1)
            print("Exiting", i)
            exit()  # If remove this line, everything works fine.
        else:
            sleep(0.1)
            print("Exiting", i)
    
    def main():
        with WorkerPool(n_jobs=2, start_method="spawn", daemon=False) as pool:
            pool.map(main_naive, list(range(2)))
    
    if __name__ == "__main__":
        main()
    

    So how can I make the parent process exit normally in this case?

    enhancement 
    opened by sailxjx 9
  • `shared_objects` not applicable for Windows?


    Hi,

    I've tested the shared_objects feature on Windows and saw that the copying of the supposedly "shared" objects was still taking place. Then I noticed the documentation states that multiprocessing with shared objects is only possible when using start_method='fork', which is not available on Windows... I've tested the feature on Linux and it worked as expected. Is there a way to make it work on Windows as well?

    My sample code to reproduce (have to remove the start_method='fork' for it to run on Windows):

    from time import time
    import numpy as np
    from mpire import WorkerPool
    
    
    def main():
        x = 10 + 2 * np.random.randn(10_000_000)  # N(10, 4)
        y = 20 + 1 * np.random.randn(10_000_000)  # N(20, 1)
        n_tests = 100
    
        print('no multi-processing')
        t0 = time()
        obj_ids = [func((x, y)) for _ in range(n_tests)]
        print(f"{time() - t0:.2f}s, number of copies {len(set(obj_ids))}")
    
        print('multi-processing')
        t0 = time()
        with WorkerPool(n_jobs=4, start_method='fork') as pool:
            obj_ids = pool.map(func, [((x, y), ) for _ in range(n_tests)])
        print(f"{time() - t0:.2f}s, number of copies {len(set(obj_ids))}")
    
        print('multi-processing with shared objects')
        t0 = time()
        with WorkerPool(n_jobs=4, shared_objects=(x, y), start_method='fork') as pool:
            obj_ids = pool.map(func, range(n_tests))
        print(f"{time() - t0:.2f}s, number of copies {len(set(obj_ids))}")
    
    
    def func(d, _=None):
        x, y = d
        diff = np.random.choice(x, size=10_000, replace=False).mean() - np.random.choice(y, size=10_000, replace=False).mean()
        return id(x)
    
    
    if __name__ == '__main__':
        main()
    
    opened by ranshadmi 7
  • python 3.9 + Window10 mpire problem with freeze_support, should I fix runpy.py file?


    Sorry for asking a similar issue to #40 and #41.

    However, I don't know where I should put the line below, as you mentioned before:

    from multiprocessing import freeze_support()

    I can find if __name__ == '__main__' in the last part of the runpy.py file.

    Should I fix that file to use the mpire module in PyCharm on Windows?

    Or should I change the code style so that it can only be run from the terminal?

    opened by gangilseo 6
  • Hanging mpire worker_pool.map_unordered


    When creating many processes which return large results, mpire hangs while collecting the results. Analysis showed it is caused by exceeding the system size limit for the OS pipes, which for Linux defaults to 4K.

    bug 
    opened by fzonneveld 6
  • TypeError at tqdm_utils


    Hi there,

    I'm getting an issue when setting progress_bar = True.

    Traceback:

    Traceback (most recent call last):
      File "/Users/user/Projects/project/hk_analysis/matching.py", line 74, in <module>
        process_matches_multi()
      File "/Users/user/Projects/project/hk_analysis/matching.py", line 49, in process_matches_multi
        matching_results = pool.map(atomic_matching, [{'item': item} for item in updated[:10]], progress_bar=True)
      File "/Users/user/.virtualenvs/pdf_analysis/lib/python3.9/site-packages/mpire/pool.py", line 265, in map
        results = self.map_unordered(func, ((args_idx, args) for args_idx, args in enumerate(iterable_of_args)),
      File "/Users/user/.virtualenvs/pdf_analysis/lib/python3.9/site-packages/mpire/pool.py", line 324, in map_unordered
        return list(self.imap_unordered(func, iterable_of_args, iterable_len, max_tasks_active, chunk_size,
      File "/Users/user/.virtualenvs/pdf_analysis/lib/python3.9/site-packages/mpire/pool.py", line 490, in imap_unordered
        tqdm_manager_owner = TqdmManager.start_manager() if progress_bar else False
      File "/Users/user/.virtualenvs/pdf_analysis/lib/python3.9/site-packages/mpire/tqdm_utils.py", line 121, in start_manager
        cls.MANAGER_HOST.value = cls.MANAGER.address[1:]
      File "<string>", line 11, in setvalue
    TypeError: bytes expected instead of str instance
    

    Snapshot of code:

    from mpire import WorkerPool
    from mpire.dashboard import connect_to_dashboard
    
    connect_to_dashboard(8080)
    
    def atomic_matching(shared_objects, item):
        ### ... match processing
        return data
    
    def process_matches_multi():
        ### ... get data
        ### ... set choices and previous
    
        with WorkerPool(n_jobs=5) as pool:
            pool.set_shared_objects((choices, previous))
            matching_results = pool.map(atomic_matching, [{'item': item} for item in updated])
        
        ### ... output results
    

    If I change tqdm_utils as follows (line 121), the error disappears, but no progress bar appears and the dashboard does not connect:

    cls.MANAGER_HOST.value = cls.MANAGER.address[1:].encode()
    

    I've checked tqdm and it is working in isolation.

    Versions are: mpire==2.3.0, tqdm==4.62.1 (also tried tqdm==4.62.3), Python 3.9.7

    Love the project and thanks for all the hard work :)

    Many thanks, Paul

    bug 
    opened by pemm8 6
  • ValueError: signal only works in main thread of the main interpreter


    Greetings. I am trying to use mpire to improve a long-running function's performance as compared to multiprocess.Pool (which works fine) and am getting this error. Thank you in advance for your help.

    I did see the closed issue related to the same error msg. Full trace below.

    I pip installed mpire on Windows 10, Python 3.10.2, Django 4.0.2. I am not running any threading but am not sure what Django does under the covers. I know others have mpire working with Django.

    Code (skinnied down; the run_monte_carlo_mp() func is called from a different module):

    def run_monte_carlo_mp(pd, s):
        from time import time_ns, sleep
        import multiprocessing
        from mpire import WorkerPool
        from rpm.main_calc import calc_years

        iterations = 500
        cpus = 4
        time_start = time_ns()

        # This runs fine in about 7-8 seconds vs 25 secs without multiprocessing.
        with multiprocessing.Pool(processes=cpus) as pool:
            inputs = [[pd, 1] for i in range(iterations)]
            pool.starmap(calc_years, inputs)
        print('multiprocessing.Pool time:', time_ns() - time_start)

        time_start = time_ns()
        # This generates the error below.
        with WorkerPool(n_jobs=cpus) as pool:
            # pd is a python dictionary containing info needed by calc_years_mp()
            inputs = [[pd, 1] for i in range(iterations)]
            pool.map(calc_years, inputs)
        print('mpire.WorkerPool time:', time_ns() - time_start)
    

    Traceback (most recent call last):
      File "main.py", line 78, in load_plan
        run_monte_carlo_mp(pd, 1)
      File "multiprocess_test.py", line 27, in run_monte_carlo_mp
        pool.map(calc_years, inputs)
      File "D:\RPM\env\lib\site-packages\mpire\pool.py", line 265, in map
        results = self.map_unordered(func, ((args_idx, args) for args_idx, args in enumerate(iterable_of_args)),
      File "D:\RPM\env\lib\site-packages\mpire\pool.py", line 324, in map_unordered
        return list(self.imap_unordered(func, iterable_of_args, iterable_len, max_tasks_active, chunk_size,
      File "D:\RPM\env\lib\site-packages\mpire\pool.py", line 509, in imap_unordered
        self._start_workers(progress_bar)
      File "D:\RPM\env\lib\site-packages\mpire\pool.py", line 148, in _start_workers
        self._workers.append(self._start_worker(worker_id))
      File "D:\RPM\env\lib\site-packages\mpire\pool.py", line 175, in _start_worker
        with DisableKeyboardInterruptSignal():
      File "D:\RPM\env\lib\site-packages\mpire\signal.py", line 37, in __enter__
        ignore_keyboard_interrupt()
      File "D:\RPM\env\lib\site-packages\mpire\signal.py", line 45, in ignore_keyboard_interrupt
        signal(SIGINT, SIG_IGN)
      File "C:\Users\Dad\AppData\Local\Programs\Python\Python310\lib\signal.py", line 56, in signal
        handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
    ValueError: signal only works in main thread of the main interpreter

    opened by charleslstone 5
  • Feature request: Specify a timeout after which a worker is stopped


    In case a worker hangs/exceeds some time limit it would be nice to have a way to kill that worker without affecting the other ones.

    For example the pdfkit package is known to produce hanging processes so it would be useful to be able to kill workers that happen to end up with such processes.

    I can try to implement this feature if you also think it is a good idea.

    enhancement 
    opened by charalamm 5
  • python does not work properly in jupyter notebook after long time running mpire


    OS: Ubuntu 20.04 LTS, conda: Python 3.8, Jupyter notebook

    Question: I filter the data from two TSV files using mpire and get the data after running for a long time. However, Python does not work properly when I run the next cell.

    1. Run the mpire code first: OK.
    2. mpire returns the data, but it does not show in the Variable inspector.
    3. Run the next cell: Python will not work properly, but it uses one CPU core at 100%, as if stuck in an infinite loop, and cannot return results.
    4. Then, interrupt the cell.
    5. Run the cell again and the data will return.

    So I had to run the program manually.

    Can you find the cause of this problem?

    Two files are used in my program: interval_input has 2070751 rows, hg19_hg38_relation_input has 3299923 rows.

    Run time: ~14 hours. CPU: 8 cores, all used. Total RAM: 32 GB; Python used ~12 GB while running, the system used ~3.5 GB.

    code:

    def get_hg_relation(chr_name, pos_hg19_start, pos_hg19_end, pos_relation):
        pos_relation_filter_tmp =  (pos_relation['chr'].isin([chr_name])) & (pos_relation['pos_hg19'].isin(range(pos_hg19_start,pos_hg19_end+1)))
        pos_relation_filter=pos_relation[pos_relation_filter_tmp==True]
        return pos_relation_filter
    
    
    # use mpire
    args_isin=[(x, y, z, hg19_hg38_relation_input) for x,y,z in zip(interval_input['chr'], interval_input['pos_hg19_str'],interval_input['pos_hg19_end'])]
    
    with WorkerPool(n_jobs=7) as pool:
        hg_relation_result = pool.map(get_hg_relation, args_isin, enable_insights=True, progress_bar=True)
    #     print(hg_relation_result)
        print(pool.get_insights())
    
    
    opened by ichobits 5
  • In Notebook: AttributeError: 'tqdm' object has no attribute 'sp'


    When using mpire in jupyter notebook, there is an error when enabling progress_bar=True:

    Process ForkProcess-34:
    Traceback (most recent call last):
      File "/home/ubuntu/anaconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/home/ubuntu/anaconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/ubuntu/anaconda3/lib/python3.8/site-packages/mpire/progress_bar.py", line 194, in _progress_bar_handler
        progress_bar.sp(bar_style='success')
    AttributeError: 'tqdm' object has no attribute 'sp'
    

    After reviewing the code at line 194, sp seems to be called only when in_notebook is True:

    if in_notebook:
        progress_bar.sp(bar_style='success')
    

    Testing Environment

    • OS: Ubuntu 16.04
    • Interpreter: Python 3.8.8 [GCC 7.3.0]
    • Libraries: mpire (2.1.1), tqdm (4.62.2), jupyter (1.0.0)
    bug 
    opened by LIU-Yinyi 5
  • Deadlock when using dill and an exception is thrown


    When using dill and an exception is thrown in one of the worker processes the main process deadlocks.

    Reproducible example:

    from mpire import WorkerPool
    
    # Create some fake data. It should raise when x=10, y=52, z=0
    data = [(x, y, z) for x, y, z in zip(range(0, 100), range(42, 142), range(10, -90, -1))]
    with WorkerPool(n_jobs=5, use_dill=True) as pool:
        for res in pool.imap(lambda x, y, z: x*y/z, data):
            print(res)
    
    

    When setting use_dill=False it works like a charm. Unfortunately, in my case I have to use dill to pickle lambda functions.

    bug 
    opened by derHeinzer 4
  • Other threads are currently calling into gRPC, skipping fork() handlers


    I am a new mpire user. I just tried parallelizing a simple workflow, and it worked just fine, but I get the following warning:

    E1123 14:15:42.696150185  928936 fork_posix.cc:76]           Other threads are currently calling into gRPC, skipping fork() handlers
    
    opened by ma-sadeghi 2
  • add possibility to disable multiprocessing for debugging purpose w/o changing code


    Hey! Sometimes I have to debug my code in sequential mode, but duplicating code (parallel and sequential execution) looks ugly. It seems to me that introducing a DummyWorkerPool would be a good solution for this purpose: it would just call the function sequentially in a loop without any overhead. What do you think?

    enhancement 
    opened by vlomshakov 3
  • resource_tracker: There appear to be 32 leaked semaphore objects to clean up at shutdown


    I got this message from the resource tracker when I use spawn mode on my Mac; the code is really simple:

    import time
    from mpire import WorkerPool
    
    
    def global_runner(*args):
        print("I am Running", args)
        time.sleep(0.3)
        return
    
    
    if __name__ == "__main__":
        params_group = [1, 2]
        with WorkerPool(n_jobs=2, start_method="spawn") as pool:
            pool.map(global_runner, params_group)
    

    Can anyone help me solve this problem?

    help wanted 
    opened by sailxjx 2