Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you executed it on your personal computer!

Overview

Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you executed it on your machine!

Motivation

Would you like fully reproducible research or reusable workflows that seamlessly run on HPC clusters? Tired of writing and managing large Slurm submission scripts? Do you have to comment out large parts of your pipeline whenever their results have already been generated? Don't waste your precious time! awflow allows you to directly describe complex pipelines in Python that run on your personal computer and on large HPC clusters.

import awflow as aw
import glob
import numpy as np

n = 100000
tasks = 10

@aw.cpus(4)  # Request 4 CPU cores
@aw.memory("4GB")  # Request 4 GB of RAM
@aw.postcondition(aw.num_files('pi-*.npy', 10))  # Skip if the 10 result files already exist
@aw.tasks(tasks)  # Request 10 parallel tasks
def estimate(task_index):
    print("Executing task {} / {}.".format(task_index + 1, tasks))
    x = np.random.random(n)
    y = np.random.random(n)
    pi_estimate = (x**2 + y**2 <= 1)
    np.save('pi-' + str(task_index) + '.npy', pi_estimate)

@aw.dependency(estimate)
def merge():
    files = glob.glob('pi-*.npy')
    stack = np.vstack([np.load(f) for f in files])
    np.save('pi.npy', stack.sum() / (n * tasks) * 4)

@aw.dependency(merge)
@aw.postcondition(aw.exists('pi.npy'))  # Prevent execution if the postcondition is satisfied.
def show_result():
    print("Pi:", np.load('pi.npy'))

aw.execute()

Executing this Python program (python examples/pi.py) on a Slurm HPC cluster will launch the following jobs:

           1803299       all    merge username PD       0:00      1 (Dependency)
           1803300       all show_res username PD       0:00      1 (Dependency)
     1803298_[6-9]       all estimate username PD       0:00      1 (Resources)
         1803298_3       all estimate username  R       0:01      1 compute-xx
         1803298_4       all estimate username  R       0:01      1 compute-xx
         1803298_5       all estimate username  R       0:01      1 compute-xx
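Note how the merge and show_result jobs remain pending (PD) with reason Dependency until the estimate array job has completed.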

Check the examples directory and the guide to explore the functionality.

Installation

The awflow package is available on PyPI, which means it is installable via pip.

you@local:~ $ pip install awflow

If you would like the latest features, you can install the package directly from the Git repository.

you@local:~ $ pip install git+https://github.com/JoeriHermans/awflow

If you would like to run the examples as well, be sure to install the optional example dependencies.

you@local:~ $ pip install 'awflow[examples]'

Usage

The core concept in awflow is the notion of a task. Essentially, this is a method that will be executed in your workflow. Tasks are represented as nodes in a directed graph, which makes it easy to specify (task) dependencies. In addition, you can attribute properties to tasks using the decorators defined by awflow. This allows you to specify resources such as CPU cores, GPUs, and memory, and even postconditions. Follow the guide for additional examples and descriptions.
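
For illustration, a minimal two-task workflow built from the same decorators shown above (the task names here are hypothetical) could look like this:

import awflow as aw

@aw.cpus(1)  # Request a single CPU core
def generate():
    print('Generating data.')  # Hypothetical root task

@aw.dependency(generate)  # Only executed after 'generate' has completed
def process():
    print('Processing data.')

aw.execute()

Running this script locally executes the tasks in order; on a Slurm cluster, the same graph is translated into jobs with the corresponding dependencies.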

Decorators

The example above demonstrates the following decorators:

  • @aw.cpus(n): request n CPU cores for a task.
  • @aw.memory('4GB'): request the specified amount of RAM.
  • @aw.tasks(n): execute a task as n parallel tasks.
  • @aw.dependency(task): only execute after the specified task has completed.
  • @aw.postcondition(condition): skip execution whenever the condition, such as aw.exists('pi.npy') or aw.num_files('pi-*.npy', 10), is already satisfied.

Workflow storage

By default, workflows will be stored in the current working directory within the ./.workflows folder. If desired, a central storage directory can be used by specifying the AWFLOW_STORAGE environment variable.
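
For example (the storage path below is merely a placeholder):

you@local:~ $ export AWFLOW_STORAGE=/path/to/central/storage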

The awflow utility

This package comes with a utility program to manage submitted, failed, and pending workflows. Its functionality can be inspected by executing awflow -h. In addition, to streamline the management of workflows, we recommend giving every workflow a specific name to easily identify it. This name does not have to be unique for every distinct workflow execution.

aw.execute(name=r'Some name')

Executing awflow list after submitting the pipeline with python pipeline.py [args] will yield:

you@local:~ $ awflow list
  Postconditions      Status      Backend     Name          Location
 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  50%                 Running     Slurm       Some name     /home/jhermans/awflow/examples/.workflows/tmpntmc712a

Modules

  • awflow cancel [workflow]: TODO
  • awflow clear: TODO
  • awflow list: TODO
  • awflow inspect [workflow]: TODO

Contributing

See CONTRIBUTING.md.

Roadmap

  • Documentation
  • README

License

As described in the LICENSE file.

Comments
  • [BUG] conda activation crashes standalone execution

    Issue description

    In the standalone backend on Unix systems, the os.system(command) used here

    https://github.com/JoeriHermans/awflow/blob/1fcf255debfbc18d39a6b2baa387bbc85050209d/awflow/backends/standalone/executor.py#L53-L60

    actually calls /bin/sh. On some operating systems, like Ubuntu, sh links to dash, which does not support the scripting features required by conda activation. This results in runtime errors like

    sh: 5: /home/username/miniconda3/envs/envname/etc/conda/activate.d/activate-binutils_linux-64.sh: Syntax error: "(" unexpected
    

    Proposed solution

    A solution would be to change the shell with which the commands are called. This is possible thanks to the subprocess package. A good default would be bash, since almost all Unix systems provide it.

        # Note: requires 'import subprocess' at the top of the module.
        if node.tasks > 1:
            # Append the task index and run each task of the array through bash.
            for task_index in range(node.tasks):
                task_command = command + ' ' + str(task_index)
                return_code = subprocess.call(task_command, shell=True, executable='/bin/bash')
        else:
            return_code = subprocess.call(command, shell=True, executable='/bin/bash')
    

    One could also add a way to change this default. Additionally, wouldn't it be better to launch the tasks as background jobs in the standalone backend (simply by adding & at the end of the command)?
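
    As a rough sketch of the background-job idea (illustrative only, not part of the proposal above), subprocess.Popen starts a command without blocking on its completion, which achieves the same effect as a trailing &:

        # Illustrative only: launch the task without waiting for it to finish.
        process = subprocess.Popen(task_command, shell=True, executable='/bin/bash')
        # Later, wait for completion and collect the exit status.
        return_code = process.wait()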

    bug 
    opened by francois-rozet 1
  • [BUG] pip install fails for version 0.0.4

    $ pip install awflow==0.0.4
    Collecting awflow==0.0.4
      Using cached awflow-0.0.4.tar.gz (19 kB)
        ERROR: Command errored out with exit status 1:
         command: /home/francois/awf/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-ou4rxs3q/awflow/pip-egg-info
             cwd: /tmp/pip-install-ou4rxs3q/awflow/
        Complete output (7 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 54, in <module>
            'examples': _load_requirements('requirements_examples.txt')
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 17, in _load_requirements
            with open(file_name, 'r') as file:
        FileNotFoundError: [Errno 2] No such file or directory: 'requirements_examples.txt'
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    
    bug high priority 
    opened by francois-rozet 1
  • Jobs submitted with awflow don't work with multiprocessing.Pool

    Hi,

    I tried submitting a few jobs with awflow, but each time I run it with the Slurm backend the pool.starmap call never returns and the process simply times out on the cluster. The top output below shows the spawned worker processes:

    0 0 8196756 5.1g 85664 S 0.0 1.0 2:12.27 python
    790517 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.66 python
    790518 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.45 python
    790519 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.76 python
    790520 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:02.02 python
    790521 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.99 python

    The processes are spawned on the cluster, but each uses 0% of the CPU until the job is eventually cancelled:

    slurmstepd: error: *** JOB 1933332 ON compute-04 CANCELLED AT 2022-04-08T19:33:26 DUE TO TIME LIMIT ***

    opened by digirak 0