PaRT: Parallel Learning for Robust and Transparent AI

Overview

This repository contains the code for PaRT, an algorithm for training a base network on multiple tasks in parallel. A diagram of PaRT is shown in the figure below.

Below, we list the dependencies and provide instructions for running the code for each experiment. We provide a script for each experiment to simplify setup and execution.

Dependencies

  • python >= 3.8
  • pytorch >= 1.7
  • scikit-learn
  • torchvision
  • tensorboard
  • matplotlib
  • pillow
  • psutil
  • scipy
  • numpy
  • tqdm
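
If you prefer to install the dependencies manually rather than through setupEnv.sh, a minimal sketch is shown below; the environment name and the assumption that every package is available under its default PyPI/conda name are ours and are not taken from the repository:

# Manual alternative to setupEnv.sh (environment name "part" is illustrative)
conda create -n part python=3.8 -y
conda activate part
pip install "torch>=1.7" torchvision scikit-learn tensorboard matplotlib pillow psutil scipy numpy tqdm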

SETUP ENVIRONMENT

To set up the conda environment and create the required directories, go to the scripts directory and run the following commands in the terminal:

conda init bash
bash -i setupEnv.sh

Check that the final output of these commands is:

Installed torch version {---}
Virtual environment was made successfully
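
As an additional sanity check, you can confirm that PyTorch is importable from the new environment; this one-liner only assumes that the environment created by setupEnv.sh is activated:

# Optional check that torch and torchvision import correctly and report the installed versions
python -c "import torch, torchvision; print('torch', torch.__version__, 'torchvision', torchvision.__version__, '| CUDA available:', torch.cuda.is_available())"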

CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Baseline.sh ../../scripts/test_case0_cifar100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_baseline.json to 1, 2, 3, or 4.
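
If you want to run all seeds back to back, a small wrapper such as the one below should work. It assumes that test_case is a top-level key in the JSON config and that the loop is launched from the scripts directory; adjust the paths otherwise:

# Hypothetical helper: run seeds 1-4 of the CIFAR100 baseline experiment in sequence
CFG=../../scripts/test_case0_cifar100_baseline.json
for SEED in 1 2 3 4; do
    # rewrite the test_case field in the config, then launch the run
    python -c "import json, sys; p = sys.argv[1]; d = json.load(open(p)); d['test_case'] = int(sys.argv[2]); json.dump(d, open(p, 'w'), indent=4)" "$CFG" "$SEED"
    bash -i runCIFAR100Baseline.sh "$CFG"
done

The same loop applies to the other experiments below by swapping in the corresponding script and JSON file.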

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Parallel.sh ../../scripts/test_case0_cifar100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_parallel.json to 1, 2, 3, or 4.

CIFAR 10 AND CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR10 and CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Baseline.sh ../../scripts/test_case0_cifar10_100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Parallel.sh ../../scripts/test_case0_cifar10_100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_parallel.json to 1, 2, 3, or 4.

FIVETASKS EXPERIMENTS

The dataset for this experiment can be downloaded from the link provided on the CPG GitHub page. Instructions to run the code for the FiveTasks experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksBaseline.sh ../../scripts/test_case0_5tasks_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksParallel.sh ../../scripts/test_case0_5tasks_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_parallel.json to 1, 2, 3, or 4.

Paper

Please cite our paper:

Paknezhad, M., Rengarajan, H., Yuan, C., Suresh, S., Gupta, M., Ramasamy, S., Lee, H. K., PaRT: Parallel Learning Towards Robust and Transparent AI, arXiv:2201.09534 (2022).
