Generic template to bootstrap your PyTorch project with PyTorch Lightning, Hydra, W&B, and DVC.

Overview

NN Template

PyTorch Lightning · Conf: hydra · Logging: wandb · Code style: black

Generic template to bootstrap your PyTorch project. Click on Use this Template and avoid writing boilerplate code for:

  • PyTorch Lightning, a lightweight PyTorch wrapper for high-performance AI research.
  • Hydra, a framework for elegantly configuring complex applications.
  • DVC, to track large files, directories, or ML models. Think "Git for data".
  • Weights & Biases, to organize and analyze machine learning experiments (educational account available).

nn-template is opinionated so you don't have to be. If you use this template, please add an acknowledgement to your README.

Usage Examples

Check out the mwe branch to view a minimum working example on MNIST.

Structure

.
├── conf                # Hydra compositional config
│   ├── default.yaml    # current experiment configuration
│   ├── data
│   ├── hydra
│   ├── logging
│   ├── model
│   ├── optim
│   └── train
├── data                # datasets
├── experiments         # local logs
├── README.md
├── requirements.txt    # basic requirements
└── src
    ├── common          # common Python modules
    ├── pl_data         # PyTorch Lightning datamodules and datasets
    ├── pl_modules      # PyTorch Lightning modules
    └── run.py          # entry point to run current conf

Data Version Control

DVC runs alongside Git and uses the current commit hash to version the data.

Initialize the dvc repository:

$ dvc init

To start tracking a file or directory, use dvc add:

$ dvc add data/ImageNet

DVC stores information about the added file (or a directory) in a special .dvc file named data/ImageNet.dvc, a small text file with a human-readable format. This file can be easily versioned like source code with Git, as a placeholder for the original data (which gets listed in .gitignore):

git add data/ImageNet.dvc data/.gitignore
git commit -m "Add raw data"
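
For reference, the generated .dvc file is a small YAML document along these lines (illustrative values; the exact fields and hashes depend on your DVC version and data):

# data/ImageNet.dvc (illustrative content, not a real checksum)
outs:
- md5: 0123456789abcdef0123456789abcdef.dir   # placeholder hash of the tracked directory
  size: 1000000
  nfiles: 100
  path: ImageNet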

Making changes

When you make a change to a file or directory, run dvc add again to track the latest version:

$ dvc add data/ImageNet

Switching between versions

The regular workflow is to use git checkout first (to switch a branch, or check out a commit or a revision of a .dvc file) and then run dvc checkout to sync the data:

$ git checkout <...>
$ dvc checkout

Read more in the docs!

Weights and Biases

Weights & Biases helps you keep track of your machine learning projects. Use tools to log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues.

This is an example of a simple dashboard.

Quickstart

Log in to your wandb account by running wandb login once. Configure the logging in conf/logging/*.


Read more in the docs. Particularly useful is the log method, accessible from inside a PyTorch Lightning module via self.logger.experiment.log.
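
As an illustration (the module below is hypothetical, not part of the template), logging a custom metric directly to the underlying wandb run from a LightningModule might look like:

import pytorch_lightning as pl
import torch

class MyModule(pl.LightningModule):  # hypothetical module, for illustration only
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)  # standard Lightning logging API
        # Direct access to the wandb run for custom payloads (assumes a WandbLogger is attached)
        self.logger.experiment.log({"custom/train_loss": loss.item()})
        return loss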

W&B is our logger of choice, but that is a purely subjective decision. Since we are using Lightning, you can replace wandb with the logger you prefer (you can even build your own). More about Lightning loggers here.

Hydra

Hydra is an open-source Python framework that simplifies the development of research and other complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line. The name Hydra comes from its ability to run multiple similar jobs - much like a Hydra with multiple heads.

The basic functionalities are intuitive: it is enough to change the configuration files in conf/* according to your preferences. Everything will be logged to wandb automatically.

Consider creating new root configurations, such as conf/myawesomeexp.yaml, instead of always using the default conf/default.yaml.
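
For instance, a new root configuration could simply re-compose the existing config groups (a sketch only; the entries below are illustrative and should match the files actually present under conf/):

# conf/myawesomeexp.yaml (illustrative)
defaults:
  - data: default
  - hydra: default
  - logging: default
  - model: default
  - optim: default
  - train: default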

Sweeps

You can easily perform hyperparameter sweeps, which override the configuration defined in /conf/*.

The easiest one is grid search: it executes the code with every possible combination of the specified hyperparameters:

PYTHONPATH=. python src/run.py -m optim.optimizer.lr=0.02,0.002,0.0002 optim.lr_scheduler.T_mult=1,2 optim.optimizer.weight_decay=0,1e-5

You can explore aggregate statistics or compare and analyze each run in the W&B dashboard.


We recommend going through at least the Basic Tutorial and the docs about Instantiating objects with Hydra.
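
For example, Hydra can build objects directly from configuration nodes that declare a _target_ class. A minimal, self-contained sketch (not taken from the template's configs) is:

import hydra.utils
import torch
from omegaconf import OmegaConf

# A config node equivalent to what a conf/optim/*.yaml file might declare (hypothetical values)
cfg = OmegaConf.create(
    {"optimizer": {"_target_": "torch.optim.Adam", "lr": 0.001, "weight_decay": 0.0}}
)

model = torch.nn.Linear(4, 2)
# Extra keyword arguments (here the parameters to optimize) are forwarded to the target class
optimizer = hydra.utils.instantiate(cfg.optimizer, params=model.parameters())
print(optimizer)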

PyTorch Lightning

Lightning makes coding complex networks simple. It is not a high-level framework like Keras, but it enforces a neat code organization and encapsulation.

You should be somewhat familiar with PyTorch and PyTorch Lightning before using this template.
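
To give an idea of the organization Lightning enforces, here is a minimal self-contained example (a toy sketch, unrelated to the template's own modules): the LightningModule bundles the model, the training step, and the optimizer, while the Trainer owns the loop.

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitRegressor(pl.LightningModule):  # hypothetical toy module
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# The Trainer handles device placement, checkpointing, logging, etc.
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitRegressor(), DataLoader(dataset, batch_size=16))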

Environment Variables

System-specific variables (e.g. absolute paths to datasets) should not be under version control; otherwise there will be conflicts between different users.

The best way to handle system-specific variables is through environment variables.

You can define new environment variables in a .env file in the project root. A copy of this file (e.g. .env.template) can be under version control to ease new project configurations.

To define a new variable, write inside .env:

export MY_VAR=/home/user/my_system_path

You can dynamically resolve the variable name from Python code with:

get_env('MY_VAR')

and in the Hydra .yaml configuration files with:

${env:MY_VAR}
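
The template exposes a get_env helper for this; conceptually it behaves like the following simplified sketch (assuming python-dotenv is available; this is not the actual implementation):

import os
from typing import Optional

from dotenv import load_dotenv  # assumption: python-dotenv is installed

def get_env(name: str, default: Optional[str] = None) -> str:
    """Resolve a variable defined in .env or in the shell environment."""
    load_dotenv()  # reads the .env file in the project root, if present
    value = os.environ.get(name, default)
    if value is None:
        raise KeyError(f"Environment variable '{name}' is not defined")
    return value
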
Comments
  • Curious if you checked out DAGsHub

    Hi @lucmos, this looks like an awesome repo. I stumbled on it while doing some research on project templates for ML projects. I'm one of the creators of DAGsHub which is a platform built on Git, DVC, and MLflow. It integrates with GitHub and provides a free DVC remote and MLflow server so that you can track experiments and share your data & models in one UI.

    Here's an example project to showcase the abilities: https://dagshub.com/OperationSavta/SavtaDepth

    It seems really in line with what you're creating here, and I would love to hear your thoughts about it.

    opened by deanp70 4
  • Streamlit UI - Weights and Biases login

    The template is really awesome.

    I had a small issue. When I ran the Streamlit UI without being logged in to Weights and Biases, the UI just hung with the loading status, without giving me any feedback about what was happening, so I had to log in to wandb first, manually from the console. Is there any way to solve this issue? For example, to have feedback from the UI if I'm not logged in.

    Thanks!

    opened by andreim14 1
  • load_model

    Hi, this issue concerns the function from nn_core.serialization import load_model. Suppose I train a PyTorch model with class MyLightningModule, and that I saved the checkpoint in model_path. Suppose now that the class MyLightningModule has received some minor changes, for example a new class variable has been added. Let's call this version MyLightningModuleV2. When I load a model using this function, like:

    self.model = load_model(module_class=MyLightningModuleOld, checkpoint_path=Path(model_path), map_location=self.device).to(self.device).eval()

    I get an error because the checkpoint refers to the model of class MyLightningModule and therefore the new variable is (obviously) missing. To make it work, I need to load the model with the old version of the class, that is, MyLightningModule, and then manually set "model.new_variable" to the value I want, like the following:

    self.model = load_model(module_class=MyLightningModuleOld, checkpoint_path=Path(model_path), map_location=self.device).to(self.device).eval()
    self.model.new_variable = False
    

    It would be nice to have this option in the load_model function to avoid creating multiple versions of the same class.

    opened by framolfese 0
  • PyTorch Lightning EcoCI integration to check for compatibility with latest & upcoming releases

    Hey Valentino & Luca,

    I am just catching up with some bookmarks and remembered your repo here :). As someone who constantly fusses about the ideal project structure, that's actually pretty cool. I have been using an adapted version of the data science cookiecutter for generic ML projects, but nothing sophisticated like this here with code stubs.

    Haven't thoroughly played with it yet, though, besides creating an example folder and looking at the pl_module.py and datamodule.py files, which look good to me!

    In any case, long story short, I was wondering if you'd be interested in PyTorch Lightning's ecosystem CI to make sure that it stays fresh and relevant with respect to upcoming version releases (comes with free CPU and multi-GPU CI tests): https://devblog.pytorchlightning.ai/stay-ahead-of-breaking-changes-with-the-new-lightning-ecosystem-ci-b7e1cf78a6c7

    If you are interested in that, I am sure my colleague @Borda would be happy to assist with questions & technical details -- he built this thing, so he probably knows best :)

    opened by rasbt 4
Releases(0.2.3)
  • 0.2.3(Dec 15, 2022)

    What's Changed

    • Bump dependency versions by @lucmos in https://github.com/grok-ai/nn-template/pull/79
    • Version 0.2.3 by @lucmos in https://github.com/grok-ai/nn-template/pull/80

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.2...0.2.3

  • 0.2.2(Jun 13, 2022)

    What's Changed

    • Update README.md by @Flegyas in https://github.com/grok-ai/nn-template/pull/70
    • Improve documentation by @Flegyas in https://github.com/grok-ai/nn-template/pull/71
    • Update documentation by @Flegyas in https://github.com/grok-ai/nn-template/pull/72
    • Add asciinema gif in the README and docs by @lucmos in https://github.com/grok-ai/nn-template/pull/74
    • Add papers by @lucmos in https://github.com/grok-ai/nn-template/pull/76
    • Update precommits versions by @lucmos in https://github.com/grok-ai/nn-template/pull/75
    • Version 0.2.2 by @lucmos in https://github.com/grok-ai/nn-template/pull/77

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.1...0.2.2

  • 0.2.1(Mar 1, 2022)

    Changelog for nn-template 0.2.1 (2022-03-01)

    What's Changed

    • Fix status badge in the documentation by @lucmos in https://github.com/grok-ai/nn-template/pull/64
    • Minor fixes post release by @lucmos in https://github.com/grok-ai/nn-template/pull/65
    • Fix typos in the documentation by @mikcnt in https://github.com/grok-ai/nn-template/pull/67
    • Fix broken relative links due to mike root folder by @lucmos in https://github.com/grok-ai/nn-template/pull/68
    • Version 0.2.1 by @lucmos in https://github.com/grok-ai/nn-template/pull/69

    New Contributors

    • @mikcnt made their first contribution in https://github.com/grok-ai/nn-template/pull/67

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.2.0...0.2.1

  • 0.2.0(Mar 1, 2022)

    We are very pleased to present NN Template 0.2.0!

    Changelog for nn-template 0.2.0 (2022-03-01)

    Summary

    • Cookiecutter parametrization
    • CI/CD Integration via GitHub Actions
    • Automate testing of your projects
    • Logic decoupling thanks to nn-template-core
    • Advanced restore options for trainings
    • Documentation website
    • Support for Python logging (with colors!)

    What's Changed

    • Refactor configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/8
    • Refactor project to a python package by @lucmos in https://github.com/grok-ai/nn-template/pull/10
    • Add tooling configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/9
    • Refactor codebase to be compliant to the pre-commits by @lucmos in https://github.com/grok-ai/nn-template/pull/11
    • Refactor the project root management by @lucmos in https://github.com/grok-ai/nn-template/pull/12
    • Added wandb to .gitignore by @Flegyas in https://github.com/grok-ai/nn-template/pull/14
    • Refactor logging by @lucmos in https://github.com/grok-ai/nn-template/pull/15
    • Enable pin-memory if not on CPU by @lucmos in https://github.com/grok-ai/nn-template/pull/16
    • Factor our PyTorch Module from the Lightning Module by @lucmos in https://github.com/grok-ai/nn-template/pull/17
    • Force the .cache folder to be in the PROJECT_ROOT by @lucmos in https://github.com/grok-ai/nn-template/pull/19
    • Add the configuration to the Lightning checkpoints by @lucmos in https://github.com/grok-ai/nn-template/pull/20
    • Use extend-ignore instead of ignore in .flake8 by @lucmos in https://github.com/grok-ai/nn-template/pull/21
    • Fix formatting by @lucmos in https://github.com/grok-ai/nn-template/pull/22
    • Log the code used in the current experiment to wandb by @lucmos in https://github.com/grok-ai/nn-template/pull/18
    • Functionalities decoupling via external library (nn-core). by @Flegyas in https://github.com/grok-ai/nn-template/pull/23
    • Add tests by @lucmos in https://github.com/grok-ai/nn-template/pull/24
    • Implement resuming behaviour by @lucmos in https://github.com/grok-ai/nn-template/pull/25
    • Refactor NNLogger usages by @lucmos in https://github.com/grok-ai/nn-template/pull/27
    • Add CI on pre-commits and tests by @lucmos in https://github.com/grok-ai/nn-template/pull/26
    • Remove some trigger from the Test Suite workflow by @lucmos in https://github.com/grok-ai/nn-template/pull/28
    • Overwrite Lightning logging configuration by @lucmos in https://github.com/grok-ai/nn-template/pull/29
    • Ensure tags are defined asking interactively for them by @lucmos in https://github.com/grok-ai/nn-template/pull/30
    • Introduce the seed index concept by @lucmos in https://github.com/grok-ai/nn-template/pull/31
    • Force execution of init.py on direct execution by @lucmos in https://github.com/grok-ai/nn-template/pull/33
    • Move functions from template to core by @lucmos in https://github.com/grok-ai/nn-template/pull/34
    • Add functionality to upload the run files in the storage to wandb by @lucmos in https://github.com/grok-ai/nn-template/pull/35
    • Move ui_utils entirely to nn-core by @lucmos in https://github.com/grok-ai/nn-template/pull/36
    • Add dynamic parametrized badges for the Test Suite and docs by @lucmos in https://github.com/grok-ai/nn-template/pull/45
    • Fix files hashing in workflow cache keys by @lucmos in https://github.com/grok-ai/nn-template/pull/46
    • Add seed_index determinism test by @lucmos in https://github.com/grok-ai/nn-template/pull/44
    • Refactor references to organization name into grok-ai by @lucmos in https://github.com/grok-ai/nn-template/pull/48
    • Push the default version in mike on release by @lucmos in https://github.com/grok-ai/nn-template/pull/49
    • Improve docs status badge to monitor the github-pages environment by @lucmos in https://github.com/grok-ai/nn-template/pull/50
    • Fix mike rebasing and pushing logic on release by @lucmos in https://github.com/grok-ai/nn-template/pull/51
    • Add a DAG in the post hook interactive setup by @lucmos in https://github.com/grok-ai/nn-template/pull/47
    • Skip test if no dataset is provided by @Flegyas in https://github.com/grok-ai/nn-template/pull/52
    • Fix remote parametrization in the README by @lucmos in https://github.com/grok-ai/nn-template/pull/53
    • Fix type hint in dataset.py by @lucmos in https://github.com/grok-ai/nn-template/pull/55
    • Improve the "add git remote" message in the post hook by @lucmos in https://github.com/grok-ai/nn-template/pull/54
    • Update nn-template-core dependency to 0.0.7 by @lucmos in https://github.com/grok-ai/nn-template/pull/56
    • Update docs by @lucmos in https://github.com/grok-ai/nn-template/pull/57
    • Add custom collate function by @Flegyas in https://github.com/grok-ai/nn-template/pull/58
    • Set metadata as a cached property in DataModule by @Flegyas in https://github.com/grok-ai/nn-template/pull/59
    • Pass run tags to the WandbLogger by @Flegyas in https://github.com/grok-ai/nn-template/pull/60
    • Feature/bump core by @Flegyas in https://github.com/grok-ai/nn-template/pull/61
    • Version 0.2.0 by @Flegyas in https://github.com/grok-ai/nn-template/pull/62

    Full Changelog: https://github.com/grok-ai/nn-template/compare/0.1.0...0.2.0

Owner
Luca Moschella
PhD student at University of Rome La Sapienza in Computer Science.