High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

Overview

TL;DR

Ignite is a high-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

PyTorch-Ignite teaser

Click on the image to see the complete code.

Features

  • Less code than pure PyTorch while ensuring maximum control and simplicity

  • Library approach with no inversion of control over your program - use Ignite where and when you need it

  • Extensible API for metrics, experiment managers, and other components


Why Ignite?

Ignite is a library that provides three high-level features:

  • Extremely simple engine and event system
  • Out-of-the-box metrics to easily evaluate models
  • Built-in handlers to compose training pipelines, save artifacts, and log parameters and metrics (a short sketch follows below)
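
For instance, a minimal sketch of composing a checkpointing pipeline with the built-in Checkpoint handler (the tiny model, SGD optimizer, and placeholder train_step are illustrative assumptions):

import torch
from torch import nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(engine, batch):
    ...  # forward/backward pass as usual

trainer = Engine(train_step)

# keep the 2 most recent checkpoints of model/optimizer/trainer state,
# written at the end of every epoch
to_save = {"model": model, "optimizer": optimizer, "trainer": trainer}
handler = Checkpoint(to_save, DiskSaver("/tmp/checkpoints", create_dir=True), n_saved=2)
trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)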

Simplified training and validation loop

No more coding for/while loops on epochs and iterations. Users instantiate engines and run them.

Example
from ignite.engine import Engine, Events, create_supervised_evaluator
from ignite.metrics import Accuracy


# Setup training engine:
def train_step(engine, batch):
    # Users can do whatever they need on a single iteration,
    # e.g. a forward/backward pass for any number of models, optimizers, etc.
    # A minimal supervised sketch (model, optimizer and criterion are assumed defined):
    x, y = batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

# Setup single model evaluation engine
evaluator = create_supervised_evaluator(model, metrics={"accuracy": Accuracy()})

def validation():
    state = evaluator.run(validation_data_loader)
    # print computed metrics
    print(trainer.state.epoch, state.metrics)

# Run model's validation at the end of each epoch
trainer.add_event_handler(Events.EPOCH_COMPLETED, validation)

# Start the training
trainer.run(training_data_loader, max_epochs=100)

Power of Events & Handlers

The cool thing with handlers is that they offer unparalleled flexibility (compared to, say, callbacks). Handlers can be any function: e.g. a lambda, a simple function, a class method, etc. Thus, you are not required to inherit from an interface and override its abstract methods, which could unnecessarily bulk up your code and its complexity.

Execute any number of functions whenever you wish

Examples
trainer.add_event_handler(Events.STARTED, lambda _: print("Start training"))

# attach handler with args, kwargs
mydata = [1, 2, 3, 4]
logger = ...

def on_training_ended(data):
    print(f"Training has ended. mydata={data}")
    # Users can use variables from another scope
    logger.info("Training has ended")


trainer.add_event_handler(Events.COMPLETED, on_training_ended, mydata)
# call any number of functions on a single event
trainer.add_event_handler(Events.COMPLETED, lambda engine: print(engine.state.times))

@trainer.on(Events.ITERATION_COMPLETED)
def log_something(engine):
    print(engine.state.output)

Built-in events filtering

Examples
# run the validation every 5 epochs
@trainer.on(Events.EPOCH_COMPLETED(every=5))
def run_validation():
    ...  # run validation

# change some training variable once on the 20th epoch
@trainer.on(Events.EPOCH_STARTED(once=20))
def change_training_variable():
    ...

# trigger a handler with a custom frequency defined by an event filter;
# the filter receives the engine and the event's value (e.g. the iteration number)
def first_x_iters(engine, event):
    return event <= 10  # fire only for the first 10 iterations (cutoff is illustrative)

@trainer.on(Events.ITERATION_COMPLETED(event_filter=first_x_iters))
def log_gradients():
    ...

Stack events to share some actions

Examples

Events can be stacked together to enable multiple calls:

@trainer.on(Events.COMPLETED | Events.EPOCH_COMPLETED(every=10))
def run_validation():
    ...

Custom events to go beyond standard events

Examples

Custom events related to backward and optimizer step calls:

from ignite.engine import EventEnum


class BackpropEvents(EventEnum):
    BACKWARD_STARTED = 'backward_started'
    BACKWARD_COMPLETED = 'backward_completed'
    OPTIM_STEP_COMPLETED = 'optim_step_completed'

def update(engine, batch):
    # ...
    loss = criterion(y_pred, y)
    engine.fire_event(BackpropEvents.BACKWARD_STARTED)
    loss.backward()
    engine.fire_event(BackpropEvents.BACKWARD_COMPLETED)
    optimizer.step()
    engine.fire_event(BackpropEvents.OPTIM_STEP_COMPLETED)
    # ...

trainer = Engine(update)
trainer.register_events(*BackpropEvents)

@trainer.on(BackpropEvents.BACKWARD_STARTED)
def function_before_backprop(engine):
    ...

Out-of-the-box metrics

Example
from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)
F1_per_class = (precision * recall * 2 / (precision + recall))
F1_mean = F1_per_class.mean()  # torch mean method
F1_mean.attach(engine, "F1")
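
Composed metrics are attached to an engine like any other metric; a minimal sketch of reading the computed value back after a run, assuming the engine above is an evaluator such as the one created earlier:

state = engine.run(validation_data_loader)
print(state.metrics["F1"])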

Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

From source:

pip install git+https://github.com/pytorch/ignite

Nightly releases

From pip:

pip install --pre pytorch-ignite

From conda (this installs the pytorch nightly release as a dependency, instead of the stable version):

conda install ignite -c pytorch-nightly

Docker Images

Using pre-built images

Pull a pre-built docker image from our Docker Hub and run it with docker v19.03+.

docker run --gpus all -it -v $PWD:/workspace/project --network=host --shm-size 16G pytorchignite/base:latest /bin/bash

Available pre-built images are:

  • pytorchignite/base:latest | pytorchignite/hvd-base:latest
  • pytorchignite/apex:latest | pytorchignite/hvd-apex:latest | pytorchignite/msdp-apex:latest
  • pytorchignite/vision:latest | pytorchignite/hvd-vision:latest
  • pytorchignite/apex-vision:latest | pytorchignite/hvd-apex-vision:latest | pytorchignite/msdp-apex-vision:latest
  • pytorchignite/nlp:latest | pytorchignite/hvd-nlp:latest
  • pytorchignite/apex-nlp:latest | pytorchignite/hvd-apex-nlp:latest | pytorchignite/msdp-apex-nlp:latest

For more details, see here.

Getting Started

A few pointers to get you started:

Documentation

Additional Materials

Examples

A complete list of examples can be found here.

Tutorials

Reproducible Training Examples

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

  • ImageNet - logs on Ignite Trains server coming soon ...
  • Pascal VOC2012 - logs on Ignite Trains server coming soon ...

Features:

Communication

User feedback

We have created a form for "user feedback". We appreciate any type of feedback, and this is how we would like to see our community:

  • If you like the project and want to say thanks, this is the right place.
  • If you do not like something, please share it with us, and we can see how to improve it.

Thank you !

Contributing

Please see the contribution guidelines for more information.

As always, PRs are welcome :)

Projects using Ignite

Research papers

Blog articles, tutorials, books

Toolkits

Others

See other projects at "Used by"

If your project implements a paper, represents other use-cases not covered in our official tutorials, is Kaggle competition code, or simply uses Ignite and presents interesting results, we would like to add it to this list, so please send a PR with a brief description of the project.

About the team & Disclaimer

This repository is operated and maintained by volunteers in the PyTorch community in their capacities as individuals (and not as representatives of their employers). See the "About us" page for a list of core contributors. For usage questions and issues, please see the various channels here. For all other questions and inquiries, please send an email to [email protected].

Issues
  • TQDM Progress Bar

    adds a tqdm progress bar to the contrib module

    opened by miguelvr 76
  • Multilabel Metrics

    Related to #310 Description: Adds multilabel support for metrics.

    Check list:

    • [x] New tests are added (if a new feature is modified)
    • [x] New doc strings: text and/or example code are in RST format
    • [x] Documentation is updated (if required)
    opened by anmolsjoshi 62
  • Bug Related to Calculation of Binary Metrics

    Fixes #348

    Description: Bug in binary Precision/Recall maps the binary case into 2 classes and then averages the metrics of both. This is an incorrect way of calculating precision and recall; the binary case should be treated as a single-class problem.

    I have included the following in the code:

    • Created _check_shape to process and check the shapes of y, y_pred
    • Created _check_type to determine the type of problem - binary or multiclass - based on y and y_pred, also raises error if the problem type changes during training. Type is decided on first update, and then checked for each subsequent update.
    • Calculates binary precision using a threshold function, torch.round by default
    • Includes a check for binary output, e.g. torch.equal(y, y ** 2)
    • Only uses torch.round as the default if the problem is binary
    • Appropriate checks for threshold_function
    • Added better tests - improved binary tests, incorrect threshold function, incorrect y, changing type between updates

    Check list:

    • [x] New tests are added (if a new feature is modified)
    • [x] New doc strings: text and/or example code are in RST format
    • [x] Documentation is updated (if required)
    0.1.2 
    opened by anmolsjoshi 57
  • GH Action for docker builds

    Related to #1644 and #1721

    Description:

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    opened by trsvchn 51
  • Update Precision, Recall, add Accuracy (Binary and Categorical combined)

    Fixes #262

    Description: This PR updates Precision and Recall, and adds Accuracy to handle binary and categorical cases for different types of input.

    Check list:

    • [x] New tests are added.
    • [x] Updated doc string RST format
    • [x] Edited metrics.rst to add information about Accuracy.
    opened by anmolsjoshi 48
  • Add GH Action to build and publish Docker images

    Fixes #1305

    Description:

    Adds a GitHub Action that triggers on pushes to the docker folder on master, or on releases, to build and publish Docker images

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    opened by trsvchn 40
  • Adopt PyTorch's doc theme

    Fixes #625

    Description: As detailed in the issue, this is a proposal to switch the website's theme to PyTorch's. This illustrates Ecosystem membership and a cleaner theme. Additionally, existing Usercss plugins can darkify with almost no change.

    I'm not sure yet that the versions links block will be properly styled when displayed on read-the-docs, but let's iterate over that.

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [X] Documentation is updated (if required)
    opened by bosr 36
  • Managing Deprecation using decorators

    This is a very stripped-down version of the code; I have not written any tests yet. This is primarily for me to check whether I am going in the correct direction, so please do point out everything that needs improvement/changing.

    Fixes #1479

    Description: Implemented so far (a hedged sketch follows the list below):

    • Make functions deprecated using the @deprecated decorator
    • Add arguments to the @deprecated decorator to customize it for each function
    • The deprecation messages also reflect in the documentation (written in Sphinx compatible format)
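
    A minimal sketch of what such a decorator could look like (hypothetical names and signature; not the PR's actual code):

    import functools
    import warnings

    def deprecated(since, removed_in, reasons=()):
        # wrap the function so each call emits a DeprecationWarning at the caller's site
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                message = (
                    f"{func.__name__} is deprecated since {since} and will be "
                    f"removed in {removed_in}. " + " ".join(reasons)
                )
                warnings.warn(message, DeprecationWarning, stacklevel=2)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @deprecated(since="0.4.4", removed_in="0.6.0", reasons=("use the new API instead",))
    def old_function():
        ...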

    Check list:

    • [x] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)

    Thank you to @ydcjeff for giving the idea to add the version update information :)

    opened by Devanshu24 36
  • add frequency metric to determine some average per-second metrics

    Fixes # N/A

    Description:

    This code computes X-per-second performance metrics (like words per second, images per second, etc.). It will likely be used in conjunction with ignite.metrics.RunningAverage for most utility; a usage sketch follows.
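
    A hedged usage sketch, assuming each iteration's output is a dict carrying an "ntokens" count (the key and the dummy engine are illustrative):

    from ignite.engine import Engine
    from ignite.metrics import Frequency

    trainer = Engine(lambda engine, batch: {"ntokens": len(batch)})

    # words-per-second, readable later from trainer.state.metrics["wps"]
    wps = Frequency(output_transform=lambda output: output["ntokens"])
    wps.attach(trainer, name="wps")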

    Check list:

    • [x] New tests are added (if a new feature is added)
    • [x] New doc strings: description and/or example code are in RST format
    • [x] Documentation is updated (if required)
    opened by erip 36
  • TrainsSaver doesn't respect Checkpoint's n_saved

    πŸ› Bug description

    As the title says, it seems that TrainsSaver bypasses the Checkpoint n_saved parameter. That means that all models are saved and never updated / deleted.

    Consider this simple example:

            task.phases['train'].add_event_handler(
                Events.EPOCH_COMPLETED(every=1),
                Checkpoint(to_save, TrainsSaver(output_uri=output_uri), 'epoch', n_saved=1,
                           global_step_transform=global_step_from_engine(task.phases['train'])))
    

    The above saves every checkpoint. You end up with

    epoch_checkpoint_1.pt
    epoch_checkpoint_2.pt
    epoch_checkpoint_3.pt
    ...
    

    Now if we do the same with DiskSaver:

            task.phases['train'].add_event_handler(
                Events.EPOCH_COMPLETED(every=1),
                Checkpoint(to_save, DiskSaver(dirname=dirname), 'epoch', n_saved=1,
                           global_step_transform=global_step_from_engine(task.phases['train'])))
    

    We get only:

    epoch_checkpoint_3.pt
    

    as expected.

    Same behaviour if we save only best models using score_function, i.e. TrainsSaver saves every best model.

    Environment

    • PyTorch Version: 1.3.1
    • Ignite Version: 0.4.0.dev20200519 (EDIT: updated to the latest nightly, the issue still exists)
    • OS: Linux
    • How you installed Ignite: pip nightly
    • Python version: 3.6
    • Any other relevant information: trains version: 0.14.3
    bug 
    opened by achigeor 34
  • Raise friendly errors in case of `num_epochs` is less than `num_warmup_epochs` in CIFAR10 and Transformers examples

    I was trying to benchmark torch.inference_mode vs torch.no_grad on the CIFAR10 example, so I just wanted to run for 3 epochs and average the benchmarks. The default number of warmup epochs is 4, so the scheduler raises this error, which is not very clear to new users:

    f"Milestones should be increasing integers, but given {pair[0]} is smaller "
    ValueError: Milestones should be increasing integers, but given 291 is smaller than the previous milestone 388
    

    So I suggest checking num_epochs against num_warmup_epochs inside the run method and raising a clearer error message for users, as sketched below.
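
    A minimal sketch of the suggested check (the config keys follow the CIFAR10 example; the exact placement inside run is an assumption):

    def run(config):
        # fail fast, before the scheduler builds its warmup milestones
        if config["num_epochs"] < config["num_warmup_epochs"]:
            raise ValueError(
                f"num_epochs ({config['num_epochs']}) should be greater than or equal "
                f"to num_warmup_epochs ({config['num_warmup_epochs']})"
            )
        # ... rest of the training setup ...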

    good first issue 
    opened by KickItLikeShika 3
  • Scheduled workflow failed

    Oh no, something went wrong in the scheduled workflow PyTorch version tests with commit 554fd807ffb2d9e48b5e04f3bc710ea54505aea8. Please look into it:

    https://github.com/pytorch/ignite/actions/runs/1247377549

    Feel free to close this if this was just a one-off error.

    bug 
    opened by github-actions[bot] 0
  • ci: migrate to pip from miniconda, fix #2199

    fix #2199

    Description:

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [ ] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    ci 
    opened by ydcjeff 1
  • Improve GHA CI when old pytorch nightly version is installed

    From time to time we have tests failing when using pytorch nightly because an old version is installed. For example: https://github.com/pytorch/ignite/runs/3620746086

    Failing tests:

    FAILED tests/ignite/distributed/test_auto.py::test_auto_methods_gloo - Assert...
    FAILED tests/ignite/distributed/test_auto.py::test_auto_methods_gloo - Assert...
    

    because at the "Install dependencies" step, conda installed

    pytorch-1.9.0.dev20210415  |py3.7_cuda10.1_cudnn7.6.3_0       676.5 MB  pytorch-nightly
    

    which dates from April 2021 and not September (today).

    Let's add an option to skip all other tests if we detect that the pytorch nightly build date is 2 months old; a sketch of such a check follows.
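
    A hedged sketch of such a staleness check, assuming the nightly version string embeds the build date as in 1.9.0.dev20210415 (the format and the 60-day cutoff are assumptions):

    from datetime import datetime

    import torch

    version = torch.__version__  # e.g. "1.9.0.dev20210415"
    if "dev" in version:
        build_date = datetime.strptime(version.split("dev")[-1], "%Y%m%d")
        age_days = (datetime.now() - build_date).days
        if age_days > 60:
            print(f"pytorch nightly build is {age_days} days old, skipping tests")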

    This issue is open for everyone to tackle if interested in working with CI and Github Actions.

    help wanted ci 
    opened by vfdev-5 7
  • `save_handler` auto detects `DiskSaver` when path passed

    Fixes #2194

    Description: Added additional parameter to Checkpoint's save_handler to accept a string containing the dirname.

    Check list:

    • [ ] New tests are added (if a new feature is added)
    • [x] New doc strings: description and/or example code are in RST format
    • [ ] Documentation is updated (if required)
    module: handlers 
    opened by Priyansi 1
  • Load checkpoints within a DeterministicEngine

    πŸ› Bug description

    When loading a checkpoint trained with an Engine using a DeterministicEngine, the following error is raised:

    src/training/engine.py:44: in valid
        self.trainer.valid(self.my_task, checkpoint_path)
    src/training/trainers/single_trainer.py:65: in valid
        valid_engine.run(valid_loader)
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:701: in run
        return self._internal_run()
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:774: in _internal_run
        self._handle_exception(e)
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:469: in _handle_exception
        raise e
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:751: in _internal_run
        self._fire_event(Events.EPOCH_COMPLETED)
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:424: in _fire_event
        func(*first, *(event_args + others), **kwargs)
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/handlers/checkpoint.py:373: in __call__
        checkpoint = self._setup_checkpoint()
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/handlers/checkpoint.py:437: in _setup_checkpoint
        checkpoint[k] = obj.state_dict()
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/deterministic.py:186: in state_dict
        state_dict = super(DeterministicEngine, self).state_dict()
    ../../miniconda3/envs/training-py36/lib/python3.6/site-packages/ignite/engine/engine.py:504: in state_dict
        return OrderedDict([(k, getattr(self.state, k)) for k in keys])
    
    .0 = <tuple_iterator object at 0x166c104a8>
    
    >   return OrderedDict([(k, getattr(self.state, k)) for k in keys])
    E   AttributeError: 'State' object has no attribute 'rng_states'
    

    Expected behavior: Since the checkpoint doesn't have rng_states, DeterministicEngine should print a warning and ignore the previous rng_states (recreating them on the fly)

    How to reproduce:

    1. Train a pytorch model with an Engine
    2. Save a checkpoint
    3. Resume the training using a DeterministicEngine

    Environment

    • PyTorch Version (e.g., 1.4): 1.7.1
    • Ignite Version (e.g., 0.3.0): 0.4.6
    • OS (e.g., Linux): macOS
    • How you installed Ignite (conda, pip, source): pip
    • Python version: 3.6.10
    bug module: engine 
    opened by H4dr1en 1
  • Automatically detect `DiskSaver` as `save_handler` in `Checkpoint`

    🚀 Feature

    Currently, when we use Checkpoint, we have to specify a save_handler even when we want to store the checkpoints on disk:

    handler = Checkpoint(
        to_save, DiskSaver('/tmp/models', create_dir=True), n_saved=2
    )
    

    An enhancement can be to automatically detect the save_handler as DiskSaver if a path is passed, like this:

    handler = Checkpoint(
        to_save, create_dir=True, dir_name='/tmp/models', n_saved=2
    )
    

    Internally,

    if dir_name is not None:
        save_handler = DiskSaver(dir_name, create_dir=create_dir)
    
    enhancement help wanted module: handlers 
    opened by Priyansi 3
  • Use inference_mode instead of no_grad for pth v1.9.0

    🚀 Feature

    The idea is to replace torch.no_grad with torch.inference_mode where appropriate, to speed up computations: evaluation, metrics.

    This works since pytorch v1.9.0.

    • https://pytorch.org/docs/1.9.0/notes/autograd.html#inference-mode
    • https://pytorch.org/docs/1.9.0/generated/torch.inference_mode.html?highlight=inference_mode#torch.inference_mode

    Let's also benchmark the speed-up on a simple case.
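
    A minimal sketch of the replacement on a toy case (the model and input are placeholders):

    import torch

    model = torch.nn.Linear(4, 2).eval()
    x = torch.rand(8, 4)

    # before: with torch.no_grad():
    with torch.inference_mode():
        y_pred = model(x)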

    enhancement help wanted needs-discussion 
    opened by vfdev-5 5
  • Using ignite.distributed with 3 or more processes hangs indefinitely

    ❓ Questions/Help/Support

    Trying to use ignite.distributed to train a model with DDP. The issue I encounter is that when spawning 3 or more processes to run my code, it seems to hang indefinitely. It works fine with 2 processes. I even tried a very basic script and it still hangs (similar to the tutorial).

    # run.py
    import torch
    import ignite.distributed as idist
    
    def run(rank, config):
        print(f"Running basic DDP example on rank {rank}.")
    
    def main():
        world_size = 4  # if this is 3 or more it hangs
    
        # some dummy config
        config = {}
    
        # run task
        idist.spawn("nccl", run, args=(config,), nproc_per_node=world_size)
        
        # the same happens even in this case
        # with idist.Parallel(backend="nccl", nproc_per_node=world_size) as parallel:
        #     parallel.run(run, config)
    
    if __name__ == "__main__":
        main()
    

    Executing this with:

    python -m module.run
    

    I'd be very grateful if anyone can weigh in on this.

    Environment

    • PyTorch Version: 1.9.0
    • Ignite Version: 0.4.6
    • OS: Ubuntu 20.04.2 LTS
    • How you installed Ignite (conda, pip, source): conda
    • Python version: 3.9.6
    • Any other relevant information: Running on 4 A100-PCIE-40GB GPUs
    question 
    opened by ivankitanovski 15
  • Deterministic Engine failing on nightly version

    Some of the CI tests in test_deterministic.py are broken on the nightly version: https://github.com/pytorch/ignite/runs/3472053263#step:12:12780

    ci 
    opened by KickItLikeShika 0
Releases (v0.4.6)
  • v0.4.6(Aug 2, 2021)

    PyTorch-Ignite 0.4.6 - Release Notes

    New Features

    • Added start_lr option to FastaiLRFinder (#2111)
    • Added Model's EMA handler (#2098, #2102)
    • Improved SLURM support: added hostlist expansion without using scontrol (#2092)

    Metrics

    • Added Inception Score (#2053)
    • Added FID metric (#2049, #2061, #2085, #2094, #2103)
      • Blog post "GAN Evaluation : the Frechet Inception Distance and Inception Score metrics" (https://pytorch-ignite.ai/posts/gan-evaluation-with-fid-and-is/)
    • Improved DDP support for metrics (#2096, #2083)
    • Improved MetricsLambda to work with reset/update/compute API (#2091)

    Bug fixes

    • Modified auto_dataloader to not wrap user provided DistributedSampler (#2119)
    • Raise error in DistributedProxySampler when sampler is already a DistributedSampler (#2120)
    • Improved LRFinder error message (#2127)
    • Added py.typed for type checkers (#2095)

    Housekeeping

    • #2123, #2117, #2116, #2110, #2093, #2086

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @01-vyom, @KickItLikeShika, @gucifer, @sandylaker, @schuhschuh, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff

    Source code(tar.gz)
    Source code(zip)
  • v0.4.5(Jun 24, 2021)

    PyTorch-Ignite 0.4.5 - Release Notes

    New Features

    Metrics

    • Added BLEU metric (#1834)
    • Added ROUGE metric (#1772)
    • Added MultiLabelConfusionMatrix metric (#1613)
    • Added Cohen Kappa metric (#1690)
    • Extended sync_all_reduce API (#1823)
    • Made EpochMetric more generic by extending the list of valid types (#1748)
    • Fixed issue with metric's output device (#2062)
    • Added support for list of tensors as metric input (#2055)
    • Implemented Jaccard Index shortcut for metrics (#1682)
    • Updated Loss metric to use required_output_keys (#2027)
    • Added classification report metric (#1887)
    • Added output detach for Canberra metric (#1820)
    • Improved ROC AUC (#1762)
    • Improved AveragePrecision metric and tests (#1756)
    • Uniform handling of metric types for all loggers (#2021)
    • More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)

    Engine

    • Added native torch.cuda.amp and apex automatic mixed precision for create_supervised_trainer and create_supervised_evaluator (#1714, #1589)
    • Updated state.batch/state.output lifespan in Engine (#1919)

    Distributed module

    • Handled IterableDataset with auto_dataloader (#2028)
    • Updated Loss metric to use required_output_keys (#2027)
    • Enabled gpu support for gloo backend (#2016)
    • Added safe_mode for idist broadcast (#1839)
    • Improved idist to support different init_methods (#1767)

    Other improvements

    • Added LR finder improvements, moved to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930)
    • Moved param handler to core (#1988)
    • Added an option to store EpochOutputStore data on engine.state, moved to core (#1982, #1974)
    • Set seed for xla in ignite.utils.manual_seed (#1970)
    • Fixed case for Precision/Recall in multi_label, not averaged configuration for DDP (#1646)
    • Updated PolyaxonLogger to handle v1 and v0 (#1625)
    • Added Arguments *args, **kwargs to BaseLogger.attach method (#2034)
    • Enabled metric ordering on ProgressBar (#1937)
    • Updated wandb logger (#1896)
    • Fixed type hint for ProgressBar (#2079)

    Bug fixes

    • BC-breaking: Improved loggers to keep configuration (#1945)
    • Fixed warnings in CI (#2023)
    • Fixed Precision for all zero predictions (#2017)
    • Renamed the default logger (#2006)
    • Fixed Accumulation metric with Nvidia/Apex (#1978)
    • Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
    • Updated nltk-smooth2 for BLEU metric (#1911)
    • Added full read permissions to saved file (#1876) (#1880)
    • Fixed a bug with horovod _do_manual_all_reduce (#1848)
    • Fixed small bug in "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
    • Fixed f-string in mnist_save_resume_engine.py example (#2077)
    • Fixed an issue where rng states were accidentally on cuda for DeterministicEngine (#2081)

    Housekeeping

    A lot of PRs
    • Test improvements (#2061, #2057, #2047, #1962, #1957, #1946, #1943, #1928, #1927, #1915, #1914, #1908, #1906, #1905, #1903, #1902, #1899, #1899, #1882, #1870, #1866, #1860, #1846, #1832, #1828, #1821, #1816, #1815, #1814, #1812, #1811, #1809, #1808, #1807, #1804, #1802, #1801, #1799, #1798, #1797, #1796, #1795, #1793, #1791, #1785, #1784, #1783, #1781, #1776, #1774, #1769, #1768, #1760, #1755, #1746, #1741, #1718, #1717, #1713, #1631)
    • Documentation improvements and updates (#2058, #2024, #2005, #2003, #2001, #1993, #1990, #1933, #1893, #1849, #1780, #1770, #1727, #1726, #1722, #1686, #1685, #1672, #1671, #1661)
    • Example improvements (#1924, #1918, #1890, #1827, #1771, #1669, #1658, #1656, #1652, #1642, #1633, #1632)
    • CI updates (#2075, #2070, #2069, #2068, #2067, #2064, #2044, #2039, #2037, #2023, #1985, #1979, #1940, #1907, #1892, #1888, #1878, #1877, #1873, #1867, #1861, #1847, #1841, #1838, #1837, #1835, #1831, #1818, #1773, #1764, #1761, #1759, #1752, #1745, #1743, #1742, #1739, #1738, #1736, #1724, #1706, #1705, #1667, #1664, #1647)
    • Code style improvements (#2050, #2014, #1817, #1749, #1747, #1740, #1734, #1732, #1731, #1707, #1703)
    • Added docker image test script (#1733)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff

    Source code(tar.gz)
    Source code(zip)
  • v0.4.4.post1(Mar 3, 2021)

    PyTorch-Ignite 0.4.4 - Release Notes

    Bug fixes:

    • BC-breaking: Moved detach outside of loss function computation (#1675, #1692)
    • Added eps to avoid nans in canberra error (#1699)
    • Removed size limitation for str on collective ops (#1702)
    • Fixed imports in docker images and now install Pillow-SIMD (#1638, #1639, #1628, #1711)

    Doc improvements

    • #1645, #1653, #1654, #1671, #1672, #1691, #1687, #1686, #1685, #1684, #1676, #1688

    Other improvements

    • Fixed artifacts urls for pypi (#1629)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff

    Source code(tar.gz)
    Source code(zip)
  • v0.4.3(Feb 7, 2021)

    PyTorch-Ignite 0.4.3 - Release Notes

    🎉 Since September we have a new logo (#1324) 🎉

    Core

    Metrics

    • [BC-breaking] Made Metrics accumulate values on device specified by user (#1238)
    • Fixes BC if custom metric returns a dict (#1478)
    • Added PSNR metric (#1570, #1595)

    Handlers

    • Checkpoint can save model with same filename (#1423)
    • Add greater_or_equal option to Checkpoint handler (#1597)
    • Update handlers to use setup_logger (#1617)
    • Added TimeLimit handler (#1611)

    Distributed helper module

    • Distributed cpu tests on windows (#1429)
    • Added kwargs to idist.auto_model (#1552)
    • Improved horovod initializer (#1559)

    Others

    • Dropped python 3.5 support (#1500)
    • Added torch.cuda.manual_seed_all to ignite.utils.manual_seed (#1444)
    • Fixed to_onehot function to be torch scriptable (#1592)
    • Introduced standard stream for logger setup helper (#1601)

    Docker images

    • Removed Entrypoint from Dockerfile and images (#1475)

    Examples

    • Added Cifar10 QAT example: https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10_qat (#1556)

    Contrib

    Metrics

    • Improved Canberra metric for DDP (#1314)
    • Improved ManhattanDistance metric for DDP (#1320)
    • Improved R2Score metric for DDP (#1318)

    Handlers

    • Added new time profiler HandlersTimeProfiler which allows per handler time profiling (#1398, #1474)
    • Fixed attach_opt_params_handler to return RemovableEventHandle (#1502)
    • Renamed TrainsLogger to ClearMLLogger keeping BC (#1557, #1560)

    Documentation improvements

    • #1330, #1337, #1338, #1353, #1360, #1374, #1373, #1394, #1393, #1401, #1435, #1460, #1461, #1465, #1536, #1542 ...
    • Updated Sphinx to v3.2.1 (#1356, #1372)

    Codebase is MyPy checked

    • #1349, #1351, #1352, #1355, #1362, #1363, #1370, #1379, #1418, #1419, #1416, #1447, #1484

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn

    Source code(tar.gz)
    Source code(zip)
  • v0.4.2(Sep 20, 2020)

    PyTorch-Ignite 0.4.2 - Release Notes

    Core

    New Features and bug fixes

    • Added SSIM metric (#1217)

    • Added prebuilt Docker images (#1218)

    • Added distributed support for EpochMetric and related metrics (#1229)

    • Added required_output_keys public attribute (#1291)

    • Pre-built docker images for computer vision and nlp tasks powered by Nvidia/Apex, Horovod, MS DeepSpeed (#1304, #1248, #1218)

    Handlers and utils

    • Allow passing keyword arguments to save function on Checkpoint (#1245)

    Distributed helper module

    • Added support of Horovod (#1195)
    • Added idist.broadcast (#1237)
    • Added sync_bn option to idist.auto_model (#1265)

    Contrib

    New Features and bug fixes

    • Added EpochOutputStore handler (#1226)
    • Improved displayed tag for tqdm progress bar (#1279)
    • Fixed bug with ParamGroupScheduler with schedulers based on different optimizers (#1274)

    And a lot of housekeeping pre-September Hacktoberfest contributions:

    • Added initial Mypy check at CI step (#1296)
    • Fixed typo in docs (concepts) (#1295)
    • Fixed link to pytorch documents (#1294)
    • Removed prints from tests (#1292)
    • Downgraded tqdm version to stabilize the CI (#1293)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Jul 23, 2020)

    PyTorch-Ignite 0.4.1 - Release Notes

    Core

    New Features and bug fixes

    • Improved docs for custom events (#1179)

    Handlers and utils

    • Added custom filename pattern for saving checkpoints (#1127)

    Distributed helper module

    • Improved namings in _XlaDistModel (#1173)
    • Minor optimization for idist.get_* methods (#1196)
    • Fixed distributed proxy sampler runtime error (#1192)
    • Fixed a bug when using idist with the "nccl" backend and torch cuda is not available (#1166)
    • Fixed issue with logging XLA tensors (#1207)

    Contrib

    New Features and bug fixes

    • Fixes warning about "TrainsLogger output_handler can not log metrics value" (#1170)
    • Improved usage of contrib common methods with other save handlers (#1171)

    Examples

    • Improved Pascal Voc example (#1193)

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5

    Source code(tar.gz)
    Source code(zip)
  • v0.4.0.post1(Jun 26, 2020)

    PyTorch-Ignite 0.4.0 - Release Notes

    Core

    BC breaking changes

    • Simplified engine - BC breaking change (#940 #939 #938)
      • no more internal patching of torch DataLoader.
      • seed argument of Engine.run is deprecated.
      • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
    • Make all Events be CallableEventsWithFilter (#788).
    • Make ignite compatible only with pytorch >=1.3 (#1016, #1150).
      • ignite is tested on the latest and nightly versions of pytorch.
      • exact compatibility with previous versions can be checked here.
    • Remove deprecated arguments from BaseLogger (#1051).
    • Deprecated CustomPeriodicEvent (#984).
    • RunningAverage now computes output quantity average instead of a sum in DDP (#991).
    • Checkpoint stores now files with .pt extension instead of .pth (#873).
    • Arguments archived of Checkpoint and ModelCheckpoint are deprecated (#873).
    • Now create_supervised_trainer and create_supervised_evaluator do not move model to device (#910).

    See also migration note for details on how to update your code.

    New Features and bug fixes

    Ignite Distributed [Experimental]

    • Introduction of ignite.distributed as idist module (#1045)
      • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
      • supports native torch distributed configuration, XLA devices.
      • metrics computation works in all supported distributed configurations: GPUs and TPUs.
      • Parallel utility and auto module (#1014).

    Engine & Events

    • Add flexibility on event handlers by packing triggering events (#868).
    • Engine argument is now optional in event handlers (#889, #919).
    • We initialize engine.state before calling engine.run (#1028).
    • Engine can run on dataloader based on IterableDataset and without specifying epoch_length (#1077).
    • Added user keys into Engine's state dict (#914).
    • Bug fixes in Engine class (#1048, #994).
    • Now epoch_length argument is optional (#985)
      • suitable to work with finite-unknown-length iterators.
    • Added times in engine.state (#958).

    Metrics

    • Add Frequency metric for ops/s calculations (#760, #783, #976).
    • Metrics computation can be customized with introduced MetricUsage (#979, #1054)
      • batch-wise/epoch-wise or custom-programmed metric update and compute methods.
    • Metric can be detached (#827).
    • Fixed bug in RunningAverage when output is torch tensor (#943).
    • Improved computation performance of EpochMetric (#967).
    • Fixed average recall value of ConfusionMatrix (#846).
    • Now metrics can be serialized using dill (#930).
    • Added support for nested metric values (#968).

    Handlers and utils

    • Checkpoint : improved filename when score value is Integer (#758).
    • Checkpoint : fix returning worst model of the saved models. (#745).
    • Checkpoint : load_objects can load single object checkpoints (#772).
    • Checkpoint : we now save only one checkpoint per priority (#847).
    • Checkpoint : added kwargs to Checkpoint.load_objects (#861).
    • Checkpoint : now saves model.module.state_dict() for DDP and DP (#1086).
    • Checkpoint and related: other improvements (#937).
    • Checkpoint and EarlyStopping become stateful (#1156)
    • Support namedtuple for convert_tensor (#740).
    • Added decorator one_rank_only (#882).
    • Update common.py (#904).

    Contrib

    • Added FastaiLRFinder (#596).

    Metrics

    • Added Roc Curve and Precision/Recall Curve to the metrics (#875).

    Parameters scheduling

    • Enabled multi params group for LRScheduler (#1027).
    • Parameters scheduling improvements (#1072, #859).
    • Parameters scheduler can work on torch optimizer and any object with attribute param_groups (#1163).

    Support of experiment tracking systems

    • Add NeptuneLogger (#730, #821, #951, #954).
    • Add TrainsLogger (#1020, #1036, #1043).
    • Add WandbLogger (#926).
    • Added visdom_logger to common module (#796).
    • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
    • Simplified BaseLogger attach APIs (#1006).
    • Added kwargs to loggers' constructors and respective setup functions (#1015).

    Time profiling

    • Added basic time profiler to contrib.handlers (#729).

    Bug fixes (some of PRs)

    • ProgressBar output not in sync with epoch counts (#773).
    • Fixed ProgressBar.log_message (#768).
    • Progressbar now accounts for epoch_length argument (#785).
    • Fixed broken ProgressBar if data is iterator without epoch length (#995).
    • Improved setup_logger for multiple calls (#962).
    • Fixed incorrect log position (#1099).
    • Added missing colon to logging message (#1101).
    • Fixed order of checkpoint saving and candidate removal (#1117)

    Examples

    • Basic example of FastaiLRFinder on MNIST (#838).
    • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
    • Added setup_logger to mnist examples (#953).
    • Added MNIST example on TPU (#956).
    • Benchmark amp on Cifar100 (#917).
    • Updated ImageNet and Pascal VOC12 examples (#1125 #1138)

    Housekeeping

    • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092, ...).
    • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093, #1113, ...).
    • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058, ...).
    • Added Serializable in mixins (#1000).
    • Merge of EpochMetric in _BaseRegressionEpoch (#970).
    • Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
    • Drop Python 2 support finalized (#806).
    • Splits engine into multiple parts (#724).
    • Add Python 3.8 to Conda builds (#781).
    • Black formatted codebase with pre-commit files (#792).
    • Activate dpl v2 for Travis CI (#804).
    • AutoPEP8 (#805).
    • Fixed device conversion method (#887).
    • Refactored deps installation (#931).
    • Return handler in helpers (#997).
    • Fixes #833 (#1001).
    • Disable propagation of loggers to ancestors (#1013).
    • Consistent PEP8-compliant imports layout (#901).

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

    Source code(tar.gz)
    Source code(zip)
  • v0.4rc.0.post1(Jun 6, 2020)

    PyTorch-Ignite 0.4.0 RC - Release Notes

    Core

    BC breaking changes

    • Simplified engine - BC breaking change (#940 #939 #938)
      • no more internal patching of torch DataLoader.
      • seed argument of Engine.run is deprecated.
      • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
    • Make all Events be CallableEventsWithFilter (#788).
    • Make ignite compatible only with pytorch >1.0 (#1016).
      • ignite is tested on the latest and nightly versions of pytorch.
      • exact compatibility with previous versions can be checked here.
    • Remove deprecated arguments from BaseLogger (#1051).
    • Deprecated CustomPeriodicEvent (#984).
    • RunningAverage now computes output quantity average instead of a sum in DDP (#991).
    • Checkpoint stores now files with .pt extension instead of .pth (#873).
    • Arguments archived of Checkpoint and ModelCheckpoint are deprecated (#873).
    • Now create_supervised_trainer and create_supervised_evaluator do not move model to device (#910).

    New Features and bug fixes

    Ignite Distributed [Experimental]

    • Introduction of ignite.distributed as idist module (#1045)
      • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
      • supports native torch distributed configuration, XLA devices.
      • metrics computation works in all supported distributed configurations: GPUs and TPUs.

    Engine & Events

    • Add flexibility on event handlers by packing triggering events (#868).
    • Engine argument is now optional in event handlers (#889, #919).
    • We initialize engine.state before calling engine.run (#1028).
    • Engine can run on dataloader based on IterableDataset and without specifying epoch_length (#1077).
    • Added user keys into Engine's state dict (#914).
    • Bug fixes in Engine class (#1048, #994).
    • Now epoch_length argument is optional (#985)
      • suitable to work with finite-unknown-length iterators.
    • Added times in engine.state (#958).

    Metrics

    • Add Frequency metric for ops/s calculations (#760, #783, #976).
    • Metrics computation can be customized with introduced MetricUsage (#979, #1054)
      • batch-wise/epoch-wise or custom-programmed metric update and compute methods.
    • Metric can be detached (#827).
    • Fixed bug in RunningAverage when output is torch tensor (#943).
    • Improved computation performance of EpochMetric (#967).
    • Fixed average recall value of ConfusionMatrix (#846).
    • Now metrics can be serialized using dill (#930).
    • Added support for nested metric values (#968).

    Handlers and utils

    • Checkpoint : improved filename when score value is Integer (#758).
    • Checkpoint : fix returning worst model of the saved models. (#745).
    • Checkpoint : load_objects can load single object checkpoints (#772).
    • Checkpoint : we now save only one checkpoint per priority (#847).
    • Checkpoint : added kwargs to Checkpoint.load_objects (#861).
    • Checkpoint : now saves model.module.state_dict() for DDP and DP (#1086).
    • Checkpoint and related: other improvements (#937).
    • Support namedtuple for convert_tensor (#740).
    • Added decorator one_rank_only (#882).
    • Update common.py (#904).

    Contrib

    • Added FastaiLRFinder (#596).

    Metrics

    • Added Roc Curve and Precision/Recall Curve to the metrics (#875).

    Parameters scheduling

    • Enabled multi params group for LRScheduler (#1027).
    • Parameters scheduling improvements (#1072, #859).

    Support of experiment tracking systems

    • Add NeptuneLogger (#730, #821, #951, #954).
    • Add TrainsLogger (#1020, #1036, #1043).
    • Add WandbLogger (#926).
    • Added visdom_logger to common module (#796).
    • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
    • Simplified BaseLogger attach APIs (#1006).
    • Added kwargs to loggers' constructors and respective setup functions (#1015).

    Time profiling

    • Added basic time profiler to contrib.handlers (#729).

    Bug fixes (some of PRs)

    • ProgressBar output not in sync with epoch counts (#773).
    • Fixed ProgressBar.log_message (#768).
    • Progressbar now accounts for epoch_length argument (#785).
    • Fixed broken ProgressBar if data is iterator without epoch length (#995).
    • Improved setup_logger for multiple calls (#962).
    • Fixed incorrect log position (#1099).
    • Added missing colon to logging message (#1101).

    Examples

    • Basic example of FastaiLRFinder on MNIST (#838).
    • CycleGAN auto-mixed precision training example with NVidia/Apex or native torch.cuda.amp (#888).
    • Added setup_logger to mnist examples (#953).
    • Added MNIST example on TPU (#956).
    • Benchmark amp on Cifar100 (#917).
    • TrainsLogger semantic segmentation example (#1095).

    Housekeeping (some of PRs)

    • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092).
    • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093).
    • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058).
    • Added Serializable in mixins (#1000).
    • Merge of EpochMetric in _BaseRegressionEpoch (#970).
    • Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
    • Drop Python 2 support finalized (#806).
    • Dynamic typing (#723).
    • Splits engine into multiple parts (#724).
    • Add Python 3.8 to Conda builds (#781).
    • Black formatted codebase with pre-commit files (#792).
    • Activate dpl v2 for Travis CI (#804).
    • AutoPEP8 (#805).
    • Fixes nightly version bug (#809).
    • Fixed device conversion method (#887).
    • Refactored deps installation (#931).
    • Return handler in helpers (#997).
    • Fixes #833 (#1001).
    • Disable propagation of loggers to ancestors (#1013).
    • Consistent PEP8-compliant imports layout (#901).

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Jan 21, 2020)

    Core

    • Added State repr and input batch as engine.state.batch (#641)
    • Adapted core metrics only to be used in distributed configuration (#635)
    • Added fbeta metric as core metric (#653)
    • Added event filtering feature (e.g. every/once/event filter logic) (#656)
    • BC breaking change: Refactor ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673)
      • Added option n_saved=None to store all checkpoints (#703)
    • Improved accumulation metrics (#681)
    • Early stopping min delta (#685)
    • Dropped Python 2.7 support (#699)
    • Added feature: Metric can accept a dictionary (#689)
    • Added Dice Coefficient metric (#680)
    • Added helper method to simplify the setup of class loggers (#712)

    Engine refactoring (BC breaking change)

    Finally solved issue #62: resume training from an epoch or iteration

    • Engine refactoring + features (#640)
      • engine checkpointing
      • variable epoch length defined by epoch_length
      • two additional events: GET_BATCH_STARTED and GET_BATCH_COMPLETED
      • cifar10 example with save/resume in distributed conf

    Contrib

    • Improved create_lr_scheduler_with_warmup (#646)
    • Added helper method to plot param scheduler values with matplotlib (#650)
    • BC Breaking change: support for multiple optimizer param groups (#690)
      • Added state_dict/load_state_dict (#690)
    • BC Breaking change: Let the user specify tqdm parameters for log_message (#695)

    Examples

    • Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
    • Added CIFAR10 distributed example

    Reproducible trainings as "References"

    Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

    Features:

    • Distributed training with mixed precision by nvidia/apex
    • Experiments tracking with MLflow or Polyaxon

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @anubhavashok, @kagrze, @maxfrei750, @vfdev-5

    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Oct 3, 2019)

    Core

    Various improvements in the core part of the library:

    • Add epoch_bound parameter to RunningAverage (#488)

    • Bug fixes with Confusion matrix, new implementation (#572) - BC breaking

    • Added event_to_attr in register_events (#523)

    • Added accumulative single variable metrics (#524)

    • should_terminate is reset between runs (#525)

    • to_onehot returns tensor with uint8 dtype (#571) - may be BC breaking

    • Removable handle returned from Engine.add_event_handler() to enable single-shot events (#588)

    • New documentation style πŸŽ‰

    Distributed

    We removed the mnist distrib example as being misleading and ~~provided distrib branch~~ (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code is working and is under testing. Please try it in your use-case and leave us feedback.

    Now in Contributions module

    • Added mlflow logger (#558)
    • R-Squared Metric in regression metrics module (#496)
    • Add tag field to OptimizerParamsHandler (#502)
    • Improved ProgressBar with TerminateOnNan (#506)
    • Support for layer freezing with Tensorboard integration (#515)
    • Improved OutputHandler API (#531)
    • Improved create_lr_scheduler_with_warmup (#556)
    • Added "all" option to metric_names in contrib loggers (#565)
    • Added GPU usage info as metric (#569)
    • Other bug fixes

    Notebook examples

    • Added Cycle-GAN notebook (#500)
    • Finetune EfficientNet-B0 on CIFAR100 (#544)
    • Added Fashion MNIST jupyter notebook (#549)

    Updated nightly builds

    From pip:

    pip install --pre pytorch-ignite
    

    From conda (this installs the pytorch nightly release as a dependency, instead of the stable version):

    conda install ignite -c pytorch-nightly
    

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    @ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Apr 9, 2019)

    Core

    • We removed the deprecated metric classes BinaryAccuracy and CategoricalAccuracy, which are replaced by Accuracy.

    • Multilabel option for Accuracy, Precision, Recall metrics.

    • Added other metrics:

    • Operations on metrics: p = Precision(average=False)

      • apply PyTorch operators: mean_precision = p.mean()
      • indexing: precision_no_bg = p[1:]
    • Improved our docs with more examples.

    • Added FAQ section with best practices.

    • Bug fixes

    Now in Contributions module

    Notebook examples

    • VAE on MNIST
    • CNN for text classification

    Nightly builds with pytorch-nightly as dependency

    We also provide pip/conda nightly builds with pytorch-nightly as a dependency:

    pip install pytorch-ignite-nightly
    

    or

    conda install -c pytorch ignite-nightly 
    

    Acknowledgments

    🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

    Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

    vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!

    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Dec 14, 2018)

    • Improve and fix bug with binary accuracy, precision, recall
    • Metrics arithmetics
    • ParamScheduler to support multiple optimizers/multiple parameter groups

    Thanks to all our contributors !

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Nov 9, 2018)

    What's new in this release:

    • Contrib module with
      • Parameter schedule
      • TQDM ProgressBar
      • ROC/AUC, AP, MaxAE metrics
      • TBPTT Engine
    • New handlers:
      • Terminate on Nan
    • New metrics:
      • RunningAverage
      • Merged Categorical/Binary -> Accuracy
    • Refactor of examples
    • New examples:
      • Fast Neural Style
      • RL

    Thanks to all our contributors !

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Jun 18, 2018)

    Introduced Engine, Handlers and Metrics.

    Metrics:

    • BinaryAccuracy
    • CategoricalAccuracy
    • Loss
    • Precision
    • Recall
    • etc

    Handlers:

    • ModelCheckpoint
    • EarlyStopping
    • Timer

    Features:

    • PyTorch 0.4 support

    Examples:

    • mnist.py
    • mnist_with_tensorboardx.py
    • mnist_with_visdom.py
    • dcgan.py
    Source code(tar.gz)
    Source code(zip)