PyTorch implementation of ENet

Overview

PyTorch-ENet

PyTorch (v1.1.0) implementation of ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation, ported from the lua-torch implementation ENet-training created by the authors.

This implementation has been tested on the CamVid and Cityscapes datasets. Pre-trained versions of the model, trained on CamVid and on Cityscapes, are available here.

Dataset      Classes¹  Input resolution  Batch size  Epochs  Mean IoU (%)  GPU memory (GiB)  Training time (hours)²
CamVid       11        480x360           10          300     52.1³         4.2               1
Cityscapes   19        1024x512          4           300     59.5⁴         5.4               20

¹ When referring to the number of classes, the void/unlabeled class is always excluded.
² These are just for reference. Implementation, dataset, and hardware changes can lead to very different results. Reference hardware: Nvidia GTX 1070 and an AMD Ryzen 5 3600 3.6GHz. You can also train for around 100 epochs and get a similar mean IoU (± 2%).
³ Test set.
⁴ Validation set.
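
For a quick test of a pre-trained model, it can be loaded roughly as in the sketch below; the checkpoint path and the 'state_dict' key are assumptions based on this repository's save/load helpers, so adjust them to your local layout:

import torch
from models.enet import ENet  # import path assumed from this repo's layout

num_classes = 19  # Cityscapes; use 11 for CamVid
model = ENet(num_classes)

# Assumed checkpoint location and key names; verify against utils.save_checkpoint
checkpoint = torch.load('save/ENet_Cityscapes/ENet', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
model.eval()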

Installation

Local pip

  1. Install Python 3 and pip
  2. Set up a virtual environment (optional, but recommended)
  3. Install dependencies using pip: pip install -r requirements.txt

Docker image

  1. Build the image: docker build -t enet .
  2. Run: docker run -it --gpus all --ipc host enet

Usage

Run main.py, the main script file used for training and/or testing the model. The following options are supported:

python main.py [-h] [--mode {train,test,full}] [--resume]
               [--batch-size BATCH_SIZE] [--epochs EPOCHS]
               [--learning-rate LEARNING_RATE] [--lr-decay LR_DECAY]
               [--lr-decay-epochs LR_DECAY_EPOCHS]
               [--weight-decay WEIGHT_DECAY] [--dataset {camvid,cityscapes}]
               [--dataset-dir DATASET_DIR] [--height HEIGHT] [--width WIDTH]
               [--weighing {enet,mfb,none}] [--with-unlabeled]
               [--workers WORKERS] [--print-step] [--imshow-batch]
               [--device DEVICE] [--name NAME] [--save-dir SAVE_DIR]

For help on the optional arguments, run: python main.py -h

Examples: Training

python main.py -m train --save-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/

Examples: Resuming training

python main.py -m train --resume --save-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/

Examples: Testing

python main.py -m test --save-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/

Project structure

Folders

  • data: Contains instructions on how to download the datasets and the code that handles data loading.
  • metric: Evaluation-related metrics.
  • models: ENet model definition.
  • save: By default, main.py will save models in this folder. The pre-trained models can also be found here.

Files

  • args.py: Contains all command-line options.
  • main.py: Main script file used for training and/or testing the model.
  • test.py: Defines the Test class which is responsible for testing the model.
  • train.py: Defines the Train class which is responsible for training the model.
  • transforms.py: Defines image transformations to convert an RGB image encoding classes to a torch.LongTensor and vice versa.
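
    As an illustration, a minimal sketch of using these transforms (class names are taken from this repository; the label path is a placeholder):

    from PIL import Image
    import transforms as ext_transforms

    # PILToLongTensor converts a label image into a torch.LongTensor of class
    # values; LongTensorToRGBPIL performs the inverse given a color encoding
    # (an OrderedDict mapping class names to RGB tuples).
    label = Image.open('path/to/label.png')
    label_tensor = ext_transforms.PILToLongTensor()(label)
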
Comments
  • Details in training on Camvid

    Hello @davidtvs, thank you for your work. I have a question about CamVid training. I tried training from scratch on the CamVid dataset, following this division for training and testing. Evaluating on the validation data, I only got an mIoU of about 31% at epoch 1000, using the same input size you mention in your README. Did you also encounter this problem? Using your implementation, I followed these settings:

    1. 11 classes; unlabelled pixels belong to class 0, and every class outside those 11 is also mapped to the background class 0.
    2. The road-marking class is not used; I checked another implementation, and they don't use it either.
    3. I used the ENet initialization. So, in your experiments, did you get about 31% validation accuracy at epochs beyond 500? (A sketch of the 'enet' class weighting follows this list, for reference.)
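
    For reference, the repository's 'enet' weighing option implements the paper's formula w_class = 1 / ln(c + p_class) with c = 1.02, where p_class is the class's pixel probability. A minimal sketch, in case class weighting is part of the problem:

    import numpy as np

    def enet_class_weights(p_class, c=1.02):
        """p_class: array of per-class pixel frequencies (values in [0, 1])."""
        return 1.0 / np.log(c + p_class)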
    opened by herleeyandi 8
  • Problems while using different dataset

    Hello, I tried to adapt the code into a .ipynb so I could run isolated cells and check how the pipeline works, as I'm trying to evaluate ENet's performance at learning and predicting underwater images (SUIM dataset, link), but I'm facing some problems.

    When I run the training cell, it throws the following error: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4. Have you faced a similar issue before? Any help would be greatly appreciated!

    The code, with minor changes, follows below (a note on the error appears after the listing):

    import torch.nn.functional as F
    import random
    import torch
    import torch.nn as nn
    import torch.optim as optim
    import torchvision.transforms.functional as TF
    import torchvision
    import torchvision.transforms as transforms
    import transforms as ext_transforms
    import torch.optim.lr_scheduler as lr_scheduler
    import os
    import numpy as np
    import matplotlib.pyplot as plt
    import glob
    from collections import OrderedDict
    from torch.utils.data import Dataset, DataLoader
    
    # ENET PYTORCH GITHUB LIBS
    import utils
    import tools
    from PIL import Image
    from enet import ENet
    from iou import IoU
    from train import Train
    from test import Test
    
    # Configuring images size
    std_size = 256
    
    # Setting device for torch
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    
    image_transform = transforms.Compose(
        [transforms.CenterCrop((std_size, std_size)),
          transforms.ToTensor()])
    
    label_transform = transforms.Compose([
        transforms.CenterCrop((std_size, std_size)),
        ext_transforms.PILToLongTensor()])
    
    root_dir = '/content/drive/My Drive/Colab Notebooks'
    save_dir = '/content/drive/My Drive/Colab Notebooks/save'
    
    # Training dataset root folders
    train_folder = os.path.normpath(root_dir + '/train/images')
    train_lbl_folder = os.path.normpath(root_dir + '/train/masks')
    
    # Validation dataset root folders
    val_folder = os.path.normpath(root_dir + '/val/images')
    val_lbl_folder = os.path.normpath(root_dir + '/val/masks')
    
    # Test dataset root folders
    test_folder = os.path.normpath(root_dir + '/test/images')
    test_lbl_folder = os.path.normpath(root_dir + '/test/masks')
    
    class CustomDataset(Dataset):
        """Custom Dataset based on CamVid dataset found on:
          https://github.com/davidtvs/PyTorch-ENet.
        
        Student disclaimer: most parts of this code were used and adapted
        for academic purposes only, with no commercial intents. All rights
        reserved to original author. Please refer to the url cited above.
    
        Keyword arguments:
        - root_dir (``string``): Root directory path.
        - mode (``string``): The type of dataset: 'train' for training set, 'val'
        for validation set, and 'test' for test set.
    - transform (``callable``, optional): A function/transform that takes in
    a PIL image and returns a transformed version. Default: None.
        - label_transform (``callable``, optional): A function/transform that takes
        in the target and transforms it. Default: None.
        - loader (``callable``, optional): A function to load an image given its
        path. By default ``default_loader`` is used.
    
        """
    
        img_extension = '.jpg'
        label_extension = '.bmp'
    
        color_encoding = OrderedDict([
            ('Background', (0,0,0)),
            ('Human Divers', (0,0,255)),
            ('Aquatic Plants and Sea-Grass', (0,255,0)),
            ('Wrecks and Ruins', (0,255,255)),
            ('Robots', (255,0,0)),
        ('Reefs and Invertebrates', (255,0,255)),
        ('Fish and Vertebrates', (255,255,0)),
            ('Sea-Floor and Rocks', (255,255,255))
        ])
    
        def __init__(self, mode = 'train', transform=None, 
                     label_transform = None, loader = tools.pil_loader):
            self.mode = mode
            self.transform = transform
            self.label_transform = label_transform
            self.loader = loader
        
            if self.mode.lower() == 'train':
                # Get the training data and labels filepaths
                self.train_data = tools.get_files(
                    train_folder, extension_filter=self.img_extension)
    
                self.train_labels = tools.get_files(
                    train_lbl_folder, extension_filter=self.label_extension)
                
            elif self.mode.lower() == 'val':
                # Get the validation data and labels filepaths
                self.val_data = tools.get_files(
                    val_folder, extension_filter=self.img_extension)
    
                self.val_labels = tools.get_files(
                    val_lbl_folder, extension_filter=self.label_extension)
                
            elif self.mode.lower() == 'test':
                # Get the test data and labels filepaths
                self.test_data = tools.get_files(
                    test_folder, extension_filter=self.img_extension)
    
                self.test_labels = tools.get_files(
                    test_lbl_folder, extension_filter=self.label_extension)
                
            else:
                raise RuntimeError("Unexpected dataset mode. "
                                   "Supported modes are: train, val and test")
    
        def __getitem__(self, index):
    
            """
            Args:
            - index (``int``): index of the item in the dataset
    
            Returns:
            A tuple of ``PIL.Image`` (image, label) where label is the ground-truth
            of the image.
    
            """
            if self.mode.lower() == 'train':
                data_path, label_path = self.train_data[index], self.train_labels[
                    index]
            elif self.mode.lower() == 'val':
                data_path, label_path = self.val_data[index], self.val_labels[
                    index]
            elif self.mode.lower() == 'test':
                data_path, label_path = self.test_data[index], self.test_labels[
                    index]
            else:
                raise RuntimeError("Unexpected dataset mode. "
                                   "Supported modes are: train, val and test")
    
            img, label = self.loader(data_path, label_path)
    
            if self.transform is not None:
                img = self.transform(img)
    
            if self.label_transform is not None:
                label = self.label_transform(label)
    
            return img, label
            
        
        def __len__(self):
            """Returns the length of the dataset."""
            if self.mode.lower() == 'train':
                return len(self.train_data)
            elif self.mode.lower() == 'val':
                return len(self.val_data)
            elif self.mode.lower() == 'test':
                return len(self.test_data)
            else:
                raise RuntimeError("Unexpected dataset mode. "
                                   "Supported modes are: train, val and test")
    
    # Setting Dataloader variables
    mode = input('SELECT MODE OF OPERATION: train, val or test: ')
    batch_size = 4
    num_workers = 0
    
    # Load the training set as tensors
    train_set = CustomDataset(
        transform=image_transform,
        label_transform=label_transform)
    train_loader = DataLoader(
        train_set,
        batch_size=batch_size,
        shuffle=True,
        num_workers=num_workers)
    
    # Load the validation set as tensors
    val_set = CustomDataset(
        mode='val',
        transform=image_transform,
        label_transform=label_transform)
    val_loader = DataLoader(
        val_set,
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers)
    
    # Load the test set as tensors
    test_set = CustomDataset(
        mode='test',
        transform=image_transform,
        label_transform=label_transform)
    test_loader = DataLoader(
        test_set,  # the Dataset instance, not the root directory path
        batch_size=batch_size,
        shuffle=False,
        num_workers=num_workers)
    
    # Retrieving color_encoding
    class_encoding = train_set.color_encoding
    
    # Get number of classes to predict
    num_classes = len(class_encoding)
    
    # Print information for debugging
    print("Number of classes to predict:", num_classes)
    print("Train dataset size:", len(train_set))
    print("Validation dataset size:", len(val_set))
    
    # Get class weights from the selected weighing technique
    weighing = 'enet'
    ignore_unlabeled = False
    
    print("\nWeighing technique:", weighing)
    print("Computing class weights...")
    print("(this can take a while depending on the dataset size)")
    
    class_weights = 0
    
    if weighing.lower() == 'enet':
        class_weights = tools.enet_weighing(train_loader, num_classes)
    elif weighing.lower() == 'mfb':
        class_weights = tools.median_freq_balancing(train_loader, num_classes)
    else:
        class_weights = None
    
    if class_weights is not None:
        class_weights = torch.from_numpy(class_weights).float().to(device)
        # Set the weight of the unlabeled class to 0
        if ignore_unlabeled:
            ignore_index = list(class_encoding).index('unlabeled')
            class_weights[ignore_index] = 0
    
    print("Class weights:", class_weights)
    
    class_weights = None
    
    learning_rate = 0.05
    weight_decay = 0.1
    lr_decay_epochs = 10
    lr_decay = 0.1
    
    # Initialize ENet
    model = ENet(num_classes).to(device)
    # Check if the network architecture is correct
    # print(model)
    
    # We use CrossEntropyLoss as it is the most frequently used loss function
    # for multi-class classification problems, which fits this task. This
    # criterion combines LogSoftmax and NLLLoss.
    criterion = nn.CrossEntropyLoss(weight=class_weights)
    
    # ENet authors used Adam as the optimizer
    optimizer = optim.Adam(
        model.parameters(),
        lr=learning_rate,
        weight_decay=weight_decay)
    
    # Learning rate decay scheduler
    lr_updater = lr_scheduler.StepLR(optimizer, lr_decay_epochs,
                                      lr_decay)
    
    # Evaluation metric
    metric = IoU(num_classes, ignore_index=None)  # None, not False: don't ignore any class
    
    # Optionally resume from a checkpoint
    resume = False
    name = 'test'
    
    if resume:
        model, optimizer, start_epoch, best_miou = utils.load_checkpoint(
            model, optimizer, save_dir, name)
        print("Resuming from model: Start epoch = {0} "
              "| Best mean IoU = {1:.4f}".format(start_epoch, best_miou))
    else:
        start_epoch = 0
        best_miou = 0
    
    epochs = 10
    
    train = Train(model, train_loader, optimizer, criterion, metric, device)
    val = Test(model, val_loader, criterion, metric, device)
    for epoch in range(start_epoch, epochs):
        print(">>>> [Epoch: {0:d}] Training".format(epoch))
    
        lr_updater.step()
        epoch_loss, (iou, miou) = train.run_epoch(True)
    
        print(">>>> [Epoch: {0:d}] Avg. loss: {1:.4f} | Mean IoU: {2:.4f}".
              format(epoch, epoch_loss, miou))
    
        if (epoch + 1) % 10 == 0 or epoch + 1 == epochs:
            print(">>>> [Epoch: {0:d}] Validation".format(epoch))
    
            loss, (iou, miou) = val.run_epoch(True)
    
            print(">>>> [Epoch: {0:d}] Avg. loss: {1:.4f} | Mean IoU: {2:.4f}".
                  format(epoch, loss, miou))
    
            # Print per class IoU on last epoch or if best iou
            if epoch + 1 == epochs or miou > best_miou:
                for key, class_iou in zip(class_encoding.keys(), iou):
                    print("{0}: {1:.4f}".format(key, class_iou))
    
            # Save the model if it's the best thus far
            if miou > best_miou:
                print("\nBest model thus far. Saving...\n")
                best_miou = miou
                #utils.save_checkpoint(model, optimizer, epoch + 1, best_miou)
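
    For reference, this error usually means the label batch is still N x 3 x H x W (RGB masks), while CrossEntropyLoss expects N x H x W class indices. A hedged sketch of mapping an RGB mask to indices with the color_encoding above (the helper name is illustrative):

    import numpy as np
    import torch

    def rgb_mask_to_index(mask_pil, color_encoding):
        """Map an RGB mask (PIL.Image) to an HxW LongTensor of class indices."""
        mask = np.array(mask_pil)                        # H x W x 3
        index = np.zeros(mask.shape[:2], dtype=np.int64)
        for class_index, color in enumerate(color_encoding.values()):
            index[(mask == color).all(axis=-1)] = class_index
        return torch.from_numpy(index)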
    
    opened by robsoncsantiago 5
  • Activation selection within the bottlenecks in the network

    if relu:
        activation = nn.ReLU()
    else:
        activation = nn.PReLU()
    

    Does doing this ensure that PReLU weights are unique for each instance of activation within the bottlenecks? While tracing this network with torch.jit, I get errors about weights shared by nn.PReLU layers within the submodules. Perhaps this should be implemented with copy.deepcopy for all instances?

    To follow the original paper more closely, the number of channels can be specified for each PReLU instance so that it learns a weight per channel, as shown here; see the sketch below.
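
    For what it's worth, a hedged sketch of building a fresh per-channel activation at each use site (nn.PReLU's num_parameters argument gives one learnable slope per channel):

    import torch.nn as nn

    def make_activation(relu, channels):
        # A new module instance per call site, so no weights are shared.
        return nn.ReLU() if relu else nn.PReLU(num_parameters=channels)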

    opened by heethesh 5
  • Training the network on a binary mask

    I'm trying to run training on my own dataset, which consists of images and 2D binary masks. With the current label transformation I keep getting memory exceptions; I tried adapting the label transformation to make it work, but when training starts, the result shows Mean IoU: 1.0000 from the first epoch. Do you have suggestions on how to make the network work with a new dataset, specifically for a binary classification task? (See the sketch below.)

    Thanks
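
    For reference, one hedged way to feed a binary task into this pipeline is to keep each label as an HxW LongTensor with values {0, 1} and set num_classes = 2; the threshold below is an assumption for 8-bit masks:

    import numpy as np
    import torch

    def binary_mask_to_index(mask_pil):
        """Convert a binary mask (PIL.Image) to an HxW LongTensor of {0, 1}."""
        mask = np.array(mask_pil.convert('L'))  # H x W, 8-bit grayscale
        return torch.from_numpy((mask > 127).astype(np.int64))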

    opened by murhafh 5
  • during inference, how to handle unlabeled class?

    Hi, thank you for your great work. I've successfully trained my own models on a local dataset. Just one question about the unlabeled class: as I understand it, the unlabeled class is excluded by setting its weight to 0 during training. But during inference, the trained model couldn't predict any unlabeled class; it predicted a cyan boundary (like the car class in the attached image) where cyan is supposed to be pole and sign-symbols. Is there anything I've done wrong?

    opened by ghost 5
  • a question about UpsamplingBottleneck block

    @davidtvs The official code does not have any activation function in the 1x1 expansion block of the Bottleneck blocks. The paper says that Batch Normalization and PReLU are placed between all convolutions, not after all of them. In the official code fragment of the Bottleneck block, I cannot find any activation function after the region marked in red. I hope you can check this problem.

    opened by cedricgsh 4
  • RuntimeError: weight tensor should be defined either for all or no classes

    Hi, when I trained on my own dataset (like CamVid, but with 4 classes), this error happened:

    >>>> [Epoch: 0] Training
    Traceback (most recent call last):
      File "main.py", line 306, in <module>
        model = train(train_loader, val_loader, w_class, class_encoding)
      File "main.py", line 191, in train
        epoch_loss, (iou, miou) = train.run_epoch(args.print_step)
      File "/media/gaoya/disk/Applications/pytorch/SemanticSegmentation/PyTorch-ENet-master/train.py", line 47, in run_epoch
        loss = self.criterion(outputs, labels)
      File "/media/gaoya/disk/Applications/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/media/gaoya/disk/Applications/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 916, in forward
        ignore_index=self.ignore_index, reduction=self.reduction)
      File "/media/gaoya/disk/Applications/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1995, in cross_entropy
        return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
      File "/media/gaoya/disk/Applications/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1826, in nll_loss
        ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
    RuntimeError: weight tensor should be defined either for all or no classes at /tmp/pip-req-build-58y_cjjl/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:27
    

    How can I solve it?
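
    For reference, this error is raised when the weight vector passed to the criterion doesn't have exactly one entry per class in the model's output. A minimal sketch of the invariant:

    import torch
    import torch.nn as nn

    num_classes = 4  # must match the number of channels the model outputs
    class_weights = torch.ones(num_classes)
    assert class_weights.numel() == num_classes
    criterion = nn.CrossEntropyLoss(weight=class_weights)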

    opened by ouening 4
  • Wrong about test

    Both the model I trained and the one you provide have this problem:

    Avg. loss: 0.0000 | Mean IoU: nan
    unlabeled: nan
    road: nan
    sidewalk: nan
    building: nan
    wall: nan
    fence: nan
    pole: nan
    traffic_light: nan
    traffic_sign: nan
    vegetation: nan
    terrain: nan
    sky: nan
    person: nan
    rider: nan
    car: nan
    truck: nan
    bus: nan
    train: nan
    motorcycle: nan
    bicycle: nan

    opened by ChaosNN 4
  • About Training

    When I want to train your ENet on my device, I run the following command:

    python main.py -m train --save-dir ./camvid_model/ --name ENet --dataset camvid --dataset-dir CamVid/ --with-unlabeled --imshow-batch

    But I run into the following error:

    Traceback (most recent call last):
      File "main.py", line 291, in <module>
        loaders, w_class, class_encoding = load_dataset(dataset)
      File "main.py", line 110, in load_dataset
        color_labels = utils.batch_transform(labels, label_to_rgb)
      File "/home/amax/linrui/PyTorch-ENet-master/utils.py", line 21, in batch_transform
        transf_slices = [transform(tensor) for tensor in torch.unbind(batch)]
      File "/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py", line 49, in __call__
        img = t(img)
      File "/home/amax/linrui/PyTorch-ENet-master/transforms.py", line 91, in __call__
        color_tensor[channel].masked_fill_(mask, color_value)
    RuntimeError: expand(torch.ByteTensor{[3, 360, 480]}, size=[360, 480]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)

    The label.size() is [B, 3, H, W]; how is it translated to [B, classnumber, H, W]? I couldn't find this in your code. Thank you.

    opened by LinRui9531 4
  • utils.batch_transform requires some changes

    When I run python main.py -m test --imshow-batch

    I get an error:

    File "/home/sam/Documents/ComputerVision/PyTorch-ENet/utils.py", line 24, in batch_transform return F.stack(transf_slices) AttributeError: module 'torch.functional' has no attribute 'stack'

    opened by shirishr 4
  • the question about torch.load()

    Thanks for your code, which has helped me a lot. I want to segment my own road images; this is just a personal test, not part of any research or commercial product. I wrote a new script that just loads the pre-trained model and uses it to process my picture. But when I load the model, this error arises: "_pickle.UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified."

    I have searched for explanations; one says this is caused by the torch version, but I am sure my version meets the requirements.

    opened by UnderTheMangoTree 3
  • the question about output shape

    I ran into a problem when testing the model with the Cityscapes dataset, shown below. How can I solve it?

    Traceback (most recent call last):
      File "main.py", line 348, in <module>
        loaders, w_class, class_encoding = load_dataset(dataset)
      File "main.py", line 124, in load_dataset
        color_labels = utils.batch_transform(labels, label_to_rgb)
      File "D:\Projects\ENetProjects\PyTorch-ENet-master\utils.py", line 21, in batch_transform
        transf_slices = [transform(tensor) for tensor in torch.unbind(batch)]
      File "D:\Projects\ENetProjects\PyTorch-ENet-master\utils.py", line 21, in <listcomp>
        transf_slices = [transform(tensor) for tensor in torch.unbind(batch)]
      File "D:\Applications\anaconda3\lib\site-packages\torchvision\transforms\transforms.py", line 61, in __call__
        img = t(img)
      File "D:\Projects\ENetProjects\PyTorch-ENet-master\transforms.py", line 92, in __call__
        color_tensor[channel].masked_fill_(mask, color_value)
    RuntimeError: output with shape [360, 480] doesn't match the broadcast shape [3, 360, 480]

    opened by lucky26418 9
  • Question about the params and FLOPs of ENet

    Has anyone reproduced ENet? Why are the parameter count and GFLOPs of my reproduced network about 10x and 4x larger, respectively, than the values mentioned in the original paper (Table 3)? My calculated values: params: 3.5 million, GFLOPs: 16.9.
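
    For comparison, a quick hedged way to count trainable parameters directly (the original paper's Table 3 reports ENet at roughly 0.37M parameters; the import path is assumed from this repository's layout):

    from models.enet import ENet

    model = ENet(19)  # 19 classes (Cityscapes)
    params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print("{:.2f}M trainable parameters".format(params / 1e6))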

    opened by mrzhouxixi 2
  • About the cityscape dataset 1024*512.

    Hello! Thanks for your code. I trained on Cityscapes for 300 epochs, but the best model was at epoch 210, with an mIoU of 50.00%. Could the gap be because I trained at the dataset's original size instead of 1024x512?

    opened by 100sby 2