PyCIL: A Python Toolbox for Class-Incremental Learning

The code repository for "PyCIL: A Python Toolbox for Class-Incremental Learning" [paper] in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

@misc{zhou2021pycil,
  title={PyCIL: A Python Toolbox for Class-Incremental Learning}, 
  author={Da-Wei Zhou and Fu-Yun Wang and Han-Jia Ye and De-Chuan Zhan},
  year={2021},
  eprint={2112.12533},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Introduction

Traditional machine learning systems are deployed under the closed-world assumption, where the entire training set is available before offline training begins. However, real-world applications often encounter new classes over time, and a model should incorporate them continually. This learning paradigm is called Class-Incremental Learning (CIL). We propose a Python toolbox that implements several key algorithms for class-incremental learning to ease the burden on researchers in the machine learning community. The toolbox contains implementations of a number of founding works of CIL, such as EWC and iCaRL, and also provides current state-of-the-art algorithms that can be used for conducting novel fundamental research. This toolbox, named PyCIL for Python Class-Incremental Learning, is open source under the MIT license.
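
To make the setting concrete, here is a minimal sketch (illustration only, not part of PyCIL) of how a class-incremental protocol unfolds, assuming 100 classes split into an initial stage and equally sized increments; class ordering, exemplar memory, and training are all handled by the toolbox itself:

# Minimal sketch of a class-incremental protocol (illustration only).
num_classes, init_cls, increment = 100, 10, 10

# Split the class labels into an initial stage plus equally sized increments.
stages = [list(range(init_cls))]
for start in range(init_cls, num_classes, increment):
    stages.append(list(range(start, start + increment)))

# At each task, the model trains on the new classes and is evaluated on all classes seen so far.
for task_id, new_classes in enumerate(stages):
    seen = sum(len(s) for s in stages[: task_id + 1])
    print(f"Task {task_id}: train on classes {new_classes[0]}-{new_classes[-1]}, "
          f"evaluate on {seen} classes seen so far")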

Methods Reproduced

  • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
  • EWC: Overcoming catastrophic forgetting in neural networks. [paper]
  • LwF: Learning without Forgetting. [paper]
  • Replay: Baseline method with exemplars.
  • GEM: Gradient Episodic Memory for Continual Learning. [paper]
  • iCaRL: Incremental Classifier and Representation Learning. [paper]
  • BiC: Large Scale Incremental Learning. [paper]
  • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. [paper]
  • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. [paper]
  • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. [paper]
  • Coil: Co-Transport for Class-Incremental Learning. [paper]

Reproduced Results

CIFAR100

ImageNet100

More experimental details and results are shown in our paper.

How To Use

Clone

Clone this GitHub repository:

git clone https://github.com/G-U-N/PyCIL.git
cd PyCIL

Dependencies

  1. torch 1.8.1
  2. torchvision 0.6.0
  3. tqdm
  4. numpy
  5. scipy
  6. quadprog
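
Assuming a standard Python environment, these dependencies can typically be installed with pip; you may need to pin versions to match the list above:

pip install torch torchvision tqdm numpy scipy quadprog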

Run experiment

  1. Edit the [MODEL NAME].json file (under exps/) for global settings.
  2. Edit the hyperparameters in the corresponding [MODEL NAME].py file (e.g., models/icarl.py).
  3. Run:
python main.py --config=./exps/[MODEL NAME].json

where [MODEL NAME] should be chosen from: finetune, ewc, lwf, replay, gem, icarl, bic, wa, podnet, der.

Hyper-parameters

When using PyCIL, you can edit the global parameters and algorithm-specific hyper-parameters in the corresponding JSON file.

These parameters include:

  • memory-size: The total number of exemplars kept during the incremental learning process. Assuming there are $K$ classes at the current stage, the model preserves $\left[\frac{\text{memory-size}}{K}\right]$ exemplars per class (e.g., with a memory size of 2,000 and 20 classes seen so far, 100 exemplars per class).
  • init-cls: The number of classes in the first incremental stage. Since CIL settings differ in the number of classes used in the first stage, our framework enables different choices for defining the initial stage.
  • increment: The number of classes in each incremental stage $i$, $i > 1$. By default, every incremental stage after the first contains the same number of classes.
  • convnet-type: The backbone network of the incremental model. Following the benchmark setting, ResNet32 is used for CIFAR100 and ResNet18 for ImageNet.
  • seed: The random seed used to shuffle the class order. Following the benchmark setting, it is set to 1993 by default.

Other parameters related to model optimization, e.g., batch size, number of epochs, learning rate, learning rate decay, weight decay, milestones, and temperature, can be modified in the corresponding Python file. An example configuration file is sketched below.
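
For reference, a configuration for iCaRL on CIFAR100 might look roughly like the sketch below. It is assembled from the parameters described above (field names follow the snake_case used in the actual files, e.g., memory_size and init_cls); consult the shipped exps/*.json files for the exact fields and values:

{
    "prefix": "train",
    "dataset": "cifar100",
    "memory_size": 2000,
    "fixed_memory": false,
    "shuffle": true,
    "init_cls": 10,
    "increment": 10,
    "model_name": "icarl",
    "convnet_type": "resnet32",
    "device": ["0"],
    "seed": [1993]
}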

Datasets

We have implemented the pre-processing of CIFAR100, ImageNet100, and ImageNet1000. When training on CIFAR100, the framework downloads it automatically. When training on ImageNet100/1000, you should specify the folder of your dataset in utils/data.py.

    def download_data(self):
        # Remove this assertion once the dataset folders below have been specified.
        assert 0, "You should specify the folder of your dataset"
        train_dir = '[DATA-PATH]/train/'
        test_dir = '[DATA-PATH]/val/'
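
For example, if ImageNet100 were stored under a hypothetical /data/imagenet100 folder, the edited function could be reduced to something like the following (the path is made up for illustration; use your own dataset location):

    def download_data(self):
        # Hypothetical local paths for illustration; point these at your own copy of the dataset.
        train_dir = '/data/imagenet100/train/'
        test_dir = '/data/imagenet100/val/'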

License

Please check the MIT license that is listed in this repository.

Acknowledgements

We thank the following repositories for providing helpful components/functions used in our work.

Contact

If you have any questions or would like to propose new features, please feel free to open an issue or contact the authors: Da-Wei Zhou ([email protected]) and Fu-Yun Wang ([email protected]). Enjoy the code.

Comments
  • why FeTrIL acc is so bad

    config:

    {
        "prefix": "train",
        "dataset": "cifar100",
        "memory_size": 0,
        "shuffle": true,
        "init_cls": 50,
        "increment": 10,
        "model_name": "fetril",
        "convnet_type": "resnet32",
        "device": ["0"],
        "seed": [1993],
        "init_epochs": 200,
        "init_lr" : 0.1,
        "init_weight_decay" : 0,
        "epochs" : 50,
        "lr" : 0.05,
        "batch_size" : 128,
        "weight_decay" : 0,
        "num_workers" : 8,
        "T" : 2
    }
    

    final result:

    2022-12-16 23:22:29,436 [fetril.py] => svm train: acc: 10.01
    2022-12-16 23:22:29,451 [fetril.py] => svm evaluation: acc_list: [15.2, 12.47, 10.84, 9.78, 9.06, 8.13]
    2022-12-16 23:22:31,440 [trainer.py] => No NME accuracy.
    2022-12-16 23:22:31,441 [trainer.py] => CNN: {'total': 13.63, '00-09': 19.1, '10-19': 18.3, '20-29': 22.5, '30-39': 17.3, '40-49': 30.1, '50-59': 3.5, '60-69': 11.4, '70-79': 3.1, '80-89': 6.1, '90-99': 4.9, 'old': 14.6, 'new': 4.9}
    2022-12-16 23:22:31,441 [trainer.py] => CNN top1 curve: [28.94, 21.87, 18.54, 16.76, 15.1, 13.63]
    2022-12-16 23:22:31,441 [trainer.py] => CNN top5 curve: [53.28, 47.07, 43.17, 39.08, 36.41, 33.98]
    

    I can provide the log if you need it.

    opened by muyuuuu 10
  • coil fixed memory

    Amazing toolbox!!!

    I have a question about your results for Coil.

    In your work, Section 5.2:

    Since all compared methods are exemplar-based, we fix an equal number of exemplars for every method, i.e., 2,000 exemplars for CIFAR-100 and ImageNet100, 20,000 for ImageNet-1000. As a result, the picked exemplars per class is 20, which is abundant for every class.

    I just want to check the replay size: a fixed memory of 2,000 in total over the training process, which means that "fixed_memory" in the json file is set to false, as shown in this link. I'm a little confused about this setting because there are different protocols in the recent community.

    https://github.com/G-U-N/PyCIL/blob/6d2c1280e8ceb139f4e74a70297519af9eea4a5e/exps/coil.json#L6

    The reason I came across this issue is:

    (screenshot of the reported results table)

    As shown in this table, the iCaRL result for 10 steps is reported as about 61.74, which is lower than the roughly 64 reported in the original paper.

    Hope to get your reply soon. Thanks in advance.

    opened by qsunyuan 8
  • Inquiry about Pre-trained Model & Parameter Setup

    Many thanks for this wonderful framework! It really helps our work a lot!

    I have some questions about your experiment setup.

    1. I have reproduced your 10-stage CIFAR-100 experiments on my own PC (3*3090). The results are as follows:

    (screenshot of reproduced results)

    I followed all the bash files and parameters you have set up, but the results seem to be much lower than yours. Is that because you use an ImageNet pre-trained ResNet? Thanks!

    2. I found that you set different model optimization parameters (e.g., learning rate, epochs, milestones, etc.) for each continual learning approach (instead of using the same hyperparameter policy for all of them). I was wondering whether this kind of parameter setup can be considered a "fair" comparison?

    Thanks in advance for your answer!

    question 
    opened by longbai1006 4
  • About FOSTER's data augmentation

    FOSTER is a work I find very interesting.

    I tried the CIFAR100 benchmark setting.

    I found that the PyCIL implementation does not include the data augmentation used in, e.g., https://github.com/G-U-N/ECCV22-FOSTER/blob/340fbd1a16ca6a787e522b9dc349a5e742f00c30/utils/data.py#L73

    As a result, the PyCIL results are a bit worse. Are there any other differences in the details?

    opened by qsunyuan 3
  • Single GPU training error in DER

    Hi,

    Thank you for this wonderful code base!

    I noticed the current version doesn't support single GPU training. Could you please add this feature?

    Thank you!

    opened by htwang14 3
  • Weird Training Result with train_acc=0 & loss=0

    Thanks for your excellent work!

    But when I tried my own dataset with it, I found that the training result was really weird; take EWC as an example:

    ================ EWC ================
    Task 0, Epoch 200/200 => Loss 0.886, Train_accy 70.60, Test_accy 73.00
    Task 1, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 7.36, '00-09': 7.8, 'old': 7.8, 'new': 0.0}
    Task 2, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 38.67, '00-09': 46.4, '10-19': 0.0, 'old': 43.77, 'new': 0.0}
    Task 3, Epoch 180/180 => Loss 0.003, Train_accy 0.00
     CNN: {'total': 12.99, '00-09': 17.4, '10-19': 0.0, 'old': 14.5, 'new': 0.0}
    ...
    Task 9, Epoch 180/180 => Loss 0.004, Train_accy 0.00
     CNN: {'total': 28.66, '00-09': 55.6, '10-19': 0.0, 'old': 30.89, 'new': 0.0}
    CNN top1 curve: [74.0, 7.36, 38.67, 12.99, 1.22, 23.83, 34.52, 16.55, 8.56, 28.66]
    CNN top5 curve: [98.6, 52.26, 78.0, 45.82, 29.05, 53.58, 57.26, 48.85, 31.78, 48.66]
    

    I also tried it with LwF, and the problem remains.

    Questions About Loss Function

    I am wondering whether there is an issue in the loss function. I found that it is defined as loss_clf = F.cross_entropy(logits[:,self._known_classes:],targets-self._known_classes), along with loss_ewc.

    In my experiments, I set base_session=10, increment=1. So for task 1, when calculating the loss, targets=[10, 10, ..., 10] and self._known_classes=10. Therefore, the target (second argument) of cross_entropy was [0, 0, ..., 0]. Does that mean the loss function pushes the model to reject the new task, making the logits of the new task approach 0?

    Tuning Loss Function

    I tried to change the second argument to [1, 1, ..., 1] by using targets-self._known_classes+1, but an error arises:

    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
    ...
    Traceback (most recent call last):
      File "/tmp/pycharm_project_241/main.py", line 33, in <module>
        main()
      File "/tmp/pycharm_project_241/main.py", line 13, in main
        train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 18, in train
        _train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 48, in _train
        model.incremental_train(data_manager)
      File "/tmp/pycharm_project_241/models/ewc.py", line 62, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/tmp/pycharm_project_241/models/ewc.py", line 84, in _train
        self._update_representation(train_loader, test_loader, optimizer, scheduler)
      File "/tmp/pycharm_project_241/models/ewc.py", line 138, in _update_representation
        loss.backward()
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
    

    I don't know why this happens or how to handle it. Could you please help me clarify it? Thanks a lot!

    opened by Steven-cpp 3
  • why loss_clf = F.cross_entropy(logits[:, self._known_classes:], fake_targets)?

    Thank you very much for your excellent work.

    In models/finetune.py, line 117:

                    fake_targets=targets-self._known_classes
                    loss_clf = F.cross_entropy(logits[:,self._known_classes:], fake_targets)
                    
                    loss=loss_clf
    

    why not :

                    loss_clf = F.cross_entropy(logits, targets)
                    loss=loss_clf
    
    opened by chester-w-xie 3
  • foster error: TypeError: string indices must be integers

    After training the second task, and before compression:

    Task 1, Epoch 167/170 => Loss 0.752, Loss_clf 0.000, Loss_fe 0.001, Loss_kd 0.375, Train_accy 99.00:  98%|█████████▊| 167/170 [09:02<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  98%|█████████▊| 167/170 [09:04<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  99%|█████████▉| 168/170 [09:04<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 168/170 [09:07<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 169/170 [09:07<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67:  99%|█████████▉| 169/170 [09:10<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.01s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.24s/it]
    Traceback (most recent call last):
      File "foster.py", line 31, in <module>
        main()
      File "foster.py", line 12, in main
        train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 18, in train
        _train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 67, in _train
        model.incremental_train(data_manager)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 89, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 169, in _train
        self._feature_compression(train_loader, test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 289, in _feature_compression
        self._snet = FOSTERNet(self.args["convnet_type"], False)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/utils/inc_net.py", line 402, in __init__
        self.convnet_type = args["convnet_type"]
    TypeError: string indices must be integers
    
    opened by muyuuuu 2
  • the memory of rmm

    Hello! First of all, thank you very much for your work. When I read the RMM code, I found that the memory is not 2000; instead, only 2000 samples were used for the new task. Is RMM originally like this? I added the following code to rmm.py to check the memory size.

    (screenshots of the added code and its output)

    opened by zhl98 2
  • What is the purpose of the reduce Exemplar function

    I'd like to know the role of the reduce exemplar, construct exemplar, and construct exemplar unified functions in your base.py file. Is there a reference for how they work?

    opened by Z-ZHIZHONG 2
  •  General idea for the code framework

    Hello, I have some questions. Is your code modified from the original DER implementation? Could you give me a general idea of your design? Thank you very much; I look forward to your reply.

    opened by Z-ZHIZHONG 2
Releases
  • v0.1 (Dec 12, 2022)

    This is the first GitHub release of PyCIL. The reproduced methods are listed below:

    • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
    • EWC: Overcoming catastrophic forgetting in neural networks. PNAS2017 [paper]
    • LwF: Learning without Forgetting. ECCV2016 [paper]
    • Replay: Baseline method with exemplars.
    • GEM: Gradient Episodic Memory for Continual Learning. NIPS2017 [paper]
    • iCaRL: Incremental Classifier and Representation Learning. CVPR2017 [paper]
    • BiC: Large Scale Incremental Learning. CVPR2019 [paper]
    • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. CVPR2020 [paper]
    • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV2020 [paper]
    • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. CVPR2021 [paper]
    • PASS: Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR2021 [paper]
    • RMM: RMM: Reinforced Memory Management for Class-Incremental Learning. NeurIPS2021 [paper]
    • IL2A: Class-Incremental Learning via Dual Augmentation. NeurIPS2021 [paper]
    • SSRE: Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning. CVPR2022 [paper]
    • FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning. WACV2023 [paper]
    • Coil: Co-Transport for Class-Incremental Learning. ACM MM2021 [paper]
    • FOSTER: Feature Boosting and Compression for Class-incremental Learning. ECCV 2022 [paper]

    Stay tuned for more state-of-the-art methods in PyCIL!
