Target Propagation via Regularized Inversion
This code provides a simple and efficient implementation of stochastic training with target propagation (TP). The main idea is to use regularized inverses computed analytically, rather than the approximate inverses proposed in previous work (e.g., learned reverse layers such as auto-encoders). We focus on recurrent neural networks (RNNs) to illustrate the benefits of TP for training networks involving long compositions; indeed, the experiments suggest that TP can be beneficial for training RNNs on long sequences. The target_prop function is implemented in "src/model/rnn.py", while the training loop is in "src/optim/run_optimizer.py".
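To make the mechanism concrete, here is a minimal sketch of target propagation through a vanilla RNN cell using a ridge-regularized analytic inverse. It is an illustration under assumptions, not the repository's implementation: the helper names (regularized_inverse, target_prop_step), the cell parameterization h_{t+1} = tanh(W h_t + U x_t + b), and the exact form of the regularization are placeholders; see target_prop in src/model/rnn.py for the actual method.

import torch

def regularized_inverse(W, v, r):
    # Analytic ridge-regularized inverse of the linear map h -> W @ h:
    # argmin_h ||W h - v||^2 + r ||h||^2 = (W^T W + r I)^{-1} W^T v.
    d = W.shape[1]
    A = W.T @ W + r * torch.eye(d)
    return torch.linalg.solve(A, W.T @ v)

def target_prop_step(W, U, b, x_t, v_next, r):
    # One backward TP step through h_{t+1} = tanh(W h_t + U x_t + b):
    # invert tanh analytically, then apply the regularized inverse of the
    # linear part to turn a target for h_{t+1} into a target for h_t.
    z = torch.atanh(torch.clamp(v_next, -0.99, 0.99))  # clamped for stability
    return regularized_inverse(W, z - U @ x_t - b, r)

# Tiny usage example with random data (dummy final target in place of a loss gradient).
T, d, p, r = 5, 4, 3, 1.0
W = torch.randn(d, d) / d ** 0.5
U = torch.randn(d, p) / p ** 0.5
b = torch.zeros(d)
xs = [torch.randn(p) for _ in range(T)]
h = [torch.zeros(d)]
for t in range(T):                      # forward pass through time
    h.append(torch.tanh(W @ h[t] + U @ xs[t] + b))
v = h[-1] - 0.1 * torch.randn(d)        # final target: a step away from h_T (dummy direction)
for t in reversed(range(T)):            # backward pass: propagate targets through time
    v = target_prop_step(W, U, b, xs[t], v, r)

The key design point this sketch illustrates: because tanh is invertible and the linear part admits a closed-form ridge solution, each backward target step can be computed analytically instead of being learned by a reverse network.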
The code allows one to reproduce the results presented in the paper referenced below.
Setup
Create a conda environment and activate it:
conda create -n target_prop python=3.8
conda activate target_prop
Install the dependencies:
conda install seaborn matplotlib pandas
For PyTorch, the installation depends on your OS. On macOS, for example, use
conda install pytorch torchvision -c pytorch
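To check that the environment was set up correctly, the following one-liner (a sanity check, not part of the repository) should print the installed PyTorch version:
python -c "import torch, seaborn, matplotlib, pandas; print(torch.__version__)"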
Experiments
To reproduce the plots presented in the paper, run the following from the exp folder:
python paper_conv_plots.py
python paper_regimes.py
python paper_reg_plots.py
Contact
You can report issues and ask questions on the repository's issues page. If you choose to send an email instead, please direct it to Vincent Roulet at [email protected] and include [tpri] in the subject line.
Paper
Target Propagation via Regularized Inversion
Vincent Roulet, Zaid Harchaoui.
arXiv preprint
If you use this code, please cite the paper using the BibTeX reference below.
@article{roulet2021target,
  title={Target Propagation via Regularized Inversion},
  author={Roulet, Vincent and Harchaoui, Zaid},
  journal={arXiv preprint},
  year={2021}
}
License
This code is available under the GPLv3 license.