OptNet: Differentiable Optimization as a Layer in Neural Networks
This repository is by Brandon Amos and J. Zico Kolter and contains the PyTorch source code to reproduce the experiments in our ICML 2017 paper OptNet: Differentiable Optimization as a Layer in Neural Networks.
If you find this repository helpful in your publications, please consider citing our paper.
```
@InProceedings{amos2017optnet,
  title     = {{O}pt{N}et: Differentiable Optimization as a Layer in Neural Networks},
  author    = {Brandon Amos and J. Zico Kolter},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {136--145},
  year      = {2017},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}
```
Informal Introduction
Mathematical optimization is a well-studied language for expressing solutions to many real-life problems that come up in machine learning and many other fields such as mechanics, economics, electrical engineering, operations research, control engineering, geophysics, and molecular modeling. As we build machine learning systems that interact with real data from these fields, we often cannot (though sometimes can) simply "learn away" the optimization sub-problems by adding more layers to our network. Well-defined optimization problems can be added by hand if you have a thorough understanding of your feature space, but often we don't have this understanding and instead resort to automatic feature learning for our tasks.
Before this repository, no modern deep learning library provided a way to add a learnable optimization layer to a model (other than unrolling an optimization procedure, which is inefficient and inexact) so that we could quickly try one and see whether it is a good way of expressing our data.
See our paper OptNet: Differentiable Optimization as a Layer in Neural Networks and the code at locuslab/optnet if you are interested in learning more about our initial exploration in this space: automatically learning quadratic program layers for signal denoising and Sudoku.
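As a taste of what this enables, here is a minimal sketch of a learnable, differentiable QP layer built on qpth's `QPFunction` (the layer and its names are illustrative, not code from this repository). The forward pass solves a batch of inequality-constrained QPs, and gradients flow back through the argmin to the layer's parameters:

```python
import torch
from torch import nn
from qpth.qp import QPFunction

class QPLayer(nn.Module):
    """Illustrative layer: z*(p) = argmin_z 0.5 z'Qz + p'z  s.t.  Gz <= h,
    where Q, G, h are learned and p is the previous layer's output."""
    def __init__(self, nz, nineq, eps=1e-4):
        super().__init__()
        self.L = nn.Parameter(torch.randn(nz, nz))   # Q = LL' + eps*I stays positive definite
        self.G = nn.Parameter(torch.randn(nineq, nz))
        self.h = nn.Parameter(torch.ones(nineq))     # G·0 <= h holds at initialization
        self.eps = eps

    def forward(self, p):
        Q = self.L @ self.L.t() + self.eps * torch.eye(p.size(1))
        e = torch.Tensor()  # empty tensors: no equality constraints
        return QPFunction(verbose=False)(Q, p, self.G, self.h, e, e)

layer = QPLayer(nz=10, nineq=5)
z = layer(torch.randn(16, 10))  # solves 16 QPs in a batch
z.sum().backward()              # gradients flow through the argmin
```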
Setup and Dependencies
- Python/numpy/PyTorch
- qpth: Our fast QP solver for PyTorch released in conjunction with this paper.
- bamos/block: Our intelligent block matrix library for numpy, PyTorch, and beyond (see the short sketch after this list).
- Optional: bamos/setGPU: A small library to set `CUDA_VISIBLE_DEVICES` on multi-GPU systems.
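For example, `block` lets you assemble a block matrix from a nested list of its pieces without manually tracking sizes and offsets. A minimal sketch of the interface, using numpy arrays; the same call works with PyTorch tensors:

```python
import numpy as np
from block import block

A = np.eye(3)
B = np.zeros((3, 2))
C = np.ones((2, 3))
D = 2.0 * np.eye(2)

# Stitch the four pieces into a single (5, 5) matrix.
M = block([[A, B],
           [C, D]])
print(M.shape)  # (5, 5)
```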
Denoising Experiments
```
denoising
├── create.py   - Script to create the denoising dataset.
├── plot.py     - Plot the results from any experiment.
├── main.py     - Run the FC baseline and OptNet denoising experiments. (See arguments.)
├── main.tv.py  - Run the TV baseline denoising experiment.
└── run-exps.sh - Run all experiments. (May need to uncomment some lines.)
```
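For reference, the TV baseline in `main.tv.py` corresponds to the standard total variation denoising problem (our notation here, with $y$ the noisy signal, $D$ a first-order differencing operator, and $\lambda$ a regularization weight):

$$\operatorname{argmin}_z \; \tfrac{1}{2}\lVert y - z\rVert_2^2 + \lambda \lVert Dz\rVert_1$$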
Sudoku Experiments
- The dataset we used in our experiments is available in `sudoku/data`.
```
sudoku
├── create.py - Script to create the dataset.
├── plot.py   - Plot the results from any experiment.
├── main.py   - Run the FC baseline and OptNet Sudoku experiments. (See arguments.)
└── models.py - Models used for Sudoku.
```
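As in the paper, each puzzle is represented by one-hot encoding every cell over the possible digits (our experiments use 4x4 boards). A minimal sketch of such an encoding; `one_hot_board` is an illustrative helper, not a function from `models.py`:

```python
import numpy as np

def one_hot_board(board, n=4):
    """Encode an n x n Sudoku board as an (n, n, n) one-hot array.
    Entry (i, j, k) is 1 iff cell (i, j) contains digit k+1;
    empty cells (0) encode to all zeros. Illustrative helper only.
    """
    enc = np.zeros((n, n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            if board[i][j] > 0:
                enc[i, j, board[i][j] - 1] = 1.0
    return enc

puzzle = [[1, 0, 0, 3],
          [0, 3, 1, 0],
          [3, 0, 0, 2],
          [0, 2, 3, 0]]
x = one_hot_board(puzzle)  # shape (4, 4, 4)
```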
Classification Experiments
```
cls
├── train.py  - Run the FC baseline and OptNet classification experiments. (See arguments.)
├── plot.py   - Plot the results from any experiment.
└── models.py - Models used for classification.
```
Acknowledgments
The rapid development of this work would not have been possible without the immense amount of help from the PyTorch team, particularly Soumith Chintala and Adam Paszke.
Licensing
Unless otherwise stated, the source code is copyright Carnegie Mellon University and licensed under the Apache 2.0 License.