NeuralCompression
What's New
- July 2021 (image compression) - Released implementation of Scale Hyperprior
- July 2021 (video compression) - Released implementation of DVC
About
NeuralCompression is a Python repository dedicated to research on neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.
NeuralCompression is alpha software. The project is under active development. The API will change as we make releases, potentially breaking backwards compatibility.
Installation
NeuralCompression is a project currently under development. You can install the repository in development mode.
PyPI Installation
First, install PyTorch according to the directions from the PyTorch website. Then, you should be able to run
pip install neuralcompression
to get the latest version from PyPI.
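As a quick sanity check (not part of the project docs, just a generic verification), you can confirm that both PyTorch and the package import cleanly:

```python
# Minimal sanity check that the installation succeeded.
# Assumes PyTorch was installed first, as described above.
import torch
import neuralcompression

print("torch:", torch.__version__)
print("neuralcompression installed at:", neuralcompression.__file__)
```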
Development Installation
First, clone the repository and navigate to the NeuralCompression root directory. To match your local environment to the test environment, run
pip install -r dev-requirements.txt
Then, you can install the package in development mode by running
pip install -e .
If you do not need to match the test environment, the second step alone (pip install -e .) is enough to install the package.
Repository Structure
We use a 2-tier repository structure. The neuralcompression package contains a core set of tools for doing neural compression research. Code committed to the core package requires stricter linting, high code quality, and rigorous review. The projects folder contains code for reproducing papers and training baselines. Code in this folder is not linted aggressively, we don't enforce type annotations, and it's okay to omit unit tests.

The 2-tier structure enables rapid iteration and reproduction via code in projects that is built on a backbone of high-quality code in neuralcompression.
neuralcompression
- neuralcompression - base package
  - data - PyTorch data loaders for various data sets
  - entropy_coders - lossless compression algorithms in JAX
    - craystack - an implementation of the rANS algorithm with the craystack API
  - functional - methods for image warping, information cost, etc.
  - layers - building blocks for compression models
  - metrics - torchmetrics classes for assessing model performance (see the example after this list)
  - models - complete compression models

projects
- projects - recipes and code for reproducing papers
  - scale_hyperprior_lightning - Scale Hyperprior (Balle et al., 2018)
  - deep_video_compression - DVC (Lu et al., 2019)
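Since neuralcompression.metrics exposes torchmetrics classes, they follow the standard torchmetrics update/compute pattern. The sketch below illustrates that pattern with a stock torchmetrics metric (MeanSquaredError) standing in for an image-quality metric; the actual class names in neuralcompression.metrics are not shown here, so substitute them from the package source.

```python
# Illustration of the torchmetrics update/compute pattern that classes in
# neuralcompression.metrics follow. MeanSquaredError is only a stand-in;
# swap in the actual metric classes from neuralcompression.metrics.
import torch
from torchmetrics import MeanSquaredError

metric = MeanSquaredError()

originals = torch.rand(4, 3, 256, 256)  # dummy batch of images
reconstructions = originals + 0.01 * torch.randn_like(originals)

# Accumulate over batches, then compute the aggregate value.
metric.update(reconstructions, originals)
print(metric.compute())
```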
Getting Started
For an example of package usage, see the Scale Hyperprior project for how to train an image compression model in PyTorch Lightning, and the DVC project for a video compression example.
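To give a feel for what a Lightning-based training recipe looks like, here is a minimal, self-contained skeleton. It is not the Scale Hyperprior project's code; the toy autoencoder and random tensors are placeholders showing only the LightningModule/Trainer structure that the projects build on.

```python
# Minimal PyTorch Lightning training skeleton, analogous in shape to the
# scale_hyperprior_lightning project. The tiny autoencoder and random data
# below are placeholders, not the project's actual model or datamodule.
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


class TinyCompressionModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(8, 3, kernel_size=3, padding=1)

    def training_step(self, batch, batch_idx):
        (images,) = batch
        reconstructions = self.decoder(self.encoder(images))
        loss = nn.functional.mse_loss(reconstructions, images)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    dataset = TensorDataset(torch.rand(32, 3, 64, 64))  # placeholder data
    loader = DataLoader(dataset, batch_size=8)
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(TinyCompressionModule(), loader)
```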
Contributions
Please read our CONTRIBUTING guide and our CODE_OF_CONDUCT prior to submitting a pull request.
We test all pull requests and rely on those tests during review, so please make sure any new code is tested. Tests for neuralcompression go in the tests folder in the root of the repository. Tests for individual projects go in those projects' own tests folders.
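As a concrete illustration, a new test for neuralcompression is a pytest-style function placed under the top-level tests folder; the file name, function name, and assertions below are only examples, not existing tests.

```python
# Example layout for a test file, e.g. tests/test_example.py (hypothetical name).
# The checks here are placeholders for tests of the code you add.
import torch

import neuralcompression


def test_package_imports():
    # Trivial check that the package is importable and torch works.
    assert neuralcompression is not None
    assert torch.rand(2, 2).shape == (2, 2)
```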
We use black for formatting, isort for import sorting, flake8 for linting, and mypy for type checking. We enforce these on the neuralcompression package, but not in the projects folder.
License
NeuralCompression is MIT licensed, as found in the LICENSE file.
Cite
If you find NeuralCompression useful in your work, feel free to cite
@misc{muckley2021neuralcompression,
author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},
title={NeuralCompression},
howpublished={\url{https://github.com/facebookresearch/NeuralCompression}},
year={2021}
}