Kompressor

Overview

A lossless neural compression framework built on top of JAX.

Branch         CI     Coverage
main (active)  Build  codecov
development    Build  codecov

Install

setup.py assumes compatible versions of JAX and jaxlib are already installed. The automated build is tested in a cuda:11.1-cudnn8-runtime-ubuntu20.04 environment with jaxlib==0.1.76+cuda11.cudnn82.

git clone https://github.com/rosalindfranklininstitute/kompressor.git
cd kompressor
pip install -e .

# Run tests
python -m pytest --cov=src/kompressor tests/

Install & Run through Docker environment

A Docker image with the Kompressor dependencies is provided as the quay.io/rosalindfranklininstitute/kompressor:main image on Quay.io.

# Run the container for the Kompressor environment
docker run --rm quay.io/rosalindfranklininstitute/kompressor:main \
    python -m pytest --cov=/usr/local/kompressor/src/kompressor /usr/local/kompressor/tests

Install & Run through Singularity environment

A Singularity image with the Kompressor dependencies is provided as the rosalindfranklininstitute/kompressor/kompressor:main image on cloud.sylabs.io.

singularity pull library://rosalindfranklininstitute/kompressor/kompressor:main
singularity run kompressor_main.sif \
    python -m pytest --cov=/usr/local/kompressor/src/kompressor /usr/local/kompressor/tests
Comments
  • Refactor map tuples to dicts

Closes #14. Functions that previously returned an ordered tuple of maps (lrmap, udmap, cmap, ...) now return keyed dictionaries { 'lrmap': lrmap, 'udmap': udmap, 'cmap': cmap, ... } so that ordering and usage are explicit.

    List comprehensions over the tuples now use jax.tree_map and jax.tree_multimap to ensure key safety.

@GMW99, this will break the current implementation of the Metrics Callback class, which iterates over a zip of the hardcoded map names and the maps tuple. That iteration can be replaced by iterating over maps.items(), since maps is now a dict.
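
A minimal sketch of the new pattern, with placeholder map values (the real lrmap/udmap/cmap contents come from the predictor):

import jax
import jax.numpy as jnp

# Placeholder map values; the real lrmap/udmap/cmap come from the predictor.
maps = {
    'lrmap': jnp.zeros((4, 4), dtype=jnp.uint8),
    'udmap': jnp.ones((4, 4), dtype=jnp.uint8),
    'cmap':  jnp.full((4, 4), 2, dtype=jnp.uint8),
}

# jax.tree_map replaces list comprehensions over the old tuples; it applies
# the function to every leaf while preserving the dict keys.
shifted = jax.tree_map(lambda m: m + 1, maps)

# jax.tree_multimap (merged into jax.tree_map in later JAX releases) zips two
# pytrees of matching structure, so mismatched keys fail loudly.
deltas = jax.tree_multimap(lambda a, b: b - a, maps, shifted)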

    enhancement 
    opened by JossWhittle 1
  • Ensure jax.jit static_argnums is refactored to static_argnames

    Functions that currently mark static_argnums=(0, 1, 2) should be updated to use the safer static_argnames=('tom', 'dick', 'harry') that is now available.
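
A minimal before/after sketch; the function and argument names here are placeholders rather than Kompressor's real signatures:

from functools import partial
import jax
import jax.numpy as jnp

# Before: the static argument is identified by position, which silently
# breaks if the signature is ever reordered.
@partial(jax.jit, static_argnums=(1,))
def upscale_old(x, factor):
    return jnp.repeat(jnp.repeat(x, factor, axis=0), factor, axis=1)

# After: the static argument is identified by name, so the binding survives
# signature changes and keyword calls.
@partial(jax.jit, static_argnames=('factor',))
def upscale_new(x, factor):
    return jnp.repeat(jnp.repeat(x, factor, axis=0), factor, axis=1)

print(upscale_new(jnp.ones((2, 2)), factor=2).shape)  # (4, 4)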

    enhancement high priority 
    opened by JossWhittle 1
  • Update development examples

    • Splits the Docker image into a JAX base image and a Kompressor dependency-and-install image
    • The JAX image installs JAX from source to ensure the correct CUDA/cuDNN versions
    • Adjusts setup.py to install dependencies from requirements.txt
    • Refactors how submodules are imported (within the kom.image submodule; need to check that the volume submodule matches)
    • Adds a kom.image.data submodule for dealing with TensorFlow data pipelines
    • Fixes pooling in the total variation losses (used as metrics in the example notebooks)
    • Moves all the encoding/decoding functions for the maps into a kom.mapping submodule
    • Adds within-k and run-length metrics to kom.image.metrics for the example notebooks
    • Adds example notebooks for interacting with the maps and training a basic Haiku compression model
    feature 
    opened by JossWhittle 0
  • Add mapping encode/decode functions for float32 data

Will need a bit of thought to get right. We probably need to consider tricks similar to those we used for applying radix sort to float32 data, to make the compression numerically stable and portable between machines.
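
One such trick is an order-preserving, exactly invertible float32-to-uint32 mapping; whether Kompressor adopts exactly this is open, so treat the sketch below as an assumption:

import numpy as np

def float32_to_ordered_uint32(x):
    # View the raw IEEE-754 bits, then remap so unsigned integer order
    # matches float order: positives get the sign bit set, negatives are
    # bitwise inverted. The transform is exact and machine-portable.
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    mask = np.where(bits >> 31, np.uint32(0xFFFFFFFF), np.uint32(0x80000000))
    return bits ^ mask

def ordered_uint32_to_float32(u):
    # Exact inverse of the transform above.
    u = np.asarray(u, dtype=np.uint32)
    mask = np.where(u >> 31, np.uint32(0x80000000), np.uint32(0xFFFFFFFF))
    return (u ^ mask).view(np.float32)

x = np.array([-1.5, -0.0, 0.0, 2.25], dtype=np.float32)
roundtrip = ordered_uint32_to_float32(float32_to_ordered_uint32(x))
assert np.array_equal(x.view(np.uint32), roundtrip.view(np.uint32))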

    enhancement low priority 
    opened by JossWhittle 0
  • Add mapping encode/decode functions for uint32 data

    Some of our data is uint32 volumes.

    Will need to trace through the full compression implementation and make sure intermediate value dtypes are large enough to avoid uint32 overflow when needed.
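
An illustration of the overflow concern (the values here are arbitrary):

import numpy as np

a = np.array([4_000_000_000], dtype=np.uint32)
b = np.array([4_000_000_000], dtype=np.uint32)

# Summing in uint32 wraps modulo 2**32, silently corrupting intermediates.
wrapped = a + b                                    # [3705032704], not 8e9

# Widening intermediates to uint64 before reducing keeps the value exact.
mean = ((a.astype(np.uint64) + b.astype(np.uint64)) // 2).astype(np.uint32)
print(wrapped[0], mean[0])                         # 3705032704 2000000000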

    enhancement low priority 
    opened by JossWhittle 0
  • Modify core encode decode functions to pass a dict to the prediction function

    Currently the lowres inputs are passed directly to the prediction_fn as its only input.

    • Modify it to accept a dict that has at least one key for the lowres input.

    • Provide a boolean flag to also pass a positional-encoding tensor along with the lowres, which the model can use if needed.

    • Chunked encode/decode will need to generate the correct chunks of the positional encoding for the current chunk.

    • The model can choose how to use the positional encodings; a sketch of the proposed interface follows this list.

      • The image case would receive a (B, H, W, 2) tensor containing the Y and X coordinates of each pixel in the trailing axis.
      • The volume case would receive a (B, D, H, W, 3) tensor containing the Z, Y, and X coordinates of each voxel in the trailing axis.
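
A minimal sketch for the image case; the key names ('lowres', 'positions') and the prediction_fn body are hypothetical:

import jax.numpy as jnp

def positional_encoding(batch, height, width):
    # (B, H, W, 2) tensor holding the (Y, X) coordinate of every pixel.
    ys, xs = jnp.meshgrid(jnp.arange(height), jnp.arange(width), indexing='ij')
    pos = jnp.stack([ys, xs], axis=-1)
    return jnp.broadcast_to(pos, (batch, height, width, 2))

def prediction_fn(inputs):
    # Receives a dict with at least the 'lowres' key; positional encodings
    # are optional and the model may ignore them entirely.
    lowres = inputs['lowres']
    positions = inputs.get('positions')  # None when the flag is off
    return lowres  # placeholder for the real model body

lowres = jnp.zeros((1, 8, 8, 1))
inputs = {'lowres': lowres,
          'positions': positional_encoding(*lowres.shape[:3])}
prediction_fn(inputs)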
    enhancement high priority 
    opened by JossWhittle 0
  • Look at decompressing sliced chunks

    Decompress a sliced chunk of an image or volume without needing to decompress the entire data element.

    • May require applying secondary compression in blocks so that the full level maps do not have to be decompressed just to apply the predictor to the target slice.

    • Instead, unpack just the blocks needed for the slice, then trim.

    • A kompressor (or stack of) trained to secondary compress the maps from the primary kompressor (or stack of) would be able to naturally handle slice chunked decoding.

      • Could such a secondary compressor be shared between levels? Between multiple kompressors in the primary stack?
    experiment low priority 
    opened by JossWhittle 0
  • Look at compressing timeseries data

    • Experiment with implementing the 1D case for compressing signals.
    • Treat video as a sequence of 2D frames, using the 3D volume code directly.
    • Look at compressing within a timestep using information from neighbouring timesteps, without actually compressing along (i.e. dropping frames from) the temporal axis.
    experiment low priority 
    opened by JossWhittle 0
Releases: v0.0.0

Owner: Rosalind Franklin Institute, dedicated to transforming life science through interdisciplinary research and technology development.