Denoising Normalizing Flow

Overview

Christian Horvat and Jean-Pascal Pfister, 2021

License: MIT

We combine Normalizing Flows (NFs) and Denoising Autoencoders (DAEs) by introducing the Denoising Normalizing Flow (DNF), a generative model able to

  1. approximate the data generating density p(x),
  2. generate new samples from p(x),
  3. infer low-dimensional latent variables.

Since a classical NF degenerates when the data live on a low-dimensional manifold embedded in a higher-dimensional space, the DNF inflates the manifold-valued data with noise and learns a denoising mapping, similar to a DAE.
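
For intuition, here is a minimal sketch of the inflation step, assuming nothing beyond NumPy; the thin-spiral parameterization is illustrative, not the repo's exact simulator:

    # Manifold-valued data are perturbed with Gaussian noise so that the
    # inflated distribution has a proper, full-support density in ambient space.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_thin_spiral(n):
        # 1-D manifold embedded in R^2: a spiral traced by a scalar parameter t
        t = 3.0 * np.sqrt(rng.uniform(size=n))
        return np.stack([t * np.cos(2.0 * np.pi * t), t * np.sin(2.0 * np.pi * t)], axis=1)

    def inflate(x, sigma):
        # the "inflation": add isotropic Gaussian noise of standard deviation sigma
        return x + sigma * rng.normal(size=x.shape)

    x = sample_thin_spiral(1000)     # exactly on the manifold: density degenerate in R^2
    x_noisy = inflate(x, sigma=0.1)  # inflated data with full support, which the DNF learns to denoise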

Related Work

The DNF is closely related to the Manifold Flow (ℳ-flow) introduced by Johann Brehmer and Kyle Cranmer. Our code is a carbon copy of their implementation with the following additions:

  1. The data can be inflated with Gaussian noise.
  2. We include the DNF as a new mode for the ℳ-flow.
  3. New datasets were added: a thin spiral, a von Mises distribution on a circle, and a mixture of von Mises distributions on a sphere (see the sketch after this list).
  4. A new folder, experiments/plots, for generating the images from the paper was added.
  5. A new folder, experiments/benchmarks, for benchmarking the DNF was added.
  6. The evaluate.py script was modified; it now includes the grid evaluation for the thin spiral and the gan2d image manifold, the latent interpolations, the density estimation for the PAE, the latent density estimation on the thin spiral, and the KS statistics for the circle and sphere experiments.
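
For illustration, the circle dataset can be mimicked by drawing angles from a von Mises distribution and embedding them on the unit circle; this construction is illustrative, and the repo's actual simulators live in experiments/datasets:

    # "von Mises on a circle": 1-D latent angles embedded in R^2
    # (illustrative; see experiments/datasets for the simulators used here).
    import numpy as np
    from scipy.stats import vonmises

    theta = vonmises.rvs(kappa=4.0, loc=0.0, size=1000)   # latent angles
    x = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # samples on the unit circle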

The theoretical foundation of the DNF was developed in "Density estimation on low-dimensional manifolds: an inflation-deflation approach".

Data sets

We trained the DNF and ℳ-flow on the following datasets:

Data set                       Data dimension   Manifold dimension   Arguments to train.py and evaluate.py
Thin spiral                    2                1                    --dataset thin_spiral
2-D StyleGAN image manifold    64 x 64 x 3      2                    --dataset gan2d
64-D StyleGAN image manifold   64 x 64 x 3      64                   --dataset gan64d
CelebA-HQ                      64 x 64 x 3      ?                    --dataset celeba

To use the model for your own data, you need to create a simulator (see experiments/datasets) and add it to experiments/datasets/__init__.py. If you have problems with that, please don't hesitate to contact us.
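
A minimal sketch of such a simulator follows, assuming the BaseSimulator interface in experiments/datasets/base.py; the method names follow the ℳ-flow codebase this repo builds on, so check base.py for the exact signatures. The class and its toy dataset are hypothetical:

    # Hypothetical simulator for a unit circle in R^2; method names assume
    # the BaseSimulator interface inherited from the M-flow codebase
    # (verify against experiments/datasets/base.py).
    import numpy as np
    from .base import BaseSimulator

    class MyCircleSimulator(BaseSimulator):
        def is_image(self):
            return False       # tabular data, not images

        def data_dim(self):
            return 2           # ambient dimension of the observations

        def latent_dim(self):
            return 1           # intrinsic dimension of the manifold

        def parameter_dim(self):
            return None        # no conditioning parameters

        def sample(self, n, parameters=None):
            # draw n points from the data-generating process
            theta = 2.0 * np.pi * np.random.uniform(size=n)
            return np.stack([np.cos(theta), np.sin(theta)], axis=1)

The new class then needs to be registered in experiments/datasets/__init__.py so that the --dataset argument can resolve it.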

Benchmarks

We benchmark the DNF against the ℳ-flow, the Probabilistic Auto-Encoder (PAE), and the InfoMax Variational Autoencoder. For that, we rely on the original implementations of those models and modify them where appropriate; see experiments/benchmarks/vae and experiments/benchmarks/pae for more details.

Training & Evaluation

The configurations for the models and hyperparameter settings used in the paper can be found in experiments/configs.
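
For example, the paper's DNF on the thin spiral is trained by pointing train.py at the corresponding config file (this invocation appears verbatim in the issue reports below; the dataset files must exist under experiments/data/samples first):

    cd experiments
    python3 train.py -c configs/train_dnf_thin_spiral.config

evaluate.py is run analogously with its configuration file from experiments/configs.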

Acknowledgements

We thank Johann Brehmer and Kyle Cranmer for publishing their implementation of the Manifold Flow. For the experiments with the Probabilistic Auto-Encoder (V. Böhm and U. Seljak) and the InfoMax Variational Autoencoder (A.L. Rezaabad, S. Vishwanath), we used the official implementations of these models. We thank these authors for making their code available.


Comments
  • Issue training with any dataset

    Hello! I am very interested in this work. I would like to get some examples running; any help would be appreciated.

    I haven't had any luck with any of the examples, either evaluating or training. This is the current error I run into:

    For the thin spiral

    (ml) rorozcom3@dgx1:~/denoising-normalizing-flow/experiments$ python3 train.py -c configs/train_dnf_thin_spiral.config
    21:55 __main__             INFO    Hi!
    21:55 __main__             INFO    Training model dnf_1_thin_spiral_paper with algorithm dnf on data set thin_spiral
    Traceback (most recent call last):
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/train.py", line 682, in <module>
        dataset = simulator.load_dataset(train=True, dataset_dir=create_filename("dataset", None, args), limit_samplesize=args.samplesize, joint_score=args.scandal is not None)
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/datasets/base.py", line 54, in load_dataset
        x = np.load("{}/x_{}{}{}.npy".format(dataset_dir, tag, param_label, run_label))
      File "/nethome/rorozcom3/miniconda3/envs/ml/lib/python3.10/site-packages/numpy/lib/npyio.py", line 390, in load
        fid = stack.enter_context(open(os_fspath(file), "rb"))
    FileNotFoundError: [Errno 2] No such file or directory: '/nethome/rorozcom3/denoising-normalizing-flow/experiments/data/samples/thin_spiral/x_train.npy'
    

    For celeba:

    (ml) rorozcom3@dgx1:~/denoising-normalizing-flow/experiments$ python3 train.py -c configs/train_dnf_celeba.config
    21:53 __main__             INFO    Hi!
    21:53 __main__             INFO    Training model dnf_512_celeba_paper with algorithm dnf on data set celeba
    Traceback (most recent call last):
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/train.py", line 682, in <module>
        dataset = simulator.load_dataset(train=True, dataset_dir=create_filename("dataset", None, args), limit_samplesize=args.samplesize, joint_score=args.scandal is not None)
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/datasets/images.py", line 44, in load_dataset
        x = np.load("{}/{}.npy".format(dataset_dir, "train" if train else "test"))
      File "/nethome/rorozcom3/miniconda3/envs/ml/lib/python3.10/site-packages/numpy/lib/npyio.py", line 418, in load
        raise ValueError("Cannot load file containing pickled data "
    ValueError: Cannot load file containing pickled data when allow_pickle=False
    

    After adding allow_pickle=True to that load call, I get this:

    (ml) rorozcom3@dgx1:~/denoising-normalizing-flow/experiments$ python3 train.py -c configs/train_dnf_celeba.config
    21:52 __main__             INFO    Hi!
    21:52 __main__             INFO    Training model dnf_512_celeba_paper with algorithm dnf on data set celeba
    Traceback (most recent call last):
      File "/nethome/rorozcom3/miniconda3/envs/ml/lib/python3.10/site-packages/numpy/lib/npyio.py", line 421, in load
        return pickle.load(fid, **pickle_kwargs)
    _pickle.UnpicklingError: invalid load key, '<'.
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/train.py", line 682, in <module>
        dataset = simulator.load_dataset(train=True, dataset_dir=create_filename("dataset", None, args), limit_samplesize=args.samplesize, joint_score=args.scandal is not None)
      File "/nethome/rorozcom3/denoising-normalizing-flow/experiments/datasets/images.py", line 44, in load_dataset
        x = np.load("{}/{}.npy".format(dataset_dir, "train" if train else "test"), allow_pickle=True)
      File "/nethome/rorozcom3/miniconda3/envs/ml/lib/python3.10/site-packages/numpy/lib/npyio.py", line 423, in load
        raise pickle.UnpicklingError(
    _pickle.UnpicklingError: Failed to interpret file '/nethome/rorozcom3/denoising-normalizing-flow/experiments/data/samples/celeba/train.npy' as a pickle
    

    Thank you for your attention! I look forward to getting this running!

    opened by rafaelorozco 7
  • Getting NotImplementedError if I try any image dataset

    Here is the error log. I tried the GAN-2D, GAN-64D, and CelebA datasets. Almost everything is unmodified except

    00:36 __main__             INFO    Hi!
    00:36 __main__             INFO    Training model dnf_2_gan2d_paper with algorithm dnf on data set gan2d
    00:36 architectures.create INFO    Creating manifold flow for image data with 4 levels and 5 steps per level in the outer transformation, 6 layers in the inner transformation, transforms rq-coupling / rq-coupling, None context features
    00:36 manifold_flow.flows. INFO    Model has 16.4 M parameters (16.4 M trainable) with an estimated size of 65.7 MB
    00:36 manifold_flow.flows. INFO      Outer transform: 16.2 M parameters
    00:36 manifold_flow.flows. INFO      Inner transform: 0.3 M parameters
    00:36 training.trainer     INFO    Training on 8 GPUS with single precision
    00:36 __main__             INFO    Starting training denoising flow on NLL
    Traceback (most recent call last):
      File "train.py", line 702, in <module>
        learning_curves = train_model(args, dataset, model, simulator)
      File "train.py", line 637, in train_model
        learning_curves = train_dnf(args, dataset, model, simulator)
      File "train.py", line 512, in train_dnf
        learning_curves = trainer.train(
      File "/tremblerz/denoising-normalizing-flow/experiments/training/trainer.py", line 322, in train
        loss_train, loss_val, loss_contributions_train, loss_contributions_val = self.epoch(
      File "/tremblerz/denoising-normalizing-flow/experiments/training/trainer.py", line 398, in epoch
        self.first_batch(batch_data)
      File "/tremblerz/denoising-normalizing-flow/experiments/training/trainer.py", line 583, in first_batch
        self.model(x[: x.shape[0] // torch.cuda.device_count(), ...])
      File "/tremblerz/.torchP2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/flows/manifold_flow.py", line 59, in forward
        else: x_reco, inv_log_det_inner, inv_log_det_outer, inv_jacobian_outer, h_manifold_reco = self._decode(u, mode=mode, context=context)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/flows/manifold_flow.py", line 143, in _decode
        x, inv_jacobian_outer = self.outer_transform.inverse(h, full_jacobian=True, context=context if self.apply_context_to_outer else None)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 84, in inverse
        return self._cascade(inputs, funcs, context, full_jacobian)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 56, in _cascade
        outputs, jacobian = func(inputs, context, full_jacobian=True)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 84, in inverse
        return self._cascade(inputs, funcs, context, full_jacobian)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 56, in _cascade
        outputs, jacobian = func(inputs, context, full_jacobian=True)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/partial.py", line 74, in inverse
        transform_split, transform_logabsdet = self.transform.inverse(transform_split, context=context, full_jacobian=full_jacobian)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 84, in inverse
        return self._cascade(inputs, funcs, context, full_jacobian)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/base.py", line 56, in _cascade
        outputs, jacobian = func(inputs, context, full_jacobian=True)
      File "/tremblerz/denoising-normalizing-flow/experiments/../manifold_flow/transforms/nonlinearities.py", line 78, in inverse
        raise NotImplementedError
    NotImplementedError
    
    opened by tremblerz 6
  • Fix multi-gpu bug

    Whenever there is more than one GPU, the code invokes first_batch, which goes on to call self.model(). While other invocations of self.model() pass on forward_kwargs, the implementation of first_batch does not, which leads to problems. I have patched it by simply passing forward_kwargs wherever necessary. Tested with different numbers of GPUs.

    opened by tremblerz 1