A modification of Daniel Russell's notebook merged with Katherine Crowson's hq-skip-net changes

Overview


Edits made to this repo by Katherine Crowson

I have added several features to this repository for use in creating higher quality generative art (feature visualization probably also benefits):

  • Deformable convolutions have been added.

  • Higher quality non-learnable upsampling filters (bicubic, Lanczos) have been added, with matching downsampling filters. A bilinear downsampling filter which low pass filters properly has also been added.

  • The nets can now optionally output to a fixed decorrelated color space which is then transformed to RGB and sigmoided. Deep Image Prior as originally written does not know anything about the correlations between RGB color channels in natural images, which can be disadvantageous when using it for feature visualization and generative art.
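To make the last point concrete, here is a minimal hypothetical sketch of "decorrelated output, then RGB, then sigmoid" (the repository presumably uses a fixed matrix derived from natural-image color statistics; the identity matrix below is only a placeholder):

import torch

color_matrix = torch.eye(3)  # placeholder for the fixed 3x3 decorrelation-to-RGB matrix

def decorrelated_to_rgb(x):
    # x: (N, 3, H, W) net output in the decorrelated color space
    rgb = torch.einsum('ij,njhw->nihw', color_matrix, x)
    return torch.sigmoid(rgb)  # squash to [0, 1] RGB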

Example:

from models import get_hq_skip_net

net = get_hq_skip_net(input_depth).to(device)

get_hq_skip_net() provides higher quality defaults for the skip net than get_net(), making use of the added features. Deformable convolutions can be slow; if this is a problem, you can disable them with offset_groups=0 or offset_type='none'. The decorrelated color space can be turned off with decorr_rgb=False. The upsample_mode and downsample_mode defaults are now 'cubic' for visual quality; I would recommend not going below 'linear'. The default channel count and number of scales have been increased.
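For example, a lighter-weight configuration might look like this (a sketch using only the keyword arguments described above; other defaults are left untouched):

net = get_hq_skip_net(
    input_depth,
    offset_type='none',      # disable deformable convolutions
    decorr_rgb=False,        # output RGB directly instead of the decorrelated color space
    upsample_mode='linear',  # cheaper than the 'cubic' default; going below 'linear' is not recommended
).to(device)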

The default configuration uses 1x1 convolution layers to create the offsets for the deformable convolutions, because training can become unstable with 3x3 offset layers. However, to make full use of deformable convolutions, you may want to use 3x3 offset layers and set their learning rate to around 1/10 that of the normal layers:

from torch import optim

net = get_hq_skip_net(input_depth, offset_type='full')
params = [{'params': get_non_offset_params(net), 'lr': lr},
          {'params': get_offset_params(net), 'lr': lr / 10}]  # offset layers get ~1/10 the base learning rate
opt = optim.Adam(params)

This is a merge of Daniel Russell's deep-image-prior notebook with Katherine Crowson's notebook

Some minor additions: P. Fishwick 01/28/2022

  • Merged Katherine Crowson's deep_image_prior into Daniel Russell's original notebook: https://github.com/crowsonkb/deep-image-prior
  • Mounts Google Drive to save the deep_image_prior directory
  • Updated to the CLIP model RN50x64 with size 448
  • Lowered cutn to 10 for a V100 (16 GB memory); adjust it for an A100
  • Iterates over num_images to create an image batch
  • Saves the image at each display interval

Original README

Warning! The optimization may not converge on some GPUs. We have personally experienced issues on Tesla V100 and P40 GPUs. When running the code, first make sure you get results similar to those in the paper; the easiest way to check is with the text inpainting notebook. If you have issues, try setting double precision mode or turning off cuDNN.
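For reference, the two suggested workarounds look roughly like this in PyTorch (a sketch; where exactly to apply them depends on the notebook, and net / net_input are placeholder names):

import torch

torch.backends.cudnn.enabled = False  # turn off cuDNN
# or run in double precision:
# net = net.double()
# net_input = net_input.double()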

Deep image prior

In this repository we provide Jupyter Notebooks to reproduce each figure from the paper:

Deep Image Prior

CVPR 2018

Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky

[paper] [supmat] [project page]

Here we provide the hyperparameters and architectures that were used to generate the figures. Most of them are far from optimal. Do not hesitate to change them and see the effect.

We will expand this README with a list of hyperparameters and options shortly.

Install

Here is the list of libraries you need to install to execute the code:

  • python = 3.6
  • pytorch = 0.4
  • numpy
  • scipy
  • matplotlib
  • scikit-image
  • jupyter

All of them can be installed via conda (anaconda), e.g.

conda install jupyter

or create a conda env with all dependencies via the environment file

conda env create -f environment.yml

Docker image

Alternatively, you can use a Docker image that exposes a Jupyter Notebook with all required dependencies. To build this image, ensure you have both docker and nvidia-docker installed, then run

nvidia-docker build -t deep-image-prior .

After the build you can start the container as

nvidia-docker run --rm -it --ipc=host -p 8888:8888 deep-image-prior

You will be provided a URL through which you can connect to the Jupyter notebook.

Google Colab

To run it using Google Colab, click here and select the notebook to run. Remember to uncomment the first cell to clone the repository into Colab's environment.
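Uncommenting that first cell typically amounts to something like the following (a sketch; the exact URL depends on which fork of the repository you are running):

# first Colab cell
!git clone https://github.com/DmitryUlyanov/deep-image-prior
%cd deep-image-prior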

Citation

@article{UlyanovVL17,
    author    = {Ulyanov, Dmitry and Vedaldi, Andrea and Lempitsky, Victor},
    title     = {Deep Image Prior},
    journal   = {arXiv:1711.10925},
    year      = {2017}
}