Two-Stage Peer-Regularized Feature Recombination for Arbitrary Image Style Transfer

Overview


Paper on arXiv

Public PyTorch implementation of two-stage peer-regularized feature recombination for arbitrary image style transfer, presented at CVPR 2020. The model is trained on a selected set of painters and generalizes well even to previously unseen styles at test time.

Structure

The repository contains the code we used to produce some of the main results in the paper. Additional modifications used to generate the ablation studies, etc., have been left out.

Running examples

To get a reasonable runtime, the code has to be run on a GPU. The code is multi-GPU ready; we used two GPUs for training and a single GPU at test time. We have been running our code on an NVIDIA Titan X (Pascal) 12 GB GPU. Basic system requirements can be found here.
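
As a quick sanity check (generic PyTorch, not part of this repository), you can verify that the GPUs you intend to pass via --gpu_ids are actually visible:

# Generic PyTorch check, not part of this repository.
import torch
print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())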

Should you encounter any issues running the code, please first check the Known issues section and then consider opening a new issue in this repository.

Model training

The provided pre-trained model was trained by running the following command:

python train.py --dataroot photo2painter13 --checkpoints_dir=./checkpoints --dataset_mode=painters13 --name GanAuxModel --model gan_aux \
    --netG=resnet_residual --netD=disc_noisy --display_env=GanAuxModel --gpu_ids=0,1 --lambda_gen=1.0 --lambda_disc=1.0 --lambda_cycle=1.0 \
    --lambda_cont=1.0 --lambda_style=1.0 --lambda_idt=25.0 --num_style_samples=1 --batch_size=2 --num_threads=8 --fineSize=256 --loadSize=286 \
    --mapping_mode=one_to_all --knn=5 --ml_margin=1.0 --lr=4e-4 --peer_reg=bidir --print_freq=500 --niter=50 --niter_decay=150 --no_html
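
If the --niter and --niter_decay flags follow the pytorch-CycleGAN convention that this code builds on (see Acknowledgments), the learning rate stays constant for niter epochs and then decays linearly to zero over niter_decay epochs, i.e. 200 epochs in total. A minimal sketch of that assumed schedule, with placeholder model and optimizer objects:

# Sketch of the assumed --niter/--niter_decay schedule (pytorch-CycleGAN convention):
# constant learning rate for `niter` epochs, then linear decay over `niter_decay` epochs.
import torch

niter, niter_decay, lr = 50, 150, 4e-4                  # values from the training command above
model = torch.nn.Linear(8, 8)                           # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

def lr_lambda(epoch):
    # 1.0 during the first `niter` epochs, then linearly down to 0 over `niter_decay` epochs
    return 1.0 - max(0, epoch + 1 - niter) / float(niter_decay + 1)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(niter + niter_decay):                # 200 epochs in total
    optimizer.step()                                    # stands in for one training epoch
    scheduler.step()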

Model testing

We provide one pre-trained model that you can use to stylize images. The example below uses sample content and style images from the samples/data folder.

The pre-trained model was trained on images with a resolution of 256 x 256; at test time, however, it can operate on images of arbitrary size. Current memory limitations restrict us to images of size up to 768 x 768.
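
If your content or style images exceed this limit, a simple workaround is to downscale them before running the test command below, for example with Pillow (a sketch with placeholder paths, not part of this repository):

# Downscale an image so that its longer side does not exceed 768 px.
from PIL import Image

MAX_SIDE = 768  # current memory limit mentioned above

img = Image.open("samples/data/content_imgs/example.jpg").convert("RGB")   # placeholder path
scale = MAX_SIDE / max(img.size)
if scale < 1.0:
    img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
img.save("samples/data/content_imgs/example_resized.jpg")                  # placeholder path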

python test.py --checkpoints_dir=./samples/models --name GanAuxPretrained --model gan_aux --netG=resnet_residual --netD=disc_noisy \
    --gpu_ids=0 --num_style_samples=1 --loadSize=512 --fineSize=512 --knn=5 --peer_reg=bidir --epoch=200 --content_folder content_imgs \
    --style_folder style_imgs --output_folder out_imgs

Datasets

The full dataset that we have used for training is the same one as in this work.

Results

Comparison to existing approaches

Comparison image

Ablation study

Ablation image

Reference

If you make any use of our code or data, please cite the following:

@conference{svoboda2020twostage,
  title={Two-Stage Peer-Regularized Feature Recombination for Arbitrary Image Style Transfer},
  author={Svoboda, J. and Anoosheh, A. and Osendorfer, Ch. and Masci, J.},
  booktitle={Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

Acknowledgments

The code in this repository is based on pytorch-CycleGAN.

For any reuse and/or redistribution of the code in this repository, please follow the license agreement attached to this repository.

