Code for our CVPR 2021 paper "Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes".

Overview

Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes (CVPR 2021)


Project page | Paper | Colab | Colab for Drawing App

Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes.
Dmytro Kotovenko*, Matthias Wright*, Arthur Heimbrecht, and Björn Ommer.
* denotes equal contribution

Implementations

We provide implementations in TensorFlow 1 and TensorFlow 2. To reproduce the results from the paper, we recommend the TensorFlow 1 implementation.

Installation

  1. Clone this repository:
    > git clone https://github.com/CompVis/brushstroke-parameterized-style-transfer
    > cd brushstroke-parameterized-style-transfer
  2. Install TensorFlow 1.14 (preferably with GPU support).
    If you are using Conda, the following commands will create a new environment and install TensorFlow along with compatible CUDA and cuDNN versions:
    > conda create --name tf14 tensorflow-gpu==1.14
    > conda activate tf14
  3. Install requirements:
    > pip install -r requirements.txt
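
After installation, you can verify that the expected TensorFlow version is active and that a GPU is visible. This is a minimal sanity check using the TF 1.x API, not part of the original instructions:

import tensorflow as tf

# Should print 1.14.x
print(tf.__version__)

# True if TensorFlow can access a CUDA-capable GPU (TF 1.x API)
print(tf.test.is_gpu_available())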

Basic Usage

from PIL import Image
import model

# Load the content and style images
content_img = Image.open('images/content/golden_gate.jpg')
style_img = Image.open('images/style/van_gogh_starry_night.jpg')

# Optimize the brushstroke parameters, then refine on the pixel level
stylized_img = model.stylize(content_img,
                             style_img,
                             num_strokes=5000,      # number of brushstrokes
                             num_steps=100,         # optimization steps for the brushstroke parameters
                             content_weight=1.0,    # weight of the content loss
                             style_weight=3.0,      # weight of the style loss
                             num_steps_pixel=1000)  # optimization steps for the pixel-level refinement

stylized_img.save('images/stylized.jpg')

or open Colab.
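
The stylize call can also be scripted, for example to apply one style to a folder of content images. This is a hypothetical sketch: it assumes only the model.stylize signature shown above and an existing images/stylized/ output directory.

import os
from PIL import Image
import model

style_img = Image.open('images/style/van_gogh_starry_night.jpg')

# Stylize every content image with the same style and settings
for fname in os.listdir('images/content'):
    content_img = Image.open(os.path.join('images/content', fname))
    stylized_img = model.stylize(content_img,
                                 style_img,
                                 num_strokes=5000,
                                 num_steps=100,
                                 content_weight=1.0,
                                 style_weight=3.0,
                                 num_steps_pixel=1000)
    stylized_img.save(os.path.join('images/stylized', fname))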

Drawing App

We created a Streamlit app where you can draw curves to control the flow of brushstrokes.


Run drawing app on your machine

To run the app on your own machine:

> CUDA_VISIBLE_DEVICES=0 streamlit run app.py

You can also run the app on a remote server and forward the port to your local machine: https://docs.streamlit.io/en/0.66.0/tutorial/run_streamlit_remotely.html
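
For example, assuming Streamlit serves on its default port 8501, an SSH tunnel forwards the app to your local machine (user@remote-server is a placeholder for your own server):

> ssh -L 8501:localhost:8501 user@remote-server

Then open http://localhost:8501 in your local browser.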

Run the Streamlit app from Colab

If you don't have access to a GPU, we also created a Colab from which you can start the drawing app.

Other implementations

PyTorch implementation by justanhduc.

Citation

@inproceedings{kotovenko_cvpr_2021,
    title={Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes},
    author={Dmytro Kotovenko and Matthias Wright and Arthur Heimbrecht and Bj{\"o}rn Ommer},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2021}
}
Comments
  • Paper details

    Hi authors, congratulations on the acceptance! I am trying to reproduce the paper and have the following questions about its details.

    1. What exactly are the parameters of a stroke? I thought it should be 3 control points (6 params) + RGB color (3 params) + width (1 param), but the paper says 12 params are used in total. In Figure 12 you say that d is used, but isn't it redundant given p2?
    2. Do you use tanh or sigmoid to constrain the locations and colors of strokes to [0, 1]?
    3. In Algorithm 1, is the formula for M_strokes correct? Since the values of the 2-norm are positive, the mask should lie in [0.5, 1] (or [0, 0.5] if the temperature is negative), which does not cover the full range [0, 1] as it should.
    4. In Algorithm 1, is A = softmax(-t * D_strokes, axis=3) using the negative logit, rather than the positive one as in the paper?
    5. What learning rate did you use? Did you use any LR scheduler?

    Thanks in advance!

    opened by justanhduc 8
  • Call for code

    Hi authors, congratulations on your impressive work! I am a beginner in deep learning. Could you please upload your training and testing code, including the datasets, as soon as possible? I need it as a reference for my first project and would be grateful if you could release the above.

    opened by Radium98 1
Owner
CompVis Heidelberg
Computer Vision research group at the Ruprecht-Karls-University Heidelberg