# deep-photo-styletransfer
Code and data for the paper "Deep Photo Style Transfer".
## Disclaimer
This software is published for academic and non-commercial use only.
## Setup
This code is based on torch. It has been tested on Ubuntu 14.04 LTS.
### Dependencies:
- Torch (with matio-ffi and loadcaffe)
- Matlab or Octave

CUDA backend:
- CUDA
- cuDNN
Download VGG-19:

```
sh models/download_models.sh
```
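As a quick sanity check after the download finishes, you can list the models directory; the exact file names depend on what `download_models.sh` fetches:

```
# Sketch: confirm the VGG-19 files landed in models/ after the download
ls -lh models/
```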
Compile cuda_utils.cu
(Adjust PREFIX
and NVCC_PREFIX
in makefile
for your machine):
make clean && make
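If you prefer not to edit the makefile directly, `make` also lets you override these variables on the command line. The paths below are placeholders for illustration only; check the makefile to see exactly how `PREFIX` and `NVCC_PREFIX` are used and point them at your own Torch install and CUDA toolkit (nvcc) locations:

```
# Sketch: override the makefile variables at build time instead of editing
# the file. The paths are placeholders; substitute your own Torch install
# prefix and the directory that contains nvcc.
make clean && make PREFIX=$HOME/torch/install NVCC_PREFIX=/usr/local/cuda/bin
```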
## Usage
### Quick start
To generate all results (in `examples/`) using the provided scripts, simply run

```
run('gen_laplacian/gen_laplacian.m')
```

in Matlab or Octave and then

```
python gen_all.py
```

in Python. The final output will be in `examples/final_results/`.
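For reference, a minimal non-interactive version of the quick start might look like the following; it assumes Octave is on your PATH (running the same script from the Matlab prompt works just as well):

```
# Sketch: run the quick start from a shell (assumes Octave is installed)
octave --eval "run('gen_laplacian/gen_laplacian.m')"
python gen_all.py
```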
### Basic usage
- Given input and style images with semantic segmentation masks, put them in `examples/` respectively. They will have the following filename form: `examples/input/in<id>.png`, `examples/style/tar<id>.png` and `examples/segmentation/in<id>.png`, `examples/segmentation/tar<id>.png`;
- Compute the matting Laplacian matrix using `gen_laplacian/gen_laplacian.m` in Matlab. The output matrix will have the following filename form: `gen_laplacian/Input_Laplacian_3x3_1e-7_CSR<id>.mat` (see the illustrative check after the note below);
Note: Please make sure that the content image resolution is consistent for Matting Laplacian computation in Matlab and style transfer in Torch, otherwise the result won't be correct.
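For concreteness, here is a minimal sanity check of the files produced so far, assuming a single example with `<id>` = 1 and the naming convention above; it simply lists the inputs from the first step and the Laplacian from the second:

```
# Sketch: verify the expected files for <id> = 1 exist
# (names assume the convention described above)
ls examples/input/in1.png \
   examples/style/tar1.png \
   examples/segmentation/in1.png \
   examples/segmentation/tar1.png \
   gen_laplacian/Input_Laplacian_3x3_1e-7_CSR1.mat
```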
- Run the following script to generate the segmented intermediate result:
th neuralstyle_seg.lua -content_image <input> -style_image <style>