Neural style in TensorFlow! 🎨

Overview


An implementation of neural style in TensorFlow.

This implementation is a lot simpler than many of the others out there, thanks to TensorFlow's clean API and automatic differentiation.

TensorFlow doesn't support L-BFGS (which is what the original authors used), so we use Adam. This may require a little bit more hyperparameter tuning to get nice results.

Running

python neural_style.py --content <content file> --styles <style file> --output <output file>

Run python neural_style.py --help to see a list of all options.

Use --checkpoint-output and --checkpoint-iterations to save checkpoint images.

Use --iterations to change the number of iterations (default 1000). For a 512×512 pixel content file, 1000 iterations take 60 seconds on a GTX 1080 Ti, 90 seconds on a Maxwell Titan X, or 60 minutes on an Intel Core i7-5930K. Using a GPU is highly recommended due to the huge speedup.
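For example, a run that saves intermediate results might look like the following (file names here are placeholders; the checkpoint output name is treated as a pattern that the script fills in with the iteration number):

python neural_style.py --content content.jpg --styles style.jpg --output output.jpg --iterations 2000 --checkpoint-output checkpoint_%d.jpg --checkpoint-iterations 100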

Example 1

Running it for 500-2000 iterations seems to produce nice results. With certain images or output sizes, you might need some hyperparameter tuning (especially --content-weight, --style-weight, and --learning-rate).
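If the defaults don't work well for a particular image, a tuning run might look something like this (the weights and learning rate below are arbitrary illustrative values, not recommendations; see --help for the actual defaults):

python neural_style.py --content content.jpg --styles style.jpg --output output.jpg --content-weight 5 --style-weight 500 --learning-rate 10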

The following example was run for 1000 iterations to produce the result (with default parameters):

output

These were the input images used (me sleeping at a hackathon and Starry Night):

input-content

input-style

Example 2

The following example demonstrates style blending, and was run for 1000 iterations to produce the result (with style blend weight parameters 0.8 and 0.2):

output

The content input image was a picture of the Stata Center at MIT:

input-content

The style input images were Picasso's "Dora Maar" and Starry Night, with the Picasso image having a style blend weight of 0.8 and Starry Night having a style blend weight of 0.2:

input-style input-style
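A blended run like this one could be reproduced with something along the following lines, assuming the blend weights are passed via --style-blend-weights in the same order as the style images (the flag name and file names here are illustrative; check python neural_style.py --help):

python neural_style.py --content stata.jpg --styles dora-maar.jpg starry-night.jpg --style-blend-weights 0.8 0.2 --output output.jpg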

Tweaking

The --style-layer-weight-exp command-line argument can be used to tweak how "abstract" the style transfer should be. Lower values mean that style transfer of finer features is favored over style transfer of coarser features, and vice versa. The default value is 1.0, which treats all layers equally. Somewhat extreme examples of what you can achieve:

--style-layer-weight-exp 0.2 --style-layer-weight-exp 2.0

(left: 0.2 - finer features style transfer; right: 2.0 - coarser features style transfer)

--content-weight-blend specifies the coefficient of the content transfer layers. With the default value of 1.0, the style transfer tries to preserve finer-grained content details. The value should be in the range [0.0, 1.0].

--content-weight-blend 1.0 --content-weight-blend 0.1

(left: 1.0 - default value; right: 0.1 - more abstract picture)

--pooling lets you select which pooling layers to use (specify either max or avg). The original VGG topology uses max pooling, but the style transfer paper suggests replacing it with average pooling. The outputs are perceptually different: max pooling generally transfers finer-detail style, but can have trouble at the lower-frequency detail level:

--pooling max --pooling avg

(left: max pooling; right: average pooling)
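These tweaking flags compose with the basic invocation; for instance, a hypothetical run favoring coarser style features, a more abstract content reconstruction, and average pooling could look like:

python neural_style.py --content content.jpg --styles style.jpg --output output.jpg --style-layer-weight-exp 2.0 --content-weight-blend 0.1 --pooling avg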

The --preserve-colors boolean command-line argument adds a post-processing step that combines the colors from the original image with the luma from the stylized image (in YCbCr color space), producing a color-preserving style transfer:

--pooling max --pooling max --preserve-colors

(left: original stylized image; right: color-preserving style transfer)
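As a rough sketch of what this post-processing step does (not the script's actual code; file names are placeholders and Pillow is assumed), the luma/chroma recombination can be written as:

# Color-preserving recombination in YCbCr: stylized luma + original chroma.
from PIL import Image

content = Image.open("content.jpg").convert("YCbCr")
stylized = Image.open("stylized.jpg").convert("YCbCr")

# Match sizes, then split into Y (luma), Cb, Cr (chroma) channels.
stylized = stylized.resize(content.size)
y, _, _ = stylized.split()
_, cb, cr = content.split()

# Recombine and convert back to RGB for saving.
Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save("stylized-color-preserved.jpg")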

Requirements

Data Files

  • Pre-trained VGG network (MD5 106118b7cf60435e6d8e04f6a6dc3657) - put it in the top level of this repository, or specify its location using the --network option.
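If the .mat file is stored somewhere else, point the script at it explicitly, for example (the path is a placeholder):

python neural_style.py --network /path/to/imagenet-vgg-verydeep-19.mat --content content.jpg --styles style.jpg --output output.jpg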

Dependencies

You can install the Python dependencies using pip install -r requirements.txt, and it should just work. If you want to install the packages manually, requirements.txt lists exactly what is needed.

Related Projects

See here for an implementation of fast (feed-forward) neural style in TensorFlow.

Try neural style client-side in your web browser without installing any software (using TensorFire).

Citation

If you use this implementation in your work, please cite the following:

@misc{athalye2015neuralstyle,
  author = {Anish Athalye},
  title = {Neural Style},
  year = {2015},
  howpublished = {\url{https://github.com/anishathalye/neural-style}},
  note = {commit xxxxxxx}
}

License

Copyright (c) 2015-2021 Anish Athalye. Released under GPLv3. See LICENSE.txt for details.

Comments
  • problem loading mat file

    Hi, I get the following error in vgg.py:

    Traceback (most recent call last):
      File "/Users/cuongwilliams/Packages/neural-style/neural_style.py", line 122, in <module>
        main()
      File "/Users/cuongwilliams/Packages/neural-style/neural_style.py", line 108, in main
        checkpoint_iterations=options.checkpoint_iterations)
      File "/Users/cuongwilliams/Packages/neural-style/stylize.py", line 24, in stylize
        net, mean_pixel = vgg.net(network, image)
      File "/Users/cuongwilliams/Packages/neural-style/vgg.py", line 24, in net
        mean = data['normalization'][0][0][0]
    KeyError: 'normalization'

    Any thoughts on what I might be doing wrong? I do use the --network parameter to specify the location of the mat file.

    bug question 
    opened by sourcesync 29
  • How do you make the imagenet-vgg-verydeep-19.mat

    I'm working on a project and I only need part of vgg16 in my work, so I want to look at your code and figure out how to do that. But it seems like you didn't share your code for making imagenet-vgg-verydeep-19.mat. Can you tell me more about how you got the imagenet-vgg-verydeep-19.mat file?

    question 
    opened by menglin0320 13
  • The TensorFlow library wasn't compiled to use SSE, AVX, FMA?

    2018-02-05 17:40:45.026070: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    2018-02-05 17:40:45.026107: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    2018-02-05 17:40:45.026115: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    2018-02-05 17:40:45.026123: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
    2018-02-05 17:40:45.026130: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX512F instructions, but these are available on your machine and could speed up CPU computations.
    2018-02-05 17:40:45.026137: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
    killed

    When I run the command python neural_style.py --content 'examples/1-content.jpg' --styles 'examples/1-style.jpg' --output 'examples/my1.jpg', it gives me these warnings and then gets killed. It may be TensorFlow's trouble, but I don't know what I should do.

    opened by tss12 12
  • Install error

    ➜  neural-style git:(master) ✗ python neural_style.py --content /Users/zhengzhongzhao/Desktop/IMG_0200.jpg --output 1.jpg --styles STYLE
    Traceback (most recent call last):
      File "neural_style.py", line 122, in <module>
        main()
      File "neural_style.py", line 75, in main
        content_image = imread(options.content)
      File "neural_style.py", line 113, in imread
        return scipy.misc.imread(path).astype(np.float)
    AttributeError: 'module' object has no attribute 'imread'
    

    Thanks!

    question 
    opened by zzz6519003 12
  • Why I got strange results?

    Dear friends,

    Recently I have tested the program on my notebook but got strange results. The output file contains just randomly distributed dots; neither the content nor the style appears as in the input files. After installing Pillow, SciPy 1.1.0, and TensorFlow 1.14.0 (not tensorflow-gpu), I checked the versions:

    import PIL
    PIL.__version__
    '6.2.0'
    import scipy
    scipy.__version__
    '1.1.0'
    import numpy
    numpy.__version__
    '1.16.5'
    import tensorflow as tf
    tf.__version__
    '1.14.0'

    Then I ran the following command in the Anaconda prompt: python neural_style.py --content ./content/1-content.jpg --styles ./style/1-style.jpg --output ./output/output.jpg --checkpoint-output ./output/test_%d.jpg --checkpoint-iterations 100 --overwrite. The program runs through with no fatal errors, but the output image doesn't seem right (attached checkpoint image: test_1000).

    Could anyone please kindly tell me what is wrong with my practice?

    Many thanks,

    David

    question 
    opened by david658 10
  • Aborted (core dumped)

    Hi, could you give me some advice to solve this problem? I ran CUDA_VISIBLE_DEVICES=1 python neural_style.py --content examples/1-content.jpg --styles examples/1-style.jpg --output examples/TF1.jpg and got:

    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
    I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
    E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_NO_DEVICE
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:153] retrieving CUDA diagnostic information for host: icra-All-Series
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:160] hostname: icra-All-Series
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:185] libcuda reported version is: 367.48.0
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:356] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 367.48 Sat Sep 3 18:21:08 PDT 2016 GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) """
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] kernel reported version is: 367.48.0
    I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:293] kernel version seems to match DSO: 367.48.0
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    Aborted (core dumped)

    question 
    opened by fdcqsjn 10
  • cpu shuts down

    I'm trying to run neural-style on an Intel i5-11400F 2.6 GHz processor with 6 physical and 12 logical cores and 16 GB RAM. The OS is Windows 10 running WSL2 with Ubuntu Linux. After approximately 15 iterations, neural-style causes the CPU to shut down instantly. This occurs on both TensorFlow 2.4.0 and 2.6. I have trained other neural nets on the same machine using TF 2.4 which occupy all 12 logical cores at 100% without any issues. Those neural nets take about 3 hours to train and use approximately 3.5 GB RAM, so I'm guessing that it's not the CPU that is at fault. Should I be looking at throttling the CPU frequency in my BIOS? Has anyone else seen this with neural-style?

    opened by gokhalen 9
  • TO MANY FILES

    When I run this code in the terminal I get this error:

    (base) a@Erics-MacBook-Pro ~ % floyd run --gpu --env tensorflow-1.3 --data floydhub/datasets/imagenet-vgg-verydeep-19/3:vgg "python neural_style.py --network /vgg/imagenet-vgg-verydeep-19.mat --content --styles