Neural Photo Editor

A simple interface for editing natural photos with generative neural networks.

[GUI screenshots]

This repository contains code for the paper "Neural Photo Editing with Introspective Adversarial Networks" and the associated video.

Installation

To run the Neural Photo Editor, you will need:

  • Python, likely version 2.7. Earlier versions of Python 2 may work, but there are some Python 3 incompatibilities in here.
  • Theano, development version.
  • Lasagne, development version.
  • I highly recommend cuDNN, as speed is key, but it is not a dependency.
  • numpy, scipy, PIL, Tkinter and tkColorChooser, though your Python distribution likely already has those.
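
If you don't already have the development versions of Theano and Lasagne, one way to install them (a suggestion rather than an official recipe for this repo) is straight from GitHub with pip, as the Lasagne documentation recommends for bleeding-edge installs:

pip install --upgrade https://github.com/Theano/Theano/archive/master.zip
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip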

Running the NPE

By default, the NPE runs on IAN_simple. This is a slimmed-down version of the IAN without MDC or RGB-Beta blocks, which runs without lag on a laptop GPU with ~1GB of memory (GT730M).

If you're on a Windows machine, you will want to create a .theanorc file and at least set the flag floatX=float32.

If you're on a Linux machine, you can just insert THEANO_FLAGS=floatX=float32 before the command-line call.
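
For reference, a minimal .theanorc might look like the following; the exact device setting depends on your Theano version and GPU setup, so treat this as a sketch rather than a required configuration.

[global]
floatX = float32
device = gpu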

If you don't have cuDNN, simply change line 56 of the NPE.py file from dnn=True to dnn=False. Note that I presently only have the non-cuDNN option working for IAN_simple.
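
Concretely, the line in question looks like the call below (as it also appears in the user reports further down this page); setting dnn=False makes the model build with generic convolution ops instead of the explicit cuDNN ones.

model = IAN(config_path = 'IAN_simple.py', dnn = False)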

Then, run the command:

python NPE.py

If you wish to use a different model, simply edit the line that sets config_path in the NPE.py file.

You can make use of any model with an inference mechanism (VAE or ALI-based GAN).

Commands

  • You can paint the image by picking a color and painting on the image, or paint in the latent space canvas (the red and blue tiles below the image).
  • The long horizontal slider controls the magnitude of the latent brush, and the smaller horizontal slider controls the size of both the latent and the main image brush.
  • You can select different entries from the subset of the celebA validation set (included in this repository as an .npz) by typing in a number from 0-999 in the bottom left box and hitting "infer."
  • Use the reset button to return to the ground truth image.
  • Press "Update" to update the ground-truth image and corresponding reconstruction with the current image. Use "Infer" to return to an original ground truth image from the dataset.
  • Use the sample button to generate a random latent vector and corresponding image.
  • Use the scroll wheel to lighten or darken an image patch (equivalent to using a pure white or pure black paintbrush). Note that this automatically returns you to sample mode, and may require hitting "infer" rather than "reset" to get back to photo editing.

Training an IAN on celebA

You will need Fuel along with the 64x64 version of celebA. See here for instructions on downloading and preparing it.
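
As a rough guide (and only an assumption on my part; the exact arguments may differ between Fuel versions), downloading and converting the dataset looks something like the commands below, with FUEL_DATA_PATH pointing at the directory that will hold the resulting HDF5 file. Additional arguments may be needed to produce the 64x64 variant.

fuel-download celeba
fuel-convert celeba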

If you wish to train a model, the IAN.py file contains the model configuration, and the train_IAN.py file contains the training code, which can be run like this:

python train_IAN.py IAN.py

By default, this code will save (and overwrite!) the weights to a .npz file with the same name as the config file (e.g. "IAN.py" -> "IAN.npz"), and will output a .jsonl log of the training, with metrics recorded after every chunk.
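
The .jsonl log is just one JSON object per line, so it can be inspected with a few lines of Python. This is only a sketch: the filename below is an assumption, and the actual metric keys should be checked against your own log.

    import json

    # Read the training log written alongside the config/weights (filename assumed).
    with open('IAN.jsonl') as f:
        records = [json.loads(line) for line in f if line.strip()]

    # Print the most recent few entries to see which metrics are being tracked.
    for record in records[-3:]:
        print(record)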

Use the --resume=True flag when calling train_IAN.py to resume training a model; it will automatically pick up from the most recent epoch.
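
For example:

python train_IAN.py IAN.py --resume=True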

Sampling the IAN

You can generate a sample and reconstruction+interpolation grid with:

python sample_IAN.py IAN.py

Note that you will need matplotlib to do so.

Known Issues/Bugs

My MADE layer currently only accepts hidden unit sizes that are equal to the size of the latent vector; anything else will present itself as a BAD_PARAM error.

Since the MADE really only acts as an autoregressive randomizer, I'm not too worried about this, but it does bear looking into.

I changed the keyword arguments for get_model; you'll need to account for this if you wish to run any model other than IAN_simple through the editor.

Everything is presently just dumped into a single, unorganized directory. I'll be adding folders and cleaning things up soon.

Notes

Remainder of the IAN experiments (including SVHN) coming soon.

I've integrated the plat interface, which makes the NPE itself framework-independent, so you should be able to run it with Blocks, TensorFlow, PyTorch, PyCaffe, or whatever you like, by modifying the IAN class provided in models.py.
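
As a very rough illustration of the kind of wrapper that implies (this is not the actual plat or models.py API; the method names are assumptions, and the "network" here is just a random linear map so the snippet runs on its own):

    import numpy as np

    class ToyModel(object):
        """Stand-in for a framework-specific model exposing a NumPy interface."""
        def __init__(self, num_latents=100, image_shape=(3, 64, 64), seed=0):
            rng = np.random.RandomState(seed)
            self.num_latents = num_latents
            self.image_shape = image_shape
            dim = int(np.prod(image_shape))
            self.W = rng.normal(scale=0.01, size=(dim, num_latents)).astype('float32')

        def encode_images(self, images):
            # (N, 3, 64, 64) images -> (N, num_latents) latent codes
            flat = images.reshape(len(images), -1).astype('float32')
            return flat.dot(self.W)

        def sample_at(self, latents):
            # (N, num_latents) latent codes -> (N, 3, 64, 64) reconstructions
            flat = latents.astype('float32').dot(self.W.T)
            return flat.reshape((len(latents),) + self.image_shape)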

Acknowledgments

This code contains Lasagne layers and other goodies adapted from a number of places.

Comments
  • Failed to interpret file IAN_simple.npz as a pickle

    After changing NPE.py to model = IAN(config_path = 'IAN_simple.py', dnn = False), I get:

    $ python NPE.py
    Loading weights
    Traceback (most recent call last):
      File "NPE.py", line 53, in <module>
        model = IAN(config_path = 'IAN_simple.py', dnn = False)
      File "/Users/skurilyak/Documents/dev/testing/Neural-Photo-Editor/API.py", line 30, in __init__
        GANcheckpoints.load_weights(self.weights_fname,params)
      File "/Users/skurilyak/Documents/dev/testing/Neural-Photo-Editor/GANcheckpoints.py", line 39, in load_weights
        param_dict = np.load(fname)
      File "/usr/local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 416, in load
        "Failed to interpret file %s as a pickle" % repr(file))
    IOError: Failed to interpret file 'IAN_simple.npz' as a pickle
    

    Any ideas?

    opened by slavakurilyak 9
  • IDEA: Option to save changed image as the new ground-truth image

    I was trying to make multiple independent changes to an image and I came to the following conclusion:

    The masking technique means the output image is based off the original ground-truth image. After an aesthetically pleasing change is made, such as growing the length of the hair, a new ground-truth image must be saved and the latent space recalculated before a different change is made such as changing the hair color.

    By adding the following PR I was able to get these results: https://github.com/ajbrock/Neural-Photo-Editor/pull/8

    Original image: (screenshot)

    First operation, increase hair length: (screenshot)

    Second operation AFTER saving the new ground-truth image, new hair color: (screenshot)

    Without saving the new ground-truth image, I was unable to get these results as the algorithm attempted to remove my longer black hair when I attempted to change it to yellow, because it matches the skin color.

    opened by michaelrgb 3
  • Easier option to not use cuDNN

    This project looks very interesting, but I don't have access to an nVidia card. The readme says "You'll need to uncomment my explicit DNN calls if you wish to not use it.", but if I look at the code, there are a lot of references to DNN, so this doesn't look very trivial.

    Is it possible to create a custom version (maybe a branch?) that works without having cuDNN installed?

    opened by pvginkel 3
  • IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

    Traceback (most recent call last):
      File ".\NPE.py", line 18, in <module>
        model = IAN(config_path = 'IAN_simple.py', dnn = False)
      File "C:\Users\user\Desktop\Neural-Photo-Editor-master\API.py", line 30, in __init__
        GANcheckpoints.load_weights(self.weights_fname,params)
      File "C:\Users\user\Desktop\Neural-Photo-Editor-master\GANcheckpoints.py", line 39, in load_weights
        param_dict = np.load(fname)
      File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 429, in load
        "Failed to interpret file %s as a pickle" % repr(file))
    IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

    Any ideas?

    opened by Elijas 1
  • Numpy fails to interpret IAN_Simple.npz?

    Hi, I finally managed to install cuda and dev versions of lasagne and theano. Now when I try to launch NPE I get this:

    gray@gray-linux:~/Neural-Photo-Editor$ python NPE.py
    Using gpu device 0: GeForce GTX 970 (CNMeM is disabled, cuDNN 5105)
    Loading weights
    Traceback (most recent call last):
      File "NPE.py", line 53, in <module>
        model = IAN(config_path = 'IAN_simple.py', dnn = True)
      File "/home/gray/Neural-Photo-Editor/API.py", line 30, in __init__
        GANcheckpoints.load_weights(self.weights_fname,params)
      File "/home/gray/Neural-Photo-Editor/GANcheckpoints.py", line 39, in load_weights
        param_dict = np.load(fname)
      File "/home/gray/miniconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 416, in load
        "Failed to interpret file %s as a pickle" % repr(file))
    IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

    Am I doing something wrong? As far as I can understand from search results, np.load is used for binary .npz files, but all I can see in IAN_Simple.npz are three strings of text:

    version https://git-lfs.github.com/spec/v1
    oid sha256:82e5fd3ff68b2c9095935c9db269e086e2dd27704b629853e1f03473e7059bd7
    size 205207893

    UPD: Whoops, my bad. For some reason git clone didn't download the raw .npz files. I downloaded them manually and now everything works.
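
    For anyone hitting the same thing: the pointer text above shows the .npz files are stored with Git LFS, so (assuming git-lfs is installed) fetching the real weights should just be:

    git lfs install
    git lfs pull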

    opened by graynk 1
  • Added missing config key and removed unnecessary imports.

    Added n_classes to the IAN config, which is needed by get_model. This allows sample_IAN.py to launch as given in the README.

    Also removed some unnecessary imports from sample_IAN (voxnet and CAcheckpoints).

    opened by dribnet 1
  •  No module named CAcheckpoints

    Traceback (most recent call last):
      File "train_IAN_simple.py", line 112, in <module>
        import CAcheckpoints
    ImportError: No module named CAcheckpoints

    opened by assadRasheed 0
  • trained faces are all blury and seems not learnt

    I implemented a version in PyTorch with the same architecture illustrated in your paper and code, though without orthogonal regularization and MDC. However, my generated faces at 300k iterations are still very blurry, like the attached sample (rec_step_300000). Do you have any idea why this might happen? Thanks very much!!

    opened by ecilay 5
  • Theano optimization failed

    Hi,

    I am trying to reproduce the code on a V100 instance and I ran into the following issues when I ran python NPE.py

    Do you have any recommendations on how we can reproduce your experimental setup in the form of a Dockerfile?

    /home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #2
      (fname, cnt))
    /home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #3
      (fname, cnt))
    Loading weights
    Compiling Theano Functions
    ERROR (theano.gof.opt): Optimization failure due to: LocalOptGroup(local_abstractconv_gemm,local_abstractconv_gradweight_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv3d_gemm,local_abstractconv3d_gradweight_gemm,local_abstractconv3d_gradinputs_gemm,local_conv2d_cpu,local_conv2d_gradweight_cpu,local_conv2d_gradinputs_cpu)
    ERROR (theano.gof.opt): node: AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False}(X, enc_conv1.W)
    ERROR (theano.gof.opt): TRACEBACK:
    ERROR (theano.gof.opt): Traceback (most recent call last):
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2074, in process_node
        remove=remove)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 569, in replace_all_validate_remove
        chk = fgraph.replace_all_validate(replacements, reason)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 518, in replace_all_validate
        fgraph.replace(r, new_r, reason=reason, verbose=False)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/fg.py", line 486, in replace
        ". The type of the replacement must be the same.", old, new)
    BadOptimization: BadOptimization Error
      Variable: id 139714617198864 CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False}.0
      Op CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False}(Elemwise{Cast{float64}}.0, enc_conv1.W)
      Value Type: <type 'NoneType'>
      Old Value:  None
      New Value:  None
      Reason:  LocalOptGroup(local_abstractconv_gemm,local_abstractconv_gradweight_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv3d_gemm,local_abstractconv3d_gradweight_gemm,local_abstractconv3d_gradinputs_gemm,local_conv2d_cpu,local_conv2d_gradweight_cpu,local_conv2d_gradinputs_cpu). The type of the replacement must be the same.
      Old Graph:
      AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False} [id A] <TensorType(float32, 4D)> ''
       |X [id B] <TensorType(float32, 4D)>
       |enc_conv1.W [id C] <TensorType(float64, 4D)>
    
      New Graph:
      CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False} [id D] <TensorType(float64, 4D)> ''
       |Elemwise{Cast{float64}} [id E] <TensorType(float64, 4D)> ''
       | |X [id B] <TensorType(float32, 4D)>
       |enc_conv1.W [id C] <TensorType(float64, 4D)>
    
    
    Hint: relax the tolerance by setting tensor.cmp_sloppy=1
      or even tensor.cmp_sloppy=2 for less-strict comparison
    
    
    [The same BadOptimization failure is reported three more times for the remaining AbstractConv2d nodes; that output is omitted here because it is identical apart from the variable ids.]
    
    ERROR (theano.gof.opt): Optimization failure due to: local_abstractconv_check
    ERROR (theano.gof.opt): node: AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False}(X, enc_conv1.W)
    ERROR (theano.gof.opt): TRACEBACK:
    ERROR (theano.gof.opt): Traceback (most recent call last):
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2034, in process_node
        replacements = lopt.transform(node)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/tensor/nnet/opt.py", line 500, in local_abstractconv_check
        node.op.__class__.__name__)
    LocalMetaOptimizerSkipAssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against? On the CPU we do not support float16.
    
    Traceback (most recent call last):
      File "NPE.py", line 19, in <module>
        model = IAN(config_path = 'IAN_simple.py', dnn = False)
      File "/home/ubuntu/Neural-Photo-Editor/API.py", line 51, in __init__
        self.Z_hat_fn = theano.function([self.X],self.Z_hat)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function.py", line 317, in function
        output_keys=output_keys)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/pfunc.py", line 486, in pfunc
        output_keys=output_keys)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1839, in orig_function
        name=name)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1519, in __init__
        optimizer_profile = optimizer(fgraph)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 108, in __call__
        return self.optimize(fgraph)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 97, in optimize
        ret = self.apply(fgraph, *args, **kwargs)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 251, in apply
        sub_prof = optimizer.optimize(fgraph)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 97, in optimize
        ret = self.apply(fgraph, *args, **kwargs)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2143, in apply
        nb += self.process_node(fgraph, node)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2039, in process_node
        lopt, node)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 1933, in warn_inplace
        return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node)
      File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 1919, in warn
        raise exc
    theano.gof.opt.LocalMetaOptimizerSkipAssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against? On the CPU we do not support float16.
    
    opened by domarps 1
  • Possibly incorrect implementation of Batch Renorm

    I came across your implementation of batch re-normalization in the BatchReNormDNNLayer class, and I think there is an error that might be affecting the model's performance.

    My understanding of batch re-norm is that it applies the standard BN normalization first, then applies the r/d correction, and then finally applies the gamma/beta scaling and bias. Something along the lines of this:

    normed_x = (x - batch_mean) / batch_std    # standard BN
    normed_x = normed_x * r + d                # The batch renorm correction
    normed_x = normed_x * gamma + beta         # final scale and bias
    

    However, this line is applying the r/d correction after the scaling and centering with gamma and beta. https://github.com/ajbrock/Neural-Photo-Editor/blob/master/layers.py#L128

    It probably works anyway, based on the good results you seem to have gotten. I just thought I'd bring it to your attention.

    opened by waleedka 1
  • "fuel-download celeba" returns Dropbox HTML

    I presume @vdumoulin's folder sharing was turned off somehow (or reached a limit). I'd suggest making a placeholder project release on GitHub and putting the files there; it's hosted on S3 too, but with no limits, and public access is expected.

    opened by alexjc 3