[ICLR 2021, Spotlight] Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

Overview

Demo | Paper

[NEW!] Time to play with our interactive web demo!

Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation.

Large Scale Image Completion via Co-Modulated Generative Adversarial Networks
Shengyu Zhao, Jonathan Cui, Yilun Sheng, Yue Dong, Xiao Liang, Eric I Chang, Yan Xu
Tsinghua University and Microsoft Research
arXiv | OpenReview

Overview

This repo is built upon, and shares its dependencies with, the official StyleGAN2 repo. We also provide a Dockerfile for Docker users. This repo currently supports:

  • Large scale image completion experiments on FFHQ and Places2
  • Image-to-image translation experiments on edges to photos and COCO-Stuff
  • Evaluation code of Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS)

Datasets

  • FFHQ dataset (in TFRecords format) can be downloaded following the StyleGAN2 repo.
  • Places2 dataset can be downloaded from this website (Places365-Challenge 2016 high-resolution images, training set and validation set). The raw images should be converted into TFRecords using dataset_tools/create_places2.py (see the sketch below for the expected record format).
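
For reference, here is a minimal sketch of how StyleGAN2-style TFRecords serialize one image, modeled on NVIDIA's dataset_tool.py; dataset_tools/create_places2.py may differ in details such as multi-resolution levels, so treat this as illustrative only:

import numpy as np
import PIL.Image
import tensorflow as tf  # TensorFlow 1.x, as required by this repo

def write_image(writer, image_path):
    # writer: an open tf.python_io.TFRecordWriter
    img = np.asarray(PIL.Image.open(image_path).convert('RGB'))
    img = img.transpose([2, 0, 1])  # HWC -> CHW, the layout StyleGAN2 readers expect
    ex = tf.train.Example(features=tf.train.Features(feature={
        'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=img.shape)),
        'data': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tostring()]))}))
    writer.write(ex.SerializeToString())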

Training

The following script is for training on FFHQ; it splits off 10k images for validation. We recommend using 8 NVIDIA Tesla V100 GPUs for training. Training at 512x512 resolution takes about 1 week.

python run_training.py --data-dir=DATA_DIR --dataset=DATASET --metrics=ids10k --num-gpus=8

The following script is for training on Places2, which has a validation set of 36500 images:

python run_training.py --data-dir=DATA_DIR --dataset=DATASET --metrics=ids36k5 --total-kimg 50000 --num-gpus=8

Evaluation

The following script is for evaluation:

python run_metrics.py --data-dir=DATA_DIR --dataset=DATASET --network=CHECKPOINT_FILE(S) --metrics=METRIC(S) --num-gpus=1

Commonly used metrics are ids10k and ids36k5 (for FFHQ and Places2, respectively), which compute P-IDS and U-IDS together with FID. By default, masks are generated randomly for evaluation; alternatively, append -h0 (mask ratio in [0.0, 0.2]) through -h4 ([0.8, 1.0]) to the metric name to restrict the masked-area ratio.
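
For example, to evaluate an FFHQ checkpoint only on heavily masked images (mask ratio in [0.8, 1.0]):

python run_metrics.py --data-dir=DATA_DIR --dataset=DATASET --network=CHECKPOINT_FILE(S) --metrics=ids10k-h4 --num-gpus=1

Conceptually, P-IDS/U-IDS fit a linear classifier that tries to separate Inception features of real and completed images. The following is a minimal sketch under the paper's description (feature extraction omitted; the actual implementation in metrics/inception_discriminative_score.py may differ in detail):

import numpy as np
from sklearn.svm import LinearSVC

def ids_scores(feats_real, feats_fake):
    # feats_real[i] and feats_fake[i] are assumed paired (same masked input).
    X = np.concatenate([feats_real, feats_fake])
    y = np.concatenate([np.ones(len(feats_real)), np.zeros(len(feats_fake))])
    svm = LinearSVC(dual=False).fit(X, y)
    real_scores = svm.decision_function(feats_real)  # larger = looks more "real" to the SVM
    fake_scores = svm.decision_function(feats_fake)
    p_ids = np.mean(fake_scores > real_scores)  # completed image beats its paired real image
    u_ids = 0.5 * (np.mean(real_scores < 0) + np.mean(fake_scores > 0))  # misclassification rate
    return p_ids, u_ids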

Our pre-trained models are available on Google Drive and listed below:

Model name & URL                       | Description
co-mod-gan-ffhq-9-025000.pkl           | Large scale image completion on FFHQ (512x512)
co-mod-gan-ffhq-10-025000.pkl          | Large scale image completion on FFHQ (1024x1024)
co-mod-gan-places2-050000.pkl          | Large scale image completion on Places2 (512x512)
co-mod-gan-coco-stuff-025000.pkl       | Image-to-image translation on COCO-Stuff (labels to photos, 512x512)
co-mod-gan-edges2shoes-025000.pkl      | Image-to-image translation on edges2shoes (256x256)
co-mod-gan-edges2handbags-025000.pkl   | Image-to-image translation on edges2handbags (256x256)

Use the following script to run the interactive demo locally:

python run_demo.py -d DATA_DIR/DATASET -c CHECKPOINT_FILE(S)

Citation

If you find this code helpful, please cite our paper:

@inproceedings{zhao2021comodgan,
  title={Large Scale Image Completion via Co-Modulated Generative Adversarial Networks},
  author={Zhao, Shengyu and Cui, Jonathan and Sheng, Yilun and Dong, Yue and Liang, Xiao and Chang, Eric I and Xu, Yan},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}
Comments
  • How does the image reconstruction work?

    First of all, congrats on your results; your network is state of the art for image inpainting by a margin... I'm currently trying to implement this in PyTorch, and I am stuck because my TensorFlow 1 knowledge is limited.

    As far as I understand it, your implementation is based on StyleGAN2; however, instead of just using a generator with a mapping network to calculate the style vectors, you use both a mapping network and an encoder network, where the style vector is calculated by applying a linear layer to the concatenation of the encoder output E(y) and the mapping network output M(z). Here y is the image with holes that we want to reconstruct, and z is noise. The architecture of the encoder is very similar to that of the discriminator in StyleGAN2.

    But does this really suffice to reconstruct the image? Each image in the batch gets mapped by the encoder into a 512/1024-dimensional vector, and this is the only information the network gets in order to inpaint the image. I feel like I am missing a key piece here...
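
    For concreteness, the co-modulation step described above can be sketched in PyTorch roughly as follows; this is a minimal illustration of that reading of the paper, with assumed dimensions, not the official TensorFlow code:

    import torch
    import torch.nn as nn

    class CoModulation(nn.Module):
        # Joint style vector from the concatenation of encoder features E(y)
        # and mapped noise M(z); the 512 dims are assumptions, not official values.
        def __init__(self, e_dim=512, m_dim=512, style_dim=512):
            super().__init__()
            self.affine = nn.Linear(e_dim + m_dim, style_dim)

        def forward(self, e_y, m_z):
            # e_y: (B, e_dim) encoder output for the masked image y
            # m_z: (B, m_dim) mapping-network output for the noise z
            return self.affine(torch.cat([e_y, m_z], dim=1))  # co-modulated style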

    Thanks for any help in advance!

    opened by dunky11 6
  • How to control style truncation?

    Hi! Thank you for releasing this! Really amazing results!

    I'm trying to evaluate the model on my data, and I'm curious whether there is a way to tune the truncation trick in run_generator.py. I'd like to buy as much quality as possible, and I'm not very interested in diversity.

    Should I pass something as input to Gs.run?
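
    For reference, the run_generator.py call quoted in a later comment on this page passes truncation_psi directly to Gs.run, so something along these lines should work; values below 1.0 usually trade diversity for fidelity in StyleGAN2-derived code (treat the exact behavior here as unverified):

    import numpy as np

    latent = np.random.randn(1, *Gs.input_shape[1:])  # Gs as loaded by run_generator.py
    fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis], truncation_psi=0.7)[0]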

    opened by windj007 5
  • dockerfile parse error

    Very cool project - thanks for releasing your code! Just a heads up that the readme says:

    We also provide a Dockerfile for Docker users.

    But the Dockerfile in the repo is an empty text file.

    Edit: it's actually a binary file, as mentioned by zsyzzsoft below.

    opened by josephrocca 5
  • Image Sequence Noise

    Hi, first of all, thank you for this brilliant work!

    Is there a way to use this on a sequence of images with some kind of temporal consistency? I'm looking for a way to force the generator to always sample the same latent noise so that the outputs are reproducible, like a predefined seed that regenerates the same results. Any ideas? Would this also be possible with an already pretrained model?
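
    A hedged sketch of one way to do this with a pretrained model: seed the latent so z is identical for every frame. randomize_noise=False is the usual StyleGAN2 switch for deterministic per-layer noise and is assumed, not verified, to carry over to this repo:

    import numpy as np

    rnd = np.random.RandomState(123)                   # fixed seed -> the same z every frame
    latent = rnd.randn(1, *Gs.input_shape[1:])
    fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis], randomize_noise=False)[0]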

    Many Thanks!

    opened by bananajoe182 4
  • Creating custom datasets throws ValueError: axes don't match array

    I was trying to create a custom dataset with dataset_tools/create_from_images.py, but even after ensuring that all images are of the same size (512, 512, 3), it throws the error:

    Processes created.
    WARNING:tensorflow:From /content/co-mod-gan/dataset_tools/tfrecord_utils.py:36: The name tf.python_io.TFRecordOptions is deprecated. Please use tf.io.TFRecordOptions instead.
    
    WARNING:tensorflow:From /content/co-mod-gan/dataset_tools/tfrecord_utils.py:36: The name tf.python_io.TFRecordCompressionType is deprecated. Please use tf.compat.v1.python_io.TFRecordCompressionType instead.
    
    WARNING:tensorflow:From /content/co-mod-gan/dataset_tools/tfrecord_utils.py:38: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.
    
    Processing training images...
    ./data/training
    100% 5541/5541 [00:00<00:00, 76086.06it/s]
    Process Process-3:
    Traceback (most recent call last):
      File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
        self.run()
      File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
        self._target(*self._args, **self._kwargs)
      File "dataset_tools/create_from_images.py", line 30, in worker
        img = np.asarray(img).transpose([2, 0, 1])
    ValueError: axes don't match array
    (each of the eight worker processes fails with this same traceback)
    
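    The failing call is img.transpose([2, 0, 1]) on what the error implies is a 2-D array: some images decode as grayscale (or palette) even when their nominal size is 512x512. A hedged fix, assuming the worker loads images with PIL as the traceback suggests, is to force three channels before transposing:

    import numpy as np
    import PIL.Image

    img = PIL.Image.open(path).convert('RGB')   # force HxWx3, even for grayscale/palette files
    img = np.asarray(img).transpose([2, 0, 1])  # HWC -> CHW now has matching axes
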
    opened by karynaur 4
  • Use of Normalization Layers in Encoder

    Hi,

    First of all, thanks for sharing this great work!

    My question is about using normalization blocks (batch norm, instance norm, etc.) in the Encoder. I did not find any normalization layer when I investigated the source code. Is this really the case, or am I missing something?

    If this is the case, is there any particular reason to omit it?

    opened by hamzapehlivan 2
  • About quantitative metric?

    Hi, I have some doubts about the calculation of the numerical metrics.

    1. Your model (co-mod) is trained on 512x512 images, while the official DeepFillv2 is trained on 256x256. Given the different resolutions, how do you calculate metrics such as FID? (a) Does the official DeepFillv2 receive a 256x256 masked image, with its output resized to 512x512? (b) Does it receive a 512x512 masked image? (c) Some other method?
    2. What is the input resolution of the retrained DeepFillv2 model, 256x256 or 512x512?
    3. Figure 18 in your paper displays some 1024x1024 images. Do you run inference on 1024x1024 images directly with the co-mod model trained at 512x512?
    opened by z2liudake 2
  • How to make custom pix2pix dataset and training

    Hi,

    Thanks for your impressive code.

    I met a problem with pix2pix when trying to use co-mod-gan to train on my own dataset. Could you tell me how to make a custom pix2pix dataset that can be used with this program? Thank you very much.

    Sincerely yours, Bell

    opened by SwordBearFire 2
  • run_generator.py throws a rank mismatch error

    I was trying to use run_generator.py on the example images in imgs/ and ran into this error.

    !python run_generator.py -c /content/drive/MyDrive/co-mod-gan-ffhq-9-025000.pkl -i imgs/example_image.jpg -m imgs/example_mask.jpg -o output.jpg
    
    Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Loading... Done.
    Setting up TensorFlow plugin "upfirdn_2d.cu": Preprocessing... Loading... Done.
    Traceback (most recent call last):
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,512,512] vs. shape[1] = [1,3,512,512]
    	 [[{{node Gs/_Run/Gs/G_synthesis/concat}}]]
    	 [[Gs/_Run/Gs/images_out/_1587]]
      (1) Invalid argument: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,512,512] vs. shape[1] = [1,3,512,512]
    	 [[{{node Gs/_Run/Gs/G_synthesis/concat}}]]
    0 successful operations.
    0 derived errors ignored.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "run_generator.py", line 35, in <module>
        main()
      File "run_generator.py", line 32, in main
        create_from_images(**vars(args))
      File "run_generator.py", line 18, in create_from_images
        fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis])[0]
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 442, in run
        mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in)))
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,512,512] vs. shape[1] = [1,3,512,512]
    	 [[node Gs/_Run/Gs/G_synthesis/concat (defined at /tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:1748) ]]
    	 [[Gs/_Run/Gs/images_out/_1587]]
      (1) Invalid argument: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,512,512] vs. shape[1] = [1,3,512,512]
    	 [[node Gs/_Run/Gs/G_synthesis/concat (defined at /tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:1748) ]]
    0 successful operations.
    0 derived errors ignored.
    
    Original stack trace for 'Gs/_Run/Gs/G_synthesis/concat':
      File "run_generator.py", line 35, in <module>
        main()
      File "run_generator.py", line 32, in main
        create_from_images(**vars(args))
      File "run_generator.py", line 18, in create_from_images
        fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis])[0]
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 417, in run
        out_gpu = net_gpu.get_output_for(*in_gpu, return_as_list=True, **dynamic_kwargs)
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 221, in get_output_for
        out_expr = self._build_func(*final_inputs, **build_kwargs)
      File "<string>", line 240, in G_main
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 221, in get_output_for
        out_expr = self._build_func(*final_inputs, **build_kwargs)
      File "<string>", line 387, in G_synthesis_RegionGAN
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
        return target(*args, **kwargs)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py", line 1420, in concat
        return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gen_array_ops.py", line 1257, in concat_v2
        "ConcatV2", values=values, axis=axis, name=name)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
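
    The shapes in the message ([1,512,512] vs. [1,3,512,512]) hint that one of the two inputs, most plausibly the mask, is missing its channel axis before the batch axis is added. A speculative fix is to load the image as 3-channel RGB and give the mask an explicit channel dimension, so both become rank 4 after the np.newaxis in the Gs.run call:

    import numpy as np
    import PIL.Image

    real = np.asarray(PIL.Image.open('imgs/example_image.jpg').convert('RGB')).transpose([2, 0, 1])  # (3, H, W)
    mask = np.asarray(PIL.Image.open('imgs/example_mask.jpg').convert('1'), dtype=np.float32)[np.newaxis]  # (1, H, W)
    fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis])[0]  # both rank 4 now
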
    opened by karynaur 2
  • Error when running in Colab

    Does anyone know how to fix this?

    !gdown -q --id 1M2dSxlJnCFNM6LblpB2nQCnaimgwaaKu
    !git clone https://github.com/zsyzzsoft/co-mod-gan.git &> /dev/null
    %tensorflow_version 1.x
    %cd co-mod-gan
    
    !python run_generator.py -c /content/co-mod-gan-ffhq-10-025000.pkl -i imgs/example_image.jpg -m imgs/example_mask.jpg -o imgs/example_output.jpg
    
    Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Compiling... Loading... Done.
    Setting up TensorFlow plugin "upfirdn_2d.cu": Preprocessing... Compiling... Loading... Done.
    Traceback (most recent call last):
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: Input to reshape is a tensor with 8405000 values, but the requested shape requires a multiple of 33620000
    	 [[{{node Gs/_Run/Gs/G_synthesis/E_1024x1024/Conv1_down/Reshape_1}}]]
    	 [[Gs/_Run/Gs/images_out/_1767]]
      (1) Invalid argument: Input to reshape is a tensor with 8405000 values, but the requested shape requires a multiple of 33620000
    	 [[{{node Gs/_Run/Gs/G_synthesis/E_1024x1024/Conv1_down/Reshape_1}}]]
    0 successful operations.
    0 derived errors ignored.
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "run_generator.py", line 34, in <module>
        main()
      File "run_generator.py", line 31, in main
        generate(**vars(args))
      File "run_generator.py", line 16, in generate
        fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis], truncation_psi=truncation)[0]
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 442, in run
        mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in)))
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: Input to reshape is a tensor with 8405000 values, but the requested shape requires a multiple of 33620000
    	 [[node Gs/_Run/Gs/G_synthesis/E_1024x1024/Conv1_down/Reshape_1 (defined at /tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:1748) ]]
    	 [[Gs/_Run/Gs/images_out/_1767]]
      (1) Invalid argument: Input to reshape is a tensor with 8405000 values, but the requested shape requires a multiple of 33620000
    	 [[node Gs/_Run/Gs/G_synthesis/E_1024x1024/Conv1_down/Reshape_1 (defined at /tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py:1748) ]]
    0 successful operations.
    0 derived errors ignored.
    
    Original stack trace for 'Gs/_Run/Gs/G_synthesis/E_1024x1024/Conv1_down/Reshape_1':
      File "run_generator.py", line 34, in <module>
        main()
      File "run_generator.py", line 31, in main
        generate(**vars(args))
      File "run_generator.py", line 16, in generate
        fake = Gs.run(latent, None, real[np.newaxis], mask[np.newaxis], truncation_psi=truncation)[0]
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 417, in run
        out_gpu = net_gpu.get_output_for(*in_gpu, return_as_list=True, **dynamic_kwargs)
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 221, in get_output_for
        out_expr = self._build_func(*final_inputs, **build_kwargs)
      File "<string>", line 241, in G_main
      File "/content/co-mod-gan/dnnlib/tflib/network.py", line 221, in get_output_for
        out_expr = self._build_func(*final_inputs, **build_kwargs)
      File "<string>", line 384, in G_synthesis_RegionGAN
      File "<string>", line 355, in E_block
      File "<string>", line 58, in conv2d_layer
      File "/content/co-mod-gan/dnnlib/tflib/ops/upfirdn_2d.py", line 333, in conv_downsample_2d
        x = _simple_upfirdn_2d(x, k, pad0=(p+1)//2, pad1=p//2, data_format=data_format, impl=impl)
      File "/content/co-mod-gan/dnnlib/tflib/ops/upfirdn_2d.py", line 363, in _simple_upfirdn_2d
        y = tf.reshape(y, [-1, _shape(x, 1), _shape(y, 1), _shape(y, 2)])
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/array_ops.py", line 131, in reshape
        result = gen_array_ops.reshape(tensor, shape, name)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/ops/gen_array_ops.py", line 8115, in reshape
        "Reshape", tensor=tensor, shape=shape, name=name)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
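
    The requested shape is exactly 4x the supplied one (33620000 / 8405000 = 4), which is what feeding a 512x512 input into the 1024x1024 network would produce: the downloaded checkpoint co-mod-gan-ffhq-10-025000.pkl is the 1024x1024 model, while the bundled example images appear to be 512x512. Using the 512x512 FFHQ checkpoint from the Google Drive folder listed above (or resizing the image and mask to 1024x1024) should resolve it, e.g.:

    !python run_generator.py -c /content/co-mod-gan-ffhq-9-025000.pkl -i imgs/example_image.jpg -m imgs/example_mask.jpg -o imgs/example_output.jpg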
    
    opened by qo4on 1
  • training failed

    I can't figure out why this is happening: when I try to fine-tune on a different dataset, training runs for only 2 to 3 minutes, then stops without generating a .pkl file, and the network doesn't seem to be trained.

    opened by mostafa610 1
  • Out of memory using U-IDS/P-IDS metrics

    Hi @zsyzzsoft, thank you for your awesome project. Could you help me with this: when I apply the U-IDS/P-IDS metrics to the 512x512 Places2 dataset (validation set, 36500 images), it occupies more than 64GB of memory. However, I have no more memory available on my machine. Is there any method that can solve this problem?
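
    If the blow-up happens while fitting the linear classifier on all 36500 feature pairs, a possible workaround, at the cost of a noisier estimate, is to subsample the features before fitting; a hypothetical sketch reusing the ids_scores helper sketched in the Evaluation section above:

    import numpy as np

    idx = np.random.RandomState(0).choice(len(feats_real), 10000, replace=False)
    p_ids, u_ids = ids_scores(feats_real[idx], feats_fake[idx])  # subsampling keeps pairs aligned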

    opened by LonglongaaaGo 0
  • running error

    Hi,

    I tried to run the example and got the following error. Could you please help with it? Thank you.

    Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Compiling... Loading... Failed!
    Traceback (most recent call last):
      File "run_generator.py", line 34, in <module>
        main()
      File "run_generator.py", line 31, in main
        generate(**vars(args))
      File "run_generator.py", line 14, in generate
        _, _, Gs = misc.load_pkl(checkpoint)
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/training/misc.py", line 30, in load_pkl
        return pickle.load(file, encoding='latin1')
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/network.py", line 297, in __setstate__
        self._init_graph()
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/network.py", line 154, in _init_graph
        out_expr = self._build_func(*self.input_templates, **build_kwargs)
      File "<string>", line 383, in G_synthesis_RegionGAN
      File "<string>", line 347, in E_fromrgb
      File "<string>", line 68, in apply_bias_act
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/ops/fused_bias_act.py", line 68, in fused_bias_act
        return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain)
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/ops/fused_bias_act.py", line 122, in _fused_bias_act_cuda
        cuda_kernel = _get_plugin().fused_bias_act
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/ops/fused_bias_act.py", line 16, in _get_plugin
        return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')
      File "/local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/custom_ops.py", line 159, in get_plugin
        plugin = tf.load_op_library(bin_file)
      File "/local/mnt2/workspace2/yandeng/anaconda3/envs/comod/lib/python3.6/site-packages/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
        lib_handle = py_tf.TF_LoadLibrary(library_filename)
    tensorflow.python.framework.errors_impl.NotFoundError: /local/mnt2/workspace2/yandeng/co-mod-gan/dnnlib/tflib/_cudacache/fused_bias_act_3983cfc3f38164c8a38c81bd423d1fe1.so: undefined symbol: _ZN10tensorflow12OpDefBuilder6OutputESs
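
    A hedged note: this undefined-symbol pattern usually means the custom CUDA op was compiled against a different C++ ABI than the installed TensorFlow binary. Clearing the compiled-plugin cache and recompiling with a compiler version matching the TensorFlow build is a common first step:

    rm -rf dnnlib/tflib/_cudacache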

    opened by mxgbs 2
  • Value error

    I'm trying to train Co-Mod-GAN on FFHQ. However, I'm facing the ValueError below.

    Traceback (most recent call last):
      File "run_training.py", line 134, in <module>
        main()
      File "run_training.py", line 129, in main
        run(**vars(args))
      File "run_training.py", line 72, in run
        dnnlib.submit_run(**kwargs)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/dnnlib/submission/submit.py", line 343, in submit_run
        return farm.submit(submit_config, host_run_dir)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/dnnlib/submission/internal/local.py", line 22, in submit
        return run_wrapper(submit_config)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/dnnlib/submission/submit.py", line 280, in run_wrapper
        run_func_obj(**submit_config.run_func_kwargs)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/training/training_loop.py", line 358, in training_loop
        metrics.run(pkl, run_dir=dnnlib.make_run_dir_path(), data_dir=dnnlib.convert_path(data_dir), num_gpus=num_gpus, tf_config=tf_config)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/metrics/metric_base.py", line 188, in run
        metric.run(*args, **kwargs)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/metrics/metric_base.py", line 82, in run
        self._evaluate(Gs, Gs_kwargs=Gs_kwargs, num_gpus=num_gpus)
      File "/home/dmsheng/demo/image_inpainting/co-mod-gan-default/metrics/inception_discriminative_score.py", line 69, in _evaluate
        s, _ = scipy.linalg.sqrtm(np.dot(sigma_fake, sigma_real), disp=False)
      File "/home/dmsheng/anaconda3/envs/tf1.14/lib/python3.6/site-packages/scipy/linalg/_matfuncs_sqrtm.py", line 161, in sqrtm
        A = _asarray_validated(A, check_finite=True, as_inexact=True)
      File "/home/dmsheng/anaconda3/envs/tf1.14/lib/python3.6/site-packages/scipy/_lib/_util.py", line 263, in _asarray_validated
        a = toarray(a)
      File "/home/dmsheng/anaconda3/envs/tf1.14/lib/python3.6/site-packages/numpy/lib/function_base.py", line 486, in asarray_chkfinite
        raise ValueError("array must not contain infs or NaNs")
    ValueError: array must not contain infs or NaNs

    I've tried rebuilding the tfrecords and several hyper-parameter settings; nothing worked. Any ideas?

    opened by ImmortalSdm 2
  • An error occurred when running run_generator.py

    Hi, I have a problem:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: Incompatible shapes: [1,3,1024,1024] vs. [1,1,512,512]
         [[node Gs/_Run/Gs/G_synthesis/mul (defined at <string>:387) , node Gs/_Run/Gs/G_synthesis/mul_2 (defined at <string>:476) ]]
         [[Gs/_Run/Gs/images_out/_1587]]
      (1) Invalid argument: Incompatible shapes: [1,3,1024,1024] vs. [1,1,512,512]
         [[node Gs/_Run/Gs/G_synthesis/mul (defined at <string>:387) , node Gs/_Run/Gs/G_synthesis/mul_2 (defined at <string>:476) ]]
    0 successful operations.
    0 derived errors ignored.

    Can you only use 512x512 images when running the generator?
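
    The shapes ([1,3,1024,1024] vs. [1,1,512,512]) indicate that the image and the mask (or checkpoint) resolutions disagree: a 1024x1024 image is being multiplied with a 512x512 mask. Matching the image, mask, and checkpoint resolution should fix it, e.g. with the 1024x1024 FFHQ checkpoint (IMAGE_1024.jpg and MASK_1024.jpg below are placeholders for 1024x1024 inputs):

    python run_generator.py -c co-mod-gan-ffhq-10-025000.pkl -i IMAGE_1024.jpg -m MASK_1024.jpg -o output.jpg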

    opened by zzz105120 2
Owner
Shengyu Zhao
Undergraduate at IIIS, Tsinghua University. Working with MIT and Microsoft Research.