Overview

TecoGAN

This repository contains source code and materials for the TecoGAN project, i.e. code for a TEmporally COherent GAN for video super-resolution. Authors: Mengyu Chu, You Xie, Laura Leal-Taixe, Nils Thuerey. Technical University of Munich.

This repository so far contains the code for TecoGAN inference and training, as well as for downloading the training data. Pre-trained models are also available; download links and instructions can be found below. This work was published in ACM Transactions on Graphics as "Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation (TecoGAN)", https://doi.org/10.1145/3386569.3392457. The video and pre-print can be found here:

Video: https://www.youtube.com/watch?v=pZXFXtfd-Ak
Preprint: https://arxiv.org/pdf/1811.09393.pdf
Supplemental results: https://ge.in.tum.de/wp-content/uploads/2020/05/ClickMe.html

TecoGAN teaser image

Additional Generated Outputs

Our method generates fine details that persist over the course of long generated video sequences. E.g., the mesh structures of the armor, the scale patterns of the lizard, and the dots on the back of the spider highlight the capabilities of our method. Our spatio-temporal discriminator plays a key role in guiding the generator network towards producing coherent detail.

Lizard

Armor

Spider

Running the TecoGAN Model

Below you can find a quick start guide for running a trained TecoGAN model. For further explanations of the parameters, take a look at the runGan.py file.
Note: evaluation (test case 2) currently requires an Nvidia GPU with CUDA. tkinter is also required and may be installed via the python3-tk package.
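
After installing the packages below, you can quickly verify that TensorFlow actually sees a CUDA device. This is a minimal sanity check using the TF 1.x API, not part of the repo:

# GPU sanity check (TF 1.x API; under TF 2.x use
# tf.config.list_physical_devices('GPU') instead)
import tensorflow as tf
print(tf.test.is_gpu_available())  # True -> a CUDA device is usable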

# Install TensorFlow (1.8+):
pip3 install --ignore-installed --upgrade tensorflow-gpu # or tensorflow
# Install PyTorch (only necessary for the metric evaluations) and other things...
pip3 install -r requirements.txt

# Download our TecoGAN model, the _Vid4_ and _TOS_ scenes shown in our paper and video.
python3 runGan.py 0

# Run the inference mode on the calendar scene.
# Parameter explanations can be found in runGan.py; feel free to try other scenes!
python3 runGan.py 1 

# Evaluate the results with 4 metrics: PSNR, LPIPS[1], and our temporal metrics tOF and tLP (computed with PyTorch).
# Take a look at the paper for more details! 
python3 runGan.py 2
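
For intuition: tOF measures how much the motion estimated on the generated video deviates from the motion estimated on the ground truth, while tLP compares LPIPS distances between consecutive frames of both sequences. Below is a minimal, illustrative sketch of the tOF idea using OpenCV's Farneback flow; it is not the repo's evaluation code (use runGan.py case 2 for the actual numbers):

# Illustrative tOF sketch: mean L1 difference between the optical flow of
# ground-truth frame pairs and the flow of generated frame pairs.
import cv2
import numpy as np

def tOF_score(gt_frames, gen_frames):
    # gt_frames / gen_frames: equal-length lists of HxWx3 uint8 images
    gray = lambda img: cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    diffs = []
    for t in range(1, len(gt_frames)):
        flow_gt = cv2.calcOpticalFlowFarneback(gray(gt_frames[t-1]), gray(gt_frames[t]),
                                               None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flow_gen = cv2.calcOpticalFlowFarneback(gray(gen_frames[t-1]), gray(gen_frames[t]),
                                                None, 0.5, 3, 15, 3, 5, 1.2, 0)
        diffs.append(np.mean(np.abs(flow_gt - flow_gen)))  # lower is better
    return float(np.mean(diffs))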

Train the TecoGAN Model

1. Prepare the Training Data

The training and validation dataset can be downloaded with the following commands into a chosen directory TrainingDataPath. Note: online video downloading requires youtube-dl.

# Install youtube-dl for online video downloading
pip install --user --upgrade youtube-dl

# take a look at the parameters first:
python3 dataPrepare.py --help

# To be on the safe side, if you just want to see what will happen, the following line won't download anything;
# it will only save information into a log file.
# TrainingDataPath is still important: it is the directory where logs are saved (TrainingDataPath/log/logfile_mmddHHMM.txt).
python3 dataPrepare.py --start_id 2000 --duration 120 --disk_path TrainingDataPath --TEST

# This will create 308 subfolders under TrainingDataPath, each with 120 frames, from 28 online videos.
# It takes a long time.
python3 dataPrepare.py --start_id 2000 --duration 120 --REMOVE --disk_path TrainingDataPath

Once ready, please update the parameter TrainingDataPath in runGan.py (for cases 3 and 4), and then you can start training with the downloaded data!

Note: most of the data (272 of the 308 sequences) are the same as those used for the published models, but some (36 of 308) are no longer online, so the script downloads suitable replacements.
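
Once the download finishes, a quick sanity check of the frame counts can save a failed training run later. A small sketch, assuming the scene_* subfolder naming produced by dataPrepare.py (adjust if your layout differs):

# Count downloaded scenes and flag incomplete ones (assumes scene_* naming)
import os

TrainingDataPath = "TrainingDataPath"  # same directory as passed via --disk_path
scenes = sorted(d for d in os.listdir(TrainingDataPath) if d.startswith("scene_"))
print("scenes found:", len(scenes))  # 308 expected after a full download
for s in scenes:
    frames = [f for f in os.listdir(os.path.join(TrainingDataPath, s)) if f.endswith(".png")]
    if len(frames) < 120:  # each scene should contain 120 frames
        print(s, "has only", len(frames), "frames")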

2. Train the Model

This section gives the commands for training a new TecoGAN model. Details and additional parameters can be found in the runGan.py file. Note: the tensorboard gif summary requires ffmpeg.

# Install ffmpeg for the gif summary
sudo apt-get install ffmpeg # or conda install ffmpeg

# Train the TecoGAN model, based on our FRVSR model
# Please check and update the following parameters: 
# - VGGPath; it uses ./model/ by default. The VGG model is ca. 500MB.
# - TrainingDataPath (see above)
# - in main.py you can also adjust the output directory of the testWhileTrain() function if you like (it writes into a train/ subdirectory by default)
python3 runGan.py 3

# Train without Dst (i.e., an FRVSR model)
python3 runGan.py 4

# View log via tensorboard
tensorboard --logdir='ex_TecoGANmm-dd-hh/log' --port=8008

Tensorboard GIF Summary Example

gif_summary_example

Acknowledgements

This work was funded by the ERC Starting Grant realFlow (ERC StG-2015-637014).
Part of the code is based on LPIPS[1], Photo-Realistic SISR[2] and gif_summary[3].

References

[1] The Unreasonable Effectiveness of Deep Features as a Perceptual Metric (LPIPS)
[2] Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
[3] gif_summary

TUM I15: https://ge.in.tum.de/, TUM: https://www.tum.de/

Comments
  • Training data with LR and HR

    Hello,

    I am trying to train the model with my own dataset. Instead of using bicubic, I generated some LR images using my own method. I want to train the model but I cannot find where to put the training LR data.

    Thank you!

    opened by zzhan127 15
  • FileNotFoundError from subprocess.Popen(cmd)

    Hello. When I run python runGan.py 1 to test, I encountered the error below.

Testing test case 1
Traceback (most recent call last):
  File "runGan.py", line 90, in <module>
    mycall(cmd1).communicate()
  File "runGan.py", line 21, in mycall
    return subprocess.Popen(cmd)
  File "C:\Users\user\Anaconda3\envs\tensorflow\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Users\user\Anaconda3\envs\tensorflow\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

    Please give me any solution or hint to solve this problem.

    opened by TigerStone93 9
  • Docker problem with runGan.py 4

    runGan.py 4 produces some error I cannot fathom. †

    root@b55c13b7b90f:/TecoGAN# python3 runGan.py 4 /usr/local/lib/python3.5/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters Using TensorFlow backend.

    Preparing train_data [Config] Use random crop [Config] Use random crop [Config] Use random flip Sequenced batches: 28250, sequence length: 7 Preparing validation_data [Config] Use random crop [Config] Use random crop [Config] Use random flip Sequenced batches: 5085, sequence length: 7 tData count = 28250, steps per epoch 28250 Traceback (most recent call last): File "main.py", line 287, in Net = FRVSR( rdata.s_inputs, rdata.s_targets, FLAGS ) File "/TecoGAN/lib/Teco.py", line 522, in FRVSR return TecoGAN(r_inputs, r_targets, FLAGS, False) File "/TecoGAN/lib/Teco.py", line 434, in TecoGAN update_loss = exp_averager.apply(update_list) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/moving_averages.py", line 388, in apply raise ValueError("Moving average already computed for: %s" % var.name) ValueError: Moving average already computed for: generator_loss/content_loss/Mean:0

    † probably just stupid.

    opened by flutide 7
  • Problems training TecoGAN in Docker

I'm running the docker version under WSL2, Ubuntu 16.04.

    GPU is found in this environment, and I've tried it with a simple MNIST dataset and TF. Training on GPU in that case works, VRAM is allocated etc.

    However, running "python3 runGan.py 3" does not seem to push anything into VRAM. The script generates the network, restores VGG19, saves a checkpoint and then exits without error or warning message.

    Any ideas?

    opened by flutide 7
  • How to resume training?

    Hi everyone.

    After some serious fiddling to get the code running on a Win10 machine (and using the GPU), I ran into the minor problem of power failure in the building. This was part way through running "python3 runGan.py 3".

    Now, how do I resume training from the last saved checkpoint?

    I can see 6 stored check points on disk, last one being "model-50000".

    opened by flutide 7
  • How much memory is required?

    Hi, I used the project to upscale a small 1300 frames video and it worked well. Now I am trying to upscale a larger video with 32k frames but it is getting stuck before detecting the shape of the input and my memory is getting full (8 GB ram) and also the swap is full (16 GB).

I am using the TensorFlow CPU build, and the LR video shape is 704x396. Is this a problem of low memory or something else? How can I make it run? I don't mind the long runtime, since it will run on an old PC for hours.

    opened by farahats9 5
  • Noisy output of TecoGAN

Well, I used the pre-trained model and replaced 0001.png in the calendar scene with my own single 180x144 image, and this is what I got:

Is this a problem with the type of training data used for the pre-trained model? It is obviously messing up the compression noise; shouldn't we train it with this type of data as well?

Also, I tried to change the output shape multiplier on line 187 in main.py and got errors. How can we increase the output resolution? I also couldn't use any input image other than 180x144.

    opened by sfmth 5
  • RunGan Case 1 Error in Colab

    Using Google Colab: 12.72 Gb RAM / Tesla P100-PCIE-16GB

I am using batches of 100 frames, because with more than that the model does not work; all frames are in .png format extracted with ffmpeg, at 720p resolution. The problem is that case 1 cannot finish executing due to this error:

Testing test case 1 WARNING:tensorflow:From main.py:19: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.

    WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

    • https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
    • https://github.com/tensorflow/addons
    • https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue.

    Using TensorFlow backend. WARNING:tensorflow:From main.py:138: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

    input shape: [1, 720, 1280, 3] output shape: [1, 2880, 5120, 3] WARNING:tensorflow:From main.py:195: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

    WARNING:tensorflow:From main.py:201: The name tf.space_to_depth is deprecated. Please use tf.compat.v1.space_to_depth instead.

    WARNING:tensorflow:From main.py:203: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

    WARNING:tensorflow:From main.py:206: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.

    WARNING:tensorflow:From /content/TecoGAN/lib/frvsr.py:22: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.

    Finish building the network WARNING:tensorflow:From main.py:221: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.

    WARNING:tensorflow:From main.py:224: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

    WARNING:tensorflow:From main.py:227: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

    WARNING:tensorflow:From main.py:228: The name tf.local_variables_initializer is deprecated. Please use tf.compat.v1.local_variables_initializer instead.

    WARNING:tensorflow:From main.py:230: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

    WARNING:tensorflow:From main.py:239: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

    Loading weights from ckpt model Frame evaluation starts!! Traceback (most recent call last): File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1365, in _do_call return fn(*args) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1350, in _run_fn target_list, run_metadata) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[1,64,2881,5121] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[generator/Assign_1/_203]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[1,64,2881,5121] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[{{node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 259, in output_frame = sess.run(outputs, feed_dict=feed_dict) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 956, in run run_metadata_ptr) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1180, in _run feed_dict_tensor, options, run_metadata) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1359, in _do_run run_metadata) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py", line 1384, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found. (0) Resource exhausted: OOM when allocating tensor with shape[1,64,2881,5121] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1 (defined at /tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py:1748) ]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[generator/Assign_1/_203]]
    

    Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    (1) Resource exhausted: OOM when allocating tensor with shape[1,64,2881,5121] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1 (defined at /tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py:1748) ]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

    0 successful operations. 0 derived errors ignored.

    Original stack trace for 'generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1': File "main.py", line 204, in gen_output = generator_F(inputs_all, 3, reuse=False, FLAGS=FLAGS) File "/content/TecoGAN/lib/frvsr.py", line 76, in generator_F net = conv2_tran(net, 3, 64, 2, scope='conv_tran2') File "/content/TecoGAN/lib/ops.py", line 40, in conv2_tran activation_fn=None, weights_initializer=tf.contrib.layers.xavier_initializer()) File "/tensorflow-1.15.2/python3.6/tensorflow_core/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args return func(*args, **current_args) File "/tensorflow-1.15.2/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py", line 1417, in convolution2d_transpose outputs = layer.apply(inputs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/util/deprecation.py", line 324, in new_func return func(*args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/engine/base_layer.py", line 1700, in apply return self.call(inputs, *args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/layers/base.py", line 548, in call outputs = super(Layer, self).call(inputs, *args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/engine/base_layer.py", line 854, in call outputs = call_fn(cast_inputs, *args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/autograph/impl/api.py", line 234, in wrapper return converted_call(f, options, args, kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/autograph/impl/api.py", line 439, in converted_call return _call_unconverted(f, args, kwargs, options) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/autograph/impl/api.py", line 330, in _call_unconverted return f(*args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/layers/convolutional.py", line 835, in call dilation_rate=self.dilation_rate) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/backend.py", line 4823, in conv2d_transpose data_format=tf_data_format) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/nn_ops.py", line 2204, in conv2d_transpose name=name) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/nn_ops.py", line 2275, in conv2d_transpose_v2 name=name) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/gen_nn_ops.py", line 1407, in conv2d_backprop_input name=name) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper op_def=op_def) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3357, in create_op attrs, op_def, compute_device) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal op_def=op_def) File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 1748, in init self._traceback = tf_stack.extract_stack()

    opened by OnlyAlec 5
  • Training #3 but ended up without error rising up

Could you help me with some suggestions for training the TecoGAN model (runGan.py 3)? Here are the last three lines of my run history:

total size: 20024384 VGG19 restored successfully!! Loading weights from the pre-trained model to start a new training...

but it returns to the terminal command line without any error showing up. By the way, runGan.py 4 (FRVSR) runs successfully. Thank you in advance for your time.

    opened by schoengzc 5
  • Training isn't starting with test case 3 or 4

Hi, I downloaded and prepared the dataset. When I choose option 3 or 4 for training the network, all it does is run one round of evaluation on the calendar dataset and then quit. Any help with that is appreciated. Here's my output from runGan.py 4:

    Testing test case 4
    Delete existing folder ex_FRVSR06-23-14/?(Y/N)
    y
    ex_FRVSR06-23-14_1/
    Using TensorFlow backend.
    Preparing train_data
    [Config] Use random crop
    [Config] Use random crop
    [Config] Use random flip
    Sequenced batches: 27610, sequence length: 10
    Preparing validation_data
    [Config] Use random crop
    [Config] Use random crop
    [Config] Use random flip
    Sequenced batches: 2860, sequence length: 10
    tData count = 27610, steps per epoch 27610
    Finish building the network.
    Scope generator:
    Variable: generator/generator_unit/input_stage/conv/Conv/weights:0
    Shape: [3, 3, 51, 64]
    Variable: generator/generator_unit/input_stage/conv/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_1/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_1/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_1/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_1/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_2/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_2/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_2/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_2/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_3/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_3/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_3/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_3/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_4/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_4/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_4/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_4/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_5/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_5/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_5/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_5/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_6/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_6/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_6/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_6/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_7/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_7/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_7/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_7/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_8/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_8/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_8/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_8/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_9/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_9/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_9/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_9/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_10/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_10/conv_1/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/resblock_10/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/resblock_10/conv_2/Conv/biases:0
    Shape: [64]
    Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/biases:0
    Shape: [64]
    Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/weights:0
    Shape: [3, 3, 64, 64]
    Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/biases:0
    Shape: [64]
    Variable: generator/generator_unit/output_stage/conv/Conv/weights:0
    Shape: [3, 3, 64, 3]
    Variable: generator/generator_unit/output_stage/conv/Conv/biases:0
    Shape: [3]
    total size: 843587
    Scope fnet:
    Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/weights:0
    Shape: [3, 3, 6, 32]
    Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/biases:0
    Shape: [32]
    Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/weights:0
    Shape: [3, 3, 32, 32]
    Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/biases:0
    Shape: [32]
    Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/weights:0
    Shape: [3, 3, 32, 64]
    Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/biases:0
    Shape: [64]
    Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/biases:0
    Shape: [64]
    Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/weights:0
    Shape: [3, 3, 64, 128]
    Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/biases:0
    Shape: [128]
    Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/weights:0
    Shape: [3, 3, 128, 128]
    Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/biases:0
    Shape: [128]
    Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/weights:0
    Shape: [3, 3, 128, 256]
    Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/biases:0
    Shape: [256]
    Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/weights:0
    Shape: [3, 3, 256, 256]
    Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/biases:0
    Shape: [256]
    Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/weights:0
    Shape: [3, 3, 256, 128]
    Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/biases:0
    Shape: [128]
    Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/weights:0
    Shape: [3, 3, 128, 128]
    Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/biases:0
    Shape: [128]
    Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/weights:0
    Shape: [3, 3, 128, 64]
    Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/biases:0
    Shape: [64]
    Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/weights:0
    Shape: [3, 3, 64, 64]
    Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/biases:0
    Shape: [64]
    Variable: fnet/autoencode_unit/output_stage/conv1/Conv/weights:0
    Shape: [3, 3, 64, 32]
    Variable: fnet/autoencode_unit/output_stage/conv1/Conv/biases:0
    Shape: [32]
    Variable: fnet/autoencode_unit/output_stage/conv2/Conv/weights:0
    Shape: [3, 3, 32, 2]
    Variable: fnet/autoencode_unit/output_stage/conv2/Conv/biases:0
    Shape: [2]
    total size: 1745506
    The first run takes longer time for training data loading...
    Save initial checkpoint, before any training
    [testWhileTrain] step 0:
    python3 main.py --output_dir ex_FRVSR06-23-14_1/train/ --summary_dir ex_FRVSR06-23-14_1/train/ --mode inference --num_resblock 10 --checkpoint ex_FRVSR06-23-14_1/model-0 --cudaID 0 --input_dir_LR ./LR/calendar/ --output_pre  --output_name 000000000 --input_dir_len 10
    Using TensorFlow backend.
    input shape: [1, 144, 180, 3]
    output shape: [1, 576, 720, 3]
    Finish building the network
    Loading weights from ckpt model
    Frame evaluation starts!!
    Warming up 5
    Warming up 4
    Warming up 3
    Warming up 2
    Warming up 1
    saving image 000000000_0001
    saving image 000000000_0002
    saving image 000000000_0003
    saving image 000000000_0004
    saving image 000000000_0005
    saving image 000000000_0006
    saving image 000000000_0007
    saving image 000000000_0008
    saving image 000000000_0009
    saving image 000000000_0010
    total time 1.9974193572998047, frame number 15
    
    
    

    and it quits after that without any errors

    opened by AloshkaD 4
  • No GPU utilization for inference

Is the inference using the GPU? On my PC it is very slow, around 1 second per frame, despite having a GTX 1080 Ti, and the GPU utilization is very low (always below 10%).

    opened by JonathanLehner 4
  • Low gpu usage

Hello, I'm currently using an Nvidia RTX 3080 and trying to upscale the resolution of a video. However, when the TecoGAN process starts, GPU usage drops to 10% and CPU usage is stuck at 60%. I already installed CUDA 11.7, cuDNN and TensorFlow. I'm very new to this, so please bear with me. Is there any way to increase GPU usage? Frames take 1 second each to render, and that's a lot.

    feel free to message me on discord VAIRUX#3359

    tecogan

    opened by vairux 0
  • dataPrepare.py don't work

Hi, I am trying to train the GAN with the dataset from dataPrepare.py. When I execute dataPrepare.py I get some errors. I've noticed that almost all of the videos give me errors when downloading.

    There is an example with https://vimeo.com/160578133 errorTecoGan

    opened by kalomano 0
  • successful operation but no output

I ran the program without any more errors, BUT I didn't get the results I wanted, and I really do not know why, so I ask for your help. It just outputs (Testing test case 1).

    opened by cillyu 0
  • Error training own dataset - LR + HR

    I'm trying to train a new model based on my own dataset consisting of my own LR + HR images (as opposed to LR images being generated on the fly). I've adjusted the code in runGan.py & dataloader.py accordingly (there have been a couple of threads about this already), but when I try to train it, I get this error:

    "tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [offset_height must be >= 0.] [[{{node load_frame_cpu/train_data/data_preprocessing/random_crop/crop_to_bounding_box_11/Assert_1/Assert}}]]"

Full log below. Ignore the notification regarding folders with empty frames; I'll add those once I get the actual training process working.

    Microsoft Windows [Version 10.0.19042.1586] (c) Microsoft Corporation. All rights reserved.

    D:\TecoGAN-mastertrain>python rungan.py 3 Testing test case 3 2022-03-29 18:12:25.068885: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

    • https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
    • https://github.com/tensorflow/addons
    • https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue.

    Using TensorFlow backend. Preparing train_data Skip HR/scene_2247, since foler doesn't contain enough frames! Skip HR/scene_2251, since foler doesn't contain enough frames! Skip HR/scene_2317, since foler doesn't contain enough frames! Skip HR/scene_2437, since foler doesn't contain enough frames! WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:210: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

    [Config] Use random crop WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:224: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

    WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:233: The name tf.read_file is deprecated. Please use tf.io.read_file instead.

    [Config] Use random crop [Config] Use random flip Sequenced batches: 47430, sequence length: 10 Preparing validation_data [Config] Use random crop [Config] Use random crop [Config] Use random flip Sequenced batches: 2520, sequence length: 10 tData count = 47430, steps per epoch 23715 WARNING:tensorflow:From C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py:64: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

    WARNING:tensorflow:From main.py:296: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.

    variable not found in ckpt: generator/generator_unit/resblock_11/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_11/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_11/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_11/conv_2/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_12/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_12/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_12/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_12/conv_2/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_13/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_13/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_13/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_13/conv_2/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_14/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_14/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_14/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_14/conv_2/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_15/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_15/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_15/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_15/conv_2/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_16/conv_1/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_16/conv_1/Conv/biases:0 Assign Zero of (64,) variable not found in ckpt: generator/generator_unit/resblock_16/conv_2/Conv/weights:0 Assign Zero of (3, 3, 64, 64) variable not found in ckpt: generator/generator_unit/resblock_16/conv_2/Conv/biases:0 Assign Zero of (64,) Prepare to load 100 weights from the pre-trained model for generator and fnet Prepare to load 0 weights from the pre-trained model for discriminator Finish building the network. 
2022-03-29 18:13:17.363319: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2022-03-29 18:13:17.366164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2022-03-29 18:13:17.387341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: NVIDIA GeForce RTX 2070 SUPER major: 7 minor: 5 memoryClockRate(GHz): 1.8 pciBusID: 0000:08:00.0 2022-03-29 18:13:17.387430: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll 2022-03-29 18:13:17.389661: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll 2022-03-29 18:13:17.391723: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll 2022-03-29 18:13:17.392546: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll 2022-03-29 18:13:17.395866: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll 2022-03-29 18:13:17.397837: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll 2022-03-29 18:13:17.404255: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2022-03-29 18:13:17.404338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2022-03-29 18:13:17.830082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix: 2022-03-29 18:13:17.830157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 2022-03-29 18:13:17.830648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N 2022-03-29 18:13:17.830920: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6695 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2070 SUPER, pci bus id: 0000:08:00.0, compute capability: 7.5) Scope generator: Variable: generator/generator_unit/input_stage/conv/Conv/weights:0 Shape: [3, 3, 51, 64] Variable: generator/generator_unit/input_stage/conv/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_1/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_1/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_1/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_1/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_2/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_2/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_2/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_2/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_3/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_3/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_3/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_3/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_4/conv_1/Conv/weights:0 Shape: 
[3, 3, 64, 64] Variable: generator/generator_unit/resblock_4/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_4/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_4/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_5/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_5/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_5/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_5/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_6/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_6/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_6/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_6/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_7/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_7/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_7/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_7/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_8/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_8/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_8/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_8/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_9/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_9/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_9/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_9/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_10/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_10/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_10/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_10/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_11/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_11/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_11/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_11/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_12/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_12/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_12/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_12/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_13/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_13/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_13/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_13/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_14/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_14/conv_1/Conv/biases:0 Shape: [64] Variable: 
generator/generator_unit/resblock_14/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_14/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_15/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_15/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_15/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_15/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_16/conv_1/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_16/conv_1/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/resblock_16/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/resblock_16/conv_2/Conv/biases:0 Shape: [64] Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/biases:0 Shape: [64] Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/weights:0 Shape: [3, 3, 64, 64] Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/biases:0 Shape: [64] Variable: generator/generator_unit/output_stage/conv/Conv/weights:0 Shape: [3, 3, 64, 3] Variable: generator/generator_unit/output_stage/conv/Conv/biases:0 Shape: [3] total size: 1286723 Scope fnet: Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/weights:0 Shape: [3, 3, 6, 32] Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/biases:0 Shape: [32] Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/weights:0 Shape: [3, 3, 32, 32] Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/biases:0 Shape: [32] Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/weights:0 Shape: [3, 3, 32, 64] Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/biases:0 Shape: [64] Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/biases:0 Shape: [64] Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/weights:0 Shape: [3, 3, 64, 128] Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/biases:0 Shape: [128] Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/weights:0 Shape: [3, 3, 128, 128] Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/biases:0 Shape: [128] Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/weights:0 Shape: [3, 3, 128, 256] Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/biases:0 Shape: [256] Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/weights:0 Shape: [3, 3, 256, 256] Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/biases:0 Shape: [256] Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/weights:0 Shape: [3, 3, 256, 128] Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/biases:0 Shape: [128] Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/weights:0 Shape: [3, 3, 128, 128] Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/biases:0 Shape: [128] Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/weights:0 Shape: [3, 3, 128, 64] Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/biases:0 Shape: [64] Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/weights:0 Shape: [3, 3, 64, 64] Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/biases:0 Shape: [64] Variable: fnet/autoencode_unit/output_stage/conv1/Conv/weights:0 Shape: [3, 3, 64, 32] Variable: fnet/autoencode_unit/output_stage/conv1/Conv/biases:0 
Shape: [32] Variable: fnet/autoencode_unit/output_stage/conv2/Conv/weights:0 Shape: [3, 3, 32, 2] Variable: fnet/autoencode_unit/output_stage/conv2/Conv/biases:0 Shape: [2] total size: 1745506 Scope tdiscriminator: Variable: tdiscriminator/discriminator_unit/input_stage/conv/Conv/weights:0 Shape: [3, 3, 27, 64] Variable: tdiscriminator/discriminator_unit/input_stage/conv/Conv/biases:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_1/conv1/Conv/weights:0 Shape: [4, 4, 64, 64] Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/beta:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/moving_mean:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/moving_variance:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_3/conv1/Conv/weights:0 Shape: [4, 4, 64, 64] Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/beta:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/moving_mean:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/moving_variance:0 Shape: [64] Variable: tdiscriminator/discriminator_unit/disblock_5/conv1/Conv/weights:0 Shape: [4, 4, 64, 128] Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/beta:0 Shape: [128] Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/moving_mean:0 Shape: [128] Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/moving_variance:0 Shape: [128] Variable: tdiscriminator/discriminator_unit/disblock_7/conv1/Conv/weights:0 Shape: [4, 4, 128, 256] Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/beta:0 Shape: [256] Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/moving_mean:0 Shape: [256] Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/moving_variance:0 Shape: [256] Variable: tdiscriminator/discriminator_unit/dense_layer_2/dense/kernel:0 Shape: [256, 1] Variable: tdiscriminator/discriminator_unit/dense_layer_2/dense/kernel:0 Shape: [256, 1] total size: 804096 Scope vgg_19: Variable: vgg_19/conv1/conv1_1/weights:0 Shape: [3, 3, 3, 64] Variable: vgg_19/conv1/conv1_1/biases:0 Shape: [64] Variable: vgg_19/conv1/conv1_2/weights:0 Shape: [3, 3, 64, 64] Variable: vgg_19/conv1/conv1_2/biases:0 Shape: [64] Variable: vgg_19/conv2/conv2_1/weights:0 Shape: [3, 3, 64, 128] Variable: vgg_19/conv2/conv2_1/biases:0 Shape: [128] Variable: vgg_19/conv2/conv2_2/weights:0 Shape: [3, 3, 128, 128] Variable: vgg_19/conv2/conv2_2/biases:0 Shape: [128] Variable: vgg_19/conv3/conv3_1/weights:0 Shape: [3, 3, 128, 256] Variable: vgg_19/conv3/conv3_1/biases:0 Shape: [256] Variable: vgg_19/conv3/conv3_2/weights:0 Shape: [3, 3, 256, 256] Variable: vgg_19/conv3/conv3_2/biases:0 Shape: [256] Variable: vgg_19/conv3/conv3_3/weights:0 Shape: [3, 3, 256, 256] Variable: vgg_19/conv3/conv3_3/biases:0 Shape: [256] Variable: vgg_19/conv3/conv3_4/weights:0 Shape: [3, 3, 256, 256] Variable: vgg_19/conv3/conv3_4/biases:0 Shape: [256] Variable: vgg_19/conv4/conv4_1/weights:0 Shape: [3, 3, 256, 512] Variable: vgg_19/conv4/conv4_1/biases:0 Shape: [512] Variable: vgg_19/conv4/conv4_2/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv4/conv4_2/biases:0 Shape: [512] Variable: vgg_19/conv4/conv4_3/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv4/conv4_3/biases:0 Shape: [512] Variable: vgg_19/conv4/conv4_4/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv4/conv4_4/biases:0 Shape: [512] Variable: vgg_19/conv5/conv5_1/weights:0 Shape: [3, 3, 
512, 512] Variable: vgg_19/conv5/conv5_1/biases:0 Shape: [512] Variable: vgg_19/conv5/conv5_2/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv5/conv5_2/biases:0 Shape: [512] Variable: vgg_19/conv5/conv5_3/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv5/conv5_3/biases:0 Shape: [512] Variable: vgg_19/conv5/conv5_4/weights:0 Shape: [3, 3, 512, 512] Variable: vgg_19/conv5/conv5_4/biases:0 Shape: [512] total size: 20024384 VGG19 restored successfully!! Loading weights from the pre-trained model to start a new training... The first run takes longer time for training data loading... Save initial checkpoint, before any training 2022-03-29 18:21:17.681164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll Traceback (most recent call last): File "main.py", line 388, in results = sess.run(fetches) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 754, in run run_metadata=run_metadata) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1259, in run run_metadata=run_metadata) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1358, in run raise six.reraise(*original_exc_info) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\six.py", line 719, in reraise raise value File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1345, in run return self._sess.run(*args, **kwargs) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1418, in run run_metadata=run_metadata) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1176, in run return self._sess.run(*args, **kwargs) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run run_metadata_ptr) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run feed_dict_tensor, options, run_metadata) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run run_metadata) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found. 
(0) Out of range: RandomShuffleQueue '_5_load_frame_cpu/validation_data/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 2, current size 0) [[node load_frame_cpu/validation_data/shuffle_batch (defined at C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]] (1) Out of range: RandomShuffleQueue '_5_load_frame_cpu/validation_data/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 2, current size 0) [[node load_frame_cpu/validation_data/shuffle_batch (defined at C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]] [[generator/dense_image_warp_13/interpolate_bilinear/assert_greater_equal/Assert/Assert/data_4/_2867]] 0 successful operations. 0 derived errors ignored.

    Original stack trace for 'load_frame_cpu/validation_data/shuffle_batch': File "main.py", line 281, in rdata = frvsr_gpu_data_loader(FLAGS, useValidat) File "D:\TecoGAN-mastertrain\lib\dataloader.py", line 331, in frvsr_gpu_data_loader vald_batch_list, vald_num_image_list_HR_t_cur = loadHRfunc(valFLAGS, tar_size) File "D:\TecoGAN-mastertrain\lib\dataloader.py", line 304, in loadHR min_after_dequeue=FLAGS.video_queue_capacity, num_threads=FLAGS.queue_thread, seed = FLAGS.rand_seed) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func return func(*args, **kwargs) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\input.py", line 1347, in shuffle_batch name=name) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\input.py", line 874, in _shuffle_batch dequeued = queue.dequeue_many(batch_size, name=name) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\data_flow_ops.py", line 489, in dequeue_many self._queue_ref, n=n, component_types=self._dtypes, name=name) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\gen_data_flow_ops.py", line 3862, in queue_dequeue_many_v2 timeout_ms=timeout_ms, name=name) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper op_def=op_def) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func return func(*args, **kwargs) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op attrs, op_def, compute_device) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal op_def=op_def) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init self._traceback = tf_stack.extract_stack()

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 431, in print('Optimization done!!!!!!!!!!!!') File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 861, in exit self._close_internal(exception_type) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 899, in _close_internal self._sess.close() File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1166, in close self._sess.close() File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1334, in close ignore_live_threads=True) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\coordinator.py", line 389, in join six.reraise(*self._exc_info_to_raise) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\six.py", line 718, in reraise raise value.with_traceback(tb) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\queue_runner_impl.py", line 257, in _run enqueue_callable() File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1287, in _single_operation_run self._call_tf_sessionrun(None, {}, [], target_list, None) File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [offset_height must be >= 0.] [[{{node load_frame_cpu/train_data/data_preprocessing/random_crop/crop_to_bounding_box_11/Assert_1/Assert}}]]

    D:\TecoGAN-mastertrain>

    opened by toaster345 2
  • Tensor Flow 2 Compatibility

I added some changes to run "python3 runGan.py 1" with TensorFlow 2.

You also need to install: pip install tf_slim and pip install tensorflow_addons

    opened by RAFALAMAO 1