I'm trying to train a new model on my own dataset of paired LR + HR images (as opposed to having the LR images generated on the fly). I've adjusted the code in runGan.py and dataloader.py accordingly (there have been a couple of threads about this already), but when I try to train, I get this error:
"tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [offset_height must be >= 0.]
[[{{node load_frame_cpu/train_data/data_preprocessing/random_crop/crop_to_bounding_box_11/Assert_1/Assert}}]]"
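From what I can tell, the assertion comes from the random-crop step in dataloader.py. My understanding (a minimal sketch only, not the actual TecoGAN code, and the frame/crop sizes below are made-up placeholders) is that tf.image.crop_to_bounding_box trips exactly this assertion when a frame is smaller than the crop window, because the sampled offset then comes out negative:

# Minimal sketch of the random-crop step as I understand it (NOT the TecoGAN
# dataloader; crop_px, frame_h and frame_w are made-up placeholder values).
import tensorflow as tf  # TF 1.15 here, graph mode

crop_px = 32                # crop window (crop_size on LR, 4*crop_size on HR?)
frame_h, frame_w = 24, 40   # a frame that is shorter than the crop window

frame = tf.zeros([frame_h, frame_w, 3])
# offset is sampled from [0, frame_h - crop_px), which is a negative range here,
# so the floored offset is (almost surely) negative
offset_h = tf.cast(tf.floor(tf.random.uniform([], 0.0, float(frame_h - crop_px))), tf.int32)
offset_w = tf.cast(tf.floor(tf.random.uniform([], 0.0, float(frame_w - crop_px))), tf.int32)
cropped = tf.image.crop_to_bounding_box(frame, offset_h, offset_w, crop_px, crop_px)

with tf.compat.v1.Session() as sess:
    sess.run(cropped)
    # -> InvalidArgumentError: assertion failed: [offset_height must be >= 0.]

So my current suspicion is that at least one scene contains frames smaller than the crop window (crop_size on the LR side, presumably 4 x crop_size on the HR side), but I haven't verified that yet - see the check at the end of this post.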
Full log below. Ignore the notifications about folders that don't contain enough frames - I'll add those frames once I get the actual training process working.
Microsoft Windows [Version 10.0.19042.1586]
(c) Microsoft Corporation. All rights reserved.
D:\TecoGAN-mastertrain>python rungan.py 3
Testing test case 3
2022-03-29 18:12:25.068885: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
- https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
- https://github.com/tensorflow/addons
- https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
Using TensorFlow backend.
Preparing train_data
Skip HR/scene_2247, since foler doesn't contain enough frames!
Skip HR/scene_2251, since foler doesn't contain enough frames!
Skip HR/scene_2317, since foler doesn't contain enough frames!
Skip HR/scene_2437, since foler doesn't contain enough frames!
WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:210: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
[Config] Use random crop
WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:224: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
WARNING:tensorflow:From D:\TecoGAN-mastertrain\lib\dataloader.py:233: The name tf.read_file is deprecated. Please use tf.io.read_file instead.
[Config] Use random crop
[Config] Use random flip
Sequenced batches: 47430, sequence length: 10
Preparing validation_data
[Config] Use random crop
[Config] Use random crop
[Config] Use random flip
Sequenced batches: 2520, sequence length: 10
tData count = 47430, steps per epoch 23715
WARNING:tensorflow:From C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py:64: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From main.py:296: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.
variable not found in ckpt: generator/generator_unit/resblock_11/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_11/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_11/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_11/conv_2/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_12/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_12/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_12/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_12/conv_2/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_13/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_13/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_13/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_13/conv_2/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_14/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_14/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_14/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_14/conv_2/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_15/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_15/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_15/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_15/conv_2/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_16/conv_1/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_16/conv_1/Conv/biases:0
Assign Zero of (64,)
variable not found in ckpt: generator/generator_unit/resblock_16/conv_2/Conv/weights:0
Assign Zero of (3, 3, 64, 64)
variable not found in ckpt: generator/generator_unit/resblock_16/conv_2/Conv/biases:0
Assign Zero of (64,)
Prepare to load 100 weights from the pre-trained model for generator and fnet
Prepare to load 0 weights from the pre-trained model for discriminator
Finish building the network.
2022-03-29 18:13:17.363319: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2022-03-29 18:13:17.366164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2022-03-29 18:13:17.387341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 2070 SUPER major: 7 minor: 5 memoryClockRate(GHz): 1.8
pciBusID: 0000:08:00.0
2022-03-29 18:13:17.387430: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2022-03-29 18:13:17.389661: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-03-29 18:13:17.391723: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2022-03-29 18:13:17.392546: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2022-03-29 18:13:17.395866: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2022-03-29 18:13:17.397837: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2022-03-29 18:13:17.404255: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-03-29 18:13:17.404338: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-03-29 18:13:17.830082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-03-29 18:13:17.830157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2022-03-29 18:13:17.830648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2022-03-29 18:13:17.830920: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6695 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2070 SUPER, pci bus id: 0000:08:00.0, compute capability: 7.5)
Scope generator:
Variable: generator/generator_unit/input_stage/conv/Conv/weights:0
Shape: [3, 3, 51, 64]
Variable: generator/generator_unit/input_stage/conv/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_1/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_1/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_1/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_1/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_2/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_2/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_2/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_2/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_3/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_3/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_3/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_3/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_4/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_4/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_4/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_4/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_5/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_5/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_5/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_5/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_6/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_6/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_6/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_6/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_7/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_7/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_7/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_7/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_8/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_8/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_8/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_8/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_9/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_9/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_9/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_9/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_10/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_10/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_10/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_10/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_11/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_11/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_11/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_11/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_12/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_12/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_12/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_12/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_13/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_13/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_13/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_13/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_14/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_14/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_14/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_14/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_15/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_15/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_15/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_15/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_16/conv_1/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_16/conv_1/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/resblock_16/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/resblock_16/conv_2/Conv/biases:0
Shape: [64]
Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/conv_tran2highres/conv_tran1/Conv2d_transpose/biases:0
Shape: [64]
Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/weights:0
Shape: [3, 3, 64, 64]
Variable: generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/biases:0
Shape: [64]
Variable: generator/generator_unit/output_stage/conv/Conv/weights:0
Shape: [3, 3, 64, 3]
Variable: generator/generator_unit/output_stage/conv/Conv/biases:0
Shape: [3]
total size: 1286723
Scope fnet:
Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/weights:0
Shape: [3, 3, 6, 32]
Variable: fnet/autoencode_unit/encoder_1/conv_1/Conv/biases:0
Shape: [32]
Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/weights:0
Shape: [3, 3, 32, 32]
Variable: fnet/autoencode_unit/encoder_1/conv_2/Conv/biases:0
Shape: [32]
Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/weights:0
Shape: [3, 3, 32, 64]
Variable: fnet/autoencode_unit/encoder_2/conv_1/Conv/biases:0
Shape: [64]
Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: fnet/autoencode_unit/encoder_2/conv_2/Conv/biases:0
Shape: [64]
Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/weights:0
Shape: [3, 3, 64, 128]
Variable: fnet/autoencode_unit/encoder_3/conv_1/Conv/biases:0
Shape: [128]
Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/weights:0
Shape: [3, 3, 128, 128]
Variable: fnet/autoencode_unit/encoder_3/conv_2/Conv/biases:0
Shape: [128]
Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/weights:0
Shape: [3, 3, 128, 256]
Variable: fnet/autoencode_unit/decoder_1/conv_1/Conv/biases:0
Shape: [256]
Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/weights:0
Shape: [3, 3, 256, 256]
Variable: fnet/autoencode_unit/decoder_1/conv_2/Conv/biases:0
Shape: [256]
Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/weights:0
Shape: [3, 3, 256, 128]
Variable: fnet/autoencode_unit/decoder_2/conv_1/Conv/biases:0
Shape: [128]
Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/weights:0
Shape: [3, 3, 128, 128]
Variable: fnet/autoencode_unit/decoder_2/conv_2/Conv/biases:0
Shape: [128]
Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/weights:0
Shape: [3, 3, 128, 64]
Variable: fnet/autoencode_unit/decoder_3/conv_1/Conv/biases:0
Shape: [64]
Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/weights:0
Shape: [3, 3, 64, 64]
Variable: fnet/autoencode_unit/decoder_3/conv_2/Conv/biases:0
Shape: [64]
Variable: fnet/autoencode_unit/output_stage/conv1/Conv/weights:0
Shape: [3, 3, 64, 32]
Variable: fnet/autoencode_unit/output_stage/conv1/Conv/biases:0
Shape: [32]
Variable: fnet/autoencode_unit/output_stage/conv2/Conv/weights:0
Shape: [3, 3, 32, 2]
Variable: fnet/autoencode_unit/output_stage/conv2/Conv/biases:0
Shape: [2]
total size: 1745506
Scope tdiscriminator:
Variable: tdiscriminator/discriminator_unit/input_stage/conv/Conv/weights:0
Shape: [3, 3, 27, 64]
Variable: tdiscriminator/discriminator_unit/input_stage/conv/Conv/biases:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_1/conv1/Conv/weights:0
Shape: [4, 4, 64, 64]
Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/beta:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/moving_mean:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_1/BatchNorm/moving_variance:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_3/conv1/Conv/weights:0
Shape: [4, 4, 64, 64]
Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/beta:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/moving_mean:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_3/BatchNorm/moving_variance:0
Shape: [64]
Variable: tdiscriminator/discriminator_unit/disblock_5/conv1/Conv/weights:0
Shape: [4, 4, 64, 128]
Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/beta:0
Shape: [128]
Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/moving_mean:0
Shape: [128]
Variable: tdiscriminator/discriminator_unit/disblock_5/BatchNorm/moving_variance:0
Shape: [128]
Variable: tdiscriminator/discriminator_unit/disblock_7/conv1/Conv/weights:0
Shape: [4, 4, 128, 256]
Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/beta:0
Shape: [256]
Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/moving_mean:0
Shape: [256]
Variable: tdiscriminator/discriminator_unit/disblock_7/BatchNorm/moving_variance:0
Shape: [256]
Variable: tdiscriminator/discriminator_unit/dense_layer_2/dense/kernel:0
Shape: [256, 1]
total size: 804096
Scope vgg_19:
Variable: vgg_19/conv1/conv1_1/weights:0
Shape: [3, 3, 3, 64]
Variable: vgg_19/conv1/conv1_1/biases:0
Shape: [64]
Variable: vgg_19/conv1/conv1_2/weights:0
Shape: [3, 3, 64, 64]
Variable: vgg_19/conv1/conv1_2/biases:0
Shape: [64]
Variable: vgg_19/conv2/conv2_1/weights:0
Shape: [3, 3, 64, 128]
Variable: vgg_19/conv2/conv2_1/biases:0
Shape: [128]
Variable: vgg_19/conv2/conv2_2/weights:0
Shape: [3, 3, 128, 128]
Variable: vgg_19/conv2/conv2_2/biases:0
Shape: [128]
Variable: vgg_19/conv3/conv3_1/weights:0
Shape: [3, 3, 128, 256]
Variable: vgg_19/conv3/conv3_1/biases:0
Shape: [256]
Variable: vgg_19/conv3/conv3_2/weights:0
Shape: [3, 3, 256, 256]
Variable: vgg_19/conv3/conv3_2/biases:0
Shape: [256]
Variable: vgg_19/conv3/conv3_3/weights:0
Shape: [3, 3, 256, 256]
Variable: vgg_19/conv3/conv3_3/biases:0
Shape: [256]
Variable: vgg_19/conv3/conv3_4/weights:0
Shape: [3, 3, 256, 256]
Variable: vgg_19/conv3/conv3_4/biases:0
Shape: [256]
Variable: vgg_19/conv4/conv4_1/weights:0
Shape: [3, 3, 256, 512]
Variable: vgg_19/conv4/conv4_1/biases:0
Shape: [512]
Variable: vgg_19/conv4/conv4_2/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv4/conv4_2/biases:0
Shape: [512]
Variable: vgg_19/conv4/conv4_3/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv4/conv4_3/biases:0
Shape: [512]
Variable: vgg_19/conv4/conv4_4/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv4/conv4_4/biases:0
Shape: [512]
Variable: vgg_19/conv5/conv5_1/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv5/conv5_1/biases:0
Shape: [512]
Variable: vgg_19/conv5/conv5_2/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv5/conv5_2/biases:0
Shape: [512]
Variable: vgg_19/conv5/conv5_3/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv5/conv5_3/biases:0
Shape: [512]
Variable: vgg_19/conv5/conv5_4/weights:0
Shape: [3, 3, 512, 512]
Variable: vgg_19/conv5/conv5_4/biases:0
Shape: [512]
total size: 20024384
VGG19 restored successfully!!
Loading weights from the pre-trained model to start a new training...
The first run takes longer time for training data loading...
Save initial checkpoint, before any training
2022-03-29 18:21:17.681164: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
Traceback (most recent call last):
File "main.py", line 388, in
results = sess.run(fetches)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 754, in run
run_metadata=run_metadata)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1259, in run
run_metadata=run_metadata)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1358, in run
raise six.reraise(*original_exc_info)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\six.py", line 719, in reraise
raise value
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1345, in run
return self._sess.run(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1418, in run
run_metadata=run_metadata)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1176, in run
return self._sess.run(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
(0) Out of range: RandomShuffleQueue '_5_load_frame_cpu/validation_data/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 2, current size 0)
[[node load_frame_cpu/validation_data/shuffle_batch (defined at C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
(1) Out of range: RandomShuffleQueue '_5_load_frame_cpu/validation_data/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 2, current size 0)
[[node load_frame_cpu/validation_data/shuffle_batch (defined at C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
[[generator/dense_image_warp_13/interpolate_bilinear/assert_greater_equal/Assert/Assert/data_4/_2867]]
0 successful operations.
0 derived errors ignored.
Original stack trace for 'load_frame_cpu/validation_data/shuffle_batch':
File "main.py", line 281, in
rdata = frvsr_gpu_data_loader(FLAGS, useValidat)
File "D:\TecoGAN-mastertrain\lib\dataloader.py", line 331, in frvsr_gpu_data_loader
vald_batch_list, vald_num_image_list_HR_t_cur = loadHRfunc(valFLAGS, tar_size)
File "D:\TecoGAN-mastertrain\lib\dataloader.py", line 304, in loadHR
min_after_dequeue=FLAGS.video_queue_capacity, num_threads=FLAGS.queue_thread, seed = FLAGS.rand_seed)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\input.py", line 1347, in shuffle_batch
name=name)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\input.py", line 874, in _shuffle_batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\data_flow_ops.py", line 489, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\gen_data_flow_ops.py", line 3862, in queue_dequeue_many_v2
timeout_ms=timeout_ms, name=name)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 431, in
print('Optimization done!!!!!!!!!!!!')
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 861, in exit
self._close_internal(exception_type)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 899, in _close_internal
self._sess.close()
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1166, in close
self._sess.close()
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1334, in close
ignore_live_threads=True)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\six.py", line 718, in reraise
raise value.with_traceback(tb)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\training\queue_runner_impl.py", line 257, in _run
enqueue_callable()
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1287, in _single_operation_run
self._call_tf_sessionrun(None, {}, [], target_list, None)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [offset_height must be >= 0.]
[[{{node load_frame_cpu/train_data/data_preprocessing/random_crop/crop_to_bounding_box_11/Assert_1/Assert}}]]
D:\TecoGAN-mastertrain>