......
**** EPOCH 001 ****
Current batch/total batch num: 0/9949
2019-06-08 19:12:55.361276: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:666] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2019-06-08 19:12:55.437713: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:666] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2019-06-08 19:13:01.682376: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:666] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
2019-06-08 19:13:01.729082: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:666] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
2019-06-08 19:13:04.594699: E tensorflow/stream_executor/cuda/cuda_dnn.cc:363] Loaded runtime CuDNN library: 7.0.5 but source was compiled with: 7.1.4. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2019-06-08 19:13:04.595317: W ./tensorflow/stream_executor/stream.h:2093] attempting to perform DNN operation using StreamExecutor without DNN support
Traceback (most recent call last):
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1741, in
main()
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 333, in
train()
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 275, in train
train_one_epoch(sess, ops, train_writer)
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 320, in train_one_epoch
feed_dict=feed_dict)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: cudnn PoolBackward launch failed
[[node tower_0/gradients/tower_0/maxpool/maxpool_grad/MaxPoolGrad (defined at /media/zgh/winz/3D/deep_gcns/sem_seg/train.py:223) = MaxPoolGrad[T=DT_FLOAT, data_format="NHWC", ksize=[1, 4096, 1, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:GPU:0"](tower_0/adj_conv_final/Relu, tower_0/maxpool/maxpool, tower_0/gradients/tower_0/Tile_28_grad/Sum)]]
[[{{node tower_0/gradients/AddN_147/_2879}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_19645_tower_0/gradients/AddN_147", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op u'tower_0/gradients/tower_0/maxpool/maxpool_grad/MaxPoolGrad', defined at:
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1741, in
main()
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 333, in
train()
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 223, in train
grads = trainer.compute_gradients(loss)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 519, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 630, in gradients
gate_gradients, aggregation_method, stop_gradients)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 814, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 408, in _MaybeCompile
return grad_fn() # Exit early
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 814, in
lambda: grad_fn(op, *out_grads))
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_grad.py", line 607, in _MaxPoolGrad
data_format=op.get_attr("data_format"))
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 5081, in max_pool_grad
data_format=data_format, name=name)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()
...which was originally created as op u'tower_0/maxpool/maxpool', defined at:
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/runrunrun/pycharm-community-2019.1.1/helpers/pydev/pydevd.py", line 1741, in
main()
[elided 2 identical lines from previous traceback]
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 333, in
train()
File "/media/zgh/winz/3D/deep_gcns/sem_seg/train.py", line 204, in train
skip_connect=SKIP_CONNECT)
File "/media/zgh/winz/3D/deep_gcns/sem_seg/model.py", line 50, in init
fusion = self.build_fusion_block(graphs, num_vertices)
File "/media/zgh/winz/3D/deep_gcns/sem_seg/model.py", line 115, in build_fusion_block
out_max = tf_util.max_pool2d(out, [num_vertices, 1], padding='VALID', scope='maxpool')
File "/media/zgh/winz/3D/deep_gcns/utils/tf_util.py", line 381, in max_pool2d
name=sc.name)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2140, in max_pool
name=name)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 4641, in max_pool
data_format=data_format, name=name)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/media/zgh/df7f0859-33fc-4a7e-afef-851b5c4f4005/zgh/3D/.virtualenvs/py2te112/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()
InternalError (see above for traceback): cudnn PoolBackward launch failed
[[node tower_0/gradients/tower_0/maxpool/maxpool_grad/MaxPoolGrad (defined at /media/zgh/winz/3D/deep_gcns/sem_seg/train.py:223) = MaxPoolGrad[T=DT_FLOAT, data_format="NHWC", ksize=[1, 4096, 1, 1], padding="VALID", strides=[1, 2, 2, 1], _device="/job:localhost/replica:0/task:0/device:GPU:0"](tower_0/adj_conv_final/Relu, tower_0/maxpool/maxpool, tower_0/gradients/tower_0/Tile_28_grad/Sum)]]
[[{{node tower_0/gradients/AddN_147/_2879}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_19645_tower_0/gradients/AddN_147", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
We've got an error while stopping in post-mortem: <type 'exceptions.KeyboardInterrupt'>
Process finished with exit code 1
The run crashes on the very first training batch with "InternalError: cudnn PoolBackward launch failed", and earlier in the log TensorFlow reports a cuDNN version mismatch (runtime library 7.0.5, but the binary was compiled against 7.1.4) followed by "attempting to perform DNN operation using StreamExecutor without DNN support". Could you help me figure out what is going wrong? Thank you very much!
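For reference, one way to confirm which cuDNN version the Python process actually loads at runtime, so it can be compared with the 7.1.4 that this TensorFlow build expects. This is only a minimal sketch: it assumes Linux, a cuDNN 7.x install, and that the shared library is named libcudnn.so.7 and is on the dynamic loader path (those names are assumptions, not something taken from the log).

```python
# Minimal sketch: print the cuDNN version that gets loaded at runtime.
# Assumes Linux and a cuDNN 7.x shared library named libcudnn.so.7 on the loader path.
import ctypes

libcudnn = ctypes.CDLL("libcudnn.so.7")            # raises OSError if the library cannot be found
libcudnn.cudnnGetVersion.restype = ctypes.c_size_t
version = libcudnn.cudnnGetVersion()               # encoded as major*1000 + minor*100 + patch, e.g. 7005

major = version // 1000
minor = (version % 1000) // 100
patch = version % 100
print("Runtime cuDNN version: %d.%d.%d" % (major, minor, patch))
```

If this prints 7.0.5, it matches the error line above, and installing a cuDNN release of at least 7.1 for the installed CUDA toolkit (or pointing the dynamic loader at a matching copy) should be what the "upgrade your CuDNN library" message is asking for.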