Hi,
Thanks for the code. I am wondering what would happen if I used distributed training with TensorFlow in your project, since I have 2 GPUs. I see that during the training phase your code splits the image data across several GPUs and then feeds each split in a for loop that iterates over the GPUs.
I am not sure this is the optimal way to do it. I am not very familiar with TensorFlow, but after looking around a bit I found the official guide on Distributed training with TensorFlow.
So this is not a bug report, just a suggestion: I think your training loop could be simplified and possibly sped up by applying that approach.
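For example, something like `tf.distribute.MirroredStrategy` replicates the model on all visible GPUs and handles the gradient all-reduce for you, so the manual per-GPU for loop goes away. Below is a minimal sketch of the pattern; the model and dataset are placeholders I made up for illustration (I have not looked at your actual model code), so they would need to be swapped for the project's real ones:

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# aggregates gradients across replicas automatically.
strategy = tf.distribute.MirroredStrategy()
print(f"Number of replicas: {strategy.num_replicas_in_sync}")

# Hypothetical model just to show the pattern; replace with the
# project's actual model construction.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Each global batch is split across the GPUs automatically, so the
# batch size is usually scaled by the number of replicas.
global_batch_size = 64 * strategy.num_replicas_in_sync

# Dummy random data standing in for the real image pipeline.
images = tf.random.uniform((256, 224, 224, 3))
labels = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(global_batch_size)

model.fit(dataset, epochs=2)
```

If you prefer to keep a custom training loop instead of `model.fit`, the same strategy object also supports that via `strategy.run`, as described in the guide linked above. Just a thought!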