A Tensorflow model for text recognition (CNN + seq2seq with visual attention) available as a Python package and compatible with Google Cloud ML Engine.

Overview

Attention-based OCR

Visual attention-based OCR model for image recognition with additional tools for creating TFRecords datasets and exporting the trained model with weights as a SavedModel or a frozen graph.

Acknowledgements

This project is based on a model by Qi Guo and Yuntian Deng. You can find the original model in the da03/Attention-OCR repository.

The model

Authors: Qi Guo and Yuntian Deng.

The model first runs a sliding CNN on the image (images are resized to height 32 while preserving aspect ratio). Then an LSTM is stacked on top of the CNN. Finally, an attention model is used as a decoder for producing the final outputs.

OCR example

Installation

pip install aocr

Note: Tensorflow and Numpy will be installed as dependencies. Additional dependencies are PIL/Pillow, distance, and six.

Note #2: this project works with Tensorflow 1.x. An upgrade to Tensorflow 2 is planned, but if you want to help, please feel free to create a PR.

Usage

Create a dataset

To build a TFRecords dataset, you need a collection of images and an annotation file with their respective labels.

aocr dataset ./datasets/annotations-training.txt ./datasets/training.tfrecords
aocr dataset ./datasets/annotations-testing.txt ./datasets/testing.tfrecords

Annotations are simple text files containing the image paths (either absolute or relative to your working directory) and their corresponding labels:

datasets/images/hello.jpg hello
datasets/images/world.jpg world
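
If your image filenames already encode their labels, a short script can generate the annotation file for you. The snippet below is only a minimal sketch: the datasets/images directory and the filename-equals-label convention are illustrative assumptions, not a requirement of aocr.

import os

# Hypothetical convention: every file in datasets/images is named "<label>.<ext>",
# e.g. hello.jpg carries the label "hello". Adjust to match your own data.
image_dir = "datasets/images"

with open("datasets/annotations-training.txt", "w") as annotations:
    for filename in sorted(os.listdir(image_dir)):
        label = os.path.splitext(filename)[0]
        annotations.write("{} {}\n".format(os.path.join(image_dir, filename), label))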

Train

aocr train ./datasets/training.tfrecords

A new model will be created, and the training will start. Note that it takes quite a long time to reach convergence, since we are training the CNN and attention model simultaneously.

The --steps-per-checkpoint parameter determines how often the model checkpoints will be saved (the default output dir is checkpoints/).

Important: there are many training options available. See the CLI help or the parameters section of this README.

Test and visualize

aocr test ./datasets/testing.tfrecords

Additionally, you can visualize the attention results during testing (saved to out/ by default):

aocr test --visualize ./datasets/testing.tfrecords

Example output images in results/correct:

Image 0 (j/j):

example image 0

Image 1 (u/u):

example image 1

Image 2 (n/n):

example image 2

Image 3 (g/g):

example image 3

Image 4 (l/l):

example image 4

Image 5 (e/e):

example image 5

Export

After the model is trained and a checkpoint is available, it can be exported as either a frozen graph or a SavedModel.

# SavedModel (default):
aocr export ./exported-model

# Frozen graph:
aocr export --format=frozengraph ./exported-model

This loads the weights from the latest checkpoint and exports the model into the ./exported-model directory.

Note: if you passed parameters describing the dimensions of the input images (--max-width, --max-height, etc.) during training, make sure to pass the same values to the export command. Otherwise, the exported model will not work properly when served (next section).
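
If you want to run predictions locally without a serving setup, the exported SavedModel can also be loaded into a plain Tensorflow 1.x session. The snippet below is a sketch rather than an official API of this package: the tensor names (input_image_as_bytes:0, prediction:0, probability:0) come from the serving_default signature of the exported model, and the image path is a placeholder.

import tensorflow as tf

export_dir = "./exported-model"  # directory produced by `aocr export`

# The model takes the encoded image bytes as a string tensor.
with open("datasets/images/hello.jpg", "rb") as img_file:
    image_bytes = img_file.read()

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel under the default serving tag and run the
    # prediction and probability tensors exposed by its signature.
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    prediction, probability = sess.run(
        ["prediction:0", "probability:0"],
        feed_dict={"input_image_as_bytes:0": [image_bytes]})
    print(prediction, probability)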

Serving

The exported SavedModel can be served as an HTTP REST API using Tensorflow Serving. You can start the server by running the following command:

tensorflow_model_server --port=9000 --rest_api_port=9001 --model_name=yourmodelname --model_base_path=./exported-model

Note: tensorflow_model_server expects the exported files to live inside a sub-directory named after the model version, so you need to manually move the contents of exported-model into exported-model/1.
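
For example, the directory layout can be prepared with a few lines of Python (a minimal sketch, assuming the export went into ./exported-model and you want to serve it as version 1):

import os
import shutil

# Move everything exported in the previous step into a "1" sub-directory
# so that tensorflow_model_server picks it up as version 1.
export_dir = "exported-model"
version_dir = os.path.join(export_dir, "1")

os.makedirs(version_dir)
for entry in os.listdir(export_dir):
    if entry != "1":
        shutil.move(os.path.join(export_dir, entry), version_dir)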

Now you can send a prediction request to the running server (replacing aocr in the URL below with the name you passed to --model_name), for example:

curl -X POST \
  http://localhost:9001/v1/models/aocr:predict \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
  "signature_name": "serving_default",
  "inputs": {
     	"input": { "b64": "<your image encoded as base64>" }
  }
}'

The REST API requires binary inputs to be encoded as Base64 and wrapped in an object containing a b64 key. See 'Encoding binary values' in the Tensorflow Serving documentation.
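
The same request can also be sent from Python, for example with the requests library. This is a sketch under the assumptions above: the port and model name must match the tensorflow_model_server flags (replace yourmodelname accordingly), and the image path is a placeholder.

import base64
import json

import requests

# Encode the image as Base64 and wrap it in an object with a "b64" key,
# as required by the Tensorflow Serving REST API for binary inputs.
with open("datasets/images/hello.jpg", "rb") as img_file:
    encoded_image = base64.b64encode(img_file.read()).decode("ascii")

payload = {
    "signature_name": "serving_default",
    "inputs": {"input": {"b64": encoded_image}},
}

response = requests.post(
    "http://localhost:9001/v1/models/yourmodelname:predict",
    data=json.dumps(payload),
    headers={"content-type": "application/json"},
)
print(response.json())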

Google Cloud ML Engine

To train the model in the Google Cloud Machine Learning Engine, upload the training dataset into a Google Cloud Storage bucket and start a training job with the gcloud tool.

  1. Set the environment variables:
# Prefix for the job name.
export JOB_PREFIX="aocr"

# Region to launch the training job in.
# Should be the same as the storage bucket region.
export REGION="us-central1"

# Your storage bucket.
export GS_BUCKET="gs://aocr-bucket"

# Path to store your training dataset in the bucket.
export DATASET_UPLOAD_PATH="training.tfrecords"
  2. Upload the training dataset:
gsutil cp ./datasets/training.tfrecords $GS_BUCKET/$DATASET_UPLOAD_PATH
  3. Launch the ML Engine job:
export NOW=$(date +"%Y%m%d_%H%M%S")
export JOB_NAME="$JOB_PREFIX$NOW"
export JOB_DIR="$GS_BUCKET/$JOB_NAME"

gcloud ml-engine jobs submit training $JOB_NAME \
    --job-dir=$JOB_DIR \
    --module-name=aocr \
    --package-path=aocr \
    --region=$REGION \
    --scale-tier=BASIC_GPU \
    --runtime-version=1.2 \
    -- \
    train $GS_BUCKET/$DATASET_UPLOAD_PATH \
    --steps-per-checkpoint=500 \
    --batch-size=512 \
    --num-epoch=20

Parameters

Global

  • log-path: Path for the log file.

Testing

  • visualize: Output the attention maps on the original image.

Exporting

  • format: Format for the export (either savedmodel or frozengraph).

Training

  • steps-per-checkpoint: How often (in steps) to save a model checkpoint and print the perplexity.
  • num-epoch: The number of whole data passes.
  • batch-size: Batch size.
  • initial-learning-rate: Initial learning rate. Note that we use AdaDelta, so the initial value does not matter much.
  • target-embedding-size: Embedding dimension for each target.
  • attn-num-hidden: Number of hidden units in attention decoder cell.
  • attn-num-layers: Number of layers in attention decoder cell. (The number of hidden units in the encoder will be attn-num-hidden * attn-num-layers.)
  • no-resume: Create new weights even if there are checkpoints present.
  • max-gradient-norm: Clip gradients to this norm.
  • no-gradient-clipping: Do not perform gradient clipping.
  • gpu-id: GPU to use.
  • use-gru: Use GRU cells instead of LSTM.
  • max-width: Maximum width for the input images. WARNING: images wider than the maximum will be discarded.
  • max-height: Maximum height for the input images.
  • max-prediction: Maximum length of the predicted word/phrase.

References

Convert a formula to its LaTeX source

What You Get Is What You See: A Visual Markup Decompiler

Torch attention OCR

Comments
  • How to use "predict" to perform prediction on a single image?

    I am trying to perform the prediction on a single image using the following script:

    import argparse
    import tensorflow as tf
    import numpy as np

    def load_graph(frozen_graph_filename):
        # We load the protobuf file from the disk and parse it to retrieve the
        # unserialized graph_def
        with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
    
        # Then, we can use again a convenient built-in function to import a graph_def into the
        # current default Graph
        with tf.Graph().as_default() as graph:
            tf.import_graph_def(
                graph_def,
                input_map=None,
                return_elements=None,
                name="prefix",
                op_dict=None,
                producer_op_list=None
            )
        return graph
    
    def getImage(path):
        with open(path, 'rb') as img_file:
            img = img_file.read()
        print(img)
        return img
    
    frozen_model_filename = "exported-model/frozen_graph.pb"
    graph = load_graph(frozen_model_filename)
    
    def ocrImage(image):
        x = graph.get_tensor_by_name('prefix/input_image_as_bytes:0')
        y = graph.get_tensor_by_name('prefix/prediction:0')
        allProbs = graph.get_tensor_by_name('prefix/probability:0')
    
        img = getImage(image)
    
        with tf.Session(graph=graph) as sess:
            (y_out, probs_output) = sess.run([y,allProbs], feed_dict={
                x: [img]
            })
            # print(y_out)
            # print(allProbsToScore(probs_output))
    
            return {
                "predictions": [{
                    "ocr": str(y_out),
                    "confidence": probs_output
                }]
            };
    
    if __name__ == '__main__':
        # Let's allow the user to pass the filename as an argument
        parser = argparse.ArgumentParser()
        # parser.add_argument("--frozen_model_filename", default="checkpoints_pruned/frozen_model.pb", type=str, help="Frozen model file to import")
        parser.add_argument("--image", default="0_15.png", type=str, help="Path to image")
        args = parser.parse_args()
        predictions = ocrImage(args.image)
        print(str(predictions))
    

    When I run the script, I get the error below:

    (/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env) ahmad@irs-aeye-ll-014:/data/work/tvs-part-rec/loc-aocr/aocr_50k$ python predict.py 
    WARNING:tensorflow:From predict.py:21: calling import_graph_def (from tensorflow.python.framework.importer) with op_dict is deprecated and will be removed in a future version.
    Instructions for updating:
    Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.
    2019-08-06 14:47:13.028800: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-08-06 14:47:13.090489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-08-06 14:47:13.090737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
    name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
    pciBusID: 0000:01:00.0
    totalMemory: 5.94GiB freeMemory: 5.72GiB
    2019-08-06 14:47:13.090754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
    2019-08-06 14:47:13.408287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-08-06 14:47:13.408323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
    2019-08-06 14:47:13.408330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
    2019-08-06 14:47:13.408416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5494 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
    Traceback (most recent call last):
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
        return fn(*args)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
        self._extend_graph()
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
        tf_session.ExtendSession(self._session)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
    Registered kernels:
      device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
      device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
      device='CPU'
      device='GPU'; T in [DT_BOOL]
      device='GPU'; T in [DT_INT32]
      device='GPU'; T in [DT_VARIANT]
      device='GPU'; T in [DT_COMPLEX128]
      device='GPU'; T in [DT_COMPLEX64]
      device='GPU'; T in [DT_INT8]
      device='GPU'; T in [DT_UINT8]
      device='GPU'; T in [DT_INT16]
      device='GPU'; T in [DT_UINT16]
      device='GPU'; T in [DT_INT64]
      device='GPU'; T in [DT_DOUBLE]
      device='GPU'; T in [DT_FLOAT]
      device='GPU'; T in [DT_BFLOAT16]
      device='GPU'; T in [DT_HALF]
    
    	 [[{{node prefix/Rank}} = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "predict.py", line 61, in <module>
        predictions = ocrImage(args.image)
      File "predict.py", line 43, in ocrImage
        x: [img]
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
        run_metadata_ptr)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
        feed_dict_tensor, options, run_metadata)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
        run_metadata)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
    Registered kernels:
      device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
      device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
      device='CPU'
      device='GPU'; T in [DT_BOOL]
      device='GPU'; T in [DT_INT32]
      device='GPU'; T in [DT_VARIANT]
      device='GPU'; T in [DT_COMPLEX128]
      device='GPU'; T in [DT_COMPLEX64]
      device='GPU'; T in [DT_INT8]
      device='GPU'; T in [DT_UINT8]
      device='GPU'; T in [DT_INT16]
      device='GPU'; T in [DT_UINT16]
      device='GPU'; T in [DT_INT64]
      device='GPU'; T in [DT_DOUBLE]
      device='GPU'; T in [DT_FLOAT]
      device='GPU'; T in [DT_BFLOAT16]
      device='GPU'; T in [DT_HALF]
    
    	 [[node prefix/Rank (defined at predict.py:21)  = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
    
    Caused by op 'prefix/Rank', defined at:
      File "predict.py", line 32, in <module>
        graph = load_graph(frozen_model_filename)
      File "predict.py", line 21, in load_graph
        producer_op_list=None
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
        _ProcessNewOps(graph)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
        for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations
        for c_op in c_api_util.new_tf_operations(self)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in <listcomp>
        for c_op in c_api_util.new_tf_operations(self)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation
        ret = Operation(c_op, self)
      File "/data/work/tvs-part-rec/CRNN_Tensorflow/crnntf-env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
        self._traceback = tf_stack.extract_stack()
    
    InvalidArgumentError (see above for traceback): Cannot assign a device for operation prefix/Rank: Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
    Registered kernels:
      device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_QINT8, DT_QUINT8, DT_QINT32, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, ..., DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
      device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL]
      device='XLA_GPU'; T in [DT_UINT8, DT_QUINT8, DT_INT8, DT_QINT8, DT_INT32, DT_QINT32, DT_INT64, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BOOL, DT_BFLOAT16]
      device='CPU'
      device='GPU'; T in [DT_BOOL]
      device='GPU'; T in [DT_INT32]
      device='GPU'; T in [DT_VARIANT]
      device='GPU'; T in [DT_COMPLEX128]
      device='GPU'; T in [DT_COMPLEX64]
      device='GPU'; T in [DT_INT8]
      device='GPU'; T in [DT_UINT8]
      device='GPU'; T in [DT_INT16]
      device='GPU'; T in [DT_UINT16]
      device='GPU'; T in [DT_INT64]
      device='GPU'; T in [DT_DOUBLE]
      device='GPU'; T in [DT_FLOAT]
      device='GPU'; T in [DT_BFLOAT16]
      device='GPU'; T in [DT_HALF]
    
    	 [[node prefix/Rank (defined at predict.py:21)  = Rank[T=DT_STRING, _device="/device:GPU:0"](prefix/input_image_as_bytes)]]
    

    Can someone explain the process for performing recognition on a single image?

    opened by AhmadShaik 19
  • Error when training on Synth 90k

    2017-10-22 23:07:17.471187: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Invalid JPEG data, size 1024 In image = tf.image.decode_png(img, channels=1)

    opened by thisismohitgupta 18
  • Can't understand the error: Premature end of JPEG data

    Hi,

    I am trying to train the model on SynthText data MJSynth 90K. After successfully creating the tfrecords, I started training passing image max-prediction-length and max image width as Command Line Arguments. After about 10,000 steps the process breaks with premature end of JPEG data. Please help. This is the exact error:

    2018-06-15 19:39:40,006 root INFO Step 10607: 0.288s, loss: 0.220111, perplexity: 1.246215. 2018-06-15 19:39:40,304 root INFO Step 10608: 0.288s, loss: 0.390393, perplexity: 1.477562. 2018-06-15 19:39:40.334325: E tensorflow/core/lib/jpeg/jpeg_mem.cc:307] Premature end of JPEG data. Stopped at line 0/31 Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/tensorflow_p27/bin/aocr", line 11, in sys.exit(main()) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/main.py", line 252, in main num_epoch=parameters.num_epoch File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/model/model.py", line 364, in train result = self.step(batch, self.forward_only) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/model/model.py", line 445, in step outputs = self.sess.run(output_feed, input_feed) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run run_metadata_ptr) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1140, in _run feed_dict_tensor, options, run_metadata) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run run_metadata) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid JPEG data or crop window, data size 1024 [[Node: map/while/DecodePng = DecodePngchannels=1, dtype=DT_UINT8, _device="/job:localhost/replica:0/task:0/device:CPU:0"]] [[Node: map/while/cond/cond/resize_images/ExpandDims/_862 = _SendT=DT_UINT8, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_42253_map/while/cond/cond/resize_images/ExpandDims", _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

    Caused by op u'map/while/DecodePng', defined at: File "/home/ubuntu/anaconda3/envs/tensorflow_p27/bin/aocr", line 11, in sys.exit(main()) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/main.py", line 246, in main channels=parameters.channels, File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/model/model.py", line 122, in init self.img_data = tf.map_fn(self._prepare_image, self.img_data, dtype=tf.float32) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/functional_ops.py", line 413, in map_fn swap_memory=swap_memory) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3202, in while_loop result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2940, in BuildLoop pred, body, original_loop_vars, loop_vars, shape_invariants) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2877, in _BuildLoop body_result = body(*packed_vars_for_body) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/functional_ops.py", line 403, in compute packed_fn_values = fn(packed_values) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/aocr/model/model.py", line 465, in _prepare_image img = tf.image.decode_png(image, channels=self.channels) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/gen_image_ops.py", line 1058, in decode_png name=name) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3290, in create_op op_def=op_def) File "/home/ubuntu/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1654, in init self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

    InvalidArgumentError (see above for traceback): Invalid JPEG data or crop window, data size 1024 [[Node: map/while/DecodePng = DecodePngchannels=1, dtype=DT_UINT8, _device="/job:localhost/replica:0/task:0/device:CPU:0"]] [[Node: map/while/cond/cond/resize_images/ExpandDims/_862 = _SendT=DT_UINT8, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_42253_map/while/cond/cond/resize_images/ExpandDims", _device="/job:localhost/replica:0/task:0/device:GPU:0"]]

    Best regards, Vishal

    @emedvedev

    opened by kulkarnivishal 17
  • Korean training (tensorflow serving included)

    I got the message below when I tried 'test'.

    I changed several things:

    1. 'iso-8859-1' to 'utf-8'
    2. added two Korean characters in data_gen.py: CHARMAP = ['', '', ''] + list('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ신한')
    3. in bucketdata.py: omitted raise NotImplementedError (otherwise the program exits with 'NotImplementedError') and added three lines:

    else:  # raise NotImplementedError
        self.label_list[l_idx] = self.label_list[l_idx][:decoder_input_len]
        target_weights.append([1]*decoder_input_len)

    (py36) D:\attention-ocr_b2>python ./aocr/main.py test ./dataset/testing.tfrecords
    2019-03-18 12:33:22.445747: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2019-03-18 12:33:22,458 root INFO phase: test
    2019-03-18 12:33:22,458 root INFO model_dir: checkpoints
    2019-03-18 12:33:22,458 root INFO load_model: True
    2019-03-18 12:33:22,459 root INFO output_dir: results
    2019-03-18 12:33:22,459 root INFO steps_per_checkpoint: 0
    2019-03-18 12:33:22,459 root INFO batch_size: 1
    2019-03-18 12:33:22,459 root INFO learning_rate: 1.000000
    2019-03-18 12:33:22,459 root INFO reg_val: 0
    2019-03-18 12:33:22,460 root INFO max_gradient_norm: 5.000000
    2019-03-18 12:33:22,460 root INFO clip_gradients: True
    2019-03-18 12:33:22,460 root INFO max_image_width 160.000000
    2019-03-18 12:33:22,460 root INFO max_prediction_length 8.000000
    2019-03-18 12:33:22,460 root INFO channels: 1
    2019-03-18 12:33:22,460 root INFO target_embedding_size: 10.000000
    2019-03-18 12:33:22,461 root INFO attn_num_hidden: 128
    2019-03-18 12:33:22,461 root INFO attn_num_layers: 2
    2019-03-18 12:33:22,461 root INFO visualize: False
    2019-03-18 12:33:24,005 root INFO data_gen.gen()
    2019-03-18 12:33:24,225 root INFO Step 1 (0.136s). Accuracy: 0.00%, loss: 4.895189, perplexity: 133.645, probability: 1.03% 0% (85 vs 4)
    2019-03-18 12:33:24,243 root INFO Step 2 (0.017s). Accuracy: 0.00%, loss: 12.590834, perplexity: 2.93853e+05, probability: 39.16% 0% (53 vs 2)
    2019-03-18 12:33:24,260 root INFO Step 3 (0.016s). Accuracy: 0.00%, loss: 15.508214, perplexity: 5.43415e+06, probability: 98.23% 0% (51 vs 3)
    2019-03-18 12:33:24,278 root INFO Step 4 (0.017s). Accuracy: 0.00%, loss: 16.600834, perplexity: 1.62051e+07, probability: 71.18% 0% (49 vs 1)

    opened by kspook 16
  • Non-deterministic results on GPU

    Hi @emedvedev ,

    I ran test on same image multiple times using the readme command. aocr test ./datasets/testing.tfrecords

    Every time I ran the command, I'm getting same predicted word as output, but the inference probabilities are changing (including loss as well).

    Run1: Step 1 (1.096s). Accuracy: 100.00%, loss: 0.000364, perplexity: 1.00036, probability: 93.33% 100%

    Run2: Step 1 (0.988s). Accuracy: 100.00%, loss: 0.000260, perplexity: 1.00026, probability: 92.58% 100%

    I've observed the same behavior when I used frozen checkpoint as well (probabilities are changing for the same image). Any reason why this is happening as it should not happen. Please let me know how to fix it.

    opened by tumusudheer 15
  • Poor results using exported model in some cases

    I'm wondering if anyone else is seeing this.

    I have a model trained with 20,000 synthesized images of between 1 and 50 characters, including spaces. Using the 'test' function, I'm getting good results--test images that have short text are usually 100%, and longer ones are usually off by just a character or two. So far so good.

    I used the export function and ran the tensorflow_model_server, and then used a little python client to connect to it. With the same test images that I know the model can predict well, I'm seeing terrible results--the first 2-6 characters are usually right, but then almost complete gibberish.

    Is anyone else using TensorFlow Serving, and if so, can you report how well it's working? If you're not using it, what are you doing instead? I figure I'll just make a little python wrapper around the "test" function (essentially) that I can communicate with from the rest of my system, but I'd prefer to use TensorFlow Serving if possible because it's one less thing for me to worry about.

    opened by ckirmse 15
  • Tensorflow Serving returns "Tensor name: probability has no shape information"

    For given exported model:

    MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
    
    signature_def['serving_default']:
      The given SavedModel SignatureDef contains the following input(s):
        inputs['input'] tensor_info:
            dtype: DT_STRING
            shape: unknown_rank
            name: input_image_as_bytes:0
      The given SavedModel SignatureDef contains the following output(s):
        outputs['output'] tensor_info:
            dtype: DT_STRING
            shape: unknown_rank
            name: prediction:0
        outputs['probability'] tensor_info:
            dtype: DT_DOUBLE
            shape: unknown_rank
            name: probability:0
      Method name is: tensorflow/serving/predict
    

    Sending a following request to the serving REST API:

    curl -X POST \
      http://localhost:9001/v1/models/testmodel:predict \
      -H 'cache-control: no-cache' \
      -H 'content-type: application/json' \
      -d '{
      "signature_name": "serving_default",
      "instances": [
         { "b64": "/9j/4AAk=" }
       ]
    }'
    

    returns an error:

    {
        "error": "Tensor name: probability has no shape information "
    }
    

    is it something that needs to be adjusted in the export function in aocr?

    opened by pokonski 13
  • error running aocr

    After installing aocr (pip install aocr) in both my OS and the official tensor docker container(gcr.io/tensorflow/tensorflow) I'm getting this error even when I run aocr --help:

    Traceback (most recent call last):
      File "/usr/local/bin/aocr", line 7, in <module>
        from aocr.__main__ import main
      File "/usr/local/lib/python2.7/dist-packages/aocr/__main__.py", line 15, in <module>
        from .model.model import Model
      File "/usr/local/lib/python2.7/dist-packages/aocr/model/model.py", line 20, in <module>
        from .seq2seq_model import Seq2SeqModel
      File "/usr/local/lib/python2.7/dist-packages/aocr/model/seq2seq_model.py", line 27, in <module>
        from .seq2seq import model_with_buckets
      File "/usr/local/lib/python2.7/dist-packages/aocr/model/seq2seq.py", line 74, in <module>
        linear = rnn_cell._linear  # pylint: disable=protected-access
    AttributeError: 'module' object has no attribute '_linear'
    
    # uname -a
    Linux 71e18aa0cfc4 4.10.0-42-generic #46~16.04.1-Ubuntu SMP Mon Dec 4 15:57:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    
    opened by yanpozka 13
  • Error assertion failed: [width must be <= target - offset]

    While training, the following error occurred: assertion failed: [width must be <= target - offset]. The detailed log is below:

    aocr train datasets/training.tfrecords

    2017-09-14 16:57:11.201375: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-14 16:57:11.201493: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-14 16:57:11,201 root INFO loading data 2017-09-14 16:57:11,244 root INFO phase: train 2017-09-14 16:57:11,245 root INFO model_dir: checkpoints 2017-09-14 16:57:11,245 root INFO load_model: True 2017-09-14 16:57:11,245 root INFO output_dir: results 2017-09-14 16:57:11,245 root INFO steps_per_checkpoint: 100 2017-09-14 16:57:11,245 root INFO batch_size: 65 2017-09-14 16:57:11,245 root INFO num_epoch: 1000 2017-09-14 16:57:11,246 root INFO learning_rate: 1 2017-09-14 16:57:11,246 root INFO reg_val: 0 2017-09-14 16:57:11,246 root INFO max_gradient_norm: 5.000000 2017-09-14 16:57:11,246 root INFO clip_gradients: True 2017-09-14 16:57:11,246 root INFO max_image_width 160.000000 2017-09-14 16:57:11,246 root INFO max_prediction_length 8.000000 2017-09-14 16:57:11,247 root INFO target_vocab_size: 39 2017-09-14 16:57:11,247 root INFO target_embedding_size: 10.000000 2017-09-14 16:57:11,247 root INFO attn_num_hidden: 128 2017-09-14 16:57:11,247 root INFO attn_num_layers: 2 2017-09-14 16:57:11,247 root INFO visualize: False 2017-09-14 16:57:19,646 root INFO Created model with fresh parameters. 2017-09-14 16:57:22,677 root INFO Starting the training process. 2017-09-14 16:57:24.235933: I tensorflow/core/common_runtime/simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'map_1/while/foldr/while/TensorArrayReadV3/Enter' because the input edge from 'map_1/while/foldr/TensorArray' is a reference connection and already has a device field set to /job:localhost/replica:0/task:0/device:CPU:0 2017-09-14 16:57:24.236082: I tensorflow/core/common_runtime/simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'map_1/while/TensorArrayReadV3/Enter' because the input edge from 'map_1/TensorArray' is a reference connection and already has a device field set to /job:localhost/replica:0/task:0/device:CPU:0 2017-09-14 16:57:24.236550: I tensorflow/core/common_runtime/simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'map/while/TensorArrayReadV3/Enter' because the input edge from 'map/TensorArray' is a reference connection and already has a device field set to /job:localhost/replica:0/task:0/device:CPU:0 Traceback (most recent call last): File "/home/subhadeep/.local/bin/aocr", line 11, in sys.exit(main()) File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/main.py", line 238, in main model.train() File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/model/model.py", line 306, in train result = self.step(batch, self.forward_only) File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/model/model.py", line 391, in step outputs = self.sess.run(output_feed, input_feed) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run run_metadata_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1124, in _run feed_dict_tensor, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1321, in _do_run options, run_metadata) File 
"/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1340, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [width must be <= target - offset] [[Node: map/while/Assert/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](map/while/GreaterEqual, map/while/Assert/Assert/data_0)]]

    Caused by op u'map/while/Assert/Assert', defined at: File "/home/subhadeep/.local/bin/aocr", line 11, in sys.exit(main()) File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/main.py", line 235, in main max_prediction_length=parameters.max_prediction, File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/model/model.py", line 125, in init self.img_data = tf.map_fn(self._prepare_image, self.img_data, dtype=tf.float32) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/functional_ops.py", line 389, in map_fn swap_memory=swap_memory) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2775, in while_loop result = context.BuildLoop(cond, body, loop_vars, shape_invariants) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2604, in BuildLoop pred, body, original_loop_vars, loop_vars, shape_invariants) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 2554, in _BuildLoop body_result = body(*packed_vars_for_body) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/functional_ops.py", line 379, in compute packed_fn_values = fn(packed_values) File "/home/subhadeep/.local/lib/python2.7/site-packages/aocr/model/model.py", line 456, in _prepare_image padded = tf.image.pad_to_bounding_box(resized, 0, 0, self.height, self.max_width) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/image_ops_impl.py", line 472, in pad_to_bounding_box 'width must be <= target - offset') File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/image_ops_impl.py", line 75, in _assert return [control_flow_ops.Assert(cond, [msg])] File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/tf_should_use.py", line 175, in wrapped return _add_should_use_warning(fn(*args, **kwargs)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 124, in Assert condition, data, summarize, name="Assert") File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_logging_ops.py", line 35, in _assert summarize=summarize, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2630, in create_op original_op=self._default_original_op, op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1204, in init self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

    InvalidArgumentError (see above for traceback): assertion failed: [width must be <= target - offset] [[Node: map/while/Assert/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](map/while/GreaterEqual, map/while/Assert/Assert/data_0)]]

    opened by subhadeepdas91 12
  • Whole line (multiple words) recognition

    Refer this issue https://github.com/emedvedev/attention-ocr/issues/59, are some modifications I make below sufficient to recognize multiple words (as a line)?

    • Include the space character into CHARMAP in aocr/util/data_gen.py.

    • Increase max-prediction for predicted phrase.

    • Increase max-width for input image

    opened by hiepph 11
  • anyway to get "Confidence" metric?

    Hello,

    I'm interested in knowing a "confidence" for a given prediction. Does anyone have any ideas on the best way to tackle this? I assume there is some output in the graph (potentially for each character?) that I could tap into to calculate this. Hope to play with this later this week but wanted to see if anyone had any ideas first.

    opened by mattfeury 11
  • README.md

    Information on data.annotation.tfrecord and data.tfrecords is missing: only the first format is described, while no information or clue is given about the second, and it is still not available.

    opened by cercatore 0
  • Exporting the model to tensorflowjs

    Hi, I am trying to export aocr to tensorflowjs. But I am stuck at the point of exporting this model into a file.

    When running the aocr export ANY_LOCATION I get the following error:

    ModuleNotFoundError: No module named 'tensorflow.contrib'
    
    opened by MarcoSteinke 0
  • Someone please share their trained model on Synth 90k or equivalent

    Can someone please share either the checkpoint or a Saved model trained on Synth 90k? The pre-trained model shared on the original repository does not work when I try to convert it to a Saved Model.

    Tensor name "bidirectional_rnn/bw/basic_lstm_cell/bias" not found in checkpoint files ./checkpoints/translate.ckpt-47200
    

    I get this error. I need help urgently

    opened by wolf-hash 0
  • How to stop the log message appear on console

    First of all thank you for this amazing OCR. It works great for me.

    When I run inference on the exported frozen graph, I get a lot of messages in the terminal:

    map_2/while/foldr/while/maximum_iterations: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/foldr/while/iteration_counter: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/foldr/while/Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/foldr/while/Greater/y: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/foldr/while/add/y: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/foldr/while/sub/y: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/while/add_1/y: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/TensorArrayStack/range/start: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    map_2/TensorArrayStack/range/delta: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    strided_slice_1/stack: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    strided_slice_1/stack_1: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    strided_slice_1/stack_2: (Const): /job:localhost/replica:0/task:0/device:GPU:0
    

    Is there a way to stop them from appearing on the console?

    opened by maxpaynestory 0
  • Unable to export model after training

    I'm attempting to train a custom aocr model on an internal dataset. I've labeled the data using a directory of images, and an annotation file as described in the README. This was converted to a dataset (training.tfrecords) and then trained according to the instructions (with some parameters to accommodate our data):

    aocr train --max-prediction 20 --full-ascii --num-epoch 300 datasets/gen_text_train/training.tfrecords
    

    This ran to completion, and I attempted to export it as shown in the README:

    aocr export --format=frozengraph ./attempted_training
    

    This produces a VERY long error:

    Full Error:

    WARNING:tensorflow:From /home/ericsilk/anaconda3/lib/python3.7/site-packages/aocr/__main__.py:20: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

    WARNING:tensorflow:From /home/ericsilk/anaconda3/lib/python3.7/site-packages/aocr/main.py:20: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.

    2021-02-10 09:18:26.028615: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2021-02-10 09:18:26.038234: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3399905000 Hz 2021-02-10 09:18:26.038452: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56193ecbc7d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2021-02-10 09:18:26.038477: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2021-02-10 09:18:26,039 root INFO phase: export 2021-02-10 09:18:26,039 root INFO model_dir: ./checkpoints 2021-02-10 09:18:26,039 root INFO load_model: True 2021-02-10 09:18:26,039 root INFO output_dir: ./results 2021-02-10 09:18:26,039 root INFO steps_per_checkpoint: 0 2021-02-10 09:18:26,039 root INFO batch_size: 1 2021-02-10 09:18:26,040 root INFO learning_rate: 1.000000 2021-02-10 09:18:26,040 root INFO reg_val: 0 2021-02-10 09:18:26,040 root INFO max_gradient_norm: 5.000000 2021-02-10 09:18:26,040 root INFO clip_gradients: True 2021-02-10 09:18:26,040 root INFO max_image_width 160.000000 2021-02-10 09:18:26,040 root INFO max_prediction_length 8.000000 2021-02-10 09:18:26,040 root INFO channels: 1 2021-02-10 09:18:26,040 root INFO target_embedding_size: 10.000000 2021-02-10 09:18:26,040 root INFO attn_num_hidden: 128 2021-02-10 09:18:26,040 root INFO attn_num_layers: 2 2021-02-10 09:18:26,040 root INFO visualize: False 2021-02-10 09:18:27,582 root INFO Reading model parameters from ./checkpoints/model.ckpt-33936 2021-02-10 09:18:27.635972: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Assign: CPU RandomUniform: CPU XLA_CPU Const: CPU XLA_CPU Mul: CPU XLA_CPU Sub: CPU XLA_CPU Add: CPU XLA_CPU Identity: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv1/W/Initializer/random_uniform/shape (Const) conv_conv1/W/Initializer/random_uniform/min (Const) conv_conv1/W/Initializer/random_uniform/max (Const) conv_conv1/W/Initializer/random_uniform/RandomUniform (RandomUniform) conv_conv1/W/Initializer/random_uniform/sub (Sub) conv_conv1/W/Initializer/random_uniform/mul (Mul) conv_conv1/W/Initializer/random_uniform (Add) conv_conv1/W (VariableV2) /device:GPU:0 conv_conv1/W/Assign (Assign) /device:GPU:0 conv_conv1/W/read (Identity) /device:GPU:0 save/Assign_5 (Assign) /device:GPU:0

    2021-02-10 09:18:27.636123: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Assign: CPU RandomUniform: CPU XLA_CPU Const: CPU XLA_CPU Mul: CPU XLA_CPU Sub: CPU XLA_CPU Add: CPU XLA_CPU Identity: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv2/W/Initializer/random_uniform/shape (Const) conv_conv2/W/Initializer/random_uniform/min (Const) conv_conv2/W/Initializer/random_uniform/max (Const) conv_conv2/W/Initializer/random_uniform/RandomUniform (RandomUniform) conv_conv2/W/Initializer/random_uniform/sub (Sub) conv_conv2/W/Initializer/random_uniform/mul (Mul) conv_conv2/W/Initializer/random_uniform (Add) conv_conv2/W (VariableV2) /device:GPU:0 conv_conv2/W/Assign (Assign) /device:GPU:0 conv_conv2/W/read (Identity) /device:GPU:0 save/Assign_6 (Assign) /device:GPU:0

    2021-02-10 09:18:27.636214: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Assign: CPU RandomUniform: CPU XLA_CPU Const: CPU XLA_CPU Mul: CPU XLA_CPU Sub: CPU XLA_CPU Add: CPU XLA_CPU Identity: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv3/W/Initializer/random_uniform/shape (Const) conv_conv3/W/Initializer/random_uniform/min (Const) conv_conv3/W/Initializer/random_uniform/max (Const) conv_conv3/W/Initializer/random_uniform/RandomUniform (RandomUniform) conv_conv3/W/Initializer/random_uniform/sub (Sub) conv_conv3/W/Initializer/random_uniform/mul (Mul) conv_conv3/W/Initializer/random_uniform (Add) conv_conv3/W (VariableV2) /device:GPU:0 conv_conv3/W/Assign (Assign) /device:GPU:0 conv_conv3/W/read (Identity) /device:GPU:0 save/Assign_11 (Assign) /device:GPU:0

    2021-02-10 09:18:27.636282: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Identity: CPU XLA_CPU Assign: CPU Const: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv3/BatchNorm/gamma/Initializer/ones (Const) conv_conv3/BatchNorm/gamma (VariableV2) /device:GPU:0 conv_conv3/BatchNorm/gamma/Assign (Assign) /device:GPU:0 conv_conv3/BatchNorm/gamma/read (Identity) /device:GPU:0 save/Assign_8 (Assign) /device:GPU:0

    2021-02-10 09:18:27.636333: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Identity: CPU XLA_CPU Assign: CPU Const: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv3/BatchNorm/beta/Initializer/zeros (Const) conv_conv3/BatchNorm/beta (VariableV2) /device:GPU:0 conv_conv3/BatchNorm/beta/Assign (Assign) /device:GPU:0 conv_conv3/BatchNorm/beta/read (Identity) /device:GPU:0 save/Assign_7 (Assign) /device:GPU:0

    2021-02-10 09:18:27.636387: W tensorflow/core/common_runtime/colocation_graph.cc:983] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] Identity: CPU XLA_CPU Assign: CPU Const: CPU XLA_CPU VariableV2: CPU

    Colocation members, user-requested devices, and framework assigned devices, if any: conv_conv3/BatchNorm/moving_mean/Initializer/zeros (Const) conv_conv3/BatchNorm/moving_mean (VariableV2) /device:GPU:0 conv_conv3/BatchNorm/moving_mean/Assign (Assign) /device:GPU:0 conv_conv3/BatchNorm/moving_mean/read (Identity) /device:GPU:0 save/Assign_9 (Assign) /device:GPU:0

    [The same colocation warning is repeated for the remaining variables requested on /device:GPU:0: the conv_conv3–conv_conv7 weights and BatchNorm statistics, the forward/backward bidirectional_rnn LSTM kernels and biases, the embedding_attention_decoder attention variables, and the MutableHashTable lookup ops.]


    The key part of the traceback is:
    Traceback (most recent call last):
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [39,10] rhs shape= [98,10]
             [[{{node save/Assign_34}}]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 1290, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [39,10] rhs shape= [98,10]
             [[node save/Assign_34 (defined at /lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
    
    Original stack trace for 'save/Assign_34':
      File "/bin/aocr", line 8, in <module>
        sys.exit(main())
      File "/lib/python3.7/site-packages/aocr/__main__.py", line 251, in main
        channels=parameters.channels,
      File "/lib/python3.7/site-packages/aocr/model/model.py", line 261, in __init__
        self.saver_all = tf.train.Saver(tf.all_variables())
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 828, in __init__
        self.build()
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
        build_restore=build_restore)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
        restore_sequentially, reshape)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 350, in _AddRestoreOps
        assign_ops.append(saveable.restore(saveable_tensors, shapes))
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saving/saveable_object_util.py", line 73, in restore
        self.op.get_shape().is_fully_defined())
      File "/lib/python3.7/site-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
        validate_shape=validate_shape)
      File "/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
        use_locking=use_locking, name=name)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
    
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/ericsilk/anaconda3/bin/aocr", line 8, in <module>
        sys.exit(main())
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/aocr/__main__.py", line 251, in main
        channels=parameters.channels,
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/aocr/model/model.py", line 268, in __init__
        self.saver_all.restore(self.sess, ckpt.model_checkpoint_path)
      File "/home/ericsilk/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 1326, in restore
        err, "a mismatch between the current graph and the graph")
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
    
    Assign requires shapes of both tensors to match. lhs shape= [39,10] rhs shape= [98,10]
             [[node save/Assign_34 (defined at /lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]
    
    Original stack trace for 'save/Assign_34':
      File "/bin/aocr", line 8, in <module>
        sys.exit(main())
      File "/lib/python3.7/site-packages/aocr/__main__.py", line 251, in main
        channels=parameters.channels,
      File "/lib/python3.7/site-packages/aocr/model/model.py", line 261, in __init__
        self.saver_all = tf.train.Saver(tf.all_variables())
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 828, in __init__
        self.build()
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
        build_restore=build_restore)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
        restore_sequentially, reshape)
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saver.py", line 350, in _AddRestoreOps
        assign_ops.append(saveable.restore(saveable_tensors, shapes))
      File "/lib/python3.7/site-packages/tensorflow_core/python/training/saving/saveable_object_util.py", line 73, in restore
        self.op.get_shape().is_fully_defined())
      File "/lib/python3.7/site-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
        validate_shape=validate_shape)
      File "/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
        use_locking=use_locking, name=name)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
    

    System info: Ubuntu 18.04.5 LTS, Anaconda 3. Any suggestions?

    opened by eric-silk 1
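The error itself ("Assign requires shapes of both tensors to match. lhs shape= [39,10] rhs shape= [98,10]") indicates that the graph being rebuilt for the current run declares a variable with a different shape than the one stored in the checkpoint, i.e. the command-line parameters no longer match the ones used when the checkpoint was written. As a purely illustrative diagnostic (not part of the original issue report), one way to see which variable disagrees is to list the shapes stored in the checkpoint and compare them with the shapes reported in the error; the snippet below assumes TensorFlow 1.x and the default `checkpoints/` directory.

```python
# Illustrative diagnostic sketch only (not from the original issue).
# Prints every variable stored in the checkpoint together with its shape,
# so the variable behind the [39, 10] vs [98, 10] mismatch can be found by name.
import tensorflow as tf  # TensorFlow 1.x

for name, shape in tf.train.list_variables("checkpoints/"):
    print(name, shape)
```

Matching the offending variable name against the current graph usually points at the parameter that differs between the training run and the current invocation.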