OCR Post Correction for Endangered Language Texts

Overview

📌 Coming soon: an update to the software including features from our paper on semi-supervised OCR post-correction, to be published in the Transactions of the Association for Computational Linguistics (TACL)!

Check out the paper here.

This repository contains code for models and experiments from the paper "OCR Post Correction for Endangered Language Texts".

Textual data in endangered languages is often found in formats that are not machine-readable, including scanned images of paper books. Extracting the text is challenging because there is typically no annotated data to train an OCR system for each endangered language. Instead, we focus on post-correcting the OCR output from a general-purpose OCR system.

📌 In the paper, we present a dataset containing annotations for documents in three critically endangered languages: Ainu, Griko, and Yakkha.

📌 Our model reduces the recognition error rate by 34% on average over the output of a state-of-the-art OCR system.

Learn more about the paper here!

OCR Post-Correction

The goal of OCR post-correction is to automatically correct errors in the text output from an existing OCR system.

The existing OCR system is used to obtain a first pass transcription of the input image (example below in the endangered language Griko):

First pass OCR transcription

The incorrectly recognized characters in the first pass are then corrected by the post-correction model.

Corrected transcription
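
The improvement reported in the overview is measured as a reduction in the recognition error rate between transcriptions like these. As a concrete reference, here is a minimal character error rate (CER) computation in plain Python: Levenshtein distance between the OCR hypothesis and the reference transcription, divided by the reference length. The exact evaluation setup follows the paper; this sketch is only the standard recipe.

    def edit_distance(hyp, ref):
        # Dynamic-programming Levenshtein distance over characters.
        d = list(range(len(ref) + 1))
        for i, h in enumerate(hyp, 1):
            prev, d[0] = d[0], i
            for j, r in enumerate(ref, 1):
                prev, d[j] = d[j], min(
                    d[j] + 1,          # insert h
                    d[j - 1] + 1,      # delete r
                    prev + (h != r),   # substitute (free if the characters match)
                )
        return d[len(ref)]

    def cer(hyp, ref):
        return edit_distance(hyp, ref) / max(len(ref), 1)

    print(cer("tbe cat sat", "the cat sat"))  # 1 edit / 11 characters ≈ 0.09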

Model

As seen in the example above, OCR post-correction is a text-based sequence-to-sequence task.

📌 We use a character-level encoder-decoder architecture with attention and add several adaptations for the low-resource setting. The paper has all the details!

📌 The model is trained in a supervised manner: the training data consists of first pass OCR outputs as the source, with the corresponding manually corrected transcriptions as the target.
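
To make the setup concrete, below is a minimal character-level encoder-decoder with attention written in DyNet, the toolkit this codebase is built on. All names, dimensions, and the toy example are illustrative assumptions, not the actual classes or hyperparameters in postcorrection/.

    import dynet as dy

    EMB, HID = 32, 64
    chars = sorted(set("abcdefghijklmnopqrstuvwxyz ")) + ["<s>", "</s>"]
    c2i = {c: i for i, c in enumerate(chars)}
    V = len(chars)

    model = dy.ParameterCollection()
    enc = dy.LSTMBuilder(1, EMB, HID, model)        # encoder over OCR characters
    dec = dy.LSTMBuilder(1, EMB + HID, HID, model)  # decoder fed [prev char; context]
    E = model.add_lookup_parameters((V, EMB))       # character embeddings
    Wa = model.add_parameters((HID, HID))           # bilinear attention
    Wo = model.add_parameters((V, 2 * HID))         # output projection
    bo = model.add_parameters((V,))
    trainer = dy.AdamTrainer(model)

    def encode(src):
        s, outs = enc.initial_state(), []
        for c in src:
            s = s.add_input(E[c2i[c]])
            outs.append(s.output())
        return outs

    def attend(enc_outs, h):
        scores = dy.concatenate([dy.dot_product(h, Wa * e) for e in enc_outs])
        alphas = dy.softmax(scores)
        return dy.esum([e * dy.pick(alphas, i) for i, e in enumerate(enc_outs)])

    def loss(src, tgt):
        enc_outs = encode(src)
        s = dec.initial_state()
        prev, ctx = E[c2i["<s>"]], dy.inputVector([0.0] * HID)
        losses = []
        for c in tgt + ["</s>"]:
            s = s.add_input(dy.concatenate([prev, ctx]))
            h = s.output()
            ctx = attend(enc_outs, h)
            logits = Wo * dy.concatenate([h, ctx]) + bo
            losses.append(dy.pickneglogsoftmax(logits, c2i[c]))
            prev = E[c2i[c]]  # teacher forcing with the gold character
        return dy.esum(losses)

    # One toy supervised pair: first pass OCR output -> corrected transcription.
    src, tgt = list("tbe cat sat"), list("the cat sat")
    for epoch in range(50):
        dy.renew_cg()
        l = loss(src, tgt)
        l.backward()
        trainer.update()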

📌 Some books that contain texts in endangered languages also contain translations of the text in another (usually high-resource) language. We incorporate an additional encoder into the model, using a multisource framework, to use the information from these translations when they are available.

We provide instructions for both single-source and multisource models:

  • The single-source model can be used for almost any document and is significantly easier to set up.

  • The multisource model can only be used if translations of the document are available (see the sketch below).
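
To illustrate the multisource combination described above, here is a small DyNet sketch (again with illustrative names and sizes, not the repo's actual classes): the decoder state attends over each encoder separately, and the two context vectors are concatenated before the output projection.

    import dynet as dy

    HID = 64
    model = dy.ParameterCollection()
    W1 = model.add_parameters((HID, HID))  # attention over the OCR encoder
    W2 = model.add_parameters((HID, HID))  # attention over the translation encoder

    def context(enc_outs, h, W):
        # Bilinear attention weights over one encoder's states.
        scores = dy.concatenate([dy.dot_product(h, W * e) for e in enc_outs])
        alphas = dy.softmax(scores)
        return dy.esum([e * dy.pick(alphas, i) for i, e in enumerate(enc_outs)])

    dy.renew_cg()
    # Stand-ins for encoder outputs and the decoder state (LSTM states in practice).
    ocr_enc = [dy.random_normal(HID) for _ in range(5)]
    trans_enc = [dy.random_normal(HID) for _ in range(8)]
    h = dy.random_normal(HID)

    ctx = dy.concatenate([context(ocr_enc, h, W1), context(trans_enc, h, W2)])
    # ctx (a 2 * HID vector) then feeds the decoder's output projection.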

Dataset

This repository contains a sample from our dataset in sample_dataset, which you can use to train the post-correction model. Get the full dataset here!

However, this repository can be used to train OCR post-correction models for documents in any language!

🚀 If you want to use our model with a new set of documents, construct a dataset by following the steps here.

🚀 We'd love to hear about the new datasets and models you build: send us an email at [email protected]!

Running Experiments

Once you have a suitable dataset (e.g., sample_dataset or your own dataset), you can train a model and run experiments on OCR post-correction.

If you have your own dataset, you can use the utils/prepare_data.py script to create train, development, and test splits (see the last step here).
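
As intuition for what such a split involves (this is not the interface of utils/prepare_data.py; see that script for the real options), the sketch below shuffles aligned source and target lines together, so each first pass line stays paired with its corrected transcription, and then slices off the three portions.

    import random

    def make_splits(src_lines, tgt_lines, dev_frac=0.1, test_frac=0.1, seed=0):
        # Keep each (OCR output, corrected transcription) pair aligned while shuffling.
        pairs = list(zip(src_lines, tgt_lines))
        random.Random(seed).shuffle(pairs)
        n_test = int(len(pairs) * test_frac)
        n_dev = int(len(pairs) * dev_frac)
        test = pairs[:n_test]
        dev = pairs[n_test:n_test + n_dev]
        train = pairs[n_test + n_dev:]
        return train, dev, test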

The steps are described below, illustrated with sample_dataset/postcorrection. If using another dataset, simply change the experiment settings to point to your dataset and run the same scripts.

Requirements

Python 3 or higher is required. The packages can be installed with pip:

pip install -r postcorr_requirements.txt

Training

The process of training the post-correction model has two main steps:

  • Pretraining with first pass OCR outputs.
  • Training with manually corrected transcriptions in a supervised manner.

For a single-source model, modify the experimental settings in train_single-source.sh to point to the appropriate dataset and desired output folder. It is currently set up to use sample_dataset.

Then run

bash train_single-source.sh

For multisource, use train_multi-source.sh.

Log files and saved models are written to the user-specified experiment folder for both the pretraining and training steps. For a list of all available hyperparameters and options, look at postcorrection/constants.py and postcorrection/opts.py.

Testing

For testing with a single-source model, modify the experimental settings in test_single-source.sh. It is currently set up to use sample_dataset.

Then run

bash test_single-source.sh

For multisource, use test_multi-source.sh.

Citation

Please cite our paper if you find this repository useful.

@inproceedings{rijhwani-etal-2020-ocr,
    title = "{OCR} {P}ost {C}orrection for {E}ndangered {L}anguage {T}exts",
    author = "Rijhwani, Shruti  and
      Anastasopoulos, Antonios  and
      Neubig, Graham",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.478",
    doi = "10.18653/v1/2020.emnlp-main.478",
    pages = "5931--5942",
}

License

Comments
  • Segmentation Fault: Pretraining Epoch 0

    The training process is interrupted by a segmentation fault during the very first epoch of pretraining. The error encountered is as follows:

        [dynet] random seed: 1678755796
        [dynet] allocating memory: 512MB
        [dynet] memory allocation done.
        [dynet] random seed: 2822143777
        [dynet] using autobatching
        [dynet] allocating memory: 10000MB
        [dynet] memory allocation done.
        train_single-source.sh: line 65: 1310309 Segmentation fault (core dumped) python3 postcorrection/multisource_wrapper.py --dynet-mem $dynet_mem --dynet-autobatch 1 --pretrain_src1 $pretrain_src --pretrain_tgt $pretrain_tgt $params --single --vocab_folder $expt_folder/vocab --output_folder $expt_folder --model_name $pretrained_model_name --pretrain_only

    I was able to narrow the problem down to the call batch_loss.backward() on line 80 of lm_trainer.py.

    The following DyNet issue may hint at the underlying problem: https://github.com/clab/dynet/issues/308

    Has anyone encountered this problem before?

    opened by taham0 4
  • Help with installing Dynet - GPU

    Hi, I am facing some issues while installing DyNet-GPU for CUDA 11.1. Could you let me know whether you used the CPU or GPU version of DyNet? If GPU, which versions of DyNet and Eigen did you use?

    System specifications: CUDA version 11.1

    I have tried the following versions.

    Build command: to avoid an "Unsupported GPU architecture for compute_30" error at build time, the command below is used.

    cmake .. -DEIGEN3_INCLUDE_DIR=../eigen -DPYTHON=`which python` -DBACKEND=cuda -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.1

    | Dynet                  | Eigen                  | Error                                                          |
    | ---------------------- | ---------------------- | -------------------------------------------------------------- |
    | Latest (master branch) | Eigen 3.2              | CX11 folder is not present                                     |
    | Latest (master branch) | Eigen 3.3              | identifier std::round is undefined in device code              |
    | Latest (master branch) | Eigen 3.3.7            | identifier std::round is undefined in device code              |
    | Latest (master branch) | Eigen 3.4              | Error while running make                                       |
    | Dynet 2.0.3            | Eigen-2355b22          | Unsupported GPU architecture for compute_30 (while running make) |
    | Dynet 2.1              | Eigen-b2e267dc99d4.zip | Unsupported GPU architecture for compute_30 (while running make) |

    opened by vahini01 3
  • Dynet dynamic memory allocation

    I have installed DyNet with GPU compatibility as mentioned in the docs, and --dynet-mem is set in the train_single-source.sh file. Even then, I get the following error; the full traceback is below.

        [dynet] Device Number: 2
        [dynet] Device name: GeForce GTX 1080 Ti
        [dynet] Memory Clock Rate (KHz): 5505000
        [dynet] Memory Bus Width (bits): 352
        [dynet] Peak Memory Bandwidth (GB/s): 484.44
        [dynet] Memory Free (GB): 11.5464/11.7215
        [dynet] Device(s) selected: 2
        [dynet] random seed: 2652333402
        [dynet] using autobatching
        [dynet] allocating memory: 6000MB
        [dynet] memory allocation done.
        Param, load_model: None
        Traceback (most recent call last):
          File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/multisource_wrapper.py", line 65, in <module>
            pretrainer = PretrainHandler(
          File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/pretrain_handler.py", line 81, in __init__
            self.pretrain_model(pretrain_src1, pretrain_src2, pretrain_tgt, epochs)
          File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/pretrain_handler.py", line 88, in pretrain_model
            self.seq2seq_trainer.train(
          File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/seq2seq_trainer.py", line 55, in train
            batch_loss.backward()
          File "_dynet.pyx", line 823, in _dynet.Expression.backward
          File "_dynet.pyx", line 842, in _dynet.Expression.backward
        ValueError: Dynet does not support both dynamic increasing of memory pool size, and automatic batching or memory checkpointing. If you want to use automatic batching or checkpointing, please pre-allocate enough memory using the --dynet-mem command line option (details http://dynet.readthedocs.io/en/latest/commandline.html).

    opened by geeksouvik 3
  • Distributed training and auto-batching with this codebase

    Hi @shrutirij thanks for the great work and very well-documented and clean codebase, I greatly appreciate it!

    I adapted and tried running this codebase for some conceptually similar experiments and I've observed a few quirks that I wanted to run by you to get your thoughts since I haven't really used Dynet before.

    1. I ran the codebase with --dynet-gpus 8 (after also modifying opts.py to support this arg) and found that although 8 processes are spawned and attached to 8 GPUs, only the first process has > 0% GPU utilization. It appears that this codebase doesn't support distributed training in its current form. Is that accurate? Is there an equivalent to PyTorch's DistributedDataParallel and DistributedSampler that I can use to perform data-parallel training and inference with Dynet? It would greatly speed up my experiments.
    2. It appears that the training time with a CPU is the same as the training time with GPU when using 1 GPU via provision of the --dynet-gpu flag. Is this what you noticed too during your runs? If not, could you suggest how I can get this to run faster with a GPU?
    3. It appears that the Dynet auto-batching feature isn't working, because I tried running the code with and without the --dynet-autobatch 1 flag and the run-time doesn't seem to change. I see the main training loop looks like the following (where minibatch_size is always set to 1 here):
                for i in range(0, len(train_data), minibatch_size):
                    cur_size = min(minibatch_size, len(train_data) - i)
                    losses = []
                    dy.renew_cg()
                    for (src1, src2, tgt) in train_data[i : i + cur_size]:
                        losses.append(self.model.get_loss(src1, src2, tgt))
                    batch_loss = dy.esum(losses)
                    batch_loss.backward()
                    trainer.update()
                    epoch_loss += batch_loss.scalar_value()
                logging.info("Epoch loss: %0.4f" % (epoch_loss / len(train_data)))
    

    Doesn't this mean that cur_size is always 1, causing the inner for loop to just iterate over a list of size 1 by default? If I were to override minibatch_size to, say, 32, how does Dynet ensure that 1 forward operation occurs per batch of 32 examples instead of 32 separate forward passes?

    Thanks a lot for your time and thanks again for the great work toward protecting endangered languages!

    opened by g-karthik 0