Corner-based Region Proposal Network
CRPN is a two-stage detection framework for multi-oriented scene text. It employs corners to estimate the possible locations of text instances and a region-wise subnetwork for further classification and regression. In our experiments, it achieves F-measures of 0.876 and 0.845 on ICDAR 2013 and ICDAR 2015, respectively. The paper is available at arXiv (arXiv:1804.02690).
Installation
This code is based on Caffe and py-faster-rcnn. It has been tested on Ubuntu 16.04 with CUDA 8.0.
- Clone this repository
  ```
  git clone https://github.com/xhzdeng/crpn.git
  ```
- Build Caffe and pycaffe
  ```
  cd $CRPN_ROOT/caffe-fast-rcnn
  make -j8 && make pycaffe
  ```
- Build the Cython modules
  ```
  cd $CRPN_ROOT/lib
  make
  ```
- Prepare your own training data directory. For convenience, it should have this basic structure (a small sanity-check sketch appears after this list):
  ```
  $VOCdevkit/
  $VOCdevkit/VOC2007        # image sets, annotations, etc.
  ```
  Then create symlinks for YOUR dataset:
  ```
  cd $CRPN_ROOT/data
  ln -s [path] VOCdevkit
  ```
- Download the pretrained ImageNet VGG-16 model. You can find it at the Caffe Model Zoo.
- Train with YOUR dataset
  ```
  cd $CRPN_ROOT
  ./experiments/scripts/train.sh [NET] [MODEL] [DATASET] [ITER_NUM]
  # NET is the network arch to use, only {vgg16} in this implementation
  # MODEL is the pre-trained model used to initialize your weights
  # DATASET points to your dataset, please refer to the contents of train.sh
  # ITER_NUM is the number of training iterations
  ```
- Test with YOUR models
  ```
  cd $CRPN_ROOT
  ./experiments/scripts/test.sh [NET] [MODEL] [DATASET]
  # NET is the network arch to use, only {vgg16} in this implementation
  # MODEL is the testing model
  # DATASET points to your dataset, please refer to the contents of test.sh
  ```
  Test outputs are saved under:
  ```
  output/<experiment directory>/<dataset name>/<network snapshot name>/
  ```
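As a quick way to verify the dataset layout from the data-preparation step above, here is a minimal Python sketch. The subdirectory names (`Annotations`, `ImageSets`, `JPEGImages`) are assumptions based on the standard PASCAL VOC / py-faster-rcnn convention, not something this repository enforces; adjust them if your data is organized differently.

```python
import os

# Assumed VOCdevkit-style layout (standard PASCAL VOC convention);
# the subdirectory names below are assumptions, not enforced by this repo.
devkit = os.path.join('data', 'VOCdevkit', 'VOC2007')
for sub in ('Annotations', 'ImageSets', 'JPEGImages'):
    path = os.path.join(devkit, sub)
    print('{:12s} {}'.format(sub, 'OK' if os.path.isdir(path) else 'MISSING'))
```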
Demo
```
cd $CRPN_ROOT
./tools/demo.py --net [NET] --model [MODEL]
# NET is the network arch to use, only {vgg16} in this implementation
# MODEL is the path of caffemodel you want to use
```
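If you prefer to call the network from your own script rather than through `tools/demo.py`, the following sketch shows a plain pycaffe forward pass. It assumes a py-faster-rcnn-style test prototxt with `data` and `im_info` input blobs; the prototxt path, image path, and blob names here are assumptions rather than guarantees of this repository, so check them against your own files.

```python
import caffe
import cv2
import numpy as np

caffe.set_mode_gpu()
caffe.set_device(0)

# Hypothetical paths -- point these at your own prototxt / caffemodel.
prototxt = 'models/vgg16/test.prototxt'
caffemodel = 'path/to/your.caffemodel'
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

# Load an image and subtract the py-faster-rcnn pixel means (BGR order).
im = cv2.imread('demo.jpg').astype(np.float32)
im -= np.array([[[102.9801, 115.9465, 122.7717]]])
blob = im.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW

# Feed the image (and its size) to the network and run a forward pass.
net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
net.blobs['im_info'].data[...] = [blob.shape[2], blob.shape[3], 1.0]
outputs = net.forward()
print(outputs.keys())   # output blob names depend on the prototxt
```

For the full pre- and post-processing pipeline, refer to `tools/demo.py` itself.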
Models
You can now download the pretrained model from OneDrive or BaiduYun, which was trained for 100k iterations on SynthText. I have also uploaded a recently trained testing model. It achieves an F-measure of 0.8456 at 840p resolution on ICDAR 2015, with performance similar to, but slightly faster than, what we reported in the paper.
Citation
If you find the paper and code useful in your research, please consider citing:
```
@article{deng2018crpn,
    Title = {Detecting Multi-Oriented Text with Corner-based Region Proposals},
    Author = {Linjie Deng and Yanxiang Gong and Yi Lin and Jingwen Shuai and Xiaoguang Tu and Yufei Zhang and Zheng Ma and Mei Xie},
    Journal = {arXiv preprint arXiv:1804.02690},
    Year = {2018}
}
```