CCPD: a diverse and well-annotated dataset for license plate detection and recognition

Overview

CCPD (Chinese City Parking Dataset, ECCV)

Update on 10/03/2019. The CCPD dataset has been updated. We are confident that the images in the CCPD subsets are much more challenging than before, with over 300k images and refined annotations.

(If you benefit from this dataset, please cite our paper.) The dataset can be downloaded and extracted with tar xf CCPD2019.tar.xz:

train/val/test split

The split files are available under the 'split/' folder.

Images in CCPD-Base are split into the train and validation sets. The sub-datasets (CCPD-DB, CCPD-Blur, CCPD-FN, CCPD-Rotate, CCPD-Tilt, CCPD-Challenge) are used for testing.
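
A minimal loading sketch, assuming each split file under 'split/' lists one relative image path per line (the exact file names, e.g. train.txt, are an assumption here, not confirmed by this README):

  import os

  def load_split(split_file, ccpd_root):
      # Read one relative image path per line and join it with the dataset root.
      with open(split_file) as f:
          return [os.path.join(ccpd_root, line.strip()) for line in f if line.strip()]

  # Hypothetical usage; adjust paths to your local copy of the dataset.
  train_images = load_split('split/train.txt', '/path/to/CCPD2019')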


Update on 16/09/2020. We add a new-energy-vehicle sub-dataset (CCPD-Green), whose license plates have eight characters.

It can be downloaded from:

metric

Each image in CCPD contains only a single license plate (LP), so we do not consider recall and concentrate on precision. Detectors are allowed to predict only one bounding box for each image.

  • Detection. For each image, the detector outputs only one bounding box. The bounding box is considered correct if and only if its IoU with the ground-truth bounding box is greater than 70% (IoU > 0.7); a sketch of this check (and of the recognition check below) follows the list. We also report AP on the test set.

  • Recognition. An LP recognition is correct if and only if all characters in the LP number are correctly recognized.
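
A minimal sketch of these two checks. The helper names and the box format, corner coordinates (x1, y1, x2, y2), are assumptions for illustration, not part of the repo's code:

  def iou(box_a, box_b):
      # Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2).
      ax1, ay1, ax2, ay2 = box_a
      bx1, by1, bx2, by2 = box_b
      ix1, iy1 = max(ax1, bx1), max(ay1, by1)
      ix2, iy2 = min(ax2, bx2), min(ay2, by2)
      inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
      union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
      return inter / union if union > 0 else 0.0

  def detection_correct(pred_box, gt_box):
      # The single predicted box is correct only if IoU with the ground truth exceeds 0.7.
      return iou(pred_box, gt_box) > 0.7

  def recognition_correct(pred_plate, gt_plate):
      # An LP is recognized correctly only if every character matches.
      return pred_plate == gt_plate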

benchmark

If you want to provide more baseline results or have problems with the provided results, please raise an issue.

detection
Model        FPS   AP      DB      Blur    FN      Rotate  Tilt    Challenge
Faster-RCNN  11    84.98   66.73   81.59   76.45   94.42   88.19   89.82
SSD300       25    86.99   72.90   87.06   74.84   96.53   91.86   90.06
SSD512       12    87.83   69.99   84.23   80.65   96.50   91.26   92.14
YOLOv3-320   52    87.23   71.34   82.19   82.44   96.69   89.17   91.46
recognition

We provide a recognition baseline by appending an LP recognition model, Holistic-CNN (HC) (see the paper 'Holistic recognition of low quality license plates by CNN using track annotated data'), to the detector.

Model      FPS   AP      DB      Blur    FN      Rotate  Tilt    Challenge
SSD512+HC  11    43.42   34.47   25.83   45.24   52.82   52.04   44.62

The 'AP' column shows the precision on the entire test set. The test set contains six parts: DB (ccpd_db/), Blur (ccpd_blur), FN (ccpd_fn), Rotate (ccpd_rotate), Tilt (ccpd_tilt), Challenge (ccpd_challenge).

This repository provides an open-source dataset for license plate detection and recognition, described in our ECCV 2018 paper 《Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline》 (also available in this GitHub repository). The dataset is released under the MIT license. If you benefit from this dataset, please cite our paper as follows:

@inproceedings{xu2018towards,
  title={Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline},
  author={Xu, Zhenbo and Yang, Wei and Meng, Ajin and Lu, Nanxue and Huang, Huan},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={255--271},
  year={2018}
}

Specification of the categories below:

  • rpnet: The training code for a license plate localization network and an end-to-end network that can detect the license plate bounding box and recognize the corresponding license plate number in a single forward pass. In addition, demo.py and the demo folder are provided for running the demo.

  • paper.pdf: Our published ECCV paper.

Demo

Demo code and several images are provided under the rpnet/ folder. After you obtain "fh02.pth" by downloading or training it, run the demo as follows. The demo code modifies the images in the rpnet/demo folder, so you can check the results by opening the demo images.


  python demo.py -i [ROOT/rpnet/demo/] -m [***/fh02.pth]

A nearly well-trained model for testing and for fun (for lack of time, it was trained for only 5 epochs, but that is enough for testing):

We encourage comparison with SOTA detectors such as FCOS rather than with RPnet, as the architecture of RPnet is quite old-fashioned.

Training instructions

Input parameters are well commented in the Python code (Python 2 and 3 both work; the PyTorch version should be >= 0.3). You can increase batchSize as long as enough GPU memory is available.

Environment (not critical as long as you can run the code):

  • Python packages: PyTorch (0.3.1), NumPy (1.14.3), cv2 (2.4.9.1).
  • System: CUDA (release 9.1, V9.1.85).

For convenience, we provide a trained wR2 model and a trained RPnet model; you can download them from Google Drive or Baidu Yun.

First, train the localization network defined in wR2.py (we provide a trained one, as mentioned above; you can download it from Google Drive or Baidu Yun) as follows:


  python wR2.py -i [IMG FOLDERS] -b 4

After wR2 is fine-tuned, we train RPnet (we provide a trained one, as mentioned above; you can download it from Google Drive or Baidu Yun), defined in rpnet.py. Please specify the variable wR2Path (the path to the well-trained wR2 model) in rpnet.py.


  python rpnet.py -i [TRAIN IMG FOLDERS] -b 4 -se 0 -f [MODEL SAVE FOLDER] -t [TEST IMG FOLDERS]

Test instructions

After fine-tuning RPnet, you need to uncompress a zip archive and select it as the test directory. The argument after -s is a folder for storing failure cases.


  python rpnetEval.py -m [MODEL PATH, like /**/fh02.pth] -i [TEST DIR] -s [FAILURE SAVE DIR]

Dataset Annotations

Annotations are embedded in the file name.

A sample image name is "025-95_113-154&383_386&473-386&473_177&454_154&383_363&402-0_0_22_27_27_33_16-37-15.jpg". Each name can be split into seven fields separated by '-'. These fields are explained below; a decoding sketch follows the list.

  • Area: The ratio of the license plate area to the entire image area.

  • Tilt degree: Horizontal tilt degree and vertical tilt degree.

  • Bounding box coordinates: The coordinates of the left-up and the right-bottom vertices.

  • Four vertices locations: The exact (x, y) coordinates of the four vertices of LP in the whole image. These coordinates start from the right-bottom vertex.

  • License plate number: Each image in CCPD has only one LP. Each LP number consists of a Chinese character, a letter, and five letters or digits. A valid Chinese license plate has seven characters: province (1 character), alphabet (1 character), alphabet+digits (5 characters). "0_0_22_27_27_33_16" gives the index of each character in the three arrays defined below. The last character of each array is the letter 'O' rather than the digit '0'; we use 'O' as a sign of "no character" because 'O' does not appear in Chinese license plate characters.

provinces = ["皖", "沪", "津", "渝", "冀", "晋", "蒙", "辽", "吉", "黑", "苏", "浙", "京", "闽", "赣", "鲁", "豫", "鄂", "湘", "粤", "桂", "琼", "川", "贵", "云", "藏", "陕", "甘", "青", "宁", "新", "警", "学", "O"]
alphabets = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W',
             'X', 'Y', 'Z', 'O']
ads = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
       'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'O']
  • Brightness: The brightness of the license plate region.

  • Blurriness: The blurriness of the license plate region.
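
A minimal decoding sketch for the file-name annotation described above, reusing the provinces/alphabets/ads arrays defined earlier; the function and variable names are my own, not part of the repo:

  import os

  def parse_ccpd_name(path):
      # Split the base name into the seven '-'-separated fields described above.
      name = os.path.splitext(os.path.basename(path))[0]
      area, tilt, bbox, vertices, lp, brightness, blurriness = name.split('-')

      h_tilt, v_tilt = [int(v) for v in tilt.split('_')]
      left_up, right_bottom = [tuple(int(v) for v in pt.split('&')) for pt in bbox.split('_')]
      corners = [tuple(int(v) for v in pt.split('&')) for pt in vertices.split('_')]  # starts from the right-bottom vertex

      # Decode the LP number: province + alphabet + five alphabet/digit characters.
      idx = [int(v) for v in lp.split('_')]
      plate = provinces[idx[0]] + alphabets[idx[1]] + ''.join(ads[i] for i in idx[2:])

      return {
          'area': int(area),
          'tilt': (h_tilt, v_tilt),
          'bbox': (left_up, right_bottom),
          'vertices': corners,
          'plate': plate,
          'brightness': int(brightness),
          'blurriness': int(blurriness),
      }

  # For the sample file name above, parse_ccpd_name(...) decodes the plate as "皖AY339S".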

Acknowledgement

If you have any problems about CCPD, please contact [email protected].

If you benefit from this dataset, please cite the paper 《Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline》.
