
AP-10K: A Benchmark for Animal Pose Estimation in the Wild

Introduction | Updates | Overview | Download | Training Code | Key Questions | License

Introduction

This repository is the official repository of AP-10K: A Benchmark for Animal Pose Estimation in the Wild (NeurIPS 2021 Datasets and Benchmarks Track). It contains the introduction, annotation files, and code for the AP-10K dataset, the first large-scale dataset for general animal pose estimation. AP-10K consists of 10,015 images collected and filtered from 23 animal families and 54 species, with high-quality keypoint annotations. We also provide about 50k additional images with family and species labels. The dataset can be used for supervised learning, cross-domain transfer learning, and the study of intra- and inter-family domain generalization. It can also be used for self-supervised learning, semi-supervised learning, etc. The annotation files follow the COCO style.
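
Because the annotations follow the COCO style, they can be read with the standard COCO API. Below is a minimal sketch, assuming the pycocotools package is installed and the directory layout matches the Dataset Preparation section below:

```python
from pycocotools.coco import COCO

# Load a COCO-style annotation file (path follows the layout in the
# Dataset Preparation section below).
coco = COCO('data/ap10k/annotations/ap10k-train-split1.json')

# List the annotated categories (species) and fetch one image's annotations.
cats = coco.loadCats(coco.getCatIds())
img_ids = coco.getImgIds()
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))

for ann in anns:
    # COCO keypoints are stored as a flat [x1, y1, v1, x2, y2, v2, ...]
    # list, where v is the visibility flag.
    print(ann['category_id'], ann['keypoints'][:6])
```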

Updates

01/11/2021 We have uploaded the corresponding code and pretrained models for using the AP-10K dataset!

01/11/2021 We have updated the dataset! It now has 54 species for training!

01/11/2021 The AP-10K dataset is integrated into mmpose! Please enjoy it!

11/10/2021 The paper is accepted to NeurIPS 2021 Datasets and Benchmarks Track!

31/08/2021 The paper is posted on arXiv! We have uploaded the annotation file!

Overview

Keypoint Definition

| Keypoint | Description | Keypoint | Description |
| --- | --- | --- | --- |
| 1 | Left Eye | 2 | Right Eye |
| 3 | Nose | 4 | Neck |
| 5 | Root of Tail | 6 | Left Shoulder |
| 7 | Left Elbow | 8 | Left Front Paw |
| 9 | Right Shoulder | 10 | Right Elbow |
| 11 | Right Front Paw | 12 | Left Hip |
| 13 | Left Knee | 14 | Left Back Paw |
| 15 | Right Hip | 16 | Right Knee |
| 17 | Right Back Paw | | |
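
For reference, the keypoint order in the table can be written as a plain list. This is a sketch derived from the table above; the identifier names are illustrative, not taken from the dataset tools:

```python
# Keypoint names in AP-10K order (1-indexed in the table above,
# 0-indexed here). The variable name is illustrative.
AP10K_KEYPOINTS = [
    'left_eye', 'right_eye', 'nose', 'neck', 'root_of_tail',
    'left_shoulder', 'left_elbow', 'left_front_paw',
    'right_shoulder', 'right_elbow', 'right_front_paw',
    'left_hip', 'left_knee', 'left_back_paw',
    'right_hip', 'right_knee', 'right_back_paw',
]
assert len(AP10K_KEYPOINTS) == 17
```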

Annotations Overview

Image Background

| Id | Background type | Id | Background type |
| --- | --- | --- | --- |
| 1 | grass or savanna | 2 | forest or shrub |
| 3 | mud or rock | 4 | snowfield |
| 5 | zoo or human habitation | 6 | swamp or riverside |
| 7 | desert or gobi | 8 | mugshot |
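
The background ids can likewise be kept as a small lookup table in code. This is a sketch based on the table above; the annotation field that stores the background id may be named differently in the released files:

```python
# Background-type ids from the table above. The dictionary name is
# illustrative; check the released annotation files for the exact field.
BACKGROUND_TYPES = {
    1: 'grass or savanna',
    2: 'forest or shrub',
    3: 'mud or rock',
    4: 'snowfield',
    5: 'zoo or human habitation',
    6: 'swamp or riverside',
    7: 'desert or gobi',
    8: 'mugshot',
}
```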

Download

The dataset and corresponding files can be downloaded from

[Google Drive] [Baidu Pan] (code: 6uz6)

(Optional) The full version with both labeled and unlabeled images can be downloaded with the script provided here

[Google Drive] [Baidu Pan] (code: 5lxi)

Training Code

Here we provide an example of training models on the AP-10K dataset. The code is based on the mmpose project.

Installation

Please refer to install.md for installation.

Dataset Preparation

Please download the dataset from the Download section and extract it under the data folder, e.g.,

```bash
mkdir data
unzip ap-10k.zip -d data/
mv data/ap-10k data/ap10k
```

The extracted dataset should look like this:

```text
AP-10K
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── ap10k
        ├── annotations
        │   ├── ap10k-train-split1.json
        │   ├── ap10k-train-split2.json
        │   ├── ap10k-train-split3.json
        │   ├── ap10k-val-split1.json
        │   ├── ap10k-val-split2.json
        │   ├── ap10k-val-split3.json
        │   ├── ap10k-test-split1.json
        │   ├── ap10k-test-split2.json
        │   └── ap10k-test-split3.json
        └── data
            ├── 000000000001.jpg
            ├── 000000000002.jpg
            └── ...
```

Inference

The checkpoints can be downloaded from HRNet-w32, HRNet-w48, ResNet-50, ResNet-101.

```bash
python tools/test.py <CONFIG_FILE> <DET_CHECKPOINT_FILE>
```
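
For example, assuming the HRNet-w32 checkpoint has been downloaded (the filename below is that of the released HRNet-w32 model), evaluation can be run as:

```bash
python tools/test.py \
    configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ap10k/hrnet_w32_ap10k_256x256.py \
    hrnet_w32_ap10k_256x256-18aac840_20211029.pth
```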

Training

```bash
bash tools/dist_train.sh <CONFIG_FILE> <GPU_NUM>
```

For example, to train the HRNet-w32 model with 1 GPU, please run:

```bash
bash tools/dist_train.sh configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ap10k/hrnet_w32_ap10k_256x256.py 1
```

Key Questions

1. For what purpose was the dataset created?

AP-10K was created to facilitate research in the area of animal pose estimation. With more training data from diverse species available, it becomes possible to study several challenging questions, such as:

  1. How do representative human pose estimation models perform on the animal pose estimation task?
  2. Will the representation ability of a deep model benefit from training on a large-scale dataset with diverse species?
  3. What is the impact of pretraining, e.g., on the ImageNet dataset or on human pose estimation datasets, in the context of a large-scale dataset with diverse species?
  4. What is the intra- and inter-family generalization ability of a model trained on data from specific species or families?

However, previous datasets for animal pose estimation contain a limited number of animal species. It is therefore impossible to study these questions using existing datasets, as they contain at most 5 species, which is far from enough to draw sound conclusions. By contrast, AP-10K covers 23 families and 54 species and can thus help researchers study these questions.

2. Was any cleaning of the data done?

We removed duplicated images by using the aHash algorithm to detect similar images, followed by manual checking. Images with heavy occlusion or logos were removed manually. The cleaned images were then categorized into different species and families.
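
Below is a minimal sketch of how such aHash-based duplicate detection can be done in Python, assuming the Pillow and imagehash packages; the distance threshold is illustrative, since the exact settings used for AP-10K are not specified:

```python
from pathlib import Path

import imagehash
from PIL import Image

def find_near_duplicates(image_dir, max_distance=5):
    """Find image pairs whose average-hash (aHash) Hamming distance is small.

    max_distance is an illustrative threshold; candidate pairs found this
    way were still checked manually.
    """
    hashes = {}
    duplicates = []
    # O(n^2) pairwise comparison; fine for a sketch at this dataset scale.
    for path in sorted(Path(image_dir).glob('*.jpg')):
        h = imagehash.average_hash(Image.open(path))
        for other_path, other_h in hashes.items():
            # imagehash overloads '-' to return the Hamming distance.
            if h - other_h <= max_distance:
                duplicates.append((path, other_path))
        hashes[path] = h
    return duplicates
```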

3. How were annotators instructed to label the keypoints?

Annotators first learned about the physiognomy, body structure and distribution of keypoints of the animals. Then, five images of each species were presented to annotators to annotate keypoints, which were used to assess their annotation quality. Annotators with good annotation quality were further trained on how to deal with the partial absence of the body due to occlusion and were involved in the subsequent annotation process. Annotators were asked to annotate all visible keypoints. For the occluded keypoints, they were asked to annotate keypoints whose location they could estimate based on body plan, pose, and the symmetry property of the body, where the length of occluded limbs or the location of occluded keypoints could be inferred from the visible limbs or keypoints. Other keypoints were left unlabeled.

To guarantee annotation quality, we adopted a sequential labeling strategy. Three rounds of cross-checking and correction were conducted, with both manual checks and automatic checks (according to specific rules, e.g., keypoints belonging to an instance must lie inside the same bounding box) to reduce the possibility of mislabeling. To begin with, annotators labeled the keypoints of each instance and submitted version-1 labels to senior, well-trained annotators. The senior annotators then checked the quality of the version-1 labels and returned an error list, according to which the annotators fixed the errors. Finally, the annotators submitted the fixed version-2 labels to the senior annotators, who performed a last round of correction to find any remaining mislabeled keypoints. After all three rounds were done, a release version of the dataset with high-quality labels was finished.
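
As an illustration of the automatic bounding-box rule mentioned above, the following is a minimal sketch over COCO-style annotations; the function name and tolerance parameter are illustrative, not part of the released tooling:

```python
def keypoints_inside_bbox(ann, tolerance=0.0):
    """Check that every labeled keypoint of an instance lies inside its
    bounding box (an illustrative version of the automatic rule above).

    ann is a COCO-style annotation dict with 'bbox' as [x, y, w, h] and
    'keypoints' as a flat [x1, y1, v1, x2, y2, v2, ...] list.
    """
    x, y, w, h = ann['bbox']
    kpts = ann['keypoints']
    for i in range(0, len(kpts), 3):
        kx, ky, v = kpts[i], kpts[i + 1], kpts[i + 2]
        if v == 0:  # v == 0 means the keypoint is not labeled
            continue
        if not (x - tolerance <= kx <= x + w + tolerance and
                y - tolerance <= ky <= y + h + tolerance):
            return False
    return True
```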

4. Unity of keypoints and differences in walk type

If we only followed biology and defined the keypoints by the positions of the bones, the resulting keypoints might be hard or even impossible to label, and would look inharmonious with the animal's movement. Ungulates (and other unguligrade animals) mainly rely on their toes in movement, with their paws, ankles, and knees observable. Compared with these keypoints, the actual hips are less distinctive and difficult to annotate, since they are hidden in the body. A similar phenomenon can be observed in digitigrade animals. On the other hand, plantigrade animals always walk with their metatarsals (paws) flat on the ground, with their paws, knees, and hips more distinguishable in movement. Thus, we annotate the paws, ankles, and knees for unguligrade and digitigrade animals, and the paws, knees, and hips for plantigrade animals. For simplicity, we use 'hip' to denote the knees of unguligrade and digitigrade animals and 'knee' to denote their ankles. For plantigrade animals, the annotation follows the biological definition. As a result, the visual distribution of keypoints is similar across the dataset, as the 'knee' is around the middle of the limbs for all animals.

5. What tasks could the dataset be used for?

AP-10K can be used for research on animal pose estimation. Besides, it can also be used for specific machine learning topics such as few-shot learning, domain generalization, and self-supervised learning. Please see the Discussion section in the paper.

License

The dataset is released under the CC-BY-4.0 license.

Comments
  • AP60K cannot be downloaded

    Hello,

    I'm trying to download the ap60k data with the script, but it raised an error [screenshot]. All of the images it downloaded are the image in the InternetImage folder. Can you run this script?

    opened by Drow999 3
  • KeyError: 'center'

    Thanks for your amazing work!

    When I run the inference code, using "python tools/test.py configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ap10k/hrnet_w32_ap10k_256x256.py hrnet_w32_ap10k_256x256-18aac840_20211029.pth", an error occurs: "KeyError: 'center'" [screenshot].

    Could you please tell me if there is something missing in the ckpt?

    opened by Lxiangyue 1
  • Where are the OKS sigma values for the AP-10K dataset?

    Thanks for your contributions. I am trying to incorporate the ap10k dataset into my own code. Comparing with the official evaluation code, I find that ap10k seems to use different OKS sigma values from COCO, but I cannot find the exact values in this repo. The sigma values in configs/base/datasets/ap10k.py seem to be the same as COCO's, which I think is unreasonable.

    opened by GoldExcalibur 1
  • May I know the meaning of each data split?

    Hi, thanks for introducing this dataset. I notice that there are three data splits in the annotation files. May I know whether they correspond to the different experimental settings in the paper? If yes, could you please provide more details on these three splits?

    opened by chaneyddtt 1
  • Greetings from MMPose team!

    Hi guys,

    Thanks for this excellent work. Animal pose estimation is a field of great potential, and the AP-10K dataset would definitely benefit the research community.

    We are currently maintaining MMPose project. We are glad to notice that MMPose has been used to benchmark algorithms on the AP-10K dataset. Hope you enjoyed it :). It has always been the purpose of OpenMMLab to facilitate computer vision research.

    And of course, all kinds of contributions from the community are most welcome and would make OpenMMLab better. We have supported several animal pose estimation datasets with a collection of pre-trained models in MMPose. We wonder if you would like to add AP-10K into MMPose and release the models in our Model Zoo. It would provide a convenient way to access and use the AP-10K dataset, and also greatly complement MMPose.

    Please let me know if you have any plans. My email is [email protected]

    Best, Yining Li

    opened by ly015 1
  • Does AP10K contain photos from other datasets?

    Hello, I would like to ask whether AP10K contains photos from other datasets, such as the Animal-Pose Dataset, Animals with Attributes 2, and animals_5?

    opened by doufengqi 0
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Key 'center' and 'scale' missing from result list

    I used my own dataset for testing purposes. The config file is hrnet_w32_ap10k_256x256. I have downloaded the pre-trained model and removed the evaluation code since I don't have ground truth. I had to manually add these 2 keys to the result dict in top_down_transform.py and shared_transform.py to make the code work. Did I miss something?

    opened by PeteBai 3