Introduction
English | 简体中文
MMPose is an open-source toolbox for pose estimation based on PyTorch. It is a part of the OpenMMLab project.
The master branch works with PyTorch 1.3+.
Major Features
- **Support diverse tasks**: We support a wide spectrum of mainstream pose analysis tasks in the current research community, including 2D multi-person human pose estimation, 2D hand pose estimation, 2D face landmark detection, 133-keypoint whole-body human pose estimation, 3D human mesh recovery, fashion landmark detection and animal pose estimation. See demo.md for more information.
- **Higher efficiency and higher accuracy**: MMPose implements multiple state-of-the-art (SOTA) deep learning models, covering both top-down and bottom-up approaches. We achieve faster training speed and higher accuracy than other popular codebases, such as HRNet. See benchmark.md for more information.
- **Support for various datasets**: The toolbox directly supports multiple popular and representative datasets, including COCO, AIC, MPII, MPII-TRB and OCHuman. See data_preparation.md for more information.
- **Well designed, tested and documented**: We decompose MMPose into different components, so a customized pose estimation framework can be constructed by combining different modules; a short sketch of this modular composition follows the list. We provide detailed documentation and API references, as well as unit tests.
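As a minimal sketch of this modular design, the snippet below composes a top-down pose estimator from config dictionaries via the model registry. The module names and arguments (`TopDown`, `ResNet`, `TopDownSimpleHead`, the channel sizes) are illustrative assumptions and may differ between MMPose versions; check the files under configs/ for the exact components shipped with your installation.

```python
# A minimal sketch of composing a pose estimator from interchangeable
# modules via the registry. Module names and arguments are illustrative
# and may differ across MMPose versions; see configs/ for the exact ones.
from mmpose.models import build_posenet

model_cfg = dict(
    type='TopDown',                        # top-down estimator
    backbone=dict(type='ResNet', depth=50),
    keypoint_head=dict(
        type='TopDownSimpleHead',          # deconv head predicting heatmaps
        in_channels=2048,                  # ResNet-50 output channels
        out_channels=17),                  # 17 COCO keypoints
    train_cfg=dict(),
    test_cfg=dict(flip_test=True))

model = build_posenet(model_cfg)           # swap any component by editing the dicts
```

Swapping the backbone or head is a matter of changing the corresponding dict, which is how the configs in the model zoo are organized.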
Model Zoo
Supported algorithms:
- DeepPose (CVPR'2014)
- Wingloss (CVPR'2018)
- CPM (CVPR'2016)
- Hourglass (ECCV'2016)
- SimpleBaseline (ECCV'2018)
- HRNet (CVPR'2019)
- HRNetv2 (TPAMI'2019)
- SCNet (CVPR'2020)
- Associative Embedding (NeurIPS'2017)
- HigherHRNet (CVPR'2020)
- DarkPose (CVPR'2020)
- UDP (CVPR'2020)
- MSPN (ArXiv'2019)
- RSN (ECCV'2020)
- HMR (CVPR'2018)
Supported datasets:
- COCO (ECCV'2014)
- COCO-WholeBody (ECCV'2020)
- MPII (CVPR'2014)
- MPII-TRB (ICCV'2019)
- AI Challenger (ArXiv'2017)
- OCHuman (CVPR'2019)
- CrowdPose (CVPR'2019)
- PoseTrack18 (CVPR'2018)
- MHP (ACM MM'2018)
- sub-JHMDB (ICCV'2013)
- Human3.6M (TPAMI'2014)
- 300W (IMAVIS'2016)
- WFLW (CVPR'2018)
- AFLW (ICCVW'2011)
- COFW (ICCV'2013)
- OneHand10K (TCSVT'2019)
- FreiHand (ICCV'2019)
- RHD (ICCV'2017)
- CMU Panoptic HandDB (CVPR'2017)
- InterHand2.6M (ECCV'2020)
- DeepFashion (CVPR'2016)
- Horse-10 (WACV'2021)
- MacaquePose (bioRxiv'2020)
- Vinegar Fly (Nature Methods'2019)
- Desert Locust (eLife'2019)
- Grévy's Zebra (eLife'2019)
- ATRW (ACM MM'2020)
Supported backbones:
- AlexNet (NeurIPS'2012)
- VGG (ICLR'2015)
- HRNet (CVPR'2019)
- ResNet (CVPR'2016)
- ResNetV1D (CVPR'2019)
- ResNeSt (ArXiv'2020)
- ResNeXt (CVPR'2017)
- SCNet (CVPR'2020)
- SEResNet (CVPR'2018)
- ShuffleNetV1 (CVPR'2018)
- ShuffleNetV2 (ECCV'2018)
- MobileNetV2 (CVPR'2018)
Results and models are available in the README.md of each method's config directory. A summary can be found in the model zoo page. We will keep up with the latest progress of the community, and support more popular algorithms and frameworks.
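Below is a minimal inference sketch using the high-level API described in getting_started.md with a model zoo checkpoint. The config path, checkpoint path, test image and bounding box are placeholders, and the exact argument names and return values of the inference call vary slightly between MMPose versions.

```python
# A minimal sketch of single-image, top-down inference with the
# high-level API. Paths, the image and the bounding box are placeholders;
# argument names and return values vary slightly between versions.
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
                         vis_pose_result)

config_file = 'configs/top_down/resnet/coco/res50_coco_256x192.py'  # placeholder
checkpoint_file = 'checkpoints/res50_coco_256x192.pth'              # placeholder

model = init_pose_model(config_file, checkpoint_file, device='cuda:0')

# One person box in (x, y, w, h); in practice boxes come from a person
# detector (e.g. an MMDetection model).
person_bboxes = [[50, 50, 200, 400]]

pose_results = inference_top_down_pose_model(
    model, 'demo.jpg', person_bboxes, format='xywh')

vis_pose_result(model, 'demo.jpg', pose_results, out_file='vis_demo.jpg')
```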
If you have any feature requests, please feel free to leave a comment in Issues.
Benchmark
We demonstrate the superiority of the MMPose framework in terms of speed and accuracy on the standard COCO keypoint detection benchmark. Training speed is reported in seconds per iteration (lower is better) and accuracy as COCO keypoint mAP (higher is better).
| Model | Input size | MMPose (s/iter) | HRNet (s/iter) | MMPose (mAP) | HRNet (mAP) |
| :--- | :---: | :---: | :---: | :---: | :---: |
| resnet_50 | 256x192 | 0.28 | 0.64 | 0.718 | 0.704 |
| resnet_50 | 384x288 | 0.81 | 1.24 | 0.731 | 0.722 |
| resnet_101 | 256x192 | 0.36 | 0.84 | 0.726 | 0.714 |
| resnet_101 | 384x288 | 0.79 | 1.53 | 0.748 | 0.736 |
| resnet_152 | 256x192 | 0.49 | 1.00 | 0.735 | 0.720 |
| resnet_152 | 384x288 | 0.96 | 1.65 | 0.750 | 0.743 |
| hrnet_w32 | 256x192 | 0.54 | 1.31 | 0.746 | 0.744 |
| hrnet_w32 | 384x288 | 0.76 | 2.00 | 0.760 | 0.758 |
| hrnet_w48 | 256x192 | 0.66 | 1.55 | 0.756 | 0.751 |
| hrnet_w48 | 384x288 | 1.23 | 2.20 | 0.767 | 0.763 |
More details about the benchmark are available in benchmark.md.
Installation
Please refer to install.md for installation.
Data Preparation
Please refer to data_preparation.md for an overview of data preparation.
Get Started
Please see getting_started.md for the basic usage of MMPose. There are also tutorials (a minimal config sketch is shown after this list):
- learn about configs
- finetune model
- add new dataset
- customize data pipelines
- add new modules
- export a model to ONNX
- customize runtime settings
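As a minimal sketch of the config-driven workflow that the "learn about configs" and "finetune model" tutorials cover, the snippet below loads a config with MMCV and tweaks a few fields for finetuning. The config path, checkpoint path and hyper-parameter values are placeholders.

```python
# A minimal sketch of the config-driven workflow: load a config with
# MMCV and tweak a few fields for finetuning. The paths and values
# below are placeholders.
from mmcv import Config

cfg = Config.fromfile('configs/top_down/resnet/coco/res50_coco_256x192.py')  # placeholder

cfg.load_from = 'checkpoints/res50_coco_256x192.pth'  # start from a pretrained model (placeholder)
cfg.optimizer.lr = cfg.optimizer.lr * 0.1             # finetune with a smaller learning rate
cfg.total_epochs = 20                                 # train for fewer epochs
cfg.work_dir = './work_dirs/res50_finetune'           # where logs and checkpoints are written

print(cfg.pretty_text)  # inspect the fully resolved config before training
```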
FAQ
Please refer to FAQ for frequently asked questions.
License
This project is released under the Apache 2.0 license.
Citation
If you find this project useful in your research, please consider citing:
```bibtex
@misc{mmpose2020,
    title={OpenMMLab Pose Estimation Toolbox and Benchmark},
    author={MMPose Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmpose}},
    year={2020}
}
```
Contributing
We appreciate all contributions to improve MMPose. Please refer to CONTRIBUTING.md for the contributing guideline.
Acknowledgement
MMPose is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new models.
Projects in OpenMMLab
- MMCV: OpenMMLab foundational library for computer vision.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMOCR: A comprehensive toolbox for text detection, recognition and understanding.