This is an official PyTorch implementation of Lite-HRNet: A Lightweight High-Resolution Network.

Overview

Lite-HRNet: A Lightweight High-Resolution Network

Introduction

This is an official PyTorch implementation of Lite-HRNet: A Lightweight High-Resolution Network. In this work, we present an efficient high-resolution network, Lite-HRNet, for human pose estimation. We start by simply applying the efficient shuffle block from ShuffleNet to HRNet (the high-resolution network), which already yields stronger performance than popular lightweight networks such as MobileNet, ShuffleNet, and Small HRNet. We then find that the heavily used pointwise (1x1) convolutions in the shuffle blocks become the computational bottleneck. We introduce a lightweight unit, conditional channel weighting, to replace the costly pointwise (1x1) convolutions in shuffle blocks. The complexity of channel weighting is linear w.r.t. the number of channels, lower than the quadratic complexity of pointwise convolutions. Our solution learns the weights from all the channels and over the multiple resolutions that are readily available in the parallel branches of HRNet. It uses the weights as a bridge to exchange information across channels and resolutions, compensating for the role played by the pointwise (1x1) convolution. Lite-HRNet demonstrates superior results on human pose estimation over popular lightweight networks. Moreover, Lite-HRNet can easily be applied to the semantic segmentation task in the same lightweight manner.
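
The conditional channel weighting unit can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration of the idea, not the exact code in models/backbones/litehrnet.py; the class layout, the plain Conv2d layers (the repository uses conv-BN modules), and the nearest-neighbour upsampling are our simplifying assumptions. Each branch is pooled to the lowest resolution, per-channel weights are computed jointly over all branches with two cheap pointwise convolutions, and the weights are upsampled and multiplied back onto each branch:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossResolutionWeighting(nn.Module):  # illustrative sketch only
    def __init__(self, channels, reduce_ratio=8):
        super().__init__()
        total = sum(channels)  # total channels across all parallel branches
        self.channels = channels
        self.conv1 = nn.Conv2d(total, total // reduce_ratio, kernel_size=1)  # squeeze
        self.conv2 = nn.Conv2d(total // reduce_ratio, total, kernel_size=1)  # expand

    def forward(self, xs):  # xs: feature maps from highest to lowest resolution
        smallest = xs[-1].shape[-2:]
        # pool every branch to the lowest resolution and fuse along channels
        pooled = torch.cat([F.adaptive_avg_pool2d(x, smallest) for x in xs], dim=1)
        w = torch.sigmoid(self.conv2(F.relu(self.conv1(pooled))))
        # split the weights per branch, upsample them, and reweight each branch
        return [x * F.interpolate(wi, size=x.shape[-2:], mode='nearest')
                for x, wi in zip(xs, torch.split(w, self.channels, dim=1))]

# e.g. a two-branch stage with 40 and 80 channels
xs = [torch.randn(1, 40, 64, 48), torch.randn(1, 80, 32, 24)]
ys = CrossResolutionWeighting([40, 80])(xs)  # same shapes as the inputs

Because the pointwise convolutions here run on pooled, lowest-resolution maps and the reweighting itself is an elementwise multiply, the per-pixel cost stays linear in the number of channels.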

Results and models

Results on COCO val2017, using a person detector with human AP of 56.4 on the COCO val2017 dataset

| Arch | Input Size | #Params | FLOPs | AP | AP50 | AP75 | AR | AR50 | ckpt |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Lite-HRNet-18 | 256x192 | 1.1M | 205.2M | 0.648 | 0.867 | 0.730 | 0.712 | 0.911 | GoogleDrive or OneDrive |
| Lite-HRNet-18 | 384x288 | 1.1M | 461.6M | 0.676 | 0.878 | 0.750 | 0.737 | 0.921 | GoogleDrive or OneDrive |
| Lite-HRNet-30 | 256x192 | 1.8M | 319.2M | 0.672 | 0.880 | 0.750 | 0.733 | 0.922 | GoogleDrive or OneDrive |
| Lite-HRNet-30 | 384x288 | 1.8M | 717.8M | 0.704 | 0.887 | 0.777 | 0.762 | 0.928 | GoogleDrive or OneDrive |

Results on MPII val set

| Arch | Input Size | #Params | FLOPs | Mean | Mean@0.1 | ckpt |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Lite-HRNet-18 | 256x256 | 1.1M | 273.4M | 0.854 | 0.295 | GoogleDrive or OneDrive |
| Lite-HRNet-30 | 256x256 | 1.8M | 425.3M | 0.870 | 0.313 | GoogleDrive or OneDrive |

Environment

The code is developed using Python 3.6 on Ubuntu 16.04. NVIDIA GPUs are needed. The code is developed and tested using 8 NVIDIA V100 GPU cards. Other platforms or GPU cards are not fully tested.

Quick Start

Requirements

  • Linux (Windows is not officially supported)
  • Python 3.6+
  • PyTorch 1.3+
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • mmcv (Please install the latest version of mmcv-full)
  • Numpy
  • cv2
  • json_tricks
  • xtcocotools

Installation

a. Install mmcv. We recommend installing the pre-built mmcv-full as below.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

Please replace {cu_version} and {torch_version} in the URL with your desired versions. For example, to install the latest mmcv-full with CUDA 11 and PyTorch 1.7.0, use the following command:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.0/index.html

If mmcv compiles from source during installation, check that your CUDA and PyTorch versions exactly match the versions in the mmcv-full installation command. For example, PyTorch 1.7.0 and 1.7.1 are treated differently. See here for the MMCV versions compatible with different PyTorch and CUDA versions.
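
To check which PyTorch and CUDA versions are actually installed in your environment, so that the URL matches exactly, you can run:

python -c "import torch; print(torch.__version__, torch.version.cuda)"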

Optionally, you can compile mmcv from source with the following commands:

git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
MMCV_WITH_OPS=1 pip install -e .  # package mmcv-full, which contains cuda ops, will be installed after this step
# OR pip install -e .  # package mmcv, which contains no cuda ops, will be installed after this step
cd ..

Or directly run

pip install mmcv-full
# alternative: pip install mmcv

Important: You need to run pip uninstall mmcv first if you already have mmcv installed. If both mmcv and mmcv-full are installed, you will get a ModuleNotFoundError.

b. Install build requirements

pip install -r requirements.txt

Prepare datasets

It is recommended to symlink the dataset root to $LITE_HRNET/data. If your folder structure is different, you may need to change the corresponding paths in config files.
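
For example, assuming your datasets are stored under /path/to/datasets (a placeholder path), the symlink can be created with:

ln -s /path/to/datasets $LITE_HRNET/data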

For COCO data, please download from COCO download; 2017 Train/Val is needed for COCO keypoint training and validation. HRNet-Human-Pose-Estimation provides the person detection results on COCO val2017 used to reproduce our multi-person pose estimation results. Please download from OneDrive Download, extract the files under $LITE_HRNET/data, and make them look like this:

lite_hrnet
├── configs
├── models
├── tools
└── data
    └── coco
        ├── annotations
        │   ├── person_keypoints_train2017.json
        │   └── person_keypoints_val2017.json
        ├── person_detection_results
        │   └── COCO_val2017_detections_AP_H_56_person.json
        ├── train2017
        │   ├── 000000000009.jpg
        │   ├── 000000000025.jpg
        │   ├── 000000000030.jpg
        │   └── ...
        └── val2017
            ├── 000000000139.jpg
            ├── 000000000285.jpg
            ├── 000000000632.jpg
            └── ...

For MPII data, please download from the MPII Human Pose Dataset. We have converted the original annotation files into JSON format; please download them from mpii_annotations. Extract them under $LITE_HRNET/data, and make them look like this:

lite_hrnet
├── configs
├── models
├── tools
└── data
    └── mpii
        ├── annotations
        │   ├── mpii_gt_val.mat
        │   ├── mpii_test.json
        │   ├── mpii_train.json
        │   ├── mpii_trainval.json
        │   └── mpii_val.json
        └── images
            ├── 000001163.jpg
            └── 000003072.jpg

Training and Testing

All outputs (log files and checkpoints) will be saved to the working directory, which is specified by work_dir in the config file.
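
For example, the corresponding line in a config might look like this (the directory name is a placeholder):

work_dir = './work_dirs/litehrnet_18_coco_256x192'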

By default, we evaluate the model on the validation set after each epoch. You can change the evaluation interval by modifying the interval argument in the training config:

evaluation = dict(interval=5)  # This evaluates the model every 5 epochs.

According to the Linear Scaling Rule, you need to set the learning rate proportional to the total batch size if you use a different number of GPUs or samples per GPU, e.g., lr=0.01 for 4 GPUs x 2 samples/gpu and lr=0.08 for 16 GPUs x 4 samples/gpu.
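
Concretely, the rule keeps lr / total_batch_size constant. A minimal sketch of the adjustment, using the illustrative numbers above (the optimizer type shown is an assumption, not necessarily the repository default):

# base setting: 4 GPUs x 2 samples/GPU = 8 samples per iteration
base_lr, base_batch = 0.01, 4 * 2
# new setting: 16 GPUs x 4 samples/GPU = 64 samples per iteration
new_batch = 16 * 4
optimizer = dict(type='Adam', lr=base_lr * new_batch / base_batch)  # 0.01 * 64 / 8 = 0.08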

Training

# train with a single GPU
python tools/train.py ${CONFIG_FILE} [optional arguments]

# train with multiple GPUs
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]

Optional arguments are (a combined example follows the list):

  • --validate (strongly recommended): Perform evaluation every k epochs (default: 5) during training.
  • --work-dir ${WORK_DIR}: Override the working directory specified in the config file.
  • --resume-from ${CHECKPOINT_FILE}: Resume from a previous checkpoint file.
  • --gpus ${GPU_NUM}: Number of GPUs to use, which is only applicable to non-distributed training.
  • --seed ${SEED}: Seed for the random state in Python, NumPy and PyTorch.
  • --deterministic: If specified, set deterministic options for the CUDNN backend.
  • JOB_LAUNCHER: Launcher for distributed job initialization. Allowed choices are none, pytorch, slurm, mpi. If set to none, training runs in non-distributed mode.
  • LOCAL_RANK: ID of the local rank. If not specified, it will be set to 0.
  • --autoscale-lr: If specified, automatically scale the lr with the number of GPUs by the Linear Scaling Rule.
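
For example, a distributed run combining several of these options could look like this (the work directory name is a placeholder):

./tools/dist_train.sh configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py 8 \
    --work-dir work_dirs/litehrnet_18_coco --validate --seed 0 --deterministic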

Difference between resume-from and load-from: resume-from loads both the model weights and the optimizer status, and the epoch is also inherited from the specified checkpoint; it is usually used for resuming a training process that was interrupted accidentally. load-from only loads the model weights, and training starts from epoch 0; it is usually used for fine-tuning.
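
For example (checkpoint paths are placeholders):

# resume an interrupted run: weights, optimizer state and epoch are all restored
python tools/train.py ${CONFIG_FILE} --resume-from work_dirs/litehrnet_18_coco/latest.pth

# fine-tune from pretrained weights only (training restarts from epoch 0);
# load-from is typically set via the load_from field in the config file:
# load_from = 'checkpoints/SOME_CHECKPOINT.pth'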

Examples:

Training on COCO train2017 dataset

./tools/dist_train.sh configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py 8

Training on MPII dataset

./tools/dist_train.sh configs/top_down/lite_hrnet/mpii/litehrnet_18_mpii_256x256.py 8

Testing

You can use the following commands to test a dataset.

# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRIC}] \
    [--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]

# multiple-gpu testing
./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRIC}] \
    [--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
    [--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]

Optional arguments:

  • RESULT_FILE: Filename of the output results. If not specified, the results will not be saved to a file.
  • EVAL_METRIC: Items to be evaluated on the results. Allowed values depend on the dataset.
  • NUM_PROC_PER_GPU: Number of processes per GPU. If not specified, only one process will be assigned to each GPU.
  • --gpu_collect: If specified, results will be collected using GPU communication. Otherwise, the results on different GPUs are saved to TMPDIR and collected by the rank 0 worker.
  • TMPDIR: Temporary directory used for collecting results from multiple workers, used when --gpu_collect is not specified.
  • AVG_TYPE: How to average the test clips. If set to prob, softmax is applied before averaging the clip scores. Otherwise, the clip scores are averaged directly.
  • JOB_LAUNCHER: Launcher for distributed job initialization. Allowed choices are none, pytorch, slurm, mpi. If set to none, testing runs in non-distributed mode.
  • LOCAL_RANK: ID of the local rank. If not specified, it will be set to 0.

Examples:

Test LiteHRNet-18 on COCO with 8 GPUs, and evaluate the mAP.

./tools/dist_test.sh configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py \
    checkpoints/SOME_CHECKPOINT.pth 8 \
    --eval mAP

Get the computational complexity

You can use the following command to compute the complexity of a model.

python tools/summary_network.py ${CONFIG_FILE} --shape ${SHAPE}

Arguments:

  • SHAPE: Input size.

Examples:

Compute the complexity of LiteHRNet-18 with 256x256 resolution input.

python tools/summary_network.py configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py \
    --shape 256 256

Acknowledgement

Thanks to:

Citation

If you use our code or models in your research, please cite:

@inproceedings{Yulitehrnet21,
  title={Lite-HRNet: A Lightweight High-Resolution Network},
  author={Yu, Changqian and Xiao, Bin and Gao, Changxin and Yuan, Lu and Zhang, Lei and Sang, Nong and Wang, Jingdong},
  booktitle={CVPR},
  year={2021}
}

@inproceedings{SunXLW19,
  title={Deep High-Resolution Representation Learning for Human Pose Estimation},
  author={Ke Sun and Bin Xiao and Dong Liu and Jingdong Wang},
  booktitle={CVPR},
  year={2019}
}

@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and 
          Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and 
          Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}

Comments
  • Lite-HRNet does not perform well in the bottom-up pose estimation task

    This is a great job. I use Lite-HRNet as the backbone to reproduce HigherHRNet-Human-Pose-Estimation, but the result is not very satisfactory. I hope you can provide useful suggestions. Thank you for your work. Do you have plans to test the bottom-up pose estimation task?

    opened by xiexiaoshinick 3
  • Would you release a demo.py to test a single image or video?

    opened by codylcs 3
  • No module named 'mmpose.models.registry'

    (base) chenxin@chenxin-Nitro-AN515-52:~/disk1/github/Lite-HRNet/mmpose/mmpose/models$ tree .
    .
    ├── backbones
    │   ├── alexnet.py
    │   ├── base_backbone.py
    │   ├── cpm.py
    │   ├── hourglass.py
    │   ├── hrnet.py
    │   ├── __init__.py
    │   ├── mobilenet_v2.py
    │   ├── mobilenet_v3.py
    │   ├── mspn.py
    │   ├── regnet.py
    │   ├── resnest.py
    │   ├── resnet.py
    │   ├── resnext.py
    │   ├── rsn.py
    │   ├── scnet.py
    │   ├── seresnet.py
    │   ├── seresnext.py
    │   ├── shufflenet_v1.py
    │   ├── shufflenet_v2.py
    │   ├── tcn.py
    │   ├── utils
    │   │   ├── channel_shuffle.py
    │   │   ├── __init__.py
    │   │   ├── inverted_residual.py
    │   │   ├── make_divisible.py
    │   │   ├── se_layer.py
    │   │   └── utils.py
    │   └── vgg.py
    ├── builder.py
    ├── detectors
    │   ├── associative_embedding.py
    │   ├── base.py
    │   ├── __init__.py
    │   ├── interhand_3d.py
    │   ├── mesh.py
    │   ├── multi_task.py
    │   ├── pose_lifter.py
    │   └── top_down.py
    ├── heads
    │   ├── ae_higher_resolution_head.py
    │   ├── ae_simple_head.py
    │   ├── deeppose_regression_head.py
    │   ├── hmr_head.py
    │   ├── __init__.py
    │   ├── interhand_3d_head.py
    │   ├── temporal_regression_head.py
    │   ├── topdown_heatmap_base_head.py
    │   ├── topdown_heatmap_multi_stage_head.py
    │   └── topdown_heatmap_simple_head.py
    ├── __init__.py
    ├── losses
    │   ├── classfication_loss.py
    │   ├── __init__.py
    │   ├── mesh_loss.py
    │   ├── mse_loss.py
    │   ├── multi_loss_factory.py
    │   └── regression_loss.py
    ├── misc
    │   ├── discriminator.py
    │   └── __init__.py
    ├── necks
    │   ├── gap_neck.py
    │   └── __init__.py
    └── utils
        ├── geometry.py
        ├── __init__.py
        └── ops.py

    8 directories, 60 files

    opened by mathpopo 2
  • Installation errors

    1. requirements.txt requires numpy==1.19.0, but this raises "ValueError: numpy.ndarray size changed, may indicate binary incompatibility". Fix: it is a version problem; install numpy 1.20.0:
       pip install numpy==1.20.0

    2. The mmpose package is missing; a recent version is required, otherwise some modules (such as NECKS) are missing:
       git clone git@github.com:open-mmlab/mmpose.git
       cd mmpose
       pip install -r requirements.txt
       python setup.py develop

    3. ModuleNotFoundError: No module named 'poseval':
       git clone https://github.com/svenkreiss/poseval.git
       cd poseval
       pip install -e .

    4. ModuleNotFoundError: No module named 'tensorboard':
       pip install future tensorboard

    opened by WangChen100 2
  • When loading the model in test.py, it fails with an error; config: configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py

    Connected to pydev debugger (build 202.7319.64)
    Use load_from_local loader
    The model and loaded state dict do not match exactly

    unexpected key in source state_dict: backbone.stage0.2.layers.0.cross_resolution_weighting.conv1.conv.weight, backbone.stage0.2.layers.0.cross_resolution_...

    How do I load the model in the right way?

    opened by codylcs 2
  • I got an issue about KeyError: 'center'; there is no key 'center' in the results

    D:\Anaconda3\envs\opencv\lib\site-packages\tensorboard\compat\proto\tensor_shape_pb2.py:23: DeprecationWarning: Call to deprecated create function FileDescriptor(). Note: Create unlinked descriptors is going to go away. Please use get/find descriptors from generated code or query the descriptor_pool.
    ...
    Traceback (most recent call last):
      File "D:/code/CV/Lite-HRNet-hrnet/tools/train.py", line 166, in <module>
        main()
      File "D:/code/CV/Lite-HRNet-hrnet/tools/train.py", line 162, in main
        meta=meta)
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmpose\apis\train.py", line 205, in train_model
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 130, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmcv\runner\epoch_based_runner.py", line 47, in train
        for i, data_batch in enumerate(self.data_loader):
      File "D:\Anaconda3\envs\opencv\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
        data = self._next_data()
      File "D:\Anaconda3\envs\opencv\lib\site-packages\torch\utils\data\dataloader.py", line 561, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "D:\Anaconda3\envs\opencv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\Anaconda3\envs\opencv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmpose\datasets\datasets\base\kpt_2d_sview_rgb_img_top_down_dataset.py", line 283, in __getitem__
        return self.pipeline(results)
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmpose\datasets\pipelines\shared_transform.py", line 107, in __call__
        data = t(data)
      File "D:\Anaconda3\envs\opencv\lib\site-packages\mmpose\datasets\pipelines\top_down_transform.py", line 115, in __call__
        center = results['center']
    KeyError: 'center'

    opened by siqi777 1
  • KeyError: "TopDown: 'TopDownSimpleHead is not in the models registry'"

    Hello, after setting up the training environment, I ran the command: python tools/summary_network.py configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py --shape 256 256 and got the error in the title. How can I solve this problem?

    Traceback (most recent call last):
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmcv/utils/registry.py", line 52, in build_from_cfg
        return obj_cls(**args)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmpose/models/detectors/top_down.py", line 68, in __init__
        self.keypoint_head = builder.build_head(keypoint_head)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmpose/models/builder.py", line 29, in build_head
        return HEADS.build(cfg)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmcv/utils/registry.py", line 212, in build
        return self.build_func(*args, **kwargs, registry=self)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmcv/utils/registry.py", line 45, in build_from_cfg
        f'{obj_type} is not in the {registry.name} registry')
    KeyError: 'TopDownSimpleHead is not in the models registry'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "tools/summary_network.py", line 66, in <module>
        main()
      File "tools/summary_network.py", line 40, in main
        model = build_posenet(cfg.model)
      File "./models/builder.py", line 53, in build_posenet
        return build(cfg, POSENETS)
      File "./models/builder.py", line 28, in build
        return build_from_cfg(cfg, registry, default_args)
      File "/mnt/VENVS/MMcv/lib/python3.6/site-packages/mmcv/utils/registry.py", line 55, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: "TopDown: 'TopDownSimpleHead is not in the models registry'"

    My environment (package versions):

    addict 2.4.0, chumpy 0.70, cycler 0.11.0, Cython 0.29.26, dataclasses 0.8, future 0.18.2, json-tricks 3.15.5, kiwisolver 1.3.1, matplotlib 3.3.4, mmcv-full 1.4.2, mmpose 0.21.0, munkres 1.1.4, numpy 1.19.5, opencv-python 4.3.0.36, opencv-python-headless 4.5.5.62, packaging 21.3, pandas 1.1.5, Pillow 8.4.0, pip 21.3.1, pyparsing 3.0.6, python-dateutil 2.8.2, pytz 2021.3, PyYAML 6.0, scipy 1.5.4, setuptools 59.6.0, six 1.16.0, torch 1.7.0, torchaudio 0.7.0, torchvision 0.8.0, typing_extensions 4.0.1, wheel 0.37.1, xtcocotools 1.10, yapf 0.32.0

    opened by gsx1378 1
  • Where is the "Topdown" type?

    model = dict(
        type='Topdown',
        # num_stages=3,
        pretrained=None,
        backbone=dict(
            type='LiteHRNet',
            in_channels=3,
            extra=dict(
                stem=dict(stem_channels=32, out_channels=32, expand_ratio=1),
                num_stages=3,
                stages_spec=dict(
                    num_modules=(3, 8, 3),
                    num_branches=(2, 3, 4),
                    num_blocks=(2, 2, 2),
                    module_type=('LITE', 'LITE', 'LITE'),
                    with_fuse=(True, True, True),
                    reduce_ratios=(8, 8, 8),
                    num_channels=(
                        (40, 80),
                        (40, 80, 160),
                        (40, 80, 160, 320),
                    )),
                with_head=False,
            )),

    Any ideas about where to find "Topdown"?

    opened by ronghui19 1
  • different from another hrnet

    https://github.com/HRNet/Lite-HRNet/blob/0ff756074c199ae3bd06fb651019bd151b33142e/models/backbones/litehrnet.py#L634

    https://github.com/HRNet/HRNet-Semantic-Segmentation/blob/f9fb1ba66ff8aea29d833b885f08df64e62c2b23/lib/models/hrnet.py#L264

    https://github.com/HRNet/HRNet-Human-Pose-Estimation/blob/00d7bf72f56382165e504b10ff0dddb82dca6fd2/lib/models/pose_hrnet.py#L258

    https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation/blob/aa23881492ff511185acf756a2e14725cc4ab4d7/lib/models/pose_higher_hrnet.py#L236

    yours: for j in range(self.num_branches):

    others: for j in range(1, self.num_branches):

    Should the range be (1, self.num_branches)?

    opened by neosoob 1
  • summary_network.py error

    When I try to run summary_network via "python tools/summary_network.py configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py --shape 256 256 --with-head", the following error is raised:

    Traceback (most recent call last):
      File "tools/summary_network.py", line 80, in <module>
        main()
      File "tools/summary_network.py", line 45, in main
        model = build_posenet(cfg.model)
      File "/usr/local/lib/python3.6/site-packages/mmpose/models/builder.py", line 52, in build_posenet
        return build(cfg, POSENETS)
      File "/usr/local/lib/python3.6/site-packages/mmpose/models/builder.py", line 27, in build
        return build_from_cfg(cfg, registry, default_args)
      File "/usr/local/lib/python3.6/site-packages/mmcv/utils/registry.py", line 182, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    KeyError: "TopDown: 'LiteHRNet is not in the backbone registry'"

    But if I replace "from mmpose.models import build_posenet" with "from models.builder import build_posenet", it raises another error:

    Traceback (most recent call last):
      File "tools/summary_network.py", line 80, in <module>
        main()
      File "tools/summary_network.py", line 62, in main
        format(model.__class__.__name__))
    NotImplementedError: FLOPs counter is currently not currently supported with TopDown

    So, how can I correctly calculate the FLOPs and MACs?

    Besides, I have converted the smallest model to ncnn, but its inference time is not faster than MobileNetV2. Have you tested the inference time against MobileNetV2?

    opened by SatMa34 1
  • Runtime ERROR

    The output of self.fuse_layers[i][j](out[j]) is [1, 40, 256, 455] (i=0, j=1), but y is [1, 40, 256, 456].

    I found this error when training on my own dataset. How can I solve it?

    opened by Chic-J 0
  • lower mAP

    Hello, I converted the config .py file into a YAML file and used the HRNet / HigherHRNet framework code for training. I found that the mAP on the COCO validation dataset was only about 0.51.

    2022-08-22 15:25:59,519 Epoch: [179][0/2341] Time 3.316s (3.316s) Speed 19.3 samples/s Data 2.263s (2.263s) Loss 0.00042 (0.00042) Accuracy 0.751 (0.751)
    2022-08-22 15:30:13,151 Epoch: [179][300/2341] Time 0.813s (0.854s) Speed 78.8 samples/s Data 0.000s (0.019s) Loss 0.00032 (0.00038) Accuracy 0.803 (0.748)
    2022-08-22 15:34:33,721 Epoch: [179][600/2341] Time 0.813s (0.861s) Speed 78.7 samples/s Data 0.000s (0.014s) Loss 0.00039 (0.00038) Accuracy 0.725 (0.747)
    2022-08-22 15:42:29,909 Epoch: [179][900/2341] Time 1.648s (1.103s) Speed 38.8 samples/s Data 0.000s (0.012s) Loss 0.00035 (0.00038) Accuracy 0.737 (0.746)
    2022-08-22 15:50:49,289 Epoch: [179][1200/2341] Time 1.665s (1.243s) Speed 38.4 samples/s Data 0.000s (0.013s) Loss 0.00035 (0.00038) Accuracy 0.756 (0.747)
    2022-08-22 15:59:08,989 Epoch: [179][1500/2341] Time 1.639s (1.328s) Speed 39.1 samples/s Data 0.000s (0.013s) Loss 0.00035 (0.00038) Accuracy 0.775 (0.747)
    2022-08-22 16:07:28,549 Epoch: [179][1800/2341] Time 1.668s (1.384s) Speed 38.4 samples/s Data 0.000s (0.013s) Loss 0.00041 (0.00038) Accuracy 0.752 (0.748)
    2022-08-22 16:15:47,927 Epoch: [179][2100/2341] Time 1.674s (1.424s) Speed 38.2 samples/s Data 0.000s (0.012s) Loss 0.00033 (0.00038) Accuracy 0.785 (0.748)
    2022-08-22 16:22:31,716 Test: [0/199] Time 1.750 (1.750) Loss 0.0004 (0.0004) Accuracy 0.816 (0.816)
    2022-08-22 16:24:33,818 => writing results json to LiteHRNet_w18_output/coco/HigherLiteHRNet/LiteHRNet_w18_256x256_coco_correct_lr1e-3/results/keypoints_val2017_results_0.json
    2022-08-22 16:24:44,456 | Arch | AP | Ap .5 | AP .75 | AP (M) | AP (L) | AR | AR .5 | AR .75 | AR (M) | AR (L) |
    2022-08-22 16:24:44,457 |---|---|---|---|---|---|---|---|---|---|---|
    2022-08-22 16:24:44,457 | HigherLiteHRNet | 0.511 | 0.807 | 0.544 | 0.501 | 0.530 | 0.557 | 0.830 | 0.598 | 0.539 | 0.583 |

    opened by JWSunny 1
  • Not able to download the test jsons?

    When I tried to download the converted MPII JSON files, the following error occurred:

    This XML file does not appear to have any style information associated with it. The document tree is shown below.
    <Error>
    <Code>AccessDenied</Code>
    <Message>Access denied by bucket policy.</Message>
    <RequestId>62E03B71DFFFCE3236E49184</RequestId>
    <HostId>openmmlab.oss-cn-hangzhou.aliyuncs.com</HostId>
    <Bucket>openmmlab</Bucket>
    <User>nosuchuser</User>
    </Error>
    

    Can you please share the same? Thanks

    opened by sourabhyadav 0
  • Our new study on lightweight high resolution networks: Dite-HRNet

    Hi, thank you so much for your excellent work. Based on your research on lightweight high-resolution networks, we have done some further exploratory studies. Our paper was accepted by IJCAI-ECAI 2022. This is our repository: https://github.com/ZiyiZhang27/Dite-HRNet. You are most welcome to give your valuable comments.

    opened by ZiyiZhang27 1
  • train mpii failed: MMDistributedDataParallel object has no attribute _sync_params

    Traceback (most recent call last):
      File "./tools/train.py", line 166, in <module>
        main()
      File "./tools/train.py", line 155, in main
        train_model(
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/mmpose/apis/train.py", line 200, in train_model
        runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
        epoch_runner(data_loaders[i], **kwargs)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
        self.run_iter(data_batch, train_mode=True, **kwargs)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
        outputs = self.model.train_step(data_batch, self.optimizer,
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 48, in train_step
        self._sync_params()
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'MMDistributedDataParallel' object has no attribute '_sync_params'
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 19468) of binary: /mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/bin/python
    Traceback (most recent call last):
      File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
        main()
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
        launch(args)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
        run(args)
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
        elastic_launch(
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/mnt/DATA/AI/project/Lite-HRNet-hrnet/venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
        raise ChildFailedError(
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    ./tools/train.py FAILED

    How do I fix this?

    opened by xukefang 0