# HeadPoseEstimation-WHENet-yolov4-onnx-openvino
WHENet head pose estimation with a YOLOv4/YOLOv4-tiny-3L head detector, converted to ONNX, OpenVINO, TFLite, TensorRT, EdgeTPU, CoreML, and TFJS.
## 1. Usage
```bash
$ git clone https://github.com/PINTO0309/HeadPoseEstimation-WHENet-yolov4-onnx-openvino
$ cd HeadPoseEstimation-WHENet-yolov4-onnx-openvino
$ wget https://github.com/PINTO0309/HeadPoseEstimation-WHENet-yolov4-onnx-openvino/releases/download/v1.0.0/saved_model_224x224.tar.gz
$ tar -zxvf saved_model_224x224.tar.gz && rm saved_model_224x224.tar.gz
$ python3 demo_video.py
```
```
usage: demo_video.py \
  [-h] \
  [--whenet_mode {onnx,openvino}] \
  [--device DEVICE] \
  [--height_width HEIGHT_WIDTH]

optional arguments:
  -h, --help
        show this help message and exit
  --whenet_mode {onnx,openvino}
        Choose whether to infer WHENet with ONNX or OpenVINO. Default: onnx
  --device DEVICE
        Path of the mp4 file or device number of the USB camera. Default: 0
  --height_width HEIGHT_WIDTH
        {H}x{W} Default: 480x640
```
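For example, the options above can be combined as follows. The file name `movie.mp4` and the 720x1280 resolution are illustrative values, not files shipped with this repository:

```bash
# Infer WHENet via OpenVINO on a local mp4 file, processed at 720x1280 (HxW)
$ python3 demo_video.py --whenet_mode openvino --device movie.mp4 --height_width 720x1280

# Infer WHENet via ONNX (the default) on USB camera 0 at the default 480x640
$ python3 demo_video.py --whenet_mode onnx --device 0
```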
## 2. Reference
- https://github.com/Ascend-Research/HeadPoseEstimation-WHENet
- https://github.com/AlexeyAB/darknet
- https://github.com/linghu8812/tensorrt_inference
- https://github.com/jkjung-avt/yolov4_crowdhuman
- https://github.com/PINTO0309/PINTO_model_zoo
- https://github.com/PINTO0309/openvino2tensorflow
- https://zenn.dev/pinto0309/scraps/1849b6909db13b