A light and fast one-class detection framework for edge devices. We provide a face detector, head detector, pedestrian detector, vehicle detector, and more.

Overview

A Light and Fast Face Detector for Edge Devices

Big News: LFD, a major update of LFFD, is now released (2021.03.09). It is strongly recommended to use LFD instead!!! Visit the LFD repo here. This repo will no longer be maintained.

Recent Update

  • 2019.07.25 This repo first went online. Face detection code and trained models are released.
  • 2019.08.15 This repo is formally released. Any advice and error reports are sincerely welcome.
  • 2019.08.22 face_detection: latency evaluation on TX2 is added.
  • 2019.08.25 face_detection: RetinaFace-MobileNet-0.25 is added for comparison (both accuracy and latency).
  • 2019.09.09 LFFD is ported to NCNN (link) and MNN (link) by SyGoing, great thanks to SyGoing.
  • 2019.09.10 face_detection: important bug fix: the vibration offset must have the shift subtracted in the data iterator. This bug may result in lower accuracy, inaccurate bbox prediction and bbox vibration in the test phase. We will upgrade v1 and v2 as soon as possible (they should be more accurate and more stable).
  • 2019.09.17 face_detection: model v2 is upgraded! After fixing the bug, we fine-tuned the old v2 model. The accuracy on WIDER FACE is improved significantly! Please try the new v2.
  • 2019.09.18 pedestrian_detection: a preview version of model v1 for the Caltech Pedestrian Dataset is released.
  • 2019.09.23 head_detection: model v1 for the brainwash dataset is released.
  • 2019.10.02 license_plate_detection: model v1 for the CCPD dataset is released. (The accuracy is very high and the latency is very low! Have a try.)
  • 2019.10.02 Currently, we have provided some application-oriented detectors. Next, we will put most of our energy into a next-generation framework for single-class detection. Any feedback is welcome.
  • 2019.10.16 face_detection: a preview of the PyTorch version is ready (link). Any feedback is welcome.
  • 2019.10.16 Tips: data preparation is important; invalid values of (x, y, w, h) may introduce NaNs in training (see the sanity-check sketch after this list). We also trained models with convs followed by BNs, but found that convergence was not stable and could not reach a good point.
  • 2019.11.08 face_detection: a Caffe version of LFFD is provided by vicwer (great thanks). Those familiar with Caffe can navigate to /face_detection/caffemodel for details.
  • 2020.03.27 license_plate_detection: model v1_small for the CCPD dataset is released. v1_small has far fewer parameters than v1, hence it is much faster. The AP of v1_small is 0.982 (vs. 0.989 for v1). Please check README.md. Besides, a commercial-ready license plate recognition repo which adopted LFFD as the detector is highly recommended!
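
Following the 2019.10.16 tip above, a minimal sanity check for (x, y, w, h) annotations can help avoid NaN losses. This is an illustrative sketch, not code from this repo; the image sizes and boxes below are made up:

```python
# Sanity-check (x, y, w, h) ground-truth boxes before training: invalid boxes
# (NaNs, non-positive sizes, out-of-image coordinates) can introduce NaN losses.
import math

def is_valid_bbox(x, y, w, h, img_w, img_h):
    if any(math.isnan(float(v)) for v in (x, y, w, h)):
        return False
    if w <= 0 or h <= 0:
        return False
    return x >= 0 and y >= 0 and x + w <= img_w and y + h <= img_h

assert is_valid_bbox(10, 20, 50, 60, 640, 480)
assert not is_valid_bbox(10, 20, -5, 60, 640, 480)  # negative width -> reject
```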

Introduction

This repo releases the source code of the paper "LFFD: A Light and Fast Face Detector for Edge Devices". The paper presents a light and fast face detector (LFFD) for edge devices. LFFD carefully balances accuracy and latency: it achieves excellent accuracy with a small model size and fast inference speed. Understanding the essence of the receptive field makes the detection network interpretable.

In practice, we have deployed it on cloud and edge devices (such as the NVIDIA Jetson series and ARM-based embedded systems). The overall performance of LFFD is robust enough to support our applications.

In fact, our method is a general detection framework applicable to one-class detection, such as face detection, pedestrian detection, head detection, vehicle detection and so on. In general, an object class whose average ratio of the longer side to the shorter side is less than 5 is suitable for this framework, as the quick check below illustrates.
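
As a quick illustration of this rule of thumb (not code from this repo; the box list is made up), one can estimate the average aspect ratio from a dataset's annotations:

```python
# Estimate whether an object class fits the framework: the mean ratio of the
# longer side to the shorter side of its ground-truth boxes should be < 5.
def mean_aspect_ratio(box_sizes):
    ratios = [max(w, h) / min(w, h) for w, h in box_sizes if min(w, h) > 0]
    return sum(ratios) / len(ratios)

box_sizes = [(32, 40), (14, 18), (120, 96)]  # hypothetical (w, h) pairs
print(mean_aspect_ratio(box_sizes) < 5)      # True -> suitable for this framework
```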

Several practical advantages:

  1. Large scale coverage: easy to extend to even larger scales by adding more layers, with little additional latency.
  2. Detects small objects (as small as 10 pixels) in images of extremely large resolution (8K or even larger) in a single inference.
  3. A simple backbone built from very common operators makes it easy to deploy anywhere.

Accuracy and Latency

We train LFFD on the train set of the WIDER FACE benchmark. All methods are evaluated on the val/test sets under the SIO schema (please refer to the paper for details).

  • Accuracy on the val set of WIDER FACE (values in parentheses are results from the original papers):

| Method        | Easy Set      | Medium Set    | Hard Set      |
|---------------|---------------|---------------|---------------|
| DSFD          | 0.949 (0.966) | 0.936 (0.957) | 0.850 (0.904) |
| PyramidBox    | 0.937 (0.961) | 0.927 (0.950) | 0.867 (0.889) |
| S3FD          | 0.923 (0.937) | 0.907 (0.924) | 0.822 (0.852) |
| SSH           | 0.921 (0.931) | 0.907 (0.921) | 0.702 (0.845) |
| FaceBoxes     | 0.840         | 0.766         | 0.395         |
| FaceBoxes3.2× | 0.798         | 0.802         | 0.715         |
| LFFD          | 0.910         | 0.881         | 0.780         |
  • Accuracy on the test set of WIDER FACE (values in parentheses are results from the original papers):

| Method        | Easy Set      | Medium Set    | Hard Set      |
|---------------|---------------|---------------|---------------|
| DSFD          | 0.947 (0.960) | 0.934 (0.953) | 0.845 (0.900) |
| PyramidBox    | 0.926 (0.956) | 0.920 (0.946) | 0.862 (0.887) |
| S3FD          | 0.917 (0.928) | 0.904 (0.913) | 0.821 (0.840) |
| SSH           | 0.919 (0.927) | 0.903 (0.915) | 0.705 (0.844) |
| FaceBoxes     | 0.839         | 0.763         | 0.396         |
| FaceBoxes3.2× | 0.791         | 0.794         | 0.715         |
| LFFD          | 0.896         | 0.865         | 0.770         |
  • Accuracy on FDDB:

| Method        | Disc ROC Curve Score |
|---------------|----------------------|
| DSFD          | 0.984                |
| PyramidBox    | 0.982                |
| S3FD          | 0.981                |
| SSH           | 0.977                |
| FaceBoxes3.2× | 0.905                |
| FaceBoxes     | 0.960                |
| LFFD          | 0.973                |

In the paper, three hardware platforms are used for latency evaluation: NVIDIA GTX TITAN Xp, NVIDIA TX2 and Raspberry Pi 3 Model B+ (ARM A53).

We report the latency of inference only (for NVIDIA hardware, data transfer is included), excluding pre-processing and post-processing. The batch size is set to 1 for all evaluations; a minimal timing sketch of this protocol is shown below.
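
For reference, latency under this protocol can be measured with a simple timing loop. The sketch below assumes MXNet 1.x; the file names and the input name "data" are placeholders, not this repo's actual scripts:

```python
# Minimal latency measurement: batch size 1, inference only, warm-up excluded.
import time
import mxnet as mx
import numpy as np

ctx = mx.gpu(0)                                # or mx.cpu() on CPU-only machines
sym = mx.sym.load('symbol_deploy.json')        # placeholder symbol file
mod = mx.mod.Module(symbol=sym, data_names=['data'], label_names=None, context=ctx)
shape = (1, 3, 480, 640)                       # batch size 1, as in the paper
mod.bind(data_shapes=[('data', shape)], for_training=False)
mod.init_params()                              # or load trained weights instead

batch = mx.io.DataBatch(data=[mx.nd.array(np.random.rand(*shape), ctx=ctx)])
for _ in range(10):                            # warm-up runs, not timed
    mod.forward(batch, is_train=False)
    mx.nd.waitall()

runs = 100
start = time.time()
for _ in range(runs):
    mod.forward(batch, is_train=False)
    mx.nd.waitall()                            # wait for async GPU work to finish
print('%.2f ms per inference' % ((time.time() - start) / runs * 1000.0))
```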

  • Latency on NVIDIA GTX TITAN Xp (MXNet + CUDA 9.0 + cuDNN 7.1):

| Resolution →  | 640×480              | 1280×720             | 1920×1080            | 3840×2160             |
|---------------|----------------------|----------------------|----------------------|-----------------------|
| DSFD          | 78.08 ms (12.81 FPS) | 187.78 ms (5.33 FPS) | 392.82 ms (2.55 FPS) | 1562.50 ms (0.64 FPS) |
| PyramidBox    | 50.51 ms (19.08 FPS) | 143.34 ms (6.98 FPS) | 331.93 ms (3.01 FPS) | 1344.07 ms (0.74 FPS) |
| S3FD          | 21.75 ms (45.95 FPS) | 55.73 ms (17.94 FPS) | 119.53 ms (8.37 FPS) | 471.31 ms (2.21 FPS)  |
| SSH           | 22.44 ms (44.47 FPS) | 55.29 ms (18.09 FPS) | 118.43 ms (8.44 FPS) | 463.10 ms (2.16 FPS)  |
| FaceBoxes3.2× | 6.80 ms (147.00 FPS) | 12.96 ms (77.19 FPS) | 25.37 ms (39.41 FPS) | 111.98 ms (8.93 FPS)  |
| LFFD          | 7.60 ms (131.40 FPS) | 16.37 ms (61.07 FPS) | 31.27 ms (31.98 FPS) | 87.79 ms (11.39 FPS)  |
  • Latency on NVIDIA TX2 (MXNet + CUDA 9.0 + cuDNN 7.1) presented in the paper:

| Resolution →  | 160×120              | 320×240              | 640×480              |
|---------------|----------------------|----------------------|----------------------|
| FaceBoxes3.2× | 11.20 ms (89.29 FPS) | 19.62 ms (50.97 FPS) | 72.74 ms (13.75 FPS) |
| LFFD          | 7.30 ms (136.99 FPS) | 19.64 ms (50.92 FPS) | 64.70 ms (15.46 FPS) |
  • Latency on Raspberry Pi 3 Model B+ (ncnn) presented in the paper:

| Resolution →  | 160×120              | 320×240              | 640×480               |
|---------------|----------------------|----------------------|-----------------------|
| FaceBoxes3.2× | 167.20 ms (5.98 FPS) | 686.19 ms (1.46 FPS) | 3232.26 ms (0.31 FPS) |
| LFFD          | 118.45 ms (8.44 FPS) | 409.19 ms (2.44 FPS) | 4114.15 ms (0.24 FPS) |

On the NVIDIA platform, TensorRT is the best choice for inference, so we conducted additional latency evaluations using TensorRT (the latency drops dramatically!!!). For ARM-based platforms, we plan to use MNN and Tengine for latency evaluation. Details can be found in the sub-project face_detection.
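
The TensorRT path starts from an ONNX export of the MXNet model. Below is a hedged sketch of that step, using the mxnet.contrib.onnx API that also appears in the issue reports further down; the file names are placeholders, and the actual scripts live in the face_detection sub-project:

```python
# Export the MXNet symbol/params to ONNX as a first step toward a TensorRT engine.
import numpy
from mxnet.contrib import onnx as onnx_mxnet

input_shape = (1, 3, 480, 640)                             # NCHW, batch size 1
onnx_path = onnx_mxnet.export_model('symbol_deploy.json',  # placeholder symbol file
                                    'model.params',        # placeholder weights file
                                    [input_shape],
                                    numpy.float32,
                                    'temp.onnx',
                                    verbose=True)
print('ONNX model written to', onnx_path)
```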

Getting Started

We implement the proposed method using the MXNet Module API.

Prerequisites (global)

  • Python>=3.5
  • numpy>=1.16 (lower versions should work as well, but are not tested)
  • MXNet>=1.4.1 (install guide)
  • cv2==3.x (pip3 install opencv-python==3.4.5.20; other versions should work as well, but are not tested)

Tips:

  • Use MXNet built with cuDNN.
  • Build numpy from source with OpenBLAS; this improves training efficiency.
  • Make sure cv2 links against libjpeg-turbo, not libjpeg; this improves JPEG decoding efficiency (see the check below).
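
A quick way to check which JPEG library cv2 was built against:

```python
# Look for "libjpeg-turbo" in cv2's build information (Media I/O section).
import cv2

jpeg_lines = [line.strip() for line in cv2.getBuildInformation().splitlines()
              if 'JPEG' in line]
print(jpeg_lines)
```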

Sub-directory description

  • face_detection contains the code for training, evaluation and inference of LFFD — the main content of this repo. Trained models of different versions are provided for off-the-shelf deployment.
  • head_detection contains the trained models for head detection, obtained with the proposed general one-class detection framework.
  • pedestrian_detection contains the trained models for pedestrian detection, obtained with the proposed general one-class detection framework.
  • vehicle_detection contains the trained models for vehicle detection, obtained with the proposed general one-class detection framework.
  • ChasingTrainFramework_GeneralOneClassDetection is a simple wrapper based on the MXNet Module API for general one-class detection.

Installation

  1. Download the repo:
     git clone https://github.com/YonghaoHe/A-Light-and-Fast-Face-Detector-for-Edge-Devices.git
  2. Refer to the corresponding sub-project for detailed usage; a hedged inference sketch is shown below.
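
For orientation, here is a hedged sketch of how the face detector is typically invoked, based on face_detection/accuracy_evaluation/predict.py and the v2 configuration files referenced elsewhere in this README. The import layout, constructor arguments and predict() signature are assumptions and may differ between versions:

```python
# Hypothetical inference sketch; run from within the face_detection sub-project.
import cv2
import mxnet
from config_farm import configuration_10_320_20L_5scales_v2 as cfg
from predict import Predict  # face_detection/accuracy_evaluation/predict.py

predictor = Predict(
    mxnet=mxnet,
    symbol_file_path='symbol_farm/symbol_10_320_20L_5scales_v2_deploy.json',
    model_file_path='saved_model/configuration_10_320_20L_5scales_v2/'
                    'train_10_320_20L_5scales_v2_iter_1000000.params',
    ctx=mxnet.gpu(0),  # the released models are GPU-oriented
    receptive_field_list=cfg.param_receptive_field_list,
    receptive_field_stride=cfg.param_receptive_field_stride,
    bbox_small_list=cfg.param_bbox_small_list,
    bbox_large_list=cfg.param_bbox_large_list,
    receptive_field_center_start=cfg.param_receptive_field_center_start,
    num_output_scales=cfg.param_num_output_scales)

image = cv2.imread('test_image.jpg')  # placeholder image path
bboxes = predictor.predict(image, resize_scale=1, score_threshold=0.6,
                           NMS_threshold=0.4)
print(bboxes)
```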

Citation

If you benefit from our work in your research or product, please kindly cite the paper:

@inproceedings{LFFD,
  title={LFFD: A Light and Fast Face Detector for Edge Devices},
  author={He, Yonghao and Xu, Dezhong and Wu, Lifang and Jian, Meng and Xiang, Shiming and Pan, Chunhong},
  booktitle={arXiv:1904.10633},
  year={2019}
}

To Do List

Contact

Yonghao He

E-mails: [email protected] / [email protected]

If you are interested in this work, any innovative contributions are welcome!!!

Internships are open at NLPR, CASIA all year round. Send me your resume!

Comments
  • How about multi-class detection?

    Hi @YonghaoHe,

    Very nice work and thank you for sharing your code. Do you think this framework could be extended to general multi-class detection case?

    opened by leogogogo 13
  • About the receptive field

    ```python
    # feature map size for each scale
    param_feature_map_size_list = [159, 159, 79, 79, 39, 19, 19, 19]

    # bbox lower bound for each scale
    param_bbox_small_list = [10, 15, 20, 40, 70, 110, 250, 400]
    assert len(param_bbox_small_list) == param_num_output_scales

    # bbox upper bound for each scale
    param_bbox_large_list = [15, 20, 40, 70, 110, 250, 400, 560]
    assert len(param_bbox_large_list) == param_num_output_scales

    # bbox gray lower bound for each scale
    param_bbox_small_gray_list = [math.floor(v * 0.9) for v in param_bbox_small_list]

    # bbox gray upper bound for each scale
    param_bbox_large_gray_list = [math.ceil(v * 1.1) for v in param_bbox_large_list]

    # the RF size of each scale used for normalization; here we use param_bbox_large_list for better regression
    param_receptive_field_list = param_bbox_large_list

    # RF stride for each scale
    param_receptive_field_stride = [4, 4, 8, 8, 16, 32, 32, 32]

    # the start location of the first RF of each scale
    param_receptive_field_center_start = [3, 3, 7, 7, 15, 31, 31, 31]
    ```

    Hello! First of all, thanks for open-sourcing this. After reading the contents of config_farm and data_iterator_farm in face_detection, I have two questions about the receptive field: 1. In actual training, the RF size of each of the 8 branches is not derived layer by layer with the receptive-field formula; instead, the upper bound of each scale is used directly as that branch's RF size. Is this out of consideration for the ERF? 2. How are the starting RF centers [3, 3, 7, 7, 15, 31, 31, 31] obtained? Computing them with the commonly cited formula center_out = center_in + ((kernel - 1)/2 + p) * (accumulated stride before the current layer) gives much larger values.

    I hope you can kindly advise. Thanks!
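
    For reference, in the quoted config the start centers are exactly stride - 1, so the i-th RF center along an axis on a given branch sits at center_start + i * stride in image coordinates. A minimal sketch of that bookkeeping (an illustration derived from the quoted config, not the repo's actual code):

    ```python
    # RF-center bookkeeping implied by the quoted config:
    # center(i) = center_start + i * stride, with center_start = stride - 1 per branch.
    strides = [4, 4, 8, 8, 16, 32, 32, 32]
    starts = [3, 3, 7, 7, 15, 31, 31, 31]
    assert all(s - 1 == c for s, c in zip(strides, starts))

    def rf_center(branch, i):
        """Image-space coordinate (x or y) of the i-th RF center on a branch."""
        return starts[branch] + i * strides[branch]

    print([rf_center(0, i) for i in range(4)])  # -> [3, 7, 11, 15]
    ```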

    opened by dongfangduoshou123 10
  • box loss

    Hi Yonghao, I reimplemented your work in PyTorch, but when I run my code the training box loss is very small, i.e., around 5e-02 at the beginning, so I think the values are wrong. I just want to know the common range for the box loss and the conf loss.

    opened by QingL0218 8
  • TensorRT conversion failure

    Traceback (most recent call last):
      File "to_onnx.py", line 38, in <module>
        generate_onnx_file()
      File "to_onnx.py", line 28, in generate_onnx_file
        onnx_mxnet.export_model(net_symbol, net_params, [input_shape], numpy.float32, onnx_path, verbose=True)
      File "/usr/local/lib/python3.5/dist-packages/mxnet/contrib/onnx/mx2onnx/export_model.py", line 87, in export_model
        verbose=verbose)
      File "/usr/local/lib/python3.5/dist-packages/mxnet/contrib/onnx/mx2onnx/export_onnx.py", line 309, in create_onnx_graph_proto
        checker.check_graph(graph)
      File "/usr/local/lib/python3.5/dist-packages/onnx/checker.py", line 52, in checker
        proto.SerializeToString(), ctx)
    onnx.onnx_cpp2py_export.checker.ValidationError: Node (slice_axis20) has input size 1 not in range [min=3, max=5].
    
    ==> Context: Bad node spec: input: "softmax0" output: "slice_axis20" name: "slice_axis20" op_type: "Slice" attribute { name: "axes" ints: 1 type: INTS } attribute { name: "ends" ints: 1 type: INTS } attribute { name: "starts" ints: 0 type: INTS }
    
    
    opened by deimsdeutsch 6
  • A question about training

    I'd like to ask: if I want to train on 2K images where the targets are only about 15×15 px, how should I adjust the training parameters? I tried training with v2, changing bbox_small_list to [7, 12, 17] and large_list to [12, 17, 22], and modifying feature_map_size_list to the computed sizes, but the resulting model detects nothing at all. Where could the problem be?

    opened by afterimagex 6
  • The code needs an update!!!

    On the NX, only the latest JetPack is available, and since its TensorRT version is 7.0.0.11, predict_tensorrt.py in face_detection no longer runs.

    Error:
    INFO:root:Init engine from ONNX file.
    INFO:root:Create TensorRT builder.
    INFO:root:Create TensorRT network.
    INFO:root:Create TensorRT ONNX parser.
    ERROR:root:Errors occur while parsing the ONNX file!
    Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."

    opened by CallmeZhangChenchen 4
  • face detection model failed in the CPU environment

    Thank you for your good work with face detection.

    It is working well in the GPU environment but failed in the CPU case.

    System: Ubuntu 18.04.5 LTS

    Mxnet: mxnet-mkl==1.6.0

    Error information: A lot of bounding boxes are detected on the test image

    My changes: In the folder: "LFFD-A-Light-and-Fast-Face-Detector-for-Edge-Devices/face_detection/accuracy_evaluation", I changed the "ctx=mxnet.gpu(0)" to "ctx=mxnet.cpu(0)" in the file: "predict.py"

    See the attached image (test_result) for more information.

    opened by tairenchen 3
  • Error when running python predict_tensorrt.py

    INFO:root:Init engine from ONNX file.
    INFO:root:Create TensorRT builder.
    INFO:root:Create TensorRT network.
    INFO:root:Create TensorRT ONNX parser.
    ERROR:root:Errors occur while parsing the ONNX file!
    Assertion failed: tensor->getDimensions().nbDims == combined->getDimensions().nbDims

    Where did this go wrong?

    opened by 394781865 3
  • face_detection: error when running inference_speed_evaluation/inference_speed_eval.py

    INFO:root:Convert mxnet symbol to onnx...
    INFO:root:Input shape of the model [(1, 3, 480, 640)]
    INFO:root:Exported ONNX file temp.onnx saved to disk
    INFO:root:Parsing onnx for trt network...
    ERROR:root:Errors occur while parsing the onnx file!
    ERROR:root:Error 0: Assertion failed: tensor->getDimensions().nbDims == combined->getDimensions().nbDims

    opened by lemonyhw 3
  • Nan loss while training face detection model on custom dataset

    First of all, thank you for this amazing work. However, the training procedure and requirements for face detection lack some clarity. I will list the errors I faced during training.

    • I tried to do transfer learning using the pretrained v1 model, but it gave a NaN loss.
    • MXNet used: mxnet-cu100==1.5.0
    • As training with the pretrained model failed, I decided to train from scratch.
    • I have verified that the data does not contain negative bounding boxes.
    • After around 100,000 (1 lakh) iterations, the error RuntimeWarning: invalid value encountered in multiply loss_score = numpy.sum(pred_score * mask_score) was printed, and it started producing NaN losses again.
    • During training, both losses had values of more than 1000.

    @YonghaoHe Any idea on this issue? Can anyone list out the correct procedure to follow while training on the custom dataset?

    opened by aiswaryasukumar4 2
  • Trying to convert to ONNX - converter fails with slice error

    I am trying to convert the head detection model from MXNET to ONNX using the steps mentioned in https://github.com/onnx/tutorials/blob/master/tutorials/MXNetONNXExport.ipynb

    My setup is as follows

    Requirement already satisfied: onnx in ./kenv/lib/python3.6/site-packages (1.6.0)
    Requirement already satisfied: six in /home/username/.local/lib/python3.6/site-packages (from onnx) (1.13.0)
    Requirement already satisfied: protobuf in ./kenv/lib/python3.6/site-packages (from onnx) (3.11.1)
    Requirement already satisfied: typing-extensions>=3.6.2.1 in ./kenv/lib/python3.6/site-packages (from onnx) (3.7.4.1)
    Requirement already satisfied: numpy in /home/username/.local/lib/python3.6/site-packages (from onnx) (1.17.4)
    Requirement already satisfied: setuptools in ./kenv/lib/python3.6/site-packages (from protobuf->onnx) (42.0.2)

    When trying to use the code, I get the following error, any workarounds?

    $ python mxconverter.py 
    INFO:root:Converting json and weight file to sym and params
    [20:53:26] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.5.0. Attempting to upgrade...
    [20:53:26] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
    Traceback (most recent call last):
      File "mxconverter.py", line 17, in <module>
        converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file)
      File "/home/username/tensorflowpython/kenv/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/export_model.py", line 83, in export_model
        verbose=verbose)
      File "/home/username/tensorflowpython/kenv/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/export_onnx.py", line 312, in create_onnx_graph_proto
        checker.check_graph(graph)
      File "/home/username/tensorflowpython/kenv/lib/python3.6/site-packages/onnx/checker.py", line 53, in checker
        proto.SerializeToString(), ctx)
    onnx.onnx_cpp2py_export.checker.ValidationError: Node (slice_axis16) has input size 1 not in range [min=3, max=5].
    
    ==> Context: Bad node spec: input: "softmax0" output: "slice_axis16" name: "slice_axis16" op_type: "Slice" attribute { name: "axes" ints: 1 type: INTS } attribute { name: "ends" ints: 1 type: INTS } attribute { name: "starts" ints: 0 type: INTS }
    
    opened by krishnak 2
  • Please update the repo for Vehicle Detection

    Hi,

    Can you please add the remaining part of the code for Vehicle Detection? This would make this repo complete.

    Thanks. Awesome work buddy. Keep it up.

    opened by dexception 6
  • Difference on accuracy between your results and my reproduction

    Hi, I have tried to reproduce your results of LFFD v1 on the WIDER_FACE_val dataset. The training data I used is produced by your method:

    1. Input the WIDER FACE train dataset (downloaded from the official website);
    2. Generate the train_list.txt file by reformat_WIDERFACE.py;
    3. Generate the *.pkl file by pickle_provider.py.

    Then run configuration_10_560_25L_8scales_v1.py (specifying the *.pkl file generated just now). The only parameter I modified is param_num_thread_train_dataiter at line 56 (changed from 4 to 10). After 1,400,000 iterations, I evaluated on the WIDER_FACE_val dataset using the standard tools released by the WIDER FACE benchmark. However, my accuracy results are quite a bit lower than yours, which are obtained with the face_detection/saved_model/configuration_10_560_25L_8scales_v1/train_10_560_25L_8scales_v1_iter_1400000.params you provided in the project. Here are the details:

    |       | easy  | medium | hard  |
    |-------|-------|--------|-------|
    | mine  | 0.885 | 0.853  | 0.681 |
    | yours | 0.896 | 0.861  | 0.710 |
    | diff  | -1.2% | -0.9%  | -4.1% |

    I really cannot figure it out. Could you give me some help? Thanks.

    opened by on-your-way 3
  • Error in validation over WIDER_val

    I have MXNet 1.5.0, CUDA 10.2, OpenCV 4.5.1 (installing OpenCV 3.x gives me #107) and Python 3.6.13. When I run evaluation_on_widerface.py in accuracy_evaluation, I get the following output:

    ----> load symbol file: ../symbol_farm/symbol_10_320_20L_5scales_v2_deploy.json
    ----> load model file: ../saved_model/configuration_10_320_20L_5scales_v2/train_10_320_20L_5scales_v2_iter_1000000.params
    Traceback (most recent call last):
      File "/home/danush/anaconda3/envs/lffd/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1623, in simple_bind
      File "/home/danush/anaconda3/envs/lffd/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
        raise MXNetError(py_str(_LIB.MXGetLastError()))
    mxnet.base.MXNetError: [22:08:43] /tmp/build/80754af9/libmxnet_1564766659613/work/src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage
    (long C++ stack trace omitted)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "evaluation_on_widerface.py", line 24, in
        num_output_scales=cfg.param_num_output_scales)
      File "/media/danush/b17c9432-d479-4633-aa80-3c70cb68a206/danush/Documents/lffd_conda/face_detection/accuracy_evaluation/predict.py", line 102, in init
        self.__load_model()
      File "/media/danush/b17c9432-d479-4633-aa80-3c70cb68a206/danush/Documents/lffd_conda/face_detection/accuracy_evaluation/predict.py", line 122, in __load_model
      File "/home/danush/anaconda3/envs/lffd/lib/python3.6/site-packages/mxnet/module/module.py", line 429, in bind
      File "/home/danush/anaconda3/envs/lffd/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1629, in simple_bind
        raise RuntimeError(error_msg)
    RuntimeError: simple_bind error. Arguments: data: (1, 3, 480, 640)
    [22:08:43] /tmp/build/80754af9/libmxnet_1564766659613/work/src/storage/storage.cc:119: Compile with USE_CUDA=1 to enable GPU usage
    (long C++ stack trace omitted)

    Please let me know what can I do, or if you need any other information.

    opened by CodexForster 0
  • Error in face detection code while testing over WIDER_val dataset

    I tried to evaluate the saved model on the WIDER_val dataset. I have downloaded the dataset from the official website and placed the folder in the data_provider_farm and changed the required paths in evaluation_on_widerface.py (line 27). But when I run the same python file, I get the following error:

    Traceback (most recent call last):
      File "evaluation_on_widerface.py", line 57, in
        fout.write('%d %d %d %d %.03f' % (math.floor(bbox[0]), math.floor(bbox[1]), math.ceil(bbox[2] - bbox[0]), math.ceil(bbox[3] - bbox[1]), bbox[4] if bbox[4] <= 1 else 1) + '\n')
    TypeError: must be real number, not tuple

    What am I doing wrong?

    opened by CodexForster 0