PFLD PyTorch Implementation

Overview

PFLD-pytorch

PyTorch implementation of PFLD: A Practical Facial Landmark Detector.

1. Install requirements

pip3 install -r requirements.txt

2. Datasets

  • WFLW Dataset Download

Wider Facial Landmarks in-the-wild (WFLW) is a newly proposed face dataset. It contains 10,000 faces (7,500 for training and 2,500 for testing) with 98 fully manually annotated landmarks.

  1. WFLW Training and Testing images [Google Drive] [Baidu Drive]
  2. WFLW Face Annotations
  3. Unzip the above two packages and put them in ./data/WFLW/
  4. Move Mirror98.txt to WFLW/WFLW_annotations
$ cd data 
$ python3 SetPreparation.py
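
After SetPreparation.py finishes, a quick sanity check on the generated file lists catches path problems early. A minimal sketch, assuming the script writes ./data/train_data/list.txt and ./data/test_data/list.txt (the latter matches the default path used by test.py):

# Hedged sanity check: confirm the prepared lists exist and inspect one line.
# The train_data/test_data paths are assumptions based on the default used by test.py.
from pathlib import Path

for split in ("train_data", "test_data"):
    list_file = Path("data") / split / "list.txt"
    if not list_file.exists():
        print(f"missing {list_file} - re-run SetPreparation.py")
        continue
    lines = list_file.read_text().splitlines()
    print(f"{list_file}: {len(lines)} samples, {len(lines[0].split())} fields in the first line")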

3. Training & testing

Training:

$ python3 train.py

To monitor training with TensorBoard, open a new terminal:

$ tensorboard  --logdir=./checkpoint/tensorboard/
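
For reference, the event files under ./checkpoint/tensorboard/ are produced by a SummaryWriter inside the training loop. A minimal logging sketch (the tag name and values are illustrative, not necessarily what train.py writes):

# Minimal TensorBoard logging sketch; the tag and loss values are placeholders.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("./checkpoint/tensorboard/")
for step, loss in enumerate([0.9, 0.7, 0.5]):  # placeholder loss values
    writer.add_scalar("train/loss", loss, step)
writer.close()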

Testing:

$ python3 test.py

4. Results

5. PyTorch -> ONNX -> ncnn

PyTorch -> ONNX

python3 pytorch2onnx.py
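
For reference, a hedged sketch of what such an export typically looks like. The module path, checkpoint key, and the use of onnx-simplifier to produce pfld-sim.onnx are assumptions inferred from the ncnn snippet below, not a transcript of pytorch2onnx.py:

# Sketch of a PFLD -> ONNX export; module path, checkpoint key and blob name are assumptions.
import torch
import onnx
from onnxsim import simplify            # pip install onnx-simplifier
from models.pfld import PFLDInference   # assumed location of the backbone in this repo

model = PFLDInference()
state = torch.load("checkpoint/snapshot/checkpoint.pth.tar", map_location="cpu")
model.load_state_dict(state["plfd_backbone"])  # checkpoint key is an assumption
model.eval()

dummy = torch.randn(1, 3, 112, 112)
torch.onnx.export(model, dummy, "pfld.onnx", input_names=["input_1"])

simplified, ok = simplify(onnx.load("pfld.onnx"))
assert ok, "onnx-simplifier check failed"
onnx.save(simplified, "pfld-sim.onnx")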

ONNX -> ncnn

How to build ncnn: https://github.com/Tencent/ncnn/wiki/how-to-build

cd ncnn/build/tools/onnx
./onnx2ncnn pfld-sim.onnx pfld-sim.param pfld-sim.bin

Now you can use pfld-sim.param and pfld-sim.bin in ncnn:

ncnn::Net pfld;
pfld.load_param("path/to/pfld-sim.param");
pfld.load_model("path/to/pfld-sim.bin");

cv::Mat img = cv::imread(imagepath, 1);
ncnn::Mat in = ncnn::Mat::from_pixels_resize(img.data, ncnn::Mat::PIXEL_BGR, img.cols, img.rows, 112, 112);
const float norm_vals[3] = {1/255.f, 1/255.f, 1/255.f};
in.substract_mean_normalize(0, norm_vals);

ncnn::Extractor ex = pfld.create_extractor();
ex.input("input_1", in);
ncnn::Mat out;
ex.extract("415", out);

6. References

PFLD: A Practical Facial Landmark Detector https://arxiv.org/pdf/1902.10859.pdf

TensorFlow Implementation: https://github.com/guoqiangqi/PFLD

Comments
  • Interpretation of the output

    When I use the model on a cropped face (112×112), it returns

    tensor([[ 2.5031, 0.5400, 2.5610, 1.6902, 2.3800, 1.4249, 2.3219, 1.9674, 2.2124, 2.0817, 1.9422, 2.3128, 1.8053, 2.5855, 1.4570, 2.7046, 1.0818, 3.6230, 0.6297, 3.7358, 0.9336, 3.7999, 0.6649, 4.2318, 0.8000, 4.3501, 0.9931, 5.3472, 0.8499, 5.6622, 0.9833, 5.7424, 0.6650, 6.2639, 1.5096, 6.6354, 1.8387, 6.0361, 2.4721, 5.5128, 3.3405, 5.9895, 3.7352, 5.5592, 4.0576, 5.5659, 4.9748, 5.4990, 4.6798, 5.3513, 5.2311, 4.9756, 5.6050, 4.5588, 5.4349, 4.1934, 5.5895, 3.8537, 5.6141, 3.2810, 5.7145, 2.4732, 6.3659, 2.4489, 5.7713, 1.4750, 1.7614, 0.7001, 1.7819, -0.0296, 1.6259, -0.0539, 1.9667, 0.1593, 2.0979, 0.6584, 2.1205, 0.5665, 1.8736, 0.6672, 2.0189, 0.6153, 1.7041, 0.2538, 1.9706, 0.5217, 2.1157, -0.1328, 3.3149, 0.1556, 3.4920, 0.5049, 4.1756, 1.3516, 3.9482, 0.8055, 2.9670, 0.5373, 1.9500, 0.2579, 1.6205, 0.7916, 1.7664, 0.8393, 1.6058, 1.0465, 0.9469, 0.4592, 1.0506, 0.5595, 0.8714, 1.6369, 1.1845, 1.7528, 1.4024, 1.7032, 1.5809, 1.6675, 2.1324, 2.1407, 1.5833, 0.9698, 1.3089, 0.5519, 1.7143, 0.8796, 1.6684, 0.9314, 2.1470, 1.0451, 2.0533, 1.2146, 1.7021, 1.2716, 1.0598, 1.2469, 2.6481, 1.2449, 2.3563, 0.7741, 2.9258, 0.9934, 3.3385, 1.0512, 2.9415, 1.2636, 2.4886, 1.6280, 2.8286, 0.8264, 2.2652, 1.6024, 0.7494, 2.4899, 0.8105, 2.0038, 1.2357, 1.9055, 1.4521, 1.6275, 1.1433, 1.3466, 2.2189, 2.1250, 2.5340, 2.9909, 2.0008, 2.6108, 1.4500, 3.5718, 0.6595, 3.2810, 0.6810, 3.4444, 0.3239, 2.8738, 0.3464, 2.4799, 0.8839, 2.2725, 1.8056, 2.2162, 2.5020, 1.7966, 2.4657, 2.7388, 1.7206, 2.9695, 0.7993, 3.0288, 0.4674, 3.1466, 1.3650, 0.8892, 2.6473, 1.1290]], grad_fn=)

    What does that mean? Those aren't the coordinates, right? And how can I draw dots on a picture from those results? Thanks

    opened by ElvishElvis 5
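
    For reference, a hedged sketch of how those 196 values can be drawn if they are interpreted as 98 (x, y) pairs normalized to the crop size; that interpretation is an assumption and worth verifying against the dataset/training code in this repo:

    # Assumption: pred holds 98 (x, y) pairs normalized to [0, 1] over the cropped face.
    import cv2

    def draw_landmarks(face_bgr, pred):
        """face_bgr: the cropped face image; pred: the (1, 196) tensor from the model."""
        pts = pred.detach().cpu().numpy().reshape(-1, 2)
        h, w = face_bgr.shape[:2]
        for x, y in pts:
            cv2.circle(face_bgr, (int(x * w), int(y * h)), 1, (0, 255, 0), -1)
        return face_bgr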
  • wrong loss penalize?

    original paper:

    For instance, if disabling the geometry and data imbalance functionalities, our loss degenerates to a simple ℓ2 loss

    your code:

            mat_ratio = torch.mean(attributes_w_n, axis=0)
            mat_ratio = torch.Tensor([
                1 / (x*256) if x > 0 else 1 for x in mat_ratio
            ]).cuda()
            weight_attribute = torch.sum(attributes_w_n.mul(mat_ratio), axis=1)
    
            l2_distant = torch.sum((landmark_gt - landmarks) * (landmark_gt - landmarks), axis=1)
            return torch.mean(weight_angle * weight_attribute * l2_distant), torch.mean(l2_distant)
    

    1 / (x*256) if x > 0 else 1 for x in mat_ratio

    You are decreasing weight_attribute when the geometry and data imbalance functionalities are present, but it should be the other way around

    opened by iperov 3
  • Euler angle predictions are inaccurate (angle is the prediction, angle_gt is the ground truth)

    angle = tensor([[-0.0873, 0.7688, -0.2588]], device='cuda:0')  angle_gt = tensor([[ 88.0343, -60.7853, -62.0728]], device='cuda:0')
    angle = tensor([[-0.1598, 0.5773, -0.4115]], device='cuda:0')  angle_gt = tensor([[ 66.4287, -53.0196, -49.0985]], device='cuda:0')
    angle = tensor([[-0.0510, 0.7076, -0.3719]], device='cuda:0')  angle_gt = tensor([[ 48.0916, -45.6873, -17.9782]], device='cuda:0')
    angle = tensor([[ 0.0888, 0.7797, -0.3578]], device='cuda:0')  angle_gt = tensor([[ 54.0094, -37.7794, -10.2092]], device='cuda:0')
    angle = tensor([[-0.0731, 0.8048, -0.0279]], device='cuda:0')  angle_gt = tensor([[15.3467, 3.7500, 10.6802]], device='cuda:0')
    angle = tensor([[ 0.0906, 0.6620, -0.2479]], device='cuda:0')  angle_gt = tensor([[ 79.5670, -63.5313, -39.0085]], device='cuda:0')

    opened by a361251388 2
  • core dump

    Thank you for your code. When I downloaded it and tried to run camera.py, some errors happened. First, my torch version is 1.1.0 and it says volatile was moved, use torch.no_grad() instead. Second, Illegal instruction (core dumped). Have you seen this problem? How can I fix it? Thank you

    opened by guojilei 2
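
    For the first warning, on torch >= 0.4 the volatile=True pattern is replaced by wrapping inference in torch.no_grad(); a minimal sketch (not the exact change needed in camera.py). The "Illegal instruction (core dumped)" part is usually a separate problem, often a wheel built for CPU instructions the machine lacks:

    # Replacing the deprecated Variable(x, volatile=True) pattern with torch.no_grad().
    import torch

    def predict(model, img_tensor):
        model.eval()
        with torch.no_grad():  # gradients are not tracked during inference
            return model(img_tensor)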
  • Bump opencv-python from 4.1.0.25 to 4.1.1.26

    Bumps opencv-python from 4.1.0.25 to 4.1.1.26.

    Release notes

    Sourced from opencv-python's releases.

    4.1.1.26

    OpenCV version 4.1.1.

    Changes:

    • FFmpeg has been compiled with https support on Linux builds #229
    • CI build logic related changes #197, #227, #228
    • Custom libjpeg-turbo removed because it's provided by OpenCV #231
    • 64-bit Qt builds are now smaller #236
    • Custom builds should be now rather easy to do locally #235:
      1. Clone this repository
      2. Optional: set up ENABLE_CONTRIB and ENABLE_HEADLESS environment variables to 1 if needed
      3. Optional: add additional Cmake arguments to CMAKE_ARGS environment variable
      4. Run python setup.py bdist_wheel
    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • tensorboard

    Hello, how does your TensorBoard work? I have trained as required, and the corresponding files are generated in checkpoints, but when viewing TensorBoard there is no output on the web page ("This page isn't working").

    opened by wangping1408 1
  • conv2 is wrong?

    class PFLDInference(nn.Module):
        def __init__(self):
            super(PFLDInference, self).__init__()

            self.conv1 = nn.Conv2d(
                3, 64, kernel_size=3, stride=2, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace=True)

            self.conv2 = nn.Conv2d(
                64, 64, kernel_size=3, stride=1, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace=True)
    

    @polarisZhao I think conv2 is a depth-wise convolution instead of a normal convolution, right?

    opened by AnthonyF333 1
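
    For comparison, a depthwise convolution sets groups equal to the number of input channels so each channel gets its own 3x3 filter, while the conv2 above is a dense convolution. A small sketch of the difference:

    # Dense vs. depthwise 3x3 convolution over 64 channels.
    import torch.nn as nn

    dense_conv     = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
    depthwise_conv = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1,
                               groups=64, bias=False)  # one filter per input channel

    print(sum(p.numel() for p in dense_conv.parameters()))      # 36864 = 64*64*3*3
    print(sum(p.numel() for p in depthwise_conv.parameters()))  # 576   = 64*1*3*3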
  • RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 188 and 156 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:689

    Hello, thank you very much for your work. I ran into the problem above when running train.py. The specific error is as follows: Traceback (most recent call last):

    File "train.py", line 232, in main(args) File "train.py", line 173, in main criterion, epoch) File "train.py", line 85, in validate for img, landmark_gt, attribute_gt, euler_angle_gt in wlfw_val_dataloader: File "/home//anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in next return self._process_data(data) File "/home//anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data data.reraise() File "/home//anaconda3/lib/python3.6/site-packages/torch/_utils.py", line 369, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop data = fetcher.fetch(index) File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/hom/anaconda3/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 80, in default_collate return [default_collate(samples) for samples in transposed] File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 80, in return [default_collate(samples) for samples in transposed] File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate return torch.stack(batch, 0, out=out) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 188 and 156 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:689

    opened by wyuzyf 1
  • how to train on cpu

    Hi! I have no NVIDIA GPU on my computer. When I run train.py, the error shows: "AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from" How can I train on CPU?

    opened by sljlp 1
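
    A common workaround is to pick the device dynamically and move the model and every batch to it, instead of relying on hard-coded .cuda() calls; a minimal sketch (the exact lines to change in train.py may differ):

    # Device-agnostic pattern: runs on the GPU when available, otherwise on the CPU.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = nn.Conv2d(3, 8, kernel_size=3).to(device)   # stand-in for PFLDInference
    batch = torch.randn(4, 3, 112, 112, device=device)  # stand-in for a DataLoader batch
    print(model(batch).device)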
  • Training: how much GPU memory do you need?

    I tried to train on an NV2080 (8 GB) but it failed with out of memory. What's your training environment? What GPU do you use?

    RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 7.76 GiB total capacity; 6.88 GiB already allocated; 7.56 MiB free; 34.09 MiB cached)

    opened by foolhard 1
  • Missing file

    Hello, in your test.py you have parser.add_argument('--test_dataset', default='./data/test_data/list.txt', type=str); however, what is the list.txt here, and could you give us the format needed to test with WLFWDatasets? Thanks

    opened by ElvishElvis 1
  • Face with mask

    Hello, thank you for your nice repository. I ran some tests on faces with masks, but the model cannot detect landmarks because the face detector does not generate any bounding box. Could you try some cases like this? Please help me figure out what the problem is. Thank you

    opened by hathubkhn 0
  • Bad quality of "eye" landmarks when eyes are closed

    Hello, great work there!!!

    I executed the camera.py file with your pre-trained model at "./checkpoint/snapshot/checkpoint.pth.tar".

    I noticed that the eye landmarks generated when the eyes are closed are very bad (shown in the figure below). Is this behavior normal? If not, can you give me some advice on how to overcome this? Thanks in advance.

    (image attached in the original issue)

    opened by HungVS 1
  • Cannot find annotations

    Hi! Nice work there. I could download the images from the link provided, but the annotations are unavailable. Could you please share a sample txt for the annotations? I need to annotate my custom dataset and try it on this model. Many thanks...

    opened by MuzammiFareed 1
  • Bump numpy from 1.17.2 to 1.22.0

    Bumps numpy from 1.17.2 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0