“Robust Lightweight Facial Expression Recognition Network with Label Distribution Training”, AAAI 2021.

Overview

EfficientFace

Zengqun Zhao, Qingshan Liu, and Feng Zhou. "Robust Lightweight Facial Expression Recognition Network with Label Distribution Training." AAAI 2021.

Requirements

  • Python >= 3.6
  • PyTorch >= 1.2
  • torchvision >= 0.4.0

Training

  • Step 1: download the basic-emotion subset of RAF-DB, and make sure it has the following structure:
./RAF-DB/
         train/
               0/
                 train_09748.jpg
                 ...
                 train_12271.jpg
               1/
               ...
               6/
         test/
              0/
              ...
              6/

[Note] 0: Neutral; 1: Happiness; 2: Sadness; 3: Surprise; 4: Fear; 5: Disgust; 6: Anger
  • Step 2: download the pre-trained model from Google Drive, and put it into ./checkpoint.
  • Step 3: change --data in run.sh to your dataset path.
  • Step 4: run sh run.sh.
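With the folder layout above, torchvision's ImageFolder assigns class indices 0-6 in sorted folder order, so predictions can be decoded with a simple lookup. A minimal sketch (the list just mirrors the label note below):

```python
# RAF-DB basic-emotion labels, in the same order as the class folders 0-6.
EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]

def decode_label(index: int) -> str:
    """Map a predicted class index back to its emotion name."""
    return EMOTIONS[index]
```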

Pre-trained Models

  • Sept. 16, 2021 Update
    We provide ResNet-18 and ResNet-50 models pre-trained on MS-Celeb-1M (12,666 classes) for your research.
    The Google Drive link for the ResNet-18 model. The Google Drive link for the ResNet-50 model.
    The pre-trained ResNet-50 model can also be used for LDG.
  • Nov. 6, 2021 Update
    The fine-tuned LDG models on CAER-S, AffectNet-7, and AffectNet-8 can be downloaded here, here, and here, respectively.
  • Nov. 12, 2021 Update
    The trained EfficientFace models on RAF-DB, CAER-S, AffectNet-7, and AffectNet-8 can be downloaded here, here, here, and here, respectively. As reported in the paper, the test accuracies are 88.36%, 85.87%, 63.70%, and 59.89%, respectively.

Citation

@inproceedings{zhao2021robust,
  title={Robust Lightweight Facial Expression Recognition Network with Label Distribution Training},
  author={Zhao, Zengqun and Liu, Qingshan and Zhou, Feng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={4},
  pages={3510--3519},
  year={2021}
}

Note

The number of samples in the CAER-S dataset employed in our work should be: 69,982 in total, 48,995 in the training set, and 20,987 in the test set. We apologize for the typos in our paper.

Comments
  • missing the last fc layer in the pre-trained model

    Hi, thanks for the nice and clean repo. I am trying to use your code on some in-the-wild images. I noticed that the provided pre-trained model_cla doesn't include the weights of the final FC layer (1024 x 7). Could you help me with that? Thanks again.

    opened by sukun1045 19
  • cannot reproduce the result with CAER-S

    Thanks for sharing the models.

    I am trying to reproduce the results in the paper on the CAER-S dataset. At first, view(-1) did not work in the accuracy function (line 325 in main.py), so I used reshape(-1) instead of view(-1). I believe this does not affect the results.
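For reference, the swap should indeed be harmless: view(-1) fails on non-contiguous tensors (e.g. after a transpose), while reshape(-1) falls back to a copy and returns the same values:

```python
import torch

x = torch.arange(6).view(2, 3).t()  # transposing makes the tensor non-contiguous

try:
    x.view(-1)          # fails: view requires compatible contiguous memory
except RuntimeError:
    pass

flat = x.reshape(-1)    # reshape copies when needed and succeeds
```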

    Nevertheless, I can't get the expected result. The accuracy is quite low even though I use the pre-trained model uploaded in this repo. The test-set accuracy is not bad at first, but it gets worse over the epochs. The progress is shown below.

    Training time: 05-31 16:52
    Current learning rate:  0.1
    Epoch: [0][  0/157]     Loss 0.2979 (0.2979)    Accuracy 12.500 (12.500)
    Epoch: [0][ 10/157]     Loss 0.2384 (0.2485)    Accuracy 19.531 (13.707)
    Epoch: [0][ 20/157]     Loss 0.2250 (0.2420)    Accuracy 10.156 (14.174)
    Epoch: [0][ 30/157]     Loss 0.2187 (0.2359)    Accuracy 17.188 (14.793)
    Epoch: [0][ 40/157]     Loss 0.2107 (0.2317)    Accuracy 17.969 (15.377)
    Epoch: [0][ 50/157]     Loss 0.2052 (0.2272)    Accuracy 23.438 (15.395)
    Epoch: [0][ 60/157]     Loss 0.2063 (0.2236)    Accuracy 16.406 (15.151)
    Epoch: [0][ 70/157]     Loss 0.2074 (0.2214)    Accuracy 16.406 (15.119)
    Epoch: [0][ 80/157]     Loss 0.2119 (0.2199)    Accuracy 11.719 (15.239)
    Epoch: [0][ 90/157]     Loss 0.2050 (0.2181)    Accuracy 15.625 (15.393)
    Epoch: [0][100/157]     Loss 0.2081 (0.2168)    Accuracy 14.844 (15.316)
    Epoch: [0][110/157]     Loss 0.2133 (0.2158)    Accuracy 15.625 (15.280)
    Epoch: [0][120/157]     Loss 0.2068 (0.2151)    Accuracy 13.281 (15.283)
    Epoch: [0][130/157]     Loss 0.2117 (0.2149)    Accuracy 15.625 (15.357)
    Epoch: [0][140/157]     Loss 0.2069 (0.2142)    Accuracy 17.188 (15.376)
    Epoch: [0][150/157]     Loss 0.2092 (0.2137)    Accuracy 12.500 (15.444)
    Test: [0/7]     Loss 1.0344 (1.0344)    Accuracy 57.031 (57.031)
     *** Accuracy 15.067  *** 
    
    opened by kiyoungkim1 6
  • pretrained weights loading error

    Great work! When I used the pretrained weights to check the performance, I wrote a separate validation script to see the result. As shown in the code, the validation process uses only model_cla to get the result. When I loaded Pretrained_EfficientFace.tar into model_cla, the error below occurred. Do you have any idea?

    model_cla.load_state_dict(pre_trained_dict) File "E:\anaconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 1224, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for EfficientFace: Missing key(s) in state_dict: "conv1.0.weight", "conv1.1.weight", "conv1.1.bias", "conv1.1.running_mean", "conv1.1.running_var", "stage2.0.branch1.0.weight", "stage2.0.branch1.1.weight", "stage2.0.branch1.1.bias", "stage2.0.branch1.1.running_mean", "stage2.0.branch1.1.running_var", "stage2.0.branch1.2.weight", "stage2.0.branch1.3.weight", "stage2.0.branch1.3.bias", "stage2.0.branch1.3.running_mean", "stage2.0.branch1.3.running_var", "stage2.0.branch2.0.weight", "stage2.0.branch2.1.weight", "stage2.0.branch2.1.bias", "stage2.0.branch2.1.running_mean", "stage2.0.branch2.1.running_var", "stage2.0.branch2.3.weight", "stage2.0.branch2.4.weight", "stage2.0.branch2.4.bias", "stage2.0.branch2.4.running_mean", "stage2.0.branch2.4.running_var", "stage2.0.branch2.5.weight", "stage2.0.branch2.6.weight", "stage2.0.branch2.6.bias", "stage2.0.branch2.6.running_mean", "stage2.0.branch2.6.running_var", "stage2.1.branch2.0.weight", "stage2.1.branch2.1.weight", "stage2.1.branch2.1.bias", "stage2.1.branch2.1.running_mean", "stage2.1.branch2.1.running_var", "stage2.1.branch2.3.weight", "stage2.1.branch2.4.weight", "stage2.1.branch2.4.bias", "stage2.1.branch2.4.running_mean", "stage2.1.branch2.4.running_var", "stage2.1.branch2.5.weight", "stage2.1.branch2.6.weight", "stage2.1.branch2.6.bias", "stage2.1.branch2.6.running_mean", "stage2.1.branch2.6.running_var", "stage2.2.branch2.0.weight", "stage2.2.branch2.1.weight", "stage2.2.branch2.1.bias", "stage2.2.branch2.1.running_mean", "stage2.2.branch2.1.running_var", "stage2.2.branch2.3.weight", "stage2.2.branch2.4.weight", "stage2.2.branch2.4.bias", "stage2.2.branch2.4.running_mean", "stage2.2.branch2.4.running_var", "stage2.2.branch2.5.weight", "stage2.2.branch2.6.weight", 
"stage2.2.branch2.6.bias", "stage2.2.branch2.6.running_mean", "stage2.2.branch2.6.running_var", "stage2.3.branch2.0.weight", "stage2.3.branch2.1.weight", "stage2.3.branch2.1.bias", "stage2.3.branch2.1.running_mean", "stage2.3.branch2.1.running_var", "stage2.3.branch2.3.weight", "stage2.3.branch2.4.weight", "stage2.3.branch2.4.bias", "stage2.3.branch2.4.running_mean", "stage2.3.branch2.4.running_var", "stage2.3.branch2.5.weight", "stage2.3.branch2.6.weight", "stage2.3.branch2.6.bias", "stage2.3.branch2.6.running_mean", "stage2.3.branch2.6.running_var", "stage3.0.branch1.0.weight", "stage3.0.branch1.1.weight", "stage3.0.branch1.1.bias", "stage3.0.branch1.1.running_mean", "stage3.0.branch1.1.running_var", "stage3.0.branch1.2.weight", "stage3.0.branch1.3.weight", "stage3.0.branch1.3.bias", "stage3.0.branch1.3.running_mean", "stage3.0.branch1.3.running_var", "stage3.0.branch2.0.weight", "stage3.0.branch2.1.weight", "stage3.0.branch2.1.bias", "stage3.0.branch2.1.running_mean", "stage3.0.branch2.1.running_var", "stage3.0.branch2.3.weight", "stage3.0.branch2.4.weight", "stage3.0.branch2.4.bias", "stage3.0.branch2.4.running_mean", "stage3.0.branch2.4.running_var", "stage3.0.branch2.5.weight", "stage3.0.branch2.6.weight", "stage3.0.branch2.6.bias", "stage3.0.branch2.6.running_mean", "stage3.0.branch2.6.running_var", "stage3.1.branch2.0.weight", "stage3.1.branch2.1.weight", "stage3.1.branch2.1.bias", "stage3.1.branch2.1.running_mean", "stage3.1.branch2.1.running_var", "stage3.1.branch2.3.weight", "stage3.1.branch2.4.weight", "stage3.1.branch2.4.bias", "stage3.1.branch2.4.running_mean", "stage3.1.branch2.4.running_var", "stage3.1.branch2.5.weight", "stage3.1.branch2.6.weight", "stage3.1.branch2.6.bias", "stage3.1.branch2.6.running_mean", "stage3.1.branch2.6.running_var", "stage3.2.branch2.0.weight", "stage3.2.branch2.1.weight", "stage3.2.branch2.1.bias", "stage3.2.branch2.1.running_mean", "stage3.2.branch2.1.running_var", "stage3.2.branch2.3.weight", 
"stage3.2.branch2.4.weight", "stage3.2.branch2.4.bias", "stage3.2.branch2.4.running_mean", "stage3.2.branch2.4.running_var", "stage3.2.branch2.5.weight", "stage3.2.branch2.6.weight", "stage3.2.branch2.6.bias", "stage3.2.branch2.6.running_mean", "stage3.2.branch2.6.running_var", "stage3.3.branch2.0.weight", "stage3.3.branch2.1.weight", "stage3.3.branch2.1.bias", "stage3.3.branch2.1.running_mean", "stage3.3.branch2.1.running_var", "stage3.3.branch2.3.weight", "stage3.3.branch2.4.weight", "stage3.3.branch2.4.bias", "stage3.3.branch2.4.running_mean", "stage3.3.branch2.4.running_var", "stage3.3.branch2.5.weight", "stage3.3.branch2.6.weight", "stage3.3.branch2.6.bias", "stage3.3.branch2.6.running_mean", "stage3.3.branch2.6.running_var", "stage3.4.branch2.0.weight", "stage3.4.branch2.1.weight", "stage3.4.branch2.1.bias", "stage3.4.branch2.1.running_mean", "stage3.4.branch2.1.running_var", "stage3.4.branch2.3.weight", "stage3.4.branch2.4.weight", "stage3.4.branch2.4.bias", "stage3.4.branch2.4.running_mean", "stage3.4.branch2.4.running_var", "stage3.4.branch2.5.weight", "stage3.4.branch2.6.weight", "stage3.4.branch2.6.bias", "stage3.4.branch2.6.running_mean", "stage3.4.branch2.6.running_var", "stage3.5.branch2.0.weight", "stage3.5.branch2.1.weight", "stage3.5.branch2.1.bias", "stage3.5.branch2.1.running_mean", "stage3.5.branch2.1.running_var", "stage3.5.branch2.3.weight", "stage3.5.branch2.4.weight", "stage3.5.branch2.4.bias", "stage3.5.branch2.4.running_mean", "stage3.5.branch2.4.running_var", "stage3.5.branch2.5.weight", "stage3.5.branch2.6.weight", "stage3.5.branch2.6.bias", "stage3.5.branch2.6.running_mean", "stage3.5.branch2.6.running_var", "stage3.6.branch2.0.weight", "stage3.6.branch2.1.weight", "stage3.6.branch2.1.bias", "stage3.6.branch2.1.running_mean", "stage3.6.branch2.1.running_var", "stage3.6.branch2.3.weight", "stage3.6.branch2.4.weight", "stage3.6.branch2.4.bias", "stage3.6.branch2.4.running_mean", "stage3.6.branch2.4.running_var", 
"stage3.6.branch2.5.weight", "stage3.6.branch2.6.weight", "stage3.6.branch2.6.bias", "stage3.6.branch2.6.running_mean", "stage3.6.branch2.6.running_var", "stage3.7.branch2.0.weight", "stage3.7.branch2.1.weight", "stage3.7.branch2.1.bias", "stage3.7.branch2.1.running_mean", "stage3.7.branch2.1.running_var", "stage3.7.branch2.3.weight", "stage3.7.branch2.4.weight", "stage3.7.branch2.4.bias", "stage3.7.branch2.4.running_mean", "stage3.7.branch2.4.running_var", "stage3.7.branch2.5.weight", "stage3.7.branch2.6.weight", "stage3.7.branch2.6.bias", "stage3.7.branch2.6.running_mean", "stage3.7.branch2.6.running_var", "stage4.0.branch1.0.weight", "stage4.0.branch1.1.weight", "stage4.0.branch1.1.bias", "stage4.0.branch1.1.running_mean", "stage4.0.branch1.1.running_var", "stage4.0.branch1.2.weight", "stage4.0.branch1.3.weight", "stage4.0.branch1.3.bias", "stage4.0.branch1.3.running_mean", "stage4.0.branch1.3.running_var", "stage4.0.branch2.0.weight", "stage4.0.branch2.1.weight", "stage4.0.branch2.1.bias", "stage4.0.branch2.1.running_mean", "stage4.0.branch2.1.running_var", "stage4.0.branch2.3.weight", "stage4.0.branch2.4.weight", "stage4.0.branch2.4.bias", "stage4.0.branch2.4.running_mean", "stage4.0.branch2.4.running_var", "stage4.0.branch2.5.weight", "stage4.0.branch2.6.weight", "stage4.0.branch2.6.bias", "stage4.0.branch2.6.running_mean", "stage4.0.branch2.6.running_var", "stage4.1.branch2.0.weight", "stage4.1.branch2.1.weight", "stage4.1.branch2.1.bias", "stage4.1.branch2.1.running_mean", "stage4.1.branch2.1.running_var", "stage4.1.branch2.3.weight", "stage4.1.branch2.4.weight", "stage4.1.branch2.4.bias", "stage4.1.branch2.4.running_mean", "stage4.1.branch2.4.running_var", "stage4.1.branch2.5.weight", "stage4.1.branch2.6.weight", "stage4.1.branch2.6.bias", "stage4.1.branch2.6.running_mean", "stage4.1.branch2.6.running_var", "stage4.2.branch2.0.weight", "stage4.2.branch2.1.weight", "stage4.2.branch2.1.bias", "stage4.2.branch2.1.running_mean", 
"stage4.2.branch2.1.running_var", "stage4.2.branch2.3.weight", "stage4.2.branch2.4.weight", "stage4.2.branch2.4.bias", "stage4.2.branch2.4.running_mean", "stage4.2.branch2.4.running_var", "stage4.2.branch2.5.weight", "stage4.2.branch2.6.weight", "stage4.2.branch2.6.bias", "stage4.2.branch2.6.running_mean", "stage4.2.branch2.6.running_var", "stage4.3.branch2.0.weight", "stage4.3.branch2.1.weight", "stage4.3.branch2.1.bias", "stage4.3.branch2.1.running_mean", "stage4.3.branch2.1.running_var", "stage4.3.branch2.3.weight", "stage4.3.branch2.4.weight", "stage4.3.branch2.4.bias", "stage4.3.branch2.4.running_mean", "stage4.3.branch2.4.running_var", "stage4.3.branch2.5.weight", "stage4.3.branch2.6.weight", "stage4.3.branch2.6.bias", "stage4.3.branch2.6.running_mean", "stage4.3.branch2.6.running_var", "local.conv1_1.weight", "local.bn1_1.weight", "local.bn1_1.bias", "local.bn1_1.running_mean", "local.bn1_1.running_var", "local.conv1_2.weight", "local.bn1_2.weight", "local.bn1_2.bias", "local.bn1_2.running_mean", "local.bn1_2.running_var", "local.conv2_1.weight", "local.bn2_1.weight", "local.bn2_1.bias", "local.bn2_1.running_mean", "local.bn2_1.running_var", "local.conv2_2.weight", "local.bn2_2.weight", "local.bn2_2.bias", "local.bn2_2.running_mean", "local.bn2_2.running_var", "local.conv3_1.weight", "local.bn3_1.weight", "local.bn3_1.bias", "local.bn3_1.running_mean", "local.bn3_1.running_var", "local.conv3_2.weight", "local.bn3_2.weight", "local.bn3_2.bias", "local.bn3_2.running_mean", "local.bn3_2.running_var", "local.conv4_1.weight", "local.bn4_1.weight", "local.bn4_1.bias", "local.bn4_1.running_mean", "local.bn4_1.running_var", "local.conv4_2.weight", "local.bn4_2.weight", "local.bn4_2.bias", "local.bn4_2.running_mean", "local.bn4_2.running_var", "modulator.channel_att.gate_c.gate_c_fc_0.weight", "modulator.channel_att.gate_c.gate_c_fc_0.bias", "modulator.channel_att.gate_c.gate_c_bn_1.weight", "modulator.channel_att.gate_c.gate_c_bn_1.bias", 
"modulator.channel_att.gate_c.gate_c_bn_1.running_mean", "modulator.channel_att.gate_c.gate_c_bn_1.running_var", "modulator.channel_att.gate_c.gate_c_fc_final.weight", "modulator.channel_att.gate_c.gate_c_fc_final.bias", "modulator.spatial_att.gate_s.gate_s_conv_reduce0.weight", "modulator.spatial_att.gate_s.gate_s_conv_reduce0.bias", "modulator.spatial_att.gate_s.gate_s_bn_reduce0.weight", "modulator.spatial_att.gate_s.gate_s_bn_reduce0.bias", "modulator.spatial_att.gate_s.gate_s_bn_reduce0.running_mean", "modulator.spatial_att.gate_s.gate_s_bn_reduce0.running_var", "modulator.spatial_att.gate_s.gate_s_conv_di_0.weight", "modulator.spatial_att.gate_s.gate_s_conv_di_0.bias", "modulator.spatial_att.gate_s.gate_s_bn_di_0.weight", "modulator.spatial_att.gate_s.gate_s_bn_di_0.bias", "modulator.spatial_att.gate_s.gate_s_bn_di_0.running_mean", "modulator.spatial_att.gate_s.gate_s_bn_di_0.running_var", "modulator.spatial_att.gate_s.gate_s_conv_di_1.weight", "modulator.spatial_att.gate_s.gate_s_conv_di_1.bias", "modulator.spatial_att.gate_s.gate_s_bn_di_1.weight", "modulator.spatial_att.gate_s.gate_s_bn_di_1.bias", "modulator.spatial_att.gate_s.gate_s_bn_di_1.running_mean", "modulator.spatial_att.gate_s.gate_s_bn_di_1.running_var", "modulator.spatial_att.gate_s.gate_s_conv_final.weight", "modulator.spatial_att.gate_s.gate_s_conv_final.bias", "conv5.0.weight", "conv5.1.weight", "conv5.1.bias", "conv5.1.running_mean", "conv5.1.running_var", "fc.weight", "fc.bias". 
Unexpected key(s) in state_dict: "module.conv1.0.weight", "module.conv1.1.weight", "module.conv1.1.bias", "module.conv1.1.running_mean", "module.conv1.1.running_var", "module.conv1.1.num_batches_tracked", "module.stage2.0.branch1.0.weight", "module.stage2.0.branch1.1.weight", "module.stage2.0.branch1.1.bias", "module.stage2.0.branch1.1.running_mean", "module.stage2.0.branch1.1.running_var", "module.stage2.0.branch1.1.num_batches_tracked", "module.stage2.0.branch1.2.weight", "module.stage2.0.branch1.3.weight", "module.stage2.0.branch1.3.bias", "module.stage2.0.branch1.3.running_mean", "module.stage2.0.branch1.3.running_var", "module.stage2.0.branch1.3.num_batches_tracked", "module.stage2.0.branch2.0.weight", "module.stage2.0.branch2.1.weight", "module.stage2.0.branch2.1.bias", "module.stage2.0.branch2.1.running_mean", "module.stage2.0.branch2.1.running_var", "module.stage2.0.branch2.1.num_batches_tracked", "module.stage2.0.branch2.3.weight", "module.stage2.0.branch2.4.weight", "module.stage2.0.branch2.4.bias", "module.stage2.0.branch2.4.running_mean", "module.stage2.0.branch2.4.running_var", "module.stage2.0.branch2.4.num_batches_tracked", "module.stage2.0.branch2.5.weight", "module.stage2.0.branch2.6.weight", "module.stage2.0.branch2.6.bias", "module.stage2.0.branch2.6.running_mean", "module.stage2.0.branch2.6.running_var", "module.stage2.0.branch2.6.num_batches_tracked", "module.stage2.1.branch2.0.weight", "module.stage2.1.branch2.1.weight", "module.stage2.1.branch2.1.bias", "module.stage2.1.branch2.1.running_mean", "module.stage2.1.branch2.1.running_var", "module.stage2.1.branch2.1.num_batches_tracked", "module.stage2.1.branch2.3.weight", "module.stage2.1.branch2.4.weight", "module.stage2.1.branch2.4.bias", "module.stage2.1.branch2.4.running_mean", "module.stage2.1.branch2.4.running_var", "module.stage2.1.branch2.4.num_batches_tracked", "module.stage2.1.branch2.5.weight", "module.stage2.1.branch2.6.weight", "module.stage2.1.branch2.6.bias", 
"module.stage2.1.branch2.6.running_mean", "module.stage2.1.branch2.6.running_var", "module.stage2.1.branch2.6.num_batches_tracked", "module.stage2.2.branch2.0.weight", "module.stage2.2.branch2.1.weight", "module.stage2.2.branch2.1.bias", "module.stage2.2.branch2.1.running_mean", "module.stage2.2.branch2.1.running_var", "module.stage2.2.branch2.1.num_batches_tracked", "module.stage2.2.branch2.3.weight", "module.stage2.2.branch2.4.weight", "module.stage2.2.branch2.4.bias", "module.stage2.2.branch2.4.running_mean", "module.stage2.2.branch2.4.running_var", "module.stage2.2.branch2.4.num_batches_tracked", "module.stage2.2.branch2.5.weight", "module.stage2.2.branch2.6.weight", "module.stage2.2.branch2.6.bias", "module.stage2.2.branch2.6.running_mean", "module.stage2.2.branch2.6.running_var", "module.stage2.2.branch2.6.num_batches_tracked", "module.stage2.3.branch2.0.weight", "module.stage2.3.branch2.1.weight", "module.stage2.3.branch2.1.bias", "module.stage2.3.branch2.1.running_mean", "module.stage2.3.branch2.1.running_var", "module.stage2.3.branch2.1.num_batches_tracked", "module.stage2.3.branch2.3.weight", "module.stage2.3.branch2.4.weight", "module.stage2.3.branch2.4.bias", "module.stage2.3.branch2.4.running_mean", "module.stage2.3.branch2.4.running_var", "module.stage2.3.branch2.4.num_batches_tracked", "module.stage2.3.branch2.5.weight", "module.stage2.3.branch2.6.weight", "module.stage2.3.branch2.6.bias", "module.stage2.3.branch2.6.running_mean", "module.stage2.3.branch2.6.running_var", "module.stage2.3.branch2.6.num_batches_tracked", "module.stage3.0.branch1.0.weight", "module.stage3.0.branch1.1.weight", "module.stage3.0.branch1.1.bias", "module.stage3.0.branch1.1.running_mean", "module.stage3.0.branch1.1.running_var", "module.stage3.0.branch1.1.num_batches_tracked", "module.stage3.0.branch1.2.weight", "module.stage3.0.branch1.3.weight", "module.stage3.0.branch1.3.bias", "module.stage3.0.branch1.3.running_mean", "module.stage3.0.branch1.3.running_var", 
"module.stage3.0.branch1.3.num_batches_tracked", "module.stage3.0.branch2.0.weight", "module.stage3.0.branch2.1.weight", "module.stage3.0.branch2.1.bias", "module.stage3.0.branch2.1.running_mean", "module.stage3.0.branch2.1.running_var", "module.stage3.0.branch2.1.num_batches_tracked", "module.stage3.0.branch2.3.weight", "module.stage3.0.branch2.4.weight", "module.stage3.0.branch2.4.bias", "module.stage3.0.branch2.4.running_mean", "module.stage3.0.branch2.4.running_var", "module.stage3.0.branch2.4.num_batches_tracked", "module.stage3.0.branch2.5.weight", "module.stage3.0.branch2.6.weight", "module.stage3.0.branch2.6.bias", "module.stage3.0.branch2.6.running_mean", "module.stage3.0.branch2.6.running_var", "module.stage3.0.branch2.6.num_batches_tracked", "module.stage3.1.branch2.0.weight", "module.stage3.1.branch2.1.weight", "module.stage3.1.branch2.1.bias", "module.stage3.1.branch2.1.running_mean", "module.stage3.1.branch2.1.running_var", "module.stage3.1.branch2.1.num_batches_tracked", "module.stage3.1.branch2.3.weight", "module.stage3.1.branch2.4.weight", "module.stage3.1.branch2.4.bias", "module.stage3.1.branch2.4.running_mean", "module.stage3.1.branch2.4.running_var", "module.stage3.1.branch2.4.num_batches_tracked", "module.stage3.1.branch2.5.weight", "module.stage3.1.branch2.6.weight", "module.stage3.1.branch2.6.bias", "module.stage3.1.branch2.6.running_mean", "module.stage3.1.branch2.6.running_var", "module.stage3.1.branch2.6.num_batches_tracked", "module.stage3.2.branch2.0.weight", "module.stage3.2.branch2.1.weight", "module.stage3.2.branch2.1.bias", "module.stage3.2.branch2.1.running_mean", "module.stage3.2.branch2.1.running_var", "module.stage3.2.branch2.1.num_batches_tracked", "module.stage3.2.branch2.3.weight", "module.stage3.2.branch2.4.weight", "module.stage3.2.branch2.4.bias", "module.stage3.2.branch2.4.running_mean", "module.stage3.2.branch2.4.running_var", "module.stage3.2.branch2.4.num_batches_tracked", "module.stage3.2.branch2.5.weight", 
"module.stage3.2.branch2.6.weight", "module.stage3.2.branch2.6.bias", "module.stage3.2.branch2.6.running_mean", "module.stage3.2.branch2.6.running_var", "module.stage3.2.branch2.6.num_batches_tracked", "module.stage3.3.branch2.0.weight", "module.stage3.3.branch2.1.weight", "module.stage3.3.branch2.1.bias", "module.stage3.3.branch2.1.running_mean", "module.stage3.3.branch2.1.running_var", "module.stage3.3.branch2.1.num_batches_tracked", "module.stage3.3.branch2.3.weight", "module.stage3.3.branch2.4.weight", "module.stage3.3.branch2.4.bias", "module.stage3.3.branch2.4.running_mean", "module.stage3.3.branch2.4.running_var", "module.stage3.3.branch2.4.num_batches_tracked", "module.stage3.3.branch2.5.weight", "module.stage3.3.branch2.6.weight", "module.stage3.3.branch2.6.bias", "module.stage3.3.branch2.6.running_mean", "module.stage3.3.branch2.6.running_var", "module.stage3.3.branch2.6.num_batches_tracked", "module.stage3.4.branch2.0.weight", "module.stage3.4.branch2.1.weight", "module.stage3.4.branch2.1.bias", "module.stage3.4.branch2.1.running_mean", "module.stage3.4.branch2.1.running_var", "module.stage3.4.branch2.1.num_batches_tracked", "module.stage3.4.branch2.3.weight", "module.stage3.4.branch2.4.weight", "module.stage3.4.branch2.4.bias", "module.stage3.4.branch2.4.running_mean", "module.stage3.4.branch2.4.running_var", "module.stage3.4.branch2.4.num_batches_tracked", "module.stage3.4.branch2.5.weight", "module.stage3.4.branch2.6.weight", "module.stage3.4.branch2.6.bias", "module.stage3.4.branch2.6.running_mean", "module.stage3.4.branch2.6.running_var", "module.stage3.4.branch2.6.num_batches_tracked", "module.stage3.5.branch2.0.weight", "module.stage3.5.branch2.1.weight", "module.stage3.5.branch2.1.bias", "module.stage3.5.branch2.1.running_mean", "module.stage3.5.branch2.1.running_var", "module.stage3.5.branch2.1.num_batches_tracked", "module.stage3.5.branch2.3.weight", "module.stage3.5.branch2.4.weight", "module.stage3.5.branch2.4.bias", 
"module.stage3.5.branch2.4.running_mean", "module.stage3.5.branch2.4.running_var", "module.stage3.5.branch2.4.num_batches_tracked", "module.stage3.5.branch2.5.weight", "module.stage3.5.branch2.6.weight", "module.stage3.5.branch2.6.bias", "module.stage3.5.branch2.6.running_mean", "module.stage3.5.branch2.6.running_var", "module.stage3.5.branch2.6.num_batches_tracked", "module.stage3.6.branch2.0.weight", "module.stage3.6.branch2.1.weight", "module.stage3.6.branch2.1.bias", "module.stage3.6.branch2.1.running_mean", "module.stage3.6.branch2.1.running_var", "module.stage3.6.branch2.1.num_batches_tracked", "module.stage3.6.branch2.3.weight", "module.stage3.6.branch2.4.weight", "module.stage3.6.branch2.4.bias", "module.stage3.6.branch2.4.running_mean", "module.stage3.6.branch2.4.running_var", "module.stage3.6.branch2.4.num_batches_tracked", "module.stage3.6.branch2.5.weight", "module.stage3.6.branch2.6.weight", "module.stage3.6.branch2.6.bias", "module.stage3.6.branch2.6.running_mean", "module.stage3.6.branch2.6.running_var", "module.stage3.6.branch2.6.num_batches_tracked", "module.stage3.7.branch2.0.weight", "module.stage3.7.branch2.1.weight", "module.stage3.7.branch2.1.bias", "module.stage3.7.branch2.1.running_mean", "module.stage3.7.branch2.1.running_var", "module.stage3.7.branch2.1.num_batches_tracked", "module.stage3.7.branch2.3.weight", "module.stage3.7.branch2.4.weight", "module.stage3.7.branch2.4.bias", "module.stage3.7.branch2.4.running_mean", "module.stage3.7.branch2.4.running_var", "module.stage3.7.branch2.4.num_batches_tracked", "module.stage3.7.branch2.5.weight", "module.stage3.7.branch2.6.weight", "module.stage3.7.branch2.6.bias", "module.stage3.7.branch2.6.running_mean", "module.stage3.7.branch2.6.running_var", "module.stage3.7.branch2.6.num_batches_tracked", "module.stage4.0.branch1.0.weight", "module.stage4.0.branch1.1.weight", "module.stage4.0.branch1.1.bias", "module.stage4.0.branch1.1.running_mean", "module.stage4.0.branch1.1.running_var", 
"module.stage4.0.branch1.1.num_batches_tracked", "module.stage4.0.branch1.2.weight", "module.stage4.0.branch1.3.weight", "module.stage4.0.branch1.3.bias", "module.stage4.0.branch1.3.running_mean", "module.stage4.0.branch1.3.running_var", "module.stage4.0.branch1.3.num_batches_tracked", "module.stage4.0.branch2.0.weight", "module.stage4.0.branch2.1.weight", "module.stage4.0.branch2.1.bias", "module.stage4.0.branch2.1.running_mean", "module.stage4.0.branch2.1.running_var", "module.stage4.0.branch2.1.num_batches_tracked", "module.stage4.0.branch2.3.weight", "module.stage4.0.branch2.4.weight", "module.stage4.0.branch2.4.bias", "module.stage4.0.branch2.4.running_mean", "module.stage4.0.branch2.4.running_var", "module.stage4.0.branch2.4.num_batches_tracked", "module.stage4.0.branch2.5.weight", "module.stage4.0.branch2.6.weight", "module.stage4.0.branch2.6.bias", "module.stage4.0.branch2.6.running_mean", "module.stage4.0.branch2.6.running_var", "module.stage4.0.branch2.6.num_batches_tracked", "module.stage4.1.branch2.0.weight", "module.stage4.1.branch2.1.weight", "module.stage4.1.branch2.1.bias", "module.stage4.1.branch2.1.running_mean", "module.stage4.1.branch2.1.running_var", "module.stage4.1.branch2.1.num_batches_tracked", "module.stage4.1.branch2.3.weight", "module.stage4.1.branch2.4.weight", "module.stage4.1.branch2.4.bias", "module.stage4.1.branch2.4.running_mean", "module.stage4.1.branch2.4.running_var", "module.stage4.1.branch2.4.num_batches_tracked", "module.stage4.1.branch2.5.weight", "module.stage4.1.branch2.6.weight", "module.stage4.1.branch2.6.bias", "module.stage4.1.branch2.6.running_mean", "module.stage4.1.branch2.6.running_var", "module.stage4.1.branch2.6.num_batches_tracked", "module.stage4.2.branch2.0.weight", "module.stage4.2.branch2.1.weight", "module.stage4.2.branch2.1.bias", "module.stage4.2.branch2.1.running_mean", "module.stage4.2.branch2.1.running_var", "module.stage4.2.branch2.1.num_batches_tracked", "module.stage4.2.branch2.3.weight", 
"module.stage4.2.branch2.4.weight", "module.stage4.2.branch2.4.bias", "module.stage4.2.branch2.4.running_mean", "module.stage4.2.branch2.4.running_var", "module.stage4.2.branch2.4.num_batches_tracked", "module.stage4.2.branch2.5.weight", "module.stage4.2.branch2.6.weight", "module.stage4.2.branch2.6.bias", "module.stage4.2.branch2.6.running_mean", "module.stage4.2.branch2.6.running_var", "module.stage4.2.branch2.6.num_batches_tracked", "module.stage4.3.branch2.0.weight", "module.stage4.3.branch2.1.weight", "module.stage4.3.branch2.1.bias", "module.stage4.3.branch2.1.running_mean", "module.stage4.3.branch2.1.running_var", "module.stage4.3.branch2.1.num_batches_tracked", "module.stage4.3.branch2.3.weight", "module.stage4.3.branch2.4.weight", "module.stage4.3.branch2.4.bias", "module.stage4.3.branch2.4.running_mean", "module.stage4.3.branch2.4.running_var", "module.stage4.3.branch2.4.num_batches_tracked", "module.stage4.3.branch2.5.weight", "module.stage4.3.branch2.6.weight", "module.stage4.3.branch2.6.bias", "module.stage4.3.branch2.6.running_mean", "module.stage4.3.branch2.6.running_var", "module.stage4.3.branch2.6.num_batches_tracked", "module.local.conv1_1.weight", "module.local.bn1_1.weight", "module.local.bn1_1.bias", "module.local.bn1_1.running_mean", "module.local.bn1_1.running_var", "module.local.bn1_1.num_batches_tracked", "module.local.conv1_2.weight", "module.local.bn1_2.weight", "module.local.bn1_2.bias", "module.local.bn1_2.running_mean", "module.local.bn1_2.running_var", "module.local.bn1_2.num_batches_tracked", "module.local.conv2_1.weight", "module.local.bn2_1.weight", "module.local.bn2_1.bias", "module.local.bn2_1.running_mean", "module.local.bn2_1.running_var", "module.local.bn2_1.num_batches_tracked", "module.local.conv2_2.weight", "module.local.bn2_2.weight", "module.local.bn2_2.bias", "module.local.bn2_2.running_mean", "module.local.bn2_2.running_var", "module.local.bn2_2.num_batches_tracked", "module.local.conv3_1.weight", 
"module.local.bn3_1.weight", "module.local.bn3_1.bias", "module.local.bn3_1.running_mean", "module.local.bn3_1.running_var", "module.local.bn3_1.num_batches_tracked", "module.local.conv3_2.weight", "module.local.bn3_2.weight", "module.local.bn3_2.bias", "module.local.bn3_2.running_mean", "module.local.bn3_2.running_var", "module.local.bn3_2.num_batches_tracked", "module.local.conv4_1.weight", "module.local.bn4_1.weight", "module.local.bn4_1.bias", "module.local.bn4_1.running_mean", "module.local.bn4_1.running_var", "module.local.bn4_1.num_batches_tracked", "module.local.conv4_2.weight", "module.local.bn4_2.weight", "module.local.bn4_2.bias", "module.local.bn4_2.running_mean", "module.local.bn4_2.running_var", "module.local.bn4_2.num_batches_tracked", "module.modulator.channel_att.gate_c.gate_c_fc_0.weight", "module.modulator.channel_att.gate_c.gate_c_fc_0.bias", "module.modulator.channel_att.gate_c.gate_c_bn_1.weight", "module.modulator.channel_att.gate_c.gate_c_bn_1.bias", "module.modulator.channel_att.gate_c.gate_c_bn_1.running_mean", "module.modulator.channel_att.gate_c.gate_c_bn_1.running_var", "module.modulator.channel_att.gate_c.gate_c_bn_1.num_batches_tracked", "module.modulator.channel_att.gate_c.gate_c_fc_final.weight", "module.modulator.channel_att.gate_c.gate_c_fc_final.bias", "module.modulator.spatial_att.gate_s.gate_s_conv_reduce0.weight", "module.modulator.spatial_att.gate_s.gate_s_conv_reduce0.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_reduce0.weight", "module.modulator.spatial_att.gate_s.gate_s_bn_reduce0.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_reduce0.running_mean", "module.modulator.spatial_att.gate_s.gate_s_bn_reduce0.running_var", "module.modulator.spatial_att.gate_s.gate_s_bn_reduce0.num_batches_tracked", "module.modulator.spatial_att.gate_s.gate_s_conv_di_0.weight", "module.modulator.spatial_att.gate_s.gate_s_conv_di_0.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_di_0.weight", 
"module.modulator.spatial_att.gate_s.gate_s_bn_di_0.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_di_0.running_mean", "module.modulator.spatial_att.gate_s.gate_s_bn_di_0.running_var", "module.modulator.spatial_att.gate_s.gate_s_bn_di_0.num_batches_tracked", "module.modulator.spatial_att.gate_s.gate_s_conv_di_1.weight", "module.modulator.spatial_att.gate_s.gate_s_conv_di_1.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_di_1.weight", "module.modulator.spatial_att.gate_s.gate_s_bn_di_1.bias", "module.modulator.spatial_att.gate_s.gate_s_bn_di_1.running_mean", "module.modulator.spatial_att.gate_s.gate_s_bn_di_1.running_var", "module.modulator.spatial_att.gate_s.gate_s_bn_di_1.num_batches_tracked", "module.modulator.spatial_att.gate_s.gate_s_conv_final.weight", "module.modulator.spatial_att.gate_s.gate_s_conv_final.bias", "module.conv5.0.weight", "module.conv5.1.weight", "module.conv5.1.bias", "module.conv5.1.running_mean", "module.conv5.1.running_var", "module.conv5.1.num_batches_tracked", "module.fc.weight", "module.fc.bias".

    opened by yearlz 4
  • Problem setting up the environment

I'm trying to run the project on my PC, but I keep running into the following error:

    Traceback (most recent call last):
      File "D:\LEO\2-Estudos\sem8\tcc2\git reconhecimento de expressoes\EfficientFace\main.py", line 383, in <module>
        main()
      File "D:\LEO\2-Estudos\sem8\tcc2\git reconhecimento de expressoes\EfficientFace\main.py", line 99, in main
        train_dataset = datasets.ImageFolder(traindir,
      File "C:\Users\leodo\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\datasets\folder.py", line 310, in __init__
        super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,
      File "C:\Users\leodo\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\datasets\folder.py", line 145, in __init__
        classes, class_to_idx = self.find_classes(self.root)
      File "C:\Users\leodo\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\datasets\folder.py", line 221, in find_classes
        return find_classes(directory)
      File "C:\Users\leodo\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\datasets\folder.py", line 42, in find_classes
        raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
    FileNotFoundError: Couldn't find any class folder in D:\LEO\2-Estudos\sem8\tcc2\git reconhecimento de expressoes\EfficientFace\train.
    

    Can you help me?
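The error comes from torchvision's ImageFolder, which requires the root directory to contain one subfolder per class, so --data must point at the folder that holds train/ and test/. A minimal sketch of the expected layout, built in a temporary directory purely for illustration:

```python
import os
import tempfile

# ImageFolder derives class labels from subfolder names, so train/ and test/
# must each contain the class folders 0..6 (0: Neutral ... 6: Anger).
root = tempfile.mkdtemp()
for split in ("train", "test"):
    for cls in range(7):
        os.makedirs(os.path.join(root, split, str(cls)))

print(sorted(os.listdir(os.path.join(root, "train"))))  # ['0', '1', '2', '3', '4', '5', '6']
```

In the traceback above, ImageFolder was pointed at `...\EfficientFace\train`, which contains no such class folders.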

    opened by leodollinger 3
  • Question about combining channel and spatial heatmaps

Hi, Zengqun,

Thank you for sharing this great work. I noticed that in your paper, and also in BAM, the channel heatmaps Mc(Fstage2) and the spatial heatmaps Ms(Fstage2) are "added" before being passed through the sigmoid, but in your code they are "multiplied" before the sigmoid: https://github.com/zengqunzhao/EfficientFace/blob/328cb992915ed8c2dcbae351a8d9da718c117d8f/models/modulator.py#L58 I wonder if there is any interesting reason for doing it differently? Thanks.
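For readers comparing the two variants, the difference can be sketched with scalar stand-ins (the numbers are invented for illustration; mc and ms stand for one channel-attention and one spatial-attention value at a single position):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical attention values at one position
mc, ms = 0.5, 2.0

bam_style = sigmoid(mc + ms)   # BAM paper: M(F) = sigmoid(Mc(F) + Ms(F))
code_style = sigmoid(mc * ms)  # modulator.py: the two maps are multiplied first

print(round(bam_style, 3), round(code_style, 3))  # 0.924 0.731
```

The two combinations clearly produce different gates; the issue asks whether the multiplicative form in the code is intentional.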

    opened by alvin870203 2
  • Pretrained LDG got extreme low accuracy on RAF-DB

Hi, Zengqun,

Thanks for your nice work. I ran the code with the provided models on RAF-DB (I did not modify anything except the data path); however, I got very low accuracy, as below:

    Current learning rate: 0.00010000000000000003
    Epoch: [99][ 0/96]  Loss 0.0743 (0.0743)  Accuracy 3.906 ( 3.906)
    Epoch: [99][10/96]  Loss 0.0642 (0.0640)  Accuracy 0.000 ( 1.420)
    Epoch: [99][20/96]  Loss 0.0661 (0.0641)  Accuracy 0.781 ( 1.749)
    Epoch: [99][30/96]  Loss 0.0552 (0.0627)  Accuracy 0.781 ( 1.815)
    Epoch: [99][40/96]  Loss 0.0636 (0.0624)  Accuracy 3.906 ( 1.925)
    Epoch: [99][50/96]  Loss 0.0739 (0.0627)  Accuracy 1.562 ( 1.915)
    Epoch: [99][60/96]  Loss 0.0604 (0.0622)  Accuracy 0.781 ( 1.857)
    Epoch: [99][70/96]  Loss 0.0473 (0.0615)  Accuracy 1.562 ( 1.816)
    Epoch: [99][80/96]  Loss 0.0638 (0.0612)  Accuracy 3.125 ( 1.823)
    Epoch: [99][90/96]  Loss 0.0632 (0.0610)  Accuracy 2.344 ( 1.829)
    Test: [ 0/24]  Loss 7.5441 (7.5441)  Accuracy 6.250 ( 6.250)
    Test: [10/24]  Loss 16.6608 (14.6800)  Accuracy 0.781 ( 2.841)
    Test: [20/24]  Loss 8.7611 (12.6607)  Accuracy 0.000 ( 2.009)
    * Accuracy 1.825
    Current best accuracy: 4.269882678985596 27.060322523117065

I suspected a bug in the pretrained LDG, so I validated its performance on RAF-DB directly and again got extremely low accuracy. The modified validation code and the experiment log are below:

        if args.evaluate:
            # validate(val_loader, model_cla, criterion_val, args)
            validate(val_loader, model_dis, criterion_val, args)
            return
    

    Training time: 06-21 15:24
    Test: [ 0/24]  Loss 9.1622 (9.1622)  Accuracy 5.469 ( 5.469)
    Test: [10/24]  Loss 20.1950 (17.4116)  Accuracy 2.344 ( 3.409)
    Test: [20/24]  Loss 9.6880 (14.8375)  Accuracy 0.781 ( 2.604)
    *** Accuracy 2.347 ***

Could you help me? Thanks very much!

    opened by youcaiSUN 2
  • Change the order of expressions

If I want to change the order of the categories (for example, so that 0 no longer represents Neutral but instead Sadness), what should I change in the code?
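Since ImageFolder derives labels from the class-folder names (0..6), one possible approach (a sketch, not something from the repo) is to rename the class folders accordingly, or to remap the indices after loading. A hypothetical remap that swaps Neutral (0) and Sadness (2):

```python
# old index -> new index; only Neutral (0) and Sadness (2) are swapped here
old_to_new = {0: 2, 1: 1, 2: 0, 3: 3, 4: 4, 5: 5, 6: 6}

labels = [0, 2, 1, 0]                     # labels as produced by the original folder layout
remapped = [old_to_new[y] for y in labels]
print(remapped)  # [2, 0, 1, 2]
```

Either way, the mapping between folder names and emotion categories in the README note must be kept consistent with the training data.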

    opened by DJdjUSER 1
  • Smaller model size

    Hi,

I'm looking into a real-time application where speed is more important than accuracy, so I'm interested in your opinion: if you had to decrease the size of the model by 10x, 5x, or 2x, where would you truncate it?

    E.g. by decreasing the stages and out channels, but by how much? [1]

    [1] model = EfficientFace([4, 8, 4], [29, 116, 232, 464, 1024]) https://github.com/zengqunzhao/EfficientFace/blob/350845c9fb0aae1cf14e728591ebb70b20ae7a22/models/EfficientFace.py#L198
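One hedged back-of-the-envelope, not an official answer: in ShuffleNet-style blocks the pointwise convolutions dominate the parameter count, which scales with products of consecutive channel widths, so halving every width cuts parameters roughly 4x. A crude proxy comparing the repo's channel list against a hypothetical halved variant:

```python
def pointwise_param_proxy(channels):
    # crude proxy for conv parameters: sum of products of consecutive widths
    return sum(a * b for a, b in zip(channels, channels[1:]))

full = pointwise_param_proxy([29, 116, 232, 464, 1024])  # config from EfficientFace.py
half = pointwise_param_proxy([15, 58, 116, 232, 512])    # hypothetical halved widths

print(round(half / full, 2))  # 0.25
```

By the same rough scaling, a 10x reduction would need widths cut by roughly a factor of three, or fewer stage repeats in addition to narrower channels; only profiling the actual model would settle the accuracy trade-off.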

    Thanks, Rasmus

    opened by Rassibassi 1
  • Face extraction and alignment of RAF dataset

    Hi,

Great work, and thanks for sharing. I'm trying to train my own version of the model and I have a quick question.

Did you use the aligned versions from the RAF-DB dataset, or a custom face detection and alignment method?

RAF-DB has two folders, original and aligned. From your README it seems that you used the files from the original folder (e.g. test_0001.jpg); in the aligned folder the naming is test_0001_aligned.jpg.

    Thanks, Rasmus

    opened by Rassibassi 1
  • EfficientFace trained FER models?

Hi! I see that you recently shared several LDG models trained for FER. Do you also plan to share the EfficientFace models trained on the FER datasets, as in the paper?

    Thanks!

    opened by katerynaCh 1
  • Ask about Retinaface in image preprocessing

Hi Zengqun, thanks for sharing this nice work! I want to know which part of the code reflects the statement in your paper that "the face region is detected and aligned using RetinaFace". I looked carefully at the preprocessing part of the code but could not find it. I'm just beginning to study facial expression recognition, so please forgive me if the question is too simple. I would be very grateful for any reply.

    opened by Dearbreeze 1
Owner
Zengqun Zhao
M.S. Student.