Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution

FAU

Implementation of the paper:

Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution. Yingruo Fan, Jacqueline C.K. Lam and Victor O.K. Li. AAAI 2020 [PDF]

The PyTorch version is available at https://github.com/EvelynFan/Pytorch-FAU.

Overview

Environment

  • Ubuntu 18.04.4
  • Python 3.7
  • TensorFlow 1.14.0

Dependencies

Check the required packages, or simply install them by running:

❱❱❱ pip install -r requirements.txt

Datasets

For data preparation, please request access to the BP4D database and the DISFA database.

Data Preprocessing

The Dlib library is used to locate the 68 facial landmarks that define the AU locations. The face images are aligned and resized to 256×256 pixels. The annotation files need to be converted into JSON format with the following structure: [{imgpath:" ", AUs:[AU1_coord_x, AU1_coord_y, AU1_intensity, ...]}, ...]. An example is provided in examples/train_example.json.
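
For illustration only, here is a minimal Python sketch (not the repository's own preprocessing script) that writes annotations in this JSON layout. The image path, AU-center coordinates, and intensity values below are made-up placeholders; the actual mapping from the 68 landmarks to AU centers should follow datasets.py and Figure 2 of the paper.

    # Minimal sketch: build annotation entries in the expected JSON layout.
    # All concrete values (paths, coordinates, intensities) are placeholders.
    import json

    def make_entry(imgpath, au_points, au_intensities):
        # au_points: list of (x, y) AU-center coordinates in the aligned 256x256 image
        # au_intensities: one intensity label per AU center
        aus = []
        for (x, y), intensity in zip(au_points, au_intensities):
            aus += [float(x), float(y), float(intensity)]  # AU_coord_x, AU_coord_y, AU_intensity
        return {"imgpath": imgpath, "AUs": aus}

    entries = [make_entry("images/subject01_0001.jpg",
                          [(96.0, 120.0), (160.0, 120.0)],  # hypothetical AU centers
                          [2, 3])]                          # hypothetical intensities

    with open("train.json", "w") as f:
        json.dump(entries, f)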

Backbone Model

The backbone model is initialized from the pretrained ResNet-V1-50. Please download it under ${DATA_ROOT}. You can change the default path by modifying config.py.
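
As a hedged sketch only, the snippet below shows what such a path override might look like. The attribute names (data_root, init_model_path, output_dir) and the checkpoint filename are hypothetical placeholders, not the repository's actual field names, so match them to the fields that really exist in config.py.

    # Hypothetical excerpt -- the real config.py may use different attribute names.
    import os

    class Config:
        data_root = '/data0/BP4D'                                        # ${DATA_ROOT}
        init_model_path = os.path.join(data_root, 'resnet_v1_50.ckpt')   # pretrained ResNet-V1-50 weights
        output_dir = os.path.join(data_root, 'output')                   # snapshots go under output/models

    cfg = Config()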

Training

❱❱❱ python train.py --gpu 1

Testing

❱❱❱ python test.py --gpu 1 --epoch *

Citation

@inproceedings{fan2020fau,
  title     = {Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution},
  author    = {Fan, Yingruo and Lam, Jacqueline and Li, Victor},
  booktitle = {Thirty-Fourth AAAI Conference on Artificial Intelligence},
  year      = {2020}
}
Comments
  • problems running demo.py

    Hi! And thank you for making this code available. I am trying to run the demo code using the command: python demo.py --gpu 1 --epoch 10

    I have downloaded the three files from the model links and put them in output/models.

    I get the following errors:

    04-27 14:34:19 CRI: Load nothing. There is no model in path /data0/BP4D/output\models\snapshot_10.ckpt.
    E0427 14:34:19.535481 14412 logger.py:46] CRITICAL - CRI: Load nothing. There is no model in path /data0/BP4D/output\models\snapshot_10.ckpt.
    

    Where am I going wrong here? Do I need to hardcode the model path?

    The full trace is below.

    Thanks!

    python demo.py --gpu 1 --epoch 10
    WARNING: Logging before flag parsing goes to stderr.
    W0427 14:34:17.884148 14412 lazy_loader.py:50]
    The TensorFlow contrib module will not be included in TensorFlow 2.0.
    For more information, please see:
      * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
      * https://github.com/tensorflow/addons
      * https://github.com/tensorflow/io (for I/O related ops)
    If you depend on functionality not listed there, please file an issue.
    
    W0427 14:34:17.955076 14412 module_wrapper.py:139] From D:\Vice\FAU\FAU-master\lib\base.py:107: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
    
    W0427 14:34:17.956074 14412 module_wrapper.py:139] From D:\Vice\FAU\FAU-master\lib\base.py:109: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
    
    2020-04-27 14:34:17.963765: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    W0427 14:34:17.982058 14412 module_wrapper.py:139] From D:\Vice\FAU\FAU-master\lib\base.py:128: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
    
    W0427 14:34:17.982058 14412 module_wrapper.py:139] From D:\Vice\FAU\FAU-master\lib\base.py:353: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
    
    W0427 14:34:17.983057 14412 module_wrapper.py:139] From D:\Vice\FAU\FAU-master\lib\base.py:353: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.
    
    W0427 14:34:17.984055 14412 module_wrapper.py:139] From D:\ice\FAU\FAU-master\model_graph.py:131: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
    
    W0427 14:34:17.994045 14412 deprecation.py:323] From D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
    Instructions for updating:
    Please use `layer.__call__` method instead.
    W0427 14:34:19.232768 14412 deprecation.py:506] From D:\\Vice\FAU\FAU-master\model_graph.py:32: calling reduce_sum_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    W0427 14:34:19.279742 14412 deprecation.py:506] From D:\\Vice\FAU\FAU-master\model_graph.py:43: calling reduce_max_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
    Instructions for updating:
    keep_dims is deprecated, use keepdims instead
    04-27 14:34:19 CRI: Load nothing. There is no model in path /data0/BP4D/output\models\snapshot_10.ckpt.
    E0427 14:34:19.535481 14412 logger.py:46] CRITICAL - CRI: Load nothing. There is no model in path /data0/BP4D/output\models\snapshot_10.ckpt.
    Traceback (most recent call last):
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
        return fn(*args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value resnet_v1_50/conv1/BatchNorm/moving_variance
             [[{{node resnet_v1_50/conv1/BatchNorm/moving_variance/read}}]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "demo.py", line 54, in <module>
        main()
      File "demo.py", line 32, in main
        test(int(args.epoch))
      File "demo.py", line 51, in test
        result = test_net(tester, data)
      File "demo.py", line 44, in test_net
        heatmap = tester.predict_one([imgs],batch_id)[0]
      File "D:\\Vice\FAU\FAU-master\lib\base.py", line 382, in predict_one
        res = self.sess.run([*self.graph_ops, *self.summary_dict.values()], feed_dict=feed_dict)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
        run_metadata_ptr)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
        run_metadata)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value resnet_v1_50/conv1/BatchNorm/moving_variance
             [[node resnet_v1_50/conv1/BatchNorm/moving_variance/read (defined at D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
    
    Original stack trace for 'resnet_v1_50/conv1/BatchNorm/moving_variance/read':
      File "demo.py", line 54, in <module>
        main()
      File "demo.py", line 32, in main
        test(int(args.epoch))
      File "demo.py", line 49, in test
        tester = Tester(Model_graph(), cfg)
      File "D:\\Vice\FAU\FAU-master\lib\base.py", line 309, in __init__
        super(Tester, self).__init__(net, cfg, data_iter, log_name='test_logs.txt')
      File "D:\\Vice\FAU\FAU-master\lib\base.py", line 112, in __init__
        self.build_graph()
      File "D:\\Vice\FAU\FAU-master\lib\base.py", line 129, in build_graph
        self.graph_ops = self._make_graph()
      File "D:\\Vice\FAU\FAU-master\lib\base.py", line 357, in _make_graph
        self.net.make_network(is_train=False)
      File "D:\\Vice\FAU\FAU-master\model_graph.py", line 134, in make_network
        resnet_fms = backbone(image, is_train, bn_trainable=True)
      File "D:\\Vice\FAU\FAU-master\lib\basemodel.py", line 57, in resnet50
        tf.concat(inp,axis=3), 64, 7, stride=2, scope='conv1')
      File "D:\\Vice\FAU\FAU-master\lib\resnet_utils.py", line 148, in conv2d_same
        scope=scope)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1159, in convolution2d
        conv_dims=2)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1066, in convolution
        outputs = normalizer_fn(outputs, **normalizer_params)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 650, in batch_norm
        outputs = layer.apply(inputs, training=is_training)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
        return func(*args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 1700, in apply
        return self.__call__(inputs, *args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\layers\base.py", line 548, in __call__
        outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 824, in __call__
        self._maybe_build(inputs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 2146, in _maybe_build
        self.build(input_shapes)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\keras\layers\normalization.py", line 411, in build
        experimental_autocast=False)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\layers\base.py", line 461, in add_weight
        **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 529, in add_weight
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 712, in _add_variable_with_custom_getter
        **kwargs_for_getter)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 1500, in get_variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 1243, in get_variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 550, in get_variable
        return custom_getter(**custom_getter_kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 1956, in wrapped_custom_getter
        return custom_getter(functools.partial(old_getter, getter), *args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1761, in layer_variable_getter
        return _model_variable_getter(getter, *args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1752, in _model_variable_getter
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 351, in model_variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 281, in variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1761, in layer_variable_getter
        return _model_variable_getter(getter, *args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\layers\python\layers\layers.py", line 1752, in _model_variable_getter
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 351, in model_variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
        return func(*args, **current_args)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\contrib\framework\python\ops\variables.py", line 281, in variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 519, in _true_getter
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 933, in _get_single_variable
        aggregation=aggregation)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 258, in __call__
        return cls._variable_v1_call(*args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 219, in _variable_v1_call
        shape=shape)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 197, in <lambda>
        previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 2519, in default_variable_creator
        shape=shape)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 262, in __call__
        return super(VariableMetaclass, cls).__call__(*args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 1688, in __init__
        shape=shape)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\variables.py", line 1872, in _init_from_args
        self._snapshot = array_ops.identity(self._variable, name="read")
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
        return target(*args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 203, in identity
        ret = gen_array_ops.identity(input, name=name)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 4238, in identity
        "Identity", input=input, name=name)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
        op_def=op_def)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
        attrs, op_def, compute_device)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
        op_def=op_def)
      File "D:\\Deep\Python3.7\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
    
    
    
    opened by antithing 4
  • Pretrained model

    Hi Fan,

    I was checking the fork of your repository (https://github.com/ZhiwenShao/FAU) and found that a pretrained model (invalid link, though) is included for the file "demo.py". I would like to ask if you will include the link to this model here?

    Thanks~

    opened by adaniefei 2
  • Asking about localisation of AUs

    Hello,

    There are a few questions I would like to ask about your project:

    How did you locate the position of each AU? I see in your source code https://github.com/EvelynFan/FAU/blob/master/datasets.py#L33 that there are 3 points for AU9, but Figure 2 in your paper https://arxiv.org/abs/2004.09681 shows only 2 central points.

    As indicated in your paper, DISFA is a heavily imbalanced dataset, so in the training phase do you apply some sort of balancing to the dataset?

    There are some frames in the database for which the facial landmarks are predicted incorrectly (e.g., subject SN029, frames 4330~4536), which makes the locations of the AU central points incorrect. What do you do with those frames?

    Thanks

    opened by glmanhtu 2
  • Error in AU6 location

    Hello, could you please guide me? I trained the model on the DISFA dataset, and the outputs for AU5 and AU6 are shown below. Could you please tell me what mistake causes the heatmaps of AU6 to appear in AU5?

    AU5:

    AU6:

    Do I create the JSON file like below, or do I just have to put two coordinates for each AU?

    AUs = ['AU01_1', 'AU01_2', 'AU02_1', 'AU02_2','AU04_1', 'AU05_1', 'AU05_2','AU06_1', 'AU06_2',
    'AU09_1', 'AU09_2','AU09_3','AU12_1','AU12_2','AU15_1','AU15_2','AU17_1','AU17_2',
    'AU20_1','AU20_1','AU25_1','AU25_2','AU26_1','AU26_2']

    Thanks

    opened by Mahdidrm 1
  • AttributeError: 'Config' object has no attribute 'demo'

    Hello. Could you please tell me why I am facing this error?

    Process _Worker-8:
      File "/soft/anaconda3/envs/Py37/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
        self.run()
      File "/FAU/lib/data_provider.py", line 207, in run
        dp = self.map_func(dp)
      File "/FAU/functions.py", line 27, in generate_batch
        if cfg.demo:
    AttributeError: 'Config' object has no attribute 'demo'
    

    In config.py, you only have:

    def set_demo(self, demo=False):
        self.demo = demo
    

    Should I add a line to config.py as well? Thanks

    opened by Mahdidrm 1
  • AUs involved in training

    Hello. In datasets.py for the DISFA dataset, you used these action units:

    AUs = ['AU01_1', 'AU01_2', 'AU02_1', 'AU02_2','AU04_1', 'AU05_1', 'AU05_2','AU06_1', 'AU06_2',\
    'AU09_1', 'AU09_2','AU09_3','AU12_1','AU12_2','AU15_1','AU15_2','AU17_1','AU17_2',\
    'AU20_1','AU20_1','AU25_1','AU25_2','AU26_1','AU26_2']
    

    Can you tell me why you used AU4 once and AU9 three times?

    In addition, could you please tell me which machine configuration you used for the training step, and how long it took? I am using a machine with an 18-core Intel(R) Xeon(R) Gold 5220 CPU and an NVIDIA TU102GL [Quadro RTX 6000/8000] graphics card, but it has been training for 24 hours with no response.

    Can you give me more information about your code? My article deadline is very near, and your article and code are part of my research. I would be thankful.

    Best Regards

    opened by Mahdidrm 1
  •  Protocol not supported

    Hello.

    I made a JSON file of AUs and coordinates, but when I run the code I get this error:

    ZMQError in socket.bind(). Perhaps you're using pipes on a non-local file system. See documentation of PrefetchDataZMQ for more information.
    Traceback (most recent call last):
     File "train.py", line 20, in <module>
       main()
     File "train.py", line 16, in main
       trainer = Trainer(Model_graph(), cfg)
     File "FAU\lib\base.py", line 179, in __init__
       self._data_iter, self.itr_per_epoch = self._make_data()
     File "FAU\lib\base.py", line 193, in _make_data
       data_load_thread.reset_state()
     File "FAU\lib\data_provider.py", line 371, in reset_state
       self.ds.reset_state()
     File "FAU\lib\data_provider.py", line 266, in reset_state
       self._reset_once()  # build processes
     File "FAU\lib\data_provider.py", line 236, in _reset_once
       self.socket.bind(pipename)
     File "zmq/backend/cython/socket.pyx", line 550, in zmq.backend.cython.socket.Socket.bind
     File "zmq/backend/cython/checkrc.pxd", line 26, in zmq.backend.cython.checkrc._check_rc
       raise ZMQError(errno)
    zmq.error.ZMQError: Protocol not supported
    MultiProcessMapDataZMQ successfully cleaned-up.
    MultiProcessMapDataZMQ successfully cleaned-up.
    

    Note that all the addresses in config.py are correct. Can you guide me please?

    Regards

    opened by Mahdidrm 1
  • Preprocessing

    Hi,

    I was wondering exactly how you pre-processed the images. I am training the model with the BP4D dataset, and I am not achieving the same results as in the paper (the ICC is about 0.1 lower). I used dlib to estimate the 68 facial landmarks and then resized/cropped the images to 256x256. I also switched k = 7 to k = 5 in the model_graph.py file. I think the issue is with pre-processing, since I am not matching the AU centers from the BP4D training examples provided. However, I think my calculations of the AU centers from the facial landmarks are correct.

    Thanks

    opened by deeyeet 1
  • pretrained model

    No description provided.

    Thanks for your interest. The Pytorch model will be released at https://github.com/EvelynFan/Pytorch-FAU.

    Originally posted by @EvelynFan in https://github.com/EvelynFan/FAU/issues/9#issuecomment-741532372

    opened by liuyvchi 0
  • Central AU Location

    Hi, I was wondering which landmarks were used to calculate the central AU locations for BP4D. I saw Figure 2 in the paper as an example, but I am uncertain how the central locations are calculated.

    opened by deeyeet 0
  • 18 subjects for training, specifically?

    In the paper, I notice that 18 subjects are used for training and 9 subjects for testing, but could you tell us specifically which ones are used for training and which for testing?

    opened by Timber1018 0
  • Json file

    Hello, and thank you very much for sharing this project. I have the DISFA dataset, which means I have the landmark positions and AU intensities. I also have the videos and images of the DISFA dataset, but I don't have any JSON files. You noted that we should make them by hand, but with 27 subjects, each with 4845 frames and 66 landmarks, as well as 27 AU files with 12 AUs each, how do I find the coordinates of the AUs?

    Can you guide me please?

    Regards and thanks again

    opened by Mahdidrm 5
  • An Error About ZMQ

    Hi, after I run train.py, I get an error like this that tells me I'm missing files. However, I have not used TensorFlow or ZMQ, so I would like to ask how I can solve such errors.

      File "/TensorFlow-FAU-master/lib/data_provider.py", line 236, in _reset_once
        self.socket.bind(pipename)
      File "zmq/backend/cython/socket.pyx", line 547, in zmq.backend.cython.socket.Socket.bind
    zmq.error.ZMQError: No such file or directory for ipc path "@dataflow-map-pipe-f0dc7e8c".

    Look forward to your reply.

    opened by wmdydxr 4
Owner
Evelyn