BABEL: Bodies, Action and Behavior with English Labels [CVPR 2021]

Abhinanda R. Punnakkal*, Arjun Chandrasekaran*, Nikos Athanasiou, Alejandra Quiros-Ramirez, Michael J. Black. * denotes equal contribution

Project Website | Paper | Video | Poster


BABEL is a large dataset of language labels describing the actions performed in mocap sequences. It labels about 43 hours of mocap sequences from AMASS [1] with action labels at two levels of abstraction:

  • Sequence labels which describe the overall action in the sequence
  • Frame labels which describe all actions in every frame of the sequence. Each frame label is precisely aligned with the duration of the corresponding action in the mocap sequence, and multiple actions can overlap.

To download the BABEL action labels, visit our 'Data' page. You can download the mocap sequences from AMASS.
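
As a minimal loading sketch, assuming the downloaded release provides per-split JSON files that map sequence IDs to annotation dicts with the keys shown in the example dump in the Comments below (dur, feat_p, frame_ann, seq_ann); the path here is an assumption, so adjust it to your download:

```python
import json

# Path and split name are assumptions; point this at your BABEL download.
with open('babel_v1.0_release/train.json') as f:
    train = json.load(f)

for sid, ann in list(train.items())[:3]:
    print(sid, ann['feat_p'], 'duration (s):', ann['dur'])
    if ann['frame_ann'] is not None:
        # Frame labels are aligned to the mocap: start_t / end_t are seconds.
        for seg in ann['frame_ann']['labels']:
            print('  ', seg['raw_label'], seg['start_t'], '->', seg['end_t'])
    else:
        # Some sequences carry only a sequence-level label.
        for seg in ann['seq_ann']['labels']:
            print('   (sequence-level)', seg['raw_label'])
```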

Tutorials

We release some helper code in Jupyter notebooks to load the BABEL dataset, visualize mocap sequences and their action labels, search BABEL for sequences containing specific actions, etc.

See notebooks/ for more details.
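
As a flavor of the search functionality, here is a small sketch (not the notebook code itself) that reuses the train dict from the loading sketch above and matches on the act_cat field:

```python
def find_sequences(babel_split, action_cat):
    """Return IDs of sequences whose frame-level (or, failing that,
    sequence-level) labels include the given action category."""
    matches = []
    for sid, ann in babel_split.items():
        ann_src = ann['frame_ann'] or ann['seq_ann']
        for seg in ann_src['labels']:
            if seg['act_cat'] and action_cat in seg['act_cat']:
                matches.append(sid)
                break
    return matches

walk_ids = find_sequences(train, 'walk')
print(len(walk_ids), 'sequences contain a "walk" segment')
```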

Action Recognition

We provide features, training and inference code, and pre-trained checkpoints for 3D skeleton-based action recognition.

Please see action_recognition/ for more details.

Acknowledgements

The notebooks in this repo are inspired by those provided by AMASS. The Action Recognition code is based on the 2s-AGCN implementation.

References

[1] Mahmood, Naureen, et al. "AMASS: Archive of motion capture as surface shapes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the AMASS dataset and software (the "Model & Software"). By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Contact

The code in this repository is developed by Abhinanda Punnakkal and Arjun Chandrasekaran.

If you have any questions, you can contact us at [email protected].

Comments
  • Multi-label action recognition?

    Hi, thanks for releasing the code and the data; this is super interesting and useful for the community. I have a question regarding the action recognition experiments. A high number of action segments contain multiple labels, so casting the problem as plain single-label classification seems odd to me. If I understand correctly, an action segment that has two associated actions will be represented twice in the training set: once associated with the first action and once with the other. Am I right? Top-1 and top-5 accuracy do not really address this multi-label classification problem. Did you try training with a binary cross-entropy loss and reporting metrics such as precision, recall, and average precision? Thanks a lot for your explanations.
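
    For concreteness, the multi-label alternative suggested here could look like the following minimal PyTorch sketch (an illustration, not the repository's training code): each segment gets a multi-hot target vector, and binary cross-entropy scores every category independently.

    ```python
    import torch
    import torch.nn as nn

    num_classes = 60                                           # e.g. BABEL-60 categories
    logits = torch.randn(8, num_classes, requires_grad=True)   # stand-in model outputs
    targets = torch.zeros(8, num_classes)
    targets[0, [3, 17]] = 1.0   # one segment supervising two categories at once

    # BCE treats every category as an independent yes/no decision, so a segment
    # with several act_cat entries needs no duplication in the training set.
    loss = nn.BCEWithLogitsLoss()(logits, targets)
    loss.backward()
    print(loss.item())
    ```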

    opened by fabienbaradel 3
  • Action Recognition - PreProcess

    Hi,

    Can you explain the pre-processing step in a bit more detail? Did you use the 3D SMPL joints (24 joints?) for the pre-processing?

    None of the joint indexes proposed in the 2s-AGCN paper match the SMPL joint indexes. [Joint-index figure from the paper: https://arxiv.org/pdf/1805.07694.pdf]

    [Figure: SMPL joint indexes]

    Could you give more details? Thanks!

    opened by BrianG13 1
  • Count of action segments doesn't match the bar graph provided on the website

    We used the BABEL json files and counted the action segments across all of them (i.e. train, val, test, extra_train, extra_val), but the distribution we got does not match the website plot provided at https://babel.is.tue.mpg.de/media/upload/stats/h_action_category_counts.svg

    NOTE: We used the frame_ann act_cat to count the action segments, and if it was None we used the seq_ann act_cat.

    NOTE: As per the table at https://babel.is.tue.mpg.de/actionsrawlabels.html there should be 271 action categories, but we were only getting 251 unique action categories.

    The counts we got are as follows. Unique action categories: 251. Count of action segments per category: {'endure': 1, 'lead': 1, 'bartender behavior series': 1, 'admire': 1, 'slash gesture series': 1, 'plant feet': 1, 'try': 1, 'vomit': 1, 'dive': 1, 'cower': 1, 'read': 1, 'agressive actions series': 1, 'cradle': 1, 'blow': 1, 'puppeteer': 1, 'shout': 2, 'curtsy': 2, 'navigate': 2, 'glide': 2, 'reveal': 2, 'smell': 2, 'plead': 2, 'stick': 2, 'backwards': 2, 'count': 2, 'feet movements': 2, 'fly': 2, 'weave': 2, 'come': 2, 'follow': 2, 'fill': 2, 'charge': 3, 'pat': 3, 'moonwalk': 3, 'uncross': 3, 'fidget': 3, 'lick': 3, 'taunt': 3, 'hiccup': 3, 'leave': 3, 'sleep': 3, 'fire gun': 4, 'headstand': 4, 'cut': 4, 'excite': 4, 'disagree': 5, 'celebrate': 5, 'gain': 5, 'despair': 5, 'skate': 5, 'laugh': 5, 'sneeze': 5, 'tentative movements': 5, 'zip/unzip': 5, 'chop': 6, 'conduct': 6, 'shave': 7, 'steady': 7, 'operate interface': 7, 'ride': 7, 'rolls on ground': 7, 'remove': 8, 'cough': 8, 'hang': 8, 'juggle': 8, 'worry': 8, 'flail arms': 8, 'aim': 9, 'yawn': 9, 'chicken dance': 9, 'fish': 9, 'pose': 9, 'dip': 9, 'prepare': 9, 'rocking movement': 10, 'write': 11, 'release': 11, 'unknown': 11, 'inward motion': 11, 'listen': 11, 'drive': 11, 'close something': 11, 'hurry': 11, 'clasp hands': 12, 'think': 13, 'hug': 13, 'cry': 13, 'handstand': 13, 'wobble': 14, 'strafe': 14, 'shivering': 15, 'sign': 15, 'bump': 15, 'zombie': 16, 'drunken behavior': 16, 'interact with rope': 17, 'pray': 17, 'stroke': 17, 'tiptoe': 17, 'swipe': 18, 'fall': 18, 'tie': 18, 'stomp': 19, 'noisy labels': 19, 'waddle': 19, 'draw': 20, 'get injured': 20, 'eat': 21, 'fight': 21, 'wash': 21, 'wiggle': 23, 'style hair': 24, 'check': 24, 'search': 24, 'flip': 26, 'march': 26, 'list body parts': 26, 'mime': 29, 'leap': 29, 'duck': 29, 'move misc. body part': 32, 'relax': 32, 'misc. action': 32, 'shrug': 33, 'slide': 34, 'salute': 35, 'stagger': 38, 'play': 38, 'sudden movement': 39, 'to lower a body part': 40, 'wait': 40, 'open something': 41, 'flap': 41, 'dribble': 42, 'protect': 43, 'sway': 43, 'lunge': 44, 'limp': 45, 'give something': 45, 'cartwheel': 46, 'rub': 46, 'shuffle': 46, 'misc. abstract action': 46, 'golf': 47, 'sneak': 48, 'mix': 49, 'misc. activities': 52, 'side to side movement': 52, 'wrist movements': 53, 'trip': 54, 'stances': 55, 'skip': 56, 'press something': 57, 'jump rope': 60, 'communicate (vocalise)': 62, 'spread': 63, 'swim': 63, 'lie': 65, 'drink': 68, 'point': 72, 'tap': 73, 'yoga': 74, 'jumping jacks': 84, 'shoulder movements': 84, 'evade': 85, 'rolling movement': 86, 'play catch': 87, 'support': 93, 'touch ground': 94, 'telephone call': 96, 'grab person': 97, 'stop': 97, 'move back to original position': 104, 'twist': 107, 'adjust': 109, 'play instrument': 109, 'animal behavior': 112, 'bow': 113, 'upper body movements': 125, 'hit': 135, 'grab body part': 150, 'crawl': 150, 'knock': 151, 'crouch': 153, 'kneel': 162, 'martial art': 162, 'face direction': 165, 'spin': 165, 'balance': 187, 'crossing limbs': 192, 'shake': 192, 'clap': 196, 'punch': 226, 'hop': 228, 'clean something': 243, 'stumble': 245, 'squat': 248, 'scratch': 250, 'play sport': 261, 'touching face': 264, 'poses': 266, 'catch': 270, 'sports move': 287, 'lean': 297, 'swing body part': 312, 'perform': 321, 'knee movement': 324, 'waist movements': 326, 'lift something': 343, 'move something': 364, 'grasp object': 398, 'greet': 411, 'exercise/training': 418, 'wave': 468, 'head movements': 479, 'dance': 499, 'a pose': 526, 'lowering body part': 559, 'action with ball': 564, 'stand up': 566, 'foot movements': 609, 'jog': 636, 'gesture': 648, 'sideways movement': 665, 'bend': 683, 'move up/down incline': 702, 'place something': 754, 'kick': 756, 'take/pick something up': 768, 'throw': 822, 'touching body part': 845, 'leg movements': 876, 'sit': 886, 'forward movement': 915, 'touch object': 947, 'stretch': 1009, 'circular movement': 1033, 'run': 1094, 'look': 1122, 'jump': 1124, 'raising body part': 1533, 'backwards movement': 1685, 'step': 1802, 't pose': 2299, 'interact with/use object': 2618, 'arm movements': 2676, 'turn': 2992, 'hand movements': 3648, 'stand': 5936, 'walk': 8487, 'transition': 14628}

    We also tried the same without the extra_train and extra_val json files (i.e. using only the train, val, and test json files of BABEL), but again the distribution does not match the plot provided on the website. With this setup we got 243 unique action categories. Count of action segments per category: {'endure': 1, 'lead': 1, 'curtsy': 1, 'bartender behavior series': 1, 'reveal': 1, 'admire': 1, 'slash gesture series': 1, 'plant feet': 1, 'plead': 1, 'excite': 1, 'try': 1, 'conduct': 1, 'taunt': 1, 'vomit': 1, 'dive': 1, 'cower': 1, 'read': 1, 'leave': 1, 'weave': 1, 'shout': 2, 'charge': 2, 'disagree': 2, 'navigate': 2, 'glide': 2, 'laugh': 2, 'smell': 2, 'stick': 2, 'backwards': 2, 'moonwalk': 2, 'fidget': 2, 'noisy labels': 2, 'zip/unzip': 2, 'list body parts': 2, 'count': 2, 'rolls on ground': 2, 'feet movements': 2, 'fly': 2, 'pat': 3, 'tentative movements': 3, 'uncross': 3, 'lick': 3, 'hiccup': 3, 'fire gun': 4, 'remove': 4, 'headstand': 4, 'celebrate': 4, 'skate': 4, 'cut': 4, 'sneeze': 4, 'worry': 4, 'chop': 4, 'hang': 5, 'gain': 5, 'shave': 5, 'despair': 5, 'steady': 5, 'ride': 5, 'write': 6, 'cough': 6, 'sign': 6, 'juggle': 6, 'unknown': 6, 'fish': 6, 'think': 7, 'fight': 7, 'operate interface': 7, 'drive': 7, 'tiptoe': 7, 'prepare': 7, 'cry': 8, 'aim': 8, 'play': 8, 'chicken dance': 8, 'flail arms': 8, 'close something': 8, 'handstand': 8, 'yawn': 9, 'strafe': 9, 'listen': 9, 'drunken behavior': 9, 'pose': 9, 'dip': 9, 'rocking movement': 10, 'wobble': 10, 'release': 10, 'wash': 10, 'hurry': 10, 'waddle': 10, 'inward motion': 11, 'clasp hands': 11, 'shivering': 12, 'bump': 12, 'search': 12, 'hug': 13, 'fall': 13, 'give something': 13, 'zombie': 13, 'stroke': 13, 'wiggle': 14, 'stomp': 14, 'check': 14, 'mime': 15, 'eat': 15, 'pray': 15, 'draw': 16, 'interact with rope': 16, 'swipe': 17, 'tie': 17, 'style hair': 18, 'relax': 19, 'flip': 20, 'get injured': 20, 'stagger': 20, 'march': 21, 'yoga': 23, 'move misc. body part': 24, 'leap': 24, 'shrug': 25, 'open something': 26, 'sudden movement': 26, 'duck': 26, 'golf': 26, 'misc. action': 26, 'trip': 27, 'salute': 28, 'misc. abstract action': 29, 'slide': 29, 'to lower a body part': 31, 'limp': 31, 'flap': 32, 'communicate (vocalise)': 33, 'shuffle': 33, 'misc. activities': 34, 'dribble': 34, 'wait': 34, 'lunge': 35, 'mix': 35, 'cartwheel': 36, 'rub': 37, 'protect': 38, 'sneak': 38, 'sway': 40, 'swim': 41, 'jumping jacks': 43, 'side to side movement': 49, 'jump rope': 49, 'stances': 49, 'drink': 50, 'point': 50, 'skip': 50, 'support': 50, 'wrist movements': 51, 'press something': 52, 'play catch': 54, 'lie': 59, 'spread': 63, 'stop': 63, 'evade': 64, 'telephone call': 69, 'tap': 69, 'play instrument': 72, 'grab person': 73, 'rolling movement': 79, 'twist': 79, 'shoulder movements': 80, 'touch ground': 85, 'bow': 94, 'adjust': 95, 'crawl': 96, 'animal behavior': 99, 'move back to original position': 103, 'knock': 110, 'hit': 114, 'upper body movements': 118, 'crouch': 121, 'grab body part': 125, 'clap': 127, 'kneel': 127, 'martial art': 135, 'spin': 136, 'balance': 142, 'perform': 144, 'crossing limbs': 149, 'hop': 156, 'face direction': 162, 'play sport': 166, 'stumble': 167, 'shake': 167, 'squat': 186, 'scratch': 194, 'punch': 197, 'clean something': 209, 'sports move': 224, 'exercise/training': 230, 'poses': 241, 'greet': 249, 'catch': 252, 'touching face': 253, 'swing body part': 261, 'lean': 266, 'dance': 278, 'lift something': 280, 'knee movement': 290, 'move something': 294, 'waist movements': 296, 'wave': 310, 'grasp object': 352, 'jog': 356, 'action with ball': 448, 'head movements': 458, 'gesture': 475, 'kick': 491, 'move up/down incline': 516, 'a pose': 519, 'lowering body part': 528, 'stand up': 528, 'sideways movement': 547, 'throw': 588, 'foot movements': 588, 'bend': 629, 'run': 655, 'place something': 671, 'take/pick something up': 673, 'sit': 693, 'touching body part': 727, 'jump': 734, 'forward movement': 801, 'stretch': 815, 'leg movements': 835, 'circular movement': 860, 'touch object': 875, 'look': 983, 'backwards movement': 1434, 'raising body part': 1445, 'step': 1555, 't pose': 2156, 'interact with/use object': 2363, 'arm movements': 2444, 'turn': 2745, 'hand movements': 3456, 'stand': 5808, 'walk': 6262, 'transition': 14446}
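
    For reference, the counting procedure described above can be reproduced with a short sketch; the split file names are assumptions, so adjust them to your copy of the release:

    ```python
    import json
    from collections import Counter

    counts = Counter()
    for split in ('train.json', 'val.json', 'test.json',
                  'extra_train.json', 'extra_val.json'):
        with open(split) as f:
            data = json.load(f)
        for ann in data.values():
            # Prefer frame-level categories; fall back to the sequence label.
            src = ann['frame_ann'] if ann['frame_ann'] is not None else ann['seq_ann']
            for seg in src['labels']:
                for cat in (seg['act_cat'] or []):
                    counts[cat] += 1

    print(len(counts), 'unique action categories')
    print(counts.most_common(5))
    ```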

    opened by chinnusai25 1
  • About 5-second motions with repeated sub-actions.

    I am one of the people interested in this dataset. To participate in the action recognition challenge, I downloaded the pre-processed data and tried to plot it.

    But when I plot one of the motions in the npy file, I observe that several 5-second motions just repeat the same sub-action segment.

    For example, I plotted npy1[17].transpose(1, 2, 0, 3)[:, :, :, 0], where npy1 = np.load('./train_ntu_sk_60.npy').

    The plotted result shows the human just moving their hands in the same way for 5 seconds.

    But in the paper, it says: "If the Kth chunk X_K^s has duration < 5 sec., we repeat X_K^s, and truncate at 5 sec."

    To me this looks like a big gap between the code and the paper. Could you explain why this happens in the preprocessed dataset?

    Also, could you explain what "chunk_id" is? I cannot find more information about it. It seems to be the index of the chunk when dividing the whole sequence into 5-second segments, but I am still not sure.
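
    For reference, the repeat-and-truncate rule quoted above can be written as follows (an illustration, not the repository's preprocessing code; the frame rate is an assumed value). A chunk shorter than the 5-second window is tiled along time and then cut, which matches the visual repetition described in this question:

    ```python
    import numpy as np

    def pad_chunk(chunk, target_len):
        """Tile a (T, ...) chunk along time until it covers target_len frames,
        then truncate: the "repeat and truncate at 5 sec." rule quoted above."""
        reps = int(np.ceil(target_len / chunk.shape[0]))
        return np.tile(chunk, (reps,) + (1,) * (chunk.ndim - 1))[:target_len]

    fps = 30                                  # assumed frame rate: 5 s -> 150 frames
    short = np.random.randn(40, 25, 3)        # a 40-frame chunk, 25 joints, xyz
    print(pad_chunk(short, 5 * fps).shape)    # (150, 25, 3): the 40 frames repeat
    ```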

    opened by cotton-ahn 1
  • 'frame_ann' is None

    Hi guys!

    Thanks for sharing your work, it's really helpful and interesting!

    I have a question: while iterating through the train and val label files, I found various samples where the value of the key 'frame_ann' is None. Does that mean that for these sequences there are no frame-level annotations (only a sequence-level one)?

    Just checking that I am not missing anything important and that this is expected.

    Examples of sequences with a None value at 'frame_ann':

    | sequence_id | file_name ['feat_p'] |
    | ----------- | ----------- |
    | '6912' | 'KIT/KIT/4/WalkingStraightForward06_poses.npz' |
    | '3671' | 'KIT/KIT/3/jump_forward02_poses.npz' |
    | '3965' | 'BMLrub/BioMotionLab_NTroje/rub014/0004_treadmill_jog_poses.npz' |

    Thanks!
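
    For anyone hitting the same question, a one-off check is easy to write; the file path here is an assumption:

    ```python
    import json

    with open('babel_v1.0_release/train.json') as f:   # path is an assumption
        train = json.load(f)

    seq_only = [sid for sid, ann in train.items() if ann['frame_ann'] is None]
    print(f'{len(seq_only)} of {len(train)} sequences have only sequence-level labels')
    ```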

    opened by BrianG13 0
  • Access to AMASS SMPL parameters

    Hi, first I want to thank you guys for releasing the code + data; this is not a given at all, and I appreciate the effort. I am sure it will be useful. Looking at the uploaded example notebooks, a few examples seem to be missing:

    • How can I access the SMPL parameters of the actions? (How does the mapping to the AMASS dataset work? What should I download from their site?)
    • Are there frame annotations for the actions? In the examples I only see start_t and end_t. Is there a start_frame anywhere?

    Thanks for your help!
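
    For readers with the same question, one plausible recipe, sketched under the assumption that feat_p is a relative path inside your local AMASS download: the AMASS .npz files carry poses, trans, and mocap_framerate, so start_t/end_t (which BABEL stores in seconds) can be converted to frame indices yourself.

    ```python
    import numpy as np

    AMASS_ROOT = 'amass_data/'   # wherever you unpacked the AMASS archives (assumption)
    feat_p = 'KIT/KIT/4/WalkingStraightForward06_poses.npz'   # 'feat_p' from a BABEL annotation

    seq = np.load(AMASS_ROOT + feat_p)
    poses, trans = seq['poses'], seq['trans']   # SMPL-H pose parameters and root translation
    fps = float(seq['mocap_framerate'])

    # BABEL segment boundaries are in seconds; derive frame indices like this:
    start_t, end_t = 2.44, 4.023                # illustrative values
    start_frame, end_frame = int(start_t * fps), int(end_t * fps)
    print(poses[start_frame:end_frame].shape)
    ```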

    opened by BrianG13 0
  • Generate Dataset

    Hello, could you explain how you generate the BABEL-60 dataset? Here is an example of one of your annotations.

    We are visualizing annotations for seq ID 5788 in "train.json":

    {'babel_sid': 5788, 'dur': 6.77, 'feat_p': 'BMLrub/BioMotionLab_NTroje/rub055/0020_lifting_heavy2_poses.npz', 'frame_ann': {'anntr_id': '6e0e9098-e2a1-4019-a59b-ba1711d82c07', 'babel_lid': '450b32c4-4d51-45d4-bd4e-2884e2a626cb', 'labels': [{'act_cat': ['place something'], 'end_t': 4.023, 'proc_label': 'place', 'raw_label': 'place', 'seg_id': '0e4b04d8-9be4-4863-997b-34d40fda345f', 'start_t': 2.44}, {'act_cat': ['turn'], 'end_t': 5.127, 'proc_label': 'turn', 'raw_label': 'turn', 'seg_id': '168cb328-35c9-4501-b914-e474af7ab329', 'start_t': 4.19}, {'act_cat': ['walk'], 'end_t': 1.523, 'proc_label': 'walk', 'raw_label': 'walk', 'seg_id': '5866b366-0386-42d5-aa1a-846f47ac2372', 'start_t': 0}, {'act_cat': ['walk'], 'end_t': 6.767, 'proc_label': 'walk', 'raw_label': 'walk', 'seg_id': '48334837-6265-458d-a92e-3a00931e9450', 'start_t': 5.127}, {'act_cat': ['take/pick something up'], 'end_t': 2.44, 'proc_label': 'take', 'raw_label': 'take', 'seg_id': '247a098f-bcca-4a6b-8fc4-50508f96a55d', 'start_t': 1.752}, {'act_cat': ['transition'], 'end_t': 1.752, 'proc_label': 'transition', 'raw_label': 'transition', 'seg_id': 'fa15e7de-1966-4d80-af7a-3968b5f510c2', 'start_t': 1.523}, {'act_cat': ['transition'], 'end_t': 4.19, 'proc_label': 'transition', 'raw_label': 'transition', 'seg_id': '85752b5d-1df9-4b1a-b234-ac1a2029b992', 'start_t': 4.023}], 'mul_act': True}, 'seq_ann': {'anntr_id': '9872fc75-d3a1-4335-9c9f-c64810f48c4d', 'babel_lid': '06225ed2-2f81-4057-951b-b41c736021d3', 'labels': [{'act_cat': ['walk'], 'proc_label': 'walk', 'raw_label': 'walk', 'seg_id': '02cb19b9-1b85-4e48-a6c9-00b10d129904'}, {'act_cat': ['fill', 'lift something'], 'proc_label': 'scoop', 'raw_label': 'scoop', 'seg_id': 'ff8ee348-9917-49fb-b995-69577712dbaa'}, {'act_cat': ['place something'], 'proc_label': 'place', 'raw_label': 'place', 'seg_id': '6d5e7ca4-22a3-45d9-b6a0-1979f9de98c3'}, {'act_cat': ['turn'], 'proc_label': 'turn', 'raw_label': 'turn', 'seg_id': 'fbdcd31e-4178-401b-ab61-d4acaed80353'}, {'act_cat': ['walk'], 'proc_label': 'walk', 'raw_label': 'walk', 'seg_id': '0a45afa6-46ae-4023-80ce-acf7c74f4207'}], 'mul_act': True}, 'url': 'https://babel-renders.s3.eu-central-1.amazonaws.com/005788.mp4'}

    I want to know how you deal with this file. If you use frame_ann, then many labels have a duration (end_t - start_t) of less than 5 seconds. If you use the sequence label, you treat the whole sequence as performing one action, but then there can be several "act_cat" entries, such as "walk", "fill", and "lift something". How do you choose one "act_cat" as the motion label?

    opened by Jaceyxy 2