Motion and Shape Capture from Sparse Markers

Related tags: Deep Learning, moshpp

Overview

MoSh++

This repository contains the official chumpy implementation of the mocap body solver used for AMASS:

AMASS: Archive of Motion Capture as Surface Shapes
Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, Michael J. Black
Full paper | Video | Project website | Poster

Description

This repository holds the code for MoSh++, introduced in AMASS, ICCV'19. MoSh++ is the upgraded version of MoSh, SIGGRAPH Asia'14. Given a labeled marker-based motion capture (mocap) c3d file and the correspondences of the marker labels to locations on the body, MoSh++ returns model parameters for every frame of the mocap sequence. The current MoSh++ code works with multiple surface models, including SMPL, SMPL-X, and SMAL.
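
To illustrate the expected input, here is a minimal sketch that lists the marker labels and the per-frame 3D marker positions stored in a labeled c3d file. It uses the third-party ezc3d package purely for illustration; ezc3d and the file name below are assumptions, not dependencies or data of this repository.

import ezc3d  # third-party c3d reader, used here only for illustration

c3d = ezc3d.c3d('subject_trial.c3d')  # hypothetical labeled mocap file
labels = c3d['parameters']['POINT']['LABELS']['value']  # marker label names
points = c3d['data']['points']  # array of shape (4, n_markers, n_frames)

print(f'{len(labels)} markers over {points.shape[2]} frames')
print('first marker:', labels[0], 'position at frame 0:', points[:3, 0, 0])

MoSh++ additionally needs the correspondence of these marker labels to locations on the body surface (a marker layout); see the SOMA tutorials referenced below for how that correspondence is prepared.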

Installation

The current repository requires Python 3.7 and chumpy, a CPU-based auto-differentiation package. This package is meant to be used along with SOMA, the mocap auto-labeling package, so please install MoSh++ inside the conda environment of SOMA. Clone the moshpp repository, and run the following from the root directory:

sudo apt install libeigen3-dev
sudo apt install libtbb-dev

pip install -r requirements.txt

cd src/moshpp/scan2mesh/mesh_distance
make

cd ../../../..
python setup.py install
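
After the build, a quick sanity check is to confirm that the core packages import inside the SOMA conda environment. This is only an illustrative sketch, not an official test shipped with the repository:

# Minimal import check; run it inside the SOMA conda environment.
import chumpy
import moshpp

print('chumpy loaded from:', chumpy.__file__)
print('moshpp loaded from:', moshpp.__file__)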

Tutorials

This repository is a complementary package to SOMA, an automatic mocap solver. Please refer to the SOMA repository for tutorials and use cases.

Citation

Please cite the following paper if you use this code directly or indirectly in your research/projects:

@inproceedings{AMASS:2019,
  title = {AMASS: Archive of Motion Capture as Surface Shapes},
  author = {Mahmood, Naureen and Ghorbani, Nima and Troje, Nikolaus F. and Pons-Moll, Gerard and Black, Michael J.},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year = {2019},
  month = {Oct},
  url = {https://amass.is.tue.mpg.de},
  month_numeric = {10}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the MoSh++ data and software (the "Data & Software"), including the software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of this repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

The software is compiled using CGAL sources following the license in CGAL_LICENSE.pdf.

Contact

The code in this repository was developed by Nima Ghorbani while at the Max Planck Institute for Intelligent Systems, Tübingen, Germany.

If you have any questions you can contact us at [email protected].

For commercial licensing, contact [email protected]

Comments
  • From human keypoint to SMPL parameters?

    Hi,

    Great work! I'm wondering if it is possible to reconstruct SMPL model parameters when the human keypoints aren't in the same format as the default ones. For example, I'm using an in-house dataset with keypoint coordinates obtained from a Kinect V2, which outputs 25 keypoint coordinates in 3D (https://lisajamhoury.medium.com/understanding-kinect-v2-joints-and-coordinate-system-4f4b90b9df16). How do I specify the correspondence from my keypoints to the mesh? Does this code include any functions for that?

    Appreciate it!

    opened by SizheAn 3
  • Convert c3d to smpl

    Hi,

    Thank you for making this repository. Can you please guide me on how I can convert a c3d file to an SMPL file using this repository? I am unable to find any relevant tutorial.

    Thank you

    opened by usamatariq70 1
  • lack of pose_body_prior.pkl for smal

    Hi, would you mind providing the file pose_body_prior.pkl for the SMAL model? The results I generate now are almost correct except for the body pose (limb pose), when the model_type is smal and there is no pose_body_prior.pkl for it.

    As the attached figures show, the shape and the orientation are alright, but the limb poses are not as expected: they stay the same even though they have in fact changed in the left image.

    opened by minushuang 1
  • change the surface_model type from smplx to smpl

    The code works normally when the model type is smplx. Now I am trying to fit SMPL params instead of SMPL-X, following the smplx moshpp process. First I downloaded the basic SMPL models and put them under the support_files/smpl directory, then changed surface_model.type from smplx to smpl and added an opt_weights.smpl config, which is a copy of the smplx one, as below.

    opt_weights:
      smpl:
        stagei_wt_poseH: 3.0
        stagei_wt_poseF: 3.
        stagei_wt_expr: 34.
        stagei_wt_pose: 3.
        stagei_wt_poseB: 3.
        stagei_wt_init_finger_left: 400.0
        stagei_wt_init_finger_right: 400.0
        stagei_wt_init_finger: 400.0
        stagei_wt_betas: 10.
        stagei_wt_init: 300
        stagei_wt_data: 75.
        stagei_wt_surf: 10000.
        stagei_wt_annealing: [ 1., .5, .25, .125 ]
        stageii_wt_data: 400
        stageii_wt_velo: 2.5
        stageii_wt_dmpl: 1.0
        stageii_wt_expr: 1.0
        stageii_wt_poseB: 1.6
        stageii_wt_poseH: 1.0
        stageii_wt_poseF: 1.0
        stageii_wt_annealing: 2.5
    

    The error when I run the mosh job (task):

    2022-07-07 08:11:49.227 | INFO     | soma.tools.parallel_tools:run_parallel_jobs:54 - #Job(s) submitted: 83
    2022-07-07 08:11:49.227 | INFO     | soma.tools.parallel_tools:run_parallel_jobs:67 - Will run the jobs in random order.
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:__init__:95 -- mocap_fname: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE//training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/custom_low_head2_yz_swap.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:__init__:97 -- stagei_fname: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE//training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/mosh_results_tracklet/SOMA_unlabeled_mpc/soma_subject1/male_stagei.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:__init__:98 -- stageii_fname: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE//training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/mosh_results_tracklet/SOMA_unlabeled_mpc/soma_subject1/custom_low_head2_yz_swap_stageii.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:__init__:103 -- surface_model: type: smpl; gender: male; fname:/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/support_files/smpl/male/model.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:__init__:107 -- optimize_fingers: False, optimize_face: False, optimize_toes: False, optimize_betas: True, optimize_dynamics: False
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:prepare_stagei_frames:157 -- Selecting 12 frames using method manual on frames with 100% least_avail_markers
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:prepare_stagei_frames:197 -- Using stagei_fnames for stage-i: ['/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/run_002.pkl_001091'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/jump_001.pkl_000137'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/run_001.pkl_001366'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/jump_001.pkl_000509'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/throw_001.pkl_000596'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/dance_003.pkl_001488'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/jump_001.pkl_000588'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/squat_002.pkl_001134'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/jump_002.pkl_000471'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/run_001.pkl_000032'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/dance_001.pkl_001042'
     '/data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/soma_labeled_mocap_tracklet/SOMA_unlabeled_mpc/soma_subject1/dance_001.pkl_000289']
    soma_subject1 -- custom_low_head2_yz_swap -- mosh_head:mosh_stagei:241 -- Attempting mosh stagei to create /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE//training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/mosh_results_tracklet/SOMA_unlabeled_mpc/soma_subject1/male_stagei.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:120 -- using marker_layout_fname: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE//training_experiments/V48_02_SOMA/OC_05_G_03_real_000_synt_100/evaluations/mosh_results_tracklet/SOMA_unlabeled_mpc/SOMA_unlabeled_mpc_smpl.json
    soma_subject1 -- custom_low_head2_yz_swap -- bodymodel_loader:load_moshpp_models:93 -- Loading model: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/support_files/smpl/male/model.pkl
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:172 -- can_model.betas.shape: (300,)
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:173 -- opt_models[0].betas.shape: (300,)
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:192 -- Estimating for #latent markers: 53
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:229 -- Number of available markers in each stagei selected frames: (F00, 53), (F01, 53), (F02, 53), (F03, 53), (F04, 53), (F05, 53), (F06, 53), (F07, 53), (F08, 53), (F09, 53), (F10, 53), (F11, 53)
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:237 -- Rigidly aligning the body to the markers
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:262 -- MoSh stagei weights before annealing:
    stagei_wt_poseH: 3.0
    stagei_wt_poseF: 3.0
    stagei_wt_expr: 34.0
    stagei_wt_pose: 3.0
    stagei_wt_poseB: 3.0
    stagei_wt_init_finger_left: 400.0
    stagei_wt_init_finger_right: 400.0
    stagei_wt_init_finger: 400.0
    stagei_wt_betas: 10.0
    stagei_wt_init: 300
    stagei_wt_data: 75.0
    stagei_wt_surf: 10000.0
    stagei_wt_annealing: [1.0, 0.5, 0.25, 0.125]
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:266 -- head_marker_corr_fname is provided and is being loaded: /data/hxh/project/soma/SOMA_FOLDER_TEMPLATE/support_files/ssm_head_marker_corr.npz
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:274 -- Successfully took into account the correlation of the head markers
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:280 -- Beginning mosh stagei with opt_settings.weights_type: smpl
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:354 -- Step 1/4 : Opt. wt_anneal_factor = 1.00, wt_data = 1.00, wt_poseB = 65.09
    soma_subject1 -- custom_low_head2_yz_swap -- chmosh:mosh_stagei:356 -- stagei_wt_init for different marker types body = 300.00:
    Traceback (most recent call last):
      File "test_soma.py", line 115, in <module>
        'randomly_run_jobs': True,
      File "/data/hxh/project/soma/src/soma/tools/run_soma_multiple.py", line 283, in run_soma_on_multiple_settings
        run_parallel_jobs(func=run_moshpp_once, jobs=mosh_jobs, parallel_cfg=moshpp_parallel_cfg)
      File "/data/hxh/project/soma/src/soma/tools/parallel_tools.py", line 79, in run_parallel_jobs
        func(job)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/mosh_head.py", line 598, in run_moshpp_once
        mp.mosh_stagei(mosh_stagei)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/mosh_head.py", line 245, in mosh_stagei
        v_template_fname=self.cfg.moshpp.v_template_fname)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/chmosh.py", line 421, in mosh_stagei
        [f'{k} = {np.sum(opt_objs[k].r ** 2):2.2e}' for k in sorted(opt_objs)])))
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/chmosh.py", line 421, in <listcomp>
        [f'{k} = {np.sum(opt_objs[k].r ** 2):2.2e}' for k in sorted(opt_objs)])))
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 708, in compute_r
        return self.a.r * self.b.r
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/reordering.py", line 376, in compute_r
        return np.concatenate([t.r for t in self.our_terms], axis=self.axis)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/reordering.py", line 376, in <listcomp>
        return np.concatenate([t.r for t in self.our_terms], axis=self.axis)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 594, in r
        self._call_on_changed()
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 589, in _call_on_changed
        self.on_changed(self._dirty_vars)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/prior/gmm_prior_ch.py", line 62, in on_changed
        for logl, w in zip(self.loglikelihoods, self.weights)])
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/moshpp-3.0-py3.7.egg/moshpp/prior/gmm_prior_ch.py", line 62, in <listcomp>
        for logl, w in zip(self.loglikelihoods, self.weights)])
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 319, in compute_r
        return np.sum(self.x.r, axis=self.axis)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 584, in compute_r
        return self.safe_power(self.x.r, self.pow.r)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 708, in compute_r
        return self.a.r * self.b.r
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 731, in compute_r
        return self.a.r.dot(self.b.r)
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch.py", line 596, in r
        self._cache['r'] = np.asarray(np.atleast_1d(self.compute_r()), dtype=np.float64, order='C')
      File "/root/anaconda3/envs/soma/lib/python3.7/site-packages/chumpy/ch_ops.py", line 566, in compute_r
        return self.a.r - self.b.r
    ValueError: operands could not be broadcast together with shapes (69,) (63,)
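
    The shapes in this error suggest a body-pose dimensionality mismatch between the two models: the SMPL body pose has 23 joints × 3 = 69 parameters, while the SMPL-X body pose has 21 joints × 3 = 63, so an SMPL fit that still uses SMPL-X body-pose priors or weights ends up comparing incompatible arrays. A worked check of that arithmetic (the joint counts are the standard SMPL/SMPL-X body-joint counts, assumed here for illustration):

    # Hypothetical arithmetic check of the broadcast error above.
    smpl_body_joints = 23    # SMPL body joints, excluding the global root
    smplx_body_joints = 21   # SMPL-X body joints; hands, jaw, and eyes are handled separately
    print(smpl_body_joints * 3)   # 69 -> length of the SMPL body-pose vector
    print(smplx_body_joints * 3)  # 63 -> length expected by an SMPL-X body-pose prior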
    
    opened by minushuang 3
  • fit smal params to animal (cat) mocap data manually picked from a unity 3d model

    Hi, could you give me some advice on how to get the SMAL parameters with MoSh++ if I have some 3D animation models bought from the Unity store? (e.g., https://assetstore.unity.com/packages/3d/characters/animals/mammals/fully-animated-cats-185493#description)

    Currently, I manually labeled the 53 marker locations that are defined in the person SMPL-X marker layout, and the mesh result from MoSh++ looks like a person with four limbs down on the ground. The pose is what we thought it would be, but the shape is a person because of the SMPL-X model; we need a SMAL model rather than SMPL-X for animals.

    Besides replacing the SMPL-X model with SMAL, what else should I do if I want to fit SMAL params to animal mocap data manually picked from the Unity 3D model?

    Below is a mesh result of the cat under the SMPL-X model: a person-like cat.

    opened by minushuang 3
Owner
Nima Ghorbani
Research Engineer at Max-Planck Institute for Intelligent Systems. In love with math and its applications in perceiving systems.
This repository contains the accompanying code for Deep Virtual Markers for Articulated 3D Shapes, ICCV'21

Deep Virtual Markers This repository contains the accompanying code for Deep Virtual Markers for Articulated 3D Shapes, ICCV'21 Getting Started Get sa

KimHyomin 45 Oct 7, 2022
A real-time motion capture system that estimates poses and global translations using only 6 inertial measurement units

TransPose Code for our SIGGRAPH 2021 paper "TransPose: Real-time 3D Human Translation and Pose Estimation with Six Inertial Sensors". This repository

Xinyu Yi 261 Dec 31, 2022
EasyMocap is an open-source toolbox for markerless human motion capture from RGB videos.

EasyMocap is an open-source toolbox for markerless human motion capture from RGB videos. In this project, we provide the basic code for fitt

ZJU3DV 2.2k Jan 5, 2023
dataset for ECCV 2020 "Motion Capture from Internet Videos"

Motion Capture from Internet Videos Motion Capture from Internet Videos Junting Dong*, Qing Shuai*, Yuanqing Zhang, Xian Liu, Xiaowei Zhou, Hujun Bao

ZJU3DV 98 Dec 7, 2022
PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time

PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time The implementation is based on SIGGRAPH Aisa'20. Dependencies Python 3.7 Ubuntu

soratobtai 124 Dec 8, 2022
A minimal solution to hand motion capture from a single color camera at over 100fps. Easy to use, plug to run.

Minimal Hand A minimal solution to hand motion capture from a single color camera at over 100fps. Easy to use, plug to run. This project provides the

Yuxiao Zhou 824 Jan 7, 2023
Differential rendering based motion capture blender project.

TraceArmature Summary TraceArmature is currently a set of python scripts that allow for high fidelity motion capture through the use of AI pose estima

William Rodriguez 4 May 27, 2022
Differentiable Neural Computers, Sparse Access Memory and Sparse Differentiable Neural Computers, for Pytorch

Differentiable Neural Computers and family, for Pytorch Includes: Differentiable Neural Computers (DNC) Sparse Access Memory (SAM) Sparse Differentiab

ixaxaar 302 Dec 14, 2022
Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian (CVPR 2022)

Pop-Out Motion Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian (CVPR 2022) Jihyun Lee*, Minhyuk Sung*, Hyunjin Kim, Tae-Ky

Jihyun Lee 88 Nov 22, 2022
This repository contains the code for the paper "Hierarchical Motion Understanding via Motion Programs"

Hierarchical Motion Understanding via Motion Programs (CVPR 2021) This repository contains the official implementation of: Hierarchical Motion Underst

Sumith Kulal 40 Dec 5, 2022
Exploring Versatile Prior for Human Motion via Motion Frequency Guidance (3DV2021)

Exploring Versatile Prior for Human Motion via Motion Frequency Guidance This is the codebase for video-based human motion reconstruction in human-mot

Jiachen Xu 5 Jul 14, 2022
Space robot - (Course Project) Using the space robot to capture the target satellite that is disabled and spinning, then stabilize and fix it up

Space robot - (Course Project) Using the space robot to capture the target satellite that is disabled and spinning, then stabilize and fix it up

Mingrui Yu 3 Jan 7, 2022
Capture all information throughout your model's development in a reproducible way and tie results directly to the model code!

Rubicon Purpose Rubicon is a data science tool that captures and stores model training and execution information, like parameters and outcomes, in a r

Capital One 97 Jan 3, 2023
Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty

Deep Deterministic Uncertainty This repository contains the code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic

Jishnu Mukhoti 69 Nov 28, 2022
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image [Project Page] [Paper] [Supp. Mat.] Table of Contents License Description Fittin

Vassilis Choutas 1.3k Jan 7, 2023
HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR. CVPR 2022

HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR. CVPR 2022 [Project page | Video] Getting sta

null 51 Nov 29, 2022
A Pytorch implementation of "Splitter: Learning Node Representations that Capture Multiple Social Contexts" (WWW 2019).

Splitter ⠀⠀ A PyTorch implementation of Splitter: Learning Node Representations that Capture Multiple Social Contexts (WWW 2019). Abstract Recent inte

Benedek Rozemberczki 201 Nov 9, 2022
A python script to dump all the challenges locally of a CTFd-based Capture the Flag.

A python script to dump all the challenges locally of a CTFd-based Capture the Flag. Features Connects and logins to a remote CTFd instance. Dumps all

Podalirius 77 Dec 7, 2022
DeFMO: Deblurring and Shape Recovery of Fast Moving Objects (CVPR 2021)

Evaluation, Training, Demo, and Inference of DeFMO DeFMO: Deblurring and Shape Recovery of Fast Moving Objects (CVPR 2021) Denys Rozumnyi, Martin R. O

Denys Rozumnyi 139 Dec 26, 2022