This repository contains the code for the paper "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization"

Overview

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization


News:

  • [2020/05/04] Added EGL rendering option for training data generation. Now you can create your own training data with headless machines!
  • [2020/04/13] Demo with Google Colab (incl. visualization) is available. Special thanks to @nanopoteto!!!
  • [2020/02/26] License is updated to MIT license! Enjoy!

This repository contains a PyTorch implementation of "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization".

Project Page

If you find the code useful in your research, please consider citing the paper.

@InProceedings{saito2019pifu,
author = {Saito, Shunsuke and Huang, Zeng and Natsume, Ryota and Morishima, Shigeo and Kanazawa, Angjoo and Li, Hao},
title = {PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}

This codebase provides:

  • test code
  • training code
  • data generation code

Requirements

  • Python 3
  • PyTorch (tested on 1.4.0)
  • json
  • PIL
  • skimage
  • tqdm
  • numpy
  • cv2

For training and data generation:

  • trimesh with pyembree
  • pyexr
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for Ubuntu users)
  • (optional) EGL-related packages for rendering on headless machines (use apt install libgl1-mesa-dri libegl1-mesa libgbm1 for Ubuntu users)

Warning: I found that outdated NVIDIA drivers may cause errors with EGL. If you want to try out the EGL version, please update your NVIDIA driver to the latest!!
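
A quick way to verify the environment before running anything heavier is to check that the packages listed above import cleanly. The snippet below is a convenience sketch, not part of the repository:

# Sanity-check sketch (not part of the repo): verify that the dependencies
# listed above are importable in the current environment.
import importlib

base = ["torch", "PIL", "skimage", "tqdm", "numpy", "cv2", "json"]
train_and_datagen = ["trimesh", "pyexr", "OpenGL"]  # plus pyembree for speed

for name in base + train_and_datagen:
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as err:
        print(f"{name}: MISSING ({err})")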

Windows demo installation instructions

  • Install miniconda
  • Add conda to PATH
  • Install git bash
  • Launch Git\bin\bash.exe
  • eval "$(conda shell.bash hook)" then conda activate my_env because of this
  • Automatic setup: conda env create -f environment.yml (see this)
  • OR manually setup environment
    • conda create --name pifu python, where pifu is the name of your environment
    • conda activate pifu
    • conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
    • conda install pillow
    • conda install scikit-image
    • conda install tqdm
    • conda install -c menpo opencv
  • Download wget.exe
  • Place it into Git\mingw64\bin
  • sh ./scripts/download_trained_model.sh
  • Remove background from your image (this, for example)
  • Create a black-and-white mask .png (a minimal mask sketch follows this list)
  • Replace the originals in sample_images/
  • Try it out - sh ./scripts/test.sh
  • Download Meshlab because of this
  • Open .obj file in Meshlab
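
For the mask step above, one simple option is to derive the black-and-white mask from the alpha channel of a background-removed RGBA image. The snippet below is a minimal sketch, not part of the repository; the file names are illustrative.

# Minimal sketch: turn the alpha channel of a background-removed RGBA PNG
# into the black-and-white mask placed next to the image in sample_images/.
import numpy as np
from PIL import Image

rgba = np.array(Image.open("person_nobg.png").convert("RGBA"))
mask = (rgba[:, :, 3] > 0).astype(np.uint8) * 255   # white = foreground
Image.fromarray(mask).save("person_mask.png")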

Demo

Warning: The released model is trained mostly on upright standing scans with weak perspective projection and a pitch angle of 0 degrees. Reconstruction quality may degrade for images that deviate significantly from the training data.

  1. Run the following script to download the pretrained models and copy them under ./PIFu/checkpoints/.
sh ./scripts/download_trained_model.sh
  2. Run the following script. The script creates a textured .obj file under ./PIFu/eval_results/. You may need to use ./apps/crop_img.py to roughly align an input image and the corresponding mask to the training data for better performance. For background removal, you can use any off-the-shelf tool such as removebg.
sh ./scripts/test.sh
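
Once the script finishes, the reconstructed mesh can also be inspected programmatically, for example with trimesh (already a dependency). This is only a convenience sketch; the exact output file name depends on the input image, so the path below is illustrative.

# Convenience sketch (not part of the repo): load a reconstructed mesh from
# ./PIFu/eval_results/ and print basic statistics. The path is illustrative.
import trimesh

mesh = trimesh.load("eval_results/pifu_demo/result_ryota.obj", process=False)
print("vertices:", len(mesh.vertices))
print("faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight)
print("bounding box extents:", mesh.bounding_box.extents)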

Demo on Google Colab

If you do not have a setup to run PIFu, we offer a Google Colab version so you can give it a try, allowing you to run PIFu in the cloud, free of charge. Try our Colab demo using the following notebook: Open In Colab

Data Generation (Linux Only)

While we are unable to release the full training data due to licensing restrictions on the commercial scans, we provide rendering code that uses free models from RenderPeople. This tutorial uses the rp_dennis_posed_004 model. Please download the model from this link and unzip the content under a folder named rp_dennis_posed_004_OBJ. The same process can be applied to other RenderPeople data.

Warning: the following code becomes extremely slow without pyembree. Please make sure you install pyembree.

  1. Run the following script to compute spherical harmonics coefficients for precomputed radiance transfer (PRT). In a nutshell, PRT is used to account for accurate light transport, including ambient occlusion, without compromising online rendering time, which significantly improves photorealism compared with common spherical harmonics rendering using surface normals. This process has to be done once per obj file (a simplified sketch of the idea follows this list).
python -m apps.prt_util -i {path_to_rp_dennis_posed_004_OBJ}
  2. Run the following script. Under the specified data path, the code creates folders named GEO, RENDER, MASK, PARAM, UV_RENDER, UV_MASK, UV_NORMAL, and UV_POS. Note that you may need to list validation subjects to exclude from training in {path_to_training_data}/val.txt (this tutorial has only one subject, so leave it empty). If you wish to render images on headless servers equipped with an NVIDIA GPU, add -e to enable EGL rendering.
python -m apps.render_data -i {path_to_rp_dennis_posed_004_OBJ} -o {path_to_training_data} [-e]
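
To make the PRT step a bit more concrete: for each vertex, spherical-harmonics coefficients of the visibility-weighted, clamped-cosine light transport are estimated by Monte-Carlo integration over sampled directions. The sketch below illustrates the idea for a single vertex using only SH bands 0 and 1; it is a simplified illustration and not the code in apps/prt_util.py, which uses more bands and ray-traced visibility via trimesh/pyembree.

# Illustration only (not apps/prt_util.py): Monte-Carlo projection of
# visibility-weighted clamped-cosine transport onto real SH bands 0-1.
import numpy as np

def sh_basis_bands01(dirs):
    # Real spherical harmonics Y_0^0, Y_1^-1, Y_1^0, Y_1^1 for unit directions.
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([0.282095 * np.ones_like(x),
                     0.488603 * y,
                     0.488603 * z,
                     0.488603 * x], axis=1)

def prt_coeffs(normal, visibility, dirs):
    # normal: (3,) unit normal; visibility: (N,) 1 if a direction is unoccluded;
    # dirs: (N, 3) directions drawn uniformly on the unit sphere.
    cos_term = np.clip(dirs @ normal, 0.0, None)      # clamped cosine lobe
    transport = visibility * cos_term                  # what PRT precomputes
    basis = sh_basis_bands01(dirs)                     # (N, 4)
    # Monte-Carlo estimate of the integral over the sphere (solid angle 4*pi).
    return 4.0 * np.pi * np.mean(transport[:, None] * basis, axis=0)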

Training (Linux Only)

Warning: the following code becomes extremely slow without pyembree. Please make sure you install pyembree.
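
Before launching training, it can help to verify that {path_to_training_data} contains the folders created by apps.render_data in the previous section, plus a val.txt. The check below is a convenience sketch, not part of the repository:

# Convenience sketch (not part of the repo): check that the training data
# root has the layout produced by apps.render_data.
import os
import sys

REQUIRED = ["GEO", "RENDER", "MASK", "PARAM",
            "UV_RENDER", "UV_MASK", "UV_NORMAL", "UV_POS"]

root = sys.argv[1]  # {path_to_training_data}
missing = [d for d in REQUIRED if not os.path.isdir(os.path.join(root, d))]
if missing:
    raise SystemExit(f"missing folders under {root}: {missing}")
if not os.path.isfile(os.path.join(root, "val.txt")):
    print("note: no val.txt found; create one (it may be empty) to list validation subjects")
print("dataset layout looks OK")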

  1. Run the following script to train the shape module. The intermediate results and checkpoints are saved under ./results and ./checkpoints respectively. You can add the --batch_size and --num_sample_inout flags to adjust the batch size and the number of sampled points based on available GPU memory.
python -m apps.train_shape --dataroot {path_to_training_data} --random_flip --random_scale --random_trans
  2. Run the following script to train the color module (--sigma controls how far sampled points are spread around the surface; a rough sketch follows this list).
python -m apps.train_color --dataroot {path_to_training_data} --num_sample_inout 0 --num_sample_color 5000 --sigma 0.1 --random_flip --random_scale --random_trans
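
As a rough picture of the spatial sampling that --sigma controls (see also the sampling code quoted in the comments below): query points are sampled on the ground-truth mesh surface and then perturbed with Gaussian noise of standard deviation sigma, so the network sees queries concentrated near the surface. The sketch below, assuming a trimesh mesh, illustrates the idea for shape training; it is not the repository's exact dataset code.

# Minimal sketch (not the repo's TrainDataset code): sigma-controlled
# spatial sampling of query points around a ground-truth mesh surface,
# with inside/outside occupancy labels (1 = inside, 0 = outside).
import numpy as np
import trimesh

def sample_query_points(mesh, num_points, sigma):
    surface_points, _ = trimesh.sample.sample_surface(mesh, 4 * num_points)
    sample_points = surface_points + np.random.normal(scale=sigma,
                                                      size=surface_points.shape)
    labels = mesh.contains(sample_points).astype(np.float32)
    return sample_points, labels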

Related Research

Monocular Real-Time Volumetric Performance Capture (ECCV 2020)
Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

The first real-time PIFu by accelerating reconstruction and rendering!!

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)
Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

We further improve the reconstruction quality by leveraging a multi-level approach!

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020)
Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Learning PIFu in canonical space for animatable avatar generation!

Robust 3D Self-portraits in Seconds (CVPR 2020)
Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

They extend PIFu to RGBD + introduce "PIFusion" utilizing PIFu reconstruction for non-rigid fusion.

Learning to Infer Implicit Surfaces without 3d Supervision (NeurIPS 2019)
Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

We answer the question: "how can we learn an implicit function if we don't have 3D ground truth?"

SiCloPe: Silhouette-Based Clothed People (CVPR 2019, best paper finalist)
Ryota Natsume*, Shunsuke Saito*, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima

Our first attempt to reconstruct a 3D clothed human body with texture from a single image!

Deep Volumetric Video from Very Sparse Multi-view Performance Capture (ECCV 2018)
Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, Hao Li

Implicit surface learning for sparse-view human performance capture!


For commercial queries, please contact:

Hao Li: [email protected], cc to: [email protected]

Comments
  • How to set training data

    How to set training data

    Thank you very much for your work. Now I can convert the mesh in the shapenet data to a watertight mesh through ManifoldPlus, but I don’t know how to set up the data set structure for training. If you can give some advice, I would appreciate it.

    opened by zyz-1998 12
  • Training Errors

    Training Errors

    Sir, I've created ./bounce/bounce0.txt and ./bounce/face.npy under {path_to_rp_dennis_posed_004_OBJ}. Now, when I try to train, I get a RuntimeError:

    
    (tokurEnv) hamit@hamit-MS-7B49:~/Softwares/environments/PIFu$ python -m apps.train_shape --dataroot /home/hamit/Softwares/environments/PIFu/tempImages --random_flip --random_scale --random_trans
    /home/hamit/Softwares/environments/PIFu/lib/data/TrainDataset.py:102: UserWarning: loadtxt: Empty input file: "/home/hamit/Softwares/environments/PIFu/tempImages/val.txt"
      var_subjects = np.loadtxt(os.path.join(self.root, 'val.txt'), dtype=str)
    train data size:  180
    test data size:  360
    initialize network with normal
    Using Network:  hgpifu
    Traceback (most recent call last):
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 761, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/usr/lib/python3.6/multiprocessing/queues.py", line 104, in get
        if not self._poll(timeout):
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 257, in poll
        return self._poll(timeout)
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
        r = wait([self], timeout)
      File "/usr/lib/python3.6/multiprocessing/connection.py", line 911, in wait
        ready = selector.select(timeout)
      File "/usr/lib/python3.6/selectors.py", line 376, in select
        fd_event_list = self._poll.poll(timeout)
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
        _error_if_any_worker_fails()
    RuntimeError: DataLoader worker (pid 12928) is killed by signal: Killed. 
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/hamit/Softwares/environments/PIFu/apps/train_shape.py", line 183, in <module>
        train(opt)
      File "/home/hamit/Softwares/environments/PIFu/apps/train_shape.py", line 90, in train
        for train_idx, train_data in enumerate(train_data_loader):
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 841, in _next_data
        idx, data = self._get_data()
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 808, in _get_data
        success, data = self._try_get_data()
      File "/home/hamit/Softwares/environments/tokurEnv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 774, in _try_get_data
        raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
    RuntimeError: DataLoader worker (pid(s) 12928) exited unexpectedly
    

    PyTorch version : 1.4.0

    opened by alitokur 9
  • Sampling for feature vector vs ground truth mesh

    Sampling for feature vector vs ground truth mesh

    Hi. I am having some trouble with my understanding. I hope you can enlighten me; I would truly appreciate it! You use spatial sampling with the ground truth mesh. Does this not mean you have a ground truth 3D occupancy field that is incomplete? For corresponding pixels without a ground-truth inside/outside prediction, will their feature vectors and z-values still be fed through PIFu?

    I noticed in an earlier paragraph that you mentioned the use of bilinear sampling to obtain the feature vectors. How is the purpose of this sampling different from spatial sampling used with the ground truth meshes?

    I also checked out the script you attached in the previous issue:
    surface_points, _ = trimesh.sample.sample_surface(mesh, 4 * self.num_sample_inout)
    sample_points = surface_points + np.random.normal(scale=self.opt.sigma, size=surface_points.shape)
    Is this the spatial sampling described above? I do not see a direct connection to the spatial sampling described in the paper, so I just want to confirm.

    Thank you for your patience. I have picked this paper to try and learn as much as possible. Hopefully I'm not annoying you. I look forward to your reply.

    opened by gordon-lim 8
  • Error cannot marching cubes

    Error cannot marching cubes

    Using your source code on geometry we generated, we got this error during training: "Error cannot marching cubes". This error appears when the iteration is finished, and then it restarts the iteration again. Is this expected, or is something wrong with our settings?

    Thanks again for sharing your source.

    opened by faniry6 8
  • cannot import 'Textures'

    cannot import 'Textures'

    /content/PIFu/lib/colab_util.py in <module>()
         16 
         17 # Data structures and functions for rendering
    ---> 18 from pytorch3d.structures import Meshes, Textures
         19 from pytorch3d.renderer import (
         20     look_at_view_transform,
    
    ImportError: cannot import name 'Textures'
    

    Thanks for your excellent work. When I use your Colab demo, it gives me the above error. Could you please tell me how to fix it? Sorry for the dumb question. I searched for it but nothing helped.

    opened by Dominoer 7
  • input format for multi-view PIFu

    input format for multi-view PIFu

    Dear Sir; for single-view PIFu training, I mostly understand the data format: [image_tensor -> our images (4-dim), sample_tensor -> mesh points, calib_tensor, labels (1 for inside points, 0 for outside)]

    But I couldn't find how the multi-view format works. Should I create an image_tensor parameter for each view? In #31 you mentioned transform matrices. When we create data with "apps.render_data" we just rotate the object, right? Should I find the rotation matrices between the images and give them to the function? Can you give more information about the data format when you are available? Sincerely

    opened by alitokur 7
  • issue when eval with color net

    issue when eval with color net

    Thank you for your great work. I trained the model with my own data. When I eval with only shape net, there is no problem. But when I eval with both net_G and net_C, errors occurred: RuntimeError: Error(s) in loading state_dict for ResBlkPIFuNet: Missing key(s) in state_dict: "image_filter.model.2.weight", "image_filter.model.2.bias", "image_filter.model.5.weight", "image_filter.model.5.bias", "image_filter.model.8.weight", "image_filter.model.8.bias", "image_filter.model.10.conv_block.2.weight", "image_filter.model.10.conv_block.2.bias", "image_filter.model.10.conv_block.6.weight", "image_filter.model.10.conv_block.6.bias", "image_filter.model.11.conv_block.2.weight", "image_filter.model.11.conv_block.2.bias", "image_filter.model.11.conv_block.6.weight", "image_filter.model.11.conv_block.6.bias", "image_filter.model.12.conv_block.2.weight", "image_filter.model.12.conv_block.2.bias", "image_filter.model.12.conv_block.6.weight", "image_filter.model.12.conv_block.6.bias", "image_filter.model.13.conv_block.2.weight", "image_filter.model.13.conv_block.2.bias", "image_filter.model.13.conv_block.6.weight", "image_filter.model.13.conv_block.6.bias", "image_filter.model.14.conv_block.2.weight", "image_filter.model.14.conv_block.2.bias", "image_filter.model.14.conv_block.6.weight", "image_filter.model.14.conv_block.6.bias", "image_filter.model.15.conv_block.2.weight", "image_filter.model.15.conv_block.2.bias". Unexpected key(s) in state_dict: "image_filter.model.1.bias", "image_filter.model.4.bias", "image_filter.model.7.bias", "image_filter.model.10.conv_block.1.bias", "image_filter.model.10.conv_block.5.bias", "image_filter.model.11.conv_block.1.bias", "image_filter.model.11.conv_block.5.bias", "image_filter.model.12.conv_block.1.bias", "image_filter.model.12.conv_block.5.bias", "image_filter.model.13.conv_block.1.bias", "image_filter.model.13.conv_block.5.bias", "image_filter.model.14.conv_block.1.bias", "image_filter.model.14.conv_block.5.bias", "image_filter.model.15.conv_block.1.bias", "image_filter.model.15.conv_block.5.bias".

    opened by troylujc 6
  • Multiple images into one object

    Multiple images into one object

    Hello! @shunsukesaito awesome work.

    My question is the following: I have several images of the same person from different angles (the images were captured at the same time). How can I reconstruct one common, better-quality object from these images, so that the object takes all projections into account? Is it possible to implement this feature using this solution?

    I would be grateful if you can help me with this. Thanks!


    opened by pandov 6
  • Multi-view settings?

    Multi-view settings?

    Hi, it would be great if you could document how to use this method in the multi-view setting and also provide the model needed. :)

    Thank you, amazing work.

    opened by mmzsolt 6
  • Query regarding Sigma

    Query regarding Sigma

    Hi,

    I have my models scaled to a unit box in MeshLab. I trained this data with sigma=5, where I observed no change in the error. I then trained the same data with sigma=0.1, where I observed the error decreasing. However, the marching cubes error still persists even after the 15th epoch. May I know whether any code changes are needed after scaling the meshes to a unit box?

    Thanks,

    opened by jsaisagar 5
  • Training Result

    Training Result

    Hello Shunsuke, I trained a netG using my own data, but the reconstructed meshes are not as good as the results of your released model. I'm not sure whether it is a data problem or whether I missed something. Can you share some details, such as how the MSE error evolves during your training process? Thank you so much!

    opened by Brawie 5
  • Bump pillow from 9.0.0 to 9.3.0

    Bump pillow from 9.0.0 to 9.3.0

    Bumps pillow from 9.0.0 to 9.3.0.


    dependencies 
    opened by dependabot[bot] 0
  • Training data

    Training data

    Hello.

    We are collecting data to train new models. I'm asking because I want to know how much data I need.

    The paper says that about 500 RenderPeople scans were used for training, but I wonder why it is 500.

    Is there a reason why it should be 500?

    opened by Yunhyeonjin4777 0
  • One question about multiview training

    One question about multiview training

    Hi, in the paper you report multi-view results. Do you train the multi-view PIFu with opt.random_multiview=True? I find that if this flag is set to true, the model cannot generate satisfactory results, while good results can be achieved by training with a fixed angle distance.

    opened by Jarvisss 1
  • freeglut (foo): failed to open display 'localhost:12.0'

    freeglut (foo): failed to open display 'localhost:12.0'

    My environment is a Linux server running Ubuntu 18.04. When I run this command: python -m apps.train_color --dataroot Traindata --num_sample_inout 0 --num_sample_color 5000 --sigma 0.1 --random_flip --random_scale --random_trans an error occurs: ValueError: Sample larger than population or is negative

    Can you help me solve this problem?

    opened by Deerzh 0
  • Training on dennis obj does not yield satisfactory results

    Training on dennis obj does not yield satisfactory results

    So, I tried training PIFu on the sample dennis obj; after training for 70 epochs this is my current result. Is this what you were getting for that particular object? Compared to the actual dennis object, I feel that a lot of the features, especially in the face, hands, and front view, were missed. Plus the entire model looks somewhat blocky.

    [Images: my reconstruction after 78 epochs (dennis_model_78epochs) and the ground-truth dennis model front view (ground_truth_PIFU_front_view).] Note that in the dennis folder I have only included one image, rp_dennis_posed_004_A.

    opened by sparshgarg23 10
  • Some questions about dataset format and color model training

    Some questions about dataset format and color model training

    Hi, thanks for open-sourcing the work. I am looking into the method and would like to apply it to a custom dataset. Based on the csv files in PIFuHD and the colab example, I am assuming the data should have the following format:

    Train
    Model_person_1
    .obj file
    model_person_1.jpg
    Model person_1_pose_1
    .obj file
    model_person_1_pose_1.jpg
    Model_person_2
    contains an obj file and jpg file
    
    TEST
    Same format as train
    

    Second question: let's say I have already created the model using Blender and I would like to use PIFu just to estimate the texture of the clothes. Can I achieve this by executing train_color_model.py? If yes, what steps should be followed for evaluation, since the current demo script supports both model reconstruction and color transfer?

    opened by sparshgarg23 2