Code for "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans" CVPR 2021 best paper candidate

Overview

News

  • 05/17/2021 To make comparisons on ZJU-MoCap easier, we have released the quantitative and qualitative results of other methods here, including Neural Volumes, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.
  • 05/13/2021 To make it easier for follow-up works to compare with our model, we have released our rendering results on ZJU-MoCap here, together with a document that describes the training and test protocols.
  • 05/12/2021 The code now supports testing and visualization on unseen human poses.
  • 05/12/2021 We have updated the ZJU-MoCap dataset with better-fitted SMPL parameters obtained with EasyMocap. We have also released a website for visualization. Please see here for the usage of the provided SMPL parameters.

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans

Project Page | Video | Paper | Data


Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
CVPR 2021

Any questions or discussions are welcome!

Installation

Please see INSTALL.md for manual installation.

Installation using docker

Please see docker/README.md.

Thanks to Zhaoyi Wan for providing the docker implementation.

Run the code on a custom dataset

Please see CUSTOM.

Run the code on People-Snapshot

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Process People-Snapshot

We already provide some processed data. If you want to process more videos from People-Snapshot, you can use tools/process_snapshot.py.

You can also visualize the SMPL parameters of People-Snapshot with tools/vis_snapshot.py.
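
If you just want to inspect the processed outputs before rendering, the parameter files are plain numpy archives. Below is a minimal sketch for loading one; the path and key names are assumptions for illustration and may not match every export of process_snapshot.py.

    import numpy as np

    # Hypothetical path; point this at whatever process_snapshot.py produced.
    params = np.load('data/people_snapshot/female-3-casual/params/0.npy',
                     allow_pickle=True).item()

    # The file stores a dict of arrays (e.g. SMPL pose/shape and the global
    # rotation/translation); key names vary by export, hence the loop.
    for key, value in params.items():
        print(key, np.asarray(value).shape)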

Visualization on People-Snapshot

Take the visualization on female-3-casual as an example. The command lines for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/female3c/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_view True num_render_views 144
    


    • Visualize views of dynamic humans with a fixed camera
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_pose True
    


    • Visualize mesh (see the mesh-extraction sketch after this list)
    # generate meshes
    python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name female3c --dataset people_snapshot --mesh_ind 226
    


  3. The results of visualization are located at $ROOT/data/render/female3c and $ROOT/data/perform/female3c.
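
The vis_mesh pipeline extracts a surface from the predicted density volume with marching cubes. The following is a rough, self-contained sketch of that kind of extraction using PyMCubes and trimesh (both already in the requirements); the sphere volume here is a dummy stand-in for the network's real output.

    import mcubes
    import numpy as np
    import trimesh

    # Dummy occupancy volume: a sphere of radius 0.4 on a 64^3 grid.
    grid = np.linspace(-0.5, 0.5, 64)
    x, y, z = np.meshgrid(grid, grid, grid, indexing='ij')
    volume = (x**2 + y**2 + z**2 < 0.4**2).astype(np.float32)

    # Marching cubes at the 0.5 iso-level, then export the mesh with trimesh.
    vertices, triangles = mcubes.marching_cubes(volume, 0.5)
    mesh = trimesh.Trimesh(vertices=vertices, faces=triangles)
    mesh.export('example_mesh.ply')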

Training on People-Snapshot

Take the training on female-3-casual as an example. The command lines for training are recorded in train.sh.

  1. Train:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False white_bkgd True
    
  3. Tensorboard:
    tensorboard --logdir data/record/if_nerf
    

Run the code on ZJU-MoCap

Please see INSTALL.md to download the dataset.

We provide the pretrained models here.

Potential problems with the provided SMPL parameters

  1. The newly fitted parameters are located in new_params. Currently, the released pretrained models are trained on the previously fitted parameters, which are located in params.
  2. The SMPL parameters of ZJU-MoCap follow a different definition from that of MPI's smplx.
    • If you want to extract vertices from the provided SMPL parameters, please use zju_smpl/extract_vertices.py (a sketch of the underlying transform follows below).
    • The reason we use the current definition is described here.

It is also fine to train Neural Body with SMPL parameters fitted by smplx.
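
For orientation, the main difference is that ZJU-MoCap applies the global rotation Rh and translation Th outside the SMPL model, rather than through smplx's global_orient/transl inputs. Below is a hedged sketch of that final transform; zju_smpl/extract_vertices.py is the authoritative reference, and the path, key names, and placeholder verts are assumptions for illustration.

    import cv2
    import numpy as np

    # Hypothetical path to one frame's parameters in new_params.
    params = np.load('new_params/0.npy', allow_pickle=True).item()

    # Placeholder for the SMPL forward pass run with zero global rotation
    # and zero translation (6890 vertices in the standard SMPL topology).
    verts = np.zeros((6890, 3))

    R = cv2.Rodrigues(params['Rh'].astype(np.float64))[0]  # axis-angle -> 3x3
    Th = params['Th'].reshape(1, 3)
    world_verts = np.dot(verts, R.T) + Th  # rotate, then translate into world space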

Test on ZJU-MoCap

The command lines for testing are recorded in test.sh.

Take the test on sequence 313 as an example.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.
  2. Test on training human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313
    
  3. Test on unseen human poses:
    python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 test_novel_pose True
    

Visualization on ZJU-MoCap

Take the visualization on sequence 313 as an example. The command lines for visualization are recorded in visualize.sh.

  1. Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.

  2. Visualization:

    • Visualize novel views of a single frame
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True
    


    • Visualize novel views of a single frame by rotating the SMPL model
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True num_render_views 100
    


    • Visualize views of dynamic humans with a fixed camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000 num_render_views 1
    


    • Visualize views of dynamic humans with a rotated camera
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000
    


    • Visualize mesh
    # generate meshes
    python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_mesh True train.num_workers 0
    # visualize a specific mesh
    python tools/render_mesh.py --exp_name xyzc_313 --dataset zju_mocap --mesh_ind 0
    


  3. The results of visualization are located at $ROOT/data/render/xyzc_313 and $ROOT/data/perform/xyzc_313.

Training on ZJU-MoCap

Take the training on sequence 313 as an example. The command lines for training are recorded in train.sh.

  1. Train:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False
    # distributed training
    python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False gpus "0, 1, 2, 3" distributed True
    
  2. Train with white background:
    # training
    python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False white_bkgd True
    
  3. Tensorboard:
    tensorboard --logdir data/record/if_nerf
    

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2021neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}
Comments
  • Visualization on People-Snapshot


    INFO - 2022-05-18 16:15:24,440 - utils - NumExpr defaulting to 2 threads.
    load model: data/trained_model/if_nerf/female4c/latest.pth
    Traceback (most recent call last):
      File "run.py", line 126, in <module>
        globals()['run_' + args.type]()
      File "run.py", line 88, in run_visualize
        epoch=cfg.test.epoch)
      File "/content/neuralbody/lib/utils/net_utils.py", line 381, in load_network
        net.load_state_dict(pretrained_model['net'], strict=False)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 830, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for Network:
      size mismatch for xyzc_net.conv0.0.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
      size mismatch for xyzc_net.conv0.3.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).
      size mismatch for xyzc_net.down0.0.weight: copying a param with shape torch.Size([3, 3, 3, 16, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 16]).
      size mismatch for xyzc_net.conv1.0.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
      size mismatch for xyzc_net.conv1.3.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]).
      size mismatch for xyzc_net.down1.0.weight: copying a param with shape torch.Size([3, 3, 3, 32, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 32]).
      size mismatch for xyzc_net.conv2.0.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
      size mismatch for xyzc_net.conv2.3.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
      size mismatch for xyzc_net.conv2.6.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]).
      size mismatch for xyzc_net.down2.0.weight: copying a param with shape torch.Size([3, 3, 3, 64, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 64]).
      size mismatch for xyzc_net.conv3.0.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.conv3.3.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.conv3.6.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.down3.0.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.conv4.0.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.conv4.3.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).
      size mismatch for xyzc_net.conv4.6.weight: copying a param with shape torch.Size([3, 3, 3, 128, 128]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3, 128]).

    Can you help me? What should I do?

    opened by muxi-0104 10
  • Issue with the CMU Panoptic data


    Thank you very much for your awesome work. I met an issue when playing with the CMU Panoptic data: I tried to replicate this project on it, but when I visualize the result, I get the following problem.

    I find that the upper part of the human body is missing. I used four views covering different directions.

    opened by neilgogogo 8
  • Encountered a bug when training on a custom monocular video


    After training on my own monocular video following the instructions, I encountered a problem when visualizing:

    (neuralbody) chensien@Lab-Server:~/neuralbody$ python run.py --type visualize --cfg_file configs/custom_perform.yaml exp_name custom render_views 1
    load model: data/trained_model/if_nerf/custom/latest.pth
    /home/chensien/neuralbody/lib/utils/render_utils.py:12: RuntimeWarning: invalid value encountered in true_divide
      return x / np.linalg.norm(x)
    [0] the results are saved at data/perform/custom
    0%| | 0/300 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "run.py", line 126, in <module>
        globals()['run_' + args.type]()
      File "run.py", line 99, in run_visualize
        output = renderer.render(batch)
      File "lib/networks/renderer/if_clight_renderer_mmsk.py", line 179, in render
        raw = self.net(sp_input, grid_coords, viewdir, light_pts)
      File "/home/chensien/anaconda3/envs/neuralbody/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "lib/networks/latent_xyzc.py", line 43, in forward
        xyzc_features = self.xyzc_net(xyzc, grid_coords)
      File "/home/chensien/anaconda3/envs/neuralbody/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "lib/networks/latent_xyzc.py", line 125, in forward
        features = features.view(features.size(0), -1, features.size(4))
    RuntimeError: cannot reshape tensor of 0 elements into shape [1, -1, 0] because the unspecified dimension size -1 can be any value and is ambiguous

    opened by SienChen1997 7
  • Results of the monocular video


    Hi Sida, thanks a lot for the code! I am interested in the results on the monocular video.

    Here is my question: is the monocular video trained with only one view, with the other settings the same as in the sparse-views experiment? It's amazing that the results are so good with a monocular video.

    opened by gxj98 7
  • Can't visualize with vis_snapshot.py


    Thanks for this fabulous work. However, when I use vis_snapshot.py to visualize the SMPL parameters of People-Snapshot, I get the following error:

    /app/tools$ python vis_snapshot.py
    INFO - 2021-06-22 15:08:19,340 - acceleratesupport - OpenGL_accelerate module loaded
    INFO - 2021-06-22 15:08:19,345 - arraydatatype - Using accelerated ArrayDatatype
    Traceback (most recent call last):
      File "vis_snapshot.py", line 84, in <module>
        renderer = Renderer(height=1080, width=1080, faces=faces)
      File "/app/tools/snapshot_smpl/renderer.py", line 23, in __init__
        self.renderer = pyrender.OffscreenRenderer(height, width)
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
        self._platform.init_context()
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyrender/platforms/pyglet_platform.py", line 50, in init_context
        self._window = pyglet.window.Window(config=conf, visible=False,
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyglet/window/xlib/__init__.py", line 171, in __init__
        super(XlibWindow, self).__init__(*args, **kwargs)
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyglet/window/__init__.py", line 575, in __init__
        display = pyglet.canvas.get_display()
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyglet/canvas/__init__.py", line 95, in get_display
        return Display()
      File "/home/linxiong/miniconda/lib/python3.8/site-packages/pyglet/canvas/xlib.py", line 123, in __init__
        raise NoSuchDisplayException('Cannot connect to "%s"' % name)
    pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"

    How can I solve it? I found some solutions via Google, but none of them work. Thanks!

    opened by bruinxiong 6
  • Training on Human3.6M


    Hi,

    Do you have any plan to release the training code for Human3.6M? I tried it myself, but the outputs have minor artifacts that are not present in your results.

    Do you have any idea of the reason? I used the official mask data from the Human3.6M dataset for training.

    opened by hongsukchoi 6
  • The MD5 values of the datasets


    Hi, I have recently downloaded the datasets from the given links. The zip files seem broken and I can't open them. Could you please provide the MD5 sums of the files?

    opened by wanzysky 6
  • About the SparseConvNet


    Hi! I wonder how the deformed mesh is represented with the structured latent codes, i.e., the input to the SparseConvNet. Could you release the supplementary material?

    opened by 07hyx06 6
  • ImportError:Imageio Pillow requires Pillow, not PIL!


    When I run "python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_view True num_render_views 144", there's an error "ImportError:Imageio Pillow requires Pillow, not PIL!" And I have installed Pillow.

    opened by ClimberY 5
  • Why isn't the camera origin in 'get_ray' of 'if_nerf_data_utils.py' equal to T?


    Hi, this is great and awesome work. I have a question about 'if_nerf_data_utils.py': the camera origin in 'get_ray' is calculated as -np.dot(R.T, T). I think this is the world origin in camera coordinates, and the camera origin in world coordinates should be T? I got confused by this; it would be a great help if you could answer. Thanks!

    opened by Yzmblog 5
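
    For reference: under the common extrinsics convention x_cam = R @ x_world + T, the camera center C in world coordinates must satisfy R @ C + T = 0, which gives C = -np.dot(R.T, T); T is instead where the world origin lands in camera coordinates. A minimal numpy check with made-up R and T:

    import numpy as np

    theta = np.pi / 6
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])  # example world-to-camera rotation
    T = np.array([0.1, -0.2, 3.0])                        # example translation
    C = -np.dot(R.T, T)                                   # candidate camera center (world)
    assert np.allclose(np.dot(R, C) + T, 0.0)             # center maps to the camera origin
    assert np.allclose(np.dot(R, np.zeros(3)) + T, T)     # world origin maps to T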
  • Unexpected results when training on a custom monocular video


    I want to process my own video with neuralbody, with the poses estimated by Octopus. But I got abnormal results. Could you give me some advice? :)

    opened by hzhao1997 5
  • About calculating the grid coordinate


    Hi @pengsida, thanks for your great work. I am a bit confused about the computation of the grid coordinates: the first step converts xyz to dhw, while the last step converts it back again. It would be great if you could explain this a bit.

    opened by chaneyddtt 0
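
    For context, the round trip is about matching spconv's depth-height-width index layout: points are reordered from xyz to dhw (i.e. z, y, x), quantized to integer voxel indices, and the inverse mapping recovers the (snapped) continuous coordinates. A rough sketch under these assumptions, with illustrative names rather than the repo's exact code:

    import numpy as np

    voxel_size = np.array([0.005, 0.005, 0.005])  # example: 5 mm voxels
    xyz = np.random.rand(100, 3)                  # points in SMPL/world space
    dhw = xyz[:, [2, 1, 0]]                       # reorder to (d, h, w) = (z, y, x)
    min_dhw = dhw.min(axis=0)
    coord = np.round((dhw - min_dhw) / voxel_size).astype(np.int32)  # voxel indices
    # Inverse mapping: voxel index -> snapped dhw -> xyz.
    dhw_back = coord * voxel_size + min_dhw
    xyz_back = dhw_back[:, [2, 1, 0]]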
  • How to run inference on my own human video?


    I see that every human video has its own .pth parameter file. If we want to reconstruct our own custom human video in 3D, do we need to convert the video into the People-Snapshot dataset format, process it with tools/process_snapshot.py, train to produce the .pth parameter file, and then run inference (python run.py --type visualize)? Is there a general 3D reconstruction model parameter file that can generate a 3D human model from an arbitrary human video?

    opened by wangdabee 1
  • The difference between the new params/vertices and the previous params/vertices


    Hi @pengsida, thanks for your great work. I have a question regarding the new params/vertices: what is the difference between the updated params/vertices and the previous ones?

    opened by chaneyddtt 0
  • Project dependencies may have API risk issues


    Hi, in neuralbody, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using:

    open3d>=0.9.0.0
    PyYAML==5.3.1
    tqdm==4.28.1
    tensorboardX==1.2
    termcolor==1.1.0
    scikit-image==0.14.2
    opencv-contrib-python>=3.4.2.17
    opencv-python>=3.4.2.17,<4
    imageio==2.3.0
    trimesh==3.8.15
    plyfile==0.6
    PyMCubes==0.1.0
    pyglet==1.4.0b1
    chumpy
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound, or *, introduce a risk of missing-API errors because the latest versions of the dependencies may remove some APIs.

    After further analysis, in this project the version constraint of the dependency tqdm can be changed to >=4.36.0,<=4.64.0; that of imageio to >=0.3.0,<=2.19.3; that of trimesh to >=1.14.18,<=2.36.6; and that of pyglet to >=1.3.0rc2,<=1.4.11.

    The above suggestions reduce dependency conflicts as much as possible while allowing versions as recent as possible without API-call errors in the project.
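
    Expressed as a requirements.txt fragment, the proposed ranges would read as follows (copied from the analysis above, not independently verified):

    tqdm>=4.36.0,<=4.64.0
    imageio>=0.3.0,<=2.19.3
    trimesh>=1.14.18,<=2.36.6
    pyglet>=1.3.0rc2,<=1.4.11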

    The project invokes the following methods from the constrained dependencies.

    The calling methods from the tqdm
    tqdm.tqdm
    
    The calling methods from the imageio
    imageio.imread
    
    The calling methods from the trimesh
    trimesh.sample.sample_surface_even
    trimesh.Trimesh.apply_scale
    trimesh.Trimesh.apply_translation
    trimesh.load
    
    The calling methods from the pyglet
    datetime.timedelta
    

    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
Owner: ZJU3DV

ZJU3DV is a research group at the State Key Lab of CAD&CG, Zhejiang University. We focus on research in 3D computer vision, SLAM, and AR.