Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility ICCV2021

Overview

Vis2Mesh

This is the official repository of the paper:

Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility

https://arxiv.org/abs/2108.08378

@misc{song2021vis2mesh,
      title={Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility}, 
      author={Shuang Song and Zhaopeng Cui and Rongjun Qin},
      year={2021},
      eprint={2108.08378},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Updates
  • 2021/9/6: Initialized the all-in-one project. This version only supports inference with our pre-trained weights. We will release a Dockerfile to ease deployment.
TODO
  • Ground truth generation and network training.
  • Evaluation scripts

Build With Docker (Recommended)

Install nvidia-docker2
# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Build docker image

docker build . -t vis2mesh

Build on Ubuntu

Please create a conda environment with PyTorch, then check out our setup script:

./setup_tools.sh

Usage

Get pretrained weights and examples
pip install gdown
./checkpoints/get_pretrained.sh
./example/get_example.sh
Run example

This is the main command for surface reconstruction; the result will be copied to $(CLOUDFILE)_vis2mesh.ply.

python inference.py example/example1.ply --cam cam0
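To reconstruct a whole folder of clouds, the command above can be wrapped in a small script. A minimal sketch (the helper names and directory layout are hypothetical; only inference.py, the --cam flag, and the $(CLOUDFILE)_vis2mesh.ply output convention come from this README):

```python
import subprocess
from pathlib import Path

def vis2mesh_cmd(cloud, cam="cam0"):
    """Build the inference command for a single point cloud."""
    return ["python", "inference.py", str(cloud), "--cam", cam]

def run_batch(cloud_dir, cam="cam0"):
    """Reconstruct every .ply in cloud_dir; return the expected output paths.

    Per this README, the result for each CLOUDFILE is copied to
    CLOUDFILE_vis2mesh.ply next to the input.
    """
    outputs = []
    for cloud in sorted(Path(cloud_dir).glob("*.ply")):
        subprocess.run(vis2mesh_cmd(cloud, cam), check=True)  # raise on failure
        outputs.append(cloud.with_name(cloud.name + "_vis2mesh.ply"))
    return outputs
```

For instance, run_batch("example") would process example/example1.ply and expect example/example1.ply_vis2mesh.ply as its output.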

We suggest using Docker, either in interactive mode or single-shot mode.

xhost +
name=vis2mesh
# Run in interactive mode
docker run -it \
--mount type=bind,source="$PWD/checkpoints",target=/workspace/checkpoints \
--mount type=bind,source="$PWD/example",target=/workspace/example \
--privileged \
--net=host \
-e NVIDIA_DRIVER_CAPABILITIES=all \
-e DISPLAY=unix$DISPLAY \
-v $XAUTH:/root/.Xauthority \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
--device=/dev/dri \
--gpus all $name

cd /workspace
python inference.py example/example1.ply --cam cam0

# Run with single shot call
docker run \
--mount type=bind,source="$PWD/checkpoints",target=/workspace/checkpoints \
--mount type=bind,source="$PWD/example",target=/workspace/example \
--privileged \
--net=host \
-e NVIDIA_DRIVER_CAPABILITIES=all \
-e DISPLAY=unix$DISPLAY \
-v $XAUTH:/root/.Xauthority \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
--device=/dev/dri \
--gpus all $name \
/workspace/inference.py example/example1.ply --cam cam0
Run with Customized Views

Run python inference.py example/example1.ply without the --cam flag and you can add virtual views interactively with the following GUI. Your views will be recorded in example/example1.ply_WORK/cam*.json.

Main View

Navigate in the 3D viewer and press [Space] to record the current view. Press [Q] to close the window and continue the meshing process.
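If you would rather script views than record them in the GUI, a cam file can in principle be generated programmatically. The sketch below is an assumption-heavy illustration: the field names (K, R, C, width, height) are guessed from the values vvtool prints when loading a cam file, not a documented schema, so inspect a GUI-recorded cam*.json and match its exact format before relying on this:

```python
import json

def make_view(K, R, C, width=256, height=256):
    """One virtual view: intrinsics K (3x3), rotation R (3x3), center C (3,).

    NOTE: field names here are hypothetical; compare against a cam*.json
    recorded by the GUI and adjust to the real schema.
    """
    return {"K": K, "R": R, "C": C, "width": width, "height": height}

# Example: a single view with intrinsics similar to those the vvtool log
# prints when loading cam0.json (f=150, principal point at 128, 128).
views = [
    make_view(
        K=[[150, 0, 128], [0, 150, 128], [0, 0, 1]],
        R=[[1, 0, 0], [0, 0, -1], [0, 1, 0]],
        C=[0.0, -30.0, -7.0],
    )
]

with open("cam_custom.json", "w") as f:
    json.dump(views, f, indent=2)
```

To use such a file, place it in the $(CLOUDFILE)_WORK folder under a name like cam1.json and pass --cam cam1, assuming the schema matches what the tools expect.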

Record Virtual Views

Comments
  • Training Pipeline Update

    Training Pipeline Update

    Hello,

    I'm working on a similar topic for my Master's thesis. The Vis2Mesh paper has been a very good reference for my work, and I would like to use this repo for it. Is there any chance the training pipeline will be uploaded or shared here soon? This would be really helpful for my work.

    Looking forward to your reply.

    Thank you in advance.

    opened by impaidk 9
  • Installation issue and inference running

    Installation issue and inference running

    Hi, I installed the repo on my computer (Ubuntu 20).

    Method 1: I followed all the steps. When it tried to build OpenMVS, I got a CMake error:

    [ 22%] Linking CXX static library ../../lib/libCommon.a
    [ 22%] Built target Common
    [100%] Linking C static library libzstd.a
    [100%] Built target libzstd_static
    [ 23%] No install step for 'zstd_ext'
    
    
    [ 25%] Completed 'zstd_ext'
    [ 25%] Built target zstd_ext
    Makefile:148: recipe for target 'all' failed
    
    

    I continued anyway and then launched the Docker container. When I execute the default example: python inference.py example/example1.ply

    After getting all views and pressing Q, nothing happens.

    #Available Toolset#
    vvtool: vvtool
    o3d_vvcreator.py: o3d_vvcreator.py
    ReconstructMesh: ReconstructMesh
    #Project Settings#
    Input Cloud: /workspace/example/example1.ply Exists: True
    Input Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Base Cam:  Exists: False
    Input Cam: /workspace/example/example1.ply_WORK/cam0.json Exists: False
    Work Folder: /workspace/example/example1.ply_WORK Exists: True
    Cam: cam0    ToolChain: NET.POINT_DELAY
    ToolChain Work Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0 Exists:True
    Output Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0/out Exists:True
    /workspace/tools/bin/o3d_vvcreator.py:72: MatplotlibDeprecationWarning: 
    The set_window_title function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use `.FigureManagerBase.set_window_title` or GUI-specific methods instead.
      fig.canvas.set_window_title(f"Num of Virtual Views: {len(camjson)}")
    /workspace/tools/bin/o3d_vvcreator.py:103: MatplotlibDeprecationWarning: 
    The set_window_title function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use `.FigureManagerBase.set_window_title` or GUI-specific methods instead.
      fig.canvas.set_window_title(f"Num of Virtual Views: {len(camjson)}")
    -- CamCreator Cmd:
    o3d_vvcreator.py --output_list=/workspace/example/example1.ply_WORK/cam0.json --width=512 --height=512 /workspace/example/example1.ply
    INFO:root:Valid Cams:31
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    libEGL warning: DRI2: failed to authenticate
      0%|                                                                                                                                                                               | 0/31 [00:00<?, ?it/s]/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py:106: UserWarning: 
    NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
    The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
    If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
    
      warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
    

    I think I am stuck here: https://github.com/GDAOSU/vis2mesh/blob/master/inference.py#L306

    How do I get the final mesh reconstruction? What am I missing? I tried all ways but couldn't succeed.

    opened by quicktwit 7
  • Error running example in docker 2

    Error running example in docker 2

    Tried to launch example in docker, but met the following error:

    #Available Toolset#
    vvtool: vvtool
    o3d_vvcreator.py: o3d_vvcreator.py
    ReconstructMesh: ReconstructMesh
    #Project Settings#
    Input Cloud: /workspace/example/example1.ply Exists: True
    Input Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Base Cam:  Exists: False
    Input Cam: /workspace/example/example1.ply_WORK/cam0.json Exists: True
    Work Folder: /workspace/example/example1.ply_WORK Exists: True
    Cam: cam0    ToolChain: NET.POINT_DELAY
    ToolChain Work Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0 Exists:True
    Output Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0/out Exists:True
    -- CamCreator Cmd:
    o3d_vvcreator.py --output_list=/workspace/example/example1.ply_WORK/cam0.json --width=512 --height=512 /workspace/example/example1.ply
    INFO:root:Valid Cams:104
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    0%| | 0/104 [00:00<?, ?it/s]/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448224956/work/c10/core/TensorImpl.h:1156.)
      return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104/104 [00:40<00:00, 2.54it/s]
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by ReconstructMesh)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    terminate called after throwing an instance of 'std::length_error'
      what():  basic_string::_M_replace_aux
    *** Generation Failed ***
    Process Done

    I googled "no version information available" and found that solution, but updating ghostscript had no effect.

    I read the previous issue about that error, but in my case the valid cams param is OK.

    opened by VitalyyBezuglyj 6
  • FileNotFoundError: [Errno 2] No such file or directory: 'ReconstructMesh': 'ReconstructMesh'

    FileNotFoundError: [Errno 2] No such file or directory: 'ReconstructMesh': 'ReconstructMesh'

    Hello, I had some errors while running the sample code: python inference.py example/example1.ply --cam cam0. I believe the pretrained weights and examples have been downloaded. Please help with this.

    Traceback (most recent call last):
      File "inference.py", line 381, in <module>
        main()
      File "inference.py", line 376, in main
        config=RunProcess(config)
      File "inference.py", line 318, in RunProcess
        p = subprocess.Popen(cmdArr, stdout=subprocess.PIPE)
      File "/opt/conda/lib/python3.7/subprocess.py", line 800, in __init__
        restore_signals, start_new_session)
      File "/opt/conda/lib/python3.7/subprocess.py", line 1551, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'ReconstructMesh': 'ReconstructMesh'

    opened by Alex-NKG 4
  • How to use the last docker command

    How to use the last docker command

    Hi,

    I want to ask whether the command in the image is necessary. If it is, how should I use it?

    I tried to enter it, but a "command not found" error is displayed. Sorry, I'm not good at Docker.

    (screenshot)

    opened by Bobkk-k 4
  • some problems happened in Get pretrained weights and examples part

    some problems happened in Get pretrained weights and examples part

    I have built with Docker. When I want to get the pretrained weights or examples, in other words when I run './checkpoints/get_pretrained.sh', something goes wrong. It shows 'warning.warn...' and 'traceback...'. Can you guide me to solve the error?

    opened by Bobkk-k 4
  • How to use cam*.json for self-defined cameras?

    How to use cam*.json for self-defined cameras?

    Environment and example: all installation is done, the environment is configured properly, and the demo (example1.ply) runs properly.

    I have tried your interactive mode for adding new virtual views through an Open3D GUI on Linux. That works well. But I expect a more automatic pipeline, such as writing cam1.json with scripts and using these JSON files for my own dataset. When I do so, I face

    Failed to find match for field 'y'.
    Failed to find match for field 'x'.
    *** Generation Failed ***
    

    Do you have any recommendation about how I should do to use self-defined cams?

    opened by zhu-yuefeng 2
  • Compiling Error with

    Compiling Error with "CUDA_CUDA_LIBRARY"

    I compiled on Ubuntu 18.04 without Docker.

    I met

    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    CUDA_CUDA_LIBRARY (ADVANCED)
        linked by target "MVS" in directory /home/zhangshuai/lastest/ODM/SuperBuild/src/openmvs/libs/MVS

    /usr/bin/ld: cannot find -lCUDA_CUDA_LIBRARY-NOTFOUND

    This issue #627 doesn't solve my problem.

    I am relatively new to CMake; does anyone know what went wrong?

    opened by XueCong2 2
  • vvtool fail to render

    vvtool fail to render

    python inference.py example/example1.ply --cam cam0
    #Available Toolset#
    vvtool: vvtool
    o3d_vvcreator.py: o3d_vvcreator.py
    ReconstructMesh: ReconstructMesh
    #Project Settings#
    Input Cloud: /workspace/example/example1.ply Exists: True
    Input Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Base Cam:  Exists: False
    Input Cam: /workspace/example/example1.ply_WORK/cam0.json Exists: True
    Work Folder: /workspace/example/example1.ply_WORK Exists: True
    Cam: cam0    ToolChain: NET.POINT_DELAY
    ToolChain Work Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0 Exists:True
    Output Folder: /workspace/example/example1.ply_WORK/VDVNet_cam0/out Exists:True
    -- CamCreator Cmd:
    o3d_vvcreator.py --output_list=/workspace/example/example1.ply_WORK/cam0.json --width=512 --height=512 /workspace/example/example1.ply
    INFO:root:Valid Cams:104
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    0%| | 0/104 [00:01<?, ?it/s]
    Traceback (most recent call last):
      File "tools/lib/network_predict.py", line 108, in <module>
        call_plugin(blockJson)
      File "tools/lib/network_predict.py", line 84, in call_plugin
        rawdep = readFlt(input_depth)
      File "/workspace/utils/dataset_util.py", line 8, in readFlt
        with open(path, 'rb') as f:
    FileNotFoundError: [Errno 2] No such file or directory: '/workspace/example/example1.ply_WORK/VDVNet_cam0/render/pt0.flt'
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    [CLOUD_BUNDLE] /workspace/example/example1.ply_WORK/VDVNet_cam0/render/pt0.uint not exists.
    Traceback (most recent call last):
      File "inference.py", line 382, in <module>
        main()
      File "inference.py", line 377, in main
        config=RunProcess(config)
      File "inference.py", line 319, in RunProcess
        p = subprocess.Popen(cmdArr, stdout=subprocess.PIPE)
      File "/opt/conda/lib/python3.7/subprocess.py", line 800, in __init__
        restore_signals, start_new_session)
      File "/opt/conda/lib/python3.7/subprocess.py", line 1551, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: 'ReconstructMesh': 'ReconstructMesh'

    And I found that vvtool is to blame:

    vvtool /workspace/example/example1.ply_WORK/VDVNet_cam0/render_seg.json
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    [2021-10-11 09:26:57.946] [info] Register Plugin: DEFENV success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: DEFCONTEXT success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: PRINT success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: GLOB success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: REGEX_REPLACE success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: GLRENDER success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: GTFILTER success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: RAYGTFILTER success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: CLOUD_BUNDLE success.
    [2021-10-11 09:26:57.946] [info] Register Plugin: NETWORKVISIBILITY success.
    [2021-10-11 09:26:57.946] [info] Plugins registered: 10 successed 0 failed
    [2021-10-11 09:26:57.946] [info] Input Json: /workspace/example/example1.ply_WORK/VDVNet_cam0/render_seg.json
    [2021-10-11 09:26:57.946] [info] === Initializing Plugins ===
    [2021-10-11 09:26:57.946] [info] === Input Process Units(1) ===
    [2021-10-11 09:26:57.946] [info] === Processing 1/1 ===
    Load Cam Info: Peek First Cam:
    K: 150 0 128 0 150 128 0 0 1
    R: 0.998805 0.0209822 -0.044135 -0.0162957 -0.708441 -0.705582 -0.0460717 0.705458 -0.707253
    C: -2.0875 -32.9513 -7.10067
    w 256 h 256
    Load cloud (+v 1999961)
    Load mesh (+v 0) (+f 0)
    Segmentation fault (core dumped)

    opened by zzttzz 2
  • Error running example in docker `/opt/conda/lib/libtiff.so.5: no version information available`

    Error running example in docker `/opt/conda/lib/libtiff.so.5: no version information available`

    [email protected]:/workspace# ./inference.py example/example1.ply           
    #Available Toolset#
    vvtool: vvtool
    o3d_vvcreator.py: o3d_vvcreator.py
    ReconstructMesh: ReconstructMesh
    #Project Settings#
    Input Cloud: /workspace/example/example1.ply Exists: True
    Input Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Input Textured Mesh:  Exists: False
    Base Cam:  Exists: False
    Input Cam: /workspace/example/example1.ply_WORK/cam1.json Exists: False
    Work Folder: /workspace/example/example1.ply_WORK Exists: True
    Cam: cam1    ToolChain: NET.POINT_DELAY
    ToolChain Work Folder: /workspace/example/example1.ply_WORK/VDVNet_cam1 Exists:True
    Output Folder: /workspace/example/example1.ply_WORK/VDVNet_cam1/out Exists:True
    INFO - 2021-09-08 09:47:51,090 - font_manager - generated new fontManager
    /workspace/tools/bin/o3d_vvcreator.py:68: MatplotlibDeprecationWarning: 
    The set_window_title function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use manager.set_window_title or GUI-specific methods instead.
      fig.canvas.set_window_title(f"Num of Virtual Views: {len(camjson)}")
    -- CamCreator Cmd:
    o3d_vvcreator.py --output_list=/workspace/example/example1.ply_WORK/cam1.json --width=512 --height=512 /workspace/example/example1.ply
    INFO:root:Valid Cams:0
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    libEGL warning: DRI2: failed to authenticate
    0it [00:00, ?it/s]
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libvtkIOImage-6.3.so.6.3)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    vvtool: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    libEGL warning: DRI2: failed to authenticate
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by ReconstructMesh)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.2)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/libgdal.so.20)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libpoppler.so.73)
    ReconstructMesh: /opt/conda/lib/libtiff.so.5: no version information available (required by /usr/lib/x86_64-linux-gnu/libgeotiff.so.2)
    *** Generation Failed *** 
    Process Done
    
    opened by satyajit-ink 2
  • openmvs texturemesh dev

    openmvs texturemesh dev

    You mentioned not implementing the texturing part despite OpenMVS's capability. I am trying to use OpenMVS to recover texture for a mesh reconstructed with vis2mesh. The obstacle is that the .mva format seems to be very specific, and I failed to find a proper way to feed it into OpenMVS. I think the .mva file in the .ply_WORK dir should serve a similar role to a .mvs file. Is that correct?

    opened by zhu-yuefeng 1
  • Environment Setup errors

    Environment Setup errors

    Dear Authors,

    I have been trying to create the environment with Docker but am running into some trouble. When I run the docker build command, the build crashes during the vvmesh and OpenMVS build process. Here is a screenshot of the error (it occurs while building vvmesh):

    (screenshot: make_error)

    Hence, I commented out the vvmesh and OpenMVS build scripts, built the Docker image and container, and am trying to run the scripts myself in the container.

    The OpenMVS script did not work; however, I was able to install OpenMVS from their GitHub installation guide. The issue comes with vvmesh, at the cmake phase. I have provided a screenshot of the error below. (screenshot: vvmesh_fmt_error)

    would you happen to know how to overcome this issue?

    opened by karpat2022 3
  • No such file or directory: pt0.flt

    No such file or directory: pt0.flt

    Hi sxsong,

    I have run into another error. When I run the example, an error indicates that a file named pt0.flt does not exist. I looked in /example/example1.ply_WORK/VDVNet_cam0/render and found that the folder was empty. Could you tell me where I should look for the missing file?

    Thank you, Bob

    opened by Bobkk-k 6
  • no such file or directory: vvtool

    no such file or directory: vvtool

    When I run the command 'python inference.py example/example1.ply --cam cam0', a new error arises. It shows "No such file or directory: 'vvtool'". Where is vvtool? Is there some preparation I didn't do, or did I put some files in the wrong place? By the way, I have downloaded the above-mentioned file named 'VDVNet_CascadePPP_epoch30.pth' and put it in the checkpoints folder. Is that right? Can you tell me what to do? Thank you! (screenshot)

    opened by Bobkk-k 7
  • python inference.py example/example1.ply --cam cam0

    python inference.py example/example1.ply --cam cam0

    Load Cam Info: Peek First Cam:

    K: 150 0 128 0 150 128 0 0 1
    R: 0.998805 0.0209822 -0.044135 -0.0162957 -0.708441 -0.705582 -0.0460717 0.705458 -0.707253
    C: -2.0875 -32.9513 -7.10067
    w 256 h 256
    Load cloud (+v 1999961)
    Load mesh (+v 0) (+f 0)
    Segmentation fault (core dumped)

    opened by shangfenghuang 3