Stable View Synthesis

Overview

Code repository for "Stable View Synthesis".

Setup

Install the following Python packages in your Python environment:

- numpy (1.19.1)
- scikit-image (0.15.0)
- pillow (7.2.0)
- torch
- torchvision (0.7.0)
- torch-scatter (1.6)
- torch-sparse (1.6)
- torch-geometric (1.6)
- open3d (0.11)
- python-opencv
- matplotlib (3.2.x)
- pandas (1.0.x)
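Some of these packages install under a different name than they import (scikit-image imports as skimage, OpenCV as cv2). A quick, hedged sanity check for the environment — the import names below are assumptions derived from the list above — can be run before building the extensions:

```python
import importlib.util

# Import names corresponding to the package list above (assumed mapping;
# adjust if your versions expose different top-level modules).
REQUIRED = ["numpy", "skimage", "PIL", "torch", "torchvision",
            "torch_scatter", "torch_sparse", "torch_geometric",
            "open3d", "cv2", "matplotlib", "pandas"]

def missing_packages(names):
    """Return the module names that cannot be found in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    print("missing:", missing_packages(REQUIRED))
```

If the printed list is non-empty, install the missing packages before proceeding.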

To compile the Python extensions you will also need Eigen and cmake.

Clone the repository and initialize the submodule

git clone https://github.com/intel-isl/StableViewSynthesis.git
cd StableViewSynthesis
git submodule update --init --recursive

Finally, build the Python extensions

cd ext/preprocess
cmake -DCMAKE_BUILD_TYPE=Release .
make 

cd ../mytorch
python setup.py build_ext --inplace

Tested with Ubuntu 18.04 and macOS Catalina.

Run Stable View Synthesis

Make sure you adapt the paths in config.py to point to the downloaded data!
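The exact variable names in config.py depend on the repository version, but a fail-fast check along these lines (the path and names below are placeholders, not the actual config.py contents) turns a silent failure deep in a data loader into an immediate, readable error:

```python
from pathlib import Path

# Placeholder path -- point this at wherever you unpacked the
# FreeViewSynthesis / Tanks and Temples data. The real variable names
# in config.py may differ.
tat_root = Path("/path/to/ibr3d_tat")

def require_dir(p: Path) -> Path:
    """Raise early with a clear message if a configured data directory is missing."""
    if not p.is_dir():
        raise FileNotFoundError(f"data directory not found: {p}")
    return p
```

Calling require_dir(tat_root) at import time makes a misconfigured path obvious before any experiment starts.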

cd experiments

Then run the evaluation via

python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd eval --iter last --eval-dsets tat-subseq

This will run the pretrained network on the four Tanks and Temples sequences.

To train the network from scratch you can run

python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd retrain

Data

See FreeViewSynthesis.

Citation

Please cite our paper if you find this work useful.

@inproceedings{Riegler2021SVS,
  title={Stable View Synthesis},
  author={Riegler, Gernot and Koltun, Vladlen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Video

Stable View Synthesis Video

Comments
  • An error occurred while running setup.py

    First of all, thank you for sharing this project code.


    python setup.py build_ext --inplace

    generate generated/map_to_list_nn_cpu.cpp generate generated/map_to_list_nn_cuda.cpp generate generated/map_to_list_nn_kernel.cu generate generated/map_to_list_bl_cpu.cpp generate generated/map_to_list_bl_cuda.cpp generate generated/map_to_list_bl_kernel.cu generate generated/map_to_list_bl_seq_cpu.cpp generate generated/map_to_list_bl_seq_cuda.cpp generate generated/map_to_list_bl_seq_kernel.cu generate generated/list_to_map_cpu.cpp generate generated/list_to_map_cuda.cpp generate generated/list_to_map_kernel.cu generate generated/ext_cpu.cpp generate generated/ext_cuda.cpp generate generated/ext_kernel.cu generate generated_ext.py running build_ext building 'ext_cpu' extension creating /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build creating /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8 Emitting ninja build file /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) [1/1] c++ -MMD -MF /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_cpu.o.d -pthread -B /home/vig-titan2/anaconda3/envs/SVS/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/TH -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/THC -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/generated -I/home/vig-titan2/anaconda3/envs/SVS/include/python3.8 -c -c /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_cpu.cpp -o /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=ext_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ In file included from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, from 
/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/extension.h:4, from /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include/torch_common.h:3, from /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_cpu.cpp:1: /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas] #pragma omp parallel for if ((end - begin) >= grain_size)

    creating build/lib.linux-x86_64-3.8 g++ -pthread -shared -B /home/vig-titan2/anaconda3/envs/SVS/compiler_compat -L/home/vig-titan2/anaconda3/envs/SVS/lib -Wl,-rpath=/home/vig-titan2/anaconda3/envs/SVS/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_cpu.o -L/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-3.8/ext_cpu.cpython-38-x86_64-linux-gnu.so building 'ext_cuda' extension Emitting ninja build file /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] c++ -MMD -MF /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_cuda.o.d -pthread -B /home/vig-titan2/anaconda3/envs/SVS/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/TH -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/generated -I/home/vig-titan2/anaconda3/envs/SVS/include/python3.8 -c -c /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_cuda.cpp -o /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_cuda.o 
-DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=ext_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ In file included from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/Parallel.h:149:0, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, from /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/extension.h:4, from /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include/torch_common.h:3, from /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_cuda.cpp:1: /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas] #pragma omp parallel for if ((end - begin) >= grain_size)

    [2/2] /usr/local/cuda-10.1/bin/nvcc -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/TH -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/generated -I/home/vig-titan2/anaconda3/envs/SVS/include/python3.8 -c -c /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_kernel.cu -o /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -arch=sm_30 -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_35,code=sm_35 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=ext_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 FAILED: /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_kernel.o /usr/local/cuda-10.1/bin/nvcc -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/TH -I/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch -I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/include 
-I/home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/generated -I/home/vig-titan2/anaconda3/envs/SVS/include/python3.8 -c -c /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/ext_kernel.cu -o /home/vig-titan2/PycharmProjects/SVS/StableViewSynthesis/ext/mytorch/build/temp.linux-x86_64-3.8/ext_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -arch=sm_30 -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_35,code=sm_35 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=ext_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/record_function.h(18): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(97): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(126): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(187): warning: statement is unreachable

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/record_function.h(18): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(97): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(126): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(187): warning: statement is unreachable

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/ATen/record_function.h(18): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(187): warning: statement is unreachable

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(97): warning: attribute "visibility" does not apply here

    /home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/include/torch/csrc/autograd/profiler.h(126): warning: attribute "visibility" does not apply here

    /usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’: /usr/include/c++/7/bits/basic_string.tcc:578:28: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ /usr/include/c++/7/bits/basic_string.h:5042:20: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ /usr/include/c++/7/bits/basic_string.h:5063:24: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ /usr/include/c++/7/bits/basic_string.tcc:656:134: required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’ /usr/include/c++/7/bits/basic_string.h:6688:95: required from here /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void 
std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object __p->_M_set_sharable(); ~~~~~~~~~^~ /usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’: /usr/include/c++/7/bits/basic_string.tcc:578:28: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ /usr/include/c++/7/bits/basic_string.h:5042:20: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ /usr/include/c++/7/bits/basic_string.h:5063:24: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ /usr/include/c++/7/bits/basic_string.tcc:656:134: required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, 
_Traits, _Alloc>::size_type = long unsigned int]’ /usr/include/c++/7/bits/basic_string.h:6693:95: required from here /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1509, in _run_ninja_build subprocess.run( File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/subprocess.py", line 512, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "setup.py", line 226, in setup( File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/setuptools/init.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions build_ext.build_extensions(self) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions self._build_extensions_serial() File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial self.build_extension(ext) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 196, in build_extension _build_ext.build_extension(self, ext) File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension objects = self.compiler.compile(sources, File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 469, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1228, in _write_ninja_file_and_compile_objects _run_ninja_build( File 
"/home/vig-titan2/anaconda3/envs/SVS/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build raise RuntimeError(message) RuntimeError: Error compiling objects for extension


    This is Ubuntu 18.04 with CUDA 10.1 and PyTorch 1.6. I followed the instructions, but I got this error. Can someone help me?
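One thing worth checking in a log like this (an observation, not a confirmed fix): the failing nvcc invocation targets only sm_30/sm_35, very old GPU architectures; PyTorch C++/CUDA extensions honor the TORCH_CUDA_ARCH_LIST environment variable, so setting it to match your actual GPU before rebuilding changes these flags. A small helper to pull the compute targets out of a build log:

```python
import re

def archs_in_log(log_text: str):
    """Extract the sorted, de-duplicated sm_XX compute targets
    found in an nvcc command line or build log."""
    return sorted(set(re.findall(r"sm_(\d+)", log_text)))

# Example: the failing command above contains
# "-arch=sm_30 -gencode=arch=compute_35,code=sm_35"
```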

    opened by bring728 6
  • Customized Dataset for Stable View Synthesis & CUDA Error

    Thank you very much for publishing "Stable View Synthesis"; it seems to be a significant photorealistic approach to novel view synthesis! Could you please add detailed instructions on how to build your own customized dataset to your GitHub page https://github.com/intel-isl/StableViewSynthesis?

    Besides, I am interested in the following questions:

    1. Can you please tell us how you calculated the depth maps in your work?
    2. When I run the training process on my own data, this error is raised: "invalid configuration argument in /notebook/SVS/StableViewSynthesis/ext/mytorch/include/common_cuda.h at 171". What might be the reason for this?

    Thank you in advance!

    opened by MatveyMor 5
  • How to run with own data?

    Hi, and thank you for the release. I successfully installed it and prepared a dataset with data/create_data_own.py. When I run the pretrained network as

    python3 exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd eval --iter last --eval-dsets mydataset

    the script silently exits after these messages:

    [2021-04-22/15:51/INFO/modules] [NET][EncNet] resunet3.16
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] point_edges_mode=penone
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] point_aux_data=dirs
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] point_avg_mode=avg
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] Seq 9 nets, nets_residual=True
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   Unet(in_channels=16, enc_channels=[16, 32, 64, 128, 128], dec_channels=[128, 64, 32, 16], n_conv=2)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] Single gnn
    [2021-04-22/15:51/INFO/modules] [NET][RefNet]   MLPDir(in_channels=16, hidden_channels=64, n_mods=3, out_channels=16, aggr=mean)
    [2021-04-22/15:51/INFO/modules] [NET][RefNet] out_conv(16, 3)
    [2021-04-22/15:51/INFO/mytorch] [EVAL] loading net for iter last: experiments/tat-wo-val_bs1_nbs3_rpointdir_s0.25_resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16_vgg/net_0000000000000000.params
    

    It seems like it doesn't find my dataset. Any advice?
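For silent exits like this, one low-tech diagnostic (the expected file name below is an assumption based on the preprocessed FreeViewSynthesis data layout, not a confirmed requirement) is to verify that each scene directory actually contains the files the loader expects:

```python
from pathlib import Path

def scenes_missing_files(root, expected=("counts.npy",)):
    """Map scene directory name -> list of expected files that are absent.

    The default expected file name is an assumption based on the
    preprocessed FreeViewSynthesis data; adjust it to your layout.
    """
    missing = {}
    for scene in sorted(Path(root).iterdir()):
        if scene.is_dir():
            absent = [f for f in expected if not (scene / f).exists()]
            if absent:
                missing[scene.name] = absent
    return missing
```

An empty result means every scene directory under the given root has the expected files.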

    opened by vuoriov4 4
  • some questions about test process

    Hi @griegler, thanks for your great work, I have some questions about the test process, hope you can help.

    First, I use interpolate_waypoints in create_custom_track.py to get a new, continuous camera path.

    Second, I use this newly generated camera path and the mesh reconstructed by COLMAP to render the corresponding depth maps.

    Third, based on the new depth maps, I use count_nbs to compute the counts.npy for each new depth map (the tgt parameters are the newly generated camera path and depth maps; the src parameters are the original camera path and depth maps). [I notice that although your tat_eval_sets are not used in training, each mesh (e.g., Truck) is reconstructed from the Truck images, and you then choose some images from the Truck to test. So it does not generate a new view image; it is more like reconstructing a known image. I have tested on the provided datasets, and the generated images have counterparts in the original image files. I wonder if I have some misunderstanding?]

    Last, I use the original images, the newly generated depth maps, and the newly generated counts.npy to form a new test dataset, modify tat_tracks to contain this data, and then run exp.py.

    I have visualized the generated camera path and inspected the newly rendered depth maps; everything looks normal, but the rendered new view images look bad. I can't figure out where I made a mistake. I hope you can give some advice, thanks~

    Btw, I also tried the above process with the original images, depth maps, and counts.npy, and the generated image looks normal. But since this image is part of the original images, it seems that testing on an image that was used to reconstruct the mesh works, while testing on a view generated from a new depth map and camera path produces bad images.

    opened by visonpon 4
  • 'pybind11' not exist..

    cmake -DCMAKE_BUILD_TYPE=Release .

    -- The C compiler identification is GNU 7.5.0 -- The CXX compiler identification is GNU 7.5.0 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1") -- Checking for module 'eigen3' -- Found eigen3, version 3.3.4 CMake Error at CMakeLists.txt:10 (add_subdirectory): add_subdirectory given source "pybind11" which is not an existing directory.

    CMake Error at CMakeLists.txt:12 (pybind11_add_module): Unknown CMake command "pybind11_add_module".

    -- Configuring incomplete, errors occurred!

    On Ubuntu 18.04 I got this error message during CMake. Can you help me?

    opened by bring728 3
  • Question about LPIPS percentage point metric

    Dear author,

    I have read "Stable View Synthesis" and wonder what the LPIPS percentage point metric is.

    In general, LPIPS measures perceptual distance, and even your other paper "Free View Synthesis" reports plain LPIPS, not LPIPS percentage points.

    Can you explain how the LPIPS percentage point is calculated? And if the LPIPS percentage point is relative to the best prior state of the art, why is a lower value better than a higher one?

    Thank you for your great work!

    Best regards, YJHong.

    opened by yjhong89 2
  • Segmentation fault on retrain the network or run evaluation

    Hi Gernot,

    Thank you for the great work!

    I was trying to get the script exp.py running for both evaluation and retraining. However, I consistently get a segmentation fault like so:

    python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd retrain
    .
    .
    .
    .
    [2021-05-31/09:52/INFO/mytorch] Setup training data loader and other stuff                                                                                                            
    invalid device function in /home/fyusion/Documents/projects/StableViewSynthesis/ext/mytorch/include/common_cuda.h at 171                                                             
    [1]    633554 segmentation fault (core dumped)  python exp.py --net  --cmd retrain   
    

    Some more details of my system installation:

    python -c 'from torch.utils.collect_env import main; main()'
    Collecting environment information...
    PyTorch version: 1.6.0
    Is debug build: No
    CUDA used to build PyTorch: 10.2
    
    OS: Ubuntu 20.04.2 LTS
    GCC version: (Ubuntu 7.5.0-6ubuntu2) 7.5.0
    CMake version: version 3.16.3
    
    Python version: 3.6
    Is CUDA available: Yes
    CUDA runtime version: 10.0.130
    GPU models and configuration:
    GPU 0: TITAN V
    GPU 1: GeForce RTX 2080 Ti
    GPU 2: GeForce RTX 2080 Ti
    
    Nvidia driver version: 460.73.01
    cuDNN version: /usr/lib/cuda-10.0/lib64/libcudnn.so.7.4.1
    
    Versions of relevant libraries:
    [pip3] numpy==1.19.2
    [pip3] torch==1.6.0
    [pip3] torch-geometric==1.7.0
    [pip3] torch-scatter==2.0.6
    [pip3] torch-sparse==0.6.9
    [pip3] torchvision==0.7.0
    [conda] blas                      1.0                         mkl
    [conda] cudatoolkit               10.2.89              hfd86e86_1
    [conda] mkl                       2020.2                      256
    [conda] mkl-service               2.3.0            py36he8ac12f_0
    [conda] mkl_fft                   1.3.0            py36h54f3939_0
    [conda] mkl_random                1.1.1            py36h0573a6f_0
    [conda] numpy                     1.19.2           py36h54aff64_0
    [conda] numpy-base                1.19.2           py36hfa32c7d_0
    [conda] pytorch                   1.6.0           py3.6_cuda10.2.89_cudnn7.6.5_0    pytorch
    [conda] torch-geometric           1.7.0                    pypi_0    pypi
    [conda] torch-scatter             2.0.6                    pypi_0    pypi
    [conda] torch-sparse              0.6.9                    pypi_0    pypi
    [conda] torchvision               0.7.0                py36_cu102    pytorch
    
    opened by asharma-fy 2
  • Can not find counts.npy

    Hello, I am testing your model. However, it seems I cannot find the file counts.npy. I downloaded the preprocessed data from FVS, and I cannot find it for any scene there either. Please help!

    python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd eval --iter last --eval-dsets tat-subseq
    [2021-03-18/19:06/INFO/mytorch] Set seed to 42
    [2021-03-18/19:06/INFO/mytorch] ================================================================================
    [2021-03-18/19:06/INFO/mytorch] Start cmd "eval": tat-wo-val_bs1_nbs3_rpointdir_s0.25_resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16_vgg
    [2021-03-18/19:06/INFO/mytorch] 2021-03-18 19:06:41
    [2021-03-18/19:06/INFO/mytorch] host: phong-Server
    [2021-03-18/19:06/INFO/mytorch] --------------------------------------------------------------------------------
    [2021-03-18/19:06/INFO/mytorch] worker env:
        experiments_root: experiments
        experiment_name: tat-wo-val_bs1_nbs3_rpointdir_s0.25_resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16_vgg
        n_train_iters: -65536
        seed: 42
        train_batch_size: 1
        train_batch_acc_steps: 1
        eval_batch_size: 1
        num_workers: 6
        save_frequency: <co.mytorch.Frequency object at 0x7fd0cb475970>
        eval_frequency: <co.mytorch.Frequency object at 0x7fd0cb4755e0>
        train_device: cuda:0
        eval_device: cuda:0
        clip_gradient_value: None
        clip_gradient_norm: None
        empty_cache_per_batch: False
        log_debug: []
        train_iter_messages: []
        stopwatch: 
        train_dsets: ['tat-wo-val']
        eval_dsets: ['tat-subseq']
        train_n_nbs: 3
        train_src_mode: image
        train_nbs_mode: argmax
        train_scale: 0.25
        eval_scale: 0.5
        invalid_depth: 1000000000.0
        point_aux_data: ['dirs']
        point_edges_mode: penone
        eval_n_max_sources: 5
        train_rank_mode: pointdir
        eval_rank_mode: pointdir
        train_loss: VGGPerceptualLoss(
      (vgg): Sequential(
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace=True)
        (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (6): ReLU(inplace=True)
        (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (8): ReLU(inplace=True)
        (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (11): ReLU(inplace=True)
        (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): ReLU(inplace=True)
        (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (15): ReLU(inplace=True)
        (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (17): ReLU(inplace=True)
        (18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (20): ReLU(inplace=True)
        (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (22): ReLU(inplace=True)
        (23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (24): ReLU(inplace=True)
        (25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (26): ReLU(inplace=True)
        (27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (29): ReLU(inplace=True)
        (30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (31): ReLU(inplace=True)
        (32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (33): ReLU(inplace=True)
        (34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (35): ReLU(inplace=True)
        (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      )
    )
        eval_loss: L1Loss()
        exp_out_root: experiments/tat-wo-val_bs1_nbs3_rpointdir_s0.25_resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16_vgg
        db_path: experiments/tat-wo-val_bs1_nbs3_rpointdir_s0.25_resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16_vgg/exp.phong-Server.db
        db_logger: <co.sqlite.Logger object at 0x7fd0cb475910>
    [2021-03-18/19:06/INFO/mytorch] ================================================================================
    [2021-03-18/19:06/INFO/exp] Create eval datasets
    [2021-03-18/19:06/INFO/exp]   create dataset for tat_subseq_training_Truck
    Traceback (most recent call last):
      File "exp.py", line 945, in <module>
        worker.do(args, worker_objects)
      File "../co/mytorch.py", line 442, in do
        self.do_cmd(args, worker_objects)
      File "../co/mytorch.py", line 429, in do_cmd
        self.eval_iters(
      File "../co/mytorch.py", line 604, in eval_iters
        eval_sets = self.get_eval_sets()
      File "exp.py", line 327, in get_eval_sets
        self.get_eval_set_tat(
      File "exp.py", line 252, in get_eval_set_tat
        dset = self.get_dataset(
      File "exp.py", line 133, in get_dataset
        counts = np.load(ibr_dir / "counts.npy")
      File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/numpy/lib/npyio.py", line 416, in load
        fid = stack.enter_context(open(os_fspath(file), "rb"))
    FileNotFoundError: [Errno 2] No such file or directory: '/home/phong/data/Work/Paper3/Code/FreeViewSynthesis/ibr3d_tat/training/Truck/dense/ibr3d_pw_0.50/counts.npy'
    
    
    
    opened by phongnhhn92 2
  • Failure to import torch-scatter

    It seems that torch_scatter expects a fairly old version of PyTorch; is that expected?

    (stableviewsynthesis) [gkopanas@nefgpu34 experiments]$ python exp.py --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd eval --iter last --eval-dsets tat-subseq
    Traceback (most recent call last):
      File "/home/gkopanas/.conda/envs/stableviewsynthesis/lib/python3.7/site-packages/torch_scatter/__init__.py", line 13, in <module>
        library, [osp.dirname(__file__)]).origin)
      File "/home/gkopanas/.conda/envs/stableviewsynthesis/lib/python3.7/site-packages/torch/_ops.py", line 104, in load_library
        ctypes.CDLL(path)
      File "/home/gkopanas/.conda/envs/stableviewsynthesis/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: /home/gkopanas/.conda/envs/stableviewsynthesis/lib/python3.7/site-packages/torch_scatter/_version.so: undefined symbol: _ZN3c1017RegisterOperatorsC1Ev
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "exp.py", line 10, in <module>
        import modules
      File "/home/gkopanas/StableViewSynthesis/experiments/modules.py", line 8, in <module>
        import torch_scatter
      File "/home/gkopanas/.conda/envs/stableviewsynthesis/lib/python3.7/site-packages/torch_scatter/__init__.py", line 19, in <module>
        f'Expected PyTorch version {t_major}.{t_minor} but found '
    RuntimeError: Expected PyTorch version 1.4 but found version 1.8.
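The error above comes from a version check torch_scatter performs at import time: its compiled binaries are built against a specific PyTorch major.minor release and refuse to load under a different one. A minimal illustrative sketch of that check (not the actual torch_scatter source):

```python
# Illustrative sketch: torch-scatter wheels are compiled against one
# PyTorch major.minor version; loading under a different one fails,
# so the installed wheel must match the installed torch.
def versions_compatible(built_for: str, installed: str) -> bool:
    return built_for.split(".")[:2] == installed.split(".")[:2]

print(versions_compatible("1.4", "1.8.0"))  # prints False: the mismatch reported above
```

The fix is to reinstall torch-scatter (and torch-sparse) from wheels built for the PyTorch and CUDA versions actually in the environment.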
    

    my conda packages are the following:

    # packages in environment at /home/gkopanas/.conda/envs/stableviewsynthesis:
    #
    _libgcc_mutex             0.1                        main
    argon2-cffi               20.1.0           py37h27cfd23_1
    ase                       3.21.1             pyhd8ed1ab_0    conda-forge
    async_generator           1.10               pyhd3eb1b0_0
    attrs                     20.3.0             pyhd3eb1b0_0
    backcall                  0.2.0              pyhd3eb1b0_0
    blas                      1.0                         mkl
    bleach                    3.3.0              pyhd3eb1b0_0
    blosc                     1.21.0               h8c45485_0
    brotli                    1.0.9                he6710b0_2
    brotlipy                  0.7.0           py37hb5d75c8_1001    conda-forge
    brunsli                   0.1                  h2531618_0
    bzip2                     1.0.8                h7b6447c_0
    ca-certificates           2021.1.19            h06a4308_1
    cached-property           1.5.2                hd8ed1ab_1    conda-forge
    cached_property           1.5.2              pyha770c72_1    conda-forge
    certifi                   2020.12.5        py37h06a4308_0
    cffi                      1.14.5           py37h261ae71_0
    chardet                   4.0.0            py37h89c1867_1    conda-forge
    charls                    2.1.0                he6710b0_2
    click                     7.1.2              pyh9f0ad1d_0    conda-forge
    cloudpickle               1.6.0                      py_0
    cryptography              3.4.6            py37h5d9358c_0    conda-forge
    cudatoolkit               10.2.89              hfd86e86_1
    cycler                    0.10.0                     py_2    conda-forge
    cytoolz                   0.11.0           py37h7b6447c_0
    dask-core                 2021.3.0           pyhd3eb1b0_0
    dbus                      1.13.18              hb2f20db_0
    decorator                 4.4.2                      py_0    conda-forge
    defusedxml                0.7.1              pyhd3eb1b0_0
    entrypoints               0.3                      py37_0
    expat                     2.2.10               he6710b0_2
    ffmpeg                    4.3                  hf484d3e_0    pytorch
    flask                     1.1.2              pyh9f0ad1d_0    conda-forge
    fontconfig                2.13.1               h6c09931_0
    freetype                  2.10.4               h5ab3b9f_0
    giflib                    5.1.4                h14c3975_1
    glib                      2.67.4               h36276a3_1
    gmp                       6.2.1                h2531618_2
    gnutls                    3.6.5             h71b1129_1002
    googledrivedownloader     0.4                pyh9f0ad1d_0    conda-forge
    gst-plugins-base          1.14.0               h8213a91_2
    gstreamer                 1.14.0               h28cd5cc_2
    h5py                      3.1.0           nompi_py37h1e651dc_100    conda-forge
    hdf5                      1.10.6          nompi_h3c11f04_101    conda-forge
    html5lib                  1.1                pyh9f0ad1d_0    conda-forge
    icu                       58.2                 he6710b0_3
    idna                      2.10               pyh9f0ad1d_0    conda-forge
    imagecodecs               2021.1.11        py37h581e88b_1
    imageio                   2.9.0                      py_0
    importlib-metadata        2.0.0                      py_1
    importlib_metadata        2.0.0                         1
    intel-openmp              2020.2                      254
    ipykernel                 5.3.4            py37h5ca1d4c_0
    ipython                   7.21.0           py37hb070fc8_0
    ipython_genutils          0.2.0              pyhd3eb1b0_1
    ipywidgets                7.6.3              pyhd3eb1b0_1
    isodate                   0.6.0                      py_1    conda-forge
    itsdangerous              1.1.0                      py_0    conda-forge
    jedi                      0.17.0                   py37_0
    jinja2                    2.11.3             pyh44b312d_0    conda-forge
    joblib                    1.0.1              pyhd8ed1ab_0    conda-forge
    jpeg                      9b                   h024ee3a_2
    jsonschema                3.2.0                      py_2
    jupyter_client            6.1.7                      py_0
    jupyter_core              4.7.1            py37h06a4308_0
    jupyterlab_pygments       0.1.2                      py_0
    jupyterlab_widgets        1.0.0              pyhd3eb1b0_1
    jxrlib                    1.1                  h7b6447c_2
    keepalive                 0.5              py37h89c1867_5    conda-forge
    kiwisolver                1.3.1            py37hc928c03_0    conda-forge
    lame                      3.100                h7b6447c_0
    lcms2                     2.11                 h396b838_0
    ld_impl_linux-64          2.33.1               h53a641e_7
    lerc                      2.2.1                h2531618_0
    libaec                    1.0.4                he6710b0_1
    libblas                   3.9.0                8_openblas    conda-forge
    libcblas                  3.9.0                8_openblas    conda-forge
    libdeflate                1.7                  h27cfd23_5
    libedit                   3.1.20191231         h14c3975_1
    libffi                    3.3                  he6710b0_2
    libgcc-ng                 9.1.0                hdf63c60_0
    libgfortran-ng            7.3.0                hdf63c60_0
    libgfortran4              7.5.0               h14aa051_18    conda-forge
    libiconv                  1.15                 h63c8f33_5
    libllvm10                 10.0.1               he513fc3_3    conda-forge
    libopenblas               0.3.12          pthreads_hb3c22a3_1    conda-forge
    libpng                    1.6.37               hbc83047_0
    libsodium                 1.0.18               h7b6447c_0
    libstdcxx-ng              9.1.0                hdf63c60_0
    libtiff                   4.2.0                h3942068_0
    libuuid                   1.0.3                h1bed415_2
    libuv                     1.40.0               h7b6447c_0
    libwebp                   1.0.1                h8e7db2f_0
    libwebp-base              1.2.0                h27cfd23_0
    libxcb                    1.14                 h7b6447c_0
    libxml2                   2.9.10               hb55368b_3
    libzopfli                 1.0.3                he6710b0_0
    llvmlite                  0.34.0           py37h5202443_2    conda-forge
    lz4-c                     1.9.3                h2531618_0
    markupsafe                1.1.1            py37hb5d75c8_2    conda-forge
    matplotlib                3.3.4            py37h06a4308_0
    matplotlib-base           3.3.4            py37h62a2d02_0
    mistune                   0.8.4           py37h14c3975_1001
    mkl                       2020.2                      256
    mkl-service               2.3.0            py37he8ac12f_0
    mkl_fft                   1.3.0            py37h54f3939_0
    mkl_random                1.1.1            py37h0573a6f_0
    nbclient                  0.5.3              pyhd3eb1b0_0
    nbconvert                 6.0.7                    py37_0
    nbformat                  5.1.2              pyhd3eb1b0_1
    ncurses                   6.2                  he6710b0_1
    nest-asyncio              1.5.1              pyhd3eb1b0_0
    nettle                    3.4.1                hbb512f6_0
    networkx                  2.5                        py_0    conda-forge
    ninja                     1.10.2           py37hff7bd54_0
    notebook                  6.2.0            py37h06a4308_0
    numba                     0.51.2           py37h9fdb41a_0    conda-forge
    numpy                     1.19.2           py37h54aff64_0
    numpy-base                1.19.2           py37hfa32c7d_0
    olefile                   0.46                       py_0
    open3d                    0.11.2                   py37_0    open3d-admin
    openh264                  2.1.0                hd408876_0
    openjpeg                  2.3.0                h05c96fa_1
    openssl                   1.1.1j               h27cfd23_0
    packaging                 20.9               pyhd3eb1b0_0
    pandas                    1.2.3            py37ha9443f7_0
    pandoc                    2.11                 hb0f4dca_0
    pandocfilters             1.4.3            py37h06a4308_1
    parso                     0.8.1              pyhd3eb1b0_0
    pcre                      8.44                 he6710b0_0
    pexpect                   4.8.0              pyhd3eb1b0_3
    pickleshare               0.7.5           pyhd3eb1b0_1003
    pillow                    8.1.2            py37he98fc37_0
    pip                       21.0.1           py37h06a4308_0
    plyfile                   0.7.3              pyh44b312d_0    conda-forge
    prometheus_client         0.9.0              pyhd3eb1b0_0
    prompt-toolkit            3.0.8                      py_0
    ptyprocess                0.7.0              pyhd3eb1b0_2
    pycparser                 2.20               pyh9f0ad1d_2    conda-forge
    pygments                  2.8.1              pyhd3eb1b0_0
    pyopenssl                 20.0.1             pyhd8ed1ab_0    conda-forge
    pyparsing                 2.4.7              pyh9f0ad1d_0    conda-forge
    pyqt                      5.9.2            py37h05f1152_2
    pyrsistent                0.17.3           py37h7b6447c_0
    pysocks                   1.7.1            py37h89c1867_3    conda-forge
    python                    3.7.10               hdb3f193_0
    python-dateutil           2.8.1                      py_0    conda-forge
    python_abi                3.7                     1_cp37m    conda-forge
    pytorch                   1.8.0           py3.7_cuda10.2_cudnn7.6.5_0    pytorch
    pytorch_cluster           1.5.4            py37hcae2be3_1    conda-forge
    pytorch_geometric         1.6.1              pyh9f0ad1d_0    conda-forge
    pytorch_scatter           2.0.4            py37hcae2be3_1    conda-forge
    pytorch_sparse            0.6.3            py37hcae2be3_1    conda-forge
    pytorch_spline_conv       1.2.0            py37hcae2be3_1    conda-forge
    pytz                      2021.1             pyhd8ed1ab_0    conda-forge
    pywavelets                1.1.1            py37h7b6447c_2
    pyyaml                    5.4.1            py37h27cfd23_1
    pyzmq                     20.0.0           py37h2531618_1
    qt                        5.9.7                h5867ecd_1
    rdflib                    5.0.0            py37h89c1867_3    conda-forge
    readline                  8.1                  h27cfd23_0
    requests                  2.25.1             pyhd3deb0d_0    conda-forge
    scikit-image              0.17.2           py37hdf5156a_0
    scikit-learn              0.23.2           py37hddcf8d6_3    conda-forge
    scipy                     1.6.1            py37h91f5cce_0
    send2trash                1.5.0              pyhd3eb1b0_1
    setuptools                52.0.0           py37h06a4308_0
    sip                       4.19.8           py37hf484d3e_0
    six                       1.15.0             pyhd3eb1b0_0
    snappy                    1.1.8                he6710b0_0
    sparqlwrapper             1.8.5           py37h89c1867_1005    conda-forge
    sqlite                    3.33.0               h62c20be_0
    terminado                 0.9.2            py37h06a4308_0
    testpath                  0.4.4              pyhd3eb1b0_0
    threadpoolctl             2.1.0              pyh5ca1d4c_0    conda-forge
    tifffile                  2021.3.5           pyhd3eb1b0_1
    tk                        8.6.10               hbc83047_0
    toolz                     0.11.1             pyhd3eb1b0_0
    torchaudio                0.8.0                      py37    pytorch
    torchvision               0.9.0                py37_cu102    pytorch
    tornado                   6.1              py37h4abf009_0    conda-forge
    tqdm                      4.59.0             pyhd8ed1ab_0    conda-forge
    traitlets                 5.0.5              pyhd3eb1b0_0
    typing_extensions         3.7.4.3            pyha847dfd_0
    urllib3                   1.26.3             pyhd8ed1ab_0    conda-forge
    wcwidth                   0.2.5                      py_0
    webencodings              0.5.1                      py_1    conda-forge
    werkzeug                  1.0.1              pyh9f0ad1d_0    conda-forge
    wheel                     0.36.2             pyhd3eb1b0_0
    widgetsnbextension        3.5.1                    py37_0
    xz                        5.2.5                h7b6447c_0
    yaml                      0.2.5                h7b6447c_0
    zeromq                    4.3.3                he6710b0_3
    zfp                       0.5.5                h2531618_4
    zipp                      3.4.0              pyhd3eb1b0_0
    zlib                      1.2.11               h7b6447c_3
    zstd                      1.4.5                h9ceee32_0
    
    opened by grgkopanas 2
  • preprocess dir make has no error, but running exp.py raises an AttributeError

    Hi @dvdhfnr, thanks for your great work. When I try to test, I get the following error: AttributeError: module 'ext.preprocess' has no attribute 'map_source_points'. I hope you can help, thanks!

    opened by visonpon 2
  • Significant degradation when only changing camera intrinsics

    Hi,

    Congratulations on your great work and thanks for sharing!

    Upon testing the scene-agnostic model as described in README.md, I tried changing only the focal length K[0, 0], K[1, 1] of a test camera, mimicking new camera intrinsics.

    The script used for testing is the same as in README.md:

    python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5
    

    The results are significantly worse than with the original K. I cannot upload images for security reasons, but here is the code that changes the intrinsics:

    # Changing the focal length of the first test image of Truck
    import numpy as np

    Ks = np.load('ibr3d_tat/training/Truck/dense/ibr3d_pw_0.50.bak/Ks.npy')

    idx = 172  # renamed from `id`, which shadows a Python builtin

    print(Ks[idx])

    Ks[idx][0, 0] *= 0.6
    Ks[idx][1, 1] *= 0.6

    np.save('ibr3d_tat/training/Truck/dense/ibr3d_pw_0.50/Ks.npy', Ks)


    From the paper, the method should be more or less robust to test-time camera intrinsics/extrinsics, since the features are "attached" to a geometry proxy. In which part of the network could this overfitting occur?
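The sensitivity is consistent with the pinhole model itself: scaling fx and fy moves every projected pixel toward or away from the principal point, so features sampled from the source views land at different locations unless the projections are recomputed for the new K. A toy example (plain pinhole projection, not code from the repository):

```python
# Pinhole projection of a 3D point (in camera coordinates) to pixels.
def project(fx, fy, cx, cy, point):
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# Original focal length vs. the 0.6x focal length from the snippet above:
print(project(1000.0, 1000.0, 320.0, 240.0, (0.5, 0.2, 2.0)))  # (570.0, 340.0)
print(project(600.0, 600.0, 320.0, 240.0, (0.5, 0.2, 2.0)))    # (470.0, 300.0)
```

A 100-pixel shift like this is far outside what any learned component could be expected to absorb without recomputing the geometry-to-image mapping.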

    opened by jingyibo123 1
  • Eigen3 module not found

    I'm trying to compile the repo with the CMake command, but I keep getting an error that the Eigen3 module is not found. Can anyone suggest a solution? I'm using a MacBook Pro (M1).
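One common cause on macOS is simply that Eigen is not installed, or not on CMake's search path. A possible fix, assuming Homebrew is available (untested here; the exact package-config path may differ between Eigen versions):

```shell
# Install Eigen via Homebrew and point CMake at its package config.
brew install eigen cmake
cd ext/preprocess
cmake -DCMAKE_BUILD_TYPE=Release \
      -DEigen3_DIR="$(brew --prefix eigen)/share/eigen3/cmake" .
make
```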

    opened by CharmiSheladiya 0
  • DTU groundtruth meshes

    Hi,

    Thanks for the wonderful work. I was wondering how you obtained your DTU ground-truth meshes. Was it the same way as for Tanks and Temples? If so, would it be possible to share them?

    Thanks in advance

    opened by Shubhendu-Jena 0
  • How to generate new views?

    If I want to generate new views beyond my train and eval sets, how can I do that? Do I need to generate the camera parameters myself, or is there another way to do it with the current code?

    opened by zhangkai0425 0
  • Purpose of m2l_tgt_idx

    When trying to use SVS on my own data, I run into the following exception at the "create target images" stage:

    [screenshot of the exception omitted]

    The problematic key in question is "m2l_tgt_idx." What is this key supposed to be set to? The main reference I see is ""m2l_tgt_idx", # TODO: m2l_tgt_idx wrong"

    opened by hturki 1
  • How many train_iters/hours are needed to reproduce the results in the paper?

    Hi, I don't know if I missed anything, but I can't find anywhere in the paper how long it takes to train the SVS model. What number of n_train_iters (or how much training time) is required to reproduce the results? With the default setting, the ETA is 1132 days on my 1080 Ti GPU.

    opened by FomalhautB 0
Owner
Intelligent Systems Lab Org