MiDaS

Overview

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer

This repository contains code to compute depth from a single image. It accompanies our paper:

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer
René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, Vladlen Koltun

and our preprint:

Vision Transformers for Dense Prediction
René Ranftl, Alexey Bochkovskiy, Vladlen Koltun

MiDaS was trained on 10 datasets (ReDWeb, DIML, Movies, MegaDepth, WSVD, TartanAir, HRWSI, ApolloScape, BlendedMVS, IRS) with multi-objective optimization. The original model that was trained on 5 datasets (MIX 5 in the paper) can be found here.

Changelog

Setup

  1. Pick one or more models and download corresponding weights to the weights folder:
  • For highest quality: dpt_large
  • For moderately less quality, but better speed on CPU and slower GPUs: dpt_hybrid
  • For real-time applications on resource-constrained devices: midas_v21_small
  • Legacy convolutional model: midas_v21
  2. Set up dependencies:

    conda install pytorch torchvision opencv
    pip install timm

    The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, and timm 0.4.5.
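A quick sanity check that the expected packages are importable from the active environment (a minimal sketch; the versions printed will depend on your setup):

    import cv2
    import timm
    import torch
    import torchvision

    # Compare against the tested combination listed above.
    print(torch.__version__, torchvision.__version__, cv2.__version__, timm.__version__)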

Usage

  1. Place one or more input images in the folder input.

  2. Run the model:

    python run.py --model_type dpt_large
    python run.py --model_type dpt_hybrid 
    python run.py --model_type midas_v21_small
    python run.py --model_type midas_v21
  3. The resulting inverse depth maps are written to the output folder.
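The values in these maps are relative inverse depth, so larger numbers correspond to points closer to the camera. A minimal sketch for loading a saved map and rescaling it for display, assuming a 16-bit PNG was written (check the output folder for the exact format and file names):

    import cv2
    import numpy as np

    # "output/example.png" is a placeholder; use a file that actually exists in output/.
    inv_depth = cv2.imread("output/example.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

    # Rescale to [0, 255] purely for visualization; the values are relative, not metric.
    vis = (inv_depth - inv_depth.min()) / (inv_depth.max() - inv_depth.min() + 1e-8)
    cv2.imwrite("output/example_vis.png", (vis * 255.0).astype(np.uint8))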

via Docker

  1. Make sure you have installed Docker and the NVIDIA Docker runtime.

  2. Build the Docker image:

    docker build -t midas .
  3. Run inference:

    docker run --rm --gpus all -v $PWD/input:/opt/MiDaS/input -v $PWD/output:/opt/MiDaS/output midas

    This command passes all of your NVIDIA GPUs through to the container, mounts the input and output directories, and then runs inference.

via PyTorch Hub

The pretrained model is also available on PyTorch Hub.
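A minimal sketch of loading a model and its matching input transform through torch.hub, following the hub listing (the image path is a placeholder):

    import cv2
    import torch

    # Load a model variant and the matching input transforms from the hub.
    midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = midas_transforms.dpt_transform  # use small_transform for the small model

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    midas.to(device).eval()

    img = cv2.cvtColor(cv2.imread("input/example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
    with torch.no_grad():
        prediction = midas(transform(img).to(device))  # relative inverse depth, shape (1, H', W')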

via TensorFlow or ONNX

See README in the tf subdirectory.

Currently only supports MiDaS v2.1. DPT-based models to be added.

via Mobile (iOS / Android)

See README in the mobile subdirectory.

via ROS1 (Robot Operating System)

See README in the ros subdirectory.

Currently only supports MiDaS v2.1. DPT-based models to be added.

Accuracy

Zero-shot error (lower is better) and speed in frames per second (FPS, higher is better):

Model                    DIW       ETH3D     Sintel    KITTI     NYUv2     TUM       Speed
                         (WHDR)    (AbsRel)  (AbsRel)  (δ>1.25)  (δ>1.25)  (δ>1.25)  (FPS)

Small models (speed measured on an iPhone 11):
MiDaS v2 small           0.1248    0.1550    0.3300    21.81     15.73     17.00     0.6
MiDaS v2.1 small         0.1344    0.1344    0.3370    29.27     13.43     14.53     30

Big models (speed measured on an RTX 3090 GPU):
MiDaS v2 large           0.1246    0.1290    0.3270    23.90     9.55      14.29     51
MiDaS v2.1 large         0.1295    0.1155    0.3285    16.08     8.71      12.51     51
MiDaS v3.0 DPT-Hybrid    0.1106    0.0934    0.2741    11.56     8.69      10.89     46
MiDaS v3.0 DPT-Large     0.1082    0.0888    0.2697    8.46      8.32      9.97      47
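For reference, the two error measures above are commonly defined as follows (a minimal sketch in NumPy; the paper's protocol additionally aligns the prediction to the ground truth and may differ in masking details):

    import numpy as np

    def abs_rel(pred_depth, gt_depth):
        # Mean absolute relative error over pixels with valid ground truth.
        mask = gt_depth > 0
        return np.mean(np.abs(pred_depth[mask] - gt_depth[mask]) / gt_depth[mask])

    def delta_error(pred_depth, gt_depth, threshold=1.25):
        # Percentage of valid pixels whose ratio max(pred/gt, gt/pred) exceeds the threshold.
        mask = gt_depth > 0
        ratio = np.maximum(pred_depth[mask] / gt_depth[mask], gt_depth[mask] / pred_depth[mask])
        return 100.0 * np.mean(ratio > threshold)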

Citation

Please cite our paper if you use this code or any of the models:

@article{Ranftl2020,
	author    = {Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun},
	title     = {Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer},
	journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
	year      = {2020},
}

If you use a DPT-based model, please also cite:

@article{Ranftl2021,
	author    = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
	title     = {Vision Transformers for Dense Prediction},
	journal   = {ArXiv preprint},
	year      = {2021},
}

Acknowledgements

Our work builds on and uses code from timm. We'd like to thank the author for making this library available.

License

MIT License

Comments
  • How could I convert it to onnx model?

    I am trying to convert the model to an ONNX model, but got this error:

    File "to_onnx.py", line 72, in export_model(model, img_input, export_model_name) File "to_onnx.py", line 30, in export_model torch.onnx.export(model, input, export_model_name, verbose=False, export_params=True, opset_version=11) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\onnx_init_.py", line 148, in export strip_doc_string, dynamic_axes, keep_initializers_as_inputs) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\onnx\utils.py", line 66, in export dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\onnx\utils.py", line 416, in _export fixed_batch_size=fixed_batch_size) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\onnx\utils.py", line 279, in _model_to_graph graph, torch_out = _trace_and_get_graph_from_model(model, args, training) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\onnx\utils.py", line 236, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(model, args, _force_outplace=True, return_inputs_states=True) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\jit_init.py", line 277, in _get_trace_graph outs = ONNXTracedModule(f, _force_outplace, return_inputs, return_inputs_states)(*args, **kwargs) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\nn\modules\module.py", line 532, in call result = self.forward(*input, **kwargs) File "C:\Users\yyyy\Anaconda3\envs\torchreid\lib\site-packages\torch\jit_init.py", line 332, in forward in_vars, in_desc = _flatten(args) RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type numpy.ndarray

    to_onnx.py

    import os
    import glob
    import torch
    import utils
    import cv2
    
    from torchvision.transforms import Compose
    from models.midas_net import MidasNet
    from models.transforms import Resize, NormalizeImage, PrepareForNet
    
    import onnx
    import onnxruntime
    
    def test_model_accuracy(export_model_name, raw_output, input):    
        ort_session = onnxruntime.InferenceSession(export_model_name)
    
        def to_numpy(tensor):
            return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
    
        # compute ONNX Runtime output prediction
        ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input)}
        ort_outs = ort_session.run(None, ort_inputs)	
    
        # compare ONNX Runtime and PyTorch results
        np.testing.assert_allclose(to_numpy(raw_output), ort_outs[0], rtol=1e-03, atol=1e-05)
    
        print("Exported model has been tested with ONNXRuntime, and the result looks good!")		
    
    def export_model(model, input, export_model_name):
        torch.onnx.export(model, input, export_model_name, verbose=False, export_params=True, opset_version=11)	
        onnx_model = onnx.load(export_model_name)    
        onnx.checker.check_model(onnx_model)
        graph_output = onnx.helper.printable_graph(onnx_model.graph)
        with open("graph_output.txt", mode="w") as fout:
            fout.write(graph_output)
    		
    device = torch.device("cpu")
    
     # load network
    model_path = "model.pt"
    model = MidasNet(model_path, non_negative=True)
    
    transform = Compose(
            [
                Resize(
                    384,
                    384,
                    resize_target=None,
                    keep_aspect_ratio=True,
                    ensure_multiple_of=32,
                    resize_method="lower_bound",
                    image_interpolation_method=cv2.INTER_CUBIC,
                ),
                NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
                PrepareForNet(),
            ]
    )
    
    model.to(device)
    model.eval()
    
    img = utils.read_image("input/line_up_00.jpg")
    img_input = transform({"image": img})["image"]
    
    # compute
    #with torch.no_grad():
    sample = torch.from_numpy(img_input).to(device).unsqueeze(0)
    print("sample type = ", type(sample), ", shape of sample = ", sample.shape)
    print(sample)	
    prediction = model.forward(sample)
    export_model_name = "midas.onnx"	
    export_model(model, img_input, export_model_name)
    

    Environment:

    PyTorch 1.4.0 (installed via Anaconda); OS: Windows 10, 64-bit.
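    A hedged reading of the traceback above: torch.onnx.export is called with the raw NumPy array img_input, but JIT tracing only accepts tensors, so the batched tensor sample that was used for the forward pass should be passed instead (the script also never imports numpy, which the unused test_model_accuracy helper would need). A minimal sketch of the corrected export call, reusing the variables defined in the script:

        import numpy as np  # needed by test_model_accuracy if it is ever called
        import torch

        # Export with the same batched tensor that was fed to model.forward(...).
        sample = torch.from_numpy(img_input).to(device).unsqueeze(0)
        torch.onnx.export(model, sample, export_model_name,
                          verbose=False, export_params=True, opset_version=11)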

    opened by stereomatchingkiss 10
  • Slow image transformation

    Hi, I don't think this is an issue, sorry for posting like this. But running an image through your model is really slow. Do you have a method for speeding it up? Sorry again for posting this as an issue, but I don't know how else to make contact.

    opened by nickteim 8
  • Depth in float32 in meters units

    Hello! Thanks for your work!

    I have two questions:

    1. What is the .pfm format and what is it used for?
    2. When opening the .png depth maps, how can I convert them to float32 values in meters?
    opened by n-kasatkin 8
  • IMPORTANT Your shell script breaks all apps using MiDaS

    Your install.sh script causes every single Python project using your module to break. It is used by some Automatic1111 extensions, and NONE of them work unless externals/Next_ViT is manually copied into the Web UI root folder.

    Error loading script: depthmap_for_depth2img.py
    Traceback (most recent call last):
      File "X:\StableDiffusion\webui\modules\scripts.py", line 195, in load_scripts
        module = script_loading.load_module(scriptfile.path)
      File "X:\StableDiffusion\webui\modules\script_loading.py", line 13, in load_module
        exec(compiled, module.__dict__)
      File "X:\StableDiffusion\webui\extensions\depthmap2mask\scripts\depthmap_for_depth2img.py", line 11, in <module>
        from repositories.midas.midas.dpt_depth import DPTDepthModel
      File "X:\StableDiffusion\webui\repositories\midas\midas\dpt_depth.py", line 5, in <module>
        from .blocks import (
      File "X:\StableDiffusion\webui\repositories\midas\midas\blocks.py", line 21, in <module>
        from .backbones.next_vit import (
      File "X:\StableDiffusion\webui\repositories\midas\midas\backbones\next_vit.py", line 8, in <module>
        file = open("./externals/Next_ViT/classification/nextvit.py", "r")

    When an app uses your module, it will expect the externals/Next_ViT directory at the app's root. You cannot open a dependency from a path fixed relative to the current working directory, because your module won't find the file: the main app installs it into its own root, not into your module's root.

    Anyone using your module will have to manually copy the externals folder to their own app's root folder, which is really unacceptable.

    You should clone it into your own project's root using a Python git package and load it relative to your module. Instead of using

    file = open("./externals/Next_ViT/classification/nextvit.py", "r")

    in midas/backbones/next_vit.py, use

    file = open("../../externals/Next_ViT/classification/nextvit.py", "r")

    Also, a shell script to run a git command? Really?? You think nobody uses another operating system? This is why I say you MUST use Python's own git module.
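    One portable alternative (a minimal sketch, not the repository's actual fix) is to resolve the path relative to the module file itself, so the lookup works regardless of the host application's working directory:

        import os

        # midas/backbones/next_vit.py sits two levels below the repository root,
        # so go up two directories and then into externals/Next_ViT.
        module_dir = os.path.dirname(os.path.abspath(__file__))
        nextvit_path = os.path.join(module_dir, "..", "..", "externals",
                                    "Next_ViT", "classification", "nextvit.py")

        with open(nextvit_path, "r") as f:
            source = f.read()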

    opened by cooperdk 7
  • How to get a maximum depth by numeric, and x,y axis?

    Hi, before launching your code, I would like to understand how to get the maximum depth as a number, along with its x/y position. Could I have your advice on this? If you could tell me the specific function that produces the maximum depth, it would be really helpful.

    Allow me to add one more question: can your code be used on Google Colaboratory?

    I'm looking forward to hearing from you soon!
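    A hedged pointer: the prediction is relative inverse depth, so the largest value marks the closest point and the smallest value the farthest one. A minimal NumPy sketch for locating either extreme, assuming prediction is the 2-D output array:

        import numpy as np

        # prediction: 2-D array of relative inverse depth (larger value = closer).
        closest_y, closest_x = np.unravel_index(np.argmax(prediction), prediction.shape)
        farthest_y, farthest_x = np.unravel_index(np.argmin(prediction), prediction.shape)
        print("closest point (x, y):", (closest_x, closest_y))
        print("farthest point (x, y):", (farthest_x, farthest_y))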

    opened by ramuneblue 7
  • Installation problem

    Hi! I have runtime problems:

    ...
    File "/home/etienne/.cache/torch/hub/facebookresearch_WSL-Images_master/hubconf.py", line 23, in _resnext
        model = ResNet(block, layers, **kwargs)
    TypeError: __init__() got an unexpected keyword argument 'groups'
    

    Searching on this, it seems to be caused by an incompatible torchvision version. I have not succeeded in resolving all version conflicts. Is it possible to have a minimal set of commands to install everything in the right versions?

    opened by EtienneAb3d 7
  • What does the predicted depth signify?

    A "prediction" gives the following:

    [[2496.0127 2495.973  2495.9888 ...  855.7698   855.57666  856.0468 ]
     [2495.9575 2495.9158 2495.9329 ...  855.4036   855.20917  855.68256]
     [2495.9797 2495.9387 2495.9556 ...  855.55426  855.36035  855.83234]
     ...
     [3245.7551 3245.7756 3245.7664 ... 2852.5774  2852.4922  2852.702  ]
     [3245.7275 3245.7478 3245.739  ... 2852.4827  2852.397   2852.6072 ]
     [3245.7974 3245.8179 3245.809  ... 2852.7156  2852.6309  2852.8398 ]]

    What are the units of these numbers: m, mm, ft? Of course these numbers aren't disparities (since the images aren't that wide). So what do these numbers represent? How can I convert this prediction to actual depth given the camera intrinsics? Thanks

    opened by zendevil 7
  • No such file or directory: 'model-f46da743.pt'

    Hello

    I got this error. Did I put the .pt file in the wrong location? (see attachment)

    initialize
    device: cuda
    Loading weights: model-f46da743.pt
    Using cache found in C:\Users\gregb/.cache\torch\hub\facebookresearch_WSL-Images_master
    Traceback (most recent call last):
      File "run.py", line 105, in <module>
        run(INPUT_PATH, OUTPUT_PATH, MODEL_PATH)
      File "run.py", line 29, in run
        model = MidasNet(model_path, non_negative=True)
      File "C:\Users\gregb\Documents\Python\MiDaS\midas\midas_net.py", line 47, in __init__
        self.load(path)
      File "C:\Users\gregb\Documents\Python\MiDaS\midas\base_model.py", line 11, in load
        parameters = torch.load(path)
      File "C:\Users\gregb\anaconda3\envs\3DP\lib\site-packages\torch\serialization.py", line 525, in load
        with _open_file_like(f, 'rb') as opened_file:
      File "C:\Users\gregb\anaconda3\envs\3DP\lib\site-packages\torch\serialization.py", line 212, in _open_file_like
        return _open_file(name_or_buffer, mode)
      File "C:\Users\gregb\anaconda3\envs\3DP\lib\site-packages\torch\serialization.py", line 193, in __init__
        super(_open_file, self).__init__(open(name, mode))
    FileNotFoundError: [Errno 2] No such file or directory: 'model-f46da743.pt'


    opened by GregoryBetsey 7
  • About getting results in meters unit

    @ranftlr Thank you for the work. I'm trying to apply it with a Myriad X VPU, so I would like to ask whether the unknown scale and shift mentioned in #36 are linear parameters. For example, in each frame, can I find a linear equation like "P = D * scale + shift" that projects the values of the depth map "D" to physical absolute measurements "P", calibrated by placing a ruler of known scale in the view?

    opened by nightheronry 6
  • Crash when using in notebook

    Hello! I am using the following Jupyter notebook: https://github.com/deforum/stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb

    When setting up the environment, the following line causes problems https://github.com/isl-org/MiDaS/blob/eaa249ff68fde82cad78982d0a36523e3ebfbfdf/midas/backbones/next_vit.py#L8

    It looks like it wasn't intended to be used this way.

    opened by JargeZ 5
  • How to eval on MiDaS?

    Hi, is there any evaluation code for MiDaS?

    MiDaS can predict a robust inverse depth for a single image, but how can I evaluate on datasets with ground-truth depth like KITTI? Should I convert the predicted disparity to depth and evaluate in depth space, or convert the ground-truth depth to disparity and evaluate in disparity space?

    I downloaded the official validation set of KITTI and converted gt_depth to gt_disparity with:

    gt_disparity = 1 / (gt_depth + 1e-8)
    gt_disparity[gt_depth==0]=0
    

    After performing alignment in disparity space and evaluating midas_v3.0_dpt-large on KITTI, I got the following performance:

    KITTI AbsRel : 10.0
    KITTI delta > 1.25 : 10.1
    

    It seems that my evaluation code is not accurate. Could you please provide your evaluation code for MiDaS? Thank you so much.
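    For context, a minimal sketch of the least-squares scale-and-shift alignment in disparity space described in the paper (assuming pred and gt_disparity are arrays of the same shape and mask marks valid ground-truth pixels; the official protocol may differ in masking and clipping details):

        import numpy as np

        def align_scale_shift(pred, gt_disparity, mask):
            # Solve min over (s, t) of || s * pred + t - gt_disparity ||^2 on valid pixels.
            p = pred[mask].ravel()
            g = gt_disparity[mask].ravel()
            A = np.stack([p, np.ones_like(p)], axis=1)
            (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
            return s * pred + t

    The aligned disparity can then be clipped, inverted back to depth, and compared against the ground-truth depth with AbsRel and the δ>1.25 error.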

    opened by guangkaixu 5
  • torch.hub example is broken

    Hi all,

    We're the maintainers of torch.hub and it looks like executing the code in the MiDaS example fails with the following error:

    QObject::moveToThread: Current thread (0x5f956b80) is not the object's thread (0x601f7590).
    E           Cannot move to target thread (0x5f956b80)
    E           
    E           qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/circleci/miniconda3/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
    E           This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
    E           
    E           Available platform plugins are: xcb, eglfs, minimal, minimalegl, offscreen, vnc, webgl.
    

    You can find more details about the execution here

    Would you mind submitting a fix either in your own repo if relevant, or directly to the MiDaS torchhub file? Thank you!

    (I suspect that the problem may be that Qt is required to run the model somehow, while our CI job is headless so the X server isn't available - I could be completely wrong though).
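    A hedged note on one common cause of this error: the opencv-python wheel ships Qt GUI plugins that can fail to initialize on headless machines, while the opencv-python-headless wheel does not. If cv2 is only needed for image I/O in the example, swapping wheels in the CI environment may avoid the Qt platform plugin lookup:

        pip uninstall opencv-python
        pip install opencv-python-headless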

    opened by NicolasHug 0
  • PUBKEY error building the docker image

    Hi, when building the Docker image with docker build -t midas ., the following error occurred: GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC #6 3.925 E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease' is not signed.

    I tried to fix it with sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC, but that failed.

    Do you have any idea?

    opened by wangbohan6aa 1
  • Video > Sequence > Depth (Uneven Render)

    So basically what I've tried doing is converting a 2D video into a 3D anaglyph video.

    1. I render a video clip into a PNG sequence using Adobe Media Encoder
    2. I run the PNG sequence through MiDaS
    3. I import both the depth map sequence and the normal sequence into After Effects as PNG sequences
    4. I apply a displacement map using the depth, shifting one copy right by +5 and another copy left by -5, creating two different perspectives
    5. I color the right one cyan and the left one magenta and apply the Multiply blending mode
    6. I color-correct using an adjustment layer
    7. Then I render the final result

    The issue is that the depth sequence rendered with MiDaS is uneven: it flickers with slight brightness changes, although the original clip has a flat exposure throughout the video.

    I'm using the large MiDaS model to render. I was wondering if there is a more "accurate" way to do this? It seems to me the results other people are getting are far more on-point, while mine is kind of blurry and uneven throughout the sequence!

    opened by Joakimgreenday 0
  • no timm despite pip install

    Hello, in my dedicated Python environment I tried with the latest timm and also forced 0.4.5, without luck:

    pip install timm==0.4.5
    Defaulting to user installation because normal site-packages is not writeable
    Collecting timm==0.4.5
      Downloading timm-0.4.5-py3-none-any.whl (287 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 287.4/287.4 kB 2.8 MB/s eta 0:00:00
    Requirement already satisfied: torch>=1.4 in /home/pm/.local/lib/python3.10/site-packages (from timm==0.4.5) (1.12.1)
    Requirement already satisfied: torchvision in /home/pm/.local/lib/python3.10/site-packages (from timm==0.4.5) (0.13.1)
    Requirement already satisfied: typing-extensions in /home/pm/.local/lib/python3.10/site-packages (from torch>=1.4->timm==0.4.5) (4.3.0)
    Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/lib/python3/dist-packages (from torchvision->timm==0.4.5) (9.0.1)
    Requirement already satisfied: numpy in /usr/lib/python3/dist-packages (from torchvision->timm==0.4.5) (1.21.5)
    Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from torchvision->timm==0.4.5) (2.25.1)
    Installing collected packages: timm
    Successfully installed timm-0.4.5
    (midas) pm@pop-os:~/Documents/github$ python run.py --model_type dpt_large
    Traceback (most recent call last):
      File "run.py", line 11, in <module>
        from midas.dpt_depth import DPTDepthModel
      File "/home/pm/Documents/github/midas/dpt_depth.py", line 6, in <module>
        from .blocks import (
      File "/home/pm/Documents/github/midas/blocks.py", line 4, in <module>
        from .vit import (
      File "/home/pm/Documents/github/midas/vit.py", line 3, in <module>
        import timm
    ModuleNotFoundError: No module named 'timm'
    

    Any idea?
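    A hedged observation from the log above: pip reports "Defaulting to user installation because normal site-packages is not writeable" and installs into /home/pm/.local for Python 3.10, so timm may not end up in the (midas) environment that run.py is executed from. Installing with that environment's own interpreter, for example

        python -m pip install timm

    after activating the environment, should put the package where the import can find it.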

    opened by j2l 0
  • How do I define a cache directory for models to download and load from?

    I'm having trouble figuring out how to define a location to store downloaded models and load them from. Every run downloads the models again, and the probability of connections being ended by peers, or other failures, is high. I'd like to prevent this and keep bandwidth consumption low. I couldn't find anything in the documentation regarding model paths.
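    If the models are loaded through torch.hub, a minimal sketch for pinning the download cache to a fixed directory (torch.hub.set_dir and the TORCH_HOME environment variable are standard PyTorch mechanisms; the path below is a placeholder):

        import torch

        # Keep hub checkouts and downloaded checkpoints in one reusable location.
        torch.hub.set_dir("/path/to/model_cache")  # placeholder path

        midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")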

    opened by WASasquatch 0
  • device "mps" for Apple silicon - strange output

    I've tried to run MiDaS on Apple silicon with a PyTorch nightly build with MPS (Metal Performance Shaders) support.

    I've changed two lines of code to get this working: line 28 becomes device = torch.device("cuda" if torch.cuda.is_available() else "mps"), and line 127 becomes mode="bilinear".

    The change to "bilinear" is necessary because bicubic is currently not supported by the native M1 implementation. The influence on output quality is negligible in my experience (I did a few comparison tests on "CPU" with only this parameter changed).

    This way I could speed up inference by a whopping factor of 6. But: output quality deteriorates to the point of being unusable. I have virtually no experience with Python and PyTorch - I'm basically a hobbyist creator and dad who wants a depth map of selected GoPro footage of his son in order to fake depth of field. So I do not know where to look.

    Legacy MiDaS v2 seems to work okay and delivers the same output on CPU and MPS (CPU and MPS examples attached).

    With DPT, however, the output becomes unusable on MPS (MPS and CPU examples attached).

    Any ideas? Can someone give me a hint?
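    As a hedged aside, recent PyTorch builds expose a check for MPS availability, so device selection can fall back gracefully on machines without Metal support (a minimal sketch; it does not address the DPT quality issue described above):

        import torch

        # Prefer CUDA, then Apple's MPS backend, then CPU.
        if torch.cuda.is_available():
            device = torch.device("cuda")
        elif torch.backends.mps.is_available():
            device = torch.device("mps")
        else:
            device = torch.device("cpu")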

    opened by RaceBo 0
Releases
v3_1

Owner
Intelligent Systems Lab Org