[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting

Overview

[Paper] [Project Website] [Google Colab]

We propose a method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as the underlying representation, and present a learning-based inpainting model that iteratively synthesizes new local color-and-depth content into the occluded region in a spatially context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts compared with the state of the art.

3D Photography using Context-aware Layered Depth Inpainting
Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Prerequisites

  • Linux (tested on Ubuntu 18.04.4 LTS)
  • Anaconda
  • Python 3.7 (tested on 3.7.4)
  • PyTorch 1.4.0 (tested on 1.4.0 for execution)

and the Python dependencies listed in requirements.txt

  • To get started, please run the following commands:
    conda create -n 3DP python=3.7 anaconda
    conda activate 3DP
    pip install -r requirements.txt
    conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit==10.1.243 -c pytorch
  • Next, please download the model weights using the following command:
    chmod +x download.sh
    ./download.sh
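
Once the commands above finish, a quick sanity check (a suggestion, not part of the repository) can confirm that the environment matches what the pipeline expects:

    import torch
    import torchvision

    print(torch.__version__, torchvision.__version__)    # expected: 1.4.0 and 0.5.0
    print("CUDA available:", torch.cuda.is_available())  # the default configuration assumes a CUDA GPU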

Quick start

Please follow the instructions in this section. This should allow you to reproduce our results. For more detailed instructions, please refer to DOCUMENTATION.md.

Execute

  1. Put .jpg files (e.g., test.jpg) into the image folder.
    • E.g., image/moon.jpg
  2. Run the following command
    python main.py --config argument.yml
    • Note: The 3D photo generation process usually takes about 2-3 minutes depending on the available computing resources.
  3. The results are stored in the following directories:
    • Corresponding depth map estimated by MiDaS
      • E.g. depth/moon.npy, depth/moon.png
      • You can edit depth/moon.png manually.
        • If you want to use the manually edited depth/moon.png as input for the 3D photo, remember to set the following two flags (see the configuration sketch after this list):
          • depth_format: '.png'
          • require_midas: False
    • Inpainted 3D mesh (optional: you need to enable the save_ply flag)
      • E.g. mesh/moon.ply
    • Rendered videos with zoom-in motion
      • E.g. video/moon_zoom-in.mp4
    • Rendered videos with swing motion
      • E.g. video/moon_swing.mp4
    • Rendered videos with circle motion
      • E.g. video/moon_circle.mp4
    • Rendered videos with dolly zoom-in effect
      • E.g. video/moon_dolly-zoom-in.mp4
      • Note: We assume that the object of focus is located at the center of the image.
  4. (Optional) If you want to change the default configuration, please read DOCUMENTATION.md and modify argument.yml (a short configuration sketch follows this list).
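
For reference, the flags mentioned above can also be toggled programmatically. The snippet below is only a sketch (it assumes the key names shown in this README and uses PyYAML, which main.py already relies on); editing argument.yml by hand works just as well:

    import yaml

    with open("argument.yml") as f:
        config = yaml.safe_load(f)

    # Use a manually edited depth/moon.png instead of re-estimating depth with MiDaS.
    config["depth_format"] = ".png"
    config["require_midas"] = False
    # Also export the inpainted 3D mesh (e.g., mesh/moon.ply).
    config["save_ply"] = True

    # Note: safe_dump rewrites the file without its original comments.
    with open("argument.yml", "w") as f:
        yaml.safe_dump(config, f)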

License

This work is licensed under the MIT License. See LICENSE for details.

If you find our code/models useful, please consider citing our paper:

@inproceedings{Shih3DP20,
  author = {Shih, Meng-Li and Su, Shih-Yang and Kopf, Johannes and Huang, Jia-Bin},
  title = {3D Photography using Context-aware Layered Depth Inpainting},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2020}
}

Acknowledgments

Comments
  • Warnings while running on an SSH session - no output videos

    Warnings while running on an SSH session - no output videos

    I get the following warnings while running headless:

    Current Source ==>  md
    initialize
    device: cpu
    start processing
      processing image/md.jpg (1/1)
    torch.Size([1, 3, 384, 288])
    finished
    Start Running 3D_Photo ...
    Writing mesh file mesh/md.ply ...
    53.13010235415598
    WARNING: QXcbConnection: Could not connect to display
    WARNING: Could not connect to any X display.
    

    I do see the md.ply file in the mesh folder and md.npy file in the depth folder. Just no videos though. Can this not be run without GUI?

    opened by MahadevanSrinivasan 21
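
    A common workaround for headless runs (an assumption based on the traceback above, not an official fix) is to make sure vispy picks an offscreen-capable backend before any canvas is created, or to run the whole script under a virtual display, e.g. xvfb-run python main.py --config argument.yml. A minimal sketch of the backend selection:

    import vispy

    try:
        vispy.use(app="egl")     # GPU-backed offscreen rendering; main.py itself does this in some configurations
    except RuntimeError:
        vispy.use(app="osmesa")  # software fallback; requires the OSMesa libraries
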
  • windows support

    windows support

    Do you know how to change the backend properly for vispy (since I use Windows)? I tried using pyqt5 instead of EGL, but the new git version doesn't work with that (probably the circle animation). It gives the following error with pyqt5:

    (3DP) E:\3d>python main.py --config argument.yml
      0%|                                                                                            | 0/2 [00:00<?, ?it/s]Current Source ==>  beatles
    initialize
    device: cpu
    start processing
      processing image\beatles.jpg (1/1)
    torch.Size([1, 3, 384, 384])
    finished
    53.13010235415598
    WARNING: Although PyQt5 is already imported, the PyQt5 backend could not
    be used ("DLL load failed: The specified procedure could not be found.").
    Note that running multiple GUI toolkits simultaneously can cause side effects.
    Traceback (most recent call last):
      File "main.py", line 112, in <module>
        videos_poses, video_basename, config.get('original_h'), config.get('original_w'), border=border, depth=depth, normal_canvas=normal_canvas, all_canvas=all_canvas)
      File "E:\3d\mesh.py", line 2203, in output_3d_photo
        proj='perspective')
      File "E:\3d\mesh.py", line 2132, in __init__
        self.canvas = scene.SceneCanvas(bgcolor=bgcolor, size=(canvas_size*factor, canvas_size*factor))
      File "C:\Users\Filip\anaconda3\envs\3DP\lib\site-packages\vispy\scene\canvas.py", line 137, in __init__
        always_on_top, px_scale)
      File "C:\Users\Filip\anaconda3\envs\3DP\lib\site-packages\vispy\app\canvas.py", line 169, in __init__
        self._app = use_app(call_reuse=False)
      File "C:\Users\Filip\anaconda3\envs\3DP\lib\site-packages\vispy\app\_default_app.py", line 47, in use_app
        default_app = Application(backend_name)
      File "C:\Users\Filip\anaconda3\envs\3DP\lib\site-packages\vispy\app\application.py", line 49, in __init__
        self._use(backend_name)
      File "C:\Users\Filip\anaconda3\envs\3DP\lib\site-packages\vispy\app\application.py", line 256, in _use
        'PyQt' % [b[0] for b in CORE_BACKENDS])
    RuntimeError: Could not import any of the backends. You need to install any of ['PyQt4', 'PyQt5', 'PySide', 'PySide2', 'Pyglet', 'Glfw', 'SDL2', 'wx', 'EGL', 'osmesa']. We recommend PyQt
      0%|
    

    pyqt5 is installed; this issue started when arrays were added to the argument.yml file.

    opened by filipppp 19
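
    As a purely diagnostic sketch (not from the repository), you can ask vispy directly which GUI backends it can load on a given Windows machine, independent of the 3DP code; whichever name succeeds is the one worth configuring:

    from vispy import app

    for name in ("PyQt5", "Pyglet", "Glfw", "SDL2", "EGL", "osmesa"):
        try:
            app.use_app(name)
            print("vispy can use:", name)
            break
        except Exception as exc:
            print(name, "not usable:", exc)
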
  • Dockerfile

    Dockerfile

    Hey guys,

    Trying to set things up via Docker. Bumped into an issue:

    >docker-compose up
    3d_1  | Traceback (most recent call last):
    3d_1  |   File "main.py", line 28, in <module>
    3d_1  |     vispy.use(app='egl')
    3d_1  |   File "/opt/conda/lib/python3.7/site-packages/vispy/util/wrappers.py", line 97, in use
    3d_1  |     use_app(app)
    3d_1  |   File "/opt/conda/lib/python3.7/site-packages/vispy/app/_default_app.py", line 47, in use_app
    3d_1  |     default_app = Application(backend_name)
    3d_1  |   File "/opt/conda/lib/python3.7/site-packages/vispy/app/application.py", line 49, in __init__
    3d_1  |     self._use(backend_name)
    3d_1  |   File "/opt/conda/lib/python3.7/site-packages/vispy/app/application.py", line 235, in _use
    3d_1  |     raise RuntimeError(msg)
    3d_1  | RuntimeError: Could not import backend "EGL":
    3d_1  | Could not initialize
    
    

    My Dockerfile

    FROM pytorch/pytorch:1.4-cuda10.1-cudnn7-runtime
    
    COPY . /app/
    WORKDIR /app
    
    RUN pip install -r requirements.txt
    RUN apt update
    RUN apt install -y wget 
    RUN apt install -y libfontconfig1-dev
    RUN pip install scipy matplotlib scikit-image
    RUN apt install -y ffmpeg git less nano libsm6 libxext6 libxrender-dev python3-pyqt4 libgegl-0.3-0 libegl1 libegl-mesa0 libegl1-mesa-dev libgegl-dev
    

    docker-compose.yml

    version: '2.3'
    
    services:
      3d:
        build: .
        volumes: 
          - .:/app
          - .torch:/root/.torch
        working_dir: /app
        networks:                                                                                                                                          
         - withvpn
        stdin_open: true
        tty: true
        runtime: nvidia
        ipc: host
        environment:
          - CUDA_VISIBLE_DEVICES=0
        command: python main.py --config argument.yml
    networks:                                                                                                                                              
      withvpn:                                                                                                                                             
        ipam:                                                                                                                                              
          config:                                                                                                                                          
          - subnet: 170.13.241.0/24                                                                                                                        
            gateway: 170.13.241.1
    
    
    opened by ssnake 13
  • Qt plugin issue

    Qt plugin issue

    "WARNING: Could not load the Qt platform plugin "xcb" in "" even though it was found. WARNING: This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

    Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

    Aborted (core dumped)"

    And I set offscreen_render=True in argument.yml; it still doesn't work.

    opened by ouguozhen 12
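
    One thing that sometimes helps with this particular Qt error (an assumption, not something the repository documents) is forcing Qt's offscreen platform plugin, which the error message itself lists as available. It must be set before Qt is initialized, e.g. in the shell (QT_QPA_PLATFORM=offscreen) or at the very top of main.py:

    import os

    # Must run before any Qt/vispy import creates a QApplication.
    os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")
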
  • RTX 3XXX Series Not Supported

    RTX 3XXX Series Not Supported

    Thought to give my new GPU a test on BMD mixed with 3DP, but noticed some suspicious slowness: a process that should have taken 10 minutes was taking 7 hours. I worked down the list of possible causes and thought to test BMD on its own, and sure enough, it provided the information I needed. It gives a warning that the RTX 3070 in my system has a newer compute version than is supported, and instead of defaulting to my RTX 2060 (which is supported and works in seconds), it defaults to a single CPU thread (1/16).

    The same issue is found in 3DP, but 3DP does not report the issue like BMD does, it instead just tries to move forward.

    conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge

    This command from the PyTorch page makes the card work, and the results are not altered by the change.

    opened by 78Alpha 10
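
    To see whether an installed PyTorch build actually targets an Ampere (RTX 30xx) card, a quick check like the one below helps; this is generic PyTorch, not project code. A CUDA 10.1 build of PyTorch 1.4 has no kernels for compute capability 8.6, which is consistent with the fallback behaviour described above, and the cudatoolkit=11.1 install quoted in this comment is the usual remedy.

    import torch

    print("built against CUDA:", torch.version.cuda)                   # e.g. 10.1 vs. 11.1
    print("device capability:", torch.cuda.get_device_capability(0))   # (8, 6) for an RTX 3070
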
  • Using CPU instead of CUDA

    Using CPU instead of CUDA

    I do not have a GPU and want to try running the code with only a CPU. I got the following error message:

    python main.py --config argument.yml
      0%|          | 0/1 [00:00<?, ?it/s]Current Source ==>  moon2
    initialize
    device: cpu
    start processing
      processing image/moon2.jpg (1/1)
    torch.Size([1, 3, 384, 384])
    finished
    Start Running 3D_Photo ...
    Traceback (most recent call last):
      File "main.py", line 61, in <module>
        depth_edge_weight = torch.load(config['depth_edge_model_ckpt'])
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 529, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 702, in _legacy_load
        result = unpickler.load()
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 665, in persistent_load
        deserialized_objects[root_key] = restore_location(obj, location)
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 156, in default_restore_location
        result = fn(storage, location)
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 132, in _cuda_deserialize
        device = validate_cuda_device(location)
      File "/home/user/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 116, in validate_cuda_device
        raise RuntimeError('Attempting to deserialize object on a CUDA '
    RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
      0%|          | 0/1 [00:19<?, ?it/s]

    opened by ModMaamari 10
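
    The error message itself points at the fix: load the checkpoints with map_location. A minimal sketch (the checkpoint path here is illustrative; the repository reads it from config['depth_edge_model_ckpt']):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # map_location remaps CUDA-saved tensors onto the CPU on machines without a GPU.
    depth_edge_weight = torch.load("checkpoints/edge-model.pth", map_location=device)

    A later comment also reports that setting gpu_ids to -1 in argument.yml gets past this point (see the 'Graph' object comment below).
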
  • Missing models at https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/

    Missing models at https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/

    --2020-07-22 21:37:02--  https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/color-model.pth
    Resolving filebox.ece.vt.edu (filebox.ece.vt.edu)... 128.173.88.43
    Connecting to filebox.ece.vt.edu (filebox.ece.vt.edu)|128.173.88.43|:443... failed: Connection refused.
    mv: cannot stat 'color-model.pth': No such file or directory
    --2020-07-22 21:37:02--  https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/depth-model.pth
    Resolving filebox.ece.vt.edu (filebox.ece.vt.edu)... 128.173.88.43
    Connecting to filebox.ece.vt.edu (filebox.ece.vt.edu)|128.173.88.43|:443... failed: Connection refused.
    mv: cannot stat 'depth-model.pth': No such file or directory
    --2020-07-22 21:37:02--  https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/edge-model.pth
    Resolving filebox.ece.vt.edu (filebox.ece.vt.edu)... 128.173.88.43
    Connecting to filebox.ece.vt.edu (filebox.ece.vt.edu)|128.173.88.43|:443... failed: Connection refused.
    mv: cannot stat 'edge-model.pth': No such file or directory
    --2020-07-22 21:37:02--  https://filebox.ece.vt.edu/~jbhuang/project/3DPhoto/model/model.pt
    Resolving filebox.ece.vt.edu (filebox.ece.vt.edu)... 128.173.88.43
    Connecting to filebox.ece.vt.edu (filebox.ece.vt.edu)|128.173.88.43|:443... failed: Connection refused.
    mv: cannot stat 'model.pt': No such file or directory

    opened by turkeyphant 8
  • ERROR: Failed building wheel for cynetworkx

    ERROR: Failed building wheel for cynetworkx

    Following the install instructions exactly with the following:

    Ubuntu 18.04.5, Anaconda 2020.7, Python 3.7.7

    When running pip install -r requirements.txt, I receive the error: ERROR: Failed building wheel for cynetworkx

    Attempted to resolve with:

    pip install Cython==3.0a5
    pip install git+https://github.com/pattern-inc/cynetworkx.git
    pip install -r requirements.txt

    But error persists. Any suggestions on how to resolve?

    opened by geopast 7
  • Issues getting it to work on CPU only mac (Catalina 10.15.4)

    Issues getting it to work on CPU only mac (Catalina 10.15.4)

    I don't think it is possible, but can this run on a CPU-only MacBook Pro? Below is the error output on my late 2013 MBP with Retina integrated graphics.

    Also, if you are running an NVIDIA GPU MacBook Pro, you'll need to change this line:

    conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit==10.1.243 -c pytorch

    to this:

    conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit==9.0 -c pytorch

    Output I get on my machine:

    initialize
    device: cpu
    start processing
      processing image/4O5A8714.jpg (1/1)
    torch.Size([1, 3, 256, 384])
    finished
    Start Running 3D_Photo ...
    Traceback (most recent call last):
      File "main.py", line 95, in <module>
        depth_feat_model)
      File "/Users/tmbouman/Documents/GitHub/3d-photo-inpainting/mesh.py", line 1877, in write_ply
        depth_edge_model, depth_feat_model, rgb_model, config, direc="up")
      File "/Users/tmbouman/Documents/GitHub/3d-photo-inpainting/mesh_tools.py", line 193, in extrapolate
        t_edge = torch.FloatTensor(edge).to(device)[None, None, ...]
      File "/Users/tmbouman/anaconda3/envs/3DP/lib/python3.7/site-packages/torch/cuda/__init__.py", line 196, in _lazy_init
        _check_driver()
      File "/Users/tmbouman/anaconda3/envs/3DP/lib/python3.7/site-packages/torch/cuda/__init__.py", line 94, in _check_driver
        raise AssertionError("Torch not compiled with CUDA enabled")

    opened by tmbouman 7
  • Segmentation fault (core dumped)

    Segmentation fault (core dumped)

    start processing
      processing image/1.jpg (1/1)
    torch.Size([1, 3, 384, 256])
    finished
    Start Running 3D_Photo ...
    Writing mesh file mesh/1.ply ...
    53.13010235415598
    Segmentation fault (core dumped)
    

    Any suggestions?

    opened by if1oke 7
  • AttributeError: 'Graph' object has no attribute 'node'

    AttributeError: 'Graph' object has no attribute 'node'

    I set the value of gpu_ids in the argument.yml to -1 because it was giving me the error "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." until earlier. However, this time I got an error that said "AttributeError: 'Graph' object has no attribute 'node'".

    What am I supposed to do?

    opened by ShuperDark 6
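
    For context (an observation about networkx itself rather than about this repository): Graph.node was removed in networkx 2.4 in favour of Graph.nodes, while requirements.txt pins networkx==2.3, so this error usually means a newer networkx ended up in the environment. A tiny sketch of the difference:

    import networkx as nx

    G = nx.Graph()
    G.add_node((0, 0), depth=1.0)

    print(G.nodes[(0, 0)])   # works on all networkx 2.x releases
    print(G.node[(0, 0)])    # AttributeError on networkx >= 2.4

    Reinstalling the pinned version (pip install networkx==2.3) is the simplest way to match what the code expects.
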
  • Add support to run on Windows

    Add support to run on Windows

    Need to install Git BASH to run.

    sh download.sh
    

    Ref: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/4268#discussioncomment-4134100

    opened by donlinglok 2
  • Windows 10 cynetworkx Install Issue

    Windows 10 cynetworkx Install Issue

    Everything installs in Windows 10 with no problem until trying to build cynetworkx. I have searched Github and used some Google-Fu, however none of the solutions found have yielded a positive result. Lots of output, so it's attached as a text file.

    error.txt

    opened by y0himba 2
  • Colab: 'missing argument' error prevents executing the inpainting

    Colab: 'missing argument' error prevents executing the inpainting

    When attempting to execute the inpainting after going through the colab normally as one would, I get the following result:

    Traceback (most recent call last):
      File "main.py", line 29, in <module>
        config = yaml.load(open(args.config, 'r'))
    TypeError: load() missing 1 required positional argument: 'Loader'
    

    How do I fix this? Is the colab just broken?

    opened by CrimsonCuttle 2
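
    This is a PyYAML versioning issue rather than a Colab-specific one: PyYAML 6 made the Loader argument of yaml.load mandatory. A one-line sketch of the adjustment (alternatively, pin an older PyYAML, e.g. pip install "pyyaml<6", in the Colab cell):

    import yaml

    with open("argument.yml") as f:
        config = yaml.load(f, Loader=yaml.FullLoader)   # or simply: yaml.safe_load(f)
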
  • Does anyone have a workflow for getting a normal, textured 3D environment out of this?

    Does anyone have a workflow for getting a normal, textured 3D environment out of this?

    I'd love to view my results in other spaces like VR or to be used in 3D scenes in Blender, but the mesh comes out stretched to near unrecognizability, and I can't find a way to make the textures "fit" properly. Has anyone ported the results from this project anywhere else like Blender etc? If so, what was your process?

    opened by CrimsonCuttle 1
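
    Not a full workflow, but a small sketch for inspecting the exported mesh outside the bundled viewer; it assumes the optional trimesh package, which is not a project dependency. As far as I can tell, the exported .ply stores per-vertex colours rather than a UV-mapped image texture, so viewers such as Blender need to display vertex colours instead of expecting a texture to "fit":

    import trimesh

    mesh = trimesh.load("mesh/moon.ply")
    print(len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
    print("has vertex colours:", mesh.visual.kind == "vertex")
    mesh.show()   # quick interactive preview (requires pyglet)
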
  • Project dependencies may have API risk issues

    Project dependencies may have API risk issues

    Hi. In 3d-photo-inpainting, inappropriate dependency version constraints can cause risks.

    Below are the dependencies and version constraints that the project is using

    opencv-python==4.2.0.32
    vispy==0.6.4
    moviepy==1.0.2
    transforms3d==0.3.1
    networkx==2.3
    cynetworkx
    scikit-image
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound (or *) introduce a risk of missing-API errors because the latest versions of the dependencies may remove some APIs.

    After further analysis of this project, the version constraint of the moviepy dependency can be changed to >=0.2.1.7.15,<=2.0.0.dev2, and the version constraint of the networkx dependency can be changed to >=2.0,<=2.8.4.

    These suggestions reduce the chance of dependency conflicts as much as possible while allowing the latest possible versions without triggering API errors in the project.

    The methods that the project calls from these dependencies include:

    The calling methods from moviepy:
    moviepy.editor.ImageSequenceClip

    The calling methods from networkx:
    max


    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
Owner
Virginia Tech Vision and Learning Lab
Beyond Image to Depth: Improving Depth Prediction using Echoes (CVPR 2021)

Beyond Image to Depth: Improving Depth Prediction using Echoes (CVPR 2021) Kranti Kumar Parida, Siddharth Srivastava, Gaurav Sharma. We address the pr

Kranti Kumar Parida 33 Jun 27, 2022
(CVPR 2022 - oral) Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry

Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry Official implementation of the paper Multi-View Depth Est

Bae, Gwangbin 138 Dec 28, 2022
Source code for "OmniPhotos: Casual 360° VR Photography"

OmniPhotos: Casual 360° VR Photography Project Page | Video | Paper | Demo | Data This repository contains the source code for creating and viewing Om

Christian Richardt 144 Dec 30, 2022
PyTorch Implement of Context Encoders: Feature Learning by Inpainting

Context Encoders: Feature Learning by Inpainting This is the Pytorch implement of CVPR 2016 paper on Context Encoders 1) Semantic Inpainting Demo Inst

null 321 Dec 25, 2022
PyTorch implementations for our SIGGRAPH 2021 paper: Editable Free-viewpoint Video Using a Layered Neural Representation.

st-nerf We provide PyTorch implementations for our paper: Editable Free-viewpoint Video Using a Layered Neural Representation SIGGRAPH 2021 Jiakai Zha

Diplodocus 258 Jan 2, 2023
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)

Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CAC) Xin Lai*, Zhuotao Tian*, Li Jiang, Shu Liu, Hengshuang Zhao, Li

Jia Research Lab 137 Dec 14, 2022
Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CVPR 2021)

Semi-supervised Semantic Segmentation with Directional Context-aware Consistency (CAC) Xin Lai*, Zhuotao Tian*, Li Jiang, Shu Liu, Hengshuang Zhao, Li

DV Lab 137 Dec 14, 2022
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals

LapDepth-release This repository is a Pytorch implementation of the paper "Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals" M

Minsoo Song 205 Dec 30, 2022
Python script for performing depth completion from sparse depth and rgb images using the msg_chn_wacv20 model in ONNX

ONNX msg_chn_wacv20 depth completion Python script for performing depth completion from sparse depth and rgb images using the msg_chn_wacv20 model in

Ibai Gorordo 19 Oct 22, 2022
Python script for performing depth completion from sparse depth and rgb images using the msg_chn_wacv20 model in TensorFlow Lite.

TFLite-msg_chn_wacv20-depth-completion Python script for performing depth completion from sparse depth and rgb images using the msg_chn_wacv20. model

Ibai Gorordo 2 Oct 4, 2021
MAT: Mask-Aware Transformer for Large Hole Image Inpainting

MAT: Mask-Aware Transformer for Large Hole Image Inpainting (CVPR2022, Oral) Wenbo Li, Zhe Lin, Kun Zhou, Lu Qi, Yi Wang, Jiaya Jia [Paper] News This

null 254 Dec 29, 2022
The pytorch implementation of the paper "text-guided neural image inpainting" at MM'2020

TDANet: Text-Guided Neural Image Inpainting, MM'2020 (Oral) MM | ArXiv This repository implements the paper "Text-Guided Neural Image Inpainting" by L

LisaiZhang 75 Dec 22, 2022
Code for "Layered Neural Rendering for Retiming People in Video."

Layered Neural Rendering in PyTorch This repository contains training code for the examples in the SIGGRAPH Asia 2020 paper "Layered Neural Rendering

Google 154 Dec 16, 2022
Code for "Unsupervised Layered Image Decomposition into Object Prototypes" paper

DTI-Sprites Pytorch implementation of "Unsupervised Layered Image Decomposition into Object Prototypes" paper Check out our paper and webpage for deta

null 40 Dec 22, 2022
Layered Neural Atlases for Consistent Video Editing

Layered Neural Atlases for Consistent Video Editing Project Page | Paper This repository contains an implementation for the SIGGRAPH Asia 2021 paper L

Yoni Kasten 353 Dec 27, 2022
Efficient electromagnetic solver based on rigorous coupled-wave analysis for 3D and 2D multi-layered structures with in-plane periodicity

Efficient electromagnetic solver based on rigorous coupled-wave analysis for 3D and 2D multi-layered structures with in-plane periodicity, such as gratings, photonic-crystal slabs, metasurfaces, surface-emitting lasers, nano-antennas, and more.

Alex Song 17 Dec 19, 2022
Read and write layered TIFF ImageSourceData and ImageResources tags

Read and write layered TIFF ImageSourceData and ImageResources tags Psdtags is a Python library to read and write the Adobe Photoshop(r) specific Imag

Christoph Gohlke 4 Feb 5, 2022
《Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking》 (CVPR 2021); 《Masksembles for Uncertainty Estimation》 (CVPR 2021)

Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking Ning Wang, Wengang Zhou, Jie Wang, and Houqiang Li Accepted by CVPR

NingWang 236 Dec 22, 2022
CVPR 2021: "Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE"

Diverse Structure Inpainting ArXiv | Papar | Supplementary Material | BibTex This repository is for the CVPR 2021 paper, "Generating Diverse Structure

null 152 Nov 4, 2022