Code for "LASR: Learning Articulated Shape Reconstruction from a Monocular Video". CVPR 2021.


LASR

Installation

Build with conda

conda env create -f lasr.yml
conda activate lasr
# install softras
cd third_party/softras; python setup.py install; cd -;
# install manifold remeshing
git clone --recursive -j8 git://github.com/hjwdzh/Manifold; cd Manifold; mkdir build; cd build; cmake .. -DCMAKE_BUILD_TYPE=Release;make; cd ../../

For docker installation, please see install.md

Data preparation

Create folders to store data and training logs

mkdir log; mkdir tmp; 
Synthetic data

To render {silhouette, flow, rgb} observations of spot.

python scripts/render_syn.py
Real data (DAVIS)

First, download the DAVIS 2017 trainval set and copy the JPEGImages/Full-Resolution and Annotations/Full-Resolution folders of DAVIS-camel into the corresponding folders in database.

cp ...davis-path/DAVIS/Annotations/Full-Resolution/camel/ -rf database/DAVIS/Annotations/Full-Resolution/
cp ...davis-path/DAVIS-lasr/DAVIS/JPEGImages/Full-Resolution/camel/ -rf database/DAVIS/JPEGImages/Full-Resolution/

Then download pre-trained VCN optical flow:

pip install gdown
mkdir ./lasr_vcn
gdown https://drive.google.com/uc?id=139S6pplPvMTB-_giI6V2dxpOHGqqAdHn -O ./lasr_vcn/vcn_rob.pth

Run VCN-robust to predict optical flow on DAVIS camel video:

bash preprocess/auto_gen.sh camel
Your own video

You will need to download and install detectron2 to obtain object segmentations as instructed below.

python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html

First, use any video processing tool (such as ffmpeg) to extract frames into JPEGImages/Full-Resolution/name-of-the-video.

mkdir database/DAVIS/JPEGImages/Full-Resolution/pika-tmp/
ffmpeg -ss 00:00:04 -i database/raw/IMG-7495.MOV -vf fps=10 database/DAVIS/JPEGImages/Full-Resolution/pika-tmp/%05d.jpg

Then, run pointrend to get segmentations:

cd preprocess
python mask.py pika path-to-detectron2-root; cd -

Assuming you have downloaded VCN flow in the previous step, run flow prediction:

bash preprocess/auto_gen.sh pika

Single video optimization

Synthetic spot

Next, we want to optimize the shape, texture, and camera parameters from image observations. Optimizing spot takes ~20 min on a single Titan Xp GPU.
bash scripts/spot3.sh

To render the optimized shape, texture and camera parameters

bash scripts/extract.sh spot3-1 10 1 26 spot3 no no
python render_vis.py --testdir log/spot3-1/ --seqname spot3 --freeze --outpath tmp/1.gif
DAVIS camel

Optimize on camel observations.

bash scripts/template.sh camel

To render optimized camel

bash scripts/render_result.sh camel
Customized video (Pika)

Similarly, run the following steps to reconstruct pika

bash scripts/template.sh pika

To render reconstructed shape

bash scripts/render_result.sh pika
Monitor optimization

To monitor optimization, run

tensorboard --logdir log/

Example outputs

Evaluation

Run the following command to evaluate 3D shape accuracy for synthetic spot.

python scripts/eval_mesh.py --testdir log/spot3-1/ --gtdir database/DAVIS/Meshes/Full-Resolution/syn-spot3f/

Run the following command to evaluate keypoint accuracy on BADJA.

python scripts/eval_badja.py --testdir log/camel-5/ --seqname camel

Additional Notes

Other videos in DAVIS/BADJA

Please refer to the data preparation and optimization steps of the camel example and modify camel to other sequence names, such as dance-twirl. We provide config files in the configs folder.
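
For instance, mirroring the camel commands for dance-twirl (a sketch; it assumes the dance-twirl frames and annotations are copied from DAVIS in the same way and that a matching config exists in configs):

cp ...davis-path/DAVIS/Annotations/Full-Resolution/dance-twirl/ -rf database/DAVIS/Annotations/Full-Resolution/
cp ...davis-path/DAVIS/JPEGImages/Full-Resolution/dance-twirl/ -rf database/DAVIS/JPEGImages/Full-Resolution/
# predict optical flow, then optimize and render
bash preprocess/auto_gen.sh dance-twirl
bash scripts/template.sh dance-twirl
bash scripts/render_result.sh dance-twirl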

Synthetic articulated objects

To render and reproduce results on articulated objects (Sec. 4.2), you will need to purchase and download the 3D models here. We use Blender to export animated meshes and then run the rendering script:

python scripts/render_syn.py --outdir syn-dog-15 --nframes 15 --alpha 0.5 --model dog

Optimize on rendered observations

bash scripts/dog15.sh

To render optimized dog

bash scripts/render_result.sh dog
Batchsize

The current codebase is tested with batchsize=4. The batchsize can be modified in scripts/template.sh. Note that decreasing the batchsize will improve speed but reduce stability.
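
As a rough sketch of what to change (the variable name below is an assumption, not taken from the actual script; check scripts/template.sh for the real flag passed to the optimizer):

# hypothetical excerpt of scripts/template.sh
batchsize=4   # tested default; lower values run faster per iteration but optimize less stably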

Distributed training

The current codebase supports single-node multi-GPU training with PyTorch DistributedDataParallel. Please modify dev and ngpu in scripts/template.sh to select devices.
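
For example, to optimize on two GPUs, the relevant settings in scripts/template.sh would look roughly like the sketch below (only the variable names dev and ngpu come from the note above; the exact format of their values is an assumption):

dev=0,1   # CUDA device ids to use
ngpu=2    # number of GPUs / distributed processes to launch

Make sure ngpu does not exceed the number of visible GPUs; otherwise torch.cuda.set_device fails with "CUDA error: invalid device ordinal", as in one of the issues below.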

Acknowledgement

The code borrows the skeleton of CMR.

External repos:

External data:

Citation

To cite our paper,

@inproceedings{yang2021lasr,
  title={LASR: Learning Articulated Shape Reconstruction from a Monocular Video},
  author={Yang, Gengshan 
      and Sun, Deqing
      and Jampani, Varun
      and Vlasic, Daniel
      and Cole, Forrester
      and Chang, Huiwen
      and Ramanan, Deva
      and Freeman, William T
      and Liu, Ce},
  booktitle={CVPR},
  year={2021}
}  
Comments
  •  Pdb mode

    Excuse me for asking again and again.

    "bash scripts/spot3.sh". After running the above code, terminal goes into Pdb mode. What should I enter here?

    opened by Kana-alt 64
  • ModuleNotFoundError: No module named 'point_rend'

    I ran python mask.py pika path-to-detectron2-root; cd -

    The following error has occurred. How can I solve this problem?

    Traceback (most recent call last): File "mask.py", line 45, in import point_rend ModuleNotFoundError: No module named 'point_rend' /home/shiori/lasr-main

    opened by Kana-alt 11
  • ninja: no work to do.

    I started to build the conda environment again. I get output suggesting that ninja is not working properly; does this mean that the environment build was not successful?

    cd third_party/softras; python setup.py install; cd -;

    running install running bdist_egg running egg_info writing soft_renderer.egg-info/PKG-INFO writing dependency_links to soft_renderer.egg-info/dependency_links.txt writing requirements to soft_renderer.egg-info/requires.txt writing top-level names to soft_renderer.egg-info/top_level.txt reading manifest file 'soft_renderer.egg-info/SOURCES.txt' writing manifest file 'soft_renderer.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py running build_ext building 'soft_renderer.cuda.load_textures' extension Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/load_textures_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/load_textures.cpython-38-x86_64-linux-gnu.so building 'soft_renderer.cuda.create_texture_image' extension Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/create_texture_image.cpython-38-x86_64-linux-gnu.so building 'soft_renderer.cuda.soft_rasterize' extension Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. 
g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/soft_rasterize.cpython-38-x86_64-linux-gnu.so building 'soft_renderer.cuda.voxelization' extension Emitting ninja build file /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. g++ -pthread -shared -B /home/kana/anaconda3/envs/lasr2/compiler_compat -L/home/kana/anaconda3/envs/lasr2/lib -Wl,-rpath=/home/kana/anaconda3/envs/lasr2/lib -Wl,--no-as-needed -Wl,--sysroot=/ /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda.o /home/kana/lasr3/third_party/softras/build/temp.linux-x86_64-3.8/soft_renderer/cuda/voxelization_cuda_kernel.o -L/home/kana/anaconda3/envs/lasr2/lib/python3.8/site-packages/torch/lib -L/home/kana/anaconda3/envs/lasr2/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.8/soft_renderer/cuda/voxelization.cpython-38-x86_64-linux-gnu.so

    opened by Kana-alt 8
  • Question about the flatten loss

    Dear Authors,

    Thank you so much for the great work. While reading your source code, I found there is a flatten loss here. This loss is not discussed in the paper and it is also not well explained in the code. Can you explain what this loss is about? Thank you very much!

    Best, Xianghui

    opened by xiexh20 4
  • OpenGL.error.GLError

    I set up an environment in docker and tried to run the rendering code.

    As a result, I got an OpenGL error. Is this due to a different version installed or something else?

    """ docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; python render_vis.py --testdir log/spot3-1/ --seqname spot3 --freeze --outpath tmp/1.gif'

    log/spot3-1/ syn-spot3f/0 syn-spot3f/1 0
    Traceback (most recent call last):
      File "render_vis.py", line 292, in <module>
        main()
      File "render_vis.py", line 226, in main
        r = OffscreenRenderer(img_size, img_size)
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
        self._platform.init_context()
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 186, in init_context
        self._egl_context = eglCreateContext(
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__
        return self( *args, **named )
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/OpenGL/error.py", line 228, in glCheckError
        raise GLError(
    OpenGL.error.GLError: GLError(
      err = 12297,
      baseOperation = eglCreateContext,
      cArguments = (
        <OpenGL._opaque.EGLDisplay_pointer object at 0x7f1e16d3a1c0>,
        <OpenGL._opaque.EGLConfig_pointer object at 0x7f1e16d3a240>,
        <OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d85040>,
        <OpenGL.arrays.lists.c_int_Array_7 object at 0x7f1e2fd5a940>,
      ),
      result = <OpenGL._opaque.EGLContext_pointer object at 0x7f1e16d3a8c0>
    )
    """

    opened by Kana-alt 3
  • LASR fails for sequence of a person

    Hello, I am looking to run LASR for a couple different scenes showing a single person. For one (RGB sequence: https://user-images.githubusercontent.com/6766142/126760093-b96c19ae-8e15-4cb6-8942-8ad0a420a2e5.mp4 LASR results: https://user-images.githubusercontent.com/6766142/126760220-8ceff0c3-03bd-432e-8d7a-0b1789112dc7.mp4), LASR works very well using the default parameters and symmetry disabled. However, for the other the method runs to completion, but produces invalid results. The RGB sequence is: https://user-images.githubusercontent.com/6766142/126758853-57390ec1-966d-4488-979e-a1f92632bfb5.mp4

    The results using default values (symmetry enabled) show a phantom copy and the mesh doesn't deform to match the mask:

    https://user-images.githubusercontent.com/6766142/126759304-3866ae49-64d7-4413-8e38-0405c1e33765.mp4

    I disabled the symmetry and now the resulting mesh is an amorphous blob that doesn't even overlap the mask:

    https://user-images.githubusercontent.com/6766142/126759364-007f64e3-4cba-4370-9da3-57a8d24ef592.mp4

    Monitoring the trends in tensorboard seems to show that everything proceeded well until the end of the first epoch, so I ran the method using only a single epoch, which gives the best results so far (although somewhat reminiscent of a tadpole):

    https://user-images.githubusercontent.com/6766142/126759766-c3cf477a-507d-4af3-b0f9-2ef447de0309.mp4

    I also tried with larger batchsizes as suggested in the readme (6 and 10), but this didn't seem to cause any difference in the results. I verified that the masks and flow fields didn't look vastly incorrect. I'm wondering if this is a known issue or that you might have an idea what has gone wrong for this scene. Thanks!

    opened by ecmjohnson 2
  • RuntimeError: CUDA error: invalid device ordinal

    I used Docker to build the environment. I prepared the DAVIS data and tried to run the "Optimize on camel observations" step.

    Then, I got a CUDA error and the execution did not proceed.

    Can you tell me the cause?

    docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr; source activate lasr; bash scripts/template.sh camel'

    Jitting Chamfer 3D
    Jitting Chamfer 3D
    Loaded JIT 3D CUDA chamfer distance
    Loaded JIT 3D CUDA chamfer distance
    Traceback (most recent call last):
      File "optimize.py", line 59, in <module>
        app.run(main)
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 303, in run
        _run_main(main, args)
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
        sys.exit(main(argv))
      File "optimize.py", line 40, in main
        torch.cuda.set_device(opts.local_rank)
      File "/anaconda3/envs/lasr/lib/python3.8/site-packages/torch/cuda/__init__.py", line 263, in set_device
        torch._C._cuda_setDevice(device)
    RuntimeError: CUDA error: invalid device ordinal

    opened by Kana-alt 2
  • error [python scripts/render_syn.py]

    I ran "python scripts/render_syn.py".

    The following error has occurred. How can I solve this problem?

    /home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:369: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
      warnings.warn("XYZW quaternion coefficient order is deprecated and"
    /home/kana/anaconda3/envs/lasr/lib/python3.8/site-packages/kornia/geometry/conversions.py:506: UserWarning: XYZW quaternion coefficient order is deprecated and will be removed after > 0.6. Please use QuaternionCoeffOrder.WXYZ instead.
      warnings.warn("XYZW quaternion coefficient order is deprecated and"

    opened by Kana-alt 2
  • Flipped flow maps in the flow loss

    Hi Gengshan,

    Thanks for the great work.

    I noticed that you flip the flows before saving in autogen.py https://github.com/google/lasr/blob/29d8759354f853119276f41504d6b527fd5484e5/preprocess/auto_gen.py#L173-L178

    Is there a good reason to do this? An unintended consequence is that it leads to flipped flows being loaded at training time and the flow loss ends up being wrong.

    Here's an example of the flow error being logged while running LASR on the camel example. As you can see, the ground truth flow here is flipped. [image]

    opened by sanjayharesh 3
  • Question on Flow preprocessing

    Hi,

    Thank you for open-sourcing your awesome work.

    Could you explain what is going on with the flow pre-processing below?

    https://github.com/google/lasr/blob/492fa417bce7ec8743da80dda267320ade153873/dataloader/vidbase.py#L145-L151

    Why is this preferred over a simple MSE penalty over raw flow fields?

    Thanks!!

    opened by kakashiht 1
  • Does the env yaml file not works on windows?

    I cloned your repo and tried to create the env with conda env create -f lasr.yml, and then I got the following message: Solving environment: failed

    ResolvePackageNotFound:
      - lz4-c==1.9.3=h2531618_0
      - cudatoolkit==11.0.221=h6bb024c_0
      - ca-certificates==2021.5.30=ha878542_0
      - openssl==1.1.1k=h27cfd23_0
      - libwebp-base==1.2.0=h27cfd23_0
      - tk==8.6.10=hbc83047_0
      - numpy==1.20.2=py38h2d18471_0
      - sqlite==3.35.4=hdfb4753_0
      - numpy-base==1.20.2=py38hfae3a4d_0
      - jpeg==9b=h024ee3a_2
      - freetype==2.10.4=h5ab3b9f_0
      - intel-openmp==2021.2.0=h06a4308_610
      - mkl_random==1.2.1=py38ha9443f7_2
      - pytorch3d==0.4.0=py38_cu110_pyt171
      - pyyaml==5.3.1=py38h8df0ef7_1
      - zstd==1.4.9=haebb681_0
      - ld_impl_linux-64==2.33.1=h53a641e_7
      - pytorch==1.7.1=py3.8_cuda11.0.221_cudnn8.0.5_0
      - readline==8.1=h27cfd23_0
      - xz==5.2.5=h7b6447c_0
      - libtiff==4.2.0=h85742a9_0
      - lcms2==2.12=h3be6417_0
      - cudatoolkit-dev=11.0.3
      - ncurses==6.2=he6710b0_1
      - zlib==1.2.11=h7b6447c_3
      - mkl_fft==1.3.0=py38h42c9631_2
      - mkl-service==2.3.0=py38h27cfd23_1
      - libgcc-ng=9.1.0
      - libffi==3.3=he6710b0_2
      - libstdcxx-ng==9.1.0=hdf63c60_0
    

    When I delete the build strings, the error disappears, but I'm not sure this is the right way.
    Is this repo not compatible with Windows?

    opened by IlkwonHong 1
  • LASR with known camera intrinsics/extrinsics

    Hello, I would like to run LASR with known camera intrinsics & extrinsics. I believe this is already implemented, but I'm having some trouble understanding how to accomplish this myself. The mechanisms seem to be two-fold: with the use_gtpose option and providing per-frame camera files (parsing code here). Could you clarify the functionality of these mechanisms? I was unable to find an example that made use of either, but if I missed one or you have one, that would also be helpful.

    Another thing that confuses me is the scaling of the scale (lol) when use_gtpose is set, even though the focal length is assigned equivalently whether or not the camera files are provided. That makes me think these two mechanisms might have different purposes and I am incorrectly conflating them.

    Any clarification you can provide would be much appreciated! Thanks!

    opened by ecmjohnson 12
  • No module named 'detectron2.config'

    I built my environment with docker. Therefore, I use the following command to get segmentations.

    docker run -v $(pwd):/lasr --gpus all lasr bash -c 'cd lasr/detectron2; source activate lasr; python mask.py pika . /detectron2; cd -'

    Then I get the following error.

    Traceback (most recent call last): File "mask.py", line 23, in from detectron2.config import get_cfg ModuleNotFoundError: No module named 'detectron2.config'

    detectron2 is installed in a folder created in the parent directory of preprocess. Is it because I am using docker that I am getting this error?

    opened by Kana-alt 1