[ICCV'21] Pri3D: Can 3D Priors Help 2D Representation Learning?

Overview

Pri3D: Can 3D Priors Help 2D Representation Learning? [ICCV 2021]

Pri3D

Pri3D leverages 3D priors for downstream 2D image understanding tasks: during pre-training, we incorporate view-invariant and geometric constraints derived from the color-geometry correspondences in RGB-D datasets, imbuing 3D priors into the learned features. We show that these 3D-imbued features transfer effectively to 2D tasks such as semantic segmentation, object detection, and instance segmentation, improving their performance.

[ICCV 2021 Paper] [Video]

Environment

This codebase was tested with Python 3.8, PyTorch 1.7.1, CUDA 11.0, and MinkowskiEngine 0.5.1, matching the installation steps below.

Installation

We use conda for the installation process:

# Install virtual env and PyTorch
conda create -n sparseconv051 python=3.8
conda activate sparseconv051
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch

# Compile and install MinkowskiEngine 0.5.1
conda install mkl mkl-include -c intel
wget https://github.com/NVIDIA/MinkowskiEngine/archive/refs/tags/v0.5.1.zip
unzip v0.5.1.zip
cd MinkowskiEngine-0.5.1
python setup.py install

Next, clone the Pri3D repository and install the requirements from its root directory.

git clone https://github.com/Sekunde/Pri3D.git
cd Pri3D
pip install -r requirements.txt

Training Mask R-CNN models requires Detectron2.
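
A minimal way to install Detectron2 into the same conda environment, following the Detectron2 installation documentation (check that the build matches your PyTorch 1.7.1 / CUDA 11.0 setup; this is a sketch, not a pinned recipe):

# Install Detectron2 from source
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'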

[Logging] Pri3D creates runs for logging training curves and configurations in a Weights & Biases project named pri3d. Additionally, checkpoints and a txt log file are stored in the specified output folder. The first time you run the training code, you will be prompted to log in to Weights & Biases.
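
If you prefer to authenticate ahead of time rather than being prompted, you can log in once via the standard Weights & Biases CLI (this is the generic wandb workflow, not a Pri3D-specific command):

# One-time Weights & Biases authentication; paste your API key when prompted
pip install wandb
wandb login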

[Optional] If you want to pre-train the view-consistent contrastive loss on MegaDepth data, you need to install COLMAP; see their installation documentation.

Pre-training Section

Data Pre-processing

Prepare ScanNet Pre-training Data

For pre-training with the view-invariant contrastive loss, pairs of ScanNet frames can be generated with the following script (change TARGET and SCANNET_DIR in the script accordingly). The script first extracts point clouds from partial frames and then computes, for each scene, a file list of overlapping partial frames.

cd pretrain/data_preprocess/scannet
./preprocess.sh

Then a combined txt file called overlap30.txt, aggregating the per-scene file lists, can be generated by running the following code. The resulting overlap30.txt should be placed in the folder TARGET/splits.

cd pretrain/data_preprocess/scannet
python generate_list.py --target_dir TARGET
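
Assuming generate_list.py writes overlap30.txt to the current working directory (adjust the source path if it is written elsewhere), placing it under TARGET/splits could look like:

# Hypothetical paths: move the combined file list into TARGET/splits
mkdir -p TARGET/splits
mv overlap30.txt TARGET/splits/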

For pre-training with the geometric prior, we first generate point clouds from the ScanNet reconstructions by running the following code. We use SCANNET_DATA to refer to where the ScanNet data lives and SCANNET_OUT_PATH to denote the output path of the processed ScanNet data.

# Edit path variables: SCANNET_DATA and SCANNET_OUT_PATH
cd pretrain/data_preprocess/scannet
python collect_indoor3d_data.py --input SCANNET_DATA --output SCANNET_OUT_PATH
# copy the filelists
cp -r split SCANNET_OUT_PATH

Afterwards, we generate the chunk bounding boxes used for cropping chunks by running the following code. TARGET points to where the previously generated pairs of ScanNet frames are located.

cd pretrain/data_preprocess/scannet/
python chunk.py --base TARGET

Prepare MegaDepth Pre-training Data

We borrow the MegaDepth data generation code from D2-Net. After installing COLMAP and downloading the MegaDepth dataset (including the SfM models), you can run the following code to pre-process the data.

cd pretrain/data_preprocess/megadepth/
python undistort_reconstructions.py --colmap_path /path/to/colmap/executable --base_path /path/to/megadepth
bash preprocess_undistorted_megadepth.sh /path/to/megadepth /path/to/output/folder

We also provide visualization of the MegaDepth data.

Prepare KITTI Pre-training Data

Download the KITTI dataset and run the following code to pre-process it. This creates an overlap.txt file indexing the pairs of frames, as well as a mapping folder storing the coordinate mappings between frames, both located in /path/to/kitti/dataset.

cd pretrain/data_preprocess/kitti/
python kitti_process.py --input /path/to/kitti/dataset

Pre-training on Different Datasets

We provide scripts for pre-training Pri3D on the different datasets with 8 GPUs (batch_size=64, 8 per GPU) on a single server under the folder pretrain/pri3d/scripts. Pri3D can also be pre-trained with fewer GPUs, e.g. 4 GPUs, by setting train.batch_size=32 (8 per GPU) and optimizer.accumulate_step=2 (effective batch size 32x2=64) to accumulate gradients; see the sketch below. The configuration is managed with Facebook's Hydra, and our codebase enables multi-GPU training via PyTorch's DistributedDataParallel (DDP) module.
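
As a hypothetical example of the fewer-GPU setup (assuming the launch script forwards extra Hydra overrides to the training entry point; otherwise, edit the corresponding values directly in the script):

cd pretrain/pri3d
# 4-GPU pre-training with gradient accumulation (effective batch size 32x2=64)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet VIEW=True scripts/scannet.sh train.batch_size=32 optimizer.accumulate_step=2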

Pre-train on ScanNet

TARGET and SCANNET_OUT_PATH refer to the pre-processed data locations defined in Prepare ScanNet Pre-training Data.

cd pretrain/pri3d
export DATAPATH=TARGET 
# if pre-train with geometric-contrastive loss
export POINTCLOUD_PATH=SCANNET_OUT_PATH
# Pretrain with view-invariant loss (ResNet50 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet VIEW=True scripts/scannet.sh
# Pretrain with geometric-contrastive loss (ResNet50 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet GEO=True scripts/scannet.sh
# Pretrain with view-invariant and geometric-contrastive loss (ResNet50 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet VIEW=True GEO=True scripts/scannet.sh
# Pretrain with view-invariant loss (ResNet18 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet VIEW=True scripts/scannet.sh

Pre-train on MegaDepth

cd pretrain/pri3d
export DATAPATH=/path/to/megadepth/processed/data/folder
# Pretrain with view-invariant loss (ResNet50 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet scripts/megadepth.sh
# Pretrain with view-invariant loss (ResNet18 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet scripts/megadepth.sh

Pre-train on KITTI

cd pretrain/pri3d
export DATAPATH=KITTI_PATH
# Pretrain with view-invariant loss (ResNet50 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet scripts/kitti.sh
# Pretrain with view-invariant loss (ResNet18 backbone)
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet scripts/kitti.sh

Downstream Task Section

Semantic Segmentation on ScanNet

Download scannet_frames_25k and unzip it to SCANNET_SEMSEG. It should have the following structure.

SCANNET_SEMSEG/
    scene0000_00/
        color/
        depth/
        instance/
        label/
        pose/
        intrinsics_color.txt
        intrinsics_depth.txt
    scene0000_01/
    ...

Export the path SCANNET_SEMSEG to the environment variable $DATAPATH and run the following code to train the ResUNet models.

cd downstream/semseg/unet
export DATAPATH=SCANNET_SEMSEG
export PHASE=train
# train the model with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=imagenet scripts/scannet.sh
# train the model with ResNet50 backbone, train from scratch 
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=scratch scripts/scannet.sh
# train the model with ResNet50 backbone, train from scratch with 20% data.
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=scratch PHASE=train20 scripts/scannet.sh
# train the model with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=/path/to/saved/model scripts/scannet.sh
# train the model with ResNet18 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet INIT=imagenet scripts/scannet.sh

Similarly, export the environment variable and run the following code to train the PSPNet and DeepLabV3/V3+ models.

# train PSPNet (ResNet50 as backbones)
cd downstream/semseg/pspnet
export DATAPATH=SCANNET_SEMSEG
# train PSPNet with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder INIT=imagenet scripts/scannet.sh
# train PSPNet with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder INIT=/path/to/checkpoint scripts/scannet.sh

# train DeepLabV3 and DeepLabV3+ models (ResNet50 as backbones)
cd downstream/semseg/deeblabv3
export DATAPATH=SCANNET_SEMSEG
# train DeepLabV3 with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder INIT=imagenet scripts/train_scannet_deeplapv3.sh
# train DeepLabV3+ with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder INIT=imagenet scripts/train_scannet_deeplapv3plus.sh
# train DeepLabV3+ with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder INIT=/path/to/checkpoint scripts/train_scannet_deeplabv3plus.sh

Model Zoo

PSPNet and DeepLabV3/V3+ expect checkpoints in torchvision format, so we provide code for converting a Pri3D pre-trained checkpoint into a torchvision-format checkpoint.

cd downstream/conversion
python pri3d_to_torchvision.py /path/to/pre-trained/pri3d/checkpoint /path/to/output/checkpoint/in/torchvision/format
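
To sanity-check a converted checkpoint, one option is to load it into a torchvision ResNet-50; this assumes the converted file stores a plain state_dict with torchvision-style keys, and is only a quick diagnostic, not part of the official pipeline:

# Check that the converted keys line up with torchvision's ResNet-50 (head keys may be reported as missing)
python -c "import torch, torchvision; sd = torch.load('/path/to/output/checkpoint/in/torchvision/format', map_location='cpu'); m = torchvision.models.resnet50(); print(m.load_state_dict(sd, strict=False))"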

The provided pre-trained models for PSPNet and DeepLabV3/V3+ are already converted to torchvision format.

| Training Data | mIoU (val) | Backbone | Pre-trained Model (on ScanNet) | Curves | Logs |
|---|---|---|---|---|---|
| 100% scenes | 61.7 | ResNet50 | Pri3D (View + Geo) | link | link |
| 100% scenes | 55.7 | ResNet18 | Pri3D (View + Geo) | link | link |
| 100% scenes | 62.8 | PSPNet | Pri3D (View + Geo) | link | link |
| 100% scenes | 61.3 | DeepLabV3 | Pri3D (View + Geo) | link | link |
| 100% scenes | 61.6 | DeepLabV3+ | Pri3D (View + Geo) | link | link |
| 80% scenes | 60.3 | ResNet50 | Pri3D (View + Geo) | link | link |
| 60% scenes | 58.9 | ResNet50 | Pri3D (View + Geo) | link | link |
| 40% scenes | 56.2 | ResNet50 | Pri3D (View + Geo) | link | link |
| 20% scenes | 51.5 | ResNet50 | Pri3D (View + Geo) | link | link |

Semantic Segmentation on KITTI

Download and unzip the labels for semantic and instance segmentation, then organize the data folder with the following structure.

KITTI_SEMSEG/
    image_2/
    instance/
    semantic/
    semantic_rgb/

Use the following code snippets to train semantic segmentation models on the KITTI data.

cd downstream/semseg
export DATAPATH=KITTI_SEMSEG
# train the model with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=imagenet scripts/kitti.sh
# train the model with ResNet50 backbone, train from scratch 
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=scratch scripts/kitti.sh
# train the model with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=/path/to/saved/model scripts/kitti.sh
# train the model with ResNet18 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet INIT=imagenet scripts/kitti.sh

Model Zoo

| Training Data | mIoU (val) | Backbone | Pre-trained Model | Curves | Logs |
|---|---|---|---|---|---|
| 100% scenes | 33.2 | ResNet50 | Pri3D (View) on KITTI | link | link |

Semantic Segmentation on NYUv2

Download the NYUv2 dataset and unzip it to the path NYUv2_SEMSEG.

cd downstream/semseg
export DATAPATH=NYUv2_SEMSEG
# train the model with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=imagenet scripts/nyuv2.sh
# train the model with ResNet50 backbone, train from scratch 
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=scratch scripts/nyuv2.sh
# train the model with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=/path/to/saved/model scripts/nyuv2.sh
# train the model with ResNet18 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet INIT=imagenet scripts/nyuv2.sh

Model Zoo

| Training Data | mIoU (val) | Backbone | Pre-trained Model (on ScanNet) | Curves | Logs |
|---|---|---|---|---|---|
| 100% scenes | 54.7 | ResNet50 | Pri3D (View + Geo) | link | link |
| 100% scenes | 47.6 | ResNet50 | MoCoV2-supIN->SN | link | link |

Semantic Segmentation on Cityscapes

Download gtFine_trainvaltest and leftImg8bit_trainvaltest. Unzip and organize them with the following structure.

Cityscapes_SEMSEG/
    gtFine/
    leftImg8bit/

Export the data path (Cityscapes_SEMSEG) to the $DATAPATH environment variable and train the models.

cd downstream/semseg
export DATAPATH=Cityscapes_SEMSEG
# train the model with ResNet50 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=imagenet scripts/cityscapes.sh
# train the model with ResNet50 backbone, train from scratch 
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=scratch scripts/cityscapes.sh
# train the model with ResNet50 backbone, initialized with specified pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res50UNet INIT=/path/to/saved/model scripts/cityscapes.sh
# train the model with ResNet18 backbone, initialized with ImageNet pre-trained model
LOG_DIR=/path/to/log/folder BACKBONE=Res18UNet INIT=imagenet scripts/cityscapes.sh

Model Zoo

| Training Data | mIoU (val) | Backbone | Pre-trained Model | Curves | Logs |
|---|---|---|---|---|---|
| 100% scenes | 56.3 | ResNet50 | Pri3D (View) on KITTI | link | link |
| 100% scenes | 55.2 | ResNet50 | Pri3D (View) on MegaDepth | link | link |

Instance Segmentation/Detection on ScanNet

For training an instance segmentation/detection model, a COCO-format json annotation file needs to be generated. We provide code to convert the ScanNet annotations into COCO format (json files). The path SCANNET_SEMSEG refers to the location of the ScanNet semantic segmentation data.

cd downstream/insseg/dataset
# generate json file of annotations for training
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase train
# generate json file of annotations for validation
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase val
# generate json file of annotations for training on 20%,40%,60%,80% data.
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase train20
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase train40
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase train60
python scanet2coco.py --scannet_path SCANNET_SEMSEG --phase train80

The code above generates json files, such as scannet_train.coco.json and scannet_val.coco.json. Once the json files are available, the following code trains Mask R-CNN models.

cd downstream/insseg
export JSON_PATH=/path/to/json/file
export IMAGE_PATH=SCANNET_SEMSEG
# train the model with ImageNet pre-trained model 
LOG_DIR=/path/to/log/folder INIT=imagenet sbatch script/train_scannet.sh
# train the model with pre-trained model (remove 'sbatch' if training on a local machine)
LOG_DIR=/path/to/log/folder INIT=/path/to/model sbatch script/train_scannet.sh
# train the model on ScanNet 20% data
JSON_PATH=/path/to/scannet_train20.coco.json LOG_DIR=/path/to/log/folder INIT=imagenet script/train_scannet.sh
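
If Detectron2 cannot find images or annotations, a quick look at the generated json (standard COCO keys: images, annotations, categories) can help narrow down path issues; this is a diagnostic suggestion rather than part of the provided scripts:

# Count images and annotations in a generated COCO-format file (adjust the path)
python -c "import json; d = json.load(open('/path/to/scannet_train.coco.json')); print(len(d['images']), 'images,', len(d['annotations']), 'annotations')"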

Model Zoo

Detectron2 requires a specific checkpoint format, so we provide code for converting a Pri3D pre-trained checkpoint into the required format.

cd downstream/conversion
python pri3d_to_torchvision.py /path/to/pri3d/format/checkpoint /path/to/torchvision/format/checkpoint
python torchvision_to_detectron.py /path/to/torchvision/format/checkpoint /path/to/detectron/format/checkpoint

The pre-trained models provided below are already converted to Detectron2 checkpoints.

| Data | AP@0.5 (bbox) | AP@0.5 (segm) | Backbone | Pre-trained Model (on ScanNet) | Curves | Logs |
|---|---|---|---|---|---|---|
| 100% | 44.5 | 35.8 | ResNet50 | Pri3D (View + Geo) | link | link |
| 100% | 43.5 | 33.9 | ResNet50 | MoCoV2-supIN->SN | link | link |

Instance Segmentation/Detection on NYUv2

As with ScanNet, we provide code to convert the NYUv2 annotations into COCO format (json files). The path NYUv2_SEMSEG refers to the location of the NYUv2 semantic segmentation data.

cd downstream/insseg/dataset
# generate json file of annotations for training
python nyu2coco.py --nyu_path NYUv2_SEMSEG --phase train
# generate json file of annotations for validation
python nyu2coco.py --nyu_path NYUv2_SEMSEG --phase val

The code above generates json files, such as nyu_train.coco.json and nyu_val.coco.json. Once the json files are available, the following code trains Mask R-CNN models.

cd downstream/insseg
export JSON_PATH=/path/to/json/file
export IMAGE_PATH=NYUv2_SEMSEG
# train the model with ImageNet pre-trained model 
LOG_DIR=/path/to/log/folder INIT=imagenet sbatch script/train_nyu.sh
# train the model with pre-trained model (remove 'sbatch' if training on a local machine)
LOG_DIR=/path/to/log/folder INIT=/path/to/model sbatch script/train_nyu.sh

Model Zoo

The pre-trained models provided below are already converted to Detectron2 checkpoints (the conversion steps in the ScanNet section above show how to convert a Pri3D checkpoint to Detectron2 format).

| Data | AP@0.5 (bbox) | AP@0.5 (segm) | Backbone | Pre-trained Model (on ScanNet) | Curves | Logs |
|---|---|---|---|---|---|---|
| 100% | 34.0 | 29.5 | ResNet50 | Pri3D (View + Geo) | link | link |
| 100% | 31.1 | 27.2 | ResNet50 | MoCoV2-supIN->SN | link | link |

Instance Segmentation/Detection on COCO

Download 2017 Train Images, 2017 Val Images, and 2017 Train/Val Annotations. Unzip and organize them with the following structure.

$DETECTRON2_DATASETS/
    coco/
        annotations/
            instances_{train,val}2017.json
        {train,val}2017/

Then use the following code to train instance segmentation/detection models.

cd downstream/insseg
export DETECTRON2_DATASETS=/path/to/datasets
# train the model with ImageNet pre-trained model 
LOG_DIR=/path/to/log/folder INIT=imagenet sbatch script/train_coco.sh
# train the model with pre-trained model (remove 'sbatch' if training on a local machine)
LOG_DIR=/path/to/log/folder INIT=/path/to/model sbatch script/train_coco.sh

Model Zoo

The pre-trained models provided below are already converted to Detectron2 checkpoints (the conversion steps in the ScanNet section above show how to convert a Pri3D checkpoint to Detectron2 format).

| Data | AP@0.5 (bbox) | AP@0.5 (segm) | Backbone | Pre-trained Model (on ScanNet) | Curves | Logs |
|---|---|---|---|---|---|---|
| 100% | 60.6 | 57.5 | ResNet50 | Pri3D (View) | link | link |

Citing our paper

@article{hou2021pri3d,
  title={Pri3D: Can 3D Priors Help 2D Representation Learning?},
  author={Hou, Ji and Xie, Saining and Graham, Benjamin and Dai, Angela and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2104.11225},
  year={2021}
}

License

Pri3D is released under the MIT License. See the LICENSE file for more details.

Comments
  • Was "pretrain.depth = True" set in modelzoo?

    In the scannet.sh script, the setting "pretrain.depth" is set to "True": https://github.com/Sekunde/Pri3D/blob/a607a05c3a3385da1202ee38201de79a161023bb/pretrain/pri3d/scripts/scannet.sh#L26

    Was this also the setting that was used for all the models in the modelzoo? If I understand correctly, this enables the depth loss in addition to the VIEW and GEO loss, is that correct? https://github.com/Sekunde/Pri3D/blob/a607a05c3a3385da1202ee38201de79a161023bb/pretrain/pri3d/model/model.py#L94

    opened by tomsal 4
  • Pre-trained Model without downstream task fine-tuning

    Hi,

    Thanks for the codebase. It seems like the currently provided model zoo only has models fine-tuned on different downstream tasks. I want to take a look at the original features from pre-training, without any fine-tuning, so that I can maybe use them for other tasks. Do you think such a request makes sense, or do you think I could use one from the existing model zoo and it should work similarly?

    Thanks in advance.

    opened by kudo1026 4
  • About computing overlap ratio of two frames

    Thanks for your great work and released code! I notice that in your paper the pixel correspondences between two frames are determined as those whose 3D world locations lie within 2cm of each other. But in the preprocessing script pretrain/data_preprocess/scannet/compute_full_overlapping.py, the match_indices radius is set to voxel_size*1.5, which by default is 7.5cm I think. I am confused by the 2cm vs. 7.5cm. Could you provide some explanation? Thanks.

    opened by Dingry 2
  • Did you validate the usefulness of the pretrained 3D CNN in 3D tasks?

    Hi, thanks for sharing your great and comprehensive work. To my understanding, the 3D backbone is also trained during the pre-training process. Did you do any experiments to validate the usefulness of the pre-trained 3D network for downstream 3D tasks?

    opened by sunnyHelen 2
  • Model zoo request: VIEW only and GEO only on ScanNet

    I am really interested in how the features change when using the different losses (VIEW only, GEO only, or also combined). From what I've seen, all provided models were trained using the same loss or otherwise on a different dataset. (Or did I overlook something?) Is it possible to make more models available, e.g. VIEW only and GEO only on ScanNet?

    I am aware this is pushing your kindness, because it's already great that you provided such an extensive codebase. Still, I at least wanted to ask. :)

    opened by tomsal 2
  • Cannot find scannet.py when preparing scannet data.

    Hi!

    first of all, thanks for sharing the code! Great job!

    When preparing the scannet data for pretraining, I cannot find the file scannet.py. Do you have any idea what went wrong?

    This is where the issue occurs: https://github.com/Sekunde/Pri3D/blob/63c728b0a2236529cf24b8095d0d719f4f92451f/README.md#L74

    cd pretrain/data_preprocess/scannet
    python scannet.py --input SCANNET_DATA --output SCANNET_OUT_PATH
    
    opened by tomsal 2