Code for "ATISS: Autoregressive Transformers for Indoor Scene Synthesis", NeurIPS 2021


ATISS: Autoregressive Transformers for Indoor Scene Synthesis

[Teaser figures: Example 1, Example 2, Example 3]

This repository contains the code that accompanies our paper ATISS: Autoregressive Transformers for Indoor Scene Synthesis.

Below you can find detailed usage instructions for training your own models, using our pretrained models, and performing the interactive tasks described in the paper.

If you found this work influential or helpful for your research, please consider citing

@Inproceedings{Paschalidou2021NEURIPS,
  author = {Despoina Paschalidou and Amlan Kar and Maria Shugrina and Karsten Kreis and Andreas Geiger and Sanja Fidler},
  title = {ATISS: Autoregressive Transformers for Indoor Scene Synthesis},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation & Dependencies

Our codebase has a number of dependencies, all of which are listed in the environment.yaml file used below.

For the visualizations, we use simple-3dviz, our easy-to-use library for visualizing 3D data using Python and ModernGL, together with matplotlib for the colormaps. Note that simple-3dviz provides a lightweight scene viewer built on wxpython, so if you wish to use our scripts for visualizing the generated scenes, you will also need to install wxpython. Note that for all the renderings in the paper we used NVIDIA's OMNIVERSE.

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called atiss using

conda env create -f environment.yaml
conda activate atiss

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .

Dataset

To evaluate a pretrained model or train a new model from scratch, you need to obtain the 3D-FRONT and the 3D-FUTURE datasets. To download both datasets, please refer to the instructions provided on the datasets' webpage. As soon as you have downloaded the 3D-FRONT and the 3D-FUTURE datasets, you are ready to start the preprocessing. In addition to the preprocessing script (preprocess_data.py), we also provide a script for visualizing 3D-FRONT scenes (render_threedfront_scene.py), which you can execute by running

python render_threedfront_scene.py SCENE_ID path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images

You can also visualize the walls, the windows, and textured objects by setting the corresponding arguments. Apart from visualizing the scene with scene id SCENE_ID, the render_threedfront_scene.py script also generates a subfolder inside the output folder (specified via the path_to_output_dir argument) that contains the .obj files and the textures of all objects in this scene.
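If you later want to re-inspect those exported meshes without re-running the renderer, a minimal sketch along the following lines should work; it assumes that TexturedMesh.from_file and the wxpython-based show viewer are available in your simple-3dviz installation:

    import os
    from simple_3dviz.renderables.textured_mesh import TexturedMesh
    from simple_3dviz.window import show

    # Hypothetical path: the subfolder that render_threedfront_scene.py wrote for SCENE_ID
    scene_dir = "path_to_output_dir/SCENE_ID"

    renderables = [
        TexturedMesh.from_file(os.path.join(scene_dir, f))
        for f in os.listdir(scene_dir) if f.endswith(".obj")
    ]
    show(renderables)  # requires wxpython, as noted in the dependencies section above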

Data Preprocessing

Once you have downloaded the 3D-FRONT and 3D-FUTURE datasets, you need to run the preprocess_data.py script to prepare the data for training your own models or generating new scenes with previously trained models. To start the preprocessing, simply run

python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering threed_front_bedroom

Note that you can choose the filtering for the different room types (e.g. bedrooms, living rooms, dining rooms, libraries) via the dataset_filtering argument. The path_to_floor_plan_texture_images is the path to a folder containing different floor plan textures that are necessary to render the rooms using a top-down orthographic projection. An example of such a folder can be found in the demo/floor_plan_texture_images folder.

This script starts by parsing all scenes from the 3D-FRONT dataset, and for each scene it generates a subfolder inside path_to_output_dir that contains the information for all objects in the scene (boxes.npz), the room mask (room_mask.png) and the scene rendered using a top-down orthographic projection (rendered_scene_256.png). Note that for living rooms and dining rooms you also need to change the room size used during rendering from the default 3.1m to 6.2m via the --room_side argument.
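For a quick sanity check of the preprocessed output, a sketch along the following lines can be handy; which arrays boxes.npz actually contains depends on the encoding, so print the archive's contents rather than relying on specific key names:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical scene subfolder created by preprocess_data.py
    scene_dir = "path_to_output_dir/MasterBedroom-XXXXX"

    # boxes.npz stores the per-object annotations of the scene
    boxes = np.load("{}/boxes.npz".format(scene_dir))
    for name in boxes.files:
        print(name, boxes[name].shape)

    # room_mask.png is the top-down floor plan mask used to condition the model
    room_mask = plt.imread("{}/room_mask.png".format(scene_dir))
    print("room mask:", room_mask.shape)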

Moreover, you will notice that the preprocess_data.py script takes a significant amount of time to parse all 3D-FRONT scenes. To reduce the waiting time, we cache the parsed scenes and save them to the /tmp/threed_front.pkl file. Therefore, once you have parsed the 3D-FRONT scenes, you can provide this path via the PATH_TO_SCENES environment variable the next time you run this script, as follows:

PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering threed_front_bedroom

Finally, to further reduce the preprocessing time, note that it is possible to run this script in multiple threads, as it automatically checks whether a scene has already been preprocessed and, if so, moves on to the next one.
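The caching and resume logic boils down to a pattern like the following sketch; load_or_parse_scenes and already_preprocessed are illustrative names, not the script's actual functions:

    import os
    import pickle

    def load_or_parse_scenes(parse_fn, cache_path="/tmp/threed_front.pkl"):
        # parse_fn stands in for the (slow) 3D-FRONT parsing done by preprocess_data.py
        cached = os.environ.get("PATH_TO_SCENES", cache_path)
        if os.path.exists(cached):
            with open(cached, "rb") as f:
                return pickle.load(f)
        scenes = parse_fn()
        with open(cache_path, "wb") as f:
            pickle.dump(scenes, f)
        return scenes

    def already_preprocessed(output_dir, scene_id):
        # Finished scenes are skipped, which is what makes parallel runs safe
        return os.path.isdir(os.path.join(output_dir, scene_id))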

Usage

As soon as you have installed all dependencies and generated the preprocessed data, you can train new models from scratch, evaluate our pre-trained models, and visualize generated scenes using one of our pre-trained models. All scripts expect a path to a config file. In the config folder you can find the configuration files for the different room types. Make sure to change the dataset_directory argument to the path where you saved the preprocessed data from before.
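The configs are plain YAML, so you can also derive your own copy programmatically; the sketch below assumes the nested data: layout shown in the bedrooms_config.yaml excerpt quoted in the comments further down this page:

    import yaml

    with open("../config/bedrooms_config.yaml") as f:
        config = yaml.safe_load(f)

    # Point the config at your own preprocessed data
    config["data"]["dataset_directory"] = "/path/to/preprocessed/bedrooms"

    with open("my_bedrooms_config.yaml", "w") as f:
        yaml.safe_dump(config, f)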

Scene Generation

To generate rooms using a previously trained model, we provide the generate_scenes.py script, which you can execute by running

python generate_scenes.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. By default, this script randomly selects a floor plan from the test set and, conditioned on it, generates different arrangements of objects. Note that if you want to generate a scene conditioned on a specific floor plan, you can select it by providing its scene id via the --scene_id argument. In case you want to run this script headlessly, you should set the --without_screen argument. Finally, path_to_3d_future_pickled_data specifies the path to the parsed ThreedFutureDataset after it has been pickled.
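Conceptually, the script samples one object at a time (category, size, orientation, position), conditioned on the floor plan mask, until the model emits the end symbol, and then retrieves the closest 3D-FUTURE mesh for every sampled box. A rough sketch of that loop is given below; generate_boxes and get_closest_furniture_to_box are names that also appear in the error reports further down this page, but the argument lists and return types here are simplifications rather than the exact API:

    import torch

    @torch.no_grad()
    def synthesize(network, objects_dataset, room_mask, device="cpu"):
        # Autoregressively sample object boxes conditioned on the floor plan mask
        bbox_params = network.generate_boxes(room_mask=room_mask.to(device))

        # Retrieve the closest matching 3D-FUTURE model for every sampled box
        # (the real get_closest_furniture_to_box takes more arguments than shown here)
        return [objects_dataset.get_closest_furniture_to_box(box) for box in bbox_params]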

Scene Completion & Object Placement

To perform scene completion, we provide the scene_completion.py script that can be executed by running

python scene_completion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. For this script, make sure that the encoding_type in the config file also contains the word eval. By default, this script randomly selects a room from the test set and, conditioned on this partial scene, populates the empty space with objects. However, you can choose a specific room via the --scene_id argument. This script can also be used to perform object placement, namely, starting from a partial scene, adding an object of a specific object category.

In the output directory, the scene_completion.py script generates two folders for each completion, one that contains the mesh files of the initial partial scene and another one that contains the mesh files of the completed scene.
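To compare the two, the mesh files can be loaded with trimesh, which the codebase already uses internally; the folder names below are placeholders, so check what the script actually created in your output directory:

    import os
    import trimesh

    def load_meshes(folder):
        # Load every exported .obj file from one of the two folders written per completion
        return [trimesh.load(os.path.join(folder, f))
                for f in os.listdir(folder) if f.endswith(".obj")]

    # Hypothetical folder names
    partial = load_meshes("path_to_output_dir/partial_scene")
    completed = load_meshes("path_to_output_dir/completed_scene")
    print(len(partial), "meshes before completion,", len(completed), "after")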

Object Suggestions

We also provide a script that performs object suggestions based on a user-specified region of acceptable positions. Similar to the previous scripts, you can execute it by running

python object_suggestion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding_type in the config file contains the word eval. By default, this script randomly selects a room from the test set, and the user can either choose to remove some objects or keep the scene unchanged. Subsequently, the user specifies the acceptable positions for placing an object using six comma-separated numbers that define the bounding box of the valid positions. Similar to the previous scripts, it is possible to select a particular room via the --scene_id argument.

In the output directory, the object_suggestion.py script generates two folders in each run, one that contains the mesh files of the initial scene and another one that contains the mesh files of the completed scene with the suggested object.
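The six numbers are simply the extent of the region in which the suggested object's position is allowed to fall. A hedged sketch of how such an input could be parsed and checked is shown below; the axis ordering expected by the script is an assumption here, so verify it against object_suggestion.py:

    import numpy as np

    def parse_valid_region(text):
        # e.g. "-1.5,1.5,0.0,0.5,-2.0,2.0" -> min/max corners of the allowed box
        # (the axis ordering is an assumption; check object_suggestion.py for the real convention)
        vals = np.array([float(v) for v in text.split(",")], dtype=np.float32)
        assert vals.shape == (6,)
        return vals[0::2], vals[1::2]

    def inside(position, lo, hi):
        # True if a candidate object position lies inside the user-specified region
        return bool(np.all(position >= lo) and np.all(position <= hi))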

Failure Cases Detection and Correction

We also provide a script that performs failure case correction on a scene that contains a problematic object. You can simply execute it by running

python failure_correction.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding_type in the config file contains the word eval. By default, this script randomly selects a room from the test set, and the user selects an object inside the room that will then be placed at an unnatural position. Given the scene with the unnaturally positioned object, our model identifies the problematic object and repositions it at a more plausible location.

In the output directory, the failure_correction.py script generates two folders in each run, one that contains the mesh files of the initial scene with the problematic object and another that contains the mesh files of the new scene.

Training

Finally, to train a new network from scratch, we provide the train_network.py script. To execute it, you need to specify the path to the configuration file you wish to use and the path to the output directory where the trained models and the training statistics will be saved. Namely, you simply need to run

python train_network.py path_to_config_yaml path_to_output_dir

Note that it is also possible to start from a previously trained model by specifying the --weight_file argument, which should point to the saved weights of that model.

Note that, if you want to use the RAdam optimizer during training, you will also have to download and install the corresponding code from its repository.

We also provide the option to log the experiment's evolution using Weights & Biases. To do so, simply set the --with_wandb_logger argument and make sure that wandb is installed in your conda environment.
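Under the hood this boils down to the standard wandb pattern sketched below; the project and run names are placeholders, since train_network.py sets all of this up for you when --with_wandb_logger is passed:

    import wandb

    # Placeholder project/run names; train_network.py configures the real ones
    wandb.init(project="atiss-example", name="bedrooms_example", config={"epochs": 2})
    for epoch in range(2):
        dummy_loss = 1.0 / (epoch + 1)  # stand-in for the real training loss
        wandb.log({"loss": dummy_loss, "epoch": epoch})
    wandb.finish()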

Relevant Research

Please also check out the following papers that explore similar ideas:

  • Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models pdf
  • Sceneformer: Indoor Scene Generation with Transformers pdf
Comments
  • Generating new scene from model

    I trained an initial model for 50 epochs and wanted to test it out using the generate_scenes.py script. I am getting the following error when running the code, please help!

    Command: python3 generate_scenes.py ../config/bedrooms_config.yaml tester /tmp/threed_front.pkl demo/floor_plan_texture_images/ --weight_file models/OSL638YTV/model_00050

    Output:

    Running code on cpu
    Applying no_filtering filtering
    Loaded 17129 3D-FUTURE models
    <class 'list'>
    Applying no_filtering filtering
    Loaded 162 scenes with 21 object types:
    Loading weight file from models/OSL638YTV/model_00050
    0 / 10: Using the 80 floor plan of scene SecondBedroom-36408
    Traceback (most recent call last):
      File "generate_scenes.py", line 276, in <module>
        main(sys.argv[1:])
      File "generate_scenes.py", line 211, in main
        renderables, trimesh_meshes = get_textured_objects(
      File "/home/mil/Desktop/esoft/ATISS/scene_synthesis/utils.py", line 24, in get_textured_objects
        furniture = objects_dataset.get_closest_furniture_to_box(
    AttributeError: 'list' object has no attribute 'get_closest_furniture_to_box'

    opened by AIMads 12
  • Easy overfitting to the dataset?

    Hi,

    I've been trying to replicate the results on the new dataset, since the old version is no longer accessible. However, when I train the model on the bedroom scenes, the validation loss seems to drop only for the first several epochs and starts to increase thereafter.

    Could you provide some info, such as the range the validation loss reached during your experiments? My run fairly consistently reaches the training loss range reported in #9, but still ends up overfitting the training examples.

    Thanks in advance.

    Best, Jingyu

    opened by Jingyu6 8
  • Issue with data preprocessing: MTL files do not contain Ns lines

    Hi,

    The material files provided with the meshes in the 3D-FUTURE dataset do not contain information about the specular exponent "Ns", leading to a failure of simple_3dviz when reading the textured meshes in preprocess_data.py:

    Traceback (most recent call last):
      File "preprocess_data.py", line 271, in <module>
        main(sys.argv[1:])
      File "preprocess_data.py", line 258, in main
        renderables = get_textured_objects_in_scene(
      File "E:\users\PJB1\Code\ATISS\scripts\utils.py", line 144, in get_textured_objects_in_scene
        raw_mesh = TexturedMesh.from_file(model_path)
      File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 300, in from_file
        mtl = read_material_file(mesh.material_file)
      File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\__init__.py", line 27, in read_material_file
        return {
      File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\material.py", line 25, in __init__
        self.read(filename)
      File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\material.py", line 113, in read
        self._Ns = float([
    IndexError: list index out of range
    

    Do you know an easy way to overcome this issue without having to locally make changes in the simple_3dviz library? Thanks! Paul

    opened by pauljcb 5
  • Hardware/training time/epochs etc

    Hello,

    I was wondering what the expected training time is - the paper only mentions that you choose the best model from a very large number of iterations. What hardware did you use to train the model, and how long does that usually take, just to get a ballpark figure?

    In the same vein, did you (or your coauthors) use a different lr schedule, optimizer, etc. that seems to work better?

    PS: Do you have any intention of releasing the Kaolin scripts used to render the figures in the paper? They look very, very nice!

    opened by wamiq-reyaz 4
  • Invalid Rooms

    Hi,

    The scripts pickle_threed_future_dataset and preprocess_data both require the parameter path_to_invalid_scene_ids, whose default value is ../config/invalid_threed_front_rooms.txt.

    However, I found that some rooms not listed in invalid_threed_front_rooms.txt are also problematic. One example is MasterBedroom-58086, which I identified by converting the scene json to obj models and opening them in meshlab.

    As invalid rooms would ruin the training process, how can one easily identify these rooms and record their room ids in invalid_threed_front_rooms.txt?

    Another question is how to identify the problematic objects and record their jids in black_list.txt.

    Thanks!

    opened by tdsuper 3
  • Pretrained models

    Hey!

    I am currently waiting to get access to the dataset and have set up the code, but I was wondering where the pretrained models are located. I can't seem to find them anywhere?

    opened by AIMads 3
  • Positional Embedding in autoregressive_transformer.py

    The BaseAutoregressiveTransformer class has three positional embeddings pe_pos for (x, y, z), but in the AutoregressiveTransformer forward function only pe_pos_x and pe_size_x are used. Is that right?

    pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
    pos_f_y = self.pe_pos_x(translations[:, :, 1:2])
    pos_f_z = self.pe_pos_x(translations[:, :, 2:3])
    pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)
    
    size_f_x = self.pe_size_x(sizes[:, :, 0:1])
    size_f_y = self.pe_size_x(sizes[:, :, 1:2])
    size_f_z = self.pe_size_x(sizes[:, :, 2:3])
    
    opened by coco-archisketch 2
  • The old version of the dataset

    The number of rooms filtered out in the new version of the dataset is inconsistent with that in the paper, which is much less. Could you release the old version of the dataset?

    opened by JackW987 2
  • Type error triggered by generate_scenes.py

    Your idea of using transformers in this project is great, and I'm trying to reproduce it, but I ran into some obstacles. After training, I ran the following command:

    python generate_scenes.py ../config/bedrooms_config.yaml /data/3D-generate /data/3D-FUTURE-pickle/threed_future_model_bedroom.pkl ../demo/floor_plan_texture_images --weight_file /data/weights/8A9ENPCMK/model_00100
    

    and got these errors:

    Running code on cuda:0
    Loaded 2354 3D-FUTURE models
    Applying no_filtering filtering
    Loaded 162 scenes with 21 object types:
    Loading weight file from /data/weights/8A9ENPCMK/model_00100
    0 / 10: Using the 147 floor plan of scene MasterBedroom-109561
    Traceback (most recent call last):
      File "generate_scenes.py", line 266, in <module>
        main(sys.argv[1:])
      File "generate_scenes.py", line 192, in main
        bbox_params = network.generate_boxes(room_mask=room_mask)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
        return func(*args, **kwargs)
      File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 227, in generate_boxes
        box = self.autoregressive_decode(boxes, room_mask=room_mask)
      File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 202, in autoregressive_decode
        F = self._encode(boxes, room_mask)
      File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 165, in _encode
        start_symbol_f = self.start_symbol_features(B, room_mask)
      File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 93, in start_symbol_features
        room_layout_f = self.fc_room_f(self.feature_extractor(room_mask))
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/codes/ATISS/scene_synthesis/networks/feature_extractors.py", line 24, in forward
        return self._feature_extractor(X)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torchvision/models/resnet.py", line 220, in forward
        return self._forward_impl(x)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torchvision/models/resnet.py", line 203, in _forward_impl
        x = self.conv1(x)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in forward
        return self._conv_forward(input, self.weight)
      File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 415, in _conv_forward
        return F.conv2d(input, weight, self.bias, self.stride,
    RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
    

    Environment: CUDA 10

    opened by fuyb1992 2
  • The special q hat token with dimension 512

    Thanks for your wonderful work. In the paper, the special q hat token has dimension 64 and is concatenated to predict the other attributes, but in the code the special q hat token has dimension 512, which seems wrong.

    opened by JackW987 1
  • bedrooms_config.yaml error

    data:
      dataset_type: "cached_threedfront"
      encoding_type: "cached_autoregressive_wocm"
      dataset_directory: "/media/paschalidoud/goproorgohome/3D_FRONT_processed/bedrooms"
      annotation_file: "../config/bedroom_threed_front_splits.csv"
      augmentations: ["rotations"]
      filter_fn: "threed_front_bedroom"
      train_stats: "dataset_stats.txt"
      filter_fn: "no_filtering"
      room_layout_size: "64,64"

    opened by JackW987 1
  • Custom floor layouts?

    Is it possible to use custom floor layouts with the trained model? If yes, how can I do this? I've tried manipulating the data in the boxes.npz files, but without any luck.

    opened by blekir 0
  • Importance of room_side argument

    In readme.md you mention the room_side argument: "Note that for the case of the living rooms and dining rooms you also need to change the size of the room during rendering to 6.2m from 3.1m, which is the default value, via the --room_side argument." I inspected your code but am still not sure about the importance of that parameter and the actual scales in your model. Is it true that a room from the default bedroom dataset, with a generated 64x64 mask of which only a 32x32 square is non-zero, has an actual size of 1.55x1.55 meters? If I want to test your model on a custom room mask, what scale/resolution should my room mask have for dining room and bedroom generation, respectively?

    opened by SergCholovsky 0
  • Figure 11 Contradicts with Eq 8 - 11

    Hi all,

    I found it confusing when looking at Figure 11 on page 15. It uses a 2-layer MLP whose output dimension is 64, but if it is predicting the parameters of the mixture of logistics, the output dimension should be what is described in Eqs. 8-11.

    Can you help me understand the difference? And is the mixture of logistics actually being used (given that the code was not published yet)?

    Thank you in advance!

    opened by shanyang-me 0