Object-Centric Learning with Slot Attention

Overview

Slot Attention

This is a PyTorch re-implementation of "Object-Centric Learning with Slot Attention" (https://arxiv.org/abs/2006.15055).

Outputs of our slot attention model. This image demonstrates the model's ability to divide objects (or parts of objects) into slots.
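For readers skimming the repo, the core iterative update from the paper can be sketched roughly as below. This is a simplified, illustrative version (the layer names and the omitted per-iteration MLP are assumptions), not this repository's actual module:

```python
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Simplified sketch of the paper's slot attention update."""

    def __init__(self, num_slots, dim, iters=3, eps=1e-8):
        super().__init__()
        self.num_slots, self.iters, self.eps = num_slots, iters, eps
        self.scale = dim ** -0.5
        # Learned Gaussian used to sample the initial slots.
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_log_sigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_input = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, inputs):
        # inputs: (batch, num_inputs, dim) flattened image features.
        b, n, d = inputs.shape
        inputs = self.norm_input(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        # Sample initial slots from the learned Gaussian.
        slots = self.slots_mu + self.slots_log_sigma.exp() * torch.randn(
            b, self.num_slots, d
        )
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot axis (not the input axis), so slots
            # compete with each other for input features.
            attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1) + self.eps
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean
            updates = attn @ v
            slots = self.gru(
                updates.reshape(-1, d), slots.reshape(-1, d)
            ).reshape(b, -1, d)
        return slots
```

The softmax over the slot axis is the key inductive bias: it makes slots compete to explain each input feature.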

Requirements

  • Poetry
  • Python >= 3.8
  • PyTorch >= 1.7.1
  • PyTorch Lightning >= 1.1.4
  • CUDA-enabled computing device

Note: the model was run using an NVIDIA Tesla V100 16GB GPU.

Getting Started

Run run.sh to get started. This script installs the dependencies, downloads the CLEVR dataset, and runs the model.

Usage

python slot_attention/train.py

Edit SlotAttentionParams in slot_attention/train.py to change the hyperparameters. See slot_attention/params.py for the default hyperparameters.
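As an illustrative sketch, overriding the defaults might look like the following; the field names here are assumptions for illustration, so check slot_attention/params.py for the actual ones:

```python
from dataclasses import dataclass


# Hypothetical stand-in for the repo's SlotAttentionParams; the fields and
# default values below are assumptions, not the repo's actual defaults.
@dataclass
class SlotAttentionParams:
    lr: float = 4e-4
    num_slots: int = 7
    num_iterations: int = 3
    batch_size: int = 64
    is_logging_enabled: bool = False


# Override only the fields you want to change; the rest keep their defaults.
params = SlotAttentionParams(num_slots=11, batch_size=32)
```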

Logging

To log outputs to wandb, run wandb login YOUR_API_KEY and set is_logging_enabled=True in SlotAttentionParams.

Acknowledgements

Special thanks to the original authors of the paper: Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf.

Comments
  • Is it a bug that `slot_mu` and `slot_log_sigma` is not updated in training?

    Hi. Thank you for open-sourcing this wonderful implementation! I have a small question about the code and think it might be a bug.

    In these lines, you define slot_mu and slot_log_sigma using register_buffer. If I understand correctly, tensors created via register_buffer won't be updated during training (see here for reference). I also checked my trained checkpoints; these two values are indeed the same throughout the training process.

    Also, other slot-attention implementations define them as trainable parameters (see the PyTorch one and the official one). So I wonder if this is a bug or intentional behavior?
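A minimal, self-contained demonstration of the difference the question is about (independent of this repo's code):

```python
import torch
import torch.nn as nn


class BufferVsParam(nn.Module):
    def __init__(self, dim=4):
        super().__init__()
        # register_buffer: saved in state_dict but excluded from
        # parameters(), so the optimizer never updates it.
        self.register_buffer("slot_mu_buf", torch.zeros(1, 1, dim))
        # nn.Parameter: included in parameters() and updated by the optimizer.
        self.slot_mu_param = nn.Parameter(torch.zeros(1, 1, dim))


m = BufferVsParam()
param_names = [name for name, _ in m.named_parameters()]
# Only the nn.Parameter appears in named_parameters(); the buffer does not,
# though both are stored in the state_dict.
```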

    Update: I didn't observe much performance difference using trainable or fixed mu+sigma. That's very interesting.

    opened by Wuziyi616 2
  • How would you save the predicted slots?

    Great code! I like how it trains on CLEVR out of the box. I would like to save the predicted images like the ones below. Could you help me find where I could add the Image.save call to get this output? Thanks!
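Not the repo's official answer, but one possible sketch of dumping per-slot reconstructions with PIL's Image.save; the `recons` variable, its shape, and its value range are assumptions for illustration:

```python
import os
import tempfile

import numpy as np
from PIL import Image

# Hypothetical per-slot reconstructions from the model's forward pass,
# shape (num_slots, H, W, 3) with values in [0, 1]; in practice you would
# take this from the model output where the validation images are logged.
recons = np.random.rand(7, 64, 64, 3)

# Tile the slots side by side into one image and save it with PIL.
tiled = np.concatenate(list(recons), axis=1)  # (H, num_slots * W, 3)
out_path = os.path.join(tempfile.gettempdir(), "slots.png")
Image.fromarray((tiled * 255).astype(np.uint8)).save(out_path)
```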

    opened by IssamLaradji 1
  • Question about the object-centric representation of each slot

    Maybe this is not the right place to ask, but I would like to start a discussion about something that puzzled me while reading the paper.

    My question is: the paper does not explicitly enforce each slot to represent exactly one object, so why doesn't the network learn to use a single slot for more than one object in the image reconstruction task? Is there an inductive bias imposed by the architecture itself?

    opened by lehduong 0
  • ModuleNotFoundError

    Hi, I am trying to run the slot_attention/train.py code in your repository, but I keep hitting the following error:

    Traceback (most recent call last):
      File "slot_attention/train.py", line 8, in <module>
        from slot_attention.data import CLEVRDataModule
    ModuleNotFoundError: No module named 'slot_attention'
    

    I cloned the repository to a PC running Ubuntu and, as written in the README, ran the code from the top-level directory of the cloned repository. Could you help me out with this issue?
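One workaround sketch (an assumption about the cause, not an official fix): the error means Python cannot find the `slot_attention` package on its import path, which happens when the script is run directly without the repo root on `sys.path`.

```python
import os
import sys

# Add the repo root to sys.path before the failing
# `from slot_attention.data import ...` line runs. This assumes the shell's
# working directory is the cloned repo's top level.
repo_root = os.getcwd()
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)
```

Running the script as a module from the repo root, `python -m slot_attention.train`, or exporting `PYTHONPATH=.` first, are other common ways around this, assuming `slot_attention` contains an `__init__.py`.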

    opened by EYUJin 4
  • Testing code

    Hi, thank you very much for the really nice implementation! I have trained the model for 100 epochs and the evaluation results look good. I was wondering whether testing code is also available. I implemented my own, but I get results such as the image below.

    Thank you very much for your reply.

    opened by ozzyou 14
Owner
Untitled AI
We're investigating the fundamentals of learning across humans and machines in order to create more general machine intelligence.