Code for EmBERT, a transformer model for embodied, language-guided visual task completion.

Overview

EmBERT: A Transformer Model for Embodied, Language-guided Visual Task Completion

We present Embodied BERT (EmBERT), a transformer-based model which can attend to high-dimensional, multi-modal inputs across long temporal horizons for language-conditioned task completion. Additionally, we bridge the gap between successful object-centric navigation models used for non-interactive agents and the language-guided visual task completion benchmark, ALFRED, by introducing object navigation targets for EmBERT training. We achieve competitive performance on the ALFRED benchmark, and EmBERT marks the first transformer-based model to successfully handle the long-horizon, dense, multi-modal histories of ALFRED, and the first ALFRED model to utilize object-centric navigation targets.

In this repository, we provide the full codebase used for training and evaluating EmBERT on the ALFRED dataset. It is mostly based on AllenNLP and PyTorch-Lightning and is therefore easy to extend.

Setup

We used Anaconda for our experiments. Please create an Anaconda environment and then install the project dependencies with the following command:

pip install -r requirements.txt

Next, download the ALFRED data using the script scripts/download_alfred_data.sh as follows:

sh scripts/download_alfred_data.sh json_feat

Before doing so, make sure that you have installed p7zip, because it is used to extract the trajectory files.
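
After installation, a quick sanity check of the environment can look like the following (a minimal sketch; it only verifies that the main dependencies mentioned above import correctly):

import allennlp
import pytorch_lightning as pl
import torch

# Print the resolved versions of the core dependencies used by this codebase.
print("AllenNLP:", allennlp.__version__)
print("PyTorch Lightning:", pl.__version__)
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())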

MaskRCNN fine-tuning

We provide the code to fine-tune a MaskRCNN model on the ALFRED dataset. To create the vision dataset, use the script scripts/generate_vision_dataset.sh; this creates the dataset splits required by the training process. After this, you can run the model fine-tuning using:

PYTHONPATH=. python vision/finetune.py --batch_size 8 --gradient_clip_val 5 --lr 3e-4 --gpus 1 --accumulate_grad_batches 2 --num_workers 4 --save_dir storage/models/vision/maskrcnn_bs_16_lr_3e-4_epochs_46_7k_batches --max_epochs 46 --limit_train_batches 7000

We provide this code for reference; however, in our experiments we used the MaskRCNN model from MOCA, which applies more sophisticated data augmentation techniques to improve performance on the ALFRED dataset.

ALFRED Visual Feature Extraction

MaskRCNN

The visual feature extraction script generates the MaskRCNN features as well as orientation information for every bounding box. For the MaskRCNN model, we use the pretrained model from MOCA; you can download it from their GitHub page. First, we create the directory structure and then download the model weights:

mkdir -p storage/models/vision/moca_maskrcnn;
wget https://alfred-colorswap.s3.us-east-2.amazonaws.com/weight_maskrcnn.pt -O storage/models/vision/moca_maskrcnn/weight_maskrcnn.pt; 

We extract visual features for training trajectories using the following command:

sh scripts/generate_moca_maskrcnn.sh

You can refer to the actual extraction script scripts/generate_maskrcnn_horizon0.py for additional parameters. We executed this command on a p3.2xlarge instance with an NVIDIA V100 GPU. This command populates the directory storage/data/alfred/json_feat_2.1.0/ with the visual features for each trajectory step. In particular, the parameter --features_folder specifies the subdirectory (for each trajectory) that will contain the compressed NumPy files constituting the features. Each NumPy file has the following structure:

dict(
    box_features=np.array,   # region (ROI) features for each detected object
    roi_angles=np.array,     # orientation information for each bounding box
    boxes=np.array,          # bounding-box coordinates
    masks=np.array,          # instance segmentation masks
    class_probs=np.array,    # per-class probabilities for each detection
    class_labels=np.array,   # predicted class labels
    num_objects=int,         # number of detected objects in the frame
    pano_id=int              # index of the panoramic view for this step
)

Data-augmentation procedure

In our paper, we describe a procedure to augment the ALFRED trajectories with object and corresponding receptacle information. In particular, we replay the trajectories and make sure to track each object and its receptacle during a subgoal. The data augmentation script creates a new trajectory file called ref_traj_data.json that mimics the data structure of the original ALFRED dataset but adds a few fields for each action.

To start generating the refined data, use the following script:

PYTHONPATH=. python scripts/generate_landmarks.py 
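
Once generated, a quick way to inspect one augmented trajectory is sketched below. The trajectory directory is a placeholder, and since the names of the added per-action fields are not listed here, the sketch simply prints the keys of the first low-level action:

import json

# Placeholder path: any trajectory folder that now contains ref_traj_data.json.
traj_dir = "storage/data/alfred/json_feat_2.1.0/train/<task>/<trial>"

with open(f"{traj_dir}/ref_traj_data.json") as f:
    ref_traj = json.load(f)

# Same layout as the original ALFRED traj_data.json, plus extra per-action fields.
first_action = ref_traj["plan"]["low_actions"][0]
print(sorted(first_action.keys()))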

EmBERT Training

Vocabulary creation

We use AllenNLP for training our models. Before starting the training, generate the vocabulary for the model using the following command:

allennlp build-vocab training_configs/embert/embert_oscar.jsonnet storage/models/embert/vocab.tar.gz --include-package grolp

Training

First, we need to download the OSCAR checkpoint before starting the training process. We used a version of OSCAR that doesn't use object labels; it can be freely downloaded by following the instructions on GitHub. Make sure to download this file to the folder storage/models/pretrained using the following commands:

mkdir -p storage/models/pretrained/;
wget https://biglmdiag.blob.core.windows.net/oscar/pretrained_models/base-no-labels.zip -O storage/models/pretrained/oscar.zip;
unzip storage/models/pretrained/oscar.zip -d storage/models/pretrained/;
mv storage/models/pretrained/base-no-labels/ep_67_588997/pytorch_model.bin storage/models/pretrained/oscar-base-no-labels.bin;
rm storage/models/pretrained/oscar.zip;
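
As an optional sanity check (a minimal sketch, not part of the original instructions, assuming the file stores a plain PyTorch state dict as is typical for such releases), you can confirm that the renamed checkpoint loads correctly:

import torch

# Load the OSCAR weights on CPU just to verify the file is intact.
state_dict = torch.load("storage/models/pretrained/oscar-base-no-labels.bin", map_location="cpu")
print(len(state_dict), "tensors; first key:", next(iter(state_dict)))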

A new model can be trained using the following command:

allennlp train training_configs/embert/embert_widest.jsonnet -s storage/models/alfred/embert --include-package grolp

When training for the first time, make sure to add the following parameters to the previous command: --preprocess --num_workers 4. This ensures that the dataset is preprocessed and cached in order to speed up training. We ran training on AWS EC2 p3.8xlarge instances with 16 workers, using a single GPU per configuration.

The configuration file training_configs/embert/embert_widest.jsonnet contains all the parameters you may want to change to alter the way the model works, as well as the references to the actual feature files. If you're interested in changing the model itself, please refer to the model definition; the parameters in the constructor of the class mirror the ones reported in the configuration file. In general, this project has been developed using AllenNLP as a reference framework, and we refer the reader to the official AllenNLP documentation for more details about how to structure a project.
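
For example, a minimal sketch for peeking at the configuration programmatically, using AllenNLP's Params helper (which understands Jsonnet), could look like this; the exact top-level keys depend on the config itself:

from allennlp.common.params import Params

# Load the Jsonnet config and list its top-level components before editing it.
params = Params.from_file("training_configs/embert/embert_widest.jsonnet")
config = params.as_dict(quiet=True)
print("top-level keys:", sorted(config.keys()))
print("model type:", config.get("model", {}).get("type"))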

EmBERT evaluation

We modified the original ALFRED evaluation script to make sure that the results are completely reproducible. Refer to the original repository for more information.

To run the evaluation on the valid_seen and valid_unseen splits, you can use the provided script scripts/run_eval.sh. The EmBERT trainer has different ways of saving checkpoints. At the end of training, it automatically saves the best model in an archive named model.tar.gz in the destination folder (the one specified with -s). To evaluate it, run the following command:

sh scripts/run_eval.sh <your_model_path>/model.tar.gz 

It's also possible to evaluate a specific checkpoint. This can be done by running the previous command as follows:

sh scripts/run_eval.sh <your_model_path>/model-epoch=6.ckpt

In this way, the evaluation script will load the checkpoint saved at epoch 6 from the specified path. When specifying a checkpoint directly, make sure that the folder contains both the config.json file and the vocabulary directory, because they are required by the script to load all the correct model parameters.
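
If you prefer to load the trained archive programmatically rather than through the shell script, a minimal sketch (assuming a recent AllenNLP version; the placeholder path matches the commands above) looks like this:

from allennlp.common.util import import_module_and_submodules
from allennlp.models.archival import load_archive

# Same effect as --include-package grolp: registers the EmBERT components with AllenNLP.
import_module_and_submodules("grolp")

archive = load_archive("<your_model_path>/model.tar.gz", cuda_device=-1)
model = archive.model.eval()
print(type(model).__name__)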

Citation

If you're using this codebase, please cite our work:

@article{suglia:embert,
  title={Embodied {BERT}: A Transformer Model for Embodied, Language-guided Visual Task Completion},
  author={Alessandro Suglia and Qiaozi Gao and Jesse Thomason and Govind Thattai and Gaurav Sukhatme},
  journal={arXiv},
  year={2021},
  url={https://arxiv.org/abs/2108.04927}
}

Comments
  • `vocab.tar.gz` not found

    Hi, thanks a lot for sharing the code for EmBERT! I am trying to generate the vocabulary for the model with the following command from the README:

    allennlp build-vocab training_configs/embert/embert_oscar.jsonnet storage/models/embert/vocab.tar.gz --include-package grolp
    

    But I receive the following error.

    FileNotFoundError: file storage/models/embert/vocab.tar.gz not found
    

    vocab.tar.gz seems important to train the model. Kindly make this file available or advise on where to find it.

    opened by vidhiJain 1
  • Spelling error in Setup command in the README.md

    The command given in dataset download in the README.md is sh scripts/donwload_alfred_data.sh json_feat

    It should be : sh scripts/download_alfred_data.sh json_feat

    It's a spelling error in the README; the actual script is download_alfred_data.sh.

    opened by varun0308 0
  • allennlp.common.checks.ConfigurationError: key "dataset" is required at location "data_loader."

    Hello, I'm trying to run the training procedure allennlp build-vocab ... and allennlp train ..., but got an error:

    Traceback (most recent call last):
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/params.py", line 238, in pop
        value = self.params.pop(key)
    KeyError: 'dataset'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/home/ubuntu/miniconda3/envs/thor/bin/allennlp", line 8, in <module>
        sys.exit(run())
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/__main__.py", line 34, in run
        main(prog="allennlp")
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/commands/__init__.py", line 119, in main
        args.func(args)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/commands/build_vocab.py", line 75, in build_vocab_from_args
        make_vocab_from_params(params, temp_dir)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/training/util.py", line 468, in make_vocab_from_params
        data_loaders = data_loaders_from_params(params, serialization_dir=serialization_dir)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/training/util.py", line 118, in data_loaders_from_params
        data_loaders["train"] = DataLoader.from_params(
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 589, in from_params
        return retyped_subclass.from_params(
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 621, in from_params
        kwargs = create_kwargs(constructor_to_inspect, cls, params, **extras)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 199, in create_kwargs
        constructed_arg = pop_and_construct_arg(
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 303, in pop_and_construct_arg
        popped_params = params.pop(name, default) if default != _NO_DEFAULT else params.pop(name)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/params.py", line 243, in pop
        raise ConfigurationError(msg)
    allennlp.common.checks.ConfigurationError: key "dataset" is required at location "data_loader."
    

    This error occurs in both the build-vocab and train phases. I'm not familiar with allennlp. If I add "dataset": "alfred" into the "data_loader" field, a more confusing error occurs:

    Traceback (most recent call last):
      File "/home/ubuntu/miniconda3/envs/thor/bin/allennlp", line 8, in <module>
        sys.exit(run())
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/__main__.py", line 34, in run
        main(prog="allennlp")
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/commands/__init__.py", line 119, in main
        args.func(args)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/commands/build_vocab.py", line 75, in build_vocab_from_args
        make_vocab_from_params(params, temp_dir)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/training/util.py", line 491, in make_vocab_from_params
        vocab = Vocabulary.from_params(vocab_params, instances=instances)
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 589, in from_params
        return retyped_subclass.from_params(
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/common/from_params.py", line 623, in from_params
        return constructor_to_call(**kwargs)  # type: ignore
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/data/vocabulary.py", line 309, in from_instances
        for instance in Tqdm.tqdm(instances, desc="building vocab"):
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
        for obj in iterable:
      File "/home/ubuntu/miniconda3/envs/thor/lib/python3.8/site-packages/allennlp/training/util.py", line 485, in <genexpr>
        for instance in data_loader.iter_instances()
    TypeError: 'NoneType' object is not iterable
    

    Is there any solution for this error?

    opened by RavenKiller 0