
No RL No Simulation (NRNS)

Official implementation of the NRNS paper: No RL, No Simulation: Learning to Navigate without Navigating

NRNS is a hierarchical, modular approach to image-goal navigation that uses a topological map and a distance estimator to navigate and self-localize. The distance function and target prediction function are learned from passive video trajectories gathered from MP3D and Gibson.
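To make the pipeline concrete, below is a minimal illustrative sketch of a pairwise distance estimator: two egocentric views go in, a predicted distance comes out. This is a hypothetical toy model for intuition only, not the released NRNS architecture (the actual pretrained models are downloaded by download_aux.py below).

import torch
import torch.nn as nn
import torchvision.models as models

class DistanceEstimator(nn.Module):
    """Toy pairwise model: embed two views, regress their geodesic distance."""
    def __init__(self, feat_dim=512):
        super().__init__()
        resnet = models.resnet18(pretrained=True)
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])  # (B, 512, 1, 1)
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # predicted distance in meters
        )

    def forward(self, img_a, img_b):
        feat_a = self.encoder(img_a).flatten(1)
        feat_b = self.encoder(img_b).flatten(1)
        return self.head(torch.cat([feat_a, feat_b], dim=1)).squeeze(1)

Supervision for a model like this can come straight from passive video: frames sampled a few steps apart along a trajectory yield noisy distance labels without any action or reward annotations, which is the sense in which no RL and no interactive simulation are needed.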

[project website]

Setup

This project is developed with Python 3.6. If you are using miniconda or anaconda, you can create an environment:

conda create -n nrns python=3.6
conda activate nrns

Install Habitat and Other Dependencies

NRNS makes extensive use of the Habitat Simulator and Habitat-Lab developed by FAIR. You will first need to install both Habitat-Sim and Habitat-Lab.

Please find the instructions to install Habitat here.

If you are using conda, Habitat-Sim can easily be installed with

conda install -c aihabitat -c conda-forge habitat-sim headless

We recommend downloading the test scenes and running the example script as described here to ensure the installation of Habitat-Sim and Habitat-Lab was successful. Now you can clone this repository and install the rest of the dependencies:

git clone [email protected]:meera1hahn/NRNS.git
cd NRNS
python -m pip install -r requirements.txt
python download_aux.py
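
Before downloading the full scene data, you can sanity-check the Habitat-Sim install with a short script like the sketch below. It assumes the Habitat test scenes were downloaded to data/scene_datasets/habitat-test-scenes/; note that the attribute for setting the scene (scene_id here) differs across Habitat-Sim versions.

import habitat_sim

sim_cfg = habitat_sim.SimulatorConfiguration()
# Assumed test-scene path; point this at wherever you put the test scenes.
sim_cfg.scene_id = "data/scene_datasets/habitat-test-scenes/skokloster-castle.glb"

agent_cfg = habitat_sim.agent.AgentConfiguration()
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
print(sim.get_agent(0).get_state().position)  # agent spawned in the scene
sim.close()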

Download Scene Data

Like Habitat-Lab, we expect a data folder (or symlink) with a particular structure in the top-level directory of this project. Running the download_aux.py script will download the pretrained models, but you will still need to download the scene data. We evaluate our agents on Matterport3D (MP3D) and Gibson scene reconstructions. Instructions on how to download RealEstate10K can be found here.

Image-Nav Test Episodes

The image-nav test episodes used in this paper for MP3D and Gibson can be found here. These were used to test all baselines and NRNS.

Matterport3D

The official Matterport3D download script (download_mp.py) can be accessed by following the "Dataset Download" instructions on their project webpage. The scene data can then be downloaded this way:

# requires running with python 2.7
python download_mp.py --task habitat -o data/scene_datasets/mp3d/

Extract this data to data/scene_datasets/mp3d such that it has the form data/scene_datasets/mp3d/{scene}/{scene}.glb. There should be 90 total scenes. We follow the standard train/val/test splits.
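
A quick way to confirm the layout, assuming the directory structure described above:

from pathlib import Path

scenes = sorted(Path("data/scene_datasets/mp3d").glob("*/*.glb"))
print(len(scenes))  # expect 90 scenes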

Gibson

The official Gibson dataset can be accessed on their project webpage. Please follow the link to download the Habitat Simulator compatible data. The link will first take you to the license agreement and then to the data. We follow the standard train/val/test splits.

Running pre-trained models

Look at the run scripts in src/image_nav/run_scripts/ for examples of how to run the model.

Difficulty setting options are: easy, medium, hard

Path type setting options are: straight, curved

For example, to run NRNS on Gibson without noise, using the straight path type at medium difficulty:

cd src/image_nav/
python -W ignore run.py \
    --dataset 'gibson' \
    --path_type 'straight' \
    --difficulty 'medium'
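
To sweep every evaluation setting, a small driver loop works; this sketch assumes only the run.py flags shown above.

import itertools
import subprocess

for path_type, difficulty in itertools.product(
    ["straight", "curved"], ["easy", "medium", "hard"]
):
    subprocess.run(
        ["python", "-W", "ignore", "run.py",
         "--dataset", "gibson",
         "--path_type", path_type,
         "--difficulty", difficulty],
        check=True,
    )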

Citing

If you use NRNS in your research, please cite the following paper:

@inproceedings{hahn_nrns_2021,
  title={No RL, No Simulation: Learning to Navigate without Navigating},
  author={Meera Hahn and Devendra Chaplot and Mustafa Mukadam and James M. Rehg and Shubham Tulsiani and Abhinav Gupta},
  booktitle={Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Comments
  • Can't reproduce results from paper

    Hello, after a fresh clone and following the instructions, I am getting the following results when I run

    python -W ignore run.py --dataset 'gibson' --path_type 'straight' --difficulty 'easy'
    

    100%|███████████████████████████████████████████████████| 1000/1000 [1:22:39<00:00, 4.96s/it]

    Type of Run: Dataset: GIBSON Data: STRAIGHT Data Type: easy Pose Noise: False; Actuation Noise: False

    Stats of Runs: Success Rate: 0.6630 SPL: 0.6087 Avg dist to goal: 1.0640 Avg taken path len - total: 3.9273 Avg taken path len - success: 2.3749 Avg gt path len - total: 2.2472 Avg gt path len - success: 2.1967

    For excel in above order: 0.6630 0.6087 1.0640 3.9273 2.3749 2.2472 2.1967

    which doesn't match the 68% reported in your paper. Any thoughts on why this might be happening?

    opened by Jbwasse2 13
  • How to use RealEstate10K data

    Thanks for the awesome work! Could you please include the code used to generate data from the RealEstate10K dataset in this repo? I also wonder how you extend the graph to generate unexplored nodes, since the videos in RealEstate10K have no depth during data generation. That would be very helpful for me to understand your awesome work. Thanks in advance!

    opened by recordmp3 0
  • Could you provide the details of training learned modules in the paper?

    Thanks for the awesome work! It seems the released code covers only testing; I'm wondering if you could provide the details of training the learned modules in the paper, especially the self-supervised parts trained on passive video. That would be very helpful for me to understand your awesome work. Thanks in advance!

    opened by JeremyLinky 2
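
For readers interpreting the metrics quoted in the first comment above: SPL is Success weighted by Path Length, from Anderson et al., "On Evaluation of Embodied Navigation Agents" (2018). A minimal reference implementation of the standard formula:

def spl(successes, shortest_path_lengths, taken_path_lengths):
    # Each episode contributes s * l / max(p, l): s is binary success,
    # l the shortest-path length, p the length of the path actually taken.
    terms = [
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_path_lengths, taken_path_lengths)
    ]
    return sum(terms) / len(terms)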
Owner

Meera Hahn
Ph.D. Student in Computer Science, School of Interactive Computing, Georgia Institute of Technology