Code for: https://berkeleyautomation.github.io/bags/

Overview

DeformableRavens

Code for the paper Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. The project website (linked above) also contains the data we used to train policies.

Installation

This is how to get the code running on a local machine. First, get conda on the machine if it isn't there already:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Then, create a new Python 3.7 conda environment (e.g., named "py3-defs") and activate it:

conda create -n py3-defs python=3.7
conda activate py3-defs

Then install:

./install_python_ubuntu.sh

Note I: the code is tested on Ubuntu 18.04. We have not tried other Ubuntu versions or other operating systems.

Note II: Installing TensorFlow using conda is usually easier than pip because the conda version will ship with the correct CUDA and cuDNN libraries, whereas the pip version is a nightmare regarding version compatibility.

Note III: the code has only been tested with PyBullet 3.0.4. In fact, there are some places which explicitly hard-code this requirement. Using later versions may work but is not recommended.
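If in doubt, you can confirm the installed version before running anything. This is a minimal sketch of such a check, not a script shipped with this repository:

import pkg_resources

# Verify the tested PyBullet version (the code hard-codes this requirement).
pb_version = pkg_resources.get_distribution('pybullet').version
assert pb_version == '3.0.4', f'Tested only with PyBullet 3.0.4, found {pb_version}'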

Environments and Tasks

This repository contains the tasks in the ICRA 2021 submission and in the predecessor paper on Transporters (presented at CoRL 2020). For the latter paper, there are (roughly) 10 tasks that came pre-shipped; the Transporters paper doesn't test with pushing or insertion-translation, but tests with all the others. See Tasks.md for some task-specific documentation.

Each task subclasses a Task class and needs to define its own reset(). The Task class defines an oracle policy used to get demonstrations (so the oracle is not implemented within each task subclass); the oracle is divided into cases depending on the action primitive, self.primitive, that the task uses.

Similarly, different tasks have different reward functions, but all are integrated into the Task superclass and divided based on the self.metric type: pose or zone.
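For illustration, here is a minimal, self-contained sketch of this structure. MyCableTask and the simplified Task base below are hypothetical stand-ins, not actual classes from this repository:

class Task:
    """Simplified stand-in for the repository's Task superclass."""

    def __init__(self):
        self.primitive = None  # action primitive; the oracle branches on this
        self.metric = None     # reward type: 'pose' or 'zone'

    def reset(self, env):
        raise NotImplementedError  # each task subclass builds its own scene


class MyCableTask(Task):
    """Hypothetical task subclass, for illustration only."""

    def __init__(self):
        super().__init__()
        self.primitive = 'pick_place'  # selects the oracle's action cases
        self.metric = 'zone'           # selects the superclass reward branch

    def reset(self, env):
        # Build the scene: load task-specific objects (e.g., a cable and a
        # target zone) and record goal poses for the oracle and the reward.
        pass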

Code Usage

Experiments start with python main.py; add --disp to see the PyBullet GUI (not used for large-scale experiments). The general logic of main.py proceeds as follows:

  • Gather expert demonstrations for the task and put them in data/{TASK}, unless a sufficient number of demonstrations is already there. There are sub-directories for action, color, depth, info, etc., which store the data as pickle files with consistent indexing per time step. Caution: this will start "counting" the data from the existing data/ directory. If you want entirely fresh data, delete the relevant file in data/.

  • Given the data, train the designated agent. The logged data is stored in logs/{AGENT}/{TASK}/{DATE}/{train}/ in the form of a tfevent file for TensorBoard. Note: it will do multiple training runs for statistical significance.

For deformables, we actually use a separate load.py script, due to some issues with creating multiple environments.

See Commands.md for commands to reproduce experimental results.
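As an illustration of the data layout described in the first bullet above, here is a minimal sketch of reading one stored episode back; the exact file names are assumptions for illustration, not necessarily the repository's naming convention:

import os
import pickle

def load_field(task_dir, field, episode_fname):
    # Each sub-directory (action, color, depth, info, ...) stores pickle
    # files whose per-time-step indexing is consistent across fields.
    path = os.path.join(task_dir, field, episode_fname)
    with open(path, 'rb') as f:
        return pickle.load(f)

# Hypothetical usage: step t of 'action' lines up with step t of 'color'.
# actions = load_field('data/cable-shape', 'action', '000000.pkl')
# colors  = load_field('data/cable-shape', 'color',  '000000.pkl')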

Downloading the Data

We normally generate 1000 demos for each of the tasks. However, this can take a long time, especially for the bag tasks, so we provide pre-generated datasets for all the tasks we tested on the project website. For example, suppose we want to download demonstration data for the "bag-color-goal" task. Download the demonstration data from the website; since this is a goal-conditioned task, also download the goal demonstrations. Make new data/ and goals/ directories and put the tar.gz files in the respective directories:

deformable-ravens/
    data/
        bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
    goals/
        bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Note: if you generate data using the main.py script, it will automatically create the data/ directory, and the generate_goals.py script will similarly create goals/. You only need to create data/ and goals/ manually when downloading pre-existing datasets to put them in the right spot.

Then untar both of them in their respective directories:

tar -zxvf bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
tar -zxvf bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Now the data should be ready! If you want to inspect and debug the data, for example the goals data, then do:

python ravens/dataset.py --path goals/bag-color-goal/

Note that by default it saves any content in goals/ to goals_out/ and data in data/ to data_out/. Also, by default, it will load and save images, which can be very computationally intensive if you do this for the full 1000 demos. (The goals/ data only has 20 demos.) You can change this easily in the main method of ravens/dataset.py.

Running the script will print out some interesting data statistics for you.

Miscellaneous

If you have questions, please use the public issue tracker, so that all of us can benefit from your questions.

If you find this code or research paper helpful, please consider citing it:

@article{seita_bags_2021,
    author  = {Daniel Seita and Pete Florence and Jonathan Tompson and Erwin Coumans and Vikas Sindhwani and Ken Goldberg and Andy Zeng},
    title   = {{Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks}},
    journal = {arXiv preprint arXiv:2012.03385},
    year    = {2020}
}
Comments
  • Unable to run `main.py`: "No module named 'tensorflow.python.types'"

    I followed the instructions (I think):

    $ cd deformable-ravens
    $ git log -n 1 --oneline --no-decorate
    6ff2443 remove outdated bag files (we're using the other ones)
    $ conda create -n py3-bullet python=3.7
    $ conda activate py3-bullet
    $ ./install_python_ubuntu.sh
    

    Then I got the following error:

    $ python main.py -h
    pybullet build time: Sep 22 2020 00:55:20
    Detected TensorFlow version:  2.2.0
    Traceback (most recent call last):
      File "./main.py", line 45, in <module>
        from ravens import Dataset, Environment, agents, tasks
      File ".../deformable-ravens/ravens/__init__.py", line 2, in <module>
        import ravens.agents as agents
      File ".../deformable-ravens/ravens/agents/__init__.py", line 1, in <module>
        from ravens.agents.dummy import DummyAgent
      File ".../deformable-ravens/ravens/agents/dummy.py", line 10, in <module>
        from ravens.models import Attention, Transport
      File ".../deformable-ravens/ravens/models/__init__.py", line 6, in <module>
        from ravens.models.conv_mlp import ConvMLP
      File ".../deformable-ravens/ravens/models/conv_mlp.py", line 5, in <module>
        import tensorflow_hub as hub
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_hub/__init__.py", line 29, in <module>
        from tensorflow_hub.estimator import LatestModuleExporter
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_hub/estimator.py", line 64, in <module>
        class LatestModuleExporter(tf_v1.estimator.Exporter):
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
        module = self._load()
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
        module = importlib.import_module(self.__name__)
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/__init__.py", line 10, in <module>
        from tensorflow_estimator._api.v1 import estimator
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py", line 10, in <module>
        from tensorflow_estimator._api.v1.estimator import experimental
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/experimental/__init__.py", line 10, in <module>
        from tensorflow_estimator.python.estimator.canned.dnn import dnn_logit_fn_builder
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 31, in <module>
        from tensorflow_estimator.python.estimator import estimator
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 51, in <module>
        from tensorflow_estimator.python.estimator import model_fn as model_fn_lib
      File "{home}/.local/opt/miniconda3/envs/py3-bullet/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/model_fn.py", line 29, in <module>
        from tensorflow.python.types import core
    ModuleNotFoundError: No module named 'tensorflow.python.types'
    

    See output of conda list --export here: https://gist.github.com/EricCousineau-TRI/50284aba406e965956a2827de37745b1

    opened by EricCousineau-TRI 9
  • Questions regarding network architecture

    Hello Dr Seita,

    First of all, thank you for making the code for your paper publicly available, and for the contribution your work has brought to the research community. I have been reading the "goal-conditioned transporter network" paper in detail, and have a few points of confusion I was hoping I could ask about:

    1. For Transporter-Goal-Stack: Although Fig 3 of the paper seems to imply that only $o_t$ is input to $\Phi_{query}$ and $\Phi_{key}$, I was reading through the code and noticed that it seems that actually the full stacked $(o_t, o_g)$ is used as the input to these two FCNs (not only as the input to $f_{pick}$). Could you please confirm this point? link
    2. For Transporter-Goal-Split: Similarly, although Fig 3 of the paper seems to imply that only $o_t$ is input to $f_{pick}$ in this architecture, from the code it seems that actually the full stacked $(o_t, o_g)$ is used as input. Could you please confirm this point? link (It seems I may be having some misunderstandings regarding how to read Fig 3 properly, so please do let me know if this is the case.)
    3. From the code it seems that rotations are actually performed (although not shown in Fig 3). My understanding is that (some input image of some size) is input to $\Phi_{query}$ and we get dense features as output from this; then $T_{pick}$ is sampled from $Q_{pick}$, and then we create rotated crops from the extracted dense features, around $T_{pick}$. We then convolve each crop to get $T_{place}$ as the location with maximum placing success, calculated over all locations and over all rotations. Is this correct?
    4. Related to the previous question. Is it correct to understand $Q_{place}$ as a set of num_rotations maps where each map is associated with a rotation, and the pixel values in the map are correlated with predicted placement success given this pick and rotation?

    I'm sorry for so many questions but I would really like to better understand the paper's details. Thank you very much for your time!
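For reference, questions 3 and 4 describe the placing computation. A minimal NumPy sketch of that idea, under assumed shapes and names (q_place, key_feats, and query_feats are illustrative, not this repository's actual code):

import numpy as np
from scipy import ndimage

def q_place(key_feats, query_feats, pick_yx, num_rotations=24, crop=64):
    # key_feats, query_feats: (H, W, C) dense features from the two FCNs.
    # Returns (num_rotations, H, W) placing scores; assumes the pick point
    # is far enough from the image borders for the crops below.
    H, W, C = key_feats.shape
    y, x = pick_yx
    half = crop // 2
    maps = np.zeros((num_rotations, H, W), dtype=np.float32)
    for r in range(num_rotations):
        angle = 360.0 * r / num_rotations
        # Take a patch centered on the pick, rotate it about its center
        # (i.e., about the pick point), then keep the central crop.
        patch = query_feats[y - crop:y + crop, x - crop:x + crop]
        patch = ndimage.rotate(patch, angle, reshape=False, order=1)
        kernel = patch[half:half + crop, half:half + crop]
        # Cross-correlate the rotated crop against the key features and
        # sum over channels: one placing score map per rotation.
        for c in range(C):
            maps[r] += ndimage.correlate(
                key_feats[..., c], kernel[..., c], mode='constant')
    return maps  # argmax over (rotation, y, x) gives T_place

Under this reading, each of the num_rotations maps scores placements for one rotation, matching question 4.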

    opened by convolutionalJellyfish 7
  • Error message from Python packages (TensorFlow package versions may be mismatched)

    I created an anaconda environment using the instructions in the README (install_python_ubuntu.sh) and tried to execute dataset.py, but I got some errors from the TensorFlow packages. If you know the solution, could you share it with me? Below is the error message from when I ran "python ravens/dataset.py --path goals/bag-color-goal/".

    Thank you,

    pybullet build time: Sep 22 2020 00:55:20
    2021-10-04 18:00:30.592384: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
    Detected TensorFlow version: 2.4.1
    Traceback (most recent call last):
      File "ravens/dataset.py", line 9, in <module>
        from ravens import utils as U
      File "/home/user/deformable-ravens/ravens/__init__.py", line 2, in <module>
        import ravens.agents as agents
      File "/home/user/deformable-ravens/ravens/agents/__init__.py", line 1, in <module>
        from ravens.agents.dummy import DummyAgent
      File "/home/user/deformable-ravens/ravens/agents/dummy.py", line 10, in <module>
        from ravens.models import Attention, Transport
      File "/home/user/deformable-ravens/ravens/models/__init__.py", line 6, in <module>
        from ravens.models.conv_mlp import ConvMLP
      File "/home/user/deformable-ravens/ravens/models/conv_mlp.py", line 5, in <module>
        import tensorflow_hub as hub
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_hub/__init__.py", line 88, in <module>
        from tensorflow_hub.estimator import LatestModuleExporter
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_hub/estimator.py", line 62, in <module>
        class LatestModuleExporter(tf.compat.v1.estimator.Exporter):
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
        module = self._load()
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
        module = importlib.import_module(self.__name__)
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_estimator/__init__.py", line 10, in <module>
        from tensorflow_estimator._api.v1 import estimator
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py", line 13, in <module>
        from tensorflow_estimator._api.v1.estimator import tpu
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/tpu/__init__.py", line 14, in <module>
        from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimator
      File "/home/user/anaconda3/envs/py3-defs/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 108, in <module>
        _tpu_estimator_gauge = tf.compat.v2.__internal__.monitoring.BoolGauge(
    AttributeError: module 'tensorflow._api.v2.compat.v2.__internal__' has no attribute 'monitoring'

    opened by k-makihara 3
  • For goal-conditioned tasks, how does the environment judge that the task is successful?

    Hi, I have a question. For goal-conditioned tasks, how does the environment judge that the task is successful? The inputs to the agent are the current observation image and the goal image. Does the environment judge success by comparing the current observation image with the goal image, or by comparing particle positions, with the image used only as input to the agent?

    opened by ShiguangSun 2
  • Add missing asset

    This image is needed for the floor texture when rendering the environment.

    Reproduction steps:

    1. Train the model.
    2. Run the model (do rollouts): python load.py --gpu=0 --agent=${AGENT} --hz=${HZ} --task=${TASK} --num_rots_inf=${ROTS} --train_run=${tr} --num_demos=${NDEMOS} --disp

    Without this file, you get the error

    b3Printf: b3Warning[examples/Importers/ImportMeshUtility/b3ImportMeshUtility.cpp,182]:
    
    b3Printf: not found [/path-to-repo/ravens/assets/plane/checker_blue.png]
    
    opened by gautams3 1
  • Using the 2-finger gripper?

    Thanks a lot for sharing the amazing work,

    I can run the demo using self.ee = 'suction', but I failed to do it with Robotiq2F85.

    I see this is the line where the specific end-effector is loaded. For example, using the insertion.py task, from here I changed it to self.ee = 'gripper'.

    The gripper could be loaded, but the robot doesn't move correctly to the target object; the end-effector always moves upside down. The robot seems to approach the object for picking, but the gripper points upward.

    When I run python main.py --disp --task insertion I get the following,

    Traceback (most recent call last):
      File "main.py", line 251, in <module>
        demo_reward, episode, t, last_obs_info = rollout(task.oracle(env), env, task, args)
      File "main.py", line 114, in rollout
        (obs, reward, done, info) = env.step(act)
      File "/home/abdu/deformable-ravens/ravens/environment.py", line 268, in step
        success = self.primitives[act['primitive']](**act['params'])
      File "/home/abdu/deformable-ravens/ravens/environment.py", line 602, in pick_place
        while not self.ee.detect_contact(def_IDs) and target_pose[2] > 0:
    TypeError: detect_contact() takes 1 positional argument but 2 were given
    numActiveThreads = 0
    stopping threads
    Thread with taskId 0 exiting
    Thread TERMINATED
    

    When I print pose0[0] and pose0[1] from here, I get

    Pose0[0] (0.359375, -0.21249999999999997, 0.040639832615852356)
    Pose0[1] (0.0, 0.0, 0.0, 1.0)
    

    Any help would be greatly appreciated... Thanks

    opened by Abduoit 1
  • install_ubuntu_python: Upgrade to tensorflow-gpu==2.4.1

    • Fixes error, No module named 'tensorflow.python.types'
    • Upgrade to tensorflow-addons==0.13.0, per console warnings when using tensorflow-addons==0.11.0 with tensorflow-gpu==2.4.1

    Resolves #4 per @PeterQiu0516's suggestion - thanks!

    @DanielTakeshi Any chance you think this is a good fix?

    This allows me to run the following two sample commands:

    python main.py --gpu=0 --agent=dummy --hz=240 --task=cable-shape --disp
    python main.py --gpu=0 --agent=dummy --hz=480 --task=bag-items-easy --disp
    
    opened by EricCousineau-TRI 1
  • The parameter meaning of load.py output

    Excuse me, I'm eager to know why the ResNet architecture used here omits the batch-normalization layer: ResNet43_8s(input_shape, output_dim, include_batchnorm=False, batchnorm_axis=3, prefix='', cutoff_early=False). We look forward to your reply.

    opened by 74284853 0
  • The results are inconsistent with those in the paper

    Hi!

    I used your code to train the cable-shape task with Transporter. In the final test, I found that the success rate was quite different from that in the paper: with 1000 demos, the paper reports a success rate of 86.5%, but my highest success rate was only 70%. I used the load.py file for testing and tested 100 cases.

    Why are my results so different from yours?

    Did you use the default 20 cases in your code when testing, or did you choose 100?

    The following results were tested with models at 25,000, 30,000, 35,000, and 40,000 training steps respectively. (Results image not shown.)

    opened by TriBall3 9
  • NOTE TO SELF: make task names consistent with what's in the paper!

    The code internally uses some task names that differ from those reported in the paper, since we changed the paper's names for readability, but changing task names in the code requires a little more care. To be specific:

    • The three "fabric" tasks in the paper have "cloth" instead, e.g., "fabric-flat" in the paper is "cloth-flat" in this code.
    • The "bag-items-1" and "bag-items-2" in the paper are referred to as "bag-items-easy" and "bag-items-hard" in the code.
    • The "block-notarget" task in the paper (see the appendix) is referred to as "insertion-goal" in the code.

    Complicating matters, a few places in the code branch by detecting whether a task name contains 'cloth', etc.

    opened by DanielTakeshi 0
Owner
Daniel Seita: Computer Science Ph.D. student at UC Berkeley working in Artificial Intelligence.