Trajectory Transformer

Code release for "Offline Reinforcement Learning as One Big Sequence Modeling Problem" (NeurIPS 2021).

Installation

All python dependencies are in environment.yml. Install with:

conda env create -f environment.yml
conda activate trajectory
pip install -e .

For reproducibility, we have also included system requirements in a Dockerfile (see the Docker section below), but the conda installation should work on most standard Linux machines.
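Before training, you can sanity-check the installation with a short Python snippet. Importing d4rl is what registers the D4RL environments with gym, so a successful gym.make confirms the setup (a minimal check, not part of the repo's scripts):

# sanity check: importing d4rl registers the D4RL environments with gym
import gym
import d4rl  # noqa: F401

env = gym.make('halfcheetah-medium-v2')
print(env.observation_space.shape, env.action_space.shape)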

Usage

Train a transformer with: python scripts/train.py --dataset halfcheetah-medium-v2

To reproduce the offline RL results: python scripts/plan.py --dataset halfcheetah-medium-v2

By default, these commands will use the hyperparameters in config/offline.py. You can override them with runtime flags:

python scripts/plan.py --dataset halfcheetah-medium-v2 \
	--horizon 5 --beam_width 32

A few hyperparameters differ from those listed in the paper because of changes to the discretization strategy. These hyperparameters will be updated in the next arXiv version to match what is currently in the codebase.

Pretrained models

We have provided pretrained models for 16 datasets: {halfcheetah, hopper, walker2d, ant}-{expert-v2, medium-expert-v2, medium-v2, medium-replay-v2}. Download them with ./pretrained.sh

The models will be saved in logs/$DATASET/gpt/pretrained. To plan with these models, refer to them using the gpt_loadpath flag:

python scripts/plan.py --dataset halfcheetah-medium-v2 \
	--gpt_loadpath gpt/pretrained

pretrained.sh will also download 15 plans from each model, saved to logs/$DATASET/plans/pretrained. Read them with python plotting/read_results.py.
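If you prefer to aggregate the downloaded plans yourself, a minimal sketch along the following lines should work. It assumes each plan run writes a rollout.json containing a normalized 'score' entry under logs/$DATASET/plans/pretrained; the exact file layout is an assumption, so adjust the glob to match what pretrained.sh actually downloads:

# hedged sketch: average the normalized scores of downloaded plans for one dataset
# (assumes each plan directory contains a rollout.json with a 'score' field)
import glob
import json

paths = glob.glob('logs/halfcheetah-medium-v2/plans/pretrained*/**/rollout.json', recursive=True)
scores = [json.load(open(path))['score'] for path in paths]
print(f'{len(scores)} plans | mean normalized score: {sum(scores) / len(scores):.3f}')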

To create the table of offline RL results from the paper, run python plotting/table.py. This will print a table that can be copied into a LaTeX document. The table source is reproduced below.
\begin{table*}[h]
\centering
\small
\begin{tabular}{llrrrrrr}
\toprule
\multicolumn{1}{c}{\bf Dataset} & \multicolumn{1}{c}{\bf Environment} & \multicolumn{1}{c}{\bf BC} & \multicolumn{1}{c}{\bf MBOP} & \multicolumn{1}{c}{\bf BRAC} & \multicolumn{1}{c}{\bf CQL} & \multicolumn{1}{c}{\bf DT} & \multicolumn{1}{c}{\bf TT (Ours)} \\
\midrule
Medium-Expert & HalfCheetah & $59.9$ & $105.9$ & $41.9$ & $91.6$ & $86.8$ & $95.0$ \scriptsize{\raisebox{1pt}{$\pm 0.2$}} \\
Medium-Expert & Hopper & $79.6$ & $55.1$ & $0.9$ & $105.4$ & $107.6$ & $110.0$ \scriptsize{\raisebox{1pt}{$\pm 2.7$}} \\
Medium-Expert & Walker2d & $36.6$ & $70.2$ & $81.6$ & $108.8$ & $108.1$ & $101.9$ \scriptsize{\raisebox{1pt}{$\pm 6.8$}} \\
Medium-Expert & Ant & $-$ & $-$ & $-$ & $-$ & $-$ & $116.1$ \scriptsize{\raisebox{1pt}{$\pm 9.0$}} \\
\midrule
Medium & HalfCheetah & $43.1$ & $44.6$ & $46.3$ & $44.0$ & $42.6$ & $46.9$ \scriptsize{\raisebox{1pt}{$\pm 0.4$}} \\
Medium & Hopper & $63.9$ & $48.8$ & $31.3$ & $58.5$ & $67.6$ & $61.1$ \scriptsize{\raisebox{1pt}{$\pm 3.6$}} \\
Medium & Walker2d & $77.3$ & $41.0$ & $81.1$ & $72.5$ & $74.0$ & $79.0$ \scriptsize{\raisebox{1pt}{$\pm 2.8$}} \\
Medium & Ant & $-$ & $-$ & $-$ & $-$ & $-$ & $83.1$ \scriptsize{\raisebox{1pt}{$\pm 7.3$}} \\
\midrule
Medium-Replay & HalfCheetah & $4.3$ & $42.3$ & $47.7$ & $45.5$ & $36.6$ & $41.9$ \scriptsize{\raisebox{1pt}{$\pm 2.5$}} \\
Medium-Replay & Hopper & $27.6$ & $12.4$ & $0.6$ & $95.0$ & $82.7$ & $91.5$ \scriptsize{\raisebox{1pt}{$\pm 3.6$}} \\
Medium-Replay & Walker2d & $36.9$ & $9.7$ & $0.9$ & $77.2$ & $66.6$ & $82.6$ \scriptsize{\raisebox{1pt}{$\pm 6.9$}} \\
Medium-Replay & Ant & $-$ & $-$ & $-$ & $-$ & $-$ & $77.0$ \scriptsize{\raisebox{1pt}{$\pm 6.8$}} \\
\midrule
\multicolumn{2}{c}{\bf Average (without Ant)} & 47.7 & 47.8 & 36.9 & 77.6 & 74.7 & 78.9 \hspace{.6cm} \\
\multicolumn{2}{c}{\bf Average (all settings)} & $-$ & $-$ & $-$ & $-$ & $-$ & 82.2 \hspace{.6cm} \\
\bottomrule
\end{tabular}
\label{table:d4rl}
\end{table*}

To create the average performance plot, run python plotting/plot.py.
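The plot reports the same per-algorithm averages as the table above. If you just want a quick visual without the repo's plotting utilities, a minimal matplotlib sketch using the 'Average (without Ant)' row would look like this (illustrative only, not plotting/plot.py):

# illustrative bar plot of the average scores from the results table above
import matplotlib.pyplot as plt

averages = {'BC': 47.7, 'MBOP': 47.8, 'BRAC': 36.9, 'CQL': 77.6, 'DT': 74.7, 'TT (Ours)': 78.9}
plt.bar(range(len(averages)), list(averages.values()), tick_label=list(averages.keys()))
plt.ylabel('Average normalized return (without Ant)')
plt.savefig('average_performance.png', dpi=150)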

Docker

Copy your MuJoCo key to the Docker build context and build the container:

cp ~/.mujoco/mjkey.txt azure/files/
docker build -f azure/Dockerfile . -t trajectory

Test the container:

docker run -it --rm --gpus all \
	--mount type=bind,source=$PWD,target=/home/code \
	--mount type=bind,source=$HOME/.d4rl,target=/root/.d4rl \
	trajectory \
	bash -c \
	"export PYTHONPATH=$PYTHONPATH:/home/code && \
	python /home/code/scripts/train.py --dataset hopper-medium-expert-v2 --exp_name docker/"

Running on Azure

Setup

  1. Launching jobs on Azure requires one more Python dependency:
pip install git+https://github.com/JannerM/doodad.git@janner
  2. Tag the image built in the previous section and push it to Docker Hub:
export DOCKER_USERNAME=$(docker info | sed '/Username:/!d;s/.* //')
docker tag trajectory ${DOCKER_USERNAME}/trajectory:latest
docker image push ${DOCKER_USERNAME}/trajectory
  3. Update azure/config.py, either by modifying the file directly or by setting the relevant environment variables (see the sketch after this list). To set the AZURE_STORAGE_CONNECTION variable, navigate to the Access keys section of your storage account, click Show keys, and copy the Connection string.
  4. Download azcopy: ./azure/download.sh
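For illustration, the environment-variable route in step 3 might look like the following inside azure/config.py. Only AZURE_STORAGE_CONNECTION is named above; the other variable and the defaults are hypothetical:

# hedged sketch of environment-variable overrides in azure/config.py
# (AZURE_STORAGE_CONNECTION is named in the setup steps; everything else here is hypothetical)
import os

AZURE_STORAGE_CONNECTION = os.environ.get('AZURE_STORAGE_CONNECTION', '')
AZURE_STORAGE_CONTAINER = os.environ.get('AZURE_STORAGE_CONTAINER', 'trajectory')

assert AZURE_STORAGE_CONNECTION, (
    'Set AZURE_STORAGE_CONNECTION to the Connection string from the Access keys section'
)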

Usage

Launch training jobs with python azure/launch_train.py and planning jobs with python azure/launch_plan.py.

These scripts do not take runtime arguments. Instead, they run the corresponding scripts (scripts/train.py and scripts/plan.py, respectively) using the Cartesian product of the parameters in params_to_sweep.
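To make the sweep semantics concrete, the sketch below expands a parameter dictionary into one job per combination. The name params_to_sweep comes from the launch scripts; the parameter values shown are hypothetical:

# illustrative Cartesian-product expansion of params_to_sweep into individual jobs
from itertools import product

params_to_sweep = {  # hypothetical sweep values
    'dataset': ['halfcheetah-medium-v2', 'hopper-medium-v2'],
    'horizon': [5, 15],
    'beam_width': [32, 128],
}

keys = list(params_to_sweep)
for values in product(*params_to_sweep.values()):
    job_args = dict(zip(keys, values))
    print(job_args)  # each combination becomes one scripts/plan.py (or train.py) job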

Viewing results

To rsync the results from the Azure storage container, run ./azure/sync.sh.

To mount the storage container:

  1. Create a blobfuse config with ./azure/make_fuse_config.sh
  2. Run ./azure/mount.sh to mount the storage container to ~/azure_mount

To unmount the container, run sudo umount -f ~/azure_mount; rm -r ~/azure_mount

Reference

@inproceedings{janner2021sequence,
  title = {Offline Reinforcement Learning as One Big Sequence Modeling Problem},
  author = {Michael Janner and Qiyang Li and Sergey Levine},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2021},
}

Acknowledgements

The GPT implementation is from Andrej Karpathy's minGPT repo.

Comments
  • About the gym env.

    Hi, I'm running this project without Docker, just VS Code. However, when I run

    python scripts/train.py --dataset halfcheetah-medium-v2

    it fails as shown below. How can I fix it? Thank you so much.

    python scripts/train.py --dataset halfcheetah-medium-v2
    Warning: Mujoco-based envs failed to import. Set the environment variable D4RL_SUPPRESS_IMPORT_ERROR=1 to suppress this message.
    No module named 'mjrl'
    Warning: Flow failed to import. Set the environment variable D4RL_SUPPRESS_IMPORT_ERROR=1 to suppress this message.
    No module named 'flow'
    Warning: CARLA failed to import. Set the environment variable D4RL_SUPPRESS_IMPORT_ERROR=1 to suppress this message.
    No module named 'carla'
    pybullet build time: May 20 2022 19:44:17
    [ utils/setup ] Reading config: config.offline:halfcheetah_medium_v2
    [ utils/setup ] Not using overrides | config: config.offline | dataset: halfcheetah_medium_v2
    [ utils/setup ] Saved args to logs/halfcheetah-medium-v2/gpt/azure/args.json
    Traceback (most recent call last):
      File "/home/**/software/anaconda3/envs/trajectory/lib/python3.8/site-packages/gym/envs/registration.py", line 121, in spec
        return self.env_specs[id]
    KeyError: 'halfcheetah-medium-v2'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "scripts/train.py", line 26, in <module>
        env = datasets.load_environment(args.dataset)
      File "/home/**/Desktop/TT/trajectory-transformer/trajectory/datasets/d4rl.py", line 84, in load_environment
        wrapped_env = gym.make(name)
      File "/home/**/software/anaconda3/envs/trajectory/lib/python3.8/site-packages/gym/envs/registration.py", line 145, in make
        return registry.make(id, **kwargs)
      File "/home/**/software/anaconda3/envs/trajectory/lib/python3.8/site-packages/gym/envs/registration.py", line 89, in make
        spec = self.spec(path)
      File "/home/**/software/anaconda3/envs/trajectory/lib/python3.8/site-packages/gym/envs/registration.py", line 131, in spec
        raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
    gym.error.UnregisteredEnv: No registered env with id: halfcheetah-medium-v2
    
    opened by CRLqinliang 12
  • Issue with mc_bin_client.py

    While trying to run your given script, I faced the following problem:

    Traceback (most recent call last):
      File "scripts/train.py", line 6, in <module>
        import trajectory.utils as utils
      File "/home/rs/18CS91P06/Bill_payment/trajectory-transformer-master/trajectory/utils/__init__.py", line 1, in <module>
        from .setup import Parser, watch
      File "/home/rs/18CS91P06/Bill_payment/trajectory-transformer-master/trajectory/utils/setup.py", line 6, in <module>
        from tap import Tap
      File "/home/rs/18CS91P06/anaconda3/envs/trajectory/lib/python3.8/site-packages/tap.py", line 6, in <module>
        from mc_bin_client import mc_bin_client, memcacheConstants as Constants
      File "/home/rs/18CS91P06/anaconda3/envs/trajectory/lib/python3.8/site-packages/mc_bin_client/mc_bin_client.py", line 14, in <module>
        from memcacheConstants import REQ_MAGIC_BYTE, RES_MAGIC_BYTE
    ModuleNotFoundError: No module named 'memcacheConstants'

    opened by paramita1024 3
  • Pretrained Model in AntMaze

    Hello, I'm interested in the AntMaze tasks and notice that currently the pretrained models in AntMaze are not provided. Will you provide the pretrained models in AntMaze in the future? Thank you!

    opened by TsuTikgiau 3
  • KeyError: 'halfcheetah-medium-v2'

    Dear author,

    After installation and downloading the pretrained models and plans, I still run into trouble running the command python scripts/train.py --dataset halfcheetah-medium-v2

    (trajectory) qz@qz:~/trajectory-transformer$ python scripts/train.py --dataset halfcheetah-medium-v2

    [ utils/setup ] Reading config: config.offline:halfcheetah_medium_v2
    [ utils/setup ] Not using overrides | config: config.offline | dataset: halfcheetah_medium_v2
    [ utils/setup ] Made savepath: logs/halfcheetah-medium-v2/gpt/azure
    [ utils/setup ] Saved args to logs/halfcheetah-medium-v2/gpt/azure/args.json
    Traceback (most recent call last):
      File "/home/qz/anaconda3/envs/trajectory/lib/python3.6/site-packages/gym/envs/registration.py", line 121, in spec
        return self.env_specs[id]
    KeyError: 'halfcheetah-medium-v2'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "scripts/train.py", line 25, in <module>
        env = datasets.load_environment(args.dataset)
      File "/home/qz/trajectory-transformer/trajectory/datasets/d4rl.py", line 81, in load_environment
        wrapped_env = gym.make(name)
      File "/home/qz/anaconda3/envs/trajectory/lib/python3.6/site-packages/gym/envs/registration.py", line 145, in make
        return registry.make(id, **kwargs)
      File "/home/qz/anaconda3/envs/trajectory/lib/python3.6/site-packages/gym/envs/registration.py", line 89, in make
        spec = self.spec(path)
      File "/home/qz/anaconda3/envs/trajectory/lib/python3.6/site-packages/gym/envs/registration.py", line 131, in spec
        raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
    gym.error.UnregisteredEnv: No registered env with id: halfcheetah-medium-v2

    Thank you very much for your attention.

    opened by GromShine 3
  • ERROR: Could not find a version that satisfies the requirement dm-control

    Hi,

    I encounter an error when creating the Conda environment. It seems that the version of dm-control (from D4RL) is unavailable. Is there any recommended version for the D4RL package? The following is the full error message.

    Pip subprocess error:
      Running command git clone -q https://github.com/JannerM/d4rl.git /tmp/pip-req-build-enrmm6ao
      Running command git rev-parse -q --verify 'sha^d5719e2c6ef6ab3b1c678a846c02621abb8074a4'
      Running command git fetch -q https://github.com/JannerM/d4rl.git d5719e2c6ef6ab3b1c678a846c02621abb8074a4
      Running command git checkout -q d5719e2c6ef6ab3b1c678a846c02621abb8074a4
      WARNING: Missing build requirements in pyproject.toml for mujoco-py==2.0.2.13 from https://files.pythonhosted.org/packages/2f/48/b108057c1a23c8da9f4cdc7a7c46ab7cec49c3563c0706d50f2527de6ba0/mujoco-py-2.0.2.13.tar.gz#sha256=d6ae66276b565af9063597fda70683a89c7356290f5ac3961b794ee90ec50eea (from -r /708HDD/hungyh/trajectory-transformer/condaenv.l4fl7x8g.requirements.txt (line 4)).
      WARNING: The project does not specify a build backend, and pip cannot fall back to setuptools without 'wheel'.
      Running command git clone -q git://github.com/deepmind/dm_control /tmp/pip-install-ehe12w5d/dm-control_d0b0cee6667746188485b8f85955e996
      fatal: unable to connect to github.com:
      github.com[0: 20.27.177.113]: errno=Connection timed out
    
    WARNING: Discarding git+git://github.com/deepmind/dm_control@90f00e4e80af56abb9f905070d0c152845db5602#egg=dm_control. Command errored out with exit status 128: git clone -q git://github.com/deepmind/dm_control /tmp/pip-install-ehe12w5d/dm-control_d0b0cee6667746188485b8f85955e996 Check the logs for full command output.
      Running command git clone -q git://github.com/aravindr93/mjrl /tmp/pip-install-ehe12w5d/mjrl_82435f9d000845a69223ca68a1c237e4
      fatal: unable to connect to github.com:
      github.com[0: 20.27.177.113]: errno=Connection timed out
    
    WARNING: Discarding git+git://github.com/aravindr93/mjrl@master#egg=mjrl. Command errored out with exit status 128: git clone -q git://github.com/aravindr93/mjrl /tmp/pip-install-ehe12w5d/mjrl_82435f9d000845a69223ca68a1c237e4 Check the logs for full command output.
    ERROR: Could not find a version that satisfies the requirement dm-control (unavailable) (from d4rl) (from versions: 0.0.286587932, 0.0.286955599, 0.0.288398964, 0.0.288483845, 0.0.295778102, 0.0.300771433, 0.0.312466143, 0.0.318037100, 0.0.318066097, 0.0.319497192, 0.0.322773188, 0.0.355168290, 0.0.364896371)
    ERROR: No matching distribution found for dm-control (unavailable)
    
    failed
    
    CondaEnvException: Pip failed
    
    

    Thanks!

    opened by HungYuHeng 2
  • purpose of pad_to_full_observation

    Hi! First of all, thank you for such interesting work!

    I'm trying to figure out how trajectories are represented in this work. As far as I understand, after the transformer blocks we get shapes of [batch, block_size, embedding_dim]. In a normal transformer we would just pass this to the head, for example nn.Linear(embedding_dim, vocab_size), and get logits for prediction.

    Why wouldn't that work? What's the intuition behind the padding and reshape (and EinLinear) that you do? It doesn't seem to be mentioned in the paper.

    Also, what is the stop token? There seem to be no special cases for ending in beam_plan. Is this just for done?

    Thanks!

    opened by Howuhh 2
  • No registered env with id: halfcheetah-medium-v2

    When I run "python scripts/train.py --dataset halfcheetah-medium-v2", then exception occurred : "gym.error.UnregisteredEnv: No registered env with id: halfcheetah-medium-v2". And my gym version and mujoco version are all same as environment.yml

    opened by QuBohao 1
  • double forward in goal gpt

    Hi! I noticed one more non-obvious thing in the goal-conditioned version of GPT.

    Here: https://github.com/jannerm/trajectory-transformer/blob/e0b5f12677a131ee87c65bc01179381679b3cfef/trajectory/models/transformers.py#L288-L295

    After you append the goal embeddings to the main sequence, you apply self.blocks twice. Is that how it's intended to work? Shouldn't once be enough, since all embeddings would have the needed information about the goal due to the attention mechanism?

    opened by Howuhh 1
  • Some questions about hyperparameters in newer version

    Dear Author,

    I am very interested in your work and am trying to reproduce your experimental results.

    Previously I came close to the score reported in the older version of your paper (approximately 70 on average over the 3 × 3 datasets). But I noticed that you updated the paper on arXiv in November, and the average score for TT (quantile) went up to 78.9 over the 3 × 3 datasets.

    I also noticed that you list your beam search hyperparameters in Appendix E, where k_act is 20. The listed hyperparameters have some discrepancy with your config file (config/offline.py), where the default k_act is None and cdf_act is 0.6. I am wondering if you changed the hyperparameters and obtained a higher score. If so, could you please update the config file so that I can also reproduce your results?

    Thanks!

    opened by FallCicada 1
  • [Question] Output shape of heads

    Thank you for such interesting work.

    I'm really interested in it and trying to understand your code, but I wonder why the head network outputs "#vocabulary + 1". Can you explain this for me?

    opened by jsw7460 0
  • imitation learning results on HalfCheetah env

    Hi! I noticed that I can't get good results on the HalfCheetah environment with imitation learning (with plain beam search decoding by logprob), even after long training and without overfitting (but I can on Hopper). I also noticed that in the paper only results on Hopper and Walker2d are presented in the imitation learning section.

    Have you encountered the same difficulties, or did you just not consider testing in this environment? If so, were there any particular reasons?

    opened by Howuhh 0
  • Question about the visualisation of four-rooms

    Hi,

    In Figure 6 of the paper you draw trajectories in the four-rooms environment. Since the observations in this environment are image-based, could you share the code you used to draw the trajectories?

    opened by louieworth 1
  • Question about D4RL-gym dataset version

    Hi, I recently read your paper and it inspired me a lot; it is no doubt a good paper. However, I am confused about the version of the D4RL datasets used for your baselines. I notice that in "Appendix C: Baseline performance sources", the results of BC, MOPO (by the way, I didn't find MOPO in your experiments section), and MBOP are taken from their original papers, all of which use the D4RL-gym-v0 datasets. Since I find that the performance of CQL on D4RL-gym-v0 [1] differs greatly from that on D4RL-gym-v2 [2] on several datasets, I wonder whether the scores of the above baselines would also change greatly on D4RL-gym-v2, or whether you have evidence that this will not happen, since you compare these scores directly.

    opened by FineArtz 1
Inference code for "StylePeople: A Generative Model of Fullbody Human Avatars" paper. This code is for the part of the paper describing video-based avatars.

NeuralTextures This is repository with inference code for paper "StylePeople: A Generative Model of Fullbody Human Avatars" (CVPR21). This code is for

Visual Understanding Lab @ Samsung AI Center Moscow 18 Oct 6, 2022
This is the official source code for SLATE. We provide the code for the model, the training code, and a dataset loader for the 3D Shapes dataset. This code is implemented in Pytorch.

SLATE This is the official source code for SLATE. We provide the code for the model, the training code and a dataset loader for the 3D Shapes dataset.

Gautam Singh 66 Dec 26, 2022
Code for paper ECCV 2020 paper: Who Left the Dogs Out? 3D Animal Reconstruction with Expectation Maximization in the Loop.

Who Left the Dogs Out? Evaluation and demo code for our ECCV 2020 paper: Who Left the Dogs Out? 3D Animal Reconstruction with Expectation Maximization

Benjamin Biggs 29 Dec 28, 2022
TensorFlow code for the neural network presented in the paper: "Structural Language Models of Code" (ICML'2020)

SLM: Structural Language Models of Code This is an official implementation of the model described in: "Structural Language Models of Code" [PDF] To ap

null 73 Nov 6, 2022
Code for the prototype tool in our paper "CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning".

CoProtector Code for the prototype tool in our paper "CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning".

Zhensu Sun 1 Oct 26, 2021
Code to use Augmented Shapiro Wilks Stopping, as well as code for the paper "Statistically Signifigant Stopping of Neural Network Training"

This codebase is being actively maintained, please create and issue if you have issues using it Basics All data files are included under losses and ea

J K Terry 32 Nov 9, 2021
Code for our method RePRI for Few-Shot Segmentation. Paper at http://arxiv.org/abs/2012.06166

Region Proportion Regularized Inference (RePRI) for Few-Shot Segmentation In this repo, we provide the code for our paper : "Few-Shot Segmentation Wit

Malik Boudiaf 138 Dec 12, 2022
Code for ACM MM 2020 paper "NOH-NMS: Improving Pedestrian Detection by Nearby Objects Hallucination"

NOH-NMS: Improving Pedestrian Detection by Nearby Objects Hallucination The offical implementation for the "NOH-NMS: Improving Pedestrian Detection by

Tencent YouTu Research 64 Nov 11, 2022
Official TensorFlow code for the forthcoming paper

~ Efficient-CapsNet ~ Are you tired of over inflated and overused convolutional neural networks? You're right! It's time for CAPSULES :)

Vittorio Mazzia 203 Jan 8, 2023
This is the code for the paper "Contrastive Clustering" (AAAI 2021)

Contrastive Clustering (CC) This is the code for the paper "Contrastive Clustering" (AAAI 2021) Dependency python>=3.7 pytorch>=1.6.0 torchvision>=0.8

Yunfan Li 210 Dec 30, 2022
Code for the paper Learning the Predictability of the Future

Learning the Predictability of the Future Code from the paper Learning the Predictability of the Future. Website of the project in hyperfuture.cs.colu

Computer Vision Lab at Columbia University 139 Nov 18, 2022
PyTorch code for the paper: FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning

FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning This is the PyTorch implementation of our paper: FeatMatch: Feature-Based Augmentat

null 43 Nov 19, 2022
Code for the paper A Theoretical Analysis of the Repetition Problem in Text Generation

A Theoretical Analysis of the Repetition Problem in Text Generation This repository share the code for the paper "A Theoretical Analysis of the Repeti

Zihao Fu 37 Nov 21, 2022
Code for our ICASSP 2021 paper: SA-Net: Shuffle Attention for Deep Convolutional Neural Networks

SA-Net: Shuffle Attention for Deep Convolutional Neural Networks (paper) By Qing-Long Zhang and Yu-Bin Yang [State Key Laboratory for Novel Software T

Qing-Long Zhang 199 Jan 8, 2023
Open source repository for the code accompanying the paper 'Non-Rigid Neural Radiance Fields Reconstruction and Novel View Synthesis of a Deforming Scene from Monocular Video'.

Non-Rigid Neural Radiance Fields This is the official repository for the project "Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synt

Facebook Research 296 Dec 29, 2022
Code for the Shortformer model, from the paper by Ofir Press, Noah A. Smith and Mike Lewis.

Shortformer This repository contains the code and the final checkpoint of the Shortformer model. This file explains how to run our experiments on the

Ofir Press 138 Apr 15, 2022
PyTorch code for ICLR 2021 paper Unbiased Teacher for Semi-Supervised Object Detection

Unbiased Teacher for Semi-Supervised Object Detection This is the PyTorch implementation of our paper: Unbiased Teacher for Semi-Supervised Object Detection

Facebook Research 366 Dec 28, 2022
Official code for paper "Optimization for Oriented Object Detection via Representation Invariance Loss".

Optimization for Oriented Object Detection via Representation Invariance Loss By Qi Ming, Zhiqiang Zhou, Lingjuan Miao, Xue Yang, and Yunpeng Dong. Th

ming71 56 Nov 28, 2022
Code for our CVPR 2021 paper "MetaCam+DSCE"

Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification (CVPR'21) Introduction Code for our CVPR 2021

FlyingRoastDuck 59 Oct 31, 2022