Overview

Graphormer

By Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng*, Guolin Ke, Di He*, Yanming Shen and Tie-Yan Liu.

This repo is the official implementation of "Do Transformers Really Perform Bad for Graph Representation?".

Updates

06/10/2021

Initial commits:

  1. License files and example code.

Introduction

Graphormer was initially described in the arXiv preprint. It is a standard Transformer architecture augmented with several structural encodings that effectively encode the structural information of a graph into the model.
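
To make the idea concrete, here is a minimal sketch of the central mechanism (an illustration under assumed names and dimensions, not the repository's exact implementation): the shortest-path distance between every pair of nodes is embedded per attention head and added to the attention logits before the softmax.

import torch
import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    """Self-attention with a learned bias indexed by shortest-path distance."""

    def __init__(self, hidden_dim=80, num_heads=8, max_dist=20):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        self.qkv = nn.Linear(hidden_dim, 3 * hidden_dim)
        # one learned scalar per (distance, head) pair
        self.spatial_bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, x, spd):
        # x:   [batch, n_nodes, hidden_dim] node features
        # spd: [batch, n_nodes, n_nodes]    shortest-path distances (long tensor, clamped to max_dist)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        scores = scores + self.spatial_bias(spd).permute(0, 3, 1, 2)  # add structural bias
        attn = scores.softmax(dim=-1)
        return (attn @ v).transpose(1, 2).reshape(b, n, -1)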

Graphormer achieves strong performance on PCQM4M-LSC (0.1234 MAE on val), MolPCBA (31.39 AP(%) on test), MolHIV (80.51 AUC(%) on test) and ZINC (0.122 MAE on test), surpassing previous models by a large margin.

Main Results

Citing Graphormer

@article{ying2021transformers,
  title={Do Transformers Really Perform Bad for Graph Representation?},
  author={Ying, Chengxuan and Cai, Tianle and Luo, Shengjie and Zheng, Shuxin and Ke, Guolin and He, Di and Shen, Yanming and Liu, Tie-Yan},
  journal={arXiv preprint arXiv:2106.05234},
  year={2021}
}

Getting Started

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Comments
  • No response after running the example scripts

    I installed Graphormer following the guide:

    1. To create and activate a conda environment with Python 3.9:
    conda create -n graphormer python=3.9
    conda activate graphormer
    
    2. Run the following commands:
    git clone --recursive https://github.com/microsoft/Graphormer.git
    cd Graphormer
    bash install.sh
    
    3. To train a Graphormer-slim on ZINC-500K on a single GPU card:
    cd examples/property_prediction/
    bash zinc.sh
    

    I ran this command for half an hour and got no response (screenshot omitted). Occasionally it does respond, but the following error is reported:

    2022-03-17 17:00:38 | WARNING | root | The OGB package is out of date. Your version is 1.3.2, while the latest version is 1.3.3.
    Traceback (most recent call last):
      File "/home/linjiayi/anaconda3/envs/graphormer/bin/fairseq-train", line 8, in <module>
        sys.exit(cli_main())
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 512, in cli_main
        parser = options.get_training_parser()
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/options.py", line 38, in get_training_parser
        parser = get_parser("Trainer", default_task)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/options.py", line 234, in get_parser
        utils.import_user_module(usr_args)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/utils.py", line 497, in import_user_module
        import_tasks(tasks_path, f"{module_name}.tasks")
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/tasks/__init__.py", line 117, in import_tasks
        importlib.import_module(namespace + "." + task_name)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 850, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/home/linjiayi/Graphormer/graphormer/tasks/graph_prediction.py", line 25, in <module>
        from ..data.dataset import (
      File "/home/linjiayi/Graphormer/graphormer/data/dataset.py", line 9, in <module>
        from .wrapper import MyPygGraphPropPredDataset
      File "/home/linjiayi/Graphormer/graphormer/data/wrapper.py", line 6, in <module>
        from ogb.graphproppred import PygGraphPropPredDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/ogb/graphproppred/__init__.py", line 5, in <module>
        from .dataset_pyg import PygGraphPropPredDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/ogb/graphproppred/dataset_pyg.py", line 1, in <module>
        from torch_geometric.data import InMemoryDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/__init__.py", line 5, in <module>
        import torch_geometric.data
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
        from .data import Data
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/data/data.py", line 8, in <module>
        from torch_sparse import coalesce, SparseTensor
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/__init__.py", line 41, in <module>
        from .tensor import SparseTensor  # noqa
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/tensor.py", line 13, in <module>
        class SparseTensor(object):
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch/jit/_script.py", line 1128, in script
        _compile_and_register_class(obj, _rcb, qualified_name)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch/jit/_script.py", line 138, in _compile_and_register_class
        script_class = torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)
    RuntimeError:
    object has no attribute sparse_csr_tensor:
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/tensor.py", line 511
            value = torch.ones(self.nnz(), dtype=dtype, device=self.device())
    
        return torch.sparse_csr_tensor(rowptr, col, value, self.sizes())
               ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    

    I also tried to train a Graphormer-base on the PCQM4M dataset on multiple GPU cards using bash pcqv1.sh, and got no response either. Is there a problem with the dataset download? How can I solve this?
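
    One possible cause (an assumption on my part, not a confirmed diagnosis): the "object has no attribute sparse_csr_tensor" error typically appears when the installed torch_sparse wheel was built against a newer PyTorch than the one in the environment, since torch.sparse_csr_tensor only exists in more recent PyTorch releases. A quick check:

    import torch

    # torch.sparse_csr_tensor is only available in newer PyTorch releases; if this
    # prints False, the installed torch_sparse wheel (which references it) was
    # likely built against a different PyTorch version than the one installed here.
    print(torch.__version__)
    print(hasattr(torch, "sparse_csr_tensor"))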

    opened by skye95git 14
  • About reproducing PCBA result

    Hi authors, thanks for your great work. While trying to reproduce the No. 1 result on the ogbg-molpcba leaderboard, I couldn't find the checkpoints mentioned in your paper that were pretrained for the PCBA task, so I turned to the PCQM checkpoint you provided for the PCQM task. But when loading that checkpoint, an error occurred even after I changed the hidden dimension and FFN dimension from 1024 to 768:

    RuntimeError: Error(s) in loading state_dict for Graphormer:
        size mismatch for atom_encoder.weight: copying a param with shape torch.Size([4737, 768]) from checkpoint, the shape in current model is torch.Size([4609, 768]).
        size mismatch for edge_encoder.weight: copying a param with shape torch.Size([769, 32]) from checkpoint, the shape in current model is torch.Size([1537, 32]).
    

    Thus, may I ask two questions about the reproduction process:

    1. Can you provide the checkpoints that can be used to reproduce the PCBA result?
    2. Is there a reason why the code cannot load the previous PCQM checkpoint even after changing the FFN and hidden dimensions?

    Looking forward to your reply. Thank you!

    opened by Noisyntrain 10
  • Example script hanging without any output or any error hint.

    I have completed the installation with Conda (using the install.sh script), but could not successfully run the example script. When I run the pcqv2.sh script, it just hangs without any output or any error message. I'm not sure whether anybody else has faced the same problem. Can you give me some advice to resolve the issue?

    For more information, I'm using GCP with NVIDIA V100 GPUs and CUDA 11.1. Within the same environment, I have checked running the fairseq NMT example code and the Graphormer v1 code; neither produced any errors.

    opened by mswzeus 8
  • Changing entry.py for MisconfigurationException error

    Hi! This is Stella from Seoul National University; I'm getting a lot of help from your code. I have a question about entry.py, line 87. Originally it has metric = 'valid_' + get_dataset(dm.dataset_name)['metric'], but when I run the model, I face an error like this: 'pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='valid_mae') not found in the returned metrics: ['train_loss']. HINT: Did you call self.log('valid_mae', value) in the LightningModule?'

    So I changed line 87 to metric = 'train_loss' and it runs well.

    I'm quite afraid that I'm doing something wrong; is this the right way to modify the code? Here is some useful information about my project (see the sketch after the list below):

    1. task: regression
    2. input type : integer (originally continuous value, but discretized)
    3. target type : real value
    4. eval metric : rmse
    5. features from data.py
    •   'num_class': 1,
    •   'loss_fn': F.l1_loss,
    •   'metric': 'mae',
    •   'metric_mode': 'min',
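
    Here is a minimal, hedged sketch (a hypothetical module, not the repository's entry.py) of what the exception points at: ModelCheckpoint(monitor='valid_mae') only works if a metric with exactly that name is logged during validation; otherwise only 'train_loss' is available to monitor.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint

    class TinyRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(16, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = F.l1_loss(self.net(x).squeeze(-1), y)
            self.log("train_loss", loss)
            return loss

        def validation_step(self, batch, batch_idx):
            x, y = batch
            mae = F.l1_loss(self.net(x).squeeze(-1), y)
            # the logged name must match ModelCheckpoint(monitor=...)
            self.log("valid_mae", mae)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    checkpoint_callback = ModelCheckpoint(monitor="valid_mae", mode="min")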
      
    opened by Sangyoon-Bae 7
  • How can I do graph regression with graphormer?

    Hi! This is Stella from Seoul National University. I'd like to ask how I can implement a regression task with Graphormer. I adjusted the ogb module for our data and set num_class to -1 like other regression datasets, and I ran into a problem editing the model dimensions at model.py, lines 62-75. I think that 512*9+1 is something like a vocabulary size, calculated as 512 * (number of node feature categories) + 1. Is my guess right? You also said in issue #32 that it should be greater than the number of classes across all categories, so how should I set this number for a regression task? Maybe the number of graphs?
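
    For concreteness, my hedged reading of that line (an assumption based on the issue text, not verified against the exact source) is sketched below: 9 categorical node-feature fields, each bounded by 512 values, share one embedding table whose index 0 is reserved for padding, which gives the 512 * 9 + 1 vocabulary size.

    import torch.nn as nn

    # Hypothetical names; the constants follow the 512 * 9 + 1 reading above.
    NUM_FEATURE_FIELDS = 9      # e.g. the OGB atom feature fields
    MAX_VALUES_PER_FIELD = 512  # assumed upper bound per field
    hidden_dim = 80

    atom_encoder = nn.Embedding(
        MAX_VALUES_PER_FIELD * NUM_FEATURE_FIELDS + 1,  # +1 keeps index 0 for padding
        hidden_dim,
        padding_idx=0,
    )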

    Thank you!

    opened by Sangyoon-Bae 6
  • Cannot Reproduce Result of ZINC

    Hi, I trained some models on PCQM4M-LSC, ogbg-molhiv, and ZINC following the settings in the paper, and the results on PCQM4M-LSC and ogbg-molhiv match the paper. I also ran the ZINC experiment several times, but the MAE is always above 0.14 (with or without adding --intput_dropout_rate 0), whereas it should be about 0.12 according to the paper. Here is my command:

    python3 entry.py --dataset_name ZINC --hidden_dim 80 --ffn_dim 80 --num_heads 8 --tot_updates 400000 --batch_size 256 --warmup_updates 40000 --precision 16 --intput_dropout_rate 0 --gradient_clip_val 5 --num_workers 8 --gpus 1 --accelerator ddp --max_epochs 10000

    opened by b05901024 6
  • The weight of embedding padding_idx=0 is not zero

    https://github.com/microsoft/Graphormer/blob/740e6ff09a5de29d61def5ea6af7dfd04cee719e/graphormer/model.py#L20

    When you re-initialize the embedding weights, the weight at index 0 is also drawn from the normal distribution, so the padding vector in the feature input becomes non-zero. This seems wrong.
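
    A minimal sketch of the concern (illustrative only, not the repository's code): after re-initializing an nn.Embedding that uses padding_idx=0, the padding row has to be zeroed again, otherwise padded positions contribute non-zero features.

    import torch
    import torch.nn as nn

    emb = nn.Embedding(512 * 9 + 1, 80, padding_idx=0)
    nn.init.normal_(emb.weight, mean=0.0, std=0.02)  # overwrites the padding row too

    with torch.no_grad():
        emb.weight[emb.padding_idx].fill_(0.0)       # restore the all-zero padding vector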

    opened by lkfo415579 5
  • Reproduce Validate MAE

    Hi,

    Thanks for your interesting work. I have a problem regarding the evaluation. I downloaded your checkpoints from here, then ran the following commands as mentioned in the README (for the all_fold_seed0 checkpoint):

    conda activate graphormer-lsc
    export arch="--ffn_dim 768 --hidden_dim 768 --attention_dropout_rate 0.1 --dropout_rate 0.1 --n_layers 12 --peak_lr 2e-4 --edge_type multi_hop --multi_hop_max_dist 20 --weight_decay 0.0 --intput_dropout_rate 0.0"
    export ckpt_path="checkpoints"
    export ckpt_name="all_fold_seed0.ckpt"
    bash inference.sh
    

    The output log is:

    Global seed set to 1
     > PCQM4M-LSC loaded!
    {'num_class': 1, 'loss_fn': <function l1_loss at 0x7fc2381b3950>, 'metric': 'mae', 'metric_mode': 'min', 'evaluator': <ogb.lsc.pcqm4m.PCQM4MEvaluator object at 0x7fc1995d2110>, 'dataset': MyPygPCQM4MDataset2(3803453), 'max_node': 128}
     > dataset info ends
    total params: 47167841
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    Global seed set to 1
    initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    len(val_dataloader) 1487
    Validating: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1487/1487 [03:04<00:00,  7.42it/s]
    0.027769196778535843
    --------------------------------------------------------------------------------                                                                                                                           
    DATALOADER:0 VALIDATE RESULTS
    {'valid_mae': 0.027769196778535843}
    --------------------------------------------------------------------------------
    [{'valid_mae': 0.027769196778535843}]
    

    I assumed I would get results near the "Validate MAE" column of Table 1, but the value differs from that. Am I missing something?

    Thanks for your help.

    opened by alirezamshi-zz 5
  • the evaluation on the PCBA dataset seems wrong

    Hi authors, thank you for your great work! I noticed that the result of 'the mean of the AP over the 4 cards' differs from the result of 'gather all predictions and labels from the different cards and evaluate once', and the latter tends to be lower than the former. It seems that Graphormer uses the former method during evaluation. May I ask whether you have the valid and test results of the Graphormer model evaluated on the whole dataset at once? Thank you!
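
    A small synthetic illustration of why the two numbers can differ (my own example, not the repository's evaluation code): average precision is not decomposable across shards, so the mean of per-card APs generally does not equal the AP computed on the gathered predictions.

    import numpy as np
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=400)
    y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=400), 0.0, 1.0)

    shards = np.array_split(np.arange(400), 4)  # pretend these are 4 GPU cards
    per_card_ap = [average_precision_score(y_true[s], y_score[s]) for s in shards]

    print("mean of per-card AP:", np.mean(per_card_ap))
    print("AP on gathered predictions:", average_precision_score(y_true, y_score))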

    opened by Noisyntrain 4
  • Padding in case of different number of nodes in batch

    Hi,

    I have a few questions about the node padding.

    Firstly, is my assumption correct that adding -inf values in "pad_attn_bias_unsqueeze" serves the same purpose as the attention_mask in BERT, so that there is no attention to padded nodes?

    If this is correct, why do you add +1 to x in the padding functions? Since attention is restricted from attending to those positions anyway, the padded nodes could hold any values, so 0 could still serve as a regular feature value.

    I talk about the padding like in

    def pad_2d_unsqueeze(x, padlen):
        x = x + 1  # pad id = 0 -> THIS LINE
        xlen, xdim = x.size()
        if xlen < padlen:
            new_x = x.new_zeros([padlen, xdim], dtype=x.dtype)
            new_x[:xlen, :] = x
            x = new_x
        return x.unsqueeze(0)
    

    which is used to pad x.
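
    My hedged understanding of the +1 (an assumption, not the authors' stated rationale): the node features are later fed through embedding layers with padding_idx=0, so shifting every real value by +1 keeps index 0 exclusively for padding; a raw feature value of 0 would otherwise collide with the padding embedding.

    import torch

    x = torch.tensor([[0, 3], [2, 1]])         # raw categorical node features
    shifted = x + 1                            # index 0 now means "padding" only
    padded = torch.zeros(4, 2, dtype=x.dtype)  # pad the graph up to 4 nodes
    padded[: x.size(0)] = shifted
    print(padded)
    # tensor([[1, 4],
    #         [3, 2],
    #         [0, 0],
    #         [0, 0]])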

    opened by ChantalMP 4
  • Unable to reproduce results

    I'm trying to reproduce the reported results on OGB and ZINC datasets, but I failed to achieve the performance.

    I first directly ran the provided script hiv.sh to train a Graphormer on the MolHiv dataset without pre-training; the final AUC is 73.10%. Then I followed the instructions and hyper-parameter settings in the paper to do pre-training: I pre-trained on PCQM4M for 20 epochs (until the loss converged) and fine-tuned the model on MolHiv for 8 epochs (as specified in the script). The best result turned out to be 76.25%.

    Despite some improvement, the final AUC is not as high as reported in the paper. I also tried to reproduce the result on ZINC via the example script, but the best MAE is 0.1576, which is worse than the 0.122 reported in the paper.

    I'm wondering what I might be missing that results in the poor performance. Could you share more reproduction details? My Python environment is listed below:

    pytorch==1.9.0
    pytorch-geometric==1.7.2
    pytorch-scatter==2.0.8
    pytorch-sparse==0.6.11
    pytorch-lightning==1.3.0
    ogb==1.3.1
    cudatoolkit==11.1
    

    I'd really appreciate it if someone could share their reproduced results and give me some suggestions.

    opened by peihaowang 4
  • Could you please elaborate on preparing the customized dataset based on my self-curated data?

    I've read the instructions for preparing a customized dataset, and I saw that the example customized dataset is "QM9" from dgl. This dataset object extends the dgl.data.DGLDataset class, so does that mean my customized dataset must also extend DGLDataset and override its parent methods? I think that might be very tricky. Could you please give more details on how to prepare a customized dataset based on my self-curated data?
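
    For reference, the documented QM9 example boils down to registering a function that returns a dataset plus split indices, roughly like the sketch below (a hedged paraphrase of the docs; the exact helper names and return keys should be verified against graphormer/data and the documentation):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from dgl.data import QM9
    from graphormer.data import register_dataset

    @register_dataset("customized_qm9_dataset")
    def create_customized_dataset():
        dataset = QM9(label_keys=["mu"])
        num_graphs = len(dataset)

        # customized split: roughly 10% test, then 20% of the remainder for validation
        train_valid_idx, test_idx = train_test_split(
            np.arange(num_graphs), test_size=num_graphs // 10, random_state=0
        )
        train_idx, valid_idx = train_test_split(
            train_valid_idx, test_size=num_graphs // 5, random_state=0
        )
        return {
            "dataset": dataset,
            "train_idx": train_idx,
            "valid_idx": valid_idx,
            "test_idx": test_idx,
            "source": "dgl",
        }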

    opened by yanhailu 0
  • Errors when using algos.pyx in my own python file

    I want to use algos.pyx in my own file, but it cannot be imported because it cannot be found: my file does not recognize the .pyx module. Can it be used in other files? If so, could you tell me how? Thanks a lot!
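
    As general Cython background (not a Graphormer-specific answer): a .pyx file must be compiled into an extension module before Python can import it. One quick way during development is pyximport; the import path below is an assumption about where algos.pyx lives in the checkout.

    import numpy as np
    import pyximport

    # Build .pyx modules on first import; NumPy headers are included because
    # .pyx files that cimport numpy need them at compile time.
    pyximport.install(setup_args={"include_dirs": np.get_include()})

    from graphormer.data import algos  # assumed module path within the repo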

    opened by starry-y 0
  • fairseq installation errors

    Hi,

    When running install.sh, I get an error in the fairseq part (screenshot omitted).

    Have you seen this before? Could this be due to different pip versions supporting different --use-feature options? I am currently running pip 22.3.1. Thank you.

    opened by yashjakhotiya 0
  • Error with customized dataset

    Hello everyone, I wanted to train on a customized dataset. Following the instructions at https://graphormer.readthedocs.io/en/latest/Datasets.html#id5, I added my code before the create_customized_dataset function to build a dgl dataset class. Then I wrote a shell file to run the training process, but I got a ModuleNotFoundError when I started training. Here is the error information:

    Traceback (most recent call last):
      File "/root/anaconda3/envs/graphormer/bin/fairseq-train", line 8, in <module>
        sys.exit(cli_main())
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 528, in cli_main
        distributed_utils.call_main(cfg, main)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/distributed/utils.py", line 369, in call_main
        main(cfg, **kwargs)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 85, in main
        task = tasks.setup_task(cfg.task)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/tasks/__init__.py", line 46, in setup_task
        return task.setup_task(cfg, **kwargs)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 179, in setup_task
        return cls(cfg)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 142, in __init__
        self.__import_user_defined_datasets(cfg.user_data_dir)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 165, in __import_user_defined_datasets
        importlib.import_module(module_name)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'customized_dataset'
    

    Here is the shell file:

    #!/usr/bin/env bash
    
    CUDA_VISIBLE_DEVICES=0 fairseq-train \
    --user-dir ../../graphormer \
    --user-data-dir /workspace/Graphormer/examples/customized_dataset \
    --num-workers 16 \
    --ddp-backend=legacy_ddp \
    --dataset-name MonomerTg_dataset \
    --task graph_prediction \
    --criterion l1_loss \
    --arch graphormer_slim \
    --num-classes 1 \
    --attention-dropout 0.1 --act-dropout 0.1 --dropout 0.0 \
    --optimizer adam --adam-betas '(0.9, 0.999)' --adam-eps 1e-8 --clip-norm 5.0 --weight-decay 0.01 \
    --lr-scheduler polynomial_decay --power 1 --warmup-updates 60000 --total-num-update 400000 \
    --lr 2e-4 --end-learning-rate 1e-9 \
    --batch-size 64 \
    --fp16 \
    --data-buffer-size 20 \
    --encoder-layers 12 \
    --encoder-embed-dim 80 \
    --encoder-ffn-embed-dim 80 \
    --encoder-attention-heads 8 \
    --max-epoch 10000 \
    --save-dir ./ckpts
    

    Does anyone know why this happens and perhaps a solution for it? Thank you very much!

    opened by ZhanggaoYuan16 2
  • windows cannot run bash

    Using Git Bash under Windows, it reports errors:

    $ bash pcqv1.sh
    Your OS does not support multiprocessing based on fork, please use num_workers=0
    2022-10-17 19:22:19 | WARNING | root | The OGB package is out of date. Your version is 1.3.2, while the latest version is 1.3.4.
    Using backend: pytorch
    Traceback (most recent call last):
      File "D:\python\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\python\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\python\Scripts\fairseq-train.exe\__main__.py", line 7, in <module>
      File "D:\python\lib\site-packages\fairseq_cli\train.py", line 541, in cli_main
        parser = options.get_training_parser()
      File "D:\python\lib\site-packages\fairseq\options.py", line 38, in get_training_parser
        parser = get_parser("Trainer", default_task)
      File "D:\python\lib\site-packages\fairseq\options.py", line 234, in get_parser
        utils.import_user_module(usr_args)
      File "D:\python\lib\site-packages\fairseq\utils.py", line 497, in import_user_module
        import_tasks(tasks_path, f"{module_name}.tasks")
      File "D:\python\lib\site-packages\fairseq\tasks\__init__.py", line 117, in import_tasks
        importlib.import_module(namespace + "." + task_name)
      File "D:\python\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "D:\PycharmProjects\Graphormer\graphormer\tasks\is2re.py", line 25, in <module>
        class LMDBDataset:
      File "D:\PycharmProjects\Graphormer\graphormer\tasks\is2re.py", line 43, in LMDBDataset
        def __getitem__(self, idx: int) -> dict[str, Union[Tensor, float]]:
    TypeError: 'type' object is not subscriptable
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "D:\python\lib\site-packages\colorama\ansitowin32.py", line 59, in closed
        return stream.closed
    ValueError: underlying buffer has been detached

    enhancement 
    opened by yingtaoluo 0