Overview

Graphormer

By Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng*, Guolin Ke, Di He*, Yanming Shen and Tie-Yan Liu.

This repo is the official implementation of "Do Transformers Really Perform Bad for Graph Representation?".

News

08/03/2021

  1. Code and scripts are released.

06/16/2021

  1. Graphormer won 1st place in the quantum prediction track of the Open Graph Benchmark Large-Scale Challenge (KDD CUP 2021). [Competition Description] [Competition Result] [Technical Report] [Blog (English)] [Blog (Chinese)]

Introduction

Graphormer, initially described in the arXiv paper, is a standard Transformer architecture equipped with several structural encodings that effectively encode the structural information of a graph into the model.
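
To make the idea concrete, here is a minimal sketch of how a structural encoding can enter standard self-attention as an additive bias. This is an illustration only: the module, its arguments, and the shortest-path-distance (SPD) input are our assumptions, not Graphormer's actual implementation.

import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    """Standard multi-head attention plus a learned per-head spatial bias."""

    def __init__(self, dim, num_heads, max_dist):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # one learnable bias value per (SPD bucket, attention head)
        self.spatial_bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, x, spd):
        # x: [B, N, dim] node features; spd: [B, N, N] integer shortest-path distances
        B, N, _ = x.shape
        bias = self.spatial_bias(spd)                                  # [B, N, N, heads]
        bias = bias.permute(0, 3, 1, 2).reshape(B * self.attn.num_heads, N, N)
        out, _ = self.attn(x, x, x, attn_mask=bias)                    # float mask is added to logits
        return out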

Graphormer achieves strong performance on PCQM4M-LSC (0.1234 MAE on val), MolPCBA (31.39 AP(%) on test), MolHIV (80.51 AUC(%) on test) and ZINC (0.122 MAE on test), surpassing previous models by a large margin.

Main Results

PCQM4M-LSC

Method            #params  train MAE  valid MAE
GCN               2.0M     0.1318     0.1691
GIN               3.8M     0.1203     0.1537
GCN-VN            4.9M     0.1225     0.1485
GIN-VN            6.7M     0.1150     0.1395
Graphormer-Small  12.5M    0.0778     0.1264
Graphormer        47.1M    0.0582     0.1234

OGBG-MolPCBA

Method             #params  test AP (%)
DeeperGCN-VN+FLAG  5.6M     28.42
DGN                6.7M     28.85
GINE-VN            6.1M     29.17
PHC-GNN            1.7M     29.47
GINE-APPNP         6.1M     29.79
Graphormer         119.5M   31.39

OGBG-MolHIV

Method          #params  test AUC (%)
GCN-GraphNorm   526K     78.83
PNA             326K     79.05
PHC-GNN         111K     79.34
DeeperGCN-FLAG  532K     79.42
DGN             114K     79.70
Graphormer      47.0M    80.51

ZINC-500K

Method           #params  test MAE
GIN              509.5K   0.526
GraphSage        505.3K   0.398
GAT              531.3K   0.384
GCN              505.1K   0.367
GT               588.9K   0.226
GatedGCN-PE      505.0K   0.214
MPNN (sum)       480.8K   0.145
PNA              387.2K   0.142
SAN              508.6K   0.139
Graphormer-Slim  489.3K   0.122

Requirements and Installation

Setup with Conda

# create a new environment
conda create --name graphormer python=3.7
conda activate graphormer
# install requirements
pip install rdkit-pypi cython
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-geometric==1.6.3 ogb==1.3.1 pytorch-lightning==1.3.1 tqdm torch-sparse==0.6.9 torch-scatter==2.0.6 -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
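
As a quick sanity check (not part of the official guide), you can verify that the pinned versions resolved consistently and that CUDA is visible:

python - <<'EOF'
import torch, torch_geometric, torch_sparse, ogb, pytorch_lightning as pl
print("torch", torch.__version__, "cuda", torch.version.cuda, torch.cuda.is_available())
print("pyg", torch_geometric.__version__, "sparse", torch_sparse.__version__)
print("ogb", ogb.__version__, "lightning", pl.__version__)
EOF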

Citation

Please kindly cite this paper if you use the code:

@article{ying2021transformers,
  title={Do Transformers Really Perform Bad for Graph Representation?},
  author={Ying, Chengxuan and Cai, Tianle and Luo, Shengjie and Zheng, Shuxin and Ke, Guolin and He, Di and Shen, Yanming and Liu, Tie-Yan},
  journal={arXiv preprint arXiv:2106.05234},
  year={2021}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Comments
  • No response after running the example scripts

    I installed Graphormer following the guide:

    1. Create and activate a conda environment with Python 3.9:
    conda create -n graphormer python=3.9
    conda activate graphormer
    
    2. Run the following commands:
    git clone --recursive https://github.com/microsoft/Graphormer.git
    cd Graphormer
    bash install.sh
    
    3. Train a Graphormer-slim on ZINC-500K on a single GPU card:
    cd examples/property_prediction/
    bash zinc.sh
    

    I ran this command for half an hour and got no response. Occasionally it does react, but then the following error is reported:

    2022-03-17 17:00:38 | WARNING | root | The OGB package is out of date. Your version is 1.3.2, while the latest version is 1.3.3.
    Traceback (most recent call last):
      File "/home/linjiayi/anaconda3/envs/graphormer/bin/fairseq-train", line 8, in <module>
        sys.exit(cli_main())
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 512, in cli_main
        parser = options.get_training_parser()
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/options.py", line 38, in get_training_parser
        parser = get_parser("Trainer", default_task)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/options.py", line 234, in get_parser
        utils.import_user_module(usr_args)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/utils.py", line 497, in import_user_module
        import_tasks(tasks_path, f"{module_name}.tasks")
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/tasks/__init__.py", line 117, in import_tasks
        importlib.import_module(namespace + "." + task_name)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 850, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/home/linjiayi/Graphormer/graphormer/tasks/graph_prediction.py", line 25, in <module>
        from ..data.dataset import (
      File "/home/linjiayi/Graphormer/graphormer/data/dataset.py", line 9, in <module>
        from .wrapper import MyPygGraphPropPredDataset
      File "/home/linjiayi/Graphormer/graphormer/data/wrapper.py", line 6, in <module>
        from ogb.graphproppred import PygGraphPropPredDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/ogb/graphproppred/__init__.py", line 5, in <module>
        from .dataset_pyg import PygGraphPropPredDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/ogb/graphproppred/dataset_pyg.py", line 1, in <module>
        from torch_geometric.data import InMemoryDataset
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/__init__.py", line 5, in <module>
        import torch_geometric.data
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
        from .data import Data
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_geometric/data/data.py", line 8, in <module>
        from torch_sparse import coalesce, SparseTensor
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/__init__.py", line 41, in <module>
        from .tensor import SparseTensor  # noqa
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/tensor.py", line 13, in <module>
        class SparseTensor(object):
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch/jit/_script.py", line 1128, in script
        _compile_and_register_class(obj, _rcb, qualified_name)
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch/jit/_script.py", line 138, in _compile_and_register_class
        script_class = torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)
    RuntimeError:
    object has no attribute sparse_csr_tensor:
      File "/home/linjiayi/anaconda3/envs/graphormer/lib/python3.9/site-packages/torch_sparse/tensor.py", line 511
                value = torch.ones(self.nnz(), dtype=dtype, device=self.device())

            return torch.sparse_csr_tensor(rowptr, col, value, self.sizes())
                   ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    

    I also tried to train a Graphormer-base on the PCQM4M dataset on multiple GPU cards using bash pcqv1.sh, with no response either. Is there a problem with the dataset download? How can I solve this?

    opened by skye95git 14
  • About reproducing PCBA result

    Hi authors, thanks for your great work. While trying to reproduce the No. 1 result on the ogb-pcba leaderboard, I could not find the checkpoints mentioned in your paper that were pretrained for the PCBA task, so I turned to the PCQM checkpoint you provided for the PCQM task. But while loading that checkpoint, an error occurred even after I changed the hidden dimension and FFN dimension from 1024 to 768:

        RuntimeError: Error(s) in loading state_dict for Graphormer:

        size mismatch for atom_encoder.weight: copying a param with shape torch.Size([4737, 768]) from checkpoint, the shape in current model is torch.Size([4609, 768]).

        size mismatch for edge_encoder.weight: copying a param with shape torch.Size([769, 32]) from checkpoint, the shape in current model is torch.Size([1537, 32]).
    

    Thus, may I ask two questions about the reproduction process:

    1. Can you provide the checkpoints needed to reproduce the PCBA result?
    2. Is there a reason why the code cannot load the previous PCQM checkpoint even after changing the FFN and hidden dimensions?

    Looking forward to your reply. Thank you!
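
    In the meantime, one possible stopgap is to load only the shape-compatible parameters. This is a sketch, not the authors' procedure; it assumes `model` is an instantiated Graphormer, the path is illustrative, and the checkpoint may nest its weights under a "model" key as fairseq checkpoints often do:

    import torch

    ckpt = torch.load("pcqm4m_graphormer.ckpt", map_location="cpu")  # illustrative path
    state = ckpt.get("model", ckpt)
    model_state = model.state_dict()
    # keep only parameters whose names and shapes match the current model
    compatible = {k: v for k, v in state.items()
                  if k in model_state and v.shape == model_state[k].shape}
    model.load_state_dict(compatible, strict=False)
    skipped = sorted(set(state) - set(compatible))
    print(f"skipped {len(skipped)} mismatched parameters, e.g. {skipped[:2]}")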

    opened by Noisyntrain 10
  • Example script hanging without any output or any error hint.

    I have completed the installation with Conda (using the install.sh script), but could not successfully run the example script. When I run the pcqv2.sh script, it just hangs without any output or error message. I'm not sure whether anybody else has faced the same problem. Can you give me some advice on resolving the issue?

    For more information: I'm using GCP with NVIDIA V100 GPUs and CUDA 11.1. Within the same environment, I have checked that the fairseq NMT example code and the Graphormer v1 code both run without errors.

    opened by mswzeus 8
  • Changing entry.py for MisconfigurationException error

    Hi! This is Stella from Seoul National University; I'm getting a lot of help from your code. I have a question about entry.py line 87. Originally it has metric = 'valid_' + get_dataset(dm.dataset_name)['metric'], but when I run the model, I face an error like this: 'pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='valid_mae') not found in the returned metrics: ['train_loss']. HINT: Did you call self.log('valid_mae', value) in the LightningModule?'

    So I changed line 87 to metric = 'train_loss' and it runs well.
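
    A hedged guess at the root cause (not a confirmed fix): ModelCheckpoint can only monitor metrics that the LightningModule actually logs, so monitoring 'train_loss' works but stops checkpointing on validation quality. Logging the validation metric under the monitored name should also resolve it, along these lines:

    import torch
    import torch.nn.functional as F
    import pytorch_lightning as pl

    class GraphormerModule(pl.LightningModule):  # illustrative subclass name
        def validation_epoch_end(self, outputs):
            # assumes validation_step returned dicts holding "y_pred" / "y_true"
            y_pred = torch.cat([o["y_pred"] for o in outputs])
            y_true = torch.cat([o["y_true"] for o in outputs])
            self.log("valid_mae", F.l1_loss(y_pred, y_true), sync_dist=True)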

    I'm quite afraid that I'm doing something wrong. Is this the right way to modify the code? Here is some useful information about my project:

    1. task: regression
    2. input type : integer (originally continuous value, but discretized)
    3. target type : real value
    4. eval metric : rmse
    5. features from data.py:
       • 'num_class': 1,
       • 'loss_fn': F.l1_loss,
       • 'metric': 'mae',
       • 'metric_mode': 'min',
      
    opened by Sangyoon-Bae 7
  • How can I do graph regression with graphormer?

    Hi! This is Stella from Seoul National University. I'd like to ask how I can implement a regression task with Graphormer. I adapted the ogb module for our data and set num_class to -1 like the other regression datasets, and I ran into a problem editing the model dimensions in model.py, lines 62-75. I think that 512*9+1 is something like a vocabulary size, calculated as 512 * (number of categories of node features) + 1. Is my guess right? You said in issue #32 that it should be greater than the number of classes across all categories; how should I set this number for a regression task? Maybe the number of graphs?
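
    For reference, one reading of that expression (an assumption following the guess above, not a confirmed answer): the embedding table must be large enough that every (feature column, category) pair gets a distinct shifted id, plus one slot reserved for padding, so its size depends on the category counts rather than on the number of graphs:

    import torch.nn as nn

    num_feature_columns = 9   # OGB molecule graphs expose 9 categorical node features
    max_categories = 512      # per-column capacity (an assumption in this sketch)
    hidden_dim = 80
    atom_encoder = nn.Embedding(
        max_categories * num_feature_columns + 1,  # the +1 reserves index 0 for padding
        hidden_dim,
        padding_idx=0,
    )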

    Thank you!

    opened by Sangyoon-Bae 6
  • Cannot Reproduce Result of ZINC

    Hi, I trained some models on PCQM4M-LSC, ogbg-molhiv, and ZINC following the settings in the paper; the results on PCQM4M-LSC and ogbg-molhiv match the paper. I also ran the ZINC experiment several times, but the MAE is always above 0.14 (with or without adding --intput_dropout_rate 0), whereas it should be about 0.12 according to the paper. Here is my command:

    python3 entry.py --dataset_name ZINC --hidden_dim 80 --ffn_dim 80 --num_heads 8 --tot_updates 400000 --batch_size 256 --warmup_updates 40000 --precision 16 --intput_dropout_rate 0 --gradient_clip_val 5 --num_workers 8 --gpus 1 --accelerator ddp --max_epochs 10000

    opened by b05901024 6
  • The weight of embedding padding_idx=0 is not zero

    https://github.com/microsoft/Graphormer/blob/740e6ff09a5de29d61def5ea6af7dfd04cee719e/graphormer/model.py#L20

    When you re-initialize the embedding weights, the weight at index 0 is also initialized from a normal distribution, so the padding vectors in the feature input become non-zero. This looks wrong.
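
    A minimal sketch of the kind of fix this suggests (an assumption, not the authors' patch): re-zero the padding row after any re-initialization.

    import torch
    import torch.nn as nn

    emb = nn.Embedding(512 * 9 + 1, 80, padding_idx=0)
    nn.init.normal_(emb.weight, mean=0.0, std=0.02)   # re-initialization clobbers the pad row
    with torch.no_grad():
        emb.weight[emb.padding_idx].fill_(0)          # restore the all-zero padding vector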

    opened by lkfo415579 5
  • Reproduce Validate MAE

    Hi,

    Thanks for your interesting work. I have a problem regarding the evaluation. I downloaded your checkpoints from here, then ran the following command as mentioned in the README (for the all_fold_seed0 checkpoint):

    conda activate graphormer-lsc
    export arch="--ffn_dim 768 --hidden_dim 768 --attention_dropout_rate 0.1 --dropout_rate 0.1 --n_layers 12 --peak_lr 2e-4 --edge_type multi_hop --multi_hop_max_dist 20 --weight_decay 0.0 --intput_dropout_rate 0.0"
    export ckpt_path="checkpoints"
    export ckpt_name="all_fold_seed0.ckpt"
    bash inference.sh
    

    The output log is:

    Global seed set to 1
     > PCQM4M-LSC loaded!
    {'num_class': 1, 'loss_fn': <function l1_loss at 0x7fc2381b3950>, 'metric': 'mae', 'metric_mode': 'min', 'evaluator': <ogb.lsc.pcqm4m.PCQM4MEvaluator object at 0x7fc1995d2110>, 'dataset': MyPygPCQM4MDataset2(3803453), 'max_node': 128}
     > dataset info ends
    total params: 47167841
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    Global seed set to 1
    initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    len(val_dataloader) 1487
    Validating: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1487/1487 [03:04<00:00,  7.42it/s]
    0.027769196778535843
    --------------------------------------------------------------------------------                                                                                                                           
    DATALOADER:0 VALIDATE RESULTS
    {'valid_mae': 0.027769196778535843}
    --------------------------------------------------------------------------------
    [{'valid_mae': 0.027769196778535843}]
    

    I assumed I would get results near the "validate MAE" column of Table 1, but the result is different. Am I missing something?

    Thanks for your help.

    opened by alirezamshi-zz 5
  • the evaluation on the PCBA dataset seems wrong

    Hi authors, thank you for your great work! I noticed that the result of "the mean of the per-card APs over 4 cards" is different from the result of "gathering all predictions and labels from the different cards and evaluating once", and the latter method's result tends to be lower than the former. It seems that when doing evaluation, Graphormer uses the former method. May I know whether you have the valid and test results of the Graphormer model when evaluating the whole dataset at once? Thank you!
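
    For concreteness, here is a sketch of the "gather everything and evaluate once" variant (an illustration, not Graphormer's code; it assumes each rank holds an equally sized shard, since dist.all_gather requires matching tensor shapes):

    import torch
    import torch.distributed as dist

    def gather_and_evaluate(evaluator, y_pred, y_true):
        # collect predictions/labels from every rank, then call the OGB
        # evaluator once on the concatenation instead of averaging per-card APs
        world_size = dist.get_world_size()
        preds = [torch.zeros_like(y_pred) for _ in range(world_size)]
        trues = [torch.zeros_like(y_true) for _ in range(world_size)]
        dist.all_gather(preds, y_pred)
        dist.all_gather(trues, y_true)
        return evaluator.eval({"y_pred": torch.cat(preds).cpu().numpy(),
                               "y_true": torch.cat(trues).cpu().numpy()})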

    opened by Noisyntrain 4
  • Padding in case of different number of nodes in batch

    Hi,

    I have a few questions about the node padding.

    Firstly, is my assumption correct that adding -inf values in "pad_attn_bias_unsqueeze" serves the same purpose as the attention_mask in BERT, i.e., that there is no attention to padded nodes?

    If this is correct, why do you add +1 to x in the padding functions? As attention is restricted from attending there anyway, the padded nodes could hold any value, so 0 could still be used as a regular feature value.

    I am referring to padding as in

    def pad_2d_unsqueeze(x, padlen):
        x = x + 1  # pad id = 0 -> THIS LINE
        xlen, xdim = x.size()
        if xlen < padlen:
            new_x = x.new_zeros([padlen, xdim], dtype=x.dtype)
            new_x[:xlen, :] = x
            x = new_x
        return x.unsqueeze(0)
    

    which is used to pad x.
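
    One reading of the +1 (an assumption, not an authoritative answer): real feature ids are shifted up by one so that index 0 is reserved for padding, which pairs with an nn.Embedding(..., padding_idx=0) so padded nodes embed to an exact zero vector; it matters for the node embeddings rather than for attention. Using the pad_2d_unsqueeze quoted above:

    import torch

    x = torch.tensor([[3, 7], [1, 2], [0, 5]])  # raw ids; 0 is a legitimate feature value here
    padded = pad_2d_unsqueeze(x, padlen=5)      # real ids become 4,8 / 2,3 / 1,6; pad rows stay 0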

    opened by ChantalMP 4
  • Unable to reproduce results

    I'm trying to reproduce the reported results on the OGB and ZINC datasets, but I have failed to achieve the reported performance.

    I first directly ran the provided script hiv.sh to train a Graphormer on the MolHiv dataset without pretraining. The final AUC is 73.10%. Then I followed the instructions and hyper-parameter settings in the paper to do pre-training: I pre-trained on PCQM4M for 20 epochs (until the loss converged) and fine-tuned the model on MolHiv for 8 epochs (as specified in the script). The best result turned out to be 76.25%.

    Despite some improvement, the final AUC is not as high as reported in the paper. I also tried to reproduce the ZINC result via the example script, but the best MAE is 0.1576, which is worse than the 0.122 reported in the paper.

    I'm wondering what I'm likely missing that results in the poor performance. Can I learn more reproduction details? My Python environment is as follows:

    pytorch==1.9.0
    pytorch-geometric==1.7.2
    pytorch-scatter==2.0.8
    pytorch-sparse==0.6.11
    pytorch-lightning==1.3.0
    ogb==1.3.1
    cudatoolkit==11.1
    

    I'd really appreciate it if someone could share their reproduced results and give me some suggestions.

    opened by peihaowang 4
  • Very slow training

    I am training Graphormer-slim on a custom dataset with 5K graphs and the loss is coming down nicely. However, GPU utilization is close to zero while the 16 CPU processes are very busy, which means the CPU is the bottleneck and I could be training much faster.

    Is there any way to make the CPU side faster?

    Details: graphs in DGL format, 50-100 nodes per graph, 1-3 edges per node.

    opened by decioren 0
  • Trained weights of Graphormer v1.0 on PCQM4M v1

    I have recently been working on Graphormer-related projects. I searched GitHub but could not find the pretrained model for Graphormer v1.0 (https://github.com/microsoft/Graphormer/tree/v1.0), and retraining takes too long, so I would like to ask you for the trained weights of Graphormer v1.0 on PCQM4M v1.

    opened by xiaohua990109 0
  • Could you please elaborate on preparing the customized dataset based on my self-curated data?

    I've read the instructions for preparing a customized dataset, and I saw that the example customized dataset is "QM9" from dgl. This dataset object extends the dgl.data.DGLDataset class, so does that mean my customized dataset must extend DGLDataset and override the parent methods? That seems very tricky. Could you please give more details on preparing a customized dataset from my self-curated data?
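
    For reference, the documented pattern looks roughly like the following sketch (adapted from the Graphormer docs at https://graphormer.readthedocs.io/en/latest/Datasets.html; the split logic here is illustrative). The dataset is wrapped by a registered factory function rather than requiring a particular base class beyond what dgl provides:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from dgl.data import QM9
    from graphormer.data import register_dataset

    @register_dataset("customized_qm9_dataset")
    def create_customized_dataset():
        dataset = QM9(label_keys=["mu"])
        num_graphs = len(dataset)
        # illustrative train/valid/test split
        train_valid_idx, test_idx = train_test_split(
            np.arange(num_graphs), test_size=num_graphs // 10, random_state=0)
        train_idx, valid_idx = train_test_split(
            train_valid_idx, test_size=num_graphs // 5, random_state=0)
        return {"dataset": dataset, "train_idx": train_idx,
                "valid_idx": valid_idx, "test_idx": test_idx, "source": "dgl"}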

    opened by yanhailu 0
  • Errors when using algos.pyx in my own python file

    I want to use algos.pyx in my own file, but it cannot be imported because the module cannot be found; my file does not recognize the .pyx file. Can it be used from other files, and if so, could you tell me how? Thanks a lot!
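
    One way this can typically be done (a suggestion, not project guidance) is to compile the Cython module before importing it, for example on the fly with pyximport; the numpy include dir is needed for .pyx files that cimport numpy, as Graphormer's algos.pyx does:

    import numpy as np
    import pyximport

    # compile .pyx modules transparently on first import
    pyximport.install(setup_args={"include_dirs": np.get_include()})

    from graphormer.data import algos  # adjust the package path to wherever algos.pyx lives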

    opened by starry-y 0
  • fairseq installation errors

    Hi,

    When running install.sh, I get an error in the fairseq part (screenshot omitted).

    Have you seen this before? Could it be due to different pip versions supporting different --use-feature options? I am currently running pip 22.3.1. Thank you.

    opened by yashjakhotiya 0
  • Error with customized dataset

    Hello everyone, I wanted to train on a customized dataset. Following the instructions at https://graphormer.readthedocs.io/en/latest/Datasets.html#id5, I added my code before the create_customized_dataset function to build a dgl dataset class. Then I wrote a shell script to run the training process, but I got a ModuleNotFoundError when I started training. Here is the error information:

    Traceback (most recent call last):
      File "/root/anaconda3/envs/graphormer/bin/fairseq-train", line 8, in <module>
        sys.exit(cli_main())
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 528, in cli_main
        distributed_utils.call_main(cfg, main)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/distributed/utils.py", line 369, in call_main
        main(cfg, **kwargs)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq_cli/train.py", line 85, in main
        task = tasks.setup_task(cfg.task)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/site-packages/fairseq/tasks/__init__.py", line 46, in setup_task
        return task.setup_task(cfg, **kwargs)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 179, in setup_task
        return cls(cfg)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 142, in __init__
        self.__import_user_defined_datasets(cfg.user_data_dir)
      File "/workspace/Graphormer/graphormer/tasks/graph_prediction.py", line 165, in __import_user_defined_datasets
        importlib.import_module(module_name)
      File "/root/anaconda3/envs/graphormer/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'customized_dataset'
    

    Here is the shell script:

    #!/usr/bin/env bash
    
    CUDA_VISIBLE_DEVICES=0 fairseq-train \
    --user-dir ../../graphormer \
    --user-data-dir /workspace/Graphormer/examples/customized_dataset \
    --num-workers 16 \
    --ddp-backend=legacy_ddp \
    --dataset-name MonomerTg_dataset \
    --task graph_prediction \
    --criterion l1_loss \
    --arch graphormer_slim \
    --num-classes 1 \
    --attention-dropout 0.1 --act-dropout 0.1 --dropout 0.0 \
    --optimizer adam --adam-betas '(0.9, 0.999)' --adam-eps 1e-8 --clip-norm 5.0 --weight-decay 0.01 \
    --lr-scheduler polynomial_decay --power 1 --warmup-updates 60000 --total-num-update 400000 \
    --lr 2e-4 --end-learning-rate 1e-9 \
    --batch-size 64 \
    --fp16 \
    --data-buffer-size 20 \
    --encoder-layers 12 \
    --encoder-embed-dim 80 \
    --encoder-ffn-embed-dim 80 \
    --encoder-attention-heads 8 \
    --max-epoch 10000 \
    --save-dir ./ckpts
    

    Does anyone know why this happens, and is there perhaps a solution? Thank you very much!
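
    A hedged observation (not a confirmed diagnosis): the traceback shows graph_prediction.py importing a module by name from --user-data-dir, so the directory must contain a .py file that registers the dataset, and --dataset-name must match the name passed to @register_dataset. A minimal layout might look like:

    # /workspace/Graphormer/examples/customized_dataset/customized_dataset.py
    # (file and registered names are illustrative; the registered name is what
    # --dataset-name must match)
    from graphormer.data import register_dataset

    @register_dataset("MonomerTg_dataset")
    def create_customized_dataset():
        ...  # build the dgl dataset and return the split dict described in the docs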

    opened by ZhanggaoYuan16 2