Code for CoMatch: Semi-supervised Learning with Contrastive Graph Regularization

Overview

CoMatch: Semi-supervised Learning with Contrastive Graph Regularization (Salesforce Research)

This is a PyTorch implementation of the CoMatch paper:

@article{CoMatch,
	title={Semi-supervised Learning with Contrastive Graph Regularization},
	author={Junnan Li and Caiming Xiong and Steven C.H. Hoi},
	journal={arXiv preprint arXiv:2011.11183},
	year={2020}
}

Requirements:

  • PyTorch ≥ 1.4
  • pip install tensorboard_logger
  • Download and extract the CIFAR-10 dataset into ./data/

To perform semi-supervised learning on CIFAR-10 with 4 labels per class, run:

python Train_CoMatch.py --n-labeled 40 --seed 1 

The results using different random seeds are:

Seed       1       2       3       4       5       Avg.
Accuracy   93.71   94.10   92.93   90.73   93.97   93.09
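
To reproduce this table, the same command is run once per seed. A minimal convenience loop (not part of the repository, shown here only as an illustration) could look like the following:

# Hypothetical sweep script, not part of the repository: run the published
# command once for each of the five seeds reported above.
import subprocess

for seed in range(1, 6):
    subprocess.run(
        ['python', 'Train_CoMatch.py', '--n-labeled', '40', '--seed', str(seed)],
        check=True,
    )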

ImageNet

For ImageNet experiments, see ./imagenet/


Comments
  • Codes for Dataloader of STL Dataset

    Thanks a lot for sharing your clear code. I am trying to train the models on the STL dataset, but I cannot find a dataloader for STL that suits the SSL setting. Would you be willing to share the code for an STL dataloader that randomly splits the data into labeled and unlabeled sets? (A minimal sketch of such a split is included after the comments.)

    opened by QiushiYang 4
  • How to improve performance

    The idea in your paper is amazing; great truths are all simple. I have the following questions:
    1. Does a stronger data augmentation method, such as RandAugment, improve performance?
    2. To improve performance, is it necessary to increase the size used for distribution alignment?
    3. How should the memory bank size be adjusted when more labeled data is used on ImageNet, e.g. 20% of ImageNet?
    4. If 20% of ImageNet is used as labeled data, what are the recommendations for the other hyperparameters?
    5. In your view, what are the main challenges in reaching fully supervised performance with 20% of ImageNet labels?

    opened by happyxuwork 3
  • Questions about distribution align codes

    Thanks for sharing your clean code. I notice that the distribution alignment (DA) code only computes q = Norm(q / mean(p(y))), without multiplying by the labeled-data distribution p(y) or applying further sharpening, which does not seem to be the same as the DA introduced in ReMixMatch. Why not use the standard DA to train the models? Does the simpler variant work better for SSL performance? (The two variants are sketched after the comments.)

    opened by QiushiYang 1
  • Several Questions

    Hi, thank you for your great work. I am new to this area and have the following questions:
    1. How much improvement comes from the graph-based contrastive learning branch? I cannot find that ablation experiment in the paper. I am interested in this part and would also like to know the performance gains, which might be useful for my task.
    2. Does the CoMatch method fit multi-label classification tasks?

    Once again, thank you for your nice work and clean code! Looking forward to your reply. Best regards, Tan

    opened by myt889 2
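
In response to the STL dataloader question above, the following is a minimal sketch, not part of this repository, of how a labeled/unlabeled split for STL-10 might be built. It assumes torchvision's STL10 dataset; the helper name split_stl10 and the choice to use the dedicated 'unlabeled' split as the unlabeled pool are assumptions, not the authors' code.

# Hypothetical helper, not part of this repository: build a small labeled
# subset and an unlabeled pool for SSL on STL-10 using torchvision.
import numpy as np
from torchvision import datasets

def split_stl10(root='./data', n_labeled_per_class=4, seed=1):
    rng = np.random.RandomState(seed)

    # The 5,000-image 'train' split supplies the candidate labeled examples.
    train_set = datasets.STL10(root, split='train', download=True)
    labels = np.array(train_set.labels)

    # Randomly pick n_labeled_per_class indices from each of the 10 classes.
    labeled_idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], n_labeled_per_class, replace=False)
        for c in range(10)
    ])

    # STL-10 ships a dedicated 100,000-image 'unlabeled' split, which can serve
    # as the unlabeled pool (the remaining train images could be added to it).
    unlabeled_set = datasets.STL10(root, split='unlabeled', download=True)
    return train_set, labeled_idx, unlabeled_set

The returned labeled indices could then be wrapped in a torch.utils.data.Subset and paired with the repository's weak/strong augmentations, mirroring the CIFAR-10 loader.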
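
Regarding the distribution-alignment question above, the sketch below contrasts the simplified rule quoted in that comment with ReMixMatch-style DA. It is an illustration rather than the repository's code: probs stands for a batch of soft pseudo-labels, prob_avg for a running average of model predictions, and p_y for the labeled-class distribution; all three names are assumptions.

import torch

def da_simplified(probs, prob_avg):
    # Rule quoted in the comment above: divide by the running mean of model
    # predictions and renormalize; no p(y) factor, no sharpening.
    q = probs / prob_avg
    return q / q.sum(dim=1, keepdim=True)

def da_remixmatch(probs, prob_avg, p_y, T=0.5):
    # ReMixMatch-style DA: additionally multiply by the labeled-class
    # distribution p(y), then sharpen with temperature T and renormalize.
    q = probs * p_y / prob_avg
    q = q / q.sum(dim=1, keepdim=True)
    q = q ** (1.0 / T)
    return q / q.sum(dim=1, keepdim=True)

if __name__ == '__main__':
    probs = torch.softmax(torch.randn(8, 10), dim=1)
    prob_avg = probs.mean(dim=0)
    p_y = torch.full((10,), 0.1)
    print(da_simplified(probs, prob_avg).sum(dim=1))          # rows sum to 1
    print(da_remixmatch(probs, prob_avg, p_y).sum(dim=1))     # rows sum to 1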