
Semi-supervised Deep Kernel Learning

This is the code that accompanies the paper Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance (NIPS 2018).

Install by running pip install -e . from this directory inside a fresh virtualenv.

  • Experiments for SSDKL, DKL, VAT, and Coreg are in the ssdkl directory.
  • Experiments for Label Propagation and Mean Teacher are in the labelprop_and_meanteacher directory.
  • Experiments for the VAE are in the vae directory.

For more detailed instructions, please see the README files in each directory.

Tested with Python 2.7.12.
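
As a rough, self-contained illustration of the objective named in the paper title (not the repository's actual TensorFlow implementation), the NumPy sketch below combines the Gaussian-process negative log marginal likelihood on the labeled data with the average posterior predictive variance at the unlabeled inputs. The RBF kernel, the function names, the noise level, and the weight alpha are illustrative assumptions, not values taken from the paper or the code.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # squared-exponential kernel between the rows of A and the rows of B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A.dot(B.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def ssdkl_style_objective(X_l, y_l, X_u, noise=0.1, alpha=1.0):
    # GP negative log marginal likelihood on the labeled set
    K = rbf_kernel(X_l, X_l) + noise * np.eye(len(X_l))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y_l))
    nll = 0.5 * y_l.dot(a) + np.sum(np.log(np.diag(L))) + 0.5 * len(y_l) * np.log(2 * np.pi)

    # posterior predictive variance at the unlabeled inputs (their labels are never used)
    v = np.linalg.solve(L, rbf_kernel(X_l, X_u))
    var_u = np.diag(rbf_kernel(X_u, X_u)) - np.sum(v**2, axis=0)

    # semi-supervised objective: fit the labeled data, shrink variance on the unlabeled data
    return nll + alpha * np.mean(var_u)

In the full method the kernel operates on neural-network embeddings of the inputs and the whole model is trained end to end; the sketch only shows how the labeled and unlabeled terms are combined.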

If you find this code useful in your research, please cite

@article{jeanxieermon_ssdkl_2018,
  title={Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance},
  author={Jean, Neal and Xie, Sang Michael and Ermon, Stefano},
  journal={Neural Information Processing Systems (NIPS)},
  year={2018},
}
Comments
  • Batch setting in Mean-teacher optimization

    Hi, I have a question about the batch setting inside the optimization loop for mean teacher. Please have a look at the following code to see what I am referring to: https://github.com/ermongroup/ssdkl/blob/eba7023179597316f57bb536d8fb1562855ec981/labelprop_and_meanteacher/mean_teacher.py#L152 I am trying to understand why the call to "create_feed_dicts" is inside the optimization loop. In each optimization cycle, train_fd provides the same (entire) training set, without any batching; the only variation could be the teacher and student noise. Did I miss anything? Thanks!

    opened by toushi68 4
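
    A minimal, self-contained sketch of the full-batch pattern the question above describes, using a toy linear model and hypothetical names rather than the repository's TensorFlow code: every step rebuilds the inputs from the entire training set, and only the injected student/teacher noise changes between steps.

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.rand(200)                               # all training inputs
    y = 3.0 * X + 0.1 * rng.randn(200)
    labeled = np.arange(20)                         # pretend only the first 20 labels are known

    w_student, w_teacher = 0.0, 0.0
    lr, ema_decay, sigma, consistency = 0.1, 0.99, 0.05, 1.0

    for step in range(500):
        # full training set every step; only the noise realizations differ
        x_s = X + sigma * rng.randn(len(X))         # student input noise
        x_t = X + sigma * rng.randn(len(X))         # teacher input noise
        pred_s, pred_t = w_student * x_s, w_teacher * x_t

        # gradient of supervised MSE (labeled rows) plus consistency MSE (all rows)
        grad = (2 * (pred_s[labeled] - y[labeled]) * x_s[labeled]).mean()
        grad += consistency * (2 * (pred_s - pred_t) * x_s).mean()

        w_student -= lr * grad
        w_teacher = ema_decay * w_teacher + (1 - ema_decay) * w_student   # EMA teacher update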
  • During training, are the labels of the unlabelled data used for training?

    Hi,

    SSDKL is an excellent method for semi-supervised learning. I want to apply this method to solve my problem. However, I have a question about the unlabelled data.

    In SSDKL, the labelled data is divided into training, validation, test, and "unlabelled" subsets. In my understanding, the labels (target values) of the unlabelled subset should not be fed into the training model. However, in the functions "def _setup_nns(self):" and "def _compute_embeddings(self, sess):", the label array "self.y_unlabeled" is also an input for NN training. Since my data contains genuinely unlabelled data without any labels, it seems that I cannot apply my data to the SSDKL method directly.

    Did I understand this correctly?

    Thanks for your help~

    Best regards,

    Jappy

    opened by Jappy0 2
  • Inquiry on Proof of Theorem 1

    Hello! I am not sure this is the right place, but I have a simple question about the proof of Theorem 1 in Appendix 1 (https://arxiv.org/pdf/1805.10407.pdf).

    [screenshot of the derivation, equations (16)-(20), from the proof of Theorem 1]

    In the screenshot, I can follow the derivation up to equations (16)-(19), but I find Equation (20) hard to understand. Comparing Equations (17) and (20): while (17) finds the point minimizer of the negative log posterior, (20) finds the point minimizer of the negative log likelihood (as far as I understand).

    Could you elaborate on this derivation?

    Best,

    opened by ykwon0407 2
  • Transductive setting in ModelTrainer

    Hello, I have a question about the transductive setting inside the ModelTrainer class. Please have a look at ssdkl/ssdkl/models/train_models.py, lines 199-201, to see what I mean.

    When we set the transductive variable to True, the lines of code mentioned above are executed. Assume we have a training set of 100000 samples, only 5000 of which are labeled, and we use 10% of the labeled set for validation. We should then have 4500 samples for training, 500 for validation, and 95000 for the test set, which also serves as the unlabeled set in transductive learning. However, when the code above is executed, we lose that unlabeled set and end up with 5000 unlabeled samples instead. Or did I miss anything? Could you please check these lines of code?

    Thank you so much!

    opened by atakanfilgoz 1
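
    For readers checking the arithmetic, the split described in the comment above works out as follows (the numbers come from the comment itself, not from the repository's defaults):

    n_total, n_labeled = 100000, 5000
    n_val = int(0.1 * n_labeled)          # 500 validation samples
    n_train = n_labeled - n_val           # 4500 training samples
    n_unlabeled = n_total - n_labeled     # 95000 samples expected in the unlabeled/test set
    print(n_train, n_val, n_unlabeled)    # 4500 500 95000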