Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style

Overview

Official code to reproduce the results and data presented in the paper Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style (NeurIPS 2021).

Problem Formulation

Numerical data

To train:

> python main_mlp.py --style-change-prob 0.75 --statistical-dependence --content-dependent-style

To evaluate:

> python main_mlp.py --style-change-prob 0.75 --statistical-dependence --content-dependent-style --evaluate
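
For intuition, the flags correspond to the paper's data-generating process: content is shared between the original view and its augmentation, each style dimension is resampled with probability --style-change-prob, and the other two flags add statistical dependence within the latents and make the style distribution depend on content. Below is a minimal illustrative sketch of the style-change mechanism only; the dimensions and distributions are placeholder assumptions, not the repository's actual generator.

    # Sketch only: shared content, each style dimension resampled with prob. 0.75.
    import numpy as np

    rng = np.random.default_rng(0)
    n_content, n_style, p_change = 5, 5, 0.75    # illustrative sizes

    def sample_pair():
        c = rng.standard_normal(n_content)        # content, identical in both views
        s = rng.standard_normal(n_style)          # style of the original view
        change = rng.random(n_style) < p_change   # which style dims get perturbed
        s_aug = np.where(change, rng.standard_normal(n_style), s)
        return np.concatenate([c, s]), np.concatenate([c, s_aug])

    z, z_aug = sample_pair()                      # latent pair behind the two views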

Causal3DIdent Dataset

Causal3DIdent dataset example images

You can access the dataset here. The training and test sets consist of 250,000 and 25,000 samples, respectively.

High-dimensional images: Causal3DIdent

To train:

> python main_3dident.py --offline-dataset OFFLINE_DATASET --apply-random-crop --apply-color-distortion

To evaluate:

> python main_3dident.py --offline-dataset OFFLINE_DATASET --apply-random-crop --apply-color-distortion --evaluate
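
The two augmentation flags correspond to the usual SimCLR-style image augmentations. A rough torchvision equivalent is sketched below; the exact crop size and jitter strengths used by main_3dident.py may differ.

    # Rough torchvision equivalent of the two flags; parameters are assumptions.
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224),                         # --apply-random-crop
        transforms.RandomApply(                                    # --apply-color-distortion
            [transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
        transforms.ToTensor(),
    ])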

BibTeX

@inproceedings{vonkugelgen2021self,
  title={Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style},
  author={von Kügelgen, Julius and Sharma, Yash and Gresele, Luigi and Brendel, Wieland and Schölkopf, Bernhard and Besserve, Michel and Locatello, Francesco},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}

Acknowledgements

This repository builds on the following codebase.

Comments
  • Difference between raw latents and latents?

    Hello,

    1. I was wondering what the difference between the raw_latents_*.npy and latents_*.npy files is; both seem to contain 10 variables per image. The raw latents appear to be the actual values of object position, hue, and rotation, as well as spotlight position and hue and background hue.

    2. Also, what is the exact order of these attributes?

    Thank you.

    opened by kfarivar 2
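
    One hedged way to check this yourself (the file names below are assumptions based on the question, not an official answer): load both annotation files and compare their shapes and value ranges.

        # If latents_* is a transformed version of raw_latents_*, the value ranges
        # should differ while the number of columns (reported as 10) matches.
        import numpy as np

        raw = np.load("raw_latents_train.npy")   # assumed file name
        lat = np.load("latents_train.npy")       # assumed file name
        print(raw.shape, lat.shape)
        print(raw.min(axis=0), raw.max(axis=0))
        print(lat.min(axis=0), lat.max(axis=0))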
  • Question about data generation with statistical-dependence

    Hi, thanks for sharing the code here; very nice paper.

    I have a question about your data generation process: it seems that the covariance matrix is sampled anew for every batch when statistical-dependence is set to True (https://github.com/ysharma1126/ssl_identifiability/blob/c2968e5a41d6b95f60e7d2c517bb06ca06662e23/spaces.py#L68).

    Doesn't this mean that the statistical dependence only holds within a single batch, and that, averaged over the whole dataset across many batches, the covariance matrix becomes the identity, i.e. there is no statistical dependence?

    Or is there some other reasoning that I'm missing?

    opened by TangTangFei 2
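
    To illustrate the effect the question describes, here is a rough standalone paraphrase (not the repository's exact code from spaces.py): if a fresh random covariance is drawn for every batch, the sample covariance averaged over many batches approaches the identity.

        # Standalone sketch: per-batch random covariances average toward the identity.
        import numpy as np

        rng = np.random.default_rng(0)
        dim, batch_size, n_batches = 10, 512, 200
        running_cov = np.zeros((dim, dim))

        for _ in range(n_batches):
            A = rng.standard_normal((dim, dim))
            cov = A @ A.T / dim                    # fresh random covariance per batch
            z = rng.multivariate_normal(np.zeros(dim), cov, size=batch_size)
            running_cov += np.cov(z, rowvar=False)

        print(np.round(running_cov / n_batches, 2))   # off-diagonal entries near 0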