Code for the paper "Unsupervised Contrastive Learning of Sound Event Representations", ICASSP 2021.

Overview

Unsupervised Contrastive Learning of Sound Event Representations

This repository contains the code for the following paper. If you use this code or part of it, please cite:

Eduardo Fonseca, Diego Ortego, Kevin McGuinness, Noel E. O'Connor, Xavier Serra, "Unsupervised Contrastive Learning of Sound Event Representations", ICASSP 2021.

arXiv | slides | poster | blog post | video

We propose to learn sound event representations using the proxy task of contrasting differently augmented views of sound events, inspired by SimCLR [1]. The different views are computed by:

  • sampling time-frequency (TF) patches at random within every input clip,
  • mixing resulting patches with unrelated background clips (mix-back), and
  • applying other data augmentations (DAs): random resized cropping (RRC), compression, noise addition, and SpecAugment [2].

Our proposed system is illustrated in the figure.

[Figure: overview of the proposed system]
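For intuition, below is a minimal sketch (in NumPy, not the repository's exact implementation) of how two views of a clip can be composed from its log-mel spectrogram: sample a random TF patch, mix it with a patch from an unrelated background clip (mix-back), and apply SpecAugment-style masking. Shapes, the mixing coefficient and the helper names are illustrative, and RRC/compression are omitted for brevity.

import numpy as np

def random_tf_patch(spec, patch_frames=101):
    # sample a random time crop spanning all mel bins; spec is (n_mels, n_frames)
    start = np.random.randint(0, max(1, spec.shape[1] - patch_frames))
    return spec[:, start:start + patch_frames]

def mix_back(patch, background, alpha=0.1):
    # mix with an unrelated background patch at low energy so the original stays dominant (alpha is illustrative)
    return (1 - alpha) * patch + alpha * background

def spec_augment(patch, max_f=8, max_t=10):
    # zero out a random band of mel bins and a random run of frames (SpecAugment-style masking)
    patch = patch.copy()
    f0 = np.random.randint(0, patch.shape[0] - max_f)
    t0 = np.random.randint(0, patch.shape[1] - max_t)
    patch[f0:f0 + np.random.randint(1, max_f), :] = 0.0
    patch[:, t0:t0 + np.random.randint(1, max_t)] = 0.0
    return patch

def two_views(clip_spec, background_spec):
    # compose two differently augmented views of the same clip for the contrastive proxy task
    views = []
    for _ in range(2):
        v = random_tf_patch(clip_spec)
        v = mix_back(v, random_tf_patch(background_spec))
        v = spec_augment(v)
        views.append(v)
    return views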

Our results suggest that unsupervised contrastive pre-training can mitigate the impact of data scarcity and increase robustness against noisy labels. Please check our paper for more details, or have a quicker look at our slide deck, poster, blog post, or video presentation (see links above).

This repository contains the framework that we used for our paper. It comprises the basic stages to learn an audio representation via unsupervised contrastive learning, and then evaluate the representation via supervised sound event classification. The system is implemented in PyTorch.

Dependencies

This framework is tested on Ubuntu 18.04 using a conda environment. To duplicate the conda environment:

conda create --name <envname> --file spec-file.txt
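
Then activate the environment before running any of the commands below:

conda activate <envname>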

Directories and files

FSDnoisy18k/ includes folders to locate the FSDnoisy18k dataset and an FSDnoisy18k.py to load the dataset (train, val, test), including the data loaders for contrastive and supervised training, which apply transforms or mix-back where appropriate
config/ includes *.yaml files defining parameters for the different training modes
da/ contains data augmentation code, including augmentations mentioned in our paper and more
extract/ contains feature extraction code. Computes an .hdf5 file containing log-mel spectrograms and associated labels for a given subset of data
logs/ folder for output logs
models/ contains definitions for the architectures used (ResNet-18, VGG-like and CRNN)
pth/ contains provided pre-trained models for ResNet-18, VGG-like and CRNN
src/ contains functions for training and evaluation in both supervised and unsupervised fashion
main_train.py is the main script
spec-file.txt contains conda environment specs

Usage

(0) Download the dataset

Download FSDnoisy18k [3] from Zenodo through the dataset companion site, unzip it, and place it in a directory of your choice. Then set the dataset paths in the ctrl section of the *.yaml config files. It can be useful to have a look at the two training sets of FSDnoisy18k: a larger set of noisy labels and a small set of clean data [3]. We use them for training/validation in different ways.

(1) Prepare the dataset

Create an .hdf5 file containing log-mel spectrograms and associated labels for each subset of data:

python extract/wav2spec.py -m test -s config/params_unsupervised_cl.yaml

Use -m with train, val or test to extract features from each subset (see the commands below). All the extraction parameters are listed in params_unsupervised_cl.yaml. Then set the paths to the .hdf5 files in the ctrl section of the *.yaml config files.
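
For example, to extract features for all three subsets:

python extract/wav2spec.py -m train -s config/params_unsupervised_cl.yaml
python extract/wav2spec.py -m val -s config/params_unsupervised_cl.yaml
python extract/wav2spec.py -m test -s config/params_unsupervised_cl.yaml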

(2) Run experiment

Our paper comprises three training modes. For convenience, we provide yaml files defining the setup for each of them.

  1. Unsupervised contrastive representation learning by comparing differently augmented views of sound events. The outcome of this stage is a trained encoder that produces low-dimensional representations. Trained encoders are saved under results_models/ in a folder named after the experiment_name string in the corresponding yaml (make sure to change it). A minimal sketch of the contrastive objective is given after this list.
CUDA_VISIBLE_DEVICES=0 python main_train.py -p config/params_unsupervised_cl.yaml &> logs/output_unsup_cl.out
  2. Evaluation of the representation using a previously trained encoder. Here, we do supervised learning by minimizing cross-entropy loss without data augmentation. Currently, we load the provided pre-trained models in pth/ (you can change this in main_train.py; search for select model). We follow two evaluation methods:

    • Linear Evaluation: train an additional linear classifier on top of the pre-trained unsupervised embeddings.

      CUDA_VISIBLE_DEVICES=0 python main_train.py -p config/params_supervised_lineval.yaml &> logs/output_lineval.out
      
    • End-to-end Fine-Tuning: fine-tune the entire model on two relevant downstream tasks after initializing with the pre-trained weights. The two downstream tasks are:

      • training on the larger set of noisy labels and validating on train_clean. This is selected with train_on_clean: 0 in the yaml.
      • training on the small set of clean data (holding out 15% for validation). This is selected with train_on_clean: 1 in the yaml.

      After choosing the training set for the downstream task, run:

      CUDA_VISIBLE_DEVICES=0 python main_train.py -p config/params_supervised_finetune.yaml &> logs/output_finetune.out
      

The setup in the yaml files should reproduce the best results reported in our paper. For reference, the main flags that determine the training mode are downstream, lin_eval and method in the corresponding yaml (they are already set appropriately in each yaml).
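
For reference, here is a minimal sketch of a SimCLR-style contrastive (NT-Xent) objective as used in the unsupervised pre-training stage; it is not the repository's exact implementation, and the function name, temperature value and the encoder in the usage comment are illustrative.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1, z2: (N, D) embeddings of two augmented views of the same N clips
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D), unit-norm rows
    sim = (z @ z.t()) / temperature                       # pairwise cosine similarities as logits
    # a view is never its own positive, so mask the diagonal
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    # the positive for row i (a view from z1) is row i + N (the matching view from z2), and vice versa
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# usage sketch: loss = nt_xent_loss(encoder(view_1), encoder(view_2)), with a hypothetical encoder returning (N, D) embeddings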

(3) See results

Check logs/*.out for the results printed at the end. The main evaluation metric is balanced (macro) top-1 accuracy (see the sketch below). Trained models are saved under results_models/models* and some metrics are saved under results_models/metrics*.
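
As a reference for the metric, here is a minimal sketch of balanced (macro) top-1 accuracy, i.e., per-class top-1 accuracy averaged over classes (illustrative, not the repository's evaluation code):

import numpy as np

def balanced_top1_accuracy(y_true, y_pred):
    # average the per-class top-1 accuracies so every class weighs equally
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))

# e.g. balanced_top1_accuracy([0, 0, 1, 2], [0, 1, 1, 2]) == (0.5 + 1.0 + 1.0) / 3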

Model Zoo

We provide pre-trained encoders as described in our paper, for ResNet-18, VGG-like and CRNN architectures. See pth/ folder. Note that better encoders could likely be obtained through a more exhaustive exploration of the data augmentation compositions, thus defining a more challenging proxy task. Also, we trained on FSDnoisy18k due to our limited compute resources at the time, yet this framework can be directly applied to other larger datasets such as FSD50K or AudioSet.

Citation

@inproceedings{fonseca2021unsupervised,
  title={Unsupervised Contrastive Learning of Sound Event Representations},
  author={Fonseca, Eduardo and Ortego, Diego and McGuinness, Kevin and O'Connor, Noel E. and Serra, Xavier},
  booktitle={2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2021},
  organization={IEEE}
}

Contact

You are welcome to contact [email protected] should you have any questions or suggestions. You can also open an issue.

Acknowledgment

This work is a collaboration between the MTG-UPF and Dublin City University's Insight Centre. This work is partially supported by Science Foundation Ireland (SFI) under grant number SFI/15/SIRG/3283 and by the Young European Research University Network under a 2020 mobility award. Eduardo Fonseca is partially supported by a Google Faculty Research Award 2018. The authors are grateful for the GPUs donated by NVIDIA.

References

[1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A Simple Framework for Contrastive Learning of Visual Representations," in International Conference on Machine Learning (ICML), 2020.

[2] D. S. Park et al., "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition," in Interspeech, 2019.

[3] E. Fonseca, M. Plakal, D. P. W. Ellis, F. Font, X. Favory, and X. Serra, "Learning Sound Event Classifiers from Web Audio with Noisy Labels," in Proceedings of ICASSP, Brighton, UK, 2019.
