rethink-audio-fsl
This repo contains the source code for the paper "Who calls the shots? Rethinking Few-Shot Learning for Audio." (WASPAA 2021)
Dataset
Models in this work are trained on FSD-MIX-CLIPS, an open dataset of programmatically mixed audio clips with a controlled level of polyphony and signal-to-noise ratio. We use single-labeled clips from FSD50K as the source material for the foreground sound events and Brownian noise as the background to generate 281,039 10-second strongly-labeled soundscapes with Scaper. We refer to this (intermediate) dataset of 10s soundscapes as FSD-MIX-SED. Each soundscape contains n events from n different sound classes, where n ranges from 1 to 5. We then extract 614,533 1s clips centered on each sound event in the soundscapes in FSD-MIX-SED to produce FSD-MIX-CLIPS.
Due to the large size of the dataset, instead of releasing the raw audio files, we release the source material (a subset of FSD50K) and soundscape annotations in JAMS format, which can be used to reproduce FSD-MIX-SED using Scaper. All clips in FSD-MIX-CLIPS are extracted from FSD-MIX-SED. Therefore, for FSD-MIX-CLIPS, instead of releasing duplicated audio content, we provide annotations that specify the filename in FSD-MIX-SED and the corresponding start time (in seconds) of each 1-second clip.
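To make the relationship between the two datasets concrete, the sketch below crops one 1-second clip from a reproduced soundscape given a filename and start time, as specified by an FSD_MIX_CLIPS annotation. The file name, start time, and layout here are illustrative assumptions, not the released schema.

import soundfile as sf

# One FSD_MIX_CLIPS annotation entry: which FSD_MIX_SED soundscape the clip
# comes from, and where the 1-second clip starts (placeholder values).
soundscape_file = 'FSD_MIX_SED.audio/train/soundscape_0.wav'
start_time = 3.2  # seconds

audio, sr = sf.read(soundscape_file)
start = int(start_time * sr)
clip = audio[start:start + sr]  # 1-second clip centered on a sound event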
To reproduce FSD-MIX-SED:
- Download all files from Zenodo.
- Extract the .tar.gz files. You will get:
  - FSD_MIX_SED.annotations: 281,039 annotation files, 35GB
  - FSD_MIX_SED.source: 10,296 single-labeled audio clips, 1.9GB
  - FSD_MIX_CLIPS.annotations: 5 annotation files, one for each class/data split
  - vocab.json: 89 classes; each class is labeled by its index in this list in the following experiments. 0-58: base, 59-73: novel-val, 74-88: novel-test.
We will use FSD_MIX_SED.annotations and FSD_MIX_SED.source to reproduce the audio data in FSD_MIX_SED, and use that audio together with FSD_MIX_CLIPS.annotations for the following training and evaluation.
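For example, the class splits can be recovered from vocab.json by index. A minimal sketch, assuming vocab.json holds a JSON list of the 89 class names:

import json

with open('vocab.json') as f:
    vocab = json.load(f)  # assumed: a list of 89 class names

base_classes = vocab[:59]          # indices 0-58
novel_val_classes = vocab[59:74]   # indices 59-73
novel_test_classes = vocab[74:89]  # indices 74-88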
- Install Scaper
- Generate soundscapes from the JAMS files by running the command below. Set annpath and audiopath to the extracted folders, and savepath to the desired path for the output audio files.
python ./data/generate_soundscapes.py \
--annpath PATH-TO-FSD_MIX_SED.annotations \
--audiopath PATH-TO-FSD_MIX_SED.source \
--savepath PATH-TO-SAVE-OUTPUT
Note that this will generate 281,039 audio files with a total size of ~450GB in the folder FSD_MIX_SED.audio under the specified savepath.
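Under the hood, each soundscape is re-synthesized from its JAMS annotation with Scaper. A minimal sketch for a single file, with placeholder paths (see ./data/generate_soundscapes.py for the actual loop and settings):

import scaper

# Re-synthesize one soundscape from its JAMS annotation (paths are placeholders).
# fg_path/bg_path should point to the foreground/background folders extracted
# from FSD_MIX_SED.source.
scaper.generate_from_jams(
    jams_infile='FSD_MIX_SED.annotations/train/soundscape_0.jams',
    audio_outfile='FSD_MIX_SED.audio/train/soundscape_0.wav',
    fg_path='FSD_MIX_SED.source/foreground',
    bg_path='FSD_MIX_SED.source/background',
)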
If you want to get the foreground material (FSD_MIX_SED.source) directly from FSD50K instead of downloading it, run
python ./data/preprocess_foreground_sounds.py \
--fsdpath PATH-TO-FSD50K \
--outpath PATH_TO_SAVE_OUTPUT
Experiment
We provide source code to train the best performing embedding model (pretrained OpenL3 + FC) and three different few-shot methods to predict both base and novel class data.
Preprocessing
Once the audio files are reproduced, we pre-compute OpenL3 embeddings of the clips in FSD-MIX-CLIPS and save them.
- Install OpenL3
- Set the paths of the downloaded FSD_MIX_CLIPS.annotations and the generated FSD_MIX_SED.audio, and run
python get_openl3emb_and_filelist.py \
--annpath PATH-TO-FSD_MIX_CLIPS.annotations \
--audiopath PATH-TO-FSD_MIX_SED.audio \
--savepath PATH_TO_SAVE_OUTPUT
This generates 614,533 .pkl files, each containing one embedding. A set of filelists will also be saved under the current folder.
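For reference, each saved embedding corresponds roughly to the following computation with the openl3 package. The content type, embedding size, and time-pooling here are assumptions; see get_openl3emb_and_filelist.py for the exact settings used in the paper.

import pickle
import openl3
import soundfile as sf

# Load one 1-second clip (placeholder file name).
audio, sr = sf.read('clip_000001.wav')

# Compute frame-level OpenL3 embeddings; these parameter choices are assumptions.
emb, ts = openl3.get_audio_embedding(
    audio, sr, content_type='env', input_repr='mel256', embedding_size=512)

# Pool over time and save one embedding per clip as a .pkl file.
with open('clip_000001.pkl', 'wb') as f:
    pickle.dump(emb.mean(axis=0), f)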
Environment
Create a conda environment from the environment.yml file and activate it.
Note that you only need the environment if you want to train/evaluate the models. For reproducing the dataset, see Dataset.
conda env create -f environment.yml
conda activate dfsl
Training
- Training configuration can be specified using the config files in ./config
- Model checkpoints will be saved in the folder ./experiments, and tensorboard data will be saved in the folder ./run
1. Base classifier
First, to train the base classifier on base classes, run
python train.py --config openl3CosineClassifier --openl3
2. Few-shot weight generator for DFSL
Once the base model is trained, we can train the few-shot weight generator for DFSL by running
python train.py --config openl3CosineClassifierGenWeightAttN5 --openl3
By default, DFSL is trained with 5 support examples (n=5). To train DFSL with a different n, run
# n=10
python train.py --config openl3CosineClassifierGenWeightAttN10 --openl3
# n=20
python train.py --config openl3CosineClassifierGenWeightAttN20 --openl3
# n=30
python train.py --config openl3CosineClassifierGenWeightAttN30 --openl3
Evaluation
We evaluate the trained models on test data from both base and novel classes. For each novel class, we need to sample a support set. Run the command below to split the original filelist for the test classes into test_support_filelist.pkl and test_query_filelist.pkl.
python get_test_support_and_query.py
- Here we consider monophonic support examples with mixed (random) SNR. Code to run evaluation with polyphonic support examples at specific low/high SNRs will be released soon.
For evaluation, we compute features for both base and novel test data, then make predictions and compute metrics in a joint label space. The computed features, model predictions, and metrics will be saved in the folder ./experiments. We consider 3 few-shot methods to predict novel classes. To test a different number of support examples, set n_pos accordingly in the following commands.
1. Prototype
# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifier --openl3
# Get and save model predictions; run this multiple times (niter) to account for the random selection of novel examples.
python pred.py --config=openl3CosineClassifier --openl3 --niter 100 --n_base 59 --n_novel 15 --n_pos 5
# Compute and save evaluation metrics based on the model predictions
python metrics.py --config=openl3CosineClassifier --openl3 --n_base 59 --n_novel 15 --n_pos 5
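Conceptually, the prototype method averages the embeddings of the n_pos support examples of a novel class and scores each query clip by its cosine similarity to that prototype. A minimal numpy sketch with illustrative names (not the repo's API):

import numpy as np

def prototype_score(support_embs, query_emb):
    """Cosine similarity between a query embedding and a class prototype.

    support_embs: (n_pos, d) embeddings of the support examples of one novel class
    query_emb:    (d,) embedding of the query clip
    """
    prototype = support_embs.mean(axis=0)
    return np.dot(query_emb, prototype) / (
        np.linalg.norm(query_emb) * np.linalg.norm(prototype) + 1e-8)

# Toy usage with random embeddings (n_pos=5, d=512).
support_embs = np.random.randn(5, 512)
query_emb = np.random.randn(512)
print(prototype_score(support_embs, query_emb))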
2. DFSL
# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifierGenWeightAttN5 --openl3
# Get and save model predictions; run this multiple times (niter) to account for the random selection of novel examples.
python pred.py --config=openl3CosineClassifierGenWeightAttN5 --openl3 --niter 100 --n_base 59 --n_novel 15 --n_pos 5
# Compute and save evaluation metrics based on the model predictions
python metrics.py --config=openl3CosineClassifierGenWeightAttN5 --openl3 --n_base 59 --n_novel 15 --n_pos 5
3. Logistic regression
Train a binary logistic regression model for each novel class. Note that we need to sample n_neg examples from the base training data as negative examples. The default n_neg is 100. We also did a hyperparameter search on n_neg based on the validation data as n_pos varies from 5 to 30:
n_pos=5, n_neg=100
n_pos=10, n_neg=500
n_pos=20, n_neg=1000
n_pos=30, n_neg=5000
# Extract embeddings of evaluation data and save them.
python save_features.py --config=openl3CosineClassifier --openl3
# Train binary logistic regression models, predict test data, and compute metrics
python logistic_regression.py --config=openl3CosineClassifier --openl3 --niter 10 --n_base 59 --n_novel 15 --n_pos 5 --n_neg 100
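A hedged sketch of what this amounts to for a single novel class, using scikit-learn with placeholder embeddings (the repo's logistic_regression.py may differ in solver and regularization settings):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder embeddings; in the repo these come from the saved OpenL3 features.
d = 512
pos_embs = rng.normal(size=(5, d))       # n_pos = 5 support examples of one novel class
base_embs = rng.normal(size=(1000, d))   # base-class training embeddings
query_embs = rng.normal(size=(10, d))    # test queries

# Sample n_neg = 100 negatives from the base training data.
neg_idx = rng.choice(len(base_embs), size=100, replace=False)

X = np.concatenate([pos_embs, base_embs[neg_idx]])
y = np.concatenate([np.ones(len(pos_embs)), np.zeros(len(neg_idx))])

# One binary classifier per novel class; queries are scored with predict_proba.
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(query_embs)[:, 1]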
Reference
This code is built upon the implementation from FewShotWithoutForgetting.
Citation
Please cite our paper if you find the code or dataset useful for your research.
Y. Wang, N. J. Bryan, J. Salamon, M. Cartwright, and J. P. Bello. "Who calls the shots? Rethinking Few-shot Learning for Audio", IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2021