# Y-Net

Official implementation of *A cappella: Audio-visual Singing Voice Separation*, British Machine Vision Conference (BMVC) 2021.

Project page: ipcv.github.io/Acappella/

Paper: arXiv, BMVC (not available yet)
## Running a demo / Y-Net Inference
We provide simple functions to load models with pre-trained weights. Steps:
- Clone the repo, or download `y-net > VnBSS > models` (`models` can run as a standalone package).
- Load a model:

```python
from VnBSS import y_net_gr  # or: from models import y_net_gr
model = y_net_gr(n=1)
```
## Citation
```bibtex
@inproceedings{acappella,
  author    = {Juan F. Montesinos and
               Venkatesh S. Kadandale and
               Gloria Haro},
  title     = {A cappella: Audio-visual Singing Voice Separation},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2021},
}
```
## Training / Using DEV code
### Training

The most difficult part is preparing the dataset, as everything is built on a very specific format.
To run training:

```
python run.py -m model_name --workname experiment_name --arxiv_path directory_of_experiments --pretrained_from path_pret_weights
```

You can inspect the argparse defaults at `default.py > argparse_default`.
Possible model names are: `y_net_g`, `y_net_gr`, `y_net_m`, `y_net_r`, `u_net`, `llcp`.
## Testing
- Go to `manuscript_scripts` and replace the checkpoint paths in the testing scripts with yours.
- Run:

  ```
  bash manuscript_scripts/test_gr_r.sh
  ```

- Replace the paths in `manuscript_scripts/auto_metrics.py` with your experiment directory path.
- Run the following to visualise the results:

  ```
  python manuscript_scripts/auto_metrics.py
  ```
## It's a complicated framework. HELP!
The best way to get to know the framework is to debug! Having runnable code helps you inspect input shapes and dataflow, and lets you step through line by line. Download *The circle of life* demo with the files already processed; it acts as a dataset of 6 samples. You can download it from Google Drive (1.1 GB).
- Unzip the file.
- Run, for example:

  ```
  python run.py -m y_net_gr
  ```

Everything has been configured to run this way by default.
## The model
Each effective model is wrapped by an `nn.Module` which takes care of computing the STFT and the mask, returning the waveform, and so on. This wrapper can be found at `VnBSS > models > y_net.py > YNet`. To get rid of it, simply inherit the class, keep the minimum set of layers, and keep the `core_forward` method, which is the inference step without the miscellanea.
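To picture what the wrapper does, here is a toy, self-contained stand-in for the STFT → mask → waveform pipeline. This is a hypothetical sketch, not the `YNet` code: the sample rate, FFT size, and the hand-crafted band-pass mask (standing in for a network-predicted mask) are all illustrative assumptions.

```python
# Toy sketch of the wrapper's pipeline (STFT -> mask -> waveform).
# Hypothetical: sample rate, FFT size and the hand-crafted mask are
# illustrative stand-ins for what the real wrapper computes with a
# learned, network-predicted mask.
import numpy as np
from scipy.signal import stft, istft

sr = 16384  # assumed sample rate, illustrative only
t = np.linspace(0, 1, sr, endpoint=False)
# "Voice" (440 Hz) mixed with "accompaniment" (80 Hz)
mixture = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 80 * t)

# 1) STFT of the mixture
freqs, _, spec = stft(mixture, fs=sr, nperseg=1024)

# 2) A mask over the spectrogram. A network would predict this;
#    here we hand-craft a band-pass keeping bins above 200 Hz.
mask = (freqs > 200).astype(float)[:, None]

# 3) Apply the mask and invert back to a waveform
_, estimate = istft(spec * mask, fs=sr, nperseg=1024)

# The 80 Hz component is suppressed, so the estimate carries
# noticeably less energy than the mixture.
ratio = np.sum(estimate**2) / np.sum(mixture**2)
```

In this reading, keeping only `core_forward` roughly means keeping step 2 (the mask prediction) while dropping the wrapper's steps 1 and 3.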
## FAQs
- **How do I change the optimizer's hyperparameters?**
  Go to `config > optimizer.json`.
- **How do I change the clip duration, video framerate, STFT parameters, or audio samplerate?**
  Go to `config > __init__.py`.
- **How do I change the batch size or the number of epochs?**
  Go to `config > hyptrs.json`.
- **How do I dump predictions from the training and test sets?**
  Go to `default.py` and modify `DUMP_FILES` (this can be controlled at a subset level). The `force` argument skips the iteration-wise conditions and dumps every single network prediction.
- **Is tensorboard enabled?**
  Yes, you will find the tensorboard records at `your_experiment_directory/used_workname/tensorboard`.
- **Can I resume an experiment?**
  Yes. If you set exactly the same experiment folder and workname, the system will detect it and resume from there.
- **I'm trying to resume but got an `AssertionError`.**
  This can happen if there was an exception before the model ran.
- **How do I change the number of layers of the U-Net?**
  The U-Net is built dynamically given a list of layers per block, as shown in `models > __init__.py`, from outer to inner blocks.
- **How do I modify the default network values?**
  The JSON file `config > net_cfg.json` overwrites any default configuration of the model.
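As a concrete picture of the "list of layers per block, from outer to inner" idea in the U-Net FAQ above, here is a hypothetical, dependency-free sketch of building a nested block structure recursively from a channel list. This is not the code in `models > __init__.py`; the names and the dict-based structure are illustrative.

```python
# Hypothetical sketch: build a nested U-Net block structure from a list
# of per-block channel counts, outer to inner. Not the repo's builder.

def build_unet(channels):
    """channels[0] is the outermost block; the last entry is the bottleneck."""
    if len(channels) == 1:
        return {"type": "bottleneck", "ch": channels[0]}
    return {
        "type": "block",
        "ch": channels[0],                  # encoder/decoder width at this depth
        "inner": build_unet(channels[1:]),  # recurse towards the bottleneck
    }

def depth(block):
    """Number of nested blocks, i.e. the number of U-Net levels."""
    return 1 if block["type"] == "bottleneck" else 1 + depth(block["inner"])

# Shortening or extending the list changes the number of levels:
small = build_unet([16, 32])
large = build_unet([16, 32, 64, 128])
```

In the real model the dicts would be `nn.Module` blocks with skip connections, but the shape of the recursion, one list entry per level consumed from outer to inner, is the point.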