Audio-Track Separator
Introduction
Audio source separation is the process of separating a mixture (e.g. a pop band recording) into isolated sounds from individual sources (e.g. just the lead vocals). In short, it means splitting a song into separate vocal and instrument tracks.
In this repository, we developed an audio track separator in TensorFlow that separates the vocals and drums from an input song track.
We trained a U-Net model with two output layers: one predicts the vocals and the other predicts the drums. The number of output layers can be increased to match the number of sources one needs to separate from the input track.
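As a rough illustration of the head-per-source idea, a minimal tensorflow.keras sketch might look like the following; the layer sizes, names, and helper function are illustrative assumptions, not the repository's exact architecture:

import tensorflow as tf
from tensorflow.keras import layers

def build_two_head_model(dim, sources=("vocals", "drums")):
    # Minimal multi-output sketch: one output head per source to separate.
    inp = layers.Input(shape=(dim, 1))
    x = layers.Conv1D(32, 15, padding="same", activation="relu")(inp)  # stand-in for the U-Net body
    outputs = [layers.Conv1D(1, 1, padding="same", name=name)(x) for name in sources]
    return tf.keras.Model(inp, outputs)

Adding a third source (e.g. bass) is then just a matter of appending another name to sources.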
Technologies used:
- The entire architecture is built with TensorFlow.
- Matplotlib is used for visualization.
- NumPy is used for numerical operations.
- Librosa is used for processing the audio files.
- nussl is used to fetch the dataset.
The Dataset
We will be using the MUSDB18 dataset for this project.
MUSDB18 is a dataset of 150 full-length music tracks (~10 h total duration) of different genres, together with their isolated drums, bass, vocals, and "other" stems.
It contains two folders: a training set ("train") with 100 songs and a test set ("test") with 50 songs. Supervised approaches should be trained on the training set and tested on both sets.
All signals are stereophonic and encoded at 44.1 kHz.
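As a sketch, the dataset can be pulled in through nussl's MUSDB18 wrapper (with download=True it fetches the 7-second preview version of the dataset; the exact argument names may vary across nussl versions):

import nussl

# Fetch MUSDB18 via nussl (downloads the 7-second preview version).
train_data = nussl.datasets.MUSDB18(download=True)

item = train_data[0]
mix = item['mix']                   # AudioSignal of the full mixture
vocals = item['sources']['vocals']  # isolated vocal stem
drums = item['sources']['drums']    # isolated drum stem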
Exploratory Data Analysis
Building a Data Loader
In the pipeline we re-sample the audio data. For the time being, our target is to separate the vocal and drum audio from the original mixture, so the pipeline returns the processed original audio as X and an array of the processed vocals and drums as y.
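A minimal sketch of this step, assuming 44.1 kHz stems as NumPy arrays and the 11025 Hz target rate used by the training command below (the helper name is hypothetical):

import numpy as np
import librosa

TARGET_SR = 11025  # matches the --sampling_rate flag used below

def make_example(mix, vocals, drums, orig_sr=44100):
    # Re-sample the mixture and both target stems to the working rate.
    X = librosa.resample(mix, orig_sr=orig_sr, target_sr=TARGET_SR)
    y = np.stack([
        librosa.resample(vocals, orig_sr=orig_sr, target_sr=TARGET_SR),
        librosa.resample(drums, orig_sr=orig_sr, target_sr=TARGET_SR),
    ])
    return X, y  # X: processed mixture, y: [vocals, drums]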
U-Net Architecture
model = AudioTrackSeparation()           # the repository's U-Net model class
model.build(input_shape=(None, DIM, 1))  # DIM = number of audio samples per input example
model.build_graph().summary()            # print a layer-by-layer summary of the network
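For reference, the skip-connection structure that makes this a U-Net can be sketched as below; this is an illustrative one-level toy, not the repository's AudioTrackSeparation class:

import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(dim):
    # One down/up level with a skip connection, plus the two source heads.
    # dim should be even so the upsampled path matches the encoder path.
    inp = layers.Input(shape=(dim, 1))
    e1 = layers.Conv1D(16, 15, padding="same", activation="relu")(inp)  # encoder
    d1 = layers.MaxPooling1D(2)(e1)                                     # downsample
    b = layers.Conv1D(32, 15, padding="same", activation="relu")(d1)    # bottleneck
    u1 = layers.UpSampling1D(2)(b)                                      # upsample
    c1 = layers.Concatenate()([u1, e1])                                 # skip connection
    x = layers.Conv1D(16, 15, padding="same", activation="relu")(c1)
    vocals = layers.Conv1D(1, 1, padding="same", name="vocals")(x)
    drums = layers.Conv1D(1, 1, padding="same", name="drums")(x)
    return tf.keras.Model(inp, [vocals, drums])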
Implementation
Training
!python main.py --sampling_rate 11025 --train True --epoch 50 --batch 16 --model_save_path ./models/
Trains the U-Net model on the MUSDB18 dataset and saves the trained model to the provided directory (--model_save_path).
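Under the hood, two-output training amounts to compiling with one loss per head. A hedged sketch using the toy model above (the loss choice, dummy data, and variable names are our assumptions, not the repository's exact settings):

import numpy as np

model = tiny_unet(dim=16384)
model.compile(optimizer="adam",
              loss={"vocals": "mae", "drums": "mae"})  # one loss per output head

# Dummy arrays shaped like the data loader's output, just to show the wiring
# (the real run uses --epoch 50 --batch 16 on the MUSDB18 examples).
X_train = np.random.randn(16, 16384, 1).astype("float32")
y_train = {"vocals": np.random.randn(16, 16384, 1).astype("float32"),
           "drums": np.random.randn(16, 16384, 1).astype("float32")}
model.fit(X_train, y_train, epochs=1, batch_size=16)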
Testing
!python main.py --sampling_rate 11025 --test /content/pop.00000.wav --model_save_path ./models/
Loads the model from --model_save_path, reads the audio file from the provided path (--test) with librosa, processes it, and uses the model to predict the output. Finally, the predictions are visualized with a wave plot and saved to the root directory.
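Roughly, the test path corresponds to the following sketch; the paths mirror the command above, while the output file name and the assumption that the saved model loads with tf.keras.models.load_model are ours:

import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt
import tensorflow as tf

model = tf.keras.models.load_model("./models/")
audio, sr = librosa.load("/content/pop.00000.wav", sr=11025)  # resamples on load
# Trim or pad the audio here if its length differs from the model's input size.
vocals, drums = model.predict(audio[np.newaxis, :, np.newaxis])

fig, axes = plt.subplots(2, 1, sharex=True)
librosa.display.waveshow(vocals.squeeze(), sr=sr, ax=axes[0])  # waveshow in librosa >= 0.9
axes[0].set_title("Predicted vocals")
librosa.display.waveshow(drums.squeeze(), sr=sr, ax=axes[1])
axes[1].set_title("Predicted drums")
fig.savefig("predictions.png")  # illustrative file name, written to the working directory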