
TagBox

Steer OpenAI's Jukebox with Music Taggers!

The closest thing we have to VQGAN+CLIP for music!

Unsupervised Source Separation By Steering Pretrained Music Models

Read the paper here. Submitted to ICASSP 2022.

Abstract

We showcase an unsupervised method that repurposes deep models trained for music generation and music tagging for audio source separation, without any retraining. An audio generation model is conditioned on an input mixture, producing a latent encoding of the audio used to generate audio. This generated audio is fed to a pretrained music tagger that creates source labels. The cross-entropy loss between the tag distribution for the generated audio and a predefined distribution for an isolated source is used to guide gradient ascent in the (unchanging) latent space of the generative model. This system does not update the weights of the generative model or the tagger, and only relies on moving through the generative model's latent space to produce separated sources. We use OpenAI's Jukebox as the pretrained generative model, and we couple it with four kinds of pretrained music taggers (two architectures and two tagging datasets). Experimental results on two source separation datasets show this approach can produce separation estimates for a wider variety of sources than any tested supervised or unsupervised system. This work points to the vast and heretofore untapped potential of large pretrained music models for audio-to-audio tasks like source separation.
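A minimal sketch of that steering loop in PyTorch (the function and variable names are placeholders, the taggers are assumed to output per-tag probabilities, and the real system also applies masks over the mixture; see the repository for the actual implementation):

```python
import torch
import torch.nn.functional as F

def steer(mixture, encode, decode, taggers, target_tags, steps=200, lr=10.0):
    """Move through the frozen generator's latent space so that the decoded
    audio matches `target_tags` (e.g. vocal tags set to 1, all others to 0)."""
    # Condition on the mixture: start from its latent encoding.
    z = encode(mixture).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        audio = decode(z)  # generate audio from the current latents
        # Cross-entropy between the predicted tag distribution and the target.
        loss = sum(F.binary_cross_entropy(tagger(audio), target_tags)
                   for tagger in taggers)
        opt.zero_grad()
        loss.backward()    # gradients reach z only; no model weights change
        opt.step()

    return decode(z).detach()
```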

Try it yourself!

Click here to see our GitHub repository.

Run it yourself with the Colab notebook: Open in Colab

Example Output — Separation

MUSDB18 and Slakh2100 examples coming soon!

Audio examples are not displayed on https://github.com/ethman/tagbox; please click here to see the demo page.

TagBox excels at separating prominent melodies from sparse mixtures.

Wonderwall by Oasis - Vocal Separation

Mixture


TagBox Output

| hyperparam | setting |
| --- | --- |
| fft size(s) | 512, 1024, 2048 |
| lr | 10.0 |
| steps | 200 |
| tagger model(s) | fcn, hcnn, musicnn |
| tagger data | MTAT |
| selected tags | All vocal tags |
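For reference, these settings map onto the Colab parameters roughly as follows (the keys below are illustrative, not the notebook's exact variable names):

```python
# Wonderwall vocal-separation settings from the table above, as a config dict.
wonderwall_config = dict(
    fft_sizes=[512, 1024, 2048],               # multi-resolution masks
    lr=10.0,                                   # latent gradient step size
    steps=200,                                 # optimization iterations
    tagger_models=["fcn", "hcnn", "musicnn"],  # pretrained tagger ensemble
    tagger_data="MTAT",                        # MagnaTagATune-trained taggers
    selected_tags="all vocal tags",
)
```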

Howl's Moving Castle, Piano & Violin Duet - Violin Separation

Mixture


TagBox Output

| hyperparam | setting |
| --- | --- |
| fft size(s) | 512, 1024, 2048 |
| lr | 10.0 |
| steps | 100 |
| tagger model(s) | fcn, hcnn, musicnn |
| tagger data | MTG-Jamendo |
| selected tags | Violin |

Smoke On The Water, by Deep Purple - Vocal Separation

Mixture


TagBox Output

| hyperparam | setting |
| --- | --- |
| fft size(s) | 512, 1024, 2048 |
| lr | 5.0 |
| steps | 200 |
| tagger model(s) | fcn, hcnn |
| tagger data | MTAT |
| selected tags | All vocal tags |

Example Output - Improving Perceptual Quality & "Style Transfer"

Adding multiple FFT sizes helps with perceptual quality

Similar to multi-scale spectral losses, using masks computed at multiple FFT sizes noticeably improves the perceptual quality of the output.

Mixture


TagBox with fft_size=[1024]

Notice the warbling effects in the following example:


TagBox with fft_size=[1024, 2048]

Those warbling effects are mitigated by using two FFT sizes:

These results, however, are not reflected in the SDR evaluation metrics.
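For illustration, multi-resolution masking can be sketched as below. This is an assumption about the implementation detail: a soft ratio mask is built from the generated estimate at each FFT size, applied to the mixture, and the reconstructions are averaged.

```python
import torch

def multi_fft_mask(mixture, estimate, n_ffts=(1024, 2048)):
    """Mask the mixture with the estimate at several FFT sizes and average."""
    outs = []
    for n_fft in n_ffts:
        hop, win = n_fft // 4, torch.hann_window(n_fft)
        M = torch.stft(mixture, n_fft, hop_length=hop, window=win, return_complex=True)
        S = torch.stft(estimate, n_fft, hop_length=hop, window=win, return_complex=True)
        mask = (S.abs() / (M.abs() + 1e-8)).clamp(0, 1)   # soft ratio mask
        outs.append(torch.istft(mask * M, n_fft, hop_length=hop, window=win,
                                length=mixture.shape[-1]))
    return torch.stack(outs).mean(dim=0)                  # combine resolutions
```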

"Style Transfer"

Removing the masking step enables Jukebox to generate any audio that optimizes the selected tag. In some situations, TagBox picks out the melody and resynthesizes it, but it adds lots of artifacts, making it sound like the audio was recorded in a snowstorm.
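In code terms this is just a branch on the masking flag (reusing the multi_fft_mask sketch above; use_mask mirrors the Colab parameter of the same name):

```python
def postprocess(mixture, generated, use_mask=True):
    if use_mask:
        return multi_fft_mask(mixture, generated)  # masked mixture: separation
    return generated            # raw Jukebox output: "style transfer"
```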

Mixture


"Style Transfer"

Here, we optimize the "guitar" tag without the mask. Notice that the "All it says to you" melody sounds like a guitar being plucked in a snowstorm:



Cite

If you use this in your academic research, please cite the following:

@misc{manilow2021unsupervised,
  title={Unsupervised Source Separation By Steering Pretrained Music Models}, 
  author={Ethan Manilow and Patrick O'Reilly and Prem Seetharaman and Bryan Pardo},
  year={2021},
  eprint={2110.13071},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}

Comments
  • Guidance on reproducing reported SDRi in the paper

    Hi,

    First of all, great work! And thanks for sharing the code, much appreciated!

    I'm trying to reproduce the vocals portion of the MUSDB18 results shown in Table 1 of the paper, but I'm getting really bad SDRi results.

    (1) For the data preprocessing, I cut the original mixtures into 5-second segments where the vocals are active (in some cases the vocal part is only silence);

    (2) For separation, I'm using the code snippet from the Colab notebook in the repo. In my implementation, my parameters are:

    TAGGER_SR = 16000  # Hz
    JUKEBOX_SAMPLE_RATE = 44100  # Hz
    
    # tagger source
    tagger_training_data = 'MagnaTagATune' #@param ["MTG-Jamendo", "MagnaTagATune"] {allow-input: false}
    tag = 'Vocals'
    
    # audio processing parameters
    fft512 = True 
    fft1024 = True 
    fft2048 = True 
    
    n_ffts = []
    if fft512:
        n_ffts.append(512)
    if fft1024:
        n_ffts.append(1024)
    if fft2048:
        n_ffts.append(2048)
    
    # network architecture selections
    fcn = True #@param {type:"boolean"}
    hcnn = True #@param {type:"boolean"} 
    musicnn = True #@param {type:"boolean"}
    crnn = False #@param {type:"boolean"}
    sample = False #@param {type:"boolean"}
    se = False #@param {type:"boolean"}
    attention = False #@param {type:"boolean"}
    short = False #@param {type:"boolean"}
    short_res = False #@param {type:"boolean"}
    
    # separation params
    use_mask = True
    lr = 5.0  
    steps = 30 
    

    (3) For evaluating SDRi, I'm using the asteroid package instead of museval (when using museval, the SDR can easily be changed just by multiplying the audio samples by a scalar, even without evaluating SI-SDR).

    (4) Also, I'm using the saved *_masked.wav files to compute the SDR (actually, the *_raw_masked.wav files get a higher SDR).

    So I'm wondering which step could be causing the bad results? Thank you so much!

    opened by gzhu06 10
  • Ignore non-instrument tags in loss calculation

    Right now we set the ground-truth values of all non-instrument tags to 0.0, but we still compute a loss on the Jukebox'd audio for those tags, so TagBox creates audio that drives the non-instrument tags toward 0.0 as well. We really should ignore the non-instrument tags altogether, i.e., set the weight of the non-instrument tags to 0.0 when calculating the loss at every iteration.
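    A minimal sketch of that change (tag indices and variable names are illustrative, not repo code):

    # Zero the per-tag loss weight for non-instrument tags instead of
    # forcing their targets to 0.0.
    weights = torch.ones(num_tags)
    weights[non_instrument_idx] = 0.0
    loss = F.binary_cross_entropy(tag_probs, target_tags, weight=weights)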

    opened by ethman 0