MMM: Exploring Conditional Multi-Track Music Generation with the Transformer and the Johann Sebastian Bach Chorales Dataset.
Implementation of the paper "MMM: Exploring Conditional Multi-Track Music Generation with the Transformer" (paper). Uses OpenAI's GPT-2 to compose music.
Find me on LinkedIn and say hello.
If you find an issue or have a feature request, please report it here on GitHub.
If you find this repository useful, please star it.
This repository has been created in cooperation with Pyoneer. I am very grateful!
This repository allows you to train GPT-2 on the Johann Sebastian Bach chorale dataset. You can train both MMMTrack and MMMBar from the paper.
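The MMM paper represents each piece as a flat sequence of event tokens (piece, track, bar, and note events) that GPT-2 is trained on like ordinary text. A minimal sketch of that idea follows; the exact token names and the note/time values are illustrative assumptions, not the repo's actual vocabulary:

```python
# Hedged sketch of an MMM-style event-token sequence as described in the
# paper. Token names and values here are illustrative assumptions; the
# repo's real tokenizer may use a different vocabulary.
bar = [
    "BAR_START",
    "NOTE_ON=60", "TIME_DELTA=4", "NOTE_OFF=60",  # e.g. a quarter-note middle C
    "BAR_END",
]
track = ["TRACK_START", "INST=0"] + bar + ["TRACK_END"]
piece = ["PIECE_START"] + track + ["PIECE_END"]

# Before training, a tokenizer maps each token string to an integer id.
token_to_id = {tok: i for i, tok in enumerate(sorted(set(piece)))}
ids = [token_to_id[t] for t in piece]
print(len(ids))  # 10 tokens in this toy piece
```

Because tracks and bars are delimited by explicit start/end tokens, the model can be conditioned on existing tracks or bars by placing them in the prompt, which is what enables the conditional multi-track generation in the paper.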
How to run
- Install the dependencies
pip install transformers tokenizers torch music21 note_seq
- Clone this repository
git clone https://github.com/AI-Guru/MMM-JSB.git
- Train MMMTrack with
- Train MMMBar with
Sampling: run the Jupyter notebook.
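The notebook samples from a trained checkpoint with Hugging Face transformers. A minimal sketch of the same idea is below, using a tiny randomly initialized GPT-2 as a stand-in for a trained model; the config sizes and the prompt token ids are illustrative assumptions:

```python
# Hedged sketch: sampling a token sequence from a GPT-2 model with
# Hugging Face transformers. The model here is randomly initialized,
# standing in for a trained checkpoint; vocab size, model dimensions,
# and prompt ids are assumptions for illustration only.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(vocab_size=512, n_positions=256, n_embd=64, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)
model.eval()

# Hypothetical prompt: ids that would encode e.g. PIECE_START TRACK_START INST=0.
prompt = torch.tensor([[1, 2, 3]])
with torch.no_grad():
    out = model.generate(
        prompt,
        max_length=32,       # total length including the prompt
        do_sample=True,      # sample instead of greedy decoding
        top_k=50,            # restrict sampling to the 50 most likely tokens
        pad_token_id=0,
    )
print(out.shape)  # (1, 32)
```

The generated ids would then be decoded back into event tokens and converted to MIDI; with a real checkpoint, `GPT2LMHeadModel.from_pretrained` replaces the random initialization.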
Training should take roughly one hour per model on a GPU for the JSB dataset.
A pretrained network can be found here: https://ai-guru.s3.eu-central-1.amazonaws.com/mmm-jsb/mmm_jsb_checkpoints.zip
What is missing?
- TensorFlow support is rudimentary.
- Data preprocessing and training on the Lakh dataset.
- Implementation as a tool or a DAW plugin.
Released under the Apache-2.0 License.