MidiTok

MidiTok is a package for MIDI encoding / tokenization for deep neural networks. It "tokenizes" MIDI files just as text is tokenized in NLP, so they can be used with Transformers or RNNs.

MidiTok features most of the known MIDI encoding strategies, and is built around the idea that they all share common parameters and methods.

Install

pip install miditok

MidiTok uses MIDIToolkit, which itself uses Mido to read and write MIDI files.

Examples

Tokenize a MIDI

from miditok import REMIEncoding, get_midi_programs
from miditoolkit import MidiFile

# Our parameters
pitch_range = range(21, 109)
beat_res = {(0, 4): 8, (4, 12): 4}
nb_velocities = 32
additional_tokens = {'Chord': True, 'Rest': True, 'Tempo': True,
                     'rest_range': (2, 8),  # (half, 8 beats)
                     'nb_tempos': 32,  # nb of tempo bins
                     'tempo_range': (40, 250),  # (min, max)
                     'Program': False}

# Creates the tokenizer and loads a MIDI
tokenizer = REMIEncoding(pitch_range, beat_res, nb_velocities, additional_tokens)
midi = MidiFile('path/to/your_midi.mid')

# Converts MIDI to tokens, and back to a MIDI
tokens = tokenizer.midi_to_tokens(midi)
converted_back_midi = tokenizer.tokens_to_midi(tokens, get_midi_programs(midi))

# Converts just a selected track
tokenizer.current_midi_metadata = {'time_division': midi.ticks_per_beat, 'tempo_changes': midi.tempo_changes}
piano_tokens = tokenizer.track_to_tokens(midi.instruments[0])

# And convert it back (the last arg stands for (program number, is drum))
converted_back_track, tempo_changes = tokenizer.tokens_to_track(piano_tokens, midi.ticks_per_beat, (0, False))

Tokenize a dataset

MidiTok will save your encoding parameters in a config.txt file, so you can keep track of how your files were tokenized.

from miditok import REMIEncoding
from pathlib import Path

# Creates the tokenizer and lists the file paths
tokenizer = REMIEncoding()  # uses default parameters
paths = list(Path('path', 'to', 'dataset').glob('**/*.mid'))

# A validation method to discard MIDIs we do not want
def midi_valid(midi) -> bool:
    if any(ts.numerator != 4 or ts.denominator != 4 for ts in midi.time_signature_changes):
        return False  # time signature different from 4/4
    if midi.max_tick < 10 * midi.ticks_per_beat:
        return False  # this MIDI is too short
    return True

# Converts MIDI files to tokens saved as JSON files
tokenizer.tokenize_midi_dataset(paths, 'path/to/save', midi_valid)

Write a MIDI file from tokens

from miditok import REMIEncoding
import torch

# Creates the tokenizer
remi_enc = REMIEncoding()  # uses default parameters from constants.py

# The tokens, let's say produced by your Transformer, 4 tracks of 500 tokens
tokens = torch.randint(low=0, high=len(remi_enc.event2token), size=(4, 500)).tolist()

# The instruments, here piano, violin, french horn and drums
programs = [(0, False), (41, False), (61, False), (0, True)]

# Convert to MIDI and save it
generated_midi = remi_enc.tokens_to_midi(tokens, programs)
generated_midi.dump('path/to/save/file.mid')  # could have been done above by giving the path argument

Encodings

The figures below show the following music sheet represented as token sequences by each encoding.

Music sheet example

In the figures, tokens are stacked vertically, starting from index 0 at the bottom.

MIDI-Like

This is the strategy used in the first generative Transformer and RNN / LSTM models for symbolic music. It encodes the MIDI messages (Note On, Note Off, Velocity and Time Shift) directly into tokens, in a pure "MIDI way".

NOTES:

  • Rests act exactly like Time Shifts. It is therefore recommended to choose a minimum rest matching your first beat resolution, so that time is shifted with the same accuracy. For instance, if your first beat resolution is (0, 4): 8, you should choose a minimum rest of 8 (see the sketch below).
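A minimal sketch of such a configuration, assuming a MIDILikeEncoding class that takes the same constructor arguments as REMIEncoding above (the parameter values here are only illustrative):

from miditok import MIDILikeEncoding

pitch_range = range(21, 109)
beat_res = {(0, 4): 8, (4, 12): 4}  # first beat resolution: 8 samples per beat
nb_velocities = 32
additional_tokens = {'Chord': False, 'Rest': True, 'Tempo': False,
                     'rest_range': (8, 8),  # min rest = 1/8 of a beat (matches the first beat resolution), max = 8 beats
                     'nb_tempos': 32, 'tempo_range': (40, 250),
                     'Program': False}

tokenizer = MIDILikeEncoding(pitch_range, beat_res, nb_velocities, additional_tokens)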

MIDI-Like figure

REMI

Proposed with the Pop Music Transformer, this is a "position-based" representation. Time is represented with "Bar" and "Position" tokens, which indicate respectively when a new bar begins and the current position within the bar. A note is represented as a succession of Pitch, Velocity and Duration tokens.

NOTES:

  • In the original REMI paper, the tempo information is in fact a succession of two token types: a "Tempo Class" token, which indicates whether the tempo is fast or slow, and a "Tempo Value" token, which gives its value within that class. In MidiTok, a single Tempo token encodes the tempo value, quantized into a number of bins set in the parameters (as done for velocities).
  • Including tempo tokens in a multitrack task with REMI is not recommended, as generating several tracks would lead to multiple and ambiguous tempo changes. In MidiTok, only the tempo changes of the first track are kept in the final created MIDI.
  • A Position token always follows a Rest token, so that the position of the notes that follow is explicitly stated. Bar tokens can follow Rest tokens, depending on their respective values and your parameters.

REMI figure

Compound Word

Introduced with the Compound Word Transformer, this representation is similar to the REMI encoding. The key difference is that tokens of different types describing the same "event" are combined and processed at the same time by the model; for instance, the Pitch, Velocity and Duration tokens of a note are combined. The greatest benefit of this strategy is the reduced sequence length it produces, which means less time and memory consumption, as Transformers (with softmax attention) have a quadratic complexity.

You can combine the embeddings in your model however you want. The CP Word authors concatenated the embeddings and applied a projection matrix, resulting in a d-dimensional vector (d being the model dimension).

At decoding, the easiest way to predict multiple tokens (the approach used by the original authors) is to project the output vector of your model with several projection matrices, one for each token type.
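As a concrete illustration, here is a minimal PyTorch sketch (not part of MidiTok) of this input-combination / multi-head-output scheme; the vocabulary sizes and dimensions are hypothetical:

import torch
from torch import nn

class CPWordInputOutput(nn.Module):
    """Sketch: combines per-type embeddings into one vector, and decodes with one head per token type."""
    def __init__(self, vocab_sizes, token_dim=64, d_model=512):
        super().__init__()
        self.embeddings = nn.ModuleList([nn.Embedding(n, token_dim) for n in vocab_sizes])
        self.proj_in = nn.Linear(token_dim * len(vocab_sizes), d_model)  # projection matrix
        self.heads = nn.ModuleList([nn.Linear(d_model, n) for n in vocab_sizes])  # one head per token type

    def combine(self, tokens):
        # tokens: (batch, seq_len, nb_token_types) integer tensor
        embs = [emb(tokens[..., i]) for i, emb in enumerate(self.embeddings)]
        return self.proj_in(torch.cat(embs, dim=-1))  # (batch, seq_len, d_model)

    def decode(self, hidden):
        # hidden: (batch, seq_len, d_model), e.g. the output of your Transformer
        return [head(hidden) for head in self.heads]  # one logits tensor per token type

# Hypothetical usage with 5 token types
module = CPWordInputOutput(vocab_sizes=[32, 128, 32, 64, 16])
tokens = torch.randint(0, 16, (2, 100, 5))
x = module.combine(tokens)   # in practice, feed x through your Transformer first
logits = module.decode(x)    # then predict each token type from the output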

Compound Word figure

Structured

Presented with the Piano Inpainting Application, it is similar to the MIDI-Like encoding but uses Duration tokens instead of Note-Off. The main advantage of this encoding is the consistent token type transitions it imposes, which can greatly speed up training. The structure is: Pitch -> Velocity -> Duration -> Time Shift -> ... (Pitch again). To keep this property, no additional token can be inserted in MidiTok's implementation.

Structured figure

Octuple

Introduced with Symbolic Music Understanding with Large-Scale Pre-Training (MusicBERT). Each note of each track is the combination of multiple embeddings: Pitch, Velocity, Duration, Track, current Bar, current Position and additional tokens. The main benefits are the reduced sequence length, the multitrack capability, and a simple structure that is easy to decode. The Bar and Position embeddings can act as a positional encoding, but the authors of the original paper still applied a token-wise positional encoding afterward.

NOTES:

  • In MidiTok, the tokens are first sorted by time, then track, then pitch values.
  • This implementation uses Program tokens to distinguish tracks, based on their MIDI program. Hence, two tracks with the same program will be treated as one.
  • Time signature tokens are not implemented in MidiTok.
  • Octuple Mono is a modified version with no program embedding at each time step.

Octuple figure

MuMIDI

Presented with the PopMAG model, this representation is mostly suited for multitrack tasks. Time is based on Position and Bar tokens, as in REMI and Compound Word. The key idea of MuMIDI is to represent every track in a single sequence: at each time step, "Track" tokens preceding note tokens indicate which track they belong to. MuMIDI also includes a "built-in" positional encoding mechanism: at each time step, the embeddings of the current bar and current position are merged with the token. For a note, the Pitch, Velocity and Duration embeddings are also merged together.

NOTES:

  • In MidiTok, the tokens are first sorted by time, then track, then pitch values.
  • In the original MuMIDI, Chord tokens are placed before Track tokens. In MidiTok we place them after, as a chord is produced by a single instrument, and several instruments can each produce a chord at the same time step.
  • This implementation uses Program tokens to distinguish tracks, based on their MIDI program. Hence, two tracks with the same program will be treated as one.
  • As in the original MuMIDI implementation, MidiTok distinguishes pitch tokens of drums from pitch tokens of other instruments. More details in the code.

MuMIDI figure

Create your own

You can easily create your own encoding strategy and benefit from the MidiTok framework. Just create a class inheriting from the MIDITokenizer base class, and override the track_to_tokens, tokens_to_track, _create_vocabulary and _create_token_types_graph methods with your tokenization strategy.

We encourage you to read the docstring of the Vocabulary class to learn how to use it for your strategy.
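As a starting point, a skeleton could look like the following; the method signatures shown here are assumptions for illustration, so refer to the MIDITokenizer base class for the exact ones:

from miditok import MIDITokenizer

class MyEncoding(MIDITokenizer):
    """A hypothetical custom encoding strategy."""

    def track_to_tokens(self, track):
        # Converts a miditoolkit Instrument into a list of tokens
        raise NotImplementedError

    def tokens_to_track(self, tokens, time_division, program=(0, False)):
        # Converts a list of tokens back into a miditoolkit Instrument (and tempo changes)
        raise NotImplementedError

    def _create_vocabulary(self, *args, **kwargs):
        # Builds the Vocabulary object mapping events to token indices
        raise NotImplementedError

    def _create_token_types_graph(self):
        # Returns a dict describing the allowed token type transitions
        raise NotImplementedError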

Features

Common parameters

Every encoding strategy shares some common parameters around which the tokenizers are built:

  • Pitch range: the MIDI norm can represent pitch values from 0 to 127, but the GM2 specification recommends 21 to 108 for piano, which covers the recommended pitch values for all MIDI programs. Notes with pitches below or above this range can be discarded or clipped to the limits.
  • Beat resolution: the number of samples within a beat. MidiTok handles this in a flexible way, with a dictionary of the form {(0, 4): 8, (3, 8): 4, ...}. The keys are tuples indicating a range of beats, e.g. 0 to 4 for the first bar. The values are the resolutions, in samples per beat, of the given ranges, here 8 for the first one. This way you can create a tokenizer with durations / time shifts of different lengths and resolutions.
  • Number of velocities: the number of velocity values you want to represent. For instance, if you set this parameter to 32, the velocities of the notes will be quantized into 32 values from 0 to 127 (see the sketch after this list).
  • Additional tokens: specifies which additional tokens, bringing information like chords, should be included. Note that each encoding is compatible with different additional tokens.
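To illustrate the velocity quantization mentioned above, here is a minimal sketch of the idea (MidiTok's exact internal binning may differ):

import numpy as np

nb_velocities = 32
velocity_bins = np.linspace(0, 127, nb_velocities, dtype=np.int32)  # 32 evenly spaced velocity values

# Maps raw MIDI velocities to their closest bin value
raw_velocities = np.array([31, 64, 100])
quantized = velocity_bins[np.abs(velocity_bins[:, None] - raw_velocities).argmin(axis=0)]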

Check constants.py to see how these parameters are constructed.

Additional tokens

MidiTok offers the possibility to insert additional tokens in the encodings. These tokens bring additional information about the structure and content of MIDI tracks, which can be explicitly used to train a neural network.

  • Chords: indicate the presence of a chord at a certain time step. MidiTok uses a chord detection method based on onset times and durations. This allows MidiTok to detect chords precisely and without ambiguity, unlike most chord detection methods for symbolic music, which are based on chroma features.
  • Rests: include "Rest" events whenever a segment of time is silent, i.e. no note is played within it. This token type is decoded as a "Time Shift" event, meaning the time will be shifted according to its value. You can choose the minimum and maximum rest values to represent (the default is 1/2 beat to 8 beats). Note that rests shorter than one beat are only divisible by the first beat resolution, e.g. a rest of 5/8 of a beat will be a succession of Rest_0.4 and Rest_0.1, where the first number indicates the rest duration in beats and the second in samples / positions.
  • Tempos: specify the current tempo. This allows a model to be trained to predict tempo changes alongside the notes, except where noted in the table below. Tempo values are quantized over the range and number of bins you specify (the default is 32 tempos from 40 to 250).
  • Programs: used to specify an instrument / MIDI program. MidiTok only offers the possibility to include these tokens in the vocabulary for you, but won't use them itself. If you need to model multitrack symbolic music with methods other than Octuple / MuMIDI, MidiTok leaves you the choice / task of representing the track information the way you want. You can do it as in LakhNES or MMM.

Additionally, MidiTok offers to include Program tokens in the vocabulary of MIDI-Like, REMI and CP Word. We do not consider them additional tokens though, as MidiTok does not use them itself; they are intended for you to insert, for instance at the beginning of each track sequence as a sort of Start Of Sequence token.
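Continuing the first example above, a hedged sketch of prepending such a Program token to a track's token sequence; the key format for event2token ('Program_0' for piano) is an assumption, so check your tokenizer's vocabulary:

# Hypothetical: prepend the Program token of the piano track as a Start Of Sequence token
# (assumes Program tokens were included in the vocabulary and are keyed as 'Program_<number>')
program_token = tokenizer.event2token['Program_0']
piano_tokens = [program_token] + piano_tokens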

Token     MIDI-Like  REMI     Compound Word  Structured  Octuple  MuMIDI
Chord     yes        yes      yes            no          no       yes
Rest      yes        yes      yes            no          no       no
Tempo     yes (1)    yes (1)  yes (1)        no          yes (2)  yes (2)
Program   yes        yes      yes            no          yes (3)  yes (3)

(1) Should not be used with multiple tracks. Otherwise, at decoding, only the events of the first track will be considered.
(2) Only used in the input as additional information; at decoding, no tempo token should be predicted, i.e. predicted tempo tokens will not be considered.
(3) Integrated by default.

Limitations

For the concerned tokenization methods, MidiTok only considers a 4/4 time signature for now. This means that each bar is assumed to cover 4 beats, and each beat is the duration of a quarter note.

Future updates will support other time signatures, and time signature changes for compatible tokenizations.

Contributions

Contributions are gratefully welcomed; feel free to send a PR if you want to add an encoding strategy or speed up the code. Just make sure to pass the tests.

Citations

@article{midilike2018,
    title={This time with feeling: Learning expressive musical performance},
    author={Oore, Sageev and Simon, Ian and Dieleman, Sander and Eck, Douglas and Simonyan, Karen},
    journal={Neural Computing and Applications},
    year={2018},
    publisher={Springer}
}
@inproceedings{remi2020,
    title={Pop Music Transformer: Beat-based modeling and generation of expressive Pop piano compositions},
    author={Huang, Yu-Siang and Yang, Yi-Hsuan},
    booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
    year={2020}
}
@inproceedings{cpword2021,
    title={Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs},
    author={Hsiao, Wen-Yi and Liu, Jen-Yu and Yeh, Yin-Cheng and Yang, Yi-Hsuan},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2021}
}
@misc{structured2021,
    title={The Piano Inpainting Application},
    author={Gaëtan Hadjeres and Léopold Crestel},
    year={2021},
    eprint={2107.05944},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}
@inproceedings{mumidi2020,
    author = {Ren, Yi and He, Jinzheng and Tan, Xu and Qin, Tao and Zhao, Zhou and Liu, Tie-Yan},
    title = {PopMAG: Pop Music Accompaniment Generation},
    year = {2020},
    publisher = {Association for Computing Machinery},
    booktitle = {Proceedings of the 28th ACM International Conference on Multimedia}
}
@misc{octuple2021,
    title={MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training}, 
    author={Mingliang Zeng and Xu Tan and Rui Wang and Zeqian Ju and Tao Qin and Tie-Yan Liu},
    year={2021},
    eprint={2106.05630},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}

Acknowledgments

We acknowledge Aubay, the LIP6, LERIA and ESEO for the financing and support of this project.
