NOTE: Implementation hidden due to Coursera Terms of Service.
Generating Jazz music with a Long Short-Term Memory (LSTM) network using Keras.
Music has been pre-processed into "musical values". The RNN model trains on this dataset to generate a sequence of musical values, which are then post-processed into MIDI music. The post-processing ensures that the same sound isn't repeated too many times, that consecutive notes aren't too far apart in pitch, etc.
- A sequence model can be used to generate musical values, which are then post-processed into midi music.
- Fairly similar models can be used to generate dinosaur names or to generate music, with the major difference being the input fed to the model.
- In Keras, sequence generation involves defining layers once, with shared weights, and then reusing those same layers across the time steps 1, ..., T_x.
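The weight-sharing idea in the last bullet can be sketched without Keras: a single set of recurrent weights is defined once and applied at every time step, which is the analogue of defining a Keras layer outside the loop and calling it repeatedly. The dimensions and the simple tanh cell below are hypothetical choices for illustration, not the assignment's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3-dimensional input values, 5 hidden units, 4 time steps.
n_values, n_a, T_x = 3, 5, 4

# One set of weights, created once and reused at every time step --
# mirroring how a Keras layer is instantiated once and called in a loop.
Wax = rng.standard_normal((n_a, n_values)) * 0.1
Waa = rng.standard_normal((n_a, n_a)) * 0.1
ba = np.zeros(n_a)

def rnn_forward(x_seq):
    """Run the same cell (same Wax, Waa, ba) over each time step."""
    a = np.zeros(n_a)
    states = []
    for t in range(T_x):  # time steps 1, ..., T_x all share the same weights
        a = np.tanh(Wax @ x_seq[t] + Waa @ a + ba)
        states.append(a)
    return np.stack(states)

x_seq = rng.standard_normal((T_x, n_values))
states = rnn_forward(x_seq)
print(states.shape)  # one hidden state per time step: (4, 5)
```

Because the weights live outside the loop, the number of parameters is independent of T_x; only the number of applications of the cell grows with the sequence length.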