TUMO workshop - Reading and computing music with AI

This repository contains the main code and tasks for the TUMO workshop "Reading and computing music with AI", led by Chahan Vidal-Gorène (Calfa) and Baptiste Queuche (Calfa).

Week 1: Optical Music Recognition (OMR)

Goal: analyze and recognize scanned music scores in order to generate audio files.
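To give a feel for the last step of that pipeline, here is a minimal sketch of turning a recognized score into an audio-ready MIDI file. It assumes the OMR step (e.g. oemer) has already produced a MusicXML file and uses the music21 library, which is not part of the workshop tool list; the file names are placeholders.

```python
from music21 import converter

# Parse the MusicXML produced by the OMR step (placeholder file name)
# and write it out as a MIDI file that any synthesizer can play.
score = converter.parse("recognized_score.musicxml")
score.write("midi", fp="recognized_score.mid")
```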

Objectives:

  • Understand how neural networks work and how they apply to music recognition;
  • Discover the main tasks in computer vision and how they apply to music;
  • Learn how to build an AI project: annotate documents, then train and evaluate a model.

Technical objectives: learning conda, YOLO, Label Studio and oemer.
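As a rough illustration of the YOLO part, the sketch below runs symbol detection on a scanned page with the ultralytics package. The weights file and image path are hypothetical placeholders; in the workshop they would come from the Label Studio annotation and training steps.

```python
from ultralytics import YOLO

# Load weights fine-tuned on annotated score symbols (hypothetical file name).
model = YOLO("score_symbols.pt")

# Detect symbols on a scanned page (hypothetical image path) and print the boxes.
results = model("komitas_page.png")
for box in results[0].boxes:
    print(int(box.cls), box.xyxy.tolist())
```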

Data: a specific focus will be placed on Armenian music scores (mainly by Komitas).

Full instructions: see week 1.

Week 2: Music Generation using AI (Generative Networks)

Goal:

  • Generate audio music files from music scores;
  • Generate music using GANs and style transfer (see the sketch right after this list).
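The sketch below shows the GAN idea in its simplest form: a generator that maps random noise to a piano-roll slice, and a discriminator that scores whether a slice looks like real music. It is written with TensorFlow/Keras; the shapes and layer sizes are illustrative assumptions, not the workshop's actual model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100, seq_len=64, n_pitches=128):
    # Maps a random noise vector to a (seq_len, n_pitches) piano-roll slice.
    return tf.keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(seq_len * n_pitches, activation="sigmoid"),
        layers.Reshape((seq_len, n_pitches)),
    ])

def build_discriminator(seq_len=64, n_pitches=128):
    # Scores how "real" a piano-roll slice looks (1 = real music, 0 = generated).
    return tf.keras.Sequential([
        layers.Input(shape=(seq_len, n_pitches)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
```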

Objectives:

  • Understand how generative networks work;
  • Understand object classification;
  • Train generative networks;
  • Build a web app for demo purposes.

Technical objectives: learning Streamlit, TensorFlow and prismRNN.
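For the demo web app, a minimal Streamlit sketch could look like the following. The `generate_audio` call is a placeholder for the trained models, not an actual function in this repository.

```python
import streamlit as st

st.title("Reading and computing music with AI - demo")

uploaded = st.file_uploader("Upload a scanned score", type=["png", "jpg", "jpeg"])
if uploaded is not None:
    st.image(uploaded, caption="Input score")
    # The OMR and generation models would run here; generate_audio is a placeholder.
    # audio_bytes = generate_audio(uploaded)
    # st.audio(audio_bytes, format="audio/wav")
```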

Data: a specific focus will be placed on Armenian music.

Full instructions: see week 2.

Final project

Download the final web app with the trained models running.
