
Embodied exploration of deep latent spaces in interactive dance-music performance

Sarah Nabi, Philippe Esling, Geoffroy Peeters and Frédéric Bevilacqua.

In collaboration with the dancer/choreographer Marie Bruand.

This repository is linked to our paper presented at the 9th International Conference on Movement and Computing (MOCO'24). Please visit our GitHub page for supplementary materials and examples.

In this work, we investigate the use of deep audio generative models in interactive dance-music performance. We introduce a motion-sound interactive system integrating a deep audio generative model and propose three embodied interaction methods to explore deep audio latent spaces through movement. Please refer to the paper for further details.

You can find the Max/MSP patches for the three proposed embodied interaction methods in the code/ folder. We also provide tutorial videos in the Usage section.

NB: You can use the Max/MSP patches with other IMU sensors by replacing the riotbitalino object.

Installation and requirements

Our motion-sound interactive system is implemented in Max/MSP.

First, you need to install the required dependencies.

We used R-IoT IMU motion sensors (accelerometers and gyroscopes) together with the MuBu library and the Gestural toolkit for real-time motion capture and analysis.
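
As a quick illustration outside of Max/MSP, the sketch below listens to an IMU streaming OSC over UDP using the python-osc package. The listening port and the catch-all handler are assumptions; adapt them to your sensor's configuration.

```python
# Minimal sketch (not part of the paper's patches): inspect incoming IMU data
# streamed as OSC over UDP, e.g. from an R-IoT or another sensor.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def print_imu(address, *values):
    # values typically carry accelerometer and gyroscope channels
    print(address, values)

dispatcher = Dispatcher()
dispatcher.set_default_handler(print_imu)  # catch every incoming OSC address

# Assumed listening IP/port; configure the sensor to stream to them.
server = BlockingOSCUDPServer(("0.0.0.0", 8888), dispatcher)
server.serve_forever()
```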

For deep audio generation, we relied on the RAVE model, which enables fast, high-quality audio waveform synthesis in real time on a standard laptop CPU, and on the nn_tilde external to load our pre-trained RAVE models in Max/MSP.

You can download pre-trained RAVE models here.
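
If you want to sanity-check a downloaded model outside Max/MSP, the following minimal Python sketch loads the TorchScript export and calls the same encode/decode methods that nn_tilde exposes. The file name is a placeholder, and tensor shapes depend on the chosen model.

```python
# Hedged sketch: inspect a pre-trained RAVE export in Python.
import torch

model = torch.jit.load("pretrained_rave_model.ts").eval()  # placeholder path

with torch.no_grad():
    audio = torch.zeros(1, 1, 2**16)           # (batch, channels, samples) of silence
    z = model.encode(audio)                    # latent trajectory: (1, latent_dim, frames)
    print("latent shape:", tuple(z.shape))
    reconstruction = model.decode(z)           # back to an audio waveform
    print("audio shape:", tuple(reconstruction.shape))
```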

Usage

Interaction I1: direct motion exploration

[Figure: interaction1 patch]

Tutorial video: [linked video thumbnail]
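
The sketch below is only an illustrative reading of I1, not the actual patch: a few motion descriptors are scaled and written directly into RAVE latent dimensions before decoding. The model path, latent size, and scaling are assumptions.

```python
# Illustrative sketch of direct motion exploration (I1).
import torch

model = torch.jit.load("pretrained_rave_model.ts").eval()  # placeholder path
latent_dim, frames = 8, 64                                  # assumed sizes; match your model

def decode_from_motion(descriptors, scale=2.0):
    """descriptors: motion features in [0, 1], one per controlled latent dimension."""
    z = torch.zeros(1, latent_dim, frames)
    for dim, value in enumerate(descriptors):
        z[0, dim, :] = scale * (value - 0.5)   # center and scale each descriptor
    with torch.no_grad():
        return model.decode(z)                 # sound driven directly by movement

audio = decode_from_motion([0.2, 0.8, 0.5])
```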

Interaction I2: local exploration around existing latent trajectories

[Figure: interaction2 patch]

Tutorial video: [linked video thumbnail]
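
Again as an illustrative reading rather than the actual patch, I2 can be pictured as adding a small, movement-driven offset around a latent trajectory obtained by encoding an existing sound. The model path, stand-in audio, and exploration radius below are assumptions.

```python
# Illustrative sketch of local exploration around an existing latent trajectory (I2).
import torch

model = torch.jit.load("pretrained_rave_model.ts").eval()   # placeholder path
reference_audio = torch.randn(1, 1, 2**16)                   # stand-in for a recorded sound

with torch.no_grad():
    reference_z = model.encode(reference_audio)              # the existing latent trajectory

def decode_around(motion_offsets, radius=0.5):
    """motion_offsets: one value in [-1, 1] per latent dimension, derived from movement."""
    offset = radius * torch.tensor(motion_offsets).view(1, -1, 1)
    with torch.no_grad():
        return model.decode(reference_z + offset)            # local deviation around the reference

latent_dim = reference_z.shape[1]
audio = decode_around([0.1, -0.3] + [0.0] * (latent_dim - 2))
```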

Interaction I3: implicit mapping between motion descriptors and latent trajectories

  • First, the training phase: synchronously record movement with sound to temporally align both signals' features, and train the HMR model to capture the implicit movement-sound relationship.

[Figure: interaction3_train patch]

  • Second, the performance phase: select the trained HMR model and activate the RAVE synthesis decoder for inference (a simplified sketch of both phases is given below).

[Figure: interaction3_inference patch]

Tutorial video: [linked video thumbnail]
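
The patches implement this mapping with Hidden Markov Regression (HMR) from the MuBu toolkit; the sketch below swaps in a k-nearest-neighbours regressor purely to illustrate the two phases: train on temporally aligned motion and latent features, then predict latent frames from live motion. The model path, random stand-in data, and the assumed ~2048-sample compression ratio are placeholders.

```python
# Illustrative two-phase sketch of I3 with a stand-in regressor (not HMR).
import numpy as np
import torch
from sklearn.neighbors import KNeighborsRegressor

model = torch.jit.load("pretrained_rave_model.ts").eval()    # placeholder path

# --- Training phase: synchronous motion/sound recording (random stand-in data)
n_frames, n_motion_features = 500, 6
motion_features = np.random.rand(n_frames, n_motion_features)       # recorded movement features
with torch.no_grad():
    # ~2048 samples per latent frame is an assumption; it varies per model.
    latent = model.encode(torch.randn(1, 1, n_frames * 2048))       # encode the recorded sound
latent_frames = latent[0].T.numpy()                                  # (frames, latent_dim)
n = min(len(motion_features), len(latent_frames))                    # align both feature streams
regressor = KNeighborsRegressor(n_neighbors=5).fit(motion_features[:n], latent_frames[:n])

# --- Performance phase: new movement drives the RAVE decoder
new_motion = np.random.rand(64, n_motion_features)                   # live motion descriptors
predicted = regressor.predict(new_motion)                            # (64, latent_dim)
z = torch.from_numpy(predicted.T).float().unsqueeze(0)               # (1, latent_dim, 64)
with torch.no_grad():
    audio = model.decode(z)                                          # synthesized sound
```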

Acknowledgments

This work has been supported by the Paris Ile-de-France Région in the framework of DIM AI4IDF, and by Nuit Blanche-Ville de Paris. We extend our heartfelt thanks to Marie Bruand, without whom this study would not have been possible. We are also deeply grateful to our friends and colleagues from the STMS-IRCAM lab, particularly Victor Paredes, Antoine Caillon and Victor Bigand.

Citation

@inproceedings{nabi2024embodied,
  title={Embodied exploration of deep latent spaces in interactive dance-music performance},
  author={Nabi, Sarah and Esling, Philippe and Peeters, Geoffroy and Bevilacqua, Fr{\'e}d{\'e}ric},
  booktitle={Proceedings of the 9th International Conference on Movement and Computing},
  pages={1--8},
  year={2024}
}
