A Knowledge-based, Data-driven Method for Action-sound Mapping

Abstract: This paper presents a knowledge-based, data-driven method that uses action-sound coupling data collected from a group of people to generate multiple complex mappings between a musician's performance movements and sound synthesis. The method relies on a database of multimodal motion data collected from multiple subjects and paired with sound synthesis parameters. First, a series of sound stimuli is synthesised using the sound engine that will be used in performance. Each participant is then asked to listen to each sound stimulus and move as if they were producing the sound with a musical instrument they are given; multimodal motion data is recorded during each performance and paired with the synthesis parameters used to generate that stimulus. The resulting dataset is used to build a topological representation of the subjects' performance movements. This representation is then used to interactively generate training data for machine learning algorithms and to define mappings for real-time performance. To illustrate each step of the procedure, we describe an implementation involving clarinet, motion capture, wearable sensor armbands, and waveguide synthesis.
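The core data structure described in the abstract — motion recordings paired with the synthesis parameters of the stimulus each participant responded to — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the random placeholder data, and the nearest-neighbour averaging used to derive a mapping are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical dataset: each row pairs a motion feature vector (e.g. derived
# from motion capture and armband sensors) with the synthesis parameters that
# generated the sound stimulus the participant was moving along to.
rng = np.random.default_rng(0)
motion_features = rng.random((50, 6))  # 50 recordings, 6 motion features each
synth_params = rng.random((50, 3))     # 3 synthesis parameters per stimulus

def map_motion_to_sound(frame, k=3):
    """Map a live motion frame to synthesis parameters by averaging the
    parameters paired with its k nearest neighbours in the dataset."""
    dists = np.linalg.norm(motion_features - frame, axis=1)
    nearest = np.argsort(dists)[:k]
    return synth_params[nearest].mean(axis=0)

# A new motion frame yields one parameter vector for the sound engine.
params = map_motion_to_sound(rng.random(6))
print(params.shape)  # (3,)
```

In a real-time setting, `map_motion_to_sound` would be called once per incoming sensor frame and its output sent to the synthesis engine; the paper's method additionally builds a topological representation of the movement data rather than mapping over raw feature vectors directly.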
Document type: Conference papers

Cited literature: 27 references

Contributor: Baptiste Caramiaux
Submitted on: Monday, July 22, 2019 - 1:36:03 PM
Last modification on: Monday, November 25, 2019 - 1:56:07 PM

HAL Id: hal-01577885, version 1


Federico Visi, Baptiste Caramiaux, Michael Mcloughlin. A Knowledge-based, Data-driven Method for Action-sound Mapping. NIME 2017: New Interfaces for Musical Expression, May 2017, Copenhagen, Denmark. ⟨hal-01577885⟩
