Abstract:
In this paper we present an overview of gestural capture strategies for
composers/performers wishing to develop work in the area of gestural
controllers. Our objective is to relate the choice of a sensor or the
design of a new gestural interface to the qualities of the gesture(s) one
wishes to "capture." Whether using an isolated sensor or designing a
completely new gestural interface system, our goal is to create a framework
in which one can make intelligent decisions about the ergonomic, practical,
and ultimately artistic uses of gestural controllers.
In order to do so, we begin by discussing the notion of a generic "virtual
musical instrument." We consider this to be an instrument in which the
gestural controller is independent of the sound production (synthesis)
component. Such an instrument divides into the following components (a code
sketch follows the list):
1. Gestural input
2. Gestural controller (primary feedback to user)
3. Mapping layer
4. Sound Production (secondary feedback to user)
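A minimal sketch of this four-component decomposition, in Python, follows.
All class names and interfaces (read, map, render) are hypothetical
illustrations of the architecture, not an implementation from the paper.

    class GesturalController:
        """Senses gestural input and supplies primary feedback to the user."""
        def read(self):
            # e.g. {"breath": 0.7, "lip": 0.3} as normalized sensor values
            raise NotImplementedError

    class MappingLayer:
        """Translates gestural data into synthesis parameters."""
        def map(self, gesture):
            raise NotImplementedError

    class SoundProduction:
        """Synthesizes audio; the sound itself is secondary feedback."""
        def render(self, params):
            raise NotImplementedError

    class VirtualInstrument:
        """Chains the components: input -> controller -> mapping -> sound."""
        def __init__(self, controller, mapping, synth):
            self.controller = controller
            self.mapping = mapping
            self.synth = synth

        def tick(self):
            gesture = self.controller.read()    # 1-2: gestural input via controller
            params = self.mapping.map(gesture)  # 3: mapping layer
            self.synth.render(params)           # 4: sound production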
Beginning with the gestural input, we critically summarize and relate some
of the available work, sometimes contradictory, that has been done to
establish classification schemes for different types of gesture. After
synthesizing these differing approaches into a general framework, we
proceed by considering the relationship of gestures to sensors and to
desired musical function, discussing properties of sensors and their
appropriateness for different applications. The theoretical framework is
related to a sensor classification system developed in IRCAM's pedagogy
department as an aid for finding the best technological solution for a
specific musical need.
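By way of illustration, the sketch below pairs desired gestural qualities
with commonly used sensor technologies. The entries are generally known
pairings chosen as examples; they are an assumption on our part, not the
actual IRCAM classification system.

    SENSOR_CANDIDATES = {
        "finger/hand pressure": ["force-sensing resistor (FSR)"],
        "continuous position":  ["linear potentiometer", "ribbon controller"],
        "flexion (bending)":    ["bend/flex sensor"],
        "free-space motion":    ["accelerometer", "ultrasound ranging"],
        "breath pressure":      ["air-pressure transducer"],
    }

    def suggest_sensors(gesture_quality):
        """Return candidate sensor technologies for a desired gestural quality."""
        return SENSOR_CANDIDATES.get(gesture_quality, [])

    print(suggest_sensors("breath pressure"))  # ['air-pressure transducer']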
We present the mapping layer by demonstrating some of the work the authors
have done at IRCAM toward implementing a more expressive wind controller.
Using a three-tier system of mapping strategies, we show how the mapping
layer is the key to turning gestural sensors into musical tools. Applying
different mapping strategies to a standard WX7 MIDI wind controller, we
create an expressive instrument that more accurately captures the expert
gestures of a wind instrumentalist. The system is demonstrated with an
additive-synthesis timbral space running in FTS, IRCAM's real-time
signal-processing system.
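As a concrete illustration of why the mapping layer matters, the following
sketch contrasts a simple one-to-one mapping with a convergent (many-to-one)
mapping of controller data. The coupling of breath and lip pressure to
loudness and brightness is an assumption made for illustration; it is not
the exact three-tier scheme described in the paper.

    def one_to_one(breath, lip):
        """Each gestural parameter drives exactly one synthesis parameter."""
        return {"loudness": breath, "brightness": lip}

    def convergent(breath, lip):
        """Several gestural parameters combine to drive one synthesis
        parameter, closer to the coupled behavior of acoustic wind
        instruments: breath drives loudness, while breath and lip
        pressure together shape brightness."""
        return {
            "loudness": breath,
            "brightness": breath * (0.5 + 0.5 * lip),  # breath/lip interact
        }

    gesture = {"breath": 0.8, "lip": 0.4}
    print(one_to_one(**gesture))  # {'loudness': 0.8, 'brightness': 0.4}
    print(convergent(**gesture))  # {'loudness': 0.8, 'brightness': 0.56}

Under the convergent mapping, the same gesture yields a different timbral
result, which is the kind of coupling an expressive mapping can exploit.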
--------------------------------------------------------
About the authors:
Joseph Butch Rovan is a composer, performer, and researcher currently
working at the Institut de Recherche et de Coordination Acoustique/Musique
(IRCAM) in Paris. He is the recipient of the 1996 George Ladd Prix de
Paris, a two-year composition fellowship given by the University of
California at Berkeley for study in Paris. His electronic and acoustic
scores have been performed throughout the U.S., Canada, and Europe, and his
work on gestural controllers has been featured in IRCAM's journal
"Résonance" as well as in the proceedings of the conference "KANSEI: The
Technology of Emotion."
Marcelo Wanderley is a doctoral candidate at the Institut de Recherche et de
Coordination Acoustique/Musique (IRCAM) in Paris. He holds a degree in
engineering and has published papers on circuit design as well as gestural
controllers. Most recently, the IRCAM journal "Résonance" featured his
article, co-written with Stephan Tassart and Philippe Depalle, on gestural
research at IRCAM; his work has also been published in the proceedings of
the conference "KANSEI: The Technology of Emotion."
--------------------------------------------------------