Introduction

A common complaint about electronic music is that it lacks expressivity. In response, much work has been done on developing new and varied synthesis algorithms. However, because traditional acoustic musical sound is a direct result of the interaction between an instrument and the performance gesture applied to it, modeling this expressivity requires modeling not only the instrument itself - whatever the technique or algorithm - but also the physical gesture, in all its complexity. Indeed, in spite of the various methods available to synthesize sound, the ultimate musical expression of those sounds still depends on the capture of the gestures used for control and performance.

In terms of expressivity, however, just as important as the capture of the gesture itself is the manner in which gestural data are mapped onto synthesis parameters. Most work in this area has traditionally focused on one-to-one mappings of control values to synthesis parameters. For physical modeling synthesis this approach may make sense, since the relation between gesture input and sound production is often hard-coded inside the synthesis model. With signal models, however, one-to-one mapping may not be the most appropriate choice, since it does not exploit the higher-level couplings between control gestures that signal models allow.

Additive synthesis, for instance, has the power to synthesize virtually any sound, but it is limited by the difficulty of simultaneously controlling hundreds of time-varying parameters; it is not immediately obvious how the outputs of a gestural controller should be mapped to the frequencies, amplitudes, and phases of sinusoidal partials. Nonetheless, signal models such as additive synthesis have many advantages, including powerful analysis tools as well as efficient synthesis and real-time performance.
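
To make the scale of the problem concrete, the following minimal Python sketch - our illustration, not part of the system described below, assuming only numpy and entirely hypothetical parameter values - synthesizes one second of a 100-partial additive tone. Even this modest model exposes two time-varying controls per partial, i.e. 200 simultaneous parameters, far more than a performer can address one-to-one.

    import numpy as np

    SR = 44100                          # sample rate (Hz)
    K = 100                             # number of sinusoidal partials
    N = SR                              # one second of audio

    # Hypothetical harmonic model: frequencies at integer multiples
    # of 220 Hz, amplitudes rolling off as 1/k.
    freqs = 220.0 * np.arange(1, K + 1)
    amps = 1.0 / np.arange(1, K + 1)

    signal = np.zeros(N)
    for k in range(K):
        # Integrating instantaneous frequency to phase allows each
        # partial's frequency to vary over time, sample by sample.
        inst_freq = np.full(N, freqs[k])
        phase = 2.0 * np.pi * np.cumsum(inst_freq) / SR
        signal += amps[k] * np.sin(phase)

    signal /= np.abs(signal).max()      # normalize to [-1, 1]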

Figure 1 shows the central role of mapping in a virtual musical instrument (one where the gestural controller is independent from the sound source) [Mul94][VUK96], for both signal-model and physical-model synthesis. In the case of signal models, the link between these two blocks appears as a separate mapping layer; in the physical modeling approach, the model itself already encompasses the mapping scheme.


 
Figure 1: A Virtual Instrument representation

In the authors' opinion, the mapping layer is key to solving such control problems, and it remains an undeveloped link between gestural control and synthesis by signal models. Hence this paper's focus on the importance and influence of the mapping strategy in the context of musical expression. We propose a three-way distinction between mapping strategies: one-to-one, divergent, and convergent mapping. Of these three possibilities we consider the third - convergent mapping - the most musically expressive from an ``instrumental'' point of view, although it is not always straightforward to implement.
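
The following schematic Python sketch - ours, with hypothetical controller outputs and synthesis parameters rather than those of the system described below - contrasts the three strategies:

    def one_to_one(breath, lip):
        # Each control value drives exactly one synthesis parameter.
        return {"amplitude": breath, "pitch_bend": lip}

    def divergent(breath):
        # One control value fans out to several synthesis parameters.
        return {"amplitude": breath, "brightness": breath ** 2}

    def convergent(breath, lip):
        # Several control values combine into one synthesis parameter,
        # as on an acoustic instrument, where loudness and timbre are
        # coupled through the physics of the excitation.
        return {"brightness": 0.7 * breath + 0.3 * lip}

Convergent mapping is the least obvious to design precisely because the combining function must be chosen by hand, but it is what recreates the cross-couplings a performer of an acoustic instrument exploits.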

We discuss these mapping strategies using a system consisting of a MIDI wind controller (Yamaha's WX7) [Yam] and IRCAM's real-time digital signal processing environment FTS [DDMS96], implementing control patches and an expressive timbral subspace onto which we map performance gestures. Drawing on one of the authors' experience as a clarinettist, we discuss the WX7 and its inherently non-coupled gesture-capture mechanism, and compare it to the interaction between a performer and an acoustic single-reed instrument, considering the expert gestures involved in expressive clarinet and saxophone performance.

Finally, we discuss methods for morphing between additive models of clarinet sounds recorded under various expressive playing conditions. We show that simple interpolation between partials that exhibit different types of frequency-fluctuation behaviour gives an incorrect result. To preserve the ``naturalness'' that these frequency fluctuations lend to the sound, a correct morphing scheme must explicitly understand and model this effect.
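
As a rough illustration of the pitfall - our own sketch, assuming numpy and two equal-length per-frame frequency tracks taken from the additive analyses; the decomposition shown is illustrative, not the method developed later in the paper:

    import numpy as np

    def naive_morph(f_a, f_b, alpha):
        # Plain linear interpolation also averages the two (generally
        # uncorrelated) fluctuation patterns, flattening their depth
        # and blurring their character.
        return (1.0 - alpha) * f_a + alpha * f_b

    def mean_plus_fluctuation_morph(f_a, f_b, alpha):
        # Interpolate the mean frequencies, but keep one coherent
        # fluctuation pattern instead of averaging two dissimilar ones.
        mean_a, mean_b = f_a.mean(), f_b.mean()
        mean = (1.0 - alpha) * mean_a + alpha * mean_b
        dev = (f_a - mean_a) if alpha < 0.5 else (f_b - mean_b)
        return mean + dev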


