Mapping Strategies
Mapping from performance to synthesis parameters
Once performance parameters are available (that is, your controller outputs), there are many ways to relate them to the final synthesis parameters, i.e., the inputs of your synthesis program or synthesizer.
The most common approach is to assign ONE controller output to ONE synthesis parameter (or synthesizer input), for example:
- breath pressure -> volume
- embouchure (lip pressure) -> pitch
- fingering -> pitch, etc.
This is commonly known as one-to-one mapping. Depending on the gestural controller and on the synthesis engine available, it can be a perfectly good solution, and it is especially useful for quick sketches and prototypes; a minimal sketch of the idea is shown below.
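As a minimal sketch (in Python; the parameter names and the 0.0-1.0 ranges are our own illustrative assumptions, not taken from any particular device):

    # A minimal one-to-one mapping: each controller output drives exactly
    # one synthesis parameter. Names and ranges are illustrative only.
    def one_to_one_map(controller):
        return {
            "volume": controller["breath_pressure"],   # breath pressure -> volume
            "pitch_bend": controller["lip_pressure"],  # embouchure -> pitch
            "pitch": controller["fingering"],          # fingering -> pitch
        }

    # One frame of (made-up) controller data:
    frame = {"breath_pressure": 0.7, "lip_pressure": 0.5, "fingering": 0.25}
    print(one_to_one_map(frame))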
Obviously, depending on the application, a one-to-one mapping may prove insufficient, since it cannot reproduce the richness of control of, for example, an acoustic instrument. In this case one may instead use a "convergent" (many-to-one) mapping, where more than one controller parameter accounts for a single musical parameter, as in the sketch below.
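A rough sketch of such a convergent mapping, with an arbitrary weighting invented purely for illustration, could look like this:

    # A convergent (many-to-one) mapping: two controller outputs combine
    # to drive a single synthesis parameter. The weighting is arbitrary,
    # chosen only to illustrate the idea.
    def convergent_map(controller):
        breath = controller["breath_pressure"]
        lip = controller["lip_pressure"]
        # Loudness depends mostly on breath, but the embouchure modulates
        # it, loosely as on an acoustic single-reed instrument.
        loudness = breath * (0.8 + 0.2 * lip)
        return {"loudness": loudness}

    print(convergent_map({"breath_pressure": 0.9, "lip_pressure": 0.4}))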
Conversely, one may wish to map controller parameters to synthesis parameters through metaphors, such as spatial shapes (spheres, etc.; see below), in order to manipulate sound characteristics while reducing the complexity of the manipulation.
In short, the best mapping strategy will depend on the application, the synthesis model, and so on. Below we list some basic articles on mapping, together with examples of systems using different mapping strategies.
Basic references and other examples
Some basic papers on mapping:
- I. Bowler, A. Purvis, P. Manning, and N. Bailey, "On mapping N articulation onto M synthesiser-control parameters," in Proc. Int. Computer Music Conf. (ICMC'90), pp. 181-184, 1990.
- I. Choi, R. Bargar, and C. Goudeseune, "A manifold interface for a high dimensional control space," in Proc. Int. Computer Music Conf. (ICMC'95), pp. 385-392, 1995.
- M. Lee and D. Wessel, "Connectionist models for real-time control of synthesis and compositional algorithms," in Proc. Int. Computer Music Conf. (ICMC'92), pp. 277-280, 1992.
And some papers about different mapping strategies:
- Empty-handed gestures: A. Mulder, S. Fels, and K. Mase, "Empty-handed Gesture Analysis in Max/FTS," in Proceedings of the Kansei - The Technology of Emotion Workshop (Genova, Italy), Oct. 1997.
Mulder et al. propose a shape-manipulation task called sound sculpting, where shape description parameters are mapped to timbral parameters in order to reduce the cognitive load of simultaneous multidimensional control tasks (such as those found in sound design).
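As a rough illustration of this shape-to-timbre idea (this is not Mulder et al.'s actual implementation; the shape descriptors and their timbral targets are invented for the example):

    # A hypothetical virtual sphere: a few shape descriptors control
    # timbre, so the performer reasons about one object rather than
    # many independent dials. All inputs are assumed in 0.0-1.0.
    def sphere_to_timbre(radius, stretch, x_position):
        return {
            "brightness": radius,            # bigger sphere -> brighter sound
            "inharmonicity": stretch,        # deformation -> spectral stretch
            "pan": 2.0 * x_position - 1.0,   # horizontal position -> stereo pan
        }

    print(sphere_to_timbre(radius=0.6, stretch=0.1, x_position=0.75))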
- Neural networks performing the mapping (some examples; a toy sketch follows this list):
- M. Lee, A. Freed, and D. Wessel, "Real-time neural network processing of gestural and acoustic signals," in Proc. Int. Computer Music Conf. (ICMC'91), pp. 277-280, 1991 (and other articles in subsequent ICMCs).
- I. Zannos, P. Modler, and K. Naoi, "Gesture controlled music performance in a real-time network," in Proceedings of the Kansei - The Technology of Emotion Workshop (Genova, Italy), Oct. 1997.
- H. Sawada, N. Onoe, and S. Hashimoto, "Sounds in Hands - A Modifier Using Datagloves and Twiddle Interface," in Proc. Int. Computer Music Conf. (ICMC'97) (Thessaloniki, Greece), pp. 309-312, Sept. 1997.
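As a toy illustration of the neural-network approach (not the networks used in the papers above), a small feed-forward net can map a gesture feature vector to synthesis parameters; the weights below are random placeholders, whereas in practice they would be trained on recorded gesture/sound examples:

    # A toy feed-forward network mapping gesture features to synthesis
    # parameters. Weights are random placeholders, not trained values.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)  # 3 gesture features -> 8 hidden units
    W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)  # 8 hidden units -> 4 synth parameters

    def nn_map(gesture):
        hidden = np.tanh(W1 @ gesture + b1)
        # A sigmoid keeps the synthesis parameters in the range 0-1.
        return 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))

    print(nn_map(np.array([0.2, 0.9, 0.5])))  # e.g. [bend, speed, height]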
- Instrumental gestures: J. Rovan, M. Wanderley, S. Dubnov, and P. Depalle, "Instrumental gestural mapping strategies as expressivity determinants in computer music performance," in Proceedings of the Kansei - The Technology of Emotion Workshop (Genova, Italy), Oct. 1997.
Here we propose mapping strategies from controller outputs to musical functions (in the sense proposed by R. Vertegaal et al. in their ICMC'96 paper; check the references!), using a clarinet timbral space (pitch vs. dynamics) built from additive synthesis models of clarinet sample files.
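A much-simplified sketch of the timbral-space idea (not the actual system described above): additive-synthesis partial amplitudes stored at the corners of a normalised (pitch, dynamics) square are bilinearly interpolated at the current controller position; the corner values are made up for illustration:

    # Bilinear interpolation in a toy (pitch, dynamics) timbral space.
    import numpy as np

    # Amplitudes of 4 partials at the corners:
    # rows = [low pitch, high pitch], columns = [pp, ff]
    corners = np.array([
        [[1.0, 0.3, 0.1, 0.0], [1.0, 0.8, 0.6, 0.4]],   # low pitch
        [[0.9, 0.2, 0.0, 0.0], [1.0, 0.7, 0.5, 0.2]],   # high pitch
    ])

    def timbre_at(pitch, dynamics):
        # pitch and dynamics are normalised to 0.0-1.0.
        low = (1 - dynamics) * corners[0, 0] + dynamics * corners[0, 1]
        high = (1 - dynamics) * corners[1, 0] + dynamics * corners[1, 1]
        return (1 - pitch) * low + pitch * high

    print(timbre_at(pitch=0.5, dynamics=0.75))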