Mapping Strategies


Mapping from performance to synthesis parameters

Once performance parameters are available (your controller outputs), there are many ways to relate them to the final synthesis parameters, i.e., the inputs of your synthesis program or synthesizer.

The most common approach is to assign ONE controller output to ONE synthesis parameter (or synthesizer input), for example:

  1. breath pressure -> volume
  2. embouchure (lip pressure) -> pitch
  3. fingering -> pitch, etc.

This is commonly known as one-to-one mapping. Depending on the gestural controller and on the synthesis engine available, it can be a perfectly adequate solution, and it is especially useful for quick sketches and prototypes (see the example below).
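To make this concrete, here is a minimal one-to-one mapping sketch in Python. The controller outputs (breath pressure, lip pressure, fingering) follow the list above; the parameter names and ranges are hypothetical placeholders, not the API of any particular controller or synthesizer.

    # One-to-one mapping: each controller output drives exactly one synthesis
    # parameter. All names and ranges are illustrative assumptions.
    def one_to_one_map(breath_pressure, lip_pressure, fingering_midi_note):
        """Return one synthesis parameter per controller output."""
        return {
            "volume": breath_pressure,                 # breath pressure -> volume (0..1)
            "pitch_bend": (lip_pressure - 0.5) * 2.0,  # embouchure -> pitch bend (-1..1)
            "pitch": fingering_midi_note,              # fingering -> base pitch (MIDI note)
        }

    # Example: a moderately blown, slightly "lipped up" middle C
    print(one_to_one_map(breath_pressure=0.7, lip_pressure=0.6, fingering_midi_note=60))

Because each output influences exactly one parameter, the mapping is easy to build and to reason about.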

Obviously, depending on the application, a one-to-one mapping may prove insufficient, since it cannot reproduce the richness of control of, for example, an acoustic instrument, where a single musical parameter typically depends on several performance parameters. In such cases one could use a "convergent" (many-to-one) mapping, in which more than one controller parameter accounts for a single musical parameter.
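As an illustration, the sketch below lets three controller outputs jointly determine a single musical parameter (pitch). The weighting and the overblow threshold are arbitrary assumptions chosen for clarity, not a model of any real instrument.

    # Convergent (many-to-one) mapping: several controller outputs combine
    # into one synthesis parameter. Offsets and thresholds are illustrative.
    def convergent_pitch(fingering_midi_note, lip_pressure, breath_pressure):
        """Pitch (in MIDI note units) determined jointly by three controller outputs."""
        embouchure_offset = lip_pressure - 0.5                    # +/- half a semitone
        overblow_offset = 12.0 if breath_pressure > 0.9 else 0.0  # crude octave jump
        return fingering_midi_note + embouchure_offset + overblow_offset

    # Hard blowing pushes the fingered middle C up an octave, slightly sharp
    print(convergent_pitch(fingering_midi_note=60, lip_pressure=0.65, breath_pressure=0.95))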

Conversely, one may wish to map controller parameters to synthesis parameters in other ways, for instance through metaphors such as spatial shapes (spheres, etc.; see below), which let the performer manipulate sound characteristics while reducing the complexity of the manipulation.
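One possible sketch of such a metaphor, assuming an interpolation scheme of our own choosing: the performer moves a single point in a three-dimensional space, and the distances from that point to a few stored "preset" locations are used to blend complete sets of synthesis parameters. The presets, parameter names, and inverse-distance weighting are all made up for illustration.

    import math

    # Metaphor-based mapping: a point in 3-D space blends between preset
    # parameter sets by inverse distance. Presets and names are illustrative.
    PRESETS = {
        (0.0, 0.0, 0.0): {"brightness": 0.2, "vibrato": 0.0, "volume": 0.4},
        (1.0, 0.0, 0.0): {"brightness": 0.9, "vibrato": 0.1, "volume": 0.8},
        (0.0, 1.0, 0.0): {"brightness": 0.5, "vibrato": 0.8, "volume": 0.6},
    }

    def spatial_map(x, y, z):
        """Blend the preset parameter sets according to proximity to (x, y, z)."""
        weights = {pos: 1.0 / (math.dist((x, y, z), pos) + 1e-6) for pos in PRESETS}
        total = sum(weights.values())
        blended = {}
        for pos, params in PRESETS.items():
            for name, value in params.items():
                blended[name] = blended.get(name, 0.0) + value * weights[pos] / total
        return blended

    print(spatial_map(0.5, 0.2, 0.0))

Here three control dimensions drive many synthesis parameters at once, so the performer shapes the sound through the spatial metaphor rather than parameter by parameter.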

In short, the best mapping strategy will depend on the application, the synthesis model, and so on. Below we list some articles on mapping as well as some examples of systems using different mapping strategies.



Basic references and other examples


Some basic papers on mapping:


And some papers about different mapping strategies:


