Mapping

In the CMI model of figure 1.1, we showed the mapping layer as the "connection" between the gesture interface and the sound generation unit.

Mapping plays a central role in the control of synthesis, particularly when one uses synthesis by signal models, such as additive or subtractive synthesis. In the case of physical models, the mapping strategies are already hard-coded inside the model, as relations between the system's inputs.

As an example of the importance of the mapping layer, consider Vertegaal and Eaglestone [15]: "... problems of human-synthesizer interfacing in the field of computer music have been tackled primarily through the development of innovative hardware controllers. However, the use of these as generic controllers is limited, because researchers often fail to develop accompanying formalisms for mapping low-dimensional controller data to high-dimensional parameter space of the generative sound synthesis algorithms".

We therefore propose, in the case of signal models, to consider two different mapping groups [16]:

- a first group, relating the controller outputs to (abstract) musical functions, independently of the synthesis model used;
- a second group, relating these musical functions to the parameters of a specific synthesis model.

We considered the first group in [3], and came up with the mapping classification below. The idea is to propose some basic units that may be combined in different layers, simply as a means of visualising the influence of the gestures on the musical functions:

- one-to-one, where one controller output drives one musical function;
- divergent (one-to-many), where one controller output drives several musical functions;
- convergent (many-to-one), where several controller outputs are combined to drive one musical function.

These strategies are independent of the synthesis model used and relate only to the kind of gestural control envisaged. More than one strategy may be used at the same time to provide different levels of control, according to the application.
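As a rough sketch of how these basic units might look in code (the function names, scalings, and the layering example are hypothetical assumptions, not definitions taken from [3]):

    # Minimal sketches of the three basic mapping units; constants
    # and weights are illustrative only.

    def one_to_one(value, scale=1.0):
        """One controller output drives one musical function."""
        return value * scale

    def divergent(value, n=3):
        """One controller output fans out to several musical functions."""
        return [value * (i + 1) / n for i in range(n)]

    def convergent(values, weights=None):
        """Several controller outputs combine into one musical function."""
        weights = weights or [1.0 / len(values)] * len(values)
        return sum(v * w for v, w in zip(values, weights))

    # Units may be combined in layers: two sensor outputs are merged
    # first (convergent), the result then spread over two destinations
    # (divergent).
    layered = divergent(convergent([0.8, 0.4]), n=2)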

One example is shown in the figure below, where we related different outputs from a WX7 MIDI controller to simulate the behaviour of the clarinet reed (as described earlier) [3].
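As a hedged illustration of this kind of convergent strategy, the sketch below combines two WX7 outputs (breath pressure and lip pressure, as 0-127 MIDI values) into the single pressure input of a simplified reed characteristic; the scaling and the nonlinearity are assumptions made for the sake of the example, not the actual mapping of [3].

    def wx7_to_reed(breath_cc, lip_cc):
        """Convergent mapping: two controller outputs, one reed input."""
        breath = breath_cc / 127.0   # normalised breath pressure
        lip = lip_cc / 127.0         # normalised lip pressure
        # Lip pressure modulates how strongly breath drives the reed,
        # loosely emulating the coupling of a real embouchure.
        pressure = breath * (0.5 + 0.5 * lip)
        # Crude saturating characteristic: flow grows with pressure,
        # then falls as the reed closes against the mouthpiece.
        return max(0.0, pressure * (1.0 - pressure))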

If we return to the relation between gestures and musical functions and consider a one-to-one mapping of the MIDI wind controller outputs to musical functions, this relation becomes direct (as opposed to the relation found in the actual acoustic instrument): each controller output drives a single musical function, for instance breath pressure controlling dynamics, lip pressure controlling vibrato, and fingering controlling pitch.
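In code, such a direct mapping reduces to a lookup table; the assignments below follow the example just given and are purely illustrative.

    # Direct routing of controller outputs to musical functions.
    ONE_TO_ONE = {
        "breath_pressure": "dynamics",
        "lip_pressure": "vibrato",
        "fingering": "pitch",
    }

    def route(output_name, value):
        """Forward a controller value unchanged to its musical function."""
        return ONE_TO_ONE[output_name], value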

As we have seen, in the case of a real instrument, where different strategies are present at the same time, this relation is more complicated. Nevertheless, this typology can be useful for drawing a simplified picture of the instrument's actual behaviour and for identifying the basic sensors needed in the design.

The second mapping group is usually "divergent" (e.g. in the case of additive synthesis), where the number of synthesis parameters (the amplitudes, frequencies and phases of hundreds of partials) is enormous compared to the number of musical functions. Different strategies must be used for different synthesis models, i.e. this second group is synthesis-dependent.
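To make the size of this divergence concrete, the sketch below expands three musical functions (pitch, loudness, brightness) into the frequencies and amplitudes of a hundred partials; the harmonic series and the exponential spectral envelope are one simple assumption among many possible strategies, not a method from the references.

    import math

    def expand(pitch_hz, loudness, brightness, n_partials=100):
        """Divergent mapping: 3 musical functions -> 2*n synthesis parameters."""
        partials = []
        for k in range(1, n_partials + 1):
            freq = k * pitch_hz                               # harmonic series
            amp = loudness * math.exp(-(k - 1) / (1.0 + 20.0 * brightness))
            partials.append((freq, amp))
        return partials

    spectrum = expand(220.0, 0.8, 0.3)   # 100 (freq, amp) pairs from 3 values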

Some approaches to this question have been proposed in [17] and [18].
 



