Mapping
In the CMI model of figure 1.1, we showed the mapping layer as the "connection"
between the gesture interface and the sound generation unit.
Mapping is of central importance in the control of synthesis, particularly when
one uses synthesis by signal models, such as additive and subtractive synthesis.
In the case of physical models, the mapping strategies are already hard-coded
inside the model, as relations between the system's inputs.
As an example of the importance of the mapping layer, let us cite Vertegaal
and Eaglestone: "... problems of human-synthesizer interfacing in the field
of computer music have been tackled primarily through the development of
innovative hardware controllers. However, the use of these as generic controllers
is limited, because researchers often fail to develop accompanying formalisms
for mapping low-dimensional controller data to high-dimensional parameter
space of the generative sound synthesis algorithms" [15].
We therefore propose, in the case of signal models, to consider two
different mapping groups [16]:
- Mapping of performer gestures (the controller's outputs) to musical functions,
or what have been called "abstract parameters" in Escher.
- Mapping of musical functions (abstract parameters) to synthesis parameters.
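As a minimal sketch of this two-group decomposition (in Python; the function
names, the choice of musical functions and the scalings are illustrative
assumptions, not taken from [3] or [16]), the chain can be written as two
composed functions, the first synthesis-independent and the second
synthesis-dependent:

    def gestures_to_functions(controller_outputs):
        """First group: performer gestures -> musical functions.
        Independent of the synthesis model."""
        x, y = controller_outputs            # two raw controller values in [0, 1]
        return {"loudness": x, "pitch": y}

    def functions_to_synthesis(functions):
        """Second group: musical functions -> synthesis parameters.
        Specific to one (hypothetical) two-parameter oscillator model."""
        return {"osc_amp": functions["loudness"],
                "osc_freq": 220.0 * 2 ** functions["pitch"]}  # pitch in octaves above 220 Hz

    synthesis_params = functions_to_synthesis(gestures_to_functions([0.8, 1.0]))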
We considered the first group in [3],
and came up with the mapping classification below. The idea here is to
propose some basic units that may be used together in different layers,
simply as a means of visualising the influence of the gestures on the musical
functions. These strategies are independent of the synthesis model used and
relate only to the kind of gestural control envisaged.
- "One-to-one" mapping: each of the controller's output parameters is associated
with one independent musical function. This is a useful scheme for rapid
prototyping (usually in MIDI systems), but it is usually the least expressive
when implemented in an instrumental situation. Its application can profit
directly from the results of Cadoz and of Vertegaal et al.
- "Divergent" mapping (or one-to-many): one output is related to more
than one musical function, in interdependent ways. The system should possess
information about this interdependency, for instance in the form of rules.
Divergent mapping can be useful for controlling the general shape of a
parameter's evolution, in a way similar to a conductor's gestures. It may
therefore prove limited if applied alone to the control of sound parameters
in an instrumental approach; it would mostly provide a macro-structural
level of control.
- "Convergent" mapping (or many-to-one): more than one gestural controller
output is inter-related before being assigned to a musical function.
Convergent mapping is useful for simulating an instrument's behaviour (one
could talk here of physical modelling of gestures), but it should also be
applied in several layers and together with different strategies in order
to allow better control of the overall system. It is useful for a
micro-structural level of control (e.g. fine modelling of the reed behaviour).
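As a rough illustration, the three basic units could be sketched as follows
(the controller outputs, musical functions and combination rules below are
hypothetical, the interdependency of the divergent case is reduced to a single
hard-coded rule, and all values are assumed normalised to [0, 1]):

    def one_to_one(breath):
        """One controller output drives one independent musical function."""
        return {"loudness": breath}

    def divergent(lever):
        """One output drives several interdependent functions (one-to-many)."""
        return {"loudness": lever,
                "brightness": lever ** 2}    # interdependency expressed as a rule

    def convergent(breath, lip):
        """Several outputs are combined into one function (many-to-one)."""
        return {"reed_opening": breath * (1.0 - 0.5 * lip)}   # assumed combination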
More than one strategy may be used at the same time to provide different
levels of control, according to the application envisaged.
One example is shown in the figure below, where we related different
outputs from a WX7 MIDI controller to simulate the behaviour of the clarinet
reed (as described earlier) [3].
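In the spirit of that figure, a convergent unit inter-relating breath and lip
pressure before assigning them to reed-related musical functions might look
like the following sketch; the cross-coupling rule is an assumption made for
illustration, not the mapping actually used in [3]:

    def wx7_reed_mapping(breath, lip):
        """Convergent layer: breath and lip pressure (MIDI values, 0-127)
        are inter-related before assignment to the reed functions."""
        b, l = breath / 127.0, lip / 127.0
        reed_pressure = b * (0.7 + 0.3 * l)   # lip pressure scales the effective breath
        reed_damping = 0.2 + 0.6 * l          # stronger embouchure damps the reed more
        return {"reed_pressure": reed_pressure, "reed_damping": reed_damping}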
If we return to the relation between gestures and musical functions
and consider a one-to-one mapping of the MIDI wind controller outputs to
musical functions, this relation becomes direct (as opposed to the actual
acoustic-instrument relation) and yields:
- Breath pressure (exciter gesture) controls absolute (and relative) amplitude
(loudness) and timbre.
- Lip pressure (parametric modulation gesture) controls relative pitch
(frequency vibrato).
- Key value or fingering (selection gesture) controls absolute pitch.
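This direct relation amounts to three independent assignments, as in the sketch
below (the scalings are assumptions; the WX7 outputs are taken as standard
0-127 MIDI values plus a MIDI note number):

    def wx7_one_to_one(breath, lip, key):
        """One-to-one mapping of the three WX7 outputs to the three
        musical functions listed above."""
        return {
            "loudness": breath / 127.0,                    # exciter gesture
            "vibrato_cents": (lip - 64) / 64.0 * 50.0,     # modulation gesture
            "pitch_hz": 440.0 * 2 ** ((key - 69) / 12.0),  # selection gesture
        }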
As we have seen, in the case of a real instrument, where different strategies
are present at the same time, this relation becomes more complex. Nevertheless,
this typology can be useful for drawing a simplified picture of the instrument's
actual behaviour and for identifying the basic sensors needed in the design.
The second mapping group is usually "divergent" (e.g. in the case of
additive synthesis), where the number of synthesis parameters (amplitudes,
frequencies and phases of hundreds of partials) is enormous compared to the
number of musical functions. Different strategies should be used for different
synthesis models, i.e. this second group is synthesis-dependent.
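For instance, a divergent second-group mapping for additive synthesis might fan
three musical functions out to the frequencies and amplitudes of a hundred
harmonic partials (phases omitted here); the exponential spectral roll-off
controlled by "brightness" below is an assumed rule, chosen only to make the
one-to-many fan-out concrete:

    import math

    def functions_to_partials(pitch_hz, loudness, brightness, n_partials=100):
        """Three musical functions -> 2 * n_partials synthesis parameters."""
        freqs = [pitch_hz * k for k in range(1, n_partials + 1)]   # harmonic series
        amps = [loudness * math.exp(-(k - 1) / (1.0 + 20.0 * brightness))
                for k in range(1, n_partials + 1)]                 # spectral envelope
        return freqs, amps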
Some approaches to this question have been proposed in [17]
and [18].