GDGM - Report from the Second Meeting
May 29th, 1997 (14:30-16:00) - Salle B22 - IRCAM
Introduction
The second meeting started with a discussion of the needs of the group and future directions. M. Battier stressed the importance of the group's contributions to a bibliography; he felt that a focus on gathering resources and developing a typology of gesture technology and mapping strategies would be a unique and valuable contribution. (NB: For a survey of some gesture capture technologies, take a look at Axel Mulder's list of webpages.)
Based on our discussions, we will develop a preliminary taxonomy of terms relating to gesture, into which we will categorize the bibliographic and other resource materials. (NB: remember to take a look at the initial bibliography available on the GDGM web page.) We hope to have a new version ready by the next meeting (12/06/97).
A Controller Can Never Be a Musical Instrument! (Or Can It?)
We proceeded to engage in a lively discussion stemming from J. Fineberg's statement that controllers cannot reach the expressive level of traditional instruments. This was an interesting departure point, as it called into question many basic tenets of alternative controller design, including the issue of whether one should even attempt to use gestural transducers at all. The key question seemed to be: how can one make use of performer skill, developed over years of practice on a traditional instrument, in the implementation of an alternate controller? Conversely, how can one develop specific performance skills on an alternate controller?
In response, M. Wanderley brought up the work done by Tod Machover on his Hyperinstruments, where sensors were added to actual acoustic instruments (or possibly worn by the player) in order to provide expressive extensions beyond what is possible on the acoustic instrument alone. This was an attempt to harness traditional performing gesture and skill in an electro-acoustic context. However, it was pointed out that no matter what the gestural capture mechanism is, the uncoupling of controller and sound-producing device--common to most controller designs today--leaves a critical gap of unpredictability between gesture and resulting sound. Change the synthesis patch and one changes all the causal relationships between gesture and output. (NB: Check out M. Wanderley's comments about the Marseille Congress on Musical Gesture for discussions about some of these items).
Practice Makes Perfect
This led in turn to a consideration of what requirements one might propose for a viable musical instrument. Based on ideas that B. Rovan and M. Wanderley are working on, B. Rovan suggested that one basic requirement of any expressive musical instrument is its capacity for virtuosity. M. Battier amplified this point by suggesting that people decide to learn an instrument because they have a prior knowledge of what is possible with the instrument; in fact, their aspirations are often the driving force that makes them spend the necessary time to learn an instrument. Several people pointed out the expressive performance capabilities of earlier electronic instruments, such as the Theremin, the Ondes Martenot, the Sal-Mar of Salvatore Martirano, and earlier analog synthesizers. It was noted that earlier instruments were primarily played by performers, whereas today many instruments grow out of a composer's experimentations, and thus remain a tool of the composer. (NB: M. Battier promises to give us some photos of the Sal-Mar for the web page--watch for it!)
S. Dubnov pointed out that there is a difference between the composer's and the instrumentalist's point of view: one is traditionally interested in the organization and control of relationships among material, whereas the other is mostly interested in expressivity and timbral control of an instrument. These two orientations naturally lead to different instrument designs. This was reinforced by S. Goto, who described his motivations for the design and use of his MIDI Violin. Goto pointed out that he considers the MIDI Violin a controller for granular synthesis, albeit one that uses traditional violin performance gestures. Apart from its physical appearance and approximate mass, the controller does not mimic an actual violin.
Cause and Effect
It was pointed out that another drawback of many controllers is the lack of a direct relationship between cause and effect. R. Woehrmann mentioned the PadMaster and Radio Drum combination, developed at CCRMA, as an example of cause-and-effect ambiguity (specifically, concerning the ability of a performer to use drum gestures to trigger events as well as remap the controller). He considered this confusing because the use of similar gestures for totally different effects destroyed the cause and effect relationship. Everybody agreed that a clear cause and effect relationship is essential for an expressive controller.
R. Woehrmann also pointed out the importance of physical interaction with a musical instrument: an instrument has to be "fun" in order to be of interest. Paradoxically, given all the controller development and research over the years, everyone agreed that the most immediate and clearly unambiguous controller device is a simple pedal. A. Mihalic described his recent work involving an array of eight continuous controller pedals. As a testament to its simplicity, Mihalic described how quickly the performer (a flutist, in this case) adapted to the system, and even extrapolated his initial ideas to create other unforeseen combinations of pedal parameters. He designed his system for musicians--performers who will not have access to any technical support, and who do not want to handle complicated relationships between parameters and gesture.
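To make the flavor of such a setup concrete, here is a minimal sketch in Python of a one-pedal-per-parameter mapping. It is purely illustrative and not a description of Mihalic's actual system: the parameter names, ranges, and the assumption of eight 7-bit MIDI continuous controller values are all invented for the example.

    # Hypothetical sketch of a direct pedal-to-parameter mapping:
    # eight continuous controller values (0-127), each driving exactly
    # one synthesis parameter, with no remapping and no hidden state.

    PEDAL_TARGETS = [
        ("grain_size",      0.005,   0.200),  # seconds
        ("grain_density",   1.0,   100.0),    # grains per second
        ("transposition", -12.0,    12.0),    # semitones
        ("filter_cutoff", 100.0,  8000.0),    # Hz
        ("reverb_mix",      0.0,     1.0),
        ("pan",            -1.0,     1.0),
        ("amplitude",       0.0,     1.0),
        ("vibrato_depth",   0.0,     1.0),
    ]

    def map_pedals(cc_values):
        """Scale eight MIDI CC values (0-127) linearly onto their target ranges."""
        params = {}
        for (name, low, high), cc in zip(PEDAL_TARGETS, cc_values):
            params[name] = low + (high - low) * (cc / 127.0)
        return params

    # Example: one snapshot of the eight pedal positions.
    print(map_pedals([64, 32, 127, 90, 10, 64, 100, 0]))

The appeal of such a scheme, as discussed above, is precisely that each gesture has one obvious, predictable effect.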
Making Connections...
Moving on from the issue of simple controller relationships, such as with a pedal, we discussed mapping strategies as a means of creating expressive and musically useful relationships between gesture and sound parameters. Dubnov made an interesting observation: he related mapping to a question of controller dimensions versus sound dimensions. (We should clarify that dimension is taken here to mean available control possibilities, or "degrees of freedom.") It was pointed out that in the case where a controller has more dimensions than the sound generator, either some information will not be directed to any sound parameter, and is thus lost/not used, or the different information streams will be merely redundant. If more than one input gesture is coupled to produce an effect, we begin to approach a model somewhat like an acoustic instrument, with its multiple couplings of input gestures.
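As a rough illustration of this dimension question (a sketch under assumed names, not anything presented at the meeting), the following Python fragment contrasts a one-to-one mapping, in which surplus controller dimensions are simply dropped, with a coupled, many-to-one mapping in which several gesture dimensions jointly drive a single sound parameter. The gesture and parameter names are hypothetical.

    # Illustrative sketch: mapping controller dimensions to sound dimensions.

    def one_to_one(gesture, param_names):
        """Map each gesture dimension to one sound parameter; surplus
        gesture dimensions are discarded (lost / not used)."""
        return {name: gesture[i] for i, name in enumerate(param_names)}

    def coupled(gesture, weights):
        """Couple several gesture dimensions into one sound parameter,
        here as a simple normalized weighted sum."""
        return sum(w * g for w, g in zip(weights, gesture)) / sum(weights)

    # Four hypothetical gesture dimensions (e.g. pressure, position, tilt, speed),
    # each normalized to the range 0..1.
    gesture = [0.8, 0.2, 0.5, 0.9]

    # Only two sound parameters are available: two controller dimensions go unused.
    print(one_to_one(gesture, ["amplitude", "brightness"]))

    # Many-to-one coupling: pressure and speed jointly drive brightness,
    # loosely analogous to the coupled excitations of an acoustic instrument.
    print(coupled([gesture[0], gesture[3]], weights=[0.7, 0.3]))
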
Wrap up
In the end we covered a lot of ground (including a brief foray into the merits of improvisation--but that's another story). We concluded by mapping out some ideas for future meetings. These included having some people give short presentations on relevant articles, as opposed to the original "reading group" idea. This was decided so that attendance would not be discouraged simply because someone has not done the reading.
Coming Soon...
Next Meeting:
It was decided that the next meeting will take place on 12/June/97 at 14:30, in room B22. The initial subject for the meeting is the application of C. Cadoz's gesture typology and of the works of J. Pressing (Computer Music Journal, Vol. 14, No. 1, pp. 12-25) and R. Vertegaal (ICMC'96, pp. 308-311) as starting points for the design of novel gestural controllers. There will be a short presentation by M. Wanderley and B. Rovan on the articles, followed by a general discussion on the subject.
And, as usual: All comments and suggestions are welcome!
IRCAM, 29 May 1997
B. Rovan
M. Wanderley