Ircam - Centre Georges-Pompidou
Équipe Analyse/Synthèse


Back to the GDGM Homepage: Groupe de Discussion à propos du Geste Musical

GDGM - Report from the Third Meeting


June 12th, 1997 - Salle B22 - IRCAM


David Wessel, CNMAT, Invited Lecturer


David Wessel spoke to the GDGM on June 12th. Wessel's talk covered several issues surrounding the central topic of gestural capture and effective real-time performance, divided into three topics, summarized below.

To this end, he started by commenting on the state of the "Reactive Real-Time Computing" (RRTC) project, taking place both at UC Berkeley and here in France (see the websites listed below for supplementary information). The RRTC project strives to bring the issues of latency and true real-time performance to the forefront of computer science, pushing to make real-time performance a concern for language developers. Wessel argued that current systems do not react within controlled, predictable bounds, which makes them only "half" real-time. The goal is a predictable interaction latency, for which he proposed a target of 10 ms with at most ±1 ms of variance.
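
Predictable latency is a bound on jitter, not just on the average delay. As a concrete illustration, the following C sketch (hypothetical, not RRTC project code; the POSIX timer calls and the 10 ms period are choices made here) runs a nominal 10 ms periodic task and reports the worst-case drift from its deadline, which is precisely the quantity Wessel proposed to keep within ±1 ms:

    /* Sketch: measure wake-up jitter of a nominal 10 ms periodic task.
     * Illustrative only.  Compile: cc -O2 jitter.c -o jitter */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 10000000L   /* 10 ms target period */
    #define ITERATIONS 1000

    static long ns_diff(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * 1000000000L + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        struct timespec prev, now;
        struct timespec period = { 0, PERIOD_NS };
        long max_jitter = 0;

        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (int i = 0; i < ITERATIONS; i++) {
            nanosleep(&period, NULL);              /* nominal 10 ms sleep */
            clock_gettime(CLOCK_MONOTONIC, &now);
            long jitter = ns_diff(now, prev) - PERIOD_NS;  /* drift past deadline */
            if (jitter > max_jitter)
                max_jitter = jitter;
            prev = now;
        }
        printf("worst-case jitter: %.3f ms (target variance: +-1 ms)\n",
               max_jitter / 1e6);
        return 0;
    }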

Wessel also discussed a project beginning at CNMAT whose goal is the development of a new data-acquisition device based on the AES/EBU standard. The device will include high-speed ADCs for sensor capture and will transfer digital sensor data to the computer in a multichannel digital audio format via AES/EBU. It will offer 24 input channels with a variable sample rate per input, capable of acquiring signals of up to 4 kHz. The SGI, for which the device will initially be developed, includes a very well-designed audio library that makes parsing such a multichannel data stream relatively easy. The goal is to bring this sensor/audio stream into FTS and treat it directly as a signal, thus bypassing the bottlenecks of conventional analog-to-MIDI converter boxes.
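
As a rough illustration of how such a stream might be handled on the host side, the sketch below deinterleaves a block of 24-channel frames into per-channel sensor buffers. The frame layout, the float sample type, and all names are assumptions made here for illustration; they are not taken from the CNMAT device or the SGI audio library:

    /* Sketch: deinterleave a multichannel sensor/audio block into
     * per-channel buffers, as a host might do after reading
     * AES/EBU-carried frames.  Layout, types, and names are hypothetical. */
    #include <stdio.h>
    #include <stddef.h>

    #define NUM_CHANNELS 24
    #define NFRAMES 8            /* small demo block */

    /* in: interleaved frames ch0..ch23, ch0..ch23, ...
     * out[c]: nframes samples for channel c */
    static void deinterleave(const float *in,
                             float out[NUM_CHANNELS][NFRAMES], size_t nframes)
    {
        for (size_t f = 0; f < nframes; f++)
            for (int c = 0; c < NUM_CHANNELS; c++)
                out[c][f] = in[f * NUM_CHANNELS + c];
    }

    int main(void)
    {
        float in[NFRAMES * NUM_CHANNELS];
        float out[NUM_CHANNELS][NFRAMES];

        /* Fill the demo block so each sample encodes (frame, channel). */
        for (size_t f = 0; f < NFRAMES; f++)
            for (int c = 0; c < NUM_CHANNELS; c++)
                in[f * NUM_CHANNELS + c] = (float)(f * 100 + c);

        deinterleave(in, out, NFRAMES);
        printf("channel 3, frame 5 = %g (expect 503)\n", out[3][5]);
        return 0;
    }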

Wessel also discussed a new controller project that he and Matt Wright presented at this year's ICMC, using a WACOM drawing tablet as a controller for additive synthesis (with the CAST system developed at CNMAT). Having just presented a concert at STEIM using the tablet controller, Wessel described the performance experience and the flexibility of the interface, in which different sound transformations can be assigned to different regions of the tablet's surface "on the fly." One example he gave was using the tablet's pen controller to read selectively through an additive-format file, synthesizing the sound in real time with CAST.
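
The "on-the-fly" region assignment can be pictured as a lookup from pen coordinates to a handler, where reassigning a region simply means swapping its handler pointer. The C sketch below is a hypothetical illustration; the region list, handler signature, and transformation names are invented here and not taken from CAST or the WACOM driver:

    /* Sketch: map tablet pen coordinates to per-region sound
     * transformations.  Regions and handlers are hypothetical. */
    #include <stdio.h>

    typedef void (*transform_fn)(float x, float y, float pressure);

    typedef struct {
        float x0, y0, x1, y1;   /* region bounds, normalized tablet coords */
        transform_fn handler;   /* transformation assigned to this region */
    } region;

    static void scrub_partials(float x, float y, float p)
    {
        (void)y;
        printf("scrub: file position %.2f, gain %.2f\n", x, p);
    }

    static void transpose(float x, float y, float p)
    {
        (void)x; (void)p;
        printf("transpose by %+.1f semitones\n", (y - 0.5f) * 24.0f);
    }

    static region regions[] = {
        { 0.0f, 0.0f, 0.5f, 1.0f, scrub_partials },
        { 0.5f, 0.0f, 1.0f, 1.0f, transpose },
    };

    /* Dispatch a pen event to whichever region contains it; reassigning
     * a region "on the fly" is just overwriting its handler pointer. */
    static void pen_event(float x, float y, float pressure)
    {
        for (size_t i = 0; i < sizeof regions / sizeof regions[0]; i++) {
            region *r = &regions[i];
            if (x >= r->x0 && x < r->x1 && y >= r->y0 && y < r->y1) {
                r->handler(x, y, pressure);
                return;
            }
        }
    }

    int main(void)
    {
        pen_event(0.25f, 0.40f, 0.8f);  /* lands in the scrub region */
        pen_event(0.75f, 0.90f, 0.5f);  /* lands in the transpose region */
        return 0;
    }
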
Finally, he commented on the analysis/synthesis of musical phrases using neural networks (from the SNN project at Stuttgart). The network has 80 output units and generates the sound from envelope and pitch information extracted from the analysis of long phrases: the input units receive the global amplitude and pitch of the current and preceding frames, and the output units provide the amplitude, frequency, and phase of the partials. The network can generate sound in real time for different instruments on an SGI. One problem he mentioned with the current system is that it smooths rapid attacks and transitions, i.e., the network is unable to reproduce them faithfully.
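
As a rough sketch of the kind of network described, the following C code performs one forward pass of a small fully connected net mapping global amplitude/pitch context frames to per-partial output parameters. The input size, the single hidden layer, and the tanh activation are illustrative assumptions made here; the actual trained network is not documented in this report:

    /* Sketch: one forward pass of a small feedforward net mapping global
     * amplitude/pitch context to partial parameters.  Layer sizes, hidden
     * layer, and activation are illustrative assumptions.
     * Compile: cc -O2 nn.c -o nn -lm */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define N_CONTEXT 4                 /* current + 3 preceding frames */
    #define N_IN  (2 * N_CONTEXT)       /* amplitude and pitch per frame */
    #define N_HID 16
    #define N_OUT 80                    /* e.g. amp/freq/phase of partials */

    static float w1[N_HID][N_IN], b1[N_HID];
    static float w2[N_OUT][N_HID], b2[N_OUT];

    static void forward(const float in[N_IN], float out[N_OUT])
    {
        float hid[N_HID];
        for (int h = 0; h < N_HID; h++) {
            float s = b1[h];
            for (int i = 0; i < N_IN; i++)
                s += w1[h][i] * in[i];
            hid[h] = tanhf(s);          /* hidden nonlinearity (assumed) */
        }
        for (int o = 0; o < N_OUT; o++) {
            float s = b2[o];
            for (int h = 0; h < N_HID; h++)
                s += w2[o][h] * hid[h];
            out[o] = s;                 /* linear output units */
        }
    }

    int main(void)
    {
        /* Random weights stand in for the trained network. */
        for (int h = 0; h < N_HID; h++)
            for (int i = 0; i < N_IN; i++)
                w1[h][i] = (float)rand() / RAND_MAX - 0.5f;
        for (int o = 0; o < N_OUT; o++)
            for (int h = 0; h < N_HID; h++)
                w2[o][h] = (float)rand() / RAND_MAX - 0.5f;

        float in[N_IN] = { 0 }, out[N_OUT];
        in[0] = 0.7f;                   /* current-frame amplitude */
        in[1] = 440.0f / 1000.0f;       /* current-frame pitch, scaled */
        forward(in, out);
        printf("first output unit: %f\n", out[0]);
        return 0;
    }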

For more information on Wessel's topics, please see:

  • CNMAT Research page
  • Reactive Control of Synthesis
  • Graphical User Interfaces for Computer Music

    See also his paper presented at this year's ICMC:

  • Wright, M., D. Wessel, and A. Freed, "New Musical Control Structures from Standard Gestural Controllers," Proceedings of the 1997 International Computer Music Conference, pp. 387-390.

