DOCTORAL THESIS OF THE UNIVERSITÉ PARIS 6

Speciality: Acoustics, Signal Processing, and Computer Science Applied to Music




DESIGN AND IMPLEMENTATION OF AN INTEGRATED ENVIRONMENT FOR MUSIC COMPOSITION AND SYNTHESIS




Thesis presented by Peter Hanappe,
under the supervision of Emmanuel Saint-James,
prepared at Ircam - Centre Georges-Pompidou,
Musical Representations team,
under the responsibility of Gérard Assayag.


Download the gzipped PostScript version (350k), the PDF version (1.2M), or view the HTML version. A slide presentation is also available.

Abstract

In this text we present an integrated environment for computer music. Our system combines a high-level Scheme interpreter, a real-time synthesizer, and a rich framework for temporal composition and sound synthesis. The environment is entirely written in the Java programming language and can be used in distributed applications. Three aspects of computer music that are generally treated separately - composition, sound synthesis, and interactivity - are tightly integrated in this environment.

The embedded Scheme interpreter offers an interactive programming environment. We show how the underlying Java platform promotes a transparent use of functional Scheme objects throughout the system. These functional objects, which we call programs, are used to describe the complex behaviors of the system. Events, for example, carry with them a program that describes their actions.
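To make this concrete, the following Java sketch illustrates the idea of an event carrying a functional object that describes its action. The names Program and Event are hypothetical and are not taken from the thesis; in the actual system such a program would typically wrap a Scheme closure evaluated by the embedded interpreter.

```java
// Hypothetical sketch of a functional "program" object carried by an event.
// The real classes in the thesis may differ.

interface Program {
    // Apply the program to its arguments; in the real system this would
    // typically invoke a Scheme closure through the embedded interpreter.
    Object apply(Object... args);
}

class Event {
    private final double time;      // logical time of the event
    private final Program action;   // the program describing the event's action

    Event(double time, Program action) {
        this.time = time;
        this.action = action;
    }

    // When the scheduler reaches the event, it simply runs the attached program.
    void fire() {
        action.apply(this);
    }

    double getTime() { return time; }
}
```

With this shape, an event's behavior can be supplied as an ordinary function value, for example `new Event(0.0, args -> { System.out.println("note on"); return null; })`.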

The compositional structures organize both discrete elements and continuous control functions hierarchically. Compositions thus become complex descriptions that control the sound synthesis. The basic element of temporal composition is the activity. Patterns organize activities in time and maintain the temporal relations between them. Changes to the organization are propagated incrementally, much like the constraint-propagation techniques found in graphical interfaces. Causal relations can be used to describe the organization of activities of unknown duration.
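The sketch below illustrates this idea in Java with hypothetical Activity and SequencePattern classes, which are not the thesis's actual classes: when an activity is added, the pattern incrementally recomputes the start times of its children, in the spirit of constraint propagation.

```java
// Hedged sketch of activities organized by a sequential pattern.
// Hypothetical names; not the thesis's implementation.

import java.util.ArrayList;
import java.util.List;

class Activity {
    double start;      // start time, assigned by the enclosing pattern
    double duration;   // duration; may be unknown in the real system

    Activity(double duration) { this.duration = duration; }
}

// A sequential pattern: each activity starts when the previous one ends.
class SequencePattern extends Activity {
    private final List<Activity> children = new ArrayList<>();

    SequencePattern() { super(0.0); }

    void add(Activity a) {
        children.add(a);
        propagate();   // incremental update of the temporal organization
    }

    // Recompute the start times of the children and the pattern's duration.
    private void propagate() {
        double t = start;
        for (Activity a : children) {
            a.start = t;
            t += a.duration;
        }
        duration = t - start;
    }
}
```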

The basic unit of sound generation is the synthesis process. Synthesis processes are created with a meta-class approach, using synthesis techniques and synthesis voices. They are aware of the time relations defined in the composition; this time information is bundled in an object called a time context. A synthesis process can use this information to deform the real time of the synthesizer.
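As an illustration, the following Java sketch shows how a synthesis process might receive a time context that maps the synthesizer's real time to the composition's logical time. TimeContext and SynthesisProcess are hypothetical names, and the meta-class mechanism (synthesis techniques and voices) is not reproduced here.

```java
// Maps the synthesizer's real time to the logical time of the composition,
// so a process can "deform" time (e.g. stretch or compress it).
// Hypothetical sketch, not the thesis's API.
class TimeContext {
    private final double offset;   // logical start time
    private final double stretch;  // time-stretch factor

    TimeContext(double offset, double stretch) {
        this.offset = offset;
        this.stretch = stretch;
    }

    double toLogical(double realTime) {
        return offset + stretch * realTime;
    }
}

interface SynthesisProcess {
    // Fill one buffer of samples; the time context tells the process where it
    // stands in the composition's time.
    void synthesize(float[] buffer, TimeContext context);
}
```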

The environment concurrently handles the Scheme interaction, the garbage collection, and the real-time synthesis. We investigate whether hard real-time guarantees can be given in such a dynamic environment. The question is difficult to settle solely on the basis of the discussions found in the literature. However, we introduce a constraint on the synthesis processes that reduces the question to the scheduling of concurrent tasks.
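By way of illustration only, and without reproducing the constraint actually introduced in the thesis, the sketch below shows one standard way such a reduction can work: each task declares a worst-case cost per audio block, and the scheduler admits tasks only while the total remains within the block period. All names here are hypothetical.

```java
// Illustrative admission test for real-time audio tasks; not the thesis's scheme.

import java.util.ArrayList;
import java.util.List;

class RealTimeScheduler {
    private final double blockPeriodMs;         // time available per audio block
    private final List<Runnable> tasks = new ArrayList<>();
    private double committedMs = 0.0;           // sum of admitted worst-case costs

    RealTimeScheduler(double blockPeriodMs) { this.blockPeriodMs = blockPeriodMs; }

    // Admit a task only if its worst-case cost still fits in the block period.
    boolean admit(Runnable task, double worstCaseMs) {
        if (committedMs + worstCaseMs > blockPeriodMs) return false;
        committedMs += worstCaseMs;
        tasks.add(task);
        return true;
    }

    // Run all admitted tasks once per audio block.
    void runBlock() {
        for (Runnable t : tasks) t.run();
    }
}
```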