
Introduction

Musical composition traditionally concentrates on musical elements such as duration, pitch, and intensity. Instrumentation and orchestration come into play to achieve a desired sound ``color''. The end product is a score that is handed over to, and interpreted by, the musician. The computer intervenes in this traditional scheme in several ways. With the help of computers, theoretically any sound can be generated; in this sense, the computer relieves the composer from the limitations of classical instruments. Furthermore, the composer is no longer dependent upon a performer to realize the score. There are drawbacks too. The computer is a blank page: it offers all possibilities but does nothing on its own. How the sound is produced therefore has to be expressed unambiguously, in all its details. This need for precise descriptions of sound has nurtured research in areas such as digital signal processing, instrumental acoustics, and auditory perception. A wide variety of applications for sound synthesis is currently available. Since signal processing requires a lot of computation, these applications are mostly written in C for efficiency.

The computer can also be used as a tool for algorithmic composition. The composer expresses his musical ideas using an intermediate language and explores musical structures with the help of the computer. Lisp-like languages, especially, are well adapted to expressing musical ideas. (For simplicity we will use Lisp in this text to denote Lisp-like languages. A Lisp dialect that is of special interest to us in our current work is Scheme.) Lisp interpreters provide an interactive environment in which the composer (re-)defines functions and manipulates objects at runtime.

Despite the interest in and research on the description of sound, few applications for music composition intimately integrate the ability to describe and synthesize sound. Sound synthesis applications, on the other hand, offer only limited possibilities for composition. The reason for this is technical. The flexibility required of a composition environment necessitates dynamic memory management, and Lisp environments rely on a garbage collector to reclaim unused memory (see [Wil92] for a discussion of garbage collection). Such environments are not well adapted to doing signal processing efficiently. The situation is worse when the environment is used for real-time processing: the garbage collector can interrupt the real-time task at any moment, and when the collection will occur and how long it will take are impossible to predict. Dynamic memory management in general, and garbage collection in particular, seem incompatible with real-time sound synthesis in software.

Our aim is to create an integrated environment for composition and synthesis. We will argue that, despite the difficulties above, it is possible to combine real-time synthesis with a composition environment in Lisp. We will show this in two steps: in the next section we propose a framework for sound synthesis applications, and in section 3 we use that framework for a composition environment. Section 4 discusses some possible musical applications. This paper describes work in progress; only the basic ideas of the environment are put forth.



Peter Hanappe
Thu Jun 18 15:22:17 MET DST 1998