Adding a level of composition

In the previous section we described a framework for programs that use sound synthesis in real time. In this section we use that framework to create an integrated environment for composition and synthesis. As noted previously, composition environments benefit from Lisp's syntax for expressing musical ideas. So far we have said little about the application thread: the role of its main function depends on the kind of application the framework is used for. Our current interest is an environment for composition, built on top of a Lisp interpreter. The obvious next step is therefore to incorporate this interpreter in the main thread of the framework.
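
To make the idea concrete, the following Java sketch shows what the main thread might look like: a simple read-eval-print loop around an embedded interpreter. The SchemeInterpreter wrapper and its eval method are illustrative assumptions, not the actual classes of the environment.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical wrapper around the embedded Scheme interpreter;
    // the class and method names are illustrative only.
    class SchemeInterpreter {
        Object eval(String expression) {
            // would delegate to the embedded interpreter
            return null;
        }
    }

    public class MainThread implements Runnable {
        private final SchemeInterpreter interp = new SchemeInterpreter();

        // The main thread simply runs a read-eval-print loop on top of the
        // embedded interpreter; synthesis runs in its own, separate thread.
        public void run() {
            try {
                BufferedReader in =
                    new BufferedReader(new InputStreamReader(System.in));
                String line;
                System.out.print("> ");
                while ((line = in.readLine()) != null) {
                    System.out.println(interp.eval(line));
                    System.out.print("> ");
                }
            } catch (java.io.IOException e) {
                e.printStackTrace();
            }
        }
    }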

Before we describe the implementation we briefly consider the use of garbage collection. A Lisp environment typically relies on a garbage collector to reclaim unused memory. Using garbage collection in a real-time environment might seem as bad an idea as hopping on a local train when you're in a hurry to catch a plane. We argue, however, that garbage collection will not delay the synthesis thread as long as the synthesis thread 1) has a higher priority than the application thread, 2) can interrupt the garbage collector, and 3) does not allocate new objects. This means that new objects may only be created by the event thread, the main thread, or any other thread with a lower priority than the synthesis thread. We still have to prove this formally, but tests with prototypes of the environment seem to confirm the hypothesis.
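
As an illustration of these conditions, the Java sketch below sets up a high-priority synthesis thread that only reuses a preallocated buffer, and a lower-priority application thread that may allocate freely. The class and the loop bodies are schematic, not the actual code of the framework, and whether the virtual machine honours the priorities strictly is platform-dependent.

    public class ThreadSetup {
        public static void main(String[] args) {
            // Preallocated sample buffer, reused every cycle by the
            // synthesis thread so that it never allocates new objects.
            final float[] buffer = new float[1024];

            Thread synthesis = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        // fill 'buffer' and write it to the sound device;
                        // no 'new' inside this loop, so no garbage is created
                    }
                }
            });
            synthesis.setPriority(Thread.MAX_PRIORITY);

            // Application (interpreter) thread: lower priority; it may
            // allocate freely and be delayed by garbage collection
            // without disturbing the audio output.
            Thread application = new Thread(new Runnable() {
                public void run() {
                    // read-eval-print loop of the Lisp interpreter
                }
            });
            application.setPriority(Thread.MIN_PRIORITY);

            synthesis.start();
            application.start();
        }
    }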

Given the dual nature of the application, we implement a hybrid system. The synthesis framework is written in a procedural language for efficiency, and the Lisp environment should be implemented in the same language so that we can embed it into our project. We found it interesting to implement two versions of the environment.

The first implementation is written in Java and uses Kawa [Bot98] as its Lisp environment. Kawa is a Scheme interpreter written in Java that compiles Scheme functions to Java byte code and evaluates them. It provides a powerful external function interface to call Java byte code from the Scheme interpreter, as well as an interface to evaluate Scheme expressions from Java code. Thanks to the portability of Java binary code, no modifications are needed to run the environment on different platforms. Output to the sound device is machine-dependent, however; it is provided by three native functions that open, close, and write a sample buffer to the sound device.

The second implementation is written in C/C++ and uses the Extension Language Kit (Elk) [LC94] as its Lisp environment. Elk is a Scheme interpreter implemented in C, designed as an embeddable, reusable subsystem for building hybrid applications in C/C++. We wanted the C/C++ version to resemble the Java version as closely as possible and have therefore re-implemented some of the standard Java classes in C++. These classes assume the presence of a collector to clean up the garbage; we have chosen Boehm's garbage collector [Boe93] and have modified the Elk environment to use this collector instead of the one it originally provides. The C/C++ implementation has exactly the same structure as the Java implementation; it is almost a word-for-word copy. This will allow for some interesting comparisons between Java and C/C++. We are currently testing the environment on Silicon Graphics platforms.
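
As an illustration of the machine-dependent output layer, the sketch below declares the three native functions as a Java class. The names, signatures, and native library name are assumptions made for the example, not the actual interface of the environment.

    // Sketch of the machine-dependent audio output layer: three native
    // methods, implemented once per platform, as described in the text.
    public class SoundDevice {
        static {
            // hypothetical name of the platform-specific native library
            System.loadLibrary("sounddevice");
        }

        // Open the sound device with the given sample rate and channel count.
        public native boolean open(int sampleRate, int channels);

        // Write one buffer of samples to the device (may block until consumed).
        public native void write(short[] buffer, int length);

        // Close the sound device and release platform resources.
        public native void close();
    }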

The objects of the synthesis framework can be accessed and manipulated from within the Scheme interpreter. In particular, synthesis processes can be created, added, and killed, and the synthesis thread can be started, stopped, or paused. A basic scheduler is available to plan add and kill events ahead of time. We are currently finalizing support for building synthesis processes as patches of basic synthesis modules and for creating score-like structures; these objects, too, can be created and manipulated with Scheme functions. A compositional algorithm can thus compute the synthesis techniques, the flow of events, and the control structures, and schedule the resulting score for synthesis in real time.
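
As a hypothetical example of this interaction, the sketch below evaluates a few Scheme expressions from Java, reusing the illustrative SchemeInterpreter wrapper from the earlier sketch. The Scheme-level function names are invented for the example and do not correspond to the actual names exported by the environment.

    public class ScheduleExample {
        public static void main(String[] args) {
            SchemeInterpreter interp = new SchemeInterpreter();

            // Illustrative Scheme expressions: create a synthesis process,
            // plan its add and kill events, then start the synthesis thread.
            interp.eval("(define osc (make-synthesis-process 'sine 440.0))");
            interp.eval("(schedule-add osc 0.0)");   // add the process at time 0
            interp.eval("(schedule-kill osc 4.0)");  // remove it four seconds later
            interp.eval("(start-synthesis)");        // start the synthesis thread
        }
    }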


