In this section we discuss the main components of the system. We will
build the system using an object-oriented approach. In particular,
the current prototype is completely written in the Java programming
language. Considering the formal model together with the Scheme shell, we can isolate three distinct tasks: the synthesis, the event handling, and the interaction with the user. The three tasks
execute simultaneously; we will embed each of them in its own
thread. One object, called Varèse, sets up all the tasks.
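As an illustration, the start-up could look like the following minimal Java sketch. Only the name Varèse and the three tasks come from the text; the task classes (a Synthesizer and an EventHandler like those sketched later in this section, and a purely hypothetical SchemeShell) are assumptions.

    // A minimal sketch of the start-up object, assuming each task is a
    // Runnable. Synthesizer and EventHandler are sketched later in this
    // section; SchemeShell is a hypothetical placeholder.
    public class Varese {
        public static void main(String[] args) {
            Runnable synthesis = new Synthesizer();      // sound synthesis task
            Runnable events = new EventHandler();        // event handling task
            Runnable interaction = new SchemeShell();    // user interaction task

            // The three tasks execute simultaneously, each in its own thread.
            new Thread(synthesis, "synthesis").start();
            new Thread(events, "event-handler").start();
            new Thread(interaction, "scheme-shell").start();
        }
    }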
The first task is the sound synthesis. The system handles this in real time, which means that the synthesis and the output are interleaved: one buffer of sample frames is synthesized, then written
to the output device. The synthesizer holds an array of
references to active synthesis processes (Fig. 5.1). A new synthesis process can be
added to the array by sending an add message to the synthesis
thread. A kill message ends the scheduling of a synthesis process and removes it from the array. Each synthesis process produces sound according to its own algorithm. Every tau seconds, the synthesizer calls each active synthesis process and passes it a reference to the output buffer. The synthesis process can manipulate the buffer according to its algorithm; the synthesizer then writes the buffer to the output device.
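The following Java sketch illustrates this loop; the buffer format, the loop pacing, and all names other than add, kill, and SynthesisProcess (introduced next) are assumptions.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // A sketch of the synthesis thread. It keeps the active synthesis
    // processes, fills one buffer per iteration, and writes it out.
    class Synthesizer implements Runnable {
        private final List<SynthesisProcess> processes = new ArrayList<>();
        private final float[] buffer = new float[4096];  // one buffer of sample frames

        // The add message registers a new synthesis process.
        public synchronized void add(SynthesisProcess p) { processes.add(p); }

        // The kill message ends the scheduling of a process; plausibly the
        // process's die method (see below) is also invoked at this point.
        public synchronized void kill(SynthesisProcess p) { processes.remove(p); }

        public void run() {
            while (true) {
                Arrays.fill(buffer, 0.0f);               // clear the buffer
                synchronized (this) {
                    // Every tau seconds: pass each active process a
                    // reference to the output buffer.
                    for (SynthesisProcess p : processes) {
                        p.live(buffer);
                    }
                }
                writeToOutputDevice(buffer);             // then write the buffer out
            }
        }

        private void writeToOutputDevice(float[] buf) {
            // Audio I/O omitted; in a real-time system this call would
            // block until the device accepts the buffer, pacing the loop.
        }
    }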
A particular synthesis process is a subclass of the abstract class SynthesisProcess. The
three key methods of this class are: birth, live,
and die. Subclasses must implement these three methods, which together define the synthesis algorithm the process embodies. We will discuss
these methods in more detail in chapter
7.
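A skeleton of the class, together with a hypothetical subclass, might look as follows; only the names SynthesisProcess, birth, live, and die come from the text, while the signatures are assumptions.

    // A sketch of the abstract class; the method signatures are guesses.
    public abstract class SynthesisProcess {
        // Called once, when the process starts its life.
        public abstract void birth();

        // Called on every synthesis cycle with a reference to the output
        // buffer, into which the process mixes its samples.
        public abstract void live(float[] buffer);

        // Called once, when the process is killed.
        public abstract void die();
    }

    // A hypothetical subclass: a plain sine oscillator.
    class SineProcess extends SynthesisProcess {
        private final double increment;   // phase increment per sample frame
        private double phase = 0.0;

        SineProcess(double frequency, double sampleRate) {
            this.increment = 2.0 * Math.PI * frequency / sampleRate;
        }

        public void birth() { phase = 0.0; }

        public void live(float[] buffer) {
            for (int i = 0; i < buffer.length; i++) {
                buffer[i] += (float) Math.sin(phase);   // mix into the shared buffer
                phase += increment;
            }
        }

        public void die() { /* nothing to release in this simple case */ }
    }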
Figure 5.1: Object diagram of the system.
The second task the system handles is the event processing. At any
time, asynchronously with respect to the sound synthesis, events can arrive. Events
can be sent from schedulers, programs, or synthesis processes. Events
are processed by the event handler. New events are posted to
an event queue (Fig. 5.1). The event handler removes the first event from the queue and applies the event's program to the event's arguments. The program can create, add,
and kill synthesis processes. It can modify the state of the system on
which the synthesis processes depend, and thus influence the output of
the synthesis. It can also define new events, variables, or
programs. The time indication of the event is meant mainly for event
schedulers. It is ignored by the event handler: once the event is
posted to the event queue, it will be handled as soon as possible. The
event's program must implement the method apply defined by
the interface Program.
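A Java sketch of the interface and the handler loop follows; Program, the event queue, and the time indication come from the text, while the signature of apply, the fields of Event, and the choice of queue are assumptions.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // The interface Program is named in the text; its exact signature is
    // an assumption.
    interface Program {
        void apply(Object[] arguments);
    }

    // A hypothetical event: a program, its arguments, and a time
    // indication that is used by schedulers but ignored by the handler.
    class Event {
        final Program program;
        final Object[] arguments;
        final double time;

        Event(Program program, Object[] arguments, double time) {
            this.program = program;
            this.arguments = arguments;
            this.time = time;
        }
    }

    // A sketch of the event handler thread.
    class EventHandler implements Runnable {
        private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();

        // New events are posted to the event queue.
        public void post(Event e) { queue.add(e); }

        public void run() {
            try {
                while (true) {
                    Event e = queue.take();           // remove the first event
                    e.program.apply(e.arguments);     // apply its program
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();   // stop when interrupted
            }
        }
    }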
The last task is the interaction with the user. The music system we
present in this text incorporates a Scheme interpreter. This
interpreter allows the dynamic definition of both variables and
programs. Composers interact with the interpreter to create musical
structures, algorithms, synthesis processes, and control structures.
A discussion and examples of some of these structures are given in this and the following chapter.
The text entered by the user in the Scheme shell is parsed and
evaluated by the interpreter. The evaluation of a textual expression can
be seen as event handling, and this is why the formal model in the
previous chapter does not explicitly mention the user interaction. To
call the Java objects from within the shell we have defined a set of
Scheme functions. This interface will be discussed in
section 5.2.
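To make the correspondence with event handling concrete, one can picture a line of shell input wrapped in an event whose program hands the text to the interpreter, as in the sketch below. The Interpreter facade is hypothetical (the real Scheme interface is the subject of section 5.2); Program is the interface sketched above.

    // A hypothetical facade for the Scheme interpreter.
    interface Interpreter {
        Object eval(String expression);
    }

    // A program that evaluates one textual expression. Posting such an
    // event to the event queue makes user interaction an ordinary case
    // of event handling.
    class EvalProgram implements Program {
        private final Interpreter interpreter;

        EvalProgram(Interpreter interpreter) { this.interpreter = interpreter; }

        public void apply(Object[] arguments) {
            interpreter.eval((String) arguments[0]);  // the text typed by the user
        }
    }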
We have three basic tasks (synthesis, event handling, and the
interpreter) and three basic objects (programs, events, and synthesis
processes) in our environment. The event handler processes the events
by applying their program. The interpreter parses and evaluates
"textual" events. The synthesizer handles the synthesis by calling
upon all the active synthesis processes. So far, we have not described these synthesis processes in much detail. In section 5.3 we
discuss how to describe and control complex synthesis processes. In
section 5.4 we discuss how this environment can be used in
distributed applications. In this case, all external interaction with the environment (event handling, Scheme interaction, and sound output) is handled over the network.