5.4 Using the system in a distributed environment

To allow interfaces to be distributed, low-level events can be sent over the network using the datagram protocol (UDP). For reasons of efficiency we have chosen a very straightforward binary format for the events: the first two bytes indicate a program index number, the next two bytes give the number of arguments, the following two bytes encode the type of the arguments (int, double, char, ...), and the remaining bytes are the arguments themselves (Fig. 5.3). To receive UDP events, a new event thread is created that listens on a given port. The UDP event thread holds an array of programs set by the user. Whenever an event arrives, the program index number and the argument count are extracted, and the arguments are converted into objects using the type indication; an array of characters, for example, is converted into a string. The program with the corresponding index is then applied to handle the event. Using the power of the Scheme language, the user can program complex responses of the system to user events. The following example shows how to set up and initialize the UDP event thread. The UDP thread waits for events that start and stop a note; the parameters of the note are MIDI-style pitch and velocity values.


 
Figure 5.3: The structure of a UDP event.

; Create and start a UDP event thread, listening on port 9000
(define udp (udp-thread 9000)) 

; Define two procedures to start and kill a note
(define (noteon pitch vel) 
  (add (sinewave pitch (const (midi->hz pitch)) 
                       (const (db->amp (/ (- vel 130) 2))))))
(define (noteoff pitch) (kill pitch)) 

; Pass the programs to the UDP thread.
(udp-thread-set! udp noteon 0)
(udp-thread-set! udp noteoff 1)
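
To illustrate the event format of Fig. 5.3, the following Java sketch encodes and sends such an event. The concrete type codes and the byte order are assumptions, since they are not specified here:

import java.io.*;
import java.net.*;

// A hedged sketch of a sender for the UDP event format: two bytes for
// the program index, two for the argument count, two for the argument
// type, followed by the arguments. TYPE_INT is an assumed type code.
public class EventSender {
    static final short TYPE_INT = 0;  // hypothetical type code for int

    public static void send(InetAddress host, int port,
                            short program, int[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream data = new DataOutputStream(bytes);
        data.writeShort(program);      // program index number
        data.writeShort(args.length);  // number of arguments
        data.writeShort(TYPE_INT);     // type of the arguments
        for (int a : args) data.writeInt(a);  // the arguments themselves
        byte[] packet = bytes.toByteArray();
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(packet, packet.length, host, port));
        }
    }
}

Under these assumptions, EventSender.send(host, 9000, (short) 0, new int[] {60, 100}) would invoke the noteon program defined above with pitch 60 and velocity 100.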

It is fairly trivial to connect the in-port and out-port of the Scheme shell to a TCP/IP socket. Network-aware applications can then keep a permanent connection to the environment and engage in a more interactive exchange than is possible with low-level events.
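
As an illustration, the wiring might look as follows; this is a sketch only, and the SchemeShell class with its repl method is a hypothetical stand-in for the actual shell of the system:

import java.io.*;
import java.net.*;

// A sketch: expose the Scheme shell on a TCP port so that network-aware
// applications can keep a permanent, interactive connection.
public class RemoteShell {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9001);  // arbitrary port
        while (true) {
            Socket client = server.accept();
            Reader in = new InputStreamReader(client.getInputStream());
            Writer out = new OutputStreamWriter(client.getOutputStream());
            new SchemeShell().repl(in, out);  // hypothetical shell class
            client.close();
        }
    }
}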


 
Figure 5.4: The SoundOutput object uses a coder/decoder to convert the buffer of samples from double format to the output format and writes the converted samples into a data stream. For sound output directly to the sound device of the local machine, the DirectSoundOutput writes the double samples immediately to the audio port, which converts the samples natively.

The sound, too, can be sent over the network. The sound output uses a coder/decoder (codec) to convert the samples (an array of double values) to the desired output format (16-bit linear, 8-bit linear, mu-law, etc.). The sound output then writes the samples into a data stream (see Fig. 5.4). For example, sound can be written to the audio device, to a file, or to a TCP/IP stream. Tests with sound output over a TCP/IP connection on a local network have been satisfactory. However, TCP/IP does not provide any control over the quality of service. We therefore envision realizing a sound output on top of the Real-time Transport Protocol (RTP) and the Real-time Streaming Protocol (RTSP). In a future version we will also move the system on top of the Java Media Framework (JMF) and the JavaSound library, which have recently been released by Sun Microsystems. We will benefit from the synchronization mechanisms between several media players, the platform independence, the coders/decoders, and the network capabilities (based on RTP/RTSP) offered by these frameworks.
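
As an illustration of the codec idea, the following sketch converts a buffer of double samples to 16-bit linear format and writes it to an arbitrary output stream; the class and method names are ours, not the system's:

import java.io.*;

// A hedged sketch of a 16-bit linear codec: the double samples
// (range -1.0 .. 1.0) are clipped, scaled, and written to a data
// stream, which may wrap the audio device, a file, or a socket.
public class Linear16Codec {
    public void write(double[] samples, OutputStream out) throws IOException {
        DataOutputStream data = new DataOutputStream(out);
        for (double s : samples) {
            double clipped = Math.max(-1.0, Math.min(1.0, s));
            data.writeShort((short) (clipped * 32767));
        }
        data.flush();
    }
}

Because the codec only sees an output stream, the same sound output can target the audio device, a file, or a TCP/IP stream.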


 
Figure 5.5: The diagram of the system in a distributed environment.

A more complex interaction with the system, requiring the passing of binary data, can be built on top of Java's Remote Method Invocation (RMI) or on top of the Common Object Request Broker Architecture (CORBA) [Obj96,SV95,SGH+97,Vin97]. An easy solution would be to define a new variable in the Scheme environment by passing its name and value. Both the integration with JMF and with RMI are left for future work.
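
With RMI, such a remote interface could be sketched as follows; the interface and method names are hypothetical:

import java.rmi.Remote;
import java.rmi.RemoteException;

// A hedged sketch of a remote interface through which a client defines
// a new variable in the Scheme environment by passing its name and value.
public interface RemoteEnvironment extends Remote {
    void defineVariable(String name, Object value) throws RemoteException;
}

A client would look up this interface in the RMI registry and call defineVariable to pass binary data to the environment under a chosen name.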


 
Figure 5.6: Two systems used simultaneously can be placed in series or in parallel.

Another future project is to use several synthesis systems concurrently. The systems can be placed in series, in parallel, or in a combination of both. Low-cost hardware can thus be combined to handle computationally intensive performances. For example, a first computer system generates the sound and passes it on to a second system, which compresses it before it is sent to the users/listeners on the network (Fig. 5.6).
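
A serial stage could be sketched as a simple relay; the host names, ports, and the compress step below are placeholders:

import java.io.*;
import java.net.*;

// A hedged sketch of a serial stage (Fig. 5.6): accept sound from an
// upstream synthesis system, process it, and forward the result.
public class SerialStage {
    static byte[] compress(byte[] raw, int len) {
        return java.util.Arrays.copyOf(raw, len);  // placeholder: no-op
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9002);
        Socket upstream = server.accept();
        Socket downstream = new Socket("listener-host", 9003);
        InputStream in = upstream.getInputStream();
        OutputStream out = downstream.getOutputStream();
        byte[] buffer = new byte[4096];
        int n;
        while ((n = in.read(buffer)) != -1)
            out.write(compress(buffer, n));
        out.close();
    }
}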


 
Figure 5.7: The system, when running on a multiprocessor computer, can use several threads to divide the synthesis task over the available processors.

When the system runs on a multiprocessor computer, the synthesis task could be distributed over the available processors. The synthesizer then runs on top of several threads and distributes the synthesis processes over these threads to balance the load (Fig. 5.7).
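
A possible sketch of this load balancing divides the synthesis processes round-robin over one worker thread per processor; SynthesisProcess is a hypothetical stand-in for the system's synthesis processes:

import java.util.*;

// A hedged sketch of the multi-threaded synthesizer (Fig. 5.7).
public class ParallelSynthesizer {
    interface SynthesisProcess { void computeBuffer(); }

    public static void run(List<SynthesisProcess> processes) {
        int workers = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            final int id = w;
            threads[w] = new Thread(() -> {
                // Each worker computes every workers-th process.
                for (int i = id; i < processes.size(); i += workers)
                    processes.get(i).computeBuffer();
            });
            threads[w].start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) { return; }
        }
    }
}

In practice the processes would be repartitioned according to their measured cost, but even a round-robin split spreads the load over the available processors.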
