7.3 Protecting the system from overload

The system has to calculate a new output buffer every T seconds. For sound synthesis with a buffer of 64 samples and a sampling rate of 44100 Hz, T is a little over 1.45 milliseconds. This time has to be divided over several functions, including the sound synthesis performed by the synthesis processes. If these functions together take more time than T, the system is not able to deliver its sound buffer in time, and clicks are introduced in the sound. This should be avoided at all costs. In this section we will examine how this short interval of time is used, and how we can prevent the system from overloading the CPU.
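
Concretely, with a buffer of 64 samples at a sampling rate of 44100 Hz, the available period is:

  T = 64 / 44100 s ≈ 0.0014512 s ≈ 1.45 ms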

In pseudo code, the algorithm of the synthesizer to calculate one output looks like this:

synthesize one output {
  do initialization,
  call each of the synthesis processes,
  write the output value
}
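
To make this concrete, here is a minimal Java sketch of the same loop. The SynthesisProcess and OutputDevice interfaces and their method names are illustrative assumptions, not the system's actual API:

import java.util.Arrays;
import java.util.List;

// Hypothetical interfaces; the actual system's API may differ.
interface SynthesisProcess { void addOutput(float[] buffer); }
interface OutputDevice     { void write(float[] buffer); }

class Synthesizer {
    // Calculate and deliver one output buffer, following the pseudo code above.
    static void synthesizeOneOutput(List<SynthesisProcess> processes,
                                    float[] buffer, OutputDevice device) {
        Arrays.fill(buffer, 0.0f);               // initialization: clear the buffer
        for (SynthesisProcess sp : processes) {  // call each of the synthesis processes
            sp.addOutput(buffer);                // each process mixes its samples in
        }
        device.write(buffer);                    // write the output values to the device
    }
}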

Let Ts be the time needed by the system to calculate the output. Ts is split into the following fractions (summarized in a formula after this list):

  • T0 - the time needed by the synthesis thread, even when no synthesis processes are active (to clean buffers, etc...).

  • Ti - the time needed by synthesis process spi to calculate its output.

  • Twr - the time needed to write the output to the output device. This time depends mainly upon the operating system and the driver.

  • Tsw - the time needed to switch between the running thread and the synthesis thread. This time depends on the operating system.

  • Tm - we also include a margin to accommodate fluctuations in the previous times, and to guarantee a minimum amount of time for the other threads (event handling and user interaction).
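
Putting these fractions together, the total time needed by the system per output buffer is:

  Ts = T0 + sum(Ti) + Twr + Tsw + Tm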

Let Ts,0 be the time needed by the system when no synthesis processes are active, so that Ts = Ts,0 + sum(Ti). To protect the system from overload, Ts must always remain smaller than T. If Ts,0 > T the system cannot perform any real-time synthesis at all. Otherwise the system can accept synthesis processes as long as Ts < T. If Ts > T the system is overloaded and cannot ensure the delivery of the output values in time. Furthermore, the synthesis thread might take up all the CPU time, in which case the other threads are completely blocked and the system has to be interrupted forcibly. This situation must be avoided, especially in concert situations. How can we protect the system from such an overload?

  • The traditional approach for real-time systems is to analyze the source code statically. After a very careful examination, the maximum calculation times of all the function calls are determined and the maximum execution times of the real-time tasks are established. Since we know the maximum time a task can take, we can guarantee correct behavior. However, this technique is very static and cannot cope with the dynamic aspects of our system.

  • Before a new synthesis process is added, the synthesizer negotiates its resource requirements (its "quality of service"). The synthesizer asks the synthesis process for the maximum CPU time it needs for its calculation. If the system has that much time available, the synthesis process is accepted. If not, the synthesis process is not added, or it is asked to lower its requirements (a sketch of such a negotiation is given below).
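
A minimal sketch of such an admission test follows. It assumes that every synthesis process can report a maximum CPU time per buffer and, optionally, lower its requirements; all names (NegotiableProcess, maxCpuTimePerBuffer, lowerRequirements) are hypothetical:

// Hypothetical quality-of-service negotiation; the interface and its methods
// are illustrative assumptions, not the system's actual API.
class AdmissionControl {
    interface NegotiableProcess {
        double maxCpuTimePerBuffer();   // advertised Ti, in seconds
        boolean lowerRequirements();    // true if the process managed to lower its Ti
    }

    private final double periodT;       // available time per buffer (e.g. 64/44100 s)
    private final double baseTimeTs0;   // T0 + Twr + Tsw + Tm
    private double acceptedTime = 0.0;  // sum of the Ti of the accepted processes

    AdmissionControl(double periodT, double baseTimeTs0) {
        this.periodT = periodT;
        this.baseTimeTs0 = baseTimeTs0;
    }

    // Accept a process only if the total load Ts stays below the period T.
    // If it does not fit, ask the process once to lower its requirements.
    boolean tryAccept(NegotiableProcess sp) {
        if (accept(sp.maxCpuTimePerBuffer())) return true;
        return sp.lowerRequirements() && accept(sp.maxCpuTimePerBuffer());
    }

    private boolean accept(double ti) {
        if (baseTimeTs0 + acceptedTime + ti < periodT) {
            acceptedTime += ti;
            return true;
        }
        return false;
    }
}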

The quality of service approach is clearly better suited for a dynamic environment. However, we still have a number of problems:

  • How do we measure the time Ti used by a synthesis process?

  • How do we keep a synthesis process to its advertised CPU requirements?

We could perform a code analysis when a synthesis process is added. However, as we have seen, a synthesis process can be a complex network of objects, and this network can change dynamically. It will therefore be very hard to determine, a priori, the CPU needs of a synthesis process. And even if we could guess its execution time, how do we prevent it from taking more?
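
One pragmatic possibility, sketched below purely as an illustration, is to measure Ti at run time while the process executes, instead of deducing it from the code (the SynthesisProcess interface is again a hypothetical name):

// Illustrative run-time measurement of Ti; names are assumptions, not the real API.
class ProcessTimer {
    interface SynthesisProcess { void addOutput(float[] buffer); }

    private long worstCaseNanos = 0;    // worst observed Ti so far

    // Call the synthesis process and record the time it actually used.
    void callAndMeasure(SynthesisProcess sp, float[] buffer) {
        long start = System.nanoTime();
        sp.addOutput(buffer);
        long elapsed = System.nanoTime() - start;
        if (elapsed > worstCaseNanos) worstCaseNanos = elapsed;
    }

    // Worst observed Ti in seconds; an observation, not a guarantee.
    double worstCaseSeconds() { return worstCaseNanos / 1e9; }
}

Such measurements only give observed values, however; they do not prevent a process from exceeding them, which is precisely the second problem raised above.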

Nilsen [Nil] proposes the following solution for real-time systems written in Java. When a real-time object is scheduled, the system negotiates its CPU and memory usage. If the system cannot satisfy the object's initial requirements, it asks the object to lower its demands. When the object is accepted, the system examines the Java code of the object, and, in particular, the code inside the event handlers of the object. From this analysis it deduces the maximum time the object needs to respond to an exception. If during the real-time execution the object takes more time than it was granted, the system raises an exception that is intercepted by the object's exception handler. Since the time needed to handle the exception is known, the system can continue within fixed time limits.

Transposing Nilsen's solution to our case, the synthesizer could raise an exception when a synthesis process takes too much time. In addition, we could advise developers not to define exception handlers in the signal processing code of the synthesis processes. The exception will then be caught by the synthesizer, and we can design this handler to take the appropriate measures to restore timely behavior. Experiments of this kind are left for future work.
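
The sketch below illustrates only the synthesizer-side handler under these assumptions; the exception class, its name, and above all the mechanism that would raise it asynchronously are hypothetical and correspond exactly to the future work mentioned above:

import java.util.List;

// Hypothetical exception signalling that a synthesis process exceeded its time budget.
class ProcessOverrunException extends RuntimeException { }

class GuardedSynthesizer {
    interface SynthesisProcess { void addOutput(float[] buffer); }

    // Call one synthesis process; if it is interrupted by an overrun exception,
    // take an appropriate measure (here: drop the process) so that the remaining
    // processes can still be delivered in time.
    static void callGuarded(SynthesisProcess sp, float[] buffer,
                            List<SynthesisProcess> processes) {
        try {
            sp.addOutput(buffer);                  // may raise ProcessOverrunException
        } catch (ProcessOverrunException e) {
            processes.remove(sp);                  // or mute it, reduce its quality, ...
        }
    }
}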

Before we end this section, we would like to remark that, due to the internal buffering of sound data by the sound driver, an occasional delay in the sound output need not be catastrophic. We did not include this feature in our argument for several reasons. First, an occasional delay can be tolerated, but on average the above discussion remains valid. Second, the internal buffering should be kept to a minimum since it introduces latency; ideally, for real-time systems, no buffering is done at all. In the light of the previous sections, and with chapter 1 in the background, we can now assess the real-time capabilities of the system.
