The work presented in this paper is part of a larger project that focuses on the control of sound synthesis. Already in 1963 Max Mathews reported the first experiments with digital sound synthesis (using the MusicV environment) and computer-aided composition [Mat63]. In the same article he also stated the need for a better understanding of psychoacoustic phenomena and for more precise tools for the control of sound synthesis. Since then many applications for synthesis and CAC have reached computer screens: CSound [Ver86], CMix [Gar95], Max [Puc91], Kyma [Sca89], and Modalys [MA93] are more sound oriented; Formes [RC84], PatchWork [AR93], and DMIX [Opp96] are more composition oriented; and Common Lisp Music [Tau91] and Foo [EGA94] handle both. In the field of psychoacoustics progress has also been made [McA94]. In particular, the concept of timbre is now better understood [MWD+95]. There have been projects that targeted the control of sound synthesis [MMR74, Wes79, Wes92, Mir93, RP95]; however, few of these tools have really survived in the world of computer music. Our goal is to develop new tools which help the composer create the sound he desires. We have started by examining some of the available environments for sound synthesis and composition, and have tried to establish an environment for our experimentation.
As mentioned above, there already exists a wide variety of environments which have proven their usefulness and which have their own characteristics and user groups. We want to continue using these environments: implementing yet another environment for CAC or synthesis is beyond the scope of this project.
Every environment offers some tool or feature that is not found in any other. Yet there is rarely a way to make two environments work together and benefit from both. For example, one environment may provide a well-designed breakpoint-function editor, but it might be impossible to use that editor when working in another environment. Making these environments communicate extends the possibilities found in either of them.
One might wonder how the control of sound synthesis is related to composition. The answer is: closely. The composer Marco Stroppa, in one of our conversations, said that the control of sound synthesis is an act of composition, because a sound has a meaning only if it is imagined within a composition. Since the control of sound synthesis is closely related to the use of timbre as a musical element in the composition [Ler87], tools for the control of sound synthesis should be intimately related to CAC environments; to the extent that they cooperate closely, but do not completely depend upon each other. PatchWork, for example, includes a library to prepare data for CSound, but the data structures used by this library are conceived uniquely for CSound. Converting the library for use with another synthesis kernel takes more than a hard day's work: it would require redesigning the library.
A last argument concerns all large, monolithic applications in general. Extending the application or replacing an existing functionality is, in most cases, impossible for the user. This means no replacing the breakpoint-function editor with a better one, and no adding a new signal-processing function to the synthesis kernel. The user is forced to work with the application as it was designed by its author, even when it would be desirable to add new features.
So what can we conclude from these observations? First, we need a strategy that allows our tools to be used from within the available environments; we then take advantage of existing software and guarantee our tools greater usability. Second, the architecture of our solution should be modular and its interfaces public, so that users and programmers can add new tools or replace existing modules of the environment. Last, we attempt to make the environment independent of the computer platform. This forces us to design a clean architecture, free of platform-specific features, and thereby reduces the risk of our work becoming obsolete.
We have developed a framework, under the project name JavaMusic, that tries to fulfill these requirements. This framework can be viewed as a crossroads where different applications and modules meet and share data, and whose architecture allows the dynamic addition and removal of synthesis kernels, CAC environments, and other tools.
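As an illustration, the following sketch shows how such dynamic addition and removal of modules could look. The interface and class names are hypothetical and do not describe the actual JavaMusic interfaces, which are presented later in the paper.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of a registry through which modules (synthesis
 *  kernels, CAC environments, editors) can join or leave the running
 *  framework and obtain access to the shared data. */
interface MusicModule {
    void attach(Object sharedData);   // called when the module is added
    void detach();                    // called before the module is removed
}

class ModuleRegistry {
    private final List<MusicModule> modules = new ArrayList<MusicModule>();
    private final Object sharedData = new Object();  // stand-in for the shared musical data

    public synchronized void add(MusicModule module) {
        module.attach(sharedData);    // hand the new module the shared data
        modules.add(module);
    }

    public synchronized void remove(MusicModule module) {
        modules.remove(module);
        module.detach();
    }
}
```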
We hope to achieve this goal with the introduction of several elements: the choice of the Java programming language, a set of shared data structures, and a client-server architecture.
For the development of JavaMusic we have chosen the programming language Java. Java has the advantages of being a high-level, dynamic language that is freely available and widely used. The main reason for choosing Java, however, is its portability: Java is an interpreted language whose specification is independent of the local CPU. Furthermore, Java provides classes that abstract machine-dependent system components such as the file system, graphics, and networking.
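A small illustration of this platform independence follows; it uses only the standard file and networking classes, and the class name is an example of ours, not part of JavaMusic. The same code runs unchanged on any machine with a Java virtual machine.

```java
import java.io.File;
import java.net.ServerSocket;

public class PortabilityExample {
    public static void main(String[] args) throws Exception {
        // File access through the platform-neutral File abstraction.
        File score = new File("scores" + File.separator + "example.txt");
        System.out.println("Score exists: " + score.exists());

        // Networking through the platform-neutral socket classes;
        // this is the mechanism a client-server architecture builds on.
        ServerSocket server = new ServerSocket(0);   // pick any free port
        System.out.println("Listening on port " + server.getLocalPort());
        server.close();
    }
}
```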
In the next section we discuss the data structures; in section 3 we describe the client-server architecture.