Room Acoustics Team
WAVE FIELD SYNTHESIS @ IRCAM
Wave Field Synthesis: A Brief Overview

Wave Field Synthesis (WFS) is a sound reproduction technique, based on loudspeaker arrays, that overcomes the limits set by conventional techniques (stereo, 5.1, etc.). Those techniques rely on stereophonic principles to create an acoustical illusion (the counterpart of an optical illusion) over a very small area in the center of the loudspeaker setup, generally referred to as the "sweet spot". WFS, on the other hand, aims at reproducing the true physical attributes of a given sound field over an extended area of the listening room. It is based on Huygens' principle (1678), which states that the propagation of a wave through a medium can be formulated by adding the contributions of all the secondary sources positioned along a wave front.
Huygens' Principle

To illustrate Huygens' principle, consider a simple example. A rock (the primary source) thrown into the middle of a pond generates a wave front that propagates along the surface. Huygens' principle indicates that an identical wave front can be generated by simultaneously dropping an infinite number of rocks (secondary sources) at any position defined by the passage of the primary wave front. The synthesized wave front is accurate everywhere outside the zone delimited by the secondary source distribution. The secondary sources therefore act as a "relay", and can reproduce the original wave front in the absence of the primary source!

Origins of Wave Field Synthesis

Wave Field Synthesis (WFS) is based on a series of simplifications of this principle. The first work published on the subject dates back to 1988 and is attributed to Professor A.J. Berkhout of the acoustics and seismology group of the Delft University of Technology (TU Delft) in the Netherlands. This research was continued throughout the 1990s by TU Delft as well as by the Research and Development department of France Telecom in Lannion.
Content Coding

WFS relies on an object-based description of a sound scene: the scene is decomposed into a finite number of sources interacting with an acoustical environment. The coding of such a scene includes the description of the acoustic properties of the room and of the sound sources (including their positions and radiation characteristics). Separate from this spatial scene description is the coding of the sound streams themselves (the encoding of the sound produced by each source). The MPEG-4 format provides an object-based sound scene description that is compatible with WFS reproduction. For further information on this subject, the reader may refer to the overview of WFS.
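As an illustration, such an object-based scene description can be sketched as a small data structure. The field names below are purely illustrative (they are not MPEG-4 syntax); note how each source carries its spatial description while its audio stream is only referenced, mirroring the separation described above:

```python
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    # Spatial description of one object (hypothetical field names)
    position: tuple   # (x, y, z) in metres
    radiation: str    # e.g. "omni", "cardioid"
    stream_id: str    # reference to the separately coded audio stream

@dataclass
class SoundScene:
    room: dict                 # acoustic properties, e.g. {"rt60": 1.2}
    sources: list = field(default_factory=list)

# A minimal scene: one omni-directional source in a room with 1.2 s reverb time
scene = SoundScene(room={"rt60": 1.2})
scene.sources.append(SoundSource((0.5, -1.0, 0.0), "omni", "violin_01"))
```

The point of the separation is that the same spatial scene can be rendered on any reproduction setup (WFS array, binaural, stereo) by swapping the renderer, not the content.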
WFS Reproduction

Work conducted on Wave Field Synthesis has led to a very simple formulation for the reproduction of omni-directional virtual sources using a linear loudspeaker array: the driving signals for the loudspeakers composing the array are delayed and attenuated versions of a single filtered signal. The maximum spacing between two adjacent loudspeakers is approximately 15 to 20 cm, which allows for optimal localization over the entire span of the listening area. Two types of virtual sources can be synthesized:

• Virtual point sources situated behind the loudspeaker array. This type of source is perceived by any listener inside the sound installation as emitting sound from a fixed position, and that position remains stable for a listener moving around inside the installation.
• Plane waves. These sources are produced by placing a virtual point source at a seemingly "infinite" distance behind the loudspeakers (i.e., at a very large distance compared to the size of the listening room). Such sources have no acoustical equivalent in the "real world", but the sun offers a good visual analogy: when travelling in a car or train, one can have the impression that the sun is "following" the vehicle while the landscape streams by at high speed. This sensation of being "followed" by an object that keeps the same angular direction as one moves around the listening area accurately describes the effect of a plane wave.
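The delay-and-attenuation description above can be sketched numerically. This is a free-field illustration under simplifying assumptions (ideal omni-directional speakers, speed of sound taken as 343 m/s, a 1/√r amplitude decay); the function and variable names are illustrative and not IRCAM's implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed)

def driving_parameters(source, speakers):
    """Delay and gain per loudspeaker for a virtual point source behind a
    linear array: each speaker plays a delayed, attenuated copy of one
    common pre-filtered signal."""
    params = []
    for sx, sy in speakers:
        r = math.hypot(sx - source[0], sy - source[1])  # source-to-speaker distance
        params.append((r / SPEED_OF_SOUND,              # propagation delay (s)
                       1.0 / math.sqrt(r)))             # free-field amplitude decay
    return params

def plane_wave_parameters(angle_deg, speakers):
    """A plane wave from direction angle_deg (0 = broadside) needs only a
    constant delay gradient along the array and a uniform gain. In practice
    a global offset delay keeps all delays non-negative."""
    theta = math.radians(angle_deg)
    return [(sx * math.sin(theta) / SPEED_OF_SOUND, 1.0) for sx, _ in speakers]

# Eight speakers along the x-axis, 0.15 m apart (within the 15-20 cm limit above)
speakers = [(0.15 * n, 0.0) for n in range(8)]
# Virtual point source 1 m behind the array, roughly facing speaker index 3
params = driving_parameters((0.5, -1.0), speakers)
```

As expected from the geometry, the speaker nearest the virtual source gets the shortest delay and the largest gain, and a broadside plane wave (angle 0) drives all speakers in phase.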
MAP Loudspeakers

MAP (Multi-Actuator Panel) loudspeakers are derived from DML (Distributed Mode Loudspeaker) technology. They consist of a vibrating plate made of polystyrene that is excited by a set of drivers (electrodynamic devices fastened to the rear surface of the plate by their mobile coil). Each driver receives an independent signal, which allows for the creation of a multi-channel system using a single vibrating surface. The biggest advantage of this type of setup is its low visual profile, which allows it to be integrated into an existing environment without revealing the presence of up to hundreds of loudspeakers. Furthermore, the vibration of the surface is faint enough that it does not interfere with the projection of 2D images; MAP loudspeakers can therefore double as projection screens.

The drawback of these loudspeakers is that their acoustical behaviour differs markedly from that of the omni-directional point sources theoretically required for Wave Field Synthesis: they exhibit frequency responses and radiation patterns that call for specific processing. Equalization methods were therefore implemented to compensate for these flaws over an extended listening area.
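One standard way to build such an equalizer (a sketch only; the source does not specify IRCAM's actual method) is regularized frequency-domain inversion of a measured driver response. The impulse response below is a toy stand-in, and the regularization term beta limits the boost applied at frequencies where the driver radiates weakly:

```python
import numpy as np

def regularized_inverse_filter(h, n_fft=256, beta=1e-3):
    """Return an FIR filter that approximately inverts impulse response h.
    beta regularizes the inversion so weak frequencies are not over-boosted."""
    H = np.fft.rfft(h, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)  # ~1/H where |H| >> sqrt(beta)
    return np.fft.irfft(H_inv, n_fft)

# Toy "measured" driver response: attenuated direct sound plus one reflection
h = np.zeros(64)
h[0] = 0.8
h[5] = 0.3

g = regularized_inverse_filter(h)
equalized = np.convolve(h, g)[:64]  # close to a unit impulse after equalization
```

In a multi-driver panel, one such filter (or a joint multichannel design) would be computed per driver from responses measured at several positions, which is what extends the compensation over a listening area rather than a single point.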
Copyright © IRCAM Room Acoustics Team, last modified 10/05/04