WAVE FIELD SYNTHESIS @IRCAM

Wave Field Synthesis: A brief overview

Wave Field Synthesis (WFS) is a sound reproduction technique using loudspeaker arrays that redefines the limits set by conventional techniques (stereo, 5.1, etc.). These conventional techniques rely on stereophonic principles, creating an acoustical illusion (as opposed to an optical illusion) over a very small area at the center of the loudspeaker setup, generally referred to as the "sweet spot". WFS, on the other hand, aims to reproduce the true physical attributes of a given sound field over an extended area of the listening room. It is based on Huyghens' principle (1678), which states that the propagation of a wave through a medium can be formulated by adding the contributions of all the secondary sources positioned along a wave front.

Huyghens' Principle

To illustrate Huyghens' principle, let us consider a simple example. A rock (the primary source) thrown into the middle of a pond generates a wave front that propagates along the surface. Huyghens' principle indicates that an identical wave front can be generated by simultaneously dropping an infinite number of rocks (secondary sources) at the positions reached by the primary wave front at a given instant. This synthesized wave front is perfectly accurate outside of the zone delimited by the secondary source distribution. The secondary sources therefore act as a "relay" and can reproduce the original primary wave front in the absence of the primary source!
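
The geometric core of this "relay" idea can be checked numerically. In the small sketch below (an illustrative addition with an arbitrarily chosen geometry, not part of the original text), secondary sources are placed on a circle that the primary wave front reaches after a time R/c, and each one fires with exactly that delay; for a listener outside the circle, the earliest arrival through the secondary sources then coincides with the direct arrival from the primary source.

```python
import numpy as np

c = 343.0                        # speed of sound (m/s); for the pond, a water-wave speed would do
R = 1.0                          # radius of the circle where the secondary sources sit (m)
primary = np.array([0.0, 0.0])   # primary source at the origin

# Secondary sources on the circle reached by the primary wave front at time R / c;
# each one "fires" with exactly that delay.
angles = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
secondary = R * np.column_stack((np.cos(angles), np.sin(angles)))
firing_delay = R / c

listener = np.array([4.0, 2.5])  # any observation point outside the circle

# Earliest arrival through the secondary sources vs. the direct arrival.
via_secondary = firing_delay + np.linalg.norm(listener - secondary, axis=1).min() / c
direct = np.linalg.norm(listener - primary) / c

print(f"direct arrival:        {direct * 1000:.3f} ms")
print(f"via secondary sources: {via_secondary * 1000:.3f} ms")
```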

Origins of Wave Field Synthesis

Wave Field Synthesis (WFS) is based on a series of simplifications of the previous principle. The first work published on the subject dates back to 1988 and is due to Professor A.J. Berkhout of the acoustics and seismology group of the Delft University of Technology (TU Delft) in the Netherlands. This research was continued throughout the 1990s by TU Delft as well as by the research and development department of France Telecom in Lannion.

Content-Coding

WFS relies on an object-based description of a sound scene. To obtain such a description, one must decompose the sound scene into a finite number of sources interacting with an acoustical environment. The coding of the sound scene then includes a description of the acoustic properties of the room and of the sound sources (including their positions and radiation characteristics). Separate from this spatial scene description is the coding of the audio content itself (the signal produced by each source). The MPEG-4 format provides an object-based sound scene description that is compatible with WFS reproduction. For further information on this subject, the reader may refer to the overview of WFS.
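
Purely as an illustration of what such an object-based description contains (the field names below are invented for the example and are not MPEG-4 syntax), a sound scene boils down to a room description plus a list of sources, each carrying its spatial parameters and a reference to its separately coded audio stream:

```python
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    """One object of the scene: spatial parameters plus its own coded audio stream."""
    name: str
    position: tuple        # (x, y, z) in meters, scene coordinates
    radiation: str         # e.g. "omnidirectional", or a reference to measured directivity
    audio_stream: str      # reference to the separately coded signal

@dataclass
class RoomDescription:
    """Acoustic properties of the (virtual) room the sources are placed in."""
    reverberation_time: float   # seconds
    dimensions: tuple           # (length, width, height) in meters

@dataclass
class SoundScene:
    room: RoomDescription
    sources: list = field(default_factory=list)

scene = SoundScene(
    room=RoomDescription(reverberation_time=1.2, dimensions=(20.0, 15.0, 8.0)),
    sources=[
        SoundSource("voice", (0.0, 3.0, 1.7), "omnidirectional", "voice.wav"),
        SoundSource("cello", (-2.0, 4.5, 1.0), "omnidirectional", "cello.wav"),
    ],
)
```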

WFS reproduction

Work conducted on Wave Field Synthesis has led to a very simple formulation for the reproduction of omni-directional virtual sources using a linear loudspeaker array. The driving signals for the loudspeakers composing the array are delayed and attenuated versions of a single filtered signal. The maximum spacing between two adjacent loudspeakers is approximately 15 to 20 cm, which limits spatial aliasing to higher frequencies and preserves accurate localization over the entire span of the listening area.
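
The sketch below (our own minimal example, not the actual IRCAM implementation) makes this concrete for a virtual point source placed behind a linear array: each loudspeaker receives the common filtered signal delayed by its distance to the virtual source divided by the speed of sound, and attenuated with that distance. The exact amplitude weighting and the shared pre-filter depend on the WFS operator chosen, so only a simple distance attenuation is used here.

```python
import numpy as np

c = 343.0          # speed of sound in air (m/s)
spacing = 0.15     # loudspeaker spacing, here 15 cm (within the range quoted above)
n_speakers = 48

# Linear array along the x axis at y = 0; virtual point source 3 m behind it.
x = (np.arange(n_speakers) - (n_speakers - 1) / 2) * spacing
speakers = np.column_stack((x, np.zeros(n_speakers)))
virtual_source = np.array([1.0, -3.0])            # behind the array (negative y)

r = np.linalg.norm(speakers - virtual_source, axis=1)   # source-to-loudspeaker distances

delays = r / c                # each loudspeaker is delayed by its travel time from the source
gains = 1.0 / np.sqrt(r)      # simple distance attenuation; the exact weighting (cosine and
                              # reference-line factors, plus the shared sqrt(j*omega)
                              # pre-filter) depends on the WFS operator chosen
gains /= gains.max()          # normalize for readability

for i in (0, n_speakers // 2, n_speakers - 1):
    print(f"loudspeaker {i:2d}: delay = {delays[i] * 1000:6.2f} ms, gain = {gains[i]:.2f}")
```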

Elementary sources in Wave Field Synthesis
One can distinguish three types of virtual sources that can be synthesized with a WFS system:

Virtual point sources situated behind the loudspeaker array. This type of source is perceived by any listener inside the sound installation as emitting sound from a fixed position, and that position remains stable for a listener moving around inside the installation.


A linear loudspeaker array can synthesize the sound field associated with multiple sound sources simultaneously.

Plane waves. These sources are produced by placing a virtual point source at a seemingly "infinite" distance behind the loudspeakers (i.e. at a very large distance in comparison with the size of the listening room). Such sources have no acoustical equivalent in the "real world". However, the sun is a good illustration of the plane wave phenomenon in the visual domain: when travelling in a car or train, one has the impression that the sun is "following" the vehicle while the landscape streams by at high speed. The sensation of being "followed" by an object that keeps the same angular direction as one moves around inside the listening area accurately describes the effect of a plane wave (see the sketch after these descriptions).


Virtual point sources situated in front of the loudspeaker array. An extension of the WFS principle allows the synthesis of sources within the listening area, at positions where no physical source is actually present. These "sound holograms" are created when the wave front emitted by the loudspeaker array converges onto a fixed position inside the listening room; the wave front then naturally diverges again from that target position towards the rest of the listening area. The sound field is therefore inaccurate between the loudspeaker array and the target position, but perfectly valid beyond it.
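
Continuing the previous sketch (again purely illustrative, with arbitrary geometry), the same delay-and-attenuate picture covers these two other source types: a plane wave only requires delays proportional to the projection of each loudspeaker position onto the propagation direction, while a focused source in front of the array uses time-reversed delays, so that the loudspeaker farthest from the focus fires first and the individual wave fronts converge on the target position.

```python
import numpy as np

c = 343.0
spacing = 0.15
n_speakers = 48
x = (np.arange(n_speakers) - (n_speakers - 1) / 2) * spacing
speakers = np.column_stack((x, np.zeros(n_speakers)))

# Plane wave arriving from 30 degrees: each loudspeaker is delayed by the projection
# of its position onto the propagation direction (constant gain, no 1/r decay).
theta = np.deg2rad(30.0)
direction = np.array([np.sin(theta), np.cos(theta)])    # unit propagation vector
plane_delays = speakers @ direction / c
plane_delays -= plane_delays.min()                      # keep all delays non-negative

# Focused source 1.5 m in front of the array: time-reverse the point-source delays so
# that the farthest loudspeaker fires first and the wave fronts converge on the focus.
focus = np.array([0.5, 1.5])                            # in front of the array (y > 0)
r = np.linalg.norm(speakers - focus, axis=1)
focus_delays = (r.max() - r) / c
focus_gains = 1.0 / np.sqrt(r)                          # same simple weighting as before

print(f"plane-wave delay span:     {np.ptp(plane_delays) * 1000:.2f} ms")
print(f"focused-source delay span: {np.ptp(focus_delays) * 1000:.2f} ms")
```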

MAP Loudspeakers

MAP (Multi-Actuator Panel) loudspeakers are derived from DML (Distributed Mode Loudspeaker) technology. They consist of a vibrating plate made of polystyrene that is excited by a set of drivers (electrodynamic exciters fastened to the rear surface of the plate by their voice coils). Each driver receives an independent signal, which allows for the creation of a multi-channel system using a single vibrating surface. The biggest advantage of this type of setup is its low visual profile, which allows it to be integrated into an existing environment without revealing the presence of up to hundreds of loudspeakers. Furthermore, the vibration of the surface is sufficiently faint that it does not interfere with the projection of 2D images; MAP loudspeakers can therefore be used as projection screens.

The difficulty with these loudspeakers is that their acoustical behaviour is quite different from that of the omni-directional point sources theoretically required for Wave Field Synthesis. They exhibit irregular frequency responses and radiation patterns that call for specific processing. Equalization methods were therefore implemented in order to compensate for these flaws over an extended listening area.
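
As an indication of what such equalization can look like (a generic sketch only, with a made-up measured response; this is not the specific method developed for these panels), a common approach is regularized inversion of a measured frequency response: the inverse is computed in the frequency domain with a regularization term that prevents excessive boosting of deep notches in the panel's response.

```python
import numpy as np

n_fft = 1024

# Stand-in for a measured loudspeaker impulse response (a real one would come from a
# microphone measurement of the panel); here just a toy colored, decaying noise burst.
rng = np.random.default_rng(0)
h = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)
h = np.convolve(h, np.ones(8) / 8.0)[:256]      # crude low-pass coloring

H = np.fft.rfft(h, n_fft)

# Regularized inversion: conj(H) / (|H|^2 + beta). The regularization constant beta
# limits the boost applied where the measured response has deep notches.
beta = 0.05 * np.max(np.abs(H)) ** 2
H_eq = np.conj(H) / (np.abs(H) ** 2 + beta)

# Equalization filter as an FIR, circularly shifted by half its length so it can be
# applied as a causal filter at the cost of a fixed delay.
g = np.roll(np.fft.irfft(H_eq, n_fft), n_fft // 2)

# After equalization, the magnitude response |H * H_eq| is flattened everywhere except
# in the notches that the regularization deliberately leaves uncorrected.
residual = np.abs(H * H_eq)
print(f"residual magnitude ripple: {20 * np.log10(residual.max() / residual.min()):.1f} dB")
```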

MAP loudspeakers are presently manufactured by sonicEmotion.

Copyright © IRCAM Room Acoustics Team, last modified 10/05/04