Creating a sound scene in
WFS involves associating spatial information with the sound signals that compose
the scene. Virtual sources are used to reproduce both the direct sound and
the room effect, following a physical, geometrical or perceptual description.
The sound signals distributed over the different virtual sources
correspond to the soundcard outputs of a computer equipped with an audio
sequencer. Classical sequencer functions such as equalization
or compression therefore remain available to the end-user. Stereophonic panning
is used to distribute the signals over Virtual Panning Spots
(VPS), allowing for the creation of phantom sources within a virtual
stereophonic VPS imaging area (cf. documentation).
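The panning between two VPS loudspeakers can be pictured with a standard constant-power (sine/cosine) panning law. This is a generic sketch of the technique, not the system's actual implementation:

```python
import math

def constant_power_gains(position: float) -> tuple[float, float]:
    """Constant-power stereo panning between two Virtual Panning Spots.

    position: -1.0 (fully on the left VPS) .. +1.0 (fully on the right VPS).
    The gains satisfy gL**2 + gR**2 == 1, so the perceived loudness stays
    constant as the phantom source moves across the imaging area.
    """
    theta = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

# A phantom source centred between the two VPS gets equal gains (~-3 dB each):
gl, gr = constant_power_gains(0.0)
```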
Scene description parameters are adjustable via a plug-in giving access
to the source position (holophonic distance, incidence angle) as well as to
MPEG-4 perceptual parameters. These parameters can be entirely automated,
allowing the spatialization to evolve in time in precise synchrony with
sound events.
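Such automation can be pictured as breakpoint curves sampled at each control tick. The sketch below illustrates this for an incidence-angle trajectory; the breakpoint representation is an assumption for illustration, not the plug-in's actual format:

```python
from bisect import bisect_right

def automate(breakpoints, t):
    """Linearly interpolate an automated parameter at time t (seconds).

    breakpoints: list of (time, value) pairs sorted by time, e.g. an
    incidence-angle trajectory recorded in the sequencer.
    """
    times = [bp[0] for bp in breakpoints]
    i = bisect_right(times, t)
    if i == 0:
        return breakpoints[0][1]   # before the first breakpoint
    if i == len(breakpoints):
        return breakpoints[-1][1]  # after the last breakpoint
    (t0, v0), (t1, v1) = breakpoints[i - 1], breakpoints[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# An incidence-angle sweep from -45 deg to +45 deg over four seconds:
azimuth = [(0.0, -45.0), (4.0, 45.0)]
```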

Above: Screenshot of a ProTools session
in which the scene description parameters have been automated using a
dedicated plug-in.
The author of the sound scene can choose to render the direct sound either
with WFS virtual point sources or by panning over the different room-effect
channels. The latter allows a large number of sources to be reproduced
without increasing the required processing power. More pragmatically, if the
"complete" WFS setup (i.e. an uninterrupted loudspeaker distribution) is
restricted to the front wall of the listening room, this extends the possible
positions for virtual sources to the rear and side walls.
Spatialization parameters are shared with, and accessible by,
the other elements of the production chain through a distributed database
on the ZsonicNet network developed by sonicEmotion. A large number of
parameters are therefore available at every location within the installation
over an Ethernet-type connection. The network exhibits very low latency
(~10 ms), allowing a global refresh of parameters in real time
from any location within the network. ZsonicNet allows distributed processes
to be controlled from any client inside the network; it enables
synchronous transfer of audio data to all clients and provides a consistent
database of parameters. In practice, WFS rendering on a large set-up with
several rendering machines can be controlled from one or more audio
workstations, which means that WFS can be integrated into existing audio
workstations. Audio and control data are transferred from the audio
workstation to the different WFS rendering machines inside the network. The
network itself remains server-less and supports dynamic configuration
with changing reproduction systems.
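ZsonicNet itself is proprietary, but the idea of a consistent, server-less parameter database with change notification can be sketched in a few lines. Everything below, class and method names included, is an illustrative assumption standing in for the real network layer:

```python
class ParameterDatabase:
    """Toy model of a shared spatialization-parameter database.

    Each node in the network would hold such a replica; setting a value
    notifies every subscribed client (here: plain callbacks), standing in
    for the real-time refresh that the network performs over Ethernet.
    """
    def __init__(self):
        self._values = {}
        self._subscribers = []

    def subscribe(self, callback):
        # e.g. a WFS rendering machine or a tablet interface
        self._subscribers.append(callback)

    def set(self, key, value):
        self._values[key] = value
        for cb in self._subscribers:
            cb(key, value)  # push the update to every client

    def get(self, key):
        return self._values[key]

# One workstation sets a source position; a renderer replica follows it:
db = ParameterDatabase()
renderer_view = {}
db.subscribe(lambda k, v: renderer_view.update({k: v}))
db.set("source1/azimuth", 30.0)
```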
Spatialization parameters can thus be made available
on a ListenSpace interface installed on a portable PC tablet using a wireless
Ethernet connection. The author can then modify the spatialization parameters
in real time while moving around the sound installation.
Left: The ListenSpace interface, from which
the end-user may modify the sound scene according to the listening-room geometry.
Right: Use of a portable PC tablet with a wireless connection to modify
the sound scene in real time while moving around the listening room.
The virtual acoustics processor Spat~, developed by IRCAM,
has been adapted for WFS rendering. Spat~ creates a set of room-effect
channels and transmits the direct-sound signals associated with WFS point
sources to the reproduction system.
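The split between direct sound and room effect can be illustrated with a simple distance-based gain model: the direct level falls off as 1/r while the room-effect send stays roughly constant, which is one common way such processors convey distance. This is a generic sketch, not Spat~'s actual algorithm:

```python
def direct_and_room_gains(distance_m: float, ref_distance_m: float = 1.0):
    """Illustrative distance rendering: 1/r law on the direct path,
    constant room-effect send. Gains are linear amplitude factors."""
    d = max(distance_m, ref_distance_m)  # clamp inside the reference radius
    direct = ref_distance_m / d
    room = 1.0
    return direct, room

# Doubling the distance drops the direct sound by half (about -6 dB)
# while the room-effect level is unchanged:
g_direct, g_room = direct_and_room_gains(2.0)
```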
The proposed interface gives access to the scene description parameters, as
well as to a few basic mixing operations (level settings, routing, mute,
solo…).
It also contains a multi-channel sound-file player synchronized with a
MIDI sequencer, so the system can function without the audio sequencer.
Spatialization parameters are then translated into fixed
MIDI controller values, and the MIDI files associated with the multi-channel
sound-files form a complete content coding of the sound scene.
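Translating a continuous spatialization parameter into a fixed MIDI controller value means quantizing it to the 7-bit (0–127) data range of a control-change message. The angle range below is an assumption for illustration:

```python
def angle_to_cc(angle_deg: float) -> int:
    """Quantize an incidence angle in [-180, 180] deg to a 7-bit CC value."""
    angle = max(-180.0, min(180.0, angle_deg))
    return round((angle + 180.0) / 360.0 * 127.0)

def cc_to_angle(cc: int) -> float:
    """Inverse mapping; the round trip is only exact to ~2.8-degree steps,
    the resolution of a single 7-bit controller."""
    return cc / 127.0 * 360.0 - 180.0

# Store a 45-degree incidence angle in the automation's MIDI file:
cc = angle_to_cc(45.0)
```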

WFS Production Chain: Synthetic view