Two practical applications follow directly from the presented results. The first is the synthesis of music signals. To meet musicians' demands, we need to enhance control over the synthesized signals. In future work we will therefore enlarge the models, incorporating different flavors of sound into a single model and adding further control inputs. In particular, we plan to build models for different volumes and pitches.
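The idea of adding control inputs can be sketched as follows. This is a minimal, hypothetical illustration, not the trained models from the paper: a toy recurrent synthesis model whose next output sample depends on its hidden state plus a control vector (here, pitch and volume). The weights are random placeholders; in practice they would be learned.

```python
import numpy as np

def run_model(weights, controls, num_steps):
    """Toy recurrent synthesis model with external control inputs.

    weights  -- (W_h, W_c, w_out): recurrent, control, and output weights
                (random placeholders here; a real model would train them)
    controls -- control vector, e.g. [pitch, volume], fed at every step
    """
    W_h, W_c, w_out = weights
    h = np.zeros(W_h.shape[0])          # hidden state
    out = np.empty(num_steps)
    for t in range(num_steps):
        # next state mixes the previous state with the control inputs
        h = np.tanh(W_h @ h + W_c @ controls)
        out[t] = w_out @ h               # one output sample per step
    return out

# Example: an 8-unit model driven by a (pitch, volume) control pair.
rng = np.random.default_rng(0)
weights = (0.5 * rng.standard_normal((8, 8)),
           0.5 * rng.standard_normal((8, 2)),
           rng.standard_normal(8))
signal = run_model(weights, np.array([1.0, 0.5]), num_steps=100)
```

Changing the control vector between runs then yields different pitches or volumes from the same model, which is the kind of control the paragraph above envisions.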
As a second application, we will investigate using the neural models in a speech synthesizer. Here, too, the control must be enhanced so that intonation and coarticulation can be represented. The individual models, however, will represent smaller entities, for example phonemes or diphones. Owing to the physically oriented representation of these entities, we expect more natural results with more compact models than existing speech synthesis systems provide.
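Building an utterance from many small per-unit models requires joining their outputs smoothly. The following sketch is an assumption-laden stand-in for that step: it crossfades consecutive unit signals (e.g. per-diphone model outputs) over a fixed overlap, a simple placeholder for the coarticulation handling discussed above.

```python
import numpy as np

def concatenate_units(unit_signals, overlap):
    """Join per-unit signals with a linear crossfade of `overlap` samples.

    unit_signals -- list of 1-D arrays, one per unit (phoneme/diphone);
                    each must be longer than `overlap`
    """
    out = unit_signals[0]
    fade = np.linspace(0.0, 1.0, overlap)
    for seg in unit_signals[1:]:
        head = out[:-overlap]
        # blend the tail of the running signal into the head of the next unit
        mix = out[-overlap:] * (1.0 - fade) + seg[:overlap] * fade
        out = np.concatenate([head, mix, seg[overlap:]])
    return out
```

Each crossfade consumes `overlap` samples from both neighbors, so two 100-sample units joined with a 20-sample overlap yield a 180-sample result.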