Équipe Analyse/Synthèse
According to the authors, the goal of this research is to develop gestural interfaces that allow simultaneous multidimensional control, as required in musical composition and sound design: timbre control, for example, involves varying many interdependent parameters at once. They claim that reducing the cognitive load of such control requires a human-computer interface that implements data reduction and/or exploits the ability of human gestures to vary many degrees of freedom simultaneously and effortlessly. They also observe that, although the human hand is well suited for multidimensional control thanks to its detailed articulation, most general-purpose interfaces do not exploit this capability, owing to a limited understanding of how humans produce gestures and what meaning can be inferred from them.
Their approach focuses on the continuous changes in the gestures produced by the user, as opposed to (1) recognition of gesture formalisms, which requires the user to learn the formalism, and (2) recognition of natural gestures, which the authors, citing previous results, consider rarely accurate enough, due to classification errors and segmentation ambiguity. They also consider that touch and force feedback can be replaced by acoustic feedback alone (with some compromises, which are not specified). This option was chosen because of technical constraints on implementing touch and force feedback in a shape-manipulation task.
The proposed system uses MAX/FTS running on an R10000 SGI Onyx workstation with the audio/serial option to interface two Virtual Technologies CyberGloves and a Polhemus Fastrak sensor. Some considerations are given regarding the accuracy of the glove and its calibration, which is considered tedious as well as specific to each individual user. The authors have developed new FTS objects that facilitate quick and easy prototyping of various gestural-analysis computations and allow these computations to be applied to different body parts.
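To make the idea of a reusable gestural-analysis computation concrete, here is a minimal sketch of the kind of reduction such an FTS object might perform: collapsing a group of joint flexion angles into a single normalized "openness" feature. The function name, the angle range, and the joint layout are illustrative assumptions, not the authors' actual implementation.

```python
def openness(joint_angles, angle_range=(0.0, 90.0)):
    """Reduce a group of joint flexion angles (in degrees) to one 0..1 value.

    0.0 means fully flexed (closed), 1.0 means fully extended (open).
    Because it only takes a list of angles, the same reduction could be
    applied to any group of articulations, not just the fingers.
    """
    lo, hi = angle_range
    span = hi - lo
    # Clamp each angle into range, then normalize so extension -> 1.0.
    normalized = [(hi - max(lo, min(hi, a))) / span for a in joint_angles]
    return sum(normalized) / len(normalized)

# Example: a half-flexed hand yields an openness of 0.5.
print(openness([45.0, 30.0, 60.0, 45.0]))  # -> 0.5
```

Packaging such reductions as small, body-part-agnostic units mirrors the prototyping role the authors describe for their FTS objects.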
They consider that both sound and human movement can be represented at various levels of abstraction, and claim that a mapping will be faster to learn when movement features are mapped to sound features at the same level of abstraction. Their strategy is to use a shape as a means of relating hand movements to sound variations.
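The matched-abstraction-level principle can be sketched as follows: a high-level movement feature (here, hand openness) drives a high-level sound feature (here, brightness), and only that perceptual feature is then expanded into low-level synthesis parameters. The feature names, value ranges, and parameter expansion are illustrative assumptions, not the paper's actual mapping.

```python
def brightness_to_synth_params(brightness):
    """Expand one perceptual feature into several low-level parameters.

    The specific parameters and ranges below are assumed for illustration.
    """
    return {
        "filter_cutoff_hz": 200.0 + brightness * 7800.0,  # 200 Hz .. 8 kHz
        "harmonic_tilt_db": -12.0 + brightness * 12.0,    # spectral slope
    }

def map_gesture_to_sound(hand_openness):
    # Feature-to-feature mapping at the same abstraction level:
    # one hand-level quantity controls one perceptual quantity.
    brightness = hand_openness
    return brightness_to_synth_params(brightness)

print(map_gesture_to_sound(0.5))
# -> {'filter_cutoff_hz': 4100.0, 'harmonic_tilt_db': -6.0}
```

The point of the indirection is that the user reasons about one perceptual dimension, while the many interdependent synthesis parameters are handled by the expansion step.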
The system exploits the intuitive relations between the shape of physical objects and timbre, as well as between shape and manipulation, to build a sound-editing environment in which the user changes the sound by manipulating the shape of a virtual object. Shape features are then computed and mapped to sound parameters.
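The shape-to-sound pipeline described above can be sketched end to end. In this sketch the virtual object's outline is represented as radii sampled around a 2D cross-section, two shape features (overall size and surface roughness) are computed from it, and each is mapped to a sound parameter. Both the chosen features and the mappings are assumptions for illustration, not those of the paper.

```python
def shape_features(radii):
    """Compute simple features from a sampled outline of the virtual object."""
    n = len(radii)
    mean = sum(radii) / n
    variance = sum((r - mean) ** 2 for r in radii) / n
    return {
        "size": mean,                          # average radius
        "roughness": variance ** 0.5 / mean,   # relative radial deviation
    }

def shape_to_sound(features):
    """Map shape features to sound parameters (illustrative mapping)."""
    return {
        "pitch_hz": 880.0 / max(features["size"], 1e-6),  # bigger -> lower pitch
        "noise_mix": min(1.0, features["roughness"]),     # rougher -> noisier
    }

# A perfectly smooth unit circle: pure tone at 880 Hz, no noise.
smooth = shape_features([1.0, 1.0, 1.0, 1.0])
print(shape_to_sound(smooth))  # -> {'pitch_hz': 880.0, 'noise_mix': 0.0}
```

Deforming the outline (e.g. pulling one radius outward) would raise the roughness feature and, through the mapping, the noise content of the sound, which is the kind of intuitive shape-timbre relation the environment relies on.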