This paper presents an open testbed for controlling facial animation. The adopted control means can act at different levels of abstraction (specification). These means of control can be associated with different interactive devices and media, thereby allowing the animator greater flexibility and freedom. The possibility of integrating and mixing control means provides a general platform on which a user can experiment with his or her choice of control method. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from the DataGlove are illustrated.
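To illustrate the kind of architecture the abstract describes, the following is a minimal sketch, not the paper's actual system or API: every name (FacialParams, ControlSource, mix, the parameter names, and the blending rule) is an assumption introduced here. It shows how control sources at different abstraction levels, a low-level synthesizer-key mapping and a high-level DataGlove gesture, could be mixed into one set of facial parameters.

```cpp
// Hypothetical sketch of mixed control means for facial animation.
// All names and the additive blending rule are assumptions, not the
// paper's implementation.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Low-level control: named facial parameters (e.g., jaw opening) in [0, 1].
using FacialParams = std::map<std::string, double>;

// A control source turns raw device input into parameter contributions.
using ControlSource = std::function<FacialParams()>;

// Blend contributions from several sources into one parameter set,
// illustrating the "integration and mixing of control means" idea.
FacialParams mix(const std::vector<ControlSource>& sources) {
    FacialParams out;
    for (const auto& src : sources)
        for (const auto& [name, value] : src())
            out[name] += value;           // naive additive blend (assumed)
    for (auto& [name, value] : out)
        if (value > 1.0) value = 1.0;     // clamp to the valid range
    return out;
}

int main() {
    // Low-level source: a key on a music synthesizer keyboard drives
    // jaw opening directly (the scaling is a stand-in for live input).
    ControlSource synthKey = [] {
        double velocity = 0.6;
        return FacialParams{{"jaw_open", velocity}};
    };

    // High-level source: a recognized DataGlove gesture selects a whole
    // expression, expanded here into several low-level parameters.
    ControlSource gloveGesture = [] {
        return FacialParams{{"brow_raise", 0.8}, {"jaw_open", 0.2}};
    };

    for (const auto& [name, value] : mix({synthKey, gloveGesture}))
        std::cout << name << " = " << value << '\n';
}
```

The point of the sketch is that both devices feed the same parameter space, so controls specified at different abstraction levels can coexist and be blended rather than being tied to a single input method.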
Pages: 59-72
Publication title: Multimedia modeling : proceedings of the First International Conference on Multi-Media Modeling, Singapore, 9-12 November, 1993