Cognitive Science and Technology

Research Group at the Laboratory of Computational Engineering


Artificial Person

Researchers: Martin Dobšík, Michael Frydrych, Jari Kätsyri, Mikko Sams

Speech is both heard and seen. Visible articulatory movements significantly improve speech perception, especially when the acoustic speech signal is degraded, for example by hearing impairment or environmental noise. There is evidence that speech perception also improves significantly with computer-animated audio-visual speech synthesizers, i.e. talking heads.

We have developed a toolkit for real-time computer animation of a Finnish-speaking talking head, the "Artificial Person". We have paid special attention to improving the quality of audiovisual speech. Synchronized auditory and visual speech are produced automatically from input text, which can be enriched with user-definable commands to perform specific gestures, such as facial expressions (Fig. 1). The Artificial Person can express six basic emotions (anger, disgust, fear, happiness, sadness and surprise) and their combinations.
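To make the enriched-input idea concrete, the following is a minimal, hypothetical sketch in Python. The \emotion{...} command syntax, the parsing routine, and the weight-blending scheme over the six basic emotions are illustrative assumptions for this sketch only, not the actual command language or API of the Artificial Person toolkit.

# Hypothetical sketch: parse input text enriched with embedded expression
# commands and blend basic-emotion weights into one expression vector.
# The command syntax below is an assumption, not the toolkit's real format.

import re

BASIC_EMOTIONS = ("anger", "disgust", "fear", "happiness", "sadness", "surprise")

def parse_enriched_text(text):
    """Split input text into speech segments and embedded commands of the
    assumed form \\emotion{name=weight,...}."""
    tokens = []
    for part in re.split(r"(\\emotion\{[^}]*\})", text):
        if part.startswith("\\emotion{"):
            spec = part[len("\\emotion{"):-1]
            weights = {}
            for item in spec.split(","):
                name, _, value = item.partition("=")
                weights[name.strip()] = float(value or 1.0)
            tokens.append(("command", weights))
        elif part.strip():
            tokens.append(("speech", part.strip()))
    return tokens

def blend_expression(weights):
    """Combine basic-emotion weights into a normalized expression vector,
    e.g. for driving one morph target per emotion."""
    total = sum(weights.get(e, 0.0) for e in BASIC_EMOTIONS) or 1.0
    return {e: weights.get(e, 0.0) / total for e in BASIC_EMOTIONS}

if __name__ == "__main__":
    line = r"Hyvaa paivaa! \emotion{happiness=0.7,surprise=0.3} Nice to meet you."
    for kind, payload in parse_enriched_text(line):
        if kind == "command":
            print("expression:", blend_expression(payload))
        else:
            print("speak:", payload)

In this sketch, a command such as \emotion{happiness=0.7,surprise=0.3} would produce a blended expression in which happiness and surprise are mixed, analogous to the emotion combinations mentioned above.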

Figure 1: Artificial Person. Expressions from left: neutral, sad and surprised.