Modeling Emotion

Researchers: Michael Frydrych, Vasily Klucharev, Jari Kätsyri, Mikko Sams

The face plays an important role in social interaction. It conveys information about, e.g., a person's identity, gender, attractiveness, health, emotional expressions, gaze (attention) direction, cultural group, and social status. Facial movements may accentuate spoken information, convey additional information, or regulate conversation between several speakers. Understanding how people process, recognize, and interpret each other's faces and facial motion is a challenging task that has attracted hundreds of scientists in the social science, computer vision, and psychology communities.

We have started creating the first Finnish digital database of natural emotional facial expressions. The database contains recordings of the six basic facial expressions (anger, disgust, fear, happiness, sadness, and surprise) acted by eight human subjects. We studied the identification and perceived naturalness of basic emotional expressions with both natural and synthetic stimuli, presented either as static pictures or as dynamic movie sequences. We also studied the effect of image distortion on facial expression identification (Fig. 40).
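
The report does not specify the type of distortion applied to the stimuli; the sketch below assumes a simple spatial quantization (pixelation), with the block size serving as the distortion level. All names and parameters here are illustrative.

    import numpy as np

    def pixelate(image: np.ndarray, block: int) -> np.ndarray:
        """Coarsen a grayscale face image by averaging over block x block tiles.

        Larger `block` values correspond to stronger spatial distortion.
        """
        out = image.copy().astype(float)
        h, w = out.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = out[y:y + block, x:x + block]  # view into `out`
                tile[...] = tile.mean()               # flatten tile to its mean
        return out

    # Example: produce a graded series of distorted stimuli from one image.
    face = np.random.rand(256, 256)  # stand-in for a face photograph
    stimuli = {b: pixelate(face, b) for b in (1, 4, 8, 16, 32)}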

We have also developed a toolkit for real-time computer animation of a Finnish-speaking talking head. The current version produces synchronized auditory and visual speech from input text and displays facial expressions.
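
To illustrate the visual-speech side of such a pipeline, the sketch below converts timed phonemes (as produced by a text-to-speech front end) into a viseme timeline that could drive a face model in sync with the audio. The phoneme-to-viseme grouping and all names are hypothetical, not the toolkit's actual mapping.

    from dataclasses import dataclass

    # Hypothetical viseme grouping for a few Finnish phonemes; the
    # toolkit's real mapping is not given in this report.
    PHONEME_TO_VISEME = {
        "a": "open", "e": "mid", "i": "spread",
        "o": "round", "u": "round", "y": "round",
        "m": "closed", "p": "closed", "b": "closed",
        "s": "narrow", "t": "dental", "k": "back",
    }

    @dataclass
    class VisemeEvent:
        viseme: str
        start: float     # seconds from utterance onset
        duration: float  # seconds

    def phonemes_to_visemes(phonemes):
        """Turn (phoneme, duration) pairs into a viseme timeline."""
        timeline, t = [], 0.0
        for phoneme, dur in phonemes:
            viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
            timeline.append(VisemeEvent(viseme, t, dur))
            t += dur
        return timeline

    # "taka" with rough, hand-set durations in seconds:
    events = phonemes_to_visemes([("t", 0.06), ("a", 0.10),
                                  ("k", 0.07), ("a", 0.10)])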

Figure 40: Identification scores decrease less with increasing distortion for dynamic presentations than for static ones.

Using fMRI, we investigated which brain areas are activated when observing static versus dynamic (naturally moving) facial expressions of happiness and disgust. Dynamic facial expressions evoked stronger activations than still pictures in areas MT, STS, and FFA (Fig. 41). The results suggest that the STS is activated more by natural motion from a neutral to an emotional face than by a still picture of the same emotional face, and that moving emotional facial expressions elicit stronger activation in certain emotion-specific areas.
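
As a pointer for readers who want to set up this kind of analysis, the sketch below shows how a dyn_hap > sta_hap contrast could be computed today with the nilearn library. The file name, repetition time, and event timings are placeholders, and the study's actual analysis pipeline may have differed.

    import pandas as pd
    from nilearn.glm.first_level import FirstLevelModel

    # Placeholder event table: onsets/durations (s) for the two conditions.
    events = pd.DataFrame({
        "onset":      [0, 12, 24, 36],
        "duration":   [4,  4,  4,  4],
        "trial_type": ["dyn_hap", "sta_hap", "dyn_hap", "sta_hap"],
    })

    # Fit a first-level GLM on one run (file name and TR are placeholders).
    model = FirstLevelModel(t_r=2.0, noise_model="ar1", standardize=False)
    model = model.fit("run1_bold.nii.gz", events=events)

    # Contrast of interest: dynamic happy > static happy.
    z_map = model.compute_contrast("dyn_hap - sta_hap", output_type="z_score")
    z_map.to_filename("dyn_hap_gt_sta_hap_z.nii.gz")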

Figure 41: Differences in brain activation for the dyn_hap > sta_hap contrast (dynamic vs. static happy expressions). From left to right: middle temporal area (MT/V5), superior temporal sulcus (STS), fusiform face area (FFA), and globus pallidus (PLD).

