The Cognitive Science and Technology Group studies neurocognitive mechanisms of human communication, especially multisensory perception. Our perception of the world is inherently multisensory: we perceive objects through different sense modalities. Most of the objects around us can be both heard and seen, and some can even be felt. A crucial question is how the various unimodal features are integrated into a unitary multisensory percept. Importantly, integrating the sensory information obtained via different modalities makes us more sensitive to slight changes in our environment and also improves object identification.
Human multisensory mechanisms are studied in psychophysical experiments, in which various aspects of the stimuli as well as the state of the subject are manipulated. In order to illuminate the nature of audiovisual integration, we also construct system-level models, which in turn can guide future experiments and provide ideas for automatic audiovisual speech recognition and synthesis. In many of our experiments, we have utilized the McGurk effect, in which visual information about articulatory movements changes auditory perception: for example, an auditory /ba/ presented together with a face articulating /ga/ is often heard as /da/.
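As an illustration of what a system-level model of audiovisual integration can look like, the sketch below implements maximum-likelihood (reliability-weighted) cue combination, a standard model in the multisensory perception literature. This is not necessarily the model used by the group; it assumes independent Gaussian noise in each modality, and all numeric values are illustrative.

```python
def integrate(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates by inverse-variance weighting.

    Each modality contributes in proportion to its reliability
    (the inverse of its noise variance).
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    mu = w_a * mu_a + w_v * mu_v                 # fused estimate
    var = 1 / (1 / var_a + 1 / var_v)            # fused variance
    return mu, var

# A reliable visual cue (low variance) pulls the fused percept
# toward the visual estimate, loosely analogous to vision altering
# audition in the McGurk effect.
mu, var = integrate(mu_a=0.0, var_a=4.0, mu_v=2.0, var_v=1.0)
```

Note that the fused variance is always smaller than either unimodal variance, which captures the observation in the text that integration makes us more sensitive to slight changes than either sense alone.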
Neurocognitive mechanisms of multisensory perception are studied with electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). The research will enhance our understanding of the basic mechanisms of human speech perception, and will have applications in communication technology, for example in the development of automatic audiovisual speech recognizers.
In addition to the basic research, we develop technologies based on neurocognitive research. We are developing the Artificial Person, a model of the communicating human being. The Artificial Person provides us with a well-controlled stimulus for basic research, and it may also be used in various applications.