Robotics has traditionally been considered one of the topics of Artificial Intelligence. Most industrial robots, however, need not be particularly intelligent: they can be very useful even if they merely execute planned trajectories. Mobile robots, by contrast, must deal with the noisy, dynamic and ultimately complex real world as it is, not with a virtual world that can be planned, implemented and managed according to the needs of the designer. The design principles of classical Artificial Intelligence (the Physical Symbol Systems Hypothesis; goal-based design with goal, knowledge, plan and action; modularity; the traditional sense-think-act cycle; a central information-processing architecture; and top-down design) have turned out to be quite insufficient for building mobile robots for natural environments. Classical AI systems lack the robustness, the real generalisation capability and the real-time operation needed in robotics. Their operation is model based, and the more comprehensive and detailed the model, the more strongly the robotic agent is affected by the frame problem (how can a model of a continuously changing world be kept in tune with the real world?) and the symbol grounding problem (the symbol mappings in AI programs are grounded in a human's experience of his or her interaction with the real world; an autonomous robot has no human operator, so the meaning of its symbols must be grounded in the system's own interaction with the real world). In sensor information processing and perception there are problems, in the robotics context, that cannot be solved just by adding more MIPS.
New methodologies for designing intelligent robotic systems have been developed. A new field has grown around the study of behaviour-based intelligence, also known as embodied cognitive science. The crucial difference is that instead of performing extensive inference operations on internal models or representations, the robot interacts with the current situation: it can simply look at the real world through its sensors, and the world is its own best model. This approach has proved very useful, but many problems remain unsolved and new insights are needed.
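The behaviour-based idea can be sketched in a few lines of code. The following is a minimal, illustrative example only (all behaviour names, sensor keys and thresholds are assumptions, not taken from the proposal): each behaviour maps the current sensor reading directly to an action with no internal world model, and a fixed priority ordering arbitrates between them, in the spirit of subsumption-style architectures.

```python
def avoid_obstacle(sensors):
    """Highest priority: turn away if something is close ahead."""
    if sensors["front_distance"] < 0.3:   # metres; hypothetical threshold
        return "turn_left"
    return None                            # behaviour does not fire

def follow_light(sensors):
    """Steer towards the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    """Default behaviour: keep moving."""
    return "forward"

# Behaviours in decreasing priority; the first one that fires wins,
# suppressing all lower-priority behaviours for this control step.
BEHAVIOURS = [avoid_obstacle, follow_light, wander]

def control_step(sensors):
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(control_step({"front_distance": 0.1,
                    "light_left": 0.5, "light_right": 0.5}))
# obstacle close: avoidance suppresses the other behaviours
```

Note that no state persists between control steps: the coupling between sensing and acting is direct, which is exactly what lets such a robot treat the world itself as its model.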
Biological creatures have excellent sensor-processing and motion-control capabilities, and through evolution they have developed very efficient learning of internal representations; these may provide interesting bases for engineering solutions as well. On the basis of studies in biomechanics, neuroscience and biologically oriented neural networks, these functions and mechanisms can be partly understood and explained. Cognitive psychology, in turn, can to some degree explain how humans recognise, analyse and organise their environment. In animals and humans these functions and mechanisms are tightly combined, and the same nervous system, although hierarchical, performs them all. It is very likely that the same basic neural structures and mechanisms, which initially emerged for efficient sensorimotor functions, are flexible enough to support the higher cognitive functions that emerged in humans and other animals. Research on neural mechanisms has perhaps concentrated too much on perception alone; by also taking the related motor actions into account, neural systems can be understood better.
The main scientific objective of the proposed seminar is to adapt to practical mobile robotics, in tight interaction between partners representing different disciplines and paradigms, research results on (I) lower-level sensorimotor neural control mechanisms, (II) higher-level dynamic perception principles and (III) higher-level cognitive functions and mechanisms. The overall aim is to find common representations and mechanisms for all these tightly combined functions.