Computational Neuroscience

Research Group at Laboratory of Computational Engineering


Model of neocortex

As its name suggests, the neocortex evolved relatively recently, some time after the mammalian lineage diverged from that of reptiles. It has expanded more than any other brain structure over the course of evolution and, with its numerous folds (gyri and sulci), is the largest structure in the human brain.

The neocortex processes inputs from all the senses and is also the seat of high-level cognitive functions such as decision making, imagination, planning and consciousness. It learns regularities, rules, abstractions and relations from the sensory inputs it receives. Thus, it forms a model of the world in which the animal lives. It also supports attention by deciding which aspects of the world are relevant at each moment.

The neocortex has a stereotypical six-layered organization. Although many details vary, the overall structure is still recognisable throughout different cortical areas and species. This suggests that the neocortex can perform all of its functions with variations of the same basic algorithm. This algorithm must be quite general and widely applicable because, over the course of evolution, the neocortex has expanded enormously and taken over many functions of other, specialised subcortical brain structures. For instance, in human motor control, the motor cortex is a necessary executive organ without which we become paralysed. In contrast, in many other mammals such as rats, the whole neocortex can be removed without critically impairing motor behaviour.

So far our model of the neocortex supports learning and attention. The model consists of a large number of similar, interconnected information processing units which interpret their inputs and make decisions about what information to broadcast based on the contextual inputs they receive from their neighbours. In such a network, global attention emerges from the units' individual decisions to broadcast information (see Complex networks and agent-based models and Cognitive Systems for other related research).


Figure 1. Example architecture of the neocortex. The black arrows are the driving bottom-up input connections, and dashed purple arrows are contextual connections. One processing unit is shown in the enlargement. See text for more details.

The model is depicted in Figure 1 with one of the units shown in the enlargement. Each unit receives bottom-up input vectors (solid black arrows) and represents their regularities (features, in machine learning terminology) by neural activation levels (the plots with blue curves), which are the outputs of the unit. In addition to the bottom-up inputs, the units receive information about other units through contextual inputs (dashed purple arrows). The units use the contextual information to improve their estimate of the identity of their bottom-up input and to decide which features are the most relevant at the moment. Implicitly, the units perform Bayesian inference about the identity of their bottom-up inputs, using the contextual inputs as background information to refine their judgement. The units also implicitly evaluate the importance of the bottom-up information they are receiving and decide whether to represent and broadcast it.
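The following is a minimal sketch, in Python with NumPy, of how a single unit might combine bottom-up evidence with a context-derived prior; the function name, the example numbers and the simple normalisation are illustrative assumptions rather than details of the actual model.

```python
import numpy as np

def unit_activation(likelihood, context_prior):
    """Combine bottom-up evidence with a context-derived prior.

    likelihood    : P(input | feature) for each feature (bottom-up evidence).
    context_prior : P(feature | context), derived from neighbouring units.

    Returns a normalised posterior over the unit's features.
    """
    posterior = likelihood * context_prior   # Bayes' rule, unnormalised
    return posterior / posterior.sum()       # normalise over the features

# Illustrative numbers: the bottom-up input weakly favours feature 1,
# but the context strongly supports feature 0, shifting the estimate.
likelihood = np.array([0.4, 0.5, 0.1])
context_prior = np.array([0.7, 0.2, 0.1])
print(unit_activation(likelihood, context_prior))  # posterior favours feature 0
```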

In practice, the contextual information is processed by an associator module which looks for correlations between the context and the input features. Those bottom-up features that are supported by the context are highlighted. In Bayesian terms, the context mediates the prior probabilities, while the bottom-up connections mediate the likelihood of different features being present in the world.
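As an illustration of such an associator, the toy class below accumulates context-feature correlations with a Hebbian outer-product rule and maps the current context to a prior over the features; the class name, the learning rule and the learning rate are assumptions made for this sketch, not mechanisms taken from the model.

```python
import numpy as np

class Associator:
    """Toy associator: accumulates context-feature correlations with a
    Hebbian outer-product rule (an assumed rule, chosen for simplicity)."""

    def __init__(self, n_context, n_features, lr=0.01):
        self.W = np.zeros((n_features, n_context))  # correlation estimates
        self.lr = lr

    def update(self, features, context):
        # Strengthen weights between co-active features and context elements.
        self.W += self.lr * np.outer(features, context)

    def prior(self, context):
        # Map the current context to a prior over the bottom-up features.
        support = np.maximum(self.W @ context, 1e-9)  # keep strictly positive
        return support / support.sum()
```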

The context-based associations are also used to assess the value of representing different features. So far we have experimented with evaluating the features based on their coherence with the context. The motivation is that it is better to represent those features which belong to the same object or event rather than represent features which belong to different objects or events. In practice this is achieved by highlighting context-supported features even more than Bayesian probability theory would suggest and then selecting only the most active features. In a network of processing units, this type of selection quickly singles out the features belonging to the most prominent object. The network automatically learns to perceive objects based on the associations between the context and the bottom-up inputs. This corresponds to finding Gestalt shapes.
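One simple way to realise "highlighting context-supported features even more than Bayesian probability theory would suggest" is to raise the posterior to a power greater than one and then keep only the strongest features; the exponent and the top-k selection below are illustrative choices, not the model's actual selection rule.

```python
import numpy as np

def select_features(posterior, sharpness=2.0, k=2):
    """Exaggerate context-supported features and keep only the strongest.

    Raising the posterior to a power > 1 boosts context-supported features
    beyond what Bayes' rule alone gives; keeping only the top k activations
    then implements the competitive selection. Both parameters are
    illustrative choices.
    """
    boosted = posterior ** sharpness
    output = np.zeros_like(boosted)
    top = np.argsort(boosted)[-k:]       # indices of the k strongest features
    output[top] = boosted[top]
    return output / output.sum()         # renormalise the surviving features
```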

Since it is usually beneficial to process and represent more than one object, we have added a mechanism to switch between different objects. Again, this process relies on a very simple habituation mechanism distributed among the processing units: the active output neurons gradually get "tired". After a while, some of the units start to represent the features of another object and, due to the context connections between units, this change escalates rapidly through the network, which then switches its attention to the other object.
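A hedged sketch of such a habituation mechanism: each neuron accumulates fatigue while active and recovers while inactive, and the fatigue suppresses its output. The dynamics, time constants and multiplicative suppression below are assumptions made for illustration.

```python
import numpy as np

def habituation_step(activity, fatigue, tau_rise=0.1, tau_decay=0.05):
    """One update of a simple habituation ('tiring') mechanism.

    Active neurons accumulate fatigue, which suppresses their output;
    inactive neurons slowly recover. Once the currently winning features
    tire, features of another object can win the competition, and the
    context connections spread the switch through the network.
    """
    fatigue = fatigue + tau_rise * activity - tau_decay * fatigue
    effective = activity * (1.0 - np.clip(fatigue, 0.0, 1.0))
    return effective, fatigue
```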

One of the most intriguing aspects of the neocortex is its ability to come up with abstract, meaningful concepts. Our model uses so-called competitive learning, where the output neurons learn to respond even more strongly to those inputs for which they became active. Since the contextual inputs modulate the activations strongly, they also have an important role in guiding learning. We have shown that in a hierarchical model like the one shown in Figure 1, the upper layers develop meaningful abstract representations. Moreover, since the emergent selection process in the network is able to attend to one object at a time, learning is faster because the features of different objects do not get mixed together.
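The sketch below shows a generic single-winner variant of competitive learning of the kind described: the winning output neuron pulls its weight vector toward the input it won, so it responds even more strongly to similar inputs in the future. Because the activations here would already be context-modulated, the context influences which unit wins and hence what is learned. The single-winner rule and the learning rate are illustrative simplifications.

```python
import numpy as np

def competitive_update(W, x, activations, lr=0.05):
    """One competitive learning step (a generic single-winner variant).

    The output neuron with the strongest (context-modulated) activation
    moves its weight vector toward the current input, so it will respond
    even more strongly to similar inputs in the future.
    """
    winner = np.argmax(activations)          # unit that won the competition
    W[winner] += lr * (x - W[winner])        # pull its weights toward the input
    return W
```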

So far we have not embedded the model into a larger cognitive architecture, but this has been a goal in its design. We plan to include inputs from other "subcortical" modules as contextual inputs in order to bias attention and learning in the neocortical model. There are also various other interesting ways to improve the model's evaluation of important bottom-up inputs. For example, it is usually important to represent bottom-up inputs that are predictive of changes in context, whereas the reverse temporal order indicates that the corresponding bottom-up inputs are not important. When receiving context from a motor system, such as the cerebellar model discussed in the previous section, and bottom-up inputs from sensors, such as cameras, the model could then learn to represent those visual features which are important for the motor behaviour of the system.

Other documents of this project

Master's thesis of Antti Yli-Krekola (pdf): A bio-inspired computational model of covert attention and learning