
Modeling and Prediction of Temporal Processes

Researchers: Timo Koskela, Markus Varsta, Jukka Heikkonen, and Kimmo Kaski

Modeling and prediction of temporal processes is an important problem in many fields of science and engineering. We have studied different neural network models and compared the results with those of other methods. In time series prediction the goal is to predict the future of the measured process. The prediction is made based on the previous values of the series and possibly some other variables that affect the process. Figure 6 presents a time series of the intensity of an infrared laser in a chaotic state. The series is highly nonlinear, stationary and chaotic. Neural network models were found to be well suited for this difficult problem.

 
Figure 6:   The intensity of an infrared laser in a chaotic state. The measured process is an example of a highly nonlinear, stationary and chaotic time series.

The presentation of time in the different models can be divided roughly into two cases. Typically the series is split into input vectors of a certain length using a windowing technique, whereby time is converted into one additional dimension of the data. If the time series is stationary within a known time lag, the splitting can be carried out without losing essential information. However, if the series is nonstationary, windowing becomes more difficult: too long a window tends to average out the temporal patterns in the series, while too short a window is sensitive to short disturbances in the series.
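The windowing step described above can be sketched as follows. This is a minimal illustration; the function name `make_windows` and the one-step-ahead target are assumptions, not details from the original work:

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Split a 1-D series into (input window, target) pairs.

    Each input vector holds `window` consecutive values; the target is
    the value `horizon` steps after the window ends. Time thus becomes
    an additional dimension of the data.
    """
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.array(X), np.array(y)

# Example: window length 3, one-step-ahead prediction.
series = np.arange(10, dtype=float)
X, y = make_windows(series, window=3)
print(X.shape, y.shape)  # (7, 3) and (7,)
```

The choice of `window` embodies the trade-off noted above: it fixes how much temporal context each input vector carries.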

Another way of presenting time is to use internal memory in the model. The temporal context can be stored in internal variables of the model, which can be estimated from the measured data. If the changes in the statistics of the process are slow, the model can follow these changes by adapting its parameters using the stored context. If the changes in the statistics can be explained by different states, the context can be used to predict the next state of the process.
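As a minimal illustration of internal memory, the sketch below stores the temporal context in a single exponentially weighted mean that adapts to slow changes in the process level. The class name and the forgetting rate are hypothetical choices, not part of the original work:

```python
import numpy as np

class AdaptiveTracker:
    """Minimal internal-memory model: an exponentially weighted mean
    serves as stored context and follows slow changes in the process."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # forgetting rate: small alpha = long memory
        self.mean = 0.0      # internal state (temporal context)

    def update(self, x):
        # Blend the new observation into the stored context.
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        return self.mean     # the adapted level of the process

rng = np.random.default_rng(0)
tracker = AdaptiveTracker(alpha=0.1)
# A process whose mean drifts slowly from 0 to 5.
data = np.linspace(0, 5, 200) + 0.1 * rng.standard_normal(200)
preds = [tracker.update(x) for x in data]
```

Because the state is updated recursively, no window length has to be chosen; the model trades the windowing decision for a memory-depth parameter.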

Models can be further divided into global and local models. A global model tries to model the whole time series with one model. In the local model approach the time series is divided into local data sets, for which simple local models are estimated. The division of the series into local data sets can be carried out with a clustering or quantization algorithm, e.g. k-means or the Self-Organizing Map (SOM).
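The local-model approach can be sketched with plain k-means clustering followed by a per-cluster least-squares fit. This is an illustrative minimal version; the function names, the number of clusters and the initialization are assumptions:

```python
import numpy as np

def fit_local_models(X, y, k=4, iters=20, seed=0):
    """Cluster input windows with k-means, then fit one linear
    regression model per local data set (minimal sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):  # plain k-means iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    models = {}
    for j in range(k):  # least-squares fit per local data set
        mask = labels == j
        if not np.any(mask):
            continue  # empty cluster: no local model
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # add bias term
        models[j], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return centers, models

def predict(x, centers, models):
    # Route the input to the nearest fitted cluster's linear model.
    fitted = np.array(sorted(models))
    j = fitted[np.argmin(((centers[fitted] - x) ** 2).sum(-1))]
    w = models[j]
    return x @ w[:-1] + w[-1]

# Demo on a sine series split into windows of length 4.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)
X = np.array([series[i:i + 4] for i in range(395)])
y = series[4:399]
centers, models = fit_local_models(X, y, k=4)
```

Each local model only has to capture the dynamics within its own region of the input space, which is why simple linear models often suffice.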

Our research has concentrated on studying the modeling abilities of the Recurrent Self-Organizing Map (RSOM) proposed by Varsta and Heikkonen [48], [49]. The RSOM is used as a clustering algorithm: the recurrent difference vectors in each of the map units allow the model to store temporal context. The time series is divided into local data sets based on the best matching unit of the map. In our studies linear regression models have been applied to model the resulting local data sets. The results obtained with the RSOM have been compared with Multilayer Perceptron (MLP) neural network and autoregressive (AR) models. The results are presented in a research report [18] and in a journal paper accepted for publication [17].
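The RSOM recurrence can be sketched as follows, assuming the difference-vector formulation y_i(t) = (1 - alpha) y_i(t-1) + alpha (x(t) - w_i) with the best matching unit chosen by the smallest ||y_i||. The 1-D grid, Gaussian neighborhood and learning rates below are illustrative choices, not necessarily the authors' exact setup:

```python
import numpy as np

class RSOM:
    """Sketch of a 1-D Recurrent Self-Organizing Map with leaky
    difference vectors as temporal context (illustrative version)."""

    def __init__(self, n_units, dim, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_units, dim)) * 0.1  # unit weights
        self.y = np.zeros((n_units, dim))  # recurrent difference vectors
        self.alpha = alpha                 # memory leak coefficient

    def step(self, x):
        # Leaky integration of the input-weight difference:
        # y_i(t) = (1 - alpha) * y_i(t-1) + alpha * (x(t) - w_i)
        self.y = (1 - self.alpha) * self.y + self.alpha * (x - self.w)
        # Best matching unit: smallest difference-vector norm.
        return int(np.argmin(np.linalg.norm(self.y, axis=1)))

    def train(self, x, lr=0.1, sigma=1.0):
        b = self.step(x)
        dist = np.arange(len(self.w)) - b          # grid distance to BMU
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))  # Gaussian neighborhood
        self.w += lr * h[:, None] * self.y         # move units along y
        return b

rsom = RSOM(n_units=5, dim=3, alpha=0.5)
for _ in range(300):
    bmu = rsom.train(np.ones(3))
```

Feeding windowed input vectors through the trained map and grouping them by their best matching unit yields the local data sets, to each of which a linear regression model can then be fitted as described above.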


Juha Merimaa
1/2/1998