
Neural models

As shown in the figure, we use the delayed coordinate vectors and a selected control sequence to train the network to predict the sequence of the following T time samples. The vector-valued prediction avoids the need for a further interpolation of the predicted samples. Otherwise, an interpolation would be necessary to obtain the original sample frequency, which, because the Nyquist frequency is not taken into account when choosing T, is not straightforward to achieve.
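As a minimal sketch of this training setup, the following Python fragment assembles (input, target) pairs from a scalar time series x and a control sequence u. The embedding dimension D, the delay, the horizon T, and the exact input layout (delayed coordinates concatenated with the current control value) are our illustrative assumptions, not prescribed by the text.

   import numpy as np

   def make_training_pairs(x, u, D, delay, T):
       # Each input: D delayed coordinates of x plus the control value u[n]
       # (assumed layout).  Each target: the following T samples of x,
       # giving the vector-valued prediction described above.
       inputs, targets = [], []
       first = (D - 1) * delay          # earliest index with a full window
       last = len(x) - T                # latest index with T future samples
       for n in range(first, last):
           coords = x[n - (D - 1) * delay : n + 1 : delay]
           inputs.append(np.concatenate([coords, [u[n]]]))
           targets.append(x[n + 1 : n + 1 + T])
       return np.array(inputs), np.array(targets)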

After training we initialize the network input with the first input vector of the time series and iterate the network function, shifting the network input and using the latest outputs to complete the new input. The control input may be copied from the training phase to resynthesize the training signal, or it may be varied to emulate another sequence of system dynamics.
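The closed-loop iteration might be sketched as follows; the network interface net(input) -> T predicted samples and the per-step control value u_seq[k] are assumptions for illustration.

   import numpy as np

   def resynthesize(net, x0, u_seq, D, delay, T, n_steps):
       # x0 seeds the generated signal and must cover one full delayed
       # coordinate window, i.e. len(x0) >= (D - 1) * delay + 1.
       signal = list(x0)
       for k in range(n_steps):
           n = len(signal) - 1
           coords = signal[n - (D - 1) * delay : n + 1 : delay]
           # Control copied from training resynthesizes the training
           # signal; a different u_seq emulates other system dynamics.
           inp = np.concatenate([coords, [u_seq[k]]])
           signal.extend(net(inp))   # latest outputs complete the next input
       return np.array(signal)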

The question that arises in this context concerns the stability of the model. Due to the prediction error of the model, the iteration will soon leave the reconstructed attractor. Because there exists no training data from the neighborhood of the attractor, the minimization of the prediction error of the network does not guarantee the stability of the model [5]. Nevertheless, as we will see in the examples, the neural models are stable for at least some parameters D and T.
Due to the high density of training data, the method for stabilizing dynamical models presented in [5] is difficult to apply in our situation. Another approach to increasing the model stability is to lower the gradient of the prediction function in the directions normal to the attractor. This may be achieved by disturbing the network input during training with a small noise level. While conceptually straightforward, this method turned out to be only partly successful: the resulting prediction function is smoother in the neighborhood of the attractor, but the prediction error for training with noise is considerably higher than expected from the noise-free results, such that the overall effect is often negative. To circumvent the problems of training with noise, further investigations will consider an optimization function with a regularization term that directly penalizes high derivatives of the network output with respect to the input units [1]. The stability of the models is a major subject of further research.
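A minimal sketch of the noise-injection variant, assuming the inputs are perturbed with zero-mean Gaussian noise of standard deviation sigma at each training pass while the targets are left unchanged:

   import numpy as np

   def noisy_inputs(inputs, sigma, rng):
       # Perturbing only the inputs flattens the learned prediction
       # function in directions normal to the attractor.
       return inputs + sigma * rng.standard_normal(inputs.shape)

   rng = np.random.default_rng(0)
   X_noisy = noisy_inputs(X_train, 0.01, rng)   # X_train: training inputs

For the regularized alternative, one plausible form of the objective (our notation; the formulation in [1] may differ) is

   E = sum_n || y_n - f(x_n) ||^2 + lambda * sum_n || df/dx (x_n) ||^2,

where f is the network function and lambda weighs the penalty on the derivatives of the network with respect to its inputs.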


