As illustrated in the figure, after training we initialize the network input with the first input vector of the time series and iterate the network function, shifting the network input and using the latest output to complete the new input vector. The control input may be copied from the training phase to resynthesize the training signal, or it may be varied to emulate a different sequence of system dynamics.
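A minimal sketch of this closed-loop resynthesis is given below, assuming a trained one-step predictor predict(state, control_value) and a delay vector of length D; the function and parameter names are illustrative and not taken from our implementation.

```python
import numpy as np

def resynthesize(predict, x0, control, n_steps):
    """Closed-loop iteration of a trained one-step predictor.

    predict : callable mapping (delay_vector, control_value) -> next sample
    x0      : first input vector of the time series (delay vector of length D)
    control : sequence of control inputs, length >= n_steps
    n_steps : number of samples to generate
    """
    state = np.asarray(x0, dtype=float).copy()
    generated = []
    for k in range(n_steps):
        y = predict(state, control[k])   # network output for the current input
        generated.append(y)
        state = np.roll(state, -1)       # shift the network input by one step
        state[-1] = y                    # complete the new input with the output
    return np.array(generated)
```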
The question that arises in this context concerns the stability of the model. Due to the prediction error of the model, the iterated trajectory will sooner or later leave the reconstructed attractor. Because there is no training data from the neighborhood of the attractor, minimizing the prediction error of the network does not guarantee the stability of the model [5].
Nevertheless, as the examples will show, the neural models are stable for at least some values of the parameters D and T.
Due to the high density of the training data, the method for stabilizing dynamical models presented in [5] is difficult to apply in our situation. Another approach to increasing the model stability is to lower the gradient of the prediction function in the directions normal to the attractor. This can be achieved by perturbing the network input with a small amount of noise during training, as sketched below.
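A hedged sketch of this noise injection, assuming an incremental training routine train_step and a noise level sigma (both hypothetical names rather than details of our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_training_epoch(train_step, inputs, targets, sigma=0.01):
    """One pass over the training set with Gaussian perturbation of the inputs.

    train_step : callable performing one weight update for an (input, target) pair
    inputs     : array of network input vectors, shape (N, D)
    targets    : array of prediction targets, shape (N,)
    sigma      : standard deviation of the input noise (small compared to the data)
    """
    for x, t in zip(inputs, targets):
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)  # perturb the input only
        train_step(x_noisy, t)                              # the target stays noise-free
```

Since the targets are left unchanged, the network is encouraged to map a whole neighborhood of each training point to nearly the same output, which flattens the prediction function in the directions normal to the attractor.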
While conceptually straightforward, we found this method to be only partly successful. Although the resulting prediction function is smoother in the neighborhood of the attractor, the prediction error for training with noise is considerably higher than the noise-free results would suggest, so that the overall effect is often negative. To circumvent the problems of training with noise, further investigations will consider an objective function with a regularization term that directly penalizes large derivatives of the network output with respect to the input units [1]; a sketch of such an objective is given below. The stability of the models remains a major subject of further research.
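A possible form of such a regularized objective is sketched below; the derivatives with respect to the input units are approximated by finite differences purely for illustration (in practice they would be computed by backpropagation through the network), and the penalty weight lam is an assumed hyperparameter.

```python
import numpy as np

def regularized_loss(predict, inputs, targets, lam=1e-3, eps=1e-4):
    """Mean squared prediction error plus a penalty on the input derivatives.

    predict : prediction function mapping an input vector to a scalar output
    lam     : weight of the derivative penalty
    eps     : step size of the finite-difference derivative estimate
    """
    n = len(inputs)
    mse = 0.0
    penalty = 0.0
    for x, t in zip(inputs, targets):
        x = np.asarray(x, dtype=float)
        y = predict(x)
        mse += (y - t) ** 2
        # approximate the derivative of the network output w.r.t. each input unit
        for i in range(x.size):
            x_plus = x.copy()
            x_plus[i] += eps
            penalty += ((predict(x_plus) - y) / eps) ** 2
    return mse / n + lam * penalty / n
```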