
Reconstructing attractors

Assume an $n$-dimensional dynamical system $\dot{\vec{x}} = \vec{F}(\vec{x})$ evolving on an attractor $A$. $A$ has fractal dimension $d$, which is often considerably smaller than $n$. The system state $\vec{x}(t)$ is observed through a measurement function $s(t) = h(\vec{x}(t))$, resulting in a time series of measurements $\{s(t_i)\}$. Under weak assumptions concerning $h$ and $\vec{F}$, the fractal embedding theorem [9] ensures that, for $D > 2d$, the set of all delayed coordinate vectors

\begin{displaymath}
  \vec{s}_D(t) = \bigl(s(t),\, s(t-T),\, \ldots,\, s(t-(D-1)T)\bigr),
\end{displaymath}

with an arbitrary delay time $T$, forms an embedding of $A$ in the $D$-dimensional reconstruction space. We call the minimal $D$ which yields an embedding of $A$ the embedding dimension $D_E$. Because an embedding preserves characteristic features of $A$ (in particular, it is one-to-one), it may be employed for building a system model. For this purpose the reconstruction of the attractor is used to uniquely identify the system's state, thereby establishing the possibility of uniquely predicting the system's evolution. The prediction function may be represented by a hyperplane over the attractor in a $(D+1)$-dimensional space. By iterating this prediction function we obtain a vector-valued system model which, however, is valid only on the respective attractor.
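As an illustrative sketch (not the implementation developed in this paper; a nearest-neighbour lookup serves here as a stand-in for the fitted prediction function, and all parameter values are arbitrary), delay-coordinate reconstruction and iterated prediction might look like:

```python
import numpy as np

def delay_embed(s, D, T):
    """Stack delay-coordinate vectors (s(t), s(t-T), ..., s(t-(D-1)T))
    from a scalar series s; returns an array of shape (N, D)."""
    N = len(s) - (D - 1) * T
    return np.column_stack([s[(D - 1 - k) * T:(D - 1 - k) * T + N]
                            for k in range(D)])

def predict(train, targets, query):
    """Nearest-neighbour predictor over the reconstructed attractor
    (a simple stand-in for the fitted prediction hyperplane)."""
    i = np.argmin(np.sum((train - query) ** 2, axis=1))
    return targets[i]

# Toy "measurement" series: samples of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
s = np.sin(t)

D, T = 3, 10                            # embedding dimension, delay in samples
vecs = delay_embed(s, D, T)
train, targets = vecs[:-T], s[D * T:]   # each vector -> value T samples ahead

# Iterate the predictor: prepend each prediction to form the next delay
# vector, giving a vector-valued model on the reconstructed attractor.
state = vecs[-1]
trajectory = []
for _ in range(50):
    nxt = predict(train, targets, state)
    trajectory.append(nxt)
    state = np.concatenate(([nxt], state[:-1]))
```

Note that the iterated trajectory remains on the reconstructed attractor only as long as the predictor generalizes well there; off the attractor the model makes no claim.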
For the reconstruction of nonstationary system dynamics we confine ourselves to the case of slowly varying parameters and model the nonstationary system using a sequence of attractors.
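This sequence-of-attractors idea could be sketched as follows, under the illustrative assumption that the parameter drift is slow relative to a window length `win` (both `win` and `hop` are hypothetical choices, not values from the paper), by building one reconstruction data set per time window:

```python
import numpy as np

def delay_embed(s, D, T):
    """Delay-coordinate vectors (s(t), s(t-T), ..., s(t-(D-1)T))."""
    N = len(s) - (D - 1) * T
    return np.column_stack([s[(D - 1 - k) * T:(D - 1 - k) * T + N]
                            for k in range(D)])

def attractor_sequence(s, D, T, win, hop):
    """Segment a slowly drifting series into overlapping windows and
    return one (delay vectors, next-value targets) training set per
    window; each set would be fitted by its own prediction model."""
    sets = []
    for start in range(0, len(s) - win + 1, hop):
        seg = s[start:start + win]
        vecs = delay_embed(seg, D, T)
        sets.append((vecs[:-1], seg[(D - 1) * T + 1:]))
    return sets

# Sine with slowly drifting frequency as a toy nonstationary series.
t = np.linspace(0, 20 * np.pi, 2000)
s = np.sin(t * (1.0 + 0.01 * t))
models = attractor_sequence(s, D=3, T=10, win=500, hop=250)
```

Each window is treated as approximately stationary, so every entry of `models` plays the role of one attractor in the sequence.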



Axel Roebel
Mon Dec 30 16:01:14 MET 1996