
RBF neural networks

There are different neural network topologies that may be employed for time series modeling. In our investigation we used radial basis function (RBF) networks, which have shown considerably better scaling properties than networks with sigmoid activation functions as the number of hidden units increases. As proposed by Verleysen et al. [8], we initialize the network using a vector quantization procedure and then apply backpropagation training to fine-tune the network parameters. This tuning of the parameters yields an improvement of about a factor of ten in prediction error compared to the standard RBF network approach [2].
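As an illustration of the initialization step, the following minimal sketch places the centers by a simple Lloyd-style vector quantization (k-means) over the training inputs; the function name, iteration count, and seeding are our own assumptions, not details from [8]. Afterwards, all parameters are fine-tuned by the backpropagation training mentioned above.

    import numpy as np

    def vq_init_centers(X, K, iters=20, seed=0):
        # Choose K RBF centers by Lloyd-style vector quantization
        # (k-means) over the training inputs X of shape (N, n).
        # Illustrative sketch; not the exact procedure of [8].
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=K, replace=False)].astype(float)
        for _ in range(iters):
            # Assign every training sample to its nearest center.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = d2.argmin(axis=1)
            # Move each center to the mean of its assigned samples.
            for k in range(K):
                members = X[labels == k]
                if len(members) > 0:
                    centers[k] = members.mean(axis=0)
        return centers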

The resulting network function for m-dimensional vector-valued output is of the form

F(\vec{x}) = \sum_{j=1}^{K} \vec{w}_j \exp\left( -\frac{\| \vec{x} - \vec{c}_j \|^2}{2 \sigma_j^2} \right) + \vec{b}        (2)

where \sigma_j stands for the standard deviation of the j-th Gaussian, the input \vec{x} and the centers \vec{c}_j are n-dimensional vectors, and \vec{w}_j and \vec{b} are m-dimensional parameters of the network. Networks of the form of eq. (2) with a finite number of hidden units are able to approximate all continuous mappings arbitrarily closely [3]. This universal approximation property is the foundation for using neural networks for time series modeling, where they are referred to as neural models. In the context of the previous section, the neural models approximate the system's prediction function.
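For concreteness, here is a minimal sketch of the network function, assuming the form of eq. (2) as reconstructed above; all function and variable names are illustrative rather than taken from the paper.

    import numpy as np

    def rbf_forward(x, centers, sigmas, W, b):
        # x: (n,) input vector; centers: (K, n); sigmas: (K,) standard
        # deviations; W: (K, m) output weights; b: (m,) bias.
        # Returns the m-dimensional network output of eq. (2).
        d2 = ((centers - x) ** 2).sum(axis=1)   # squared distances, (K,)
        h = np.exp(-d2 / (2.0 * sigmas ** 2))   # Gaussian activations, (K,)
        return h @ W + b                        # linear output layer, (m,)

    # Example usage with n = 3 inputs, K = 10 hidden units, m = 2 outputs:
    rng = np.random.default_rng(0)
    x = rng.standard_normal(3)
    c = rng.standard_normal((10, 3))
    y = rbf_forward(x, c, np.ones(10),
                    rng.standard_normal((10, 2)), np.zeros(2))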

To be able to represent nonstationary dynamics, we extend the network by an additional input a_i that enables control of the actual mapping

F(\vec{x}_i, a_i) = \sum_{j=1}^{K} \vec{w}_j \exp\left( -\frac{\| (\vec{x}_i, a_i) - \vec{c}_j \|^2}{2 \sigma_j^2} \right) + \vec{b}        (3)

where the input vector is augmented by the control input a_i and the centers \vec{c}_j are correspondingly (n+1)-dimensional.

From the universal approximation property of RBF networks stated above it follows that eq. (3) with an appropriate control sequence a_i is able to approximate any sequence of functions. In this setting, i represents the sample time. The control sequence may be optimized during training [1] or, with lower computational demands, may be chosen appropriately in advance. Selecting a_i to be monotonically increasing with i will often work, as long as the number of training samples is high enough to fix the different network functions. In our investigation we select a_i to be a linearly increasing function of i, as sketched below.
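The following minimal sketch illustrates such a pre-chosen control sequence: a hypothetical helper that appends a linearly increasing a_i to each reconstructed state vector. The function name and the range of a_i are our own assumptions.

    import numpy as np

    def augment_with_control(X, a_min=0.0, a_max=1.0):
        # X: (N, n) reconstructed state vectors ordered by sample time i.
        # Appends a control value a_i that increases linearly with i,
        # giving (N, n+1) network inputs as in eq. (3).
        a = np.linspace(a_min, a_max, len(X))   # a_i linear, increasing in i
        return np.hstack([X, a[:, None]])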

[Figure 1: Input/Output structure of the neural model.]


