IJPAM: Volume 2, No. 4 (2002)


Nikos Kofidis$^1$, Manos Roumeliotis$^2$, Miltiadis Adamopoulos$^3$
Dept. of Applied Informatics
Economics and Social Sciences
University of Macedonia
P.O. Box 1591, Egnatia 156
Thessaloniki 54006, GREECE
$^1$e-mail: kofid@uom.gr
$^2$e-mail: manos@uom.gr
$^3$e-mail: miltos@uom.gr

Abstract. This paper investigates the ability of neural structures to model chaotic attractors, as well as the chaotic features of neural networks themselves. The objects of modeling are the logistic and Hénon attractors. A significant improvement in model behavior is achieved when "multiple training" is applied. The improved model consists of a complete set of submodels, that is, networks each simulating a part of the attractor, driven by an LVQ controller. The series of absolute errors produced during the recall phase of the best neural model of each map is subjected to a Lyapunov exponent analysis. The chaotic behavior of the absolute error arising from single-step prediction of the chaotic orbits indicates that neural networks, when fed only with an initial input, are able to reproduce the geometrical object corresponding to the simulated attractor; however, they fail to follow the original chaotic path (orbit) of the attractor. The chaotic features of the training phase are examined by investigating the dominant Lyapunov exponents of the weight series and of the series of input patterns producing the minimum absolute error at the output.

The best-estimation series, consisting of the output values corresponding to the best-learned input patterns, is also examined using the theoretical tool of topological conjugacy.
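As a minimal illustration of the quantities discussed above, the sketch below iterates the two maps named in the abstract and estimates the dominant Lyapunov exponent of the logistic map by averaging log|f'(x)| along an orbit. The parameter values (r = 4 for the logistic map, a = 1.4, b = 0.3 for the Hénon map) are the standard chaotic-regime choices, assumed here rather than taken from the paper; the function names are likewise illustrative.

```python
import math

def logistic(x, r=4.0):
    """Logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1.0 - x)

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map (x, y) -> (1 - a*x^2 + y, b*x)."""
    return 1.0 - a * x * x + y, b * x

def lyapunov_logistic(x0=0.1, r=4.0, n=100_000, transient=1_000):
    """Estimate the dominant Lyapunov exponent of the logistic map
    as the orbit average of log|f'(x_n)|, with f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(n):
        # guard against the (measure-zero) case x == 0.5 exactly
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = logistic(x, r)
    return total / n

def abs_error_series(orbit, predictions):
    """Absolute error of a single-step predictor along an orbit,
    the kind of series the paper submits to Lyapunov analysis."""
    return [abs(o - p) for o, p in zip(orbit, predictions)]
```

For r = 4 the exact dominant exponent is ln 2 ≈ 0.693, so a positive estimate near that value confirms the chaotic regime; a positive exponent for the absolute-error series is what signals the chaotic error behavior described above.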

Received: May 15, 2002

AMS Subject Classification: 65P20

Key Words and Phrases: chaos theory and models, neural networks, nonlinear dynamical systems

Source: International Journal of Pure and Applied Mathematics
ISSN: 1311-8080
Year: 2002
Volume: 2
Issue: 4