dc.contributor.advisor | Frean, Marcus |
dc.contributor.advisor | Zhang, Mengjie |
dc.contributor.author | Chandra, Rohitash |
dc.date.accessioned | 2012-04-11T00:38:50Z |
dc.date.available | 2012-04-11T00:38:50Z |
dc.date.copyright | 2012 |
dc.date.issued | 2012 |
dc.identifier.uri | http://researcharchive.vuw.ac.nz/handle/10063/2110 |
dc.description.abstract |
One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution, a method that decomposes the network's learnable parameters into subsets called subcomponents. Cooperative coevolution has an advantage over other methods in that each subcomponent is evolved independently of the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis proposes new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems.

Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and analyse the nature of the neural network optimisation problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalised to neural networks with more than a single hidden layer.

We then incorporate local search into cooperative neuro-evolution and present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations.

The optimisation process changes during evolution, both in population diversity and in how the variables interact. To address this, we examine adapting the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimisation time, scalability and robustness.

As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems. The proposed methods show improved performance in terms of accuracy and robustness. |
en_NZ |
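To make the decomposition idea in the abstract concrete, the following is a minimal sketch of cooperative coevolution for a one-hidden-layer feedforward network, assuming a neuron-level decomposition in which each hidden neuron's incoming weights, bias and outgoing weight form one subcomponent. The XOR task, network size and evolutionary settings are illustrative assumptions for this sketch, not the methods developed in the thesis.

```python
# Minimal cooperative-coevolution sketch (illustrative only, not the thesis code).
# Decomposition assumption: one subcomponent per hidden neuron, holding its
# input weights, bias and output weight; each subcomponent has its own subpopulation.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4            # hidden neurons = number of subcomponents
GENES = X.shape[1] + 2  # input weights + bias + output weight per neuron
POP, GENS = 20, 200

def forward(subcomponents, x):
    """Assemble the full network from all subcomponents and compute its output."""
    out = 0.0
    for genes in subcomponents:
        w_in, b, w_out = genes[:2], genes[2], genes[3]
        out += w_out * np.tanh(x @ w_in + b)
    return out

def fitness(subcomponents):
    """Negative mean squared error of the assembled network over the training set."""
    preds = np.array([forward(subcomponents, x) for x in X])
    return -np.mean((preds - y) ** 2)

# One subpopulation of candidate gene vectors per subcomponent.
subpops = [rng.normal(0, 1, (POP, GENES)) for _ in range(N_HIDDEN)]
best = [sp[0].copy() for sp in subpops]  # current best individual per subpopulation

for gen in range(GENS):
    # Round-robin: evolve each subcomponent while the others are held fixed
    # at their current best individuals (the cooperative fitness evaluation).
    for i, sp in enumerate(subpops):
        scores = []
        for individual in sp:
            candidate = best.copy()
            candidate[i] = individual
            scores.append(fitness(candidate))
        order = np.argsort(scores)[::-1]
        sp[:] = sp[order]                # sort by cooperative fitness, best first
        best[i] = sp[0].copy()
        half = POP // 2
        # Replace the worst half with mutated copies of the best half.
        sp[half:] = sp[:half] + rng.normal(0, 0.1, (POP - half, GENES))

print("final MSE:", -fitness(best))
```

An individual is scored by assembling the full network from that individual plus the current best individuals of the other subpopulations, which is what makes the subpopulations co-adapt. A memetic variant in the spirit of the abstract would additionally interleave a local refinement step on the best assembled solution, trading its cost against the gain in convergence.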
dc.language.iso | en_NZ |
dc.publisher | Victoria University of Wellington | en_NZ
dc.subject | Neural networks | en_NZ
dc.subject | Cooperative coevolution | en_NZ
dc.subject | Recurrent network | en_NZ
dc.subject | Co-operative co-evolution | en_NZ
dc.title | Problem Decomposition and Adaptation in Cooperative Neuro-Evolution | en_NZ
dc.type | Text | en_NZ
vuwschema.contributor.unit | School of Engineering and Computer Science | en_NZ
vuwschema.subject.marsden | 280212 Neural Networks, Genetic Algorithms and Fuzzy Logic | en_NZ
vuwschema.subject.marsden | 280207 Pattern Recognition | en_NZ
vuwschema.type.vuw | Awarded Doctoral Thesis | en_NZ
thesis.degree.discipline | Computer Science | en_NZ
thesis.degree.grantor | Victoria University of Wellington | en_NZ
thesis.degree.level | Doctoral | en_NZ
thesis.degree.name | Doctor of Philosophy | en_NZ
vuwschema.subject.anzsrcfor | 089999 Information and Computing Sciences not elsewhere classified | en_NZ