Adaptive Resonance Theory Training Parameters: Pretty Good Sets
Abstract
Problem statement: ART1 artificial neural networks offer good tools for text clustering, where no expert is needed if the system is well trained. However, the absence of an output reference for the input patterns makes it hard to judge the quality of the training. Moreover, performance depends to a great extent on a set of training parameters. Designers follow general recommendations or rely on their expertise to find good sets, with no performance guarantees. Many methods have been proposed, from greedy methods offering quick and acceptable solutions to evolutionary algorithms offering near-optimal sets of parameters. While evolutionary algorithms are a good choice for quality, their computational cost is large even for an offline process; after all, computing resources are not free. Approach: We introduced a method that selects a set of parameters yielding comparable performance and robust operation at relatively low cost compared with the evolutionary methods. The method locates a suitable set through repetitive partitioning of the parameter range, keeping the best subrange for the next iteration. Results: Tests showed that performance comparable to that of the computationally intensive evolutionary methods could be achieved in much less time. Conclusion: The repetitive partitioning method for finding a good set of training parameters is very cost effective and yields good performance.
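The repetitive partitioning idea can be illustrated with a short sketch. The snippet below is a minimal illustration of the search strategy as described in the abstract, not the authors' implementation: it assumes a single ART1 parameter (the vigilance, searched over [0, 1]), a hypothetical scoring function `evaluate_clustering`, and arbitrary choices for the number of subranges and iterations.

```python
def evaluate_clustering(rho):
    """Hypothetical fitness: train an ART1 network with vigilance `rho`
    on the input patterns and return a clustering-quality score
    (higher is better). A toy single-peak score stands in here."""
    return -(rho - 0.73) ** 2


def repetitive_partitioning(score, lo=0.0, hi=1.0, parts=4, iters=8):
    """Repeatedly split [lo, hi] into `parts` equal subranges, score the
    midpoint of each, and keep the best subrange for the next iteration."""
    for _ in range(iters):
        width = (hi - lo) / parts
        # Midpoint of each subrange in the current search interval.
        mids = [lo + width * (i + 0.5) for i in range(parts)]
        best = max(mids, key=score)
        # Narrow the search to the best-scoring subrange.
        lo, hi = best - width / 2, best + width / 2
    return (lo + hi) / 2


if __name__ == "__main__":
    rho = repetitive_partitioning(evaluate_clustering)
    print(f"selected vigilance: {rho:.4f}")
```

Each iteration narrows the search to the best-scoring subrange, so after `iters` steps the interval has shrunk by a factor of `parts ** iters` at the cost of only `parts * iters` evaluations; this is the source of the cost advantage over population-based evolutionary search.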
DOI: https://doi.org/10.3844/jcssp.2010.1443.1449
Copyright: © 2010 Taisir Mohammad Eldos and Abdulaziz Suleiman Almazyad. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Adaptive resonance theory
- Pretty good set
- Artificial neural network
- Optimization