Article Catalog (Journal Article Records)

Title:
PARALLEL, SELF-ORGANIZING, HIERARCHICAL NEURAL NETWORKS WITH COMPETITIVE LEARNING AND SAFE REJECTION SCHEMES
Author:
CHO S; ERSOY OK; LEHTO MR;
Addresses:
HONG IK UNIV, DEPT ELECT & CONTROL ENGN, SEOUL, SOUTH KOREA; PURDUE UNIV, SCH ELECT ENGN, W LAFAYETTE, IN 47907; PURDUE UNIV, SCH IND ENGN, W LAFAYETTE, IN 47907
Journal Title:
IEEE transactions on circuits and systems. 2, Analog and digital signal processing
issue: 9, volume: 40, year: 1993,
pages: 556-567
SICI:
1057-7130(1993)40:9<556:PSHNNW>2.0.ZU;2-7
Source:
ISI
Language:
ENG
Document type:
Article
Nature:
Periodical
Subject Area:
Science Citation Index Expanded
Citations:
18
Review:
Reprint addresses:
Citation:
S. Cho et al., "PARALLEL, SELF-ORGANIZING, HIERARCHICAL NEURAL NETWORKS WITH COMPETITIVE LEARNING AND SAFE REJECTION SCHEMES", IEEE transactions on circuits and systems. 2, Analog and digital signal processing, 40(9), 1993, pp. 556-567

Abstract

A new neural network learning algorithm with competitive learning and multiple safe rejection schemes is proposed in the context of parallel, self-organizing, hierarchical neural networks (PSHNN). After reference vectors are computed using competitive learning in a stage of PSHNN, the safe rejection schemes are constructed for reference vectors. The purpose of safe rejection schemes is to reject the input vectors which are hard to classify. The next stage neural network is trained with the nonlinearly transformed values of only those training vectors that were rejected in the previous stage neural network. Two different kinds of safe rejection schemes, RADPN and RAD, are developed and used together. Experimental results comparing the performance of the proposed algorithms with those of backpropagation and the PSHNN with the delta rule learning algorithm are discussed. The proposed learning network produced higher classification accuracy and much faster learning. The classification accuracies of two methods for learning the reference vectors were compared. When the reference vectors are computed separately for each class (Method II), higher classification accuracy was obtained as compared to the method in which the reference vectors are computed together for all the classes (Method I). This conclusion has to do with rejection of hard vectors, and is the opposite of what is normally expected. In addition, Method II has the advantage of parallelism by which the reference vectors for all the classes can be computed simultaneously.
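The staged reject-and-retrain scheme described in the abstract can be illustrated compactly. The following is a minimal Python/NumPy sketch of one PSHNN stage with per-class competitive learning (Method II) and a RAD-style distance threshold for safe rejection; all function names, parameters, and the choice of the maximum within-class distance as the rejection radius are assumptions made for illustration, not the authors' implementation (the RADPN scheme and the nonlinear transform applied to rejected vectors are omitted).

```python
# Minimal sketch (not the authors' implementation) of one PSHNN stage:
# per-class competitive learning (Method II in the abstract) plus a
# RAD-style distance threshold used to "safely reject" hard vectors,
# which would then be handed, after a nonlinear transform, to the next stage.
# All names, parameters, and the radius rule are illustrative assumptions.
import numpy as np

def competitive_learning(X, n_refs=4, lr=0.1, epochs=20, seed=0):
    """Learn reference vectors for one class by winner-take-all updates."""
    rng = np.random.default_rng(seed)
    refs = X[rng.choice(len(X), size=n_refs, replace=False)].astype(float)
    for _ in range(epochs):
        for x in rng.permutation(X):
            winner = np.argmin(np.linalg.norm(refs - x, axis=1))
            refs[winner] += lr * (x - refs[winner])  # move the winner toward x
    return refs

def fit_stage(X, y, n_refs=4):
    """Build reference vectors and a rejection radius separately for each class."""
    stage = {}
    for c in np.unique(y):
        Xc = X[y == c]
        refs = competitive_learning(Xc, n_refs=min(n_refs, len(Xc)))
        # Assumed RAD-like radius: the largest distance from a training vector
        # of this class to its nearest reference vector.
        d_nearest = np.min(np.linalg.norm(Xc[:, None, :] - refs[None, :, :], axis=2), axis=1)
        stage[c] = (refs, d_nearest.max())
    return stage

def classify_or_reject(stage, x):
    """Return the nearest class if x lies inside its safe radius, else None (reject)."""
    best = None  # (distance, class, radius)
    for c, (refs, radius) in stage.items():
        d = np.min(np.linalg.norm(refs - x, axis=1))
        if best is None or d < best[0]:
            best = (d, c, radius)
    d, c, radius = best
    return c if d <= radius else None  # None => pass x on to the next stage
```

In use, fit_stage(X_train, y_train) builds one stage, classify_or_reject(stage, x) either returns a class label or None, and the vectors that return None would form the training set of the next stage, mirroring the cascade the abstract describes.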

ASDD Area Sistemi Dipartimentali e Documentali, Università di Bologna, Catalogue of journals and other periodicals