Article Catalogue (Journal Article Index)


Title:
Distributed representations for extended syntactic transformation
Authors:
Niklasson, L.; Linaker, F.
Addresses:
Univ Skovde, S-54128 Skovde, Sweden
Journal Title:
CONNECTION SCIENCE
issue: 3-4, volume: 12, year: 2000,
pages: 299-314
SICI:
0954-0091(200012)12:3-4<299:DRFEST>2.0.ZU;2-4
Source:
ISI
Language:
ENG
Subject:
CONNECTIONISM; SYSTEMATICITY;
Keywords:
constituent similarity; recursive auto-associative memory; systematicity; generalization; syntactic transformation;
Document Type:
Article
Material Type:
Periodical
Subject Area:
Engineering, Computing & Technology
Citations:
38
Review:
Reprint Address:
Address: Niklasson, L, Univ Skovde, POB 408, S-54128 Skovde, Sweden
Citation:
L. Niklasson and F. Linaker, "Distributed representations for extended syntactic transformation", CONNECT SCI, 12(3-4), 2000, pp. 299-314

Abstract

This paper shows how the choice of representation substantially affects the generalization performance of connectionist networks. The starting point is Chalmers' simulations involving structure-sensitive processing. Chalmers argued that a connectionist network could handle structure-sensitive processing without the use of syntactically structured representations. He trained a connectionist architecture to encode/decode distributed representations for simple sentences. These distributed representations were then holistically transformed such that active sentences were transformed into their passive counterparts. However, he noted that the recursive auto-associative memory (RAAM), which was used to encode and decode distributed representations for the structures, exhibited only a limited ability to generalize when trained to encode/decode a randomly selected sample of the total corpus. When the RAAM was trained to encode/decode all sentences, and a separate transformation network was trained to make some active-passive transformations of the RAAM-encoded sentences, the transformation network demonstrated perfect generalization on the remaining test sentences. It is argued here that the main reason for the limited generalization is not the RAAM architecture per se, but the choice of representation for the tokens used. This paper shows that 100% generalization can be achieved for Chalmers' original set-up (i.e. using only 30% of the total corpus for training). The key to this success is to use distributed representations for the tokens, capturing different characteristics for different classes of tokens (e.g. verbs or nouns).
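As a rough illustration of the mechanisms the abstract describes, the Python/NumPy sketch below shows the two ingredients: distributed token representations whose features encode class membership (noun/verb) alongside token identity, and a RAAM-style encoder/decoder that compresses two constituent vectors into one fixed-width parent vector. It is not the authors' code: the token coding scheme is invented for illustration, and the weights are random rather than trained, so it shows only the data flow of the architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    # (1) Distributed token representations: two shared class features plus
    # identity bits. This particular coding is invented for illustration and
    # is not the scheme used by Niklasson and Linaker.
    NOUN, VERB = [1, 0], [0, 1]
    tokens = {
        "john":    NOUN + [1, 0, 0, 0],
        "michael": NOUN + [0, 1, 0, 0],
        "love":    VERB + [0, 0, 1, 0],
        "chase":   VERB + [0, 0, 0, 1],
    }
    DIM = 6  # width of one token vector and of every RAAM representation

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # (2) One RAAM layer: the encoder compresses two child vectors into a
    # parent vector of the same width; the decoder attempts to reconstruct
    # the children. Real RAAM weights are learned by auto-associative
    # backpropagation; here they are random and untrained.
    W_enc = rng.normal(scale=0.5, size=(DIM, 2 * DIM))
    W_dec = rng.normal(scale=0.5, size=(2 * DIM, DIM))

    def encode(left, right):
        return sigmoid(W_enc @ np.concatenate([left, right]))

    def decode(parent):
        out = sigmoid(W_dec @ parent)
        return out[:DIM], out[DIM:]

    # Encode "john love michael" as ((john love) michael): the sentence
    # becomes one fixed-width vector, which a separate transformation
    # network could map holistically onto its passive counterpart.
    john, love, michael = (np.array(tokens[t], float)
                           for t in ("john", "love", "michael"))
    sentence = encode(encode(john, love), michael)
    left, right = decode(sentence)
    print(sentence.round(2))

Because all nouns share the NOUN features (and likewise all verbs), a sentence containing a token unseen during training still presents familiar class structure to the network; this constituent similarity is the abstract's explanation for why class-capturing token representations restore generalization even when only 30% of the corpus is used for training.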

ASDD Departmental and Documentary Systems Area, Università di Bologna, Catalogue of journals and other periodicals