Abstract | There is a disconnect in the literature between accounts of how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. Here we propose a model that integrates these two information sources. Specifically, the model uses the global structure of memory to exploit the redundancy between language and perception, generating inferred perceptual representations for words with which the model has no perceptual experience. We test the model on a variety of datasets from grounded cognition experiments and demonstrate that this diverse set of results can be explained as perceptual simulation within a global memory model.
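The inference described above can be sketched informally. The following is a minimal illustration, not the paper's actual model: it infers a perceptual feature vector for an ungrounded word as a linguistic-similarity-weighted average of the perceptual vectors of words the model does have perceptual experience with. The function name, matrix shapes, and the normalized positive-similarity weighting scheme are all assumptions made for illustration.

```python
import numpy as np

def infer_perceptual_vectors(ling_sim, perceptual, grounded_mask):
    """Illustrative sketch: fill in perceptual vectors for ungrounded words.

    ling_sim:      (n_words, n_words) linguistic similarity matrix,
                   e.g. cosine similarities from a distributional model.
    perceptual:    (n_words, n_features) perceptual feature matrix;
                   rows for ungrounded words may be zero.
    grounded_mask: boolean (n_words,), True where perceptual
                   experience exists.

    Returns a copy of `perceptual` in which each ungrounded row is
    replaced by a similarity-weighted average of the grounded rows.
    """
    out = perceptual.copy()
    for i in np.flatnonzero(~grounded_mask):
        w = ling_sim[i, grounded_mask]   # similarity to each grounded word
        w = np.clip(w, 0.0, None)        # ignore negative similarities
        if w.sum() == 0.0:
            continue                     # no linguistic basis for inference
        out[i] = w @ perceptual[grounded_mask] / w.sum()
    return out
```

Under this sketch, a word never experienced perceptually inherits a blend of the perceptual features of its linguistically similar neighbors, which is one way the redundancy between language and perception could be exploited by a global memory mechanism.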