Distributed representations for human inference

Submitted by Style Pass, 2021-07-31 18:00:08

Neural representations can be characterized as falling along a continuum, from distributed representations, in which neurons are responsive to many related features of the environment, to localist representations, where neurons orthogonalize activity patterns despite any input similarity. Distributed representations support powerful learning in neural network models and have been posited to exist throughout the brain, but it is unclear under what conditions humans acquire these representations and what computational advantages they may confer. In a series of behavioral experiments, we present evidence that interleaved exposure to new information facilitates the rapid formation of distributed representations in humans. As in neural network models with distributed representations, interleaved learning supports fast and automatic recognition of item relatedness, affords efficient generalization, and is especially critical for inference when learning requires statistical integration of noisy information over time. We use the data to adjudicate between several existing computational models of human memory and inference. The results demonstrate the power of interleaved learning and implicate the use of distributed representations in human inference.
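The abstract's contrast between the two coding schemes can be made concrete with a toy sketch. In a localist code, related items are assigned orthogonal (e.g. one-hot) patterns, so their similarity is invisible in the activity; in a distributed code, items are patterns over shared feature units, so overlap directly signals relatedness. The items and feature vectors below are invented for illustration and are not taken from the study:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two activity patterns."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Localist coding: each item gets its own dedicated unit, so related
# items ("robin" and "sparrow") share no activity despite their
# similarity in the world.
localist = {
    "robin":   np.array([1.0, 0.0, 0.0, 0.0]),
    "sparrow": np.array([0.0, 1.0, 0.0, 0.0]),
    "truck":   np.array([0.0, 0.0, 1.0, 0.0]),
}

# Distributed coding: items are patterns over shared feature units
# (e.g. has-wings, flies, sings, has-wheels), so related items overlap.
distributed = {
    "robin":   np.array([1.0, 1.0, 1.0, 0.0]),
    "sparrow": np.array([1.0, 1.0, 0.8, 0.0]),
    "truck":   np.array([0.0, 0.0, 0.0, 1.0]),
}

# Localist: related items are orthogonal (similarity 0).
# Distributed: overlap exposes relatedness, supporting generalization.
print(cosine(localist["robin"], localist["sparrow"]))
print(cosine(distributed["robin"], distributed["sparrow"]))
print(cosine(distributed["robin"], distributed["truck"]))
```

In the localist scheme the robin–sparrow similarity is exactly zero, while in the distributed scheme it is near one and the robin–truck similarity remains zero, which is the sense in which distributed representations "support fast and automatic recognition of item relatedness."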