Symposium: Human language mechanisms as revealed by the MMN
Friday, Sep 11, 2015
Hörsaal 3

Automatic neural discrimination of lexical information in unattended visually presented words

Yury Shtyrov

Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark

Previous MMN studies using auditory presentation of spoken words have established that the brain is capable of automatic lexical processing even in the absence of attention to the linguistic input. This automaticity is attributed to the robustness of strongly connected, word-specific neural memory circuits that activate irrespective of the task or attention level. Such an account predicts automatic activation of memory traces upon any presentation of words, irrespective of the presentation mode. However, neurolinguistic experiments in the visual modality have not been able to explore this phenomenon, as they usually present stimuli (even if masked) in the focus of attention. Our recent experiments investigated the putative automatic processing of unattended lexical stimuli in the visual modality, across different languages. Matched words and pseudowords were presented to volunteers outside the focus of attention while they were engaged in a non-linguistic visual dual task of detecting colour combinations in the centre of their visual field. Event-related EEG and MEG responses revealed a complex time course of brain activation dynamics underpinning lexical processing. Differential processing of words and pseudowords started early, from ~100 ms, and continued over an extended period of several hundred milliseconds. This was found in MMN designs for both standard and deviant orthographic stimuli, as well as in non-oddball presentation. Furthermore, similar to earlier auditory findings, automatic visual responses could track the rapid formation of new memory traces for novel words. This body of results suggests that automatic neural processing of linguistic information is a supra-modal mechanism shared by the visual and auditory modalities.