Copyright 2003 Cell Press.

Neuron, Vol. 38, 487-497, May 8, 2003

Spatiotemporal Dynamics of Modality-Specific and Supramodal Word Processing

Ksenija Marinkovic*1, Rupali P. Dhond1,2, Anders M. Dale1, Maureen Glessner1, Valerie Carr1, and Eric Halgren1,3

1Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA

2Department of Radiology, University of Utah, Salt Lake City, UT 84108 USA

3INSERM E9926, Marseilles, France

 

*Correspondence: xenia@nmr.mgh.harvard.edu

The ability of written and spoken words to access the same semantic meaning provides a test case for the multimodal convergence of information from sensory to associative areas. Using anatomically constrained magnetoencephalography (aMEG), the present study investigated the stages of word comprehension in real time in the auditory and visual modalities as subjects performed a semantic judgment task. Activity spread from the primary sensory areas along the respective ventral processing streams and converged in anterior temporal and inferior prefrontal regions, primarily on the left, at around 400 ms. Comparison of response patterns during repetition priming between the two modalities suggests that priming effects are initiated in modality-specific memory systems but are eventually elaborated mainly in supramodal areas.

Supplemental Data for:

Marinkovic et al., Neuron 38, pp. 487-497

(see the two "brain movies": reading and listening to words)

Group-average (n = 9) frames of dynamic statistical parametric maps of estimated responses to novel words presented in either the auditory or the visual modality. The anatomically constrained MEG approach constrains a minimum norm inverse solution to each subject's cortical surface, reconstructed from high-resolution T1-weighted MRI scans. Activity is estimated at each cortical location every 5 ms and is noise-normalized. Inverse solutions for the MEG signals from all subjects are averaged and displayed on a canonical inflated cortical surface so that estimated activity within the sulci remains visible. The resulting 'brain movies' are a series of frames of statistical parametric maps of activity at each time point and illustrate the overall activity evoked by spoken and written words. In both modalities, activity spreads from the primary sensory areas along the respective ventral streams and converges in anterior temporal and inferior prefrontal regions, primarily on the left, at around 400 ms. This evidence suggests that semantic and contextual elaboration may rely primarily on contributions from distributed supramodal areas.
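For readers unfamiliar with the noise-normalization step mentioned above, the following is a minimal sketch of the standard noise-normalized minimum norm (dSPM) formulation that such estimates are typically based on; the symbols (A, R, C, lambda) are introduced here for illustration only and are not defined in the paper itself.

\[
\mathbf{y}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}(t), \qquad
\mathbf{W} = \mathbf{R}\mathbf{A}^{\top}\!\left(\mathbf{A}\mathbf{R}\mathbf{A}^{\top} + \lambda^{2}\mathbf{C}\right)^{-1}
\]

\[
\hat{\mathbf{s}}(t) = \mathbf{W}\,\mathbf{y}(t), \qquad
z_{i}(t) = \frac{\left(\mathbf{W}\,\mathbf{y}(t)\right)_{i}}{\sqrt{\left(\mathbf{W}\mathbf{C}\mathbf{W}^{\top}\right)_{ii}}}
\]

Here y(t) is the vector of MEG sensor measurements, A the gain (forward) matrix for sources constrained to the reconstructed cortical surface, R the source covariance prior, C the noise covariance estimated from baseline data, and lambda a regularization parameter. Dividing the minimum norm estimate at each cortical location i by its projected noise standard deviation yields a statistic that is approximately z-distributed under the noise-only hypothesis, which is what makes the resulting maps comparable across cortical locations and suitable for averaging across subjects.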