Sound frequency, duration, and intensity are key elements carrying prosodic and musical information. Our aims were to compare the neurocognition of speech and music sounds and to determine the relative importance of these acoustic features.
The subjects were presented with pseudowords and music sound patterns of matched duration, intensity, and fundamental frequency. The rarely occurring "deviants" were pseudowords and music patterns with acoustically matched changes in pitch, duration, and intensity; in addition, all additive combinations of these changes were included (ERP study). The subjects watched a silent video (ERP: Ignore), performed a target detection task (ERP: Attend), or categorized the sounds as speech or music (fMRI).
The change-specific MMN component was modulated by the deviant sound parameter, the number of deviant parameters, and the sound type (music vs. speech). Moreover, the N2b was influenced by the subjects' musical expertise, the deviant sound parameter, and, marginally, the sound type. To some extent, fMRI dissociated the brain areas involved in speech vs. music processing. Taken together, these results support the existence of at least partially modular music- and speech-specific systems in the neurocognition of auditory information.