Symposium: Deviance-detection across modalities
Thursday, Sep 10, 2015
The neural network underlying automatic visual change detection
1Translational Neuromodeling Unit, University of Zurich & ETH Zurich, Zurich, Switzerland
2Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland
Perceiving an object as a whole requires combining its different features, encoded by different neural populations and brain areas, into a unified representation. We used a multi-feature visual ‘roving standard’ paradigm to elicit mismatch responses by rare changes in 1) the color (red, green) or 2) the emotional expression (happy, fearful) of human faces, or 3) both. Importantly, this allowed us to study brain responses to physically identical stimuli violating regularity in color and in emotion separately. fMRI data were acquired on a Philips 3 Tesla scanner from 34 participants. We used a novel model of Bayesian learning, the Hierarchical Gaussian Filter (HGF), to generate fMRI regressors parametrically modulated by predictions and prediction errors (PEs). A general linear model (GLM) with these parametric modulators (prediction and PE) was estimated for each participant. Finally, as a second-level statistic, we used F-tests to identify regions whose response was significantly modulated by either prediction or PE. We found visual and other areas whose activity was related to the model-based prediction and PE parameters. Our results suggest that automatic visual perceptual predictions are generated in several areas, including feature-specific cortical sites; some of the prediction-generating cortical areas lie upstream of the PE-generating areas, consistent with the hierarchical predictive coding hypothesis. Our results indicate that PEs and predictions related to different features of complex objects are probably generated in complex hierarchical networks comprising several structures.
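To make the modeling step concrete, the core idea behind HGF-style regressors can be sketched as a Gaussian belief update in which a prediction error is weighted by its relative precision; the trial-wise predictions and PEs then serve as parametric modulators in the GLM. This is a minimal illustrative sketch, not the authors' implementation: the function name, the single-level update, and the toy stimulus sequence are all assumptions for illustration.

```python
def gaussian_belief_update(mu, pi_belief, x, pi_input):
    """One precision-weighted belief update (Kalman-style, as in one HGF level).

    mu, pi_belief : current belief mean and precision (hypothetical names)
    x, pi_input   : observed input and its precision
    """
    delta = x - mu                              # prediction error (PE)
    pi_new = pi_belief + pi_input               # posterior precision
    mu_new = mu + (pi_input / pi_new) * delta   # PE weighted by relative precision
    return mu_new, pi_new, delta

# Toy sequence standing in for a trial-wise feature regularity (assumed, not real data).
mu, pi = 0.0, 1.0
predictions, pes = [], []
for x in [1.0, 1.0, 0.0, 1.0]:
    predictions.append(mu)                      # prediction before seeing the trial
    mu, pi, delta = gaussian_belief_update(mu, pi, x, pi_input=1.0)
    pes.append(delta)                           # trial-wise PE

# `predictions` and `pes` would be convolved with a hemodynamic response
# function and entered as parametric modulators in the first-level GLM.
```

Note how the learning rate (pi_input / pi_new) shrinks as the belief's precision grows, so early deviants produce larger PEs than later ones, which is the property that makes such regressors informative about mismatch responses.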