This page provides basic information about the Reinhart Koselleck grant SCHR 375/20, "Predictive Modelling in Audition", awarded by the German Research Foundation (DFG). The project ran from 10/2009 until 12/2016.
The project aimed at understanding how we make sense of and predict the auditory world. More specifically, we were interested in predictions that come into play at a sensory auditory level and are set up in a rather automatic (non-intentional) manner. Originally, we intended to tap into auditory prediction through two lines of research. One concerns the system that automatically encodes and monitors rules inherent in what we hear; this system is especially suited to derive predictions about forthcoming sounds from currently active regularities in the acoustic environment. The other concerns the system that attenuates self-generated sounds via internal forward modelling in order to distinguish them from externally generated sounds. Beyond pursuing these two traditional lines, we also wanted to challenge them and synthesize a new perspective on the mental processes involved. We took an experimental approach with an emphasis on chronometric, functional aspects (tapped by behavioral measures, event-related potentials, and, to a smaller extent, oscillatory EEG activity), but also considered structural aspects (tapped by functional imaging and patient studies).
This animation shows the processing of a sound during passive listening (right) vs. the processing of a highly anticipated but omitted sound (left). Note that, when a sound is strongly expected, even the initial silence evokes brain activity in auditory areas similar to the activity evoked by a real sound.