This page provides some basic information about the Reinhart Koselleck grant SCHR 375/20 "Predictive Modelling in Audition" awarded by the German Research Foundation (DFG). The project started in August 2009. This page mirrors the summary of the interim report dated September 2012.
The project aimed at researching how we make sense of and predict the auditory world. More specifically, we were interested in predictions that come into play at a sensory auditory level and that are set up in a rather automatic (non-intentional) manner. We intended to conduct our research along two lines that presumably tap into auditory prediction. One line is concerned with the system that automatically encodes and monitors rules inherent in what we hear. This system is especially suited to deriving predictions about forthcoming sounds from currently active regularities in the acoustic environment. The other line is concerned with the system that attenuates self-generated sounds via internal forward modelling (quite similar to the classical "re-afference" principle) in order to distinguish self-generated from externally generated sounds. We also wanted to challenge these two traditional lines and to synthesize a new perspective on the mental processes involved. We intended to take an experimental approach with an emphasis on chronometric functional aspects (tapped by behavioral measures, event-related potentials, and oscillatory EEG activity) while also considering structural aspects (tapped by functional imaging). In the following, we briefly describe what we actually did.
This animation shows the processing of a sound while listening passively (right) vs. the processing of a highly anticipated but omitted sound (left).