Abstract The processing of multisensory information depends on the capacity of brain regions, such as the superior temporal cortex, to combine information across modalities. However, it remains unclear whether the representation of coherent auditory and visual events requires prior audiovisual experience to develop and function. In three fMRI experiments, intersubject correlation analysis measured brain synchronization during the presentation of audiovisual, audio-only, or video-only versions of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. The superior temporal cortex synchronized across auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features and relied on a similar modality-independent topographical organization of temporal dynamics. These findings indicate that the human superior temporal cortex is naturally endowed with a functional scaffolding that yields a common representation across multisensory events.
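As a rough illustration of the analysis named above, the following is a minimal sketch of leave-one-out intersubject correlation (ISC): each subject's regional time course is correlated with the average time course of the remaining subjects, so that stimulus-driven synchronization shows up as a positive correlation. The array shapes, variable names, and the leave-one-out variant are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def leave_one_out_isc(timecourses: np.ndarray) -> np.ndarray:
    """Leave-one-out intersubject correlation (ISC).

    timecourses : array of shape (n_subjects, n_timepoints)
        One regional (e.g., superior temporal cortex) BOLD time
        course per subject, all aligned to the same stimulus.

    Returns an array of shape (n_subjects,): for each subject, the
    Pearson correlation between that subject's time course and the
    mean time course of all remaining subjects.
    """
    n_subjects = timecourses.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        # Average the time courses of everyone except subject s.
        others = np.delete(timecourses, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(timecourses[s], others)[0, 1]
    return isc

# Hypothetical usage: 20 subjects, 300 timepoints of simulated data
# in which a shared stimulus-driven signal is buried in noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)            # stimulus-driven component
noise = rng.standard_normal((20, 300))       # subject-specific noise
data = shared + noise
print(leave_one_out_isc(data).mean())        # positive mean ISC
```

Comparing such ISC values when the reference group received the stimulus in a different modality (e.g., correlating blind listeners with deaf viewers of the same narrative) is one way a cross-modal synchronization analysis of this kind can be framed.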