CNS 2022: Come Chat with Us!

We had a wonderful time presenting at the Cognitive Neuroscience Society (CNS) 2022 Annual Meeting in sunny San Francisco, California. Zhewei (“Cody”), Carli, and recently graduated Dr. Karthik each gave a poster presentation. Cody also gave a data blitz talk and earned a $10 award for keeping his slides within the allotted time limit.

Let us know if you would like to further engage with our work! #CNS2022


Carli with her poster, titled: "A" is lobster red: Intracranial EEG recordings show fast synesthetic conflict in the insula

Carli presenting her poster

Publication: Vision perceptually restores auditory spectral dynamics in speech


In this article, we demonstrate that the human perceptual system exploits the natural correlation between mouth shape and auditory signal frequency to facilitate speech perception. Due to the acoustic physics of the oral cavity, changes in mouth shape (i.e., making it wider or narrower) can be used to predict the frequencies of auditory signals (oral resonances; “formants”) during speech. For example, when the lips are protruded (visually narrow), the oral cavity is elongated, producing a lower resonant frequency, like extending the slide on a trombone. By contrast, when the lips are retracted (visually wide), a higher resonant frequency is produced. In our study, participants were able to use visual speech cues to perceptually recover auditory frequency cues that had been digitally degraded. This process appeared to occur automatically, even when participants did not recognize the audiovisual cues as being speech-related. Altogether, our results suggest that the perceptual system uses natural correlations between midlevel visual features (oral deformations) and auditory speech features (frequency modulations) to facilitate speech perception.
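The trombone analogy above can be sketched numerically. The following is a minimal illustration (not the paper's model) using the textbook quarter-wave approximation of the vocal tract as a uniform tube closed at one end, whose first resonance is f1 = c / (4L); the tract lengths and speed of sound are illustrative round numbers, not measurements from the study:

```python
# Quarter-wave resonator sketch: a tube closed at one end resonates
# at f1 = c / (4 * L), so a longer tube yields a lower frequency.

SPEED_OF_SOUND = 350.0  # m/s, approximate value in warm, humid air

def first_formant_hz(tract_length_m: float) -> float:
    """First resonance (Hz) of a uniform tube closed at one end."""
    return SPEED_OF_SOUND / (4.0 * tract_length_m)

# A typical adult vocal tract (~17.5 cm) resonates near 500 Hz.
neutral = first_formant_hz(0.175)    # 500.0 Hz

# Protruding the lips lengthens the tract, lowering the resonance,
# like extending a trombone slide; retracting the lips shortens it,
# raising the resonance (lengths here are hypothetical).
protruded = first_formant_hz(0.185)  # lower than neutral
retracted = first_formant_hz(0.165)  # higher than neutral
```

This toy model captures only the direction of the effect described in the article: visually narrow (protruded) lips predict lower formant frequencies, visually wide (retracted) lips predict higher ones.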