We study the cognitive and neural processes that derive meaning from speech and text. The long-term goals are to specify the algorithms by which linguistic input is converted into mental representations of sentence meanings (such as propositions) and word meanings (such as concepts), and to characterize the neural circuits that implement these algorithms. We use a range of tools, including EEG, fMRI, MEG, and behavioral data, with an emphasis on natural or “every-day” language comprehension.

Many of the topics we think about in the lab are discussed in a conversation between Jon and Brain Inspired podcast host Paul Middlebrooks; go ahead and listen!


Currently, work in the lab is moving forward on three fronts:

The syntactic and semantic structures that are constructed incrementally during comprehension


Tree complexity based on a minimalist grammar modulates activation in the left anterior temporal lobe (middle pair) to a degree that is significantly larger than effects due to a simpler context-free syntactic representation

To combine linguistic insights with neural data, we need to specify the mapping between one domain and the other. We do so by building a computational model of the algorithm that describes how linguistic knowledge (like the rules of grammar) is used in real time. When such a model is built to make predictions about brain data, we call it a neuro-computational model.

The left-hand side of the graphic at the top of this page shows an example of how word-by-word cognitive states can be turned into estimates for neural signals that are recorded while simply listening to a story.
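
As an illustrative sketch only (not the lab's actual pipeline), the mapping from word-by-word cognitive states to a predicted fMRI signal can be approximated by placing an impulse at each word's onset, scaling it by a per-word complexity metric, and convolving with a haemodynamic response function. The word times, metric values, and single-gamma HRF below are all toy assumptions:

```python
import numpy as np
from math import gamma

# Hypothetical per-word complexity metric (e.g., node count or surprisal),
# with the time (s) at which each word occurs in the story -- toy values
word_times = np.array([0.4, 0.9, 1.5, 2.1, 2.8])
complexity = np.array([2.0, 1.0, 3.0, 1.0, 4.0])

TR = 2.0         # fMRI sampling interval (s)
dt = 0.1         # fine-grained time step for the convolution
duration = 20.0  # length of the (toy) scan in seconds

def hrf(t, shape=6.0):
    """Single-gamma haemodynamic response function (a simplification)."""
    return np.where(t > 0, t**shape * np.exp(-t) / gamma(shape + 1), 0.0)

# Impulse train: each word contributes an impulse scaled by its metric
t = np.arange(0, duration, dt)
impulses = np.zeros_like(t)
for w_t, c in zip(word_times, complexity):
    impulses[int(round(w_t / dt))] += c

# Convolve with the HRF, then downsample to the scanner's TR
predicted_bold = np.convolve(impulses, hrf(t), mode="full")[: len(t)]
samples = predicted_bold[:: int(round(TR / dt))]
```

The `samples` vector is what would be entered, alongside nuisance regressors, into a regression against the recorded BOLD time course.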

In collaboration with John Hale and others, we use this approach to test how syntactic representations are constructed incrementally by correlating the predictions of a range of computational models with fMRI and EEG data recorded while participants passively listen to a story.

Against these data we test models that incorporate a range of syntactic proposals (e.g., Markov models, context-free grammars, textbook-style Minimalist grammars) and various ways of quantifying processing cost (e.g., tree complexity, surprisal).
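
To make one of these cost metrics concrete, here is a minimal sketch, under toy assumptions, of word-by-word surprisal from a first-order Markov (bigram) model; the tiny corpus and test sentence are invented for illustration:

```python
import math
from collections import Counter

# Toy corpus: a hypothetical stand-in for real training data
corpus = "the dog chased the cat the cat saw the dog".split()

# Count bigrams and the contexts they occur in
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev) under the bigram model."""
    p = bigrams[(prev, word)] / contexts[prev]
    return -math.log2(p)

# Surprisal of each word given its predecessor (the first word is skipped);
# these per-word costs are the kind of predictor regressed against brain data
sentence = ["the", "dog", "chased", "the", "cat"]
costs = [surprisal(p, w) for p, w in zip(sentence, sentence[1:])]
```

Richer grammars (context-free or Minimalist) change how the conditional probabilities are computed, but the word-by-word cost profile plays the same predictive role.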

Expectations based on syntactic hierarchy modulate evoked brain responses starting around 200 ms after word onset above-and-beyond expectations based on word-sequence information alone.

A sketch of some results is shown in the two figures. A 2012 paper introduces the basic approach, and a 2016 review paper covers the broader landscape. Results comparing different models with fMRI, MEG, and EEG are reported in several recent papers. Here’s John discussing some of our collaborative work at the 2018 ACL conference in Melbourne:


The neural circuits that map auditory signals to word meanings

Some of the research above serves a second purpose by mapping out the neural substrates of sentence comprehension. Previous studies have described the function of the left anterior temporal cortex in vague terms such as “sentence-level combinatorics” or “basic syntactic processes”. Our work with neuro-computational models replaces these qualitative descriptions with a quantitatively precise functional hypothesis: this region carries out operations that are well described by a class of predictive parsing algorithms defined over context-free or mildly context-sensitive grammars.

We also study neural signals that reflect spoken word recognition, with a particular focus on timing: what factors affect when the brain (begins to) identify what word someone is saying? Prior research shows that word recognition is incremental – it begins before a word has been completed – and is sensitive to many aspects of context. With Dave Embick and others, we are currently zooming in on how properties of the context and properties of the stimulus together affect the speed of the neural response to words. One ongoing study, for example, examines whether word onsets that are more informative about word identity lead to more rapid brain responses. This research is guided by computational models of speech recognition that estimate how information about word identity changes segment-by-segment.
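
The segment-by-segment logic can be illustrated with a cohort-style sketch: each incoming segment narrows the set of lexical candidates, and the resulting drop in entropy indexes how informative that segment is about word identity. The mini-lexicon and frequencies below are invented for illustration, and letters stand in for phonemes:

```python
import math

# Hypothetical mini-lexicon with relative frequencies (toy values)
lexicon = {"cat": 40, "cap": 30, "can": 20, "dog": 10}

def cohort_entropy(prefix):
    """Entropy (bits) over the words still consistent with the input so far."""
    cohort = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
    total = sum(cohort.values())
    return -sum((f / total) * math.log2(f / total) for f in cohort.values())

# Entropy after each successive segment of the word, and the information
# gained at each step relative to the previous state of the cohort
word = "cat"
entropies = [cohort_entropy(word[: i + 1]) for i in range(len(word))]
info_gain = [a - b for a, b in zip([cohort_entropy("")] + entropies, entropies)]
```

In this toy example the second segment (“a”) eliminates no candidates, so it carries no information, while the first and third segments do the work; real models compute analogous quantities over a full lexicon and phonemic transcriptions.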

In addition to familiar neural signals, like the N400 measured in ERP studies, we also examine neural oscillations, which have been linked closely with neural mechanisms in the early stages of speech recognition. The figure below, from our 2014 paper, provides an example. Related work has been presented in a conference poster [pdf].


Time-frequency response from the left auditory cortex to non-words or to words that are unrelated or related to a preceding stimulus (from Brennan et al., 2014)

Linking pathologies in word and sentence comprehension with atypical neural patterning

The video below is a short 10-minute presentation summarizing some of our ongoing work studying sentence processing and brain mechanisms in Autism Spectrum Disorders.

This 2018 paper reports correlations between neural coherence and several aspects of ASD symptomatology, and this other 2018 paper reports results concerning syntactic predictions.