Category Archives: data

Data Sharing: fMRI Whole-brain datasets from “Alice in Wonderland”

Shohini Bhattasali has led the tremendous effort to make available the full whole-brain fMRI datasets recorded as part of the Alice in Wonderland project, a collaboration between our lab and John Hale's lab (Cornell, U Georgia). Raw and preprocessed recordings from 29 participants are available alongside code and stimuli at the OpenNeuro repository:

https://openneuro.org/datasets/ds002322/versions/1.0.3
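If you'd rather script the download than use the web interface, here is a minimal sketch using the openneuro-py package (my choice for illustration; DataLad or the browser work just as well, and the tag pins the version linked above):

```python
# Minimal sketch: fetching the dataset programmatically.
# Assumes the openneuro-py package is installed (pip install openneuro-py);
# DataLad or the OpenNeuro web interface are alternatives.
import openneuro

# Download raw and preprocessed files into a local folder,
# pinned to the snapshot version linked in this post.
openneuro.download(dataset="ds002322", tag="1.0.3", target_dir="ds002322")
```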

The dataset is described in an accompanying publication.


Data Sharing: EEG Datasets for Naturalistic Listening to “Alice in Wonderland”

We’re very pleased to release the raw EEG data, pre-processing parameters, and stimulus details for our EEG story-listening experiment. The data comprise 49 human electroencephalography (EEG) datasets collected in our lab. The data were recorded with 61 active electrodes and a Brain Products actiCHamp amplifier at 500 Hz (0.1 to 200 Hz band). Participants listened passively to a 12.4-minute audiobook recording of the first chapter of Alice’s Adventures in Wonderland (librivox.org), after which they completed a short 8-question comprehension questionnaire. The raw data are stored as MATLAB data structures created by the FieldTrip toolbox (version 20170322, available at http://fieldtriptoolbox.org/).
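The structures load directly in MATLAB with FieldTrip on the path; if you work in Python instead, a sketch along these lines should get you to the raw samples. The file and variable names below are placeholders, and the field names follow FieldTrip's standard raw-data layout:

```python
# Sketch: reading a FieldTrip raw-data structure into Python.
# Assumes scipy is installed. "S01.mat" and the variable name "raw"
# are placeholders; check the actual names in the released files.
# (Files saved with MATLAB's -v7.3 flag would need h5py/pymatreader instead.)
from scipy.io import loadmat

ft = loadmat("S01.mat", squeeze_me=True, struct_as_record=False)
raw = ft["raw"]  # placeholder variable name

print(raw.fsample)       # sampling rate, 500 Hz here
print(list(raw.label))   # the 61 electrode labels
data = raw.trial         # channels x samples array(s)
times = raw.time         # matching time axis in seconds
```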

These data are used in the soon-to-appear publication:

Brennan, J. R., & Hale, J. T. (To appear). Hierarchical structure guides rapid linguistic predictions during naturalistic listening. PLoS ONE.

These are the same data used for the analysis in our 2018 ACL paper.

The data and other information are released under the Creative Commons Attribution 4.0 License (CC BY 4.0). That means that anyone can use these data to replicate, remix, or build on our work, as long as they give appropriate credit. So have fun and let us know what you find!

Here is the recommended citation for the data:

Brennan, J.R. (2018). EEG Datasets for Naturalistic Listening to “Alice in Wonderland” [Data set]. University of Michigan Deep Blue Data Repository.
https://doi.org/10.7302/Z29C6VNH


ROIs in our 2016 Brain and Language paper: A supplement to the supplement

Our Abstract linguistic structure paper, published in Brain & Language in 2016, featured an ROI analysis based on individual-subject peaks. We reported the peak locations in MNI coordinates in the supplementary material. But I’ve recently been thinking a bit more about spatial sub-divisions of the left anterior and left posterior temporal lobes, especially as they might relate to semantic and/or syntactic composition. If you’re like me, you can’t just read MNI coordinates and recognize “Oh, those ATL ROIs are all clustering along the middle temporal gyrus” or whatever. So I made a visualization of all of the individual-subject peaks that were used to define the ROIs in that paper. The figure shows, in part, that the peaks are relatively evenly clustered across the macro-anatomical ROIs that we defined.

I really wish we had included something like this in the original paper. I suppose the next best thing is to simply share the figure here!

Note: Each MNI coordinate from Table S1 of the supplementary material is shown in relation to the fsaverage pial surfaces distributed with FreeSurfer version 6.0.
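If you want to make a quick version of this kind of figure for your own coordinates, here is a minimal sketch using nilearn's glass-brain marker plot. This is an alternative route, not how the figure above was made (that used FreeSurfer's fsaverage surfaces), and the coordinates below are made-up placeholders, not the Table S1 values:

```python
# Sketch: plotting MNI peak coordinates as glass-brain markers.
# Assumes nilearn and matplotlib are installed; an alternative to
# surface rendering with FreeSurfer/fsaverage, which the post's figure used.
from nilearn import plotting

# Placeholder coordinates for illustration only -- substitute the
# actual MNI peaks from Table S1 of the supplementary material.
coords = [(-54, 10, -16), (-58, -44, 4)]

display = plotting.plot_markers(
    node_values=[1, 2],   # e.g., color-code peaks by ROI
    node_coords=coords,
    node_size=50,
)
display.savefig("roi_peaks.png")
plotting.show()
```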