
Vicarious body maps bridge vision and touch in the human brain


Participants and stimuli

fMRI data were taken from 174 participants of the HCP movie-watching dataset51. The sample consisted of 104 female and 70 male individuals (mean age 29.3 years, s.d. = 3.3) born in Missouri, USA. In total, 88.5% of the sample identified as ‘white’ (4.0% Asian, Hawaiian or Other Pacific Islander; 6.3% Black or African American; 1.1% unreported). The English language comprehension ability of the sample (as assessed by age-adjusted NIH Picture Vocabulary Test52 scores) was above the national average of 100 (mean = 110, s.d. = 15). The participants were scanned while watching short (1 to 4.3 min in length) independent and Hollywood film clips that were concatenated into four videos of 11.9–13.7 min total length. Before each clip, and after the final clip, there were 20 s periods with no auditory stimulation and only the word ‘REST’ presented on the screen. There were four separate functional runs, in which observers viewed each of the four videos. All four videos ended with an identical 83 s ‘validation’ sequence that was later removed to ensure independent stimulation in each cross-validation fold. Audio was scaled to ensure that no video clips were too loud or too quiet across sessions and was delivered by Sensimetrics earbuds, which provide high-quality acoustic stimulus delivery while attenuating scanner noise. The participants also took part in one hour of resting-state scans, also split into four runs of equal (around 15 min) length. Full details of the procedure and the experimental setup are reported in the HCP S1200 release reference manual53. The ethical aspects of the HCP procedures were approved by the Washington University Institutional Review Board (IRB) (approval number 201204036), and all use of the data reported in this manuscript abides by the WU-Minn HCP Consortium data use terms.

HCP data format and preparation

Ultra-high-field fMRI (7 T) data from the 174 participants were used, sampled at 1.6 mm isotropic resolution and a rate of 1 Hz (ref. 51). Data were preprocessed identically for video-watching and resting-state scans. For all analyses, we used the FIX-independent-component-analysis-denoised time-course data, resampled to the 59,000-vertices-per-hemisphere surface format aligned across participants with the areal-feature-based alignment method (MSMAll)54. These data are freely available from the HCP project website. The MSMAll method is optimized for aligning primary sensory cortices based on variations in myelin density and resting-state connectivity maps18. Owing to the unreliable relation between cortical folding patterns and functional boundaries, the MSM method takes into account underlying cortical microarchitecture, such as myelin, which is known to match sensory brain function better than cortical folding patterns alone55. Previous research has demonstrated that this approach improves the cross-participant alignment of independent task fMRI datasets while decreasing the alignment of cortical folding patterns that do not correlate with cortical areal locations54.

We applied a high-pass filter to the time-series data through a Savitzky-Golay filter (third order, 210 s in length), a robust, flexible filter whose parameters we could tailor to reduce the influence of low-frequency components of the signal unrelated to the content of the experimental stimulation (for example, drift and generic changes in basal metabolism). For each run, BOLD time-series data were then converted to percentage signal change.
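The filtering and normalization steps above can be sketched as follows. This is a hypothetical helper (not the authors' code), assuming the 1 Hz sampling rate stated earlier, so that the 210 s window corresponds to 210 samples; high-pass filtering is implemented by subtracting the Savitzky-Golay estimate of the slow trend:

```python
import numpy as np
from scipy.signal import savgol_filter

def highpass_and_psc(bold, tr=1.0, window_s=210, polyorder=3):
    """High-pass a single-vertex BOLD time series by subtracting a
    Savitzky-Golay estimate of the slow trend, then convert the result
    to percentage signal change.

    bold: 1D array of raw BOLD values.
    tr: sampling interval in seconds (1.0 for this dataset).
    Hypothetical helper; window length and order follow the text.
    """
    window = int(window_s / tr)
    if window % 2 == 0:  # savgol_filter requires an odd window length
        window += 1
    trend = savgol_filter(bold, window_length=window, polyorder=polyorder)
    mean = bold.mean()
    detrended = bold - trend + mean  # keep the mean so PSC is well defined
    return 100.0 * (detrended - mean) / mean
```

In practice this would be applied vertex-wise over the whole surface; only the window length (in seconds) and polynomial order are taken from the text.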

For cross-validation, we split the full dataset into training and test datasets. We removed the final 103 s of each functional run, which corresponded to the identical ‘validation’ sequence and the final rest period at the end of each video. The training dataset therefore consisted of the concatenated data from the four functional runs with this final 103 s removed from each. The test dataset was created by concatenating the final 103 s from each run into a 412 s set of data.
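Given the 1 Hz sampling rate, 103 s corresponds to 103 volumes per run, and the split can be sketched as below. `split_runs` is a hypothetical name, not the authors' code:

```python
import numpy as np

def split_runs(runs, tr=1.0, holdout_s=103):
    """Split per-run data into training and test sets, assuming the
    final `holdout_s` seconds of every run contain the shared
    'validation' sequence and rest period (hypothetical helper
    mirroring the procedure described in the text).

    runs: list of (time, vertices) arrays, one per functional run.
    Returns concatenated (train, test) arrays.
    """
    n_hold = int(holdout_s / tr)
    train = np.concatenate([r[:-n_hold] for r in runs], axis=0)
    test = np.concatenate([r[-n_hold:] for r in runs], axis=0)
    return train, test
```

With four runs, the test set is 4 × 103 = 412 samples long, matching the 412 s test dataset in the text.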

All connective-field models were fit on the individual-participant data and, for video watching, these models were also fit to the data of a time-course-averaged (HCP average) participant. Split-half participant averages (n = 87) were also created through a random 50% split of the individual-participant data. Split-half video averages were created from separate datasets based on the first (videos 1 and 2) and second (videos 3 and 4) halves of the videos.
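The split-half participant averaging can be sketched as follows; this is a minimal illustration under the assumption that the data are stacked as (participants, time, vertices), and the function name is hypothetical:

```python
import numpy as np

def split_half_averages(data, rng=None):
    """Average time courses across two random halves of the participants.

    data: array of shape (participants, time, vertices).
    Hypothetical helper sketching the random 50% split-half averaging
    described in the text (n = 87 per half for 174 participants).
    """
    rng = np.random.default_rng(rng)
    order = rng.permutation(data.shape[0])  # random participant order
    half = data.shape[0] // 2
    first = data[order[:half]].mean(axis=0)
    second = data[order[half:]].mean(axis=0)
    return first, second
```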

Dual-source connective-field model

Model maps of V1 and S1 topography

Our analyses extend the approach of connective-field modelling, wherein responses throughout the brain are modelled as deriving from a ‘field’ of activity on the surface of a ‘source’ region (classically V1). In turn, preferences for positions in the visual field can be estimated by referencing the fitted connective-field positions against the retinotopic map of V1 (Fig. 1a–c). Here we extend this approach by simultaneously modelling brain responses as deriving from connective fields on both the V1 and S1 surfaces. This requires defining both a V1 and an S1 source region and their underlying topographic maps.
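As a minimal sketch of the core idea (not the authors' implementation), a target vertex's time course can be predicted as a Gaussian-weighted sum of source-region time courses, with weights falling off with cortical distance from a candidate connective-field centre. A dual-source model would then sum one such prediction from a V1 field and one from an S1 field; all names and the signature here are assumptions:

```python
import numpy as np

def connective_field_prediction(source_ts, distances, sigma, beta=1.0):
    """Predict a target vertex's time course from a single connective
    field on a source region (for example, V1 or S1).

    source_ts: (time, n_source) source-region vertex time courses.
    distances: (n_source,) cortical distances (mm) from each source
               vertex to the candidate connective-field centre vertex.
    sigma:     connective-field size (mm).
    beta:      amplitude scaling.
    Hypothetical sketch of the Gaussian connective-field model.
    """
    w = np.exp(-distances**2 / (2 * sigma**2))  # Gaussian field weights
    w /= w.sum()                                # normalise to unit mass
    return beta * source_ts @ w

# A dual-source prediction is simply the sum of a V1-field prediction
# and an S1-field prediction, each with its own centre, sigma and beta.
```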
