
Rapid concerted switching of the neural code in the inferotemporal cortex

Why This Matters

This study reveals rapid and coordinated changes in neural coding within the inferotemporal cortex of macaques, shedding light on the brain's dynamic processing of visual information. Understanding these neural mechanisms can inform the development of more adaptive artificial intelligence systems and improve treatments for visual recognition impairments.

Key Takeaways

Three male rhesus macaques (Macaca mulatta) were used in this study. All procedures conformed to local and US National Institutes of Health (NIH) guidelines, including the NIH Guide for the Care and Use of Laboratory Animals. All experiments were performed with the approval of the UC Berkeley Animal Care and Use Committee.

No statistical methods were used to predetermine sample size. The experiments were not randomized and investigators were not blinded to allocation during experiments and outcome assessment.

Visual stimuli

Face-patch localizer

The fMRI localizer stimulus contained five types of blocks, consisting of images of faces, hands, technological objects, vegetables and fruits, and bodies. Face blocks were presented in alternation with non-face blocks. Each block lasted 24 s (each image lasted 500 ms). In each run, the face block was repeated four times and each of the non-face blocks was shown once. A block of grid-scrambled noise patterns was presented between each stimulus block and at the beginning and end of each run. Each run lasted 408 s.
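The timing figures above are internally consistent, which can be verified with a short arithmetic sketch (variable names are hypothetical; the values come from the text: five stimulus block types, the face block repeated four times per run, one noise block between stimulus blocks and at each end):

```python
block_s = 24           # duration of each block, in seconds
image_s = 0.5          # each image lasts 500 ms
images_per_block = int(block_s / image_s)  # images shown per block

n_face_blocks = 4      # face block repeated four times per run
n_nonface_blocks = 4   # hands, technological objects, vegetables/fruits, bodies
n_stimulus = n_face_blocks + n_nonface_blocks

# Noise blocks sit between stimulus blocks and at the start and end of the run.
n_noise = n_stimulus + 1

run_s = (n_stimulus + n_noise) * block_s
print(images_per_block, run_s)  # 48 images per block; 408 s per run
```

With 8 stimulus blocks and 9 interleaved noise blocks of 24 s each, the run duration comes out to exactly the 408 s stated above.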

Stimuli for electrophysiology experiments

Human faces

We acquired 2,000 frontal views of faces, as in ref. 42, from various face databases: FERET (refs. 49,50); CVL (Peter Peer, CVL Face Database, Computer Vision Laboratory, University of Ljubljana, Slovenia; http://www.lrv.fri.uni-lj.si/facedb.html) (ref. 48); MR2 (ref. 51); Chicago Face Database (ref. 52); CelebA (ref. 53); FEI (fei.edu.br/~cet/facedatabase.html) (ref. 47); PICS (pics.stir.ac.uk); Caltech Face Dataset 1999 (Caltech DATA, 2022; https://doi.org/10.22002/D1.20237); Essex (Face Recognition Data, University of Essex, UK); and MUCT (www.milbo.org/muct) (ref. 54). The faces were aligned using facial landmarks, as in ref. 42, with an open-source face aligner (github.com/jrosebr1/imutils).

For Extended Data Fig. 4c (right), we used synthetic face images from Syn-Vis-v0 (ref. 55).

Non-face objects
