A new study by Apple researchers presents a method that lets an AI model learn one aspect of the structure of brain electrical activity without any annotated data. Here’s how.
PAirwise Relative Shift
In a new study called “Learning the relative composition of EEG signals using pairwise relative shift pretraining”, Apple introduces PARS, which is short for PAirwise Relative Shift.
Current models rely heavily on human-annotated brain activity data: labels indicating which segments correspond to the Wake, REM, and Non-REM 1–3 sleep stages, where seizure events start and end, and so on.
What Apple did, in a nutshell, was get a model to teach itself to predict how far apart in time different segments of brain activity occur, based on raw, unlabeled data.
From the study:
“Self-supervised learning (SSL) offers a promising approach for learning electroencephalography (EEG) representations from unlabeled data, reducing the need for expensive annotations for clinical applications like sleep staging and seizure detection. While current EEG SSL methods predominantly use masked reconstruction strategies like masked autoencoders (MAE) that capture local temporal patterns, position prediction pretraining remains underexplored despite its potential to learn long-range dependencies in neural signals. We introduce PAirwise Relative Shift or PARS pretraining, a novel pretext task that predicts relative temporal shifts between randomly sampled EEG window pairs. Unlike reconstruction-based methods that focus on local pattern recovery, PARS encourages encoders to capture relative temporal composition and long-range dependencies inherent in neural signals. Through comprehensive evaluation on various EEG decoding tasks, we demonstrate that PARS-pretrained transformers consistently outperform existing pretraining strategies in label-efficient and transfer learning settings, establishing a new paradigm for self-supervised EEG representation learning.”
In other words, the researchers saw that existing methods primarily train models to fill in small gaps in the signal. So they explored whether an AI could learn the broader structure of EEG signals directly from raw, unlabeled data.
As it turns out, it can.
In the paper, they describe a self-supervised learning method that predicts how small segments of an EEG signal relate to each other in time, which enables better performance across multiple EEG analysis tasks, from sleep staging to seizure detection.
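To make the pretext task concrete, here is a minimal sketch of how pairwise relative shift training pairs could be generated. This is an illustration, not the paper's actual code: the window length, the single-channel signal, and the normalized-shift target are all assumptions made for clarity.

```python
import numpy as np

def sample_pars_pair(signal, window_len, rng):
    """Sample two windows from one recording and return them along with
    their relative temporal shift (the PARS pretext target).

    Illustrative sketch: the real method operates on multichannel EEG
    and a transformer encoder; details here are simplified.
    """
    max_start = len(signal) - window_len
    i, j = rng.integers(0, max_start + 1, size=2)
    win_a = signal[i:i + window_len]
    win_b = signal[j:j + window_len]
    # Target: how far window B starts after window A, normalized by the
    # maximum possible shift (normalization is an illustrative choice).
    shift = (j - i) / max_start
    return win_a, win_b, shift

rng = np.random.default_rng(0)
eeg = rng.standard_normal(3000)  # stand-in for one raw EEG channel
a, b, shift = sample_pars_pair(eeg, window_len=250, rng=rng)
print(a.shape, b.shape, shift)
```

A model pretrained this way never sees labels; it only learns to answer "how far apart in time are these two windows?", which pushes the encoder to represent the signal's long-range temporal structure rather than just local patterns.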