Deciphering language processing in the human brain through LLM representations
Published on: 2025-05-27 05:44:37
How does the human brain process natural language during everyday conversations? Theoretically, large language models (LLMs) and symbolic psycholinguistic models of human language offer fundamentally different computational frameworks for coding natural language. Large language models do not depend on symbolic parts of speech or syntactic rules. Instead, they rely on simple self-supervised objectives, such as next-word prediction, with generation further refined by reinforcement learning. This allows them to produce context-specific linguistic outputs drawn from real-world text corpora, effectively encoding the statistical structure of natural speech (sounds) and language (words) into a multidimensional embedding space.
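To make the idea of a "multidimensional embedding space" concrete, here is a minimal sketch of extracting contextual embeddings from a pretrained LLM. The choice of model (GPT-2) and library (Hugging Face transformers) are illustrative assumptions, not the article's actual pipeline:

```python
# Illustrative sketch: how an LLM maps a sentence of words into a
# multidimensional embedding space. GPT-2 via Hugging Face transformers
# is an assumption for demonstration, not the study's specific model.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

sentence = "The brain processes language during everyday conversation."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Each token receives one vector per layer; the final hidden state is a
# (batch, num_tokens, 768) tensor -- a context-specific embedding for
# every word in the sentence.
embeddings = outputs.hidden_states[-1]
print(embeddings.shape)  # torch.Size([1, num_tokens, 768])
```

Because these vectors are computed from the full surrounding context rather than from symbolic categories, the same word receives a different embedding in different sentences, which is what makes such representations a candidate model for context-dependent language processing in the brain.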
Inspired by the success of LLMs, our team at Google Research, in collaboration with Princeton University, NYU, and HUJI, sought to explore the similarities and differences in how the human brain and deep language models process natural language to achieve their remarkable capabilities.