Meta releases TRIBE v2: AI that predicts how the brain responds to images, sound and text
Meta FAIR (Fundamental AI Research) has released TRIBE v2, an AI model that acts as a digital twin of brain activity. Published on March 26-27, 2026, the release marks a significant advance in computational neuroscience.
TRIBE v2 (Trimodal Brain Encoder v2) was trained on over 1,000 hours of fMRI data from 720 subjects exposed to films, podcasts, images, and text. It predicts how the brain responds to visual, auditory, and linguistic stimuli at 70 times the spatial resolution of previous state-of-the-art models.
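Models of this kind are typically evaluated as encoding models: predicted voxel responses are compared against measured fMRI on held-out data, most often with a per-voxel Pearson correlation. A minimal sketch of that scoring step; the array names and shapes below are illustrative assumptions, not from the TRIBE v2 release:

```python
# Minimal sketch of how an fMRI encoding model is commonly scored:
# per-voxel Pearson correlation between predicted and measured responses
# on a held-out recording. All names and shapes are illustrative.
import numpy as np

def voxelwise_correlation(predicted, measured):
    """predicted, measured: (n_timepoints, n_voxels) arrays."""
    p = predicted - predicted.mean(axis=0)
    m = measured - measured.mean(axis=0)
    num = (p * m).sum(axis=0)
    denom = np.sqrt((p ** 2).sum(axis=0) * (m ** 2).sum(axis=0))
    return num / denom  # one correlation score per voxel

rng = np.random.default_rng(0)
measured = rng.standard_normal((300, 5000))              # toy held-out fMRI run
predicted = measured + rng.standard_normal((300, 5000))  # toy model predictions
scores = voxelwise_correlation(predicted, measured)
print(f"mean voxel correlation: {scores.mean():.3f}")
```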
A key capability is zero-shot generalization: the model can make predictions for new individuals, unseen languages, and entirely new tasks without retraining. In controlled tests it replicated well-known neuroscientific findings, including specialized brain regions for processing faces, places, and language.
Meta has made the code, model weights, and an interactive demo freely available. Potential applications range from accelerated neuroscience research to improved brain-computer interfaces and treatment of neurological disorders.
The architecture combines LLaMA 3.2 for text, V-JEPA2 for video, and Wav2Vec-BERT for audio.
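One plausible reading of how these pieces fit together: frozen embeddings from each backbone are projected into a shared space, fused over time, and read out as per-voxel fMRI predictions. The sketch below is a hedged illustration; the module names, dimensions, and fusion strategy are assumptions, and Meta's released code is the authoritative reference.

```python
# Hypothetical sketch of a trimodal brain encoder in the spirit of TRIBE v2.
# Dimensions, module names, and the fusion strategy are illustrative
# assumptions; consult the released code for the actual architecture.
import torch
import torch.nn as nn

class TrimodalBrainEncoder(nn.Module):
    def __init__(self, d_text=4096, d_video=1024, d_audio=1024,
                 d_model=512, n_voxels=50_000):
        super().__init__()
        # Project each backbone's embeddings (text: LLaMA 3.2, video: V-JEPA2,
        # audio: Wav2Vec-BERT) into a shared representation space.
        self.proj_text = nn.Linear(d_text, d_model)
        self.proj_video = nn.Linear(d_video, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        # Fuse the time-aligned modality streams with a small transformer.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)
        # Linear readout from fused features to predicted per-voxel responses.
        self.readout = nn.Linear(d_model, n_voxels)

    def forward(self, text_emb, video_emb, audio_emb):
        # Each input: (batch, time, dim), time-aligned to the fMRI sampling.
        x = (self.proj_text(text_emb)
             + self.proj_video(video_emb)
             + self.proj_audio(audio_emb))
        x = self.fusion(x)      # mix information across time steps
        return self.readout(x)  # (batch, time, n_voxels)
```

Summing the projected streams is only one simple fusion choice; concatenation or cross-attention would be equally consistent with "combines" as described here.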