
Meta has built an AI that can predict how your brain responds to sound, music, and speech, without scanning your brain, without studying you personally, and without you ever knowing it has done so.
On March 26, 2026, Meta’s AI research division released TRIBE v2, a model trained to predict brain activity in response to audio, images, and language. The model was built using brain scans from more than 700 healthy volunteers as they viewed and listened to a range of media, including images, podcasts, videos, and text.
Meta also made everything public, releasing the model, the code behind it, the research paper, and an interactive demo where anyone can see the predicted brain responses play out in real time as audio or video content is fed in.
What TRIBE v2 Actually Does
Using brain scans from over 700 people as they listened to and watched different types of content, TRIBE v2 learned which sounds and images tend to activate which parts of the brain, and by how much. And so, when you give it a new piece of audio it has never encountered before, it draws on those learned patterns to predict how a brain would respond, without needing a real person in a scanner to find out.
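The idea of learning which stimuli activate which parts of the brain, and by how much, is what neuroscientists call an encoding model. As a minimal sketch of the concept, not Meta's actual code, the snippet below fits a ridge regression from per-second audio features to simulated responses across many voxels (small brain regions), then predicts responses for audio it has never seen. All dimensions and data here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each second of audio is summarised by a 64-dim
# feature vector, and we recorded responses from 500 "voxels".
n_seconds, n_features, n_voxels = 300, 64, 500

X_train = rng.standard_normal((n_seconds, n_features))    # audio features
true_weights = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_weights + 0.1 * rng.standard_normal((n_seconds, n_voxels))

# Ridge regression: one linear map per voxel, fit jointly in closed form.
alpha = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + alpha * np.eye(n_features),
    X_train.T @ Y_train,
)

# Prediction for audio the model has never "heard": no scanner required.
X_new = rng.standard_normal((10, n_features))
Y_pred = X_new @ W    # predicted response of every voxel, for each second
print(Y_pred.shape)   # (10, 500)
```

Real encoding models replace the random features here with rich embeddings from pretrained networks, which is the step TRIBE v2's predecessors took.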
The brain scan technology it works with is called fMRI, short for functional magnetic resonance imaging. fMRI measures brain activity indirectly, by tracking changes in blood flow and oxygen levels: when a part of your brain becomes active, more blood flows to it, and fMRI picks that up. TRIBE v2 learns to predict where that blood flow will go based on what a person is hearing or seeing.
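A key quirk of this blood-flow signal is that it is sluggish: it peaks several seconds after the stimulus that caused it. A toy illustration, using a simple gamma-shaped response curve rather than anything from Meta's release, shows how a single sound event at one moment shows up in the scan later:

```python
import numpy as np

# A gamma-like hemodynamic response: rises after the stimulus, peaks
# around 5 seconds, then decays. (A common textbook simplification.)
t = np.arange(0, 30, 1.0)            # seconds
hrf = t**5 * np.exp(-t)
hrf /= hrf.sum()

stimulus = np.zeros(60)
stimulus[10] = 1.0                   # one sound event at t = 10 s

# The measured blood-oxygen signal is the stimulus smeared by the HRF.
bold = np.convolve(stimulus, hrf)[:60]
print(int(np.argmax(bold)))          # 15 -> the peak arrives ~5 s late
```

This delay is one reason predicting fMRI responses requires looking at a stretch of recent input, not just the current instant.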
The model focuses on two regions of the brain. One handles how we recognise and make sense of what we see. The other handles how we process sound and speech. So when you play a piece of music into TRIBE v2, it predicts which parts of your auditory brain will light up and by how much.
To do this, the model draws on three of Meta’s existing AI systems. One processes visual content. Another processes acoustic patterns, rhythm, and the content of speech. A third handles meaning, context, and language structure. Their combined outputs pass through a processing layer that integrates everything across a 100-second window, a window that long because your brain does not react to sound instantly.
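Mechanically, combining per-second features from separate streams and summarising them over a trailing 100-second window can be sketched in a few lines. The shapes, stream names, and averaging used here are illustrative assumptions, not details from Meta's model, which uses a learned integration layer rather than a plain average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: one feature vector per second from an "acoustic" stream
# and a "language" stream, for a 240-second clip.
seconds, d_audio, d_text = 240, 32, 48
audio_feats = rng.standard_normal((seconds, d_audio))
text_feats = rng.standard_normal((seconds, d_text))

# Fuse the streams by concatenating their features at each second.
fused = np.concatenate([audio_feats, text_feats], axis=1)   # (240, 80)

def pooled_context(features, t, window=100):
    """Summarise the fused features over the `window` seconds ending at t."""
    start = max(0, t - window + 1)
    return features[start:t + 1].mean(axis=0)

# The brain-response prediction at t = 150 would be read off this
# 100-second summary, not the instantaneous features alone.
context = pooled_context(fused, t=150)
print(context.shape)    # (80,)
```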
The Problem It Solves, and Why It Matters
Brain research is slow and expensive in a way that is hard to overstate. If a scientist wanted to find out how a particular piece of audio activates a specific brain region, they would need to recruit dozens of people, book time in a scanning facility that can cost thousands of dollars per hour, spend months analysing the results, and then wait another year for the findings to be reviewed and published.
TRIBE v2 collapses most of that process. Researchers can now run experiments entirely in simulation, feeding new sounds, images, or text into the model and getting predicted brain responses without anyone stepping inside a scanner. This makes it possible to run thousands of virtual experiments at a fraction of the cost and time.
Where This Technology Is Headed
Meta has stated the application goals for this technology include speeding up research into neurological diseases and developing better brain-computer interfaces that could allow people with speech or movement impairments to communicate using brain signals.
But the technology also points somewhere more unsettling. A model that can predict how your brain responds to audio does not only have medical uses. It could tell a content creator, for instance, which part of a soundtrack makes listeners feel tense. It could tell an advertiser which voice tone drives the strongest emotional reaction. None of these uses require accessing your actual brain. They only require knowing, statistically, how brains like yours tend to respond.
AI has already learned to predict what you will click, what you will buy, and what will keep you scrolling. TRIBE v2 suggests the next frontier is predicting what your brain will feel before you feel it.
Researchers have flagged that brain activity data is among the most sensitive information that exists, and that as this technology develops, strict safeguards and new policy frameworks will be needed to govern how that data is collected and used.
For now, Meta has released TRIBE v2 as an open research tool. What gets built on top of it, and by whom, is still an open question.
