Research March 30, 2026 · Updated March 30, 2026 · 5 min read

How I Turned Meta AI's Brain Scanner Model Into a Free SEO Tool

Metehan Yesilyurt

AI Search & SEO Researcher

Using fMRI Brain Data to Score Content Before Publishing

The tool tells you how the brain actually responds to your titles, intros, and SERP screenshots before you publish anything.

You can try it here: NeuralSEO on Hugging Face

The problem with current SEO tools & AI

Every SEO tool or AI model on the market measures the same thing: what already happened. Rankings, clicks, impressions, keyword difficulty: all lagging indicators. You publish, wait, and hope. AI models, meanwhile, can only guess at what you asked for; they are blind to how a real reader will actually respond.

But the real question has always been: will a human brain actually pay attention to this?

That’s not a metaphor. It’s literally measurable. And now, thanks to neuroscience research from Meta AI, we can predict it before publishing.

What is Meta AI TRIBE v2?

TRIBE v2 stands for TRImodal Brain Encoder v2. It's a foundation model from Meta AI's FAIR lab, trained on fMRI recordings (actual brain scans) collected while 700+ volunteers watched videos, listened to audio, and read text. It is also multilingual.

The model learned to predict how the human cortex responds to any input across roughly 20,000 cortical vertices. Feed it a sentence, and it tells you which brain regions activate, how strongly, and for how long.
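To make that concrete, here is a hedged sketch of what consuming such a prediction could look like: a per-vertex activation vector aggregated into named region scores. The region names, vertex groupings, and toy values are illustrative assumptions, not the real TRIBE v2 output format.

```python
# Hypothetical sketch: assume one inference timestep gives a list of
# per-vertex activation values; aggregate them into per-region means.
# Region boundaries here are made up for illustration.

def region_scores(activations, regions):
    """Average predicted activation per named cortical region.

    activations: list of per-vertex values for one timestep
    regions: dict mapping region name -> list of vertex indices
    """
    return {
        name: sum(activations[i] for i in idx) / len(idx)
        for name, idx in regions.items()
    }

# Toy example with 6 "vertices" split into two regions
acts = [0.9, 0.8, 0.7, 0.1, 0.2, 0.3]
regions = {"frontal_attention": [0, 1, 2], "default_mode": [3, 4, 5]}
scores = region_scores(acts, regions)
```

The same pattern extends over time: repeat per timestep and you get the "how strongly, and for how long" picture the model provides.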

I looked at this and immediately thought: this could work for SEO. It's experimental, of course.

What NeuralSEO does

I built three core analysis tools on top of TRIBE v2.

1. Neural Screenshot Analyzer

Upload a screenshot of a Google SERP, ChatGPT response, Perplexity answer, or Google AI Mode result. The tool splits the screenshot into layout regions (title, snippet, sidebar, and so on), crops each region, and runs it through TRIBE v2's visual inference pipeline. Each element is scored by neural attention activation, and live scored overlays are drawn directly on the image so you can see exactly which parts of the page grab the brain's attention.
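A minimal sketch of the overlay step, under the assumption that each cropped region gets a single neural score: rank the regions and build the labels to draw. The bounding boxes and the `score_region()` stub are stand-ins, not the app's actual code.

```python
# Hedged sketch: rank screenshot regions by a stubbed neural score and
# build the overlay records that would be drawn onto the image.

def score_region(crop_name):
    # Stand-in for TRIBE v2 visual inference on a cropped region.
    demo = {"title": 0.91, "snippet": 0.74, "sidebar": 0.38}
    return demo.get(crop_name, 0.5)

def build_overlays(regions):
    """regions: dict of name -> (x, y, w, h) bounding box on the screenshot."""
    scored = [
        {"region": name, "box": box, "score": score_region(name)}
        for name, box in regions.items()
    ]
    # Highest neural attention first, so the hottest element is labeled first
    return sorted(scored, key=lambda r: r["score"], reverse=True)

layout = {
    "title": (0, 0, 600, 60),
    "snippet": (0, 70, 600, 120),
    "sidebar": (620, 0, 300, 400),
}
overlays = build_overlays(layout)
```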

This is the closest thing to eye-tracking without actual eye-tracking hardware.

2. Intro Paragraph Analyzer

Paste your opening paragraph (auto-trimmed to 600 characters). TRIBE v2 scores it across four neural dimensions:

  • Hook Strength: does the opening trigger frontal attention networks?
  • Engagement: global neural activation level
  • Salience: does it stand out from noise?
  • Retention: will the reader’s brain encode this into memory?

You get a 0 to 100 neural score, a radar chart breakdown, and optional Gemini-powered rewrite recommendations.
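The flow above can be sketched in a few lines, with the four dimension scores stubbed out (the real values come from TRIBE v2 inference). The equal weighting is my assumption; the app may combine dimensions differently.

```python
# Minimal sketch of the intro scoring flow. Dimension values are stubs.

MAX_CHARS = 600  # the analyzer auto-trims input to this length

def trim_intro(text):
    return text[:MAX_CHARS]

def neural_score(dimensions):
    """Combine four 0-1 dimension scores into a 0-100 composite.
    Equal weights are an illustrative assumption."""
    weights = {"hook": 0.25, "engagement": 0.25, "salience": 0.25, "retention": 0.25}
    return round(100 * sum(dimensions[k] * w for k, w in weights.items()))

dims = {"hook": 0.8, "engagement": 0.6, "salience": 0.7, "retention": 0.5}
score = neural_score(dims)
```

The radar chart in the app is just these four dimension values plotted on separate axes.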

3. Neural CTR Predictor

Enter a keyword. Gemini generates 10 to 20 dynamic title tag variants. Each title runs through TRIBE v2 individually, scored by frontal attention network activation and salience response. You get a ranked list of predicted organic CTR before you publish, without A/B testing.
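The ranking step reduces to "generate variants, score each, sort." Here is a hedged sketch where `gemini_variants()` and `tribe_score()` are stand-ins for the real Gemini and TRIBE v2 calls; the length-based heuristic is purely a placeholder.

```python
# Hedged sketch of the title-ranking step with stubbed model calls.

def gemini_variants(keyword):
    # Stand-in: the app asks Gemini Flash for 10-20 title tag variants.
    return [
        f"{keyword}: The Complete Guide",
        f"What Is {keyword}? Explained Simply",
        f"7 {keyword} Mistakes to Avoid",
    ]

def tribe_score(title):
    # Stand-in for frontal attention + salience activation scoring.
    # Toy heuristic only: shorter titles score higher here.
    return max(0.0, 1.0 - len(title) / 100)

def predict_ctr_ranking(keyword):
    titles = gemini_variants(keyword)
    return sorted(titles, key=tribe_score, reverse=True)

ranking = predict_ctr_ranking("neural SEO")
```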

How brain signals map to SEO signals

Here’s how TRIBE v2’s brain activation patterns map to SEO-relevant signals:

Neural Signal | SEO Meaning
Language comprehension activation | Readability and clarity
Frontal attention networks | Will readers stay or bounce?
Activation entropy (spatial complexity) | E-E-A-T proxy: expert vs. thin content
Salience network | Does your title demand attention?
Default Mode Network (inverse) | Mind-wandering risk = bounce rate risk

These aren’t traditional SEO metrics. They’re neurological proxies, directional signals based on how the human brain processes content.
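One detail worth making explicit is the "(inverse)" on the Default Mode Network row: higher mind-wandering activation means higher bounce risk, so it subtracts rather than adds. A sketch of folding the signals into one directional score, with weights that are illustrative assumptions rather than the app's actual values:

```python
# Sketch: combine neural signals into one directional content score.
# The Default Mode Network is inverted, since more mind-wandering
# implies more bounce risk. All weights are illustrative assumptions.

def seo_proxy_score(signals):
    """signals: dict of 0-1 activations keyed by neural signal."""
    positive = (
        signals["language_comprehension"]
        + signals["frontal_attention"]
        + signals["activation_entropy"]
        + signals["salience"]
    ) / 4
    # Invert DMN: less mind-wandering is better for retention
    dmn_penalty = signals["default_mode"]
    return round(100 * max(0.0, positive - 0.5 * dmn_penalty))

score = seo_proxy_score({
    "language_comprehension": 0.8,
    "frontal_attention": 0.7,
    "activation_entropy": 0.6,
    "salience": 0.9,
    "default_mode": 0.2,
})
```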

Technical architecture

The stack runs on Hugging Face Spaces with GPU allocation:

  • Model: facebook/tribev2, Meta’s trimodal brain encoder
  • Inference: Text goes through TTS audio, then word-level timestamps via faster-whisper, then TRIBE v2 fMRI prediction
  • Visual pipeline: Image becomes a short MP4 video via moviepy, then goes through TRIBE v2 visual inference
  • Title generation: Google Gemini Flash generates dynamic variants and TRIBE v2 scores them
  • Frontend: Gradio with custom dark theme, procedural Three.js brain visualization (the brain visualization is still in development)
  • Brain viewer: Procedural mesh with 5 cortical regions that light up based on actual analysis scores

The text pipeline is particularly interesting. TRIBE v2 was trained on multimodal data, so even for text analysis, the input goes through a TTS step to generate audio, which is then transcribed with word-level timestamps. This gives the model the temporal dynamics it needs to predict brain activation patterns over time.
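The text pipeline described above can be written as a three-stage skeleton. The stage functions below are stubs standing in for the TTS engine, faster-whisper, and TRIBE v2; only the orchestration reflects the described flow.

```python
# Skeleton of the text pipeline: text -> TTS audio -> word-level
# timestamps -> fMRI prediction. All three stages are stubbed.

def synthesize_audio(text):
    # Stand-in for the TTS step; returns a fake audio handle.
    return {"text": text, "duration_s": 0.4 * len(text.split())}

def word_timestamps(audio):
    # Stand-in for faster-whisper transcription with word timestamps.
    words = audio["text"].split()
    step = audio["duration_s"] / max(len(words), 1)
    return [(word, round(i * step, 2)) for i, word in enumerate(words)]

def predict_fmri(timestamps):
    # Stand-in for TRIBE v2: one activation value per timed word.
    return [0.5 for _ in timestamps]

def text_pipeline(text):
    audio = synthesize_audio(text)
    stamps = word_timestamps(audio)
    return predict_fmri(stamps)

activations = text_pipeline("the brain loves a strong hook")
```

The timestamps are the point of the detour through audio: they give each word a position in time, which is what lets the model predict activation as a temporal curve rather than a single static value.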

Limitations

This needs to be said clearly.

Neural scores are directional signals, not ground-truth ranking guarantees. Google's ranking algorithm doesn't use fMRI data (as far as we know); it relies on entirely different signals.

GPU quotas are real. On Hugging Face's free tier, large batches can time out. Use smaller inputs when possible.

First request is slow. The TRIBE v2 model weighs around 6 GB and loads on the first inference call.

Non-commercial use only. TRIBE v2 is licensed CC BY-NC 4.0.

Why I built this

I’ve been in SEO for years, and the gap between what we measure and what actually matters to users has always bothered me. We optimize for algorithms, but algorithms are trying to approximate what humans want.

TRIBE v2 skips the algorithm entirely. It predicts the human response directly.

Is it perfect? No. Is it a useful signal? I believe so. At minimum, it’s a fundamentally different lens on content quality, one grounded in neuroscience rather than keyword density.

Try it

NeuralSEO is free and open: https://huggingface.co/spaces/metehan777/neuralseo

If you find it useful or have feedback, reach out:


NeuralSEO is built on Meta AI’s TRIBE v2 (CC BY-NC 4.0). For non-commercial use only.
