Hi, I'm a PhD student at Meta and ENS, in Paris. My work focuses on two main fields:

  • Building AI models to decode neural activity, with the eventual goal of helping people with speech difficulties find their voice again.
  • Exploring the similarities and differences between large models (LLMs and vision models) and the human brain, and using these insights to:
    • Better understand the brain
    • Build better, more efficient AI models: the brain is the most efficient system we know of, so it must be doing something right.

My advisors are Jean-Rémi King (Brain & AI team, Meta AI) and Valentin Wyart (LNC2, ENS).

Papers

2026 · Preprint

Neuralset: a high-performing Python package for Neuro-AI

Jean-Rémi King, Corentin Bel, Linnea Evanson, Julien Gadonneix, Sophia Houhamdi, Jarod Lévy, Joséphine Raugel, Andrea Santos Revilla, Mingfang Zhang, Julie Bonnaire, Charlotte Caucheteux, Alexandre Défossez, Théo Desbordes, Pablo Diego-Simón, Shubh Khanna, Juliette Millet, Pierre Orhan, Saarang Panchavati, Antoine Ratouchniak, Alexis Thual, Teon L. Brooks, Katelyn Begany, Yohann Benchetrit, Marlène Careil, Hubert Banville, Stéphane d'Ascoli, Simon Dahan, Jérémy Rapin

FAIR · Meta AI · 2026

Neuralset is a unified Python package and benchmark for neuro-AI: it standardises datasets, metrics, and baselines so deep-learning models can be directly compared on how well they predict neural activity.

2026 · Preprint

TRIBEv2 — scaling brain encoding across subjects and modalities

Stéphane d'Ascoli, Jérémy Rapin, Yohann Benchetrit, Hubert Banville, Joséphine Raugel, Jean-Rémi King

FAIR · Meta AI · 2026

TRIBEv2 extends the TRIBE framework to jointly model fMRI responses across subjects, tasks, and modalities, producing a single foundation brain encoder that transfers across datasets.

ICLR 2026

Disentangling the Factors of Convergence between Brains and DINOv3

Joséphine Raugel, Marc Szafraniec, Huy V. Vo, Camille Couprie, Jérémy Rapin, Stéphane d'Ascoli, Patrick Labatut, Piotr Bojanowski, Valentin Wyart, Jean-Rémi King

arXiv:2508.18226 · 2025

We analyse when and why self-supervised vision models (DINOv3) align with human visual cortex responses, disentangling effects of scale, data, objective, and architecture on brain similarity.

Spotlight · NeurIPS 2025

Scaling and context steer LLMs along the same computational path as the human brain

Joséphine Raugel, Stéphane d'Ascoli, Jérémy Rapin, Valentin Wyart, Jean-Rémi King

arXiv:2512.01591 · 2025

Across a wide range of LLMs, scaling and in-context conditioning push model representations along a trajectory that matches how the human brain processes the same inputs — suggesting a shared computational axis between LLMs and cortex.

CCN 2024

Decoding of hierarchical inference in the human brain during speech processing with large language models

Joséphine Raugel, Valentin Wyart, Jean-Rémi King

CCN · 2024

Using MEG recordings of subjects listening to continuous speech, we decode a hierarchy of linguistic inferences (phonemes → words → syntax → meaning) from cortical activity, aligned with successive layers of large language models.

Workshops & Talks

2025 · Workshop

Generative AI workshop – Brain to Machine

Workshop · GenAI

Workshop on generative AI and its connections to brain-inspired modelling, including a discussion of how foundation models can serve as computational probes of biological intelligence.

2025 · Conference

Invited talk — Montreal AI & Neuroscience (MAIN)

MAIN · Montréal

Talk at the Montréal AI & Neuroscience Conference on aligning large models with brain activity and what it reveals about representations in both systems.

2025 · NeurIPS

NeurIPS 2025 — DINOv3 and the DINO × Brain paper

NeurIPS · 2025

Talk at NeurIPS 2025 presenting DINOv3 and the DINO × Brain paper: a study of why and when large vision foundation models align with human visual cortex.

2024 · CCN Conference

Talk at CCN — Cognitive Computational Neuroscience

CCN

Contributed talk at CCN on decoding hierarchical inference in the human brain during speech processing, using large language models as computational probes.

Industry work

I currently collaborate with other teams at Meta, working towards more interpretable models through neuro-inspired methods.

Previously, at MBT, a neurotech startup, I developed a model to predict the positive impact of neurofeedback on subjects from their physiological data.

Training

I hold a Master's degree in mathematics, computer science, and AI from Mines Paris – PSL, alongside dual programs at MVA (deep learning) and the Cogmaster at ENS (neuroscience modelling). I also spent time in Canada and Switzerland through exchanges and internships at Polytechnique Montréal and EPFL.

AI & Arts

For an exhibition on AI and the arts, we developed PERSONA: a reinforcement-learning model that adapts in real time, learning to communicate with humans through facial expressions.

GitHub repo