arxiv:2602.02494

MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training

Published on Feb 2 · Submitted by Dulhan Jayalath on Feb 4

Abstract

AI-generated summary: MEG-XL improves brain-to-text decoding by pre-training with 2.5 minutes of MEG context per sample, significantly outperforming previous models trained with far shorter contexts.

Clinical brain-to-text interfaces are designed for paralysed patients who cannot provide extensive training recordings. Pre-training improves data-efficient generalisation by learning statistical priors across subjects, but these priors critically depend on context. While natural speech can unfold gradually over minutes, most methods pre-train with only a few seconds of context. We therefore propose MEG-XL, a model pre-trained with 2.5 minutes of MEG context per sample, 5-300x longer than prior work and equivalent to 191k tokens, capturing extended neural context. Fine-tuned on word decoding from brain data, MEG-XL matches supervised performance with a fraction of the data (e.g. 1 hr vs 50 hrs) and outperforms brain foundation models. We find that models pre-trained with longer contexts learn representations that transfer better to word decoding. Our results indicate that long-context pre-training helps exploit extended neural context that other methods unnecessarily discard. Code, model weights, and instructions are available at https://github.com/neural-processing-lab/MEG-XL.
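As a rough sanity check on the 191k-token figure, the sketch below computes the token count of one 2.5-minute window under an assumed tokenisation. The sensor count, sampling rate, and patch length are illustrative guesses that happen to land near 191k, not the paper's actual preprocessing, which is documented in the linked repository.

```python
# Back-of-the-envelope token count for a 2.5-minute MEG context window.
# All parameters below are assumptions for illustration only.

N_CHANNELS = 306      # assumed MEG sensor count
SFREQ_HZ = 250        # assumed sampling rate after preprocessing
CONTEXT_S = 150       # 2.5 minutes of context, as in the paper
PATCH_SAMPLES = 60    # assumed samples per temporal patch (one token)

n_timepoints = CONTEXT_S * SFREQ_HZ                  # 37,500 samples/channel
tokens_per_channel = n_timepoints // PATCH_SAMPLES   # 625 patches/channel
n_tokens = N_CHANNELS * tokens_per_channel           # 191,250 ~= 191k tokens
print(f"{n_tokens:,} tokens per 2.5-minute sample")
```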

Community

MEG-XL is a brain-to-text foundation model pre-trained with 2.5 minutes of MEG context per sample. It is designed to capture extended neural context, enabling high data efficiency for decoding words from brain activity.
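To make the fine-tuning stage concrete, here is a minimal PyTorch sketch of attaching a word-classification head to a pre-trained encoder. This is a toy stand-in under assumed shapes, not the paper's implementation: `WordDecoder`, the vocabulary size, and the Transformer backbone are all hypothetical, and the real model and weights live in the linked repository.

```python
import torch
import torch.nn as nn

class WordDecoder(nn.Module):
    """Hypothetical word-classification head on a pre-trained MEG encoder."""

    def __init__(self, encoder: nn.Module, d_model: int, n_words: int):
        super().__init__()
        self.encoder = encoder            # long-context pre-trained backbone
        self.head = nn.Linear(d_model, n_words)

    def forward(self, meg_tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(meg_tokens)      # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))   # pool over time, score each word

# Stand-in backbone; real pre-trained weights would come from the MEG-XL repo.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=4,
)
model = WordDecoder(backbone, d_model=256, n_words=250)
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One fine-tuning step on a hypothetical labelled batch.
meg_tokens = torch.randn(8, 512, 256)    # already-embedded MEG token sequences
labels = torch.randint(0, 250, (8,))     # target word indices
loss = nn.functional.cross_entropy(model(meg_tokens), labels)
loss.backward()
optimiser.step()
```

The point of the pattern is that the backbone's priors come from long-context pre-training, so only a small labelled set (e.g. ~1 hr of recordings) is needed to adapt the head.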


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 0
