---
license: mit
pipeline_tag: other
tags:
- neuroscience
- brain-to-text
- speech decoding
- brain decoding
- large brain models
- brain foundation models
---

# MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training

MEG-XL is a brain-to-text foundation model pre-trained with 2.5 minutes of MEG context per sample (equivalent to 191k tokens). It is designed to capture extended neural context, enabling high data efficiency for decoding words from brain activity.

- **Paper:** [MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training](https://huggingface.co/papers/2602.02494)
- **Repository:** [GitHub - neural-processing-lab/MEG-XL](https://github.com/neural-processing-lab/MEG-XL)
- **Weights/Checkpoint:** [meg-xl-med.ckpt](https://huggingface.co/pnpl/MEG-XL/blob/main/meg-xl-med.ckpt)

## Usage

Instructions for environment setup and data preparation are available in the [official GitHub repository](https://github.com/neural-processing-lab/MEG-XL).

### Fine-tuning MEG-XL for Brain-to-Text

You can fine-tune or evaluate the model on word decoding tasks using the following command structure, replacing `{armeni, gwilliams, libribrain}` with the dataset you want to use:

```bash
python -m brainstorm.evaluate_criss_cross_word_classification \
  --config-name=eval_criss_cross_word_classification_{armeni, gwilliams, libribrain} \
  model.criss_cross_checkpoint=/path/to/your/checkpoint.ckpt
```

### Linear Probing

To perform linear probing, use:

```bash
python -m brainstorm.evaluate_criss_cross_word_classification \
  --config-name=eval_criss_cross_word_classification_linear_probe_{armeni, gwilliams, libribrain} \
  model.criss_cross_checkpoint=/path/to/your/checkpoint.ckpt
```

## Requirements

- Python >= 3.12
- High-VRAM GPU (>= 40-80 GiB depending on the task)

## Citation

If you find this work helpful in your research, please cite:

```bibtex
@article{jayalath2026megxl,
  title={{MEG-XL}: Data-Efficient Brain-to-Text via Long-Context Pre-Training},
  author={Jayalath, Dulhan and Parker Jones, Oiwi},
  journal={arXiv preprint arXiv:2602.02494},
  year={2026}
}
```
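
## Downloading the Checkpoint

The evaluation commands above expect a local checkpoint path. Below is a minimal sketch for fetching `meg-xl-med.ckpt` from this repository and inspecting it. It assumes `huggingface_hub` and `torch` are installed; the repo id and filename come from the links above, but the checkpoint's internal layout (a Lightning-style dict with a `state_dict` key) is an assumption, not something documented here.

```python
# Sketch: fetch the pre-trained checkpoint from the Hub and inspect it locally.
from huggingface_hub import hf_hub_download
import torch

# Download meg-xl-med.ckpt from this model repository (cached locally by the Hub client).
ckpt_path = hf_hub_download(repo_id="pnpl/MEG-XL", filename="meg-xl-med.ckpt")

# Load on CPU just to look at what is inside; fine-tuning and evaluation are run via the
# brainstorm commands above, passing model.criss_cross_checkpoint=<ckpt_path>.
# weights_only=False is needed if the checkpoint stores non-tensor metadata
# (as Lightning checkpoints typically do); only use it for files you trust.
checkpoint = torch.load(ckpt_path, map_location="cpu", weights_only=False)

if isinstance(checkpoint, dict):
    # Print the top-level keys (e.g. "state_dict", hyperparameters) if present.
    print(list(checkpoint.keys()))
```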