---
language:
- en
license: cc-by-nc-4.0
task_categories:
- other
---
# LibriBrain (Sherlock Holmes 1–7)
[Paper](https://huggingface.co/papers/2506.02098) | [Code](https://github.com/neural-processing-lab/pnpl)
This repository contains the LibriBrain data organised by book: MEG recordings (`.h5`), event annotations (`.tsv`), and the audiobook stimulus audio (`.wav`).
LibriBrain was first open-sourced as part of the [2025 PNPL Competition](https://libribrain.com/).
In addition, LibriBrain is used as a fine-tuning dataset in the paper ["MEG-XL: Data-Efficient Brain-to-Text via Long-Context Pre-Training"](https://huggingface.co/papers/2602.02494) to evaluate word decoding from brain data.
## Sample Usage
The easiest way to get started with the dataset is the [pnpl Python library](https://github.com/neural-processing-lab/pnpl), which provides the following two dataset classes:
### LibriBrainSpeech
This wraps the LibriBrain dataset for use in speech detection problems.
```python
from pnpl.datasets import LibriBrainSpeech
speech_example_data = LibriBrainSpeech(
    data_path="./data/",
    partition="train"
)
sample_data, label = speech_example_data[0]
# Print out some basic info about the sample
print("Sample data shape:", sample_data.shape)
print("Label shape:", label.shape)
```
### LibriBrainPhoneme
This wraps the LibriBrain dataset for use in phoneme classification problems.
```python
from pnpl.datasets import LibriBrainPhoneme
phoneme_example_data = LibriBrainPhoneme(
    data_path="./data/",
    partition="train"
)
sample_data, label = phoneme_example_data[0]
# Print out some basic info about the sample
print("Sample data shape:", sample_data.shape)
print("Label shape:", label.shape)
```
### Usage in MEG-XL
To fine-tune the MEG-XL model on the LibriBrain dataset for word decoding, you can use the following command from the [official repository](https://github.com/neural-processing-lab/MEG-XL):
```bash
python -m brainstorm.evaluate_criss_cross_word_classification \
    --config-name=eval_criss_cross_word_classification_libribrain \
    model.criss_cross_checkpoint=/path/to/your/checkpoint.ckpt
```
Note: For the MEG-XL repo, you will need to adjust the dataset paths in the configuration files to point to your local download of the data. The pnpl library, by contrast, downloads the data from Hugging Face automatically.
## Repository structure
Data are organised into seven top-level directories:
- `Sherlock1/`
- `Sherlock2/`
- …
- `Sherlock7/`
Each `Sherlock{i}` directory contains:
- `Sherlock{i}/derivatives/events/` — event annotation files (`.tsv`)
- `Sherlock{i}/derivatives/serialised/` — MEG recordings (`.h5`)
- `Sherlock{i}/stimuli/audio/` — stimulus audio (`.wav`)
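As a minimal sketch of how this layout maps to filesystem paths, the helper below (our own illustration, not part of pnpl) builds the three subdirectories for a given book:

```python
from pathlib import Path


def libribrain_paths(root: str, book: int) -> dict[str, Path]:
    """Return the expected subdirectories for one Sherlock book (1-7)."""
    base = Path(root) / f"Sherlock{book}"
    return {
        "events": base / "derivatives" / "events",   # event annotations (.tsv)
        "meg": base / "derivatives" / "serialised",  # MEG recordings (.h5)
        "audio": base / "stimuli" / "audio",         # stimulus audio (.wav)
    }


paths = libribrain_paths("./data", 1)
print(paths["meg"].as_posix())  # data/Sherlock1/derivatives/serialised
```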
## Stimulus audio (LibriVox)
The spoken-audio stimuli are derived from **LibriVox** public-domain recordings of the first seven Sherlock Holmes books (recording versions linked below). The stimuli are provided in this repository as WAV files converted from the LibriVox downloads.
### LibriVox source URLs (recording versions)
1. https://librivox.org/a-study-in-scarlet-version-6-by-sir-arthur-conan-doyle/
2. https://librivox.org/the-sign-of-the-four-version-3-by-sir-arthur-conan-doyle/
3. https://librivox.org/the-adventures-of-sherlock-holmes-version-4-by-sir-arthur-conan-doyle/
4. https://librivox.org/the-memoirs-of-sherlock-holmes-by-sir-arthur-conan-doyle-2/
5. https://librivox.org/the-hound-of-the-baskervilles-version-4-by-sir-arthur-conan-doyle/
6. https://librivox.org/the-return-of-sherlock-holmes-by-sir-arthur-conan-doyle-2/
7. https://librivox.org/the-valley-of-fear-version-3-by-sir-arthur-conan-doyle/
### Audio format
The WAV files in this repository are:
- mono (1 channel), 22,050 Hz, 16-bit signed integer PCM
Example conversion command (SoX):
```bash
sox "INPUT_FROM_LIBRIVOX.mp3" -c 1 -r 22050 -b 16 "OUTPUT.wav"
```
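If you convert the LibriVox sources yourself, you can sanity-check the result with Python's standard-library `wave` module. The snippet below (an illustration, not part of this repository) checks a file against the format above, demonstrated on a synthetic one-second file of silence:

```python
import os
import tempfile
import wave


def check_format(path: str) -> bool:
    """Return True if the WAV file is mono, 22,050 Hz, 16-bit PCM."""
    with wave.open(path, "rb") as wav:
        return (wav.getnchannels() == 1
                and wav.getframerate() == 22050
                and wav.getsampwidth() == 2)


# Demo: write one second of silence in the target format, then verify it.
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
    with wave.open(f, "wb") as wav:
        wav.setnchannels(1)
        wav.setframerate(22050)
        wav.setsampwidth(2)
        wav.writeframes(b"\x00\x00" * 22050)

ok = check_format(f.name)
print(ok)  # True
os.unlink(f.name)
```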
### Citation
If you use this dataset, please cite the LibriBrain paper:
```bibtex
@article{ozdogan2025libribrain,
  author  = {Özdogan, Miran and Landau, Gilad and Elvers, Gereon and Jayalath, Dulhan and Somaiya, Pratik and Mantegna, Francesco and Woolrich, Mark and Parker Jones, Oiwi},
  title   = {{LibriBrain}: Over 50 Hours of Within-Subject {MEG} to Improve Speech Decoding Methods at Scale},
  year    = {2025},
  journal = {NeurIPS, Datasets \& Benchmarks Track},
  url     = {https://arxiv.org/abs/2506.02098},
}
```
If you use this data with the MEG-XL framework, please also cite:
```bibtex
@article{jayalath2026megxl,
  title   = {{MEG-XL}: Data-Efficient Brain-to-Text via Long-Context Pre-Training},
  author  = {Jayalath, Dulhan and Jones, Oiwi Parker},
  journal = {arXiv preprint arXiv:2602.02494},
  year    = {2026}
}
```