# LibriBrain MEG Preprocessed Dataset

Preprocessed magnetoencephalography (MEG) recordings with phoneme labels from the LibriBrain dataset, optimized for fast loading during machine learning model training.

This dataset was created for the [LibriBrain 2025 Competition](https://neural-processing-lab.github.io/2025-libribrain-competition/) (now concluded).
## Dataset Overview

### MEG Recording Specifications

- **Channels**: 306 total (102 magnetometers + 204 gradiometers)
- **Sampling Rate**: 250 Hz
- **Duration**: ~52 hours of recordings
- **Subject**: single English speaker listening to Sherlock Holmes audiobooks
- **Phoneme Instances**: ~1.5 million
### Phoneme Inventory

39 ARPAbet phonemes plus a special out-of-vocabulary class, each carrying a position marker:

- **Vowels** (15): aa, ae, ah, ao, aw, ay, eh, er, ey, ih, iy, ow, oy, uh, uw
- **Consonants** (24): b, ch, d, dh, f, g, hh, jh, k, l, m, n, ng, p, r, s, sh, t, th, v, w, y, z, zh
- **Special**: oov (out-of-vocabulary)

Position markers: B (beginning), I (inside), E (end), S (singleton)
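The inventory above can be enumerated programmatically. A minimal sketch, assuming each class label combines a phoneme with a position marker (the exact string format, e.g. `aa_B`, is an assumption, not the dataset's documented schema):

```python
# Hypothetical sketch: enumerating the label inventory from the 39 ARPAbet
# phonemes, the special "oov" class, and the B/I/E/S position markers.
VOWELS = ["aa", "ae", "ah", "ao", "aw", "ay", "eh", "er", "ey",
          "ih", "iy", "ow", "oy", "uh", "uw"]
CONSONANTS = ["b", "ch", "d", "dh", "f", "g", "hh", "jh", "k", "l", "m", "n",
              "ng", "p", "r", "s", "sh", "t", "th", "v", "w", "y", "z", "zh"]
PHONEMES = VOWELS + CONSONANTS + ["oov"]   # 15 + 24 + 1 = 40 classes
POSITIONS = ["B", "I", "E", "S"]

# One label per (phoneme, position) pair; "_" as separator is an assumption
labels = [f"{p}_{pos}" for p in PHONEMES for pos in POSITIONS]
print(len(PHONEMES), len(labels))  # 40 160
```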
### Signal Processing

All MEG data has been preprocessed through the following pipeline:

1. Bad channel removal
2. Signal Space Separation (SSS) for noise reduction
3. Notch filtering for powerline noise removal
4. Bandpass filtering (0.1-125 Hz)
5. Downsampling to 250 Hz
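Steps 3-5 of the pipeline can be sketched with SciPy. This is illustrative only, not the actual preprocessing code: the original sampling rate (1000 Hz), the 50 Hz notch frequency, and the filter orders are all assumptions.

```python
# Illustrative sketch of notch + bandpass + downsampling on synthetic data.
import numpy as np
from scipy import signal

fs = 1000.0                                      # assumed original rate (Hz)
rng = np.random.default_rng(0)
meg = rng.standard_normal((306, int(fs * 2)))    # 306 channels, 2 s of data

# 3. Notch filter for powerline noise (50 Hz assumed; 60 Hz in North America)
b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
meg = signal.filtfilt(b_notch, a_notch, meg, axis=-1)

# 4. Bandpass filter 0.1-125 Hz (4th-order Butterworth, zero-phase)
sos = signal.butter(4, [0.1, 125.0], btype="bandpass", fs=fs, output="sos")
meg = signal.sosfiltfilt(sos, meg, axis=-1)

# 5. Downsample 1000 Hz -> 250 Hz (factor of 4)
meg_250 = signal.resample_poly(meg, up=1, down=int(fs // 250), axis=-1)
print(meg_250.shape)  # (306, 500)
```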
## Preprocessing and Grouping

This dataset contains pre-grouped, averaged MEG samples for significantly faster data loading during training. Instead of grouping samples on the fly (which is computationally expensive), samples have been pre-grouped at several averaging levels.
### Available Grouping Configurations

- `grouped_5`: 5 samples averaged together
- `grouped_10`: 10 samples averaged together
- `grouped_15`: 15 samples averaged together
- `grouped_20`: 20 samples averaged together
- `grouped_25`: 25 samples averaged together
- `grouped_30`: 30 samples averaged together
- `grouped_35`: 35 samples averaged together (partial: train split only)
- `grouped_45`: 45 samples averaged together
- `grouped_50`: 50 samples averaged together
- `grouped_55`: 55 samples averaged together
- `grouped_60`: 60 samples averaged together
- `grouped_100`: 100 samples averaged together

Each configuration contains:

- `train_grouped.h5`: Training data
- `validation_grouped.h5`: Validation data
- `test_grouped.h5`: Test data
- `paths.yaml`: File path references
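The grouping itself is conceptually simple: k trials are averaged into one sample, which raises SNR by roughly √k at the cost of fewer training examples. A minimal NumPy sketch of the idea (the real logic lives in the `pnpl` library and may differ, e.g. in how same-label trials are matched before averaging):

```python
import numpy as np

def group_average(samples: np.ndarray, k: int) -> np.ndarray:
    """Average consecutive groups of k samples.

    samples: (n_samples, n_channels, n_times); trailing samples that do
    not fill a complete group of k are dropped.
    """
    n = (samples.shape[0] // k) * k
    grouped = samples[:n].reshape(-1, k, *samples.shape[1:])
    return grouped.mean(axis=1)

trials = np.random.default_rng(0).standard_normal((103, 306, 125))
averaged = group_average(trials, k=5)
print(averaged.shape)  # (20, 306, 125)
```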
### Why Use Grouped Data?

- **Faster Loading**: pre-computed grouping eliminates runtime averaging overhead
- **Memory Efficient**: higher grouping levels yield smaller files
- **Flexible**: choose a grouping level based on your accuracy vs. speed trade-off
- **Standardized**: consistent preprocessing across all configurations
## Installation

This dataset requires the modified `pnpl` library for loading:

```bash
pip install git+https://github.com/September-Labs/pnpl.git
```
## Usage

```python
from pnpl.datasets import GroupedDataset

# Load preprocessed data with 100-sample grouping
train_dataset = GroupedDataset(
    preprocessed_path="data/grouped_100/train_grouped.h5",
    load_to_memory=True  # Optional: load entire dataset into memory for faster access
)
val_dataset = GroupedDataset(
    preprocessed_path="data/grouped_100/validation_grouped.h5",
    load_to_memory=True
)

# Get a sample
sample = train_dataset[0]
meg_data = sample['meg']           # Shape: (306, time_points)
phoneme_label = sample['phoneme']  # Phoneme class index

# Use with a PyTorch DataLoader
from torch.utils.data import DataLoader

dataloader = DataLoader(
    train_dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)
```
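If you prefer to bypass `pnpl`, the `.h5` files can also be inspected directly with `h5py`. The dataset names below (`meg`, `phoneme`) mirror the keys of the sample dict returned by `GroupedDataset`, but they are assumptions about the on-disk layout; the snippet writes and re-reads a tiny stand-in file so it is self-contained:

```python
# Hypothetical HDF5 layout; "meg" and "phoneme" dataset names are assumptions.
import h5py
import numpy as np

# Create a small stand-in file in place of an actual grouped_*.h5 file
with h5py.File("example_grouped.h5", "w") as f:
    f.create_dataset("meg", data=np.zeros((8, 306, 125), dtype=np.float32))
    f.create_dataset("phoneme", data=np.arange(8, dtype=np.int64))

# Inspect contents and read one sample
with h5py.File("example_grouped.h5", "r") as f:
    f.visit(print)               # list all groups/datasets in the file
    meg = f["meg"][0]            # one averaged sample: (channels, times)
    label = int(f["phoneme"][0]) # its phoneme class index
print(meg.shape, label)
```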
## Data Structure

```
data/
├── grouped_5/
│   ├── train_grouped.h5
│   ├── validation_grouped.h5
│   ├── test_grouped.h5
│   └── paths.yaml
├── grouped_10/
│   ├── train_grouped.h5
│   ├── validation_grouped.h5
│   ├── test_grouped.h5
│   └── paths.yaml
├── ...
└── grouped_100/
    ├── train_grouped.h5
    ├── validation_grouped.h5
    ├── test_grouped.h5
    └── paths.yaml
```
## File Sizes

| Grouping | Train | Validation | Test | Total |
|----------|-------|------------|------|-------|
| grouped_5 | 45.6 GB | 425 MB | 456 MB | ~47 GB |
| grouped_10 | 22.8 GB | 213 MB | 228 MB | ~24 GB |
| grouped_20 | 11.4 GB | 106 MB | 114 MB | ~12 GB |
| grouped_50 | 4.6 GB | 37 MB | 42 MB | ~4.7 GB |
| grouped_100 | 2.3 GB | 19 MB | 21 MB | ~2.4 GB |
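The train sizes in the table are consistent with file size scaling as 1/k, since the same recording is averaged down into n/k samples. A quick arithmetic check:

```python
# Train file sizes from the table above (GB), keyed by grouping factor k
sizes_gb = {5: 45.6, 10: 22.8, 20: 11.4, 50: 4.6, 100: 2.3}

# Implied constant: "ungrouped" size = size(k) * k
base = sizes_gb[5] * 5  # 228 GB

# Each tabulated size matches base / k to one decimal place
for k, gb in sizes_gb.items():
    print(k, round(base / k, 1), gb)
```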
## Dataset Splits

- **Train**: 88 sessions (~51 hours)
- **Validation**: 1 session (~0.36 hours)
- **Test**: 1 session (~0.38 hours)
## Citation

If you use this dataset, please cite the LibriBrain competition:

```bibtex
@misc{libribrain2025,
  title={LibriBrain: A Dataset for Speech Decoding from Brain Signals},
  author={Neural Processing Lab},
  year={2025},
  url={https://neural-processing-lab.github.io/2025-libribrain-competition/}
}
```
## License

Please refer to the original LibriBrain dataset license terms.

## Acknowledgments

This preprocessed version was created to facilitate faster training for the LibriBrain 2025 Competition. The original dataset and competition were organized by the Neural Processing Lab.