---
license: apache-2.0
datasets:
- Sleep-EDF
- TUAB
- MOABB
language:
- en
tags:
- eeg
- brain
- timeseries
- self-supervised
- transformer
- biomedical
- neuroscience
---

# BENDR: BErt-inspired Neural Data Representations

Pretrained BENDR model for EEG classification tasks. This is the official Braindecode implementation of BENDR from Kostas et al. (2021).

## Model Details

- **Model Type**: Transformer-based EEG encoder
- **Pretraining**: Self-supervised learning on masked sequence reconstruction
- **Architecture**:
  - Convolutional Encoder: 6 blocks with 512 hidden units
  - Transformer Contextualizer: 8 layers, 8 attention heads
  - Total Parameters: ~157M
- **Input**: Raw EEG signals (20 channels, variable length)
- **Output**: Contextualized representations or class predictions

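The convolutional encoder downsamples the raw signal in time before the transformer contextualizer sees it. As a rough sketch, the overall downsampling factor is just the product of the per-block strides; the stride schedule below is an assumption based on the wav2vec-style design described in the BENDR paper, not a value read from this checkpoint:

```python
# Hypothetical per-block strides for the 6 convolutional blocks
# (assumed, following the wav2vec 2.0-style schedule of the BENDR paper)
strides = [3, 2, 2, 2, 2, 2]

factor = 1
for s in strides:
    factor *= s  # each block divides the temporal resolution by its stride

print(factor)  # overall temporal downsampling of the encoder
```

Under this assumption, a 600-sample input window yields on the order of `600 // factor` tokens for the transformer.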
## Usage

```python
import torch
from braindecode.models import BENDR
from huggingface_hub import hf_hub_download

# Instantiate the model architecture
model = BENDR(n_chans=20, n_outputs=2)

# Download and load pretrained weights from Hugging Face
checkpoint_path = hf_hub_download(
    repo_id="braindecode/bendr-pretrained-v1",
    filename="pytorch_model.bin",
)
checkpoint = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"], strict=False)

# Use for inference
model.eval()
with torch.no_grad():
    eeg_data = torch.randn(1, 20, 600)  # (batch, channels, time)
    predictions = model(eeg_data)
```

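The `predictions` tensor above holds raw logits, not probabilities. To report class probabilities, apply a softmax over the output dimension (in torch, `predictions.softmax(dim=-1)`). The operation itself is simple enough to sketch with the standard library alone:

```python
import math

def softmax(logits):
    """Map raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example logits for a 2-class EEG task (illustrative values)
probs = softmax([2.0, 0.5])
```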
## Fine-tuning

```python
from torch.optim import Adam

# Freeze the convolutional encoder for transfer learning
for param in model.encoder.parameters():
    param.requires_grad = False

# Fine-tune on the downstream task
optimizer = Adam(model.parameters(), lr=1e-4)
```

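With the encoder frozen, the optimizer above is still handed every parameter; a common refinement is to pass only the trainable subset, e.g. `Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)`. The filtering logic can be illustrated without torch using a minimal stand-in class (the class and parameter names below are hypothetical, not a braindecode API):

```python
class Param:
    """Minimal stand-in for a framework parameter with a requires_grad flag."""
    def __init__(self, name, requires_grad=True):
        self.name = name
        self.requires_grad = requires_grad

# Encoder parameters frozen, classifier head left trainable
params = [
    Param("encoder.weight", requires_grad=False),
    Param("encoder.bias", requires_grad=False),
    Param("head.weight"),
    Param("head.bias"),
]

# Keep only parameters that will actually receive gradient updates
trainable = [p for p in params if p.requires_grad]
print([p.name for p in trainable])  # only the head parameters remain
```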
## Paper

[BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data](https://doi.org/10.3389/fnhum.2021.653659)

Kostas, D., Aroca-Ouellette, S., & Rudzicz, F. (2021). Frontiers in Human Neuroscience, 15, 653659.

## Citation

```bibtex
@article{kostas2021bendr,
  title={BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data},
  author={Kostas, Demetres and Aroca-Ouellette, St{\'e}phane and Rudzicz, Frank},
  journal={Frontiers in Human Neuroscience},
  volume={15},
  pages={653659},
  year={2021},
  publisher={Frontiers}
}
```

## Implementation Notes

- The start token is extracted at index 0, following the BERT [CLS] convention
- Uses T-Fixup weight initialization for training stability
- Includes LayerDrop for regularization
- All architectural details from the original paper are maintained

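LayerDrop regularizes the transformer by randomly skipping whole layers during training while running the full stack at inference. A toy sketch of the idea (not the braindecode implementation; layer count and drop rate are illustrative):

```python
import random

def forward_with_layerdrop(x, layers, p_drop=0.1, training=True):
    """Apply each layer in turn, randomly skipping layers while training."""
    for layer in layers:
        if training and random.random() < p_drop:
            continue  # LayerDrop: skip this layer for this forward pass
        x = layer(x)
    return x

# Toy "layers": each one just increments its input
layers = [lambda x: x + 1 for _ in range(8)]

eval_out = forward_with_layerdrop(0, layers, training=False)  # all 8 layers run
train_out = forward_with_layerdrop(0, layers, p_drop=0.5)     # some layers skipped
```

At evaluation time the output always reflects all 8 layers; during training it varies from pass to pass, which is the regularizing effect.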
## License

Apache 2.0

## Authors

- Braindecode Team
- Original paper: Kostas et al. (2021)