---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
  - eeg
  - biosignal
  - pytorch
  - neuroscience
  - braindecode
  - foundation-model
  - transformer
---

# EEGPT

EEGPT is the Pretrained Transformer for Universal and Reliable Representation of EEG Signals introduced by Wang et al. (2024) [1].

> **Architecture-only repository.** This card documents the
> `braindecode.models.EEGPT` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import EEGPT

model = EEGPT(
    n_chans=22,
    sfreq=200,
    input_window_seconds=4.0,
    n_outputs=2,
)
```

The signal-shape arguments above (`n_chans`, `sfreq`, `input_window_seconds`) are illustrative defaults; adjust them to match your recording.
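
For context, here is a minimal forward-pass sketch (illustrative, not part of the official docs): braindecode models take input of shape `(batch, n_chans, n_times)`, so with the arguments above `n_times = sfreq * input_window_seconds = 800`. The batch size of 8 is arbitrary.

```python
import torch

from braindecode.models import EEGPT

# Same illustrative configuration as in the quick start.
model = EEGPT(
    n_chans=22,
    sfreq=200,
    input_window_seconds=4.0,
    n_outputs=2,
)

# Braindecode models expect (batch, n_chans, n_times);
# here n_times = 200 Hz * 4.0 s = 800 samples.
x = torch.randn(8, 22, 800)

with torch.no_grad():
    logits = model(x)

print(logits.shape)  # expected: torch.Size([8, 2]) from the classification head
```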

## Documentation
- Full API reference: <https://braindecode.org/stable/generated/braindecode.models.EEGPT.html>
- Interactive browser (live instantiation, parameter counts):
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/eegpt.py#L21>


## Architecture

![EEGPT architecture](https://github.com/BINE022/EEGPT/raw/main/figures/EEGPT.jpg)


## Parameters

| Parameter | Type | Description |
|---|---|---|
| `return_encoder_output` | bool, default=False | Whether to return the encoder output or the classifier output. |
| `patch_size` | int, default=64 | Size of the patches for the transformer. |
| `patch_stride` | int, default=32 | Stride of the patches for the transformer. |
| `embed_num` | int, default=4 | Number of summary tokens used for the global representation. |
| `embed_dim` | int, default=512 | Dimension of the embeddings. |
| `depth` | int, default=8 | Number of transformer layers. |
| `num_heads` | int, default=8 | Number of attention heads. |
| `mlp_ratio` | float, default=4.0 | Ratio of the MLP hidden dimension to the embedding dimension. |
| `drop_prob` | float, default=0.0 | Dropout probability. |
| `attn_drop_rate` | float, default=0.0 | Attention dropout rate. |
| `drop_path_rate` | float, default=0.0 | Drop path rate. |
| `init_std` | float, default=0.02 | Standard deviation for weight initialization. |
| `qkv_bias` | bool, default=True | Whether to use bias in the QKV projection. |
| `norm_layer` | torch.nn.Module, default=None | Normalization layer. If None, defaults to `nn.LayerNorm` with epsilon `layer_norm_eps`. |
| `layer_norm_eps` | float, default=1e-6 | Epsilon value for the normalization layer. |
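
Because this card is tagged `feature-extraction`, the `return_encoder_output` flag above is the one most users will reach for. A minimal sketch, assuming the flag is passed at construction time as listed in the table; the shape of the returned features depends on `embed_num`, `embed_dim`, and the patching parameters, so it is printed rather than hard-coded.

```python
import torch

from braindecode.models import EEGPT

# return_encoder_output=True switches the forward pass from classifier
# logits to encoder features (see the parameter table above).
encoder = EEGPT(
    n_chans=22,
    sfreq=200,
    input_window_seconds=4.0,
    n_outputs=2,
    return_encoder_output=True,
)

x = torch.randn(8, 22, 800)  # (batch, n_chans, n_times)
with torch.no_grad():
    features = encoder(x)

# Feature shape depends on embed_num, embed_dim, and patching.
print(features.shape)
```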


## References

1. Wang, G., Liu, W., He, Y., Xu, C., Ma, L., & Li, H. (2024). EEGPT: Pretrained transformer for universal and reliable representation of EEG signals. Advances in Neural Information Processing Systems, 37, 39249-39280. Online: https://proceedings.neurips.cc/paper_files/paper/2024/file/4540d267eeec4e5dbd9dae9448f0b739-Paper-Conference.pdf


## Citation

Cite the original architecture paper (see *References* above) and braindecode:

```bibtex
@misc{aristimunha2025braindecode,
  title     = {Braindecode: a deep learning library for raw electrophysiological data},
  author    = {Aristimunha, Bruno and others},
  publisher = {Zenodo},
  year      = {2025},
  doi       = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
If you fine-tune from a pretrained checkpoint, the resulting weights
inherit the license of that checkpoint and its training corpus.