PierreGtch committed · verified
Commit 7a51384 · 1 parent: 71c8b43

Update README.md

Files changed (1): README.md (+16 -125)
README.md CHANGED
@@ -2,140 +2,31 @@
  license: mit
  library_name: braindecode
  tags:
- - eeg
- - foundation-model
- - self-supervised
- - signal-jepa
- pipeline_tag: feature-extraction
  ---

- ![sjepa](https://cdn-uploads.huggingface.co/production/uploads/646e0135174cc96d509582a6/DS-cXrFyxZ78hK48ft0iU.png)

- # Signal-JEPA

- Self-supervised pre-trained weights for the Signal-JEPA foundation model from
- [Guetschel et al. (2024)](https://arxiv.org/abs/2403.11772), packaged for use
- with [braindecode](https://braindecode.org/).

- The model was pre-trained on the Lee2019 dataset (62 EEG channels in the
- 10-10 layout, sampled at 128 Hz). The repo ships the weights together with a
- `config.json` so they can be loaded in one line with
- `YourModelClass.from_pretrained(repo_id, ...)`.

- ## Available checkpoints
-
- Two variants are published:
-
- | repo ID | channel embedding included | when to use |
- | --- | --- | --- |
- | [`braindecode/signal-jepa`](https://huggingface.co/braindecode/signal-jepa) | ✓ 62-row `_ChannelEmbedding` aligned with the pre-training layout | your recording channels are a **subset** (by name, case-insensitive) of the 62 pre-training channels — you want to reuse the learned spatial embeddings |
- | [`braindecode/signal-jepa_without-chans`](https://huggingface.co/braindecode/signal-jepa_without-chans) | ✗ only the SSL backbone (feature encoder + transformer) | your channels are **not** a subset of the pre-training set, or you prefer to train channel embeddings from scratch |
-
- If you are unsure, start with `braindecode/signal-jepa_without-chans`: it
- always works, regardless of your electrode layout.
-
- ## Quick start
-
- ### Base model (pre-training architecture)
-
- The base model outputs contextual features, not class predictions. Use it
- for downstream feature extraction or further SSL.

  ```python
  from braindecode.models import SignalJEPA

- # With the pre-trained channel embeddings (recording channels ⊂ pre-train set):
- model = SignalJEPA.from_pretrained("braindecode/signal-jepa")
-
- # Or: with your own channels, kept aligned to the pre-training embedding table
- # (`raw` is an mne.io.Raw recording providing the channel metadata):
- model = SignalJEPA.from_pretrained(
-     "braindecode/signal-jepa",
-     chs_info=raw.info["chs"],  # subset of the 62 pre-training channels
-     channel_embedding="pretrain_aligned",
- )
-
- # Or: without pre-trained channel embeddings (any electrode layout):
- model = SignalJEPA.from_pretrained(
-     "braindecode/signal-jepa_without-chans",
-     chs_info=raw.info["chs"],
-     strict=False,  # the channel-embedding weight is intentionally missing
- )
  ```

- ### Downstream architectures
-
- Three classification architectures are introduced in the paper:
-
- - **a) Contextual** — uses the full transformer encoder
- - **b) Post-local** — discards the transformer; spatial convolution after local features
- - **c) Pre-local** — discards the transformer; spatial convolution before local features
-
- All three add a freshly-initialized classification head on top of the SSL
- backbone. The head is **not** part of the checkpoint and will be trained from
- scratch during fine-tuning; pass `strict=False` so `from_pretrained` does not
- complain about those missing keys.
-
- ```python
- from braindecode.models import (
-     SignalJEPA_Contextual,
-     SignalJEPA_PreLocal,
-     SignalJEPA_PostLocal,
- )
-
- # a) Contextual — keeps the transformer
- model = SignalJEPA_Contextual.from_pretrained(
-     "braindecode/signal-jepa",  # or "signal-jepa_without-chans"
-     n_times=256,  # e.g. 2 s at 128 Hz
-     n_outputs=4,
-     strict=False,  # ignore un-trained classification head
- )
-
- # b) Post-local — transformer discarded
- model = SignalJEPA_PostLocal.from_pretrained(
-     "braindecode/signal-jepa_without-chans",
-     n_chans=19,
-     n_times=256,
-     n_outputs=4,
-     strict=False,
- )
-
- # c) Pre-local — transformer discarded
- model = SignalJEPA_PreLocal.from_pretrained(
-     "braindecode/signal-jepa_without-chans",
-     n_chans=19,
-     n_times=256,
-     n_outputs=4,
-     strict=False,
- )
- ```
-
- See the braindecode tutorial
- [Fine-tuning a Foundation Model (Signal-JEPA)](https://braindecode.org/stable/auto_examples/advanced_training/plot_finetune_foundation_model.html)
- for a complete example including layer freezing and training with
- braindecode's skorch-based `EEGClassifier`.
-
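For a rough idea of what the freezing step in that tutorial looks like, here is a minimal sketch; the parameter-name prefix is an assumption, so check `model.named_parameters()` for the real names in your braindecode version:

```python
# Minimal layer-freezing sketch (illustrative, not the tutorial's exact code).
# "final_layer" is an assumed name for the fresh classification head; list
# model.named_parameters() to confirm the actual prefixes.
for name, param in model.named_parameters():
    if not name.startswith("final_layer"):
        param.requires_grad = False  # freeze the pre-trained backbone
```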
- ## Channel embedding modes
-
- `SignalJEPA` and `SignalJEPA_Contextual` accept a `channel_embedding` kwarg:
-
- - `"scratch"` (default): the `_ChannelEmbedding` table has one row per user
-   channel, initialized from `chs_info`. Compatible with the
-   `without-chans` checkpoint.
- - `"pretrain_aligned"`: the table has 62 rows in the pre-training order,
-   `forward` indexes into the subset matching your `chs_info` (matched by
-   channel name, case-insensitive). Compatible with the full checkpoint.
-
- `from_pretrained` picks the right mode automatically based on the checkpoint's
- `config.json`; override with the `channel_embedding=` kwarg if needed.
-
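To see which mode a checkpoint was saved with, you can inspect its `config.json` directly. A small sketch using `huggingface_hub`; the exact keys stored in the file are not documented here, so treat the output as exploratory:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the checkpoint's config.json and print it to see which
# channel_embedding mode (and other saved kwargs) it records.
config_path = hf_hub_download("braindecode/signal-jepa", "config.json")
with open(config_path) as f:
    print(json.load(f))
```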
- ## Citation
-
- ```bibtex
- @article{guetschel2024sjepa,
-   title   = {S-JEPA: towards seamless cross-dataset transfer
-              through dynamic spatial attention},
-   author  = {Guetschel, Pierre and Moreau, Thomas and Tangermann, Michael},
-   journal = {arXiv preprint arXiv:2403.11772},
-   year    = {2024},
- }
- ```

  license: mit
  library_name: braindecode
  tags:
+ - deprecated
  ---

+ # ⚠️ This repository is deprecated
+
+ The weights in this repository are kept online for archival purposes only. They
+ are **not** compatible with current releases of
+ [braindecode](https://braindecode.org/) and will not receive updates.
+
+ ## Please use the new repositories
+
+ The Signal-JEPA weights have been republished:
+
+ - [`braindecode/signal-jepa`](https://huggingface.co/braindecode/signal-jepa)
+   — full checkpoint including the 62-channel embedding table (for users whose
+   recording channels are a subset of the Lee2019 pre-training layout).
+ - [`braindecode/signal-jepa_without-chans`](https://huggingface.co/braindecode/signal-jepa_without-chans)
+   — SSL backbone only (channel embeddings trained from scratch on your data).

  ```python
  from braindecode.models import SignalJEPA

+ model = SignalJEPA.from_pretrained("braindecode/signal-jepa_without-chans")
  ```

+ See the docs for [`braindecode.models.SignalJEPA`](https://braindecode.org/stable/generated/braindecode.models.SignalJEPA.html)
+ and the fine-tuning tutorial
+ [Fine-tuning a Foundation Model (Signal-JEPA)](https://braindecode.org/stable/auto_examples/advanced_training/plot_finetune_foundation_model.html).
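As a quick sanity check after loading, a forward pass on a dummy batch should work. This is a sketch, assuming braindecode's usual `(batch, n_chans, n_times)` input convention and the 62-channel, 128 Hz pre-training setup described above; if your build requires channel metadata, pass `chs_info` at load time as in the old README:

```python
import torch

# Dummy batch: 8 windows of 62 channels, 2 s at 128 Hz (shape assumed
# from braindecode's (batch, n_chans, n_times) convention).
x = torch.randn(8, 62, 2 * 128)
features = model(x)
print(features.shape)
```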