---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
- sleep-staging
---

# SleepStagerChambon2018

Sleep staging architecture from Chambon et al. (2018).

> **Architecture-only repository.** This repo documents the
> `braindecode.models.SleepStagerChambon2018` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.

## Quick start

```bash
pip install braindecode
```

```python
from braindecode.models import SleepStagerChambon2018

model = SleepStagerChambon2018(
    n_chans=2,
    sfreq=100,
    input_window_seconds=30.0,
    n_outputs=5,
)
```

The signal-shape arguments above are example defaults — adjust them
to match your recording.
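The model consumes fixed-length windows of `n_times = sfreq * input_window_seconds` samples, shaped `(batch, n_chans, n_times)`. A minimal sketch of that arithmetic, using the example values above:

```python
# Example recording parameters from the snippet above -- adjust to your data.
sfreq = 100                  # sampling frequency in Hz
input_window_seconds = 30.0  # one 30-second sleep epoch per window
n_chans = 2

# Each window must contain sfreq * input_window_seconds samples, so
# input batches are tensors of shape (batch, n_chans, n_times).
n_times = int(sfreq * input_window_seconds)
print((1, n_chans, n_times))  # -> (1, 2, 3000)
```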

## Documentation

- Full API reference (parameters, references, architecture figure):
  <https://braindecode.org/stable/generated/braindecode.models.SleepStagerChambon2018.html>
- Interactive browser with live instantiation:
  <https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/sleep_stager_chambon_2018.py#L13>

## Architecture description

The block below is the rendered class docstring (parameters,
references, architecture figure where available).

<div class='bd-doc'><main>
<p>Sleep staging architecture from Chambon et al. (2018) [Chambon2018]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span>

.. figure:: https://braindecode.org/dev/_static/model/SleepStagerChambon2018.jpg
   :align: center
   :alt: SleepStagerChambon2018 Architecture

   Convolutional neural network for sleep staging described in [Chambon2018]_.

Parameters
----------
n_conv_chs : int
    Number of convolutional channels. Set to 8 in [Chambon2018]_.
time_conv_size_s : float
    Size of filters in temporal convolution layers, in seconds. Set to 0.5
    in [Chambon2018]_ (64 samples at sfreq=128).
max_pool_size_s : float
    Max pooling size, in seconds. Set to 0.125 in [Chambon2018]_ (16
    samples at sfreq=128).
pad_size_s : float
    Padding size, in seconds. Set to 0.25 in [Chambon2018]_ (half the
    temporal convolution kernel size).
drop_prob : float
    Dropout rate before the output dense layer.
apply_batch_norm : bool
    If True, apply batch normalization after both temporal convolutional
    layers.
return_feats : bool
    If True, return the features, i.e. the output of the feature extractor
    (before the final linear layer). If False, pass the features through
    the final linear layer.
n_channels : int
    Alias for `n_chans`.
input_size_s : float
    Alias for `input_window_seconds`.
n_classes : int
    Alias for `n_outputs`.
activation : nn.Module, default=nn.ReLU
    Activation function class to apply. Should be a PyTorch activation
    module class like ``nn.ReLU`` or ``nn.ELU``. Default is ``nn.ReLU``.

References
----------
.. [Chambon2018] Chambon, S., Galtier, M. N., Arnal, P. J., Wainrib, G., &
   Gramfort, A. (2018). A deep learning architecture for temporal sleep
   stage classification using multivariate and multimodal time series.
   IEEE Transactions on Neural Systems and Rehabilitation Engineering,
   26(4), 758-769.

.. rubric:: Hugging Face Hub integration

When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

   pip install braindecode[hub]

**Pushing a model to the Hub:**

.. code:: python

   from braindecode.models import SleepStagerChambon2018

   # Train your model
   model = SleepStagerChambon2018(n_chans=22, n_outputs=4, n_times=1000)
   # ... training code ...

   # Push to the Hub
   model.push_to_hub(
       repo_id="username/my-sleepstagerchambon2018-model",
       commit_message="Initial model upload",
   )

**Loading a model from the Hub:**

.. code:: python

   from braindecode.models import SleepStagerChambon2018

   # Load the pretrained model
   model = SleepStagerChambon2018.from_pretrained("username/my-sleepstagerchambon2018-model")

   # Load with a different number of outputs (the head is rebuilt automatically)
   model = SleepStagerChambon2018.from_pretrained("username/my-sleepstagerchambon2018-model", n_outputs=4)

**Extracting features and replacing the head:**

.. code:: python

   import torch

   x = torch.randn(1, model.n_chans, model.n_times)
   # Extract encoder features (consistent dict across all models)
   out = model(x, return_features=True)
   features = out["features"]

   # Replace the classification head
   model.reset_head(n_outputs=10)

**Saving and restoring the full configuration:**

.. code:: python

   import json

   config = model.get_config()  # all __init__ params
   with open("config.json", "w") as f:
       json.dump(config, f)

   model2 = SleepStagerChambon2018.from_config(config)  # reconstruct (no weights)

All model parameters (both EEG-specific and model-specific, such as
dropout rates, activation functions, and number of filters) are
automatically saved to the Hub and restored when loading.

See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
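The seconds-valued hyperparameters in the table above (`time_conv_size_s`, `max_pool_size_s`, `pad_size_s`) translate to sample counts via the sampling frequency. A quick sketch of that conversion at the 128 Hz rate used in the paper:

```python
sfreq = 128  # sampling frequency used in Chambon et al. (2018)

# Seconds-to-samples conversion for the docstring defaults.
time_conv_size = int(0.5 * sfreq)    # temporal conv kernel: 64 samples
max_pool_size = int(0.125 * sfreq)   # max-pooling window: 16 samples
pad_size = int(0.25 * sfreq)         # padding: 32 samples (half the kernel)

print(time_conv_size, max_pool_size, pad_size)  # -> 64 16 32
```

At a different recording rate the same seconds-valued arguments scale automatically, which is why the constructor takes durations rather than raw sample counts.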

## Citation

Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:

```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```

## License

BSD-3-Clause for the model code (matching braindecode).
Pretraining-derived weights, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.