stefanosgikas committed (verified) · Commit 3eb4a78 · 1 Parent(s): a31028b

Update README.md
Files changed (1): README.md (+214, −3)

The previous README.md contained only the `license: mit` front matter; the updated model card follows.
 
---
license: mit
language:
- en
library_name: pytorch
pipeline_tag: feature-extraction
tags:
- biosignals
- ecg
- emg
- eeg
- embedding
- mixture-of-experts
- timm
- pytorch
- lightweight
thumbnail: docs/overview.png
pretty_name: Tiny-BioMoE
model-index:
- name: Tiny-BioMoE
  results: []
---

# Tiny‑BioMoE

A Lightweight Embedding Model for Biosignal Analysis

> **Tiny‑BioMoE** · **7.34 M parameters** · **3.04 GFLOPs** · **192‑D embeddings** · **PyTorch ≥ 2.0**

---

## Paper

[**Tiny-BioMoE: a Lightweight Embedding Model for Biosignal Analysis**](https://arxiv.org/abs/2507.21875)

---

## Highlights

| Feature | Description |
| ---------------- | ------------------------------------------------------------------------------ |
| **Compact** | <8 M parameters – runs comfortably on a laptop GPU / modern CPU |
| **Cross‑domain** | Pre‑trained on **4.4 M** ECG, EMG & EEG representations via multi‑task learning |

<br/>

<p align="center">
  <img src="docs/overview.png" alt="Tiny‑BioMoE overview" width="48%"/>
  &nbsp;
  <img src="docs/encoders.png" alt="Encoder‑1 and Encoder‑2 details" width="48%"/>
</p>

<p align="center"><b>Figure&nbsp;1.</b> Overall Tiny‑BioMoE architecture (left) and the two expert encoders (right).</p>

---

## Table of Contents

1. [Pre‑trained weights](#pre-trained-weights)
2. [Quick start](#quick-start)
   * [Extract embeddings](#extract-embeddings)
3. [Fine‑tuning](#fine-tuning)
4. [Citation](#citation)
5. [Licence & acknowledgements](#licence--acknowledgements)

---

## Pre‑trained weights

Get the weights from the **[GitHub Releases page](https://github.com/GkikasStefanos/Tiny-BioMoE/releases)**.

| File | Size |
| ----------------- | --------- |
| `Tiny-BioMoE.pth` | **89 MB** |

```bash
# download the latest checkpoint
url=https://github.com/GkikasStefanos/Tiny-BioMoE/releases/latest/download/Tiny-BioMoE.pth
curl -L -o Tiny-BioMoE.pth "$url"
```

> Verify the file if you wish:
>
> ```bash
> sha256sum Tiny-BioMoE.pth
> ```
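
If `sha256sum` is not available (e.g. on Windows), the same check can be done with Python's standard library. This is a generic sketch, not part of the released code:

```python
import hashlib

# Stream the file in 1 MiB chunks so the whole checkpoint never sits in memory.
sha256 = hashlib.sha256()
with open("Tiny-BioMoE.pth", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
print(sha256.hexdigest())
```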

The checkpoint contains **only one key**:

```text
model_state_dict # MoE backbone weights (SpectFormer‑T‑w + EfficientViT‑w)
```
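
To double‑check the file before wiring it into a model, you can load it on CPU and inspect its contents. A small sketch (only `model_state_dict` is expected, per the note above):

```python
import torch

# Load on CPU purely to inspect the structure of the checkpoint.
ckpt = torch.load("Tiny-BioMoE.pth", map_location="cpu")
print(list(ckpt.keys()))          # expected: ['model_state_dict']

state = ckpt["model_state_dict"]
print(len(state), "tensors")
print(sum(t.numel() for t in state.values()))   # roughly 7.34 M parameters (plus buffers)
```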

---

## Quick start

> Assumes **PyTorch ≥ 2.0** and **timm ≥ 0.9** are already installed.

### Extract embeddings

```python
import torch, torch.nn as nn
from PIL import Image
from torchvision import transforms
from architecture import spectformer, efficientvit
from timm.models import create_model

# ---------------------------------------------------------------
# Setup ----------------------------------------------------------
# ---------------------------------------------------------------
emb_size, num_experts = 96, 2
final_emb_size = emb_size * num_experts  # 192‑D

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"running on {device}")

# ---------------------------------------------------------------
# Backbone -------------------------------------------------------
# ---------------------------------------------------------------
class MoE(nn.Module):
    def __init__(self, enc1, enc2):
        super().__init__()
        self.enc1, self.enc2 = enc1, enc2
        self.ln_img = nn.LayerNorm((3, 224, 224))
        self.ln_e = nn.LayerNorm(emb_size)
        self.ln_out = nn.LayerNorm(final_emb_size)
        self.fcn = nn.Sequential(nn.ELU(), nn.Linear(emb_size, emb_size),
                                 nn.Hardtanh(0, 1))

    @torch.no_grad()
    def forward(self, x):
        x = self.ln_img(x)
        z1, *_ = self.enc1(x)                 # expert 1: SpectFormer‑T‑w
        z2 = self.enc2(x)                     # expert 2: EfficientViT‑w
        z1 = self.ln_e(z1) * self.fcn(z1)     # per‑expert gating
        z2 = self.ln_e(z2) * self.fcn(z2)
        return self.ln_out(torch.cat((z1, z2), 1))

enc1 = create_model('spectformer_t_w'); enc1.head = nn.Identity()
enc2 = create_model('EfficientViT_w');  enc2.head = nn.Identity()
backbone = MoE(enc1, enc2).to(device).eval()
backbone.load_state_dict(torch.load('Tiny-BioMoE.pth', map_location=device)['model_state_dict'])

# ---------------------------------------------------------------
# One image → embedding -----------------------------------------
# ---------------------------------------------------------------
tr = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
img = Image.open('img.png').convert('RGB')
img = tr(img).unsqueeze(0).to(device)         # → tensor [1, 3, 224, 224]
feat = backbone(img).squeeze().cpu().numpy()  # 192‑D vector
print(feat[:8])
```
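
For more than a handful of inputs, batching the transform and the forward pass is usually faster. The sketch below is illustrative only: it assumes a list of image paths `paths` and reuses `backbone`, `tr`, and `device` from the snippet above.

```python
import numpy as np

def extract_embeddings(paths, batch_size=32):
    """Return an (N, 192) array with one embedding per image path."""
    feats = []
    for i in range(0, len(paths), batch_size):
        batch = torch.stack([tr(Image.open(p).convert('RGB'))
                             for p in paths[i:i + batch_size]]).to(device)
        feats.append(backbone(batch).cpu().numpy())   # forward is already no_grad
    return np.concatenate(feats, axis=0)

emb = extract_embeddings(['img.png'])   # shape (1, 192)
```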

---

## Fine‑tuning

Add your own classification/regression head and (optionally) un‑freeze the backbone:

```python
import torch, torch.nn as nn
from architecture import spectformer, efficientvit
from timm.models import create_model

# ---------------------------------------------------------------
# Setup ----------------------------------------------------------
# ---------------------------------------------------------------
emb_size, num_experts = 96, 2
final_emb_size = emb_size * num_experts  # 192‑D
num_classes = 2                          # set to the number of classes in your task

class MoE(nn.Module):
    # identical to the class in Quick‑start
    # (remove the @torch.no_grad() decorator on forward if you un‑freeze the backbone)
    ...

enc1 = create_model('spectformer_t_w'); enc1.head = nn.Identity()
enc2 = create_model('EfficientViT_w');  enc2.head = nn.Identity()
backbone = MoE(enc1, enc2).to('cuda')
backbone.load_state_dict(torch.load('Tiny-BioMoE.pth', map_location='cpu')['model_state_dict'])

# freeze if you only need fixed embeddings
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Sequential(nn.ELU(), nn.Linear(final_emb_size, num_classes)).to('cuda')
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```
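
A minimal training loop for the new head, with the backbone kept frozen, might look like the sketch below. `train_loader` is a placeholder for your own `DataLoader` yielding `(images, labels)` batches of shape `[B, 3, 224, 224]` and `[B]`; it is not part of this repository.

```python
criterion = nn.CrossEntropyLoss()

backbone.eval()   # frozen backbone stays in eval mode
head.train()

for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to('cuda'), labels.to('cuda')
        with torch.no_grad():            # embeddings only; no gradients through the backbone
            feats = backbone(images)     # [B, 192]
        logits = head(feats)             # [B, num_classes]
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```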

---

## Citation

```bibtex
@misc{tiny_biomoe,
      title={Tiny-BioMoE: a Lightweight Embedding Model for Biosignal Analysis},
      author={Stefanos Gkikas and Ioannis Kyprakis and Manolis Tsiknakis},
      year={2025},
      eprint={2507.21875},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

---

## Licence & acknowledgements

* Code & weights: **MIT Licence** – see [`LICENSE`](./LICENSE)

---

### Contact

Email **Stefanos Gkikas:** gkikas[at]ics[dot]forth[dot]gr or gikasstefanos[at]gmail[dot]com