Update README.md
README.md
```diff
@@ -208,7 +208,7 @@ Unlike EEG FMs that mix channels early, TinyMyo uses **per-channel patching**:
 * Patch length: **20 samples**
 * Patch stride: **20 samples**
 * Tokens/channel: **50**
-* Total seq length: **800 tokens** (16
+* Total seq length: **800 tokens** (16 x 50)
 * Positional encoding: **RoPE (rotary)**
 
 This preserves electrode-specific structure while allowing attention to learn cross-channel relationships.
```
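The per-channel patching this hunk describes can be sketched in a few lines. The following is a minimal pure-Python illustration, not code from the repo; the function name and the 1000-sample window length are assumptions inferred from 20-sample patches times 50 tokens per channel:

```python
def patchify_per_channel(emg, patch_len=20, stride=20):
    """Split each channel of a (channels x samples) signal into
    non-overlapping patches (tokens). Channels are never mixed
    inside a token, preserving electrode-specific structure."""
    tokens = []
    for channel in emg:
        n_tokens = (len(channel) - patch_len) // stride + 1
        for i in range(n_tokens):
            tokens.append(channel[i * stride : i * stride + patch_len])
    return tokens

# 16 channels x 1000 samples -> 16 x 50 = 800 tokens of length 20
emg = [[0.0] * 1000 for _ in range(16)]
tokens = patchify_per_channel(emg)
print(len(tokens), len(tokens[0]))  # 800 20
```

With stride equal to patch length the patches tile the signal without overlap, so the token count is exactly samples / patch_len per channel.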
```diff
@@ -341,7 +341,7 @@ Pipeline:
 
 **WER:** **33.95 ± 0.97%**
 
-TinyMyo is EMG-only—unlike multimodal systems like MONA-LISA.
+TinyMyo is EMG-only, unlike multimodal systems like MONA-LISA.
 
 ---
 
```
```diff
@@ -350,15 +350,15 @@ TinyMyo is EMG-only—unlike multimodal systems like MONA-LISA.
 TinyMyo runs efficiently on **GAP9 (RISC-V)** via:
 
 * **INT8 quantization**, including attention
-* Multi-level streaming (L3
+* Multi-level streaming (L3 to L2 to L1)
 * Integer LayerNorm, GELU, softmax
 * Static memory arena via liveness analysis
 
 ### Runtime (DB5 pipeline)
 
-* **Inference time
-* **Energy
-* **Average power
+* **Inference time**: **0.785 s**
+* **Energy**: **44.91 mJ**
+* **Average power**: **57.18 mW**
 
 This is the **first EMG foundation model demonstrated on a microcontroller**.
 
```
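The INT8 quantization named in this hunk is part of the deployment stack; the actual GAP9 toolchain code is not shown in this diff. As a generic illustration only, symmetric per-tensor INT8 quantization maps a float tensor onto integer codes through a single scale:

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: one scale maps the
    float range [-max_abs, max_abs] onto integers in [-127, 127].
    (Illustrative sketch, not the repo's deployment code.)"""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.2, 0.03, 0.9]  # toy float tensor
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
max_err = max(abs(w - a) for w, a in zip(weights, approx))
```

The round-trip error is bounded by half the scale. Quantizing attention end-to-end additionally requires the integer LayerNorm, GELU, and softmax variants listed above.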
```diff
@@ -381,7 +381,7 @@ This is the **first EMG foundation model demonstrated on a microcontroller**.
 * **Silent Speech Production:** 33.54% WER
 * **Silent Speech Recognition:** 33.95% WER
 
-TinyMyo matches or exceeds state-of-the-art performance
+TinyMyo matches or exceeds state-of-the-art performance, while being smaller and more efficient than all prior EMG foundation models.
 
 ---
 
```