eugenehp committed on
Commit c98dfe4 · verified · 1 Parent(s): a3665e4

Upload README.md with huggingface_hub

Files changed (1): README.md (+183 −61)
- vector-quantization
- rvq
- safetensors
- burn
- rust
language:
- en
library_name: neurorvq-rs
pipeline_tag: feature-extraction

# NeuroRVQ — Safetensors Weights

Pre-converted [safetensors](https://github.com/huggingface/safetensors) weights for the [NeuroRVQ](https://github.com/KonstantinosBarmpas/NeuroRVQ) multi-scale biosignal tokenizer, ready for use with **[neurorvq-rs](https://github.com/eugenehp/neurorvq-rs)** (pure-Rust inference on [Burn 0.20](https://burn.dev)) or any framework that supports safetensors.

Weights are converted from the official PyTorch `.pt` checkpoints published at [ntinosbarmpas/NeuroRVQ](https://huggingface.co/ntinosbarmpas/NeuroRVQ).
## Model Files

### Tokenizers (encoder → RVQ → decoder)

| File | Modality | Params | Size | Embed | Patch | RVQ |
|------|----------|--------|------|-------|-------|-----|
| [`NeuroRVQ_EEG_tokenizer_v1.safetensors`](NeuroRVQ_EEG_tokenizer_v1.safetensors) | **EEG** | 76.0 M | 304 MB | 200 | 200 | 8 levels |
| [`NeuroRVQ_ECG_tokenizer_v1.safetensors`](NeuroRVQ_ECG_tokenizer_v1.safetensors) | **ECG** | 68.1 M | 272 MB | 40 | 40 | 8 levels |
| [`NeuroRVQ_EMG_tokenizer_v1.safetensors`](NeuroRVQ_EMG_tokenizer_v1.safetensors) | **EMG** | 143.6 M | 574 MB | 200 | 200 | 16 levels |

### Foundation Models (encoder only)

| File | Modality | Params | Size | Depth |
|------|----------|--------|------|-------|
| [`NeuroRVQ_EEG_foundation_model_v1.safetensors`](NeuroRVQ_EEG_foundation_model_v1.safetensors) | **EEG** | 58.6 M | 234 MB | 12 blocks |
| [`NeuroRVQ_EMG_foundation_model_v1.safetensors`](NeuroRVQ_EMG_foundation_model_v1.safetensors) | **EMG** | 111.2 M | 445 MB | 12 blocks |

### Config Flags

| File | Description |
|------|-------------|
| [`flags/NeuroRVQ_EEG_v1.yml`](flags/NeuroRVQ_EEG_v1.yml) | EEG — 103 channels, patch=200, embed=200 |
| [`flags/NeuroRVQ_ECG_v1.yml`](flags/NeuroRVQ_ECG_v1.yml) | ECG — 15 channels, patch=40, embed=40 |
| [`flags/NeuroRVQ_EMG_v1.yml`](flags/NeuroRVQ_EMG_v1.yml) | EMG — 16 channels, patch=200, embed=200 |
## Quick Start — Rust

```bash
# Install
cargo add neurorvq-rs

# Download weights + config
huggingface-cli download eugenehp/NeuroRVQ \
  NeuroRVQ_EEG_tokenizer_v1.safetensors \
  flags/NeuroRVQ_EEG_v1.yml \
  --local-dir weights/

# Run tokenization
cargo run --release --bin infer -- \
  --config weights/flags/NeuroRVQ_EEG_v1.yml \
  --weights weights/NeuroRVQ_EEG_tokenizer_v1.safetensors
```
### Library API

```rust
use neurorvq_rs::{NeuroRVQEncoder, Modality, data, channels};
use std::path::Path;

let (model, _ms) = NeuroRVQEncoder::<B>::load_with_modality(
    Path::new("flags/NeuroRVQ_EEG_v1.yml"),
    Path::new("NeuroRVQ_EEG_tokenizer_v1.safetensors"),
    Modality::EEG,
    device,
)?;

// Tokenize → 4 branches × 8 RVQ levels of discrete indices
let tokens = model.tokenize(&batch)?;
for (br, levels) in tokens.branch_tokens.iter().enumerate() {
    for (lv, indices) in levels.iter().enumerate() {
        println!("Branch {} Level {}: {} tokens", br, lv, indices.len());
    }
}
```

### Foundation Model API

```rust
use neurorvq_rs::{NeuroRVQFoundationModel, Modality};

let (fm, _ms) = NeuroRVQFoundationModel::<B>::load(
    Path::new("flags/NeuroRVQ_EEG_v1.yml"),
    Path::new("NeuroRVQ_EEG_foundation_model_v1.safetensors"),
    Modality::EEG,
    device,
)?;

let features = fm.encode(&batch)?;      // 4 branch feature vectors
let pooled = fm.encode_pooled(&batch)?; // Mean-pooled for classification
```

## Quick Start — Python

```python
from safetensors.torch import load_file

state_dict = load_file("NeuroRVQ_EEG_tokenizer_v1.safetensors")
model.load_state_dict(state_dict, strict=False)
```
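For quick checks without loading any ML framework at all, the safetensors container is simple enough to read directly: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/byte offsets, then raw data. A stdlib-only sketch (the tensor name and values below are illustrative, not taken from the NeuroRVQ weights):

```python
# Build and inspect a minimal safetensors file with only the standard
# library, following the published safetensors layout:
# u64 LE header length, JSON header, then raw little-endian tensor data.
import json
import struct

def write_safetensors(path, tensors):
    """tensors: name -> (dtype, shape, raw little-endian bytes)."""
    header, data, offset = {}, b"", 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        offset += len(raw)
        data += raw
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)) + blob + data)

def read_header(path):
    """Return the JSON header without touching the tensor data."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n))

raw = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)  # a 2x2 float32 tensor
write_safetensors("tiny.safetensors", {"codebook": ("F32", [2, 2], raw)})
print(read_header("tiny.safetensors"))
```

The same `read_header` works on the full NeuroRVQ files, which is a cheap way to list tensor names and shapes before committing to a framework.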

## Architecture

```
Raw Signal [B, N, T]
        │
        ▼
┌────────────────────────────────────┐
│ Multi-Scale Temporal Conv          │ 4 parallel branches
│ EEG/ECG: k=21,15,9,5               │ modality-specific kernels
│ EMG:     k=51,17,8,5               │
└────────────────────────────────────┘
        │ ×4 branches
        ▼
┌────────────────────────────────────┐
│ Transformer Encoder                │ 12 blocks, 10 heads
│ + spatial / temporal pos. embeds   │ shared weights across branches
└────────────────────────────────────┘
        │ ×4 branches
        ▼
┌────────────────────────────────────┐
│ Encode Heads                       │ Linear → Tanh → Linear
│ embed_dim → code_dim (128)         │
└────────────────────────────────────┘
        │ ×4 branches
        ▼
┌────────────────────────────────────┐
│ Residual Vector Quantization       │ 8 levels (EEG/ECG)
│ L2-norm codebook lookup            │ 16 levels (EMG)
│ codebook: 8192 × 128               │
└────────────────────────────────────┘
        │ ×4 branches  ← discrete token indices
        ▼
┌────────────────────────────────────┐
│ Transformer Decoder                │ 3 blocks
│ per-branch PatchEmbed (1×1 conv)   │
└────────────────────────────────────┘
        │ concat 4 branches
        ▼
┌────────────────────────────────────┐
│ Decode Heads                       │ Amplitude (GELU)
│ 4×embed_dim → decoder_out_dim      │ Sin/Cos phase (Tanh)
└────────────────────────────────────┘
        │
        ▼
Inverse FFT → Reconstructed Signal
```
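The residual VQ stage above can be sketched in plain Python to show where the per-level token indices come from; the tiny 2-D codebook and input below are illustrative stand-ins for the real 8192 × 128 codebooks, and plain squared-L2 distance stands in for the model's normalized lookup:

```python
# Toy residual vector quantization (RVQ) sketch. Codebook and input are
# hypothetical; only the greedy residual-coding scheme matches the text.
def nearest(codebook, v):
    """Index of the codebook row closest to v (squared L2 distance)."""
    dists = [sum((c - x) ** 2 for c, x in zip(row, v)) for row in codebook]
    return min(range(len(codebook)), key=dists.__getitem__)

def rvq_encode(codebook, v, levels):
    """Greedily quantize v as a sum of codebook rows, one per level."""
    indices, residual = [], list(v)
    for _ in range(levels):
        idx = nearest(codebook, residual)
        indices.append(idx)
        residual = [r - c for r, c in zip(residual, codebook[idx])]
    return indices, residual

codebook = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.0, 0.0]]
indices, residual = rvq_encode(codebook, [1.4, 0.6], levels=3)
print(indices)   # one token index per RVQ level -> [0, 2, 3]
print(residual)  # what the 3 levels could not capture
```

Each level quantizes only what the previous levels left over, which is why more levels (16 for EMG vs 8 for EEG/ECG) buy a finer reconstruction at the cost of more tokens.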

## Numerical Parity (Rust vs Python)

Verified against the official PyTorch reference implementation:

| Layer | Max Abs Error | Notes |
|-------|:---:|-------|
| Encoder features | < 8 × 10⁻³ | 12 transformer layers, f32 accumulation |
| Encode heads | < 2 × 10⁻³ | After Tanh squashing |
| RVQ quantized vectors | ≈ 0 ¹ | Exact with random-init codebooks |
| Token indices | **99.3%** exact ² | Pretrained weights |
| Decode outputs | < 8 × 10⁻¹ ¹ | Dominated by ≤0.7% boundary tokens |

¹ Differences stem from the ≤0.7% of tokens near codebook decision boundaries — a natural consequence of f32 arithmetic differences between frameworks.

² With random-init weights: **100%** match (all "mismatches" resolve to identical codebook vectors, i.e., ties).
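Footnote ¹ comes down to float32 rounding being order-dependent. A stdlib-only illustration (unrelated to the actual NeuroRVQ kernels) of why two frameworks that schedule the same sums differently can disagree in the last bits:

```python
# Emulate float32 accumulation with the stdlib: round every partial sum
# to f32, as a CPU/GPU kernel effectively does, then vary the order.
import struct

def f32(x):
    """Round a Python float (f64) to the nearest float32 value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def sum_f32(values):
    acc = 0.0
    for v in values:
        acc = f32(acc + v)
    return acc

print(sum_f32([1e8, -1e8, 1.0]))  # 1.0 -- the small term survives
print(sum_f32([1e8, 1.0, -1e8]))  # 0.0 -- 1.0 absorbed into 1e8 first
```

When such last-bit differences land exactly on a codebook decision boundary, the two implementations pick different (equally close) codes, which is what the ≤0.7% index mismatches are.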

## Benchmarks

**Platform:** Apple M4 Pro, 64 GB RAM, macOS 15 (arm64)

### Tokenize Latency — All Backends

| Configuration | Modality | PyTorch CPU | Rust NdArray | Rust wgpu (GPU) |
|---|:---:|---:|---:|---:|
| EEG 4ch × 64t | EEG | 179 ms | 661 ms | **51 ms** |
| EEG 8ch × 32t | EEG | 180 ms | 662 ms | **60 ms** |
| EEG 16ch × 16t | EEG | 180 ms | 664 ms | **62 ms** |
| EEG 32ch × 8t | EEG | 178 ms | 664 ms | **65 ms** |
| EEG 64ch × 4t | EEG | 179 ms | 664 ms | **68 ms** |
| ECG 4ch × 150t | ECG | 272 ms | 1881 ms | **92 ms** |
| ECG 8ch × 75t | ECG | 273 ms | 1874 ms | **92 ms** |
| ECG 12ch × 50t | ECG | 272 ms | 1877 ms | **93 ms** |
| ECG 15ch × 40t | ECG | 272 ms | 1878 ms | **93 ms** |
| EMG 4ch × 64t | EMG | 255 ms | 998 ms | **90 ms** |
| EMG 8ch × 32t | EMG | 255 ms | 998 ms | **88 ms** |
| EMG 16ch × 16t | EMG | 254 ms | 1001 ms | **90 ms** |

(Bold marks the fastest backend per row.)

### Tokenize Latency: NdArray vs wgpu vs PyTorch

![Tokenize Comparison](figures/compare_all_tokenize.svg)

### Encode Latency: NdArray vs wgpu vs PyTorch

![Encode Comparison](figures/compare_all_encode.svg)

### Rust — Tokenize Latency by Configuration

![Tokenize Latency](figures/tokenize_latency.svg)

### Rust — EEG Scaling by Channel Count

![EEG Scaling](figures/eeg_scaling.svg)

### Rust — Model Construction Time

![Construction Time](figures/construction_time.svg)

### Backend Comparison Summary

| Comparison | Result |
|---|---|
| **wgpu vs NdArray** | wgpu is **~12× faster** (GPU acceleration) |
| **wgpu vs PyTorch CPU** | wgpu is **~3× faster** for EEG/EMG/ECG |
| **NdArray vs PyTorch CPU** | PyTorch is **~3.7× faster** (optimized BLAS) |
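As a sanity check, the per-row ratios behind the "~12×" and "~3×" summary figures follow directly from the latency table above (milliseconds, one representative row per modality):

```python
# Recompute backend speedup ratios from the tokenize-latency table.
rows = {
    "EEG 4ch x 64t":  {"pytorch_cpu": 179, "ndarray": 661,  "wgpu": 51},
    "ECG 15ch x 40t": {"pytorch_cpu": 272, "ndarray": 1878, "wgpu": 93},
    "EMG 16ch x 16t": {"pytorch_cpu": 254, "ndarray": 1001, "wgpu": 90},
}
for name, r in rows.items():
    nd = r["ndarray"] / r["wgpu"]       # e.g. EEG: ~13x vs NdArray
    pt = r["pytorch_cpu"] / r["wgpu"]   # e.g. EEG: ~3.5x vs PyTorch CPU
    print(f"{name}: {nd:.1f}x vs NdArray, {pt:.1f}x vs PyTorch CPU")
```

The spread is real — ECG's wgpu advantage over NdArray is closer to 20× — so the summary figures are rough midpoints, not per-modality constants.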

### Key Observations

- **wgpu (GPU) is the fastest backend** — 51–93 ms across all configurations
- **PyTorch CPU** uses Apple Accelerate/AMX BLAS and fused operators, making it faster than Rust NdArray on CPU
- **Latency scales with total patch count**, not the channel/time decomposition — EEG (256 patches) < EMG (256 patches, 16 RVQ levels) < ECG (600 patches)
- **Construction time** is ~13 ms (warm) / ~54 ms (cold start for EMG, with its larger kernels)
- **Standard deviation < 1%** — highly stable inference latency
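The patch-count observation is easy to verify: within each modality the benchmark configurations were chosen so that channels × time-patches is constant, which is why latency barely moves across rows of the same modality:

```python
# Check that each modality's benchmark configs share one total patch
# count (channels x time-patches), the quantity that drives latency.
configs = {
    "EEG": [(4, 64), (8, 32), (16, 16), (32, 8), (64, 4)],
    "ECG": [(4, 150), (8, 75), (12, 50), (15, 40)],
    "EMG": [(4, 64), (8, 32), (16, 16)],
}
for modality, shapes in configs.items():
    totals = {ch * t for ch, t in shapes}
    print(modality, totals)  # a single value per modality
```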

### Why Rust?

| | Python + PyTorch | Rust + Burn |
|---|---|---|
| Dependencies | pip, torch, numpy, einops, ... | Zero (single static binary) |
| GPU support | CUDA, MPS | wgpu (Metal, Vulkan, WebGPU) |
| Deployment | Interpreter + venv | Single binary, WASM, embedded |
| Memory | GC pauses | Deterministic, no GC |
| Latency (GPU) | — | **51–93 ms** (wgpu Metal) |

## Conversion

These weights were converted from the official `.pt` files:

```python
import torch
from safetensors.torch import save_file

state_dict = torch.load("model.pt", map_location="cpu")
converted = {k: v.float().contiguous() for k, v in state_dict.items()}
save_file(converted, "model.safetensors")
```

Or use the included script:

```bash
python scripts/convert_pt_to_safetensors.py \
  --input NeuroRVQ_EEG_tokenizer_v1.pt \
  --output NeuroRVQ_EEG_tokenizer_v1.safetensors
```

## Citation

```bibtex
```
 
## License

Apache-2.0 — same as the original NeuroRVQ release.

## Links

| | |
|---|---|
| **Rust crate** | [github.com/eugenehp/neurorvq-rs](https://github.com/eugenehp/neurorvq-rs) |
| **Original weights** | [huggingface.co/ntinosbarmpas/NeuroRVQ](https://huggingface.co/ntinosbarmpas/NeuroRVQ) |
| **Paper / Code** | [github.com/KonstantinosBarmpas/NeuroRVQ](https://github.com/KonstantinosBarmpas/NeuroRVQ) |
| **Burn framework** | [burn.dev](https://burn.dev) |