---
license: collaborative-intelligence
language:
- en
tags:
- music
- audio
- midi
- onnx
- multimodal
- music-generation
- evolutionary-music
- chromatic-modes
datasets:
- earthlyframes/white-training-data
metrics:
- accuracy
library_name: onnxruntime
pipeline_tag: audio-classification
---

# Refractor

Refractor is a multimodal fitness function for evolutionary music composition. It takes up to five input modalities – MIDI piano roll, audio embedding, concept text, lyric text, and artist "sounds-like" descriptions – and scores a piece of music against a chromatic concept, classifying it across three independent mode dimensions: **temporal**, **spatial**, and **ontological**.

It is the scoring engine at the heart of the [White](https://github.com/brotherclone/white) AI-assisted album production system.

## Model Details

### What are chromatic modes?

The White project encodes musical character using a colour-theory system. Each colour (Red, Orange, Yellow, Green, Blue, Indigo, Violet) maps to a unique combination of three independent categorical dimensions:

| Dimension | Classes | Example |
|-----------|---------|---------|
| **Temporal** | Past · Present · Future | Red → Past |
| **Spatial** | Thing · Place · Person | Red → Thing |
| **Ontological** | Known · Imagined · Forgotten | Red → Known |

Refractor learns to predict which cell in this 3×3×3 space a piece of music occupies, and how confidently it does so.
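
For orientation, the dimensions are three independent categorical axes. The snippet below is purely illustrative: only the Red cell comes from the table above, and the full colour → mode table lives in the White project metadata.

```python
from enum import Enum

class Temporal(Enum):
    PAST = "past"
    PRESENT = "present"
    FUTURE = "future"

class Spatial(Enum):
    THING = "thing"
    PLACE = "place"
    PERSON = "person"

class Ontological(Enum):
    KNOWN = "known"
    IMAGINED = "imagined"
    FORGOTTEN = "forgotten"

# Example cell from the table above: Red occupies (Past, Thing, Known).
RED = (Temporal.PAST, Spatial.THING, Ontological.KNOWN)
```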

### Architecture

```
Inputs
  piano_roll       [B, 1, 128, 256]  – MIDI as a piano roll image
  audio_emb        [B, 512]          – CLAP audio embedding
  concept_emb      [B, 768]          – DeBERTa-v3-base concept text embedding
  lyric_emb        [B, 768]          – DeBERTa-v3-base lyric text embedding
  sounds_like_emb  [B, 768]          – DeBERTa-v3-base mean-pooled artist descriptions
  has_audio        [B]               – bool mask
  has_midi         [B]               – bool mask
  has_lyric        [B]               – bool mask
  has_sounds_like  [B]               – bool mask

PianoRollEncoder (CNN)
  Conv2d(1→32)   → BN → ReLU → MaxPool2d
  Conv2d(32→64)  → BN → ReLU → MaxPool2d
  Conv2d(64→128) → BN → ReLU → AdaptiveAvgPool2d(4,4)
  Linear(2048→512) → ReLU
  → midi_emb [B, 512]

Fusion MLP
  cat([audio 512, midi 512, concept 768, lyric 768, sounds_like 768]) = [B, 3328]
  Linear(3328→1024) → ReLU → Dropout(0.3)
  Linear(1024→512)  → ReLU → Dropout(0.2)
  → fused [B, 512]

Heads
  temporal_head     Linear(512→3) → Softmax
  spatial_head      Linear(512→3) → Softmax
  ontological_head  Linear(512→3) → Softmax
  confidence_head   Linear(512→1) → Sigmoid

Total parameters: 5,084,362
  CNN encoder:     1,142,208
  Fusion + heads:  3,942,154
```

Absent modalities are handled via **learned null embeddings** (one per modality, trained end-to-end). During training, **modality dropout** (p=0.15) randomly masks present modalities, forcing the model to be robust to any combination of available inputs. At inference, dropout is disabled and the null path is used for any missing modality.
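
The exact implementation lives in the White repo; the sketch below only illustrates the null-embedding and dropout mechanism described above (class and parameter names here are assumptions, not the repository's actual code).

```python
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    """Illustrative sketch: substitute a learned null vector for an absent modality,
    and randomly drop present modalities (p=0.15) during training."""

    def __init__(self, dim: int, p_drop: float = 0.15):
        super().__init__()
        self.null = nn.Parameter(torch.zeros(dim))  # learned null embedding, trained end-to-end
        self.p_drop = p_drop

    def forward(self, emb: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # emb: [B, dim] modality embedding; present: [B] bool mask (has_audio, has_midi, ...)
        keep = present.clone()
        if self.training:
            # Modality dropout: occasionally hide a present modality so the model
            # learns to score any combination of available inputs.
            keep = keep & (torch.rand(keep.shape, device=keep.device) >= self.p_drop)
        keep = keep.unsqueeze(-1).float()
        # Null path for masked or missing modalities, identity path otherwise.
        return keep * emb + (1.0 - keep) * self.null
```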

### Model description

- **Developed by:** Gabriel Walsh (brotherclone)
- **Model type:** Multimodal classification (ONNX)
- **License:** Collaborative Intelligence
- **Repository:** [brotherclone/white](https://github.com/brotherclone/white)
- **Training dataset:** [earthlyframes/white-training-data](https://huggingface.co/datasets/earthlyframes/white-training-data)

## Uses

### Primary use – evolutionary music composition

Refractor is the fitness function in an evolutionary pipeline that generates music structured around chromatic concepts:

1. A colour concept is selected (e.g. **Red** → `temporal=Past, spatial=Thing, ontological=Known`)
2. ~50 MIDI candidates are generated (chord progressions, drum patterns, bass lines, melodies)
3. Refractor scores each candidate against the concept embedding
4. Candidates are ranked by `confidence`; low scorers are pruned
5. The surviving candidates are promoted and the next generation begins

```python
from training.refractor import Refractor

scorer = Refractor()  # loads refractor.onnx (~19 MB)

# Encode concept once, reuse across the whole evolutionary batch
concept_emb = scorer.prepare_concept(
    "RED temporal=Past spatial=Thing ontological=Known"
)

# Score a single MIDI candidate
result = scorer.score(midi_bytes=midi_data, concept_emb=concept_emb)
# → {
#     "temporal": {"past": 0.89, "present": 0.07, "future": 0.04},
#     "spatial": {"thing": 0.91, "place": 0.05, "person": 0.04},
#     "ontological": {"known": 0.88, "imagined": 0.07, "forgotten": 0.05},
#     "confidence": 0.87
# }

# Score a batch of 50 candidates (single ONNX call)
candidates = [{"midi_bytes": m} for m in midi_variants]
ranked = scorer.score_batch(candidates, concept_emb=concept_emb)
# → list sorted by confidence descending, each with rank + original candidate
```

### With sounds-like context

If you have artist aesthetic descriptions for the target sound, pass them to further condition the score:

```python
sounds_like = [
    "Motorik rhythms, kosmische synthesizer textures, hypnotic repetition",
    "Driving post-punk guitars, angular riffs, sardonic delivery",
]
result = scorer.score(
    midi_bytes=midi_data,
    concept_emb=concept_emb,
    sounds_like_texts=sounds_like,
)
```

Or pre-compute the embedding once and reuse across a batch:

```python
sl_emb = scorer.prepare_sounds_like(sounds_like)
ranked = scorer.score_batch(candidates, concept_emb=concept_emb, sounds_like_emb=sl_emb)
```

### Using ONNX directly

```python
import onnxruntime as ort
import numpy as np

sess = ort.InferenceSession("refractor.onnx", providers=["CPUExecutionProvider"])

# concept_vec and sl_vec are precomputed 768-dim DeBERTa embeddings (float32)
# for the concept string and the sounds-like descriptions (see Preprocessing below).
feed = {
    "piano_roll": np.zeros((1, 1, 128, 256), dtype=np.float32),
    "audio_emb": np.zeros((1, 512), dtype=np.float32),
    "concept_emb": concept_vec.reshape(1, 768).astype(np.float32),
    "lyric_emb": np.zeros((1, 768), dtype=np.float32),
    "sounds_like_emb": sl_vec.reshape(1, 768).astype(np.float32),
    "has_audio": np.array([False]),
    "has_midi": np.array([False]),
    "has_lyric": np.array([False]),
    "has_sounds_like": np.array([True]),
}
temporal, spatial, ontological, confidence = sess.run(None, feed)
```
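
The `Refractor` helpers shown earlier (`prepare_concept`, `prepare_sounds_like`) are the canonical way to build those vectors. If you need to produce them by hand, a minimal sketch with `microsoft/deberta-v3-base` might look like the following; mean pooling over the last hidden state is an assumption here, so match the White repo's preprocessing for exact parity:

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
enc = AutoModel.from_pretrained("microsoft/deberta-v3-base")

def embed(texts: list[str]) -> np.ndarray:
    """Mean-pool token embeddings per text, then average across texts -> [768]."""
    with torch.no_grad():
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        hidden = enc(**batch).last_hidden_state        # [N, T, 768]
        mask = batch["attention_mask"].unsqueeze(-1)   # [N, T, 1]
        pooled = (hidden * mask).sum(1) / mask.sum(1)  # [N, 768]
    return pooled.mean(0).numpy().astype(np.float32)   # [768]

concept_vec = embed(["RED temporal=Past spatial=Thing ontological=Known"])
sl_vec = embed([
    "Motorik rhythms, kosmische synthesizer textures, hypnotic repetition",
    "Driving post-punk guitars, angular riffs, sardonic delivery",
])
```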

### Out-of-scope use

- General-purpose music genre or mood classification (this model is calibrated to the White colour-theory system, not universal taxonomies)
- Real-time inference on audio streams (designed for batch scoring of pre-rendered candidates)
- Replacement for human artistic judgement (scores are a compositional signal, not ground truth)

## Training Details

### Training data

[earthlyframes/white-training-data](https://huggingface.co/datasets/earthlyframes/white-training-data) v0.2.0

- **11,605 segments** across **83 songs**, all **8 chromatic colours**
- Audio coverage: 85.4% (9,907 segments with CLAP embeddings)
- MIDI coverage: 44.3% (5,145 segments with piano rolls)
- Lyric coverage: 92.7% (10,764 segments with DeBERTa lyric embeddings)
- Sounds-like coverage: 100% (11,605 segments, 237 artists, song-level signal broadcast to segments)

Labels are derived from per-song colour assignments in the White album metadata. The `None` class (3,154 segments) covers unlabelled or transitional segments and is excluded from accuracy calculations.
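
If the dataset repo is in a Hugging Face Datasets-compatible layout, it can be pulled for inspection directly; the available configs, splits, and per-segment fields are documented on the dataset card rather than here.

```python
from datasets import load_dataset

# Downloads earthlyframes/white-training-data from the Hub; check the dataset card
# for split names and column schema before relying on any particular field.
ds = load_dataset("earthlyframes/white-training-data")
print(ds)
```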

### Preprocessing

- **MIDI → piano roll**: `pretty_midi`, quantised to 128 pitches × 256 time steps, velocity-normalised to [0, 1] (sketched below)
- **Audio → embedding**: [laion/larger_clap_music](https://huggingface.co/laion/larger_clap_music), 512-dim
- **Text → embedding**: [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base), mean-pooled CLS token, 768-dim; applied to concept strings, lyric text, and artist description lists
- **Sounds-like**: per-song artist descriptions mean-pooled to a single 768-dim vector, broadcast to all segments of that song
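
A minimal version of the MIDI step might look like this; the frame rate and truncation strategy are assumptions, and the White repo's preprocessing is authoritative.

```python
import io

import numpy as np
import pretty_midi

def midi_to_piano_roll(midi_bytes: bytes, n_steps: int = 256) -> np.ndarray:
    """MIDI bytes -> [128, 256] piano roll with velocities normalised to [0, 1]."""
    pm = pretty_midi.PrettyMIDI(io.BytesIO(midi_bytes))
    roll = pm.get_piano_roll(fs=16)    # [128, T] velocity grid; fs=16 is an assumed frame rate
    out = np.zeros((128, n_steps), dtype=np.float32)
    t = min(n_steps, roll.shape[1])
    out[:, :t] = roll[:, :t]           # truncate or zero-pad to 256 time steps
    return np.clip(out / 127.0, 0.0, 1.0)

piano_roll = midi_to_piano_roll(midi_data)[None, None]  # [1, 1, 128, 256], the model's piano_roll input
```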

### Training procedure

Phase 5 fine-tunes from a Phase 3 checkpoint (audio + MIDI + concept + lyric, 2560-dim fusion) by re-initialising the first fusion layer for the expanded 3328-dim input and loading all other weights (sketched after the list below).

- **Hardware:** NVIDIA A10 (23.7 GB VRAM) via Modal
- **Epochs:** 30 (early stopping, patience=10)
- **Best checkpoint:** epoch 14
- **Optimizer:** AdamW, lr=1e-5 → 5e-6 (cosine decay)
- **Batch size:** 32
- **Label smoothing:** 0.1
- **Modality dropout:** p=0.15 per modality during training
- **Model selection criterion:** best mean accuracy across temporal + spatial + ontological (not val loss – loss plateaus at ~0.0002–0.0003 during fine-tuning while accuracy varies ±15%)
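
The checkpoint surgery itself is a few lines. The sketch below shows the idea; the `fusion.0.` parameter prefix and the checkpoint filename are assumptions, not the repo's actual names.

```python
import torch

# Load the Phase 3 checkpoint and drop the first fusion layer, whose input width
# grows from 2,560 to 3,328 in Phase 5; every other weight is reused as-is.
phase3 = torch.load("refractor_phase3.pt", map_location="cpu")  # assumed filename
reusable = {k: v for k, v in phase3.items() if not k.startswith("fusion.0.")}

# Loading with strict=False leaves the re-initialised Linear(3328 -> 1024) to be
# learned during fine-tuning, while the CNN encoder, second fusion layer, and the
# four heads all start from their Phase 3 values:
# phase5_model.load_state_dict(reusable, strict=False)
```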

## Evaluation

### Results

Evaluated on a held-out 20% split (2,321 segments), excluding `None`-labelled segments.

| Dimension | Accuracy |
|-----------|----------|
| Temporal | 89.3% |
| Spatial | 91.6% |
| Ontological | 90.7% |
| **Mean** | **90.5%** |
| Confidence (sigmoid) | ~0.87 at target match |

The spatial dimension historically lagged (62% in text-only Phase 4) because instrumental tracks have no lyric signal and spatial mode correlates strongly with vocal character. Adding MIDI piano rolls in Phase 3 closed the gap to 93%; the sounds-like modality further stabilises scores on instrumental passages.

### Limitations

- Chromatic mode labels are derived from a single artistic framework (the White project). Scores are only meaningful relative to that framework's colour → mode mapping.
- The confidence head is a sigmoid over a single logit, not a calibrated probability. Use it for relative ranking within a batch, not as an absolute reliability score.
- MIDI coverage is 44% of the training data; piano-roll features have weaker gradients than the text/audio paths on segments without MIDI.
- Sounds-like embeddings are song-level averages – they cannot distinguish between sections of the same song that have different timbral character.

## Technical Specifications

### Compute infrastructure

- Training: Modal (cloud), NVIDIA A10 GPU
- Inference: CPU only (`CPUExecutionProvider`), tested on Apple M-series and x86 Linux
- ONNX opset: 17
- Inference time: ~4 ms per batch of 50 on M2 MacBook Pro (MIDI-only, no CLAP)

### Software

- PyTorch 2.x (training)
- ONNX opset 17 (export)
- onnxruntime ≥ 1.17 (inference)
- transformers ≥ 4.40 (DeBERTa / CLAP encoders, lazy-loaded at runtime)
- pretty_midi (piano roll preprocessing)

### Files

| File | Size | Description |
|------|------|-------------|
| `refractor.onnx` | 19.4 MB | ONNX model (all 9 inputs) |
| `refractor.pt` | 19.4 MB | PyTorch checkpoint |

## Citation

```bibtex
@misc{walsh2026refractor,
  author       = {Gabriel Walsh},
  title        = {Refractor: A Multimodal Fitness Function for Chromatic Music Composition},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/earthlyframes/refractor}},
  note         = {Part of the White project: \url{https://github.com/brotherclone/white}}
}
```

## Glossary

- **Chromatic modes**: The three classification dimensions (temporal, spatial, ontological) derived from the White colour-theory system for music
- **Null embedding**: A learned parameter vector substituted for any absent modality at inference time
- **Modality dropout**: Training-time regularisation that randomly masks present modalities, making the model robust to missing inputs
- **Confidence**: A sigmoid scalar in [0, 1] indicating how strongly the fused representation matches the target chromatic concept
- **Sounds-like**: Song-level aesthetic descriptions of reference artists, mean-pooled into a 768-dim conditioning vector