earthlyframes committed on
Commit 59acd42 · verified · 1 Parent(s): 1544861

Update README.md

Files changed (1): README.md +108 -5
README.md CHANGED
---
language: en
tags:
- audio
- music
- classification
- onnx
- chromatic
license: other
base_model:
- laion/larger_clap_music
- microsoft/deberta-v3-base
---

# Refractor CDM

**Refractor CDM** (Compact Disc Module) is a lightweight MLP calibration head that classifies full-mix audio recordings into one of nine "rainbow colors" — a chromatic taxonomy used in *The Rainbow Table*, an AI-assisted album series.

The CDM is a companion to the base Refractor ONNX model (a multimodal fusion network trained on short catalog segments). The base model works well for MIDI and short audio clips but predicts poorly on full-mix audio because CLAP embeddings are optimized for short segments. The CDM corrects this by training directly on chunked full-mix audio.

## Model Details

| Property | Value |
|---|---|
| Architecture | MLP with two hidden layers (1280 → 256 → 128 → 9) |
| Parameters | 361,993 |
| Input | CLAP audio (512-dim) + DeBERTa concept (768-dim) = 1280-dim |
| Output | Softmax probabilities over 9 colors (`color_probs`, shape `[batch, 9]`) |
| Format | ONNX (`refractor_cdm.onnx`, 1.4 MB) |
| Training data | 3,450 chunks from 78 full-mix songs across all 9 colors |
| Loss | CrossEntropyLoss with label smoothing (0.1) + inverse-frequency class weights |

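Outside the wrapper, the exported graph can also be exercised directly with `onnxruntime`. This is only a sketch: the input names (`audio_emb`, `concept_emb`) and the two-tensor layout are assumptions, so check `session.get_inputs()` against the real graph before relying on them.

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("refractor_cdm.onnx")

# Print the graph's actual input names rather than trusting the guesses below.
print([i.name for i in session.get_inputs()])

# Hypothetical feed: assumes two named inputs; the graph may instead take a
# single concatenated 1280-dim vector.
audio_emb = np.random.randn(1, 512).astype(np.float32)    # stand-in CLAP embedding
concept_emb = np.random.randn(1, 768).astype(np.float32)  # stand-in DeBERTa embedding

(color_probs,) = session.run(None, {"audio_emb": audio_emb, "concept_emb": concept_emb})
print(color_probs.shape)  # expected: (1, 9), softmax over the nine colors
```
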
## Color Classes

| Index | Color | CHROMATIC_TARGETS (temporal / spatial / ontological) |
|---|---|---|
| 0 | Red | Past-heavy / Thing-heavy / Known-heavy |
| 1 | Orange | Present-heavy / Thing-heavy / Known-heavy |
| 2 | Yellow | Present-heavy / Place-heavy / Known-heavy |
| 3 | Green | Present-heavy / Place-heavy / Known-heavy (same as Yellow) |
| 4 | Blue | Future-heavy / Place-heavy / Forgotten-heavy |
| 5 | Indigo | Future-heavy / Future-heavy / Forgotten-heavy |
| 6 | Violet | Future-heavy / Future-heavy / Imagined-heavy |
| 7 | White | Uniform across all axes |
| 8 | Black | Present-heavy / Thing-heavy / Imagined-heavy |

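When consuming `color_probs` directly, the index column above gives the decode order. A minimal helper restating that mapping (the function is ours, not part of the repo):

```python
COLORS = ["Red", "Orange", "Yellow", "Green", "Blue",
          "Indigo", "Violet", "White", "Black"]

def decode(color_probs):
    """Map a [batch, 9] softmax output to (color, confidence) pairs."""
    idx = color_probs.argmax(axis=-1)
    return [(COLORS[i], float(color_probs[n, i])) for n, i in enumerate(idx)]
```
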
## Validation Results

Evaluated on 78 labeled songs from `staged_raw_material`, scored in 30 s chunks at a 5 s stride with confidence-weighted aggregation.

| Color | Correct | Total | Accuracy |
|---|---|---|---|
| Red | 11 | 12 | 91.7% |
| Orange | 4 | 4 | 100.0% |
| Yellow | 10 | 10 | 100.0% |
| Green | 0 | 8 | 0.0% ⚠️ |
| Blue | 11 | 11 | 100.0% |
| Indigo | 10 | 11 | 90.9% |
| Violet | 11 | 12 | 91.7% |
| White | 0 | 10 | 0.0% ⚠️ |
| **Overall** | **57** | **78** | **73.1%** |

**Green (0%)** — every Green song was predicted as Yellow. This is pipeline-safe: Green and Yellow share identical CHROMATIC_TARGETS distributions, so downstream chromatic match and drift scores are unaffected.

**White (0%)** — every White song was predicted as Yellow or Blue. White's uniform `[0.33, 0.34, 0.33]` targets are meaningfully different, so this is a known open issue. White albums are intentionally diverse musically, which makes them acoustically diffuse in CLAP's feature space.

## Usage

The CDM is used via the `Refractor` wrapper in `training/refractor.py`. It auto-loads when `training/data/refractor_cdm.onnx` is present.

```python
import soundfile as sf

from training.refractor import Refractor

scorer = Refractor()  # CDM auto-detected

# Load a full-mix wav; CLAP-side preprocessing expects 48 kHz audio,
# so resample first if the file's rate differs.
waveform, sr = sf.read("song.wav")

# Score the full mix against concept text
result = scorer.score(
    audio_emb=scorer.prepare_audio(waveform, sr=48000),
    concept_emb=scorer.prepare_concept("A song about forgetting the future"),
)
# result: {"temporal": {...}, "spatial": {...}, "ontological": {...}, "confidence": 0.93}
```

For full-mix WAV files, use `chunk_audio` + `aggregate_chunk_scores` from `app/generators/midi/production/score_mix.py` to score in overlapping windows and pool the results, as sketched below.

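A rough sketch of that chunk-and-pool pattern, using the 30 s window / 5 s stride from validation. The real `chunk_audio` / `aggregate_chunk_scores` signatures are not documented here, so the confidence-weighted pooling is written out by hand and each chunk result is assumed to carry `color_probs` and `confidence` keys:

```python
import numpy as np

def chunk_waveform(waveform, sr=48000, window_s=30.0, stride_s=5.0):
    """Slice a full mix into overlapping windows (30 s window, 5 s stride)."""
    window, stride = int(window_s * sr), int(stride_s * sr)
    return [waveform[i:i + window]
            for i in range(0, max(1, len(waveform) - window + 1), stride)]

def aggregate_scores(chunk_results):
    """Confidence-weighted average of per-chunk color probabilities."""
    probs = np.stack([r["color_probs"] for r in chunk_results])  # [n_chunks, 9]
    conf = np.array([r["confidence"] for r in chunk_results])    # [n_chunks]
    return (probs * conf[:, None]).sum(axis=0) / conf.sum()      # pooled [9]
```
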
## Training

```bash
# Phase 1 — extract CLAP + concept embeddings from staged_raw_material/
python training/extract_cdm_embeddings.py

# Phase 2 — train on Modal (A10G)
modal run training/modal_train_refractor_cdm.py

# Validate
python training/validate_mix_scoring.py
```

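For reference, the loss listed in Model Details maps onto a standard PyTorch construction. The per-color chunk counts below are placeholders (only their 3,450 total is real):

```python
import torch
import torch.nn as nn

# Placeholder per-color chunk counts summing to the real 3,450 total.
class_counts = torch.tensor([400., 380., 390., 350., 420., 410., 400., 360., 340.])
weights = class_counts.sum() / (len(class_counts) * class_counts)  # inverse frequency

criterion = nn.CrossEntropyLoss(weight=weights, label_smoothing=0.1)
```
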
## Limitations

- CLAP embeddings have a maximum internal window of ~10 s; chunking is essential for full-length tracks
- Green and White classification is unreliable (see the validation results above)
- Training data is drawn from a single artist's catalog — generalization to other music is untested
- The concept embedding path requires a DeBERTa-v3-base inference pass (~600 MB model)

## Citation

Part of *The Rainbow Table* generative music pipeline. See https://github.com/brotherclone/white.