---
license: other
language:
  - en
tags:
  - music
  - audio
  - midi
  - onnx
  - multimodal
  - music-generation
  - evolutionary-music
  - chromatic-modes
datasets:
  - earthlyframes/white-training-data
metrics:
  - accuracy
library_name: onnxruntime
pipeline_tag: audio-classification
---

# Refractor

Refractor is a multimodal fitness function for evolutionary music composition. It takes up to five input modalities (MIDI piano roll, audio embedding, concept text, lyric text, and artist "sounds-like" descriptions) and scores a piece of music against a chromatic concept, classifying it across three independent mode dimensions: temporal, spatial, and ontological.

It is the scoring engine at the heart of the White AI-assisted album production system.

## Model Details

### What are chromatic modes?

The White project encodes musical character using a colour-theory system. Each colour (Red, Orange, Yellow, Green, Blue, Indigo, Violet) maps to a unique combination of three independent categorical dimensions:

| Dimension   | Classes                      | Example     |
|-------------|------------------------------|-------------|
| Temporal    | Past · Present · Future      | Red → Past  |
| Spatial     | Thing · Place · Person       | Red → Thing |
| Ontological | Known · Imagined · Forgotten | Red → Known |
Refractor learns to predict which cell in this 3×3×3 space a piece of music occupies, and how confidently it does so.
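As an illustration, decoding the three head outputs into a cell is a triple argmax. This is a minimal sketch; the class orderings are taken from the table above and the example output later in this card:

```python
import numpy as np

TEMPORAL = ["past", "present", "future"]
SPATIAL = ["thing", "place", "person"]
ONTOLOGICAL = ["known", "imagined", "forgotten"]

def decode_cell(temporal, spatial, ontological):
    """Map the three softmax outputs (each shape [3]) to a cell in the 3x3x3 space."""
    return (
        TEMPORAL[int(np.argmax(temporal))],
        SPATIAL[int(np.argmax(spatial))],
        ONTOLOGICAL[int(np.argmax(ontological))],
    )

# decode_cell([0.89, 0.07, 0.04], [0.91, 0.05, 0.04], [0.88, 0.07, 0.05])
# -> ("past", "thing", "known"), i.e. Red
```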

### Architecture

```text
Inputs
  piano_roll       [B, 1, 128, 256]   - MIDI as a piano roll image
  audio_emb        [B, 512]           - CLAP audio embedding
  concept_emb      [B, 768]           - DeBERTa-v3-base concept text embedding
  lyric_emb        [B, 768]           - DeBERTa-v3-base lyric text embedding
  sounds_like_emb  [B, 768]           - DeBERTa-v3-base mean-pooled artist descriptions
  has_audio        [B]                - bool mask
  has_midi         [B]                - bool mask
  has_lyric        [B]                - bool mask
  has_sounds_like  [B]                - bool mask

PianoRollEncoder (CNN)
  Conv2d(1->32) -> BN -> ReLU -> MaxPool2d
  Conv2d(32->64) -> BN -> ReLU -> MaxPool2d
  Conv2d(64->128) -> BN -> ReLU -> AdaptiveAvgPool2d(4,4)
  Linear(2048->512) -> ReLU
  -> midi_emb [B, 512]

Fusion MLP
  cat([audio 512, midi 512, concept 768, lyric 768, sounds_like 768]) = [B, 3328]
  Linear(3328->1024) -> ReLU -> Dropout(0.3)
  Linear(1024->512)  -> ReLU -> Dropout(0.2)
  -> fused [B, 512]

Heads
  temporal_head    Linear(512->3)  -> Softmax
  spatial_head     Linear(512->3)  -> Softmax
  ontological_head Linear(512->3)  -> Softmax
  confidence_head  Linear(512->1)  -> Sigmoid

Total parameters: 5,084,362
  CNN encoder:      1,142,208
  Fusion + heads:   3,942,154
```

Absent modalities are handled via learned null embeddings (one per modality, trained end-to-end). During training, modality dropout (p=0.15) randomly masks present modalities, forcing the model to be robust to any combination of available inputs. At inference, dropout is disabled and the null path is used for any missing modality.
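A minimal PyTorch sketch of this mechanism follows. The class and attribute names here are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class NullEmbeddingGate(nn.Module):
    """Swap in a learned null vector when a modality is absent.

    Sketch of the null-embedding + modality-dropout mechanism described
    above; names and dropout placement are assumptions.
    """

    def __init__(self, dim: int, p_drop: float = 0.15):
        super().__init__()
        self.null = nn.Parameter(torch.zeros(dim))  # trained end-to-end
        self.p_drop = p_drop

    def forward(self, emb: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # present: [B] bool mask of which rows actually have this modality
        if self.training:
            # Modality dropout: randomly mask rows that are present
            drop = torch.rand(present.shape, device=present.device) < self.p_drop
            present = present & ~drop
        mask = present.unsqueeze(-1).float()  # [B, 1]
        return mask * emb + (1.0 - mask) * self.null
```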


## Uses

### Primary use: evolutionary music composition

Refractor is the fitness function in an evolutionary pipeline that generates music structured around chromatic concepts:

  1. A colour concept is selected (e.g. Red → temporal=Past, spatial=Thing, ontological=Known)
  2. ~50 MIDI candidates are generated (chord progressions, drum patterns, bass lines, melodies)
  3. Refractor scores each candidate against the concept embedding
  4. Candidates are ranked by confidence; low scorers are pruned
  5. The surviving candidates are promoted and the next generation begins
The `Refractor` wrapper exposes the scoring step directly:

```python
from training.refractor import Refractor

scorer = Refractor()  # loads refractor.onnx (~19 MB)

# Encode concept once, reuse across the whole evolutionary batch
concept_emb = scorer.prepare_concept(
    "RED temporal=Past spatial=Thing ontological=Known"
)

# Score a single MIDI candidate
result = scorer.score(midi_bytes=midi_data, concept_emb=concept_emb)
# -> {
#     "temporal":    {"past": 0.89, "present": 0.07, "future": 0.04},
#     "spatial":     {"thing": 0.91, "place": 0.05, "person": 0.04},
#     "ontological": {"known": 0.88, "imagined": 0.07, "forgotten": 0.05},
#     "confidence":  0.87
#   }

# Score a batch of 50 candidates (single ONNX call)
candidates = [{"midi_bytes": m} for m in midi_variants]
ranked = scorer.score_batch(candidates, concept_emb=concept_emb)
# -> list sorted by confidence descending, each with rank + original candidate
```
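Putting the pipeline steps together, one generation of the outer loop might look like the sketch below. This is hypothetical: `mutate()` and the `"candidate"` key on ranked entries are assumptions about code not shown in this card:

```python
def run_generation(scorer, concept_emb, population, keep=10):
    """Score a population of MIDI byte strings and breed the survivors.

    Hypothetical outer loop; mutate() and the ranked-entry structure
    are assumptions, not part of the released API.
    """
    candidates = [{"midi_bytes": m} for m in population]
    ranked = scorer.score_batch(candidates, concept_emb=concept_emb)
    survivors = [r["candidate"]["midi_bytes"] for r in ranked[:keep]]
    # Next generation: several mutated offspring per survivor
    return [mutate(s) for s in survivors for _ in range(len(population) // keep)]
```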

### With sounds-like context

If you have artist aesthetic descriptions for the target sound, pass them to further condition the score:

```python
sounds_like = [
    "Motorik rhythms, kosmische synthesizer textures, hypnotic repetition",
    "Driving post-punk guitars, angular riffs, sardonic delivery",
]
result = scorer.score(
    midi_bytes=midi_data,
    concept_emb=concept_emb,
    sounds_like_texts=sounds_like,
)
```

Or pre-compute the embedding once and reuse across a batch:

```python
sl_emb = scorer.prepare_sounds_like(sounds_like)
ranked = scorer.score_batch(candidates, concept_emb=concept_emb, sounds_like_emb=sl_emb)
```

### Using ONNX directly

```python
import onnxruntime as ort
import numpy as np

sess = ort.InferenceSession("refractor.onnx", providers=["CPUExecutionProvider"])

# concept_vec / sl_vec: precomputed 768-dim DeBERTa embeddings (see below)
feed = {
    "piano_roll":       np.zeros((1, 1, 128, 256), dtype=np.float32),
    "audio_emb":        np.zeros((1, 512), dtype=np.float32),
    "concept_emb":      concept_vec.reshape(1, 768).astype(np.float32),
    "lyric_emb":        np.zeros((1, 768), dtype=np.float32),
    "sounds_like_emb":  sl_vec.reshape(1, 768).astype(np.float32),
    "has_audio":        np.array([False]),
    "has_midi":         np.array([False]),
    "has_lyric":        np.array([False]),
    "has_sounds_like":  np.array([True]),
}
temporal, spatial, ontological, confidence = sess.run(None, feed)
```
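For completeness, here is a sketch of producing `concept_vec`. It assumes plain mean pooling over `microsoft/deberta-v3-base` token states; the released preprocessing may differ in detail:

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
enc = AutoModel.from_pretrained("microsoft/deberta-v3-base")

inputs = tok("RED temporal=Past spatial=Thing ontological=Known", return_tensors="pt")
with torch.no_grad():
    hidden = enc(**inputs).last_hidden_state  # [1, T, 768]

# Mean-pool over non-padding tokens
mask = inputs["attention_mask"].unsqueeze(-1).float()  # [1, T, 1]
concept_vec = ((hidden * mask).sum(1) / mask.sum(1))[0].numpy().astype(np.float32)
```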

### Out-of-scope use

- General-purpose music genre or mood classification (this model is calibrated to the White colour-theory system, not universal taxonomies)
- Real-time inference on audio streams (designed for batch scoring of pre-rendered candidates)
- Replacement for human artistic judgement (scores are a compositional signal, not ground truth)

## Training Details

### Training data

[earthlyframes/white-training-data](https://huggingface.co/datasets/earthlyframes/white-training-data), v0.2.0

- 11,605 segments across 83 songs, all 8 chromatic colours
- Audio coverage: 85.4% (9,907 segments with CLAP embeddings)
- MIDI coverage: 44.3% (5,145 segments with piano rolls)
- Lyric coverage: 92.7% (10,764 segments with DeBERTa lyric embeddings)
- Sounds-like coverage: 100% (11,605 segments, 237 artists, song-level signal broadcast to segments)

Labels are derived from per-song colour assignments in the White album metadata. The None class (3,154 segments) covers unlabelled or transitional segments and is excluded from accuracy calculations.

### Preprocessing

- MIDI → piano roll: pretty_midi, quantised to 128 pitches × 256 time steps, velocity-normalised to [0, 1] (see the sketch after this list)
- Audio → embedding: laion/larger_clap_music, 512-dim
- Text → embedding: microsoft/deberta-v3-base, mean-pooled token embeddings, 768-dim; applied to concept strings, lyric text, and artist description lists
- Sounds-like: per-song artist descriptions mean-pooled to a single 768-dim vector, broadcast to all segments of that song
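As a reference point, a minimal sketch of the MIDI step. The sampling rate `fs` and the pad/crop strategy are assumptions; the released pipeline may quantise differently:

```python
import io

import numpy as np
import pretty_midi

def midi_to_piano_roll(midi_bytes: bytes, steps: int = 256) -> np.ndarray:
    """Render MIDI bytes to a [1, 128, steps] piano roll in [0, 1]."""
    pm = pretty_midi.PrettyMIDI(io.BytesIO(midi_bytes))
    roll = pm.get_piano_roll(fs=16)  # [128, T], velocities in 0-127
    if roll.shape[1] < steps:        # pad or crop the time axis to 256 steps
        roll = np.pad(roll, ((0, 0), (0, steps - roll.shape[1])))
    else:
        roll = roll[:, :steps]
    # Velocity-normalise (clip in case overlapping notes summed above 127)
    return np.clip(roll / 127.0, 0.0, 1.0).astype(np.float32)[None]
```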

### Training procedure

Phase 5 fine-tunes from a Phase 3 checkpoint (audio + MIDI + concept + lyric, 2560-dim fusion) by re-initialising the first fusion layer for the expanded 3328-dim input and loading all other weights.
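A sketch of that warm start. The names here are hypothetical: `RefractorNet`, the `"model"` checkpoint key, and the `fusion.0` prefix for the first fusion layer are assumptions, not the released code:

```python
import torch

ckpt = torch.load("refractor_phase3.pt", map_location="cpu")

# Keep every weight except the first fusion layer, whose input width grew
# from 2560 to 3328 when the sounds-like modality was added.
state = {k: v for k, v in ckpt["model"].items() if not k.startswith("fusion.0.")}

model = RefractorNet()                      # Phase 5 module with 3328-dim fusion input
model.load_state_dict(state, strict=False)  # fusion.0.* stays freshly initialised
```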

- Hardware: NVIDIA A10 (23.7 GB VRAM) via Modal
- Epochs: 30 (early stopping, patience=10)
- Best checkpoint: epoch 14
- Optimizer: AdamW, lr=1e-5 → 5e-6 (cosine decay)
- Batch size: 32
- Label smoothing: 0.1
- Modality dropout: p=0.15 per modality during training
- Model selection criterion: best mean accuracy across temporal + spatial + ontological (not val loss; loss plateaus at ~0.0002–0.0003 during fine-tuning while accuracy varies ±15%)

## Evaluation

### Results

Evaluated on a held-out 20% split (2,321 segments), excluding None-labelled segments.

| Dimension   | Accuracy |
|-------------|----------|
| Temporal    | 89.3%    |
| Spatial     | 91.6%    |
| Ontological | 90.7%    |
| Mean        | 90.5%    |

The confidence head (sigmoid) outputs ~0.87 when the piece matches the target concept.

The spatial dimension historically lagged (62% in text-only Phase 4) because instrumental tracks have no lyric signal and spatial mode correlates strongly with vocal character. Adding MIDI piano rolls in Phase 3 closed the gap to 93%; the sounds-like modality further stabilises scores on instrumental passages.

## Limitations

- Chromatic mode labels are derived from a single artistic framework (the White project). Scores are only meaningful relative to that framework's colour → mode mapping.
- The confidence head is a sigmoid over a single logit, not a calibrated probability. Use it for relative ranking within a batch, not as an absolute reliability score.
- MIDI coverage is 44% of the training data; piano-roll features have weaker gradients than the text/audio paths on segments without MIDI.
- Sounds-like embeddings are song-level averages; they cannot distinguish between sections of the same song that have different timbral character.

## Technical Specifications

### Compute infrastructure

- Training: Modal (cloud), NVIDIA A10 GPU
- Inference: CPU only (CPUExecutionProvider), tested on Apple M-series and x86 Linux
- ONNX opset: 17
- Inference time: ~4 ms per batch of 50 on M2 MacBook Pro (MIDI-only, no CLAP)
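A rough way to reproduce the timing number, assuming the exported graph has a dynamic batch axis (which the single-call `score_batch` of 50 candidates implies):

```python
import time

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("refractor.onnx", providers=["CPUExecutionProvider"])

B = 50  # MIDI-only batch: piano rolls present, all other modalities nulled
feed = {
    "piano_roll":      np.random.rand(B, 1, 128, 256).astype(np.float32),
    "audio_emb":       np.zeros((B, 512), dtype=np.float32),
    "concept_emb":     np.zeros((B, 768), dtype=np.float32),
    "lyric_emb":       np.zeros((B, 768), dtype=np.float32),
    "sounds_like_emb": np.zeros((B, 768), dtype=np.float32),
    "has_audio":       np.zeros(B, dtype=bool),
    "has_midi":        np.ones(B, dtype=bool),
    "has_lyric":       np.zeros(B, dtype=bool),
    "has_sounds_like": np.zeros(B, dtype=bool),
}

start = time.perf_counter()
sess.run(None, feed)
print(f"{(time.perf_counter() - start) * 1e3:.1f} ms for a batch of {B}")
```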

### Software

- PyTorch 2.x (training)
- ONNX opset 17 (export)
- onnxruntime ≥ 1.17 (inference)
- transformers ≥ 4.40 (DeBERTa / CLAP encoders, lazy-loaded at runtime)
- pretty_midi (piano roll preprocessing)

## Files

| File             | Size    | Description             |
|------------------|---------|-------------------------|
| `refractor.onnx` | 19.4 MB | ONNX model (all 9 inputs) |
| `refractor.pt`   | 19.4 MB | PyTorch checkpoint      |

## Citation

```bibtex
@misc{walsh2026refractor,
  author       = {Gabriel Walsh},
  title        = {Refractor: A Multimodal Fitness Function for Chromatic Music Composition},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/earthlyframes/refractor}},
  note         = {Part of the White project: \url{https://github.com/brotherclone/white}}
}
```

## Glossary

- **Chromatic modes**: The three classification dimensions (temporal, spatial, ontological) derived from the White colour-theory system for music
- **Null embedding**: A learned parameter vector substituted for any absent modality at inference time
- **Modality dropout**: Training-time regularisation that randomly masks present modalities, making the model robust to missing inputs
- **Confidence**: A sigmoid scalar in [0, 1] indicating how strongly the fused representation matches the target chromatic concept
- **Sounds-like**: Song-level aesthetic descriptions of reference artists, mean-pooled into a 768-dim conditioning vector