Update dataset card for v0.3.0

README.md (CHANGED)
```diff
@@ -14,365 +14,11 @@ tags:
 - evolutionary-composition
 size_categories:
 - 10K<n<100K
-dataset_info:
-- config_name: base_manifest
-  features:
-  - name: id
-    dtype: string
-  - name: bpm
-    dtype: int64
-  - name: tempo_numerator
-    dtype: float64
-  - name: tempo_denominator
-    dtype: float64
-  - name: key_signature_note
-    dtype: string
-  - name: key_signature_mode
-    dtype: string
-  - name: rainbow_color
-    dtype: string
-  - name: rainbow_color_temporal_mode
-    dtype: string
-  - name: rainbow_color_objectional_mode
-    dtype: string
-  - name: rainbow_color_ontological_mode
-    dtype: string
-  - name: transmigrational_mode
-    dtype: string
-  - name: title
-    dtype: string
-  - name: release_date
-    dtype: string
-  - name: total_running_time
-    dtype: string
-  - name: vocals
-    dtype: bool
-  - name: lyrics
-    dtype: bool
-  - name: lrc_lyrics
-    dtype: string
-  - name: sounds_like
-    dtype: string
-  - name: mood
-    dtype: string
-  - name: genres
-    dtype: string
-  - name: lrc_file
-    dtype: string
-  - name: concept
-    dtype: string
-  - name: training_data
-    struct:
-    - name: album_sequence
-      dtype: int64
-    - name: avg_word_length
-      dtype: float64
-    - name: boundary_fluidity_score
-      dtype: float64
-    - name: concept_length
-      dtype: int64
-    - name: discrepancy_intensity
-      dtype: float64
-    - name: exclamation_marks
-      dtype: int64
-    - name: has_rebracketing_markers
-      dtype: bool
-    - name: memory_discrepancy_severity
-      dtype: float64
-    - name: narrative_complexity
-      dtype: float64
-    - name: ontological_uncertainty
-      dtype: float64
-    - name: question_marks
-      dtype: int64
-    - name: rebracketing_coverage
-      dtype: float64
-    - name: rebracketing_intensity
-      dtype: float64
-    - name: rebracketing_type
-      dtype: string
-    - name: sentence_count
-      dtype: int64
-    - name: temporal_complexity_score
-      dtype: float64
-    - name: track_duration
-      dtype: float64
-    - name: track_id
-      dtype: string
-    - name: track_position
-      dtype: int64
-    - name: uncertainty_level
-      dtype: float64
-    - name: word_count
-      dtype: int64
-  - name: song_structure
-    dtype: string
-  - name: track_id
-    dtype: int64
-  - name: description
-    dtype: string
-  - name: audio_file
-    dtype: string
-  - name: midi_file
-    dtype: string
-  - name: group
-    dtype: string
-  - name: midi_group_file
-    dtype: string
-  - name: player
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 3522729
-    num_examples: 1327
-  download_size: 177786
-  dataset_size: 3522729
-- config_name: training_full
-  features:
-  - name: segment_id
-    dtype: string
-  - name: segment_index
-    dtype: int64
-  - name: song_id
-    dtype: string
-  - name: track_number
-    dtype: int64
-  - name: track_description
-    dtype: string
-  - name: track_group
-    dtype: string
-  - name: track_player
-    dtype: string
-  - name: source_audio_file
-    dtype: string
-  - name: segment_audio_file
-    dtype: string
-  - name: midi_file
-    dtype: string
-  - name: start_seconds
-    dtype: float64
-  - name: end_seconds
-    dtype: float64
-  - name: duration_seconds
-    dtype: float64
-  - name: has_audio
-    dtype: bool
-  - name: has_midi
-    dtype: bool
-  - name: lyric_text
-    dtype: string
-  - name: structure_section
-    dtype: string
-  - name: segment_type
-    dtype: string
-  - name: original_start_seconds
-    dtype: float64
-  - name: original_end_seconds
-    dtype: float64
-  - name: has_structure_adjustments
-    dtype: bool
-  - name: structure_adjustments
-    dtype: string
-  - name: is_sub_segment
-    dtype: bool
-  - name: sub_segment_info
-    dtype: string
-  - name: lrc_line_number
-    dtype: float64
-  - name: lyric_char_count
-    dtype: uint32
-  - name: lyric_word_count
-    dtype: uint32
-  - name: start_adjustment_seconds
-    dtype: float64
-  - name: end_adjustment_seconds
-    dtype: float64
-  - name: content_type
-    dtype: string
-  - name: manifest_track_key
-    dtype: string
-  - name: bpm
-    dtype: int64
-  - name: tempo_numerator
-    dtype: float64
-  - name: tempo_denominator
-    dtype: float64
-  - name: key_signature_note
-    dtype: string
-  - name: key_signature_mode
-    dtype: string
-  - name: rainbow_color
-    dtype: string
-  - name: rainbow_color_temporal_mode
-    dtype: string
-  - name: rainbow_color_objectional_mode
-    dtype: string
-  - name: rainbow_color_ontological_mode
-    dtype: string
-  - name: transmigrational_mode
-    dtype: string
-  - name: title
-    dtype: string
-  - name: release_date
-    dtype: string
-  - name: total_running_time
-    dtype: string
-  - name: vocals
-    dtype: bool
-  - name: lyrics
-    dtype: bool
-  - name: lrc_lyrics
-    dtype: string
-  - name: sounds_like
-    dtype: string
-  - name: mood
-    dtype: string
-  - name: genres
-    dtype: string
-  - name: lrc_file
-    dtype: string
-  - name: concept
-    dtype: string
-  - name: training_data
-    struct:
-    - name: album_sequence
-      dtype: int64
-    - name: avg_word_length
-      dtype: float64
-    - name: boundary_fluidity_score
-      dtype: float64
-    - name: concept_length
-      dtype: int64
-    - name: discrepancy_intensity
-      dtype: float64
-    - name: exclamation_marks
-      dtype: int64
-    - name: has_rebracketing_markers
-      dtype: bool
-    - name: memory_discrepancy_severity
-      dtype: float64
-    - name: narrative_complexity
-      dtype: float64
-    - name: ontological_uncertainty
-      dtype: float64
-    - name: question_marks
-      dtype: int64
-    - name: rebracketing_coverage
-      dtype: float64
-    - name: rebracketing_intensity
-      dtype: float64
-    - name: rebracketing_type
-      dtype: string
-    - name: sentence_count
-      dtype: int64
-    - name: temporal_complexity_score
-      dtype: float64
-    - name: track_duration
-      dtype: float64
-    - name: track_id
-      dtype: string
-    - name: track_position
-      dtype: int64
-    - name: uncertainty_level
-      dtype: float64
-    - name: word_count
-      dtype: int64
-  - name: song_structure
-    dtype: string
-  - name: midi_group_file
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 35869494
-    num_examples: 11605
-  download_size: 580351
-  dataset_size: 35869494
-- config_name: training_segments
-  features:
-  - name: segment_id
-    dtype: string
-  - name: segment_index
-    dtype: int64
-  - name: track_id
-    dtype: string
-  - name: track_number
-    dtype: int64
-  - name: track_description
-    dtype: string
-  - name: track_group
-    dtype: string
-  - name: track_player
-    dtype: string
-  - name: source_audio_file
-    dtype: string
-  - name: segment_audio_file
-    dtype: string
-  - name: midi_file
-    dtype: string
-  - name: start_seconds
-    dtype: float64
-  - name: end_seconds
-    dtype: float64
-  - name: duration_seconds
-    dtype: float64
-  - name: has_audio
-    dtype: bool
-  - name: has_midi
-    dtype: bool
-  - name: lyric_text
-    dtype: string
-  - name: structure_section
-    dtype: string
-  - name: segment_type
-    dtype: string
-  - name: original_start_seconds
-    dtype: float64
-  - name: original_end_seconds
-    dtype: float64
-  - name: has_structure_adjustments
-    dtype: bool
-  - name: structure_adjustments
-    dtype: string
-  - name: is_sub_segment
-    dtype: bool
-  - name: sub_segment_info
-    dtype: string
-  - name: lrc_line_number
-    dtype: float64
-  - name: lyric_char_count
-    dtype: uint32
-  - name: lyric_word_count
-    dtype: uint32
-  - name: start_adjustment_seconds
-    dtype: float64
-  - name: end_adjustment_seconds
-    dtype: float64
-  - name: content_type
-    dtype: string
-  splits:
-  - name: train
-    num_bytes: 6049737
-    num_examples: 11605
-  download_size: 389389
-  dataset_size: 6049737
-configs:
-- config_name: base_manifest
-  data_files:
-  - split: train
-    path: base_manifest/train-*
-- config_name: training_full
-  data_files:
-  - split: train
-    path: training_full/train-*
-- config_name: training_segments
-  data_files:
-  - split: train
-    path: training_segments/train-*
 ---
 
 # White Training Data
 
-Training data for the **Rainbow Table** chromatic fitness function — a multimodal ML model that scores how well audio, MIDI, and text align with a target chromatic mode (Black, Red, Orange, Yellow, Green, Blue, Indigo, Violet).
 
 Part of [The Earthly Frames](https://github.com/brotherclone/white) project, a conscious collaboration between human creativity and AI.
 
```
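The removed schema above nests per-track lyric metrics inside a `training_data` struct column on each `base_manifest` row. Consumers often flatten that struct into top-level columns before training. A minimal stdlib sketch of that flattening; the field names follow the schema, but the sample row values are hypothetical:

```python
def flatten_training_data(row: dict, prefix: str = "training_data.") -> dict:
    """Lift the nested training_data struct into prefixed top-level keys."""
    flat = {k: v for k, v in row.items() if k != "training_data"}
    for key, value in (row.get("training_data") or {}).items():
        flat[prefix + key] = value
    return flat

# Hypothetical manifest row shaped like the base_manifest schema
row = {
    "id": "black_01",
    "bpm": 92,
    "rainbow_color": "Black",
    "training_data": {"album_sequence": 1, "word_count": 214, "uncertainty_level": 0.42},
}

flat = flatten_training_data(row)
# The nested dict is gone; its fields are now prefixed top-level columns
```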
```diff
@@ -388,7 +34,7 @@ These models are **fitness functions for evolutionary music composition**, not c
 
 ## Version
 
-Current: **v0.2.0** — 2026-02-12
 
 ## Dataset Structure
 
```
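The hunk context above notes these models are fitness functions for evolutionary music composition rather than classifiers. A toy sketch of what fitness-driven selection looks like downstream; every name here is hypothetical, and the score keys mirror the card's temporal/spatial/ontological dimensions:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    scores: dict  # per-dimension chromatic scores in [0, 1]

def fitness(c: Candidate, weights: dict = None) -> float:
    """Aggregate per-dimension scores into a single fitness value."""
    weights = weights or {"temporal": 1.0, "spatial": 1.0, "ontological": 1.0}
    total = sum(weights[k] * c.scores[k] for k in weights)
    return total / sum(weights.values())

def select_fittest(candidates: list, k: int = 2) -> list:
    """Keep the top-k candidates for the next generation."""
    return sorted(candidates, key=fitness, reverse=True)[:k]

pool = [
    Candidate("a", {"temporal": 0.9, "spatial": 0.8, "ontological": 0.7}),
    Candidate("b", {"temporal": 0.4, "spatial": 0.5, "ontological": 0.6}),
    Candidate("c", {"temporal": 0.8, "spatial": 0.9, "ontological": 0.9}),
]
survivors = select_fittest(pool, k=2)
# survivors are the two highest-scoring candidates, best first
```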
```diff
@@ -413,6 +59,28 @@ Current: **v0.2.0** — 2026-02-12
 
 **Note:** Audio waveforms and MIDI binaries are stored separately (not included in metadata configs due to size). The `preview` config includes playable audio for exploration. The media parquet (~15 GB) is used locally during training.
 
 ## Key Features
 
 ### `training_full` (primary training table)
```
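Because the media itself lives outside these metadata configs, the per-segment `has_audio`/`has_midi` flags are what training code filters on before touching the separately stored parquet. A stdlib sketch with hypothetical rows (flag and field names from the schema above):

```python
# Hypothetical training_full rows; field names follow the dataset schema
rows = [
    {"segment_id": "s1", "has_audio": True,  "has_midi": True,  "duration_seconds": 12.5},
    {"segment_id": "s2", "has_audio": True,  "has_midi": False, "duration_seconds": 8.0},
    {"segment_id": "s3", "has_audio": False, "has_midi": True,  "duration_seconds": 4.2},
]

# Keep only segments usable for audio+MIDI fusion training
fusion_ready = [r for r in rows if r["has_audio"] and r["has_midi"]]
total_seconds = sum(r["duration_seconds"] for r in fusion_ready)
```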
````diff
@@ -450,10 +118,12 @@ manifest = load_dataset("earthlyframes/white-training-data", "base_manifest")
 segments = load_dataset("earthlyframes/white-training-data", "training_segments")
 
 # Load a specific version
-training = load_dataset("earthlyframes/white-training-data", "training_full", revision="v0.2.0")
 ```
 
-## Training Results
 
 | Task | Metric | Result |
 |------|--------|--------|
````
```diff
@@ -463,15 +133,23 @@ training = load_dataset("earthlyframes/white-training-data", "training_full", re
 | Ontological mode regression | Mode accuracy | 92.9% |
 | Spatial mode regression | Mode accuracy | 61.6% |
 
-
 
 ## Source
 
-83 songs across 8 chromatic albums
 
 ## License
 
 [Collaborative Intelligence License v1.0](https://github.com/brotherclone/white/blob/main/COLLABORATIVE_INTELLIGENCE_LICENSE.md) — This work represents conscious partnership between human creativity and AI. Both parties have agency; both must consent to sharing.
 
 ---
-*Generated 2026-02-12*
```
```diff
@@ -14,365 +14,11 @@ tags:
 ---
 
 # White Training Data
 
+Training data and models for the **Rainbow Table** chromatic fitness function — a multimodal ML model that scores how well audio, MIDI, and text align with a target chromatic mode (Black, Red, Orange, Yellow, Green, Blue, Indigo, Violet).
 
 Part of [The Earthly Frames](https://github.com/brotherclone/white) project, a conscious collaboration between human creativity and AI.
 
```
```diff
@@ -388,7 +34,7 @@ These models are **fitness functions for evolutionary music composition**, not c
 
 ## Version
 
+Current: **v0.3.0** — 2026-02-13
 
 ## Dataset Structure
 
```
````diff
@@ -413,6 +59,28 @@ Current: **v0.2.0** — 2026-02-12
 
 **Note:** Audio waveforms and MIDI binaries are stored separately (not included in metadata configs due to size). The `preview` config includes playable audio for exploration. The media parquet (~15 GB) is used locally during training.
 
+## Trained Models
+
+| File | Size | Description |
+|------|------|-------------|
+| `data/models/fusion_model.pt` | ~16 MB | PyTorch checkpoint — `MultimodalFusionModel` (4.3M params) |
+| `data/models/fusion_model.onnx` | ~16 MB | ONNX export for fast CPU inference |
+
+The models are consumed via the `ChromaticScorer` class, which wraps encoding and inference:
+
+```python
+from chromatic_scorer import ChromaticScorer
+
+scorer = ChromaticScorer("path/to/fusion_model.onnx")
+result = scorer.score(midi_bytes=midi, audio_waveform=audio, concept_text="a haunted lullaby")
+# result: {"temporal": 0.87, "spatial": 0.91, "ontological": 0.83, "confidence": 0.89}
+
+# Batch scoring for evolutionary candidate selection
+ranked = scorer.score_batch(candidates, target_color="Violet")
+```
+
+**Architecture:** PianoRollEncoder CNN (1.1M params, unfrozen) + fusion MLP (3.2M params) with 4 regression heads. Input: audio (512-dim CLAP) + MIDI (512-dim piano roll) + concept (768-dim DeBERTa) + lyric (768-dim DeBERTa) = 2560-dim fused representation. Trained with learned null embeddings and modality dropout (p=0.15) for robustness to missing modalities.
+
 ## Key Features
 
 ### `training_full` (primary training table)
````
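The Architecture note above describes a 2560-dim fused input (512-dim CLAP audio + 512-dim piano roll + 768-dim concept + 768-dim lyric) with learned null embeddings standing in for missing modalities. A minimal NumPy sketch of that concatenation; the null-embedding values and dropout handling here are illustrative assumptions, not the project's actual training code:

```python
import numpy as np

# Per-modality embedding widths from the architecture note
DIMS = {"audio": 512, "midi": 512, "concept": 768, "lyric": 768}
rng = np.random.default_rng(0)

# Stand-ins for the learned null embeddings (one per modality)
null_embeddings = {name: rng.normal(size=d).astype(np.float32) for name, d in DIMS.items()}

def fuse(inputs: dict, train: bool = False, modality_dropout: float = 0.15) -> np.ndarray:
    """Concatenate modality embeddings, substituting the null embedding for
    missing (or, at train time, randomly dropped) modalities."""
    parts = []
    for name, dim in DIMS.items():
        vec = inputs.get(name)
        dropped = train and rng.random() < modality_dropout
        if vec is None or dropped:
            vec = null_embeddings[name]
        parts.append(vec)
    return np.concatenate(parts)

# Audio-only candidate: MIDI/concept/lyric fall back to null embeddings,
# so the fused vector always has the full 2560 dimensions
fused = fuse({"audio": rng.normal(size=512).astype(np.float32)})
```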
````diff
@@ -450,10 +118,12 @@ manifest = load_dataset("earthlyframes/white-training-data", "base_manifest")
 segments = load_dataset("earthlyframes/white-training-data", "training_segments")
 
 # Load a specific version
+training = load_dataset("earthlyframes/white-training-data", "training_full", revision="v0.3.0")
 ```
 
+## Training Results
+
+### Text-Only (Phases 1-4)
 
 | Task | Metric | Result |
 |------|--------|--------|
````
```diff
@@ -463,15 +133,23 @@ training = load_dataset("earthlyframes/white-training-data", "training_full", re
 | Ontological mode regression | Mode accuracy | 92.9% |
 | Spatial mode regression | Mode accuracy | 61.6% |
 
+### Multimodal Fusion (Phase 3)
+
+| Dimension | Text-Only | Multimodal | Improvement |
+|-----------|-----------|------------|-------------|
+| Temporal | 94.9% | 90.0% | — |
+| Ontological | 92.9% | 91.0% | — |
+| Spatial | 61.6% | **93.0%** | **+31.4%** |
+
+Spatial mode was bottlenecked by instrumental albums (Yellow, Green) which lack text. The multimodal fusion model resolves this by incorporating CLAP audio embeddings and piano roll MIDI features, enabling accurate scoring even without lyrics. Temporal and ontological show slight regression in multi-task mode but remain strong; single-task variants can be used where maximum per-dimension accuracy is needed.
 
 ## Source
 
+83 songs across 8 chromatic albums. The 7 color albums (Black through Violet) are **human-composed source material** spanning 10+ years of original work — all audio, lyrics, and arrangements are the product of human creativity. The White album is being co-produced with AI using the evolutionary composition pipeline described above. No sampled or licensed material is used in any album.
 
 ## License
 
 [Collaborative Intelligence License v1.0](https://github.com/brotherclone/white/blob/main/COLLABORATIVE_INTELLIGENCE_LICENSE.md) — This work represents conscious partnership between human creativity and AI. Both parties have agency; both must consent to sharing.
 
 ---
+*Generated 2026-02-13 | [GitHub](https://github.com/brotherclone/white)*
```