danielritchie committed 5d82c13 (verified) · Parent(s): ed278a6

Update README.md

Files changed (1): README.md (+77, -3). The previous README contained only the `license: cc-by-4.0` YAML frontmatter; this commit replaces it with the full model card below.
---
license: cc-by-4.0
datasets:
- danielritchie/cinematic-mood-palette
language:
- en
tags:
- tflite
- embedded
- emotion
- color
- hri
- robotics
- affective-computing
- real-time
- vad
- tiny-model
---

# VIBE Color Model

A 365-parameter TFLite model that maps emotional state to cinematic color expression. Designed to run on embedded hardware with minimal compute.

## Model Description

Given a 5-dimensional emotional coordinate (VAD+CC), the model returns a cinematic visual treatment: not just a color, but RGB plus independent Energy and Intensity parameters drawn from cinematographic practice.

**Architecture:** 5→16→12→5 fully connected network
**Size:** 3.5 KB
**Parameters:** 365
**Format:** TFLite (embedded deployment), H5 (inspection/fine-tuning)
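The 365-parameter figure follows directly from the 5→16→12→5 layer sizes (each dense layer contributes inputs × outputs weights plus one bias per output); a quick arithmetic check in plain Python:

```python
# Parameter count for a fully connected 5 -> 16 -> 12 -> 5 network:
# each layer has (n_in * n_out) weights plus (n_out) biases.
layers = [5, 16, 12, 5]

total = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

print(total)  # 96 + 204 + 65 = 365
```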
32
+
33
+ ## Inputs and Outputs
34
+
35
+ **Input:** VAD+CC vector β€” 5 float values in [0, 1]
36
+
37
+ | Dimension | Meaning |
38
+ |---|---|
39
+ | Valence | Negative ↔ Positive emotional tone |
40
+ | Arousal | Calm ↔ Energized |
41
+ | Dominance | Passive ↔ Powerful |
42
+ | Complexity | Minimal ↔ Rich |
43
+ | Coherence | Chaotic ↔ Harmonious |
44
+
45
+ **Output:** 5 cinematic parameters β€” 5 float values in [0, 1]
46
+
47
+ | Dimension | Meaning |
48
+ |---|---|
49
+ | R | Red channel |
50
+ | G | Green channel |
51
+ | B | Blue channel |
52
+ | Energy | How alive/active the display feels |
53
+ | Intensity | How pronounced the effect is applied |
54
+

## Training Data

Trained on [danielritchie/cinematic-mood-palette](https://huggingface.co/datasets/danielritchie/cinematic-mood-palette): ~80 curated anchor points mapping emotional states to visual treatments drawn from film and photography.

## Validation

Validation is qualitative. The model is evaluated by behavioral coherence: does the output feel cinematically appropriate for the emotional input? Formal quantitative benchmarks are not meaningful for a model of this size and purpose.

## Intended Use

Part of [VIBE-Eyes](https://github.com/brainwavecollective/vibe-eyes), a real-time emotional display system for conversational robots. The model runs on-device, receiving VAD+CC vectors from an edge emotion engine and driving LED color output without any cloud dependency.

Also useful as a lightweight reference implementation for anyone mapping affective state to visual expression in constrained environments.
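Driving an RGB LED from the model output can be as simple as scaling the color channels into 8-bit PWM duty values. A minimal sketch; the output ordering and the interpretation of Intensity as a brightness scaler are assumptions for illustration, not documented behavior:

```python
def to_led_rgb(model_out):
    """Convert a [R, G, B, Energy, Intensity] vector in [0, 1]
    to 8-bit duty values for an RGB LED.

    Assumption: Intensity scales overall brightness; Energy would
    drive animation (e.g. pulse rate) and is left to the caller.
    """
    r, g, b, energy, intensity = model_out
    scale = 255 * intensity
    return tuple(round(c * scale) for c in (r, g, b))

print(to_led_rgb([0.8, 0.2, 0.0, 0.5, 1.0]))  # (204, 51, 0)
```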

## Limitations

- Small training set (~80 anchor points): functions as a reference structure, not comprehensive coverage
- Culturally specific: draws primarily from Western cinematic tradition
- Interpretive: mappings reflect observed patterns in film, not objective measurements

## License

CC-BY-4.0: use freely with credit.