---
license: cc-by-4.0
datasets:
- danielritchie/cinematic-mood-palette
language:
- en
tags:
- tflite
- embedded
- emotion
- color
- hri
- robotics
- affective-computing
- real-time
- vad
- tiny-model
---
# VIBE Color Model
A 365-parameter TFLite model that maps emotional state to cinematic color expression. Designed to run on embedded hardware with minimal compute.
## Model Description
Given a 5-dimensional emotional coordinate (VAD+CC), the model returns a cinematic visual treatment: not just a color, but RGB plus independent Energy and Intensity parameters drawn from cinematographic practice.
- **Architecture:** 5→16→12→5 fully connected network
- **Size:** 3.5 KB
- **Parameters:** 365
- **Format:** TFLite (embedded deployment), H5 (inspection/fine-tuning)
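A minimal inference sketch in Python, assuming the TFLite file has been downloaded locally as `vibe_color.tflite` (the filename is illustrative; on embedded targets, `tflite_runtime.interpreter.Interpreter` is a drop-in substitute for `tf.lite.Interpreter`):
```python
import numpy as np
import tensorflow as tf  # or: from tflite_runtime.interpreter import Interpreter

# Load the model and allocate its (tiny) tensor arena.
interpreter = tf.lite.Interpreter(model_path="vibe_color.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# VAD+CC vector: [valence, arousal, dominance, complexity, coherence],
# each in [0, 1]. This example is a positive, calm, harmonious state.
vadcc = np.array([[0.8, 0.3, 0.5, 0.4, 0.8]], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], vadcc)
interpreter.invoke()
r, g, b, energy, intensity = interpreter.get_tensor(output_details[0]["index"])[0]
print(f"rgb=({r:.2f}, {g:.2f}, {b:.2f}) energy={energy:.2f} intensity={intensity:.2f}")
```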
## Inputs and Outputs
**Input:** VAD+CC vector of 5 float values in [0, 1]
| Dimension | Meaning |
|---|---|
| Valence | Negative ↔ Positive emotional tone |
| Arousal | Calm ↔ Energized |
| Dominance | Passive ↔ Powerful |
| Complexity | Minimal ↔ Rich |
| Coherence | Chaotic ↔ Harmonious |
**Output:** 5 cinematic parameters as float values in [0, 1]
| Dimension | Meaning |
|---|---|
| R | Red channel |
| G | Green channel |
| B | Blue channel |
| Energy | How alive/active the display feels |
| Intensity | How strongly the effect is applied |
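How Energy and Intensity drive a display is left to the integrator; the model does not prescribe a rendering scheme. As one hypothetical example, Intensity could scale overall brightness while Energy sets the rate of a breathing pulse:
```python
import math

def led_frame(r, g, b, energy, intensity, t):
    """Hypothetical rendering: returns an 8-bit (R, G, B) tuple at time t.

    Intensity scales overall brightness; Energy speeds up a sinusoidal
    'breathing' pulse between 70% and 100% of that brightness.
    """
    pulse = 0.5 + 0.5 * math.sin(2 * math.pi * (0.2 + energy) * t)
    brightness = intensity * (0.7 + 0.3 * pulse)
    return tuple(int(255 * c * brightness) for c in (r, g, b))
```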
## Training Data
Trained on [danielritchie/cinematic-mood-palette](https://huggingface.co/datasets/danielritchie/cinematic-mood-palette), a set of ~80 curated anchor points mapping emotional states to visual treatments drawn from film and photography.
## Validation
Validation is qualitative. The model is evaluated by behavioral coherence: does the output feel cinematically appropriate for the emotional input? Formal quantitative benchmarks are not meaningful for a model of this size and purpose.
## Intended Use
Part of [VIBE-Eyes](https://github.com/brainwavecollective/vibe-eyes), a real-time emotional display system for conversational robots. The model runs on-device, receiving VAD+CC vectors from an edge emotion engine and driving LED color output without any cloud dependency.
Also useful as a lightweight reference implementation for anyone mapping affective state to visual expression in constrained environments.
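Putting the pieces together, the on-device loop might look like the sketch below. `get_vadcc` and `set_led` are hypothetical stand-ins for the emotion engine and LED driver interfaces; the interpreter setup and `led_frame` helper are the ones sketched earlier.
```python
import time
import numpy as np

def display_loop(interpreter, input_details, output_details, get_vadcc, set_led):
    # Hypothetical glue loop: poll the emotion engine, run the model
    # on-device, and render one LED frame per tick at roughly 30 Hz.
    t0 = time.monotonic()
    while True:
        vadcc = np.asarray([get_vadcc()], dtype=np.float32)
        interpreter.set_tensor(input_details[0]["index"], vadcc)
        interpreter.invoke()
        r, g, b, energy, intensity = interpreter.get_tensor(output_details[0]["index"])[0]
        set_led(led_frame(r, g, b, energy, intensity, time.monotonic() - t0))
        time.sleep(1 / 30)
```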
## Limitations
- Small training set (~80 anchor points): functions as a reference structure, not comprehensive coverage
- Culturally specific: draws primarily from Western cinematic tradition
- Interpretive: mappings reflect observed patterns in film, not objective measurements
## License
CC-BY-4.0: use freely with credit.