Update app.py

app.py CHANGED

@@ -182,15 +182,19 @@ This section provides a simplified visualization of a more complex on-device arc
 
 In hardware deployment, the NVIDIA Jetson Orin Nano performs:
 
+• Robot hardware daemon service
+• Interactive conversational application
 • Real-time transcript ingestion
 • VAD extraction (NRC-VAD lexicon)
 • Structural language metrics (Complexity + Coherence)
 • Radial emotional amplification (Passion)
 • Cinematic nearest-exemplar alignment (Drama)
 • Dual-timescale blending (fast burst + slow baseline via Nemotron/Ollama)
-• Continuous emotional state streaming
+• Continuous emotional state streaming for display on an expression module
 
-This demo isolates the fast-loop transformation for clarity.
+This demo isolates a single loop transformation for clarity.
+
+Our low-power edge device is capable of running this loop 200x per second.
 """
 )
 
@@ -221,8 +225,8 @@ This demo isolates the fast-loop transformation for clarity.
 gr.Markdown(
 """
 **Note:**
-This plot shows a 2D Valence–Arousal projection for visualization only.
-All transformations and color inference operate on the full 5D VAD+CC vector.
+This plot shows a 2D Valence–Arousal projection for visualization only, but results are from the actual model.
+Actual transformation and color inference are more complex and operate on the full 5D VAD+CC vector.
 """
 )
 
@@ -235,7 +239,7 @@ All transformations and color inference operate on the full 5D VAD+CC vector.
 
 gr.Markdown(
 """
-The finalized VAD+CC vector is transmitted to an
+The finalized VAD+CC vector is transmitted to an expressive display module. In this example, we are converting to colors to be used for eyes.
 
 The module does not compute emotion.
 It receives the 5D emotional state and runs a trained neural model to convert it into expressive color.
@@ -243,7 +247,7 @@ It receives the 5D emotional state and runs a trained neural model to convert it
 Model used here (same as deployment):
 https://huggingface.co/danielritchie/vibe-color-model
 
-VAD+CC → Embedded Model → Color Rendering
+VAD+CC (Affect Engine) → Embedded Model → Color Rendering (Expression)
 """
 )
|
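The pipeline mentions VAD extraction from the NRC-VAD lexicon. As a rough sketch of what lexicon-based extraction involves, the snippet below averages per-word valence/arousal/dominance scores over a transcript. The lexicon entries here are invented stand-ins, not real NRC-VAD values, and `extract_vad` is a hypothetical helper, not code from app.py.

```python
# Sketch of lexicon-based VAD extraction, assuming NRC-VAD-style entries
# (word -> valence/arousal/dominance scores in [0, 1]).
# The three entries below are illustrative stand-ins, not real NRC-VAD values.
MINI_LEXICON = {
    "love":  (0.95, 0.65, 0.70),
    "storm": (0.30, 0.85, 0.40),
    "calm":  (0.75, 0.10, 0.60),
}

def extract_vad(transcript: str, lexicon=MINI_LEXICON):
    """Average the V/A/D scores of all lexicon words in the transcript.

    Falls back to a neutral (0.5, 0.5, 0.5) when no word matches.
    """
    hits = [lexicon[w] for w in transcript.lower().split() if w in lexicon]
    if not hits:
        return (0.5, 0.5, 0.5)
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))
```

A real implementation would also handle tokenization, negation, and out-of-vocabulary words; this shows only the core lookup-and-average step.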
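"Radial emotional amplification (Passion)" suggests pushing the emotional state away from a neutral center so that stronger passion exaggerates whatever emotion is present. A minimal sketch of that idea, with a hypothetical linear gain (the deployed gain curve may differ):

```python
def amplify(vad, passion: float, center: float = 0.5):
    """Radially push a VAD point away from the neutral center.

    passion = 0 leaves the point unchanged; larger values exaggerate the
    displacement from center, clamped back into [0, 1].
    Hypothetical formulation, not the deployed code.
    """
    return tuple(
        min(1.0, max(0.0, center + (x - center) * (1.0 + passion)))
        for x in vad
    )
```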
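"Cinematic nearest-exemplar alignment (Drama)" reads as pulling the state toward the closest member of a curated set of heightened, movie-like emotional states. A sketch under that assumption; the exemplar set and blend rule here are invented for illustration:

```python
import math

# Hypothetical "cinematic" exemplar states; the real exemplar set is
# internal to the deployed system.
EXEMPLARS = {
    "triumph":  (0.9, 0.8, 0.8),
    "dread":    (0.1, 0.7, 0.2),
    "serenity": (0.8, 0.1, 0.6),
}

def align(vad, drama: float):
    """Blend toward the nearest exemplar; drama in [0, 1] sets the pull."""
    nearest = min(EXEMPLARS.values(), key=lambda e: math.dist(e, vad))
    return tuple((1 - drama) * x + drama * e for x, e in zip(vad, nearest))
```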
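"Dual-timescale blending (fast burst + slow baseline)" can be pictured as two exponential moving averages over the incoming 5D states: a fast one that tracks momentary bursts and a slow one that drifts with the conversation's baseline mood. The coefficients below are illustrative; in deployment the slow baseline comes from the Nemotron/Ollama loop, which this sketch does not model.

```python
class DualTimescale:
    """Fast-burst / slow-baseline blend of successive 5D emotional states.

    Two exponential moving averages with different smoothing factors are
    mixed into one output state. Coefficients are illustrative only.
    """
    def __init__(self, fast_alpha=0.5, slow_alpha=0.05, mix=0.7):
        self.fast = self.slow = None
        self.fa, self.sa, self.mix = fast_alpha, slow_alpha, mix

    def update(self, state):
        if self.fast is None:
            # First observation seeds both averages.
            self.fast = self.slow = tuple(state)
        else:
            self.fast = tuple(self.fa * s + (1 - self.fa) * f
                              for s, f in zip(state, self.fast))
            self.slow = tuple(self.sa * s + (1 - self.sa) * w
                              for s, w in zip(state, self.slow))
        return tuple(self.mix * f + (1 - self.mix) * w
                     for f, w in zip(self.fast, self.slow))
```

At the claimed 200 updates per second, the slow average changes only a few percent per second while the fast average reacts within a handful of frames, which is the point of keeping two timescales.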
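The final step sends the 5D VAD+CC state through the trained vibe-color-model to get an eye color. The actual network is the one linked above; purely to make the interface concrete, here is a heuristic placeholder with the same shape of input and output (valence picks the hue, arousal the saturation, dominance the brightness). This is NOT the deployed model.

```python
import colorsys

def state_to_rgb(vad_cc):
    """Heuristic stand-in for the trained color model: map a 5D VAD+CC
    state to an (R, G, B) tuple in 0..255. Valence sweeps the hue from
    red toward green, arousal sets saturation, dominance sets brightness;
    the Complexity/Coherence components are ignored in this placeholder.
    """
    v, a, d, _, _ = vad_cc
    r, g, b = colorsys.hsv_to_rgb(v / 3.0, a, 0.4 + 0.6 * d)
    return tuple(round(255 * c) for c in (r, g, b))
```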