<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Live Expression Reader</title>
    <meta
      name="description"
      content="Real-time facial expression analysis with calibrated confidence, cognitive states, and FACS-grounded explanations. Runs entirely in your browser."
    />
    <script src="https://cdn.tailwindcss.com"></script>
    <link rel="preconnect" href="https://fonts.googleapis.com" />
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
    <link
      rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&family=JetBrains+Mono:wght@400;500&display=swap"
    />
    <style>
      body { font-family: 'Inter', system-ui, -apple-system, sans-serif; }
      .font-mono, code, kbd { font-family: 'JetBrains Mono', ui-monospace, SFMono-Regular, Menlo, monospace; }
      kbd {
        display: inline-block;
        padding: 0.05rem 0.35rem;
        font-size: 0.78em;
        border: 1px solid rgb(64, 64, 64);
        border-bottom-width: 2px;
        border-radius: 0.25rem;
        background: rgb(23, 23, 23);
        color: rgb(229, 229, 229);
      }
      /* When the AI proxy isn't configured, hide every AI-related affordance.
         Toggled via the body.no-ai class set in main.ts at startup. */
      body.no-ai .ai-feature { display: none; }
    </style>
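    <!--
      The stylesheet above hides .ai-feature elements whenever body carries the
      no-ai class. A minimal sketch of the startup check that could set it
      (hedged: the env-var name and the helper below are illustrative, not the
      app's actual main.ts code):

    ```typescript
    // Illustrative helper: AI affordances are hidden when no proxy URL is set.
    function shouldHideAi(proxyUrl: string | undefined): boolean {
      return proxyUrl === undefined || proxyUrl.trim() === "";
    }

    // main.ts could then apply the class this stylesheet keys on, e.g.:
    //   if (shouldHideAi(import.meta.env.VITE_AI_PROXY_URL)) {
    //     document.body.classList.add("no-ai");
    //   }
    ```
    -->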
    <script type="module" crossorigin src="./assets/index-QsYxV-23.js"></script>
  </head>
  <body
    class="bg-gradient-to-b from-neutral-950 via-neutral-950 to-neutral-900 text-neutral-100 min-h-screen"
  >
    <main class="max-w-5xl mx-auto p-4 sm:p-6">
      <header class="mb-5 flex items-start justify-between gap-4 flex-wrap">
        <div>
          <h1 class="text-2xl sm:text-3xl font-bold tracking-tight">
            Live Expression Reader
          </h1>
          <p class="text-sm text-neutral-400 mt-1 max-w-2xl">
            Real-time facial analysis with calibrated confidence, cognitive
            states, and FACS-grounded AI explanations.
          </p>
          <p class="text-xs text-neutral-500 mt-1.5 flex items-center gap-1.5">
            <span class="inline-block w-1.5 h-1.5 rounded-full bg-emerald-400"></span>
            Runs entirely in your browser. No frames leave your device.
          </p>
        </div>
        <button
          id="help-toggle"
          class="shrink-0 px-3 py-1.5 rounded-lg border border-neutral-800 hover:border-neutral-600 hover:bg-neutral-900 text-sm flex items-center gap-1.5 transition-colors"
          aria-label="Open help"
        >
          <span
            class="inline-flex items-center justify-center w-4 h-4 rounded-full bg-indigo-600/20 border border-indigo-500/50 text-indigo-300 text-[11px] font-bold"
            >?</span>
          Help
        </button>
      </header>
      <section
        id="video-container"
        class="relative rounded-xl overflow-hidden bg-black shadow-2xl ring-1 ring-neutral-800/60 mx-auto"
        style="aspect-ratio: 16 / 9; max-height: 75vh; width: 100%"
      >
        <video
          id="webcam"
          class="w-full h-full object-contain scale-x-[-1]"
          autoplay
          muted
          playsinline
        ></video>
        <canvas
          id="overlay"
          class="absolute inset-0 w-full h-full pointer-events-none object-contain scale-x-[-1]"
        ></canvas>
        <div
          id="personal-calib-overlay"
          class="absolute inset-0 hidden flex-col items-center justify-center bg-black/75 backdrop-blur-sm"
        >
          <div
            class="bg-neutral-950/95 border-2 border-rose-500 rounded-2xl px-10 py-8 shadow-2xl flex flex-col items-center gap-3 min-w-[320px] max-w-[80%]"
          >
            <div
              class="flex items-center gap-2 text-rose-400 text-xs font-bold tracking-[0.2em] uppercase"
            >
              <span class="relative flex h-2.5 w-2.5">
                <span
                  class="absolute inline-flex h-full w-full animate-ping rounded-full bg-rose-400 opacity-75"
                ></span>
                <span
                  class="relative inline-flex rounded-full h-2.5 w-2.5 bg-rose-500"
                ></span>
              </span>
              Recording template
            </div>
            <div class="text-neutral-400 text-sm">Show your</div>
            <div
              id="personal-calib-emotion"
              class="text-3xl sm:text-5xl font-bold text-white capitalize tracking-tight"
            >
              happy
            </div>
            <div class="text-neutral-400 text-sm">face – hold it steady</div>
            <div
              id="personal-calib-countdown"
              class="mt-1 text-4xl font-mono tabular-nums text-rose-300"
            >
              3
            </div>
            <div class="w-full h-1.5 bg-neutral-800 rounded overflow-hidden">
              <div
                id="personal-calib-progress"
                class="h-full bg-rose-500 transition-[width] duration-100 ease-linear"
                style="width: 0%"
              ></div>
            </div>
          </div>
        </div>
        <div
          id="loading"
          class="absolute inset-0 hidden items-center justify-center bg-black/70 backdrop-blur-sm text-sm gap-3"
        >
          <svg
            class="animate-spin h-5 w-5 text-indigo-400"
            viewBox="0 0 24 24"
            fill="none"
            aria-hidden="true"
          >
            <circle
              cx="12"
              cy="12"
              r="10"
              stroke="currentColor"
              stroke-width="3"
              stroke-opacity="0.25"
            ></circle>
            <path
              d="M22 12a10 10 0 0 1-10 10"
              stroke="currentColor"
              stroke-width="3"
              stroke-linecap="round"
            ></path>
          </svg>
          <span id="loading-msg">Loading face model…</span>
        </div>
      </section>
      <section class="mt-4 flex gap-2 flex-wrap items-center">
        <div class="flex gap-2 flex-wrap">
          <button
            id="pause"
            class="px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm font-medium transition-colors"
          >
            Pause
          </button>
          <button
            id="explain"
            class="ai-feature px-3 py-1.5 rounded-lg bg-indigo-600 hover:bg-indigo-500 text-sm font-medium transition-colors"
          >
            Why?
          </button>
          <button
            id="recalibrate"
            class="px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm transition-colors"
            title="Recapture your neutral baseline (3s)"
          >
            Recalibrate
          </button>
        </div>
        <div class="hidden sm:block w-px h-5 bg-neutral-800 mx-1" aria-hidden="true"></div>
        <div class="flex gap-2 flex-wrap">
          <button
            id="export"
            class="px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm transition-colors"
            title="Download this session's timeline as JSON"
          >
            Export
          </button>
          <button
            id="record"
            class="px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm flex items-center gap-1.5 transition-colors"
            title="Capture a clip of this session's data for later analysis (max 10 min)"
          >
            <span
              id="record-dot"
              class="w-2 h-2 rounded-full bg-rose-500 hidden"
            ></span>
            <span id="record-label">Record</span>
          </button>
        </div>
        <div class="ai-feature flex items-center gap-2 ml-auto">
          <label
            for="window-slider"
            class="text-xs text-neutral-400 select-none"
          >
            Why? window
          </label>
          <input
            id="window-slider"
            type="range"
            min="2"
            max="30"
            value="5"
            step="1"
            class="w-28 accent-indigo-500"
          />
          <span
            id="window-value"
            class="text-xs text-neutral-300 w-8 tabular-nums"
            >5s</span
          >
        </div>
      </section>
      <section class="ai-feature mt-6">
        <h2
          class="text-xs font-semibold uppercase tracking-[0.18em] text-neutral-500 mb-2 px-1"
        >
          Conversation
        </h2>
        <div
          id="explanation"
          class="p-4 rounded-xl bg-neutral-900/80 ring-1 ring-neutral-800/60 text-sm text-neutral-200 min-h-[3rem] max-h-96 overflow-y-auto"
        ></div>
        <form id="chat-form" class="mt-2 flex gap-2 items-stretch flex-wrap">
          <input
            id="chat-input"
            type="text"
            placeholder="Ask a question about your expression…"
            class="flex-1 min-w-[200px] px-3 py-1.5 rounded-lg bg-neutral-950 border border-neutral-800 text-sm focus:outline-none focus:border-indigo-600 focus:ring-1 focus:ring-indigo-600/40 transition-colors"
            autocomplete="off"
          />
          <button
            type="submit"
            id="chat-send"
            class="px-3 py-1.5 rounded-lg bg-indigo-600 hover:bg-indigo-500 text-sm font-medium transition-colors"
          >
            Ask
          </button>
          <button
            type="button"
            id="summarize-session"
            class="px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm transition-colors"
            title="Summarize the entire session so far"
          >
            Summarize session
          </button>
          <button
            type="button"
            id="discuss-recording"
            class="hidden px-3 py-1.5 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-sm transition-colors"
            title="Analyze the saved recording with the chat"
          >
            Discuss recording
          </button>
        </form>
      </section>
      <section class="mt-6">
        <button
          id="personal-calib-toggle"
          class="text-sm text-neutral-400 hover:text-neutral-200 transition-colors"
          aria-expanded="false"
          aria-controls="personal-calib-panel"
        >
          ▸ Personal emotion calibration (advanced)
        </button>
        <div
          id="personal-calib-panel"
          class="hidden mt-3 p-4 rounded-xl bg-neutral-900/80 ring-1 ring-neutral-800/60"
        >
          <div
            class="flex gap-3 p-3 mb-3 rounded-lg bg-amber-950/40 border border-amber-700/60"
          >
            <span class="text-amber-300 text-lg leading-tight">⚠</span>
            <div class="text-sm text-amber-100">
              <p class="font-semibold mb-1">Experimental – read first.</p>
              <p>
                Recording personal templates lets the app learn what
                <em>your</em> happy / sad / angry face looks like. Replacing an
                existing template overwrites it for the rest of this session.
                The model already works without this – only do it if you
                understand what you're changing.
              </p>
            </div>
          </div>
          <p class="text-xs text-neutral-400 mb-3">
            Click an emotion, then hold that expression for 3 seconds. Templates
            stay in this browser tab only and are lost on refresh (unless you
            opt in to persistence below).
          </p>
          <div
            id="personal-calib-grid"
            class="grid grid-cols-2 sm:grid-cols-4 gap-2"
          ></div>
          <div
            class="mt-4 pt-3 border-t border-neutral-800 flex items-center gap-3 flex-wrap"
          >
            <label
              class="flex items-center gap-2 text-sm text-neutral-300 cursor-pointer select-none"
            >
              <input
                id="storage-enabled"
                type="checkbox"
                class="accent-indigo-500"
              />
              Remember my templates on this device
            </label>
            <span id="storage-status" class="text-xs text-neutral-500"></span>
            <button
              id="storage-clear"
              class="ml-auto px-2 py-1 rounded text-xs text-rose-300 border border-rose-900/60 hover:bg-rose-950/40 transition-colors"
            >
              Clear stored data
            </button>
          </div>
        </div>
      </section>
      <footer
        class="mt-10 pt-4 border-t border-neutral-900 text-xs text-neutral-500 flex flex-wrap gap-x-3 gap-y-1 items-center"
      >
        <span>Apache-2.0 / MIT throughout</span>
        <span aria-hidden="true">·</span>
        <span>HSEmotion + MediaPipe + LLM (proxied)</span>
        <span aria-hidden="true">·</span>
        <span>Calibrated, in-browser, no tracking</span>
        <span aria-hidden="true">·</span>
        <a
          href="https://github.com/Arjun10g/live-expression-reader"
          target="_blank"
          rel="noopener"
          class="inline-flex items-center gap-1 text-neutral-400 hover:text-indigo-300 underline decoration-dotted underline-offset-2 transition-colors"
        >
          GitHub ↗
        </a>
      </footer>
    </main>
    <!-- Help modal -->
    <div
      id="help-modal"
      class="hidden fixed inset-0 z-50 bg-black/75 backdrop-blur-sm overflow-y-auto"
      role="dialog"
      aria-modal="true"
      aria-labelledby="help-title"
    >
      <div class="min-h-full flex items-start justify-center p-4 sm:p-8">
        <div
          class="bg-neutral-900 border border-neutral-800 rounded-2xl max-w-2xl w-full p-6 sm:p-8 shadow-2xl"
        >
          <div class="flex items-start justify-between gap-4 mb-5">
            <div>
              <h2 id="help-title" class="text-xl sm:text-2xl font-semibold">
                How to use
              </h2>
              <p class="text-xs text-neutral-500 mt-1">
                The whole app runs in this browser. Frames never leave your
                device.
              </p>
            </div>
            <button
              id="help-close"
              class="shrink-0 w-8 h-8 rounded-lg bg-neutral-800 hover:bg-neutral-700 text-lg leading-none flex items-center justify-center transition-colors"
              aria-label="Close help"
            >
              ×
            </button>
          </div>
          <div class="space-y-5 text-sm text-neutral-300 leading-relaxed">
            <section>
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                1. Get started
              </h3>
              <ol
                class="list-decimal list-inside space-y-1 text-neutral-300 marker:text-neutral-500"
              >
                <li>
                  Click <kbd>Start</kbd> and allow camera access.
                </li>
                <li>
                  Hold a neutral, relaxed face for 3 seconds while the
                  red-bordered overlay runs. This captures your resting
                  baseline.
                </li>
                <li>
                  After that you'll see the live readout panel attached to
                  your face. Cognitive states (tired, focused, etc.) take
                  about a minute to warm up – they need a one-minute window
                  per the literature (PERCLOS, blink rate).
                </li>
              </ol>
            </section>
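            <!--
              The one-minute warm-up above exists because drowsiness measures
              like PERCLOS are defined over a rolling window. A hedged sketch
              (the function name and the 0.8 closure threshold are
              illustrative, not the app's actual implementation):

            ```typescript
            // PERCLOS: fraction of samples in a window where the eyes are
            // mostly closed. `eyeClosure` holds per-frame values in [0, 1].
            function perclos(eyeClosure: number[], threshold = 0.8): number {
              if (eyeClosure.length === 0) return 0;
              const closed = eyeClosure.filter((c) => c >= threshold).length;
              return closed / eyeClosure.length;
            }
            ```
            -->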
            <section>
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                2. What you'll see
              </h3>
              <ul class="space-y-1.5 text-neutral-300">
                <li>
                  <span class="text-indigo-300 font-medium">Top 3 emotions</span>
                  – calibrated probabilities from HSEmotion (AffectNet 8-class).
                  Bar opacity scales with confidence.
                </li>
                <li>
                  <span class="text-amber-300 font-medium">Compound label</span>
                  – when the top two emotions stay close for a sustained
                  stretch, you'll see e.g. <em>"bittersweet"</em> or
                  <em>"angrily disgusted"</em> (Du, Tao & Martinez 2014).
                </li>
                <li>
                  <span class="text-neutral-200 font-medium">v / a</span> –
                  Russell circumplex valence (negative → positive) and arousal
                  (low → high). The 2D inset on the bottom-left shows the last
                  ~2 seconds.
                </li>
                <li>
                  <span class="text-neutral-200 font-medium">Intensity</span>
                  – overall facial activity above your resting baseline,
                  independent of emotion classification.
                </li>
                <li>
                  <span class="text-neutral-200 font-medium">Sparkline</span>
                  – top-1 confidence over the last ~6 s. Flat = stable; jagged
                  = the model is changing its mind.
                </li>
                <li>
                  <span class="text-emerald-300 font-medium">States</span>
                  – engaged / focused / tired / bored / stressed / confused /
                  calm. Heuristics grounded in published FACS literature
                  (Stern 1984, Wierwille 1994, D'Mello & Graesser 2010,
                  Whitehill 2014, Russell 1980).
                </li>
                <li>
                  <span class="text-neutral-200 font-medium">Active muscles</span>
                  – top 3 ARKit blendshapes after baseline subtraction. The
                  same data the AI sees.
                </li>
                <li>
                  <span class="text-cyan-300 font-medium">Personal pick</span>
                  – appears once you've calibrated 2+ emotions. Cyan if it
                  agrees with the model, amber if it disagrees.
                </li>
              </ul>
            </section>
            <section class="ai-feature">
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                3. Ask the AI
              </h3>
              <ul class="space-y-1.5 text-neutral-300">
                <li>
                  <kbd>Why?</kbd> – the AI summarizes what's happening over
                  the slider window (drag the slider to set 2–30 s).
                </li>
                <li>
                  <kbd>Summarize session</kbd> – the AI summarizes the entire
                  session since you last calibrated.
                </li>
                <li>
                  Type a follow-up in the chat box and hit
                  <kbd>Ask</kbd>. Conversation history is preserved, so
                  follow-ups have context.
                </li>
                <li>
                  All AI calls go through a privacy-preserving proxy –
                  numerical features only, no frames.
                </li>
              </ul>
            </section>
            <section>
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                4. Record & export
              </h3>
              <ul class="space-y-1.5 text-neutral-300">
                <li>
                  <kbd>Record</kbd> – captures a clip of session data (max
                  10 minutes).<span class="ai-feature">
                  After stopping, click <kbd>Discuss recording</kbd> to have
                  the AI analyze the whole clip.</span>
                </li>
                <li>
                  <kbd>Export</kbd> – downloads the session timeline as a JSON
                  file (timestamps, emotions, V/A, intensity, top blendshapes).
                  No frames; just numbers. Useful for research or self-review.
                </li>
              </ul>
            </section>
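            <!--
              For readers of the source: one exported timeline sample might
              look like the sketch below. The field names are an assumption
              based on the list above, not the app's exact schema.

            ```typescript
            // Hypothetical shape of one sample in the exported JSON timeline.
            interface TimelineSample {
              t: number;                          // ms since session start
              emotions: Record<string, number>;   // calibrated probabilities
              valence: number;                    // −1 … 1
              arousal: number;                    // −1 … 1
              intensity: number;                  // activity above baseline
              topBlendshapes: [string, number][]; // name/weight pairs
            }

            const sample: TimelineSample = {
              t: 1500,
              emotions: { happy: 0.72, neutral: 0.18, surprise: 0.1 },
              valence: 0.55,
              arousal: 0.3,
              intensity: 0.4,
              topBlendshapes: [["mouthSmileLeft", 0.61], ["cheekSquintLeft", 0.33]],
            };
            ```
            -->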
            <section>
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                5. Personal calibration (advanced)
              </h3>
              <p>
                Open the <em>"Personal emotion calibration"</em> section. For
                each emotion, click its button and hold that expression for
                3 seconds. Once you've recorded two or more emotions, a
                <em>personal classifier</em> runs alongside the model and
                surfaces in the panel. Templates stay in this browser tab
                unless you opt in to persistence.
              </p>
            </section>
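            <!--
              A hedged sketch of how a personal pick could be computed from
              recorded templates: nearest template by cosine similarity over
              the blendshape vector. (Illustrative only; the app's real
              personal classifier may use a different metric.)

            ```typescript
            // Cosine similarity between two equal-length feature vectors.
            function cosine(a: number[], b: number[]): number {
              let dot = 0, na = 0, nb = 0;
              for (let i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
              }
              return na > 0 && nb > 0 ? dot / Math.sqrt(na * nb) : 0;
            }

            // Label of the closest template, or null when none are recorded.
            function personalPick(
              current: number[],
              templates: Map<string, number[]>
            ): string | null {
              let best: string | null = null;
              let bestScore = -Infinity;
              for (const [label, template] of templates) {
                const score = cosine(current, template);
                if (score > bestScore) {
                  bestScore = score;
                  best = label;
                }
              }
              return best;
            }
            ```
            -->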
            <section>
              <h3 class="font-semibold text-neutral-100 mb-1.5">
                6. Tips
              </h3>
              <ul class="space-y-1.5 text-neutral-300">
                <li>
                  Good lighting, face centered. Side-lighting is fine; backlit
                  is hard.
                </li>
                <li>
                  Recalibrate if you change posture or lighting, or move
                  closer to or further from the camera.
                </li>
                <li>
                  Yawning, talking, eating, and long blinks confound the
                  emotion classifier – the AI is told to look for those
                  artifacts before reaching for an emotional narrative.
                </li>
              </ul>
            </section>
            <section
              class="rounded-lg bg-emerald-950/30 border border-emerald-800/50 p-3"
            >
              <h3 class="font-semibold text-emerald-200 mb-1">Privacy</h3>
              <p class="text-emerald-100/90 text-xs leading-relaxed">
                Webcam frames stay on this device. <span class="ai-feature">Only numerical features
                (blendshapes, calibrated probabilities, valence / arousal)
                are sent to the AI, and only when you click Why? / Ask /
                Summarize. </span>Personal templates and saved recordings live only
                in this browser tab; the storage opt-in is off by default.
              </p>
            </section>
            <section
              class="rounded-lg bg-neutral-950/60 border border-neutral-800 p-3"
            >
              <h3 class="font-semibold text-neutral-200 mb-1.5">
                Source & license
              </h3>
              <p class="text-neutral-400 text-xs leading-relaxed">
                Apache-2.0. Built on HSEmotion, MediaPipe, onnxruntime-web,
                Vite, and TypeScript.
                <a
                  href="https://github.com/Arjun10g/live-expression-reader"
                  target="_blank"
                  rel="noopener"
                  class="text-indigo-300 hover:text-indigo-200 underline decoration-dotted underline-offset-2"
                  >View the source on GitHub ↗</a
                >.
              </p>
            </section>
          </div>
          <div class="mt-6 flex justify-end">
            <button
              id="help-close-bottom"
              class="px-4 py-2 rounded-lg bg-indigo-600 hover:bg-indigo-500 text-sm font-medium transition-colors"
            >
              Got it
            </button>
          </div>
        </div>
      </div>
    </div>
  </body>
</html>