---
license: cc-by-4.0
tags:
- benchmarking
- llm
- model-evaluation
- vision
- ai
pretty_name: AI Model Emotion Detection Benchmark
---
# AI Model Emotion Detection Benchmark
Benchmark results from testing 11 AI models on emotion detection from movie stills, conducted on [OpenMark](https://OpenMark.ai), a deterministic AI model benchmarking platform.
## Methodology
- Task: Identify emotions from 4 movie stills of varying complexity
- Models tested: 11 (GPT-5.2, Gemini 3 Pro, Gemini 3.1 Pro, Claude Opus 4.6, Claude Sonnet 4.6, Grok 4.1 Fast, Llama 4 Maverick, Qwen 3.5, Sonar, Gemini 3 Flash, Mistral Medium)
- Runs per model: 3 (for stability measurement)
- Scoring: Deterministic, task-specific evaluation
- Costs: Real API costs tracked per task (see the aggregation sketch below)
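
The per-model summary can be reproduced from the raw results in a few lines of pandas. This is a minimal sketch, assuming the data ships as a flat file with `model`, `run`, `score`, and `cost_usd` columns; the file name and schema are assumptions, so adjust them to the actual dataset layout.

```python
import pandas as pd

# Assumed file name and columns (model, run, score, cost_usd);
# check the dataset files for the real schema.
df = pd.read_csv("results.csv")

summary = (
    df.groupby("model")
      .agg(
          accuracy=("score", "mean"),                        # mean score over tasks and runs
          spread=("score", lambda s: s.max() - s.min()),     # score spread across the 3 runs
          cost_per_task=("cost_usd", "mean"),                # average real API cost per task
      )
      .sort_values("accuracy", ascending=False)
)
print(summary)
```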
## Key Findings
- GPT-5.2 and Gemini 3 Pro tied at 75% accuracy
- Claude Opus 4.6 ($0.025/task) scored identically to Llama 4 Maverick ($0.002/task), a 12.5x price difference
- Half the models showed a stability variance of ±1.000, meaning they changed answers across the 3 runs (illustrated below)
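
To make the stability number concrete, here is a hypothetical illustration of how a ±1.000 spread arises when a model flips its answer between runs. The scores below are invented for demonstration and are not taken from the dataset.

```python
# Per-task scores for one model across the 3 runs (made-up values).
runs = {
    "run_1": [1.0, 0.0, 1.0, 1.0],
    "run_2": [1.0, 1.0, 0.0, 1.0],
    "run_3": [0.0, 1.0, 1.0, 1.0],
}

# Group scores by task, then take the max-min spread per task.
per_task = list(zip(*runs.values()))
spread = [max(task) - min(task) for task in per_task]

print(spread)       # [1.0, 1.0, 1.0, 0.0] -> answers flipped on 3 of 4 tasks
print(max(spread))  # 1.0 -> the ±1.000 variance reported above
```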
## Source
Data generated using OpenMark. Full analysis: "I benchmarked 10 ai models on reading human emotions"