---
license: cc-by-4.0
tags:
- benchmarking
- llm
- model-evaluation
- vision
- ai
pretty_name: https://OpenMark.ai AI Model Emotion Detection Benchmark
---

# AI Model Emotion Detection Benchmark

Benchmark results from testing 11 AI models on emotion detection from movie stills, conducted on [OpenMark](https://openmark.ai), a deterministic AI model benchmarking platform.

## Methodology

- **Task:** Identify emotions from 4 movie stills (varying complexity)
- **Models tested:** 11 (GPT-5.2, Gemini 3 Pro, Gemini 3.1 Pro, Claude Opus 4.6, Claude Sonnet 4.6, Grok 4.1 Fast, Llama 4 Maverick, Qwen 3.5, Sonar, Gemini 3 Flash, Mistral Medium)
- **Runs per model:** 3 (for stability measurement)
- **Scoring:** Deterministic, task-specific evaluation
- **Costs:** Real API costs tracked per task

## Key Findings

- GPT-5.2 and Gemini 3 Pro tied at 75% accuracy
- Claude Opus 4.6 ($0.025/task) scored identically to Llama 4 Maverick ($0.002/task), a 12x price difference
- Half the models showed ±1.000 stability variance (changed answers across runs)
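OpenMark's exact stability metric isn't specified in this card; as a rough illustration only, a half-range spread over each model's 3 run scores would capture the "changed answers across runs" behaviour noted above (the function name and the ± convention here are assumptions, not OpenMark's implementation):

```python
def stability_spread(run_scores):
    """Hypothetical stability measure: half the score range across
    repeated runs of the same task. 0.0 means the model scored
    identically on every run; larger values mean it changed answers."""
    return (max(run_scores) - min(run_scores)) / 2

# A perfectly stable model: identical score on all 3 runs
print(stability_spread([0.75, 0.75, 0.75]))  # 0.0

# An unstable model whose score swings between runs
print(stability_spread([0.25, 0.75, 0.50]))  # 0.25
```

A run count of 3 keeps API cost low but only coarsely bounds the spread; more runs would tighten the estimate.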

## Source

Data generated using [OpenMark](https://openmark.ai). Full analysis: [I benchmarked 10 ai models on reading human emotions](https://dev.to/openmarkai/i-benchmarked-10-ai-models-on-reading-human-emotions-3m0b)