AI Model Emotion Detection Benchmark

Benchmark results from testing 11 AI models on emotion detection from movie stills, conducted on OpenMark, a deterministic AI model benchmarking platform.

Results

| Model | Provider | Score (%) | Score (Raw) | Max Score | Stability | Rec. Temp | Pricing Tier | Cost ($) | Time (s) | Acc/$ | Acc/min | Completion (%) | Input Tokens (avg/run) | Output Tokens (avg/run) | Status |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| claude-sonnet-4.6 | anthropic | 50 | 2 | 4 | ±0.000 | 0.3 | High | 0.014787 | 8.19 | 135.25 | 14.65 | 100 | 4,849 | 16 | completed |
| claude-opus-4.6 | anthropic | 50 | 2 | 4 | ±0.000 | 0.3 | Very High | 0.024645 | 12.41 | 81.15 | 9.67 | 100 | 4,849 | 16 | completed |
| gpt-5.2 | openai | 75 | 3 | 4 | ±0.000 | 0.3 | High | 0.008537 | 7.59 | 351.43 | 23.72 | 100 | 4,750 | 16 | completed |
| Qwen3.5-397B-A17B | qwen | 50 | 2 | 4 | ±0.000 | 0.3 | Medium | 0.007312 | 87.39 | 273.54 | 1.37 | 25 | 3,372 | 1,469 | completed |
| grok-4-1-fast-reasoning | xai | 57.5 | 2.3 | 4 | ±1.000 | 0.3 | Low | 0.000917 | 15.89 | 2,507.63 | 8.68 | 100 | 1,816 | 1,108 | completed |
| mistral-medium-latest | mistral | 42.5 | 1.7 | 4 | ±1.000 | 0.3 | Medium | 0.002189 | 6.73 | 776.68 | 15.17 | 100 | 5,417 | 11 | completed |
| sonar | perplexity | 57.5 | 2.3 | 4 | ±1.000 | 0.3 | Medium | 0.02559 | 11.83 | 89.88 | 11.66 | 100 | 5,595 | 4 | completed |
| llama4-maverick | meta | 50 | 2 | 4 | ±0.000 | 0.3 | Low | 0.002023 | 7.59 | 988.82 | 15.8 | 100 | 7,466 | 8 | completed |
| gemini-3-pro | gemini | 75 | 3 | 4 | ±0.000 | 0.3 | High | 0.06139 | 65.1 | 48.87 | 2.77 | 100 | 4,535 | 4,360 | completed |
| gemini-3-flash | gemini | 67.5 | 2.7 | 4 | ±1.000 | 0.3 | Medium | 0.005999 | 14.67 | 450.04 | 11.04 | 100 | 4,535 | 1,244 | completed |
| gemini-3.1-pro | gemini | 75 | 3 | 4 | ±0.000 | 0.3 | High | 0.028054 | 27.59 | 106.94 | 6.52 | 100 | 4,535 | 1,582 | completed |
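
For programmatic use, the auto-converted Parquet version of this table can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repository ID below is a placeholder, so substitute this dataset's actual path from its Hugging Face URL.

```python
# Minimal sketch: load the benchmark table via the Hugging Face `datasets`
# library. "openmark/emotion-benchmark" is a PLACEHOLDER repo ID; use this
# dataset's real path.
from datasets import load_dataset

ds = load_dataset("openmark/emotion-benchmark", split="train")

# Each row is one model's result; rank models by accuracy per dollar.
for row in sorted(ds, key=lambda r: r["Acc/$"], reverse=True):
    print(f'{row["Model"]:<26} {row["Score (%)"]:>5}%  {row["Acc/$"]:>8.2f} acc/$')
```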

Methodology

  • Task: Identify emotions from 4 movie stills (varying complexity)
  • Models tested: 11 (GPT-5.2, Gemini 3 Pro, Gemini 3.1 Pro, Claude Opus 4.6, Claude Sonnet 4.6, Grok 4.1 Fast, Llama 4 Maverick, Qwen 3.5, Sonar, Gemini 3 Flash, Mistral Medium)
  • Runs per model: 3 (for stability measurement)
  • Scoring: Deterministic, task-specific evaluation; the derived score and efficiency columns are recomputed in the sketch after this list
  • Costs: Real API costs tracked per task
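
The derived columns are consistent with three simple formulas, inferred by checking them against the rows above rather than taken from OpenMark documentation: Score (%) = 100 × raw / max, Acc/$ = raw / cost, and Acc/min = 60 × raw / time. A sketch recomputing them for the gpt-5.2 row:

```python
# Recompute the derived columns for the gpt-5.2 row from its raw fields.
# Formulas are inferred from the table, not taken from OpenMark docs.
raw, max_score = 3, 4
cost_usd, time_s = 0.008537, 7.59

score_pct = 100 * raw / max_score   # 75.0   -> Score (%)
acc_per_dollar = raw / cost_usd     # ~351.4 -> Acc/$
acc_per_min = raw * 60 / time_s     # ~23.72 -> Acc/min

print(score_pct, round(acc_per_dollar, 2), round(acc_per_min, 2))
```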

Key Findings

  • GPT-5.2, Gemini 3 Pro, and Gemini 3.1 Pro tied for the top score at 75% accuracy
  • Claude Opus 4.6 ($0.025/task) scored identically to Llama 4 Maverick ($0.002/task) — 12x price difference
  • Four of the 11 models showed ±1.000 stability variance (changed answers across runs); this count and the price ratio are reproduced in the check below
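
Both numeric claims can be checked directly against the table; a minimal sketch using values copied from the rows above:

```python
# Verify the key findings from the table values above.
opus_cost, maverick_cost = 0.024645, 0.002023   # $/task, identical 50% scores
print(f"price ratio: {opus_cost / maverick_cost:.1f}x")  # -> 12.2x

# Models whose Stability column reads ±1.000 (answers changed across runs).
unstable = ["grok-4-1-fast-reasoning", "mistral-medium-latest",
            "sonar", "gemini-3-flash"]
print(f"{len(unstable)} of 11 models were unstable")  # -> 4 of 11
```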

Source

Data generated using OpenMark. Full analysis: "I benchmarked 10 ai models on reading human emotions"
