---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- text-classification
task_ids:
- emotion-classification
tags:
- pragmatic-reasoning
- theory-of-mind
- emotion-inference
- indirect-speech
- benchmark
- multi-annotator
- plutchik-emotions
- vad-dimensions
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: subtype
    dtype: string
  - name: context
    dtype: string
  - name: speaker
    dtype: string
  - name: listener
    dtype: string
  - name: utterance
    dtype: string
  - name: power_relation
    dtype: string
  - name: social_context
    dtype: string
  - name: gold_standard
    dtype: string
  splits:
  - name: train
    num_examples: 210
  - name: validation
    num_examples: 45
  - name: test
    num_examples: 45
---
# CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models

## Dataset Description
CEI (Contextual Emotional Inference) is a benchmark of 300 expert-authored scenarios for evaluating how well language models interpret pragmatically complex utterances in social contexts. Each scenario presents a communicative exchange involving indirect speech (sarcasm, mixed signals, strategic politeness, passive aggression, or deflection) where the speaker's literal words diverge from their actual emotional state.
- Paper: CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models (DMLR 2026)
- Repository: https://github.com/jon-chun/cei-tom-dataset-base
- Zenodo: https://doi.org/10.5281/zenodo.18528706
- License: CC-BY-4.0 (data), MIT (code)
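Once the dataset is available on the Hugging Face Hub, the standard `datasets` loader applies. A minimal sketch, assuming a hypothetical Hub id (substitute the actual repository id when published):

```python
from datasets import load_dataset

# "jon-chun/cei-benchmark" is a hypothetical Hub id; replace it with the
# actual repository id once the dataset is published.
ds = load_dataset("jon-chun/cei-benchmark")

print(ds)  # DatasetDict with train (210), validation (45), and test (45) splits
example = ds["train"][0]
print(example["subtype"], example["power_relation"], example["gold_standard"])
print(example["utterance"])
```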
## Dataset Structure

### Scenarios
- 300 scenarios across 5 pragmatic subtypes (60 each)
- 3 independent annotations per scenario (900 total)
- Predefined splits: train (210), validation (45), test (45), stratified by subtype and power relation
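A quick sanity check of the split sizes and subtype stratification, reusing `ds` from the loading sketch above:

```python
from collections import Counter

# Each split should hold roughly equal counts of the 5 pragmatic subtypes
# (60 scenarios per subtype across the full 300).
for split in ("train", "validation", "test"):
    counts = Counter(ds[split]["subtype"])
    print(f"{split}: n={len(ds[split])}, subtypes={dict(counts)}")
```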
### Pragmatic Subtypes
| Subtype | Description | Fleiss' kappa |
|---|---|---|
| Sarcasm/Irony | Speaker says the opposite of what they mean | 0.25 |
| Passive Aggression | Hostility expressed through superficial compliance | 0.22 |
| Strategic Politeness | Polite language masking negative intent | 0.20 |
| Mixed Signals | Contradictory verbal and contextual cues | 0.16 |
| Deflection/Misdirection | Speaker redirects to avoid revealing feelings | 0.06 |
### Labels
- Primary emotion: One of Plutchik's 8 basic emotions (joy, trust, fear, surprise, sadness, disgust, anger, anticipation)
- VAD ratings: Valence, Arousal, Dominance on 7-point scales mapped to [-1.0, +1.0] (mapping sketched after this list)
- Confidence: Annotator self-reported confidence
- Gold standard: Majority vote with expert adjudication
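A minimal sketch of these label conventions, assuming the 7-point VAD scales use integer ratings 1-7 and that ties among the three annotators go to expert adjudication (the helper names are illustrative, not from the released code):

```python
from collections import Counter

def vad_to_unit(raw: int) -> float:
    """Map a 7-point rating (assumed 1..7) linearly onto [-1.0, +1.0]."""
    return (raw - 4) / 3.0

def majority_label(annotations: list[str]) -> str | None:
    """Majority vote over the three per-annotator emotion labels.

    Returns None on a three-way split; the gold standard resolves those
    by expert adjudication rather than automatically.
    """
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

assert vad_to_unit(1) == -1.0 and vad_to_unit(4) == 0.0 and vad_to_unit(7) == 1.0
print(majority_label(["anger", "disgust", "anger"]))  # anger
print(majority_label(["anger", "disgust", "joy"]))    # None -> adjudicate
```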
### Power Relations
- Peer (72%), High-to-Low authority (20%), Low-to-High authority (7%)
## Key Statistics
- Inter-annotator agreement: Overall kappa = 0.21 (fair), ranging from 0.06 (deflection) to 0.25 (sarcasm); a computation sketch follows this list
- Human accuracy (vs. gold): 61% mean; annotators were unanimous on 14.3% of scenarios and split three ways on 31.3%
- Best LLM baseline: 25.7% accuracy (Phi-4, zero-shot) vs. 54% human majority agreement
- Random baseline: 12.5% (8-class)
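The kappa figures above are computed from the three annotations per scenario. A sketch of that computation on toy data, assuming per-annotator labels are available as integer-encoded emotions (the array below is illustrative, not real annotations):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per scenario, one column per annotator; values encode the 8
# Plutchik emotions as integers 0..7. Toy data only.
ratings = np.array([
    [6, 6, 5],  # two votes anger, one disgust
    [0, 0, 0],  # unanimous joy
    [6, 5, 0],  # three-way split
])
table, _ = aggregate_raters(ratings)         # scenario x category count table
print(fleiss_kappa(table, method="fleiss"))  # overall agreement
```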
## Intended Uses
- Benchmarking LLM pragmatic reasoning capabilities
- Diagnosing model failure modes on indirect speech subtypes
- Research on emotion inference, social AI, Theory of Mind
- Soft-label training using per-annotator distributions
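For the soft-label use case, the three annotations per scenario can be turned into a distribution over the 8 emotions instead of a single hard label. A minimal sketch, assuming per-annotator emotion labels are available (`soft_label` is an illustrative helper, not part of the released code):

```python
from collections import Counter

# Plutchik's 8 basic emotions, in a fixed index order.
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def soft_label(annotations: list[str]) -> list[float]:
    """Convert per-annotator votes into a probability vector over the 8
    emotions, usable as a soft target for cross-entropy training."""
    counts = Counter(annotations)
    return [counts[e] / len(annotations) for e in EMOTIONS]

print(soft_label(["anger", "disgust", "anger"]))
# -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.33, 0.67, 0.0] (approximately)
```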
## Limitations
- All scenarios are expert-authored (not naturalistic)
- English only
- 15 undergraduate annotators from a single institution
- Small scale (300 scenarios) optimized for annotation quality over quantity
## Citation
```bibtex
@article{chun2026cei,
  title={CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models},
  author={Chun, Jon and Sussman, Hannah and Mangine, Adrian and Kocaman, Murathan and Sidorko, Kirill and Koirala, Abhigya and McCloud, Andre and Eisenbeis, Gwen and Akanwe, Wisdom and Gassama, Moustapha and Gonzalez Chirinos, Eliezer and Enright, Anne-Duncan and Dunson, Peter and Ng, Tiffanie and von Rosenstiel, Anna and Idowu, Godwin},
  journal={Journal of Data-centric Machine Learning Research (DMLR)},
  year={2026}
}
```