---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- text-classification
task_ids:
- emotion-classification
tags:
- pragmatic-reasoning
- theory-of-mind
- emotion-inference
- indirect-speech
- benchmark
- multi-annotator
- plutchik-emotions
- vad-dimensions
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: subtype
    dtype: string
  - name: context
    dtype: string
  - name: speaker
    dtype: string
  - name: listener
    dtype: string
  - name: utterance
    dtype: string
  - name: power_relation
    dtype: string
  - name: social_context
    dtype: string
  - name: gold_standard
    dtype: string
  splits:
  - name: train
    num_examples: 210
  - name: validation
    num_examples: 45
  - name: test
    num_examples: 45
---

# CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models

## Dataset Description

CEI (Contextual Emotional Inference) is a benchmark of 300 expert-authored scenarios for evaluating how well language models interpret pragmatically complex utterances in social contexts. Each scenario presents a communicative exchange involving indirect speech (sarcasm, mixed signals, strategic politeness, passive aggression, or deflection) where the speaker's literal words diverge from their actual emotional state.

- **Paper:** CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models (DMLR 2026)
- **Repository:** https://github.com/jon-chun/cei-tom-dataset-base
- **Zenodo:** https://doi.org/10.5281/zenodo.18528706
- **License:** CC-BY-4.0 (data), MIT (code)

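A minimal loading sketch, assuming the dataset is published on the Hugging Face Hub; the repo id below is a placeholder mirroring the GitHub name, so substitute the actual Hub id:

```python
from datasets import load_dataset

# Placeholder Hub id -- replace with the dataset's actual repository id.
ds = load_dataset("jon-chun/cei-tom-dataset-base")

# Predefined splits: train (210), validation (45), test (45).
print({split: ds[split].num_rows for split in ds})

# Each example carries the fields listed in the schema above.
ex = ds["train"][0]
print(ex["subtype"], ex["utterance"], ex["gold_standard"])
```
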
## Dataset Structure

### Scenarios
- **300 scenarios** across 5 pragmatic subtypes (60 each)
- **3 independent annotations** per scenario (900 total)
- **Predefined splits:** train (210), validation (45), test (45), stratified by subtype and power relation (a sanity check is sketched below)

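Continuing from the loading sketch above, the stratification can be sanity-checked by comparing subtype and power-relation proportions across splits:

```python
# Stratified splits should show roughly equal subtype shares (~0.20 each)
# and similar power-relation shares in every split.
for split in ("train", "validation", "test"):
    df = ds[split].to_pandas()
    print(split,
          df["subtype"].value_counts(normalize=True).round(2).to_dict(),
          df["power_relation"].value_counts(normalize=True).round(2).to_dict())
```
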
### Pragmatic Subtypes
| Subtype | Description | Fleiss' kappa |
|---------|-------------|---------------|
| Sarcasm/Irony | Speaker says the opposite of what they mean | 0.25 |
| Passive Aggression | Hostility expressed through superficial compliance | 0.22 |
| Strategic Politeness | Polite language masking negative intent | 0.20 |
| Mixed Signals | Contradictory verbal and contextual cues | 0.16 |
| Deflection/Misdirection | Speaker redirects to avoid revealing feelings | 0.06 |

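The per-subtype figures above are Fleiss' kappa over the three annotations per scenario. A minimal sketch of that computation, assuming the per-annotator labels (released via the repository; they are not part of this card's schema) have been collected into one list of three labels per scenario:

```python
import numpy as np

EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def fleiss_kappa(label_lists):
    """Fleiss' kappa for items that each carry the same number of
    categorical annotations (here: 3 annotators per scenario)."""
    # counts[i, j] = number of annotators assigning emotion j to item i
    counts = np.zeros((len(label_lists), len(EMOTIONS)))
    for i, labels in enumerate(label_lists):
        for lab in labels:
            counts[i, EMOTIONS.index(lab)] += 1
    n = counts[0].sum()                                     # raters per item
    p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))   # per-item agreement
    p_bar = p_i.mean()                                      # observed agreement
    p_e = ((counts.sum(axis=0) / counts.sum()) ** 2).sum()  # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```
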
### Labels
- **Primary emotion:** One of Plutchik's 8 basic emotions (joy, trust, fear, surprise, sadness, disgust, anger, anticipation)
- **VAD ratings:** Valence, Arousal, Dominance on 7-point scales mapped to [-1.0, +1.0] (one plausible mapping is sketched below)
- **Confidence:** Annotator self-reported confidence
- **Gold standard:** Majority vote with expert adjudication

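The card does not spell out the 7-point-to-interval conversion; the linear mapping below is one plausible convention consistent with those endpoints, not the paper's stated formula:

```python
def vad_to_unit(rating: int) -> float:
    """Map a 7-point rating (1..7) linearly onto [-1.0, +1.0]:
    1 -> -1.0, 4 -> 0.0, 7 -> +1.0. An assumed convention, not
    necessarily the one used to produce the released values."""
    return (rating - 4) / 3.0
```
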
### Power Relations
- Peer (72%), High-to-Low authority (20%), Low-to-High authority (7%)

## Key Statistics
- **Inter-annotator agreement:** Overall kappa = 0.21 (fair), ranging from 0.06 (deflection) to 0.25 (sarcasm)
- **Human accuracy (vs. gold):** 61% mean, 14.3% unanimous, 31.3% three-way split
- **Best LLM baseline:** 25.7% accuracy (Phi-4, zero-shot) vs. 54% human majority agreement (an evaluation sketch follows this list)
- **Random baseline:** 12.5% (8-class)

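For orientation, a hedged sketch of how a zero-shot accuracy figure like the one above can be computed, given a caller-supplied `classify` function wrapping the model under test; the prompt wording is illustrative, not the paper's exact protocol, and it assumes `gold_standard` stores the Plutchik label as a lowercase string:

```python
# Reuses the EMOTIONS list from the agreement sketch above.
def evaluate(classify, split):
    """classify(prompt) must return one of the 8 Plutchik labels."""
    correct = 0
    for ex in split:
        prompt = (
            f"Context: {ex['context']}\n"
            f"{ex['speaker']} says to {ex['listener']}: \"{ex['utterance']}\"\n"
            "Which emotion is the speaker actually feeling? "
            f"Answer with one of: {', '.join(EMOTIONS)}."
        )
        correct += classify(prompt) == ex["gold_standard"]
    return correct / len(split)

# accuracy = evaluate(my_model_fn, ds["test"])  # compare against 0.125 random
```
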
## Intended Uses
- Benchmarking LLM pragmatic reasoning capabilities
- Diagnosing model failure modes on indirect-speech subtypes
- Research on emotion inference, social AI, and Theory of Mind
- Soft-label training using per-annotator distributions (see the sketch below)

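For the soft-label use case, the three annotations per scenario can be turned into a target distribution over the 8 emotions and trained against directly; a PyTorch-flavored sketch, again assuming the per-annotator labels from the repository and reusing `EMOTIONS` from above:

```python
import torch

def soft_target(labels, emotions=EMOTIONS):
    """Turn e.g. ["anger", "anger", "disgust"] into a probability
    vector over the 8 emotions: anger 2/3, disgust 1/3."""
    t = torch.zeros(len(emotions))
    for lab in labels:
        t[emotions.index(lab)] += 1.0
    return t / t.sum()

# PyTorch's CrossEntropyLoss accepts class probabilities as targets
# (torch >= 1.10), so annotator disagreement is preserved in training:
# loss = torch.nn.CrossEntropyLoss()(logits, soft_target(labels).unsqueeze(0))
```
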
## Limitations
- All scenarios are expert-authored (not naturalistic)
- English only
- 15 undergraduate annotators from a single institution
- Small scale (300 scenarios), optimized for annotation quality over quantity

## Citation
```bibtex
@article{chun2026cei,
  title={CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models},
  author={Chun, Jon and Sussman, Hannah and Mangine, Adrian and Kocaman, Murathan and Sidorko, Kirill and Koirala, Abhigya and McCloud, Andre and Eisenbeis, Gwen and Akanwe, Wisdom and Gassama, Moustapha and Gonzalez Chirinos, Eliezer and Enright, Anne-Duncan and Dunson, Peter and Ng, Tiffanie and von Rosenstiel, Anna and Idowu, Godwin},
  journal={Journal of Data-centric Machine Learning Research (DMLR)},
  year={2026}
}
```