CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models

Dataset Description

CEI (Contextual Emotional Inference) is a benchmark of 300 expert-authored scenarios for evaluating how well language models interpret pragmatically complex utterances in social contexts. Each scenario presents a communicative exchange involving indirect speech (sarcasm, mixed signals, strategic politeness, passive aggression, or deflection) where the speaker's literal words diverge from their actual emotional state.

Dataset Structure

Scenarios

  • 300 scenarios across 5 pragmatic subtypes (60 each)
  • 3 independent annotations per scenario (900 total)
  • Predefined splits: train (210), validation (45), test (45), stratified by subtype and power relation
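
A minimal loading sketch using the Hugging Face datasets library; the repository ID below is a placeholder (the card does not state the Hub path), and the split names follow the predefined splits above:

```python
from datasets import load_dataset

# "username/cei" is a hypothetical repository ID -- substitute the
# actual Hub path for this dataset.
splits = load_dataset("username/cei")

train = splits["train"]            # 210 scenarios
validation = splits["validation"]  # 45 scenarios
test = splits["test"]              # 45 scenarios

print(train[0])  # one scenario with its annotations and gold-standard label
```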

Pragmatic Subtypes

| Subtype | Description | Fleiss' kappa |
|---|---|---|
| Sarcasm/Irony | Speaker says the opposite of what they mean | 0.25 |
| Passive Aggression | Hostility expressed through superficial compliance | 0.22 |
| Strategic Politeness | Polite language masking negative intent | 0.20 |
| Mixed Signals | Contradictory verbal and contextual cues | 0.16 |
| Deflection/Misdirection | Speaker redirects to avoid revealing feelings | 0.06 |

Labels

  • Primary emotion: One of Plutchik's 8 basic emotions (joy, trust, fear, surprise, sadness, disgust, anger, anticipation)
  • VAD ratings: Valence, Arousal, Dominance on 7-point scales mapped to [-1.0, +1.0] (see the mapping sketch after this list)
  • Confidence: Annotator self-reported confidence
  • Gold standard: Majority vote with expert adjudication
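
The card gives the target range for VAD but not the exact transform; a symmetric linear mapping with 4 as the neutral midpoint is the natural reading. A minimal sketch under that assumption:

```python
def vad_to_unit(rating: int) -> float:
    """Map a 7-point VAD rating (1..7) onto [-1.0, +1.0].

    Assumes a symmetric linear transform centered on 4; the card
    states the target range but not the exact mapping.
    """
    if not 1 <= rating <= 7:
        raise ValueError(f"expected a rating in 1..7, got {rating}")
    return (rating - 4) / 3.0


assert vad_to_unit(1) == -1.0 and vad_to_unit(4) == 0.0 and vad_to_unit(7) == 1.0
```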

Power Relations

  • Peer (72%), High-to-Low authority (20%), Low-to-High authority (7%)

Key Statistics

  • Inter-annotator agreement: Overall kappa = 0.21 (fair), ranging from 0.06 (deflection) to 0.25 (sarcasm); a Fleiss' kappa sketch follows this list
  • Human accuracy (vs. gold): 61% mean, 14.3% unanimous, 31.3% three-way split
  • Best LLM baseline: 25.7% accuracy (Phi-4, zero-shot) vs. 54% human majority agreement
  • Random baseline: 12.5% (8-class)
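
Per-subtype agreement figures like those above can be recomputed with the Fleiss' kappa implementation in statsmodels. The three-annotator layout below is illustrative; the actual per-annotator column names are not fixed by this card:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per scenario, one column per annotator; each cell is a
# Plutchik primary-emotion label. Toy data for illustration only.
labels = np.array([
    ["anger", "anger", "disgust"],
    ["joy", "joy", "joy"],
    ["fear", "surprise", "fear"],
])

# aggregate_raters turns (items x raters) labels into an
# (items x categories) count table, which fleiss_kappa consumes.
table, _categories = aggregate_raters(labels)
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```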

Intended Uses

  • Benchmarking LLM pragmatic reasoning capabilities
  • Diagnosing model failure modes on indirect speech subtypes
  • Research on emotion inference, social AI, Theory of Mind
  • Soft-label training using per-annotator distributions
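
For the soft-label use case, the three per-annotator Plutchik labels can be pooled into an empirical distribution over the 8 emotions. A sketch, assuming a simple count-based recipe (the card does not specify one):

```python
from collections import Counter

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def soft_label(annotations: list[str]) -> list[float]:
    """Pool per-annotator emotion labels into a soft distribution."""
    counts = Counter(annotations)
    return [counts[emotion] / len(annotations) for emotion in PLUTCHIK]

# Two annotators say anger, one says disgust:
print(soft_label(["anger", "anger", "disgust"]))
# -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.333..., 0.667..., 0.0]
```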

Limitations

  • All scenarios are expert-authored (not naturalistic)
  • English only
  • 15 undergraduate annotators from a single institution
  • Small scale (300 scenarios) optimized for annotation quality over quantity

Citation

@article{chun2026cei,
  title={CEI: A Benchmark for Evaluating Pragmatic Reasoning in Language Models},
  author={Chun, Jon and Sussman, Hannah and Mangine, Adrian and Kocaman, Murathan and Sidorko, Kirill and Koirala, Abhigya and McCloud, Andre and Eisenbeis, Gwen and Akanwe, Wisdom and Gassama, Moustapha and Gonzalez Chirinos, Eliezer and Enright, Anne-Duncan and Dunson, Peter and Ng, Tiffanie and von Rosenstiel, Anna and Idowu, Godwin},
  journal={Journal of Data-centric Machine Learning Research (DMLR)},
  year={2026}
}