---
language:
  - en
license: cc-by-4.0
task_categories:
  - question-answering
  - text-classification
tags:
  - syntax
  - center-embedding
  - linguistic-evaluation
  - semantic-reasoning
  - structural-understanding
size_categories:
  - 1K<n<10K
configs:
  - config_name: plausible
    data_files:
      - split: train
        path: plausible.jsonl
  - config_name: implausible
    data_files:
      - split: train
        path: implausible.jsonl
---

# 🧩 CENTERBENCH

Paper: [The Dog the Cat Chased Stumped the Model: Measuring When Language Models Abandon Structure for Shortcuts](https://aclanthology.org/2026.eacl-long.19/)

Authors: Sangmitra Madhusudan, Kaige Chen, and Ali Emami

GitHub Repository: https://github.com/Sangmitra-06/CENTERBENCH

## 📄 Paper Abstract

When language models correctly parse "The cat that the dog chased meowed," are they analyzing syntax or simply familiar with dogs chasing cats? Despite extensive benchmarking, we lack methods to distinguish structural understanding from semantic pattern matching. We introduce CenterBench, a dataset of 9,720 comprehension questions on center-embedded sentences (like "The cat [that the dog chased] meowed") where relative clauses nest recursively, creating processing demands from simple to deeply nested structures. Each sentence has a syntactically identical but semantically implausible counterpart (e.g., mailmen prescribe medicine, doctors deliver mail) and six comprehension questions testing surface understanding, syntactic dependencies, and causal reasoning. Testing six models reveals that performance gaps between plausible and implausible sentences widen systematically with complexity, with models showing median gaps up to 26.8 percentage points, quantifying when they abandon structural analysis for semantic associations. Notably, semantic plausibility harms performance on questions about resulting actions, where following causal relationships matters more than semantic coherence. Reasoning models improve accuracy but their traces show semantic shortcuts, overthinking, and answer refusal. Unlike models, whose plausibility advantage systematically widens with complexity, humans show variable semantic effects. CenterBench provides the first framework to identify when models shift from structural analysis to pattern matching.

## 🗃️ Dataset

The dataset contains two subsets:

- `plausible.jsonl`: Plausible center-embedded sentences with question-answer pairs
- `implausible.jsonl`: Implausible center-embedded sentences with question-answer pairs
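
The subsets are plain JSON Lines files, so they can also be read without the `datasets` library. A minimal sketch, assuming `plausible.jsonl` has been downloaded locally from this repository:

```python
import json

# Each line of the file is one complete sentence record
# (see Dataset Structure below).
with open("plausible.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records), "sentences")
print(records[0]["sentence"])
```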

### Dataset Structure

Each line in the JSONL files represents a single sentence with all its associated data:

- `id`: Unique identifier for the sentence
- `sentence`: The center-embedded sentence text
- `structure`: The event chain as a list of subject-action-object triples (ordered from main clause to deepest embedding)
- `middle_entity`: The entity at the center of the embedding
- `all_entities`: List of all entities in the sentence
- `questions`: A flat list of all question-answer pairs for this sentence, each with:
  - `question`: The question text
  - `answer`: The correct answer
  - `type`: Question type (`action_performed`, `agent_identification`, `entity_count`, `nested_dependency`, `causal_sequence`, `chain_consequence`)
  - `difficulty`: Difficulty level (`easy`, `medium`, `hard`)
  - `entity`: The entity this question focuses on
  - `entity_name`: The name of the entity this question is about
  - `is_middle_entity`: Boolean indicating whether that entity is the middle entity
- `total_questions`: Total number of questions for this sentence
- `complexity_level`: The complexity level of this sentence (`complexity_1` to `complexity_6`)
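
Because `questions` is a flat list, corpus-level breakdowns are straightforward. A small, self-contained sketch tallying question types across one subset:

```python
import json
from collections import Counter

# Count how often each question type occurs in the plausible subset.
with open("plausible.jsonl", encoding="utf-8") as f:
    type_counts = Counter(
        q["type"]
        for line in f
        for q in json.loads(line)["questions"]
    )

print(type_counts.most_common())
```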

### Example Entry

```json
{
  "id": "complexity_1_sentence_1",
  "sentence": "The train that the airplane whistled at taxied.",
  "structure": [
    {
      "subject": "train",
      "action": "taxied",
      "object": null
    },
    {
      "subject": "airplane",
      "action": "whistled at",
      "object": "train"
    }
  ],
  "middle_entity": "train",
  "all_entities": ["airplane", "train"],
  "questions": [
    {
      "question": "What did the airplane do?",
      "answer": "whistle at the train",
      "type": "action_performed",
      "difficulty": "easy",
      "entity": "airplane",
      "entity_name": "airplane",
      "is_middle_entity": false
    },
    {
      "question": "How many distinct entities are in the sentence?",
      "answer": "2",
      "type": "entity_count",
      "difficulty": "medium",
      "entity": "airplane",
      "entity_name": "airplane",
      "is_middle_entity": false
    },
    {
      "question": "What did the train do?",
      "answer": "taxi",
      "type": "action_performed",
      "difficulty": "easy",
      "entity": "train",
      "entity_name": "train",
      "is_middle_entity": true
    }
  ],
  "total_questions": 12,
  "complexity_level": "complexity_1"
}
```

The `questions` list above is abridged for readability; as `total_questions` indicates, the full entry contains 12 question-answer pairs.
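
The `structure` field runs from the main clause down to the deepest embedding, so iterating over it in reverse walks from the innermost event out to the main-clause event. A small sketch using the example entry above (inlined here as a literal so it runs on its own):

```python
# One record's `structure` field, copied from the example entry.
entry = {
    "structure": [
        {"subject": "train", "action": "taxied", "object": None},
        {"subject": "airplane", "action": "whistled at", "object": "train"},
    ],
}

# Walk from the deepest embedding out to the main clause.
for triple in reversed(entry["structure"]):
    if triple["object"]:
        print(f'the {triple["subject"]} {triple["action"]} the {triple["object"]}')
    else:
        print(f'the {triple["subject"]} {triple["action"]}')
# -> the airplane whistled at the train
#    the train taxied
```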

## 🖥️ Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the plausible subset
plausible = load_dataset("Sangmitra-06/CENTERBENCH", "plausible")

# Load the implausible subset
implausible = load_dataset("Sangmitra-06/CENTERBENCH", "implausible")

# The two subsets are separate configurations, so a config name
# ("plausible" or "implausible") must be passed explicitly.
```

### Accessing the Data

```python
# Access a sentence
sentence_data = plausible['train'][0]
sentence_text = sentence_data['sentence']
complexity = sentence_data['complexity_level']

# Filter by complexity level
complexity_1_sentences = plausible['train'].filter(lambda x: x['complexity_level'] == 'complexity_1')

# Iterate through a sentence's questions (a flat list)
for q in sentence_data['questions']:
    print(f"Q: {q['question']}")
    print(f"A: {q['answer']}")
    print(f"Type: {q['type']}, Difficulty: {q['difficulty']}")
    print(f"Entity: {q['entity_name']}\n")
```

## ✏️ Reference

If you use CENTERBENCH in your research, please cite:

```bibtex
@inproceedings{madhusudan-etal-2026-dog,
    title = "The Dog the Cat Chased Stumped the Model: Measuring When Language Models Abandon Structure for Shortcuts",
    author = "Madhusudan, Sangmitra  and
      Chen, Kaige  and
      Emami, Ali",
    editor = "Demberg, Vera  and
      Inui, Kentaro  and
      Marquez, Llu{\'i}s",
    booktitle = "Proceedings of the 19th Conference of the {E}uropean Chapter of the {A}ssociation for {C}omputational {L}inguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2026",
    address = "Rabat, Morocco",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2026.eacl-long.19/",
    pages = "428--453",
    ISBN = "979-8-89176-380-7",
    abstract = "When language models correctly parse ``The cat that the dog chased meowed,'' are they analyzing syntax or simply familiar with dogs chasing cats? Despite extensive benchmarking, we lack methods to distinguish structural understanding from semantic pattern matching. We introduce CenterBench, a dataset of 9,720 comprehension questions on center-embedded sentences (like ``The cat [that the dog chased] meowed'') where relative clauses nest recursively, creating processing demands from simple to deeply nested structures. Each sentence has a syntactically identical but semantically implausible counterpart (e.g., mailmen prescribe medicine, doctors deliver mail) and six comprehension questions testing surface understanding, syntactic dependencies, and causal reasoning. Testing six models reveals that performance gaps between plausible and implausible sentences widen systematically with complexity, with models showing median gaps up to 26.8 percentage points, quantifying when they abandon structural analysis for semantic associations. Notably, semantic plausibility harms performance on questions about resulting actions, where following causal relationships matters more than semantic coherence. Reasoning models improve accuracy but their traces show semantic shortcuts, overthinking, and answer refusal. Unlike models, whose plausibility advantage systematically widens with complexity, humans show variable semantic effects. CenterBench provides the first framework to identify when models shift from structural analysis to pattern matching."
}
```