RuleShift: Zero-Feedback Set-Shifting

Overview

The RuleShift Dataset is a procedurally generated benchmark of 150 classification scenarios across three task domains, designed to measure cognitive set-shifting in language models and other AI systems. Each scenario tests a model's capacity to detect a silent rule change through internal consistency monitoring alone, without corrective feedback.

This dataset directly operationalizes the Wisconsin Card Sorting Test (WCST) — a foundational cognitive psychology instrument — into machine learning benchmark tasks.

Dataset Description

Core Statistics

  • Total Scenarios: 150 (50 per task)
  • Total Trials: 2,700
  • Phase Structure:
    • Phase 1 (Rule Learning): 5 trials
    • Phase 2 (Silent Rule Change): up to 10 trials
    • Phase 3 (Confirmation): 3 trials
  • Train/Test Split: 123/27 (82%/18%), stratified by task and difficulty
  • Difficulty Tiers: Easy (17), Medium (17), Hard (16) per task

Three Task Domains

Task 1: Perceptual (Shape → Color)

  • Domain: Visual object descriptions
  • Phase 1 Rule: Classify by shape (dominant visual property)
  • Phase 2 Rule: Classify by color (silent rule shift)
  • Distractor Dimension: Texture (varies but never correct)
  • Human Baseline (RAL): 1.8 trials

Task 2: Semantic (Category → Function)

  • Domain: Word classification
  • Phase 1 Rule: Classify by semantic category
  • Phase 2 Rule: Classify by functional use (silent rule shift)
  • Distractor Dimension: Etymology/origin (varies but never correct)
  • Human Baseline (RAL): 2.1 trials

Task 3: Procedural (Alphabetical → Frequency)

  • Domain: List operations
  • Phase 1 Rule: Sort alphabetically (first word of list)
  • Phase 2 Rule: Sort by word frequency (most common word)
  • Distractor Dimension: Rarity (varies but never correct)
  • Human Baseline (RAL): 1.5 trials

Key Features

Rule Adaptation Latency (RAL)

The primary metric: number of incorrect trials after a silent rule change before two consecutive correct responses under the new rule.

Range: 0 (immediate detection) to 10 (rule change never detected, DNF)

Distractor Dimensions

Each task includes a third dimension (texture, origin, rarity) that varies across trials but is never the correct answer. This prevents solving by elimination and requires genuine cross-trial rule tracking.

Zero-Feedback Design

Models receive no corrective feedback at any point. Success requires:

  1. Learning the Phase 1 rule from examples alone
  2. Detecting the silent Phase 2 rule change through internal consistency failure
  3. Adapting to the new rule without external signals
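The three steps above can be sketched as a minimal zero-feedback evaluation loop. Here `query_model` is a hypothetical stand-in for your model call (not part of the dataset tooling); note that no correctness signal is ever returned to the model:

```python
def run_scenario(scenario, query_model):
    """Present every trial in order without revealing correctness.

    `query_model` is a hypothetical callable: prompt -> answer string.
    Returns (trial_num, model_answer, correct_answer) tuples for
    offline scoring; the model never learns whether it was right.
    """
    results = []
    for phase in ("phase1_trials", "phase2_trials", "phase3_trials"):
        for trial in scenario[phase]:
            answer = query_model(trial["prompt"])  # no feedback given
            results.append((trial["trial_num"], answer, trial["correct_answer"]))
    return results
```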

Contamination Prevention

  • Vocabulary drawn from low-frequency registers not common in standard pre-training corpora
  • Rare color/shape names, specialised technical nouns
  • 3-level deduplication (scenario ID, content hash, answer sequence)
  • Fixed random seed (42) for reproducibility
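Content-hash deduplication can be implemented along these lines. This is a plausible sketch only: the exact fields hashed and the 12-character truncation are assumptions, not the generator's actual code.

```python
import hashlib
import json

def content_hash(scenario):
    """Hash the trial content (prompts + answers) for deduplication.

    Assumption: hashing prompts and answers, truncated to 12 hex
    chars to match the dataset's content_hash field length.
    """
    payload = json.dumps(
        [
            (t["prompt"], t["correct_answer"])
            for phase in ("phase1_trials", "phase2_trials", "phase3_trials")
            for t in scenario.get(phase, [])
        ],
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]

def deduplicate(scenarios):
    """Drop scenarios whose content hash has already been seen."""
    seen, unique = set(), []
    for s in scenarios:
        h = content_hash(s)
        if h not in seen:
            seen.add(h)
            unique.append(s)
    return unique
```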

Data Schema

Nested JSONL format — each line is a complete scenario with embedded phase/trial structure:

{
  "scenario_id": "T1_S001",
  "task": 1,
  "task_name": "perceptual",
  "domain": "visual_objects",
  "phase1_rule": "shape",
  "phase2_rule": "color",
  "shift_type": "perceptual_shift",
  "cv_split": "A",
  "difficulty": "medium",
  "created_at": "2026-04-03",
  "human_baseline_RAL": 1.8,
  "content_hash": "f02d0851305d",
  "phase1_trials": [
    {
      "trial_num": 1,
      "rule": "shape",
      "prompt": "Items: coral rhombus, crimson rhombus, amber tetrahedron, teal cuboid. Which one appears most?",
      "item_description": ["coral rhombus", "crimson rhombus", "amber tetrahedron", "teal cuboid"],
      "correct_answer": "rhombus"
    },
    ...
  ],
  "phase2_trials": [
    {
      "trial_num": 6,
      "rule": "color",
      "prompt": "Items: jade prism, ivory octagon, jade tetrahedron, jade ellipse. Which one appears most?",
      "item_description": ["jade prism", "ivory octagon", "jade tetrahedron", "jade ellipse"],
      "correct_answer": "jade"
    },
    ...
  ],
  "phase3_trials": [...]
}
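Because each line is a self-contained JSON object, the file can also be read without the `datasets` library (the file path below is illustrative):

```python
import json

def load_scenarios(path):
    """Parse a nested-JSONL file: one complete scenario per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```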

Top-Level Scenario Fields

  • scenario_id: Unique identifier (e.g., "T1_S001" = Task 1, Scenario 1)

  • task: 1 (Perceptual), 2 (Semantic), or 3 (Procedural)

  • task_name: "perceptual", "semantic", or "procedural"

  • domain: "visual_objects", "word_classification", or "list_operations"

  • phase1_rule: Rule active in Phase 1

  • phase2_rule: Rule active in Phase 2 (silent change)

  • shift_type: Type of rule shift (e.g., "perceptual_shift", "semantic_shift")

  • cv_split: "A" or "B" for cross-validation (separate from train/test)

  • difficulty: "easy", "medium", or "hard"

  • created_at: Generation date

  • human_baseline_RAL: Expected human Rule Adaptation Latency

  • content_hash: Unique hash identifying scenario content

Trial Fields (within phase1_trials, phase2_trials, phase3_trials)

  • trial_num: Absolute trial number across all phases (1–18)
  • rule: Correct sorting rule for this trial
  • prompt: Human-readable question presented to model
  • item_description: List of item descriptions (length = n_items_per_trial)
  • correct_answer: Expected response (single word)
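A small validator for these fields (a convenience sketch, not part of the released dataset tooling):

```python
REQUIRED_TRIAL_FIELDS = {"trial_num", "rule", "prompt", "item_description", "correct_answer"}

def validate_trial(trial):
    """Check that a trial dict carries every documented field with the expected type."""
    missing = REQUIRED_TRIAL_FIELDS - trial.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    assert isinstance(trial["trial_num"], int)
    assert isinstance(trial["item_description"], list)
    assert isinstance(trial["correct_answer"], str)
    return True
```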

Usage

Load from Hugging Face

from datasets import load_dataset

# Load full dataset
dataset = load_dataset("niloydebbarma/ruleshift-dataset")

# Access a single scenario
scenario = dataset["train"][0]

# Access Phase 1 trials for first scenario
phase1 = scenario["phase1_trials"]
print(f"Phase 1: {len(phase1)} trials")

# Access first trial
first_trial = phase1[0]
print(f"Trial prompt: {first_trial['prompt']}")
print(f"Correct answer: {first_trial['correct_answer']}")

# Filter by task
task1_only = dataset["train"].filter(lambda x: x["task"] == 1)

# Filter by difficulty
hard_only = dataset["train"].filter(lambda x: x["difficulty"] == "hard")

Compute RAL (Rule Adaptation Latency)

def compute_ral(scenario, model_responses):
    """
    Compute RAL: the number of incorrect Phase 2 responses before two
    consecutive correct responses under the new (post-shift) rule.

    Args:
        scenario: A dataset scenario dict.
        model_responses: The model's answer strings for the Phase 2
                         trials, in trial order.

    Returns:
        RAL (int): 0 means immediate detection; 10 means the rule
                   change was never detected (DNF).
    """
    phase2_trials = scenario["phase2_trials"]

    incorrect = 0
    streak = 0

    for trial, response in zip(phase2_trials, model_responses):
        if response == trial["correct_answer"]:
            streak += 1
            if streak >= 2:
                return incorrect
        else:
            incorrect += 1
            streak = 0

    return 10  # DNF: rule change never detected

# Usage (model_responses would come from your model's answers):
from datasets import load_dataset
dataset = load_dataset("niloydebbarma/ruleshift-dataset")
scenario = dataset["train"][0]
model_responses = [...]  # one answer string per Phase 2 trial
ral = compute_ral(scenario, model_responses)
print(f"Rule Adaptation Latency: {ral} trials")

Difficulty Calibration

Difficulty is assigned based on expected adaptation latency:

  • Easy: Clear Phase 1 rule, obvious Phase 2 shift, high-contrast features
  • Medium: Moderate Phase 1 ambiguity, subtle Phase 2 shift, mixed feature clarity
  • Hard: Ambiguous Phase 1 rule, minimal Phase 2 signal, confusable features across dimensions

Baseline Performance

Human Performance (from Wisconsin Card Sorting Test literature):

  • Task 1 (Perceptual): RAL = 1.8
  • Task 2 (Semantic): RAL = 2.1
  • Task 3 (Procedural): RAL = 1.5

Human baseline represents educated adult performance without prior training.

Generation Method

  • Procedural Python generation with fixed random seed (42)
  • No neural models used in dataset creation
  • Fully reproducible — same seed produces identical dataset
  • RAM usage: < 100 MB
  • GPU: Not required

Cross-Validation Structure

Two-split cross-validation (CV A/B):

  • 75 scenarios in split A → train on B, test on A
  • 75 scenarios in split B → train on A, test on B

Note: CV A/B is separate from train/test split. CV enables cross-validation within the training partition; train/test is for final evaluation.
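The A/B folds can be materialized with a simple filter over the `cv_split` field. This sketch operates on plain scenario dicts; `cv_folds` is an illustrative helper, not part of the released code:

```python
def cv_folds(scenarios):
    """Yield the two (train, test) folds of the A/B cross-validation.

    Fold 1 trains on split B and tests on split A; fold 2 swaps them.
    """
    split_a = [s for s in scenarios if s["cv_split"] == "A"]
    split_b = [s for s in scenarios if s["cv_split"] == "B"]
    yield split_b, split_a  # fold 1: train on B, test on A
    yield split_a, split_b  # fold 2: train on A, test on B
```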

Citation

If you use this dataset, please cite:

@dataset{Barma2026RuleShift,
  title={RuleShift: A Zero-Feedback Cognitive Set-Shifting Benchmark Dataset},
  author={Barma, Niloy Deb},
  year={2026},
  url={https://huggingface.co/datasets/niloydebbarma/ruleshift-dataset},
  doi={10.57967/hf/8246}
}

Foundational references:

@article{Berg1948,
  author = {Esta A. Berg},
  title = {A Simple Objective Technique for Measuring Flexibility in Thinking},
  journal = {The Journal of General Psychology},
  volume = {39},
  number = {1},
  pages = {15--22},
  year = {1948},
  publisher = {Routledge},
  doi = {10.1080/00221309.1948.9918159}
}

@inproceedings{Heaton1993WisconsinCS,
  title={Wisconsin Card Sorting Test Manual -- Revised and Expanded},
  author={Robert K. Heaton and C. Chelune and J. D. Talley and Gary G. Kay and Glenn Curtiss},
  year={1993},
  publisher={Psychological Assessment Resources},
  url={https://api.semanticscholar.org/CorpusID:65194761}
}

Dataset Versions

  • 1.0.0 (2026-04-03): Initial release with 150 scenarios, 3 tasks, 2,700 trials

License

Data License

CC0 1.0 Universal (Public Domain) — No rights reserved. Reusable without restriction.

This dataset is released into the public domain. You are free to use, modify, and distribute it for any purpose, commercial or non-commercial, without seeking permission.

Code License

Apache License 2.0

The code in this repository is licensed under the Apache License, Version 2.0.
You may use, modify, and distribute the code, provided that you include the required license notice and comply with the terms of the license.

Repository

Kaggle Dataset: niloydebbarma/ruleshift-dataset

Generation Code: RuleShift Dataset Generation Notebook

Contact

For questions, issues, or feedback:

  • Author: Niloy Deb Barma
  • Please open an issue in this repository for bug reports, suggestions, or discussions.

RuleShift is designed as a benchmark for measuring progress toward AGI in executive function capabilities, specifically cognitive flexibility measured through silent rule adaptation tasks.
