---
license: cc0-1.0
tags:
- cognitive-science
- benchmark
- machine-learning
- executive-functions
- wisconsin-card-sorting-test
- rule-shifting
- cognitive-flexibility
- set-shifting
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: ruleshift_train.jsonl
- split: test
path: ruleshift_test.jsonl
dataset_info:
features:
- name: scenario_id
dtype: string
- name: task
dtype: int32
- name: task_name
dtype: string
- name: domain
dtype: string
- name: phase1_rule
dtype: string
- name: phase2_rule
dtype: string
- name: shift_type
dtype: string
- name: cv_split
dtype: string
- name: difficulty
dtype: string
- name: created_at
dtype: string
- name: human_baseline_RAL
dtype: float32
- name: content_hash
dtype: string
- name: rule_sequence
sequence: string
- name: distractor_dim
dtype: string
- name: n_items_per_trial
dtype: int32
- name: generation_seed
dtype: int32
- name: notes
dtype: string
- name: total_trials
dtype: int32
- name: phase1_answer_entropy
dtype: int32
- name: phase2_answer_entropy
dtype: int32
- name: estimated_difficulty_score
dtype: float32
- name: phase1_system_prompt
dtype: string
- name: target_category
dtype: string
- name: target_function
dtype: string
- name: target_color
dtype: string
- name: target_shape
dtype: string
- name: phase1_trials
sequence:
- name: trial_num
dtype: int32
- name: rule
dtype: string
- name: prompt
dtype: string
- name: item_description
sequence: string
- name: correct_answer
dtype: string
- name: phase2_trials
sequence:
- name: trial_num
dtype: int32
- name: rule
dtype: string
- name: prompt
dtype: string
- name: item_description
sequence: string
- name: correct_answer
dtype: string
- name: phase3_trials
sequence:
- name: trial_num
dtype: int32
- name: rule
dtype: string
- name: prompt
dtype: string
- name: item_description
sequence: string
- name: correct_answer
dtype: string
splits:
- name: train
num_examples: 123
- name: test
num_examples: 27
citation: |
@dataset{Barma2026RuleShift,
title={RuleShift: A Zero-Feedback Cognitive Set-Shifting Benchmark Dataset},
author={Barma, Niloy Deb},
year={2026},
url={https://huggingface.co/datasets/niloydebbarma/ruleshift-dataset},
doi={10.57967/hf/8246}
}
---

# RuleShift: Zero-Feedback Set-Shifting

## Overview
The RuleShift Dataset is a procedurally generated benchmark comprising 150 classification scenarios across three task domains designed to measure cognitive set-shifting abilities in language models and other AI systems. Each scenario tests a model's capacity to self-detect a silent rule change through internal consistency monitoring alone, without corrective feedback.
This dataset directly operationalizes the Wisconsin Card Sorting Test (WCST) — a foundational cognitive psychology instrument — into machine learning benchmark tasks.
## Dataset Description

### Core Statistics
- Total Scenarios: 150 (50 per task)
- Total Trials: 2,700
- Phase Structure:
- Phase 1 (Rule Learning): 5 trials
- Phase 2 (Silent Rule Change): up to 10 trials
- Phase 3 (Confirmation): 3 trials
- Train/Test Split: 123/27 (82%/18%), stratified by task and difficulty
- Difficulty Tiers: Easy (17), Medium (17), Hard (16) per task
### Three Task Domains

#### Task 1: Perceptual (Shape → Color)
- Domain: Visual object descriptions
- Phase 1 Rule: Classify by shape (dominant visual property)
- Phase 2 Rule: Classify by color (silent rule shift)
- Distractor Dimension: Texture (varies but never correct)
- Human Baseline (RAL): 1.8 trials
#### Task 2: Semantic (Category → Function)
- Domain: Word classification
- Phase 1 Rule: Classify by semantic category
- Phase 2 Rule: Classify by functional use (silent rule shift)
- Distractor Dimension: Etymology/origin (varies but never correct)
- Human Baseline (RAL): 2.1 trials
#### Task 3: Procedural (Alphabetical → Frequency)
- Domain: List operations
- Phase 1 Rule: Alphabetical order (the answer is the alphabetically first word in the list)
- Phase 2 Rule: Word frequency (the answer is the most common word; silent rule shift)
- Distractor Dimension: Rarity (varies but never correct)
- Human Baseline (RAL): 1.5 trials
## Key Features

### Rule Adaptation Latency (RAL)
The primary metric: number of incorrect trials after a silent rule change before two consecutive correct responses under the new rule.
Range: 0 (immediate detection) to 10 (rule change never detected, DNF)
### Distractor Dimensions
Each task includes a third dimension (texture, origin, rarity) that varies across trials but is never the correct answer. This prevents solving by elimination and requires genuine cross-trial rule tracking.
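This invariant is easy to verify over loaded scenario dicts. A minimal sketch, assuming the schema above; note the dataset stores only the distractor dimension's name (`distractor_dim`), so the set of distractor values here must be supplied by the caller:

```python
def distractor_never_correct(scenario, distractor_values):
    """Return True if no trial's correct answer is drawn from the
    distractor dimension. distractor_values is caller-supplied, since
    the dataset records only the dimension name (distractor_dim)."""
    trials = (scenario["phase1_trials"] + scenario["phase2_trials"]
              + scenario["phase3_trials"])
    return all(t["correct_answer"] not in set(distractor_values) for t in trials)
```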
### Zero-Feedback Design
Models receive no corrective feedback at any point. Success requires:
- Learning the Phase 1 rule from examples alone
- Detecting the silent Phase 2 rule change through internal consistency failure
- Adapting to the new rule without external signals
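An evaluation harness for this design can be sketched as a loop in which the model sees each prompt plus its own earlier answers, but never a correctness signal (`model` here is a hypothetical callable standing in for whatever system is under test):

```python
def run_zero_feedback(scenario, model):
    """Present every trial in order. The model receives each prompt plus
    its own previous (prompt, answer) pairs -- never whether an earlier
    answer was correct, matching the zero-feedback design."""
    history, responses = [], []
    for phase in ("phase1_trials", "phase2_trials", "phase3_trials"):
        for trial in scenario[phase]:
            answer = model(trial["prompt"], history)
            history.append((trial["prompt"], answer))  # no correctness signal
            responses.append(answer)
    return responses
```

Correctness is scored only afterwards, by comparing `responses` against each trial's `correct_answer`.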
### Contamination Prevention
- Vocabulary drawn from low-frequency registers not common in standard pre-training corpora
- Rare color/shape names, specialised technical nouns
- 3-level deduplication (scenario ID, content hash, answer sequence)
- Fixed random seed (42) for reproducibility
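The 3-level deduplication can be checked over loaded scenario dicts. A minimal sketch, assuming the schema above and using the stored `content_hash` rather than recomputing it (the dataset's hashing recipe is not documented here):

```python
def find_duplicates(scenarios):
    """Three-level duplicate check: scenario ID, stored content hash,
    and the full sequence of correct answers across all phases."""
    seen = {"id": set(), "hash": set(), "answers": set()}
    dupes = []
    for s in scenarios:
        answers = tuple(
            t["correct_answer"]
            for phase in ("phase1_trials", "phase2_trials", "phase3_trials")
            for t in s[phase])
        keys = {"id": s["scenario_id"], "hash": s["content_hash"],
                "answers": answers}
        # A collision at any level flags the scenario as a duplicate
        if any(keys[level] in seen[level] for level in seen):
            dupes.append(s["scenario_id"])
        for level in seen:
            seen[level].add(keys[level])
    return dupes
```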
## Data Schema
Nested JSONL format — each line is a complete scenario with embedded phase/trial structure:
```json
{
  "scenario_id": "T1_S001",
  "task": 1,
  "task_name": "perceptual",
  "domain": "visual_objects",
  "phase1_rule": "shape",
  "phase2_rule": "color",
  "shift_type": "perceptual_shift",
  "cv_split": "A",
  "difficulty": "medium",
  "created_at": "2026-04-03",
  "human_baseline_RAL": 1.8,
  "content_hash": "f02d0851305d",
  "phase1_trials": [
    {
      "trial_num": 1,
      "rule": "shape",
      "prompt": "Items: coral rhombus, crimson rhombus, amber tetrahedron, teal cuboid. Which one appears most?",
      "item_description": ["coral rhombus", "crimson rhombus", "amber tetrahedron", "teal cuboid"],
      "correct_answer": "rhombus"
    },
    ...
  ],
  "phase2_trials": [
    {
      "trial_num": 6,
      "rule": "color",
      "prompt": "Items: jade prism, ivory octagon, jade tetrahedron, jade ellipse. Which one appears most?",
      "item_description": ["jade prism", "ivory octagon", "jade tetrahedron", "jade ellipse"],
      "correct_answer": "jade"
    },
    ...
  ],
  "phase3_trials": [...]
}
```
### Top-Level Scenario Fields
- scenario_id: Unique identifier (e.g., "T1_S001" = Task 1, Scenario 1)
- task: 1 (Perceptual), 2 (Semantic), or 3 (Procedural)
- task_name: "perceptual", "semantic", or "procedural"
- domain: "visual_objects", "word_classification", or "list_operations"
- phase1_rule: Rule active in Phase 1
- phase2_rule: Rule active in Phase 2 (silent change)
- shift_type: Type of rule shift (e.g., "perceptual_shift", "semantic_shift")
- cv_split: "A" or "B" for cross-validation (separate from the train/test split)
- difficulty: "easy", "medium", or "hard"
- created_at: Generation date
- human_baseline_RAL: Expected human Rule Adaptation Latency
- content_hash: Unique hash identifying scenario content
### Trial Fields (within phase1_trials, phase2_trials, phase3_trials)
- trial_num: Absolute trial number across all phases (1–18)
- rule: Correct sorting rule for this trial
- prompt: Human-readable question presented to model
- item_description: List of item descriptions (length = n_items_per_trial)
- correct_answer: Expected response (single word)
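These invariants (contiguous absolute trial numbering and fixed item-list length) can be validated with a short sketch, assuming the schema above:

```python
def validate_scenario(scenario, n_items=None):
    """Check that trial_num runs contiguously from 1 across all three
    phases and, if n_items is given, that every item_description list
    has that length (should equal n_items_per_trial)."""
    expected = 1
    for phase in ("phase1_trials", "phase2_trials", "phase3_trials"):
        for trial in scenario[phase]:
            if trial["trial_num"] != expected:
                return False
            if n_items is not None and len(trial["item_description"]) != n_items:
                return False
            expected += 1
    return True
```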
## Usage

### Load from Hugging Face
```python
from datasets import load_dataset

# Load full dataset
dataset = load_dataset("niloydebbarma/ruleshift-dataset")

# Access a single scenario
scenario = dataset["train"][0]

# Access Phase 1 trials for the first scenario
phase1 = scenario["phase1_trials"]
print(f"Phase 1: {len(phase1)} trials")

# Access the first trial
first_trial = phase1[0]
print(f"Trial prompt: {first_trial['prompt']}")
print(f"Correct answer: {first_trial['correct_answer']}")

# Filter by task
task1_only = dataset["train"].filter(lambda x: x["task"] == 1)

# Filter by difficulty
hard_only = dataset["train"].filter(lambda x: x["difficulty"] == "hard")
```
### Compute RAL (Rule Adaptation Latency)
```python
def compute_ral(scenario, model_responses):
    """
    Compute RAL: the number of incorrect Phase 2 trials before two
    consecutive correct responses under the new rule.

    Args:
        scenario: One dataset row with a "phase2_trials" list.
        model_responses: The model's answer for each Phase 2 trial, in order.

    Returns:
        RAL (int): 0 means immediate adaptation; max value 10 means the
        rule change was never detected (DNF).
    """
    phase2_trials = scenario["phase2_trials"]
    correct_count = 0
    for i, trial in enumerate(phase2_trials):
        if model_responses[i] == trial["correct_answer"]:
            correct_count += 1
            if correct_count >= 2:
                # i indexes the second consecutive correct trial, so
                # i - 1 trials preceded the correct streak.
                return i - 1
        else:
            correct_count = 0
    return 10  # DNF: rule change never detected

# Usage:
from datasets import load_dataset

dataset = load_dataset("niloydebbarma/ruleshift-dataset")
scenario = dataset["train"][0]
model_responses = [...]  # one answer per Phase 2 trial, collected from your model
ral = compute_ral(scenario, model_responses)
print(f"Rule Adaptation Latency: {ral} trials")
```
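Per-scenario RAL values are usually reported as a per-task mean alongside a DNF count. A small aggregation sketch (field names and the `(task, ral)` pair format are assumptions, not part of the dataset):

```python
from collections import defaultdict

def summarize_ral(results):
    """Aggregate (task, ral) pairs into per-task mean RAL and a DNF
    count, treating ral == 10 as DNF per the metric's definition."""
    by_task = defaultdict(list)
    for task, ral in results:
        by_task[task].append(ral)
    return {task: {"mean_ral": sum(r) / len(r),
                   "dnf": sum(1 for x in r if x == 10)}
            for task, r in by_task.items()}
```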
## Difficulty Calibration
Difficulty is assigned based on expected adaptation latency:
| Difficulty | Characteristics |
|---|---|
| Easy | Clear Phase 1 rule, obvious Phase 2 shift, high contrast features |
| Medium | Moderate Phase 1 ambiguity, subtle Phase 2 shift, mixed feature clarity |
| Hard | Ambiguous Phase 1, minimal Phase 2 signal, confusable features across dimensions |
## Baseline Performance
Human Performance (from Wisconsin Card Sorting Test literature):
- Task 1 (Perceptual): RAL = 1.8
- Task 2 (Semantic): RAL = 2.1
- Task 3 (Procedural): RAL = 1.5
Human baseline represents educated adult performance without prior training.
## Generation Method
- Procedural Python generation with fixed random seed (42)
- No neural models used in dataset creation
- Fully reproducible — same seed produces identical dataset
- RAM usage: < 100 MB
- GPU: Not required
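The reproducibility claim rests on a standard pattern: seed a private RNG once and draw everything from it. A hypothetical sketch only (the real generator and its vocabulary are in the generation notebook, not shown here):

```python
import random

def sample_trial_items(seed, n_items=4):
    """Hypothetical seeded sampling: the same seed always yields the
    same items. Vocabulary below is illustrative, not the dataset's."""
    rng = random.Random(seed)  # private RNG; global state untouched
    colors = ["jade", "ivory", "coral", "amber", "teal", "crimson"]
    shapes = ["rhombus", "prism", "octagon", "cuboid", "tetrahedron"]
    return [f"{rng.choice(colors)} {rng.choice(shapes)}" for _ in range(n_items)]
```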
## Cross-Validation Structure
Two-split cross-validation (CV A/B):
- 75 scenarios in split A → train on B, test on A
- 75 scenarios in split B → train on A, test on B
Note: CV A/B is separate from train/test split. CV enables cross-validation within the training partition; train/test is for final evaluation.
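The A/B scheme amounts to two-fold cross-validation over the `cv_split` field. A sketch, where `evaluate` is a caller-supplied stand-in for whatever fit-then-score procedure is being run:

```python
def cv_ab_score(scenarios, evaluate):
    """Two-fold CV over cv_split: evaluate(train=..., test=...) is run
    once per fold direction and the two scores are averaged."""
    split_a = [s for s in scenarios if s["cv_split"] == "A"]
    split_b = [s for s in scenarios if s["cv_split"] == "B"]
    score_on_a = evaluate(train=split_b, test=split_a)  # train on B, test on A
    score_on_b = evaluate(train=split_a, test=split_b)  # train on A, test on B
    return (score_on_a + score_on_b) / 2
```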
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{Barma2026RuleShift,
  title={RuleShift: A Zero-Feedback Cognitive Set-Shifting Benchmark Dataset},
  author={Barma, Niloy Deb},
  year={2026},
  url={https://huggingface.co/datasets/niloydebbarma/ruleshift-dataset},
  doi={10.57967/hf/8246}
}
```
Foundational references:
```bibtex
@article{Berg1948,
  author    = {Berg, Esta A.},
  title     = {A Simple Objective Technique for Measuring Flexibility in Thinking},
  journal   = {The Journal of General Psychology},
  volume    = {39},
  number    = {1},
  pages     = {15--22},
  year      = {1948},
  publisher = {Routledge},
  doi       = {10.1080/00221309.1948.9918159}
}

@inproceedings{Heaton1993WisconsinCS,
  title     = {Wisconsin Card Sorting Test Manual -- Revised and Expanded},
  author    = {Heaton, Robert K. and Chelune, C. and Talley, J. D. and Kay, Gary G. and Curtiss, Glenn},
  year      = {1993},
  publisher = {Psychological Assessment Resources},
  url       = {https://api.semanticscholar.org/CorpusID:65194761}
}
```
## Dataset Versions
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2026-04-03 | Initial release: 150 scenarios, 3 tasks, 2,700 trials |
## License

### Data License
CC0 1.0 Universal (Public Domain) — No rights reserved. Reusable without restriction.
This dataset is released into the public domain. You are free to use, modify, and distribute it for any purpose, commercial or non-commercial, without seeking permission.
### Code License
Apache License 2.0
The code in this repository is licensed under the Apache License, Version 2.0.
You may use, modify, and distribute the code, provided that you include the required license notice and comply with the terms of the license.
## Repository
Kaggle Dataset: niloydebbarma/ruleshift-dataset
Generation Code: RuleShift Dataset Generation Notebook
## Contact
For questions, issues, or feedback:
- Author: Niloy Deb Barma
- Please open an issue in this repository for bug reports, suggestions, or discussions.
RuleShift is designed as a benchmark for measuring progress toward AGI in executive-function capabilities, specifically the cognitive flexibility probed by silent rule-adaptation tasks.