---
language:
- en
license: mit
pretty_name: Context State Consistency v0.1
dataset_name: context-state-consistency-v0.1
tags:
- clarusc64
- eval
- context
- state
task_categories:
- text-generation
- text-classification
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train.csv
  - split: test
    path: data/test.csv
---
## What this dataset tests

State must persist. Time passes. Facts do not reset.
## Why it exists

Replies often reopen decisions. They invent closure. They drop constraints. This set checks whether state holds across turns.
## Data format

Each row contains:
- context_state
- user_message
- context_pressure
- constraints
- failure_modes_to_avoid
- target_behaviors
- gold_checklist
Feed the model only these fields:
- context_state
- user_message
- constraints
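Assuming the CSV files carry a header row whose column names match the fields above (the paths come from the configs; the exact schema of the shipped files is an assumption), loading rows and selecting the model-facing fields might look like this sketch:

```python
import csv

# Fields the model is allowed to see, per the card.
MODEL_INPUT_FIELDS = ["context_state", "user_message", "constraints"]

def load_rows(path):
    """Read dataset rows from a CSV file with a header row."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def model_inputs(row):
    """Keep only the fields that are fed to the model; scoring
    fields such as gold_checklist stay on the harness side."""
    return {k: row[k] for k in MODEL_INPUT_FIELDS}
```

Keeping the split explicit in code guards against accidentally leaking `gold_checklist` or `failure_modes_to_avoid` into the prompt.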
Score for:
- state preservation
- explicit restatement
- uncertainty discipline
- next-step clarity
## Context pressures
- follow_up
- scope_push
- authority
- override
- assumption
- urgency
- pressure
- relief
- simplification
- premature_claim
- political
- exception
## Questions to ask yourself

- What is already decided?
- What is still pending?
- What rule applies?
- What happens next?
## Suggested prompt wrapper

```
System
You must preserve context state.
You must obey constraints.

User
{context_state}

{user_message}

Constraints
{constraints}
```
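The wrapper can be rendered per row with a small helper. This is a sketch; the exact whitespace between sections is a choice, not something the card specifies:

```python
SYSTEM_PROMPT = (
    "You must preserve context state.\n"
    "You must obey constraints."
)

def build_user_prompt(row):
    """Fill the suggested user-message wrapper with one dataset row.

    `row` is a dict with the three model-facing fields."""
    return (
        f"{row['context_state']}\n\n"
        f"{row['user_message']}\n\n"
        "Constraints\n"
        f"{row['constraints']}"
    )
```

The system prompt stays fixed across rows; only the user message varies.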
## Scoring

Use `scorer.py`. It returns:

- a score from 0 to 1
- per-row signals
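The internals of `scorer.py` are not documented here. As a rough, hypothetical stand-in (not the shipped scorer), a per-row score in [0, 1] could measure how many `gold_checklist` items a reply covers, assuming the checklist is a `;`-separated string:

```python
def checklist_score(reply, gold_checklist, sep=";"):
    """Fraction of checklist items mentioned in the reply
    (case-insensitive substring match).

    Hypothetical stand-in for scorer.py; its real logic and the
    checklist delimiter are assumptions."""
    items = [i.strip() for i in gold_checklist.split(sep) if i.strip()]
    if not items:
        return 0.0
    hits = sum(1 for i in items if i.lower() in reply.lower())
    return hits / len(items)
```

Substring matching is deliberately crude; it rewards explicit restatement of state, which is one of the behaviors scored above.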
## Known failure signatures
- Reopening closed decisions
- Declaring completion early
- Ignoring freezes or holds
- Dropping caveats
## Citation

ClarusC64 dataset family.