---
language: en
license: other
task_categories:
- text-generation
tags:
- clarus
- clarusc64
- cardinal
- assumption-tracking
- dependency-awareness
- reasoning
size_categories:
- n<1k
pretty_name: 'Cardinal Meta Dataset 2: Assumption Tracking and Dependency Awareness'
configs:
- config_name: default
  data_files:
  - split: train
    path: data/assumption_tracking_dependency_awareness.csv
"awareness.csv"
# Cardinal Meta Dataset 2: Assumption Tracking and Dependency Awareness
## Purpose
- Test whether the model names assumptions
- Test whether conclusions track their dependencies
- Test whether removing an assumption collapses the claim
## Core question
- What must be true for this claim to be true?
## Why this is meta
- The dataset does not test domain facts
- It tests whether the model keeps the supporting structure attached to its claims
- It sits above domains because every domain rests on assumptions
## What it catches
- Smuggled premises
- Floating conclusions
- Unanchored certainty
## Decision labels
- **DEPENDENT**: the claim can hold, but only under the stated assumptions
- **COLLAPSES**: removing the named assumption makes the claim fail
- **UNSUPPORTED**: the claim cannot be justified from what is established
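
If you consume these labels programmatically, a minimal representation is a small enum. This is an illustrative sketch; the `Decision` class name is an assumption, not part of the dataset:

```python
from enum import Enum

class Decision(Enum):
    """Decision labels for this dataset (class name is illustrative)."""
    DEPENDENT = "DEPENDENT"      # the claim can hold, but only under stated assumptions
    COLLAPSES = "COLLAPSES"      # removing the named assumption makes the claim fail
    UNSUPPORTED = "UNSUPPORTED"  # the claim cannot be justified from what is established
```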
## Data format

### File
- `data/assumption_tracking_dependency_awareness.csv`
### Columns
- `case_id`
- `domain`
- `prompt`
- `model_claim`
- `hidden_assumptions`
- `dependency_map`
- `assumption_removed`
- `expected_effect_on_claim`
- `expected_decision`
- `expected_rationale_bullets`
- `disallowed_patterns`
- `risk_level`
### Delimiter rules
- `hidden_assumptions`, `expected_rationale_bullets`, and `disallowed_patterns` use pipe (`|`) separators, as in the loading sketch below
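
A minimal loading sketch, assuming the CSV is well-formed and using only the Python standard library (variable names are mine):

```python
import csv

# Read all rows as dicts keyed by the column names above.
with open("data/assumption_tracking_dependency_awareness.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Split the pipe-separated fields into lists.
for row in rows:
    for field in ("hidden_assumptions", "expected_rationale_bullets", "disallowed_patterns"):
        row[field] = row[field].split("|")
```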
## How to use
You provide the model with a prompt constructed from these fields:
- `prompt`
- `model_claim`
- `assumption_removed`
### Example evaluation prompt
You can use this structure:
- Prompt: {prompt}
- Claim: {model_claim}
- Removed assumption: {assumption_removed}
- Task:
  - List the assumptions that must be true
  - State which parts of the claim depend on which assumptions
  - Re-evaluate the claim after removing the removed assumption
  - Output a decision label from DEPENDENT, COLLAPSES, UNSUPPORTED
  - Give short bullets for your rationale
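
Assuming rows loaded as dicts (as in the sketch above), the template can be filled like this; `build_prompt` is a hypothetical helper, not something shipped in this repo:

```python
def build_prompt(row: dict) -> str:
    """Fill the evaluation template above from one dataset row (illustrative)."""
    return (
        f"Prompt: {row['prompt']}\n"
        f"Claim: {row['model_claim']}\n"
        f"Removed assumption: {row['assumption_removed']}\n"
        "Task:\n"
        "- List the assumptions that must be true\n"
        "- State which parts of the claim depend on which assumptions\n"
        "- Re-evaluate the claim after removing the removed assumption\n"
        "- Output a decision label from DEPENDENT, COLLAPSES, UNSUPPORTED\n"
        "- Give short bullets for your rationale\n"
    )
```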
## Expected behavior
A good response does the following:
- Names assumptions explicitly
- Links each assumption to the claim
- Updates the claim when an assumption is removed
- Reduces certainty when support weakens
## Bad behavior patterns
A failing response does one or more of the following:
- Defends the claim without naming assumptions
- Leaves certainty unchanged after premise removal
- Ignores the removed assumption
- Uses blanket certainty words while lacking support
## Scoring
This repo includes `scorer.py`. It rewards:
- Assumption language and explicit premises
- Dependency tracking language
- Counterfactual update after premise removal
- Decision alignment with `expected_decision`
It penalizes:
- Disallowed patterns listed in the row
- Floating certainty without uncertainty language
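
The authoritative logic lives in `scorer.py`. As an illustration only, the reward and penalty shape described above might look roughly like this; the keyword lists, weights, and function name are assumptions, not the repo's actual implementation:

```python
def score_response(response: str, row: dict) -> float:
    """Illustrative scorer: reward structure-tracking language, penalize disallowed patterns."""
    text = response.lower()
    score = 0.0
    # Reward assumption language and explicit premises.
    if any(w in text for w in ("assume", "assumption", "premise")):
        score += 1.0
    # Reward dependency-tracking language.
    if any(w in text for w in ("depends on", "rests on", "requires")):
        score += 1.0
    # Reward a counterfactual update after premise removal.
    if any(w in text for w in ("no longer holds", "without this assumption", "the claim collapses")):
        score += 1.0
    # Reward decision alignment with the expected label.
    if row["expected_decision"] in response:
        score += 2.0
    # Penalize disallowed patterns listed in the row.
    patterns = row["disallowed_patterns"]
    if isinstance(patterns, str):  # accept both raw and pre-split rows
        patterns = patterns.split("|")
    for pattern in patterns:
        if pattern and pattern.lower() in text:
            score -= 1.0
    return score
```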
## Risks and limitations
- This dataset is structure-focused, not fact-focused
- A model can still make factual errors while passing
- Use alongside domain datasets for full coverage
## Suggested companions
- Cardinal Meta Set 2 Boundary and Scope Integrity
- Cardinal Meta Set 3 Inference Chain Coherence
## Version
- v01 is the first pass
- Expand row count and harden scorer thresholds as you collect failures