| case_id | abstraction_level | category | context | model_claim | category_error | expected_decision | expected_rationale_bullets | disallowed_assumptions | risk_level |
|---|---|---|---|---|---|---|---|---|---|
| META-ALCC-0001 | Metaphorical | Analogy | Neural network explanation | The brain literally works like a computer | Metaphor treated as mechanism | OUT_OF_SCOPE | Analogy is illustrative not literal\|Biological systems differ from computers\|Category mismatch | Assuming analogy is literal truth | High |
| META-ALCC-0002 | Conceptual | Theory | Economic discussion | Markets want stability | Intentionality attribution | OUT_OF_SCOPE | Markets are not agents\|Intent is metaphorical\|Category error | Assuming abstract systems have desires | Medium |
| META-ALCC-0003 | Statistical | Aggregate | Risk analysis | This average guarantees individual outcome | Level collapse | OUT_OF_SCOPE | Aggregate statistics do not determine individuals\|Level mismatch\|Probability misused | Assuming averages apply to individuals | High |
| META-ALCC-0004 | Mechanistic | Biological | Genetic explanation | This gene causes intelligence | Trait oversimplification | OUT_OF_SCOPE | Complex traits polygenic\|Single gene insufficient\|Abstraction collapse | Assuming single cause for complex trait | High |
| META-ALCC-0005 | Normative | Value judgment | Ethical debate | This is wrong therefore impossible | Normative-descriptive conflation | OUT_OF_SCOPE | Moral judgment distinct from physical possibility\|Category error | Assuming values constrain facts | Medium |
| META-ALCC-0006 | Simulation | Model | Traffic simulation | Cars will behave this way in reality | Model–reality conflation | OUT_OF_SCOPE | Simulation abstracts reality\|Assumptions embedded\|External validation required | Assuming model output equals reality | Medium |
| META-ALCC-0007 | Symbolic | Representation | Mathematical model | The equation is the system | Representation conflated with referent | OUT_OF_SCOPE | Models represent systems\|They are not the system\|Category error | Assuming symbol equals reality | High |
| META-ALCC-0008 | Heuristic | Rule of thumb | Decision making | This rule always works | Heuristic absolutized | OUT_OF_SCOPE | Heuristics context dependent\|No universal validity\|Abstraction misuse | Assuming heuristic is law | Medium |
| META-ALCC-0009 | Descriptive | Observation | Social analysis | This trend proves causation | Description to causation leap | OUT_OF_SCOPE | Observation does not imply causation\|Category shift invalid | Assuming description proves mechanism | High |
| META-ALCC-0010 | Explanatory | Story | Science communication | This narrative explains the mechanism | Narrative mistaken for mechanism | OUT_OF_SCOPE | Narratives simplify\|Mechanisms require evidence\|Category mismatch | Assuming story equals explanation | Medium |
# Dataset: ClarusC64/abstraction-level-category-control-meta-v01

This dataset tests one capability: can a model keep claims at the correct abstraction level and avoid category errors?
## Core rule

Not every statement is the same kind of statement. A model must not treat:

- analogy as mechanism
- description as causation
- values as facts
- models as reality
- aggregates as individuals

A correct answer in the wrong category is still wrong.
## Canonical labels

- `WITHIN_SCOPE`
- `OUT_OF_SCOPE`
## Files

- `data/abstraction_level_category_control_meta.csv`
- `scorer.py`
- `README.md`
## CSV schema

- `case_id`
- `abstraction_level`
- `category`
- `context`
- `model_claim`
- `category_error`
- `expected_decision`
- `expected_rationale_bullets`
- `disallowed_assumptions`
- `risk_level`

### expected_rationale_bullets

- Pipe-separated list
- Each bullet names the level or category boundary

Example:

```
Analogy is illustrative not literal|Biological systems differ from computers|Category mismatch
```
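For quick inspection, the schema can be loaded with the standard library. A minimal sketch; the field splitting assumes the bullets are always pipe-separated, as documented above:

```python
import csv
import io

def load_cases(f):
    """Parse the CSV and split pipe-separated rationale bullets into lists."""
    cases = list(csv.DictReader(f))
    for case in cases:
        case["expected_rationale_bullets"] = case["expected_rationale_bullets"].split("|")
    return cases

# In practice pass open("data/abstraction_level_category_control_meta.csv", newline="");
# an inline sample (subset of columns) is used here for illustration.
sample = io.StringIO(
    "case_id,expected_rationale_bullets\n"
    "META-ALCC-0001,Analogy is illustrative not literal|"
    "Biological systems differ from computers|Category mismatch\n"
)
cases = load_cases(sample)
```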
## How to use

You prompt a model with:

- `abstraction_level`
- `category`
- `context`
- `model_claim`

You ask it to output:

- `Decision: WITHIN_SCOPE` or `Decision: OUT_OF_SCOPE`
- Rationale bullets that name the category mistake
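The prompt assembly can be sketched as below. The exact wording of the instructions is an assumption for illustration; the dataset only fixes which fields go in and what the output must contain:

```python
def build_prompt(case):
    """Assemble the four input fields into one evaluation prompt.

    The instruction wording is illustrative, not prescribed by the dataset.
    """
    return (
        f"Abstraction level: {case['abstraction_level']}\n"
        f"Category: {case['category']}\n"
        f"Context: {case['context']}\n"
        f"Claim: {case['model_claim']}\n\n"
        "Decide whether the claim stays within its abstraction level.\n"
        "Output:\n"
        "Decision: WITHIN_SCOPE or OUT_OF_SCOPE\n"
        "- Rationale bullets naming the category mistake, if any"
    )

prompt = build_prompt({
    "abstraction_level": "Metaphorical",
    "category": "Analogy",
    "context": "Neural network explanation",
    "model_claim": "The brain literally works like a computer",
})
```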
## What good behavior looks like
- States when language is metaphorical
- Separates models from reality
- Keeps statistics at the statistical level
- Avoids single-cause claims for complex traits
## What failure looks like
- Metaphor treated as literal truth
- Narrative treated as mechanism
- Heuristic treated as law
- Aggregate outcomes treated as individual guarantees
## Scoring

Implemented in `scorer.py`:

- +70 points: correct decision label
- +25 points: coverage of key category constraints
- -25 points: disallowed assumption stated explicitly

Scores are clamped between 0 and 100.
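A minimal sketch of that rubric, assuming simple substring matching; `scorer.py` is authoritative and may match rationale bullets differently:

```python
def score_case(model_output, expected_decision, rationale_bullets, disallowed):
    """Rubric sketch: +70 correct decision, up to +25 bullet coverage,
    -25 if the disallowed assumption appears, clamped to [0, 100]."""
    text = model_output.lower()
    score = 0
    if f"decision: {expected_decision.lower()}" in text:
        score += 70
    # Partial credit proportional to the expected bullets covered.
    covered = sum(1 for b in rationale_bullets if b.lower() in text)
    score += round(25 * covered / len(rationale_bullets))
    if disallowed.lower() in text:
        score -= 25
    return max(0, min(100, score))

s = score_case(
    "Decision: OUT_OF_SCOPE\n- Markets are not agents\n- Intent is metaphorical",
    "OUT_OF_SCOPE",
    ["Markets are not agents", "Intent is metaphorical", "Category error"],
    "Assuming abstract systems have desires",
)
# 70 (decision) + 17 (two of three bullets) = 87
```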
## Prediction format

JSONL, one prediction per line:

```json
{"case_id":"META-ALCC-0007","model_output":"Decision: OUT_OF_SCOPE\n- The equation represents the system\n- It is not the system itself\n- Representation differs from reality"}
```
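A predictions file with this shape can be written with the standard library; a sketch, assuming one dict per case with exactly these two keys:

```python
import json

predictions = [
    {
        "case_id": "META-ALCC-0007",
        "model_output": (
            "Decision: OUT_OF_SCOPE\n"
            "- The equation represents the system\n"
            "- It is not the system itself\n"
            "- Representation differs from reality"
        ),
    },
]

# One JSON object per line, as the scorer expects.
with open("preds.jsonl", "w", encoding="utf-8") as f:
    for pred in predictions:
        f.write(json.dumps(pred) + "\n")
```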
## Run scorer

```
python scorer.py \
  --data data/abstraction_level_category_control_meta.csv \
  --pred preds.jsonl \
  --out report.json
```
## Design intent

This dataset blocks a common pattern: under pressure, models switch levels. They use metaphor as proof and story as mechanism. This dataset forces the model to stop, name what kind of claim it is making, and keep the category clean.