---
license: cc-by-4.0
task_categories:
- question-answering
- text-generation
language:
- en
pretty_name: Competence-Based Evaluation (Invariance Benchmark)
size_categories:
- 10K<n<100K
tags:
- reasoning
- logical-reasoning
- invariance
- robustness
- benchmark
- sft
configs:
- config_name: eval_pos
  data_files:
  - split: original
    path: eval/pos/original.jsonl
  - split: equivalent
    path: eval/pos/equivalent.jsonl
- config_name: eval_pos_largeN
  data_files:
  - split: original
    path: eval/pos_largeN/original.jsonl
  - split: equivalent
    path: eval/pos_largeN/equivalent.jsonl
- config_name: eval_depth
  data_files:
  - split: original
    path: eval/depth/original.jsonl
  - split: equivalent
    path: eval/depth/equivalent.jsonl
- config_name: sft_full
  data_files:
  - split: train
    path: sft/full/train.jsonl
  - split: validation
    path: sft/full/val.jsonl
- config_name: sft_noleak
  data_files:
  - split: train
    path: sft/noleak/train.jsonl
  - split: validation
    path: sft/noleak/val.jsonl
---
# Competence-Based Evaluation (Invariance Benchmark)

A benchmark for testing whether language models give the same answer to semantically equivalent reformulations of a logical-ordering question. Given a set of pairwise constraints (e.g. "Alice is in front of Bob"), a model should answer transitive-closure queries ("Is Carol in front of Dave?") consistently whether the constraints are stated using a relation or its inverse.
Each item exists as a paired (original, equivalent) record describing the
same underlying ordering with different surface phrasings. Invariance is
measured as the agreement between the model's original and equivalent
answers; accuracy is measured against the ground-truth boolean.
## Subsets

### Evaluation (held-out)
| Config | Split | Rows | N range | Notes |
|---|---|---|---|---|
| `eval_pos` | `original`, `equivalent` | 4,000 each | 4–2048 | Main yes/no eval. Uses the held-out `pos` (in-front-of/behind) relation. The names list shown in the prompt is shuffled to remove the order-of-names leak. |
| `eval_pos_largeN` | `original`, `equivalent` | 1,200 each | up to several thousand | Stress test at large `N`. |
| `eval_depth` | `original`, `equivalent` | 2,000 each | 4–64 | Held-out `depth` (above/below stacking) relation, names list shuffled. |
Each row is one yes/no question. Within a config, row *i* of the
`original` split and row *i* of the `equivalent` split describe the same
underlying ordering and the same query, only with the relation phrased
differently (e.g. "Alice in front of Bob" vs. "Bob behind Alice"). They share
the same ground-truth answer.
Schema:

```json
{
  "question": "There are 4 people standing in some order.\nTheir names are [...]\n...\nIs Nicholas in front of Thomas? Provide your answer only as yes or no. Answer: \n",
  "answer": "yes",
  "is_fwd": true,
  "num_elements": 4
}
```
### Supervised fine-tuning
The SFT subsets are chat-formatted (a `messages` field) and ready for
`trl.SFTTrainer` / OpenAI fine-tuning. They are built from a different set of
fact-agnostic relations than the eval set, with `n` skewed toward small values.
Each underlying ordering is expanded across the four `(is_fwd, answer)`
combinations × two (`original`, `equivalent`) phrasings = 8 rows (see the
sketch below).
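The combinatorics, as a quick sketch (illustration only, not the generator's actual code):

```python
from itertools import product

# 2 is_fwd values x 2 answers x 2 phrasings = 8 rows per underlying ordering
rows = list(product([True, False], ["yes", "no"], ["original", "equivalent"]))
assert len(rows) == 8
```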
| Config | Split | Rows | Train relations | Notes |
|---|---|---|---|---|
| `sft_full` | `train` | 45,600 | `arrival`, `priority`, `proximity`, `seniority`, `spatial_lr`, `spatial_ud` | All fact-agnostic relations. |
| `sft_full` | `validation` | 2,400 | (same) | In-distribution validation split. |
| `sft_noleak` | `train` | 45,600 | (same as `sft_full`) | Built with `--shuffle-names-display` to remove the names-list leak; this is the version used for the paper's reported fine-tuning results. |
| `sft_noleak` | `validation` | 2,400 | (same) | In-distribution validation split. |
The `pos` and `depth` relations are deliberately excluded from training so
that the eval subsets remain genuinely out-of-distribution.
Schema (chat / `messages` format):

```json
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant. Answer logical reasoning questions concisely."},
    {"role": "user", "content": "There are 8 employees ... Is Juana more senior than Felecia? ..."},
    {"role": "assistant", "content": "no"}
  ]
}
```
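As an illustration, a record can be rendered into a single training string with a tokenizer chat template (a sketch; the checkpoint named below is an arbitrary example, not a recommendation):

```python
from transformers import AutoTokenizer

# Any chat-tuned checkpoint with a chat template works; this one is just an example.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Answer logical reasoning questions concisely."},
        {"role": "user", "content": "There are 8 employees ... Is Juana more senior than Felecia? ..."},
        {"role": "assistant", "content": "no"},
    ]
}

# apply_chat_template serializes the turns using the model's own prompt format.
text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
print(text)
```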
Per-config metadata (`n` distribution, per-relation counts, seed) lives in
`sft/full/meta.json` and `sft/noleak/meta.json`.
## Loading
```python
from datasets import load_dataset

# Eval — paired splits, same row index = same underlying ordering.
ds = load_dataset("jizej/Competence-Based-Evaluation", "eval_pos")
org = ds["original"]
eqv = ds["equivalent"]

# SFT — chat-formatted.
sft = load_dataset("jizej/Competence-Based-Evaluation", "sft_noleak")
train = sft["train"]
```
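Because paired splits are index-aligned, a quick sanity check (a sketch) can confirm the shared ground truth:

```python
# Paired rows must share the same ground-truth boolean (see "Subsets" above).
assert all(o["answer"] == e["answer"] for o, e in zip(org, eqv))
```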
## Source Datasets
The dataset is fully synthetic — no records are copied from another corpus. However, the procedural generator draws entity names (people, animals, cities, structures, etc.) from external knowledge sources. The complete list of upstream source URIs is:
- Wikidata SPARQL endpoint: https://query.wikidata.org/sparql
  Used for the `size_animals`, `height_structures`, `age_figures`, `time_events`, `brightness_stars`, and `speed_animals` (Wikidata fallback) pools. Queries are stored verbatim in `invariance_bench/generate_entities.py`.
- English Wikipedia REST API: https://en.wikipedia.org/api/rest_v1/page/html/...
  Specific source pages:
  - https://en.wikipedia.org/wiki/List_of_cities_by_average_temperature (`temperature_cities` pool)
  - https://en.wikipedia.org/wiki/Fastest_animals (fallback for `speed_animals`)
- Curated lists embedded in the generator script (no external URI): `weight_objects`, `price_items`, `rank_athletes`, `spatial_objects`, and the names list used by the `pos` and SFT relations. These are author-maintained and are the only non-Wikidata/Wikipedia sources.
Wikidata content is licensed CC0 and Wikipedia text is licensed CC BY-SA.
Cached responses for every pool are stored under `.entity_cache/{pool}.json`
in the open-source repository so the dataset can be regenerated bit-for-bit
without re-querying the upstream sources.
Synthetic-generation seeds that fully determine the released splits are
recorded in the per-subset `meta.json` files (`sft/full/meta.json`,
`sft/noleak/meta.json`); the seed used for the released SFT subsets is
42. Eval splits use deterministic enumeration over (N, ordering, query)
triples and require no random seed.
## Provenance Activities
The end-to-end activities applied to produce this dataset are:
- **Collection (automated, online).** Entity pools fetched from Wikidata via SPARQL and from Wikipedia via the REST API; see `invariance_bench/generate_entities.py`. Rate-limited with retries; results cached on disk.
- **Cleaning / filtering.** Per-pool deduplication (case-insensitive name collapsing), removal of entries missing the relevant ground-truth value, and merging of SPARQL results with curated fallback lists. For `age_figures` the SPARQL query is split into three era-based sub-queries to avoid `wikibase:sitelinks`-induced timeouts.
- **Curated fallback authoring.** Manual curation by the dataset authors for the `weight_objects`, `price_items`, `rank_athletes`, `spatial_objects`, and `names` pools (lists embedded directly in `generate_entities.py` and `question_generation.py`).
- **Synthetic question generation.** Procedural construction of the eval and SFT records in `invariance_bench/question_generation.py` and the entry-point scripts `scripts/generate_dataset.py`, `scripts/generate_heldout_dataset.py`, and `scripts/generate_training_data.py`. This step is fully deterministic given the seed and entity pools.
- **Annotation.** None. There is no human-annotation step. All `answer`/`messages` ground-truth labels are produced by the same deterministic generator that creates the question text, and are derived from the synthesized ground-truth ordering, not from human judgment.
- **Synthetic agents / LLMs.** None. No language model, embedding model, or generative agent is used at any step in the pipeline.
- **Crowdsourcing platforms / human teams.** Not applicable — no crowdsourcing, no human raters, no annotation contractors were involved.
- **Validation / leak audit.** The released `_shufnames`/`noleak` subsets were produced after an internal audit revealed that an earlier version's prompt-side names-list ordering correlated with the answer. The audit is documented in `docs/paper_methodology_experiments.md` of the open-source repository.
## Construction (Synthetic-Data Generation Process)
All records in this dataset are synthetic. They are produced by a deterministic procedural generator; no model-based generation, no human annotation, and no scraped natural-language Q&A is used in the pipeline.
The generation process is:
- **Entity pools.** Names of entities (animals, structures, people, cities, events, stars, etc.) are sourced from Wikidata SPARQL queries, Wikipedia HTML tables, and small curated fallback lists embedded in the generator. Each pool is cached on disk as JSON. See `invariance_bench/generate_entities.py`.
- **Ordering sampling.** For each (relation, `n`) bucket the generator samples a random permutation of `n` entities from the appropriate pool and lays out the chain implied by the relation (e.g. front-of / behind).
- **Constraint expansion.** A subset of consecutive pairs is selected to form the "rules" shown in the prompt; the unstated remainder is what the transitive-closure query exercises.
- **Phrasing duplication.** Every ordering is rendered twice: once with the canonical relation (`original`) and once with the logically inverse relation (`equivalent`). The two renderings carry the same ground-truth boolean answer.
- **Yes/no query selection.** A query pair `(a, b)` is sampled at a configured minimum hop distance, with the ground-truth `yes`/`no` answer balanced by construction.
- **(SFT subsets only) Chat formatting.** Each (ordering, query, phrasing) triple is serialized into a `messages` array with system / user / assistant turns ready for SFT trainers. (A toy sketch of these steps follows this list.)
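A toy sketch of these steps for a single ordering (illustrative only: the entity names are placeholders, and the real generator in `invariance_bench/question_generation.py` additionally handles templates, entity pools, minimum hop distance, and yes/no balancing):

```python
import random

names = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]

rng = random.Random(42)               # deterministic given the seed
order = rng.sample(names, 4)          # ground-truth chain; order[0] is front-most

# Constraint expansion: consecutive pairs become the stated "rules"
# (the real generator keeps only a subset of them).
rules = [(order[i], order[i + 1]) for i in range(len(order) - 1)]

# Phrasing duplication: canonical relation vs. its logical inverse,
# sharing one ground truth.
original = [f"{a} is in front of {b}." for a, b in rules]
equivalent = [f"{b} is behind {a}." for a, b in rules]

# Yes/no query read off the ground-truth ordering (the real generator
# also enforces a minimum hop distance and balances yes/no).
a, b = rng.sample(order, 2)
answer = "yes" if order.index(a) < order.index(b) else "no"
question = f"Is {a} in front of {b}?"
```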
All generation seeds, the per-relation count distributions, the `N`
schedule, and the held-out relation list are recorded in the per-subset
`meta.json`. The full pipeline is reproducible from the open-source
repository.
## Intended Use Cases
The dataset is designed to measure answer-level invariance of language models under semantically preserving paraphrasing of logical-ordering constraints. Concretely:
- **Primary use case (validated):** measuring whether a model returns the same boolean answer to a transitive-closure query when the underlying ordering is described with a relation versus its inverse. Validation is reported in our accompanying NeurIPS 2026 D&B submission across proprietary and open-weight models.
- **Primary use case (validated):** comparing pre- and post-fine-tuning checkpoints to verify that targeted SFT improves invariance without destroying out-of-distribution generalization (held-out `pos` and `depth` relations).
- **Secondary use case (partially validated):** scaling-law-style analyses of invariance vs. accuracy as a function of `N` (the number of entities in the ordering). Validated for `N ∈ [4, 2048]`; behavior beyond this range is not characterized.
- **Secondary use case (not validated here):** use as a regression test for training pipelines that aim to preserve symbolic reasoning under paraphrase. We provide the data; we do not certify any specific training recipe.
Use cases for which validation does not exist or may not hold: general-reasoning leaderboard ranking, safety / alignment evaluation, detection of jailbreaks or adversarial prompts, multilingual robustness, evaluation of long-form generation quality, and any clinical, legal, or high-stakes decision-support setting.
## Personal and Sensitive Information
The dataset contains no real personal data, no real PII, and no health, medical, financial, biometric, political, or religious data about identifiable individuals. All "people" in the prompts are synthetic references constructed by sampling from entity pools.
The following indirect demographic signals are present and should be declared:
- **Gender (indirect, via names).** First names sampled from US-style name lists carry conventional masculine/feminine associations. No gender label is attached to any record; gender is only implicit in the name token.
- **Geography.** Pools such as `temperature_cities`, `height_structures`, and `time_events` contain real geographic place names sourced from Wikidata and Wikipedia. These pools are skewed toward globally prominent, English-Wikipedia-covered locations.
- **Language.** Prompts and answers are exclusively in English; this is a deliberate scope restriction, not a privacy signal, but it is recorded here for completeness.
- **Culture.** Entity selection inherits the cultural skew of Wikidata / English Wikipedia (Western, anglophone over-representation).
- **Age (of historical figures only).** The `age_figures` pool references real historical figures with their public birth years. These are deceased public figures whose biographical data is already published on Wikidata; no contemporary individuals' ages are present.
The following are not present: socio-economic status of identifiable
individuals, professional experience or seniority of identifiable
individuals (the `seniority` and `priority` relations operate on synthetic
placeholders, not on real employees or rankings), health or medical data,
political affiliation, and religious belief.
No data subjects were contacted or surveyed in producing this dataset, so no consent or withdrawal procedures apply. Wikidata is licensed CC0 and Wikipedia is licensed CC BY-SA; both permit redistribution of the entity metadata used here.
## Social Impact

**Intended positive impact.** Releasing a clean invariance benchmark encourages the field to evaluate language models on robustness to paraphrase, not only on accuracy. Reproducible held-out splits and an open-source generator make it harder for the benchmark to be quietly over-fit, and the SFT subsets give researchers a concrete starting point for studying targeted invariance training.

**Potential negative impact and risks of misuse.**
- **Over-claiming general reasoning.** High invariance scores on this dataset measure invariance on transitive ordering only. A naive reader could mistake them for evidence of general reasoning robustness; results should always be reported with the scope of the benchmark stated.
- **Leaderboarding pressure.** As with any public benchmark, optimizing directly against this dataset risks Goodharting — gains here may not transfer to natural-language reasoning. We encourage reporting paired held-out evaluations from other benchmarks.
- **Cultural / linguistic skew.** Because entity pools are anglocentric, models tuned on this data may improve on similarly distributed inputs while showing little transfer to non-English or non-Western surface forms.
- **Indirect demographic correlations.** US-style first names carry conventional gender signals. If a downstream model is trained on the SFT subsets in a way that picks up name-conditioned heuristics, that bias will propagate. Users training on this data should audit for gendered response patterns.
**Mitigations in this release.**
- The dataset is open-license (CC BY 4.0) but gated by deliberate narrowness of scope rather than access controls: every record is explicitly a synthetic transitive-ordering question, and the dataset card states the intended-use boundaries above.
- Held-out relations (`pos`, `depth`) are excluded from the SFT subsets so OOD generalization claims remain defensible.
- The earlier internal `_shufnames`/`noleak` audit (where the displayed names list accidentally encoded the answer) is documented above; the released eval and SFT files have the leak fixed.
- The generator is open-source, allowing external auditors to reproduce every record from a documented seed.
No usage gating, embargo, or differential-access controls are applied. Users are expected to follow the limitations and intended-use guidance above and to cite the dataset when reporting results.
## Limitations

- **Narrow reasoning skill.** Each question tests transitive closure over a linear ordering induced by a single binary relation. Performance here does not generalize to multi-step natural-language reasoning, common-sense inference, math, code, or any non-ordering relational structure.
- **Synthetic phrasings.** Questions are produced by a small grammar (a fixed template per relation) rather than written by humans, so surface-form diversity is limited. Distributional gaps relative to natural prose, conversational queries, or noisy real-world text are large.
- **English only.** All prompts and answers are English. The benchmark says nothing about cross-lingual robustness.
- **Yes/no output space.** The eval rewards a literal `yes` or `no` token. Models that hedge, refuse, or emit verbose chains of thought without a committed answer score zero on accuracy and invariance regardless of whether the underlying reasoning is correct. Practitioners using CoT-style models should add an answer-extraction step (see `invariance_bench/scoring.py`; a minimal sketch follows this list).
- **Single deterministic ground truth.** The eval does not measure calibration, uncertainty, or partial credit; orderings with ties or under-specified constraints are not represented.
- **Long-context confound.** At large `N` (especially in `eval_pos_largeN` and the `N=2048` slice of `eval_pos`), prompts can exceed the effective context window of many models. Failures at large `N` may reflect context handling rather than reasoning ability and should not be interpreted as pure invariance violations.
- **Held-out coverage.** The OOD evaluation surface is two relations (`pos`, `depth`); the benchmark cannot verify whether a model's invariance generalizes to relations beyond those seen at train and eval time.
- **Names-list leak in earlier internal versions.** Released `_shufnames` eval files and the `sft_noleak` training files do not have this leak. Older `base2_*` artifacts (not released on HF) did, and any third-party reuse of those files would over-estimate model performance.
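For CoT-style models, a minimal answer-extraction step might look like the following sketch; `invariance_bench/scoring.py` is the reference implementation and may differ in detail.

```python
import re

def extract_yes_no(text: str) -> str | None:
    """Return 'yes' or 'no' from a free-form completion, else None."""
    # Take the last standalone yes/no token so a final committed answer wins
    # over mentions earlier in the chain of thought.
    matches = re.findall(r"\b(yes|no)\b", text.lower())
    return matches[-1] if matches else None

assert extract_yes_no("Reasoning... so the answer is Yes.") == "yes"
assert extract_yes_no("I cannot tell.") is None
```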
Not recommended for: general reasoning leaderboards, safety/alignment evaluation, multilingual evaluation, evaluating models whose primary output mode is a long chain of thought without an extractable boolean answer.
## Biases
- **Anglo/Western entity skew.** The `names` pool used by the `pos`-relation questions and by the SFT data is drawn from US-style first-name lists, so most prompts contain English-coded given names. The `temperature_cities`, `height_structures`, and `time_events` pools likewise over-represent Wikipedia/Wikidata-prominent (largely Western, English-language) entities. Under-represented populations include non-Western cultures and languages whose entities have lower Wikipedia coverage.
- **Source-driven content bias.** Wikidata and Wikipedia are themselves known to be skewed toward male, Western, and modern-era subjects (especially in `age_figures`). The benchmark inherits these biases. Curated fallback lists for `weight_objects`, `price_items`, and `rank_athletes` reflect the authors' own selections and are not demographically balanced.
- **Relation-template bias.** Each relation has one canonical phrasing and one inverse phrasing. The grammar does not exercise the full space of English ways to express ordering (passive voice, comparative clauses, idiomatic expressions, etc.), so reported invariance is a conservative lower bound: a model that is invariant on this dataset may still be sensitive to other surface variations.
- **Position-of-name leak (mitigated).** In an earlier internal version, the order of names listed in the prompt correlated with their position in the underlying ordering, which models could exploit without reading the rules. Released eval files (`*_shufnames.jsonl`) and the `sft_noleak` subset shuffle the displayed names list to remove this leak. Users regenerating data with the included scripts must pass `--shuffle-names-display` to reproduce the no-leak setting.
- **Train/eval relation leakage controls.** `pos` (front/behind) and `depth` (above/below) are deliberately held out of the SFT data so they remain OOD for fine-tuned checkpoints. Mixing the SFT subsets with held-out evaluation defeats the OOD claim.
- **Per-`N` row-count imbalance.** Both eval and SFT skew toward small `N` (the SFT distribution explicitly down-weights large `N`). Aggregate metrics across `N` are therefore dominated by the small-`N` regime; report per-`N` numbers when comparing models.
## License
Released under CC BY 4.0. Entity names sourced from Wikidata/Wikipedia retain their original licenses (CC0 / CC BY-SA).
## Citation
Please cite the accompanying paper if you use this dataset (citation TBD — NeurIPS 2026 Datasets & Benchmarks track submission).