
CORE: Comprehensive Ontological Relation Evaluation

🌐 Website | 📄 Paper | 💻 Code

Dataset Summary

CORE is a human-grounded benchmark for evaluating large language models on fundamental semantic and ontological reasoning. It assesses whether models can correctly recognize a broad range of sense-level relations and, critically, identify when no meaningful relationship exists between concepts. With comprehensive relation coverage and strong human baselines, CORE is designed to reveal systematic failure modes such as confidence–accuracy misalignment, semantic collapse, and overconfident reasoning that diverges from human judgment.

This dataset contains the open split of CORE, which is publicly released to support transparent evaluation and reproducible research.


Dataset Structure

Each row in the `questions` dataset represents a single multiple-choice question.

Fields

| Field | Type | Description |
|-------|------|-------------|
| `question_id` | string | Globally unique identifier |
| `question_text` | string | Natural-language question |
| `options` | dict | Mapping from option labels (A/B/C/D) to option text |
| `correct_answer` | string | Correct option label |
| `discipline` | string | Broad domain (e.g., general, commonsense) |
| `relation` | string | Ontological/semantic relation being tested |
| `question_type` | string | `"open"` (attempted by all participants) |
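Given the fields above, a row can be sanity-checked with a short sketch. This is illustrative only, not an official loader; the `validate_row` helper and the example question are hypothetical, with field names taken from the schema documented here.

```python
# Minimal schema check for a CORE question row (illustrative sketch,
# not part of the dataset release). Field names follow the card's schema.
EXPECTED_FIELDS = {
    "question_id": str,
    "question_text": str,
    "options": dict,
    "correct_answer": str,
    "discipline": str,
    "relation": str,
    "question_type": str,
}

def validate_row(row: dict) -> bool:
    """Return True if `row` matches the documented CORE schema."""
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(row.get(field), ftype):
            return False
    # Options map the labels A-D to option text; the answer must be a label.
    if set(row["options"]) != {"A", "B", "C", "D"}:
        return False
    return row["correct_answer"] in row["options"]

# Hypothetical example row, shaped like the schema above.
example = {
    "question_id": "q-0001",
    "question_text": "Which pair names a part-whole relation?",
    "options": {"A": "wheel/car", "B": "rain/wet", "C": "knife/cut", "D": "none"},
    "correct_answer": "A",
    "discipline": "general",
    "relation": "part-whole",
    "question_type": "open",
}
print(validate_row(example))  # True
```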

Ontological & Semantic Relations

CORE evaluates reasoning across a broad set of relations, including (non-exhaustively):

  • cause–effect
  • part–whole
  • function–object
  • spatial, temporal, and logical relations
  • related vs unrelated concept distinctions

This makes CORE one of the first benchmarks to systematically cover the majority of common ontological and semantic relation types in a unified evaluation framework.


Splits

Open Split (this dataset)

  • Questions attempted by 1000+ participants
  • Publicly released via Hugging Face
  • Used for transparent benchmarking and leaderboard reporting

Blind Split (not included here)

  • Questions attempted by only a subset of participants
  • Intentionally kept private to prevent data leakage
  • Used for periodic blind evaluation of models

Evaluation on the blind split can be requested by contacting: 📧 connect@vaikhari.ai


Human Baseline

Each question in CORE has been answered by hundreds to thousands of human participants. A separate `human_baseline` dataset provides:

  • Accuracy and difficulty indices
  • Entropy and consensus metrics
  • Distractor analysis
  • Agreement statistics
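Entropy and consensus of the kind listed above can be derived from per-option human answer counts along these lines. This is a sketch of the standard definitions (Shannon entropy in bits, consensus as the modal answer share); the exact formulas used in the `human_baseline` dataset may differ.

```python
import math

def baseline_metrics(counts: dict[str, int]) -> dict[str, float]:
    """Entropy (bits) and consensus from per-option human answer counts.

    `counts` maps option labels (A-D) to the number of participants who
    chose that option. Consensus here is the share of the modal answer.
    """
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    consensus = max(counts.values()) / total
    return {"entropy": entropy, "consensus": consensus}

# Unanimous agreement: zero entropy, full consensus.
print(baseline_metrics({"A": 100, "B": 0, "C": 0, "D": 0}))
# Even split across four options: 2 bits of entropy, 0.25 consensus.
print(baseline_metrics({"A": 25, "B": 25, "C": 25, "D": 25}))
```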

This allows direct comparison between:

  • Human performance
  • Model performance
  • Human–model divergence under confidence

Intended Uses

  • Academic research
  • Evaluation and benchmarking
  • Educational purposes
  • Non-commercial experimentation

Limitations

  • This dataset covers only the open split of CORE
  • Blind questions are not included to preserve benchmark integrity
  • English-only
  • Multiple-choice format (no free-form generation)

Evaluation

CORE is typically evaluated using:

  • Accuracy and balanced accuracy
  • Expected Calibration Error (ECE)
  • Overconfident error rate
  • Human–model disagreement under confidence
  • Semantic collapse indicators
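For reference, the first two metrics above can be computed from per-question predictions and confidences roughly as follows. This is a sketch using the standard equal-width-binned ECE definition; the official scripts in the CORE GitHub repository may bin or weight differently.

```python
def accuracy(preds: list[str], golds: list[str]) -> float:
    """Fraction of predicted option labels matching the correct ones."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def expected_calibration_error(confs: list[float], correct: list[bool],
                               n_bins: int = 10) -> float:
    """Standard binned ECE: per-bin |avg confidence - avg accuracy|,
    weighted by bin size. `confs` are model confidences in [0, 1]."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confs, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        avg_acc = sum(ok for _, ok in b) / len(b)
        ece += len(b) / n * abs(avg_conf - avg_acc)
    return ece
```

For example, two answers both predicted with confidence 0.9 but only one correct land in the same bin with average accuracy 0.5, giving an ECE of 0.4.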

Evaluation scripts are available in the CORE GitHub repository.


Licensing

This dataset is distributed under the CC BY-NC-ND 4.0 license and is intended solely for research and evaluation. See the repository for detailed usage conditions. Any commercial use, redistribution, or modification of the benchmark requires prior written approval.


Contact

For questions, blind evaluations, or full benchmark access:

📧 connect@vaikhari.ai 🌐 https://core.vaikhari.ai

