---
license: mit
size_categories:
  - 10M<n<100M
language:
  - en
---

# Asymmetricity v2: A Benchmark for Evaluating LLMs on Symmetric and Asymmetric Relation Understanding

Asymmetricity v2 is a massive upgrade to the original benchmark dataset, designed to evaluate large language models (LLMs) on their ability to distinguish and reason over symmetric (e.g., borders) and antisymmetric (e.g., parent of) relations in natural language. Now expanded to over 70 million entries, the dataset is derived from Wikidata triples and cast into a natural language inference (NLI) format, enabling fine-grained, large-scale analysis of relational understanding.

The dataset includes a variety of textual forms—both in natural language and in a delexicalized version where entities are replaced by Wikidata IDs (e.g., Q7024230). This enables models to be evaluated both on surface-level text and on abstract relational structure.
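
As a hypothetical illustration (the sentence templates below are assumptions, not taken from the data), the same fact might appear in both forms:

```
Lexicalized:    France shares a border with Germany.
Delexicalized:  Q142 shares a border with Q183.
```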

This is the second version of the dataset. The original version can be found here: Asymmetricity v1.


## Overview

Understanding the symmetry properties of relations is essential for robust reasoning. For example, if "A is the parent of B" is true, then "B is the parent of A" must be false. Many LLMs, however, struggle to apply this logic consistently, particularly when the phrasing or entity names change.

Asymmetricity v2 provides a structured and scalable testbed for evaluating this capability, drawing on real-world knowledge base relations and reformulating them as NLI-style sentence pairs. With the inclusion of reasoning chain lengths, v2 also supports evaluating multi-step relational reasoning.


## Motivation

Current language models often rely on surface patterns and statistical co-occurrence, which can mask whether they genuinely grasp logical constraints such as symmetry and directionality. This benchmark tests models on:

- Recognizing whether a relation is symmetric or asymmetric
- Identifying correct entailments and contradictions in natural language
- Generalizing across entity names and abstract identifiers (Wikidata IDs)
- Handling reasoning chains of varying lengths

## Dataset Design

Each example is based on Wikidata triples involving entities and relations. The data is converted into a list of natural language premises and a hypothesis representing a logical consequence (or contradiction). A label indicates whether the hypothesis logically follows from the premises.
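
As a minimal sketch of this idea (the actual generation pipeline and verbalization templates are not documented in this README; the function and wording below are illustrative assumptions):

```python
# Illustrative sketch: casting one Wikidata-style triple into an
# NLI-style example that tests a relation's symmetry property.
# Templates and labels here are assumptions, not the official pipeline.

def make_symmetry_example(subj, rel_text, obj, symmetric):
    """Turn one triple into a premise/hypothesis pair testing symmetry."""
    premise = f"{subj} {rel_text} {obj}."
    # The hypothesis swaps subject and object: entailed for a symmetric
    # relation, contradicted for an antisymmetric one.
    hypothesis = f"{obj} {rel_text} {subj}."
    label = "entailment" if symmetric else "contradiction"
    return {"premises": [premise], "hypothesis": hypothesis, "label": label}

# Symmetric relation (P47, "shares border with"):
print(make_symmetry_example("France", "shares a border with", "Germany", True))
# Antisymmetric relation (P40-style parent-of reading):
print(make_symmetry_example("Alice", "is the parent of", "Bob", False))
```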


## Evaluation Focus

This dataset supports research in:

- Logical consistency and relation reasoning in LLMs
- Sensitivity to relation directionality and symmetry
- Robustness across lexicalized and abstract (ID-based) inputs
- Pretraining biases related to relation semantics
- Multi-step reasoning capabilities (via chain length analysis)

It is suitable for prompting, zero/few-shot evaluation, embedding-based retrieval, and supervised fine-tuning.
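
For example, a filtered subset can be pulled with the `datasets` library. The snippet below assumes the data is hosted on the Hugging Face Hub as `MoyYuan/Asymmetricity-2.0` with a `train` split (both inferred, not confirmed by this README); streaming avoids downloading all 70M+ rows up front:

```python
from datasets import load_dataset

# Repository ID and split name are assumptions; adjust to the actual Hub path.
ds = load_dataset("MoyYuan/Asymmetricity-2.0", split="train", streaming=True)

# Keep only English, natural-language examples that test antisymmetry.
subset = (ex for ex in ds
          if ex["lang"] == "en"
          and ex["lex"] == "text"
          and ex["rule"] == "antisymmetry")

for ex in subset:
    print(ex["premises"], "=>", ex["hypothesis"], ex["label"])
    break  # inspect the first matching example
```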


## Data Format

Each line in the dataset is a JSON object (JSON Lines format) with the following fields; a hypothetical sample record is shown after the list:

- `tier`: A string indicating the difficulty tier or partition of the example.
- `lex`: The lexicalization type (e.g., `text` for natural language, `delex` for ID-based).
- `lang`: The language code of the text (e.g., `en`).
- `premises`: A list of natural language sentences acting as the logical basis for the inference.
- `hypothesis`: The target sentence to be validated against the premises.
- `label`: The inference label (e.g., `entailment`, `contradiction`).
- `relation_ids`: A list of Wikidata property IDs (e.g., `['P40']`) involved in the reasoning chain.
- `rule`: The specific logical rule being tested (e.g., `symmetry`, `antisymmetry`).
- `entities`: A list of entity identifiers or names present in the example.
- `chain_len`: An integer (`int64`) giving the length of the reasoning chain (number of steps/triples).
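
A hypothetical sample record (the entity names, `tier` value, and wording below are illustrative assumptions, not drawn from the actual data):

```json
{
  "tier": "base",
  "lex": "text",
  "lang": "en",
  "premises": ["Alice is the parent of Bob."],
  "hypothesis": "Bob is the parent of Alice.",
  "label": "contradiction",
  "relation_ids": ["P40"],
  "rule": "antisymmetry",
  "entities": ["Alice", "Bob"],
  "chain_len": 1
}
```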

## Citation

If you use this dataset in your work, please cite the following paper:

```bibtex
@article{yuan2025capturing,
  title={Capturing Symmetry and Antisymmetry in Language Models through Symmetry-Aware Training Objectives},
  author={Yuan, Zhangdie and Vlachos, Andreas},
  journal={arXiv preprint arXiv:2504.16312},
  year={2025}
}
```