---
license: cc-by-nc-nd-4.0
configs:
  - config_name: L1
    default: true
    data_files:
      - split: dev
        path: L1/dev-*
      - split: test
        path: L1/test-*
  - config_name: L2
    data_files:
      - split: dev
        path: L2/dev-*
      - split: test
        path: L2/test-*
  - config_name: L3
    data_files:
      - split: dev
        path: L3/dev-*
      - split: test
        path: L3/test-*
  - config_name: L4
    data_files:
      - split: dev
        path: L4/dev-*
      - split: test
        path: L4/test-*
  - config_name: L5
    data_files:
      - split: dev
        path: L5/dev-*
      - split: test
        path: L5/test-*
dataset_info:
  - config_name: L2
    features:
      - name: document_number
        dtype: int64
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: num_hops
        dtype: int64
      - name: num_set_operations
        dtype: int64
      - name: multiple_answer_dimension
        dtype: int64
    splits:
      - name: dev
        num_bytes: 147549
        num_examples: 1123
      - name: test
        num_bytes: 527312
        num_examples: 5084
    download_size: 122171
    dataset_size: 674861
  - config_name: L3
    features:
      - name: document_number
        dtype: int64
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: num_hops
        dtype: int64
      - name: num_set_operations
        dtype: int64
      - name: multiple_answer_dimension
        dtype: int64
    splits:
      - name: dev
        num_bytes: 87881
        num_examples: 582
      - name: test
        num_bytes: 384612
        num_examples: 3000
    download_size: 87661
    dataset_size: 472493
  - config_name: L4
    features:
      - name: document_number
        dtype: int64
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: num_hops
        dtype: int64
      - name: num_set_operations
        dtype: int64
      - name: multiple_answer_dimension
        dtype: int64
    splits:
      - name: dev
        num_bytes: 156757
        num_examples: 975
      - name: test
        num_bytes: 287182
        num_examples: 2119
    download_size: 60353
    dataset_size: 443939
  - config_name: L5
    features:
      - name: document_number
        dtype: int64
      - name: question
        dtype: string
      - name: answer
        dtype: string
      - name: num_hops
        dtype: int64
      - name: num_set_operations
        dtype: int64
      - name: multiple_answer_dimension
        dtype: int64
    splits:
      - name: dev
        num_bytes: 40322
        num_examples: 239
      - name: test
        num_bytes: 65790
        num_examples: 467
    download_size: 18756
    dataset_size: 106112
---

# KG-MuLQA: A Framework for KG-based Multi-Level QA Extraction and Long-Context LLM Evaluation

KG‑MuLQA is a framework that (1) extracts QA pairs at multiple complexity levels, (2) along three key dimensions (multi-hop retrieval, set operations, and answer plurality), (3) by leveraging knowledge-graph-based document representations.

**Figure: Overview of KG‑MuLQA.** Credit agreements are annotated to identify entities and their relationships, forming a knowledge graph representation. This graph is then used to systematically extract multi-level QA pairs, which serve as the basis for benchmarking long-context LLMs.

## KG‑MuLQA-D Dataset

We produce KG‑MuLQA‑D, a dataset of 20,139 QA pairs derived from 170 SEC credit agreements (2013–2022) and categorized into five complexity levels. Each QA pair is tagged with a composite complexity level (L = #hops + #set‑ops + plurality), and the levels are grouped into Easy, Medium, and Hard categories, as sketched below.
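Each record exposes the three dimension counts as separate integer fields (see the `dataset_info` schema above), so the composite level can be recomputed per example. A minimal sketch in Python; the Easy/Medium/Hard cutoffs below are inferred from the per-level split counts in the release table (L1 matches the Easy totals, L2–L4 the Medium totals, L5 the Hard totals) rather than stated explicitly in this card:

```python
def complexity_level(example: dict) -> int:
    """Composite complexity level: L = #hops + #set-ops + plurality,
    computed from the per-example fields declared in dataset_info."""
    return (
        example["num_hops"]
        + example["num_set_operations"]
        + example["multiple_answer_dimension"]
    )


def difficulty(level: int) -> str:
    """Easy/Medium/Hard grouping inferred from the split counts:
    L1 -> Easy, L2-L4 -> Medium, L5 -> Hard (an assumption, not
    an official mapping from this card)."""
    if level <= 1:
        return "Easy"
    if level <= 4:
        return "Medium"
    return "Hard"


example = {"num_hops": 2, "num_set_operations": 1, "multiple_answer_dimension": 1}
L = complexity_level(example)
print(L, difficulty(L))  # 4 Medium
```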

## QA Templates

This table illustrates the question templates used to construct KG-MuLQA-D, structured along three dimensions: plurality (P), number of hops (H), and set operations (#SO). It includes example templates, the corresponding knowledge graph query paths, and the logical operations involved. These dimensions are used to compute the overall complexity level of each QA pair. The full list of templates can be found in the paper.

## LLM Benchmarking & Evaluation

We evaluate 16 proprietary and open-weight LLMs on the KG-MuLQA-D benchmark. As question complexity increases, the models' ability to retrieve and generate correct responses degrades markedly. We categorize the observed failures into four major types, each of which recurs more often as question complexity increases: Misinterpretation of Semantics, Implicit Information Gaps, Set Operation Failures, and Long-Context Retrieval Errors. See the paper for a detailed analysis.

## Evaluation Results

This table presents the performance of 16 LLMs, evaluated across Easy, Medium, and Hard question categories. The metrics include the F1 Score and the LLM-as-a-Judge rating, capturing both token-level accuracy and semantic correctness. The results reveal a consistent decline in performance as question complexity increases, with notable model-specific strengths and weaknesses. * denotes the models evaluated on a smaller subset due to cost constraints (see the paper for extended evaluation).
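
As a point of reference for the token-level metric, the sketch below shows the conventional SQuAD-style token-overlap F1. The paper's exact normalization (casing, punctuation, articles) may differ, so treat this as an illustration rather than the official scorer:

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Standard token-overlap F1: harmonic mean of precision and recall
    over shared tokens. Uses a simple lowercase whitespace split."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


print(token_f1("the revolving credit facility", "revolving credit facility"))  # ~0.857
```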

## Dataset Release

To facilitate reproducibility and future research, we release the KG‑MuLQA‑D dataset under a CC-BY-NC-ND 4.0 license. The dataset is divided into development and test sets as follows:

| Stats | Dev | Test | Total |
| --- | --- | --- | --- |
| # Documents | 40 | 130 | 170 |
| # Questions per Doc (Min) | 1 | 1 | 1 |
| # Questions per Doc (Avg) | 14.75 | 23.49 | 21.44 |
| # Questions per Doc (Max) | 83 | 428 | 428 |
| # Easy Questions | 1,499 | 5,051 | 6,550 |
| # Medium Questions | 2,680 | 10,203 | 12,883 |
| # Hard Questions | 239 | 467 | 706 |
| **Total Questions** | **4,418** | **15,721** | **20,139** |
- **Development Set (~25%):** 40 documents and 4,418 QA pairs are publicly released to support model development and validation.
- **Test Set (~75%):** 130 documents and 15,721 QA pairs are not released, to prevent data contamination and ensure fair evaluation (the test questions are released for the leaderboard).
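
The released splits can be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming this card's repository id is `Nikita-A-Tatarinov/KG-MuLQA-D` and using the `L1`–`L5` configs declared in the metadata above:

```python
from datasets import load_dataset

# Load the public dev split of the default L1 config; pass "L2".."L5"
# to get the other complexity levels declared in the card metadata.
dev = load_dataset("Nikita-A-Tatarinov/KG-MuLQA-D", "L1", split="dev")

print(dev)                 # schema: document_number, question, answer, ...
print(dev[0]["question"])  # inspect the first dev example
```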

## Citation

If you use KG‑MuLQA in your work, please cite:

```bibtex
@misc{tatarinov2026kgmulqaframeworkkgbasedmultilevel,
      title={KG-MuLQA: A Framework for KG-based Multi-Level QA Extraction and Long-Context LLM Evaluation},
      author={Nikita Tatarinov and Vidhyakshaya Kannan and Haricharana Srinivasa and Arnav Raj and Harpreet Singh Anand and Varun Singh and Aditya Luthra and Ravij Lade and Agam Shah and Sudheer Chava},
      year={2026},
      eprint={2505.12495},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.12495},
}
```

For questions or issues, please reach out to the authors.