---
license: cc-by-nc-nd-4.0
configs:
- config_name: L1
  default: true
  data_files:
  - split: dev
    path: L1/dev-*
  - split: test
    path: L1/test-*
- config_name: L2
  data_files:
  - split: dev
    path: L2/dev-*
  - split: test
    path: L2/test-*
- config_name: L3
  data_files:
  - split: dev
    path: L3/dev-*
  - split: test
    path: L3/test-*
- config_name: L4
  data_files:
  - split: dev
    path: L4/dev-*
  - split: test
    path: L4/test-*
- config_name: L5
  data_files:
  - split: dev
    path: L5/dev-*
  - split: test
    path: L5/test-*
dataset_info:
- config_name: L2
  features:
  - name: document_number
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: num_hops
    dtype: int64
  - name: num_set_operations
    dtype: int64
  - name: multiple_answer_dimension
    dtype: int64
  splits:
  - name: dev
    num_bytes: 147549
    num_examples: 1123
  - name: test
    num_bytes: 527312
    num_examples: 5084
  download_size: 122171
  dataset_size: 674861
- config_name: L3
  features:
  - name: document_number
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: num_hops
    dtype: int64
  - name: num_set_operations
    dtype: int64
  - name: multiple_answer_dimension
    dtype: int64
  splits:
  - name: dev
    num_bytes: 87881
    num_examples: 582
  - name: test
    num_bytes: 384612
    num_examples: 3000
  download_size: 87661
  dataset_size: 472493
- config_name: L4
  features:
  - name: document_number
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: num_hops
    dtype: int64
  - name: num_set_operations
    dtype: int64
  - name: multiple_answer_dimension
    dtype: int64
  splits:
  - name: dev
    num_bytes: 156757
    num_examples: 975
  - name: test
    num_bytes: 287182
    num_examples: 2119
  download_size: 60353
  dataset_size: 443939
- config_name: L5
  features:
  - name: document_number
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: num_hops
    dtype: int64
  - name: num_set_operations
    dtype: int64
  - name: multiple_answer_dimension
    dtype: int64
  splits:
  - name: dev
    num_bytes: 40322
    num_examples: 239
  - name: test
    num_bytes: 65790
    num_examples: 467
  download_size: 18756
  dataset_size: 106112
---

# KG-MuLQA: A Framework for KG-based Multi-Level QA Extraction and Long-Context LLM Evaluation

<p align="center">
  <a href="https://arxiv.org/abs/2505.12495">
    <img src="https://img.shields.io/badge/arXiv-2505.12495-red?logo=arxiv"/>
  </a>
  <a href="https://huggingface.co/datasets/gtfintechlab/KG-MuLQA-D">
    <img src="https://img.shields.io/badge/HuggingFace-KG--MuLQA--D-yellow?logo=huggingface" />
  </a>
  <a href="https://github.com/gtfintechlab/KG-MuLQA">
    <img src="https://img.shields.io/badge/GitHub-KG--MuLQA-black?logo=github"/>
  </a>
</p>

KG‑MuLQA is a framework that extracts QA pairs (1) at multiple complexity levels, (2) along three key dimensions (multi-hop retrieval, set operations, and answer plurality), (3) by leveraging knowledge-graph-based document representations.

<p align="center">
  <img src="artifacts/our_pipeline.png" width="80%" alt="KG‑MuLQA Overview" />
</p>

*Overview of KG-MuLQA. Credit agreements are annotated to identify entities and their relationships, forming a knowledge graph representation. This graph is then used to systematically extract multi-level QA pairs, which serve as the basis for benchmarking long-context LLMs.*


## KG‑MuLQA-D Dataset

We produce **KG‑MuLQA‑D**, a dataset of 20,139 QA pairs derived from 170 SEC credit agreements (2013–2022) and categorized into five complexity levels. Each QA pair is tagged with a composite complexity level (L = \#hops + \#set‑ops + plurality), and levels are grouped into *Easy*, *Medium*, and *Hard* categories.
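
Each complexity level can be loaded as its own config via the Hugging Face `datasets` library. A minimal sketch, using the config names, splits, and feature columns listed in the card metadata above:

```python
from datasets import load_dataset

# Each complexity level (L1-L5) is a separate config; L1 is the default.
dev = load_dataset("gtfintechlab/KG-MuLQA-D", "L2", split="dev")

# Every example carries the question, the gold answer, and its complexity tags.
example = dev[0]
print(example["question"], "->", example["answer"])
print(example["num_hops"], example["num_set_operations"], example["multiple_answer_dimension"])
```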

<p align="center">
  <img src="artifacts/templates.png" width="100%" alt="QA Templates" />
</p>

*This table illustrates the question templates used to construct KG-MuLQA-D, structured along three dimensions: plurality (P), number of hops (H), and set operations (\#SO). It includes example templates, the corresponding knowledge graph query paths, and the logical operations involved. These dimensions are used to compute the overall complexity level for each QA pair. The full list of templates can be found in the paper.*
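
Given those tags, an example's composite level can be recomputed directly from its feature columns. A minimal sketch, assuming the `multiple_answer_dimension` column encodes the plurality term of the formula above:

```python
def complexity_level(example: dict) -> int:
    # Composite level L = #hops + #set-ops + plurality.
    return (
        example["num_hops"]
        + example["num_set_operations"]
        + example["multiple_answer_dimension"]
    )
```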

## LLM Benchmarking & Evaluation

We evaluate 16 proprietary and open-weight LLMs on the KG-MuLQA-D benchmark. As question complexity increases, the models' ability to retrieve and generate correct responses degrades markedly. We categorize the observed failures into four recurring types: Misinterpretation of Semantics, Implicit Information Gaps, Set Operation Failures, and Long-Context Retrieval Errors. See the paper for detailed analysis.

<p align="center">
  <img src="artifacts/evaluation.png" width="100%" alt="Evaluation Results" />
</p>

*This table presents the performance of 16 LLMs, evaluated across Easy, Medium, and Hard question categories. The metrics include the F1 Score and the LLM-as-a-Judge rating, capturing both token-level accuracy and semantic correctness. The results reveal a consistent decline in performance as question complexity increases, with notable model-specific strengths and weaknesses. \* denotes the models evaluated on a smaller subset due to cost constraints (see the paper for extended evaluation).*
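
For reference, the token-level side of the evaluation corresponds to the standard token-overlap F1. The sketch below is a generic implementation (lowercasing plus whitespace tokenization), not necessarily the exact normalization used in the paper:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```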

## Dataset Release

To facilitate reproducibility and future research, we release the **KG‑MuLQA‑D** dataset under a [CC-BY-NC-ND 4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/). The dataset is divided into development and test sets as follows:

| **Stats**                 |   **Dev** |   **Test** |  **Total** |
| ------------------------- | --------: | ---------: | ---------: |
| # Documents               |        40 |        130 |        170 |
| # Questions per Doc (Min) |         1 |          1 |          1 |
| # Questions per Doc (Avg) |     14.75 |      23.49 |      21.44 |
| # Questions per Doc (Max) |       83  |        428 |        428 |
| # Easy Questions          |     1,499 |      5,051 |      6,550 |
| # Medium Questions        |     2,680 |     10,203 |     12,883 |
| # Hard Questions          |       239 |        467 |        706 |
| **Total Questions**       | **4,418** | **15,721** | **20,139** |

* **Development Set (~25%)**: 40 documents and 4,418 QA pairs are publicly released to support model development and validation.
* **Test Set (~75%)**: 130 documents and 15,721 QA pairs are **not released** to prevent data contamination and ensure fair evaluation (questions are released for the leaderboard).
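
As a sanity check, the released dev counts in the table above can be reproduced by summing over the five configs. A short sketch, assuming all five configs resolve on the Hub:

```python
from datasets import load_dataset

REPO = "gtfintechlab/KG-MuLQA-D"
dev_counts = {c: len(load_dataset(REPO, c, split="dev")) for c in ["L1", "L2", "L3", "L4", "L5"]}
print(dev_counts, "total:", sum(dev_counts.values()))  # total should be 4,418
```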

## Citation

If you use KG‑MuLQA in your work, please cite:

```bibtex
@misc{tatarinov2026kgmulqaframeworkkgbasedmultilevel,
      title={KG-MuLQA: A Framework for KG-based Multi-Level QA Extraction and Long-Context LLM Evaluation}, 
      author={Nikita Tatarinov and Vidhyakshaya Kannan and Haricharana Srinivasa and Arnav Raj and Harpreet Singh Anand and Varun Singh and Aditya Luthra and Ravij Lade and Agam Shah and Sudheer Chava},
      year={2026},
      eprint={2505.12495},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.12495}, 
}
```

For questions or issues, please reach out to:

- Nikita Tatarinov: [ntatarinov3@gatech.edu](mailto:ntatarinov3@gatech.edu)
- Agam Shah: [ashah482@gatech.edu](mailto:ashah482@gatech.edu)