---

license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: AirQA-Real
---


# NeuSym-RAG: Hybrid Neural Symbolic Retrieval with Multiview Structuring for PDF Question Answering

This repository contains the dataset, database, and vectorstore used in the experiments of our paper [**NeuSym-RAG: Hybrid Neural Symbolic Retrieval with Multiview Structuring for PDF Question Answering**](https://www.arxiv.org/abs/2505.19754).


## Dataset Statistics

We experiment on the following three datasets:

| Dataset  | PDF Count | Test Data Size | Task Type | Evaluation Type | Original Link |
| :----: | :----: | :----: | :----: | :----: | :----: |
| AirQA-Real | 6795 | 553  | single, multiple, retrieval | subjective, objective | this work |
| M3SciQA    | 1789 | 452  | multiple                    | subjective            | https://github.com/yale-nlp/M3SciQA |
| SciDQA     | 576  | 2937 | single, multiple            | subjective            | https://github.com/yale-nlp/SciDQA |


## Folder Structure of Dataset Directory

For each dataset, we provide:
- a `zip` file containing the relevant materials of the dataset, structured as follows:
    ```txt
    airqa/
    β”œβ”€β”€ metadata/
    β”‚   β”œβ”€β”€ a0008a3c-743d-5589-bea2-0f4aad710e50.json
    β”‚   └── ... # more metadata dicts
    β”œβ”€β”€ papers/
    β”‚   β”œβ”€β”€ acl2023/
    β”‚   β”‚   β”œβ”€β”€ 001ab93b-7665-5d56-a28e-eac95d2a9d7e.pdf
    β”‚   β”‚   └── ... # more PDFs published in ACL 2023
    β”‚   └── ... # other sub-folders of paper collections
    β”œβ”€β”€ processed_data/
    β”‚   β”œβ”€β”€ a0008a3c-743d-5589-bea2-0f4aad710e50.json # cached PDF parsing results
    β”‚   └── ... # more cached data for PDFs
    β”œβ”€β”€ test_data_*.jsonl
    └── uuids.json
    ```
    For the M3SciQA dataset, there is an additional sub-folder `images` containing the anchor images of the questions.
- a `duckdb` file, corresponding to the database we use in the main experiments.
- a `db` file, corresponding to the vectorstore we use in the main experiments.
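As a minimal sketch of how the unzipped materials can be consumed, the snippet below reads every `test_data_*.jsonl` file in a dataset directory into a list of example dicts. The directory path and any field names inside the examples (e.g. `question`) are illustrative assumptions, not guaranteed to match the actual schema; please refer to the repository documentation for the authoritative format.

```python
import json
from pathlib import Path

def load_test_data(dataset_dir):
    """Read all test_data_*.jsonl files under `dataset_dir` (one JSON object per line)."""
    examples = []
    for jsonl_path in sorted(Path(dataset_dir).glob("test_data_*.jsonl")):
        with open(jsonl_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:  # skip blank lines
                    examples.append(json.loads(line))
    return examples

# Hypothetical usage, assuming the zip was extracted next to this script:
# examples = load_test_data("airqa")
# print(len(examples))
```

The `duckdb` database file can similarly be opened with `duckdb.connect(...)` from the official DuckDB Python client, and the vectorstore `db` file with the corresponding vector database client used in our repository.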

For more information, please refer to the documents in our [repository](https://github.com/X-LANCE/NeuSym-RAG).


## Citation

If you find this dataset useful, please cite our work:
```bibtex
@misc{cao2025neusymraghybridneuralsymbolic,
      title={NeuSym-RAG: Hybrid Neural Symbolic Retrieval with Multiview Structuring for PDF Question Answering}, 
      author={Ruisheng Cao and Hanchong Zhang and Tiancheng Huang and Zhangyi Kang and Yuxin Zhang and Liangtai Sun and Hanqi Li and Yuxun Miao and Shuai Fan and Lu Chen and Kai Yu},
      year={2025},
      eprint={2505.19754},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.19754}, 
}
```