Update dataset card for Golden Touchstone with Think-on-Graph 3.0 paper and RAG-Factory details
#1 · opened by nielsr (HF Staff)

README.md (changed):
---
license: apache-2.0
task_categories:
- question-answering
language:
- en
- zh
tags:
- rag
- graph-neural-networks
- llm-reasoning
- financial
- benchmark
- bilingual
---

# Golden Touchstone: A Comprehensive Bilingual Benchmark for Evaluating Financial Large Language Models

This repository contains the **Golden Touchstone** dataset, a comprehensive bilingual (English/Chinese) benchmark designed for evaluating financial Large Language Models.

The dataset is used and contextualized by the research presented in the paper:
[**Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval**](https://huggingface.co/papers/2509.21710)

The associated code implementing `Think-on-Graph 3.0` is available in the **RAG-Factory** framework:
[https://github.com/DataArcTech/RAG-Factory](https://github.com/DataArcTech/RAG-Factory)
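Golden Touchstone tasks are question-answering style; the sketch below shows one simple way such benchmark outputs could be scored. It is a minimal illustration with made-up predictions, not the benchmark's official evaluation code or schema.

```python
# Minimal exact-match scorer for QA-style benchmark outputs.
# Illustrative only; not Golden Touchstone's official metric.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the reference answer
    after whitespace and case normalization."""
    if not references:
        return 0.0
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["Apache-2.0", "buy"]
refs = ["apache-2.0 ", "sell"]
print(exact_match_accuracy(preds, refs))  # prints 0.5
```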

## Abstract of Think-on-Graph 3.0

Retrieval-Augmented Generation (RAG) and graph-based RAG have become important paradigms for enhancing Large Language Models (LLMs) with external knowledge. However, existing approaches face a fundamental trade-off: while graph-based methods are inherently dependent on high-quality graph structures, they face significant practical constraints. Manually constructed knowledge graphs are prohibitively expensive to scale, while graphs automatically extracted from corpora are limited by the performance of the underlying LLM extractors, especially when using smaller, locally deployed models. This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism to overcome these limitations. Our core innovation is the dynamic construction and refinement of a Chunk-Triplets-Community heterogeneous graph index, which, for the first time, incorporates a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval. This approach addresses a critical limitation of prior graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query. A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively engages in an iterative process of evidence retrieval, answer generation, sufficiency reflection, and, crucially, query and subgraph evolution. This dual-evolving multi-agent system allows ToG-3 to adaptively build a targeted graph index during reasoning, mitigating the inherent drawbacks of static, one-time graph construction and enabling deep, precise reasoning even with lightweight LLMs. Extensive experiments demonstrate that ToG-3 outperforms the compared baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework.
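The iterative retrieve, answer, reflect, and evolve process described in the abstract can be sketched as a small control loop. Everything below is a stand-in stub (the function names and behavior are assumptions for illustration), not ToG-3's actual implementation:

```python
# Toy sketch of the dual-evolving MACER loop: retrieve evidence into a
# growing subgraph, draft an answer, reflect on sufficiency, and, if
# insufficient, evolve the query and subgraph, then iterate.
# All agents here are illustrative stubs, not ToG-3's code.

def macer_loop(question, retrieve, answer, reflect, evolve, max_rounds=3):
    query, subgraph = question, set()
    draft = None
    for _ in range(max_rounds):
        subgraph |= retrieve(query)                # Retriever agent
        draft = answer(question, subgraph)         # Responser agent
        if reflect(draft, subgraph):               # Reflector agent: enough?
            break
        query, subgraph = evolve(query, subgraph)  # evolve query and subgraph
    return draft

# Demo with trivial stubs:
result = macer_loop(
    "q0",
    retrieve=lambda q: {q},
    answer=lambda q, g: f"answer from {len(g)} evidence chunk(s)",
    reflect=lambda d, g: len(g) >= 2,
    evolve=lambda q, g: (q + "+", g),
)
print(result)  # prints "answer from 2 evidence chunk(s)"
```

The loop terminates either when the reflection step judges the evidence sufficient or when `max_rounds` is exhausted, mirroring the sufficiency-reflection step the abstract describes.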

## RAG-Factory: Advanced and Easy-to-Use RAG Pipelines

RAG-Factory is a factory for building advanced RAG (Retrieval-Augmented Generation) pipelines, including:

- Standard RAG implementations
- GraphRAG architectures
- Multi-modal RAG systems

Key features:

- Modular design for easy customization
- Support for various knowledge graph backends
- Integration with multiple LLM providers
- Configurable pipeline components

## Installation (for RAG-Factory)

```bash
pip install -e .
```

## Sample Usage (with RAG-Factory)

To run examples using the `RAG-Factory` framework:

```bash
bash run.sh naive_rag/graph_rag/mm_rag
```

Alternatively, you can specify a configuration file:

```bash
python main.py --config examples/graphrag/config.yaml
```

More examples can be found in the `examples/` directory of the [RAG-Factory GitHub repository](https://github.com/DataArcTech/RAG-Factory).

## Citation

### For Golden Touchstone

```bibtex
@misc{wu2024goldentouchstonecomprehensivebilingual,
  title={Golden Touchstone: A Comprehensive Bilingual Benchmark for Evaluating Financial Large Language Models},
  author={Xiaojun Wu and Junxi Liu and Huanyi Su and Zhouchi Lin and Yiyan Qi and Chengjin Xu and Jiajun Su and Jiajie Zhong and Fuwei Wang and Saizhuo Wang and Fengrui Hua and Jia Li and Jian Guo},
}
```

### For Think-on-Graph 3.0

```bibtex
@misc{wu2025ToG-3,
  title={Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval},
  author={Xiaojun Wu and Cehao Yang and Xueyuan Lin and Chengjin Xu and Xuhui Jiang and Yuanliang Sun and Hui Xiong and Jia Li and Jian Guo},
  year={2025},
  eprint={2509.21710},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.21710},
}
```