Improve dataset card for RM3QA: Add paper, code, project page, description, and usage
#1
by nielsr (HF Staff) - opened

README.md CHANGED
---
license: apache-2.0
task_categories:
- question-answering
- text-generation
tags:
- rag
- retrieval-augmented-generation
- multi-source
- benchmark
language:
- en
---

# RM3QA: A Real-time Multi-domain, Multi-format, Multi-source Question Answer Dataset for RAG

This repository hosts **RM3QA**, a refined benchmark dataset for multi-source knowledge pruning in Retrieval-Augmented Generation (RAG). It was introduced in the paper:

[**Multi-Source Knowledge Pruning for Retrieval-Augmented Generation: A Benchmark and Empirical Study**](https://huggingface.co/papers/2409.13694)

**Project Page**: [https://ustc-rag-x.github.io/](https://ustc-rag-x.github.io/)

**Code (PruningRAG Framework)**: [https://github.com/USTCAGI/PruningRAG](https://github.com/USTCAGI/PruningRAG)

## About RM3QA

`RM3QA` addresses the need for a benchmark suited to multi-source RAG evaluation. While most existing RAG studies focus on a single knowledge source, real-world applications often draw on diverse knowledge from multiple sources. This dataset standardizes a benchmark that combines structured and unstructured knowledge across diverse, complementary domains.

It is a refined version of the KDD Cup 2024 CRAG competition dataset, which originally included unstructured web-page knowledge (HTML) and structured mock-API data. To improve usability and compatibility with current RAG frameworks, `RM3QA` introduces two main refinements:

- **Web Page Data:** Cleaned by removing excessive HTML noise and converted into a standardized Markdown format.
- **Mock API Data:** Processed with rule-based mechanisms to align entity names with queries; API responses are transformed into natural language, improving accessibility for LLM-based reasoning.

These enhancements make `RM3QA` a robust resource for advancing research in the RAG community, particularly for scenarios involving multiple external knowledge sources.

## PruningRAG Framework Overview

`RM3QA` is designed to be used in conjunction with **PruningRAG**, a plug-and-play RAG framework. PruningRAG employs multi-granularity pruning strategies to optimize the integration of relevant information while minimizing misleading context, consistently improving performance across existing RAG variants.

## Sample Usage

This section gives an overview of how to get started with the PruningRAG framework, which uses the `RM3QA` dataset. For more detailed instructions and advanced configurations, please refer to the [GitHub repository](https://github.com/USTCAGI/PruningRAG).

### Installation

First, clone the `PruningRAG` repository and install the necessary dependencies:

```bash
git clone https://github.com/USTCAGI/PruningRAG.git
cd PruningRAG
pip install -r requirements.txt
```

### Adaptive Few-Shot CoT Prompt Example

The PruningRAG framework leverages an adaptive few-shot Chain-of-Thought (CoT) prompting strategy. Below is an example of the prompt structure used:

```python
"""For the given question and multiple references from Mock API, think step by step, then provide the final answer.
Current date: {query_time}
Note:
- For your final answer, please use as few words as possible.
- The user's question may contain factual errors, in which case you MUST reply `invalid question`. Here are some examples of invalid questions:
  - `what's the latest score update for OKC's game today?` (There is no game for OKC today)
  - `how many times has curry won the nba dunk contest?` (Steph Curry has never participated in the NBA dunk contest)
- If you don't know the answer, you MUST respond with `I don't know`.
- If the references do not contain the necessary information to answer the question, respond with `I don't know`.
- Use only the references below and not prior knowledge; if there is no reference, respond with `I don't know`.
- Your output format needs to meet these requirements: first, start with `## Thought` and output the thought process regarding the user's question. After you finish thinking, you MUST reply with the final answer on the last line, starting with `## Final Answer` and using as few words as possible.
### Question
{query}
### References
{references}
"""
```
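The `{query_time}`, `{query}`, and `{references}` placeholders in the template can be filled with Python's `str.format`. A minimal sketch (the abbreviated template and the sample values below are illustrative, not taken from the dataset):

```python
# Minimal sketch: fill the CoT prompt template's placeholders with str.format.
# The template here is abbreviated; the field names match the full prompt above.
PROMPT_TEMPLATE = (
    "For the given question and multiple references from Mock API, "
    "think step by step, then provide the final answer.\n"
    "Current date: {query_time}\n"
    "### Question\n{query}\n"
    "### References\n{references}\n"
)

prompt = PROMPT_TEMPLATE.format(
    query_time="03/19/2024",
    query="who founded amazon?",
    references="- Amazon was founded by Jeff Bezos in 1994.",
)
print(prompt)
```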

### Predict with LLM

After setting up your Large Language Model (LLM) server (either via API or locally with vLLM/Ollama, as described in the GitHub README), you can run predictions using the `main.py` script:

```bash
python main.py
```
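The prompt above instructs the model to emit a `## Thought` section followed by a `## Final Answer` line. A minimal sketch of extracting the final answer from such output (the helper name `extract_final_answer` is illustrative, not part of PruningRAG):

```python
# Illustrative helper (not part of PruningRAG): pull the final answer out of
# a model response that follows the "## Thought ... ## Final Answer" format.
def extract_final_answer(response: str) -> str:
    marker = "## Final Answer"
    if marker not in response:
        return "I don't know"  # fall back when the model ignored the format
    return response.split(marker, 1)[1].strip()

output = "## Thought\nThe references say Jeff Bezos founded Amazon.\n## Final Answer\nJeff Bezos"
print(extract_final_answer(output))  # Jeff Bezos
```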

### Evaluate Results

To evaluate the predictions generated by your RAG system against the `RM3QA` dataset, use the `evaluation.py` script:

```bash
python evaluation.py
```
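The exact metric computed by `evaluation.py` is defined in the repository. As an illustration only (an assumption, not necessarily the script's implementation), a CRAG-style truthfulness score rewards exact matches, treats `I don't know` as missing, and penalizes other answers:

```python
# Hedged sketch of CRAG-style scoring (an assumption; see evaluation.py in the
# PruningRAG repo for the actual metric): exact matches score 1, "I don't know"
# counts as missing (0), and any other wrong answer counts as -1.
def crag_style_score(prediction: str, answer: str) -> int:
    pred = prediction.strip().lower()
    if pred == answer.strip().lower():
        return 1
    if pred == "i don't know":
        return 0
    return -1

# Toy predictions and gold answers for illustration.
preds = ["Jeff Bezos", "I don't know", "Elon Musk"]
golds = ["jeff bezos", "1994", "jeff bezos"]
scores = [crag_style_score(p, g) for p, g in zip(preds, golds)]
print(sum(scores) / len(scores))  # average truthfulness score
```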