---
license: apache-2.0
task_categories:
  - question-answering
  - text-generation
tags:
  - rag
  - retrieval-augmented-generation
  - multi-source
  - benchmark
language:
  - en
---

# RM3QA: A Real-time Multi-domain, Multi-format, Multi-source Question Answering Dataset for RAG

This repository hosts **RM3QA**, a refined benchmark dataset for multi-source knowledge pruning in Retrieval-Augmented Generation (RAG). It was introduced in the paper *Multi-Source Knowledge Pruning for Retrieval-Augmented Generation: A Benchmark and Empirical Study*.

- Project page: https://ustc-rag-x.github.io/
- Code (PruningRAG framework): https://github.com/USTCAGI/PruningRAG

## About RM3QA

RM3QA addresses the need for a benchmark suited to multi-source RAG evaluation. While most existing RAG studies focus on a single knowledge source, real-world applications often draw on diverse knowledge from multiple sources. This dataset provides a standardized benchmark that combines structured and unstructured knowledge across diverse, complementary domains.

It is a refined version of the KDD Cup 2024 CRAG competition dataset, which originally included unstructured web page knowledge (HTML) and structured mock API data. To enhance usability and compatibility with current RAG frameworks, RM3QA features significant improvements:

- **Web page data:** cleaned by removing excessive HTML noise and converted into a standardized Markdown format.
- **Mock API data:** entity names are aligned with queries via rule-based mechanisms, and API responses are transformed into natural language, improving accessibility for LLM-based reasoning.
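To illustrate the second point, here is a minimal sketch of the kind of transformation that turns a structured API response into a natural-language reference. The function name and field names are illustrative, not the dataset's actual pipeline:

```python
# Hypothetical sketch of flattening a structured mock-API response into a
# natural-language passage that an LLM can consume as a reference.
def api_response_to_text(entity: str, response: dict) -> str:
    """Render a structured mock-API response as a natural-language reference."""
    parts = [f"Information about {entity}:"]
    for field, value in response.items():
        label = field.replace("_", " ")  # e.g. "market_cap" -> "market cap"
        parts.append(f"its {label} is {value}.")
    return " ".join(parts)

reference = api_response_to_text(
    "Apple Inc.", {"ticker": "AAPL", "market_cap": "2.8T USD"}
)
# reference == "Information about Apple Inc.: its ticker is AAPL. its market cap is 2.8T USD."
```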

These enhancements establish RM3QA as a valuable and robust resource for advancing research in the RAG community, particularly for scenarios involving multiple external knowledge sources.

## PruningRAG Framework Overview

RM3QA is designed to be used in conjunction with PruningRAG, a plug-and-play RAG framework. PruningRAG employs multi-granularity pruning strategies to optimize the integration of relevant information while minimizing misleading context, consistently improving performance across various existing RAG variants.
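The idea of multi-granularity pruning can be sketched as follows. This is an illustrative toy, not the PruningRAG implementation: coarse-grained pruning drops whole knowledge sources judged irrelevant, then fine-grained pruning drops individual chunks below a relevance threshold. The scores are assumed to come from some retriever, and both thresholds are placeholders:

```python
# Toy two-granularity pruning: source-level first, then chunk-level.
def prune(sources: dict, source_min: float = 0.3, chunk_min: float = 0.5) -> list:
    """sources maps a source name (e.g. 'web', 'mock_api') to a list of
    (chunk_text, relevance_score) pairs; returns the kept chunk texts."""
    kept = []
    for name, chunks in sources.items():
        if not chunks:
            continue
        # Coarse granularity: skip an entire source whose best chunk is weak.
        if max(score for _, score in chunks) < source_min:
            continue
        # Fine granularity: keep only sufficiently relevant chunks.
        kept.extend(text for text, score in chunks if score >= chunk_min)
    return kept

context = prune({
    "web": [("chunk A", 0.9), ("chunk B", 0.2)],
    "mock_api": [("chunk C", 0.1)],
})
# context == ["chunk A"]
```

The design intent is that misleading context from a weakly relevant source is removed wholesale, while strong sources are still filtered chunk by chunk.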


## Sample Usage

This section provides an overview of how to get started with the PruningRAG framework, which utilizes the RM3QA dataset. For more detailed instructions and advanced configurations, please refer to the GitHub repository.

### Installation

First, clone the PruningRAG repository and install the necessary dependencies:

```bash
git clone https://github.com/USTCAGI/PruningRAG.git
cd PruningRAG
pip install -r requirements.txt
```

### Adaptive Few-Shot CoT Prompt Example

The PruningRAG framework leverages an adaptive few-shot Chain-of-Thought (CoT) prompting strategy. Below is an example of the prompt structure used:

```python
"""For the given question and multiple references from Mock API, think step by step, then provide the final answer.
Current date: {query_time}
Note:
- For your final answer, please use as few words as possible.
- The user's question may contain factual errors, in which case you MUST reply `invalid question`. Here are some examples of invalid questions:
    - `what's the latest score update for OKC's game today?` (There is no game for OKC today)
    - `how many times has curry won the nba dunk contest?` (Steph Curry has never participated in the NBA dunk contest)
- If you don't know the answer, you MUST respond with `I don't know`
- If the references do not contain the necessary information to answer the question, respond with `I don't know`
- Use only the references below and not prior knowledge; if there is no reference, respond with `I don't know`
- Your output format needs to meet the requirements: First, start with `## Thought
` and then output the thought process regarding the user's question. After you finish thinking, you MUST reply with the final answer on the last line, starting with `## Final Answer
` and using as few words as possible.
### Question
{query}
### References
{references}
"""
```
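The `{query_time}`, `{query}`, and `{references}` placeholders can be filled with `str.format`. The template below is abbreviated and the values are illustrative:

```python
# Filling the prompt's placeholders; template abbreviated for brevity.
template = (
    "Current date: {query_time}\n"
    "### Question\n{query}\n"
    "### References\n{references}\n"
)
prompt = template.format(
    query_time="2024-03-01",
    query="Who directed Oppenheimer?",
    references="1. Web: Oppenheimer (2023) was directed by Christopher Nolan.",
)
```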

### Predict with LLM

After setting up your Large Language Model (LLM) server (either via API or locally with vLLM/Ollama as described in the GitHub README), you can run predictions using the main.py script:

```bash
python main.py
```
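Because the prompt requires the model to end with a `## Final Answer` line, the reply has to be parsed before scoring. Here is a hedged sketch of that parsing step (the request payload shape assumes an OpenAI-compatible endpoint such as vLLM's; the URL and model name are placeholders, and the request is not actually sent here):

```python
# Placeholder request for an OpenAI-compatible chat endpoint (not sent).
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "placeholder-model",
    "messages": [{"role": "user", "content": "<filled prompt>"}],
    "temperature": 0.0,  # deterministic answers for evaluation
}

def extract_answer(raw: str) -> str:
    """Pull the line after '## Final Answer' from a reply that follows the
    output format required by the prompt above."""
    marker = "## Final Answer"
    if marker not in raw:
        return "I don't know"
    return raw.split(marker, 1)[1].strip().splitlines()[0]

reply = "## Thought\nNolan directed it.\n## Final Answer\nChristopher Nolan"
answer = extract_answer(reply)
# answer == "Christopher Nolan"
```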

### Evaluate Results

To evaluate the predictions generated by your RAG system against the RM3QA dataset, use the evaluation.py script:

```bash
python evaluation.py
```
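As an illustration of why the prompt insists on `I don't know` for missing information, the KDD Cup 2024 CRAG competition scored answers three ways: a correct answer earns credit, a missing answer (`I don't know`) earns nothing, and a hallucinated answer is penalized. The sketch below is not `evaluation.py` itself, and the exact-match comparison and penalty value are assumptions:

```python
# Illustrative CRAG-style three-way scoring: +1 correct, 0 missing, -1 wrong.
def score(pred: str, gold: str) -> int:
    pred = pred.strip().lower()
    if pred == "i don't know":
        return 0  # missing: no credit, no penalty
    return 1 if pred == gold.strip().lower() else -1  # wrong answers penalized

preds = ["Christopher Nolan", "I don't know", "Spielberg"]
golds = ["Christopher Nolan", "Greta Gerwig", "Christopher Nolan"]
total = sum(score(p, g) for p, g in zip(preds, golds))
# total == 1 + 0 - 1 == 0
```

Under this scheme, a system that abstains when its references are insufficient outscores one that guesses.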