---
license: apache-2.0
task_categories:
- question-answering
- text-generation
- summarization
language:
- zh
size_categories:
- 1K<n<10K
---

# PRGB Benchmark

PRGB (Placeholder RAG Benchmark) is a benchmark focused on evaluating document faithfulness and the efficient use of external knowledge in Retrieval-Augmented Generation (RAG) systems.

It evaluates models along progressive dimensions such as multi-level filtering and cross-entity reasoning, using placeholder-based, noise-injected datasets to help researchers and developers analyze how mainstream RAG models perform in complex scenarios.
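
Each dataset file is JSONL, with one JSON record per line. To get a feel for the data before running an evaluation, you can pretty-print the first record (assuming the `data/` layout used by the Makefile targets below; exact field names depend on the release):

```bash
# Pretty-print the first record of the Chinese split
head -n 1 data/zh.jsonl | python -m json.tool
```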

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the [PRGB GitHub repository](https://github.com/AQ-MedAI/PRGB) and the PRGB paper on [🤗 HuggingFace](https://huggingface.co/papers/2507.22927).

# Key Features

- 🎯 **Multi-Model Support**: Supports multiple large language models with local vLLM inference
- 📊 **Standardized Evaluation**: Provides unified evaluation metrics and processes
- 🔧 **Flexible Configuration**: Supports noise configuration, placeholder configuration, and other parameter adjustments
- 🌍 **Multi-Language Support**: Supports evaluation on both Chinese and English datasets
- 📈 **Detailed Reports**: Generates comprehensive evaluation results and score reports

## PRGB's Leaderboard

In our experiments, we uniformly used the following configuration:
`noise_config: '{"noise_doc_level1":4,"noise_doc_level2":4,"noise_doc_level3":1}'`, `num_iterations: 3`, and `shuffle: True`.
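
With the `eval.py` flags shown later in this README, that leaderboard configuration corresponds to a command along these lines (a sketch; the model name and paths are placeholders):

```bash
python eval.py \
    --model-name "Qwen3" \
    --model-path "/path/to/your/model" \
    --data-path "data/zh.jsonl" \
    --output-path "./results" \
    --noise-config '{"noise_doc_level1":4,"noise_doc_level2":4,"noise_doc_level3":1}' \
    --num-iterations 3 \
    --shuffle True
```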

### Chinese Dataset Performance Comparison

The table below presents the performance of various state-of-the-art models on the Chinese dataset, sorted by Overall score from highest to lowest. **Bold** values indicate the best results, and ***italic bold*** values indicate the second-best results.

| Models | Overall | Multi-Level Filter | Composition | Reasoning |
| ----------------------------- | ------- | ------------------ | ----------- | ----------- |
| `Gemini-2.5-pro-preview` | 87.33 | **97.92** | **94.20** | 70.18 |
| `DeepSeek-R1-0528` | 86.68 | 96.17 | ***93.69*** | **72.79** |
| `Claude-3.7-sonnet` | 85.74 | ***97.62*** | 90.59 | ***70.39*** |
| `Gemini-2.5-flash-preview` | 81.85 | 93.92 | 88.54 | 63.86 |
| `Qwen3-235B-A22B` | 80.76 | 94.92 | 88.18 | 60.23 |
| `Qwen3-30B-A3B` | 80.45 | 95.87 | 86.11 | 61.42 |
| `Deepseek-V3(241226)` | 77.54 | 94.58 | 81.00 | 60.32 |
| `Qwen3-235B-A22B w/o think` | 75.20 | 91.50 | 79.67 | 57.14 |
| `Qwen-2.5-MAX` | 74.43 | 93.25 | 78.28 | 55.37 |
| `Qwen3-30B-A3B w/o think` | 71.05 | 91.08 | 72.22 | 54.76 |
| `Gemma3_27b` | 70.24 | 73.09 | 92.21 | 50.24 |
| `Qwen3_32B` | 69.69 | 89.75 | 75.74 | 46.70 |
| `Hunyuan-80B-A13B` | 68.84 | 93.50 | 68.94 | 50.64 |
| `GPT4.1` | 66.26 | 89.75 | 71.95 | 41.27 |
| `Qwen2.5_72B` | 64.87 | 92.92 | 64.99 | 44.14 |
| `GPT4o-1120` | 64.58 | 88.50 | 70.21 | 39.35 |
| `Gemma3_12b` | 64.10 | 60.20 | 89.92 | 50.52 |
| `Qwen3_8B` | 63.04 | 86.87 | 67.49 | 39.47 |
| `Qwen3_32B w/o think` | 60.73 | 59.53 | 89.50 | 41.30 |
| `Qwen2.5_32B` | 58.76 | 92.00 | 51.33 | 44.60 |
| `Qwen2.5_14B` | 55.94 | 89.42 | 52.69 | 35.87 |
| `Qwen3_8B w/o think` | 50.02 | 47.83 | 83.96 | 28.17 |
| `Qwen2.5_7B` | 49.31 | 83.29 | 47.47 | 26.92 |
| `Gemma3_4b` | 47.67 | 37.41 | 78.33 | 39.26 |

## Installation

### Requirements

- Python 3.7+
- CUDA (if using GPU inference)

### Installation Steps

1. Clone the repository

```bash
git clone https://github.com/Alipay-Med/PRGB.git
cd PRGB
```
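
Optionally, create and activate a virtual environment before installing dependencies (standard Python tooling, not PRGB-specific):

```bash
python -m venv .venv
source .venv/bin/activate
```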

2. Install dependencies

```bash
pip install -r requirements.txt
```

3. Verify installation

```bash
python test_imports.py
```

## Usage

### Verify Imports

Before running evaluations, it is recommended to verify that all imports work correctly:

```bash
python test_imports.py
```

### Three Ways to Run Evaluation

#### Method 1: Using Makefile (Recommended)

If you only need to change the model path, using the Makefile is recommended.

```bash
# View all available commands
make help

# Set environment variables and run evaluation
export EVAL_MODEL_PATH=/path/to/your/model
make eval

# Or set the environment variable inline
EVAL_MODEL_PATH=/path/to/your/model make eval

# Chinese evaluation with inference mode (using data/zh.jsonl)
EVAL_MODEL_PATH=/path/to/your/model make eval-ch-infer

# English evaluation with inference mode (using data/en.jsonl)
EVAL_MODEL_PATH=/path/to/your/model make eval-en-infer

# Test evaluation (no real model needed)
make eval-test

# Export error samples (requires the evaluation result file path)
EVAL_RESULT_FILE=results/model_eval_result.jsonl make export-errors
```
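
Under the hood, `make eval` is a convenience wrapper; assuming the Makefile forwards `EVAL_MODEL_PATH` to `eval.py` (a sketch, not the exact target definition), it is roughly equivalent to:

```bash
python eval.py \
    --model-name "Qwen3" \
    --model-path "$EVAL_MODEL_PATH" \
    --data-path "data/zh.jsonl" \
    --output-path "./results"
```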

#### Method 2: Using Shell Script

If you need to modify other parameters, using the shell script is recommended.

```bash
# Run with default parameters (only the model path is required)
./run_eval.sh /path/to/your/model

# Pass all parameters
# (positional order inferred from this example: model path, data file, model name, output dir)
./run_eval.sh /path/to/your/model data/zh.jsonl Qwen3_infer ./results
```

#### Method 3: Using Python Command
```bash |
|
|
# Basic usage |
|
|
python eval.py \ |
|
|
--model-name "Qwen3" \ |
|
|
--model-path "/path/to/your/model" \ |
|
|
--data-path "tests/test.jsonl" \ |
|
|
--output-path "./results" |
|
|
|
|
|
# Complete parameter example |
|
|
python eval.py \ |
|
|
--model-name "Qwen3" \ |
|
|
--model-path "/path/to/your/model" \ |
|
|
--data-path "your_data.jsonl" \ |
|
|
--output-path "./results" \ |
|
|
--batch-size 16 \ |
|
|
--temperature 0.7 \ |
|
|
--noise-config '{"noise_doc_level1":4,"noise_doc_level2":4,"noise_doc_level3":1}' \ |
|
|
--custom_config "config/default_prompt_config.json" \ |
|
|
--shuffle True \ |
|
|
--num-iterations 3 \ |
|
|
--verbose |
|
|
``` |
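
To evaluate both language splits with the same settings, you can loop over the data files referenced earlier in this README (same flags as above; the model name and paths are placeholders):

```bash
# Run the evaluation once per language split
for data in data/zh.jsonl data/en.jsonl; do
    python eval.py \
        --model-name "Qwen3" \
        --model-path "/path/to/your/model" \
        --data-path "$data" \
        --output-path "./results"
done
```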