---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: domain
    dtype: string
  - name: question_type
    dtype: string
  - name: dynamism
    dtype: string
  - name: question
    dtype: string
  - name: reference_answer
    dtype: string
  - name: sources
    list:
    - name: filename
      dtype: string
    - name: id
      dtype: string
    - name: pages
      sequence: int64
  splits:
  - name: train
    num_bytes: 35785
    num_examples: 100
  download_size: 21165
  dataset_size: 35785
---


# EntRAG Benchmark: Question Answering Dataset

## Description

EntRAG is a specialized benchmark dataset designed for evaluating Retrieval-Augmented Generation (RAG) systems in enterprise contexts.
The dataset addresses the unique challenges of business environments where information comes from heterogeneous sources including structured databases, documents, and dynamic mock APIs.

The dataset comprises 100 manually constructed question-answer pairs across six enterprise domains: Finance, Technical Documentation, Environment, Legal and Compliance, Human Resources, and Marketing and Sales.
Questions are designed to evaluate both static document retrieval and dynamic API integration scenarios, reflecting realistic enterprise information needs.

## Dataset Structure

### Columns

* `id`: Unique identifier for each question-answer pair
* `domain`: The enterprise domain the question pertains to (e.g., "Finance", "Technical Documentation", "Legal and Compliance")
* `question_type`: The category of reasoning required (e.g., "comparison", "aggregation", "multi-hop reasoning")
* `dynamism`: Indicates whether the answer content changes over time ("static" for timeless information, "dynamic" for evolving content)
* `question`: A natural language question that requires information retrieval and reasoning to answer accurately
* `reference_answer`: The correct, comprehensive answer that serves as the ground truth for evaluation
* `sources`: Array of source documents that contain the information needed to answer the question, each with:
  * `filename`: Name of the source document or API endpoint
  * `id`: Unique identifier for the source
  * `pages`: Array of specific page numbers where relevant information is found (empty for API sources)

## Use Cases

This dataset is particularly valuable for:

* **RAG System Evaluation**: Testing RAG systems with realistic business scenarios and multi-source information integration
* **Hybrid System Assessment**: Evaluating systems that combine document retrieval with API-based data access
* **Domain-Specific Analysis**: Understanding RAG performance across different business domains
* **Dynamic Information Handling**: Assessing systems that work with both static documents and real-time data sources
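For domain-specific or dynamism-specific analysis, the loaded split can be partitioned by field. The sketch below works on plain dictionaries following the schema above (the sample records are invented for illustration); the same filters apply directly to a loaded `datasets` split via its `filter` method:

```python
from collections import Counter

# Invented sample records; real records come from load_dataset("fkapsahili/EntRAG").
qa_pairs = [
    {"domain": "Finance", "dynamism": "dynamic"},
    {"domain": "Finance", "dynamism": "static"},
    {"domain": "Environment", "dynamism": "dynamic"},
]

# Split static (document-based) from dynamic (API-backed) questions.
static_qs = [q for q in qa_pairs if q["dynamism"] == "static"]
dynamic_qs = [q for q in qa_pairs if q["dynamism"] == "dynamic"]

# Count questions per domain for per-domain performance breakdowns.
per_domain = Counter(q["domain"] for q in qa_pairs)
```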

## Accessing the Dataset

You can load this dataset via the Hugging Face Datasets library using the following Python code:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("fkapsahili/EntRAG")

# Access the data
for example in dataset['train']:
    print(f"Domain: {example['domain']}")
    print(f"Question Type: {example['question_type']}")
    print(f"Dynamism: {example['dynamism']}")
    print(f"Question: {example['question']}")
    print(f"Answer: {example['reference_answer']}")
    print(f"Sources: {len(example['sources'])} documents")
    print("---")
```

### Alternative Loading Methods

For direct integration with evaluation frameworks:

```python
import json
from datasets import load_dataset

# Load and convert to list format
dataset = load_dataset("fkapsahili/EntRAG", split="train")
qa_pairs = [dict(item) for item in dataset]
```

## Integration with RAG Frameworks

This dataset supports evaluation of various RAG architectures and can be integrated with existing evaluation pipelines.
The format is compatible with standard RAG evaluation frameworks and supports both document-based and API-integrated systems.

## Dataset Statistics

* **Total QA Pairs**: 100 manually constructed questions
* **Domains**: 6 domains (Finance, Technical Documentation, Environment, Legal and Compliance, Human Resources, Marketing and Sales)
* **Question Types**: 7 reasoning patterns (simple queries, comparison, aggregation, multi-hop reasoning, simple with conditions, factual contradiction, post-processing)
* **Dynamism Distribution**: 
  * Static questions: 28% (document-based retrieval)
  * Dynamic questions: 72% (requiring real-time API integration)
* **Source Documents**: 9,500+ pages from authentic enterprise documents across 10 major companies
* **Company Sectors**: Technology, healthcare, e-commerce, retail, automotive, and energy
* **Mock APIs**: 4 domain-specific APIs (finance, SEC filings, HR statistics, web search)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{entrag_2025,
  title={EntRAG: Enterprise RAG Benchmark},
  author={Fabio Kapsahili},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/fkapsahili/EntRAG}
}
```

## License

This dataset is released under the Creative Commons Attribution 4.0 (CC BY 4.0) license. See the LICENSE file for full details.

## Additional Resources

* **Evaluation Code**: https://github.com/fkapsahili/EntRAG

For questions or issues, please open an issue in the associated GitHub repository.