---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: domain
    dtype: string
  - name: question_type
    dtype: string
  - name: dynamism
    dtype: string
  - name: question
    dtype: string
  - name: reference_answer
    dtype: string
  - name: sources
    list:
    - name: filename
      dtype: string
    - name: id
      dtype: string
    - name: pages
      sequence: int64
  splits:
  - name: train
    num_bytes: 35785
    num_examples: 100
  download_size: 21165
  dataset_size: 35785
---
# EntRAG Benchmark: Question Answering Dataset

## Description
EntRAG is a specialized benchmark dataset designed for evaluating Retrieval-Augmented Generation (RAG) systems in enterprise contexts. The dataset addresses the unique challenges of business environments where information comes from heterogeneous sources including structured databases, documents, and dynamic mock APIs.
The dataset comprises 100 manually constructed question-answer pairs across six enterprise domains: Finance, Technical Documentation, Environment, Legal and Compliance, Human Resources, and Marketing and Sales. Questions are designed to evaluate both static document retrieval and dynamic API integration scenarios, reflecting realistic enterprise information needs.
## Dataset Structure

### Columns
- `id`: Unique identifier for each question-answer pair
- `domain`: The subject area the question pertains to (e.g., "Technical Documentation", "Finance")
- `question_type`: The category of reasoning required (e.g., "comparison", "factual", "analytical", "procedural")
- `dynamism`: Whether the answer content changes over time ("static" for timeless information, "dynamic" for evolving content)
- `question`: A natural language question that requires information retrieval and reasoning to answer accurately
- `reference_answer`: The correct, comprehensive answer that serves as the ground truth for evaluation
- `sources`: Array of source documents that contain the information needed to answer the question, each with:
  - `id`: Unique identifier for the source
  - `filename`: Name of the source document or API endpoint
  - `pages`: Array of specific page numbers where relevant information is found (empty for API sources)
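A single record therefore looks roughly like the following sketch. All field values here are invented for illustration and do not come from the actual dataset:

```python
# Illustrative record matching the column schema above.
# Every value is hypothetical; only the field names are real.
example = {
    "id": "fin-001",
    "domain": "Finance",
    "question_type": "comparison",
    "dynamism": "static",
    "question": "Which of the two companies reported higher annual revenue?",
    "reference_answer": "Company A reported higher annual revenue.",
    "sources": [
        {"id": "doc-12", "filename": "company_a_annual_report.pdf", "pages": [4, 7]},
        {"id": "api-finance", "filename": "finance_api", "pages": []},  # API source: no pages
    ],
}

# Basic sanity check against the schema
expected_keys = {"id", "domain", "question_type", "dynamism",
                 "question", "reference_answer", "sources"}
assert set(example) == expected_keys
```

Note that document-backed sources carry page numbers, while API-backed sources leave `pages` empty.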
## Use Cases
This dataset is particularly valuable for:
- RAG System Evaluation: Testing RAG systems with realistic business scenarios and multi-source information integration
- Hybrid System Assessment: Evaluating systems that combine document retrieval with API-based data access
- Domain-Specific Analysis: Understanding RAG performance across different business domains
- Dynamic Information Handling: Assessing systems that work with both static documents and real-time data sources
## Accessing the Dataset
You can load this dataset via the Hugging Face Datasets library using the following Python code:
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("fkapsahili/EntRAG")

# Access the data
for example in dataset["train"]:
    print(f"Domain: {example['domain']}")
    print(f"Question Type: {example['question_type']}")
    print(f"Dynamism: {example['dynamism']}")
    print(f"Question: {example['question']}")
    print(f"Answer: {example['reference_answer']}")
    print(f"Sources: {len(example['sources'])} documents")
    print("---")
```
### Alternative Loading Methods
For direct integration with evaluation frameworks:
```python
from datasets import load_dataset

# Load and convert to a list of dictionaries
dataset = load_dataset("fkapsahili/EntRAG", split="train")
qa_pairs = [dict(item) for item in dataset]
```
## Integration with RAG Frameworks
This dataset supports evaluation of various RAG architectures and can be integrated with existing evaluation pipelines. The format is compatible with standard RAG evaluation frameworks and supports both document-based and API-integrated systems.
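A minimal evaluation harness over the `question`/`reference_answer` fields might look like the sketch below. The `retrieve` and `generate` functions are hypothetical placeholders standing in for a real RAG pipeline, and exact-match scoring is only one of many possible metrics:

```python
# Minimal evaluation sketch. `retrieve` and `generate` are placeholder
# stubs for a real RAG pipeline; swap in your own implementations.
def retrieve(question: str) -> list[str]:
    # Placeholder: return documents relevant to the question.
    return ["stub document"]

def generate(question: str, documents: list[str]) -> str:
    # Placeholder: produce an answer from the retrieved context.
    return "stub answer"

def exact_match(prediction: str, reference: str) -> bool:
    # Case- and whitespace-insensitive exact-match comparison.
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(qa_pairs: list[dict]) -> float:
    """Score records carrying 'question' and 'reference_answer' fields."""
    correct = sum(
        exact_match(generate(qa["question"], retrieve(qa["question"])),
                    qa["reference_answer"])
        for qa in qa_pairs
    )
    return correct / len(qa_pairs) if qa_pairs else 0.0

# Toy record showing the call shape
toy = [{"question": "Q?", "reference_answer": "stub answer"}]
score = evaluate(toy)
```

In practice you would pass `qa_pairs` loaded from the `train` split and replace exact match with a semantic or LLM-based judge, since the reference answers are free-form text.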
## Dataset Statistics
- Total QA Pairs: 100 manually constructed questions
- Domains: 6 domains (Finance, Technical Documentation, Environment, Legal and Compliance, Human Resources, Marketing and Sales)
- Question Types: 7 reasoning patterns (simple queries, comparison, aggregation, multi-hop reasoning, simple with conditions, factual contradiction, post-processing)
- Dynamism Distribution:
  - Static questions: 28% (document-based retrieval)
  - Dynamic questions: 72% (requiring real-time API integration)
- Source Documents: 9,500+ pages from authentic enterprise documents across 10 major companies
- Company Sectors: Technology, healthcare, e-commerce, retail, automotive, and energy
- Mock APIs: 4 domain-specific APIs (finance, SEC filings, HR statistics, web search)
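Distributions like the dynamism split can be recomputed directly from the records. A minimal sketch over in-memory examples (field name from the schema above; the toy values are invented, in practice iterate over the loaded `train` split):

```python
from collections import Counter

# Toy records; replace with the loaded "train" split in practice.
records = [
    {"dynamism": "static"},
    {"dynamism": "dynamic"},
    {"dynamism": "dynamic"},
]

# Tally the dynamism labels and report each as a share of the total.
counts = Counter(r["dynamism"] for r in records)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n / total:.0%}")
```

The same pattern works for the `domain` and `question_type` columns.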
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{entrag_2025,
  title={EntRAG: Enterprise RAG Benchmark},
  author={Fabio Kapsahili},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/fkapsahili/EntRAG}
}
```
## License
This dataset is released under the Creative Commons Attribution 4.0 (CC BY 4.0) license. See the LICENSE file for full details.
## Additional Resources
- Evaluation Code: https://github.com/fkapsahili/EntRAG
For questions or issues, please open an issue in the associated GitHub repository.