Upload 9 files

- CHANGELOG.md (+14 -6)
- README_space.md (+212 -0)
- data/eval_runs.csv (+0 -0)
- data/rag_corpus_documents.csv (+0 -0)
- data/rag_retrieval_events.csv (+0 -0)
- data/scenarios.csv (+63 -0)
- docs/data_dictionary.csv (+100 -0)
CHANGELOG.md
CHANGED

# Changelog

- Initial release of **RAG QA Logs & Corpus — Multi-Table Synthetic RAG Benchmark**.
- Added **6 linked CSV tables**:
  - Document corpus (`rag_corpus_documents.csv`)
  - Chunk index (`rag_corpus_chunks.csv`)
  - Retrieval events (`rag_retrieval_events.csv`)
  - QA evaluation runs (`eval_runs.csv`)
  - Scenario templates (`scenarios.csv`)
  - Data dictionary (`data_dictionary.csv`)
- Included core supervision targets:
  - `is_correct`, `hallucination_flag`, `faithfulness_label`
- Included key RAG telemetry:
  - retrieval metrics (rank/score/relevance + recall/mrr signals)
  - latency (`latency_ms_retrieval`, `latency_ms_generation`, `total_latency_ms`)
  - token usage (`prompt_tokens`, `answer_tokens`)
  - approximate cost (`total_cost_usd`)
- Stable join keys across tables:
  - `doc_id`, `chunk_id`, `example_id`, `run_id`, `scenario_id`, `query_id`
- Privacy note:
  - all records are **fully synthetic** (no real users/customers/company data).
README_space.md
ADDED
---
license: cc-by-4.0
language:
- en
pretty_name: RAG QA Logs & Corpus
task_categories:
- question-answering
- tabular-classification
- tabular-regression
tags:
- rag
- retrieval-augmented-generation
- evaluation
- hallucination
- meta-modeling
- risk-scoring
- logs
- telemetry
- tabular-data
- multi-table
- machine-learning
- open-dataset
- synthetic
- simulated
size_categories:
- 100K<n<1M
---

# 🧠 RAG QA Logs & Corpus
### Multi-Table Synthetic RAG Telemetry for Quality, Hallucinations, Latency, and Cost

A **production-style, privacy-safe** synthetic dataset that mimics telemetry exported from a real **Retrieval-Augmented Generation (RAG)** system — from **corpus → chunk index → ranked retrieval lists → evaluation outcomes**.

It is designed as an **analysis-ready multi-table benchmark** for:
- **RAG quality analysis** (correctness, faithfulness, hallucination)
- **retrieval strategy comparisons** (dense / bm25 / hybrid / rerank variants)
- **risk & meta-modeling** (predicting failures/hallucinations from telemetry)
- **latency & cost trade-offs** (ms, tokens, USD)
- dashboards and teaching materials

✅ All records are **fully synthetic** — no real users, customers, patients, or company data.

---

## 🔐 Privacy & Synthetic Data

This dataset is **fully synthetic**:
- No real users, customers, patients, or organizations are represented.
- No personally identifiable information (PII) is included.
- All IDs, queries, documents, and logs were programmatically generated to resemble realistic RAG telemetry while preserving privacy.

---

## 📌 Quick Overview

- **Total size:** **103,255 rows** across **6 linked CSV tables**
- **Type:** multi-table tabular logs + short text fields (queries, answers, chunk text)
- **Main targets (labels):** `is_correct`, `hallucination_flag`, `faithfulness_label`
- **Stable join keys:** `doc_id`, `chunk_id`, `example_id`, `run_id`, `scenario_id`, `query_id`
- **Splits:** `train`, `val`, `test`

**Domains covered (documents):**
`support_faq`, `hr_policies`, `product_docs`, `developer_docs`,
`policies`, `financial_reports`, `medical_guides`, `research_papers`,
`customer_success`, `data_platform_docs`, `mlops_docs`, `marketing_analytics`

**Task types (evaluation runs):**
`factoid`, `explanation`, `summarization`, `multi_hop`,
`table_qa`, `temporal_reasoning`, `comparison`, `instruction_following`

**Retrieval strategies:**
`dense`, `bm25`, `hybrid`, `dense_then_rerank`, `bm25_then_rerank`

> Note: `run_id` is a per-example request/trace identifier (not a single run spanning many examples).

---

## 📂 Files

This dataset contains the following CSV files:

- `rag_corpus_documents.csv` — document-level corpus metadata
- `rag_corpus_chunks.csv` — chunk-level index + chunk text
- `rag_retrieval_events.csv` — per-chunk retrieval telemetry (rank, score, relevance)
- `eval_runs.csv` — QA evaluation runs (labels, metrics, latency, cost, configs)
- `scenarios.csv` — scenario templates and use cases
- `data_dictionary.csv` — column-level schema documentation across all tables

All files use **snake_case**, are **tabular and ML-ready**, and join cleanly via stable IDs.

---

## 📊 Exact Table Summary

| File | Rows | Columns | Granularity |
|---|---:|---:|---|
| `rag_corpus_documents.csv` | 658 | 19 | One row per document |
| `rag_corpus_chunks.csv` | 5,237 | 6 | One row per chunk |
| `rag_retrieval_events.csv` | 93,375 | 12 | One row per retrieved chunk per example |
| `eval_runs.csv` | 3,824 | 49 | One row per QA evaluation example |
| `scenarios.csv` | 62 | 13 | One row per scenario template |
| `data_dictionary.csv` | 99 | 5 | One row per column definition |

---

## 🔗 How to Join (Schema Map)

- **Documents → Chunks**
  `rag_corpus_documents.doc_id = rag_corpus_chunks.doc_id`

- **Eval Runs → Retrieval Events**
  `eval_runs.example_id = rag_retrieval_events.example_id`
  (`run_id`, `scenario_id`, `query_id`, and `split` are also present for consistency)

- **Eval Runs → Scenarios**
  `eval_runs.scenario_id = scenarios.scenario_id`
  (and `query_id`)

- **Retrieval Events → Chunks**
  `rag_retrieval_events.chunk_id = rag_corpus_chunks.chunk_id`
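Before analysis, it can be worth sanity-checking these foreign-key relationships. A minimal sketch with pandas, using a few toy rows that mimic the ID patterns (the values are invented for illustration; real data comes from the CSVs):

```python
import pandas as pd

# Toy rows mimicking the schema (invented values, for illustration only).
chunks = pd.DataFrame({
    "chunk_id": ["C00001", "C00002", "C00003"],
    "doc_id":   ["DOC0001", "DOC0001", "DOC0002"],
})
events = pd.DataFrame({
    "example_id": ["QA000001", "QA000001", "QA000002"],
    "chunk_id":   ["C00001", "C00003", "C00002"],
    "rank":       [1, 2, 1],
})

# Every chunk_id referenced by a retrieval event should exist in the chunk index.
orphans = events.loc[~events["chunk_id"].isin(chunks["chunk_id"]), "chunk_id"]
assert orphans.empty, f"dangling chunk_ids: {orphans.tolist()}"
print("join-key check passed")
```

The same `isin` pattern applies to the other key pairs (`doc_id`, `example_id`, `scenario_id`).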

---

## 🚀 Loading Examples

### Option A — Pandas (local files)
```python
import pandas as pd

docs = pd.read_csv("rag_corpus_documents.csv")
chunks = pd.read_csv("rag_corpus_chunks.csv")
events = pd.read_csv("rag_retrieval_events.csv")
runs = pd.read_csv("eval_runs.csv")
scenarios = pd.read_csv("scenarios.csv")

# Example join: runs → events → chunks → docs
df = (
    runs.merge(events, on="example_id", how="left", suffixes=("", "_evt"))
        .merge(chunks[["chunk_id", "doc_id", "domain", "chunk_index", "estimated_tokens"]],
               on="chunk_id", how="left")
        .merge(docs[["doc_id", "domain", "title", "source_type", "pii_risk_level", "security_tier"]],
               on="doc_id", how="left", suffixes=("", "_doc"))
)

df.head()
```

### Option B — Hugging Face Datasets (explicit data_files)
```python
from datasets import load_dataset

# Replace with your dataset repo id on the Hub, e.g. "tarekmasryo/<repo_name>"
repo_id = "<your-namespace>/<your-dataset-repo>"

ds = load_dataset(
    repo_id,
    data_files={
        "documents": "rag_corpus_documents.csv",
        "chunks": "rag_corpus_chunks.csv",
        "retrieval_events": "rag_retrieval_events.csv",
        "eval_runs": "eval_runs.csv",
        "scenarios": "scenarios.csv",
        "data_dictionary": "data_dictionary.csv",
    },
)
ds["eval_runs"].to_pandas().head()
```
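Once loaded, the per-chunk rows in `rag_retrieval_events.csv` are enough to recompute list-level retrieval metrics such as hit@k and MRR yourself. A minimal sketch, using toy rows in place of the real file (column names follow the data dictionary; values are invented):

```python
import pandas as pd

# Toy retrieval events: one row per retrieved chunk per example (invented values).
events = pd.DataFrame({
    "example_id":  ["QA000001"] * 3 + ["QA000002"] * 3,
    "rank":        [1, 2, 3, 1, 2, 3],
    "is_relevant": [0, 1, 0, 1, 0, 1],
})

def per_example_metrics(g: pd.DataFrame) -> pd.Series:
    """Compute hit@3 and MRR for one example's ranked list."""
    g = g.sort_values("rank")
    hit_ranks = g.loc[g["is_relevant"] == 1, "rank"]
    return pd.Series({
        "hit_at_3": float((hit_ranks <= 3).any()),
        "mrr":      0.0 if hit_ranks.empty else 1.0 / hit_ranks.iloc[0],
    })

metrics = events.groupby("example_id").apply(per_example_metrics)
print(metrics)
```

Averaging `metrics` over examples (optionally grouped by `retrieval_strategy` or `difficulty`) gives the aggregate recall/MRR signals mentioned above.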

---

## 🎯 Targets & Typical Tasks

Primary learning targets:
- **`is_correct`** — classification (correct vs. incorrect)
- **`hallucination_flag`** — classification (hallucinated vs. not)
- **`faithfulness_label`** — multi-class / categorical faithfulness

Paired with telemetry (retrieval ranks/scores, recall/MRR, latency, tokens, configs), this enables:
- **meta-models** that predict answer failure/hallucination risk
- **risk scoring** for guardrails (block / escalate / rerun / abstain)
- **policy design** for fast vs. careful modes based on cost/latency/quality trade-offs
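As a starting point before any learned meta-model, a simple aggregate already acts as a risk score: the observed hallucination rate per retrieval strategy. A sketch with toy rows standing in for `eval_runs.csv` (values invented; real rates will differ):

```python
import pandas as pd

# Toy eval-run rows (invented values, illustration only).
runs = pd.DataFrame({
    "retrieval_strategy": ["dense", "dense", "bm25", "bm25", "hybrid", "hybrid"],
    "hallucination_flag": [0, 1, 1, 1, 0, 0],
})

# Hallucination rate per strategy: a trivial risk score a guardrail could act on.
risk = runs.groupby("retrieval_strategy")["hallucination_flag"].mean()
high_risk = risk[risk > 0.5].index.tolist()
print(risk)
print("escalate or rerun for:", high_risk)
```

The same pattern extends to richer features (rank of first relevant chunk, retrieval scores, latency) as inputs to an actual classifier.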

---

## 📘 Data Dictionary Notes

`data_dictionary.csv` provides column-level documentation across all tables:
`table_name`, `column_name`, `dtype`, `description`, `allowed_values`.

**Important naming note:** if `table_name` uses logical names (e.g., `rag_qa_eval_runs`, `rag_qa_scenarios`) while your filenames are `eval_runs.csv` and `scenarios.csv`, treat `table_name` as a logical group name. For strict 1:1 naming, either rename the CSVs or update the `table_name` values.
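In code, a small explicit mapping avoids renaming anything. A hypothetical sketch (the dictionary below is an assumption based on the naming note, not a shipped artifact):

```python
# Hypothetical mapping from logical table_name values (as they may appear in
# data_dictionary.csv) to the shipped CSV filenames; adjust to your copy.
LOGICAL_TO_FILE = {
    "rag_corpus_documents": "rag_corpus_documents.csv",
    "rag_corpus_chunks": "rag_corpus_chunks.csv",
    "rag_retrieval_events": "rag_retrieval_events.csv",
    "rag_qa_eval_runs": "eval_runs.csv",
    "rag_qa_scenarios": "scenarios.csv",
}

def file_for(table_name: str) -> str:
    """Resolve a logical table name to its CSV filename."""
    return LOGICAL_TO_FILE.get(table_name, f"{table_name}.csv")

print(file_for("rag_qa_eval_runs"))
```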

---

## ⚠️ Limitations

- This is a **synthetic benchmark**, not production data.
- `rag_corpus_chunks.chunk_text` may be **more templated / less diverse** than real-world corpora.
- Intended for research, teaching, benchmarking, and prototyping.
- Not suitable for high-stakes decisions (clinical, legal, financial).

---

## 📜 License

**CC BY 4.0 (Attribution Required)**

---

## 📚 Citation

If you use this dataset in notebooks, papers, demos, or teaching material, please cite the dataset page on the Hub and reference:

> **“RAG QA Logs & Corpus — Multi-Table Synthetic RAG Benchmark”** by **Tarek Masryo**
data/eval_runs.csv
ADDED
The diff for this file is too large to render. See raw diff.

data/rag_corpus_documents.csv
CHANGED
The diff for this file is too large to render. See raw diff.

data/rag_retrieval_events.csv
CHANGED
The diff for this file is too large to render. See raw diff.
data/scenarios.csv
ADDED
scenario_id,query_id,domain,primary_doc_id,query,gold_answer,scenario_type,has_answer_in_corpus,n_eval_examples,is_used_in_eval,split,difficulty_level,use_case
SC0001,Q0004,developer_docs,DOC0159,How can I test the API in a local development environment without affecting production data?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,easy,rag_evaluation
SC0002,Q0007,developer_docs,DOC0410,How do I authenticate using API keys?,API keys are passed as a bearer token in the Authorization header.,standard_qa,1,77,1,train,easy,rag_evaluation
SC0003,Q0007,developer_docs,DOC0158,How do I authenticate using API keys?,API keys are passed as a bearer token in the Authorization header.,standard_qa,1,71,1,train,hard,rag_evaluation
SC0004,Q0007,developer_docs,DOC0331,How do I authenticate using API keys?,API keys are passed as a bearer token in the Authorization header.,standard_qa,1,229,1,train,medium,rag_evaluation
SC0005,Q0017,developer_docs,DOC0640,What is the rate limit for write requests?,Write requests are limited per minute and per project ID.,standard_qa,1,82,1,train,easy,rag_evaluation
SC0006,Q0017,developer_docs,DOC0406,What is the rate limit for write requests?,Write requests are limited per minute and per project ID.,standard_qa,1,58,1,train,hard,rag_evaluation
SC0007,Q0017,developer_docs,DOC0406,What is the rate limit for write requests?,Write requests are limited per minute and per project ID.,standard_qa,1,217,1,train,medium,rag_evaluation
SC0008,Q0005,financial_reports,DOC0240,"How did operating margin change compared to the previous year, and what were the main drivers?",NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,test,hard,rag_evaluation
SC0009,Q0006,financial_reports,DOC0217,How did operating margin change year over year?,Operating margin improved by approximately two percentage points.,standard_qa,1,64,1,test,easy,rag_evaluation
SC0010,Q0006,financial_reports,DOC0171,How did operating margin change year over year?,Operating margin improved by approximately two percentage points.,standard_qa,1,67,1,test,hard,rag_evaluation
SC0011,Q0006,financial_reports,DOC0240,How did operating margin change year over year?,Operating margin improved by approximately two percentage points.,standard_qa,1,228,1,test,medium,rag_evaluation
SC0012,Q0024,financial_reports,DOC0597,Which segment contributed most to revenue growth?,The enterprise subscription segment contributed the largest share of growth.,standard_qa,1,75,1,train,easy,rag_evaluation
SC0013,Q0024,financial_reports,DOC0502,Which segment contributed most to revenue growth?,The enterprise subscription segment contributed the largest share of growth.,standard_qa,1,73,1,train,hard,rag_evaluation
SC0014,Q0024,financial_reports,DOC0381,Which segment contributed most to revenue growth?,The enterprise subscription segment contributed the largest share of growth.,standard_qa,1,261,1,train,medium,rag_evaluation
SC0015,Q0012,hr_policies,DOC0020,How many days of annual leave are employees entitled to?,Employees receive a standard allocation of annual leave as defined in their contract.,standard_qa,1,54,1,train,easy,rag_evaluation
SC0016,Q0012,hr_policies,DOC0139,How many days of annual leave are employees entitled to?,Employees receive a standard allocation of annual leave as defined in their contract.,standard_qa,1,60,1,train,hard,rag_evaluation
SC0017,Q0012,hr_policies,DOC0119,How many days of annual leave are employees entitled to?,Employees receive a standard allocation of annual leave as defined in their contract.,standard_qa,1,180,1,train,medium,rag_evaluation
SC0018,Q0013,hr_policies,DOC0131,How many days of annual leave are granted to new employees?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,easy,rag_evaluation
SC0019,Q0019,hr_policies,DOC0118,What is the standard probation period for new hires?,The typical probation period is three to six months depending on role.,standard_qa,1,68,1,test,easy,rag_evaluation
SC0020,Q0019,hr_policies,DOC0228,What is the standard probation period for new hires?,The typical probation period is three to six months depending on role.,standard_qa,1,49,1,test,hard,rag_evaluation
SC0021,Q0019,hr_policies,DOC0033,What is the standard probation period for new hires?,The typical probation period is three to six months depending on role.,standard_qa,1,184,1,test,medium,rag_evaluation
SC0022,Q0018,medical_guides,DOC0485,What is the recommended dosage for adult patients?,Dosage is weight-adjusted and must follow local clinical guidelines.,standard_qa,1,54,1,val,easy,rag_evaluation
SC0023,Q0018,medical_guides,DOC0285,What is the recommended dosage for adult patients?,Dosage is weight-adjusted and must follow local clinical guidelines.,standard_qa,1,37,1,val,hard,rag_evaluation
SC0024,Q0018,medical_guides,DOC0485,What is the recommended dosage for adult patients?,Dosage is weight-adjusted and must follow local clinical guidelines.,standard_qa,1,146,1,val,medium,rag_evaluation
SC0025,Q0025,medical_guides,DOC0474,Which symptoms indicate that a patient with chest discomfort should be escalated for urgent evaluation?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,medium,rag_evaluation
SC0026,Q0026,medical_guides,DOC0259,Which symptoms indicate the need for immediate escalation?,"Red-flag symptoms include chest pain, severe shortness of breath, and confusion.",standard_qa,1,44,1,train,easy,rag_evaluation
SC0027,Q0026,medical_guides,DOC0485,Which symptoms indicate the need for immediate escalation?,"Red-flag symptoms include chest pain, severe shortness of breath, and confusion.",standard_qa,1,31,1,train,hard,rag_evaluation
SC0028,Q0026,medical_guides,DOC0485,Which symptoms indicate the need for immediate escalation?,"Red-flag symptoms include chest pain, severe shortness of breath, and confusion.",standard_qa,1,156,1,train,medium,rag_evaluation
SC0029,Q0010,policies,DOC0156,How is user consent stored for analytics tracking?,User consent is stored as a hashed record with timestamps.,standard_qa,1,45,1,train,easy,rag_evaluation
SC0030,Q0010,policies,DOC0116,How is user consent stored for analytics tracking?,User consent is stored as a hashed record with timestamps.,standard_qa,1,28,1,train,hard,rag_evaluation
SC0031,Q0010,policies,DOC0156,How is user consent stored for analytics tracking?,User consent is stored as a hashed record with timestamps.,standard_qa,1,104,1,train,medium,rag_evaluation
SC0032,Q0016,policies,DOC0156,What is the data retention policy for customer logs?,Customer logs are retained for 180 days by default.,standard_qa,1,29,1,train,easy,rag_evaluation
SC0033,Q0016,policies,DOC0116,What is the data retention policy for customer logs?,Customer logs are retained for 180 days by default.,standard_qa,1,22,1,train,hard,rag_evaluation
SC0034,Q0016,policies,DOC0156,What is the data retention policy for customer logs?,Customer logs are retained for 180 days by default.,standard_qa,1,115,1,train,medium,rag_evaluation
SC0035,Q0023,policies,DOC0449,Which employees are allowed to access production databases?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,test,hard,rag_evaluation
SC0036,Q0001,product_docs,DOC0326,Can I downgrade from Enterprise to Standard without losing historical data?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,hard,rag_evaluation
SC0037,Q0002,product_docs,DOC0179,Can I downgrade from enterprise to pro mid-cycle?,Downgrades are applied at the end of the current billing period.,standard_qa,1,28,1,train,easy,rag_evaluation
SC0038,Q0002,product_docs,DOC0276,Can I downgrade from enterprise to pro mid-cycle?,Downgrades are applied at the end of the current billing period.,standard_qa,1,17,1,train,hard,rag_evaluation
SC0039,Q0002,product_docs,DOC0113,Can I downgrade from enterprise to pro mid-cycle?,Downgrades are applied at the end of the current billing period.,standard_qa,1,78,1,train,medium,rag_evaluation
SC0040,Q0009,product_docs,DOC0336,How does the billing cycle work for annual plans?,Annual plans are billed once per year with proration on upgrades.,standard_qa,1,32,1,train,easy,rag_evaluation
SC0041,Q0009,product_docs,DOC0339,How does the billing cycle work for annual plans?,Annual plans are billed once per year with proration on upgrades.,standard_qa,1,24,1,train,hard,rag_evaluation
SC0042,Q0009,product_docs,DOC0336,How does the billing cycle work for annual plans?,Annual plans are billed once per year with proration on upgrades.,standard_qa,1,72,1,train,medium,rag_evaluation
SC0043,Q0014,product_docs,DOC0344,What are the limits for the enterprise tier?,Enterprise tier includes custom limits negotiated per contract.,standard_qa,1,15,1,train,easy,rag_evaluation
SC0044,Q0014,product_docs,DOC0326,What are the limits for the enterprise tier?,Enterprise tier includes custom limits negotiated per contract.,standard_qa,1,14,1,train,hard,rag_evaluation
SC0045,Q0014,product_docs,DOC0326,What are the limits for the enterprise tier?,Enterprise tier includes custom limits negotiated per contract.,standard_qa,1,79,1,train,medium,rag_evaluation
SC0046,Q0008,research_papers,DOC0369,How do the authors suggest extending this work in future research?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,medium,rag_evaluation
SC0047,Q0011,research_papers,DOC0497,How large is the evaluation dataset?,The evaluation dataset contains several tens of thousands of examples.,standard_qa,1,20,1,val,easy,rag_evaluation
SC0048,Q0011,research_papers,DOC0132,How large is the evaluation dataset?,The evaluation dataset contains several tens of thousands of examples.,standard_qa,1,24,1,val,hard,rag_evaluation
SC0049,Q0011,research_papers,DOC0085,How large is the evaluation dataset?,The evaluation dataset contains several tens of thousands of examples.,standard_qa,1,74,1,val,medium,rag_evaluation
SC0050,Q0015,research_papers,DOC0105,What baseline models were used in the study?,The study compares transformer-based baselines with classical models.,standard_qa,1,25,1,train,easy,rag_evaluation
SC0051,Q0015,research_papers,DOC0342,What baseline models were used in the study?,The study compares transformer-based baselines with classical models.,standard_qa,1,27,1,train,hard,rag_evaluation
SC0052,Q0015,research_papers,DOC0284,What baseline models were used in the study?,The study compares transformer-based baselines with classical models.,standard_qa,1,91,1,train,medium,rag_evaluation
SC0053,Q0003,support_faq,DOC0196,How can I reset my account password?,You can reset your password from the sign-in page using the forgot password link.,standard_qa,1,23,1,train,easy,rag_evaluation
SC0054,Q0003,support_faq,DOC0117,How can I reset my account password?,You can reset your password from the sign-in page using the forgot password link.,standard_qa,1,15,1,train,hard,rag_evaluation
SC0055,Q0003,support_faq,DOC0274,How can I reset my account password?,You can reset your password from the sign-in page using the forgot password link.,standard_qa,1,50,1,train,medium,rag_evaluation
SC0056,Q0020,support_faq,DOC0129,What should I do if the app keeps crashing?,Collect logs and contact support with device details and app version.,standard_qa,1,16,1,train,easy,rag_evaluation
SC0057,Q0020,support_faq,DOC0289,What should I do if the app keeps crashing?,Collect logs and contact support with device details and app version.,standard_qa,1,17,1,train,hard,rag_evaluation
SC0058,Q0020,support_faq,DOC0501,What should I do if the app keeps crashing?,Collect logs and contact support with device details and app version.,standard_qa,1,60,1,train,medium,rag_evaluation
SC0059,Q0021,support_faq,DOC0122,Where can I check the status of my support ticket?,Ticket status is available on the support portal under 'My Requests'.,standard_qa,1,20,1,train,easy,rag_evaluation
SC0060,Q0021,support_faq,DOC0299,Where can I check the status of my support ticket?,Ticket status is available on the support portal under 'My Requests'.,standard_qa,1,11,1,train,hard,rag_evaluation
SC0061,Q0021,support_faq,DOC0129,Where can I check the status of my support ticket?,Ticket status is available on the support portal under 'My Requests'.,standard_qa,1,60,1,train,medium,rag_evaluation
SC0062,Q0022,support_faq,DOC0462,Where can I download my monthly invoices for accounting purposes?,NO_ANSWER_IN_CORPUS,no_answer_probe,0,3,1,train,medium,rag_evaluation
docs/data_dictionary.csv
ADDED
| 1 |
+
table_name,column_name,dtype,description,allowed_values
|
| 2 |
+
rag_corpus_documents,doc_id,string,Unique identifier for each document.,Pattern: DOC[0-9]{4}
|
| 3 |
+
rag_corpus_documents,domain,string,"High level domain or category of the document (support, product_docs, medical_guides, etc.).","customer_success, data_platform_docs, developer_docs, financial_reports, hr_policies, marketing_analytics, medical_guides, mlops_docs, policies, product_docs, research_papers, support_faq"
|
| 4 |
+
rag_corpus_documents,title,text,Short title of the document.,
|
| 5 |
+
rag_corpus_documents,source_type,category,"Source type of the document (kb_article, runbook, policy_pdf, report, etc.).","internal_doc, notebook, pdf_manual, policy_file, spreadsheet, web_article, wiki_page"
|
| 6 |
+
rag_corpus_documents,language,string,"Language of the document, usually ISO language code (e.g., en).",en
|
| 7 |
+
rag_corpus_documents,n_sections,int,Number of logical sections inside the document.,
|
| 8 |
+
rag_corpus_documents,n_tokens,int,Estimated total token count for the full document.,
|
| 9 |
+
rag_corpus_documents,n_chunks,int,Number of chunks the document is split into for retrieval.,
|
| 10 |
+
rag_corpus_documents,avg_chunk_tokens,float,Average token count per chunk for this document.,
|
| 11 |
+
rag_corpus_documents,created_at_utc,datetime,UTC timestamp when the document was first created in the corpus.,
|
| 12 |
+
rag_corpus_documents,last_updated_at_utc,datetime,UTC timestamp when the document was last updated.,
|
| 13 |
+
rag_corpus_documents,is_active,int,Whether the document is currently active and used by the RAG system.,0/1
|
| 14 |
+
rag_corpus_documents,contains_tables,int,Whether the document contains tabular data.,0/1
|
| 15 |
+
rag_corpus_documents,pii_risk_level,category,Qualitative PII risk for this document.,"low, medium, none"
|
| 16 |
+
rag_corpus_documents,security_tier,category,Security classification tier for the document.,"highly_restricted, internal, public, restricted"
|
| 17 |
+
rag_corpus_documents,embedding_model,string,Name of the embedding model used to embed this document.,"all-minilm-l12-v2, bge-m3, e5-mistral-7b, gte-large, text-embedding-3-large, text-embedding-3-small"
|
| 18 |
+
rag_corpus_documents,owner_team,string,Logical team or function that owns the document content.,"data, engineering, finance, hr, legal, marketing, product, support"
|
| 19 |
+
rag_corpus_documents,search_index,string,Search index or collection name where this document is indexed.,Index name (string)
|
| 20 |
+
rag_corpus_documents,top_keywords,text,"Representative keywords extracted for the document, stored as a short text list.",
|
| 21 |
+
rag_corpus_chunks,chunk_id,string,Unique identifier for each text chunk in the corpus.,Pattern: C[0-9]{5}
|
| 22 |
+
rag_corpus_chunks,doc_id,string,Identifier of the parent document that this chunk belongs to.,Pattern: DOC[0-9]{4}
|
| 23 |
+
rag_corpus_chunks,domain,string,"Domain of the parent document, repeated for convenience.","customer_success, data_platform_docs, developer_docs, financial_reports, hr_policies, marketing_analytics, medical_guides, mlops_docs, policies, product_docs, research_papers, support_faq"
|
| 24 |
+
rag_corpus_chunks,chunk_index,int,Index of the chunk within its parent document (0 based).,
|
| 25 |
+
rag_corpus_chunks,estimated_tokens,int,Estimated token count for the chunk text.,
|
| 26 |
+
rag_corpus_chunks,chunk_text,text,Raw text content of the chunk used for retrieval.,
|
| 27 |
+
rag_retrieval_events,run_id,string,Identifier of the QA evaluation run this retrieval event belongs to. Links to rag_qa_eval_runs.run_id.,Pattern: run_[0-9]+
|
| 28 |
+
rag_retrieval_events,chunk_id,string,Identifier of the retrieved chunk. Links to rag_corpus_chunks.chunk_id.,Pattern: C[0-9]{5}
|
| 29 |
+
rag_retrieval_events,rank,int,Rank position of the chunk in the retrieved list (1 = top ranked).,
|
| 30 |
+
rag_retrieval_events,retrieval_score,float,Raw retrieval score for the chunk (higher is more similar or relevant).,
|
| 31 |
+
rag_retrieval_events,is_relevant,int,Whether this chunk is labeled as relevant to the query.,"0 / 1 (0 = not relevant, 1 = relevant)"
|
| 32 |
+
rag_retrieval_events,query_domain,string,Domain of the question/query being evaluated (not the chunk domain).,"developer_docs, financial_reports, hr_policies, medical_guides, policies, product_docs, research_papers, support_faq"
|
| 33 |
+
rag_retrieval_events,difficulty,category,Difficulty label of the underlying QA example.,"easy, hard, medium"
|
| 34 |
+
rag_retrieval_events,retrieval_strategy,category,Retrieval strategy used in this run.,"bm25, bm25_then_rerank, dense, dense_then_rerank, hybrid"
|
| 35 |
+
rag_retrieval_events,example_id,string,Identifier of the QA example (scenario) used for this run.,Pattern: QA[0-9]{6}
|
| 36 |
+
rag_retrieval_events,scenario_id,string,Scenario identifier for the query (denormalized for convenience).,Pattern: SC[0-9]{4}
rag_qa_eval_runs,example_id,string,Identifier for the QA example that this run is evaluating.,Pattern: QA[0-9]{6}
rag_qa_eval_runs,run_id,string,Unique identifier for this evaluation run. Joins with rag_retrieval_events.run_id.,Pattern: run_[0-9]+
rag_qa_eval_runs,domain,string,Domain or topic of the QA example.,"developer_docs, financial_reports, hr_policies, medical_guides, policies, product_docs, research_papers, support_faq"
rag_qa_eval_runs,task_type,string,"High-level task type for the run (e.g., factoid, multi_hop, summarization).","comparison, explanation, factoid, instruction_following, multi_hop, summarization, table_qa, temporal_reasoning"
rag_qa_eval_runs,difficulty,category,"Observed difficulty label for the QA example (easy, medium, hard), derived from retrieval quality, hallucination, and correctness.","easy, hard, medium"
rag_qa_eval_runs,query,text,Natural language query or question posed to the RAG system.,
rag_qa_eval_runs,gold_answer,text,Reference answer used as the gold standard for evaluation.,
rag_qa_eval_runs,answer_tokens,int,Approximate token count of the model answer.,
rag_qa_eval_runs,is_correct,int,"Binary correctness label for the final answer (1 = sufficiently correct, 0 = not correct). Coarser, binary view of the same signal represented by correctness_label.",0 / 1
rag_qa_eval_runs,correctness_label,category,"Multi-class correctness label for the final answer, for example correct / partial / incorrect. More fine-grained view of overall correctness than is_correct.","correct, incorrect, partial"
rag_qa_eval_runs,faithfulness_label,category,"Multi-class faithfulness label capturing how well the answer is grounded in retrieved evidence (e.g., faithful / unfaithful / unknown).","faithful, unfaithful, unknown"
rag_qa_eval_runs,hallucination_flag,int,"Binary hallucination label (1 = hallucination present, 0 = no hallucination detected). Related to the more fine-grained faithfulness_label.",0 / 1
rag_qa_eval_runs,retrieval_strategy,category,Retrieval strategy used for this run.,"bm25, bm25_then_rerank, dense, dense_then_rerank, hybrid"
rag_qa_eval_runs,chunking_strategy,category,Chunking strategy used when building the corpus.,"by_heading, fixed_512_tokens, semantic, sliding_window_256_overlap"
rag_qa_eval_runs,n_retrieved_chunks,int,"Total number of chunks returned by the retriever for this query. May be larger than the number of rows stored in rag_retrieval_events, which usually logs only the top-k results for analysis (e.g., top 10).",
rag_qa_eval_runs,top1_score,float,Retrieval score of the highest ranked chunk in this run.,
rag_qa_eval_runs,mean_retrieved_score,float,Mean retrieval score across all retrieved chunks for this run.,
rag_qa_eval_runs,recall_at_5,float,Binary recall@5 of relevant chunks for this QA example.,
rag_qa_eval_runs,recall_at_10,float,Binary recall@10 of relevant chunks for this QA example.,
rag_qa_eval_runs,mrr_at_10,float,Mean reciprocal rank@10 for this QA example.,
rag_qa_eval_runs,used_long_context_window,int,Flag indicating whether the run was configured to allow a longer context window (not strictly derived from context_window_tokens).,0 / 1
rag_qa_eval_runs,context_window_tokens,int,Maximum context window size in tokens used for this run.,
rag_qa_eval_runs,latency_ms_retrieval,int,Time taken by retrieval in milliseconds.,
rag_qa_eval_runs,latency_ms_generation,int,Time taken by answer generation in milliseconds.,
rag_qa_eval_runs,total_latency_ms,int,Total end-to-end latency in milliseconds (retrieval + generation + overhead).,
rag_qa_eval_runs,embedding_model,string,Name of the embedding model powering the retriever.,"all-minilm-l12-v2, bge-m3, e5-mistral-7b, gte-large, text-embedding-3-large, text-embedding-3-small"
rag_qa_eval_runs,reranker_model,string,"Name of the reranker model, if used.","bge-reranker-base, colbert-v2, cross-encoder-ms-marco, gte-qwen-reranker, jina-reranker-v1, none"
rag_qa_eval_runs,doc_ids_used,text,"Pipe-delimited list of document IDs whose chunks were used to build the generation prompt in this run (top-k context only, not all retrieved chunks).",Pipe-delimited list of IDs (top-k prompt context)
rag_qa_eval_runs,chunk_ids_used,text,"Pipe-delimited list of chunk IDs used to build the generation prompt in this run (top-k context only, not all retrieved chunks).",Pipe-delimited list of IDs (top-k prompt context)
rag_qa_eval_runs,supervising_judge_label,category,Label from an external or supervising judge model or human.,"borderline, fail, pass"
rag_qa_eval_runs,eval_mode,category,Evaluation mode used for this run.,"human_eval, no_answer_probe, offline_batch, online_shadow"
rag_qa_eval_runs,user_feedback_label,category,Optional user feedback label for this answer.,"dissatisfied, negative, neutral, no_feedback, positive, satisfied, very_negative, very_positive"
rag_qa_eval_runs,created_at_utc,datetime,UTC timestamp when this run record was created.,
rag_qa_eval_runs,generator_model,string,Name of the LLM / generator model used to produce the answer.,"claude-3.5-sonnet, gpt-4o, gpt-4o-mini, llama-3.1-70b-instruct, llama-3.1-8b-instruct, mistral-large"
rag_qa_eval_runs,temperature,float,Sampling temperature used for generation.,
rag_qa_eval_runs,top_p,float,Top-p nucleus sampling parameter used for generation.,
rag_qa_eval_runs,max_new_tokens,int,Maximum number of new tokens allowed for the generated answer.,
rag_qa_eval_runs,stop_reason,string,"Reason why generation stopped (eos = end of sequence reached, length = token limit hit, error = generation failure).","eos, error, length"
rag_qa_eval_runs,prompt_tokens,int,Number of tokens in the prompt / input context.,
rag_qa_eval_runs,total_cost_usd,float,"Approximate total cost of the run in USD, based on token consumption.",
rag_qa_scenarios,scenario_id,string,"Unique identifier for the QA scenario (SC001, SC002, ...).",Pattern: SC[0-9]{4}
rag_qa_scenarios,domain,string,"Domain of the scenario, aligned with corpus document domains.","analytics_reports, customer_success, data_platform_docs, developer_docs, financial_reports, hr_policies, marketing_analytics, medical_guides, mlops_docs, policies, product_docs, research_papers, security_runbooks, support_faq"
rag_qa_scenarios,primary_doc_id,string,Primary document ID that contains the canonical answer content.,Pattern: DOC[0-9]{4}
rag_qa_scenarios,query,text,User facing question or query for this scenario.,
rag_qa_scenarios,gold_answer,text,"Gold reference answer for this scenario, grounded in the primary document.",
rag_qa_scenarios,difficulty_level,category,Scenario difficulty level.,"easy, hard, medium"
rag_qa_scenarios,scenario_type,category,"Scenario type: standard_qa for answerable questions, no_answer_probe for questions deliberately lacking an answer in the corpus.","standard_qa, no_answer_probe"
rag_qa_scenarios,use_case,category,"Intended team or function that would typically ask this question (customer_support, clinical_support, etc.).","analytics_team, clinical_support, customer_success, customer_support, data_platform_team, engineering, executive_reporting, finance_team, hr_team, marketing_team, mlops_engineering, no_answer_detection, operations, product_management, research, security_admin, security_compliance"
rag_qa_eval_runs,scenario_id,string,Identifier linking each QA example to a high-level scenario in rag_qa_scenarios.,Pattern: SC[0-9]{4}
rag_qa_scenarios,has_answer_in_corpus,int,"1 if the scenario is constructed such that the answer exists somewhere in the corpus, 0 for explicit no-answer probes.",0 / 1
rag_qa_eval_runs,has_answer_in_corpus,int,Flag indicating whether the underlying scenario has an answer in the corpus (1) or is a no-answer probe (0).,0 / 1
rag_qa_eval_runs,is_noanswer_probe,int,Flag marking queries intentionally designed to have no valid answer in the corpus (no-answer probes). Only a small fraction of examples use this mode.,0 / 1
rag_qa_eval_runs,has_relevant_in_top5,int,Flag indicating whether at least one relevant chunk was retrieved within the top 5 ranks. Derived from relevance labels in rag_retrieval_events.,0 / 1
rag_qa_eval_runs,has_relevant_in_top10,int,Flag indicating whether at least one relevant chunk was retrieved within the top 10 ranks. Typically derived from recall_at_10.,0 / 1
rag_qa_eval_runs,answered_without_retrieval,int,Flag set to 1 when the model produced a correct answer even though recall_at_10 = 0 and the answer exists somewhere in the corpus; 0 otherwise.,0 / 1
rag_qa_scenarios,n_eval_examples,int,Number of QA evaluation examples in rag_qa_eval_runs that reference this scenario_id.,>= 0
rag_qa_scenarios,is_used_in_eval,int,"1 if this scenario_id appears at least once in rag_qa_eval_runs, 0 otherwise.",0 / 1
rag_qa_scenarios,query_id,string,Stable identifier for the unique query text used across scenarios and runs.,Pattern: Q[0-9]{4}
rag_qa_scenarios,split,category,Recommended ML split assigned deterministically by query_id to avoid leakage.,"train, val, test"
rag_qa_eval_runs,query_id,string,Stable identifier for the query text in this evaluation example.,Pattern: Q[0-9]{4}
rag_qa_eval_runs,split,category,Recommended ML split assigned deterministically by query_id.,"train, val, test"
rag_qa_eval_runs,run_config_id,string,Deterministic hash identifier for the run configuration (model + decoding + retrieval + chunking).,Pattern: CFG[A-F0-9]{10}
rag_retrieval_events,query_id,string,Stable identifier for the query associated with this retrieval event (via example_id).,Pattern: Q[0-9]{4}
rag_retrieval_events,split,category,Recommended ML split assigned deterministically by query_id (via example_id).,"train, val, test"