---
language:
- en
- zh
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
tags:
- hallucination
- regulatory-compliance
- preference-optimization
- dpo
- long-context
- detail-faithfulness
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: data/train-*.jsonl
- split: validation
path: data/val-*.jsonl
- split: test
path: data/test-*.jsonl
---
# DetailBench
**A Benchmark for Detail Hallucination in Long Regulatory Documents**
DetailBench is a benchmark for evaluating and mitigating *detail hallucination* in LLM outputs on long regulatory documents.
## Overview
Large language models frequently produce *detail hallucinations*—subtle errors in threshold values, units, scopes, obligation levels, and conditions—when processing long regulatory documents. DetailBench provides:
- **322 source documents** (172 real + 150 synthetic) from three jurisdictions
- **13,000 preference pairs** (10,000 train / 1,000 validation / 2,000 test)
- **Five detail error types** (τ₁–τ₅) with balanced training distribution
- **Three context-length tiers**: Short (8K–16K), Medium (16K–32K), Long (32K–64K tokens)
## Data Sources
| Source | Count | Description |
|--------|------:|-------------|
| GB Standards | 65 | Chinese national standards on hydrogen production, storage, transportation, and safety |
| US CFR | 31 | Code of Federal Regulations (Title 49: Transportation, Title 40: Environmental Protection) via eCFR API |
| EUR-Lex | 76 | EU regulations on hydrogen infrastructure, clean energy, pressure equipment via CELLAR API |
| Synthetic | 150 | Documents generated from domain templates for training augmentation |
## Schema
Each sample in the JSONL files contains:
```json
{
  "sample_id": "test_00000",
  "context_tier": "long",
  "token_count": 43368,
  "documents": [
    {
      "doc_id": "SYNTH_0075",
      "source": "synthetic",
      "segments": [
        {
          "segment_id": "SYNTH_0075_seg_0",
          "text": "...",
          "token_count": 605
        }
      ]
    }
  ],
  "query": "An electrolyser plant produces hydrogen at ...",
  "chosen": {
    "is_compliant": true,
    "constraints": [
      {"type": "tau_1", "description": "...", "value": "82", "unit": "°C"}
    ],
    "evidence": [
      {"segment_id": "...", "quote": "..."}
    ]
  },
  "rejected": {
    "is_compliant": true,
    "constraints": ["... (with one perturbed detail)"],
    "evidence": ["..."]
  },
  "perturbation": {
    "error_type": "tau_5_condition",
    "original_value": "where appropriate",
    "perturbed_value": "[dropped]",
    "detail_element_id": "...",
    "segment_id": "..."
  },
  "detail_elements": [
    {
      "element_id": "...",
      "type": "tau_1",
      "value": "3928.0",
      "unit": "kg",
      "span": [46, 55],
      "segment_id": "...",
      "quote": "..."
    }
  ]
}
```
## Detail Error Taxonomy
| Type | Name | Description | Example |
|------|------|-------------|---------|
| τ₁ | Threshold | Numeric value errors | "pressure ≤ **35** MPa" → "pressure ≤ **45** MPa" |
| τ₂ | Unit | Measurement unit errors | "distance in **meters**" → "distance in **feet**" |
| τ₃ | Scope | Applicability scope errors | "for **indoor** facilities" → "for **all** facilities" |
| τ₄ | Level | Obligation level errors | "**shall** comply" → "**should** comply" |
| τ₅ | Condition | Conditional clause errors | "if temperature **exceeds 60°C**" → condition dropped |
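As an illustration of how a τ₁ (threshold) perturbation can be produced mechanically, here is a hedged sketch; the `perturb_threshold` helper and the fixed scaling factor are assumptions for illustration, not the dataset's actual generation pipeline:

```python
import re

def perturb_threshold(text, factor=1.5):
    """Illustrative tau_1 perturbation: scale the first numeric value in a clause.

    Returns the text unchanged if it contains no number.
    """
    m = re.search(r"\d+(?:\.\d+)?", text)
    if m is None:
        return text
    perturbed = float(m.group()) * factor
    # %g formatting drops trailing zeros ("52.5", not "52.500000")
    return text[:m.start()] + f"{perturbed:g}" + text[m.end():]
```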
## Evaluation Metrics
- **Compliance Accuracy**: Fraction of correct compliance judgments
- **Detail Error Rate (DER)**: Per-type and overall error rate on detail elements
- **Evidence F1**: Precision/recall/F1 of predicted evidence citations
- **Evidence Consistency**: Fraction of citations where quoted text matches source
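The evidence metric can be sketched as set overlap over cited segment IDs. This is one plausible formulation of Evidence F1, not necessarily the official scorer:

```python
def evidence_f1(predicted_ids, gold_ids):
    """Precision/recall/F1 over cited segment IDs (illustrative formulation)."""
    pred, gold = set(predicted_ids), set(gold_ids)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```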
## Usage
```python
from datasets import load_dataset
ds = load_dataset("YOUR_USERNAME/DetailBench")
# Access splits
train = ds["train"] # 10,000 samples
val = ds["validation"] # 1,000 samples
test = ds["test"] # 2,000 samples
# Example: inspect a test sample
sample = test[0]
print(sample["query"])
print(sample["context_tier"]) # "short", "medium", or "long"
print(len(sample["documents"]))
```
## Split Statistics
| Split | Samples | Short | Medium | Long |
|-------|--------:|------:|-------:|-----:|
| Train | 10,000 | 6,463 | 2,263 | 1,274 |
| Val | 1,000 | 605 | 249 | 146 |
| Test | 2,000 | 1,215 | 519 | 266 |
Error type distribution in the training set is balanced at 20% each (2,000 per type).
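The per-type balance can be verified directly from each sample's `perturbation` field. A minimal sketch, assuming the schema above (type strings like `"tau_5_condition"`, where the first two underscore-separated parts identify the τ type):

```python
from collections import Counter

def error_type_distribution(samples):
    """Tally perturbation error types, e.g. 'tau_5_condition' -> 'tau_5'."""
    return Counter(
        "_".join(s["perturbation"]["error_type"].split("_")[:2])
        for s in samples
    )
```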
## License
This dataset is released under CC-BY-4.0. The underlying regulatory documents are sourced from public government repositories (eCFR, EUR-Lex, openstd.samr.gov.cn).