---
language:
  - en
  - zh
license: cc-by-4.0
task_categories:
  - text-generation
  - question-answering
tags:
  - hallucination
  - regulatory-compliance
  - preference-optimization
  - dpo
  - long-context
  - detail-faithfulness
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*.jsonl
      - split: validation
        path: data/val-*.jsonl
      - split: test
        path: data/test-*.jsonl
---

# DetailBench

**A Benchmark for Detail Hallucination in Long Regulatory Documents**

DetailBench is a benchmark for evaluating and mitigating detail hallucination in LLM outputs on long regulatory documents.

## Overview

Large language models frequently produce detail hallucinations—subtle errors in threshold values, units, scopes, obligation levels, and conditions—when processing long regulatory documents. DetailBench provides:

- **322 source documents** (172 real + 150 synthetic) from three jurisdictions
- **13,000 preference pairs** (10,000 train / 1,000 validation / 2,000 test)
- **Five detail error types** (τ₁–τ₅) with a balanced training distribution
- **Three context-length tiers**: Short (8K–16K), Medium (16K–32K), and Long (32K–64K tokens)

## Data Sources

| Source | Count | Description |
|---|---|---|
| GB Standards | 65 | Chinese national standards on hydrogen production, storage, transportation, and safety |
| US CFR | 31 | Code of Federal Regulations (Title 49: Transportation; Title 40: Environmental Protection), retrieved via the eCFR API |
| EUR-Lex | 76 | EU regulations on hydrogen infrastructure, clean energy, and pressure equipment, retrieved via the CELLAR API |
| Synthetic | 150 | Documents generated from domain templates for training augmentation |

## Schema

Each sample in the JSONL files contains:

```json
{
  "sample_id": "test_00000",
  "context_tier": "long",
  "token_count": 43368,
  "documents": [
    {
      "doc_id": "SYNTH_0075",
      "source": "synthetic",
      "segments": [
        {
          "segment_id": "SYNTH_0075_seg_0",
          "text": "...",
          "token_count": 605
        }
      ]
    }
  ],
  "query": "An electrolyser plant produces hydrogen at ...",
  "chosen": {
    "is_compliant": true,
    "constraints": [
      {"type": "tau_1", "description": "...", "value": "82", "unit": "°C"}
    ],
    "evidence": [
      {"segment_id": "...", "quote": "..."}
    ]
  },
  "rejected": {
    "is_compliant": true,
    "constraints": ["... (with one perturbed detail)"],
    "evidence": ["..."]
  },
  "perturbation": {
    "error_type": "tau_5_condition",
    "original_value": "where appropriate",
    "perturbed_value": "[dropped]",
    "detail_element_id": "...",
    "segment_id": "..."
  },
  "detail_elements": [
    {
      "element_id": "...",
      "type": "tau_1",
      "value": "3928.0",
      "unit": "kg",
      "span": [46, 55],
      "segment_id": "...",
      "quote": "..."
    }
  ]
}
```
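As a quick sanity check when loading the JSONL files directly, a sample can be validated against the schema above. This is a minimal sketch on a hand-written sample; the field names follow the schema, but the example values and the `validate_sample` helper are illustrative, not part of the dataset tooling:

```python
import json

TAU_TYPES = ("tau_1", "tau_2", "tau_3", "tau_4", "tau_5")

def validate_sample(sample: dict) -> list[str]:
    """Return a list of schema problems found in one DetailBench sample."""
    problems = []
    if sample.get("context_tier") not in ("short", "medium", "long"):
        problems.append("context_tier must be short/medium/long")
    pert = sample.get("perturbation", {})
    if not any(pert.get("error_type", "").startswith(t) for t in TAU_TYPES):
        problems.append("perturbation.error_type must start with tau_1..tau_5")
    for elem in sample.get("detail_elements", []):
        if elem.get("type") not in TAU_TYPES:
            problems.append(f"unknown detail element type: {elem.get('type')}")
    return problems

# Illustrative JSONL line (values invented for the sketch)
line = ('{"sample_id": "test_00000", "context_tier": "long", '
        '"perturbation": {"error_type": "tau_5_condition"}, '
        '"detail_elements": [{"type": "tau_1"}]}')
print(validate_sample(json.loads(line)))  # []
```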

## Detail Error Taxonomy

| Type | Name | Description | Example |
|---|---|---|---|
| τ₁ | Threshold | Numeric value errors | "pressure ≤ 35 MPa" → "pressure ≤ 45 MPa" |
| τ₂ | Unit | Measurement unit errors | "distance in meters" → "distance in feet" |
| τ₃ | Scope | Applicability scope errors | "for indoor facilities" → "for all facilities" |
| τ₄ | Level | Obligation level errors | "shall comply" → "should comply" |
| τ₅ | Condition | Conditional clause errors | "if temperature exceeds 60°C" → condition dropped |
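To make the taxonomy concrete, here is a toy sketch of what a τ₄ (obligation level) perturbation looks like when applied to a segment. The actual perturbation pipeline is not described in this card, so the `LEVEL_SWAPS` table and `perturb_level` helper below are purely illustrative:

```python
import re

# Modal downgrades for the illustrative tau_4 perturbation (assumed, not the real pipeline)
LEVEL_SWAPS = {"shall": "should", "must": "may"}

def perturb_level(text: str) -> str:
    """Downgrade the first obligation modal found in the text (tau_4)."""
    for strong, weak in LEVEL_SWAPS.items():
        pattern = rf"\b{strong}\b"
        if re.search(pattern, text):
            return re.sub(pattern, weak, text, count=1)
    return text  # no obligation modal found; text unchanged

print(perturb_level("Operators shall comply with the venting requirements."))
# Operators should comply with the venting requirements.
```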

## Evaluation Metrics

- **Compliance Accuracy**: fraction of correct compliance judgments
- **Detail Error Rate (DER)**: per-type and overall error rate on detail elements
- **Evidence F1**: precision/recall/F1 of predicted evidence citations
- **Evidence Consistency**: fraction of citations whose quoted text matches the source
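Evidence F1 can be computed from predicted and gold citations. The benchmark's exact matching rules are not specified in this card, so the set-based matching on `segment_id` below is an assumption; it is a minimal sketch rather than the official scorer:

```python
def evidence_f1(predicted: list[str], gold: list[str]) -> tuple[float, float, float]:
    """Set-based precision/recall/F1 over cited segment_ids (illustrative)."""
    pred, ref = set(predicted), set(gold)
    if not pred or not ref:
        return 0.0, 0.0, 0.0
    tp = len(pred & ref)  # citations that appear in both sets
    precision = tp / len(pred)
    recall = tp / len(ref)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

p, r, f = evidence_f1(["seg_0", "seg_1", "seg_3"], ["seg_0", "seg_1", "seg_2"])
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```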

## Usage

```python
from datasets import load_dataset

ds = load_dataset("YOUR_USERNAME/DetailBench")

# Access splits
train = ds["train"]      # 10,000 samples
val = ds["validation"]   # 1,000 samples
test = ds["test"]        # 2,000 samples

# Example: inspect a test sample
sample = test[0]
print(sample["query"])
print(sample["context_tier"])  # "short", "medium", or "long"
print(len(sample["documents"]))
```
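For per-type analysis (e.g., computing DER separately per τ type), samples can be grouped by their `perturbation.error_type`. The sketch below runs on stand-in dicts so it is self-contained; in practice you would iterate over `ds["test"]`, whose samples follow the schema above:

```python
from collections import defaultdict

def group_by_error_type(samples):
    """Bucket samples by the tau_N prefix of perturbation.error_type."""
    buckets = defaultdict(list)
    for s in samples:
        # "tau_5_condition" -> "tau_5"
        tau = "_".join(s["perturbation"]["error_type"].split("_")[:2])
        buckets[tau].append(s)
    return buckets

# Stand-in samples (illustrative); real samples come from ds["test"]
samples = [
    {"perturbation": {"error_type": "tau_5_condition"}},
    {"perturbation": {"error_type": "tau_1_threshold"}},
    {"perturbation": {"error_type": "tau_5_condition"}},
]
groups = group_by_error_type(samples)
print({k: len(v) for k, v in sorted(groups.items())})  # {'tau_1': 1, 'tau_5': 2}
```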

## Split Statistics

| Split | Samples | Short | Medium | Long |
|---|---|---|---|---|
| Train | 10,000 | 6,463 | 2,263 | 1,274 |
| Validation | 1,000 | 605 | 249 | 146 |
| Test | 2,000 | 1,215 | 519 | 266 |

The error-type distribution in the training set is balanced: 20% (2,000 samples) per type.

## License

This dataset is released under CC-BY-4.0. The underlying regulatory documents are sourced from public government repositories (eCFR, EUR-Lex, openstd.samr.gov.cn).