---
pretty_name: GroundedGeo
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - en
tags:
  - rag
  - retrieval
  - citations
  - geography
  - evaluation
  - benchmark
size_categories:
  - n<1K
---

# GroundedGeo: A Benchmark for Citation-Grounded Geographic QA


GroundedGeo is a research-grade benchmark for evaluating RAG systems on location-based queries with verifiable citations, freshness awareness, and conflict handling.

## 🎯 Key Findings (Frozen Test Split)

- Naïve RAG reaches 79.2% accuracy but fails on conflicting sources (11.1% conflict-handled).
- Adding official-source ranking improves overall accuracy to 94.3% and raises conflict handling to 100%.

*Conflict handled* means the system either (a) explicitly flags a source disagreement or (b) prefers the official source and cites it.
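The conflict-handled criterion can be sketched as a simple scoring check. Note that the field names (`flags_conflict`, `cited_sources`) and the substring-based domain match below are illustrative assumptions, not part of the dataset release or the official evaluation harness:

```python
def conflict_handled(response, official_domains):
    """Return True if a system response satisfies the conflict-handled
    criterion: it either (a) explicitly flags a source disagreement or
    (b) cites at least one official source.

    `response` is a hypothetical dict with keys:
      - "flags_conflict": bool, system explicitly noted a disagreement
      - "cited_sources": list of URLs the answer cites
    """
    if response.get("flags_conflict"):
        return True  # (a) disagreement explicitly flagged
    # (b) naive check: does any cited URL contain an official domain marker?
    return any(
        any(domain in url for domain in official_domains)
        for url in response.get("cited_sources", [])
    )

# Example: a response that cites a county's official .gov site
resp = {"flags_conflict": False,
        "cited_sources": ["https://www.kingcounty.gov/parks/hours"]}
print(conflict_handled(resp, official_domains=[".gov"]))  # True
```

A real scorer would need stricter URL parsing (the substring match here would accept look-alike domains), but the two-branch logic mirrors the definition above.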

## 📊 Dataset Statistics

| Statistic | Value |
|---|---|
| Total queries | 200 |
| Dev split | 147 |
| Test split (frozen) | 53 |
| Hard-case buckets | 5 (40 each) |
| Multi-source queries | 85 |

### Five Hard-Case Buckets

| Bucket | Count | Challenge |
|---|---|---|
| Boundary Adjacent | 40 | Locations near county/state lines |
| Ambiguous Name | 40 | "Springfield" exists in many states |
| Overlapping Jurisdiction | 40 | Multiple authorities apply |
| Stale Fact | 40 | Hours, fees, and policies change over time |
| Conflicting Sources | 40 | Official vs. community sources disagree |

## 📈 Benchmark Results (Frozen Test Split)

| System | Accuracy | Conflict Handled |
|---|---|---|
| Closed-Book LLM | 17.0% | 11.1% |
| Naïve RAG | 79.2% | 11.1% |
| Official-First RAG | 94.3% | 100.0% |
| Freshness-Filter RAG | 79.2% | 11.1% |
| Conflict-Aware RAG | 92.5% | 88.9% |

## Load Dataset

```python
from datasets import load_dataset

# Load the full dataset
ds = load_dataset("nidhip1611/groundedgeo", data_files="groundedgeo_v1.0.jsonl")

# Or load the dev/test splits separately
ds = load_dataset(
    "nidhip1611/groundedgeo",
    data_files={"dev": "dev.jsonl", "test": "test.jsonl"},
)

print(ds)
```

## Query Schema

Each query contains:

- `query_id`: Unique identifier
- `query_text`: The question
- `gold_answer`: Expected answer
- `hard_case_bucket`: One of the five hard-case buckets
- `split`: `"dev"` or `"test"`
- `gold_evidence`: Evidence spans with source URLs and citation spans
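A minimal sketch of working with records in this schema, using two hand-written records for illustration. The query texts, answers, and URLs below are invented examples, not actual dataset entries:

```python
# Two illustrative records following the schema above (contents invented).
records = [
    {"query_id": "q001",
     "query_text": "Which county's noise ordinance applies at this address?",
     "gold_answer": "Example County",
     "hard_case_bucket": "Overlapping Jurisdiction",
     "split": "dev",
     "gold_evidence": [{"url": "https://example.gov/ordinance",
                        "span": "Example County ordinance 12.3 applies"}]},
    {"query_id": "q002",
     "query_text": "What are the current hours for the main library branch?",
     "gold_answer": "9am-5pm weekdays",
     "hard_case_bucket": "Stale Fact",
     "split": "test",
     "gold_evidence": [{"url": "https://example.gov/library",
                        "span": "Open 9am-5pm Monday through Friday."}]},
]

# Select the frozen test split, or a single hard-case bucket.
test_split = [r for r in records if r["split"] == "test"]
stale = [r for r in records if r["hard_case_bucket"] == "Stale Fact"]
print(len(test_split), len(stale))  # 1 1
```

With the Hugging Face `datasets` loader shown above, the same selections can be done with `ds.filter(...)` on these fields.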

## Intended Use

GroundedGeo is designed to evaluate citation-grounded QA behaviors in RAG systems, especially under ambiguity, freshness constraints, and conflicting sources. It is not meant for real-time civic decision-making without live verification.

## Citation

```bibtex
@dataset{pandya2025groundedgeo,
  author    = {Pandya, Nidhi},
  title     = {GroundedGeo: A Benchmark for Citation-Grounded Geographic QA},
  publisher = {Zenodo},
  year      = {2026},
  doi       = {10.5281/zenodo.18142378},
  url       = {https://doi.org/10.5281/zenodo.18142378}
}
```

## License

CC BY 4.0