---
pretty_name: CitationGround-1M (Platinum)
language:
- en
license: apache-2.0
task_categories:
- question-answering
- text-generation
tags:
- rag
- grounding
- citations
- retrieval
- hallucination-reduction
- hard-negatives
size_categories:
- n<1K # sample pack; replace after scaling
dataset_info:
creator: "Within US AI"
contact: "Within US AI"
created: "2025-12-30T16:53:41Z"
schema: "See Features section below"
---
# CitationGround-1M (Platinum)
- **Developer/Publisher:** Within US AI
- **Version:** 0.1.0 (sample pack)
- **Created:** 2025-12-30T16:53:41Z
## What this dataset is
`CitationGround-1M` is a **citation-locked** grounded QA/RAG dataset:
- Answers must be supported entirely by the provided `contexts`
- Every answer carries **span-level citations** (`doc_id` + character offsets)
- `answerable=false` hard negatives are included to train abstention behavior
## Features / schema (JSONL)
- `example_id` (string)
- `question` (string)
- `contexts` (list of docs)
- `answer` (string)
- `citations` (list of spans)
- `answerable` (bool)
- `difficulty` (int; 1–5)
- `reason` (string)
- `language` (string)
- `created_utc` (string)
- `license_note` (string)
### Context doc format
- `doc_id`, `title`, `text`, `source_type`, `provenance`
### Citation span format
- `doc_id`, `start`, `end` (character offsets in `text`)
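To make the schema concrete, here is a minimal sketch with illustrative values (not drawn from the dataset) showing one record and how a citation span resolves to text via character offsets; `resolve_citation` is a hypothetical helper, not part of the dataset tooling:

```python
# Hypothetical sample record matching the schema above (illustrative values only).
example = {
    "example_id": "cg-000001",
    "question": "What year was the Apache License 2.0 released?",
    "contexts": [
        {
            "doc_id": "doc-a",
            "title": "Apache License",
            "text": "The Apache License 2.0 was released in January 2004.",
            "source_type": "synthetic",
            "provenance": "sample",
        }
    ],
    "answer": "2004",
    "citations": [{"doc_id": "doc-a", "start": 47, "end": 51}],
    "answerable": True,
    "difficulty": 1,
    "reason": "Directly stated in doc-a.",
    "language": "en",
    "created_utc": "2025-12-30T16:53:41Z",
    "license_note": "apache-2.0",
}

def resolve_citation(example, citation):
    """Return the cited text by slicing character offsets into the matching context doc."""
    doc = next(d for d in example["contexts"] if d["doc_id"] == citation["doc_id"])
    return doc["text"][citation["start"]:citation["end"]]

print(resolve_citation(example, example["citations"][0]))  # 2004
```

Because `start`/`end` are plain character offsets into `text`, cited spans can be recovered without any tokenizer-specific alignment.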
## Splits
- `data/train.jsonl`
- `data/validation.jsonl`
- `data/test.jsonl`
## How to load
```python
from datasets import load_dataset

# The splits are plain JSONL, so the generic "json" builder loads them directly;
# no custom loading script is required.
data_files = {
    "train": "data/train.jsonl",
    "validation": "data/validation.jsonl",
    "test": "data/test.jsonl",
}
ds = load_dataset("json", data_files=data_files)
print(ds["train"][0])
```
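Since citations are character offsets, a consumer can run lightweight integrity checks before training. A minimal sketch, assuming the field names from the schema above (`validate_record` is a hypothetical helper, not shipped with the dataset):

```python
def validate_record(rec):
    """Light integrity checks: citation offsets must land inside the cited doc's
    text, and unanswerable hard negatives should carry no citations."""
    docs = {d["doc_id"]: d for d in rec["contexts"]}
    if not rec["answerable"]:
        # Hard negatives are abstention cases: nothing to cite.
        return len(rec["citations"]) == 0
    for c in rec["citations"]:
        doc = docs.get(c["doc_id"])
        if doc is None or not (0 <= c["start"] < c["end"] <= len(doc["text"])):
            return False
    return len(rec["citations"]) > 0

# Illustrative records (not from the dataset):
ok = {
    "contexts": [{"doc_id": "d1", "text": "Paris is the capital of France."}],
    "answerable": True,
    "citations": [{"doc_id": "d1", "start": 0, "end": 5}],
}
bad = {
    "contexts": [{"doc_id": "d1", "text": "short"}],
    "answerable": True,
    "citations": [{"doc_id": "d1", "start": 0, "end": 999}],  # offsets out of bounds
}
print(validate_record(ok), validate_record(bad))  # True False
```

Running such a check per split catches out-of-bounds spans and hard negatives that accidentally carry citations before they reach a trainer or evaluator.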