gss1147 committed · Commit ff7ca58 · verified · 1 Parent(s): 1a58d9d

Upload 2 files

Files changed (2):
  1. LICENSE +5 -0
  2. README.md +66 -0
LICENSE ADDED
@@ -0,0 +1,5 @@
+ Apache License
+ Version 2.0, January 2004
+
+ This repository ships a template dataset package.
+ For the full Apache-2.0 text, see: https://www.apache.org/licenses/LICENSE-2.0
README.md ADDED
@@ -0,0 +1,66 @@
+ ---
+ pretty_name: CitationGround-1M (Platinum)
+ language:
+ - en
+ license: apache-2.0
+ task_categories:
+ - question-answering
+ - text-generation
+ tags:
+ - rag
+ - grounding
+ - citations
+ - retrieval
+ - hallucination-reduction
+ - hard-negatives
+ size_categories:
+ - n<1K # sample pack; replace after scaling
+ dataset_info:
+ creator: "Within US AI"
+ contact: "Within US AI"
+ created: "2025-12-30T16:53:41Z"
+ schema: "See Features section below"
+ ---
+
+ # CitationGround-1M (Platinum)
+
+ **Developer/Publisher:** Within US AI
+ **Version:** 0.1.0 (sample pack)
+ **Created:** 2025-12-30T16:53:41Z
+
+ ## What this dataset is
+ `CitationGround-1M` is a **citation-locked** grounded QA/RAG dataset:
+ - Answers must be written using only the provided `contexts`
+ - Every answer carries **span-level citations** (`doc_id` + character offsets)
+ - Includes `answerable=false` hard negatives to train abstention behavior
+
+ ## Features / schema (JSONL)
+ - `example_id` (string)
+ - `question` (string)
+ - `contexts` (list of docs)
+ - `answer` (string)
+ - `citations` (list of spans)
+ - `answerable` (bool)
+ - `difficulty` (int; 1–5)
+ - `reason` (string)
+ - `language` (string)
+ - `created_utc` (string)
+ - `license_note` (string)
+
+ ### Context doc format
+ - `doc_id`, `title`, `text`, `source_type`, `provenance`
+
+ ### Citation span format
+ - `doc_id`, `start`, `end` (character offsets into that doc's `text`)
+
+ ## Splits
57
+ - `data/train.jsonl`
58
+ - `data/validation.jsonl`
59
+ - `data/test.jsonl`
60
+
61
+ ## How to load
62
+ ```python
63
+ from datasets import load_dataset
64
+ ds = load_dataset("json", data_files={"train":"data/train.jsonl","validation":"data/validation.jsonl","test":"data/test.jsonl"})
65
+ print(ds["train"][0])
66
+ ```