onionmonster committed on
Commit
ea8d08d
·
0 Parent(s):

Initial commit: DREAM dataset v1.0.0

.gitattributes ADDED
@@ -0,0 +1,5 @@
+ # all files in this repo that match go through git-lfs
+ *.json filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
+ data/** filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,9 @@
+ # ---- ignore only junk inside this dataset repo ----
+ __pycache__/
+ .ipynb_checkpoints/
+ .DS_Store
+ .vscode/
+ *.tmp
+ *.bak
+ data/cache/
+ data/tmp/
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ # ====== YAML metadata for the Hub ======
+ pretty_name: DREAM-CFB
+ license: mit
+ language:
+ - en
+ tags:
+ - multiple-choice
+ - reading-comprehension
+ - dialogue
+ - conversational-ai
+ - question-answering
+ - openai-format
+ task_categories:
+ - question-answering
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - dream
+ annotations_creators:
+ - expert-generated
+ ---
+
+ # DREAM-CFB · _Dialogue-based Reading Comprehension Examination through Machine Reading (Conversation Fact Benchmark Format)_
+
+ **DREAM-CFB** is a 6,444-example dataset derived from the original **DREAM** dataset, transformed and adapted for the Conversation Fact Benchmark framework. Each item consists of a multi-turn dialogue with associated multiple-choice questions that test reading comprehension and conversational understanding.
+
+ The dataset focuses on **dialogue-based reading comprehension**: questions require understanding conversational context, speaker intentions, and implicit information that emerges through multi-turn interactions.
+
+ The structured format of dialogue turns and attached questions makes the dataset suitable for evaluating conversational AI systems and reading comprehension models.
+
+ ---
+
+ ## Dataset at a glance
+
+ | Field                  | Type / shape          | Description                                          |
+ | ---------------------- | --------------------- | ---------------------------------------------------- |
+ | `id`                   | `str`                 | Unique identifier for the dialogue instance          |
+ | `dialogue_turns`       | `list[dict]`          | Multi-turn conversation with speaker and text fields |
+ | `questions`            | `list[dict]`          | List of questions associated with the dialogue       |
+ | `question_text`        | `str`                 | The comprehension question about the dialogue        |
+ | `answer_text`          | `str`                 | Ground-truth answer string                           |
+ | `choices`              | `list[str]` (len = 3) | Three multiple-choice answer options                 |
+ | `correct_choice_index` | `int` (0-2)           | Index of the correct answer (0-based)                |
+
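The field layout above can be checked mechanically. A minimal sketch, assuming records are plain dicts shaped like the table; the `validate_record` helper, and in particular the equality check between `answer_text` and the indexed choice, are illustrative assumptions rather than documented invariants of the dataset:

```python
def validate_record(rec: dict) -> bool:
    """Check one DREAM-CFB record against the schema table above.

    The answer_text == choices[correct_choice_index] check is an
    assumption based on the example record shown in this card; it is
    not documented as an invariant of the dataset.
    """
    if not isinstance(rec.get("id"), str):
        return False
    for turn in rec.get("dialogue_turns", []):
        if not {"speaker", "text"} <= set(turn):
            return False
    for q in rec.get("questions", []):
        if len(q["choices"]) != 3:
            return False
        if not 0 <= q["correct_choice_index"] <= 2:
            return False
        if q["answer_text"] != q["choices"][q["correct_choice_index"]]:
            return False
    return True
```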
+ ---
+
+ ## Intended uses
+
+ | Use case                     | How to use it                                                   |
+ | ---------------------------- | --------------------------------------------------------------- |
+ | Reading comprehension eval   | Test a model's ability to understand dialogue context and meaning |
+ | Conversational understanding | Evaluate comprehension of multi-turn speaker interactions       |
+ | Multiple-choice QA           | Assess reasoning capabilities in structured question formats    |
+ | Dialogue systems             | Benchmark conversational AI understanding of context and intent |
+
+ ---
+
+ ## Example
+
+ ```json
+ {
+   "id": "5-510",
+   "dialogue_turns": [
+     {
+       "speaker": "M",
+       "text": "I am considering dropping my dancing class. I am not making any progress."
+     },
+     {
+       "speaker": "W",
+       "text": "If I were you, I stick with it. It's definitely worth time and effort."
+     }
+   ],
+   "questions": [
+     {
+       "question_text": "What does the man suggest the woman do?",
+       "answer_text": "Continue her dancing class.",
+       "choices": [
+         "Consult her dancing teacher.",
+         "Take a more interesting class.",
+         "Continue her dancing class."
+       ],
+       "correct_choice_index": 2
+     }
+   ]
+ }
+ ```
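Because `correct_choice_index` is explicit, multiple-choice scoring reduces to an index comparison. A sketch of an accuracy computation; the `score_predictions` helper and the shape of its prediction map are assumptions for illustration, not part of this repo:

```python
def score_predictions(records, predictions):
    """Fraction of questions answered correctly.

    `predictions` maps (record_id, question_position) to a predicted
    choice index -- a hypothetical interface, chosen because one
    dialogue can carry several questions.
    """
    correct = total = 0
    for rec in records:
        for pos, q in enumerate(rec["questions"]):
            total += 1
            if predictions.get((rec["id"], pos)) == q["correct_choice_index"]:
                correct += 1
    return correct / total if total else 0.0
```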
+
+ ## Dataset Statistics
+
+ - **Total examples**: 6,444 dialogue-question pairs
+ - **Choices per question**: 3 (standard multiple-choice format)
+ - **Source**: Original DREAM dataset
+ - **Language**: English
+ - **Domain**: General conversational scenarios
+
+ ## Data Splits
+
+ The dataset includes the following splits from the original DREAM dataset:
+
+ - Train: ~4,000 examples
+ - Dev: ~1,300 examples
+ - Test: ~1,300 examples
+
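The split files under `raw/` can be read with the standard `json` module once the git-lfs payloads have been fetched (`git lfs pull`). A small loader sketch; the top-level JSON structure of the raw files is assumed to be a list, as in the public DREAM release:

```python
import json
from pathlib import Path

def load_split(name: str, root: str = "raw"):
    """Read raw/<name>.json (name is "train", "dev", or "test").

    Note: before `git lfs pull`, these files are small LFS pointer
    texts, and json.load will fail on them.
    """
    with open(Path(root) / f"{name}.json", encoding="utf-8") as f:
        return json.load(f)
```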
+ ## Changelog
+
+ v1.0.0 · Initial release: transformed the original DREAM dataset into the Conversation Fact Benchmark format, with structured dialogue turns and multiple-choice questions.
+
+ ## Dataset Creation
+
+ This dataset was created by transforming the original DREAM dataset into a format suitable for the [Conversation Fact Benchmark](https://github.com/savourylie/Conversation-Fact-Benchmark) framework. The transformation process:
+
+ 1. Converted raw dialogue text into structured speaker turns
+ 2. Preserved the original multiple-choice questions and answers
+ 3. Added explicit choice indexing for evaluation
+ 4. Maintained dialogue context and question associations
+
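The steps above can be sketched as a single transformation function. It assumes the raw DREAM layout of `[dialogue_lines, questions, item_id]` triples with `"Speaker: text"` strings and question dicts carrying `question`, `choice`, and `answer` keys; those field names come from the public DREAM release and are not verified against this repo's conversion script:

```python
def transform_item(raw_item):
    """Sketch of steps 1-3 above for one raw DREAM item.

    Assumes raw_item == [dialogue_lines, questions, item_id], where
    dialogue lines look like "M: some utterance" and each question
    dict has "question", "choice" and "answer" keys -- field names
    taken from the public DREAM release, not from this repo.
    """
    dialogue_lines, questions, item_id = raw_item
    turns = []
    for line in dialogue_lines:
        speaker, _, text = line.partition(": ")  # step 1: split off speaker tag
        turns.append({"speaker": speaker, "text": text})
    out_questions = [
        {
            "question_text": q["question"],  # step 2: keep Q/A verbatim
            "answer_text": q["answer"],
            "choices": q["choice"],
            # step 3: make the answer's position explicit
            "correct_choice_index": q["choice"].index(q["answer"]),
        }
        for q in questions
    ]
    return {"id": item_id, "dialogue_turns": turns, "questions": out_questions}
```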
+ ## Citation
+
+ If you use this dataset, please cite both the original DREAM paper and the Conversation Fact Benchmark:
+
+ ```bibtex
+ @article{sun2019dream,
+   title={DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension},
+   author={Sun, Kai and Yu, Dian and Chen, Jianshu and Yu, Dong and Choi, Yejin and Cardie, Claire},
+   journal={Transactions of the Association for Computational Linguistics},
+   year={2019}
+ }
+ ```
+
+ ## Contributing
+
+ We welcome contributions for:
+
+ - Additional data formats (CSV, Parquet)
+ - Evaluation scripts and baselines
+ - Error analysis and dataset improvements
+
+ Please maintain the MIT license and cite appropriately.
+
+ ## License
+
+ This dataset is released under the MIT License, following the original DREAM dataset's licensing terms.
+
+ Enjoy benchmarking your conversational reading comprehension models!
+ # Last updated: Mon Jun 30 16:27:51 HKT 2025
processed/full_transformed.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67d4250eb60f68e132641c4503416b38caba4b6568b8905a68b49fe1f110b292
+ size 5534662
raw/dev.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d5af2e580d809c73872a7dd43fe93d0b07c6f6086b04a9a9a1917603009d961
+ size 1097827
raw/full.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f59fdbf6d768c4e984edbc8a6f7b4c399be78a78c88a4a23c490a77f4fc50b78
+ size 5754196
raw/test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d96d7d0752f7eab1ea8f165a582430f35653c428d36633ab4d7f26fc14946c3a
+ size 1104882
raw/train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90942ddea1b56231a0ad2097dc5f115ce1face1cfb86f029041e5bad38e68566
+ size 3355481