Nathan97 nkazi committed on
Commit cee2413 · verified · 0 Parent(s):

Duplicate from nkazi/MohlerASAG


Co-authored-by: Nazmul Kazi <nkazi@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
README-Mohler.pdf ADDED
Binary file (89.3 kB).
 
README.md ADDED
@@ -0,0 +1,299 @@
---
pretty_name: Mohler ASAG
license: cc-by-4.0
language:
  - en
task_categories:
  - text-classification
  - sentence-similarity
  - question-answering
size_categories:
  - 1K<n<10K
tags:
  - ASAG
  - NLP
  - Automatic Short Answer Grading
  - Student Responses
  - Computer Science
  - Data Structure
  - Educational Data
  - Semantic Similarity
  - Question-Answering
  - Text Classification
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: instructor_answer
      dtype: string
    - name: student_answer
      dtype: string
    - name: score_grader_1
      dtype: float32
    - name: score_grader_2
      dtype: float32
    - name: score_avg
      dtype: float32
  splits:
    - name: open_ended
      num_bytes: 153600
      num_examples: 2273
    - name: close_ended
      num_bytes: 11776
      num_examples: 169
  dataset_size: 953344
configs:
  - config_name: raw
    default: true
    data_files:
      - split: open_ended
        path: data/raw-oe-*
      - split: close_ended
        path: data/raw-ce-*
  - config_name: cleaned
    data_files:
      - split: open_ended
        path: data/cleaned-oe-*
      - split: close_ended
        path: data/cleaned-ce-*
  - config_name: parsed
    data_files:
      - split: open_ended
        path: data/parsed-oe-*
      - split: close_ended
        path: data/parsed-ce-*
  - config_name: annotations
    data_files:
      - split: annotations
        path: data/annotations-*
---

<style>
.callout {
  background-color: #cff4fc;
  border-left: 0.25rem solid #9eeaf9;
  padding: 1rem;
}

.readme-table-container table {
  font-family: monospace;
  margin: 0;
}
</style>

# Dataset Card for "Mohler ASAG"

The **Mohler ASAG** dataset is recognized as one of the first publicly
available and widely used benchmark datasets for Automatic Short
Answer Grading (ASAG). It was first introduced by Michael Mohler and
Rada Mihalcea in 2009. An extended version of the dataset with
additional questions and corresponding student answers was released in
2011. This repository presents the 2011 dataset along with a code
snippet to extract the 2009 subset.

The dataset was collected from an introductory data structures course
at the University of North Texas. It covers 87 assessment questions in
total, including 81 open-ended and 6 closed-ended selection or
ordering questions. These questions are distributed across 10
assignments and 2 examinations. Altogether, the dataset contains 2,442
student responses, with 2,273 corresponding to open-ended questions
and 169 to closed-ended questions.

- **Authors:** Michael Mohler, Razvan Bunescu, and Rada Mihalcea.
- **Paper:** [Learning to Grade Short Answer Questions using Semantic
  Similarity Measures and Dependency Graph Alignments](https://aclanthology.org/P11-1076/)

<div class="callout">
A curated version of the dataset is available on Hugging Face at
<a href="https://huggingface.co/datasets/nkazi/MohlerASAG-Curated">
<code>nkazi/MohlerASAG-Curated</code>
</a>,
created to improve its quality and usability for NLP research,
particularly for LLM-based approaches.
</div>

## Known Errata

1. The 2009 paper reports 30 student answers per question for each
   assignment. In reality, assignment 1 contains 29 answers per
   question, assignment 2 contains 30 answers per question, and
   assignment 3 contains 31 answers per question.
2. The 2011 paper states that the dataset contains student answers for
   80 questions. According to the README file included with the data,
   it actually includes answers for 81 open-ended questions.

## Dataset Conversion Notebook

The Python notebook I developed to convert the Mohler ASAG dataset
from its source files into a Hugging Face Dataset is available on my
GitHub profile. It demonstrates the process of parsing questions,
instructor answers, student answers, scores, and annotations from
their respective source files for each stage, correcting mojibake in
the raw data, structuring and organizing the information, dividing and
transforming the data into subsets and splits, and exporting the final
dataset in Parquet format for the Hugging Face repository. This
makes the conversion process transparent, reproducible, and
traceable.

<strong>GitHub Link:</strong>
<a href="https://github.com/nazmulkazi/ML-DL-NLP/blob/main/HF%20Dataset%20-%20Mohler%20ASAG.ipynb">
https://github.com/nazmulkazi/ML-DL-NLP/blob/main/HF%20Dataset%20-%20Mohler%20ASAG.ipynb
</a>

## Dataset Structure and Details

The dataset underwent several processing stages, each represented as a
separate subset. The raw subset contains the original, unaltered
student answers exactly as written. In the cleaned subset, the authors
preprocessed the data by cleaning the text and tokenizing it into
sentences using the LingPipe toolkit, with sentence boundaries marked
by `<STOP>` tags. The parsed subset includes outputs from the Stanford
Dependency Parser with additional postprocessing performed by the
authors. The annotations subset contains manually annotated data;
however, only 32 randomly selected student answers were annotated.

The authors ignored responses to the closed-ended questions in all of
their work. Therefore, the raw, cleaned, and parsed subsets are
divided into open-ended and closed-ended splits.

Each sample in the raw, cleaned, and parsed subsets includes a unique
identifier, the question, the instructor's answer, the student's
answer, scores from two graders, and the average score. Samples in the
annotations subset contain a unique identifier and the corresponding
annotations. The unique identifiers are consistent across all subsets
and follow the format `EXX.QXX.AXX`, where each component corresponds
to its exercise (i.e., assignment), question, and answer, respectively,
and `XX` are zero-padded numbers. For consistency, reproducibility,
and traceability, the identifiers are constructed following the same
indexing scheme used by the authors, with 1-based numbering for
exercises and questions and 0-based numbering for student answers.

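As an illustration of this identifier scheme, the following sketch (a hypothetical helper, not part of the dataset or its tooling) splits an identifier into its numeric components:

```python
# Hypothetical helper, not part of the dataset tooling: parses an
# identifier such as 'E11.Q08.A05' following the scheme above.

def parse_id(sample_id: str) -> dict:
    """Split an `EXX.QXX.AXX` identifier into its numeric components.

    Exercises and questions are 1-based; answers are 0-based, so
    'A00' denotes a question's first student answer.
    """
    exercise, question, answer = sample_id.split('.')
    return {
        'exercise': int(exercise[1:]),  # 1-based
        'question': int(question[1:]),  # 1-based
        'answer': int(answer[1:]),      # 0-based
    }

print(parse_id('E11.Q08.A05'))
# {'exercise': 11, 'question': 8, 'answer': 5}
```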
Exercises E01 through E10 were graded on a 0-5 scale, while E11 and
E12 were graded on a 0-10 scale. The authors converted the scores for
E11 and E12 to a 0-5 scale before computing the averages, so all
values in the `score_avg` column are in the 0-5 range. Grader 1
was the course teaching assistant, and Grader 2 was Michael Mohler.

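A minimal sketch of that normalization, assuming the 0-10 to 0-5 conversion was a simple linear rescaling (halving), which the source README does not state explicitly:

```python
# Illustrative sketch only: it assumes the 0-10 to 0-5 conversion for
# E11 and E12 was a simple halving, which the README does not state
# explicitly.

def to_common_scale(score: float, exercise: int) -> float:
    """Map a grader score onto the common 0-5 scale."""
    return score / 2 if exercise in (11, 12) else score

def average_score(grader_1: float, grader_2: float, exercise: int) -> float:
    """Average two grader scores after normalizing both to 0-5."""
    return (to_common_scale(grader_1, exercise)
            + to_common_scale(grader_2, exercise)) / 2

print(average_score(8.0, 9.0, exercise=11))  # 4.25 (0-10 scale, halved)
print(average_score(4.0, 5.0, exercise=3))   # 4.5  (already 0-5)
```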
For further details, please refer to the [README](./README-Mohler.pdf)
(a formatted and styled version of the README provided by the authors)
and the associated publications.

## Student Answer Distribution

Distribution of student answers in the raw, cleaned, and parsed subsets:

<div class="readme-table-container">

|         | Q01 | Q02 | Q03 | Q04 | Q05 | Q06 | Q07 | Q08 | Q09 | Q10 | Total |
|:--------|----:|----:|----:|----:|----:|----:|----:|----:|----:|----:|------:|
| **E01** | 29  | 29  | 29  | 29  | 29  | 29  | 29  | -   | -   | -   | 203   |
| **E02** | 30  | 30  | 30  | 30  | 30  | 30  | 30  | -   | -   | -   | 210   |
| **E03** | 31  | 31  | 31  | 31  | 31  | 31  | 31  | -   | -   | -   | 217   |
| **E04** | 30  | 30  | 30  | 30  | 30  | 30  | 30  | -   | -   | -   | 210   |
| **E05** | 28  | 28  | 28  | 28  | -   | -   | -   | -   | -   | -   | 112   |
| **E06** | 26  | 26  | 26  | 26  | 26  | 26  | 26  | -   | -   | -   | 182   |
| **E07** | 26  | 26  | 26  | 26  | 26  | 26  | 26  | -   | -   | -   | 182   |
| **E08** | 27  | 27  | 27  | 27  | 27  | 27  | 27  | -   | -   | -   | 189   |
| **E09** | 27  | 27  | 27  | 27  | 27  | 27  | 27  | -   | -   | -   | 189   |
| **E10** | 24  | 24  | 24  | 24  | 24  | 24  | 24  | -   | -   | -   | 168   |
| **E11** | 30  | 30  | 30  | 30  | 30  | 30  | 30  | 30  | 30  | 30  | 300   |
| **E12** | 28  | 28  | 28  | 28  | 28  | 28  | 28  | 28  | 28  | 28  | 280   |

</div>

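Totals like those in the table above can be recomputed from the identifiers alone. A small offline sketch, using a toy list of IDs in place of the dataset's actual `id` column:

```python
from collections import Counter

# Offline illustration: tally student answers per exercise from
# identifier strings. With the real dataset, `ids` would come from
# the `id` column of a loaded split; here we use a toy list.
ids = ['E01.Q01.A00', 'E01.Q01.A01', 'E01.Q02.A00', 'E02.Q01.A00']

answers_per_exercise = Counter(sample_id.split('.')[0] for sample_id in ids)
print(answers_per_exercise)  # Counter({'E01': 3, 'E02': 1})
```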
Distribution of student answers in the annotations subset/split:

<div class="readme-table-container">

|         | Q01 | Q02 | Q03 | Q04 | Q05 | Q06 | Q07 | Total |
|:--------|----:|----:|----:|----:|----:|----:|----:|------:|
| **E01** | 3   | 3   | 3   | 3   | 2   | 1   | 1   | 16    |
| **E02** | 1   | 1   | 1   | 2   | 1   | 1   | 1   | 8     |
| **E03** | 1   | 1   | 1   | 1   | 1   | 1   | 2   | 8     |

</div>

## Code Snippets

### Extracting the 2009 Dataset

Exercises 1-3 are inherited from the 2009 dataset. The following code
extracts the raw samples of the 2009 dataset from the raw subset:

```python
from datasets import load_dataset

ds = load_dataset('nkazi/MohlerASAG', name='raw', split='open_ended')
ds_2009 = ds.filter(lambda row: row['id'].split('.')[0] in ['E01', 'E02', 'E03'])
```

### Concatenating Splits

The following code creates a new dataset containing the rows from both
the open-ended and closed-ended splits of the raw subset:

```python
from datasets import load_dataset, concatenate_datasets

ds = load_dataset('nkazi/MohlerASAG', name='raw')
ds_all = concatenate_datasets([ds['open_ended'], ds['close_ended']]).sort('id')
```

### Joining Open-Ended Raw Data with Annotations

The following code joins the annotations with their corresponding
samples from the raw subset:

```python
from datasets import load_dataset

# Load the annotations split and create a mapping
# from IDs to their annotations.
ds_ann = load_dataset('nkazi/MohlerASAG', name='annotations', split='annotations')
ann_map = {row['id']: row['annotations'] for row in ds_ann}

# Load the raw open-ended subset and keep only rows
# with IDs present in the annotations set.
ds_raw = load_dataset('nkazi/MohlerASAG', name='raw', split='open_ended') \
    .filter(lambda row: row['id'] in ann_map)

# Collect annotations in the same order as the IDs in
# the filtered raw dataset.
ann_list = [ann_map.get(row_id, None) for row_id in ds_raw['id']]

# Add an annotations column to the filtered raw dataset,
# using the annotations list and the feature specification
# from the annotations subset.
ds_joined = ds_raw.add_column(
    name = 'annotations',
    column = ann_list,
    feature = ds_ann.features['annotations']
)
```

## Citation

In addition to citing **Mohler et al. (2011)**, we kindly request that
you include a footnote referencing the Hugging Face page of this dataset
([https://huggingface.co/datasets/nkazi/MohlerASAG](https://huggingface.co/datasets/nkazi/MohlerASAG))
to inform the community of this readily usable version.

```tex
@inproceedings{mohler2011learning,
  title = {Learning to Grade Short Answer Questions using Semantic
    Similarity Measures and Dependency Graph Alignments},
  author = {Mohler, Michael and Bunescu, Razvan and Mihalcea, Rada},
  year = 2011,
  month = jun,
  booktitle = {Proceedings of the 49th Annual Meeting of the Association
    for Computational Linguistics: Human Language Technologies},
  pages = {752--762},
  editor = {Lin, Dekang and Matsumoto, Yuji and Mihalcea, Rada},
  publisher = {Association for Computational Linguistics},
  address = {Portland, Oregon, USA},
  url = {https://aclanthology.org/P11-1076},
}
```
data/annotations-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e0783646d9bcb83be2262bc1bac509afb228af20fb61fe44cca61dd121fb38f
size 15038
data/cleaned-ce-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0907f7046238a92ec35f60ee1a96fcc887b987ff89c709ea3dc052d84311ad99
size 12228
data/cleaned-oe-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2aff37d7a3c6e6a7267afe0ab29ca4ebfbf90285b8eea77b08da9e1217376b0
size 153016
data/parsed-ce-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d2e34003e4724a48e22549e6aa7d79a35fa0a86d62fdea7b19884db7290efad
size 35359
data/parsed-oe-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ef977923532812a4a8b586188bc37f112b733fede3690efcc0b27ce4677b012
size 554591
data/raw-ce-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7e7938f9c8b23d295630479962735601b2d805cc71dd9280af62cbfda3683ade
size 11599
data/raw-oe-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b120a31aac6eb69e5922b954f8dfc27846177394b8d8e6a7590b5741b19cc7c
size 150308