drelhaj committed (verified) · Commit f874832 · Parent: 211c50d

Update README.md
---
license: cc-by-4.0
task_categories:
- summarization
language:
- ar
size_categories:
- 1K<n<10K
pretty_name: "EASC: The Essex Arabic Summaries Corpus"
dataset_info:
  features:
  - name: article_id
    type: int32
  - name: topic_name
    type: string
  - name: article_text
    type: string
  - name: summary_A
    type: string
  - name: summary_B
    type: string
  - name: summary_C
    type: string
  - name: summary_D
    type: string
  - name: summary_E
    type: string
  splits:
  - name: train
  - name: validation
  - name: test
---
34
+ # EASC: The Essex Arabic Summaries Corpus
35
+
36
+ Mo El-Haj, Udo Kruschwitz, Chris Fox
37
+ University of Essex, UK
38
+
39
+ This repository hosts **EASC** the Essex Arabic Summaries Corpus a collection of **153 Arabic source documents** and **765 human-generated extractive summaries**, created using Amazon Mechanical Turk.
40
+
41
+ EASC is one of the earliest publicly available datasets for **Arabic single-document summarisation** and remains widely used in research on Arabic NLP, extractive summarisation, sentence ranking, and evaluation.
42
+
43
+ ---
44
+
45
+ ## 📘 Background
46
+
47
+ EASC was introduced in:
48
+
49
+ **El-Haj, M., Kruschwitz, U., & Fox, C. (2010).
50
+ *Using Mechanical Turk to Create a Corpus of Arabic Summaries.*
51
+ Workshop on LRs & HLT for Semitic Languages @ LREC 2010.**
52
+
53
+ The corpus was motivated by the lack of gold-standard resources for evaluating **Arabic text summarisation**, particularly extractive systems. Mechanical Turk was used to collect **five independent extractive summaries per article**, offering natural diversity and enabling aggregation into different gold-standard levels.
54
+
55
+ The work was later expanded in:
56
+
57
+ - **El-Haj (2012). *Multi-document Arabic Text Summarisation.* PhD Thesis, University of Essex.**
58
+ - **El-Haj, Kruschwitz & Fox (2011). Exploring clustering for multi-document Arabic summarisation. AIRS 2011.**
59
+
60
+ ---
61
+
62
+ ## 🗂 Corpus Contents
63
+
64
+ EASC contains:
65
+
66
+ | Component | Count | Description |
67
+ |----------|-------|-------------|
68
+ | Articles | 153 | Arabic Wikipedia + AlRai (Jordan) + AlWatan (Saudi Arabia) |
69
+ | Summaries | 765 | Five extractive summaries per article |
70
+ | Topics | 10 | Art, Environment, Politics, Sport, Health, Finance, Science & Technology, Tourism, Religion, Education |
71
+
72
+ Each summary was produced by a different Mechanical Turk worker, who selected up to **50% of the sentences** they considered most important.
73
+
74
+ ---
## 📁 Directory Structure

```
Articles/
  Article001/
  Article002/
  ...
MTurk/
  Article001/
  Article002/
  ...
```

Where:

- `Articles/ArticleXX/*.txt` → full document
- `MTurk/ArticleXX/Dxxxx.M.250.A.#.*` → five extractive summaries (A–E)

---
## 📦 Modern Dataset Format (this repository)

To make EASC easier to use with modern NLP tools, this repository includes a **unified CSV/JSONL version**:

### **CSV Schema**

| Field | Description |
|----------------|-------------------------------------|
| `article_id`   | Unique article identifier (1–153) |
| `topic_name`   | Topic label extracted from filename |
| `article_text` | Full article text |
| `summary_A`    | Human summary A |
| `summary_B`    | Human summary B |
| `summary_C`    | Human summary C |
| `summary_D`    | Human summary D |
| `summary_E`    | Human summary E |
### **JSON Schema**

One JSON object per article (one object per line in `EASC.jsonl`):

```
{
  "article_id": 1,
  "topic_name": "Art and Music",
  "article_text": "...",
  "summaries": ["...", "...", "...", "...", "..."]
}
```

---
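A record in this schema round-trips cleanly through the standard library; a minimal sketch (toy values standing in for real article text):

```python
import json

# A record following the JSONL schema above (toy values).
record = {
    "article_id": 1,
    "topic_name": "Art and Music",
    "article_text": "...",
    "summaries": ["...", "...", "...", "...", "..."],
}

# Serialise the way EASC.jsonl stores it: one object per line,
# with ensure_ascii=False so Arabic text stays unescaped.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)

assert parsed["article_id"] == 1
assert len(parsed["summaries"]) == 5
```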
## 🛠️ Regenerating the CSV / JSONL

The following Python script reconstructs the unified dataset from the raw `Articles/` and `MTurk/` folders (files are read as UTF-8, replacing any undecodable bytes):

```
import os
import re
import json
import pandas as pd

ARTICLES_DIR = "Articles"
MTURK_DIR = "MTurk"

records_csv = []
records_jsonl = []

for folder in sorted(os.listdir(ARTICLES_DIR)):
    folder_path = os.path.join(ARTICLES_DIR, folder)
    if not os.path.isdir(folder_path):
        continue

    m = re.match(r"Article(\d+)", folder)
    if not m:
        continue

    article_id = int(m.group(1))

    # Each article folder holds one .txt file; sort and filter for determinism.
    article_files = sorted(f for f in os.listdir(folder_path) if f.endswith(".txt"))
    if not article_files:
        continue
    article_file = article_files[0]
    article_file_path = os.path.join(folder_path, article_file)

    # The topic name is encoded in the filename, e.g. "Art and Music (1).txt".
    base = os.path.splitext(article_file)[0]
    match = re.match(r"(.+?)\s*\(\d+\)", base)
    topic_name = match.group(1).strip() if match else "Unknown"

    with open(article_file_path, "r", encoding="utf-8", errors="replace") as f:
        article_text = f.read().strip()

    # Collect this article's worker summaries.
    summaries_dir = os.path.join(MTURK_DIR, folder)
    summary_files = sorted(os.listdir(summaries_dir)) if os.path.isdir(summaries_dir) else []
    summaries = []

    for sfile in summary_files:
        s_path = os.path.join(summaries_dir, sfile)
        with open(s_path, "r", encoding="utf-8", errors="replace") as f:
            summaries.append(f.read().strip())

    # Pad or trim to exactly five summaries (A–E).
    while len(summaries) < 5:
        summaries.append("")
    summaries = summaries[:5]

    records_csv.append({
        "article_id": article_id,
        "topic_name": topic_name,
        "article_text": article_text,
        "summary_A": summaries[0],
        "summary_B": summaries[1],
        "summary_C": summaries[2],
        "summary_D": summaries[3],
        "summary_E": summaries[4]
    })

    records_jsonl.append({
        "article_id": article_id,
        "topic_name": topic_name,
        "article_text": article_text,
        "summaries": summaries
    })

df = pd.DataFrame(records_csv)
df.to_csv("EASC.csv", index=False, encoding="utf-8")

with open("EASC.jsonl", "w", encoding="utf-8") as f:
    for row in records_jsonl:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

print("Done! Created EASC.csv and EASC.jsonl")
```

---
## 📥 Train / Validation / Test Splits

An 80/10/10 split with a fixed seed for reproducibility:

```
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("EASC.csv")

# 80% train; the remaining 20% is halved into validation and test.
train_df, temp_df = train_test_split(df, test_size=0.2, random_state=42)
val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)

train_df.to_csv("EASC_train.csv", index=False)
val_df.to_csv("EASC_val.csv", index=False)
test_df.to_csv("EASC_test.csv", index=False)
```
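As a sanity check on the resulting proportions, the same splits can be run on a stand-in frame of 153 rows (matching the corpus size), without needing `EASC.csv` on disk:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for EASC.csv: 153 rows, one per article.
df = pd.DataFrame({"article_id": range(1, 154)})

train_df, temp_df = train_test_split(df, test_size=0.2, random_state=42)
val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)

# scikit-learn rounds the test share up, so 153 articles
# split into 122 train / 15 validation / 16 test.
print(len(train_df), len(val_df), len(test_df))
```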

---
## 🎯 Intended Use

EASC supports research in:

- Extractive summarisation
- Sentence ranking and scoring
- Gold-summary aggregation (Level 2, Level 3)
- ROUGE and Dice evaluation
- Learning sentence importance
- Human–machine evaluation comparisons
- Crowdsourcing quality analysis

EASC is one of the few Arabic summarisation datasets with:

- consistent multiple references per document
- real extractive human judgements
- cross-worker variability suitable for probabilistic modelling

---
## 📊 Recommended Gold Standards

Based on the original paper:

- **Level 3**: sentences selected by ≥3 workers
- **Level 2**: sentences selected by ≥2 workers
- **All**: all sentences selected by any worker
  (not recommended as a gold standard; used for analysis only)

These levels can be regenerated programmatically from the unified CSV.

---
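A minimal sketch of that regeneration, assuming the summaries are verbatim sentence extracts of the article; the helper `gold_sentences` and the naive splitter on Arabic/Latin sentence terminators are illustrative, not the paper's implementation:

```python
import re

def gold_sentences(article_text, summaries, min_workers=2):
    """Return article sentences selected by at least `min_workers` summaries."""
    # Naive splitter on Arabic/Latin full stops and question marks (assumption).
    split = lambda text: [s.strip() for s in re.split(r"[.!؟?\n]+", text) if s.strip()]
    votes = {}
    for summary in summaries:
        picked = set(split(summary))
        for sent in split(article_text):
            if sent in picked:
                votes[sent] = votes.get(sent, 0) + 1
    # Keep article order; threshold on worker agreement.
    return [s for s in split(article_text) if votes.get(s, 0) >= min_workers]

# Toy example: three sentences, three worker summaries.
article = "جملة أولى. جملة ثانية. جملة ثالثة."
summaries = ["جملة أولى.", "جملة أولى. جملة ثانية.", "جملة ثانية."]
level2 = gold_sentences(article, summaries, min_workers=2)
# Only the first two sentences reach two votes each.
```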
## 🧪 Evaluations (from the 2010 paper)

Systems evaluated against EASC include:

- Sakhr Arabic Summariser
- AQBTSS
- Gen-Summ
- LSA-Summ
- Baseline-1 (first sentence)

Metrics used:

- **Dice coefficient** (recommended for extractive summarisation)
- **ROUGE-2 / ROUGE-L / ROUGE-W / ROUGE-S**
- **AutoSummENG**

All details are documented in the LREC 2010 paper.
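For extractive output, the Dice coefficient reduces to set overlap; a sketch treating summaries as sets of sentence identifiers (not the paper's exact implementation):

```python
def dice(set_a, set_b):
    """Dice coefficient between two sentence sets: 2|A∩B| / (|A| + |B|)."""
    if not set_a and not set_b:
        return 1.0  # two empty summaries are trivially identical
    return 2 * len(set_a & set_b) / (len(set_a) + len(set_b))

# Hypothetical sentence IDs from a system summary and a gold summary.
system = {"s1", "s2", "s3"}
gold = {"s2", "s3", "s4", "s5"}
score = dice(system, gold)  # 2*2 / (3+4) = 4/7 ≈ 0.571
```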

---
## 📑 Citation

If you use EASC, please cite:

El-Haj, M., Kruschwitz, U., & Fox, C. (2010). *Using Mechanical Turk to Create a Corpus of Arabic Summaries.* In LRs & HLT for Semitic Languages Workshop, LREC 2010.

Additional references:

- El-Haj, M. (2012). *Multi-document Arabic Text Summarisation.* PhD Thesis, University of Essex.
- El-Haj, M., Kruschwitz, U., & Fox, C. (2011). *Exploring Clustering for Multi-Document Arabic Summarisation.* AIRS 2011.
## 📜 Licence

The original EASC release permits research use. This cleaned and reformatted version follows the same academic-research usage terms.

## ✔ Notes

- Some Mechanical Turk summaries may include noisy selections or inconsistent behaviour; these are preserved to avoid subjective filtering.
- File encodings reflect the original dataset; all modern versions are normalised to UTF-8.
- The unified CSV/JSONL is provided for convenience and reproducibility.

## 🧭 Maintainer

Dr Mo El-Haj
Associate Professor in Natural Language Processing
VinUniversity, Vietnam / Lancaster University, UK