---
license: cc-by-4.0
task_categories:
  - summarization
language:
  - ar
size_categories:
  - 1K<n<10K
pretty_name: "EASC: The Essex Arabic Summaries Corpus"

dataset_info:
  features:
    - name: article_id
      dtype: int32
    - name: topic_name
      dtype: string
    - name: article_text
      dtype: string
    - name: summary_A
      dtype: string
    - name: summary_B
      dtype: string
    - name: summary_C
      dtype: string
    - name: summary_D
      dtype: string
    - name: summary_E
      dtype: string

  splits:
    - name: train
    - name: validation
    - name: test
---


# EASC: The Essex Arabic Summaries Corpus

Mo El-Haj, Udo Kruschwitz, Chris Fox  
University of Essex, UK  

This repository hosts **EASC** — the Essex Arabic Summaries Corpus — a collection of **153 Arabic source documents** and **765 human-generated extractive summaries**, created using Amazon Mechanical Turk.

EASC is one of the earliest publicly available datasets for **Arabic single-document summarisation** and remains widely used in research on Arabic NLP, extractive summarisation, sentence ranking, and evaluation.

---

## 📘 Background

EASC was introduced in:

**El-Haj, M., Kruschwitz, U., & Fox, C. (2010).  
*Using Mechanical Turk to Create a Corpus of Arabic Summaries.*  
Workshop on LRs & HLT for Semitic Languages @ LREC 2010.**

The corpus was motivated by the lack of gold-standard resources for evaluating **Arabic text summarisation**, particularly extractive systems. Mechanical Turk was used to collect **five independent extractive summaries per article**, offering natural diversity and enabling aggregation into different gold-standard levels.

The work was later expanded in:

- **El-Haj (2012). *Multi-document Arabic Text Summarisation.* PhD Thesis, University of Essex.**  
- **El-Haj, Kruschwitz & Fox (2011). Exploring clustering for multi-document Arabic summarisation. AIRS 2011.**

---

## 🗂 Corpus Contents

EASC contains:

| Component | Count | Description |
|----------|-------|-------------|
| Articles | 153 | Arabic Wikipedia + AlRai (Jordan) + AlWatan (Saudi Arabia) |
| Summaries | 765 | Five extractive summaries per article |
| Topics | 10 | Art, Environment, Politics, Sport, Health, Finance, Science & Technology, Tourism, Religion, Education |

Each summary was produced by a different Mechanical Turk worker, who selected up to **50% of the sentences** they considered most important.
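As a rough illustration of that cap, consider the sketch below. The sentence splitter is a deliberate simplification (splitting on Latin and Arabic terminators) and an assumption, not the tooling used in the original task:

```python
import re

def max_selectable(text: str) -> int:
    """Return the largest number of sentences a worker may select (50% cap)."""
    # Split on Latin and Arabic sentence terminators -- a simplification.
    sentences = [s for s in re.split(r"[.!?\u061F]+", text) if s.strip()]
    return max(1, len(sentences) // 2)
```

For a four-sentence article this yields a cap of two sentences; single-sentence articles still allow one selection.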

---

## 📁 Directory Structure

```
Articles/
  Article001/
  Article002/
  ...
MTurk/
  Article001/
  Article002/
  ...
```

Where:

- `Articles/ArticleXX/*.txt` → full document
- `MTurk/ArticleXX/Dxxxx.M.250.A.#.*` → five extractive summaries (A–E)

---



## 📦 Modern Dataset Format (this repository)



To make EASC easier to use with modern NLP tools, this repository includes a **unified CSV/JSONL version**:



### **CSV Schema**


| Field         | Description                         |
|---------------|-------------------------------------|
| `article_id`  | Unique article identifier (1–153)   |
| `topic_name`  | Topic label extracted from filename |
| `article_text`| Full article text                   |
| `summary_A`   | Human summary A                     |
| `summary_B`   | Human summary B                     |
| `summary_C`   | Human summary C                     |
| `summary_D`   | Human summary D                     |
| `summary_E`   | Human summary E                     |

### **JSONL Schema**

One JSON object per article:

```json
{
  "article_id": 1,
  "topic_name": "Art and Music",
  "article_text": "...",
  "summaries": ["...", "...", "...", "...", "..."]
}
```
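The two layouts carry the same information. As a sketch of the mapping, one CSV-style row (field names as in the schema above) collapses its five `summary_*` columns into the JSONL `summaries` list:

```python
import json

def row_to_jsonl(row: dict) -> str:
    """Convert one CSV-style record into one JSONL line."""
    record = {
        "article_id": int(row["article_id"]),
        "topic_name": row["topic_name"],
        "article_text": row["article_text"],
        # summary_A .. summary_E, in order, become the summaries list.
        "summaries": [row[f"summary_{k}"] for k in "ABCDE"],
    }
    return json.dumps(record, ensure_ascii=False)

row = {"article_id": "1", "topic_name": "Art and Music", "article_text": "...",
       "summary_A": "a", "summary_B": "b", "summary_C": "c",
       "summary_D": "d", "summary_E": "e"}
line = row_to_jsonl(row)
```

`ensure_ascii=False` keeps Arabic text readable in the output rather than escaping it.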

---

## 🛠️ Regenerating the CSV / JSONL

The following Python script reconstructs the unified dataset from the raw `Articles/` and `MTurk/` folders (files are decoded as UTF-8, with undecodable bytes replaced):

```python
import os
import re
import json
import pandas as pd

ARTICLES_DIR = "Articles"
MTURK_DIR = "MTurk"

records_csv = []
records_jsonl = []

for folder in sorted(os.listdir(ARTICLES_DIR)):
    folder_path = os.path.join(ARTICLES_DIR, folder)
    if not os.path.isdir(folder_path):
        continue

    m = re.match(r"Article(\d+)", folder)
    if not m:
        continue

    article_id = int(m.group(1))
    article_files = sorted(f for f in os.listdir(folder_path) if f.endswith(".txt"))
    if not article_files:
        continue

    article_file = article_files[0]
    article_file_path = os.path.join(folder_path, article_file)

    base = os.path.splitext(article_file)[0]
    match = re.match(r"(.+?)\s*\(\d+\)", base)
    topic_name = match.group(1).strip() if match else "Unknown"

    with open(article_file_path, "r", encoding="utf-8", errors="replace") as f:
        article_text = f.read().strip()

    summaries_dir = os.path.join(MTURK_DIR, folder)
    summary_files = sorted(os.listdir(summaries_dir)) if os.path.isdir(summaries_dir) else []
    summaries = []

    for sfile in summary_files:
        s_path = os.path.join(summaries_dir, sfile)
        with open(s_path, "r", encoding="utf-8", errors="replace") as f:
            summaries.append(f.read().strip())

    while len(summaries) < 5:
        summaries.append("")
    summaries = summaries[:5]

    records_csv.append({
        "article_id": article_id,
        "topic_name": topic_name,
        "article_text": article_text,
        "summary_A": summaries[0],
        "summary_B": summaries[1],
        "summary_C": summaries[2],
        "summary_D": summaries[3],
        "summary_E": summaries[4]
    })

    records_jsonl.append({
        "article_id": article_id,
        "topic_name": topic_name,
        "article_text": article_text,
        "summaries": summaries
    })

df = pd.DataFrame(records_csv)
df.to_csv("EASC.csv", index=False, encoding="utf-8")

with open("EASC.jsonl", "w", encoding="utf-8") as f:
    for row in records_jsonl:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

print("Done! Created EASC.csv and EASC.jsonl")
```
---

## 📥 Train / Validation / Test Splits

An 80/10/10 split by article can be generated with scikit-learn:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("EASC.csv")

train_df, temp_df = train_test_split(df, test_size=0.2, random_state=42)
val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)

train_df.to_csv("EASC_train.csv", index=False)
val_df.to_csv("EASC_val.csv", index=False)
test_df.to_csv("EASC_test.csv", index=False)

```
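The proportions can be sanity-checked without the real file, on a dummy frame of 153 rows (matching the corpus size):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Dummy frame standing in for EASC.csv: one row per article.
df = pd.DataFrame({"article_id": range(1, 154)})

train_df, temp_df = train_test_split(df, test_size=0.2, random_state=42)
val_df, test_df = train_test_split(temp_df, test_size=0.5, random_state=42)

# The three splits are disjoint and together cover all 153 articles.
assert len(train_df) + len(val_df) + len(test_df) == 153
assert set(train_df.index).isdisjoint(val_df.index)
assert set(val_df.index).isdisjoint(test_df.index)
assert set(train_df.index).isdisjoint(test_df.index)
```

Fixing `random_state` makes the split reproducible across runs, which matters when comparing systems on such a small corpus.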
---

## 🎯 Intended Use

EASC supports research in:

- Extractive summarisation  

- Sentence ranking and scoring  

- Gold-summary aggregation (Level2, Level3)  

- ROUGE and Dice evaluation  

- Learning sentence importance  

- Human–machine evaluation comparisons  

- Crowdsourcing quality analysis


EASC is one of the few Arabic summarisation datasets offering:

- consistent multiple references per document  

- real extractive human judgements  

- cross-worker variability suitable for probabilistic modelling

---



## 📊 Recommended Gold Standards

Based on the original paper:

- **Level 3**: sentences selected by ≥3 workers
- **Level 2**: sentences selected by ≥2 workers
- **All**: all sentences selected by any worker (not recommended as a gold standard; used for analysis only)



These levels can be regenerated programmatically from the unified CSV.
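A minimal sketch of that regeneration, assuming each of the five summaries has already been split into a list of sentences (the helper name below is hypothetical):

```python
from collections import Counter

def gold_levels(worker_summaries: list[list[str]]) -> dict[str, set[str]]:
    """Count how many workers selected each sentence, then threshold."""
    # set() per worker so a duplicated sentence counts once per worker.
    counts = Counter(s for summary in worker_summaries for s in set(summary))
    return {
        "level3": {s for s, c in counts.items() if c >= 3},
        "level2": {s for s, c in counts.items() if c >= 2},
        "all": set(counts),
    }

workers = [["s1", "s2"], ["s1", "s3"], ["s1", "s2"], ["s4"], ["s2"]]
levels = gold_levels(workers)
```

In this toy example, `s1` and `s2` are each chosen by three workers and so survive into both Level 2 and Level 3, while `s3` and `s4` appear only in the "All" set.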



---



## 🧪 Evaluations (from the 2010 paper)


Systems evaluated against EASC include:

- Sakhr Arabic Summariser  

- AQBTSS  

- Gen-Summ  

- LSA-Summ  

- Baseline-1 (first sentence)  



Metrics used:

- **Dice coefficient** (recommended for extractive summarisation)  

- **ROUGE-2 / ROUGE-L / ROUGE-W / ROUGE-S**  

- **AutoSummENG**



All details are documented in the LREC 2010 paper.
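As a sketch of the Dice coefficient over selected-sentence sets (the set-based formulation here is an assumption about the paper's exact variant):

```python
def dice(a: set[str], b: set[str]) -> float:
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty selections are treated as identical
    return 2 * len(a & b) / (len(a) + len(b))
```

Two summaries that share one of their two sentences score 0.5; identical selections score 1.0.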

---

## 📑 Citation



If you use EASC, please cite:



El-Haj, M., Kruschwitz, U., & Fox, C. (2010).  
*Using Mechanical Turk to Create a Corpus of Arabic Summaries.*  
In the LRs & HLT for Semitic Languages Workshop, LREC 2010.



Additional references:



El-Haj (2012). *Multi-document Arabic Text Summarisation.* PhD Thesis, University of Essex.  
El-Haj, Kruschwitz & Fox (2011). *Exploring Clustering for Multi-Document Arabic Summarisation.* AIRS 2011.



## 📜 Licence

The original EASC release permits research use. This cleaned and reformatted version is distributed under CC BY 4.0 (as declared in the dataset metadata) and follows the same academic-research usage terms.



## ✔ Notes


- Some Mechanical Turk summaries may include noisy selections or inconsistent behaviour; these are preserved to avoid subjective filtering.
- File encodings reflect the original dataset; all modern versions are normalised to UTF-8.
- The unified CSV/JSONL is provided for convenience and reproducibility.



## 🧭 Maintainer



Dr Mo El-Haj  
Associate Professor in Natural Language Processing  
VinUniversity, Vietnam / Lancaster University, UK