---
dataset_name: "KALIMAT"
dataset_summary: "Kalimat - a multipurpose Arabic Corpus containing 18k+ news articles across multiple categories, provided in CSV and JSONL formats."
language:
  - ar
license: "cc-by-4.0"
task_categories:
  - text-generation
  - text-classification
pretty_name: "Kalimat - a multipurpose Arabic Corpus"
size_categories:
  - 10K<n<100K
tags:
  - arabic
  - news
  - corpus
  - nlp
  - low-resource

configs:
  - config_name: csv
    data_files:
      - kalimat.csv

  - config_name: jsonl
    data_files:
      - kalimat.jsonl
---



# Kalimat - a multipurpose Arabic Corpus

This repository provides a cleaned and consolidated version of the **Kalimat - a multipurpose Arabic Corpus**, containing **18,256 Arabic news articles** collected from a diverse range of domains. The original material consisted of thousands of individual `.txt` files organised across multiple category folders. These have been reconstructed, normalised, and compiled into modern machine-learning-friendly formats.

---

## 📚 Corpus Overview

The corpus includes **18,256** articles covering a wide selection of news categories:

- **Politics**
- **Economy**
- **Culture**
- **Religion**
- **Sport**
- **Social / Society-related topics**
- **Other sub-domains, depending on the original folder structure**

Each article was originally stored as *one word per line*. In this cleaned edition, all documents have been reconstructed into natural text format with proper spacing, UTF-8 encoding, and consistent metadata extraction.
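
The reconstruction step is conceptually simple: the word-per-line files are joined back into running text. A minimal sketch of the idea, using a hypothetical source filename:

```python
from pathlib import Path

# Source files store one word per line; join them into natural running text.
# "ECO_20040101_001.txt" is an illustrative filename, not an actual corpus file.
raw = Path("ECO_20040101_001.txt").read_text(encoding="utf-8")
text = " ".join(w.strip() for w in raw.splitlines() if w.strip())
```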

Although the original filenames varied widely, each document is now associated with the following fields (a sample record is sketched after this list):

- **id** – numeric identifier extracted from the filename (or `-1` where none existed)
- **filename** – original filename exactly as it appeared
- **category** – derived from directory structure or filename
- **year_month** – extracted from filename where possible, otherwise `"unknown"`
- **text** – reconstructed, cleaned article text
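
For orientation, a single record can be inspected directly from the JSONL release; a minimal sketch, assuming `kalimat.jsonl` sits in the working directory:

```python
import json

# Read the first record and confirm the documented schema.
with open("kalimat.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(sorted(record.keys()))
# Expected: ['category', 'filename', 'id', 'text', 'year_month']
```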

---

## 📦 Provided Formats

The cleaned dataset is released in the following forms:

### **1. CSV File**
`kalimat.csv`

A single UTF-8 CSV containing all metadata and article texts.

### **2. JSONL File**
`kalimat.jsonl`

One JSON object per line, suitable for training modern NLP models (e.g. HuggingFace Transformers).
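
A minimal loading sketch using the Hugging Face `datasets` library and the `jsonl` config declared in the card header; the repository id below is a placeholder for wherever this dataset is hosted:

```python
from datasets import load_dataset

# "user/kalimat" is a placeholder repo id; substitute the actual dataset path.
ds = load_dataset("user/kalimat", "jsonl", split="train")

print(ds)                    # row count and column names
print(ds[0]["text"][:200])   # first 200 characters of the first article
```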

### **3. TXT Version (Zipped)**
`kalimat_txt.zip`

All reconstructed `.txt` documents are included in a single compressed archive to avoid storing thousands of individual files in the repository. Each `.txt` file uses the original filename for easy reference.
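
Individual documents can also be read straight from the archive without unpacking it, e.g. with the standard-library `zipfile` module; a sketch, not a prescribed workflow:

```python
import zipfile

with zipfile.ZipFile("kalimat_txt.zip") as zf:
    names = zf.namelist()
    print(len(names), "documents in the archive")
    # Read the first member as UTF-8 text.
    first = zf.read(names[0]).decode("utf-8")
    print(first[:200])
```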

---

## 🔀 Train / Validation / Test Splits

The dataset has been randomly split (using a fixed seed for reproducibility) into:

- **Training set** – 80%
- **Validation set** – 10%
- **Test set** – 10%

These splits are provided as:

- `kalimat_train.csv`
- `kalimat_val.csv`
- `kalimat_test.csv`

All CSVs preserve the same column structure as the main file.

### Code used for splitting (for reference)

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the consolidated corpus
df = pd.read_csv("kalimat.csv", encoding="utf-8")

# First split: 80% train, 20% held out
train_df, temp_df = train_test_split(
    df, test_size=0.20, random_state=42, shuffle=True
)

# Second split: divide the held-out 20% evenly into 10% validation and 10% test
val_df, test_df = train_test_split(
    temp_df, test_size=0.50, random_state=42, shuffle=True
)

# Write the splits with the same column structure as the main file
train_df.to_csv("kalimat_train.csv", index=False, encoding="utf-8")
val_df.to_csv("kalimat_val.csv", index=False, encoding="utf-8")
test_df.to_csv("kalimat_test.csv", index=False, encoding="utf-8")
```
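
A quick sanity check on the written splits; a sketch that assumes the three CSVs are in the working directory:

```python
import pandas as pd

# Count rows per split; together they should partition the full corpus (~80/10/10).
sizes = {
    name: len(pd.read_csv(f"kalimat_{name}.csv", encoding="utf-8"))
    for name in ("train", "val", "test")
}
print(sizes)
```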

### πŸ“ Repository Structure
```
kalimat.csv
kalimat.jsonl
kalimat_train.csv
kalimat_val.csv
kalimat_test.csv
kalimat_txt.zip
README.md
```

## 📄 Citation

If you use this dataset in your work, please cite the original Kalimat paper:

El-Haj, M., & Koulali, R. (2013). _Kalimat: a multipurpose Arabic corpus_. In Second Workshop on Arabic Corpus Linguistics (WACL-2), pp. 22–25.  
[PDF available here](https://elhaj.uk/docs/KALIMAT_ELHAJ_KOULALI.pdf)  

---