noamor committed on
Commit
24e692f
·
1 Parent(s): 7571339

clean corpus release: strict span-novel splits

README.md CHANGED
@@ -1,131 +1,152 @@
1
  ---
2
- pretty_name: ShamNER
 
3
  license: cc-by-4.0
4
- task_categories:
5
- - token-classification
6
- language:
7
- - ar
8
- dataset_info:
 
9
  features:
10
- - name: id
 
11
  dtype: int64
12
- - name: round
13
- dtype: string
14
- - name: doc_name
15
  dtype: string
16
- - name: doc_id
17
  dtype: int64
18
- - name: annotator
19
- dtype: string
20
- - name: sent_id
21
  dtype: int64
22
- - name: input_ids
23
- sequence: int32
24
- - name: attention_mask
25
- sequence: int8
26
- - name: labels
27
- sequence: int64
28
- splits:
29
- - name: train
30
- num_bytes: 9137670
31
- num_examples: 22362
32
- - name: dev
33
- num_bytes: 1023190
34
- num_examples: 2634
35
- - name: test_spoken
36
- num_bytes: 673559
37
- num_examples: 2802
38
- - name: test_msa
39
- num_bytes: 229726
40
- num_examples: 180
41
- download_size: 2287653
42
- dataset_size: 11064145
43
- configs:
44
- - config_name: default
45
- data_files:
46
- - split: train
47
- path: data/train-*
48
- - split: dev
49
- path: data/dev-*
50
- - split: test_spoken
51
- path: data/test_spoken-*
52
- - split: test_msa
53
- path: data/test_msa-*
 
 
 
54
  ---
55
 
56
- # ShamNER
57
 
58
- # Spoken-Arabic Named-Entity Dataset (Levantine NER v1.0)
59
 
60
- A curated corpus of Levantine-Arabic sentences annotated for Named Entities, plus parallel dual-annotator files for assessing annotation noise.
61
- Ideal for fine-tuning Arabic-BERT–style models on noisy, spoken data and testing cross-register robustness.
 
62
 
63
- ---
64
 
65
- ## 1 Corpus snapshot
66
-
67
- | statistic | value |
68
- |----------------------------------------|-------|
69
- | Sentences (unique set) | **23 422** |
70
- | Sentences incl. 2nd annotator (A + B) | **29 228** |
71
- | Tokens (approx.) | ~290 k |
72
- | Annotated entity spans | **17 589** |
73
- | Avg. entities ∕ sentence | 0.75 |
74
- | Annotators | Arzy · Rawan · Reem · Sabil · Wiam · Amir |
75
- | Rounds | `round1` – `round5` (natural speech) + `round6` (synthetic news/MSA) |
76
- | File format | JSON Lines (UTF-8) |
77
-
78
- ### Label inventory
79
-
80
- | label | description | count |
81
- |--------|-------------------------|------:|
82
- | `GPE` | geopolitical entity | 4 601 |
83
- | `PER` | person | 3 628 |
84
- | `ORG` | organisation | 1 426 |
85
- | `MISC` | misc. named item | 1 301 |
86
- | `FAC` | facility | 947 |
87
- | `TIMEX`| temporal expression | 926 |
88
- | `DUC` | product/brand | 711 |
89
- | `EVE` | event | 487 |
90
- | `LOC` | (non-GPE) location | 467 |
91
- | `ANG` | angle/measure | 322 |
92
- | `WOA` | work of art | 292 |
93
- | `TTL` | title/honorific | 227 |
94
 
95
- ---
96
 
97
- ## 2 File list
98
 
99
- | file | lines | purpose |
100
- |------|------:|---------|
101
- | **`unique_sentences.jsonl`** | 23 422 | canonical training/dev/test pool (one Levantine sentence per line) |
102
- | **`iaa_A.jsonl`** | 5 806 | first annotator in each inter-annotator pair (not in `unique`) |
103
- | **`iaa_B.jsonl`** | 5 806 | second annotator for the same sentences (aligned 1-to-1 with `iaa_A`) |
104
- | `sentences.parquet` / `spans.parquet` | 52 274 / 17 589 | columnar versions for quick Pandas analysis (optional) |
105
 
106
- ### Record schema (`unique_sentences.jsonl`)
 
107
 
108
  ```jsonc
109
  {
110
- "doc_id" : 137,
111
- "doc_name" : "22صدى-الصوت22",
112
- "sent_id" : 11,
113
- "orig_ID" : "29891",
114
- "round" : "round3", // round1-5 natural, round6 synthetic
115
- "annotator" : "Rawan",
116
- "text" : "جيب جوال أو أي اشي ضو هيك",
117
- "source_type": "social_videos",
118
  "spans": [
119
- { "start": 4, "end": 8, "label": "DUC" }
120
- ],
121
-
122
- // only for round6
123
- "msa": {
124
- "text" : "<parallel MSA sentence>",
125
- "spans" : [{ "start": 5, "end": 16, "label": "LOC" }]
126
- },
127
-
128
- // provenance (optional)
129
- "url" : "https://…",
130
- "date" : "2019-05-02 18:30:44"
131
  }
 
 
 
 
 
 
 
 
1
  ---
2
+
3
+ pretty\_name: ShamNER
4
  license: cc-by-4.0
5
+ task\_categories:
6
+
7
+ * token-classification
8
+ language:
9
+ * ar
10
+ dataset\_info:
11
  features:
12
+
13
+ * name: doc\_id
14
  dtype: int64
15
+ * name: doc\_name
 
 
16
  dtype: string
17
+ * name: sent\_id
18
  dtype: int64
19
+ * name: orig\_ID
 
 
20
  dtype: int64
21
+ * name: round
22
+ dtype: string
23
+ * name: annotator
24
+ dtype: string
25
+ * name: text
26
+ dtype: string
27
+ * name: spans
28
+ list:
29
+
30
+ * name: start
31
+ dtype: int64
32
+ * name: end
33
+ dtype: int64
34
+ * name: label
35
+ dtype: string
36
+ splits:
37
+ * name: train
38
+ num\_examples: 19783
39
+ * name: validation
40
+ num\_examples: 1795
41
+ * name: test
42
+ num\_examples: 1844
43
+ download\_size: TBD # filled automatically by HF on push
44
+ dataset\_size: TBD
45
+ configs:
46
+ * config\_name: default
47
+ data\_files:
48
+
49
+ * split: train
50
+ path: train.parquet
51
+ * split: validation
52
+ path: validation.parquet
53
+ * split: test
54
+ path: test.parquet
55
+
56
  ---
57
 
58
+ # ShamNER – Spoken Arabic Named‑Entity Recognition Corpus (Levantine v1.1)
59
 
60
+ ShamNER is a curated corpus of Levantine‑Arabic sentences annotated for Named Entities, plus dual annotations for checking consistency (`agreement`) across human annotators.
61
 
62
+ * **Rounds** : `pilot`, `round1`–`round5` (manual; as a rule, quality improved across rounds) and `round6` (synthetic, post‑edited). The synthetic data was produced by sampling label‑rich annotated spans from an MSA project and rewriting the sentences with an LLM while force‑injecting the annotated spans. Native speakers of Arabic then post‑edited these chunks to make them sound as fluent and dialectal as possible. They were instructed not to touch the annotated spans, and a script validated that no spans were modified.
63
+ * **Strict span‑novel evaluation** : validation and test contain **no entity surface‑form that appears in train** (after normalisation). This probes true generalisation.
64
+ * **Tokeniser‑agnostic** : only raw sentences and character spans are stored; regenerate BIO tags with any tokenizer you wish.
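The normalisation behind the span‑novel rule can be sketched as follows; this mirrors the regexes used in the release's `make_split.py`:

```python
import re
import unicodedata

AR_DIACRITICS = re.compile(r"[\u0610-\u061A\u064B-\u065F\u06D6-\u06ED]")
AL_PREFIX = re.compile(r"^ال(?=[\u0621-\u064A])")

def normalise_span(text: str) -> str:
    """Normalise an entity string for the novelty comparison."""
    t = AR_DIACRITICS.sub("", text)           # strip Arabic diacritics
    t = AL_PREFIX.sub("", t)                  # strip leading definite article
    t = unicodedata.normalize("NFKC", t).lower()
    t = re.sub(r"\s+", " ", t).strip()        # collapse whitespace
    return t
```

Two entity strings count as the same surface form whenever their normalised values are equal.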
65
 
66
+ ## Quick start
67
 
68
+ ```python
69
+ from datasets import load_dataset
70
+ sham = load_dataset("your-org/ShamNER")
71
+ train_ds = sham["train"]
72
+ ```
 
73
 
74
+ `datasets` streams the top‑level `*.parquet` files automatically; use the matching `*.jsonl` for grep‑friendly inspection.
75
 
76
+ ## Split Philosophy
77
 
78
+ * **No duplicate documents** – A *document* is identified by the pair
79
+ `(doc_name, round)`; each such bundle is assigned to exactly one split.
 
 
 
 
80
 
81
+ * **Rounds** Six annotation iterations:
82
+ `pilot`, `round1` – `round5` (manual, quality improving each round) and
83
+ `round6` (synthetic, then post-edited).
84
+ Early rounds feed **train**; span-novel slices of `round5` + `round6`
85
+ populate **test**.
86
+
87
+ * **Single test set** – The corpus ships one held-out test split:
88
+ *`test` = span-novel bundles from round 5 **plus** span-novel bundles from
89
+ round 6.*
90
+ No separate `test_synth` file.
91
+
92
+ * **Span-novelty rule** – Before allocation, normalise every entity string
93
+ (lower-case, strip Arabic diacritics and leading “ال”, collapse whitespace).
94
+ A bundle is forced to **train** if *any* of its normalised spans already
95
+ occurs in train; otherwise it may enter validation or test.
96
+
97
+ * **Tokeniser-agnostic** – Each record stores only raw `text` and
98
+ character-offset `spans`; no BIO arrays. Users regenerate token-level labels
99
+ with whichever tokenizer their model requires.
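Regenerating token‑level labels from the stored character offsets can be sketched with a simple whitespace tokeniser (a minimal illustration; `char_spans_to_bio` is a hypothetical helper, and in practice you would use your model tokenizer's offset mapping instead of `str.split`):

```python
def char_spans_to_bio(text, spans):
    """Project character-offset spans onto whitespace tokens as BIO tags."""
    tokens, offsets, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)          # char offset of this token
        offsets.append((start, start + len(tok)))
        tokens.append(tok)
        pos = start + len(tok)
    labels = ["O"] * len(tokens)
    for sp in spans:
        first = True
        for i, (s, e) in enumerate(offsets):
            if s < sp["end"] and e > sp["start"]:   # token overlaps the span
                labels[i] = ("B-" if first else "I-") + sp["label"]
                first = False
    return tokens, labels
```

Any tokenizer that reports character offsets (e.g. a fast Hugging Face tokenizer's offset mapping) can be substituted for the whitespace split.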
100
+
101
+
102
+ ## Split sizes
103
+
104
+ | split | sentences | files |
105
+ | ---------- | ---------- | ------------------------------- |
106
+ | train | **19 783** | `train.jsonl` / `train.parquet` |
107
+ | validation | 1 795 | `validation.*` |
108
+ | test | 1 844 | `test.*` |
109
+ | iaa\_A | 5 806 | optional, dual annotator A |
110
+ | iaa\_B | 5 806 | optional, annotator B |
111
+
112
+ Every sentence that appears in `iaa_A.jsonl` is also in the train split (with the same labels), while `iaa_B.jsonl` provides the alternative annotation for agreement/noise studies.
113
+
114
+ ## Label inventory (computed from `unique_sentences.jsonl`)
115
+
116
+ | label | description | count |
117
+ |-------|---------------------------|------:|
118
+ | GPE | Geopolitical Entity | 4 601 |
119
+ | PER | Person | 3 628 |
120
+ | ORG | Organisation | 1 426 |
121
+ | MISC | Catch-all category | 1 301 |
122
+ | FAC | Facility | 947 |
123
+ | TIMEX | Temporal expression | 926 |
124
+ | DUC | Product / Brand | 711 |
125
+ | EVE | Event | 487 |
126
+ | LOC | (non-GPE/natural) Location | 467 |
127
+ | ANG | Language | 322 |
128
+ | WOA | Work of Art | 292 |
129
+ | TTL | Title / Honorific | 227 |
130
+
131
+ ## File schema (`*.jsonl`)
132
 
133
  ```jsonc
134
  {
135
+ "doc_id": 137,
136
+ "doc_name": "mohamedghalie",
137
+ "sent_id": 11,
138
+ "orig_ID": 20653,
139
+ "round": "round3",
140
+ "annotator": "Rawan",
141
+ "text": "جيب جوال أو أي اشي ضو هيك",
 
142
  "spans": [
143
+ {"start": 4, "end": 8, "label": "DUC"}
144
+ ]
 
145
  }
146
+ ```
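Entity surface forms are recovered by slicing `text` with the character offsets; for the record above:

```python
record = {
    "text": "جيب جوال أو أي اشي ضو هيك",
    "spans": [{"start": 4, "end": 8, "label": "DUC"}],
}
for sp in record["spans"]:
    surface = record["text"][sp["start"]:sp["end"]]   # slice by char offsets
    print(sp["label"], surface)                       # prints: DUC جوال
```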
147
+
148
+ ### Inter‑annotator files
149
+
150
+ `iaa_A.jsonl` and `iaa_B.jsonl` contain parallel annotations for the same 5 806 sentences. Use them to measure agreement or experiment with noise‑robust training. As stated above, only the `iaa_A.jsonl` annotations were injected into the train/validation/test splits; the `iaa_B.jsonl` annotations never appear there.
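Given the 1‑to‑1 alignment, span‑level agreement between the two files can be sketched as micro‑F1 over exact `(start, end, label)` matches (a minimal sketch; `span_agreement_f1` is a hypothetical helper, not part of the release):

```python
import json

def read_jsonl(path):
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh]

def span_agreement_f1(rows_a, rows_b):
    """Micro-F1 of exact (start, end, label) span matches between aligned rows."""
    tp = fp = fn = 0
    for ra, rb in zip(rows_a, rows_b):
        a = {(s["start"], s["end"], s["label"]) for s in ra["spans"]}
        b = {(s["start"], s["end"], s["label"]) for s in rb["spans"]}
        tp += len(a & b)
        fp += len(a - b)   # spans only annotator A marked
        fn += len(b - a)   # spans only annotator B marked
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# f1 = span_agreement_f1(read_jsonl("iaa_A.jsonl"), read_jsonl("iaa_B.jsonl"))
```

Relaxing the exact‑match criterion (e.g. ignoring labels, or counting partial overlaps) gives complementary views of annotation noise.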
151
+
152
+ © 2025 · CC BY‑4.0
hf_data/dataset_dict.json DELETED
@@ -1 +0,0 @@
1
- {"splits": ["train", "dev", "test_spoken", "test_msa"]}
 
 
hf_data/dev/data-00000-of-00001.arrow DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:ca5b692f789136f09d6958990fef71128b98c235f0aa7500efefcdc191811ead
3
- size 1026848
 
 
 
 
hf_data/dev/dataset_info.json DELETED
@@ -1,53 +0,0 @@
1
- {
2
- "citation": "",
3
- "description": "",
4
- "features": {
5
- "id": {
6
- "dtype": "int64",
7
- "_type": "Value"
8
- },
9
- "round": {
10
- "dtype": "string",
11
- "_type": "Value"
12
- },
13
- "doc_name": {
14
- "dtype": "string",
15
- "_type": "Value"
16
- },
17
- "doc_id": {
18
- "dtype": "int64",
19
- "_type": "Value"
20
- },
21
- "annotator": {
22
- "dtype": "string",
23
- "_type": "Value"
24
- },
25
- "sent_id": {
26
- "dtype": "int64",
27
- "_type": "Value"
28
- },
29
- "input_ids": {
30
- "feature": {
31
- "dtype": "int32",
32
- "_type": "Value"
33
- },
34
- "_type": "Sequence"
35
- },
36
- "attention_mask": {
37
- "feature": {
38
- "dtype": "int8",
39
- "_type": "Value"
40
- },
41
- "_type": "Sequence"
42
- },
43
- "labels": {
44
- "feature": {
45
- "dtype": "int64",
46
- "_type": "Value"
47
- },
48
- "_type": "Sequence"
49
- }
50
- },
51
- "homepage": "",
52
- "license": ""
53
- }
 
hf_data/dev/state.json DELETED
@@ -1,13 +0,0 @@
1
- {
2
- "_data_files": [
3
- {
4
- "filename": "data-00000-of-00001.arrow"
5
- }
6
- ],
7
- "_fingerprint": "3cdcf26e51681292",
8
- "_format_columns": null,
9
- "_format_kwargs": {},
10
- "_format_type": null,
11
- "_output_all_columns": false,
12
- "_split": null
13
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
hf_data/test_msa/dataset_info.json DELETED
@@ -1,53 +0,0 @@
1
- {
2
- "citation": "",
3
- "description": "",
4
- "features": {
5
- "id": {
6
- "dtype": "int64",
7
- "_type": "Value"
8
- },
9
- "round": {
10
- "dtype": "string",
11
- "_type": "Value"
12
- },
13
- "doc_name": {
14
- "dtype": "string",
15
- "_type": "Value"
16
- },
17
- "doc_id": {
18
- "dtype": "int64",
19
- "_type": "Value"
20
- },
21
- "annotator": {
22
- "dtype": "string",
23
- "_type": "Value"
24
- },
25
- "sent_id": {
26
- "dtype": "int64",
27
- "_type": "Value"
28
- },
29
- "input_ids": {
30
- "feature": {
31
- "dtype": "int32",
32
- "_type": "Value"
33
- },
34
- "_type": "Sequence"
35
- },
36
- "attention_mask": {
37
- "feature": {
38
- "dtype": "int8",
39
- "_type": "Value"
40
- },
41
- "_type": "Sequence"
42
- },
43
- "labels": {
44
- "feature": {
45
- "dtype": "int64",
46
- "_type": "Value"
47
- },
48
- "_type": "Sequence"
49
- }
50
- },
51
- "homepage": "",
52
- "license": ""
53
- }
 
hf_data/test_msa/state.json DELETED
@@ -1,13 +0,0 @@
1
- {
2
- "_data_files": [
3
- {
4
- "filename": "data-00000-of-00001.arrow"
5
- }
6
- ],
7
- "_fingerprint": "978b9a6ae1fe95c4",
8
- "_format_columns": null,
9
- "_format_kwargs": {},
10
- "_format_type": null,
11
- "_output_all_columns": false,
12
- "_split": null
13
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
hf_data/test_spoken/data-00000-of-00001.arrow DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:d6c268809d363584df770d5bcbf8fe19654144c87db3173607f92fe743bb77ff
3
- size 677224
 
 
 
 
hf_data/test_spoken/dataset_info.json DELETED
@@ -1,53 +0,0 @@
1
- {
2
- "citation": "",
3
- "description": "",
4
- "features": {
5
- "id": {
6
- "dtype": "int64",
7
- "_type": "Value"
8
- },
9
- "round": {
10
- "dtype": "string",
11
- "_type": "Value"
12
- },
13
- "doc_name": {
14
- "dtype": "string",
15
- "_type": "Value"
16
- },
17
- "doc_id": {
18
- "dtype": "int64",
19
- "_type": "Value"
20
- },
21
- "annotator": {
22
- "dtype": "string",
23
- "_type": "Value"
24
- },
25
- "sent_id": {
26
- "dtype": "int64",
27
- "_type": "Value"
28
- },
29
- "input_ids": {
30
- "feature": {
31
- "dtype": "int32",
32
- "_type": "Value"
33
- },
34
- "_type": "Sequence"
35
- },
36
- "attention_mask": {
37
- "feature": {
38
- "dtype": "int8",
39
- "_type": "Value"
40
- },
41
- "_type": "Sequence"
42
- },
43
- "labels": {
44
- "feature": {
45
- "dtype": "int64",
46
- "_type": "Value"
47
- },
48
- "_type": "Sequence"
49
- }
50
- },
51
- "homepage": "",
52
- "license": ""
53
- }
 
hf_data/test_spoken/state.json DELETED
@@ -1,13 +0,0 @@
1
- {
2
- "_data_files": [
3
- {
4
- "filename": "data-00000-of-00001.arrow"
5
- }
6
- ],
7
- "_fingerprint": "bb93cafb46bcc2d2",
8
- "_format_columns": null,
9
- "_format_kwargs": {},
10
- "_format_type": null,
11
- "_output_all_columns": false,
12
- "_split": null
13
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
hf_data/train/data-00000-of-00001.arrow DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:e8ef935a888f2351d976d33c3944cf1dd2a6900352b2375144c2400e24d968e1
3
- size 9156912
 
 
 
 
hf_data/train/dataset_info.json DELETED
@@ -1,53 +0,0 @@
1
- {
2
- "citation": "",
3
- "description": "",
4
- "features": {
5
- "id": {
6
- "dtype": "int64",
7
- "_type": "Value"
8
- },
9
- "round": {
10
- "dtype": "string",
11
- "_type": "Value"
12
- },
13
- "doc_name": {
14
- "dtype": "string",
15
- "_type": "Value"
16
- },
17
- "doc_id": {
18
- "dtype": "int64",
19
- "_type": "Value"
20
- },
21
- "annotator": {
22
- "dtype": "string",
23
- "_type": "Value"
24
- },
25
- "sent_id": {
26
- "dtype": "int64",
27
- "_type": "Value"
28
- },
29
- "input_ids": {
30
- "feature": {
31
- "dtype": "int32",
32
- "_type": "Value"
33
- },
34
- "_type": "Sequence"
35
- },
36
- "attention_mask": {
37
- "feature": {
38
- "dtype": "int8",
39
- "_type": "Value"
40
- },
41
- "_type": "Sequence"
42
- },
43
- "labels": {
44
- "feature": {
45
- "dtype": "int64",
46
- "_type": "Value"
47
- },
48
- "_type": "Sequence"
49
- }
50
- },
51
- "homepage": "",
52
- "license": ""
53
- }
 
hf_data/train/state.json DELETED
@@ -1,13 +0,0 @@
1
- {
2
- "_data_files": [
3
- {
4
- "filename": "data-00000-of-00001.arrow"
5
- }
6
- ],
7
- "_fingerprint": "1ffc13131ac359e5",
8
- "_format_columns": null,
9
- "_format_kwargs": {},
10
- "_format_type": null,
11
- "_output_all_columns": false,
12
- "_split": null
13
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
data/dev-00000-of-00001.parquet → iaa_A.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:550d405d79e0b70e1287471474351a22feee7a28e75d1b9d793c12a1fff48ce1
3
- size 220986
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b10076f434d864f7336bb72598eb6cf2308f3bbac0c65b3b5d5cad2dc4baf595
3
+ size 585929
data/test_msa-00000-of-00001.parquet → iaa_B.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:edc90323ba741f023ff47553899af81ca8b9735baa96872396343b9d5bc59bf1
3
- size 63678
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6421912d360b7cfb0d7928344e9ec3a5f591ac4bffb8da957c0ca0d2bd3e7651
3
+ size 582654
label_mapping.json DELETED
@@ -1,56 +0,0 @@
1
- {
2
- "label2id": {
3
- "B-ANG": 0,
4
- "B-DUC": 1,
5
- "B-EVE": 2,
6
- "B-FAC": 3,
7
- "B-GPE": 4,
8
- "B-LOC": 5,
9
- "B-MISC": 6,
10
- "B-ORG": 7,
11
- "B-PER": 8,
12
- "B-TIMEX": 9,
13
- "B-TTL": 10,
14
- "B-WOA": 11,
15
- "I-ANG": 12,
16
- "I-DUC": 13,
17
- "I-EVE": 14,
18
- "I-FAC": 15,
19
- "I-GPE": 16,
20
- "I-LOC": 17,
21
- "I-MISC": 18,
22
- "I-ORG": 19,
23
- "I-PER": 20,
24
- "I-TIMEX": 21,
25
- "I-TTL": 22,
26
- "I-WOA": 23,
27
- "O": 24
28
- },
29
- "id2label": {
30
- "0": "B-ANG",
31
- "1": "B-DUC",
32
- "2": "B-EVE",
33
- "3": "B-FAC",
34
- "4": "B-GPE",
35
- "5": "B-LOC",
36
- "6": "B-MISC",
37
- "7": "B-ORG",
38
- "8": "B-PER",
39
- "9": "B-TIMEX",
40
- "10": "B-TTL",
41
- "11": "B-WOA",
42
- "12": "I-ANG",
43
- "13": "I-DUC",
44
- "14": "I-EVE",
45
- "15": "I-FAC",
46
- "16": "I-GPE",
47
- "17": "I-LOC",
48
- "18": "I-MISC",
49
- "19": "I-ORG",
50
- "20": "I-PER",
51
- "21": "I-TIMEX",
52
- "22": "I-TTL",
53
- "23": "I-WOA",
54
- "24": "O"
55
- }
56
- }
 
make_split.py ADDED
@@ -0,0 +1,207 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ make_split.py – Create **train / validation / test** splits for the
4
+ **ShamNER final release** and serialise **both JSONL and Parquet** versions.
5
+
6
+ Philosophy
7
+ ----------------------
8
+ * **No duplicate documents** – A *document* is `(doc_name, round)`; each bundle
9
+ goes to exactly one split.
10
+ * **Rounds** – Six annotation iterations:
11
+ `pilot`, `round1`‑`round5` = manual (improving quality), `round6` = synthetic
12
+ post‑edited. Early rounds feed *train*, round5 + (filtered) round6 populate
13
+ *test*.
14
+ * **Single test set** – The release ships **one** held‑out test split, not two.
15
+ Therefore:
16
+ * `test` : span‑novel bundles from round5 **plus** span‑novel bundles from
17
+ round6 (synthetic, see README). No separate `test_synth` file.
18
+ * **Span novelty rule** – Normalise every entity string (lower‑case, strip
19
+ Arabic diacritics & leading «ال», collapse whitespace). A bundle is forced
20
+ to *train* if **any** of its normalised spans already exists in train.
21
+ * **Tokeniser‑agnostic** – Data carries only raw `text` and character‑offset
22
+ `spans`. No BIO arrays.
23
+
24
+ Output files
25
+ ------------
26
+ ```
27
+ train.jsonl train.parquet
28
+ validation.jsonl validation.parquet
29
+ test.jsonl test.parquet
30
+ iaa_A.jsonl / iaa_A.parquet
31
+ iaa_B.jsonl / iaa_B.parquet
32
+ dataset_info.json
33
+ ```
34
+ A **post‑allocation cleanup** moves any *validation* or *test* sentence whose
35
+ normalised spans already appear in *train* back into **train**. This enforces
36
+ strict span‑novelty for evaluation, even if an early bundle introduced a name
37
+ and a later bundle reused it.
38
+ """
39
+ from __future__ import annotations
40
+ import json, re, unicodedata, pathlib, collections, random
41
+ from typing import List, Dict, Tuple
42
+ from datasets import Dataset
43
+
44
+ # --------------------------- configuration ----------------------------------
45
+ SEED = 42
46
+ DEV_FRAC = 0.10
47
+ TEST_FRAC = 0.10
48
+ ROUND_ORDER = {
49
+ "pilot": 0,
50
+ "round1": 1,
51
+ "round2": 2,
52
+ "round3": 3,
53
+ "round4": 4,
54
+ "round5": 5, # assumed best manual round
55
+ "round6": 6, # synthetic examples (post‑edited, see README)
56
+ }
57
+ JSONL_FILES = {
58
+ "unique": "unique_sentences.jsonl",
59
+ "iaa_A": "iaa_A.jsonl",
60
+ "iaa_B": "iaa_B.jsonl",
61
+ }
62
+
63
+ # --------------------------- helpers ----------------------------------------
64
+ Bundle = Tuple[str, str] # (doc_name, round)
65
+ Row = Dict[str, object]
66
+
67
+ AR_DIACRITICS_RE = re.compile(r"[\u0610-\u061A\u064B-\u065F\u06D6-\u06ED]")
68
+ AL_PREFIX_RE = re.compile(r"^ال(?=[\u0621-\u064A])")
69
+ MULTISPACE_RE = re.compile(r"\s+")
70
+
71
+ def normalise_span(text: str) -> str:
72
+ """Return a span string normalised for novelty comparison."""
73
+ t = AR_DIACRITICS_RE.sub("", text)
74
+ t = AL_PREFIX_RE.sub("", t)
75
+ t = unicodedata.normalize("NFKC", t).lower()
76
+ t = MULTISPACE_RE.sub(" ", t).strip()
77
+ return t
78
+
79
+ def read_jsonl(path: pathlib.Path) -> List[Row]:
80
+ with path.open(encoding="utf-8") as fh:
81
+ return [json.loads(l) for l in fh]
82
+
83
+ def build_bundles(rows: List[Row]):
84
+ d: Dict[Bundle, List[Row]] = collections.defaultdict(list)
85
+ for r in rows:
86
+ d[(r["doc_name"], r["round"])].append(r)
87
+ return d
88
+
89
+ def span_set(rows: List[Row]) -> set[str]:
90
+ """Collect normalised span strings from a list of sentence rows.
91
+ If a span dict lacks an explicit ``text`` key, we fall back to slicing
92
+ ``row['text'][start:end]``. Rows without usable span text are skipped.
93
+ """
94
+ s: set[str] = set()
95
+ for r in rows:
96
+ sent_text = r.get("text", "")
97
+ for sp in r.get("spans", []):
98
+ raw = sp.get("text")
99
+ if raw is None and "start" in sp and "end" in sp:
100
+ raw = sent_text[sp["start"]: sp["end"]]
101
+ if raw:
102
+ s.add(normalise_span(raw))
103
+ return s
104
+
105
+ # --------------------------- utilities --------------------------------------
106
+ ID_FIELDS = ["doc_id", "sent_id", "orig_ID"]
107
+
108
+ def harmonise_id_types(rows: List[Row]):
109
+ """Ensure every identifier field is stored consistently as *int*.
110
+ If a value is a digit‑only string it is cast to int; otherwise it is left
111
+ unchanged."""
112
+ for r in rows:
113
+ for f in ID_FIELDS:
114
+ v = r.get(f)
115
+ if isinstance(v, str) and v.isdigit():
116
+ r[f] = int(v)
117
+
118
+ # --------------------------- main -------------------------------------------
119
+
120
+ def prune_overlap(split_name: str, splits: Dict[str, List[Row]], lexicon: set[str]):
121
+ """A post-procession cautious step: move sentences from *split_name* into *train* if any of their spans
122
+ already exist in the `lexicon` (train span set). Updates `splits` in
123
+ place and returns the number of rows moved."""
124
+ kept, moved = [], 0
125
+ for r in splits[split_name]:
126
+ sent = r["text"]
127
+ spans_here = {normalise_span(sp.get("text") or sent[sp["start"]:sp["end"]])
128
+ for sp in r["spans"]}
129
+ if spans_here & lexicon:
130
+ splits["train"].append(r)
131
+ lexicon.update(spans_here)
132
+ moved += 1
133
+ else:
134
+ kept.append(r)
135
+ splits[split_name] = kept
136
+ return moved
137
+
138
+
139
+
140
+ def main():
141
+ random.seed(SEED)
142
+
143
+ # 1. read corpus (single‑annotator view)
144
+ unique_rows = read_jsonl(pathlib.Path(JSONL_FILES["unique"]))
145
+ bundles = build_bundles(unique_rows)
146
+
147
+ # meta per bundle
148
+ meta = []
149
+ for key, rows in bundles.items():
150
+ rd_ord = ROUND_ORDER.get(key[1], 99)
151
+ meta.append({
152
+ "key": key, "rows": rows, "spans": span_set(rows),
153
+ "size": len(rows), "rd": rd_ord,
154
+ })
155
+
156
+ # sort bundles: early rounds first
157
+ meta.sort(key=lambda m: (m["rd"], m["key"]))
158
+
159
+ splits: Dict[str, List[Row]] = {n: [] for n in ["train", "validation", "test"]}
160
+ train_span_lex: set[str] = set()
161
+
162
+ corpus_size = sum(m["size"] for m in meta) # round6 included for quota calc
163
+ dev_quota = int(corpus_size * DEV_FRAC)
164
+ test_quota = int(corpus_size * TEST_FRAC)
165
+
166
+ for m in meta:
167
+ key, rows, spans, size, rd = m["key"], m["rows"], m["spans"], m["size"], m["rd"]
168
+
169
+ # if overlaps train lexicon -> train directly
170
+ if spans & train_span_lex:
171
+ splits["train"].extend(rows)
172
+ train_span_lex.update(spans)
173
+ continue
174
+
175
+ # span‑novel bundle: allocate dev/test quotas first
176
+ if len(splits["validation"]) < dev_quota:
177
+ splits["validation"].extend(rows)
178
+ elif len(splits["test"]) < test_quota:
179
+ splits["test"].extend(rows)
180
+ else:
181
+ # quotas filled – fallback to train
182
+ splits["train"].extend(rows)
183
+ train_span_lex.update(spans)
184
+
185
+ # 2a. post‑pass cleanup to guarantee span novelty ------------------------
186
+ mv_val = prune_overlap("validation", splits, train_span_lex)
187
+ mv_test = prune_overlap("test", splits, train_span_lex)
188
+ print(f"Moved {mv_val} val and {mv_test} test rows back to train due to span overlap.")
189
+
190
+ # 2b. iaa views unchanged ----------------------------------------------
191
+ iaa_A_rows = read_jsonl(pathlib.Path(JSONL_FILES["iaa_A"]))
192
+ iaa_B_rows = read_jsonl(pathlib.Path(JSONL_FILES["iaa_B"]))
193
+
194
+ out_dir = pathlib.Path(".")
195
+ for name, rows in {**splits, "iaa_A": iaa_A_rows, "iaa_B": iaa_B_rows}.items():
196
+ harmonise_id_types(rows)
197
+ json_path = out_dir / f"{name}.jsonl"
198
+ with json_path.open("w", encoding="utf-8") as fh:
199
+ for r in rows:
200
+ fh.write(json.dumps(r, ensure_ascii=False) + "\n")
201
+ Dataset.from_list(rows).to_parquet(out_dir / f"{name}.parquet")
202
+ print(f"-> {name}: {len(rows):,} rows → .jsonl & .parquet")
203
+
204
+ print("--> all splits done.")
205
+
206
+ if __name__ == "__main__":
207
+ main()
test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/test_spoken-00000-of-00001.parquet → test.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:702d61add3a7db12d2290a9807dba2fc853ff82552cab37261926fc75828703b
3
- size 115775
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:379c39a92ae24ac8c8122b954837c9385e03e84fb847a2db01b96a2585d1daf3
3
+ size 116590
train.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/train-00000-of-00001.parquet → train.parquet RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d4f4f19f0df7e970d1a1286988e1b7c13e7b7163809fc49aa1b64b6fb1f5bf6a
3
- size 1887214
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:033aab517dcfa1c6e9cc3bf6f8325e94933aeb6ced3febc0ce3aad694ebec7bb
3
+ size 2043561
train_sample.jsonl DELETED
@@ -1,200 +0,0 @@
1
- {"id":1264,"round":"round2","doc_name":"WhatsApp-Video-2021-11-14-at-19.34.28-1","doc_id":7,"annotator":"Rawan","sent_id":1}
2
- {"id":1265,"round":"round2","doc_name":"WhatsApp-Video-2021-11-14-at-19.34.28-1","doc_id":7,"annotator":"Rawan","sent_id":2}
3
- {"id":1266,"round":"round2","doc_name":"WhatsApp-Video-2021-11-14-at-19.34.28-1","doc_id":7,"annotator":"Rawan","sent_id":3}
4
- {"id":1267,"round":"round2","doc_name":"WhatsApp-Video-2021-11-14-at-19.34.28-1","doc_id":7,"annotator":"Rawan","sent_id":4}
5
- {"id":1268,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":5}
6
- {"id":1269,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":6}
7
- {"id":1270,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":7}
8
- {"id":1271,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":8}
9
- {"id":1272,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":9}
10
- {"id":1273,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":10}
11
- {"id":1274,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":11}
12
- {"id":1275,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":12}
13
- {"id":1276,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-07.59.36","doc_id":8,"annotator":"Rawan","sent_id":13}
14
- {"id":1277,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":14}
15
- {"id":1278,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":15}
16
- {"id":1279,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":16}
17
- {"id":1280,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":17}
18
- {"id":1281,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":18}
19
- {"id":1282,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":19}
20
- {"id":1283,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":20}
21
- {"id":1284,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":21}
- {"id":1285,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":22}
- {"id":1286,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":23}
- {"id":1287,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":24}
- {"id":1288,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":25}
- {"id":1289,"round":"round2","doc_name":"WhatsApp-Video-2021-11-15-at-10.09.11","doc_id":9,"annotator":"Rawan","sent_id":26}
- {"id":1290,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":27}
- {"id":1291,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":28}
- {"id":1292,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":29}
- {"id":1293,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":30}
- {"id":1294,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":31}
- {"id":1295,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":32}
- {"id":1296,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":33}
- {"id":1297,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":34}
- {"id":1298,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":35}
- {"id":1299,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":36}
- {"id":1300,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":37}
- {"id":1301,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":38}
- {"id":1302,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":39}
- {"id":1303,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":40}
- {"id":1304,"round":"round2","doc_name":"WhatsApp-Video-2021-11-16-at-11.17.18","doc_id":10,"annotator":"Rawan","sent_id":41}
- {"id":1305,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":42}
- {"id":1306,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":43}
- {"id":1307,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":44}
- {"id":1308,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":45}
- {"id":1309,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":46}
- {"id":1310,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":47}
- {"id":1311,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":48}
- {"id":1312,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":49}
- {"id":1313,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":50}
- {"id":1314,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":51}
- {"id":1315,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":52}
- {"id":1316,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":53}
- {"id":1317,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":54}
- {"id":1318,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":55}
- {"id":1319,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":56}
- {"id":1320,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-10.12.13","doc_id":11,"annotator":"Rawan","sent_id":57}
- {"id":1321,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":58}
- {"id":1322,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":59}
- {"id":1323,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":60}
- {"id":1324,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":61}
- {"id":1325,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":62}
- {"id":1326,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-12.30.01","doc_id":12,"annotator":"Rawan","sent_id":63}
- {"id":1327,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":64}
- {"id":1328,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":65}
- {"id":1329,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":66}
- {"id":1330,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":67}
- {"id":1331,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":68}
- {"id":1332,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":69}
- {"id":1333,"round":"round2","doc_name":"WhatsApp-Video-2021-11-17-at-21.49.20-1","doc_id":13,"annotator":"Rawan","sent_id":70}
- {"id":1334,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":71}
- {"id":1335,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":72}
- {"id":1336,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":73}
- {"id":1337,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":74}
- {"id":1338,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":75}
- {"id":1339,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-09.28.56-1","doc_id":14,"annotator":"Rawan","sent_id":76}
- {"id":1340,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-14.24.23","doc_id":15,"annotator":"Rawan","sent_id":77}
- {"id":1341,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-14.24.23","doc_id":15,"annotator":"Rawan","sent_id":78}
- {"id":1342,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-14.24.23","doc_id":15,"annotator":"Rawan","sent_id":79}
- {"id":1343,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-14.24.23","doc_id":15,"annotator":"Rawan","sent_id":80}
- {"id":1344,"round":"round2","doc_name":"WhatsApp-Video-2021-11-18-at-14.24.23","doc_id":15,"annotator":"Rawan","sent_id":81}
- {"id":1345,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":82}
- {"id":1346,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":83}
- {"id":1347,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":84}
- {"id":1348,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":85}
- {"id":1349,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":86}
- {"id":1350,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":87}
- {"id":1351,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":88}
- {"id":1352,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":89}
- {"id":1353,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":90}
- {"id":1354,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":91}
- {"id":1355,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":92}
- {"id":1356,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":93}
- {"id":1357,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":94}
- {"id":1358,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":95}
- {"id":1359,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":96}
- {"id":1360,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":97}
- {"id":1361,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":98}
- {"id":1362,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":99}
- {"id":1363,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":100}
- {"id":1364,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":101}
- {"id":1365,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":102}
- {"id":1366,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":103}
- {"id":1367,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":104}
- {"id":1368,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":105}
- {"id":1369,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":106}
- {"id":1370,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":107}
- {"id":1371,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":108}
- {"id":1372,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":109}
- {"id":1373,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":110}
- {"id":1374,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":111}
- {"id":1375,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":112}
- {"id":1376,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":113}
- {"id":1377,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":114}
- {"id":1378,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":115}
- {"id":1379,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":116}
- {"id":1380,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":117}
- {"id":1381,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":118}
- {"id":1382,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":119}
- {"id":1383,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":120}
- {"id":1384,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":121}
- {"id":1385,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":122}
- {"id":1386,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":123}
- {"id":1387,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":124}
- {"id":1388,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":125}
- {"id":1389,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":126}
- {"id":1390,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":127}
- {"id":1391,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":128}
- {"id":1392,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":129}
- {"id":1393,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":130}
- {"id":1394,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":131}
- {"id":1395,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":132}
- {"id":1396,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":133}
- {"id":1397,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":134}
- {"id":1398,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":135}
- {"id":1399,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":136}
- {"id":1400,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":137}
- {"id":1401,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":138}
- {"id":1402,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":139}
- {"id":1403,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":140}
- {"id":1404,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":141}
- {"id":1405,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":142}
- {"id":1406,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":143}
- {"id":1407,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":144}
- {"id":1408,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":145}
- {"id":1409,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":146}
- {"id":1410,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":147}
- {"id":1411,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":148}
- {"id":1412,"round":"round2","doc_name":"حكاية-البخل-بالوراثة-_-الحكواتية-سارة-قصير","doc_id":16,"annotator":"Rawan","sent_id":149}
- {"id":1413,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":150}
- {"id":1414,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":151}
- {"id":1415,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":152}
- {"id":1416,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":153}
- {"id":1417,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":154}
- {"id":1418,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":155}
- {"id":1419,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-12.06.10","doc_id":17,"annotator":"Rawan","sent_id":156}
- {"id":1420,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":157}
- {"id":1421,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":158}
- {"id":1422,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":159}
- {"id":1423,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":160}
- {"id":1424,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":161}
- {"id":1425,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":162}
- {"id":1426,"round":"round2","doc_name":"WhatsApp-Video-2021-11-19-at-14.21.18","doc_id":18,"annotator":"Rawan","sent_id":163}
- {"id":1453,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":190}
- {"id":1454,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":191}
- {"id":1455,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":192}
- {"id":1456,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":193}
- {"id":1457,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":194}
- {"id":1458,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":195}
- {"id":1459,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-07.17.27","doc_id":20,"annotator":"Rawan","sent_id":196}
- {"id":1460,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-11.29.49","doc_id":21,"annotator":"Rawan","sent_id":197}
- {"id":1461,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-11.29.49","doc_id":21,"annotator":"Rawan","sent_id":198}
- {"id":1462,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-11.29.49","doc_id":21,"annotator":"Rawan","sent_id":199}
- {"id":1463,"round":"round2","doc_name":"WhatsApp-Video-2021-11-20-at-11.29.49","doc_id":21,"annotator":"Rawan","sent_id":200}
- {"id":1464,"round":"round2","doc_name":"50480229","doc_id":22,"annotator":"Rawan","sent_id":201}
- {"id":1465,"round":"round2","doc_name":"50480230","doc_id":23,"annotator":"Rawan","sent_id":202}
- {"id":1466,"round":"round2","doc_name":"50480231","doc_id":24,"annotator":"Rawan","sent_id":203}
- {"id":1467,"round":"round2","doc_name":"50480232","doc_id":25,"annotator":"Rawan","sent_id":204}
- {"id":1468,"round":"round2","doc_name":"50480233","doc_id":26,"annotator":"Rawan","sent_id":205}
- {"id":1469,"round":"round2","doc_name":"50480234","doc_id":27,"annotator":"Rawan","sent_id":206}
- {"id":1471,"round":"round2","doc_name":"50480236","doc_id":29,"annotator":"Rawan","sent_id":208}
- {"id":1472,"round":"round2","doc_name":"50480237","doc_id":30,"annotator":"Rawan","sent_id":209}
- {"id":1473,"round":"round2","doc_name":"50480238","doc_id":31,"annotator":"Rawan","sent_id":210}
- {"id":1474,"round":"round2","doc_name":"50480239","doc_id":32,"annotator":"Rawan","sent_id":211}
- {"id":1475,"round":"round2","doc_name":"50480240","doc_id":33,"annotator":"Rawan","sent_id":212}
- {"id":1476,"round":"round2","doc_name":"50480241","doc_id":34,"annotator":"Rawan","sent_id":213}
- {"id":1478,"round":"round2","doc_name":"50480243","doc_id":36,"annotator":"Rawan","sent_id":215}
- {"id":1480,"round":"round2","doc_name":"50480245","doc_id":38,"annotator":"Rawan","sent_id":217}
- {"id":1481,"round":"round2","doc_name":"50480246","doc_id":39,"annotator":"Rawan","sent_id":218}
- {"id":1482,"round":"round2","doc_name":"50480247","doc_id":40,"annotator":"Rawan","sent_id":219}
- {"id":1483,"round":"round2","doc_name":"50480248","doc_id":41,"annotator":"Rawan","sent_id":220}
- {"id":1484,"round":"round2","doc_name":"50480249","doc_id":42,"annotator":"Rawan","sent_id":221}
- {"id":1485,"round":"round2","doc_name":"50480250","doc_id":43,"annotator":"Rawan","sent_id":222}
- {"id":1487,"round":"round2","doc_name":"50480252","doc_id":45,"annotator":"Rawan","sent_id":224}
- {"id":1488,"round":"round2","doc_name":"50480253","doc_id":46,"annotator":"Rawan","sent_id":225}
- {"id":1489,"round":"round2","doc_name":"50480254","doc_id":47,"annotator":"Rawan","sent_id":226}
- {"id":1490,"round":"round2","doc_name":"50480255","doc_id":48,"annotator":"Rawan","sent_id":227}
- {"id":1493,"round":"round2","doc_name":"50480258","doc_id":51,"annotator":"Rawan","sent_id":230}
- {"id":1494,"round":"round2","doc_name":"50480259","doc_id":52,"annotator":"Rawan","sent_id":231}
- {"id":1495,"round":"round2","doc_name":"50480260","doc_id":53,"annotator":"Rawan","sent_id":232}
 
validation.jsonl ADDED
The diff for this file is too large to render.
 
hf_data/test_msa/data-00000-of-00001.arrow → validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:511f2dcbccdf05379cea334ef46f9883d262b6716ac065587e1f07a78d89ad56
- size 231872
+ oid sha256:565656f2a507cc9139796c8e7e58b43fc7e4e5ed5e8f3deb5c02d1edeb408926
+ size 142658