Tasks: Token Classification
Sub-tasks: named-entity-recognition
Formats: parquet
Languages: Arabic
Size: 10K - 100K
License:

Update README.md with new split philosophy details

README.md
ShamNER is a curated corpus of Levantine‑Arabic sentences annotated for Named Entities.
## Quick start

```python
# Uncomment the next line if you hit a LocalFileSystem / fsspec error on Colab
# !pip install -U "datasets>=2.16.0" "fsspec>=2023.10.0"
from datasets import load_dataset

sham = load_dataset("HebArabNlpProject/ShamNER")
train_ds = sham["train"]
```
## Split Philosophy

* **No duplicate documents** – A *document* is identified by the pair
  `(doc_name, round)`; each such bundle is assigned to exactly one split. This rule holds for whole bundles, though individual sentences within a bundle may still share overlapping spans after post-allocation pruning at certain thresholds.

* **Rounds** – Six annotation iterations:
  `pilot`, `round1` – `round5` (manual, quality improving each round) and
  round 6.*
  No separate `test_synth` file.
* **Span-novelty rule (Relaxed)**

  Before allocation, normalise every entity string:

  - Convert to lowercase (Latin-alphabet text occurs in social media)
  - Strip Arabic diacritics
  - Remove the leading definite article “ال”
  - Collapse internal whitespace

  A bundle is forced to **train** if **any** of its normalised spans already occurs in train.

  A **post-allocation pruning** step then moves sentences from validation or test back to train
  **only if more than 50%** of their normalised spans already exist in the training set.

  This threshold (**0.50**) was chosen to provide the model with more learning examples,
  leading to improved performance on the evaluation sets.
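The normalisation and pruning rules above can be sketched as follows. This is a minimal illustration, not the authors' actual code: `normalise_span` and `overlap_fraction` are hypothetical helper names, and the exact diacritic character range and data layout are assumptions.

```python
import re

# Arabic diacritics (tashkeel), U+064B-U+0652 -- assumed character range
_DIACRITICS = re.compile(r"[\u064B-\u0652]")

def normalise_span(s: str) -> str:
    """Normalise an entity string before split allocation (sketch)."""
    s = s.lower()                          # Latin-alphabet text occurs in social media
    s = _DIACRITICS.sub("", s)             # strip Arabic diacritics
    s = re.sub(r"^ال", "", s)              # remove leading definite article
    return re.sub(r"\s+", " ", s).strip()  # collapse internal whitespace

def overlap_fraction(sentence_spans, train_spans):
    """Fraction of a sentence's normalised spans already present in train."""
    norm = [normalise_span(s) for s in sentence_spans]
    if not norm:
        return 0.0
    return sum(s in train_spans for s in norm) / len(norm)

# Post-allocation pruning: a validation/test sentence is moved back to
# train only if overlap_fraction(...) > 0.50
```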
* **Tokeniser-agnostic** – Each record stores only raw `text` and
  character-offset `spans`; no BIO arrays. Users regenerate token-level labels
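Regenerating token-level labels from character offsets can be sketched as below, assuming whitespace tokenisation and spans given as `(start, end, label)` tuples; the dataset's actual `spans` field layout may differ, so adapt accordingly.

```python
def spans_to_bio(text, spans):
    """Project character-offset spans onto whitespace tokens as BIO tags.

    `spans` is an iterable of (start, end, label) tuples with character
    offsets into `text` (an assumed shape, for illustration only).
    """
    tokens, offsets, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)       # locate token in the raw text
        tokens.append(tok)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        inside = False
        for i, (s, e) in enumerate(offsets):
            if s < end and e > start:      # token overlaps the entity span
                tags[i] = ("I-" if inside else "B-") + label
                inside = True
    return tokens, tags
```

Any subword tokeniser can be substituted for `text.split()` as long as it reports character offsets.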
| split      | sentences  | files                           |
| ---------- | ---------- | ------------------------------- |
| train      | **19 532** | `train.jsonl` / `train.parquet` |
| validation | 1 931      | `validation.*`                  |
| test       | 1 931      | `test.*`                        |
| iaa\_A     | 5 806      | optional, dual annotator A      |
| iaa\_B     | 5 806      | optional, annotator B           |
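Since `iaa_A` and `iaa_B` cover the same sentences annotated by two annotators, inter-annotator agreement can be estimated with exact-match span F1. A minimal sketch on toy data; the `(start, end, label)` tuple shape is an assumption:

```python
def span_f1(spans_a, spans_b):
    """Exact-match F1 between two annotators' span sets for one sentence.

    Spans are (start, end, label) tuples; both inputs are treated as sets,
    so a span counts as agreement only if offsets AND label match exactly.
    """
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0          # both annotators marked nothing: perfect agreement
    tp = len(a & b)         # spans both annotators produced
    if tp == 0:
        return 0.0
    precision = tp / len(b)
    recall = tp / len(a)
    return 2 * precision * recall / (precision + recall)
```

Averaging this over the shared sentences gives a simple corpus-level agreement figure.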