ShamNER – Spoken Arabic Named‑Entity Recognition Corpus (Levantine v1.1)
ShamNER is a curated corpus of Levantine-Arabic sentences annotated for Named Entities, plus a dual-annotated subset for checking consistency (agreement) across human annotators.
- Rounds: pilot, round1–round5 (manual; as a rule, quality improved across rounds) and round6 (synthetic, post-edited). The synthetic data was produced by sampling label-rich annotated spans from an MSA project and having an LLM write new sentences while force-injecting the annotated spans. Native speakers of Arabic then post-edited these chunks so that they sound as fluent and dialectal as possible; they were instructed not to touch the annotated spans, and a script validated that no spans were modified.
- Strict span-novel evaluation: validation and test contain no entity surface form that appears in train (after normalisation). This probes true generalisation.
- Tokeniser-agnostic: only raw sentences and character spans are stored; regenerate BIO tags with any tokeniser you wish.
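The no-spans-modified validation mentioned above can be sketched as follows. The field names and the substring test are illustrative assumptions; the corpus's actual validation script is not part of this card:

```python
def spans_preserved(spans, original_text, edited_text):
    """Check that every annotated surface form from the original sentence
    still appears unchanged somewhere in the post-edited sentence
    (simplified: a plain substring test)."""
    for span in spans:
        surface = original_text[span["start"]:span["end"]]
        if surface not in edited_text:
            return False
    return True

# Toy example with a Latin-script placeholder sentence
orig = "bought a Samsung phone yesterday"
spans = [{"start": 9, "end": 16, "label": "DUC"}]  # "Samsung"
edited = "yesterday I bought a Samsung phone"
print(spans_preserved(spans, orig, edited))  # True: the span survived editing
```

A real check would likely also verify that span offsets can be re-anchored in the edited text; the substring test only guards against the surface form being altered.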
Quick start
```python
from datasets import load_dataset

sham = load_dataset("your-org/ShamNER")
train_ds = sham["train"]
```
`datasets` streams the top-level `*.parquet` files automatically; use the matching `*.jsonl` for grep-friendly inspection.
Split Philosophy
- No duplicate documents – A document is identified by the pair (doc_name, round); each such bundle is assigned to exactly one split.
- Rounds – Six annotation iterations: pilot, round1–round5 (manual, quality improving each round) and round6 (synthetic, then post-edited). Early rounds feed train; span-novel slices of round5 and round6 populate test.
- Single test set – The corpus ships one held-out test split: test = span-novel bundles from round 5 plus span-novel bundles from round 6. There is no separate test_synth file.
- Span-novelty rule – Before allocation, normalise every entity string (lower-case, strip Arabic diacritics and the leading "ال", collapse whitespace). A bundle is forced into train if any of its normalised spans already occurs in train; otherwise it may enter validation or test.
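The normalisation steps above can be sketched like this. The exact diacritic set used by the corpus scripts is an assumption; the Arabic harakat range U+064B–U+0652 stands in for it here:

```python
import re

HARAKAT = re.compile(r"[\u064B-\u0652]")  # fathatan .. sukun

def normalise(entity: str) -> str:
    """Normalise an entity surface form for the span-novelty check."""
    s = entity.lower()                    # lower-case (affects Latin-script entities)
    s = HARAKAT.sub("", s)                # strip Arabic diacritics
    s = re.sub(r"\s+", " ", s).strip()    # collapse whitespace
    if s.startswith("ال") and len(s) > 2:
        s = s[2:]                         # strip the leading definite article "ال"
    return s

print(normalise("الشام"))        # شام
print(normalise("  مُحَمَّد "))  # محمد
```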
- Tokeniser-agnostic – Each record stores only raw text and character-offset spans; no BIO arrays. Users regenerate token-level labels with whichever tokenizer their model requires.
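As a sketch, character-offset spans can be projected onto the tokens of any tokenizer that reports character offsets. A simple whitespace tokenizer stands in for the real one here; tokens that straddle a span boundary are left as O in this simplified version:

```python
def whitespace_tokenize(text):
    """Return (token, start, end) triples; stand-in for any offset-aware tokenizer."""
    tokens, i = [], 0
    for tok in text.split():
        start = text.index(tok, i)
        tokens.append((tok, start, start + len(tok)))
        i = start + len(tok)
    return tokens

def to_bio(text, spans):
    """Project character-offset spans onto token-level BIO labels."""
    labels = []
    for tok, t_start, t_end in whitespace_tokenize(text):
        tag = "O"
        for sp in spans:
            if t_start >= sp["start"] and t_end <= sp["end"]:
                tag = ("B-" if t_start == sp["start"] else "I-") + sp["label"]
                break
        labels.append(tag)
    return labels

text = "جيب جوال أو أي اشي ضو هيك"
spans = [{"start": 4, "end": 8, "label": "DUC"}]
print(to_bio(text, spans))  # ['O', 'B-DUC', 'O', 'O', 'O', 'O', 'O']
```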
Split sizes
| split | sentences | files |
|---|---|---|
| train | 19 783 | train.jsonl / train.parquet |
| validation | 1 795 | validation.* |
| test | 1 844 | test.* |
| iaa_A | 5 806 | iaa_A.* (optional, dual annotation, annotator A) |
| iaa_B | 5 806 | iaa_B.* (optional, dual annotation, annotator B) |
Every sentence that appears in iaa_A.jsonl is also in the train split (with the same labels), while iaa_B.jsonl provides the alternative annotation for agreement/noise studies.
Label inventory (computed from unique_sentences.jsonl)
| label | description | count |
|---|---|---|
| GPE | Geopolitical Entity | 4 601 |
| PER | Person | 3 628 |
| ORG | Organisation | 1 426 |
| MISC | Catch-all category | 1 301 |
| FAC | Facility | 947 |
| TIMEX | Temporal expression | 926 |
| DUC | Product / Brand | 711 |
| EVE | Event | 487 |
| LOC | (non-GPE/natural) Location | 467 |
| ANG | Language | 322 |
| WOA | Work of Art | 292 |
| TTL | Title / Honorific | 227 |
File schema (*.jsonl)
```json
{
  "doc_id": 137,
  "doc_name": "mohamedghalie",
  "sent_id": 11,
  "orig_ID": 20653,
  "round": "round3",
  "annotator": "Rawan",
  "text": "جيب جوال أو أي اشي ضو هيك",
  "spans": [
    {"start": 4, "end": 8, "label": "DUC"}
  ]
}
```
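Spans index directly into text by character offset, so the surface form of each entity is recoverable without any tokenizer. A minimal reader, using a trimmed-down version of the record above (metadata fields omitted):

```python
import json

# Simplified record: only the fields needed to recover entity mentions
record_line = '{"text": "جيب جوال أو أي اشي ضو هيك", "spans": [{"start": 4, "end": 8, "label": "DUC"}]}'
record = json.loads(record_line)

for span in record["spans"]:
    # Character offsets slice directly into the raw text
    surface = record["text"][span["start"]:span["end"]]
    print(span["label"], surface)  # DUC جوال
```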
Inter‑annotator files
iaa_A.jsonl and iaa_B.jsonl contain parallel annotations for the same 5 806 sentences. Use them to measure agreement or to experiment with noise-robust training. As stated above, the iaa_A annotations are the ones that also appear in the train split; the iaa_B annotations are never used in the primary train/validation/test splits.
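One common way to quantify agreement between the two files is span-level exact-match F1; this metric choice is an assumption, as the card does not prescribe one. A minimal sketch over (sent_id, start, end, label) tuples:

```python
def span_f1(ann_a, ann_b):
    """Exact-match span F1 between two annotation lists.
    Each list holds (sent_id, start, end, label) tuples."""
    set_a, set_b = set(ann_a), set(ann_b)
    if not set_a or not set_b:
        return 0.0
    overlap = len(set_a & set_b)
    if overlap == 0:
        return 0.0
    precision = overlap / len(set_b)  # treating A as reference, B as prediction
    recall = overlap / len(set_a)
    return 2 * precision * recall / (precision + recall)

# Toy example: the annotators agree on one span and disagree on a label
a = [(11, 4, 8, "DUC"), (12, 0, 5, "PER")]
b = [(11, 4, 8, "DUC"), (12, 0, 5, "GPE")]
print(round(span_f1(a, b), 2))  # 0.5
```

Partial-overlap or label-agnostic variants relax the exact-match condition; the tuple set intersection here counts only spans identical in offsets and label.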
© 2025 · CC BY‑4.0