---
license: cc-by-4.0
language:
- ar
task_categories:
- text-classification
- text-generation
size_categories:
- 10M<n<100M
---

# Tarab: A Multi-Dialect Corpus of Arabic Lyrics and Poetry

## Train/Validation/Test Splits

The corpus is split at the artwork level into 70% train, 15% validation, and 15% test, stratified on (type, origin), so that all verses of a given artwork land in the same split and no leakage occurs across splits. The script below builds an artwork-to-split map and then streams the master CSV into the three split files without loading everything into memory.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# ====== CONFIG ======
# Paths and constants (illustrative defaults; adjust to your local layout)
INPUT_CSV = "tarab_full.csv"
OUT_TRAIN = "tarab_train.csv"
OUT_VAL = "tarab_validation.csv"
OUT_TEST = "tarab_test.csv"
RANDOM_STATE = 42
CHUNK_SIZE = 100_000
# ====================


def build_artwork_split_map(path: str) -> dict[int, str]:
    """
    Creates a mapping: art_id -> split_name, using stratified split on (type, origin).
    Split is done at artwork level to avoid leakage across splits.
    """
    # Read only the columns needed to define strata at artwork level
    usecols = ["art_id", "type", "origin"]
    meta = pd.read_csv(path, usecols=usecols)

    # Artwork-level metadata (one row per art_id)
    art = (
        meta.groupby("art_id", as_index=False)
        .agg({"type": "first", "origin": "first"})
    )

    # Stratum ensures coverage across songs/poems and countries/eras
    art["stratum"] = art["type"].astype(str) + "|" + art["origin"].astype(str)

    art_ids = art["art_id"].to_numpy()
    strata = art["stratum"].to_numpy()

    # 70% train, 30% temp
    train_ids, temp_ids = train_test_split(
        art_ids, test_size=0.30, random_state=RANDOM_STATE, stratify=strata
    )

    # Split temp into 15% val, 15% test (i.e., half/half of 30%)
    # Need strata for temp only
    temp_strata = art.set_index("art_id").loc[temp_ids, "stratum"].to_numpy()
    val_ids, test_ids = train_test_split(
        temp_ids, test_size=0.50, random_state=RANDOM_STATE, stratify=temp_strata
    )

    split_map = {int(a): "train" for a in train_ids}
    split_map.update({int(a): "validation" for a in val_ids})
    split_map.update({int(a): "test" for a in test_ids})
    return split_map


def write_splits_streaming(path: str, split_map: dict[int, str]) -> None:
    """
    Streams through the big CSV and writes out train/val/test
    without loading everything at once.
    """
    # Reset outputs
    for f in (OUT_TRAIN, OUT_VAL, OUT_TEST):
        open(f, "w", encoding="utf-8").close()

    header_written = {"train": False, "validation": False, "test": False}

    for chunk in pd.read_csv(path, chunksize=CHUNK_SIZE):
        # Assign split by art_id
        chunk["__split__"] = chunk["art_id"].map(split_map)

        # Drop any rows whose art_id isn't mapped (shouldn't happen, but safe)
        chunk = chunk.dropna(subset=["__split__"])

        for split_name, out_path in [
            ("train", OUT_TRAIN),
            ("validation", OUT_VAL),
            ("test", OUT_TEST),
        ]:
            part = chunk[chunk["__split__"] == split_name].drop(columns=["__split__"])
            if part.empty:
                continue
            part.to_csv(
                out_path,
                mode="a",
                index=False,
                header=not header_written[split_name],
                encoding="utf-8",
            )
            header_written[split_name] = True


if __name__ == "__main__":
    split_map = build_artwork_split_map(INPUT_CSV)
    write_splits_streaming(INPUT_CSV, split_map)
    print("Done.")
    print("Wrote:", OUT_TRAIN, OUT_VAL, OUT_TEST)
```

---

## Dialect-Specific Subsets

In addition to the standard train/validation/test splits, the repository provides dialect-specific CSV files in which the corpus is partitioned by the dialect label. Each file contains all verses belonging to a single dialect category:

- Classical
- MSA
- Egyptian
- Gulf
- Levantine
- Iraqi
- Sudanese
- Maghrebi

The dialect splits are derived directly from the master file and preserve full metadata, including origin, type, and art_id.
These dialect subsets support:

- Dialect-specific modelling and evaluation
- Controlled experiments on regional linguistic variation
- Cross-dialect transfer learning
- Vocabulary and stylistic analysis within dialect boundaries

The following script generates the per-dialect files from the master CSV:

```python
import os

import pandas as pd

# ====== CONFIG ======
INPUT_FILE = "tarab_full.csv"
OUTPUT_DIR = "tarab_by_dialect"
ENCODING = "utf-8"
# ====================

# Create output directory if it doesn't exist
os.makedirs(OUTPUT_DIR, exist_ok=True)

# Load dataset
df = pd.read_csv(INPUT_FILE, encoding=ENCODING)

# Basic sanity check
print(f"Total rows: {len(df):,}")
print(f"Unique dialects: {df['dialect'].nunique()}")

# Clean dialect labels (optional but safer)
df["dialect"] = df["dialect"].astype(str).str.strip()

# Get unique dialects
dialects = sorted(df["dialect"].unique())

print("\nCreating files per dialect...\n")

for d in dialects:
    dialect_df = df[df["dialect"] == d]

    # Safe filename
    safe_name = d.replace(" ", "_").replace("/", "_")
    output_path = os.path.join(OUTPUT_DIR, f"tarab_{safe_name}.csv")

    dialect_df.to_csv(output_path, index=False, encoding="utf-8")

    print(f"{d}:")
    print(f"  Verses: {len(dialect_df):,}")
    print(f"  Works: {dialect_df['art_id'].nunique():,}")
    print(f"  File: {output_path}\n")

print("Done.")
```

---

## Tarab Miscellaneous: Additional Thematic and Web-Derived Split

We compiled a supplementary split based on thematic categories collected from publicly available Arabic song websites. These sources are informal and not officially curated, so their categorisation cannot be independently verified.

- **Tarab_love_songs.csv**: Songs labelled under romantic or love-related themes.
- **Tarab_hiphop_songs.csv**: Arabic hip hop tracks.
- **Tarab_deeni_songs.csv**: Religious songs.
- **Tarab_khaleeji_songs.csv**: Songs categorised as Gulf (Khaleeji). This reflects dialect or stylistic classification rather than artist nationality; for example, an Egyptian singer may perform in Gulf dialect.
- **Tarab_maghribi_songs.csv**: Songs labelled as Maghrebi. As above, this reflects dialectal or stylistic features, not necessarily the artist's country of origin; a Saudi singer, for instance, may perform in Moroccan dialect.
- **Tarab_video_songs.csv**: Songs associated with video-clip releases, as identified by the source websites.
- **Tarab_poetry.csv**: Poetry entries collected from Kaggle (see the Tarab paper for the reference).
- **artists_details.csv**: A partially completed metadata file containing finer-grained information about artists, including nationality, dominant dialect, birth and death years, active period, and brief biographical notes extracted from Wikidata. Due to resource constraints, this enrichment was not completed; in principle, it could be extended using a robust large language model to assist with structured biographical completion and validation.

This split should be treated as weakly supervised metadata derived from web categorisation rather than as authoritative genre or dialect annotation.

---

## Citation

If you use Tarab, please cite:

```bibtex
@inproceedings{elhaj2026tarab,
  title={Tarab: A Multi-Dialect Corpus of Arabic Lyrics and Poetry},
  author={El-Haj, Mo},
  booktitle={Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script (AbjadNLP 2026) at the 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2026)},
  pages={37--46},
  address={Rabat, Morocco},
  month={March},
  year={2026}
}
```