---
language:
  - eu
pretty_name: ZelaiHandiClean 🤠
task_categories:
  - text-generation
size_categories:
  - 100M<n<1B
---

# ZelaiHandiClean

## Dataset Summary

A large, cleaned Basque-language corpus originally based on the ZelaiHandi dataset, augmented with books and Wikipedia articles to support language-modeling experiments. The data have been normalized and stripped of extraneous whitespace, blank lines, and non-linguistic characters.

For example, these are the statistics for the Ekaia subset:

| Metric | Value |
|---|---|
| Initial characters (Ekaia subset) | 14,480,942 |
| Final characters (after cleaning) | 12,746,071 |
| Overall cleaned | 11.98 % |
| Extra spaces removed | 0.03 % |
| Blank lines removed | 15.47 % |
| Non-linguistic characters removed | 11.96 % |
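The "Overall cleaned" figure follows directly from the character counts above; a quick check:

```python
# Sanity-check the "Overall cleaned" percentage for the Ekaia subset
# using the character counts from the table above.
initial_chars = 14_480_942
final_chars = 12_746_071
overall_cleaned_pct = (1 - final_chars / initial_chars) * 100
print(f"{overall_cleaned_pct:.2f} %")  # 11.98 %
```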

## Supported Tasks

- Causal language modeling
- Masked language modeling
- Next-sentence prediction
- Any downstream Basque NLP task (fine-tuning)

## Languages

- Basque (eu)

## Dataset Statistics

| Metric | Value |
|---|---|
| Total words | 660 million |
| Disk size | 5.8 GB |
| Additional books scraped from Booktegui | 400 |
| Wikipedia articles added | 2,500 |

## Dataset Structure

Each example in the JSONL files has the following schema:

```json
{
  "id": "unique for each document",
  "periodico": "source",
  "lugar": "geographic focus of the source",
  "dominio": "type of content (articles, news, books…)",
  "texto": "raw text used for training",
  "licencia": "document license"
}
```
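Records with this schema can be streamed line by line from the JSONL shards; a minimal sketch (the filename is a placeholder, not an actual file in the repository):

```python
import json

def iter_records(path):
    """Yield one dict per non-empty line of a JSONL shard."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# e.g. collect the training text field:
# texts = [rec["texto"] for rec in iter_records("zelaihandi_clean.jsonl")]
```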

There is no predefined train/validation split. A simple approach is to divide the corpus into 100 contiguous chunks and hold out one of them for validation, to check that the model is generalising well.
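The hold-one-chunk-out split suggested above can be sketched as follows (function name and chunk count are illustrative, not part of the dataset):

```python
def split_train_val(docs, n_chunks=100, val_chunk=0):
    """Split docs into n_chunks contiguous chunks and hold one out for validation."""
    size = max(1, len(docs) // n_chunks)
    start, end = val_chunk * size, (val_chunk + 1) * size
    val = docs[start:end]
    train = docs[:start] + docs[end:]
    return train, val

# e.g. with 1,000 documents: 990 for training, 10 for validation
# train, val = split_train_val(documents)
```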

## Data Collection and Cleaning

1. **Original source**
   - ZelaiHandi dataset (Basque news, books, articles)
2. **Cleaning steps**
   - Removed extra whitespace and blank lines
   - Normalized Unicode characters
   - Stripped non-linguistic symbols (HTML tags, control characters)
3. **Augmentation**
   - +400 books (*liburuak*) scraped from Booktegui
   - +2,500 articles from the Basque Wikipedia (Wikipediabi)
   - Wikipedia Berria and Legebiltzarra datasets were ingested in three parts to avoid interface issues
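The cleaning steps above can be sketched in a few lines; this is an illustrative approximation, not the exact pipeline used to build the corpus:

```python
import re
import unicodedata

TAG_RE = re.compile(r"<[^>]+>")                      # HTML tags
CTRL_RE = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")    # control chars (keeps \n and \t)
SPACE_RE = re.compile(r"[ \t]+")                     # runs of spaces/tabs

def clean_text(text):
    """Normalize Unicode, strip tags/control chars, collapse spaces, drop blank lines."""
    text = unicodedata.normalize("NFC", text)
    text = TAG_RE.sub(" ", text)
    text = CTRL_RE.sub("", text)
    text = SPACE_RE.sub(" ", text)
    lines = [ln.strip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if ln)
```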

## Considerations for Use

- All text is raw; you may wish to tokenize or further normalize per your model's requirements. I have my own Basque tokenizer and may also upload it.
- Maintain consistent train/validation splits for reproducible benchmarks.

## License

Various Creative Commons licenses (CC-BY, CC-BY-SA).
See each JSONL record’s "licencia" field for details.


## Citation

If you use this dataset, please cite:

Orai NLP Teknologiak (2025). ZelaiHandi + Booktegui + Wikipediabi Basque Corpus. CC-BY-SA.


## Acknowledgements

Special thanks to:

- San Vicente, Iñaki
- Urbizu, Gorka
- Corral, Ander
- Beloki, Zuhaitz
- Saralegi, Xabier

…for creating the original ZelaiHandi dataset, which served as the foundation for this cleaned and slightly expanded corpus.

*I have just noticed that there is still a lot to clean, so I will be uploading updates during this month 🤠*