---
language:
  - eu
configs:
  - config_name: wikipedia
    data_files:
      - split: train
        path: wikipedia/train*.jsonl.gz
      - split: validation
        path: wikipedia/valid*.jsonl.gz
      - split: test
        path: wikipedia/test*.jsonl.gz
  - config_name: euscrawl-v2
    data_files:
      - split: train
        path: euscrawl-v2-free/train*.jsonl.gz
      - split: validation
        path: euscrawl-v2-free/valid*.jsonl.gz
      - split: test
        path: euscrawl-v2-free/test*.jsonl.gz
  - config_name: zelaihandi
    data_files:
      - split: train
        path: zelaihandi/train*.jsonl.gz
      - split: validation
        path: zelaihandi/valid*.jsonl.gz
      - split: test
        path: zelaihandi/test*.jsonl.gz
  - config_name: bog
    data_files:
      - split: train
        path: bog/train*.jsonl.gz
      - split: validation
        path: bog/valid*.jsonl.gz
      - split: test
        path: bog/test*.jsonl.gz
  - config_name: bopv
    data_files:
      - split: train
        path: bopv/train*.jsonl.gz
      - split: validation
        path: bopv/valid*.jsonl.gz
      - split: test
        path: bopv/test*.jsonl.gz
  - config_name: botha
    data_files:
      - split: train
        path: botha/train*.jsonl.gz
      - split: validation
        path: botha/valid*.jsonl.gz
      - split: test
        path: botha/test*.jsonl.gz
  - config_name: parleus
    data_files:
      - split: train
        path: parleus/train*.jsonl.gz
      - split: validation
        path: parleus/valid*.jsonl.gz
      - split: test
        path: parleus/test*.jsonl.gz
  - config_name: aldizkariak
    data_files:
      - split: train
        path: aldizkariak/train*.jsonl.gz
      - split: validation
        path: aldizkariak/valid*.jsonl.gz
      - split: test
        path: aldizkariak/test*.jsonl.gz
  - config_name: cultura-x
    data_files:
      - split: train
        path: cultura-x/train*.jsonl.gz
      - split: validation
        path: cultura-x/valid*.jsonl.gz
      - split: test
        path: cultura-x/test*.jsonl.gz
  - config_name: hplt-v2
    data_files:
      - split: train
        path: hplt-v2/train*.jsonl.gz
      - split: validation
        path: hplt-v2/valid*.jsonl.gz
      - split: test
        path: hplt-v2/test*.jsonl.gz
  - config_name: finepdfs
    data_files:
      - split: train
        path: finepdfs/train*.jsonl.gz
      - split: validation
        path: finepdfs/valid*.jsonl.gz
      - split: test
        path: finepdfs/test*.jsonl.gz
  - config_name: fineweb2
    data_files:
      - split: train
        path: fineweb2/train*.jsonl.gz
      - split: validation
        path: fineweb2/valid*.jsonl.gz
      - split: test
        path: fineweb2/test*.jsonl.gz
  - config_name: colossal-oscar
    data_files:
      - split: train
        path: colossal-oscar/train*.jsonl.gz
      - split: validation
        path: colossal-oscar/valid*.jsonl.gz
      - split: test
        path: colossal-oscar/test*.jsonl.gz
  - config_name: hplt-v1
    data_files:
      - split: train
        path: hplt-v1/train*.jsonl.gz
      - split: validation
        path: hplt-v1/valid*.jsonl.gz
      - split: test
        path: hplt-v1/test*.jsonl.gz
task_categories:
  - fill-mask
  - text-generation
task_ids:
  - language-modeling
  - masked-language-modeling
annotations_creators:
  - no-annotation
multilinguality:
  - monolingual
---

# Latxa Corpus v2

## Dataset Summary

  • Curated by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
  • Language(s): eu-ES

Latxa Corpus v2 is a large-scale monolingual Basque corpus, created by combining curated crawls, public datasets, institutional data, and newly collected resources. Compared to v1.1, it substantially increases coverage, diversity, and volume. The final corpus is deduplicated, filtered, and ready for language model pretraining.
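Each config in the YAML header above maps its splits to shard files via glob patterns (e.g. `wikipedia/train*.jsonl.gz`). As a minimal sketch of how those patterns resolve, using hypothetical shard names (the actual shard naming inside the repository may differ):

```python
from fnmatch import fnmatch

# Hypothetical shard names for illustration only.
shards = [
    "wikipedia/train-00000.jsonl.gz",
    "wikipedia/train-00001.jsonl.gz",
    "wikipedia/valid-00000.jsonl.gz",
    "wikipedia/test-00000.jsonl.gz",
]

# Split-to-pattern mapping taken from the "wikipedia" config above.
patterns = {
    "train": "wikipedia/train*.jsonl.gz",
    "validation": "wikipedia/valid*.jsonl.gz",
    "test": "wikipedia/test*.jsonl.gz",
}

# Resolve each split to its matching shards.
splits = {s: [f for f in shards if fnmatch(f, p)] for s, p in patterns.items()}
print({s: len(fs) for s, fs in splits.items()})
# → {'train': 2, 'validation': 1, 'test': 1}
```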

## 📌 Notice
As of February 13th 2026, this repository reflects a curated version of the original dataset. Some data points have been removed to ensure permission compliance. Researchers with a legitimate need for access to the original data may contact the maintainers for further information.

## Data Sources

## Data Statistics

The size of each dataset in terms of number of documents can be found below:

| Source | Train | Valid | Test |
|:---|---:|---:|---:|
| Aldizkariak | 3,260 | 33 | 33 |
| BOG | 169,665 | 1,731 | 1,731 |
| BOPV | 42,798 | 437 | 887 |
| BOTHA | 92,249 | 941 | 941 |
| Colossal OSCAR | 89,856 | 915 | 915 |
| CulturaX | 870,464 | 8,882 | 8,882 |
| Euscrawl v2 | 1,636,010 | 16,682 | 17,097 |
| FinePDFs | 277,572 | 2,803 | 837 |
| FineWeb2 | 341,296 | 3,482 | 3,482 |
| HPLT v1 | 894,590 | 9,128 | 9,128 |
| HPLT v2 | 567,503 | 5,790 | 5,790 |
| ParlEus | 22,818 | 231 | 560 |
| Wikipedia | 456,273 | 4,655 | 4,655 |
| ZelaiHandi | 189,975 | 1,938 | 1,938 |
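A quick arithmetic check on the table: across all sources, the validation split is roughly 1% of the training split (the test split matches validation for most sources, with BOPV, FinePDFs, and ParlEus deviating). The counts below are copied directly from the table:

```python
# (train, valid, test) document counts, copied from the table above.
counts = {
    "Aldizkariak": (3260, 33, 33),
    "BOG": (169665, 1731, 1731),
    "BOPV": (42798, 437, 887),
    "BOTHA": (92249, 941, 941),
    "Colossal OSCAR": (89856, 915, 915),
    "CulturaX": (870464, 8882, 8882),
    "Euscrawl v2": (1636010, 16682, 17097),
    "FinePDFs": (277572, 2803, 837),
    "FineWeb2": (341296, 3482, 3482),
    "HPLT v1": (894590, 9128, 9128),
    "HPLT v2": (567503, 5790, 5790),
    "ParlEus": (22818, 231, 560),
    "Wikipedia": (456273, 4655, 4655),
    "ZelaiHandi": (189975, 1938, 1938),
}

total_train = sum(t for t, _, _ in counts.values())
# Validation-to-train ratio per source; all land near 0.0102.
ratios = {name: v / t for name, (t, v, _) in counts.items()}
print(f"total train documents: {total_train:,}")
# → total train documents: 5,654,329
```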

## Licensing

We do not claim ownership of any document in the corpus. If you believe the corpus contains material that you own and do not wish to see reproduced here, please contact us at hitz@ehu.eus. For detailed information on the licenses associated with each individual corpus and document in this training dataset, please refer to the "license" field inside "meta", or to the references listed alongside each corpus entry.
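Since licensing is tracked per document via the "license" field inside "meta", a downstream filter can be applied while reading the shards. A minimal sketch, using hypothetical records (the field names beyond `meta.license` are assumptions, as is the `cc-` prefix convention for license identifiers):

```python
import gzip
import json
import os
import tempfile

# Hypothetical records mirroring the corpus layout: one JSON object per
# line, with a "text" field and a "meta" dict carrying "license".
records = [
    {"text": "Kaixo mundua!", "meta": {"license": "cc-by-4.0", "source": "wikipedia"}},
    {"text": "Egun on.", "meta": {"license": "unknown", "source": "cultura-x"}},
]

# Write a tiny shard in the same .jsonl.gz format as the dataset files.
path = os.path.join(tempfile.mkdtemp(), "train-00000.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Keep only documents whose per-document license is an explicit CC license.
kept = []
with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        if doc["meta"]["license"].startswith("cc-"):
            kept.append(doc)

print(len(kept))
# → 1
```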

## Funding

This work is funded by the Basque Government (IKER-GAITU project) and by the Ministerio para la Transformación Digital y de la Función Pública (funded by the EU – NextGenerationEU), within the framework of the ILENIA project (reference 2022/TL22/00215335) and of the project Desarrollo de Modelos ALIA.