---
language:
- eu
configs:
- config_name: BSMauthor
  data_files:
  - split: train
    path: BSMauthor/train.jsonl.gz
  - split: validation
    path: BSMauthor/validation.jsonl.gz
  - split: test
    path: BSMauthor/test.jsonl.gz
- config_name: BSMtime
  data_files:
  - split: train
    path: BSMtime/train.jsonl.gz
  - split: validation
    path: BSMtime/validation.jsonl.gz
  - split: test
    path: BSMtime/test.jsonl.gz
- config_name: EKC
  data_files:
  - split: train
    path: EKC/train.jsonl.gz
  - split: validation
    path: EKC/validation.jsonl.gz
  - split: test
    path: EKC/test.jsonl.gz
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
annotations_creators:
- no-annotation
multilinguality:
- monolingual
license: apache-2.0
---

# BERnaT: Basque Encoders for Representing Natural Textual Diversity

Submitted to LREC 2026

## Abstract

Language models depend on massive text corpora that are often filtered for quality, a process that can unintentionally exclude non-standard linguistic varieties, reduce model robustness and reinforce representational biases. In this paper, we argue that language models should aim to capture the full spectrum of language variation (dialectal, historical, informal, etc.) rather than relying solely on standardized text. Focusing on Basque, a morphologically rich and low-resource language, we construct new corpora combining standard, social media, and historical sources, and pre-train the BERnaT family of encoder-only models in three configurations: standard, diverse, and combined. We further propose an evaluation framework that separates Natural Language Understanding (NLU) tasks into standard and diverse subsets to assess linguistic generalization. Results show that models trained on both standard and diverse data consistently outperform those trained on standard corpora, improving performance across all task types without compromising standard benchmark accuracy. These findings highlight the importance of linguistic diversity in building inclusive, generalizable language models.

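## Loading the data

Each of the three configurations declared in the YAML header (BSMauthor, BSMtime, and EKC) ships gzipped JSONL files for the train, validation, and test splits, so the corpora can be loaded directly with the Hugging Face `datasets` library. The sketch below is a minimal example, not a confirmed recipe: the repository ID is a placeholder, and the field names of each JSON record are an assumption about the schema, so adjust both to match the actual Hub repository.

```python
from datasets import load_dataset

# Placeholder: replace with this dataset's actual Hub repository ID.
REPO_ID = "<org>/<dataset-name>"

# Each config exposes train/validation/test splits, stored as the
# gzipped JSONL files declared in the YAML header above.
for config in ("BSMauthor", "BSMtime", "EKC"):
    ds = load_dataset(REPO_ID, config)
    print(config, {split: ds[split].num_rows for split in ds})

# Inspect one record; its field names depend on the JSONL schema.
ekc_train = load_dataset(REPO_ID, "EKC", split="train")
print(ekc_train[0])
```
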
## Acknowledgments

This work has been partially supported by the Basque Government (Research group funding IT1570-22 and IKER-GAITU project), the Spanish Ministry for Digital Transformation and Civil Service, and the EU-funded NextGenerationEU Recovery, Transformation and Resilience Plan (ILENIA project, 2022/TL22/00215335; and ALIA project). The project also received funding from the European Union’s Horizon Europe research and innovation program under Grant Agreement No 101135724, Topic HORIZON-CL4-2023-HUMAN-01-21, and from DeepKnowledge (PID2021-127777OB-C21), funded by MCIN/AEI/10.13039/501100011033 and FEDER. Jaione Bengoetxea, Julen Etxaniz and Ekhi Azurmendi hold a PhD grant from the Basque Government (PRE_2024_1_0028, PRE_2024_2_0028 and PRE_2024_1_0035, respectively). Maite Heredia and Mikel Zubillaga hold a PhD grant from the University of the Basque Country UPV/EHU (PIF23/218 and PIF24/04, respectively). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2024E01-042.

## Citation

To cite our work, please use:

```bibtex
@misc{azurmendi2025bernatbasqueencodersrepresenting,
  title={BERnaT: Basque Encoders for Representing Natural Textual Diversity},
  author={Ekhi Azurmendi and Joseba Fernandez de Landa and Jaione Bengoetxea and Maite Heredia and Julen Etxaniz and Mikel Zubillaga and Ander Soraluze and Aitor Soroa},
  year={2025},
  eprint={2512.03903},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.03903},
}
```