# Aitana-Tourism-Encoder (Spanish & Valencian)

A ModernBERT-base model continually pretrained on tourism-domain data in Spanish and Valencian. This specialized encoder is optimized for understanding tourism-related texts, including hotel descriptions, destination guides, travel services, and cultural-heritage content.
## Model Details
| Attribute | Value |
|---|---|
| Base Model | answerdotai/ModernBERT-base |
| Architecture | FlexBERT (22 layers, 768 hidden, 12 heads) |
| Parameters | ~149M |
| Vocabulary Size | 256,000 tokens |
| Max Sequence Length | 8,192 tokens |
| Languages | Spanish (es), Valencian (va) |
| Domain | Tourism |
## Training Data
This model was trained on the gplsi/alia_tourism dataset, filtered for Spanish and Valencian languages.
### Dataset Statistics
| Metric | Value |
|---|---|
| Total Documents | 66,548 |
| Spanish Documents | 49,644 (74.6%) |
| Valencian Documents | 16,904 (25.4%) |
| Raw Text Size | 1.2 GB |
| Training Samples | 80,839 |
| Validation Samples | 8,862 |
| Total Tokens (Train) | ~348 million |
| Tokens Seen (4 epochs) | ~1.39 billion |
### Data Processing Pipeline

- Download: Extracted from the `gplsi/alia_tourism` Hugging Face dataset
- Filtering: Selected only the `language=["es", "va"]` subsets
- Tokenization: BPE tokenization with the MrBERT tokenizer (256k vocab)
- Chunking: Packed into 8,192-token sequences
- Split: 90% train / 10% validation
## Training Configuration
| Parameter | Value |
|---|---|
| Training Epochs | 4 |
| Sequence Length | 8,192 |
| MLM Probability | 30% (train), 15% (eval) |
| Batch Size | 32 |
| Learning Rate | 5e-5 (cosine decay to 5e-6) |
| Warmup | 101 batches (1%) |
| Optimizer | StableAdamW |
| Precision | bfloat16 |
| Hardware | 1× NVIDIA RTX 4090 |
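The 30% masking rate above applies during training. As a minimal sketch of how BERT-style masking works (the standard 80% `[MASK]` / 10% random / 10% unchanged recipe is assumed here; `MASK_ID` is a hypothetical token id):

```python
import random

MASK_ID = 4          # assumed id of the [MASK] token
VOCAB_SIZE = 256000  # vocabulary size from the model card

def mask_tokens(token_ids, mlm_probability=0.30, seed=None):
    """Return (inputs, labels): labels are -100 (ignored by the loss)
    except at masked positions, which keep the original token id."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in token_ids:
        if rng.random() < mlm_probability:
            labels.append(tok)
            roll = rng.random()
            if roll < 0.8:              # 80%: replace with [MASK]
                inputs.append(MASK_ID)
            elif roll < 0.9:            # 10%: replace with a random token
                inputs.append(rng.randrange(VOCAB_SIZE))
            else:                       # 10%: keep the original token
                inputs.append(tok)
        else:
            labels.append(-100)         # not selected: excluded from the loss
            inputs.append(tok)
    return inputs, labels
```

In practice this is handled by a data collator (e.g. `DataCollatorForLanguageModeling` with `mlm_probability=0.30`), but the logic is the same.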
## Training Results
| Epoch | Training Loss | Masked Accuracy |
|---|---|---|
| 1 | 2.84 → 1.30 | 80.64% → 84.39% |
| 2 | 1.07 → 1.05 | 85.67% |
| 3 | 0.92 → 1.26 | 86.11% |
| Final | 1.26 | 86.11% |
### Key Achievements
- ✅ 87% loss reduction (9.4 → 1.26)
- ✅ +5.5 pp accuracy gain (80.6% → 86.1%)
- ✅ No overfitting observed
- ✅ Stable gradients throughout training
## Usage

### With Transformers
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch

model = AutoModelForMaskedLM.from_pretrained("gplsi/Aitana-tourism-mb-encoder-1.0")
tokenizer = AutoTokenizer.from_pretrained("gplsi/Aitana-tourism-mb-encoder-1.0")

# Fill-mask example
text = "El hotel ofrece vistas [MASK] al mar Mediterráneo."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Get the prediction for the masked position
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_token_id = outputs.logits[0, mask_token_index].argmax(dim=-1)
print(tokenizer.decode(predicted_token_id))
```
### For Embeddings
```python
from transformers import AutoModel, AutoTokenizer
import torch

model = AutoModel.from_pretrained("gplsi/Aitana-tourism-mb-encoder-1.0")
tokenizer = AutoTokenizer.from_pretrained("gplsi/Aitana-tourism-mb-encoder-1.0")

text = "Descubre las playas de la Costa Blanca"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs.last_hidden_state.mean(dim=1)  # Mean pooling over tokens
```
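For semantic search, pooled embeddings are typically compared with cosine similarity. A minimal sketch in plain Python (on real tensors, `torch.nn.functional.cosine_similarity` does the same):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Ranking documents by cosine similarity against a query embedding is the usual retrieval setup for the semantic-search use case described below.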
## Intended Use
### Primary Use Cases
- Tourism NLP: Named entity recognition, text classification, sentiment analysis for tourism content
- Semantic Search: Document retrieval and similarity for travel-related queries
- Information Extraction: Extracting entities like hotels, destinations, amenities
- Multilingual Tourism: Processing Spanish and Valencian tourism texts
### Out-of-Scope Uses
- General-purpose language understanding outside tourism domain
- Languages other than Spanish and Valencian
- Text generation (this is an encoder-only model)
## Limitations
- Domain-specific: Performance may degrade on non-tourism texts
- Language coverage: Optimized for Spanish (es) and Valencian (va) only
- Encoder-only: Cannot generate text, only encode/understand
## Ethical Considerations
The training data is automatically curated from tourism sources and may contain:
- Geographic and cultural biases toward specific regions
- Commercial content from tourism businesses
- Limited representation of certain destinations or services
Users should evaluate the model's outputs for fairness and bias in their specific applications.
## Additional Information
### Author
The model has been developed by the Language and Information Systems Group (GPLSI) and the Centro de Inteligencia Digital (CENID), both part of the University of Alicante (UA), as part of their ongoing research in Natural Language Processing (NLP).
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
### Acknowledgments
We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.
Special thanks to:
- Language Technologies Laboratory at Barcelona Supercomputing Center
- Centro Vasco de Tecnología de la Lengua (HiTZ)
- Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)
- Sistemas Inteligentes de Acceso a la Información (SINAI)
- Instituto Universitario de Investigación Informática (IUII)
- Leonardo HPC System
- European supercomputing ecosystem (EUROHPC)
- Answer.AI for the original ModernBERT architecture
- MosaicML/Databricks for the Composer training framework
We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Desarrollo de Modelos ALIA, whose contribution has been essential to the completion of this research.
### License
This model is released under the Apache License 2.0.
### Disclaimer

This model is made available under a permissive Apache License 2.0. Be aware that it may produce biased and/or undesirable outputs. Users deploying systems based on this model are responsible for mitigating those risks and for complying with applicable AI regulations.
### Reference
If you use this model, please cite:
```bibtex
@misc{modernbert-tourism-2025,
  author       = {Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Estevanell-Valladares, Ernesto L. and Galeano, Santiago and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Miró Maestre, María and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title        = {Aitana Tourism Encoder: Domain-Adapted Language Model for Spanish and Valencian Tourism},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-tourism-mb-encoder-1.0}}
}
```
Copyright © 2025 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.