---
library_name: transformers
tags:
- readability
license: mit
base_model:
- aubmindlab/bert-base-arabertv02
pipeline_tag: text-classification
---
# AraBERTv02+Word+CE Readability Model
## Model description
**AraBERTv02+Word+CE** is a readability assessment model built by fine-tuning the **AraBERTv02** model with a cross-entropy loss (**CE**).
For fine-tuning, we used the **Word** input variant of [BAREC-Corpus-v1.0](https://huggingface.co/datasets/CAMeL-Lab/BAREC-Corpus-v1.0).
The fine-tuning procedure and hyperparameters are described in our paper *"[A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment](https://arxiv.org/abs/2502.13520)."*
## Intended uses
You can use the AraBERTv02+Word+CE model to assess the readability level of Arabic text with the transformers text-classification pipeline.
## How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> readability = pipeline("text-classification", model="CAMeL-Lab/readability-arabertv02-word-CE")
>>> # Example Arabic sentence meaning "And he told him that he likes eating food a lot"
>>> text = 'و قال له انه يحب اكل الطعام بكثره'
>>> # Labels have the form "LABEL_<k>" (0-indexed); add 1 for the 1-indexed readability level
>>> readability_level = int(readability(text)[0]['label'][6:]) + 1
>>> print(f"readability level: {readability_level}")
readability level: 10
```
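The `label` field returned by the pipeline has the form `LABEL_<k>`, where `<k>` is the 0-indexed class id, so the `[6:]` slice strips the `LABEL_` prefix before adding 1. A small helper makes this conversion explicit (a sketch, assuming the labels follow that `LABEL_<k>` convention; the `label_to_level` name is ours, not part of the model's API):

```python
def label_to_level(label: str) -> int:
    """Convert a pipeline label like 'LABEL_9' to a 1-indexed readability level."""
    # Strip the "LABEL_" prefix, then shift from 0-indexed class id to 1-indexed level.
    return int(label.removeprefix("LABEL_")) + 1

print(label_to_level("LABEL_9"))  # → 10
```

This is equivalent to the inline slicing above, but is robust to reading and easier to reuse when post-processing batched pipeline outputs.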
## Citation
```bibtex
@inproceedings{elmadani-etal-2025-readability,
title = "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment",
author = "Elmadani, Khalid N. and
Habash, Nizar and
Taha-Thomure, Hanada",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics"
}
```