---
license: apache-2.0
---

# Overview

<p align="center">
<img src="https://avatars.githubusercontent.com/u/12619994?s=200&v=4" width="150">
</p>

<!-- -------------------------------------------------------------------------------- -->

This model is **only compatible** with the code in [this GitHub repo](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch); it is not supported by the [Transformers](https://github.com/huggingface/transformers) library.

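Since the checkpoint cannot be loaded with `transformers`, one workflow is to download the model files and then run them with the scripts from the repo above. Below is a minimal sketch using `huggingface_hub`; the `repo_id` shown is a placeholder for illustration, not this model's confirmed Hub id:

```python
# Minimal sketch: fetch the checkpoint files, then use them with the
# JABER-PyTorch scripts (transformers.AutoModel will NOT work here).
from huggingface_hub import snapshot_download

# NOTE: "huawei-noah/JABERv2" is a hypothetical repo_id used for illustration;
# replace it with the actual Hub id of this model.
local_dir = snapshot_download(repo_id="huawei-noah/JABERv2")
print(f"Model files downloaded to: {local_dir}")

# From here, follow the instructions in
# https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch
# to load and run the model.
```
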
## Citation

Please cite the following paper when using our code and model:

```bibtex
@misc{ghaddar2024importance,
      title={On the importance of Data Scale in Pretraining Arabic Language Models},
      author={Abbas Ghaddar and Philippe Langlais and Mehdi Rezagholizadeh and Boxing Chen},
      year={2024},
      eprint={2401.07760},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```