---
license: cc-by-nc-4.0
language:
- ar
tags:
- Arabic BERT
- Poetry
- Masked Language Model
---
**AraPoemBERT** is the first pre-trained large language model focused exclusively on Arabic poetry. The dataset used in pretraining the model contains more than 2 million verses. The code and results are available in the [GitHub repository](https://github.com/FaisalQarah/araPoemBERT).
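As a BERT-style masked language model, AraPoemBERT can be queried for masked-word prediction, for example to suggest completions for a verse. Below is a minimal sketch using the Hugging Face `transformers` fill-mask pipeline; the repository id `faisalq/araPoemBERT` and the helper function names are assumptions for illustration, not confirmed by this card:

```python
def mask_last_word(verse: str, mask_token: str = "[MASK]") -> str:
    """Replace the last word of a verse with the BERT mask token."""
    words = verse.split()
    return " ".join(words[:-1] + [mask_token])


def predict_completions(masked_verse: str, top_k: int = 5):
    """Rank candidate words for the masked position with AraPoemBERT."""
    from transformers import pipeline  # requires the transformers package

    # "faisalq/araPoemBERT" is an assumed repository id -- verify on the Hub.
    fill = pipeline("fill-mask", model="faisalq/araPoemBERT")
    return [(c["token_str"], c["score"]) for c in fill(masked_verse, top_k=top_k)]
```

For example, `predict_completions(mask_last_word("قفا نبك من ذكرى حبيب ومنزل"))` would return the model's top-ranked candidates for the final word of the verse.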
# BibTeX
If you use the AraPoemBERT model in your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (citation details to be updated):
```bibtex
@article{qarah2024arapoembert,
  title={AraPoemBERT: A Pretrained Language Model for Arabic Poetry Analysis},
  author={Qarah, Faisal},
  journal={arXiv preprint arXiv:2403.12392},
  year={2024}
}
```