---
language:
- ar
license: apache-2.0
widget:
- text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع"
---

# CAMeLBERT-CA NER Model

## Model description

**CAMeLBERT-CA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
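
As a rough illustration of what this kind of fine-tuning setup looks like, the base checkpoint can be loaded for token classification with Hugging Face Transformers. This is a minimal sketch, not the authors' exact recipe: the label list below is an assumption, and the real label inventory, hyperparameters, and training loop are the ones documented in the paper and the CAMeLBERT repository linked above.

```python
# Minimal sketch (not the authors' exact recipe) of a token-classification
# fine-tuning setup on top of the CAMeLBERT-CA checkpoint.
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative IOB label set; the actual ANERcorp label inventory used for
# fine-tuning is defined in the CAMeLBERT fine-tuning code.
labels = ["O", "B-LOC", "I-LOC", "B-ORG", "I-ORG", "B-PERS", "I-PERS", "B-MISC", "I-MISC"]

tokenizer = AutoTokenizer.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-ca")
model = AutoModelForTokenClassification.from_pretrained(
    "CAMeL-Lab/bert-base-arabic-camelbert-ca",
    num_labels=len(labels),
)
# From here: tokenize ANERcorp sentences, align the word-level tags to
# subword tokens, and train (e.g., with transformers.Trainer).
```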
## Intended uses

You can use the CAMeLBERT-CA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline.

#### How to use

To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component:

```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> ner.predict_sentence(sentence)
['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```
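
The labels are returned in the same order as the input tokens, so each word can be paired with its predicted tag. The snippet below is an illustrative example, not taken from the CAMeL Tools documentation:

```python
>>> from camel_tools.ner import NERecognizer
>>> from camel_tools.tokenizers.word import simple_word_tokenize
>>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع')
>>> # Pair each token with its predicted NER label
>>> for token, label in zip(sentence, ner.predict_sentence(sentence)):
...     print(token, label)
```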
You can also use the NER model directly with a transformers pipeline:

```python
>>> from transformers import pipeline
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
[{'word': 'أبوظبي',
  'score': 0.9895730018615723,
  'entity': 'B-LOC',
  'index': 2,
  'start': 6,
  'end': 12},
 {'word': 'الإمارات',
  'score': 0.8156259655952454,
  'entity': 'B-LOC',
  'index': 8,
  'start': 33,
  'end': 41},
 {'word': 'العربية',
  'score': 0.890906810760498,
  'entity': 'I-LOC',
  'index': 9,
  'start': 42,
  'end': 49},
 {'word': 'المتحدة',
  'score': 0.8169114589691162,
  'entity': 'I-LOC',
  'index': 10,
  'start': 50,
  'end': 57}]
```
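
If you prefer whole entity spans rather than per-token predictions, newer versions of transformers (4.x) accept an aggregation strategy in the pipeline. This option is a suggestion on our part and is not part of the instructions above:

```python
>>> from transformers import pipeline
>>> # aggregation_strategy requires transformers 4.x
>>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-ner', aggregation_strategy='simple')
>>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع")
```

With this setting, consecutive B-LOC/I-LOC tokens such as الإمارات العربية المتحدة should be merged into a single entity span.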
*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
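
If you cannot upgrade transformers, one way to fetch the model files manually (an illustrative option, not the authors' documented workflow) is via `huggingface_hub`:

```python
>>> from huggingface_hub import snapshot_download
>>> # Downloads the repository contents to the local Hugging Face cache
>>> snapshot_download(repo_id='CAMeL-Lab/bert-base-arabic-camelbert-ca-ner')
```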
## Citation

```bibtex
@inproceedings{inoue-etal-2021-interplay,
    title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
    author = "Inoue, Go  and
      Alhafni, Bashar  and
      Baimukan, Nurpeiis  and
      Bouamor, Houda  and
      Habash, Nizar",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Online)",
    publisher = "Association for Computational Linguistics",
    abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```