---
language:
- ar
metrics:
- perplexity
base_model:
- aubmindlab/bert-base-arabert
pipeline_tag: fill-mask
datasets:
- big_arabic_train
- big_arabic_val
library_name: transformers
tags:
- egyptian-arabic
- fine-tuned
- arabert
license: apache-2.0
---
# EgBERT: Fine-Tuned AraBERT for Egyptian Arabic
## Model Description
This model is a fine-tuned version of [AraBERT (aubmindlab/bert-base-arabert)](https://huggingface.co/aubmindlab/bert-base-arabert) on datasets containing Egyptian Arabic. It is optimized for the masked language modeling (MLM) task.
Key Features:
- **Task**: Masked Language Modeling
- **Language**: Egyptian Arabic
- **Base Model**: BERT (AraBERT)
## Training Details
- **Dataset**:
  - A custom dataset of Egyptian Arabic collected from conversational text sources.
  - Preprocessed to keep common colloquial phrases and reduce noise.
- **Training Setup** (a minimal sketch appears after this list):
  - Pre-trained model: `aubmindlab/bert-base-arabert`
  - Fine-tuning for 3 epochs with a batch size of 16
  - Learning rate: 2e-5
  - MLM probability: 15%
- **Tools**:
  - Hugging Face Transformers
  - PyTorch
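The training script itself is not published; the following is a minimal sketch of an MLM fine-tuning run matching the hyperparameters above, using the Transformers `Trainer`. The local file names and the `text` column are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch of the fine-tuning setup described above.
# The data file names and text column are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
model = AutoModelForMaskedLM.from_pretrained("aubmindlab/bert-base-arabert")

# Assumed local files; the card only names big_arabic_train / big_arabic_val.
raw = load_dataset("text", data_files={"train": "big_arabic_train.txt",
                                       "validation": "big_arabic_val.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Dynamically masks 15% of tokens, matching the MLM probability above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="egbert-mlm",
    num_train_epochs=3,              # 3 epochs, per the card
    per_device_train_batch_size=16,  # batch size 16
    learning_rate=2e-5,              # learning rate 2e-5
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```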
## Evaluation Results
| Model | Perplexity |
| --- | --- |
| Baseline AraBERT | 36.2377 |
| Fine-tuned EgBERT | 26.5359 |

The fine-tuned model achieves a lower perplexity than the baseline AraBERT model, indicating stronger masked language modeling performance on Egyptian Arabic.
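The card does not state how perplexity was computed; a common convention, assumed here, is the exponential of the mean masked-token cross-entropy loss on the validation set:

```python
# Assumed convention: perplexity = exp(mean eval cross-entropy loss).
# Reuses the `trainer` from the fine-tuning sketch above.
import math

eval_metrics = trainer.evaluate()
perplexity = math.exp(eval_metrics["eval_loss"])
print(f"Perplexity: {perplexity:.4f}")
```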
## How to Use
Here is an example of how to use EgBERT in your project:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the fine-tuned model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("noortamerr/EgBERT")
model = AutoModelForMaskedLM.from_pretrained("noortamerr/EgBERT")

# Input text with a masked token
# ("Football in Egypt is [MASK] something everyone follows.")
text = "الكورة في مصر [MASK] حاجة كل الناس بتتابعها."

# Tokenize and locate the [MASK] position
inputs = tokenizer(text, return_tensors="pt")
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

# Run the model without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits

# Decode the top 5 predictions for the [MASK] token
mask_token_logits = predictions[0, mask_token_index, :]
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
predicted_words = [tokenizer.decode([token]) for token in top_5_tokens]
print(f"Predicted words: {predicted_words}")
```
## Citation
```bibtex
@misc{EgBERT,
  author = {Noor Tamer and Roba Mahmoud and Orchid Hazem},
  title = {EgBERT: Fine-Tuned AraBERT for Egyptian Arabic},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/noortamerr/EgBERT}
}
```