# Malay BERT for Sentiment Analysis

Fine-tuned BERT model for Malay sentiment analysis with 3-class classification.
## Label Mapping

**Important:** This model uses the following label mapping:
```python
id2label = {
    0: "negative",
    1: "neutral",
    2: "positive",
}
label2id = {
    "negative": 0,
    "neutral": 1,
    "positive": 2,
}
```
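Since the pipeline can emit raw `LABEL_N` strings, the mapping above can be used to normalize a prediction into a human-readable sentiment. A minimal sketch; `readable_label` is a hypothetical helper name, not part of the model or the transformers library:

```python
id2label = {0: "negative", 1: "neutral", 2: "positive"}

def readable_label(raw):
    """Map a raw prediction ("LABEL_2", "2", or 2) to its sentiment string."""
    if isinstance(raw, str):
        # "LABEL_2" -> "2" -> 2; a plain "2" passes through unchanged
        raw = int(raw.removeprefix("LABEL_"))
    return id2label[raw]

print(readable_label("LABEL_2"))  # positive
```

This works both on pipeline output (`result[0]["label"]`) and on integer class ids from a raw argmax.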
## Quick Usage
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="rmtariq/ft-Malay-bert")
result = classifier("Saya sangat gembira!")  # "I am very happy!"
print(result)
# [{'label': 'LABEL_2', 'score': 0.995}]
# LABEL_2 = positive
```
## Label Interpretation

- `LABEL_0` or `0` → negative sentiment
- `LABEL_1` or `1` → neutral sentiment
- `LABEL_2` or `2` → positive sentiment
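If the model is loaded directly (e.g. via `AutoModelForSequenceClassification`) instead of through the pipeline, the raw outputs are logits over these three classes. A minimal sketch of turning logits into a label and confidence with a plain softmax; the logit values here are illustrative, not real model output:

```python
import math

id2label = {0: "negative", 1: "neutral", 2: "positive"}

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return (label, score) for the highest-probability class."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return id2label[idx], probs[idx]

# Illustrative logits, as in model(**inputs).logits[0].tolist()
label, score = predict([-1.2, 0.3, 4.1])
print(label)  # positive
```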
## Model Details
- Language: Malay (Bahasa Malaysia)
- Task: Sentiment Analysis
- Classes: 3 (negative, neutral, positive)
- Base Model: BERT
## Training
This model was fine-tuned on Malay sentiment analysis data.
## Limitations
- Optimized for Malaysian Malay text
- May have reduced performance on other Malay dialects
- Mixed language performance may vary
## Citation

```bibtex
@misc{ft-malay-bert,
  author    = {rmtariq},
  title     = {Fine-tuned Malay BERT for Sentiment Analysis},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/rmtariq/ft-Malay-bert}
}
```
## Evaluation Results

- Accuracy on Malay Sentiment Dataset (self-reported): 0.850