Update README.md

README.md CHANGED

@@ -10,7 +10,7 @@ license: apache-2.0
 language:
 - am
 - ti
-
+This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and fine-tuned for sentiment analysis with the TweetEval benchmark. The original Twitter-based RoBERTa model can be found here, and the original reference paper is TweetEval. This model is suitable for Amharic and Tigrinya.
 ---
 from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
 model_name = "Hailay/FT_EXLMR"
 tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
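The hunk above stops after loading the tokenizer. A hedged sketch of the full inference path could look like the following; the function name `classify` and its defaults are illustrative, not from the card, and the imports are kept inside the function so the file loads even where `torch` and `transformers` are not installed:

```python
def classify(text, model_name="Hailay/FT_EXLMR"):
    """Sketch: run the fine-tuned XLM-RoBERTa classifier on one sentence.

    Requires `transformers` and `torch`; the first call downloads the weights.
    """
    import torch
    from transformers import (
        XLMRobertaForSequenceClassification,
        XLMRobertaTokenizer,
    )

    tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
    model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)  # same forward pass as shown in the card
    # Index of the highest-scoring class; map it via model.config.id2label.
    return outputs.logits.argmax(dim=-1).item()
```

Calling `classify` on an Amharic or Tigrinya sentence would return the predicted class index; the card does not document the label mapping, so consult `model.config.id2label`.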
@@ -22,7 +22,7 @@ outputs = model(**inputs)
 # Model Card for Model ID
 Model Card Summary: Hailay/FT_EXLMR
 Model Name: Hailay/FT_EXLMR
-Type: XLM-
+Type: XLM-RoBERTa model for sequence classification
 Language(s): [Languages supported by the model]
 License: [License type, e.g., Apache 2.0]
 Pre-trained Model: xlm-roberta-base
@@ -35,13 +35,8 @@ Key Features:
 Trained Data: Custom dataset with text and labels
 Training Details: 3 epochs, learning rate of 1e-5
 Evaluation: Accuracy and loss metrics
-Getting Started:
-
 Code Example: Load the model and tokenizer, then use them for text classification.
 Considerations:

-Bias & Risks: Assess for biases; evaluate suitability for specific applications
-Environmental Impact: [Details about hardware and training time]
-Citation:

 BibTeX & APA formats available
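The card's "Code Example" item ends at the forward pass; turning the classifier's logits into a label and a confidence takes one more softmax/argmax step. A dependency-free sketch of that post-processing follows; the three-way sentiment label map is an assumption, since the card does not state it — check the model's `config.id2label`:

```python
import math

def predict_label(logits, id2label):
    """Turn raw sequence-classification logits into (label, probability)."""
    # Numerically stable softmax over the logit vector.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

# Hypothetical label map; the real one lives in the model's config.id2label.
labels = {0: "negative", 1: "neutral", 2: "positive"}
label, prob = predict_label([0.1, 0.2, 2.5], labels)  # picks "positive"
```

The same function works unchanged on a logits row pulled out of `outputs.logits.tolist()[0]` from the transformers snippet above.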