How to use chunwoolee0/klue_nli_roberta_base_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="chunwoolee0/klue_nli_roberta_base_model")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/klue_nli_roberta_base_model")
model = AutoModelForSequenceClassification.from_pretrained("chunwoolee0/klue_nli_roberta_base_model")
```

This model is a fine-tuned version of klue/roberta-base on the KLUE NLI dataset. It achieves the results shown in the training table below on the evaluation set.
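For a quick inference check, here is a minimal sketch of scoring a premise/hypothesis pair with the tokenizer and model loaded above; the example sentences are illustrative assumptions, not part of the original card, and label names depend on the checkpoint's config:

```python
import torch

# Hypothetical Korean premise/hypothesis pair (illustrative only)
premise = "흡연은 건강에 해롭다."            # "Smoking is harmful to health."
hypothesis = "담배를 피우면 건강을 해친다."  # "Smoking cigarettes damages health."

# Encode the sentence pair jointly, as NLI models expect
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
# id2label comes from the model config; label names may vary by checkpoint
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], f"{p.item():.3f}")
```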
The base checkpoint, klue/roberta-base, is a RoBERTa model pretrained on Korean. See the KLUE GitHub repository and paper for more details.
NOTE: Use BertTokenizer instead of RobertaTokenizer (AutoTokenizer will load BertTokenizer).
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("klue/roberta-base")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
```
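To see the tokenizer note in action, a small sketch (the sample sentence is an assumption) that confirms AutoTokenizer resolves to a BERT-style tokenizer for this checkpoint and runs a forward pass:

```python
# AutoTokenizer resolves to a BERT-style tokenizer for klue/roberta-base
print(type(tokenizer).__name__)  # e.g. BertTokenizerFast, not RobertaTokenizer

inputs = tokenizer("한국어 문장입니다.", return_tensors="pt")  # "This is a Korean sentence."
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for roberta-base
```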
Training ran for five epochs, with the following results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.5988 | 1.0 | 782 | 0.4378 | 0.8363 |
| 0.2753 | 2.0 | 1564 | 0.4169 | 0.8510 |
| 0.1735 | 3.0 | 2346 | 0.5267 | 0.8607 |
| 0.0956 | 4.0 | 3128 | 0.6275 | 0.8683 |
| 0.0708 | 5.0 | 3910 | 0.6867 | 0.8653 |
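For reproducibility, a minimal fine-tuning sketch using the Trainer API; since the hyperparameter list is missing from this card, the learning rate and other settings below are assumptions rather than the author's recorded configuration (batch size 32 is inferred from the 782 steps per epoch in the table):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# KLUE NLI has 3 labels (entailment / neutral / contradiction)
dataset = load_dataset("klue", "nli")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("klue/roberta-base", num_labels=3)

def tokenize(batch):
    # Encode premise/hypothesis pairs jointly, as in standard NLI fine-tuning
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

# Assumed hyperparameters: batch size 32 matches the 782 steps/epoch above;
# the learning rate is a common default, not a recorded value.
args = TrainingArguments(
    output_dir="klue_nli_roberta_base_model",
    num_train_epochs=5,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```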