How to use jiiyy/bert_multilingual with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="jiiyy/bert_multilingual")
```
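To sanity-check the pipeline, pass it a short review. The sample input and the `LABEL_0`/`LABEL_1` names below are assumptions; auto-generated fine-tunes often keep the default label ids unless `id2label` was customized.

```python
# NSMC is binary sentiment over Korean movie reviews, so the pipeline
# returns one of two labels with a confidence score.
print(pipe("이 영화 정말 재미있었어요"))  # "this movie was really fun"
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]  <- label names are assumed
```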
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jiiyy/bert_multilingual")
model = AutoModelForSequenceClassification.from_pretrained("jiiyy/bert_multilingual")
```
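With the tokenizer and model loaded directly, inference is a forward pass plus a softmax. A minimal sketch (the sample sentence is illustrative):

```python
import torch

# Tokenize one review and run a forward pass.
inputs = tokenizer("배우 연기가 너무 아쉬웠다", return_tensors="pt")  # "the acting was disappointing"
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)
print(probs, probs.argmax(dim=-1).item())  # predicted class id: 0 or 1
```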
This model is a fine-tuned version of bert-base-multilingual-cased on the nsmc dataset (e9t/nsmc, Korean movie-review sentiment). It achieves the following results on the evaluation set (final-epoch values from the table below):
- Loss: 0.3346
- Accuracy: 0.8661
Model description: More information needed.
Intended uses & limitations: More information needed.
Training and evaluation data: More information needed.
Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.3619 | 1.0 | 9375 | 0.3406 | 0.8516 |
| 0.2989 | 2.0 | 18750 | 0.3243 | 0.8644 |
| 0.2655 | 3.0 | 28125 | 0.3346 | 0.8661 |
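A sketch of how the final-row accuracy could be re-checked, reusing the `tokenizer` and `model` loaded above. It assumes the e9t/nsmc Hub dataset loads cleanly and exposes a `test` split with `document`/`label` columns:

```python
import torch
from datasets import load_dataset

test = load_dataset("e9t/nsmc", split="test")
correct, total = 0, 0
for batch in test.iter(batch_size=64):
    enc = tokenizer(batch["document"], padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)
    correct += (preds == torch.tensor(batch["label"])).sum().item()
    total += len(batch["label"])
print(f"accuracy: {correct / total:.4f}")  # table reports 0.8661 at epoch 3
```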
Base model: google-bert/bert-base-multilingual-cased
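The card's hyperparameter list did not survive, so the following is a generic Trainer sketch of such a fine-tune, not the author's exact recipe. The epoch count (3) and effective batch size (16) are inferred from the table (9375 steps per epoch over NSMC's 150,000 training rows); every other setting is a placeholder assumption.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ds = load_dataset("e9t/nsmc")
tok = AutoTokenizer.from_pretrained("google-bert/bert-base-multilingual-cased")
ds = ds.map(lambda b: tok(b["document"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-multilingual-cased", num_labels=2)

args = TrainingArguments(
    output_dir="bert_multilingual-nsmc",  # hypothetical path
    num_train_epochs=3,                   # from the results table
    per_device_train_batch_size=16,       # inferred: 150,000 rows / 9,375 steps
    learning_rate=2e-5,                   # assumption: a common BERT fine-tune default
)
trainer = Trainer(model=model, args=args, train_dataset=ds["train"],
                  eval_dataset=ds["test"], tokenizer=tok)
trainer.train()
```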