How to use kennethge123/sst-bert-base-uncased with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="kennethge123/sst-bert-base-uncased")
```

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kennethge123/sst-bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("kennethge123/sst-bert-base-uncased")
```

This model is a fine-tuned version of bert-base-uncased on the sst dataset. It achieves the evaluation results shown in the training table below.
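A minimal usage sketch follows. The example sentence is arbitrary, and because the results table below reports MSE, the head was most likely trained as a single-output regression over SST's continuous sentiment scores, so expect a score rather than a discrete label.

```python
# Hedged usage sketch: the sentence is arbitrary, and the output format depends
# on how the classification head was configured (the MSE metric in the results
# table suggests a single-output regression head).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "kennethge123/sst-bert-base-uncased"

# High-level pipeline call
pipe = pipeline("text-classification", model=model_id)
print(pipe("A gorgeous, witty, seductive movie."))

# Direct forward pass
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("A gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a single continuous sentiment score if the head is regression
```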
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch with assumed values follows the results table):

### Training results

| Training Loss | Epoch | Step | Validation Loss | MSE |
|---|---|---|---|---|
| 0.0428 | 1.0 | 534 | 0.0218 | 0.0218 |
| 0.0169 | 2.0 | 1068 | 0.0216 | 0.0216 |
| 0.0096 | 3.0 | 1602 | 0.0231 | 0.0232 |
| 0.0066 | 4.0 | 2136 | 0.0222 | 0.0223 |
| 0.0046 | 5.0 | 2670 | 0.0220 | 0.0220 |
| 0.0035 | 6.0 | 3204 | 0.0209 | 0.0210 |
| 0.0029 | 7.0 | 3738 | 0.0226 | 0.0227 |
| 0.0025 | 8.0 | 4272 | 0.0211 | 0.0211 |
| 0.0023 | 9.0 | 4806 | 0.0207 | 0.0208 |
| 0.002 | 10.0 | 5340 | 0.0218 | 0.0218 |
| 0.0017 | 11.0 | 5874 | 0.0201 | 0.0202 |
| 0.0015 | 12.0 | 6408 | 0.0212 | 0.0212 |
| 0.0014 | 13.0 | 6942 | 0.0202 | 0.0202 |
| 0.0012 | 14.0 | 7476 | 0.0205 | 0.0206 |
| 0.0009 | 15.0 | 8010 | 0.0203 | 0.0203 |
| 0.0008 | 16.0 | 8544 | 0.0202 | 0.0202 |
| 0.0007 | 17.0 | 9078 | 0.0206 | 0.0207 |
| 0.0006 | 18.0 | 9612 | 0.0200 | 0.0200 |
| 0.0005 | 19.0 | 10146 | 0.0201 | 0.0201 |
| 0.0005 | 20.0 | 10680 | 0.0201 | 0.0201 |
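For context on the schedule above: 534 steps per epoch over SST's 8,544 training sentences implies a per-device batch size of 16, and the MSE column implies a single-output regression head. The sketch below is a hedged reproduction under those assumptions; the learning rate is a common BERT fine-tuning default, not a value taken from this card.

```python
# Hedged reproduction sketch: values marked ASSUMED are guesses, not taken
# from the original card. num_labels=1 makes Transformers use an MSE loss
# on SST's continuous (float) sentiment labels.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# May need trust_remote_code=True on datasets versions that still allow scripts
dataset = load_dataset("stanfordnlp/sst", "default")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # single output -> regression / MSE

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    return {"mse": float(np.mean((preds.squeeze() - labels) ** 2))}

args = TrainingArguments(
    output_dir="sst-bert-base-uncased",
    per_device_train_batch_size=16,  # implied by 534 steps/epoch over 8,544 examples
    num_train_epochs=20,             # matches the results table
    learning_rate=2e-5,              # ASSUMED: common default, not from the card
    eval_strategy="epoch",           # `evaluation_strategy` on older transformers
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```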
Base model: google-bert/bert-base-uncased