How to use fabriceyhc/bert-base-uncased-imdb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-imdb")
```
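As a quick sanity check, the pipeline can be called directly on raw review text. The snippet below is a minimal sketch rather than part of the original card: the review strings are invented for illustration, and the exact label strings in the output depend on the id2label mapping stored in the model config.

```python
# Run the sentiment pipeline on a couple of sample reviews (illustrative text only).
reviews = [
    "A beautifully shot film with a script that never quite comes together.",
    "One of the most rewarding movies I have seen this year.",
]
for result in pipe(reviews):
    # Each result is a dict with a predicted label and score, e.g. {"label": ..., "score": 0.97};
    # the label strings come from the model config's id2label mapping.
    print(result["label"], round(result["score"], 3))
```

The pipeline handles tokenization, batching, and label lookup internally, which makes it the quickest way to try the model.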
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("fabriceyhc/bert-base-uncased-imdb")
model = AutoModelForSequenceClassification.from_pretrained("fabriceyhc/bert-base-uncased-imdb")
```
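With the tokenizer and model loaded directly, inference is a tokenize-then-forward pass. The following is a sketch under the same assumptions as above; the sample review is invented, and the class name is looked up from the model config rather than hard-coded.

```python
import torch

# Tokenize a sample review and run a forward pass without gradient tracking.
inputs = tokenizer(
    "This movie was a complete waste of time.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the top class index to its label name.
probs = torch.softmax(logits, dim=-1)[0]
predicted_id = int(torch.argmax(probs))
print(model.config.id2label[predicted_id], float(probs[predicted_id]))
```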
This model is a fine-tuned version of bert-base-uncased on the imdb dataset (stanfordnlp/imdb); its performance on the evaluation set is reported in the training results table below.

Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
The following training results (validation loss and accuracy, logged every 2,000 steps) were recorded during fine-tuning:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.3952 | 0.65 | 2000 | 0.4012 | 0.86 |
| 0.2954 | 1.29 | 4000 | 0.4535 | 0.892 |
| 0.2595 | 1.94 | 6000 | 0.4320 | 0.892 |
| 0.1516 | 2.59 | 8000 | 0.5309 | 0.896 |
| 0.1167 | 3.23 | 10000 | 0.4070 | 0.928 |
| 0.0624 | 3.88 | 12000 | 0.5055 | 0.908 |
| 0.0329 | 4.52 | 14000 | 0.4342 | 0.92 |
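The accuracy figures above can be spot-checked against the public IMDB test split. The sketch below is illustrative only: it assumes the stanfordnlp/imdb dataset layout on the Hub ("text" and "label" columns, with 0 = negative and 1 = positive) and that the model config's label2id mapping is consistent with the labels emitted by the pipeline.

```python
# Sketch: measure accuracy on a small random slice of the IMDB test split.
from datasets import load_dataset

test_split = load_dataset("stanfordnlp/imdb", split="test").shuffle(seed=0).select(range(500))

correct = 0
for example in test_split:
    pred = pipe(example["text"], truncation=True)[0]
    # Map the predicted label string back to an integer id via the model config
    # (assumes label2id is the inverse of id2label, as in standard configs).
    pred_id = pipe.model.config.label2id[pred["label"]]
    correct += int(pred_id == example["label"])

print(f"accuracy on {len(test_split)} sampled test reviews: {correct / len(test_split):.3f}")
```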