How to use lvwerra/bert-imdb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="lvwerra/bert-imdb")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("lvwerra/bert-imdb")
model = AutoModelForSequenceClassification.from_pretrained("lvwerra/bert-imdb")
```
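For a quick check, the pipeline can be called on a raw review string. The sample text and printed output below are only illustrative; the exact label strings come from the model's config:

```python
# Illustrative call; the label string depends on the model's config
result = pipe("One of the best films I have seen in years, wonderful acting and a moving story.")
print(result)  # e.g. [{"label": "...", "score": 0.99}]
```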
# BERT-IMDB

## What is it?

BERT (bert-large-cased) fine-tuned for sentiment classification on the IMDB dataset.
## Training setting

The model was trained for sentiment classification on 80% of the IMDB dataset for three epochs with a learning rate of 1e-5, using the simpletransformers library, which applies a learning-rate schedule during training.
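A minimal sketch of this setup with simpletransformers is shown below. The card only states the base model, epochs, and learning rate, so the data split, seed, and all other arguments here are assumptions rather than the original training script:

```python
# Sketch of the training setup with simpletransformers.
# The 80/20 split, seed, and metric choice are assumptions; only the base model,
# number of epochs, and learning rate come from the model card.
from datasets import load_dataset
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from simpletransformers.classification import ClassificationModel

# IMDB train split, renamed to the columns simpletransformers expects
df = load_dataset("imdb", split="train").to_pandas().rename(columns={"label": "labels"})
train_df, eval_df = train_test_split(df, test_size=0.2, random_state=42)

model = ClassificationModel(
    "bert",
    "bert-large-cased",
    args={"num_train_epochs": 3, "learning_rate": 1e-5},
)

model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(eval_df, acc=accuracy_score)
print(result["acc"])
```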
## Result
The model achieved 90% classification accuracy on the validation set.
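As a rough sanity check of that number, the released checkpoint can be scored on held-out IMDB data with the pipeline shown above. The original 20% validation split is not published, so using a subsample of the standard IMDB test split here is an assumption and the result will not match 90% exactly:

```python
# Rough accuracy check of the released checkpoint (test split used as a stand-in
# for the unpublished validation split, subsampled for speed)
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("text-classification", model="lvwerra/bert-imdb")
ds = load_dataset("imdb", split="test").shuffle(seed=0).select(range(1000))

preds = pipe(list(ds["text"]), truncation=True, batch_size=16)

# Assumes the model's label ids line up with the dataset's 0 = negative / 1 = positive labels
label2id = pipe.model.config.label2id
correct = sum(int(label2id[p["label"]] == y) for p, y in zip(preds, ds["label"]))
print(f"accuracy: {correct / len(ds):.3f}")
```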
## Reference
The full experiment is available in the trl repo.