Fine-tuned BERT for IMDB Sentiment Classification

Paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arXiv:1810.04805)
This is a fine-tuned version of BERT-Base-Uncased for binary sentiment classification on the IMDB dataset. The model is trained to classify movie reviews as either positive or negative.
Load the fine-tuned model and tokenizer:

```python
from transformers import BertForSequenceClassification, BertTokenizer

model_name = "kparkhade/Fine-tuned-BERT-Imdb"
model = BertForSequenceClassification.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)
```
Alternatively, run inference through the `pipeline` API:

```python
from transformers import pipeline

sentiment_pipeline = pipeline("text-classification", model=model_name)
result = sentiment_pipeline("The movie was absolutely fantastic! I loved it.")
print(result)
```
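The pipeline returns a label together with a confidence score; that score comes from a softmax over the classifier head's two output logits. A minimal sketch of this post-processing step, using hypothetical logit values rather than real model output:

```python
import math

def softmax(logits):
    """Convert raw classifier logits into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from the classifier head (illustration only, not real model output);
# index 0 = negative, index 1 = positive, matching the binary setup above.
logits = [-1.2, 3.4]
probs = softmax(logits)
label = "POSITIVE" if probs[1] > probs[0] else "NEGATIVE"
print(label, max(probs))
```

This mirrors the `{'label': ..., 'score': ...}` dictionary the pipeline produces, where `score` is the probability of the winning class.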
If you use this model, please cite:

```bibtex
@article{devlin2019bert,
  title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2019}
}
```
This model is released under the Apache 2.0 License.
Base model: google-bert/bert-base-uncased