---
language: en
license: apache-2.0
tags:
  - sentiment-analysis
  - text-classification
  - bert
  - transformers
  - news
  - reviews
---

# SentimentBERT — Fine-tuned BERT for Sentiment Classification (Positive, Neutral, Negative)

SentimentBERT is a fine-tuned BERT-based model for classifying sentences into three sentiment categories: Positive, Neutral, and Negative.

The model was trained on a **diverse dataset of 130K news articles** spanning a wide range of categories. It achieves over 86% accuracy and demonstrates a strong grasp of sentence-level sentiment, even in nuanced or mixed-context cases.


## Model Highlights

- **Base model:** bert-base-uncased
- **Fine-tuned for:** sentiment classification (3-class)
- **Accuracy:** > 86%
- **Classes:** Positive, Neutral, Negative
- **Language:** English
- **Format:** safetensors
- **Tokenizer:** compatible with bert-base-uncased

## Applications

This model is well-suited for:

- News article sentiment analysis
- Amazon product review analysis
- Customer support or service feedback systems
- General-purpose opinion mining

## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned model and its tokenizer from the Hub.
model = AutoModelForSequenceClassification.from_pretrained("mervp/SentimentBERT")
tokenizer = AutoTokenizer.from_pretrained("mervp/SentimentBERT")
model.eval()

text = "The government’s response to the crisis was surprisingly effective."
inputs = tokenizer(text, return_tensors="pt")

# Inference only: no gradients needed.
with torch.no_grad():
    logits = model(**inputs).logits

# The model outputs one logit per class; argmax picks the predicted label.
predicted_class = torch.argmax(logits, dim=1).item()
print(["Negative", "Neutral", "Positive"][predicted_class])
```
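If you also want a confidence score rather than just the predicted label, the logits can be converted into class probabilities with a softmax. A minimal standalone sketch (the logits tensor below is an illustrative example, not actual model output):

```python
import torch
import torch.nn.functional as F

# Example logits as they might come back from the model (batch of 1, 3 classes).
logits = torch.tensor([[-1.2, 0.3, 2.1]])

# Softmax turns raw logits into a probability distribution over the classes.
probs = F.softmax(logits, dim=1)

labels = ["Negative", "Neutral", "Positive"]
pred = torch.argmax(probs, dim=1).item()

print(labels[pred], round(probs[0, pred].item(), 3))  # label plus its probability
```

The same pattern applies to real model output: pass the `logits` returned by the model into `F.softmax` and read off the probability of the argmax class.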