# SentimentClassifier-BERT-TweetEval
A BERT-based sentiment analysis model fine-tuned on the TweetEval dataset. It predicts the sentiment of a text as Positive, Neutral, or Negative with confidence scores. This model is useful for classifying product feedback, user reviews, or social media posts.
## Model Highlights

- Based on `bert-base-uncased` (by Google)
- Fine-tuned on the sentiment subtask of the TweetEval dataset
- Predicts 3 classes: Negative, Neutral, Positive
- Available in both full-precision and quantized versions for inference
## Intended Uses
- Product release analysis (e-commerce, apps)
## Limitations
- Not optimized for languages other than English
- May not generalize well to domains very different from Twitter or product reviews
- May misread sarcasm or irony
- Performance may degrade on texts longer than 128 tokens, which are truncated (a quick length check is sketched below)
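Inputs past the 128-token limit are silently cut off, so long reviews may lose the part that carries the sentiment. A minimal illustrative snippet for checking whether a given text will be truncated (`long_review` is a stand-in for your own input):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained(
    "AventIQ-AI/sentiment_analysis_product_review_sentiment"
)

long_review = "This product is great. " * 50  # deliberately long example text
n_tokens = len(tokenizer.encode(long_review))
if n_tokens > 128:
    print(f"Input is {n_tokens} tokens; everything past token 128 will be truncated.")
```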
## Training Details

- Base Model: `bert-base-uncased`
- Dataset: TweetEval (sentiment subtask)
- Framework: PyTorch with 🤗 Transformers
- Epochs: 5
- Batch Size: 8
- Max Length: 128 tokens
- Optimizer: AdamW
- Loss: CrossEntropyLoss with class balancing (see the sketch after this list)
- Device: Trained on an NVIDIA CUDA-enabled GPU
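The exact training script is not published with this card, but a comparable setup can be sketched with the 🤗 Trainer API using the hyperparameters above. The inverse-frequency class weights are an assumption; the card only says "class balancing" without specifying the scheme.

```python
import torch
from torch import nn
from datasets import load_dataset
from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

base = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(base)
model = BertForSequenceClassification.from_pretrained(base, num_labels=3)

# TweetEval sentiment subtask: columns "text" and "label" (0/1/2)
dataset = load_dataset("tweet_eval", "sentiment")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

# "Class balancing" assumed to mean inverse-frequency class weights
label_counts = torch.bincount(
    torch.tensor(dataset["train"]["label"]), minlength=3
).float()
class_weights = label_counts.sum() / (3 * label_counts)

class WeightedTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(
    output_dir="bert-tweeteval-sentiment",
    num_train_epochs=5,
    per_device_train_batch_size=8,  # AdamW is the Trainer default optimizer
)

trainer = WeightedTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # provides a padding data collator
)
trainer.train()
```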
## Evaluation Metrics
| Metric | Score |
|---|---|
| Accuracy | 0.99 |
| F1-macro | 0.98 |
*Note: the scores above are placeholders and should be replaced with actual results after evaluation.*
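Once evaluation is run, the scores can be computed along these lines, assuming the TweetEval sentiment test split is the evaluation set (the card does not state which split was used):

```python
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import BertForSequenceClassification, BertTokenizer

repo = "AventIQ-AI/sentiment_analysis_product_review_sentiment"
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForSequenceClassification.from_pretrained(repo)
model.eval()

test = load_dataset("tweet_eval", "sentiment", split="test")

preds = []
for start in range(0, len(test), 32):  # simple fixed-size batching
    texts = test["text"][start:start + 32]
    inputs = tokenizer(texts, return_tensors="pt", padding=True,
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.extend(logits.argmax(dim=1).tolist())

print("Accuracy:", accuracy_score(test["label"], preds))
print("F1-macro:", f1_score(test["label"], preds, average="macro"))
```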
## Label Mapping
| Label ID | Sentiment |
|---|---|
| 0 | Negative |
| 1 | Neutral |
| 2 | Positive |
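If this mapping was saved into the checkpoint's config (not guaranteed; some checkpoints only carry generic `LABEL_0`/`LABEL_1`/`LABEL_2` names), it can be inspected without loading the full model:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("AventIQ-AI/sentiment_analysis_product_review_sentiment")
print(config.id2label)  # {0: 'Negative', 1: 'Neutral', 2: 'Positive'} if names were saved
```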
## Usage

### Load the Model

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import torch.nn.functional as F

model_name = "AventIQ-AI/sentiment_analysis_product_review_sentiment"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

def predict(text):
    # Tokenize and truncate to the model's 128-token limit
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = F.softmax(outputs.logits, dim=1)
    pred = torch.argmax(probs, dim=1).item()
    label_map = {0: "Negative", 1: "Neutral", 2: "Positive"}
    return f"Sentiment: {label_map[pred]} (Confidence: {probs[0][pred]:.2f})"

# Test predictions
print("\nTest Predictions:")
print(predict("We're thrilled to announce our latest update, packed with new features and performance improvements!"))
```
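For quick experiments, the same checkpoint can also be run through the high-level `pipeline` API. Note that the labels it prints come from the checkpoint's config and may appear as generic `LABEL_0`/`LABEL_1`/`LABEL_2` if the names from the table above were not saved there:

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="AventIQ-AI/sentiment_analysis_product_review_sentiment")
print(clf("The battery drains far too quickly after the update."))
```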
## Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
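The card does not specify the exact procedure; a minimal sketch, assuming post-training dynamic quantization of the Linear layers (a common choice for BERT-style models in PyTorch):

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "AventIQ-AI/sentiment_analysis_product_review_sentiment"
)

# Replace Linear layers with dynamically quantized int8 equivalents
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```

Dynamic quantization stores weights as int8 and computes activations in floating point on the fly, which typically shrinks the quantized layers roughly 4x at a small accuracy cost.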
## Repository Structure

```
.
├── model/               # Contains the quantized model files
├── tokenizer_config/    # Tokenizer configuration and vocabulary files
├── model.safetensors    # Fine-tuned model weights
└── README.md            # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.