|
|
--- |
|
|
language: en |
|
|
license: apache-2.0 |
|
|
tags: |
|
|
- sentiment-analysis |
|
|
- text-classification |
|
|
model_name: "Your Model Name" |
|
|
datasets:
- dataset-used
|
|
--- |
|
|
|
|
|
|
|
|
|
|
# **Fine-Tuned Sentiment Analysis Model** |
|
|
|
|
|
[Model on Hugging Face](https://huggingface.co/ktr008/sentiment)
|
|
**Model ID:** `ktr008/sentiment`
|
|
**Last Updated:** `2025-02-25`
|
|
**Framework:** PyTorch | Transformers
|
|
|
|
|
## **Model Description**
|
|
This is a **fine-tuned RoBERTa model** for **sentiment analysis** based on the **Twitter RoBERTa Base** model. It classifies text into **Positive, Neutral, or Negative** sentiments. |
|
|
|
|
|
## **How to Use**
|
|
### **Install Dependencies** |
|
|
```sh |
|
|
pip install transformers torch scipy |
|
|
``` |
|
|
### **Load Model & Tokenizer** |
|
|
```python
import numpy as np
import torch
from scipy.special import softmax
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("ktr008/sentiment")
tokenizer = AutoTokenizer.from_pretrained("ktr008/sentiment")

def predict_sentiment(text):
    encoded_input = tokenizer(text, return_tensors="pt")
    with torch.no_grad():  # inference only; no gradients needed
        output = model(**encoded_input)
    scores = softmax(output.logits[0].numpy())

    labels = ["Negative", "Neutral", "Positive"]
    ranking = np.argsort(scores)[::-1]  # highest-probability label first

    return {labels[i]: round(float(scores[i]), 4) for i in ranking}

# Example usage
print(predict_sentiment("I love this product!"))
```
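The post-processing in `predict_sentiment` (softmax over the three raw logits, then sorting labels by probability) can be exercised on its own with hypothetical logit values, without downloading the model:

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def rank_labels(logits, labels=("Negative", "Neutral", "Positive")):
    # Pair each label with its probability, highest first
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return {label: round(p, 4) for label, p in ranked}

# Hypothetical logits leaning strongly positive
print(rank_labels([-1.2, 0.3, 2.9]))
```

The logit values here are illustrative only; the real model produces its own logits per input.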
|
|
|
|
|
## **Training Details**
|
|
- **Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest` |
|
|
- **Dataset:** Custom dataset with labeled sentiment texts |
|
|
- **Fine-Tuning:** Performed on AWS EC2 with PyTorch |
|
|
- **Batch Size:** 16 |
|
|
- **Optimizer:** AdamW |
|
|
- **Learning Rate:** 5e-5 |
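The hyperparameters above can be wired into the Transformers `Trainer` roughly as follows. This is a sketch, not the exact training script: `train_dataset` and `eval_dataset` are placeholders, since the custom labeled dataset is not public, and AdamW with the listed learning rate is configured through `TrainingArguments` (AdamW is the `Trainer` default optimizer):

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-sentiment-latest"
)

# Mirrors the hyperparameters listed above
args = TrainingArguments(
    output_dir="sentiment-finetuned",
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

# train_dataset / eval_dataset are placeholders for the custom labeled data
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```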
|
|
|
|
|
## **Example Predictions**
|
|
| Text | Prediction | |
|
|
|-------|------------| |
|
|
| `"I love this product!"` | Positive (98.3%) | |
|
|
| `"It's an okay experience."` | Neutral (67.4%) | |
|
|
| `"I hate this! Never buying again."` | Negative (92.1%) | |
|
|
|
|
|
## **Performance**
|
|
- **Accuracy:** `0.9344` |
|
|
|
|
|
## **Model Limitations**
|
|
- May struggle with **sarcasm or ambiguous phrases**. |
|
|
- May reflect **biases** present in the training dataset.
|
|
- Not suitable for **long texts** without truncation. |
|
|
|
|
|
## **Citation**
|
|
If you use this model, please cite: |
|
|
```
@misc{ktr008_sentiment_2025,
  author    = {ktr008},
  title     = {Fine-Tuned Sentiment Analysis Model},
  year      = {2025},
  publisher = {Hugging Face},
  version   = {1.0}
}
```
|
|
|
|
|
## **Contact**
|
|
For issues or suggestions, reach out to me on [Hugging Face](https://huggingface.co/ktr008).
|
|
|
|
|
--- |
|
|
|