ktr008 committed · verified · Commit 8c3e4b6 · Parent: b7624ee

Create README.md (README.md added, +79 lines)
# **Fine-Tuned Sentiment Analysis Model**

[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Sentiment%20Model-yellow)](https://huggingface.co/ktr008/sentiment)
πŸ”— **Model ID:** `ktr008/sentiment`
πŸ“… **Last Updated:** `2025-02-25`
πŸš€ **Framework:** PyTorch | Transformers
+
9
+ ## **πŸ“Œ Model Description**
10
+ This is a **fine-tuned RoBERTa model** for **sentiment analysis** based on the **Twitter RoBERTa Base** model. It classifies text into **Positive, Neutral, or Negative** sentiments.
11
+
12
+ ## **πŸ› οΈ How to Use**
13
+ ### **Install Dependencies**
14
+ ```sh
15
+ pip install transformers torch scipy
16
+ ```
17
+ ### **Load Model & Tokenizer**
18
+ ```python
19
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
20
+ import numpy as np
21
+ from scipy.special import softmax
22
+
23
+ model = AutoModelForSequenceClassification.from_pretrained("ktr008/sentiment")
24
+ tokenizer = AutoTokenizer.from_pretrained("ktr008/sentiment")
25
+
26
+ def predict_sentiment(text):
27
+ encoded_input = tokenizer(text, return_tensors='pt')
28
+ output = model(**encoded_input)
29
+ scores = output[0][0].detach().numpy()
30
+ scores = softmax(scores)
31
+
32
+ labels = ["Negative", "Neutral", "Positive"]
33
+ ranking = np.argsort(scores)[::-1]
34
+
35
+ return {labels[i]: round(float(scores[i]), 4) for i in ranking}
36
+
37
+ # Example usage
38
+ print(predict_sentiment("I love this product!"))
39
+ ```
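The softmax-and-ranking step inside `predict_sentiment` can be sanity-checked in isolation with hypothetical logits (no model download needed); the logit values below are illustrative, not real model output:

```python
import numpy as np
from scipy.special import softmax

# Hypothetical raw logits in [Negative, Neutral, Positive] order
logits = np.array([-1.2, 0.3, 2.5])
scores = softmax(logits)            # probabilities summing to 1
labels = ["Negative", "Neutral", "Positive"]
ranking = np.argsort(scores)[::-1]  # indices sorted by descending probability

result = {labels[i]: round(float(scores[i]), 4) for i in ranking}
print(result)  # highest-probability label comes first
```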

## **πŸ’‘ Training Details**
- **Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
- **Dataset:** Custom dataset with labeled sentiment texts
- **Fine-Tuning:** Performed on AWS EC2 with PyTorch
- **Batch Size:** 16
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5

## **πŸ“ Example Predictions**
| Text | Prediction |
|------|------------|
| `"I love this product!"` | Positive (98.3%) |
| `"It's an okay experience."` | Neutral (67.4%) |
| `"I hate this! Never buying again."` | Negative (92.1%) |

## **πŸ“ˆ Performance**
- **Accuracy:** `0.9344`
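For reference, accuracy here is simply the fraction of test examples predicted correctly. With hypothetical predicted and true labels (not the actual evaluation data), it would be computed as:

```python
import numpy as np

# Hypothetical labels (0=Negative, 1=Neutral, 2=Positive)
y_true = np.array([2, 0, 1, 2, 2, 0, 1, 2])
y_pred = np.array([2, 0, 1, 2, 1, 0, 1, 2])

accuracy = float(np.mean(y_pred == y_true))  # fraction of matching labels
print(f"Accuracy: {accuracy:.4f}")  # 7 of 8 correct -> 0.8750
```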

## **πŸ“Œ Model Limitations**
- May struggle with **sarcasm or ambiguous phrases**.
- May reflect **biases** present in the training dataset.
- Not suitable for **long texts** without truncation (RoBERTa accepts at most 512 tokens per input).
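The simplest workaround for the 512-token limit is the tokenizer's built-in option, `tokenizer(text, truncation=True, max_length=512)`. Alternatively, long inputs can be split and classified chunk by chunk; a naive chunking sketch follows, using whitespace words as a rough stand-in for subword tokens (real token counts will differ):

```python
def chunk_text(text, max_words=400):
    """Split text into word chunks small enough to stay under the
    model's token limit (words are a rough proxy for subword tokens)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

long_review = "great product " * 1000  # 2000 words
chunks = chunk_text(long_review)
print(len(chunks))  # 5 chunks of <= 400 words each
```

Each chunk could then be passed through `predict_sentiment` and the resulting scores averaged.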

## **πŸ“ Citation**
If you use this model, please cite:
```bibtex
@misc{ktr008_sentiment_2025,
  author    = {ktr008},
  title     = {Fine-Tuned Sentiment Analysis Model},
  year      = {2025},
  publisher = {Hugging Face},
  version   = {1.0}
}
```

## **πŸ“¬ Contact**
For issues or suggestions, reach out to me on [Hugging Face](https://huggingface.co/ktr008) πŸš€

---