model_name: "Your Model Name"
datasets: "dataset-used"
---

# **Fine-Tuned Sentiment Analysis Model**

[![Hugging Face Model](https://img.shields.io/badge/Hugging%20Face-Sentiment%20Model-yellow)](https://huggingface.co/ktr008/sentiment)

πŸ”— **Model ID:** `ktr008/sentiment`  
πŸ“… **Last Updated:** `2025-02-25`  
πŸš€ **Framework:** PyTorch | Transformers

## **πŸ“Œ Model Description**
This is a **fine-tuned RoBERTa model** for **sentiment analysis**, based on the **Twitter RoBERTa Base** model. It classifies text into **Positive**, **Neutral**, or **Negative** sentiment.

## **πŸ› οΈ How to Use**

### **Install Dependencies**
```sh
pip install transformers torch scipy
```

### **Load Model & Tokenizer**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
from scipy.special import softmax

model = AutoModelForSequenceClassification.from_pretrained("ktr008/sentiment")
tokenizer = AutoTokenizer.from_pretrained("ktr008/sentiment")

def predict_sentiment(text):
    # Tokenize and run a forward pass
    encoded_input = tokenizer(text, return_tensors="pt")
    output = model(**encoded_input)

    # Convert raw logits to probabilities
    scores = output.logits[0].detach().numpy()
    scores = softmax(scores)

    # Rank labels from most to least likely
    labels = ["Negative", "Neutral", "Positive"]
    ranking = np.argsort(scores)[::-1]
    return {labels[i]: round(float(scores[i]), 4) for i in ranking}

# Example usage
print(predict_sentiment("I love this product!"))
```
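
To see what the post-processing inside `predict_sentiment` does without downloading the model, here is the same softmax-and-rank step applied to a hypothetical logits vector, using only the standard library as a stand-in for the `scipy.special.softmax` call above:

```python
import math

# Hypothetical raw logits for ["Negative", "Neutral", "Positive"]
logits = [-1.2, 0.3, 2.5]

# Numerically stable softmax (equivalent to scipy.special.softmax)
peak = max(logits)
exps = [math.exp(x - peak) for x in logits]
total = sum(exps)
scores = [e / total for e in exps]

# Rank labels from most to least likely, as predict_sentiment does
labels = ["Negative", "Neutral", "Positive"]
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
result = {labels[i]: round(scores[i], 4) for i in ranking}
print(result)
```

Here the largest logit wins, so the top-ranked key is `Positive`, and the rounded probabilities sum to roughly 1.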

## **πŸ’‘ Training Details**
- **Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
- **Dataset:** Custom dataset with labeled sentiment texts
- **Fine-Tuning:** Performed on AWS EC2 with PyTorch
- **Batch Size:** 16
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5
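
The AdamW optimizer listed above decouples weight decay from the gradient-based update. As a rough illustration only (not the actual training code), a single AdamW step for one scalar parameter looks like this in plain Python; the `weight_decay` value is an assumed default, while `lr` matches the learning rate above:

```python
import math

def adamw_step(theta, grad, m, v, t,
               lr=5e-5, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.01):
    # Update biased first- and second-moment estimates of the gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the early steps
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay: applied to the parameter, not the gradient
    theta = theta - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adamw_step(theta, grad=1.0, m=m, v=v, t=1)
```

With a positive gradient, the parameter moves slightly downward, scaled by the small learning rate.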

## **πŸ“ Example Predictions**
| Text | Prediction |
|------|------------|
| `"I love this product!"` | Positive (98.3%) |
| `"It's an okay experience."` | Neutral (67.4%) |
| `"I hate this! Never buying again."` | Negative (92.1%) |

## **πŸ“ˆ Performance**
- **Accuracy:** `0.9344`
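
Accuracy here means the fraction of evaluation examples whose top-ranked label matches the gold label. A minimal sketch of that computation (the label lists below are hypothetical, not the real evaluation set):

```python
def accuracy(predictions, gold):
    # Fraction of examples where the predicted label matches the gold label
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical evaluation labels: 3 of 4 match
preds = ["Positive", "Neutral", "Negative", "Positive"]
gold  = ["Positive", "Neutral", "Positive", "Positive"]
print(accuracy(preds, gold))  # 0.75
```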

## **πŸ“Œ Model Limitations**
- May struggle with **sarcasm or ambiguous phrases**.
- Can be **biased** towards the training dataset.
- Not suitable for **long texts** without truncation.
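
A common workaround for the long-text limitation is to pass `truncation=True, max_length=512` to the tokenizer, or to split the document into chunks and aggregate the per-chunk scores. A minimal sketch of the chunking step, using whitespace-separated words as a stand-in for real tokenizer tokens:

```python
def chunk_text(text, max_tokens=100):
    # Split into chunks of at most max_tokens words; a real implementation
    # would count tokenizer tokens instead of whitespace words.
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

chunks = chunk_text("great " * 250, max_tokens=100)
print(len(chunks))  # 3 chunks: 100, 100, and 50 words
```

Each chunk can then be passed to `predict_sentiment` and the resulting scores averaged.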

## **πŸ“ Citation**
If you use this model, please cite:
```bibtex
@misc{ktr008_sentiment_2025,
  author    = {ktr008},
  title     = {Fine-Tuned Sentiment Analysis Model},
  year      = {2025},
  publisher = {Hugging Face},
  version   = {1.0}
}
```

## **πŸ“¬ Contact**
For issues or suggestions, reach out to me on [Hugging Face](https://huggingface.co/ktr008) πŸš€

---