---
language: multilingual
license: apache-2.0
tags:
- sentiment-analysis
- text-classification
- xlm-roberta
- dual-head
---

# Sentiment Classifier

## Model Description

This is a dual-head sentiment classifier built on top of XLM-RoBERTa. The model performs two tasks simultaneously:

1. **Sentiment Classification:** Predicts sentiment labels (positive, neutral, negative)
2. **Sentiment Score Regression:** Predicts a continuous sentiment score in the range [0, 1]

The model uses a weighted loss function combining cross-entropy (70%) for classification and MSE (30%) for regression,
allowing it to capture both discrete sentiment categories and fine-grained sentiment intensity.
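As a sketch only (the actual implementation lives in `src.models.sentiment_classifier` and may differ in detail), the weighted dual-task objective described above can be written as:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the 70% cross-entropy / 30% MSE objective.
ce_loss = nn.CrossEntropyLoss()   # classification head (3 labels)
mse_loss = nn.MSELoss()           # regression head (score in [0, 1])

def combined_loss(logits, score_preds, labels, scores,
                  cls_weight=0.7, reg_weight=0.3):
    """Weighted sum of the two task losses."""
    return cls_weight * ce_loss(logits, labels) + reg_weight * mse_loss(score_preds, scores)

# Toy batch of 2 examples
logits = torch.randn(2, 3)                  # classification logits
score_preds = torch.sigmoid(torch.randn(2)) # regression predictions in [0, 1]
labels = torch.tensor([0, 2])               # gold class indices
scores = torch.tensor([0.1, 0.9])           # gold sentiment scores
loss = combined_loss(logits, score_preds, labels, scores)
```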

## Model Architecture

- **Base Model:** xlm-roberta-base
- **Task:** text-classification
- **Number of Labels:** 3
- **Labels:** negative, neutral, positive

## Training Configuration

- **Epochs:** 10
- **Batch Size:** 128
- **Learning Rate:** 2e-05
- **Warmup Ratio:** 0.1
- **Weight Decay:** 0.01
- **Max Seq Length:** 256
- **Mixed Precision:** FP16=False, BF16=True
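Assuming a standard Hugging Face `Trainer` setup (the actual training script is not included with this card), the hyperparameters above map roughly to:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the configuration listed above.
# max_seq_length=256 is applied at tokenization time, not here.
training_args = TrainingArguments(
    output_dir="sentiment-classifier",
    num_train_epochs=10,
    per_device_train_batch_size=128,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    weight_decay=0.01,
    fp16=False,
    bf16=True,
)
```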

## Performance Metrics

- **Loss:** 0.6947
- **Accuracy:** 0.4901
- **Precision:** 0.2402
- **Recall:** 0.4901
- **F1:** 0.3224
- **F1 Macro:** 0.3289
- **F1 Negative:** 0.0000
- **Precision Negative:** 0.0000
- **Recall Negative:** 0.0000
- **Support Negative:** 900
- **F1 Neutral:** 0.6578
- **Precision Neutral:** 0.4901
- **Recall Neutral:** 1.0000
- **Support Neutral:** 865
- **Eval Runtime:** 0.7012 s
- **Samples Per Second:** 2517.135
- **Steps Per Second:** 9.983

*Note: the per-class breakdown (neutral recall of 1.0000; negative precision, recall, and F1 of 0.0000; no positive-class support reported) indicates this checkpoint predicts the neutral class for essentially all evaluation examples, so the headline accuracy should be read with caution.*

## Model Outputs

The model returns two outputs:

- **Logits:** Classification logits for sentiment labels [batch_size, 3]
- **Score Predictions:** Continuous sentiment scores [batch_size]

Both outputs are computed from the same shared representation (CLS token) of the input text.
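A minimal sketch of this dual-head layout, assuming the 768-dimensional hidden size of `xlm-roberta-base` and the output keys described above (the real `SentimentClassifier` may name things differently):

```python
import torch
import torch.nn as nn

class DualHeadSketch(nn.Module):
    """Illustrative dual-head module: both heads read the same [CLS] vector."""
    def __init__(self, hidden_size=768, num_labels=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)  # -> logits [batch_size, 3]
        self.regressor = nn.Linear(hidden_size, 1)            # -> score  [batch_size]

    def forward(self, cls_hidden):
        logits = self.classifier(cls_hidden)
        # Sigmoid keeps the regression output in [0, 1], matching the score range.
        scores = torch.sigmoid(self.regressor(cls_hidden)).squeeze(-1)
        return {"logits": logits, "score_predictions": scores}

heads = DualHeadSketch()
out = heads(torch.randn(4, 768))  # stand-in for a batch of [CLS] states
```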

## Intended Use

This model is intended for sentiment analysis tasks on multilingual text, particularly in scenarios where both
categorical sentiment (positive/neutral/negative) and sentiment intensity are important.

**Typical use cases:**
- Product review analysis
- Social media sentiment monitoring
- Customer feedback classification

## Usage

```python
import torch
from transformers import AutoTokenizer
from src.models.sentiment_classifier import SentimentClassifier

# Load the custom dual-head model and the base tokenizer
model = SentimentClassifier.from_pretrained("YOUR_USERNAME/sentiment-classifier")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model.eval()

# Prepare input (max_length matches the training configuration)
text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256)

# Make prediction
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs["logits"].argmax(dim=-1)  # 0 = negative, 1 = neutral, 2 = positive
```
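To turn raw logits into a human-readable label and confidence (assuming the label order listed under Model Architecture), a small post-processing step suffices:

```python
import torch

# Label order taken from the "Labels" list above
id2label = {0: "negative", 1: "neutral", 2: "positive"}

logits = torch.tensor([[0.2, 1.5, -0.3]])  # stand-in for outputs["logits"]
probs = torch.softmax(logits, dim=-1)      # normalize to a probability distribution
pred_id = int(probs.argmax(dim=-1))
label, confidence = id2label[pred_id], float(probs.max())
```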

## Citation

If you use this model, please cite:

```bibtex
@misc{sentiment_classifier,
  title={Sentiment Classifier},
  author={Your Name},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/YOUR_USERNAME/sentiment-classifier}}
}
```

---

*This model card was automatically generated with [Claude Code](https://claude.com/claude-code)*