---
language: en
license: mit
tags:
- stance-detection
- text-classification
- argument-mining
- deberta-v3
metrics:
- accuracy
- f1
model-index:
- name: debertav3-stance-detection
  results:
  - task:
      type: text-classification
      name: Stance Detection
    metrics:
    - type: accuracy
      value: 0.9997
      name: Accuracy
    - type: f1
      value: 0.9997
      name: F1 Score
base_model:
- microsoft/deberta-v3-large
pipeline_tag: text-classification
datasets:
- NLP-Debater-Project/IBM-Debater-ArgKP
---

# Stance Detection with DeBERTa-v3-large

This model detects whether an argument supports (PRO) or opposes (CON) a given topic.

## Model Description

- **Base Model:** microsoft/deberta-v3-large
- **Task:** Binary stance classification (PRO/CON)
- **Training Data:** [IBM ArgKP-2023 dataset (~32,000 examples)](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Key_Point_Analysis)
- **Calibration:** Label smoothing (0.1) to improve confidence calibration

## Performance

- **Test Accuracy:** 99.97%
- **Test F1 Score:** 99.97%
- **Mean Confidence:** 93.9% (well-calibrated)
- **Calibration:** ECE < 0.10
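
The ECE figure above measures the gap between a model's confidence and its actual accuracy. As a minimal pure-Python sketch of the standard binned computation (the 10-bin count matches a common default; the model card does not state the exact binning used):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average |confidence - accuracy| gap across confidence bins.

    confidences: predicted probability of the chosen class, one per example.
    correct: 1 if the prediction was right, 0 otherwise.
    """
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Assign each prediction to exactly one bin (lowest bin includes 0.0)
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / total) * abs(avg_conf - accuracy)
    return ece
```

A well-calibrated model has small per-bin gaps: predictions made with 90% confidence should be right about 90% of the time.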

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model
model_name = "yassine-mhirsi/debertav3-stance-detection"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Predict
topic = "AI should replace human teachers"
argument = "Teachers provide emotional support that AI cannot replicate"

text = f"Topic: {topic} [SEP] Argument: {argument}"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probs, dim=-1).item()

stance = "PRO" if predicted_class == 1 else "CON"
confidence = probs[0][predicted_class].item()

print(f"Stance: {stance}")
print(f"Confidence: {confidence:.2%}")
```

## Training Details

- **Epochs:** 3
- **Learning Rate:** 3e-6
- **Batch Size:** 4 (with gradient accumulation of 4)
- **Label Smoothing:** 0.1
- **Training Time:** ~1.5 hours on a Kaggle GPU
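
With a per-device batch size of 4 and gradient accumulation of 4, the effective batch size is 16. The label-smoothing setting replaces the hard one-hot target with a softened distribution; as a minimal pure-Python sketch of one common formulation (the Hugging Face `Trainer` internals are an equivalent variant, not this exact code):

```python
def label_smoothed_nll(log_probs, target, eps=0.1):
    """Cross-entropy with label smoothing over K classes.

    The target distribution puts (1 - eps) + eps/K on the true class
    and eps/K on every other class, so the model is penalized for
    pushing any class probability to exactly 0 or 1.
    """
    k = len(log_probs)
    loss = 0.0
    for i, lp in enumerate(log_probs):
        q = eps / k + (1.0 - eps) * (1.0 if i == target else 0.0)
        loss -= q * lp
    return loss
```

With eps=0 this reduces to ordinary negative log-likelihood; eps=0.1 is what keeps the reported confidence scores from saturating at 100%.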

## Limitations

- Trained only on English argumentative text
- Performs best on formal, debate-style arguments
- May struggle with heavy sarcasm or irony
- Confidence scores are calibrated but should not be treated as exact probabilities

## Citation

If you use this model, please cite:

```bibtex
@misc{stance-detection-deberta,
  author = {Mhirsi, Yassine},
  title = {Stance Detection with DeBERTa-v3-large},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/yassine-mhirsi/debertav3-stance-detection}}
}
```

## License

MIT License