---
library_name: transformers
tags:
- Dissonant Detection
- transformers
- bert
language:
- tr
metrics:
- accuracy
base_model:
- ytu-ce-cosmos/turkish-base-bert-uncased
pipeline_tag: text-classification
---
# **Sengil/ytu-bert-base-dissonance-tr** 🇹🇷
A Turkish BERT-based model fine-tuned for three-way classification of single-sentence discourse.
The model assigns each input sentence to one of the following classes:
- **Dissonance:** the sentence contains conflicting or contradictory sentiments.
  _e.g.,_ "Telefon çok kaliteli ve hızlı bitiyor şarjı" ("The phone is very high quality, and its battery drains fast")
- **Consonance:** the sentence expresses harmonizing or mutually reinforcing sentiments.
  _e.g.,_ "Yemeklerde çok güzel manzarada mükemmel" ("The food is great, and the view is wonderful too")
- **Neither:** the sentence is neutral, or does not clearly reflect either dissonance or consonance.
  _e.g.,_ "Bu gün hava çok güzel" ("The weather is very nice today")
The model was trained on 37,368 Turkish samples and evaluated on separate validation and test sets of 4,671 samples each.
It achieves 97.5% accuracy and 97.5% macro-F1 on the test set, demonstrating strong performance at distinguishing subtle semantic contrasts in Turkish sentences.
|**Model Details** | |
| -------------------- | ----------------------------------------------------- |
| **Developed by** | Mert Şengil |
| **Model type** | `BertForSequenceClassification` |
| **Base model** | `ytu-ce-cosmos/turkish-base-bert-uncased` |
| **Languages** | `tr` (Turkish) |
| **License** | Apache-2.0 |
| **Fine-tuning task** | 3-class sentiment (dissonance / consonance / neither) |
## Uses
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "Sengil/ytu-bert-base-dissonance-tr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "onu çok seviyorum ve güvenmiyorum."
# Turkish-aware lowercasing: map capital "I" to dotless "ı" before lower()
text = text.replace("I", "ı").lower()

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
label_id = int(probs.argmax())
id2label = {0: "Dissonance", 1: "Consonance", 2: "Neither"}
print({"label": id2label[label_id], "score": round(float(probs[label_id]), 4)})
```
output (`score` is the softmax probability of the predicted class; the exact value depends on the checkpoint):
```
{'label': 'Dissonance', 'score': ...}
```
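Note the replace-then-lower step in the example above: Python's built-in `str.lower()` maps capital `I` to `i`, whereas Turkish orthography lowercases `I` to dotless `ı` (and dotted `İ` to `i`). A small helper (the name `turkish_lower` is illustrative, not part of this repo) makes the preprocessing explicit:

```python
def turkish_lower(text: str) -> str:
    """Lowercase Turkish text, handling the dotted/dotless i distinction
    that str.lower() gets wrong for Turkish."""
    return text.replace("İ", "i").replace("I", "ı").lower()

print(turkish_lower("Işık İstanbul'da"))  # -> ışık istanbul'da
print(turkish_lower("KIRMIZI"))           # -> kırmızı
```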
|**Training Details** | |
| ---------------------- | ---------------------------------------------- |
| **Training samples**   | 37,368                                         |
| **Validation samples** | 4,671                                          |
| **Test samples**       | 4,671                                          |
| **Epochs** | 4 |
| **Batch size** | 32 (train) / 16 (eval) |
| **Optimizer**          | `AdamW` (lr = 2 × 10⁻⁵, weight decay = 0.005)  |
| **Scheduler** | Linear with 10 % warm-up |
| **Precision** | FP32 |
| **Hardware** | 1× GPU P100 |
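The optimizer and schedule in the table translate into a setup along these lines (a sketch, not the original training script; the `torch.nn.Linear` module is a stand-in for the loaded BERT classifier):

```python
import torch

# Stand-in for the BertForSequenceClassification model (assumption for the sketch).
model = torch.nn.Linear(768, 3)

# AdamW with the hyperparameters from the table above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.005)

# Linear schedule with 10% warm-up over 4 epochs of 37,368 samples at batch size 32.
num_epochs = 4
steps_per_epoch = 37_368 // 32
total_steps = num_epochs * steps_per_epoch
warmup_steps = int(0.1 * total_steps)
print(steps_per_epoch, total_steps, warmup_steps)
```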
### Training Loss Progression
| Epoch | Train Loss | Val Loss |
| ----: | ---------: | ---------: |
| 1 | 0.2661 | 0.0912 |
| 2 | 0.0784 | 0.0812 |
| 3 | 0.0520 | 0.0859 |
| 4 | **0.0419** | **0.0859** |
## Evaluation
| Metric | Value |
| ------------------- | ---------: |
| **Accuracy (test)** | **0.9750** |
| **Macro-F1 (test)** | **0.9749** |
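The reported metrics correspond to accuracy and macro-averaged F1 as computed by scikit-learn; a minimal sketch, with dummy labels standing in for the real 4,671-sample test set:

```python
from sklearn.metrics import accuracy_score, f1_score

# Dummy predictions (0=Dissonance, 1=Consonance, 2=Neither) for illustration only.
y_true = [0, 0, 1, 1, 2, 2, 0, 1]
y_pred = [0, 0, 1, 2, 2, 2, 0, 1]

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
print(f"accuracy={accuracy:.4f}  macro-F1={macro_f1:.4f}")
```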
|**Environmental Impact** | |
| ----------------------- | -------------------- |
| **Hardware** | 1× A100-40 GB |
| **Training time** | ≈ 4 × 7 min ≈ 0.47 h |
## Citation
```bibtex
@misc{Sengil2025DisConBERT,
  title  = {Sengil/ytu-bert-base-dissonance-tr: A Three-way Dissonance/Consonance Classifier},
  author = {Şengil, Mert},
  year   = {2025},
  url    = {https://huggingface.co/Sengil/ytu-bert-base-dissonance-tr}
}
```
---
I would like to thank YTU (Yıldız Technical University) for the open-source contributions that supported the development of this model.
For issues or questions, please open an issue on the Hub repo or contact **[Mert Şengil](https://www.linkedin.com/in/mertsengil/)**.