atahanuz committed · verified
Commit 0903518 · Parent(s): 5bcd0bb

Update README.md

Files changed (1): README.md (+14 −5)
README.md CHANGED
@@ -54,15 +54,15 @@ The easiest way to use this model is via the Hugging Face `pipeline`.
 ```python
 from transformers import pipeline
 
-# Initialize the pipeline
 classifier = pipeline("text-classification", model="atahanuz/bert-offensive-classifier")
 
-# Predict
 text = "Bu harika bir filmdi, çok beğendim."
-result = classifier(text)
+result = classifier(text)[0]
 
-print(result)
-# Output: [{'label': 'NOT', 'score': 0.99...}]
+# Convert LABEL_1 -> Offensive, LABEL_0 -> Not Offensive
+label = "Offensive" if result['label'] == "LABEL_1" else "Not Offensive"
+
+print(f"Prediction: {label} (Score: {result['score']:.4f})")
 ```
 
 ### Method 2: Manual PyTorch Implementation
@@ -106,6 +106,15 @@ The model outputs the following labels:
 | `0` | **NOT** | **Not Offensive** - Normal, non-hateful speech. |
 | `1` | **OFF** | **Offensive** - Contains insults, threats, or inappropriate language. |
 
+## 📝 Example Predictions
+
+| Text | Label | Prediction |
+| :--- | :--- | :--- |
+| "Bu filmi çok beğendim, oyunculuklar harikaydı." | **NOT** | Non-Offensive |
+| "Beynini kullanmayı denesen belki anlarsın." | **OFF** | Offensive (Insult) |
+| "Maalesef bu konuda sana katılamıyorum." | **NOT** | Non-Offensive |
+| "Senin gibi aptal insanlar yüzünden bu haldeyiz." | **OFF** | Offensive (Toxic) |
+
 ## 📈 Performance
 
 The model was evaluated on the test split of the OffensEval-2020-TR dataset (approx. 3,500 samples).
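The core of this change is the label-mapping step added to the README snippet. That step can be sketched in isolation, without downloading the model: a minimal example, assuming the pipeline returns dicts shaped like `{'label': 'LABEL_1', 'score': ...}` as the diff implies (the hard-coded `result` below is a stand-in for real pipeline output, not a verified prediction of this model).

```python
# Stand-in for one element of the list returned by
# classifier(text); a literal dict avoids the model download.
result = {"label": "LABEL_1", "score": 0.9732}

# Same mapping the updated README uses:
# LABEL_1 -> Offensive, LABEL_0 -> Not Offensive
label = "Offensive" if result["label"] == "LABEL_1" else "Not Offensive"

print(f"Prediction: {label} (Score: {result['score']:.4f})")
# → Prediction: Offensive (Score: 0.9732)
```

Indexing with `[0]` in the README (`classifier(text)[0]`) is what makes this work: the pipeline returns a list with one dict per input text, so the mapping operates on a single dict.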