Commit 4ad85b8 (1 parent: d8dad32) · Create README.md

README.md (ADDED, @@ -0,0 +1,70 @@):
---
language: ["ru", "en"]
tags:
- russian
- classification
- toxicity
- multilabel
widget:
- text: "Иди ты нафиг!"
---
This is the [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) model fine-tuned for classification of toxicity and inappropriateness.

The problem is formulated as multilabel classification with the following classes:
- `non-toxic`: the text does NOT contain insults, obscenities, or threats, in the sense of the [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) competition.
- `insult`
- `obscenity`
- `threat`
- `dangerous`: the text is inappropriate in the sense of [Babakov et al.](https://arxiv.org/abs/2103.05345), i.e. it can harm the reputation of the speaker.

A text can be considered safe if it is BOTH `non-toxic` AND NOT `dangerous`.
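
In terms of the probability vector predicted by the model (see Usage below), this safety criterion can be collapsed into a single score. The sketch below is only an illustration of that arithmetic under an independence assumption between the two labels; `safety_score` is a hypothetical helper, and its complement is exactly the aggregate value returned by `text2toxicity` in the next section.

```python
def safety_score(proba):
    """Illustrative helper, not part of the original card.

    proba is the 5-dimensional probability vector
    [non-toxic, insult, obscenity, threat, dangerous] predicted by the model.
    """
    # P(safe) ~= P(non-toxic) * P(not dangerous), assuming the labels are independent
    return proba[0] * (1 - proba[-1])
```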

## Usage

The function below estimates the probability that the text is either toxic OR dangerous:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-tiny-toxicity'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

def text2toxicity(text, aggregate=True):
    """ Calculate toxicity of a text (if aggregate=True) or a vector of toxicity aspects (if aggregate=False)"""
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()
    if isinstance(text, str):
        proba = proba[0]
    if aggregate:
        return 1 - proba.T[0] * (1 - proba.T[-1])
    return proba

print(text2toxicity('я люблю нигеров', True))
# 0.57240640889815

print(text2toxicity('я люблю нигеров', False))
# [9.9336821e-01 6.1555761e-03 1.2781911e-03 9.2758919e-04 5.6955177e-01]

print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], True))
# [0.5724064 0.20111847]

print(text2toxicity(['я люблю нигеров', 'я люблю африканцев'], False))
# [[9.9336821e-01 6.1555761e-03 1.2781911e-03 9.2758919e-04 5.6955177e-01]
#  [9.9828428e-01 1.1138428e-03 1.1492912e-03 4.6551935e-04 1.9974548e-01]]
```
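
If a single yes/no moderation decision is needed, one simple option is to compare the aggregate score against a threshold. The snippet below is only an illustration built on top of `text2toxicity` above; the 0.5 cut-off is an arbitrary example value, not a threshold recommended by this card.

```python
# Illustrative only: flag texts whose aggregated toxicity/inappropriateness score
# exceeds an arbitrary threshold (0.5 is an example, not a recommendation).
texts = ['Иди ты нафиг!', 'я люблю африканцев']
scores = text2toxicity(texts, aggregate=True)
flagged = [text for text, score in zip(texts, scores) if score > 0.5]
print(flagged)
```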

## Training

The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `128` for `5` epochs. The data was not filtered in any way. A text was considered inappropriate if its inappropriateness score was higher than 0.2. The per-label ROC AUC on the dev set is:
```
non-toxic  : 0.9909
insult     : 0.9882
obscenity  : 0.9824
threat     : 0.9868
dangerous  : 0.7758
```
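
The training script itself is not part of this card. As a rough sketch of what such multilabel fine-tuning could look like with the hyperparameters above, assuming hypothetical `train_texts` / `train_labels` placeholders for the joint OK ML Cup and Babakov et al. data:

```python
# Rough sketch, not the authors' actual training code.
# Assumption: train_texts / train_labels are hypothetical placeholders, with five
# binary labels per text in the order [non-toxic, insult, obscenity, threat, dangerous].
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base_checkpoint = 'cointegrated/rubert-tiny'
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    base_checkpoint,
    num_labels=5,
    problem_type='multi_label_classification',  # per-label sigmoid + BCE loss
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

train_texts = ['Иди ты нафиг!', 'я люблю африканцев']  # toy placeholders
train_labels = [[0, 1, 0, 0, 0], [1, 0, 0, 0, 0]]      # toy placeholders

batch_size = 128
model.train()
for epoch in range(5):
    for i in range(0, len(train_texts), batch_size):
        batch = tokenizer(train_texts[i:i + batch_size], return_tensors='pt',
                          truncation=True, padding=True).to(model.device)
        labels = torch.tensor(train_labels[i:i + batch_size],
                              dtype=torch.float).to(model.device)
        loss = model(**batch, labels=labels).loss  # BCE-with-logits for multilabel
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```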