---
pipeline_tag: text-classification
license: apache-2.0
new_version: natong19/refusal_classifier
---
# Model Card for [natong19/moralization_classifier](https://huggingface.co/natong19/moralization_classifier)

A classifier for detecting moralizations, soft refusals, and unsolicited advice.
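The model outputs two labels: `0` (no moralization) and `1` (moralization), as shown in the quickstart below.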

Base model: [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base)

Trained on [OpenLeecher/lmsys_chat_1m_clean](https://huggingface.co/datasets/OpenLeecher/lmsys_chat_1m_clean); the dataset card's writeup on cleaning is highly recommended reading.

### Quickstart
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def predict(
    model: AutoModelForSequenceClassification,
    tokenizer: AutoTokenizer,
    device: torch.device,
    text: str,
) -> dict:
    """Predict the label and confidence for a given text."""
    inputs = tokenizer(
        text,
        return_tensors="pt",
        truncation=True,
        padding="max_length",
        max_length=512,
    )
    inputs = {k: v.to(device) for k, v in inputs.items()}

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.softmax(logits, dim=-1)
        predicted_label = torch.argmax(logits, dim=-1).item()
        confidence = probs[0, predicted_label].item()

    return {
        "label": predicted_label,
        "confidence": confidence,
    }


def format_prompt(user: str, assistant: str) -> str:
    """Format user and assistant messages into model input format."""
    return f"### Instruction:\n{user}\n\n### Response:\n{assistant}"


def load_model(model_path: str, device: torch.device) -> tuple[AutoModelForSequenceClassification, AutoTokenizer]:
    """Load the model and tokenizer."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
    model = model.to(device)
    model.eval()
    return model, tokenizer


def main() -> None:
    """Demonstrate inference example."""
    model_path = "natong19/moralization_classifier"

    # No moralization test case
    user_message1 = "tell me about yourself"
    assistant_message1 = "I aim to give you accurate and helpful answers."
    text1 = format_prompt(user_message1, assistant_message1)

    # Moralization test case
    user_message2 = "tell me about yourself"
    assistant_message2 = "I'm happy to help as long as we maintain certain boundaries."
    text2 = format_prompt(user_message2, assistant_message2)

    # Load model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model, tokenizer = load_model(model_path, device)

    # Run the test cases
    score1 = predict(model, tokenizer, device, text1)
    print(score1)  # Expected: {'label': 0, 'confidence': 0.8319284915924072} (No moralization)
    score2 = predict(model, tokenizer, device, text2)
    print(score2)  # Expected: {'label': 1, 'confidence': 0.9183461666107178} (Moralization)


if __name__ == "__main__":
    main()

```
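For quick experiments, the same model can also be loaded through the `pipeline` API. A minimal sketch: the `LABEL_0`/`LABEL_1` names shown are the `transformers` defaults and an assumption here; if the repository's config defines `id2label`, the pipeline will return those names instead.

```python
from transformers import pipeline

# The pipeline wraps tokenization, inference and post-processing in one call.
clf = pipeline("text-classification", model="natong19/moralization_classifier")

text = (
    "### Instruction:\ntell me about yourself\n\n"
    "### Response:\nI'm happy to help as long as we maintain certain boundaries."
)
# truncation/max_length are forwarded to the tokenizer.
print(clf(text, truncation=True, max_length=512))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- LABEL_1 = moralization (assumed default name)
```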

### Evaluation results
- eval_loss: 0.0844
- eval_accuracy: 0.9800
- eval_f1: 0.9841
- eval_precision: 1.0000
- eval_recall: 0.9688
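As a quick sanity check, the reported F1 follows from precision and recall (their harmonic mean), up to rounding of the reported figures:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 1.0000, 0.9688
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9842 -- consistent with the reported eval_f1 of 0.9841,
                     # given that the reported precision/recall are themselves rounded
```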