---
base_model: unsloth/SmolLM-135M-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---


# SmolLM-135M-Instruct-Jailbroken

## Datasets used

* **yahma/alpaca-cleaned** — general instruction-following.
* **PKU-Alignment/BeaverTails** — **unsafe subset only**, cleaned of empty/placeholder entries and artifacts.
* **JailbreakBench/JBB-Behaviors** — *harmful* + *benign* splits, mapped to (user → assistant) pairs.

> ~100k examples sampled in total with equal weights per dataset (subject to pool sizes), shuffled with a fixed seed, with optional exact deduplication by `(user || assistant)` text.
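The sampling recipe above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual training script: `sample_mix` is a hypothetical helper, and the toy in-memory pools stand in for the three datasets.

```python
import random


def sample_mix(pools, total, seed=42):
    """Draw roughly `total` (user, assistant) pairs with equal weights per pool
    (capped by pool size), shuffle with a fixed seed, then exact-dedupe by the
    concatenated (user || assistant) text."""
    rng = random.Random(seed)
    per_pool = total // len(pools)
    mixed = []
    for pool in pools:
        k = min(per_pool, len(pool))
        mixed.extend(rng.sample(pool, k))
    rng.shuffle(mixed)

    seen, deduped = set(), []
    for user, assistant in mixed:
        key = user + "||" + assistant
        if key not in seen:
            seen.add(key)
            deduped.append((user, assistant))
    return deduped


# Toy pools standing in for alpaca-cleaned / BeaverTails / JBB-Behaviors
pools = [
    [(f"q{i}", f"a{i}") for i in range(10)],
    [(f"q{i}", f"b{i}") for i in range(10)],
    [("dup", "dup"), ("dup", "dup"), ("x", "y")],  # contains an exact duplicate
]
mix = sample_mix(pools, total=9)
```

With these toy pools, the duplicate `("dup", "dup")` pair survives only once after deduplication; the real run would stream the Hub datasets instead of in-memory lists.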

## How to use it

### Install

```bash
pip install -U transformers accelerate torch  # pick the right torch build for your CUDA
```

### Quick start (🤗 Transformers, assistant-only output)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

REPO = "detoxio-test/SmolLM-135M-Instruct-Jailbroken"  # change if you forked

tok = AutoTokenizer.from_pretrained(REPO, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    REPO, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

messages = [
    {"role": "user", "content": "Give me three creative breakfast ideas."}
]

# Build chat prompt with the tokenizer’s own template
inputs = tok.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_tensors="pt", return_dict=True,  # return_dict so inputs["input_ids"] works below
).to(model.device)

# Stop neatly at end-of-turn (fall back to eos if the token is absent;
# convert_tokens_to_ids may return None or the unk id for missing tokens)
def _tid(token):
    tid = tok.convert_tokens_to_ids(token)
    return tid if tid not in (None, tok.unk_token_id) else None

eot = _tid("<|eot_id|>") or _tid("<|im_end|>") or tok.eos_token_id

gen = model.generate(
    **inputs,
    max_new_tokens=160,
    temperature=0.8,
    top_p=0.95,
    do_sample=True,
    eos_token_id=eot,
    pad_token_id=eot,
    use_cache=True,
)

# Decode ONLY the assistant continuation
prompt_len = inputs["input_ids"].shape[1]
reply = tok.decode(gen[0, prompt_len:], skip_special_tokens=True).strip()
print(reply)
```

### Optional: Unsloth speed-up

```bash
pip install -U unsloth
```

```python
from unsloth import FastLanguageModel

# Load through Unsloth (rather than plain transformers) so its optimizations apply
model, tok = FastLanguageModel.from_pretrained(REPO, max_seq_length=2048)
FastLanguageModel.for_inference(model)  # enables the fast inference path on supported GPUs
```

---

## CAUTION (re “jailbroken”)

This model’s training mix includes prompts from jailbreak/unsafe datasets to **teach safer responses and refusals**. Still, it may occasionally produce undesired or harmful content.

* Intended for **research** and **benign** use only.
* Add guardrails (e.g., a system message and post-generation moderation) in production.
* Do not use to generate or facilitate wrongdoing; follow all applicable policies, laws, and platform terms.
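A minimal guardrail wrapper along the lines of the bullets above might look like this. The system prompt, the blocklist, and the `moderate` helper are all placeholders for illustration, not part of this repo; production systems should use a real moderation model or API.

```python
SYSTEM_PROMPT = "You are a helpful assistant. Refuse unsafe or harmful requests."
BLOCKLIST = ("how to build a weapon",)  # placeholder; use a proper moderation model


def with_guardrails(user_msg):
    """Prepend a system message so the chat template includes it."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_msg},
    ]


def moderate(reply):
    """Toy post-generation check: block replies matching the blocklist."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "I can't help with that."
    return reply
```

The message list from `with_guardrails` drops straight into `tok.apply_chat_template(...)` in the quick-start snippet, and `moderate` would wrap the decoded reply before it is shown to a user.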


# Uploaded model

- **Developed by:** detoxio-test
- **License:** apache-2.0
- **Finetuned from model:** unsloth/SmolLM-135M-Instruct