Update README.md
README.md
CHANGED
````diff
@@ -143,15 +143,6 @@ outputs = model.generate(
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
-### Recommended Generation Settings
-
-| Parameter | Value |
-|---|---|
-| `max_new_tokens` | 40–100 |
-| `temperature` | 1.0–1.2 |
-| `top_p` | 0.5–0.7 |
-| `repetition_penalty` | 1.2–1.4 |
-
 ---
 
 ## Stentor-30M vs Stentor-30M-Instruct — Comparative Statistics
````
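The removed settings table maps directly onto `generate()` keyword arguments. A minimal sketch of how a reader could apply it, reusing the `model`, `tokenizer`, and `inputs` objects from the card's usage snippet; the midpoint values and `do_sample=True` are this sketch's own picks, not values stated in the card:

```python
# Midpoints of the recommended ranges from the settings table.
# `do_sample=True` is an assumption: temperature/top_p have no effect
# under greedy decoding in Hugging Face transformers.
GEN_KWARGS = dict(
    max_new_tokens=80,       # table range: 40-100
    temperature=1.1,         # table range: 1.0-1.2
    top_p=0.6,               # table range: 0.5-0.7
    repetition_penalty=1.3,  # table range: 1.2-1.4
    do_sample=True,
)

# With `model`, `tokenizer`, and `inputs` defined as in the card's snippet:
# outputs = model.generate(**inputs, **GEN_KWARGS)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```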
```diff
@@ -318,7 +309,7 @@ All examples were prepended with a safety system prompt before tokenization.
 |---|---|
 | Hardware | 2× NVIDIA Tesla T4 (16 GB each) |
 | Platform | Kaggle Notebooks (free tier) |
-| Main SFT training time |
 | Total fine-tune time (all phases) | ~1 hour |
 | Training samples / sec (main phase) | ~19.3 |
```
```diff
@@ -326,33 +317,6 @@ All examples were prepended with a safety system prompt before tokenization.
 
 ## Evaluation
 
-### Training Curves
-
-
-
-
-### Eval Loss at Checkpoints (Main SFT Phase)
-
-| Step | Approx. Epoch | Eval Loss | Eval PPL |
-|---|---|---|---|
-| 40 | 0.44 | 3.711 | 40.9 |
-| 80 | 0.88 | 3.397 | 29.9 |
-| 120 | 1.32 | 3.272 | 26.4 |
-| 160 | 1.76 | 3.213 | 24.8 |
-| 200 | 2.20 | 3.186 | 24.2 |
-| **240** | **2.64** | **3.176** | **23.9** |
-
-### Per-Source Eval Loss at End of Epoch 3
-
-| Source | Eval Loss | Notes |
-|---|---|---|
-| BeaverTails | **2.135** | Model converges strongly on short refusal templates |
-| Seed Safety | 3.086 | Hand-crafted refusals; good fit |
-| FalseReject | 3.322 | Benign-but-edgy prompts; stable throughout training |
-| Dolly | 3.488 | General instruction following; modest increase vs. early training |
-
-The low BeaverTails eval loss confirms the model learned refusal phrasing effectively. The primary bottleneck for generalizing that to novel harmful prompts is the 30M parameter budget.
-
 ### Safety Probe Results (Post-Training, 35-prompt suite)
 
 | Metric | Greedy | Sampled (T=0.7) |
```
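The removed checkpoint table pairs each eval loss with an eval perplexity; the two columns are related by PPL ≈ exp(loss), since perplexity is the exponential of the mean per-token cross-entropy. A quick check of that relationship (this is a sanity-check sketch, not code from the card, and the table's PPL column differs from exp(loss) only by rounding):

```python
import math

# Eval losses at each checkpoint step, from the removed table.
eval_loss = {40: 3.711, 80: 3.397, 120: 3.272, 160: 3.213, 200: 3.186, 240: 3.176}

for step, loss in eval_loss.items():
    # Perplexity is the exponential of the mean-token cross-entropy loss.
    print(f"step {step:3d}: loss {loss:.3f} -> ppl {math.exp(loss):.1f}")
```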
```diff
@@ -387,7 +351,7 @@ The low BeaverTails eval loss confirms the model learned refusal phrasing effect
 ## Bias, Risks, and Limitations
 
 - **Weak safety generalization.** The model learned short refusal templates rather than deep semantic harm detection. Paraphrased or novel harmful prompts frequently bypass refusals.
-- **Terse outputs.**
 - **All base model limitations apply.** 512-token context, limited world knowledge, occasional hallucination — see the [Stentor-30M model card](https://huggingface.co/StentorLabs/Stentor-30M) for full details.
 - **No RLHF.** SFT only — no preference-based alignment was applied.
 - **Dataset biases.** BeaverTails and Dolly carry their respective dataset biases into the fine-tune.
```
```diff
@@ -549,7 +513,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 | Platform | Kaggle (free tier) |
 | Compute region | US West |
 | Total fine-tune time (all phases) | ~1 hour |
-| Estimated CO₂e | ~
 
 ---
```
```diff
@@ -318,7 +309,7 @@ All examples were prepended with a safety system prompt before tokenization.
 |---|---|
 | Hardware | 2× NVIDIA Tesla T4 (16 GB each) |
 | Platform | Kaggle Notebooks (free tier) |
+| Main SFT training time | 49 min 32.7 s (2,972.7 s) |
 | Total fine-tune time (all phases) | ~1 hour |
 | Training samples / sec (main phase) | ~19.3 |
```
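The added timing row is consistent with the throughput row in the same table: multiplying wall-clock seconds by samples per second gives the approximate number of training samples processed in the main phase. A back-of-the-envelope check, assuming throughput was roughly constant:

```python
# Figures from the hardware table: main-phase wall clock and throughput.
seconds = 2972.7        # 49 min 32.7 s
samples_per_sec = 19.3  # approximate reported throughput

total_samples = seconds * samples_per_sec
print(f"~{total_samples:,.0f} samples seen in the main SFT phase")  # ~57,373
```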
```diff
@@ -387,7 +351,7 @@ The low BeaverTails eval loss confirms the model learned refusal phrasing effect
 ## Bias, Risks, and Limitations
 
 - **Weak safety generalization.** The model learned short refusal templates rather than deep semantic harm detection. Paraphrased or novel harmful prompts frequently bypass refusals.
+- **Terse outputs.** The base Stentor-30M had a persistent tendency to keep generating text well past a natural stopping point rather than terminating cleanly on its own. The stop-calibration phase was specifically designed to reinforce the behavior of ending a response once the answer is complete, so short and clean outputs are intentional.
 - **All base model limitations apply.** 512-token context, limited world knowledge, occasional hallucination — see the [Stentor-30M model card](https://huggingface.co/StentorLabs/Stentor-30M) for full details.
 - **No RLHF.** SFT only — no preference-based alignment was applied.
 - **Dataset biases.** BeaverTails and Dolly carry their respective dataset biases into the fine-tune.
```
```diff
@@ -549,7 +513,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 | Platform | Kaggle (free tier) |
 | Compute region | US West |
 | Total fine-tune time (all phases) | ~1 hour |
+| Estimated CO₂e | ~20 gCO₂e |
 
 ---
```
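The added ~20 gCO₂e figure is plausible from first principles. A rough reconstruction: the T4's 70 W TDP is a published spec, but full-power draw for the entire hour and the ~140 gCO₂e/kWh grid intensity are this sketch's assumptions, not values from the card:

```python
# Back-of-the-envelope CO2e estimate (assumed inputs, not from the card).
gpus = 2
tdp_watts = 70           # NVIDIA Tesla T4 TDP (published spec)
hours = 1.0              # "Total fine-tune time (all phases) | ~1 hour"
grid_gco2_per_kwh = 140  # assumed carbon intensity for the compute region

energy_kwh = gpus * tdp_watts * hours / 1000
co2e_grams = energy_kwh * grid_gco2_per_kwh
print(f"~{co2e_grams:.0f} gCO2e")  # ~20 gCO2e
```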