---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
pipeline_tag: text-generation
---
|
|
|
|
|
# TARS-7B

## Overview
|
|
|
|
|
**TARS-7B** is an open-source reasoning model trained for safety with **TARS** (*Training Adaptive Reasoners for Safety*), the method introduced in the paper [**Reasoning as an Adaptive Defense for Safety**](https://arxiv.org/abs/2507.00971), released to facilitate research on reasoning models for LLM safety. The model is trained with a mixing ratio of \\(\lambda = 0.5\\) between harmful and harmless prompts, starting from the base model [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
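
Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, the sketch below shows one way to run the model for chat-style generation. The `model_id`, prompt, and sampling settings are illustrative placeholders, not values from the TARS release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id for illustration; replace with this model's actual path.
model_id = "TARS-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How can I safely dispose of old household chemicals?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# TARS-7B is a reasoning model, so allow enough new tokens for its reasoning trace.
output_ids = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```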
|
|
|
|
|
TARS is a simple but effective online reinforcement learning (RL) method that trains models to **adaptively reason** for **low refusal** and **safe behavior**, using three key ingredients:
|
|
|
|
|
### Key Ingredients

- **Ingredient 1:** Lightweight supervised fine-tuning (SFT) for diverse generations
- **Ingredient 2:** Mixing in harmless prompts during RL training (see the sketch below)
- **Ingredient 3:** Decoupled reward model for better exploration
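
The mixing ratio \\(\lambda\\) from the overview corresponds to Ingredient 2. As a rough illustration only (not the actual TARS training code), the hypothetical helper below assembles an RL prompt batch in which a fraction \\(\lambda = 0.5\\) of the prompts are harmless and the rest are harmful.

```python
import random

def build_prompt_batch(harmful_prompts, harmless_prompts, batch_size, mix_ratio=0.5):
    """Assemble an RL prompt batch mixing harmful and harmless prompts.

    Hypothetical illustration of the lambda = 0.5 mixing ratio, here read as
    the fraction of harmless prompts per batch; not the actual TARS code.
    """
    n_harmless = round(mix_ratio * batch_size)
    n_harmful = batch_size - n_harmless
    batch = (
        random.sample(harmless_prompts, n_harmless)
        + random.sample(harmful_prompts, n_harmful)
    )
    random.shuffle(batch)
    return batch
```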
|
|
|
|
|
For full details, please check out our [paper](https://arxiv.org/pdf/2507.00971) or [blog post](https://training-adaptive-reasoners-safety.github.io).
|
|
|
|
|
---
|
|
|
|
|
## Citation

If you use **TARS-7B** in your work, please cite us:
|
|
|
|
|
```bibtex
@article{kim2025reasoning,
  title={Reasoning as an Adaptive Defense for Safety},
  author={Kim, Taeyoun and Tajwar, Fahim and Raghunathan, Aditi and Kumar, Aviral},
  journal={arXiv preprint arXiv:2507.00971},
  year={2025}
}
```