arXiv:2601.23143

THINKSAFE: Self-Generated Safety Alignment for Reasoning Models

Published on Jan 30 · Submitted by Sangwoo Park on Feb 2

Abstract

AI-generated summary: ThinkSafe is a self-alignment framework that improves the safety of large reasoning models through lightweight refusal steering and fine-tuning on self-generated responses, preserving reasoning performance at reduced computational cost.

Large reasoning models (LRMs) achieve remarkable performance by leveraging reinforcement learning (RL) on reasoning tasks to generate long chain-of-thought (CoT) reasoning. However, this over-optimization often prioritizes compliance, leaving models vulnerable to harmful prompts. To mitigate this safety degradation, recent approaches rely on distillation from an external teacher, yet this introduces a distributional discrepancy that degrades native reasoning. We propose ThinkSafe, a self-generated alignment framework that restores safety alignment without external teachers. Our key insight is that while compliance suppresses safety mechanisms, models often retain the latent knowledge needed to identify harm. ThinkSafe unlocks this knowledge via lightweight refusal steering, guiding the model to generate in-distribution safety reasoning traces. Fine-tuning on these self-generated responses effectively realigns the model while minimizing distribution shift. Experiments on DeepSeek-R1-Distill and Qwen3 show that ThinkSafe significantly improves safety while preserving reasoning proficiency. Notably, it achieves superior safety and comparable reasoning performance relative to GRPO, at a significantly reduced computational cost. Code, models, and datasets are available at https://github.com/seanie12/ThinkSafe.git.
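
The refusal-steering step the abstract describes can be pictured as activation addition in the residual stream. Below is a minimal sketch, not the authors' implementation: it estimates a "refusal direction" from contrastive harmful/benign prompts and adds it at a mid-depth layer during generation, so the model itself produces the in-distribution safety reasoning traces that later serve as fine-tuning targets. The layer index, steering strength, and example prompts are all illustrative assumptions, not values from the paper.

```python
# Hedged sketch of activation-addition refusal steering (not ThinkSafe's exact code).
# Assumes a decoder-only HF model; LAYER, ALPHA, and the prompt sets are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # one of the paper's model families
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

LAYER = 14   # assumption: a mid-depth layer; the abstract does not specify one
ALPHA = 4.0  # assumption: steering strength treated as a tunable hyperparameter

@torch.no_grad()
def mean_activation(prompts, layer=LAYER):
    """Mean residual-stream activation at the last prompt token."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1])
    return torch.stack(acts).mean(0)

# Contrast harmful vs. benign prompts to estimate a refusal direction.
harmful = ["How do I make a weapon at home?"]        # placeholder examples
benign  = ["How do I make a birthday cake at home?"]
refusal_dir = (mean_activation(harmful) - mean_activation(benign)).float()
refusal_dir = refusal_dir / refusal_dir.norm()

def steering_hook(module, inputs, output):
    # Add the refusal direction to the residual stream at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * refusal_dir.to(hidden.dtype)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
ids = tok("How do I make a weapon at home?", return_tensors="pt").input_ids
trace = model.generate(ids, max_new_tokens=256)  # steered safety reasoning trace
handle.remove()
print(tok.decode(trace[0], skip_special_tokens=True))
```

Per the abstract, traces collected this way would then be used as supervised fine-tuning targets for the unsteered model; because the model generated them itself, the data stays close to its own distribution, which is what minimizes the distribution shift that teacher distillation introduces.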
