Improve model card: Add paper abstract, project page link and steering instruction
#1
by nielsr HF Staff - opened
README.md CHANGED

````diff
@@ -1,10 +1,10 @@
 ---
-library_name: transformers
-tags: []
-pipeline_tag: text-generation
-license: mit
 base_model:
 - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
+library_name: transformers
+license: mit
+pipeline_tag: text-generation
+tags: []
 ---
 
 **Repository for:**
@@ -13,8 +13,9 @@ base_model:
 
 (We also release ThinkEdit versions for ThinkEdit-deepseek-qwen-1.5b, ThinkEdit-deepseek-llama3-8b, and ThinkEdit-deepseek-qwen-14b.)
 
 **Authors**: Chung-En Sun, Ge Yan, Tsui-Wei Weng
 **Paper**: [ThinkEdit: Interpretable Weight Editing to Mitigate Overly Short Thinking in Reasoning Models](https://arxiv.org/abs/2503.22048)
+**Project Page**: [ThinkEdit Project Website](https://lilywenglab.github.io/ThinkEdit/)
 
 Github: https://github.com/Trustworthy-ML-Lab/ThinkEdit
 
@@ -26,10 +27,22 @@ Reasoning-augmented models sometimes fail by generating **overly short**, abstra
 
 **ThinkEdit** is a lightweight weight-editing method that:
 
 - Identifies ~4% of "short reasoning" attention heads
 - Edits only ~0.2% of total parameters
 - Removes the "short reasoning" direction from their output
 - Boosts performance, especially on cases with short reasoning traces
+
+**Abstract:** Recent studies have shown that Large Language Models (LLMs) augmented with chain-of-thought (CoT) reasoning demonstrate impressive problem-solving abilities. However, in this work, we identify a recurring issue where these models occasionally generate overly short reasoning, leading to degraded performance on even simple mathematical problems. Specifically, we investigate how reasoning length is embedded in the hidden representations of reasoning models and its impact on accuracy. Our analysis reveals that reasoning length is governed by a linear direction in the representation space, allowing us to induce overly short reasoning by steering the model along this direction. Building on this insight, we introduce \textbf{\textit{ThinkEdit}}, a simple yet effective weight-editing approach to mitigate the issue of overly short reasoning. We first identify a small subset of attention heads (approximately 4%) that predominantly drive short reasoning behavior. We then edit the output projection weights of these heads to remove the short reasoning direction. With changes to only 0.2% of the model's parameters, \textbf{\textit{ThinkEdit}} effectively reduces overly short reasoning and yields notable accuracy gains for short reasoning outputs (+6.39%), along with an overall improvement across multiple math benchmarks (+3.34%). Our findings provide new mechanistic insights into how reasoning length is controlled within LLMs and highlight the potential of fine-grained model interventions to improve reasoning quality.
+
+---
+
+## How to steer
+
+To steer the model and observe changes in accuracy and reasoning length:
+
+1. Generate responses for probing.
+2. Extract the Reasoning Length Direction from Self-Attn or MLP.
+3. Steer the models with the directions.
 
 ---
 
@@ -81,3 +94,4 @@ The usage of ThinkEdit models is exactly the same as the original deepseek-disti
 primaryClass={cs.CL},
 url={https://arxiv.org/abs/2503.22048},
 }
+```
````
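The edit the bullet list describes (removing the "short reasoning" direction from the output of a small set of attention heads) can be illustrated with a toy projection. This is a minimal sketch on a random matrix standing in for a head's output projection, assuming the edit is a rank-one projection of the direction out of the head's output (per the abstract); the actual per-head weight slicing in the released checkpoints may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Toy stand-ins: W plays the role of a head's output-projection weight,
# d is a unit-norm "short reasoning" direction in the residual stream.
W = rng.normal(size=(d_model, d_model))
d = rng.normal(size=d_model)
d /= np.linalg.norm(d)

# Rank-one edit: W_edited = (I - d d^T) W, so any output W_edited @ h
# has zero component along d -- the direction is removed at the weights.
W_edited = W - np.outer(d, d) @ W

h = rng.normal(size=d_model)  # arbitrary head activation
out = W_edited @ h
print(abs(out @ d))           # numerically ~0: direction removed
```

Because the change is applied once to the weights, inference afterwards runs exactly like the unedited model, which matches the claim that only ~0.2% of parameters are touched.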
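The three "How to steer" steps can be sketched end-to-end on synthetic activations. This is only an illustration of the linear-direction idea: the difference-of-means extraction and the steering coefficient `alpha` below are assumptions of this sketch, not the repository's actual scripts (see the GitHub link for those):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 32

# Step 1 (probing): pretend we collected hidden states from generated
# responses, grouped by long vs short reasoning traces. The synthetic
# "long" states are shifted along a hidden ground-truth direction.
true_dir = rng.normal(size=d_model)
true_dir /= np.linalg.norm(true_dir)
short_states = rng.normal(size=(200, d_model))
long_states = rng.normal(size=(200, d_model)) + 3.0 * true_dir

# Step 2 (extraction): difference of group means, normalized -- one
# simple way to read a linear direction out of the representations.
direction = long_states.mean(axis=0) - short_states.mean(axis=0)
direction /= np.linalg.norm(direction)

# Step 3 (steering): shift a hidden state by alpha * direction; in a
# real model this would be applied to the residual stream (e.g. via a
# forward hook) to lengthen or shorten the reasoning.
alpha = 4.0
h = rng.normal(size=d_model)
h_steered = h + alpha * direction

print(float(direction @ true_dir))  # close to 1: direction recovered
```

Steering with positive `alpha` along the recovered direction would push the model toward longer reasoning, and negative `alpha` toward shorter reasoning, which is how the accuracy/length trade-off can be observed.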