**LogicCoder-7B** is a 7B-parameter language model fine-tuned for code generation tasks. It is based on the DeepSeek-R1-Distill-Qwen-7B model and trained on a Python subset of the open-r1/codeforces-cots dataset.
This model was fine-tuned on pruned CoT examples derived via our **ASAP** method (**A**nchor-guided, **S**urpris**a**l-polished **P**runing), which focuses on highly compressed yet semantically informative reasoning traces.
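The pruning idea can be illustrated with a minimal toy sketch. This is *not* the released ASAP implementation; the function name, threshold value, and anchor set below are hypothetical, and it only shows the general principle of keeping anchor tokens plus tokens whose surprisal (negative log-probability) is high enough to carry information:

```python
def prune_cot(tokens, logprobs, anchors, surprisal_threshold=2.0):
    """Toy surprisal-based pruning sketch (illustrative only).

    Keeps a token if it is an anchor token, or if its surprisal
    (-log p under some reference model) meets the threshold.
    """
    kept = []
    for tok, lp in zip(tokens, logprobs):
        surprisal = -lp  # low-probability tokens have high surprisal
        if tok in anchors or surprisal >= surprisal_threshold:
            kept.append(tok)
    return kept

# Hypothetical reasoning-trace fragment with made-up log-probabilities.
trace = ["First", ",", "note", "that", "n", "is", "even"]
logprobs = [-0.1, -0.05, -2.5, -0.2, -3.0, -0.4, -2.8]
anchors = {"n"}  # anchor tokens are always retained

print(prune_cot(trace, logprobs, anchors))  # → ['note', 'n', 'even']
```

In this sketch, highly predictable filler tokens are dropped while anchors and surprising (informative) tokens survive, yielding a compressed trace.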
# 🧠 Reasoning Mode