---
# **QwQ-R1-Distill-7B-CoT**
QwQ-R1-Distill-7B-CoT is based on the *Qwen [ KT ] model*, distilled via DeepSeek-R1-Distill-Qwen-7B. It has been fine-tuned on long chain-of-thought reasoning traces and specialized datasets, with a focus on chain-of-thought (CoT) reasoning for problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited for applications such as instruction following, text generation, and complex reasoning tasks.
# **Quickstart with Transformers**
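A minimal loading-and-generation sketch with the Transformers library. The repository id below is assumed from the model name and may differ from the actual Hugging Face repo; the system prompt and sampling settings are illustrative, not prescribed by the model card.

```python
# Minimal quickstart sketch, assuming a Qwen-style chat template.
# "QwQ-R1-Distill-7B-CoT" is an assumed repo id -- replace with the
# actual Hugging Face repository path if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "QwQ-R1-Distill-7B-CoT"  # assumed repo id

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # place layers on available GPUs/CPU
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many positive integers less than 100 are divisible by 3 or 5?"
messages = [
    {"role": "system", "content": "You are a helpful assistant. Think step by step."},
    {"role": "user", "content": prompt},
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens so only the completion is decoded.
completion = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
)
print(completion)
```

For long chain-of-thought outputs, raise `max_new_tokens` so the reasoning trace is not truncated mid-step.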