---

# **Llama-8B-Distill-CoT**

Llama-8B-Distill-CoT is based on the *Llama [ KT ]* model, distilled from DeepSeek-R1-Distill-Llama-8B. It has been fine-tuned on long chain-of-thought reasoning data and specialized datasets, with a focus on chain-of-thought (CoT) reasoning for problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it ideal for applications such as instruction-following, text generation, and complex reasoning tasks.

# **Use with transformers**

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "prithivMLmods/Llama-8B-Distill-CoT"

# Load the model in bfloat16 and place it automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The chat pipeline returns the whole conversation; print the final assistant message.
print(outputs[0]["generated_text"][-1])
```
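
The quick start above uses the `pipeline` abstraction. As a minimal sketch of the Auto-classes path mentioned earlier, the snippet below loads the tokenizer and model directly and calls `generate()`; the reasoning prompt is illustrative, not from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "prithivMLmods/Llama-8B-Distill-CoT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# An illustrative multi-step reasoning prompt (assumption, not part of the card).
messages = [
    {"role": "user", "content": "A train travels 60 km in 45 minutes. "
                                "What is its average speed in km/h? Think step by step."},
]

# Render the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Compared with the pipeline, this path gives direct control over generation parameters such as `do_sample` and `temperature`.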

### **Intended Use:**
1. **Instruction-Following:** The model is designed to handle detailed instructions, making it ideal for virtual assistants, automation tools, and educational platforms.
2. **Problem-Solving:** Its fine-tuning on chain-of-thought (CoT) reasoning allows it to tackle multi-step problem-solving in domains such as mathematics, logic, and programming.
3. **Text Generation:** Capable of generating coherent and contextually relevant content, it is suitable for creative writing, documentation, and report generation.
4. **Education and Training:** Provides step-by-step explanations and logical reasoning, making it a useful tool for teaching and learning.
5. **Research and Analysis:** Supports researchers and professionals by generating detailed analyses and structured arguments for complex topics.
6. **Programming Assistance:** Helps in generating, debugging, and explaining code, as well as creating structured outputs like JSON or XML (see the sketch after this list).
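
As a minimal sketch of the structured-output use case in item 6, the snippet below reuses the `pipeline` object from the quick-start example; the prompt and the requested JSON schema are illustrative assumptions, not part of the model card.

```python
# Reuses the `pipeline` object created in the quick-start example above.
# The prompt and the requested JSON keys are illustrative assumptions.
messages = [
    {"role": "system", "content": "You are a helpful assistant that replies with valid JSON only."},
    {"role": "user", "content": "Extract the person's name and age from: 'Alice is 30 years old.' "
                                "Respond with a JSON object with keys 'name' and 'age'."},
]

outputs = pipeline(messages, max_new_tokens=128)
# Print only the content of the final assistant message.
print(outputs[0]["generated_text"][-1]["content"])
```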

### **Limitations:**
1. **Resource Intensive:** Requires high computational resources to run efficiently, which may limit accessibility for small-scale deployments.
2. **Hallucination Risk:** May generate incorrect or misleading information, especially when handling ambiguous or poorly framed prompts.
3. **Domain-Specific Gaps:** While fine-tuned for reasoning, it may not perform well in specialized domains outside its training data.
4. **Bias in Training Data:** The model's responses can reflect biases present in the datasets it was trained on, potentially leading to biased or inappropriate outputs.
5. **Dependence on Input Quality:** Performance depends heavily on clear, structured inputs; ambiguous or vague queries can produce suboptimal outputs.
6. **Limited Real-Time Context:** The model cannot access real-time information or updates beyond its training data, which can affect its relevance for time-sensitive queries.
7. **Scalability for Long Contexts:** While capable of multi-step reasoning, its ability to handle extremely long or complex contexts may be limited compared to larger, more specialized models.