Update README.md
tags:
- phi4
- trl
- sft
---

# **Megatron-Opus-14B-2.1 [Exp]**

Megatron-Opus-14B-2.1 [Exp], fine-tuned from Microsoft's Phi-4, is a state-of-the-art open model developed with a focus on responsible problem solving and advanced reasoning. Built on a diverse blend of synthetic datasets, carefully filtered public-domain web content, and high-quality academic books and Q&A datasets, it ensures that small, capable models are trained on data of exceptional depth and precision.

Megatron-Opus-14B-2.1 adopts a robust safety post-training approach using open-source and in-house synthetic datasets, combining SFT (Supervised Fine-Tuning) with iterative DPO (Direct Preference Optimization) to keep outputs helpful and harmless across a range of safety categories.
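
The exact alignment recipe for this model has not been published; the snippet below is only a minimal sketch of what an iterative-DPO pass can look like with the `trl` library (one of this model's tags). The dataset name, hyperparameters, and trainer arguments are illustrative placeholders, and the `trl` API differs slightly between versions.

```python
# Illustrative sketch only: NOT the published recipe for this model.
# Dataset, hyperparameters, and trainer arguments are hypothetical placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "prithivMLmods/Megatron-Opus-14B-2.1"  # start from the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO trains on preference pairs: {"prompt": ..., "chosen": ..., "rejected": ...}
pairs = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="megatron-dpo", beta=0.1),  # beta scales the KL penalty
    train_dataset=pairs,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```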

# **Dataset Info**

Megatron-Opus-14B-2.1 is fine-tuned on a carefully curated synthetic dataset generated using an advanced pipeline optimized for Chain of Thought (CoT) reasoning and Responsible Problem Breakdown (RPB) methodologies. This ensures that the model excels at:

- **Logical reasoning**
- **Step-by-step problem-solving**
- **Breaking down complex tasks into manageable parts**

The dataset also emphasizes responsible decision-making and fairness in generating solutions.
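
The card does not specify the dataset schema, so the record below is a purely hypothetical illustration of the CoT/RPB style described above: a prompt paired with a stepwise breakdown that ends in a final answer.

```python
# Purely hypothetical record illustrating the CoT / RPB style;
# the actual dataset schema for this model is not published here.
record = {
    "prompt": "A train covers 120 km in 90 minutes. What is its average speed in km/h?",
    "response": (
        "Step 1: Convert the time to hours: 90 minutes = 1.5 hours.\n"
        "Step 2: Divide distance by time: 120 km / 1.5 h = 80 km/h.\n"
        "Answer: The average speed is 80 km/h."
    ),
}
print(record["response"])
```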

# **Run with Transformers**

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Megatron-Opus-14B-2.1")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Megatron-Opus-14B-2.1",
    device_map="auto",           # spread weights across available devices
    torch_dtype=torch.bfloat16,
)

input_text = "Explain the concept of black holes."
# send inputs to the same device as the model's first weights
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For chat-style interactions, use `tokenizer.apply_chat_template`:

```python
messages = [
    {"role": "user", "content": "Explain the concept of black holes."},
]
# add_generation_prompt=True appends the assistant header so the model replies
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
).to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

# **Intended Use**

Megatron-Opus-14B-2.1 is tailored for a wide range of applications, especially those involving **advanced reasoning**, **multilingual capabilities**, and **responsible problem-solving**. Its primary use cases include:

1. **Responsible Problem Solving**
   - Breaking down complex problems into logical, actionable steps.
   - Offering ethical, well-rounded solutions in academic and professional contexts.

2. **Advanced Reasoning Tasks**
   - Excelling in mathematics, logic, and scientific reasoning.
   - Providing detailed explanations and systematic answers.

3. **Content Generation**
   - Assisting in generating high-quality content for various domains, including creative writing and technical documentation.
   - Supporting marketers, writers, and educators with detailed and well-structured outputs.

4. **Educational Support**
   - Acting as a virtual tutor for students by generating practice questions, answers, and detailed explanations.
   - Helping educators design learning material that promotes critical thinking and step-by-step problem-solving.

5. **Customer Support & Dialogue Systems**
   - Enabling chatbots and virtual assistants to provide accurate, helpful, and responsible responses; a minimal multi-turn sketch follows this list.
   - Enhancing customer service with reasoning-driven automation.
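
As a minimal sketch of the dialogue-system use case, the loop below maintains a running `messages` history and reuses the `model` and `tokenizer` loaded in the Transformers example above. It is an illustration only, not a production serving setup.

```python
# Minimal multi-turn chat loop (illustrative; reuses model/tokenizer from above).
messages = []
for _ in range(3):  # three user turns for the demo
    messages.append({"role": "user", "content": input("You: ")})
    inputs = tokenizer.apply_chat_template(
        messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
    ).to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    # decode only the newly generated tokens, not the whole prompt
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    messages.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)
```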

# **Limitations**

Despite its strengths, Megatron-Opus-14B-2.1 has some limitations that users should be aware of:

1. **Bias and Fairness**
   - While great effort has been made to minimize biases, users should critically assess the model’s output in sensitive scenarios to avoid unintended bias.

2. **Contextual Interpretation**
   - The model may occasionally misinterpret highly nuanced prompts or ambiguous contexts, leading to suboptimal responses.

3. **Knowledge Cutoff**
   - Megatron-Opus-14B-2.1’s knowledge is static, reflecting only the data available at training time; it does not include real-time updates or recent developments.

4. **Safety and Harmlessness**
   - Despite safety post-training, inappropriate or harmful outputs may still occur. Continuous monitoring and human oversight are advised when using the model in critical contexts.

5. **Computational Requirements**
   - Deploying Megatron-Opus-14B-2.1 efficiently may require substantial computational resources, particularly for large-scale deployments or real-time applications; see the quantization sketch after this list.
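
If full-precision bfloat16 weights do not fit your hardware, one common mitigation is 4-bit quantized loading with `bitsandbytes`. The configuration below is a sketch of that option and has not been validated for this specific model.

```python
# Sketch: 4-bit quantized loading with bitsandbytes to reduce memory use.
# Not an officially validated configuration for this model.
# pip install bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality/speed
)
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Megatron-Opus-14B-2.1",
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Megatron-Opus-14B-2.1")
```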