---
license: apache-2.0
datasets:
- MiniMaxAI/SynLogic
language:
- en
base_model:
- prithivMLmods/Qwen3-1.7B-ft-bf16
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- synlogic
- moe
- math
---

# **Megatron-Bots-1.7B-Reasoning**

> **Megatron-Bots-1.7B-Reasoning** is a **logical reasoning and general-purpose thinking model** fine-tuned from **Qwen3-1.7B**, designed for **advanced reasoning tasks and analytical problem-solving**. Built with data from the **SynLogic Dataset**, it excels at structured thinking, logical deduction, and comprehensive problem analysis in a compact yet powerful architecture.

> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Megatron-Bots-1.7B-Reasoning-GGUF](https://huggingface.co/prithivMLmods/Megatron-Bots-1.7B-Reasoning-GGUF)

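
If you prefer the GGUF build, the sketch below loads it with `llama-cpp-python`. This is a minimal example under assumptions: that the repository ships a Q4_K_M quantization (check its file list for the actual filenames) and that `llama-cpp-python` and `huggingface_hub` are installed.

```python
# Minimal GGUF sketch with llama-cpp-python (pip install llama-cpp-python huggingface_hub).
# The "*Q4_K_M.gguf" filename pattern is an assumption; pick a file that actually exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Megatron-Bots-1.7B-Reasoning-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # context length; adjust to your memory budget
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an advanced reasoning assistant."},
        {"role": "user", "content": "If all A are B, and some B are C, what can we conclude about A and C?"},
    ],
    max_tokens=512,
    temperature=0.1,
)
print(result["choices"][0]["message"]["content"])
```
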
## **Key Features**

1. **Advanced Logical Reasoning**

   Trained on the SynLogic Dataset to perform complex logical deductions, structured problem-solving, and analytical thinking across diverse domains with exceptional accuracy and clarity.

2. **General-Purpose Thinking Engine**

   Capable of handling multi-step reasoning, causal analysis, pattern recognition, and systematic problem decomposition for a wide range of cognitive tasks.

3. **Compact High-Performance Architecture**

   At only 1.7B parameters, the model delivers sophisticated reasoning with minimal resource requirements, making it well suited to deployment in resource-constrained environments (see the quantized-loading sketch after this list).

4. **SynLogic Dataset Foundation**

   Built upon carefully curated synthetic logic problems and reasoning patterns, ensuring robust performance across mathematical reasoning, logical puzzles, and analytical challenges.
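
For tighter memory budgets, the following sketch loads the model in 4-bit precision with `bitsandbytes`. The quantization settings are illustrative defaults, not values validated for this checkpoint.

```python
# Illustrative 4-bit loading via bitsandbytes (pip install bitsandbytes accelerate).
# The quantization settings below are assumptions, not tuned for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Megatron-Bots-1.7B-Reasoning"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normalized float 4-bit quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for a speed/quality balance
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
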
## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Megatron-Bots-1.7B-Reasoning"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve this logic puzzle: If all A are B, and some B are C, what can we conclude about A and C?"

messages = [
    {"role": "system", "content": "You are an advanced reasoning assistant specialized in logical analysis and problem-solving."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    temperature=0.1,  # Lower temperature for more consistent reasoning
    do_sample=True
)

# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
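
The low `temperature` favors consistent step-by-step deductions; raise it (and consider `top_p`) for more exploratory answers. Because the base model is Qwen3-1.7B, the chat template may also accept Qwen3's `enable_thinking` switch, though this has not been verified for this fine-tune. If supported, you can replace the `apply_chat_template` call above with:

```python
# Assumption: the Qwen3 chat template (and its enable_thinking flag) carried over to this fine-tune.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False to suppress the <think> reasoning block, if supported
)
```
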
## **Intended Use**

- **Educational Platforms**: Logical reasoning tutoring and step-by-step problem explanation for students.
- **Research Applications**: Automated logical analysis and hypothesis generation for academic research.
- **Decision Support Systems**: Structured analytical thinking for business and strategic decision-making.
- **Puzzle and Game AI**: Advanced reasoning for complex puzzles, strategy games, and logical challenges.
- **Code Analysis Tools**: Logical flow analysis and debugging assistance for software development.
## **Limitations**

1. **Reasoning Domain Specificity**:

   While strong in logical reasoning, performance may vary on tasks requiring extensive domain-specific knowledge outside the training scope.

2. **SynLogic Dataset Constraints**:

   Training primarily on synthetic logic data may limit performance on real-world reasoning scenarios that require contextual understanding.

3. **Parameter Scale Trade-offs**:

   The 1.7B parameter size, while efficient, may struggle with extremely complex multi-step reasoning chains compared to larger models.

4. **Base Model Inheritance**:

   Inherits any limitations of the Qwen3-1.7B base architecture, along with potential biases from its pretraining data.

5. **Context Window Limitations**:

   May face challenges with very long reasoning chains that exceed the model's context window capacity (see the snippet after this list for how to check the limit).
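
To see the maximum sequence length before constructing long reasoning prompts, you can read it from the model config (a quick check of the architectural limit, not a guarantee of quality at that length):

```python
# Quick check of the maximum sequence length the architecture supports.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("prithivMLmods/Megatron-Bots-1.7B-Reasoning")
print(config.max_position_embeddings)  # prompt + generated tokens should stay under this
```
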