---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
---

![IOP.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/X4wG8maYiZT68QLGW4NPn.png)

# Vulpecula-4B

> **Vulpecula-4B** is fine-tuned on the **SK1.1** traces, the same 1,000 entries of **DeepSeek thinking trajectories**, together with further fine-tuning on the **Fine-Tome 100k** and **Open Math Reasoning** datasets. This specialized 4B-parameter model is designed for enhanced mathematical reasoning, logical problem solving, and structured content generation, optimized for precision and step-by-step explanation.

> [!note]
> GGUF : [https://huggingface.co/prithivMLmods/Vulpecula-4B-GGUF](https://huggingface.co/prithivMLmods/Vulpecula-4B-GGUF)
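
If you prefer the GGUF build, a minimal sketch using `llama-cpp-python` might look like the following. This is not part of the original card, and the quant filename pattern is an assumption; check the GGUF repository for the exact files available.

```python
# Hedged sketch: run the GGUF build locally with llama-cpp-python.
# The filename pattern below is an assumption; pick a quant listed in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Vulpecula-4B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,               # context window for this session
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step math tutor."},
        {"role": "user", "content": "Solve the equation: 3x + 7 = 22. Show all steps."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```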

## Key Features

1. **Advanced Mathematical and Logical Reasoning**
   Fine-tuned on DeepSeek trajectories and Open Math Reasoning to excel at symbolic logic, arithmetic, and complex multi-step math problems, ideal for STEM education and competitions.

2. **Trace-Based Fine-Tuning**
   Leverages SK1.1 trace dataset entries to model deep, interpretable reasoning paths, improving transparency and consistency in problem-solving.

3. **Compact Code Understanding**
   Capable of understanding and generating efficient code snippets in Python, JavaScript, and more, supporting algorithmic explanations and lightweight coding tasks.

4. **Factual and Instructional Precision**
   Trained on curated high-quality data with reasoning benchmarks to minimize hallucinations and strictly follow instructions for structured outputs (Markdown, JSON, tables).

5. **Multilingual Capabilities**
   Supports over 20 languages for technical reasoning and translation, enhancing multilingual educational applications.

6. **Optimized Performance for Resource-Constrained Environments**
   Balances reasoning capability with efficient resource use, suitable for deployment in environments with limited compute.

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Vulpecula-4B"

# Load weights with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 3x + 7 = 22. Show all steps."

messages = [
    {"role": "system", "content": "You are a step-by-step math tutor."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a completion, then strip the prompt tokens so only new text is decoded
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
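
Because the model is trained to follow instructions for structured outputs, the same pipeline can be used for JSON generation. The sketch below reuses the `model` and `tokenizer` loaded in the Quickstart; the prompt and schema are purely illustrative.

```python
# Hedged sketch: request strictly formatted JSON (schema is illustrative).
# Reuses `model` and `tokenizer` from the Quickstart above.
import json

messages = [
    {"role": "system", "content": "You answer strictly with valid JSON and nothing else."},
    {"role": "user", "content": (
        "Extract the quantity from this sentence and answer ONLY with JSON "
        'matching {"item": str, "count": int}: '
        '"The lab ordered three oscilloscopes."'
    )},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.batch_decode(
    [output_ids[0][inputs.input_ids.shape[1]:]], skip_special_tokens=True
)[0]

# Reasoning models may wrap the JSON in extra text, so guard the parse.
try:
    print(json.loads(reply))
except json.JSONDecodeError:
    print("Model did not return parseable JSON:\n", reply)
```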

## Intended Use

* Advanced mathematical and logical problem solving
* Education-centric STEM tutoring and explanations
* Code assistance and debugging for lightweight coding tasks
* Structured content generation including JSON, Markdown, and tables
* Multilingual reasoning and technical translation
* Efficient deployment in low-resource settings with a focus on accuracy and stepwise reasoning (a quantized-loading sketch follows below)
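
For low-resource deployment, one common option is 4-bit quantized loading with bitsandbytes. The configuration below is a minimal sketch, not a recommendation from the card, and assumes the optional `bitsandbytes` package is installed.

```python
# Hedged sketch: 4-bit quantized loading for constrained hardware.
# Requires the optional bitsandbytes package; values are illustrative defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Vulpecula-4B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```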

## Limitations

* Limited creativity in purely open-ended or fictional prompts
* May face challenges with ambiguous or multi-intent queries
* Smaller context window compared to larger 14B+ models
* Possible factual errors in complex edge cases or adversarial inputs
