---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
tags:
- gspo
- text-generation-inference
- code
- math
---

# **Cerium-Qwen3-R1-Dev**

> Cerium-Qwen3-R1-Dev is a high-efficiency, multi-domain model fine-tuned from **Qwen3-0.6B** on the **rStar-Coder** dataset, enhanced with **code expert clusters**, an extended **open code reasoning dataset**, and **DeepSeek R1 coding sample traces**.
> The model blends symbolic precision, scientific logic, and structured-output fluency, making it a practical tool for developers, educators, and researchers who need advanced reasoning under constrained compute.

> [!NOTE]
> GGUF: [https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF](https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF)
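
The GGUF build can also run without PyTorch through llama.cpp bindings. The sketch below uses `llama-cpp-python`; the local filename and quantization level are assumptions, so substitute whichever file you actually download from the GGUF repository.

```python
# Minimal GGUF inference sketch (pip install llama-cpp-python).
# The model_path is a hypothetical local filename; use the file you actually
# downloaded from prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="Cerium-Qwen3-R1-Dev.Q5_K_M.gguf",  # assumed filename
    n_ctx=4096,                                     # context window for the session
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks whether a number is prime."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```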

---

## **Key Features**

1. **Unified Reasoning Across Code, Math & Science**
   Fine-tuned on **expert clusters** spanning programming, mathematics, and scientific logic, alongside **open code reasoning datasets** and **DeepSeek R1 coding sample traces**, strengthening multi-domain symbolic reasoning.

2. **Advanced Code Reasoning & Generation**
   Supports multi-language coding with explanations, optimization hints, and error detection—ideal for full-stack prototyping, algorithm synthesis, and debugging workflows.

3. **Scientific Problem Solving**
   Performs analytical reasoning in physics, biology, and chemistry—explaining concepts, solving equations, and handling symbolic derivations step-by-step.

4. **Hybrid Symbolic-AI Thinking**
   Combines structured logic, chain-of-thought reasoning, and open-ended inference, delivering robust performance on STEM tasks and complex prompt decomposition.

5. **Structured Output Mastery**
   Seamlessly generates output in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, suited for research reports, technical documentation, and structured data exchange (see the prompting sketch after this list).

6. **Optimized Lightweight Footprint for Versatile Deployment**
   Strikes a balance between performance and efficiency, making it deployable on **mid-range GPUs**, **offline clusters**, and advanced **edge AI systems**.
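
As a quick illustration of structured-output prompting (feature 5), the snippet below builds a `messages` list that steers the model toward emitting a single JSON object. The system-prompt wording is an assumption rather than a tuned template; run it through the Transformers quickstart in the next section.

```python
# Hypothetical prompt construction for JSON-only output (feature 5).
# Reuse the generation code from the Quickstart section below to run it.
messages = [
    {
        "role": "system",
        "content": "You are a precise assistant. Respond with a single valid JSON object and nothing else.",
    },
    {
        "role": "user",
        "content": "List three classic sorting algorithms with their average-case time complexity.",
    },
]
```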

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Cerium-Qwen3-R1-Dev"

# Load the checkpoint in its native precision and place it on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between Newtonian mechanics and quantum mechanics with examples."

messages = [
    {"role": "system", "content": "You are a scientific tutor skilled in code, math, and reasoning."},
    {"role": "user", "content": prompt}
]

# Render the conversation into a single prompt string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
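
For the mid-range-GPU deployments highlighted under feature 6, the same checkpoint can be loaded with 4-bit weight quantization to reduce memory use further. This is a generic `bitsandbytes` sketch rather than an officially validated configuration; it assumes a CUDA GPU with `bitsandbytes` installed.

```python
# 4-bit quantized loading sketch (assumes: pip install bitsandbytes, CUDA GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NF4 quantization for better accuracy
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Cerium-Qwen3-R1-Dev",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Cerium-Qwen3-R1-Dev")
# Generation then proceeds exactly as in the quickstart above.
```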

---

## **Intended Use**

* Scientific tutoring, computational logic, and mathematical education
* Advanced coding assistant for algorithm design, code reviews, and documentation
* Structured technical data generation across formats and fields
* STEM-focused chatbot or API for research and education tools
* Mid-resource deployment requiring high symbolic fidelity

## **Limitations**

* Not tuned for general-purpose or long-form creative writing
* Context limitations may hinder multi-document or full codebase analysis
* Specialized in technical and symbolic tasks—general chat may underperform
* Prioritizes structured reasoning over emotional or casual tone generation