---
license: apache-2.0
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
tags:
- gspo
- text-generation-inference
- code
- math
---

# **Cerium-Qwen3-R1-Dev**

> Cerium-Qwen3-R1-Dev is a high-efficiency, multi-domain model fine-tuned on **Qwen3-0.6B** using the **rStar-Coder** dataset, enhanced with **code expert clusters**, an extended **open code reasoning dataset**, and **DeepSeek R1 coding sample traces**.
> This model blends symbolic precision, scientific logic, and structured output fluency—making it an ideal tool for developers, educators, and researchers seeking advanced reasoning under constrained compute.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF](https://huggingface.co/prithivMLmods/Cerium-Qwen3-R1-Dev-GGUF)
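
If you prefer the GGUF build linked above, the following is a minimal sketch using `llama-cpp-python`. The quantization filename is a placeholder (check the GGUF repository for the actual file names), and `llama-cpp-python` is an extra dependency not required by the Transformers quickstart below.

```python
# Sketch: running a local GGUF file with llama-cpp-python.
# The filename below is a placeholder; download an actual quantization
# from the Cerium-Qwen3-R1-Dev-GGUF repository first.
from llama_cpp import Llama

llm = Llama(
    model_path="Cerium-Qwen3-R1-Dev.Q8_0.gguf",  # hypothetical local file
    n_ctx=4096,  # context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a scientific tutor skilled in code, math, and reasoning."},
        {"role": "user", "content": "Derive the time complexity of binary search."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```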

---

## **Key Features**

1. **Unified Reasoning Across Code, Math & Science**
   Fine-tuned on **expert clusters** spanning programming, mathematics, and scientific logic, alongside **open code reasoning datasets** and **DeepSeek R1 coding sample traces**, boosting multi-domain symbolic reasoning.

2. **Advanced Code Reasoning & Generation**
   Supports multi-language coding with explanations, optimization hints, and error detection—ideal for full-stack prototyping, algorithm synthesis, and debugging workflows.

3. **Scientific Problem Solving**
   Performs analytical reasoning in physics, biology, and chemistry—explaining concepts, solving equations, and handling symbolic derivations step-by-step.

4. **Hybrid Symbolic-AI Thinking**
   Combines structured logic, chain-of-thought reasoning, and open-ended inference, delivering robust performance on STEM tasks and complex prompt decomposition.

5. **Structured Output Mastery**
   Seamlessly generates output in **LaTeX**, **Markdown**, **JSON**, **CSV**, and **YAML**, suited for research reports, technical documentation, and data formats (see the structured-output example after the quickstart below).

6. **Optimized Lightweight Footprint for Versatile Deployment**
   Strikes a balance between performance and efficiency, making it deployable on **mid-range GPUs**, **offline clusters**, and advanced **edge AI systems** (see the quantized-loading sketch below).
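
As one illustration of the lightweight-deployment point, the sketch below loads the model in 4-bit with `bitsandbytes` via Transformers' `BitsAndBytesConfig`. This assumes a CUDA GPU and the `bitsandbytes` package; it is an optional optimization, not something the model itself requires.

```python
# Sketch: 4-bit quantized loading for constrained GPUs (assumes `bitsandbytes` is installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Cerium-Qwen3-R1-Dev",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Cerium-Qwen3-R1-Dev")
```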

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Cerium-Qwen3-R1-Dev"

# Load the model and tokenizer; device_map="auto" places weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the difference between Newtonian mechanics and quantum mechanics with examples."

messages = [
    {"role": "system", "content": "You are a scientific tutor skilled in code, math, and reasoning."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens (drop the echoed prompt)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
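
As a follow-up to the **Structured Output Mastery** feature above, the snippet below reuses the `model` and `tokenizer` from the quickstart to ask for JSON output. The prompts are illustrative only; the model's formatting behavior is not guaranteed by this card.

```python
# Follow-up sketch: reuse `model` and `tokenizer` from the quickstart to request structured (JSON) output.
json_messages = [
    {"role": "system", "content": "You are a precise assistant. Respond only with valid JSON."},
    {"role": "user", "content": "List three sorting algorithms with their average time complexity."},
]

json_text = tokenizer.apply_chat_template(
    json_messages,
    tokenize=False,
    add_generation_prompt=True,
)
json_inputs = tokenizer([json_text], return_tensors="pt").to(model.device)
json_ids = model.generate(**json_inputs, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(json_ids[0][json_inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```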

---

## **Intended Use**

* Scientific tutoring, computational logic, and mathematical education
* Advanced coding assistant for algorithm design, code reviews, and documentation
* Structured technical data generation across formats and fields
* STEM-focused chatbot or API for research and education tools
* Mid-resource deployment requiring high symbolic fidelity

## **Limitations**

* Not tuned for general-purpose or long-form creative writing
* Context limitations may hinder multi-document or full codebase analysis
* Specialized in technical and symbolic tasks—general chat may underperform
* Prioritizes structured reasoning over emotional or casual tone generation