---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- qwen3-4B
- ZeroXClem
base_model:
- Menlo/Jan-nano
- prithivMLmods/Octans-Qwen3-UI-Code-4B
- prithivMLmods/Logics-Qwen3-Math-4B
- prithivMLmods/Carinae-Qwen3-Radiation-4B
- prithivMLmods/Kepler-Qwen3-4B-Super-Thinking
- prithivMLmods/Bootes-Qwen3_Coder-Reasoning
- Loom-Labs/Apollo-1-4B
- GetSoloTech/Qwen3-Code-Reasoning-4B
- prithivMLmods/Lacaille-MoT-4B-Supreme2
pipeline_tag: text-generation
library_name: transformers
---
# ZeroXClem/Qwen3-4B-ChromaticCoder

**ZeroXClem/Qwen3-4B-ChromaticCoder** is a vibrant and versatile 4B model fusion built using `MergeKit` and the `model_stock` strategy. Blending deep reasoning, mathematical precision, frontend UI generation, and code synthesis, it shines in logic-driven and creative problem spaces.

This model is a chromatic cascade of top-performing Qwen3 derivatives and fine-tuned reasoning specialists, harmonizing technical accuracy with structured expressiveness across a wide range of tasks.

---
## 🧠 Overview

**ChromaticCoder** is based on the powerful foundation of `prithivMLmods/Lacaille-MoT-4B-Supreme2`, integrating a spectrum of expert finetunes to produce a model specialized in:

- 📐 **Mathematical and logical reasoning**
- 💻 **Frontend & UI code generation**
- 🧮 **Multi-step algorithmic thinking**
- 🛠️ **Code reasoning, explanation, and synthesis**
- 📘 **Structured technical content creation**
---
## 🧬 Merge Details
| Detail | Value |
|---------------------|------------------------------------------------------------------------|
| **Merge Method** | `model_stock` |
| **Base Model** | [`prithivMLmods/Lacaille-MoT-4B-Supreme2`](https://huggingface.co/prithivMLmods/Lacaille-MoT-4B-Supreme2) |
| **Dtype** | `bfloat16` |
| **Tokenizer Source** | `prithivMLmods/Lacaille-MoT-4B-Supreme2` |
---
## 🧩 Models Merged

- [`Menlo/Jan-nano`](https://huggingface.co/Menlo/Jan-nano) – Agentic research-aligned model with MCP support.
- [`prithivMLmods/Octans-Qwen3-UI-Code-4B`](https://huggingface.co/prithivMLmods/Octans-Qwen3-UI-Code-4B) – UI code generation with Tailwind/React.
- [`prithivMLmods/Logics-Qwen3-Math-4B`](https://huggingface.co/prithivMLmods/Logics-Qwen3-Math-4B) – Advanced math and logic reasoning.
- [`prithivMLmods/Carinae-Qwen3-Radiation-4B`](https://huggingface.co/prithivMLmods/Carinae-Qwen3-Radiation-4B) – Balanced probabilistic modeling with multilingual reasoning.
- [`prithivMLmods/Kepler-Qwen3-4B-Super-Thinking`](https://huggingface.co/prithivMLmods/Kepler-Qwen3-4B-Super-Thinking) – Hybrid symbolic-probabilistic thought.
- [`prithivMLmods/Bootes-Qwen3_Coder-Reasoning`](https://huggingface.co/prithivMLmods/Bootes-Qwen3_Coder-Reasoning) – Instruction-tuned code synthesis and stepwise debugging.
- [`Loom-Labs/Apollo-1-4B`](https://huggingface.co/NoemaResearch/Apollo-1-4B) – General-purpose reasoning and multilingual instruction following.
- [`GetSoloTech/Qwen3-Code-Reasoning-4B`](https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B) – Competitive programming and reasoning powerhouse.
---
## 🌈 Chromatic Features

✨ **Unified Expert Reasoning**

Brings together multiple specialized reasoning modules – from UI generation to symbolic math and programming logic – into one coherent architecture.

🧠 **Deep Logic and Event Simulation**

Excels in modeling probabilistic systems, structured math, and algorithmic solutions with step-by-step clarity.

💻 **Frontend & UI Coding Mastery**

With Octans and Jan-nano integrations, this model generates accurate and readable frontend code (React, Tailwind, HTML5).

🧪 **STEM-Specialized Performance**

Fine-tuned on math, logic, and scientific problem domains, ChromaticCoder is a strong match for educational and research applications.

🛠️ **Developer-Centric Reasoning**

Instruction-tuned layers optimize code completion, refactoring, and explanation across Python, JS, C++, and more.

🌍 **Multilingual Capabilities**

Thanks to Apollo and Carinae, it supports over 80 languages in both reasoning and coding domains.

---
## 🔧 MergeKit Configuration
```yaml
name: ZeroXClem-Qwen3-4B-ChromaticCoder
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
dtype: bfloat16
merge_method: model_stock
models:
- model: Menlo/Jan-nano
- model: prithivMLmods/Octans-Qwen3-UI-Code-4B
- model: prithivMLmods/Logics-Qwen3-Math-4B
- model: prithivMLmods/Carinae-Qwen3-Radiation-4B
- model: prithivMLmods/Kepler-Qwen3-4B-Super-Thinking
- model: prithivMLmods/Bootes-Qwen3_Coder-Reasoning
- model: Loom-Labs/Apollo-1-4B
- model: GetSoloTech/Qwen3-Code-Reasoning-4B
tokenizer_source: prithivMLmods/Lacaille-MoT-4B-Supreme2
```
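To reproduce this merge locally, the configuration above can be fed to the `mergekit-yaml` command from the MergeKit CLI. This is a sketch, assuming the config is saved as `config.yaml`; output path and the optional `--cuda` flag are illustrative choices:

```shell
# Install MergeKit, then run the model_stock merge described above.
pip install mergekit
mergekit-yaml config.yaml ./Qwen3-4B-ChromaticCoder --cuda
```

Note that this downloads all eight source models plus the base model, so expect substantial disk and memory usage.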
---
## 💡 Use Cases

* 📘 **STEM Tutoring & Education**
* 🧮 **Mathematical and Logical Explanation**
* 🖥️ **Frontend Development & Prototyping**
* 📄 **Technical Documentation**
* 🧑‍💻 **Algorithm Debugging & Refactoring**
* 🤖 **Agentic Reasoning and Simulated Tool Use**
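The use cases above can be exercised with the standard `transformers` chat API. A minimal inference sketch follows; the model id is taken from this card, while the example prompt and generation length are illustrative assumptions:

```python
# Minimal sketch: chat-style inference with transformers.
# The prompt and max_new_tokens below are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen3-4B-ChromaticCoder"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # matches the merge dtype
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a React component that renders a three-color palette."}
]
# Build the chat prompt using the tokenizer's built-in template.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Running this requires roughly 8 GB of memory for the bfloat16 weights; quantized loading (e.g. via `bitsandbytes`) can reduce that footprint.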
---
## 🧪 Limitations

* Constrained by its 4B parameter size; it may struggle with very long contexts or broad open-domain tasks.
* Some outputs may be verbose or over-explanatory, reflecting the tuning of the constituent models.
* Not suitable for unrestricted creative or emotional writing tasks.
---
## ⚖️ License & Usage
* License: **Apache 2.0**
* Users are responsible for implementing appropriate safety and moderation when deploying the model.
---
## 🙌 Credits & Acknowledgements
This fusion was only possible thanks to the incredible work of:
* **Menlo Research**, **PrithivML**, **Loom Labs**, **GetSoloTech**, and others
* Model authors and dataset contributors across the OSS reasoning community
* The Qwen team, for providing a strong base ecosystem for 4B-scale thinking models
---
**Made with ❤️ by the ZeroXClem team. 🔮**