---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- qwen3-4B
- ZeroXClem
base_model:
- Menlo/Jan-nano
- prithivMLmods/Octans-Qwen3-UI-Code-4B
- prithivMLmods/Logics-Qwen3-Math-4B
- prithivMLmods/Carinae-Qwen3-Radiation-4B
- prithivMLmods/Kepler-Qwen3-4B-Super-Thinking
- prithivMLmods/Bootes-Qwen3_Coder-Reasoning
- Loom-Labs/Apollo-1-4B
- GetSoloTech/Qwen3-Code-Reasoning-4B
- prithivMLmods/Lacaille-MoT-4B-Supreme2
pipeline_tag: text-generation
library_name: transformers
---

# ZeroXClem/Qwen3-4B-ChromaticCoder
**ZeroXClem/Qwen3-4B-ChromaticCoder** is a vibrant and versatile 4B-parameter model fusion built with MergeKit using the `model_stock` merge method. Blending deep reasoning, mathematical precision, frontend UI generation, and code synthesis, it shines in logic-driven and creative problem spaces.

This model is a chromatic cascade of top-performing Qwen3 derivatives and fine-tuned reasoning specialists, harmonizing technical accuracy with structured expressiveness across a wide range of tasks.
## 🧠 Overview
ChromaticCoder is based on the powerful foundation of prithivMLmods/Lacaille-MoT-4B-Supreme2, integrating a spectrum of expert finetunes to produce a model specialized in:
- 🔢 Mathematical and logical reasoning
- 💻 Frontend & UI code generation
- 🧮 Multi-step algorithmic thinking
- 🛠️ Code reasoning, explanation, and synthesis
- 📄 Structured technical content creation
## 🧬 Merge Details
| Detail | Value |
|---|---|
| Merge Method | model_stock |
| Base Model | prithivMLmods/Lacaille-MoT-4B-Supreme2 |
| Dtype | bfloat16 |
| Tokenizer Source | prithivMLmods/Lacaille-MoT-4B-Supreme2 |
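For intuition, `model_stock` interpolates between the base weights and the average of the fine-tuned checkpoints, with a ratio derived from the geometry of the task vectors (the deltas from the base model). Below is a toy per-tensor sketch of the two-model case, assuming the simplified ratio t = 2·cosθ / (1 + cosθ) from the Model Stock paper; mergekit's real implementation works per layer and generalizes to all eight models above:

```python
import math

def model_stock_2(base, ft1, ft2):
    """Toy model_stock merge of two fine-tuned weight vectors.

    Task vectors are deltas from the base; the interpolation ratio
    t = 2*cos(theta) / (1 + cos(theta)) depends on the angle theta
    between them. Simplified illustration, not mergekit's exact code.
    """
    d1 = [f - b for f, b in zip(ft1, base)]
    d2 = [f - b for f, b in zip(ft2, base)]
    dot = sum(a * b for a, b in zip(d1, d2))
    norm = math.sqrt(sum(a * a for a in d1)) * math.sqrt(sum(a * a for a in d2))
    cos_theta = dot / norm
    t = 2 * cos_theta / (1 + cos_theta)
    avg = [(a + b) / 2 for a, b in zip(ft1, ft2)]
    # Interpolate between the base weights and the fine-tuned average.
    return [t * w + (1 - t) * wb for w, wb in zip(avg, base)]

base = [0.0, 0.0]
# Orthogonal task vectors: cos(theta) = 0, so t = 0 and the merge
# falls back to the base weights.
print(model_stock_2(base, [1.0, 0.0], [0.0, 1.0]))
```

The intuition: the more the fine-tunes disagree (larger angle), the more the merge trusts the base model; identical task vectors give t = 1 and return the fine-tuned weights unchanged.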
## 🧩 Models Merged
- **Menlo/Jan-nano**: Agentic research-aligned model with MCP support.
- **prithivMLmods/Octans-Qwen3-UI-Code-4B**: UI code generation with Tailwind/React.
- **prithivMLmods/Logics-Qwen3-Math-4B**: Advanced math and logic reasoning.
- **prithivMLmods/Carinae-Qwen3-Radiation-4B**: Balanced probabilistic modeling with multilingual reasoning.
- **prithivMLmods/Kepler-Qwen3-4B-Super-Thinking**: Hybrid symbolic-probabilistic thought.
- **prithivMLmods/Bootes-Qwen3_Coder-Reasoning**: Instruction-tuned code synthesis and stepwise debugging.
- **Loom-Labs/Apollo-1-4B**: General-purpose reasoning and multilingual instruction following.
- **GetSoloTech/Qwen3-Code-Reasoning-4B**: Competitive programming and reasoning powerhouse.
## 🌈 Chromatic Features
### ✨ Unified Expert Reasoning
Brings together multiple specialized reasoning modules, from UI generation to symbolic math and programming logic, into one coherent architecture.
### 🧠 Deep Logic and Event Simulation
Excels in modeling probabilistic systems, structured math, and algorithmic solutions with step-by-step clarity.
### 💻 Frontend & UI Coding Mastery
With Octans and Jan-nano integrations, this model generates accurate and readable frontend code (React, Tailwind, HTML5).
### 🧪 STEM-Specialized Performance
Fine-tuned on math, logic, and scientific problem domains, ChromaticCoder is a strong match for educational and research applications.
### 🛠️ Developer-Centric Reasoning
Instruction-tuned layers optimize code completion, refactoring, and explanation across Python, JS, C++, and more.
### 🌍 Multilingual Capabilities
Thanks to Apollo and Carinae, it supports over 80 languages in both reasoning and coding domains.
## 🔧 MergeKit Configuration

```yaml
name: ZeroXClem-Qwen3-4B-ChromaticCoder
base_model: prithivMLmods/Lacaille-MoT-4B-Supreme2
dtype: bfloat16
merge_method: model_stock
models:
  - model: Menlo/Jan-nano
  - model: prithivMLmods/Octans-Qwen3-UI-Code-4B
  - model: prithivMLmods/Logics-Qwen3-Math-4B
  - model: prithivMLmods/Carinae-Qwen3-Radiation-4B
  - model: prithivMLmods/Kepler-Qwen3-4B-Super-Thinking
  - model: prithivMLmods/Bootes-Qwen3_Coder-Reasoning
  - model: Loom-Labs/Apollo-1-4B
  - model: GetSoloTech/Qwen3-Code-Reasoning-4B
tokenizer_source: prithivMLmods/Lacaille-MoT-4B-Supreme2
```
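The merge can be reproduced by passing the configuration above to mergekit's `mergekit-yaml` CLI (a sketch; the config filename and output directory are illustrative):

```shell
pip install mergekit
mergekit-yaml config.yaml ./Qwen3-4B-ChromaticCoder
```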
## 💡 Use Cases

- 📚 STEM Tutoring & Education
- 🧮 Mathematical and Logical Explanation
- 🖥️ Frontend Development & Prototyping
- 📄 Technical Documentation
- 🧑‍💻 Algorithm Debugging & Refactoring
- 🤖 Agentic Reasoning and Simulated Tool Use
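A minimal inference sketch with 🤗 Transformers (the prompt and generation settings are illustrative, not tuned recommendations):

```python
# Minimal sketch; assumes transformers (and torch) are installed and the
# model weights are available on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Qwen3-4B-ChromaticCoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Write a React progress-bar component."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```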
## 🧪 Limitations

- Limited by its 4B parameter size: it may struggle with extremely long contexts or broad open-domain queries.
- Some outputs may be verbose or over-explained, depending on the base tuning weights.
- Not intended for unrestricted creative or emotional writing tasks.
## ⚖️ License & Usage
- License: Apache 2.0
- Users are responsible for implementing appropriate safety and moderation when deploying the model.
## 💪 Credits & Acknowledgements
This fusion was only possible thanks to the incredible work of:
- Menlo Research, PrithivML, Loom Labs, GetSoloTech, and others
- Model authors and dataset contributors across the OSS reasoning community
- The Qwen team, for providing a strong Qwen3 base ecosystem for 4B-scale thinking models
Made with 💖 by the ZeroXClem team. 🔮
