---
license: apache-2.0
---
# Muscae-Qwen3-UI-Code-4B
Muscae-Qwen3-UI-Code-4B is a reasoning-enhanced model fine-tuned from Qwen3 on traces from the GPT-OSS Web UI Coding dataset, specializing in web interface coding, structured generation, and stable token probabilities. It excels at generating production-grade UI components, frontend layouts, and logic-driven interface code with high precision and consistency.
**GGUF**: [prithivMLmods/Muscae-Qwen3-UI-Code-4B-GGUF](https://huggingface.co/prithivMLmods/Muscae-Qwen3-UI-Code-4B-GGUF)
## Key Features
- **UI-Focused Reasoning Engine**: Fine-tuned for precise frontend development workflows, generating optimized HTML, CSS, React, and Tailwind-based code with minimal need for refactoring.
- **Web Interface Generation Mastery**: Excels at building responsive layouts, interactive components, and dashboard UIs directly from natural language prompts or wireframe descriptions.
- **Polished Token Probabilities**: Trained for smoother generation curves and deterministic structure in code, minimizing syntax errors and enhancing readability.
- **Hybrid Logic-Coding Synthesis**: Combines structural reasoning with frontend logic understanding to generate UI code that is both functional and aesthetically consistent.
- **Structured Output Formats**: Outputs code and structured data in HTML, React (JSX/TSX), Tailwind, JSON, and YAML, supporting full-stack workflows and CI/CD pipelines.
- **Optimized Lightweight Footprint**: Compact 4B-parameter size, deployable on mid-range GPUs, developer workstations, and edge build servers while maintaining high-quality UI generation.
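As a rough back-of-the-envelope check on the footprint claim (a sketch assuming ~4 billion parameters and standard per-dtype weight sizes; actual memory use also includes activations and the KV cache):

```python
params = 4e9  # ~4B parameters

# Approximate weight memory per dtype (bytes per parameter)
for label, bytes_per_param in [("bf16/fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB")
```

At bf16 the weights alone land around 7.5 GiB, which is why quantized GGUF builds are the practical route for mid-range hardware.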
## Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Muscae-Qwen3-UI-Code-4B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Generate a responsive React dashboard with a sidebar and top navigation bar using Tailwind CSS."

messages = [
    {"role": "system", "content": "You are a frontend coding assistant skilled in web UI generation and responsive design."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
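Because the base Qwen3 family is reasoning-oriented, the decoded response may begin with a `<think>…</think>` trace before the code, and the code itself typically arrives inside a markdown fence (both are assumptions based on Qwen3 conventions, not guarantees of this fine-tune). A small, hypothetical post-processing helper for pulling out just the code:

```python
import re

def extract_code(response: str) -> str:
    """Strip any <think>...</think> reasoning trace, then return the first
    fenced code block if one exists, otherwise the cleaned text."""
    # Remove the reasoning trace, if present
    cleaned = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    # Pull out the body of the first ```lang ... ``` fence, if present
    match = re.search(r"```[\w+-]*\n(.*?)```", cleaned, flags=re.DOTALL)
    return match.group(1).strip() if match else cleaned

sample = '<think>Plan the layout.</think>\n```jsx\n<div className="flex" />\n```'
print(extract_code(sample))  # → <div className="flex" />
```

This is handy when piping generated components straight into files in a CI/CD step.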
## Intended Use
- Web UI component generation and layout scaffolding
- Responsive dashboard, landing page, and frontend application coding
- Educational and research tasks related to frontend development
- Lightweight deployment in developer environments and CI/CD pipelines
- Structured code generation and UI prototyping from natural language prompts
## Limitations
- Focused on UI and frontend code generation; not suited for deep backend logic or non-UI tasks
- May require minor manual adjustments for large-scale production apps
- Prioritizes structured, readable code over creative design experimentation
- Performance may vary with extremely long code contexts or multi-file full-stack generation tasks
