UIGEN-T3-4B-Preview-GGUF

UIGEN-T3-4B-Preview is Tesslate’s advanced UI generation model built on the Qwen3 architecture. It is designed to produce both individual components and full web pages, using a hybrid reasoning system that offers a guided-reasoning mode as well as fast, direct code generation. Trained with the UIGenEval benchmark for robust frontend development, it supports clean HTML and CSS, responsive layouts, and overall technical quality, and it provides design tokens with each output for easy site-wide customization. Released under a research-only, non-commercial license, UIGEN-T3-4B-Preview is tailored for rapid prototyping, internal tooling, and component-library creation. Inference requires a GPU with at least 4 GB of VRAM, and the model is compatible with popular ML libraries, but it is not intended for production use or commercial deployment without special licensing.
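As a minimal sketch, the GGUF files below can be run locally with llama-cpp-python (`pip install llama-cpp-python`). The model path, context size, and prompt here are assumptions for illustration, not values prescribed by this card:

```python
# Minimal local-inference sketch (assumes llama-cpp-python is installed and a
# quantized file such as UIGEN-T3-4B-Preview.Q4_K_M.gguf has been downloaded).
from pathlib import Path

MODEL_PATH = Path("UIGEN-T3-4B-Preview.Q4_K_M.gguf")  # assumed local file name

def generate_ui(prompt: str) -> str:
    # Imported lazily so this sketch can be read/tested without the library.
    from llama_cpp import Llama

    llm = Llama(model_path=str(MODEL_PATH), n_ctx=4096)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=2048,
    )
    return out["choices"][0]["message"]["content"]

if MODEL_PATH.exists():
    print(generate_ui("Create a responsive pricing card with HTML and CSS."))
```

Smaller quants (Q4_K_M and below) fit comfortably within the 4 GB VRAM floor mentioned above; larger quants trade memory for quality.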

Model Files

| File name | Size | Quant type |
|---|---|---|
| UIGEN-T3-4B-Preview.F32.gguf | 16.1 GB | F32 |
| UIGEN-T3-4B-Preview.BF16.gguf | 8.05 GB | BF16 |
| UIGEN-T3-4B-Preview.F16.gguf | 8.05 GB | F16 |
| UIGEN-T3-4B-Preview.Q8_0.gguf | 4.28 GB | Q8_0 |
| UIGEN-T3-4B-Preview.Q6_K.gguf | 3.31 GB | Q6_K |
| UIGEN-T3-4B-Preview.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| UIGEN-T3-4B-Preview.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| UIGEN-T3-4B-Preview.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| UIGEN-T3-4B-Preview.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| UIGEN-T3-4B-Preview.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| UIGEN-T3-4B-Preview.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| UIGEN-T3-4B-Preview.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| UIGEN-T3-4B-Preview.Q2_K.gguf | 1.67 GB | Q2_K |
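A hypothetical helper for choosing among the files above: given a VRAM budget, pick the largest quant whose file fits. The sizes are taken from the table (approximate, and excluding KV-cache overhead, which the `headroom_gb` margin only roughly accounts for):

```python
# File sizes (GB) from the table above; approximate on-disk sizes.
QUANT_SIZES_GB = {
    "F32": 16.1, "BF16": 8.05, "F16": 8.05, "Q8_0": 4.28,
    "Q6_K": 3.31, "Q5_K_M": 2.89, "Q5_K_S": 2.82,
    "Q4_K_M": 2.5, "Q4_K_S": 2.38, "Q3_K_L": 2.24,
    "Q3_K_M": 2.08, "Q3_K_S": 1.89, "Q2_K": 1.67,
}

def pick_quant(vram_gb: float, headroom_gb: float = 0.5) -> "str | None":
    """Return the largest quant whose file fits in vram_gb minus headroom."""
    budget = vram_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    # max() compares by size first, so ties break arbitrarily by name.
    return max(fitting)[1] if fitting else None
```

For example, with the 4 GB VRAM floor mentioned above and 0.5 GB of headroom, the helper selects Q6_K (3.31 GB).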

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better); the image is not reproduced here.

Format: GGUF
Model size: 4B params
Architecture: qwen3


Model tree for prithivMLmods/UIGEN-T3-4B-Preview-GGUF

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized: this model