Gemma-4-TypeScript-Coder : GGUF
This model is a specialized fine-tune of Gemma 4, engineered for TypeScript-centric web development, strict type safety, and modern full-stack architectures. It was trained using Unsloth Studio for maximum efficiency and precision.

Install from WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MassivDash/Gemma-4-typescript-coder

# Run inference directly in the terminal:
llama-cli -hf MassivDash/Gemma-4-typescript-coder
```

Use pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MassivDash/Gemma-4-typescript-coder

# Run inference directly in the terminal:
./llama-cli -hf MassivDash/Gemma-4-typescript-coder
```

Build from source code

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MassivDash/Gemma-4-typescript-coder

# Run inference directly in the terminal:
./build/bin/llama-cli -hf MassivDash/Gemma-4-typescript-coder
```

Use Docker

```shell
docker model run hf.co/MassivDash/Gemma-4-typescript-coder
```
🟦 TypeScript Mastery
This fine-tune specializes in:
- Strict Type Systems: Expertise in complex generics, utility types, and advanced interfaces.
- Modern Frameworks: High proficiency in Next.js, React, Vue 3, and Node.js.
- Visual Logic: Leverages vision-language capabilities to transform UI wireframes or screenshots directly into type-safe components.
- Best Practices: Focus on clean architecture and idiomatic TypeScript patterns.
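To illustrate the kind of strict-typing patterns listed above, here is a minimal TypeScript sketch of generics and utility types of the sort this fine-tune targets (the `User` type and `patch` helper are illustrative names, not taken from the model or its training data):

```typescript
// Illustrative example of strict generics and utility types.
// `User` and `patch` are hypothetical names for demonstration only.

interface User {
  id: number;
  name: string;
  email: string;
}

// A generic, type-safe "patch" helper: only known keys of T may be updated.
function patch<T extends object>(base: T, changes: Partial<T>): T {
  return { ...base, ...changes };
}

// Utility types narrow the public surface of a type:
type PublicUser = Omit<User, "email">;

const alice: User = { id: 1, name: "Alice", email: "a@example.com" };
const renamed = patch(alice, { name: "Alicia" });
const publicAlice: PublicUser = { id: renamed.id, name: renamed.name };
```

The compiler rejects `patch(alice, { nickname: "Al" })` because `nickname` is not a key of `User`, which is exactly the class of error strict typing is meant to catch.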
🤝 Credits & Acknowledgments
A major shout-out to mhhmm for the typescript-instruct-20k dataset. This robust collection of instructions allowed the model to grasp the nuances of the TypeScript ecosystem effectively.
🚀 Usage & Inference
The model is provided in GGUF format, compatible with llama.cpp.
Example usage:
- Standard Text Chat:

```shell
llama-cli -hf MassivDash/Gemma-4-typescript-coder --jinja
```

- Vision/Image Tasks:

```shell
llama-mtmd-cli -hf MassivDash/Gemma-4-typescript-coder --jinja
```
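When running `llama-server` instead of the CLI, the model is exposed through an OpenAI-compatible HTTP API. A hedged TypeScript sketch of a client follows; port 8080 is `llama-server`'s default, and the `buildChatRequest` helper and prompts are illustrative, not part of this model card:

```typescript
// Sketch of querying a local llama-server from TypeScript.
// Assumes llama-server is running on its default port (8080).
// `buildChatRequest` is a hypothetical helper for this example.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build an OpenAI-style chat-completions payload.
function buildChatRequest(prompt: string): { messages: ChatMessage[]; temperature: number } {
  return {
    messages: [
      { role: "system", content: "You are a TypeScript coding assistant." },
      { role: "user", content: prompt },
    ],
    temperature: 0.2,
  };
}

// Usage (requires a running server, so it is left commented out):
// const res = await fetch("http://localhost:8080/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest("Write a typed fetch wrapper.")),
// });
// const data = await res.json();
// console.log(data.choices[0].message.content);
```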
📂 Available Model Files
- gemma-4-e2b-it.Q8_0.gguf
- gemma-4-e2b-it.BF16-mmproj.gguf
⚠️ Ollama Note for Vision Models
Important: Ollama currently requires a unified blob for vision models.
To use this with Ollama:
- Ensure your `Modelfile` is in the same directory as the merged BF16 model.
- Run:

```shell
ollama create model_name -f ./Modelfile
```
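A minimal `Modelfile` for this setup might look like the following sketch; the merged BF16 filename is illustrative, so substitute the actual name of your merged model file:

```
# Hypothetical Modelfile sketch: FROM points at the merged
# BF16 GGUF sitting in the same directory as this file.
FROM ./gemma-4-e2b-it.BF16.gguf
```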
🔗 Stay Connected
For more insights on AI development and fine-tuning, visit my blog: 👉 spaceout.pl
This model was trained 2x faster with Unsloth
Quantizations are available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants.

Install from brew

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MassivDash/Gemma-4-typescript-coder

# Run inference directly in the terminal:
llama-cli -hf MassivDash/Gemma-4-typescript-coder
```