---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
- typescript
- web-development
---

# Gemma-4-TypeScript-Coder : GGUF

This model is a specialized fine-tune of **Gemma 4**, engineered for **TypeScript-centric web development**, strict type safety, and modern full-stack architectures. It was trained using **Unsloth Studio** for maximum efficiency and precision.

## 🟦 TypeScript Mastery

This fine-tune specializes in:

* **Strict Type Systems:** Expertise in complex generics, utility types, and advanced interfaces.
* **Modern Frameworks:** High proficiency in **Next.js**, **React**, **Vue 3**, and **Node.js**.
* **Visual Logic:** Leverages vision-language capabilities to transform UI wireframes or screenshots directly into type-safe components.
* **Best Practices:** Focus on clean architecture and idiomatic TypeScript patterns.

## 🤝 Credits & Acknowledgments

A major shout-out to **mhhmm** for the **[typescript-instruct-20k](https://huggingface.co/datasets/mhhmm/typescript-instruct-20k)** dataset. This robust collection of instructions allowed the model to grasp the nuances of the TypeScript ecosystem effectively.

## 🚀 Usage & Inference

The model is provided in GGUF format, compatible with `llama.cpp`.

**Example usage**:

* **Standard Text Chat:** `llama-cli -hf MassivDash/Gemma-4-typescript-coder --jinja`
* **Vision/Image Tasks:** `llama-mtmd-cli -hf MassivDash/Gemma-4-typescript-coder --jinja`

## 📂 Available Model Files

* `gemma-4-e2b-it.Q8_0.gguf`
* `gemma-4-e2b-it.BF16-mmproj.gguf`

## ⚠️ Ollama Note for Vision Models

**Important:** Ollama currently requires a unified blob for vision models. To use this with Ollama:

1. Ensure your `Modelfile` is in the same directory as the merged BF16 model.
2. Run: `ollama create model_name -f ./Modelfile`

## 🔗 Stay Connected

For more insights on AI development and fine-tuning, visit my blog:
👉 **[spaceout.pl](https://spaceout.pl)**

---

*This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth)*
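
## 💻 Querying from TypeScript

Besides the `llama-cli` commands above, the GGUF file can also be served with `llama-server` (included in `llama.cpp`), which exposes an OpenAI-compatible `/v1/chat/completions` endpoint. Below is a minimal sketch, assuming `llama-server -hf MassivDash/Gemma-4-typescript-coder --jinja` is running on the default local port; the `buildChatRequest` and `askModel` helpers are illustrative names, not part of any library:

```typescript
// Sketch: querying the model via llama-server's OpenAI-compatible API.
// Assumes a local server on port 8080; helper names are illustrative.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for a /v1/chat/completions request.
function buildChatRequest(messages: ChatMessage[], temperature = 0.2) {
  return { messages, temperature, stream: false };
}

// Send a single user prompt and return the model's reply text.
async function askModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest([{ role: "user", content: prompt }])),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Usage (requires the server to be running): `askModel("Write a type-safe debounce function").then(console.log)`.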