Gemma 4 E2B-it (Text-Only)

Text-only version of google/gemma-4-E2B-it with the vision and audio encoders removed.

Why?

The original Gemma 4 E2B is a multimodal model (text + vision + audio). When fine-tuning for text-only tasks, the multimodal architecture causes:

  • Higher training loss from the multimodal overhead
  • A requirement for mm_token_type_ids tensors even on text-only inputs
  • Crashes during training whenever the batch size is greater than 1
  • ~450M extra parameters (the vision and audio encoders) that serve no purpose for text tasks

This model extracts just the language model (Gemma4ForCausalLM) and the text tokenizer, making it suitable for standard text-only SFT/DPO/KTO fine-tuning.
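Conceptually, the extraction keeps only the language-model tensors and re-keys them for a standalone Gemma4ForCausalLM checkpoint. A minimal sketch of the key filtering on a checkpoint's state-dict keys (the prefixes below are illustrative assumptions; inspect the actual multimodal checkpoint to confirm its layout):

```python
def extract_text_only(state_dict_keys):
    # Keep only language-model tensors; drop vision/audio encoder weights.
    # Prefix names are assumptions -- check the real checkpoint's keys.
    kept = {}
    for key in state_dict_keys:
        if key.startswith(("vision_tower.", "audio_tower.", "multi_modal_projector.")):
            continue  # multimodal-only weights (the ~450M dropped parameters)
        # Re-key e.g. "language_model.model.embed_tokens.weight"
        # -> "model.embed_tokens.weight" for the standalone causal LM.
        kept[key] = key.removeprefix("language_model.")
    return kept
```

The same filtering logic applies whether you operate on an in-memory state dict or on safetensors shards on disk.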

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bRadu/gemma-4-E2B-it-textonly", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bRadu/gemma-4-E2B-it-textonly")

messages = [
    {"role": "user", "content": "Hello!"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True, tokenize=True, return_dict=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
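For reference, apply_chat_template formats turns using Gemma's <start_of_turn>/<end_of_turn> markers. A rough dependency-free sketch of that formatting (an approximation of the template; the tokenizer additionally prepends the BOS token and handles special-token details):

```python
def format_gemma_chat(messages):
    # Approximates Gemma's chat template: each turn is wrapped as
    # <start_of_turn>ROLE\nCONTENT<end_of_turn>\n, and a generation
    # prompt opens a "model" turn for the assistant to complete.
    parts = []
    for msg in messages:
        # Gemma uses the role name "model" for assistant turns.
        role = "model" if msg["role"] == "assistant" else msg["role"]
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # add_generation_prompt=True
    return "".join(parts)
```

This is only to make the prompt format visible; for actual inference or training, always use tokenizer.apply_chat_template so the exact shipped template is applied.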
Model size: 5B parameters (BF16, safetensors)