Gemma 4 Particle Edu — E4B Fine-tuned (Q4_K_M GGUF)

Gemma 4 E4B (4.5B active parameters) fine-tuned for physics simulation parameter generation. Part of the Gemma 4 Particle Edu submission to the Kaggle Gemma 4 Good Hackathon.

What this model does

Given a natural language physics scenario (e.g., "DNA double helix at body temperature"), this model outputs a JSON simulation specification with SI-unit physics parameters:

{
  "simulation": {
    "prompt": "dna",
    "title": "DNA Double Helix",
    "domain": "biology",
    "physics": {
      "gravity": 0,
      "damping": 0.99,
      "springStiffness": 30,
      "particleCount": 22000,
      "temperature": 310,
      "density": 1700
    }
  }
}
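Downstream code should validate a generated spec before feeding it to a simulator. A minimal sketch in Python — the required field names come from the example above, but the sanity-check thresholds are illustrative assumptions, not part of the model card:

```python
import json

# Required physics fields, per the example spec above.
REQUIRED_PHYSICS = {"gravity", "damping", "springStiffness",
                    "particleCount", "temperature", "density"}

def validate_spec(raw: str) -> dict:
    """Parse a model response and check the simulation spec shape.

    Raises ValueError if the JSON is malformed or fields are missing.
    """
    spec = json.loads(raw)
    sim = spec.get("simulation")
    if not isinstance(sim, dict):
        raise ValueError("missing 'simulation' object")
    physics = sim.get("physics", {})
    missing = REQUIRED_PHYSICS - physics.keys()
    if missing:
        raise ValueError(f"missing physics fields: {sorted(missing)}")
    # Illustrative sanity checks on SI-unit values (thresholds are assumptions).
    if not 0 < physics["damping"] <= 1:
        raise ValueError("damping must be in (0, 1]")
    if physics["particleCount"] <= 0:
        raise ValueError("particleCount must be positive")
    return spec
```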

Training details

  • Method: Unsloth QLoRA (r=16)
  • Base: Gemma 4 E4B (4.5B active parameters)
  • Dataset: 907 Alpaca-format physics simulation pairs
  • Hardware: Lambda A10 (24GB)
  • Cost: $0.55
  • Quantization: llama.cpp Q4_K_M (CPU-only conversion)

Benchmark vs other Gemma 4 sizes

All 4 sizes fine-tuned on the same 907-pair dataset:

| Model                 | Type          | JSON parse | Physics | Time  | Cost  |
|-----------------------|---------------|------------|---------|-------|-------|
| Base Gemma 4 9B       | Dense         | 30%        | 0%      | 12.7s | -     |
| E4B FT (this model)   | QLoRA r=16    | 70%        | 77%     | 8.9s  | $0.55 |
| Base Gemma 4 26B MoE  | MoE           | 95%        | 22%     | 9.3s  | -     |
| 26B FT                | QLoRA r=8     | 90%        | 31%     | 9.3s  | $2.40 |
| Base Gemma 4 31B      | Dense         | 100%       | 21%     | 20.6s | -     |
| 31B shallow FT        | r=8, 1 epoch  | 100%       | 18%     | 21.1s | $2.55 |
| 31B deep FT           | r=64, 3 epochs| 100%       | 18%     | 20.0s | $2.55 |

Finding: E4B QLoRA is cost-optimal — $0.55 delivers +40%p JSON parse success and +77%p physics accuracy over the 9B base. The larger bases (26B/31B) already parse JSON at 95-100%, so fine-tuning on the 907-pair dataset has little room to improve them.

How to use

Ollama (recommended)

# Pull this repo and register with Ollama
huggingface-cli download U2DIA/gemma4-particle-edu-e4b --local-dir ./gemma4-e4b
cd gemma4-e4b
ollama create gemma4-physics-edu -f Modelfile
ollama run gemma4-physics-edu
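Beyond the CLI, Ollama serves a local REST API (default http://localhost:11434). A sketch of querying the model registered above programmatically, using only the Python standard library; the model name matches the `ollama create` command, and `format="json"` asks Ollama to constrain the output to valid JSON:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "gemma4-physics-edu") -> dict:
    """Payload for Ollama's /api/generate endpoint.

    format="json" constrains decoding to valid JSON, which helps
    with the model's ~70% raw JSON parse rate.
    """
    return {"model": model, "prompt": prompt, "format": "json", "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send one generation request and return the raw response text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
#   print(generate("Simulate a DNA double helix"))
```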

llama.cpp

./llama-cli -m gemma4-physics-edu-Q4_K_M.gguf -p "Simulate a DNA double helix"

Files

| File                           | Size   | Description                     |
|--------------------------------|--------|---------------------------------|
| gemma4-physics-edu-Q4_K_M.gguf | 5.3 GB | Merged Q4_K_M quantized weights |
| config.json                    | 6 KB   | Hugging Face model config       |
| tokenizer.json                 | 31 MB  | Tokenizer                       |
| Modelfile                      | 241 B  | Ollama Modelfile                |

Limitations

  • 70% JSON parse rate means ~30% of outputs need retry or fallback
  • Physics accuracy was measured on 20 scenarios; full 300-scenario benchmark requires the 31B model
  • Fine-tuned on English prompts; Korean prompts fall back to the base model's multilingual capability
  • Not suitable for production medical, safety-critical, or regulatory-compliant simulations
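Given the ~30% parse-failure rate noted above, callers can wrap generation in a retry loop with a safe fallback. A minimal sketch — the `generate` callable, retry count, and fallback values are assumptions for illustration:

```python
import json
from typing import Callable

def generate_with_retry(generate: Callable[[str], str], prompt: str,
                        retries: int = 3) -> dict:
    """Call `generate` until its output parses as JSON, or fall back.

    `generate` is any prompt -> text function (e.g. an Ollama client).
    Returns a parsed spec, or a minimal default dict after `retries` failures.
    """
    for _ in range(retries):
        try:
            return json.loads(generate(prompt))
        except json.JSONDecodeError:
            continue
    # Fallback: a neutral default spec (field values are illustrative).
    return {"simulation": {"prompt": prompt,
                           "physics": {"gravity": 9.81, "damping": 0.99}}}
```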

Competition

Submitted to the Kaggle Gemma 4 Good Hackathon (deadline 2026-05-18).

Tracks: Impact (Education) + Special Technology (Ollama + Unsloth)

Citation

@misc{gemma4-particle-edu-e4b,
  author = {Yun (U2DIA)},
  title = {Gemma 4 Particle Edu — E4B Fine-tuned},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/U2DIA/gemma4-particle-edu-e4b}
}