---
license: apache-2.0
base_model: google/gemma-3-4b-it
tags:
  - gemma3
  - gguf
  - fine-tuned
  - lamp
  - lighting
  - smart-home
  - json
datasets:
  - custom
pipeline_tag: text-generation
---

# LAMP Models — Fine-tuned for Smart Lighting Control

Fine-tuned language models that generate JSON lighting programs from natural language descriptions.

## Models

| Model | Base | Params | GGUF Size | Final Eval Loss |
|---|---|---|---|---|
| lamp-gemma-4b-v2 | Gemma 3 4B IT | 4.3B | ~4.1 GB (Q8_0) | 0.0288 |

## Training Details

- Fine-tune Type: Full parameter (no LoRA); all 4,300,079,472 parameters trained
- Precision: bf16 (bfloat16)
- Dataset: 6,567 training examples + 730 validation examples
- Epochs: 2
- Effective Batch Size: 16 (8 per device × 2 gradient accumulation steps)
- Learning Rate: 2e-5 with cosine schedule
- Optimizer: AdamW (weight decay 0.01)
- Training Time: 38.1 minutes on an NVIDIA H200
- Peak VRAM: 24.3 GB
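For orientation, the optimizer-step count implied by these numbers can be worked out directly. This is a back-of-envelope sketch, assuming a single GPU and that the final partial batch is kept:

```python
import math

# Figures from the training details above.
examples = 6567
per_device_batch = 8
grad_accum = 2
epochs = 2

effective_batch = per_device_batch * grad_accum          # 16 examples per optimizer step
steps_per_epoch = math.ceil(examples / effective_batch)  # 411
total_steps = steps_per_epoch * epochs                   # 822
print(effective_batch, steps_per_epoch, total_steps)
```

At roughly 822 optimizer steps over 38.1 minutes, each step takes on the order of 2.8 seconds.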

## Training Graphs

![Training Loss](lamp-gemma-4b-v2/graphs/training_loss.png)
![Training Details](lamp-gemma-4b-v2/graphs/training_details.png)
![Training Summary](lamp-gemma-4b-v2/graphs/training_summary.png)

## Usage

### With Ollama (GGUF)

```sh
# Download the GGUF file and Modelfile from lamp-gemma-4b-v2-gguf/
ollama create lamp-gemma -f Modelfile
ollama run lamp-gemma "warm and cozy lighting"
```
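Programs can also query the model through Ollama's local REST API instead of the CLI. The sketch below builds the request using only the standard library; the `/api/generate` endpoint and the `format: "json"` option are standard Ollama features, while the model name and prompt follow the commands above. The HTTP call itself is left commented out since it needs a running Ollama server:

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
payload = {
    "model": "lamp-gemma",
    "prompt": "warm and cozy lighting",
    "stream": False,
    "format": "json",  # ask Ollama to constrain the output to valid JSON
}
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)                          # needs a running Ollama server
# program = json.loads(json.loads(resp.read())["response"])   # the JSON lighting program
```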

### With Transformers (HuggingFace)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2")
tokenizer = AutoTokenizer.from_pretrained("MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2")
```
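Decoded generations may carry chat-template tokens around the JSON program, so a small parsing helper is useful. The one below is hypothetical (not shipped with this repo): it isolates and parses the first balanced JSON object in a string, with the caveat that the naive brace scan assumes braces never appear inside JSON string values. The sample model output is made up for illustration:

```python
import json

def extract_program(text: str) -> dict:
    """Pull the first balanced JSON object out of generated text."""
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object in output")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matching close brace found
                return json.loads(text[start : i + 1])
    raise ValueError("unbalanced JSON in output")

# Made-up example of decoded output with template residue around the program:
sample = '<end_of_turn>{"mode": "static", "color": [255, 180, 100]}'
program = extract_program(sample)
```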

## Files

```
lamp-gemma-4b-v2/          # Full model weights + training logs
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── config.json
├── tokenizer.json
├── training_config.json
├── training_log.json
├── training_metrics.csv
├── metrics_detailed.json
└── graphs/
    ├── training_loss.png
    ├── training_details.png
    └── training_summary.png

lamp-gemma-4b-v2-gguf/     # Quantized GGUF for inference
├── lamp-gemma-4b-v2-Q8_0.gguf
└── Modelfile
```

## Dataset

The LAMP dataset consists of natural language lighting requests paired with JSON lighting programs. Each program controls RGB LEDs with support for:

- Static colors and gradients
- Animations (breathing, rainbow, chase, etc.)
- Multi-step sequences with timing
- Brightness and speed control
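The card does not publish the schema itself. Purely as an illustration, a request/program pair exercising the features above might look like the following; every field name and value range here is an assumption, not the model's actual output format:

```python
import json

# Hypothetical natural-language request from the dataset.
request = "slow warm breathing, dimmed for movie night"

# Hypothetical JSON lighting program: brightness/speed control plus a
# timed animation step, mirroring the feature list above.
program = {
    "brightness": 0.3,
    "speed": 0.2,
    "steps": [
        {"animation": "breathe", "color": [255, 160, 80], "duration_ms": 4000},
    ],
}

# The kind of minimal structural check a consumer might run:
assert {"brightness", "speed", "steps"} <= program.keys()
assert all(0 <= c <= 255 for c in program["steps"][0]["color"])
print(json.dumps(program))
```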