---
license: apache-2.0
base_model: google/gemma-3-4b-it
tags:
- gemma3
- gguf
- fine-tuned
- lamp
- lighting
- smart-home
- json
datasets:
- custom
pipeline_tag: text-generation
---

# LAMP Models – Fine-tuned for Smart Lighting Control

Fine-tuned language models that generate JSON lighting programs from natural language descriptions.

## Models

| Model | Base | Params | GGUF Size | Final Eval Loss |
|-------|------|--------|-----------|-----------------|
| **lamp-gemma-4b-v2** | Gemma 3 4B IT | 4.3B | ~4.1 GB (Q8_0) | 0.0288 |

## Training Details

- **Fine-tune Type:** Full parameter (no LoRA) – all 4,300,079,472 parameters trained
- **Precision:** bf16 (bfloat16)
- **Dataset:** 6,567 training examples + 730 validation examples
- **Epochs:** 2
- **Effective Batch Size:** 16 (8 per device × 2 gradient accumulation)
- **Learning Rate:** 2e-5 with cosine schedule
- **Optimizer:** AdamW (weight decay 0.01)
- **Training Time:** 38.1 minutes on NVIDIA H200
- **Peak VRAM:** 24.3 GB

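The hyperparameters above can be collected into a small config sketch. The key names below loosely follow Hugging Face `TrainingArguments` conventions and are assumptions for illustration, not copied from the repo's `training_config.json`:

```python
# Sketch of the run configuration; values come from the list above,
# key names are assumed (TrainingArguments-style), not from the repo.
training_config = {
    "per_device_train_batch_size": 8,
    "gradient_accumulation_steps": 2,
    "num_train_epochs": 2,
    "learning_rate": 2e-5,
    "lr_scheduler_type": "cosine",
    "weight_decay": 0.01,
    "bf16": True,
}

# Effective batch size = per-device batch size * gradient accumulation steps.
effective_batch = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
assert effective_batch == 16
```
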
## Training Graphs




## Usage

### With Ollama (GGUF)

```bash
# Download the GGUF file and Modelfile from lamp-gemma-4b-v2-gguf/
ollama create lamp-gemma -f Modelfile
ollama run lamp-gemma "warm and cozy lighting"
```

### With Transformers (HuggingFace)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bfloat16 to match the training precision.
model = AutoModelForCausalLM.from_pretrained("MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2")
```

## Files

```
lamp-gemma-4b-v2/                    # Full model weights + training logs
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── config.json
├── tokenizer.json
├── training_config.json
├── training_log.json
├── training_metrics.csv
├── metrics_detailed.json
└── graphs/
    ├── training_loss.png
    ├── training_details.png
    └── training_summary.png

lamp-gemma-4b-v2-gguf/               # Quantized GGUF for inference
├── lamp-gemma-4b-v2-Q8_0.gguf
└── Modelfile
```

| |
| ## Dataset |
| |
| The LAMP dataset consists of natural language lighting requests paired with JSON lighting programs. Each program controls RGB LEDs with support for: |
| - Static colors and gradients |
| - Animations (breathing, rainbow, chase, etc.) |
| - Multi-step sequences with timing |
| - Brightness and speed control |
| |
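
As an illustration, one training pair might look like the sketch below. The request string matches the style used in the Usage examples, but the JSON field names (`steps`, `type`, `color`, `brightness`, `duration_ms`) are assumptions for illustration, not the actual LAMP schema:

```python
import json

# Hypothetical request/program pair; field names are illustrative,
# not the actual LAMP schema.
request = "warm and cozy lighting"
program = {
    "steps": [
        {
            "type": "static",         # static color (vs. animation or sequence)
            "color": [255, 147, 41],  # warm amber, RGB
            "brightness": 0.6,        # 0.0 - 1.0
            "duration_ms": 0,         # 0 = hold indefinitely
        }
    ]
}

# The model's training target is the serialized JSON program.
target = json.dumps(program)
```
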