---
license: apache-2.0
base_model: google/gemma-3-4b-it
tags:
- gemma3
- gguf
- fine-tuned
- lamp
- lighting
- smart-home
- json
datasets:
- custom
pipeline_tag: text-generation
---
# LAMP Models: Fine-tuned for Smart Lighting Control
Fine-tuned language models that generate JSON lighting programs from natural language descriptions.
## Models
| Model | Base | Params | GGUF Size | Final Eval Loss |
|-------|------|--------|-----------|-----------------|
| **lamp-gemma-4b-v2** | Gemma 3 4B IT | 4.3B | ~4.1 GB (Q8_0) | 0.0288 |
## Training Details
- **Fine-tune Type:** Full parameter (no LoRA); all 4,300,079,472 parameters trained
- **Precision:** bf16 (bfloat16)
- **Dataset:** 6,567 training examples + 730 validation examples
- **Epochs:** 2
- **Effective Batch Size:** 16 (8 per device × 2 gradient accumulation steps)
- **Learning Rate:** 2e-5 with cosine schedule
- **Optimizer:** AdamW (weight decay 0.01)
- **Training Time:** 38.1 minutes on NVIDIA H200
- **Peak VRAM:** 24.3 GB
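For reference, the effective batch size above follows from per-device batch size × gradient accumulation steps. A sketch of the run's hyperparameters as a plain Python dict (the key names are illustrative, not the actual `training_config.json` schema; values are taken from the list above):

```python
# Hyperparameters from the run above, collected in a plain dict.
# Key names are illustrative only -- not the real training_config.json schema.
train_config = {
    "per_device_batch_size": 8,
    "gradient_accumulation_steps": 2,
    "learning_rate": 2e-5,
    "lr_schedule": "cosine",
    "optimizer": "adamw",
    "weight_decay": 0.01,
    "precision": "bf16",
    "epochs": 2,
}

# Effective batch size = per-device batch * gradient accumulation steps
effective_batch = (
    train_config["per_device_batch_size"]
    * train_config["gradient_accumulation_steps"]
)
print(effective_batch)  # 16
```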
## Training Graphs
Loss curves and summary plots from the run are included under `graphs/` (`training_loss.png`, `training_details.png`, `training_summary.png`).
## Usage
### With Ollama (GGUF)
```bash
# Download the GGUF file and Modelfile from lamp-gemma-4b-v2-gguf/
ollama create lamp-gemma -f Modelfile
ollama run lamp-gemma "warm and cozy lighting"
```
### With Transformers (HuggingFace)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# The model was trained in bf16, so load it in the same dtype
model = AutoModelForCausalLM.from_pretrained(
    "MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2", torch_dtype="bfloat16"
)
tokenizer = AutoTokenizer.from_pretrained("MrMoeeee/lamp-models", subfolder="lamp-gemma-4b-v2")
```
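Since the model returns JSON lighting programs, it is worth parsing and sanity-checking the output before driving hardware. A minimal sketch, where the raw string stands in for generated text and the program shape shown is an assumption, not the documented schema:

```python
import json

# Stand-in for model output -- the actual JSON schema of LAMP programs
# is not documented here, so this shape is only an assumption.
raw_output = '{"steps": [{"color": [255, 160, 60], "brightness": 0.6, "duration_ms": 2000}]}'

program = json.loads(raw_output)          # parse the generated program
step = program["steps"][0]
assert 0.0 <= step["brightness"] <= 1.0   # basic sanity check before sending to LEDs
print(step["color"])  # [255, 160, 60]
```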
## Files
```
lamp-gemma-4b-v2/                  # Full model weights + training logs
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── config.json
├── tokenizer.json
├── training_config.json
├── training_log.json
├── training_metrics.csv
├── metrics_detailed.json
└── graphs/
    ├── training_loss.png
    ├── training_details.png
    └── training_summary.png
lamp-gemma-4b-v2-gguf/             # Quantized GGUF for inference
├── lamp-gemma-4b-v2-Q8_0.gguf
└── Modelfile
```
## Dataset
The LAMP dataset consists of natural language lighting requests paired with JSON lighting programs. Each program controls RGB LEDs with support for:
- Static colors and gradients
- Animations (breathing, rainbow, chase, etc.)
- Multi-step sequences with timing
- Brightness and speed control
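To make the feature list concrete, a hypothetical program combining a static color step with an animated step might look like the following. The field names here are illustrative assumptions; consult the dataset files for the real schema:

```python
import json

# A hypothetical multi-step program: a static warm color, then a breathing
# animation. Field names are illustrative only -- not the documented schema.
program = {
    "steps": [
        {"type": "static", "color": [255, 140, 40],
         "brightness": 0.7, "duration_ms": 3000},
        {"type": "animation", "effect": "breathing", "color": [80, 0, 160],
         "speed": 0.5, "duration_ms": 5000},
    ]
}

print(json.dumps(program, indent=2))
```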