MrMoeeee committed (verified)
Commit e36b9dd · 1 Parent(s): 512170b

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ lamp-gemma-4b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ lamp-llama-3b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ training_results.png filter=lfs diff=lfs merge=lfs -text
.gitkeep ADDED
File without changes
Modelfile.lamp-gemma-4b ADDED
@@ -0,0 +1,4 @@
+ FROM ./lamp-gemma-4b.Q8_0.gguf
+ PARAMETER temperature 0.3
+ PARAMETER num_predict 4096
+ PARAMETER stop <end_of_turn>
Modelfile.lamp-llama-3b ADDED
@@ -0,0 +1,4 @@
+ FROM ./lamp-llama-3b.Q8_0.gguf
+ PARAMETER temperature 0.3
+ PARAMETER num_predict 4096
+ PARAMETER stop <|eot_id|>
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ license: apache-2.0
+ tags:
+ - lamp
+ - iot
+ - pixel-art
+ - light-control
+ - fine-tuned
+ - gguf
+ - ollama
+ ---
+
+ # LAMP Fine-Tuned Models
+
+ Fine-tuned language models for the **LAMP** (Moonside Lamp Emulator) project. These models generate JSON light programs from natural-language descriptions.
+
+ ## Models
+
+ | Model | Base | Parameters | GGUF Size | Final Eval Loss |
+ |-------|------|-----------|-----------|-----------------|
+ | `lamp-llama-3b.Q8_0.gguf` | Llama 3.2 3B Instruct | 3.2B | 3.2 GB | 0.0294 |
+ | `lamp-gemma-4b.Q8_0.gguf` | Gemma 3 4B IT | 4.3B | 3.9 GB | 0.0247 |
+
+ ## Training Details
+
+ - **Method**: Full fine-tune (100% of parameters trainable, not LoRA)
+ - **Precision**: bf16
+ - **Hardware**: NVIDIA H200 (140 GB VRAM)
+ - **Framework**: Unsloth + Hugging Face TRL
+ - **Dataset**: 2,268 training / 253 validation examples
+ - **Epochs**: 3
+ - **Effective batch size**: 16 (4 per device x 4 gradient-accumulation steps)
+ - **Learning rate**: 2e-5 with cosine decay
+ - **Optimizer**: AdamW
+
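As a quick sanity check on the hyperparameters above, the effective batch size and the approximate number of optimizer steps can be derived from the stated figures (back-of-the-envelope arithmetic, not part of the actual training code):

```python
import math

# Values taken from the training details above.
per_device_batch = 4
grad_accum_steps = 4
train_examples = 2268
epochs = 3

effective_batch = per_device_batch * grad_accum_steps   # 16
steps_per_epoch = math.ceil(train_examples / effective_batch)  # 142
total_steps = steps_per_epoch * epochs                  # 426

print(effective_batch, steps_per_epoch, total_steps)
```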
+ ## Training Results
+
+ ![Training Results](training_results.png)
+
+ Both models converged well:
+ - **Llama 3.2 3B**: training loss 1.38 -> 0.018, eval loss 0.0294
+ - **Gemma 3 4B**: training loss 1.37 -> 0.018, eval loss 0.0247
+
+ ## Usage with Ollama
+
+ 1. Download the GGUF file.
+ 2. Create a Modelfile:
+
+ ```
+ # Modelfile.lamp-llama-3b
+ FROM ./lamp-llama-3b.Q8_0.gguf
+ PARAMETER temperature 0.3
+ PARAMETER num_predict 4096
+ PARAMETER stop <|eot_id|>
+ ```
+
+ 3. Create and run:
+
+ ```bash
+ ollama create lamp-llama-3b -f Modelfile.lamp-llama-3b
+ ollama run lamp-llama-3b "warm and cozy"
+ ```
+
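Beyond the CLI, the model can also be queried programmatically through Ollama's REST API. The sketch below assumes a local Ollama server on the default port (11434) and that the model was created as `lamp-llama-3b` per the steps above; only the Python standard library is used:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_payload(prompt: str, model: str = "lamp-llama-3b") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return its reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("warm and cozy"))
```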
+ ## Example
+
+ **Input**: "Create a light program for: warm and cozy"
+
+ **Output**: A JSON program controlling LED pixels with colors, animations, and timing for a warm ambient effect.
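Since the model's reply is expected to be a JSON program, callers will typically parse and minimally validate it before driving the lamp. The exact program schema is not documented in this repo, so the sample reply below is purely hypothetical:

```python
import json

def parse_program(model_output: str) -> dict:
    """Parse the model's reply as a JSON light program.

    The program schema is not documented here, so this only checks that
    the reply is well-formed JSON with an object at the top level.
    """
    program = json.loads(model_output)
    if not isinstance(program, dict):
        raise ValueError("expected a JSON object at the top level")
    return program

# Hypothetical reply shape, for illustration only.
sample = '{"colors": ["#FFB347"], "animation": "fade", "duration_ms": 3000}'
print(parse_program(sample))
```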
lamp-gemma-4b.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee2ed4dc203391abd941653f5b04598f90656b6d9be0cb1a6ac8f0290534e808
+ size 4130401952
lamp-llama-3b.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02eac7bad9f7a3681923b75ea8006b24daf696df4b4170a3d0cee673c4bb8e9f
+ size 3421898944
training_results.png ADDED

Git LFS Details

  • SHA256: ec3c225ec2a735738f7d642ea44f6b7761dfb3f0b1e0a395f09b02735b24ecf0
  • Pointer size: 131 Bytes
  • Size of remote file: 210 kB