---
license: other
license_name: lfm1.0
license_link: https://huggingface.co/LiquidAI/LFM2-2.6B/blob/main/LICENSE
metrics:
- magic judge
base_model:
- LiquidAI/LFM2-2.6B
tags:
- lmstudio
- madlabOSS
- magic judge
---

# LMS Guide 2.6b

## 🧠 Overview
The **LMS Guide 2.6b** is part of the **MadlabOSS LM Studio Guide** family — a lineup of small, efficient, and highly aligned assistant models trained specifically to provide deterministic, hallucination‑resistant guidance for LM Studio users.

This model is trained on a curated dataset of LM Studio–specific instructions, workflows, troubleshooting steps, and conceptual explanations.

---

## 🚀 Intended Use
This model is optimized for:

- LM Studio onboarding  
- workflow explanations  
- feature descriptions  
- troubleshooting guidance  
- plugin/server integration help  
- safe, deterministic assistant behavior  

It is **not** intended as a general‑purpose chatbot.
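
Since the model is aimed at LM Studio users, the most direct way to try it is through LM Studio's built-in OpenAI-compatible local server. The sketch below is illustrative only: it assumes the model has been loaded in LM Studio with the server running on the default port (1234), and the model identifier shown is a hypothetical placeholder.

```python
# Minimal sketch: querying the model through LM Studio's local server
# (the OpenAI-compatible API LM Studio exposes when its server is enabled).
# Assumes the model is loaded and the server runs on the default port 1234;
# the model identifier below is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="lms-guide-2.6b",  # hypothetical identifier shown by LM Studio
    messages=[
        {"role": "user", "content": "How do I enable the local server in LM Studio?"}
    ],
    temperature=0.0,  # deterministic output matches the model's design goal
)
print(response.choices[0].message.content)
```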

---

## 🧩 Model Details

**Base Model:** LFM2‑2.6B

**Parameter Count:** 2.6 billion

**Training Type:** Supervised fine‑tuning (SFT)

**Sequence Length:** 1024 tokens

**Precision:** FP16

**Framework:** PyTorch / Transformers  
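
Since the card lists PyTorch / Transformers as the framework, a standard Transformers loading path should apply. The snippet below is a minimal sketch, not an official recipe: the repository id is a placeholder for wherever this fine-tune is published, and greedy decoding is chosen to match the deterministic behavior described above.

```python
# Minimal inference sketch with Hugging Face Transformers (FP16 checkpoint).
# The repository id is a placeholder -- substitute the actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "madlabOSS/LMS-Guide-2.6b"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What does the context length setting do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding to mirror the model's deterministic design goal.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```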

---

## 📦 Training Data
The model was trained on:

- **6,000+ LM Studio–specific instruction/response pairs**  
- Clean, domain‑specific, ontology‑consistent data  
- A small amount of general‑purpose conversational data  
- No web‑scraped content  
- The full LM Studio documentation  

An expanded dataset of 36,000+ pairs is planned for v2.0.

---

## 🏋️ Training Procedure

### **Hyperparameters**
- Epochs: 6  
- Batch size: 16
- Learning rate: cosine schedule, peak ~4e‑5  
- Optimizer: AdamW  
- Gradient clipping: 1.0  
- Gradient accumulation: 1  
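
For reference, here is a minimal sketch of how these settings could map onto a Hugging Face `TrainingArguments` config. The actual training script is not published, so everything beyond the listed numbers (output path, optimizer variant, dataset wiring) is an assumption.

```python
# Sketch mapping the card's hyperparameters onto TrainingArguments:
# 6 epochs, batch size 16, cosine schedule peaking at ~4e-5, AdamW,
# gradient clipping at 1.0, no gradient accumulation, FP16 precision.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lms-guide-2.6b-sft",   # hypothetical output path
    num_train_epochs=6,
    per_device_train_batch_size=16,
    learning_rate=4e-5,                # peak learning rate
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    max_grad_norm=1.0,                 # gradient clipping
    gradient_accumulation_steps=1,
    fp16=True,                         # matches the FP16 precision above
)
```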

### **Hardware**
Training was performed on:

- RTX 6000 Ada (96 GB): 1.2B and 2.6B models  
- Dual RTX 3090: Magic Judge  
- RTX 3070: 0.35B and 0.7B models  

---

## 📊 Evaluation

### **Judge Score**
Responses are scored by the **Magic Judge** on semantic correctness, ontology adherence, and hallucination resistance.
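
The judge itself is not published; purely as an illustration of this style of LLM-as-judge evaluation, a scoring loop might look like the hedged sketch below. The judge model identifier, rubric wording, and 1-5 scale are all assumptions.

```python
# Illustrative LLM-as-judge scoring along the three axes named above.
# The Magic Judge's actual prompt, model, and scale are not public;
# everything here is an assumed stand-in.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

RUBRIC = (
    "Rate the ASSISTANT answer from 1-5 on each axis:\n"
    "1. Semantic correctness\n"
    "2. Ontology adherence (official LM Studio terminology)\n"
    "3. Hallucination resistance (no invented features)\n"
    "Reply with three integers separated by spaces."
)

def judge(question: str, answer: str) -> list[int]:
    reply = client.chat.completions.create(
        model="magic-judge",  # hypothetical judge identifier
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nANSWER: {answer}"},
        ],
        temperature=0.0,
    )
    return [int(tok) for tok in reply.choices[0].message.content.split()[:3]]
```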


### **Qualitative Behavior**
- Strong adherence to LM Studio terminology  
- Low hallucination rate  
- Deterministic, predictable responses  
- Not optimized for open‑domain reasoning  

---

## 🔒 Safety
This model is trained exclusively on LM Studio–specific content.  
It avoids hallucinating non‑existent LM Studio features and adheres to a strict ontology.

It is **not** designed for:

- political content  
- medical advice  
- legal advice  
- general‑purpose conversation  

---

## ⚠️ Limitations
- Not a general assistant  
- Not trained for coding, math, or open‑domain reasoning  
- May refuse tasks outside LM Studio scope  
- Static accuracy metrics may underestimate real‑world performance  

---