saadxsalman committed · Commit 4bb0a12 · verified · Parent: cbfc66b

Create README.md

README.md ADDED (+95 lines)
---
license: apache-2.0
base_model: saadxsalman/SS-350M-SQL-Strict
library_name: llama.cpp
tags:
- text-to-sql
- gguf
- lfm
- liquid-ai
- edge-llm
- database
- slm
datasets:
- gretelai/synthetic_text_to_sql
language:
- en
---

# SS-350M-SQL-Strict-GGUF

This repository contains the GGUF quantization of [SS-350M-SQL-Strict](https://huggingface.co/saadxsalman/SS-350M-SQL-Strict).

## Model Summary
**SS-350M-SQL-Strict-GGUF** is a specialized, ultra-lightweight Small Language Model (SLM) optimized for **Text-to-SQL translation** on edge devices. Built on the **LiquidAI LFM2.5-350M** architecture, the model is engineered for "strict" output: it generates **only** raw SQL code, with no conversational filler, explanations, or Markdown formatting.

## Technical Specifications
- **Architecture:** Liquid Foundation Model (LFM) 2.5
- **Parameters:** 350 million
- **Quantization:** Q8_0 (8-bit)
- **Model Size:** ~370 MB
- **Context Length:** 32,768 tokens
- **Inference Engine:** optimized for `llama.cpp`

## Key Features
- **Zero filler:** returns raw SQL queries immediately (no "Sure, here is your code").
- **High speed:** leverages LFM's linear-complexity architecture for near-instantaneous generation on CPUs.
- **Low footprint:** runs comfortably on devices with < 1 GB RAM, making it ideal for mobile or embedded database interfaces.

---
41
+
42
+ ## Prompting Specification (ChatML)
43
+ To ensure the "Strict" behavior and prevent hallucinations, you **must** follow the ChatML prompt format.
44
+
45
+ ### Template
46
+ ```text
47
+ <|im_start|>system
48
+ You are a SQL translation engine. Return ONLY raw SQL. Schema: {YOUR_SCHEMA}<|im_end|>
49
+ <|im_start|>user
50
+ {YOUR_QUESTION}<|im_end|>
51
+ <|im_start|>assistant
52
+ ```
53
+
54
+ ### Example Input
55
+ **System:** `Table 'employees' (id, name, department, salary)`
56
+ **User:** `Find the total salary of the 'Sales' department.`
57
+
58
+ ### Example Output
59
+ ```sql
60
+ SELECT SUM(salary) FROM employees WHERE department = 'Sales';
61
+ ```
62
+
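As a quick sanity check, the template and the example above can be reproduced in a few lines of Python: a small helper (hypothetical, not part of this repository) assembles the ChatML prompt, and the model's example output is run against an in-memory SQLite database (whose dialect is close enough to standard SQL for this query). The sample rows are made up for illustration.

```python
import sqlite3

def build_prompt(schema: str, question: str) -> str:
    """Assemble the strict ChatML prompt from a schema description and a question."""
    return (
        "<|im_start|>system\n"
        f"You are a SQL translation engine. Return ONLY raw SQL. Schema: {schema}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "Table 'employees' (id, name, department, salary)",
    "Find the total salary of the 'Sales' department.",
)

# Verify the model's example output against a throwaway database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?, ?)",
    [(1, "Ada", "Sales", 50000.0), (2, "Bob", "Sales", 45000.0), (3, "Cy", "HR", 40000.0)],
)
total = conn.execute(
    "SELECT SUM(salary) FROM employees WHERE department = 'Sales';"
).fetchone()[0]
print(total)  # 95000.0
```

The prompt string ends with the open `<|im_start|>assistant\n` turn, so the model's completion is the SQL itself.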
---

## Local Deployment with llama.cpp

You can run this model locally using the following command:

```bash
./llama-cli -m SS-350M-SQL-Strict.Q8_0.gguf \
  -p "<|im_start|>system\nYou are a SQL engine. Return ONLY raw SQL. Schema: Table 'inventory' (item, quantity)\n<|im_end|>\n<|im_start|>user\nHow many items are in stock?\n<|im_end|>\n<|im_start|>assistant\n" \
  --temp 0 \
  -n 128
```
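For scripting, the same invocation can be assembled from Python. This is a sketch under the same assumptions as the command above (model path, prompt, flags); embedding real newlines in the prompt string avoids relying on `llama-cli`'s escape processing of `\n`.

```python
import shlex

MODEL = "SS-350M-SQL-Strict.Q8_0.gguf"  # path assumed, as in the command above

prompt = (
    "<|im_start|>system\n"
    "You are a SQL engine. Return ONLY raw SQL. "
    "Schema: Table 'inventory' (item, quantity)\n<|im_end|>\n"
    "<|im_start|>user\nHow many items are in stock?\n<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Argument list suitable for subprocess.run(cmd) against a local llama.cpp build.
cmd = ["./llama-cli", "-m", MODEL, "-p", prompt, "--temp", "0", "-n", "128"]

# shlex.join produces a copy-paste-ready, correctly quoted shell line.
print(shlex.join(cmd))
```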

## Training Logic
The base model was fine-tuned with **4-bit QLoRA** on the **Gretel Synthetic SQL** dataset. A key differentiator in training was **completion-only loss masking**, which concentrates all of the model's learning signal on SQL syntax rather than prompt structure.
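Completion-only masking can be illustrated with a small sketch (hypothetical token IDs; in practice, collators such as TRL's `DataCollatorForCompletionOnlyLM` do this by setting prompt labels to `-100`, the index that PyTorch's cross-entropy loss ignores):

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch's cross-entropy loss

def mask_prompt_labels(input_ids: list[int], prompt_len: int) -> list[int]:
    """Copy input_ids as labels, masking the prompt span so the loss is
    computed only on the completion (the SQL) tokens."""
    labels = list(input_ids)
    labels[:prompt_len] = [IGNORE_INDEX] * prompt_len
    return labels

# Hypothetical tokenization: 5 prompt tokens followed by 3 SQL tokens.
ids = [101, 102, 103, 104, 105, 900, 901, 902]
labels = mask_prompt_labels(ids, prompt_len=5)
print(labels)  # [-100, -100, -100, -100, -100, 900, 901, 902]
```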

## Limitations & Dialect
- **Dialect:** defaults to standard SQL.
- **Complexity:** best suited to schemas with fewer than 20 tables.
- **Reasoning:** this is a translation engine; it does not "think" step by step or explain its logic. If the input is ambiguous, it attempts the most likely SQL translation.

## Citation
If you use this model or the underlying LFM architecture, please cite:

```bibtex
@article{saadsalman2026sqlstrict,
  author = {Saad Salman},
  title  = {SS-350M-SQL-Strict: Edge-Optimized Text-to-SQL},
  year   = {2026}
}
```
---