mrcmilo committed on
Commit 30e743b · verified · 1 Parent(s): a29d995

Update README.md

Files changed (1):
  1. README.md +40 -14
README.md CHANGED
@@ -1,23 +1,49 @@
 ---
+license: mit
+base_model: microsoft/Phi-3-mini-4k-instruct
 tags:
-- gguf
-- llama.cpp
+- text-to-sql
+- natural-language-to-sql
+- phi-3
 - unsloth
-
+- gguf
+- ollama
+- logic
+datasets:
+- b-mc2/sql-create-context
+language:
+- en
+library_name: unsloth
+pipeline_tag: text-generation
 ---
 
-# phi3-text2sql-lora : GGUF
+# Phi-3-SQL-Expert (GGUF)
+
+This is a specialized **Text-to-SQL** model fine-tuned from the **Microsoft Phi-3-mini-4k-instruct** architecture. It has been optimized using **Unsloth** to provide high-accuracy SQL generation while remaining lightweight enough to run on consumer hardware.
+
+## 🚀 Key Features
+- **Architecture:** Phi-3-mini (3.8B parameters)
+- **Quantization:** Q4_K_M GGUF (Optimized balance of speed and logic)
+- **Fast Training:** Fine-tuned using [Unsloth](https://github.com/unslothai/unsloth).
+- **Format:** GGUF (Ready for Ollama, LM Studio, and llama.cpp)
+
+## 🛠 Usage Instructions
 
-This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
+### 1. Ollama (Recommended)
+To deploy locally:
 
-**Example usage**:
-- For text only LLMs: `./llama.cpp/llama-cli -hf mrcmilo/phi3-text2sql-lora --jinja`
-- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf mrcmilo/phi3-text2sql-lora --jinja`
+1. Download the `.gguf` file.
+2. Create a file named `Modelfile` and paste the following:
+```dockerfile
+FROM ./phi-3-mini-4k-instruct.Q4_K_M.gguf
 
-## Available Model files:
-- `phi-3-mini-4k-instruct.Q4_K_M.gguf`
+TEMPLATE """<|system|>
+You are a helpful assistant that writes SQL queries. Given a user question and a table schema, output only the SQL code.<|end|>
+<|user|>
+{{ .Prompt }}<|end|>
+<|assistant|>
+"""
 
-## Ollama
-An Ollama Modelfile is included for easy deployment.
-This was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth)
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+PARAMETER stop "<|end|>"
+PARAMETER temperature 0.1
+PARAMETER num_ctx 2048
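
The `TEMPLATE` block in the new Modelfile follows Phi-3's chat format: Ollama substitutes the user's input for `{{ .Prompt }}` and generation halts at the `<|end|>` stop token set via `PARAMETER stop`. A minimal Python sketch of that expansion (the `render_prompt` helper is hypothetical, for illustration only, and uses Python formatting in place of Go template syntax):

```python
# Sketch of how a prompt is rendered from the Modelfile TEMPLATE above:
# the system turn is fixed, the user's question fills the {{ .Prompt }} slot,
# and the model is expected to reply after the final <|assistant|> tag.

TEMPLATE = (
    "<|system|>\n"
    "You are a helpful assistant that writes SQL queries. "
    "Given a user question and a table schema, output only the SQL code.<|end|>\n"
    "<|user|>\n"
    "{prompt}<|end|>\n"
    "<|assistant|>\n"
)

def render_prompt(user_question: str) -> str:
    """Fill the prompt slot, mirroring Ollama's template expansion."""
    return TEMPLATE.format(prompt=user_question)

if __name__ == "__main__":
    text = render_prompt(
        "Schema: CREATE TABLE users (id INT, name TEXT); "
        "Question: list all user names"
    )
    print(text)
```

Passing both the schema and the question in one prompt matches the `sql-create-context` training format, where each example pairs a `CREATE TABLE` statement with a natural-language question.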