---
license: mit
datasets:
- RnniaSnow/st-code-dataset
language:
- en
base_model:
- RnniaSnow/ST-Coder-14B
tags:
- code
- st
- plc
- industry
- gguf
- llama.cpp
---

# ST-Coder-14B (GGUF)

This repository contains the **GGUF quantized versions** of [RnniaSnow/ST-Coder-14B](https://huggingface.co/RnniaSnow/ST-Coder-14B).

**ST-Coder-14B** is an industrial-grade Large Language Model optimized for **Programmable Logic Controller (PLC)** programming, with a specific focus on the **IEC 61131-3 Structured Text (ST)** language.

The GGUF format lets automation engineers run this model **locally and entirely offline** on standard laptops or IPCs (Industrial PCs) using a CPU or edge GPU, which is crucial for secure, air-gapped shop-floor environments.

## 💾 Available Quantization Formats

We provide several quantization levels. Choose the one that best fits your hardware (RAM/VRAM):

| File Name | Bits | Size | RAM Required | Recommended For |
| :--- | :---: | :---: | :---: | :--- |
| `st-coder-14b-q4_k_m.gguf` | 4-bit | ~8.5 GB | 12 GB+ | **Recommended.** Best balance of speed, size, and code-logic preservation. |
| `st-coder-14b-q6_k.gguf` | 6-bit | ~11.5 GB | 16 GB+ | Very high quality, minimal precision loss. |
| `st-coder-14b-q8_0.gguf` | 8-bit | ~15.2 GB | 20 GB+ | Near-lossless precision for complex engineering math/logic. |

> *Note: Code generation models are sensitive to heavy quantization. We do not recommend using anything below Q4 for ST code generation, as syntax accuracy may degrade.*
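
You can fetch a single quantization file directly with the Hugging Face CLI. A minimal sketch, assuming the repo id `RnniaSnow/ST-Coder-14B-GGUF` used in the LM Studio step below:

```bash
# Download one quantization file into the current directory.
huggingface-cli download RnniaSnow/ST-Coder-14B-GGUF \
  st-coder-14b-q4_k_m.gguf --local-dir .
```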

## 🚀 How to Use

Since this model is based on the Qwen2.5 architecture, it uses the **ChatML** prompt format. Most modern tools will auto-detect this from the GGUF metadata.

### Method 1: LM Studio (Easiest GUI)

This is the recommended method for Windows/macOS users who prefer a graphical interface.

1. Download and install [LM Studio](https://lmstudio.ai/).
2. Search for `RnniaSnow/ST-Coder-14B-GGUF` in the top search bar.
3. Download the `q4_k_m` or `q6_k` file.
4. Load the model, set the system prompt to *"You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text."*, and start chatting.

### Method 2: Ollama (CLI & API)

You can easily serve this model using [Ollama](https://ollama.com/).

1. Download your preferred `.gguf` file to your local machine.
2. Create a file named `Modelfile` in the same directory with the following content:

```dockerfile
FROM ./st-coder-14b-q4_k_m.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM """You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text."""
PARAMETER temperature 0.2
PARAMETER top_p 0.9
```

3. Create and run the model in your terminal:

```bash
ollama create st-coder -f Modelfile
ollama run st-coder "Write a Function Block for a PID controller."
```
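
Once created, the model is also reachable over Ollama's local REST API (served on port 11434 by default). A minimal sketch; the prompt below is just an illustrative example:

```bash
# Send a one-shot generation request to the local Ollama server.
curl http://localhost:11434/api/generate -d '{
  "model": "st-coder",
  "prompt": "Write an ST function block that debounces a digital input.",
  "stream": false
}'
```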

### Method 3: llama.cpp (Command Line)

For advanced users deploying via `llama.cpp`:

```bash
./llama-cli -m st-coder-14b-q4_k_m.gguf \
  -p "<|im_start|>system\nYou are an expert PLC programmer.<|im_end|>\n<|im_start|>user\nWrite an ST program for a conveyor belt motor.<|im_end|>\n<|im_start|>assistant\n" \
  -n 1024 -c 8192 --temp 0.2
```
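
If you prefer a persistent HTTP endpoint instead of one-shot CLI calls, `llama.cpp` also ships `llama-server`, which exposes an OpenAI-compatible API. A minimal sketch; the port and context size here are illustrative choices, not requirements:

```bash
# Serve the model locally; chat requests go to http://localhost:8080/v1/chat/completions.
./llama-server -m st-coder-14b-q4_k_m.gguf -c 8192 --port 8080
```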

## ⚠️ Disclaimer & Industrial Safety

**Industrial Control Systems (ICS) carry significant physical risks.**

* This AI model generates code based on statistical probabilities and does **not** guarantee logical correctness, real-time safety, or hardware compatibility.
* **Always** verify, simulate, and thoroughly test generated code in a safe environment before deploying it to physical hardware (PLCs, drives, robotics).
* The creators of this model assume no liability for any damage, injury, or production downtime resulting from the use of this code.