ST-Coder-14B (GGUF)

This repository contains the GGUF quantized versions of RnniaSnow/ST-Coder-14B.

ST-Coder-14B is an industrial-grade Large Language Model optimized for Programmable Logic Controller (PLC) programming, specifically focused on the IEC 61131-3 Structured Text (ST) language.

By providing GGUF formats, automation engineers can run this model locally and entirely offline on standard laptops or IPCs (Industrial PCs) using a CPU or an edge GPU, which is crucial for secure, air-gapped shop-floor environments.

💾 Available Quantization Formats

We provide several quantization levels. Choose the one that best fits your hardware (RAM/VRAM):

| File Name | Bits | File Size | RAM Required | Recommended For |
|---|---|---|---|---|
| st-coder-14b-q4_k_m.gguf | 4-bit | ~8.5 GB | 12 GB+ | Recommended. Best balance of speed, size, and code-logic preservation. |
| st-coder-14b-q6_k.gguf | 6-bit | ~11.5 GB | 16 GB+ | Very high quality, minimal precision loss. |
| st-coder-14b-q8_0.gguf | 8-bit | ~15.2 GB | 20 GB+ | Near-lossless precision for complex engineering math/logic. |

Note: Code generation models are sensitive to heavy quantization. We do not recommend using anything below Q4 for ST code generation, as syntax accuracy may degrade.
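As an illustration of the table above, here is a small helper (hypothetical, not part of any tool) that picks the highest-quality quantization whose RAM requirement fits your machine:

```python
# RAM requirements (in GB) taken from the quantization table above,
# ordered from highest quality to smallest file.
QUANTS = [
    ("st-coder-14b-q8_0.gguf", 20),
    ("st-coder-14b-q6_k.gguf", 16),
    ("st-coder-14b-q4_k_m.gguf", 12),
]

def pick_quant(ram_gb):
    """Return the best-quality file that fits the given RAM budget."""
    for name, required_gb in QUANTS:
        if ram_gb >= required_gb:
            return name
    return None  # below 12 GB: not recommended for this 14B model

print(pick_quant(16))  # → st-coder-14b-q6_k.gguf
```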

🚀 How to Use

Since this model is based on Qwen/Qwen2.5-14B, it uses the ChatML prompt format. Most modern tools will auto-detect this from the GGUF metadata.
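If you ever need to build the prompt by hand (e.g. for raw llama.cpp use, as in Method 3 below), the ChatML layout can be sketched in a few lines of Python. The tags match the template shown in the Ollama Modelfile later in this section:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt as used by Qwen2.5-based models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

print(chatml_prompt(
    "You are an expert PLC programmer.",
    "Write an ST program for a conveyor belt motor.",
))
```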

Method 1: LM Studio (Easiest GUI)

This is the recommended method for Windows/macOS users who prefer a graphical interface.

  1. Download and install LM Studio.
  2. Search for RnniaSnow/ST-Coder-14B-GGUF in the top search bar.
  3. Download the q4_k_m or q6_k file.
  4. Load the model, set the system prompt to "You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text.", and start chatting.

Method 2: Ollama (CLI & API)

You can easily serve this model using Ollama.

  1. Download your preferred .gguf file to your local machine.
  2. Create a file named `Modelfile` in the same directory with the following content:

```
FROM ./st-coder-14b-q4_k_m.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM """You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text."""
PARAMETER temperature 0.2
PARAMETER top_p 0.9
```

  3. Create and run the model in your terminal:

```
ollama create st-coder -f Modelfile
ollama run st-coder "Write a Function Block for a PID controller."
```
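Besides the CLI, Ollama also exposes a local REST API (by default at http://localhost:11434). A minimal sketch of a request body for its /api/generate endpoint, assuming the `st-coder` model name created in the step above:

```python
import json

# Request body for Ollama's /api/generate endpoint; the model name
# "st-coder" matches the `ollama create` step above.
payload = {
    "model": "st-coder",
    "prompt": "Write a Function Block for a PID controller.",
    "stream": False,  # return one complete response instead of chunks
    "options": {"temperature": 0.2, "top_p": 0.9},
}

body = json.dumps(payload)
# To send it (requires a running Ollama instance, not executed here):
#   requests.post("http://localhost:11434/api/generate", data=body)
print(body)
```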

Method 3: llama.cpp (Command Line)

For advanced users deploying via llama.cpp:

```
./llama-cli -m st-coder-14b-q4_k_m.gguf -p "<|im_start|>system\nYou are an expert PLC programmer.<|im_end|>\n<|im_start|>user\nWrite an ST program for a conveyor belt motor.<|im_end|>\n<|im_start|>assistant\n" -n 1024 -c 8192 --temp 0.2
```

⚠️ Disclaimer & Industrial Safety

Industrial Control Systems (ICS) carry significant physical risks.

  • This AI model generates code based on statistical probabilities and does not guarantee logical correctness, real-time safety, or hardware compatibility.

  • Always verify, simulate, and strictly test the generated code in a safe environment before deploying it to physical hardware (PLCs, drives, robotics).
  • The creators of this model assume absolutely no liability for any damage, injury, or production downtime resulting from the use of this code.