Instructions to use RnniaSnow/ST-Coder-14B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use RnniaSnow/ST-Coder-14B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="RnniaSnow/ST-Coder-14B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RnniaSnow/ST-Coder-14B")
model = AutoModelForCausalLM.from_pretrained("RnniaSnow/ST-Coder-14B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use RnniaSnow/ST-Coder-14B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RnniaSnow/ST-Coder-14B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RnniaSnow/ST-Coder-14B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- SGLang
How to use RnniaSnow/ST-Coder-14B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "RnniaSnow/ST-Coder-14B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RnniaSnow/ST-Coder-14B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "RnniaSnow/ST-Coder-14B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RnniaSnow/ST-Coder-14B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use RnniaSnow/ST-Coder-14B with Docker Model Runner:
```bash
docker model run hf.co/RnniaSnow/ST-Coder-14B
```
ST-Coder-14B
🤖 Model Description
ST-Coder-14B is a specialized code generation model fine-tuned on Qwen2.5-Coder-14B-Instruct. It is specifically optimized for Industrial Automation tasks, with a primary focus on the IEC 61131-3 Structured Text (ST) programming language.
Unlike general-purpose coding models, ST-Coder-14B has been trained on high-quality, domain-specific data to understand:
- PLC Logic: PID control, Motion Control, Safety logic, State Machines.
- IEC 61131-3 Syntax: Correct usage of `FUNCTION_BLOCK`, `VAR_INPUT`, `VAR_OUTPUT`, and strict typing rules (illustrated in the sketch after this list).
- Industrial Protocols: Modbus, TCP/IP socket handling in ST, and vendor-specific nuances (CODESYS, TwinCAT, Siemens SCL).
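For readers unfamiliar with IEC 61131-3, the sketch below shows the constructs the list refers to. The ST code is a hand-written debounce example (not model output), kept in a Python string so it can also be reused, e.g., as a reference snippet inside a prompt:

```python
# Hand-written illustration (not model output) of the IEC 61131-3
# constructs listed above: FUNCTION_BLOCK, VAR_INPUT, VAR_OUTPUT,
# and strict typing.
ST_EXAMPLE = """
FUNCTION_BLOCK FB_Debounce
VAR_INPUT
    bIn    : BOOL;  (* raw input signal *)
    tDelay : TIME;  (* debounce time, e.g. T#50MS *)
END_VAR
VAR_OUTPUT
    bOut   : BOOL;  (* debounced output *)
END_VAR
VAR
    fbTon  : TON;   (* standard on-delay timer *)
END_VAR

fbTon(IN := bIn, PT := tDelay);
bOut := fbTon.Q;
END_FUNCTION_BLOCK
"""
```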
💻 Quick Start
1. Installation
```bash
pip install transformers torch accelerate
```
2. Inference with Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model
model_id = "RnniaSnow/ST-Coder-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the prompt
system_prompt = "You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text."
user_prompt = "Write a Function Block for a 3-axis motion control system with error handling."
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Generate
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,   # required for temperature/top_p to take effect
    temperature=0.2,  # Low temperature is recommended for code generation
    top_p=0.9
)

# Decode only the newly generated tokens, skipping the prompt
output = tokenizer.decode(
    generated_ids[0][model_inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True
)
print(output)
```
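For interactive use you may prefer tokens printed as they are generated rather than after the full completion. A minimal sketch using transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `model_inputs` from the example above:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are produced;
# skip_prompt=True avoids re-printing the chat-template prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
    streamer=streamer,
)
```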
3. Usage with vLLM (Recommended for Production)
```bash
vllm serve RnniaSnow/ST-Coder-14B --tensor-parallel-size 1 --max-model-len 8192
```
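The server exposes an OpenAI-compatible API, by default on port 8000. A minimal client sketch using the `openai` Python package; the `base_url`, placeholder API key, and example prompt are assumptions based on vLLM's defaults, not part of this model card:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; any non-empty key works
# unless the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RnniaSnow/ST-Coder-14B",
    messages=[
        {"role": "system", "content": "You are an expert industrial automation engineer specializing in IEC 61131-3 Structured Text."},
        {"role": "user", "content": "Write a Function Block that debounces a digital input."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```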
🔧 Training Details
This model was trained using LLaMA-Factory with the following configuration:
- Base Model: Qwen/Qwen2.5-Coder-14B-Instruct
- Finetuning Method: Full LoRA merge (target modules: `all`; see the merge sketch after this list)
- Precision: BF16
- Context Window: 8192 tokens
- Optimizer: AdamW (Paged)
- Learning Rate Strategy: Cosine with warmup
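For context, a "full LoRA merge" means the trained adapter weights are folded back into the base model so the release ships as a single standalone checkpoint. A minimal sketch of that step with `peft`; the adapter path is hypothetical and this is not the project's actual training script:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model the adapters were trained against
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-14B-Instruct", torch_dtype="auto"
)

# Attach the LoRA adapters (path is hypothetical) and fold them
# into the base weights, yielding a plain standalone model.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("ST-Coder-14B-merged")
```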
The training data includes a mix of:
- Golden Samples: Verified ST code from real-world engineering projects.
- Synthetic Data: High-quality instruction-response pairs generated via DeepSeek-V3 distillation, focusing on edge cases and complex logic.
⚠️ Disclaimer & Safety
Industrial Control Systems (ICS) carry significant physical risks.
- This model generates code based on statistical probabilities and does not guarantee functional correctness or safety.
- Always verify, simulate, and test generated code in a safe environment before deploying to physical hardware (PLCs, robots, drives).
- The authors assume no liability for any damage or injury resulting from the use of this model.
📜 License
This model is licensed under the MIT License.