# SLM Workflow Planner v7 – PEFT Format (Universal)
HuggingFace PEFT format; works on any platform (CUDA, CPU, Apple Silicon).
Converted from the MLX LoRA adapter (MLX version).
## Quick Start
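A minimal loading sketch using `transformers` + `peft`. The adapter repo id below is a placeholder, not the published name; substitute the actual repository.

```python
BASE_MODEL = "Qwen/Qwen2.5-7B-Instruct"
ADAPTER = "your-username/slm-workflow-planner-v7-peft"  # placeholder repo id


def load_planner():
    """Load the base model and attach the LoRA adapter (downloads weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype="auto", device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER)
    return model, tokenizer
```

Because this is the standard PEFT format, the same call path works on CUDA, CPU, and Apple Silicon (`device_map="auto"` picks the available backend).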
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-7B-Instruct |
| LoRA Rank | 8 |
| LoRA Alpha | 160 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Layers | 0-27 (28 layers) |
| Task | Workflow execution planning (boundary/anomaly specialist) |
## Performance (Solo)
| Category | Score |
|---|---|
| NEXT | 17/22 (77%) |
| RETRY | 0/12 (0%) |
| FORK | 1/14 (7%) |
| JOIN | 14/15 (93%) |
| META | 10/13 (77%) |
| Total | 42/76 (55.3%) |
## Role in Ensemble
v7 is the boundary/anomaly expert, strong on META detection and NEXT/META discrimination. In the 3-expert ensemble it complements v3 (the structural expert) by catching anomalies that v3 misses, while GPT-4.1 arbitrates disagreements.
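The routing described above can be sketched as follows. The function and expert names are hypothetical; the card does not specify the ensemble's exact protocol, only that the arbiter resolves disagreements.

```python
def ensemble_decision(state, predict_v3, predict_v7, arbitrate):
    """Combine the structural expert (v3) and the boundary/anomaly
    expert (v7); defer to an arbiter (GPT-4.1 in the card) only
    when they disagree."""
    a = predict_v3(state)
    b = predict_v7(state)
    if a == b:
        return a  # experts agree: no arbitration needed
    return arbitrate(state, a, b)  # disagreement: arbiter decides


# Usage with stub experts:
result = ensemble_decision(
    "step-3",
    lambda s: "NEXT",          # stub structural expert
    lambda s: "META",          # stub anomaly expert flags a boundary
    lambda s, a, b: b,         # stub arbiter trusting the anomaly call
)
```

Calling the arbiter only on disagreement keeps GPT-4.1 usage proportional to the experts' conflict rate rather than to total traffic.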