pipeline_tag: text-generation
language:
- en
library_name: mlx
---

# Qwen2.5-Coder-14B-n8n-Workflow-Generator

Fine-tuned Qwen2.5-Coder-14B-Instruct model specialized for generating n8n workflow JSONs from natural language descriptions.
## Model Description

This model is a QLoRA fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the [n8nbuilder-n8n-workflows-dataset](https://huggingface.co/datasets/mbakgun/n8nbuilder-n8n-workflows-dataset), containing 2.5K+ n8n workflow templates.

**Training Details:**

- **Base Model**: Qwen/Qwen2.5-Coder-14B-Instruct
- **Method**: QLoRA (4-bit quantization)
- **LoRA Rank**: 32
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mbakgun/Qwen2.5-Coder-14B-n8n-Workflow-Generator",
    device_map="auto"
)

system_prompt = "You are an expert n8n workflow generation assistant. Your goal is to create valid, efficient, and functional n8n workflow configurations."

user_input = "Create a workflow that monitors a RSS feed and sends new items to Discord."
```
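Qwen2.5-Coder chat models consume prompts in the ChatML format; `tokenizer.apply_chat_template` assembles this automatically, but a hand-rolled sketch (the marker tokens are standard Qwen2.5 conventions, shown here purely for illustration) makes the prompt structure explicit:

```python
def build_chatml_prompt(system_prompt: str, user_input: str) -> str:
    # ChatML layout used by Qwen2.5 chat models: one <|im_start|>...<|im_end|>
    # span per role, ending with an open assistant turn for generation.
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_input}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are an expert n8n workflow generation assistant.",
    "Create a workflow that monitors a RSS feed and sends new items to Discord.",
)
```

In practice, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` so the template always matches the model's own configuration.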
### MLX (Apple Silicon)

```bash
# Download MLX Q4 model
mlx_lm.generate \
  --model mbakgun/Qwen2.5-Coder-14B-n8n-Workflow-Generator/mlx-q4 \
  --prompt "You are an expert n8n workflow generation assistant...\n\nCreate a workflow that sends Slack notifications when GitHub issues are created." \
  --max-tokens 4096
```
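Whichever backend is used, the model returns the workflow as text, and responses are sometimes wrapped in a markdown code fence. A small helper (hypothetical, not part of this repository) can strip the fence and check the minimal n8n shape, since a valid workflow JSON carries top-level `nodes` and `connections` keys:

```python
import json

def extract_workflow_json(model_output: str) -> dict:
    """Parse a generated n8n workflow, tolerating markdown code fences."""
    text = model_output.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with optional language tag)
        # and everything from the closing fence onward.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    workflow = json.loads(text)
    missing = [k for k in ("nodes", "connections") if k not in workflow]
    if missing:
        raise ValueError(f"not an n8n workflow, missing keys: {missing}")
    return workflow

sample = '```json\n{"nodes": [], "connections": {}}\n```'
workflow = extract_workflow_json(sample)
```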
## Training Data

This model was fine-tuned on the [n8nbuilder-n8n-workflows-dataset](https://huggingface.co/datasets/mbakgun/n8nbuilder-n8n-workflows-dataset), which contains:

- **2,304 workflow templates** (after filtering sequences >8192 tokens)
- Format: Alpaca (instruction/input/output)
- Source: n8n.io public template gallery
- [n8nbuilder.dev - Create n8n Workflows in Seconds with AI](https://n8nbuilder.dev)
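Each Alpaca-format record pairs an instruction and a user request with the target workflow JSON. A hypothetical record (field contents invented for illustration, not copied from the dataset) has this shape:

```python
import json

# Illustrative Alpaca-format record; the real instruction text and
# workflow output come from the dataset, only the field layout is shown.
record = {
    "instruction": "You are an expert n8n workflow generation assistant.",
    "input": "Create a workflow that monitors a RSS feed and sends new items to Discord.",
    "output": json.dumps({"nodes": [], "connections": {}}),
}
```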
## Performance
- Long workflows (>8192 tokens) may be truncated
- Model trained on public templates only

## Citation

```bibtex
```
- [n8n-mcp](https://github.com/czlonkowski/n8n-mcp) for template indexing

## License

Apache 2.0