# Qwen Coder 3B - N8N Workflow Generator (Merged)

This is the merged version (base model + LoRA adapter) of the fine-tuned model for generating N8N workflow JSON.
## Model Details
- Base Model: Qwen/Qwen2.5-Coder-3B-Instruct
- Fine-tuned adapter: eclaude/qwen-coder-3b-n8n-sft
- Training: SFT on 8,782 N8N workflow examples
- Task: Generate valid N8N workflow JSON from natural language prompts (French)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "eclaude/qwen-coder-3b-n8n-merged", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("eclaude/qwen-coder-3b-n8n-merged")

# French prompt: "Create a workflow that fetches data from an API and sends it to Slack"
prompt = "Crée un workflow qui récupère des données d'une API et les envoie sur Slack"

# The base model is an instruct model, so apply its chat template
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
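Since the model is expected to emit workflow JSON, it is worth validating the output before importing it into N8N. A minimal sketch, assuming the model may wrap the JSON in a markdown fence or emit it bare (the `extract_workflow_json` helper is hypothetical, not part of this repository):

```python
import json
import re


def extract_workflow_json(text):
    """Pull the first JSON object out of model output and parse it.

    Tries a fenced ```json block first, then falls back to the outermost
    braces. Returns None if nothing parses.
    """
    m = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    candidate = m.group(1) if m else text[text.find("{"): text.rfind("}") + 1]
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None
```

A parsed result can then be checked for the keys an N8N import expects (e.g. `nodes` and `connections`) before saving it to a file.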
## Training Data
- Dataset: eclaude/n8n-workflows-sft
- 8,782 training samples
- 1,197 evaluation samples
## Metrics
- Training loss: 1.04
- Eval loss: 1.02
- Token accuracy: 73%
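Token accuracy here is the fraction of next-token predictions that match the reference tokens, with masked positions excluded. A minimal sketch of that computation, assuming the common convention of labeling ignored positions with `-100`:

```python
def token_accuracy(pred_ids, label_ids, ignore_id=-100):
    """Fraction of predicted token ids matching labels, skipping masked labels."""
    pairs = [(p, l) for p, l in zip(pred_ids, label_ids) if l != ignore_id]
    correct = sum(p == l for p, l in pairs)
    return correct / len(pairs)
```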