Qwen Coder 3B - N8N Workflow Generator (Merged)

This is the merged checkpoint of the LoRA fine-tuned model for generating N8N workflow JSON: the adapter weights have been fused into the base model, so it can be loaded directly without PEFT.

Model Details

  • Base Model: Qwen/Qwen2.5-Coder-3B-Instruct
  • Fine-tuned adapter: eclaude/qwen-coder-3b-n8n-sft
  • Training: SFT on 8,782 N8N workflow examples
  • Task: Generate valid N8N workflow JSON from natural language prompts (French)

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "eclaude/qwen-coder-3b-n8n-merged", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("eclaude/qwen-coder-3b-n8n-merged")

# "Create a workflow that fetches data from an API and sends it to Slack"
prompt = "Crée un workflow qui récupère des données d'une API et les envoie sur Slack"
# The base model is instruction-tuned, so format the prompt with the chat template.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=2048)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
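The model is trained to emit N8N workflow JSON, but decoded output can still carry surrounding prose or code fences. Below is a minimal sketch of pulling the first top-level JSON object out of the generated text and checking for the two top-level keys an N8N workflow export normally carries (`nodes` and `connections`). The helper name is illustrative and not part of this repository:

```python
import json


def extract_workflow_json(text: str) -> dict:
    """Extract and validate the first top-level JSON object in model output.

    Note: the brace counting ignores braces inside string literals, which is
    usually fine for well-formed model output but is not a full JSON scanner.
    """
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found in model output")
    depth = 0
    for i, ch in enumerate(text[start:], start=start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                workflow = json.loads(text[start : i + 1])
                # An n8n workflow export normally has these top-level keys.
                for key in ("nodes", "connections"):
                    if key not in workflow:
                        raise ValueError(f"missing expected key: {key}")
                return workflow
    raise ValueError("unbalanced braces in model output")
```

Typical use is to feed it the decoded string: `workflow = extract_workflow_json(tokenizer.decode(outputs[0], skip_special_tokens=True))`, then import the resulting dict into n8n.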

Training Metrics

  • Training loss: 1.04
  • Eval loss: 1.02
  • Token accuracy: 73%
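The card does not state how token accuracy is computed; a common convention for SFT (e.g. TRL's `mean_token_accuracy`) is the fraction of non-ignored label positions where the model's argmax prediction matches the label. A small sketch of that definition over plain token-id lists, with the usual `-100` ignore index; the function name is illustrative:

```python
def token_accuracy(pred_ids, label_ids, ignore_id=-100):
    """Fraction of scored (non-ignored) positions where prediction == label."""
    scored = [(p, l) for p, l in zip(pred_ids, label_ids) if l != ignore_id]
    if not scored:
        return 0.0
    return sum(p == l for p, l in scored) / len(scored)


# 3 of the 4 scored positions match:
print(token_accuracy([5, 9, 2, 7], [5, 9, 4, 7]))  # 0.75
```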
Model Format

  • Format: Safetensors
  • Model size: 3B params
  • Tensor type: BF16

Model tree for eclaude/qwen-coder-3b-n8n-merged

  • Root base model: Qwen/Qwen2.5-3B
  • Fine-tuned from: Qwen/Qwen2.5-Coder-3B-Instruct
  • This model: eclaude/qwen-coder-3b-n8n-merged