---
base_model: Qwen/Qwen3.5-2B
library_name: peft
tags:
- lora
- qwen
- bash
- shell
- linux
- text-generation
---
# Qwen3.5-2B ShellCommand Linux LoRA

This repository contains a PEFT LoRA adapter fine-tuned to translate natural-language requests into Linux shell commands.
## Artifact Type

This is a LoRA adapter, not a merged full model checkpoint.
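If a standalone checkpoint is preferred, the adapter can be folded into the base weights with PEFT's `merge_and_unload()`. A minimal sketch, assuming network access to download both repos; the function name and output directory are ours, not part of this release:

```python
def merge_adapter(base_id="Qwen/Qwen3.5-2B",
                  adapter_id="louisguthmann/qwen3.5-2b-shellcommand-linux-lora",
                  out_dir="qwen3.5-2b-shellcommand-merged"):
    """Fold the LoRA weights into the base model and save a full checkpoint."""
    # Imports kept local so the sketch can be read without peft/transformers installed.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(base_id)
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(out_dir)  # standalone checkpoint, no adapter needed at load time
```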
## Intended Behavior

The model is tuned to return exactly one of:
- a Bash command or short Bash snippet
- `ASK: <one short clarifying question>`
- `CANNOT: <brief reason>`
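Because every reply follows one of these three shapes, a caller can dispatch on a simple prefix check. The helper below is a hypothetical sketch, not part of the released adapter:

```python
def classify_response(text: str) -> str:
    """Classify a model reply as 'command', 'ask', or 'cannot'.

    Hypothetical convenience helper: assumes the model honors the
    three-shape output contract described above.
    """
    t = text.strip()
    if t.startswith("ASK:"):
        return "ask"
    if t.startswith("CANNOT:"):
        return "cannot"
    # Anything else is treated as a Bash command or snippet.
    return "command"
```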
## Eval Snapshot

- score: 276.5033
- verifier ok rate: 77.50%
- verifier command rate: 76.04%
- verifier ask rate: 75.00%
- verifier cannot rate: 100.00%
- exact any-exact rate: 25.00%
- exact parse-ok rate: 98.00%
## Usage

Load this adapter on top of `Qwen/Qwen3.5-2B` with PEFT.
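A minimal loading sketch with PEFT. The wrapper function name is ours, and downloading the weights requires network access and the `peft` and `transformers` packages:

```python
def load_shell_lora(base_id="Qwen/Qwen3.5-2B",
                    adapter_id="louisguthmann/qwen3.5-2b-shellcommand-linux-lora"):
    """Load the base model, attach this LoRA adapter, and return (model, tokenizer)."""
    # Imports kept local so the sketch can be read without peft/transformers installed.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```

Typical use: `model, tokenizer = load_shell_lora()`, then generate as with any causal LM.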