---
base_model: microsoft/phi-3-mini-128k-instruct
library_name: peft
model_name: phi3-nl2bash-lora
tags:
- lora
- nl2bash
- sft
- bash
- transformers
- trl
license: mit
pipeline_tag: text-generation
---
# phi3-nl2bash-lora
This repository contains **LoRA adapter weights** fine-tuned on the
[`jiacheng-ye/nl2bash`](https://huggingface.co/datasets/jiacheng-ye/nl2bash)
dataset to convert **natural language instructions into Linux bash commands**.
> ⚠️ This repository contains **LoRA adapters only**, not the base model.
> You must load these adapters on top of
> **`microsoft/phi-3-mini-128k-instruct`**.
---
## Intended use
The model is trained to output **only valid bash commands**, with no explanations.
**Example**
Input:
```
List all .txt files recursively and count lines
```
Output:
```bash
find . -name "*.txt" | xargs wc -l
```
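Even with a model trained to emit bare commands, generations can occasionally arrive wrapped in a markdown code fence. A minimal post-processing helper, sketched here as an assumption (it is not part of this repository), normalizes the raw output:

```python
import re

def extract_bash(text: str) -> str:
    """Strip an optional markdown code fence and surrounding whitespace
    from a model generation, returning the bare command."""
    text = text.strip()
    # Remove a leading ```bash / ```sh / ``` fence and the trailing ``` if present
    match = re.match(r"^```(?:bash|sh)?\s*\n(.*?)\n?```$", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return text

print(extract_bash("```bash\nls -la\n```"))  # → ls -la
print(extract_bash("ls -la"))                # → ls -la
```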
## Training summary
- Base model: microsoft/phi-3-mini-128k-instruct
- Fine-tuning method: LoRA (PEFT)
- Trainer: TRL SFTTrainer
- Dataset: jiacheng-ye/nl2bash
- Output format: Bash commands only
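The exact training hyperparameters are not published in this card. The configuration sketch below shows a plausible TRL + PEFT setup consistent with the summary above; every hyperparameter value (rank, alpha, target modules) is an illustrative assumption, not the actual configuration used:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("jiacheng-ye/nl2bash", split="train")

# Illustrative LoRA settings -- the actual rank/alpha are not documented here
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3 attention projections
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="microsoft/phi-3-mini-128k-instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="phi3-nl2bash-lora"),
)
trainer.train()
```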
## Loading example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = "microsoft/phi-3-mini-128k-instruct"
lora_model = "ayertiam/phi3-nl2bash-lora"

# Load the tokenizer and base model first
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, lora_model)
model.eval()
```
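Phi-3's instruct variants expect a chat-formatted prompt. In practice, prefer `tokenizer.apply_chat_template`, which applies the model's own template; the sketch below is a hand-written approximation using the `<|user|>` / `<|assistant|>` markers, shown only to make the prompt layout concrete:

```python
def build_prompt(instruction: str) -> str:
    """Format a natural-language request in Phi-3's chat layout (approximation)."""
    return f"<|user|>\n{instruction}<|end|>\n<|assistant|>\n"

prompt = build_prompt("List all .txt files recursively and count lines")

# Tokenize, generate, and decode (model/tokenizer from the loading example above):
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
#                        skip_special_tokens=True))
```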
## Notes
- These adapters are model-specific and only compatible with `microsoft/phi-3-mini-128k-instruct`.
- For Ollama or GGUF usage, the LoRA adapters must first be merged into the base model and converted before inference, as done in https://huggingface.co/ayertiam/phi3-nl2bash-gguf