Dataset: MattCoddity/dockerNLcommands
How to use MajdSuleiman/mistral-docker-nl-lora with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base_model, "MajdSuleiman/mistral-docker-nl-lora")
```

This repository contains a LoRA fine-tuned model trained using LLaMA-Factory.
You can load this model using the `peft` and `transformers` libraries:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_name = "mistralai/Mistral-7B-Instruct-v0.3"
lora_model_name = "MajdSuleiman/mistral-docker-nl-lora"

# 1. Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype=torch.float16,  # recommended for Mistral to save memory
)

# 2. Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, lora_model_name)
model.eval()

# 3. Generate text
input_text = "I want to see the running containers with the ACME label from the vendor ACME."
# Use model.device rather than a hard-coded "cuda" so this also works
# when device_map="auto" places the model on CPU or another device.
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
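The base model is an instruct-tuned Mistral, which expects prompts wrapped in `[INST] ... [/INST]` markers. Whether the adapter was trained with this template is not stated here, so the helper below is only a sketch of the base model's convention (`build_prompt` is a hypothetical name, not part of this repo); with `transformers` you can achieve the same via `tokenizer.apply_chat_template`.

```python
# Hypothetical helper sketching Mistral's instruct prompt convention.
# Assumption: the adapter was trained with the base model's [INST] template;
# the model card does not confirm the exact format used during training.
def build_prompt(user_message: str) -> str:
    return f"[INST] {user_message} [/INST]"

prompt = build_prompt(
    "I want to see the running containers with the ACME label from the vendor ACME."
)
# Pass `prompt` to the tokenizer in place of the raw input_text above.
```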
Base model: mistralai/Mistral-7B-v0.3