# Mistral-7B-Instruct LoRA Fine-tuned Model
This repository contains a LoRA fine-tuned model trained using LLaMA-Factory.
## 🔹 Base Model

- mistralai/Mistral-7B-Instruct-v0.3
## 🔹 Training Framework
- Framework: LLaMA-Factory
- Adapter: PEFT (LoRA)
## 🔹 Training Details
- Method: LoRA fine-tuning
- Checkpoint: checkpoint-19
- Task: Instruction fine-tuning
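To see why LoRA fine-tuning is lightweight compared with full fine-tuning, consider a single d × d weight matrix: full fine-tuning updates all d² entries, while a rank-r LoRA adapter trains only the two low-rank factors B (d × r) and A (r × d), i.e. 2·d·r parameters. The sketch below uses Mistral-7B's hidden size (d = 4096) and a rank of r = 16 purely for illustration; the actual rank used for this adapter is not stated in this card.

```python
# Parameter count for one d x d weight matrix:
# full fine-tuning trains d*d entries; a rank-r LoRA adapter
# (delta_W = B @ A with B: d x r, A: r x d) trains only 2*d*r.
d = 4096  # Mistral-7B hidden size (used here for illustration)
r = 16    # hypothetical LoRA rank; the adapter's real rank is not stated

full_params = d * d       # 16,777,216 trainable values per matrix
lora_params = 2 * d * r   # 131,072 trainable values per matrix

print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.4%}")
```

With these illustrative numbers the adapter trains well under 1% of the per-matrix parameters, which is why only the small adapter checkpoint needs to be shipped alongside the base model.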
## 🔹 How to Load the Model
You can load this model using the `peft` and `transformers` libraries:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_name = "mistralai/Mistral-7B-Instruct-v0.3"
lora_model_name = "MajdSuleiman/mistral-docker-nl-lora"

# 1. Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype=torch.float16,  # recommended for Mistral to save memory
)

# 2. Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, lora_model_name)
model.eval()

# 3. Generate text
input_text = "I want to see the running containers with the ACME label from the vendor ACME."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model tree for MajdSuleiman/mistral-docker-nl-lora

- Base model: mistralai/Mistral-7B-v0.3
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.3