Mistral-7B-Instruct LoRA Fine-tuned Model

This repository contains a LoRA fine-tuned model trained using LLaMA-Factory.

🔹 Base Model

  • Base model: mistralai/Mistral-7B-Instruct-v0.3
🔹 Training Framework

  • Framework: LLaMA-Factory
  • Adapter: PEFT (LoRA)

🔹 Training Details

  • Method: LoRA fine-tuning
  • Checkpoint: checkpoint-19
  • Task: Instruction fine-tuning
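LoRA fine-tuning freezes the base model's weights and learns a small low-rank update per target layer. A minimal NumPy sketch of the idea (illustrative only; the dimensions, rank, and scaling factor below are arbitrary examples, not this model's actual training hyperparameters):

```python
import numpy as np

# LoRA: instead of updating the full weight W (d x k), learn two small
# matrices B (d x r) and A (r x k) with rank r << min(d, k).
# The effective weight is W + (alpha / r) * B @ A.
d, k, r, alpha = 8, 8, 2, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))   # frozen base weight
A = rng.standard_normal((r, k))   # trainable, randomly initialized
B = np.zeros((d, r))              # trainable, zero-initialized

delta = (alpha / r) * B @ A       # low-rank update
W_eff = W + delta

# Because B starts at zero, the adapter is a no-op before training:
print(np.allclose(W_eff, W))  # True
```

This is why loading a freshly initialized adapter leaves the base model's behavior unchanged until training moves B away from zero.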

🔹 How to Load the Model

You can load this model using the peft and transformers libraries:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_name = "mistralai/Mistral-7B-Instruct-v0.3"
lora_model_name = "MajdSuleiman/mistral-docker-nl-lora"

# 1. Load the Base Model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    device_map="auto",
    torch_dtype=torch.float16 # Recommended for Mistral to save memory
)

# 2. Load the LoRA Adapter
model = PeftModel.from_pretrained(base_model, lora_model_name)
model.eval()

# 3. Generate Text
input_text = "I want to see the running containers with the ACME label from the vendor ACME."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
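Note that Mistral-Instruct models are trained on a chat format where the user turn is wrapped in [INST] tags; in practice you should let tokenizer.apply_chat_template build the prompt for you. As a sketch of what that template roughly produces, here is a hypothetical helper (build_mistral_prompt is illustrative, not part of any library; the tokenizer adds the BOS token itself, so it is omitted here):

```python
def build_mistral_prompt(user_message: str) -> str:
    # Wrap a single user turn in Mistral's [INST] ... [/INST] markers.
    # The tokenizer prepends the BOS token, so we don't add <s> manually.
    return f"[INST] {user_message} [/INST]"

prompt = build_mistral_prompt("List all running containers.")
print(prompt)  # [INST] List all running containers. [/INST]
```

Prompting the fine-tuned model in the same format it was trained on generally yields noticeably better instruction-following than passing raw text.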
