---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
  - mistral
  - lora
  - peft
  - text-generation
  - causal-lm
---

# LoRA Mistral Forgetting

This repository contains a LoRA adapter fine-tuned on top of
[`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

> ⚠️ The base model is **not** included in this repository.
> You must load it separately from the Hugging Face Hub.
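Because only the adapter is stored here, the download is small: a LoRA adapter keeps the base weights frozen and adds a low-rank update per target layer. A minimal sketch of the idea in NumPy (illustrative dimensions only, not this adapter's actual shapes):

```python
import numpy as np

# LoRA replaces a full weight update dW (d_out x d_in) with a low-rank
# factorization dW = (alpha / r) * B @ A, where B is (d_out x r),
# A is (r x d_in), and r << min(d_out, d_in).
d_out, d_in, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trainable LoRA "down" matrix
B = np.zeros((d_out, r))                 # trainable LoRA "up" matrix (zero-init)

# Forward pass with the adapter applied: y = (W + (alpha/r) * B @ A) @ x
x = rng.standard_normal(d_in)
y = (W + (alpha / r) * B @ A) @ x

# With B initialized to zero, the adapter starts as an exact no-op.
print(np.allclose(y, W @ x))  # True at initialization
```

Only `A` and `B` are trained and shipped, which is why the adapter is orders of magnitude smaller than the 7B base model.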


## 🚀 Model Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Base model on the Hugging Face Hub
base_model = "mistralai/Mistral-7B-Instruct-v0.2"

# This LoRA adapter repo
adapter_model = "kritarth-lab/lora-mistral-forgetting"

# Load tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load base model (bfloat16 on GPU, float32 on CPU)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, adapter_model)

# Example prompt
prompt = "What is a volcanic eruption?"

# Tokenize input and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate output
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
    )

# Decode and print
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
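Mistral-7B-Instruct models are trained with the `[INST] … [/INST]` chat template; `tokenizer.apply_chat_template` builds it for you, but the single-turn format can also be constructed by hand. A minimal sketch (the helper name is illustrative, not part of this repo; the tokenizer adds the leading `<s>` BOS token itself):

```python
def format_mistral_instruct(prompt: str) -> str:
    """Wrap a single user prompt in Mistral-Instruct's [INST] chat format."""
    return f"[INST] {prompt.strip()} [/INST]"

formatted = format_mistral_instruct("What is a volcanic eruption?")
print(formatted)  # [INST] What is a volcanic eruption? [/INST]
```

Passing `formatted` instead of the raw prompt to `tokenizer(...)` above generally yields more reliable instruction-following from instruction-tuned checkpoints.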