Case Study Mistral 7B - LoRA Adapter

This is a LoRA (Low-Rank Adaptation) fine-tuned adapter for the Mistral 7B model, specialized for business case study generation.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model (4-bit quantized; requires the bitsandbytes package)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.3-bnb-4bit")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "afzalur/case-study-mistral-7b-v1")

# Generate
prompt = "Create a business case study about digital transformation"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
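Mistral-Instruct models are trained on a chat format, so wrapping the prompt with the tokenizer's chat template usually produces better output than a raw string. A minimal sketch of that variation (same model and adapter loading as above):

```python
# Sketch: format the prompt with the Mistral instruct chat template
# before generation, instead of tokenizing a raw string.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.3-bnb-4bit")

messages = [
    {"role": "user", "content": "Create a business case study about digital transformation"}
]
# Returns input ids already wrapped in the [INST] ... [/INST] format
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# Pass `inputs` to model.generate(...) as in the example above.
```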

Note

This is a LoRA adapter that requires the base model. For a complete standalone model, see: afzalur/case-study-mistral-7b-full
