# Case Study Mistral 7B - LoRA Adapter
This is a LoRA (Low-Rank Adaptation) fine-tuned adapter for the Mistral 7B model, specialized for business case study generation.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit quantized base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.3-bnb-4bit")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "afzalur/case-study-mistral-7b-v1")

# Generate (move inputs to the model's device)
prompt = "Create a business case study about digital transformation"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
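Because the base model is instruction-tuned, prompts generally work better when wrapped in Mistral's `[INST] ... [/INST]` chat template; `tokenizer.apply_chat_template` does this automatically, but a minimal manual sketch (the helper name is ours, not part of any library) looks like this:

```python
# Minimal sketch of Mistral's instruct prompt format. The BOS token <s>
# is normally added by the tokenizer, so only the [INST] markers are
# built here. build_instruct_prompt is a hypothetical helper name.
def build_instruct_prompt(user_message: str) -> str:
    return f"[INST] {user_message} [/INST]"

prompt = build_instruct_prompt(
    "Create a business case study about digital transformation"
)
print(prompt)
```

The resulting string can be passed to `tokenizer(...)` exactly as in the usage example above.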
## Note
This is a LoRA adapter that requires the base model. For a complete standalone model, see: afzalur/case-study-mistral-7b-full
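If you prefer to produce a standalone checkpoint yourself, PEFT can fold the adapter weights into the base model with `merge_and_unload()`. A sketch, assuming you merge into the full-precision instruct base (merging into a 4-bit quantized base is not supported; the output directory name is an arbitrary choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

def merge_adapter(out_dir: str = "case-study-mistral-7b-merged") -> None:
    # Load the full-precision base, attach the adapter, then fold the
    # LoRA deltas into the base weights and save a standalone model.
    base = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.3", torch_dtype="auto"
    )
    model = PeftModel.from_pretrained(base, "afzalur/case-study-mistral-7b-v1")
    merged = model.merge_and_unload()
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.3"
    ).save_pretrained(out_dir)

# merge_adapter()  # downloads ~14 GB of weights; uncomment to run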
## Model tree for afzalur/case-study-mistral-7b-v1

- Base model: mistralai/Mistral-7B-v0.3
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.3