This is a LoRA adapter for granite-vision-dev/granite-4-vision-micro-pretrained.
Note that `trust_remote_code=True` is required when loading the base model.

```python
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load base model
base_model = AutoModelForVision2Seq.from_pretrained(
    "granite-vision-dev/granite-4-vision-micro-pretrained",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(device)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "granite-vision-dev/granite-4-vision-micro-lora")

# Load processor
processor = AutoProcessor.from_pretrained(
    "granite-vision-dev/granite-4-vision-micro-pretrained",
    trust_remote_code=True,
)

# Inference
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "path/to/image.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))

# Merge adapter weights into base model
merged_model = model.merge_and_unload()

# Save merged model
merged_model.save_pretrained("./merged_model")
```
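After merging, the saved checkpoint can be loaded directly with `transformers`, without PEFT. A minimal sketch, assuming the merged weights were saved to `./merged_model` as above; the processor is not saved by the model's `save_pretrained`, so it is loaded from the base repository here:

```python
from transformers import AutoModelForVision2Seq, AutoProcessor
import torch

# Load the merged checkpoint directly; no PeftModel wrapper is needed
model = AutoModelForVision2Seq.from_pretrained(
    "./merged_model",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)

# The processor still comes from the base model repository
processor = AutoProcessor.from_pretrained(
    "granite-vision-dev/granite-4-vision-micro-pretrained",
    trust_remote_code=True,
)
```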
The adapter uses the following LoRA configuration:

| Parameter | Value |
|---|---|
| r | 192 |
| lora_alpha | 192 |
| lora_dropout | 0.05 |
| bias | none |
| peft_type | LORA |
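For reference, these values map onto a `peft.LoraConfig` as sketched below. The `target_modules` line is an assumption for illustration only; the adapter's `adapter_config.json` is the authoritative source:

```python
from peft import LoraConfig

# Reconstruction of the configuration from the table above
lora_config = LoraConfig(
    r=192,              # rank of the LoRA update matrices
    lora_alpha=192,     # scaling factor (alpha / r = 1.0 here)
    lora_dropout=0.05,  # dropout applied to LoRA layers during training
    bias="none",        # bias parameters are not trained
    # target_modules=["q_proj", "v_proj"],  # hypothetical; not listed in the table
)
```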