
Granite 4 Vision Micro LoRA

This is a LoRA adapter for granite-vision-dev/granite-4-vision-micro-pretrained.

Compatibility

  • ✅ PEFT >= 0.17
  • ✅ Transformers >= 4.57.3
  • ✅ Requires trust_remote_code=True for base model
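The version floors above can be checked programmatically before loading anything; a minimal sketch using only the standard library (the helper name is illustrative, and pre-release tags are ignored):

```python
from importlib.metadata import version

def version_at_least(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, ignoring non-numeric parts."""
    parse = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return parse(installed) >= parse(minimum)

# e.g. version_at_least(version("peft"), "0.17")
print(version_at_least("0.17.1", "0.17"))    # True
print(version_at_least("4.56.2", "4.57.3"))  # False
```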

Usage

from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load base model
base_model = AutoModelForVision2Seq.from_pretrained(
    "granite-vision-dev/granite-4-vision-micro-pretrained",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
).to(device)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "granite-vision-dev/granite-4-vision-micro-lora")

# Load processor
processor = AutoProcessor.from_pretrained(
    "granite-vision-dev/granite-4-vision-micro-pretrained",
    trust_remote_code=True
)

# Inference
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "path/to/image.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(device)

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
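Note that decoding `output[0]` returns the prompt together with the reply, because `generate` echoes the input ids. To keep only the model's reply, slice off the prompt length first (in the snippet above: `output[0][inputs["input_ids"].shape[-1]:]`). A minimal sketch, with plain lists standing in for tensors:

```python
def reply_ids(sequence, prompt_length):
    # generate() returns [prompt tokens..., new tokens...]; drop the prompt.
    return sequence[prompt_length:]

prompt = [101, 7592, 102]            # stand-in prompt ids
generated = prompt + [2023, 2003]    # generate() echoes the prompt first
print(reply_ids(generated, len(prompt)))  # [2023, 2003]
```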

Merge LoRA into Base Model (Optional)

# Merge adapter weights into base model
merged_model = model.merge_and_unload()

# Save merged model weights
merged_model.save_pretrained("./merged_model")

# Save the processor alongside so the merged checkpoint is self-contained
processor.save_pretrained("./merged_model")

LoRA Configuration

Parameter      Value
r              192
lora_alpha     192
lora_dropout   0.05
bias           none
peft_type      LORA
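With r = 192 and lora_alpha = 192, the LoRA scaling factor lora_alpha / r is 1.0, and each adapted linear layer of shape (d_out, d_in) gains r * (d_in + d_out) extra parameters from its two low-rank matrices. A quick back-of-the-envelope sketch (the 4096-dim layer shape is illustrative, not the actual Granite dimension):

```python
def lora_extra_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA adds A (r x d_in) and B (d_out x r) alongside the frozen weight.
    return r * (d_in + d_out)

print(lora_extra_params(4096, 4096, 192))  # 1572864 extra params for one layer
print(192 / 192)                           # scaling factor alpha / r = 1.0
```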

Modules Trained

  • Language model attention layers (q_proj, k_proj, v_proj, o_proj)
  • Vision encoder attention layers
  • Multi-modal projector (fully trained)
  • Image newline (fully trained)

Framework Versions

  • PEFT 0.17.1
  • Transformers >= 4.57.3