---
license: apache-2.0
tags:
- vision
- blip-2
- vqa
- lora
---

# My Fine-Tuned BLIP-2 Model

A BLIP-2 model fine-tuned for visual question answering (VQA) using LoRA adapters.

## Usage

```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor
import torch

model = Blip2ForConditionalGeneration.from_pretrained(
    "Magneto76/lora_blip2",
    torch_dtype=torch.float16,
    device_map="auto"
)
processor = Blip2Processor.from_pretrained("Magneto76/lora_blip2")

def infer(image, question):
    # Cast floating-point inputs (pixel values) to float16 to match the model weights.
    inputs = processor(images=image, text=question, return_tensors="pt").to(
        model.device, torch.float16
    )
    outputs = model.generate(**inputs, max_new_tokens=50)
    return processor.decode(outputs[0], skip_special_tokens=True)
```
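BLIP-2 checkpoints are commonly prompted for VQA as `"Question: ... Answer:"`, so the model completes the answer after the colon. A minimal sketch of that formatting is below; whether this particular checkpoint expects it depends on the prompts used during fine-tuning, so adjust to match your training data.

```python
def format_vqa_prompt(question: str) -> str:
    # Common BLIP-2 VQA prompt template: the model generates the
    # text that follows "Answer:".
    return f"Question: {question} Answer:"

prompt = format_vqa_prompt("What color is the car?")
# → "Question: What color is the car? Answer:"
```

You would then pass the formatted prompt to `infer` along with a PIL image, e.g. `infer(Image.open("car.jpg").convert("RGB"), prompt)`.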