---
license: apache-2.0
tags:
- vision
- blip-2
- vqa
- lora
---
# My Fine-Tuned BLIP-2 Model
A BLIP-2 model fine-tuned for visual question answering (VQA) using LoRA adapters.
## Usage
```python
from transformers import Blip2ForConditionalGeneration, Blip2Processor
import torch

# Load the model in half precision; device_map="auto" lets Accelerate
# place the weights on the available device(s).
model = Blip2ForConditionalGeneration.from_pretrained(
    "Magneto76/lora_blip2",
    torch_dtype=torch.float16,
    device_map="auto",
)
processor = Blip2Processor.from_pretrained("Magneto76/lora_blip2")

def infer(image, question):
    # Preprocess the image/question pair, move tensors to the model's
    # device, and cast floating-point inputs to match the fp16 weights.
    inputs = processor(images=image, text=question, return_tensors="pt").to(
        model.device, torch.float16
    )
    outputs = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(outputs[0], skip_special_tokens=True)
```
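For example, to ask a question about a local image (`example.jpg` is a placeholder path; the `Question: … Answer:` prompt format follows the common BLIP-2 VQA convention):

```python
from PIL import Image

# Placeholder image path; substitute your own file.
image = Image.open("example.jpg").convert("RGB")
print(infer(image, "Question: What is shown in the picture? Answer:"))
```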
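If the repository ships raw LoRA adapter weights (an `adapter_config.json` plus adapter tensors) rather than a merged checkpoint, you would load them with `peft` on top of a base model instead. A minimal sketch, assuming `Salesforce/blip2-opt-2.7b` is the base checkpoint (an assumption, not confirmed by this card):

```python
from peft import PeftModel
from transformers import Blip2ForConditionalGeneration
import torch

# Assumed base model; check the adapter config for the actual base.
base = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Magneto76/lora_blip2")
# Optionally fold the adapters into the base weights for faster inference.
model = model.merge_and_unload()
```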