---
license: apache-2.0
title: 'BLLAMA: ALPACA with BLIP 2'
sdk: gradio
emoji: 🔥
colorFrom: red
colorTo: purple
---

## 🦙🌲🤏 BLLAMA: A BLIP2 + ALPACA-LORA Pipeline

# Setup

1. Git clone this repository
2. ```pip install -r requirements.txt```

# Training

This is just a pipeline combining ALPACA and BLIP-2, without any prior finetuning. You can refer to the training details in the alpaca-lora repo [here](https://github.com/tloen/alpaca-lora) and the BLIP-2 training details on the LAVIS GitHub page [here](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). For the pipeline, I have used the BLIP-2 demo model found on Hugging Face Spaces [here](https://huggingface.co/spaces/Salesforce/BLIP2).
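The hand-off from BLIP-2 to ALPACA is essentially prompt construction: the image caption produced by BLIP-2 becomes the context block of an instruction prompt. A minimal sketch, assuming the standard alpaca-lora instruction/input template (the exact template used by `generate.py` may differ):

```python
def build_alpaca_prompt(instruction: str, caption: str) -> str:
    """Wrap a BLIP-2 image caption into the standard alpaca-lora
    instruction/input template, so the caption acts as context
    for the language model."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{caption}\n\n"
        "### Response:\n"
    )

# The caption below is a stand-in for real BLIP-2 output.
prompt = build_alpaca_prompt(
    "Describe what is happening in this image.",
    "a dog catching a frisbee in a park",
)
print(prompt)
```

Anything generated after `### Response:` is the pipeline's answer about the image.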
# Inference

1. cd to the cloned repo
2. Run ```python3 generate.py```

# Sample of inference



# TODO

1. Reduce VRAM usage: it hits around 14GB of VRAM on the 7B weights when combined with BLIP-2
2. Add the ability for users to customise their prompts to BLIP-2 in Gradio. This can help fine-tune the context passed from BLIP-2 to ALPACA, improving the accuracy of generated outputs
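The ~14GB figure is roughly consistent with a back-of-the-envelope estimate. The helper below is illustrative only; it assumes the 7B weights are loaded in 8-bit (an option alpaca-lora supports), with the remainder going to BLIP-2 weights and activations:

```python
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory taken by model weights alone, in GiB."""
    return n_params * bytes_per_param / 2**30

# LLaMA-7B with int8 (1 byte/param) weights: roughly half the
# observed ~14 GB, leaving the rest for BLIP-2 and activations.
print(f"LLaMA-7B int8 weights: ~{weight_gib(7e9, 1):.1f} GiB")
```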

## Acknowledgements

Once again, I would like to credit the Salesforce team for creating BLIP-2, as well as tloen, the original creator of alpaca-lora. I would also like to credit Meta, the original creators of LLAMA, as well as the people behind the HuggingFace implementation of ALPACA.