Update README.md

Original model: https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)

## How to run

Since this is a new vision model, I'll add special instructions this one time.

If you've built llama.cpp locally, you'll want to run:

```
./llama-qwen2vl-cli -m /models/Qwen2-VL-7B-Instruct-Q4_0.gguf --mmproj /models/mmproj-Qwen2-VL-7B-Instruct-f32.gguf -p 'Describe this image.' --image '/models/test_image.jpg'
```

And the model will output the answer. Very simple stuff, similar to other llava models, just make sure you use the correct binary!
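If you haven't built llama.cpp yet, a typical CMake build looks roughly like this (a sketch, assuming a default Release build; the paths are illustrative, not from this repo):

```shell
# Clone llama.cpp and build its tools, including llama-qwen2vl-cli
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# Built binaries typically land under build/bin/
```

Adjust the binary path in the command above to wherever your build placed it.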
## Prompt format