Upload folder using huggingface_hub
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - uncensored
 ---
 
-<!-- README Version: v1.
+<!-- README Version: v1.1 -->
 
 # Qwen3-VL-2B-Instruct (Abliterated)
 
@@ -38,15 +38,15 @@ This model can perform tasks such as:
 
 ```
 qwen3-vl-2b-instruct/
-├── qwen3-vl-2b-instruct-abliterated.gguf
-└── qwen3-vl-2b-instruct-abliterated.safetensors (
+├── qwen3-vl-2b-instruct-abliterated-f16.gguf (3.3 GB)
+└── qwen3-vl-2b-instruct-abliterated.safetensors (4.0 GB)
 ```
 
-**Total Repository Size**: ~7.
+**Total Repository Size**: ~7.3 GB
 
 ### File Descriptions
 
-- **qwen3-vl-2b-instruct-abliterated.gguf** -
+- **qwen3-vl-2b-instruct-abliterated-f16.gguf** - FP16 quantized GGUF format for efficient inference with llama.cpp and compatible frameworks
 - **qwen3-vl-2b-instruct-abliterated.safetensors** - Full-precision SafeTensors format for use with transformers library
 
 ## Hardware Requirements
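A minimal sketch of loading the SafeTensors file named above with the transformers auto classes (this is not part of the diffed README; the local path below is illustrative, pointing at wherever the repository was downloaded):

```python
# Minimal sketch: load the full-precision SafeTensors weights with transformers.
# "./qwen3-vl-2b-instruct" is a hypothetical local download path.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "./qwen3-vl-2b-instruct"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(model_path)
```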
@@ -103,7 +103,7 @@ print(response)
 ```bash
 # Run with llama.cpp
 ./llama.cpp \
-  --model "E:\huggingface\qwen3-vl-2b-instruct\qwen3-vl-2b-instruct-abliterated.gguf" \
+  --model "E:\huggingface\qwen3-vl-2b-instruct\qwen3-vl-2b-instruct-abliterated-f16.gguf" \
   --image example.jpg \
   --prompt "What do you see in this image?" \
   --n-predict 256 \
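The renamed GGUF file can also be driven from Python through the third-party llama-cpp-python bindings. This is a sketch under stated assumptions, not part of the README being diffed; note that image input additionally requires a multimodal projector file, which the repository listing above does not mention:

```python
# Sketch using llama-cpp-python (pip install llama-cpp-python).
# Text-only prompt shown; image input needs a separate mmproj file.
from llama_cpp import Llama

llm = Llama(model_path="qwen3-vl-2b-instruct-abliterated-f16.gguf")
out = llm("Describe a typical street scene.", max_tokens=256)
print(out["choices"][0]["text"])
```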
@@ -192,7 +192,7 @@ print(generated_text)
 
 1. **Use Quantized GGUF for Speed**:
    - GGUF format provides faster inference
-   - Lower memory usage (3.
+   - Lower memory usage (3.3 GB vs 4.0 GB)
    - Minimal quality loss for most tasks
 
 2. **GPU Acceleration**:
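A sketch combining the two tips in this hunk, assuming the same llama-cpp-python bindings as above, compiled with CUDA or Metal support:

```python
# Sketch: pair the smaller FP16 GGUF with full GPU offload.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-vl-2b-instruct-abliterated-f16.gguf",
    n_gpu_layers=-1,  # offload all layers; 0 keeps inference on the CPU
)
```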
@@ -306,6 +306,6 @@ This abliterated model has had safety mechanisms removed and may generate conten
 
 ---
 
-**Model Version**: v1.
-**Last Updated**: 2025-10-
-**Format Versions**: SafeTensors (
+**Model Version**: v1.1
+**Last Updated**: 2025-10-30
+**Format Versions**: SafeTensors (4.0 GB), GGUF FP16 (3.3 GB)