# Qwen2-VL 2B Instruct GGUF

Qwen2-VL-2B-Instruct by Alibaba/Qwen, quantized to GGUF format for llama.cpp and packaged for use with the RunAnywhere SDK.
## Files

- `Qwen2-VL-2B-Instruct-Q4_K_M.gguf`: language model (~940 MB)
- `mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf`: vision encoder (~676 MB)
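Outside the RunAnywhere SDK, the same two files can be sanity-checked directly with llama.cpp's multimodal CLI. A minimal sketch, assuming a recent llama.cpp build (the binary is named `llama-mtmd-cli` in current releases) and a local test image:

```shell
# Load the quantized language model together with the vision projector
# and ask for a description of a local image. Paths are placeholders.
./llama-mtmd-cli \
  -m Qwen2-VL-2B-Instruct-Q4_K_M.gguf \
  --mmproj mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

Both files must be passed together: the mmproj file encodes the image into embeddings that the Q4_K_M language model consumes.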
## Usage with RunAnywhere SDK
### Swift (iOS / macOS)

```swift
import RunAnywhere

RunAnywhere.registerModel(
    id: "qwen2-vl-2b-instruct-q4_k_m",
    name: "Qwen2-VL 2B Instruct Q4_K_M",
    repo: "runanywhere/Qwen2-VL-2B-Instruct-GGUF",
    files: ["Qwen2-VL-2B-Instruct-Q4_K_M.gguf", "mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf"],
    framework: .llamaCpp,
    modality: .multimodal,
    memoryRequirement: 1_800_000_000
)

// VLM inference with image
let result = try await RunAnywhere.generateVLM(
    prompt: "Describe this image in detail.",
    image: imageData,
    modelId: "qwen2-vl-2b-instruct-q4_k_m"
)
```
### Kotlin (Android / JVM)

```kotlin
import com.runanywhere.sdk.RunAnywhere
import com.runanywhere.sdk.models.*

RunAnywhere.registerModel(
    id = "qwen2-vl-2b-instruct-q4_k_m",
    name = "Qwen2-VL 2B Instruct Q4_K_M",
    repo = "runanywhere/Qwen2-VL-2B-Instruct-GGUF",
    files = listOf("Qwen2-VL-2B-Instruct-Q4_K_M.gguf", "mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf"),
    framework = InferenceFramework.LLAMA_CPP,
    modality = ModelCategory.MULTIMODAL,
    memoryRequirement = 1_800_000_000L
)

val result = RunAnywhere.generateVLM(
    prompt = "Describe this image in detail.",
    image = imageData,
    modelId = "qwen2-vl-2b-instruct-q4_k_m"
)
```
### Web (TypeScript)

```typescript
import { RunAnywhere, LLMFramework, ModelCategory } from '@anthropic/runanywhere-web';

RunAnywhere.registerModels([{
  id: 'qwen2-vl-2b-instruct-q4_k_m',
  name: 'Qwen2-VL 2B Instruct Q4_K_M',
  repo: 'runanywhere/Qwen2-VL-2B-Instruct-GGUF',
  files: ['Qwen2-VL-2B-Instruct-Q4_K_M.gguf', 'mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf'],
  framework: LLMFramework.LlamaCpp,
  modality: ModelCategory.Multimodal,
  memoryRequirement: 1_800_000_000,
}]);

await RunAnywhere.downloadModel('qwen2-vl-2b-instruct-q4_k_m');
await RunAnywhere.loadModel('qwen2-vl-2b-instruct-q4_k_m');

const result = await RunAnywhere.generateVLM(
  'Describe this image in detail.',
  imageData,
  'qwen2-vl-2b-instruct-q4_k_m'
);
```
### React Native (TypeScript)

```typescript
import { RunAnywhere } from 'runanywhere-react-native';

RunAnywhere.registerModel({
  id: 'qwen2-vl-2b-instruct-q4_k_m',
  name: 'Qwen2-VL 2B Instruct Q4_K_M',
  repo: 'runanywhere/Qwen2-VL-2B-Instruct-GGUF',
  files: ['Qwen2-VL-2B-Instruct-Q4_K_M.gguf', 'mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf'],
  framework: 'llamaCpp',
  modality: 'multimodal',
  memoryRequirement: 1_800_000_000,
});

const result = await RunAnywhere.generateVLM('Describe this image.', imageData, 'qwen2-vl-2b-instruct-q4_k_m');
```
### Flutter (Dart)

```dart
import 'package:runanywhere_flutter/runanywhere_flutter.dart';

RunAnywhere.registerModel(
  id: 'qwen2-vl-2b-instruct-q4_k_m',
  name: 'Qwen2-VL 2B Instruct Q4_K_M',
  repo: 'runanywhere/Qwen2-VL-2B-Instruct-GGUF',
  files: ['Qwen2-VL-2B-Instruct-Q4_K_M.gguf', 'mmproj-Qwen2-VL-2B-Instruct-Q8_0.gguf'],
  framework: InferenceFramework.llamaCpp,
  modality: ModelCategory.multimodal,
  memoryRequirement: 1800000000,
);

final result = await RunAnywhere.generateVLM('Describe this image.', imageData, 'qwen2-vl-2b-instruct-q4_k_m');
```
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen2-VL-2B-Instruct |
| Parameters | 2B |
| Quantization | Q4_K_M (~940 MB) |
| Vision Encoder | Q8_0 mmproj (~676 MB) |
| Runtime | llama.cpp (with multimodal/mtmd) |
| Capabilities | Image understanding, OCR, visual QA |
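The OCR and visual QA capabilities listed above use the same `generateVLM` call shown in the Usage section; only the prompt changes. A hedged Swift sketch, assuming the model was registered as in the Swift example (the prompt wording is illustrative, not a special OCR mode):

```swift
import RunAnywhere

// Reuse the registered model for text extraction: the same VLM entry
// point handles OCR-style prompts against a document photo.
let ocrResult = try await RunAnywhere.generateVLM(
    prompt: "Transcribe all text visible in this image.",
    image: documentImageData,
    modelId: "qwen2-vl-2b-instruct-q4_k_m"
)
```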
## Attribution
Original model by Qwen/Alibaba. GGUF conversion by ggml-org.