---
language:
- en
license: apache-2.0
library_name: llama-cpp
tags:
- gguf
- vision
- multimodal
- forkjoin-ai
base_model: deepseek-ai/DeepSeek-OCR-2
pipeline_tag: image-text-to-text
---
# DeepSeek-OCR-2
Forkjoin.ai conversion of [deepseek-ai/DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) to GGUF format for edge deployment.
## Model Details
- **Source Model**: [deepseek-ai/DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2)
- **Format**: GGUF
- **Converted by**: [Forkjoin.ai](https://forkjoin.ai)
## Usage
### With llama.cpp
```bash
./llama-cli -m deepseek-ocr-2-gguf.gguf -p "Your prompt here" -n 256
```
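Note that DeepSeek-OCR-2 is a vision model, and plain `llama-cli` handles text-only prompts. For image input, llama.cpp's multimodal tool (`llama-mtmd-cli`) typically takes the language model plus a separate `mmproj` projector file. A minimal sketch, assuming a projector file is shipped alongside this conversion (the `mmproj` filename below is illustrative):

```shell
# Sketch: image-to-text inference with llama.cpp's multimodal CLI.
# The mmproj filename is an assumption; use the projector file
# actually included in this repository.
./llama-mtmd-cli \
  -m deepseek-ocr-2-gguf.gguf \
  --mmproj mmproj-deepseek-ocr-2.gguf \
  --image document.png \
  -p "Transcribe the text in this image." \
  -n 256
```

Check the llama.cpp documentation for the multimodal flags supported by your build.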
### With Ollama
Create a `Modelfile`:
```
FROM ./deepseek-ocr-2-gguf.gguf
```
```bash
ollama create deepseek-ocr-2-gguf -f Modelfile
ollama run deepseek-ocr-2-gguf
```
## About Forkjoin.ai
[Forkjoin.ai](https://forkjoin.ai) runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.
All conversions are optimized for edge deployment within browser and mobile memory constraints.
## License
Apache 2.0 (follows upstream model license)