---
language:
- en
license: apache-2.0
library_name: llama-cpp
tags:
- gguf
- vision
- multimodal
- forkjoin-ai
base_model: deepseek-ai/DeepSeek-OCR-2
pipeline_tag: image-text-to-text
---
# DeepSeek-OCR-2
Forkjoin.ai conversion of [deepseek-ai/DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) to GGUF format for edge deployment.

## Model Details

- **Source Model**: [deepseek-ai/DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2)
- **Format**: GGUF
- **Converted by**: [Forkjoin.ai](https://forkjoin.ai)

## Usage

### With llama.cpp

```bash
./llama-cli -m deepseek-ocr-2-gguf.gguf -p "Your prompt here" -n 256
```
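Before pointing `llama-cli` at the file, it can be worth sanity-checking that the download is a valid GGUF file and not truncated or corrupted. A minimal sketch (the 4-byte `GGUF` magic and the little-endian `uint32` version field that follows it are defined by the GGUF specification; the demo file below is synthetic, standing in for the real download):

```python
import os
import struct
import tempfile

GGUF_MAGIC = b"GGUF"

def check_gguf_header(path):
    """Return the GGUF version if the file starts with a valid header, else raise."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # the version is a little-endian uint32 immediately after the magic
        (version,) = struct.unpack("<I", f.read(4))
    return version

# demo with a synthetic header; a real check would pass the downloaded .gguf path
with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as tmp:
    tmp.write(GGUF_MAGIC + struct.pack("<I", 3))
    demo_path = tmp.name

print(check_gguf_header(demo_path))  # → 3
os.unlink(demo_path)
```

A file that fails this check was most likely an interrupted download or an HTML error page saved under a `.gguf` name.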
### With Ollama

Create a `Modelfile`:

```
FROM ./deepseek-ocr-2-gguf.gguf
```

```bash
ollama create deepseek-ocr-2-gguf -f Modelfile
ollama run deepseek-ocr-2-gguf
```
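The `Modelfile` can also pin inference settings so `ollama run` behaves consistently. A hedged sketch (the `PARAMETER` directives are standard Ollama Modelfile syntax; the values here are illustrative, not tuned for this model):

```
FROM ./deepseek-ocr-2-gguf.gguf
# low temperature suits OCR-style extraction; adjust to taste
PARAMETER temperature 0.2
# context window in tokens
PARAMETER num_ctx 4096
```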
## About Forkjoin.ai

[Forkjoin.ai](https://forkjoin.ai) runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.
## License

Apache 2.0 (follows upstream model license)