Qwen2.5 7B Instruct

A Forkjoin.ai conversion of Qwen/Qwen2.5-7B-Instruct to GGUF format for edge deployment.

Model Details

Usage

With llama.cpp

./llama-cli -m qwen2.5-7b-instruct-gguf.gguf -p "Your prompt here" -n 256
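If you are driving llama.cpp from a script, the same invocation can be assembled programmatically. A minimal sketch, using only the `-m`, `-p`, and `-n` flags shown in the command above (paths and the prompt are placeholders):

```python
import subprocess

def build_llama_cli_cmd(model_path, prompt, n_predict=256):
    """Assemble the llama-cli invocation shown above as an argument list."""
    return [
        "./llama-cli",
        "-m", model_path,
        "-p", prompt,
        "-n", str(n_predict),
    ]

cmd = build_llama_cli_cmd("qwen2.5-7b-instruct-gguf.gguf", "Your prompt here")
# Uncomment to actually run inference (requires a built llama-cli binary
# and the GGUF file in the working directory):
# subprocess.run(cmd, check=True)
```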

With Ollama

Create a Modelfile pointing at the GGUF file:

FROM ./qwen2.5-7b-instruct-gguf.gguf

Then build and run the model:

ollama create qwen2.5-7b-instruct-gguf -f Modelfile
ollama run qwen2.5-7b-instruct-gguf
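Once the model is created, it can also be queried over Ollama's local HTTP API rather than the CLI. A minimal sketch of the request payload for the `/api/generate` endpoint (assumes `ollama serve` is running on its default port, 11434):

```python
import json

# Request body for Ollama's /api/generate endpoint, targeting the model
# name created above. "stream": False requests a single JSON response
# instead of a stream of partial chunks.
payload = {
    "model": "qwen2.5-7b-instruct-gguf",
    "prompt": "Your prompt here",
    "stream": False,
}
body = json.dumps(payload)

# To actually send the request once the Ollama server is up:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["response"])
```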

About Forkjoin.ai

Forkjoin.ai runs AI models at the edge -- in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural-language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.
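The memory-constraint claim above can be made concrete with a back-of-the-envelope size estimate. A rough sketch, assuming an effective rate of about 4.5 bits per weight for a 4-bit K-quant (an assumed figure; actual GGUF sizes also include embeddings and metadata overhead):

```python
# Rough file-size estimate for a 4-bit quantization of an 8B-parameter model.
params = 8e9
bits_per_weight = 4.5  # assumed effective rate for a 4-bit K-quant
size_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"~{size_gb:.1f} GB")  # -> ~4.5 GB
```

At roughly 4-5 GB, the quantized model fits within the memory budget of many modern phones and laptops, which is what makes on-device deployment feasible.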

License

Apache 2.0 (follows upstream model license)

Format: GGUF
Model size: 8B params
Architecture: qwen2
Quantization: 4-bit


Base model: Qwen/Qwen2.5-7B (this model is a quantized derivative)