# Llama 3.2 1B Instruct
Forkjoin.ai conversion of meta-llama/Llama-3.2-1B-Instruct to GGUF format for edge deployment.
## Model Details
- Source Model: meta-llama/Llama-3.2-1B-Instruct
- Format: GGUF
- Converted by: Forkjoin.ai
## Usage

### With llama.cpp

```bash
./llama-cli -m llama-3.2-1b-instruct-gguf.gguf -p "Your prompt here" -n 256
```
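A raw `-p` prompt skips the chat formatting the instruct model was tuned on. A minimal sketch of wrapping a message in Llama 3's chat template before passing it to `llama-cli` (plain Python; the special tokens are assumed from the upstream Llama 3.2 model card, and the helper name is hypothetical):

```python
# Sketch: wrap a user message in the Llama 3 instruct chat template.
# Special tokens assumed from the upstream meta-llama/Llama-3.2-1B-Instruct card.
def format_llama3_prompt(user_message: str,
                         system_prompt: str = "You are a helpful assistant.") -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        # Ends at the assistant header so generation continues as the reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt("Your prompt here")
```

Recent llama.cpp builds can also apply the template automatically in conversation mode, in which case a plain prompt is fine.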
### With Ollama

Create a Modelfile:

```
FROM ./llama-3.2-1b-instruct-gguf.gguf
```

Then build and run the model:

```bash
ollama create llama-3.2-1b-instruct-gguf -f Modelfile
ollama run llama-3.2-1b-instruct-gguf
```
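Besides the interactive CLI, a running Ollama server exposes a local REST API (default port 11434). A hedged sketch using only the Python standard library, assuming the model name created above and Ollama's documented `/api/generate` endpoint:

```python
import json
import urllib.request

def build_generate_request(prompt: str,
                           model: str = "llama-3.2-1b-instruct-gguf",
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running Ollama server):
# with urllib.request.urlopen(build_generate_request("Your prompt here")) as resp:
#     print(json.loads(resp.read())["response"])
```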
## About Forkjoin.ai
Forkjoin.ai runs AI models at the edge -- in-browser, on-device, zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.
All conversions are optimized for edge deployment within browser and mobile memory constraints.
## License
Apache 2.0 for the conversion tooling and packaging; the model weights themselves remain subject to the upstream Llama 3.2 Community License.