Instructions for using QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Transformers:
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF", dtype="auto")
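Note that loading a GGUF repository through Transformers generally requires pointing at a specific quantized file via the gguf_file argument (which needs the gguf package installed). A minimal sketch, assuming the Q2_K file name taken from the llama-cpp-python example below:
# pip install transformers gguf
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF"
# Assumption: file name copied from the llama-cpp-python example below
gguf_file = "Llama-3.2-3B-Agent007-Coder.Q2_K.gguf"

# The GGUF weights are dequantized to torch tensors on load
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)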
- llama-cpp-python
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF",
    filename="Llama-3.2-3B-Agent007-Coder.Q2_K.gguf",
)
# Example chat request (the original snippet had no input example defined)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ]
)
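create_chat_completion returns a dict in the OpenAI chat-completions shape, so the reply can be read out of the first choice. A minimal sketch:
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
# The generated reply sits in the first choice's message content, OpenAI-style
print(response["choices"][0]["message"]["content"])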
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
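Whichever install path you choose, the running llama-server exposes an OpenAI-compatible endpoint, so any OpenAI client can talk to it. A minimal sketch using the openai Python package, assuming the server's default address of http://localhost:8080:
# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default; the API key is unused but required by the client
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)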
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Ollama:
ollama run hf.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
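Ollama also exposes a local HTTP API (on port 11434 by default), so the pulled model can be driven programmatically. A minimal sketch using requests:
# pip install requests
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M",
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["message"]["content"])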
- Unsloth Studio
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF to start chatting
- Pi
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Llama-3.2-3B-Agent007-Coder-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF
This is a quantized version of EpistemeAI/Llama-3.2-3B-Agent007-Coder, created using llama.cpp.
Original Model Card
Llama Agent 3B coder
Fine-tuned on an agent dataset, along with the Code Alpaca 20K and Magpie Ultra 0.1 datasets.
Original Model card
Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
Model Developer: Meta
Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|---|---|---|
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
Llama 3.2 Model Family: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date: Sept 25, 2024
Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
Feedback: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go here.
Intended Use
Intended Use Cases: Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks.
Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
How to use
This repository contains two versions of Llama-3.2-3B-Instruct, for use with transformers and with the original llama codebase.
Use with transformers
From transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via pip install --upgrade transformers.
import torch
from transformers import pipeline
model_id = "EpistemeAI/Llama-3.2-3B-Agent007-Coder"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
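The pipeline call above is the simplest route; the Auto classes mentioned earlier can also be used with generate() directly. A minimal sketch along those lines (not from the original card), applying the tokenizer's chat template by hand:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Llama-3.2-3B-Agent007-Coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
# Render the chat with the model's template and tokenize in one step
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))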
Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generation, quantization, and more at huggingface-llama-recipes.
Uploaded model
- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model: unsloth/llama-3.2-3b-instruct-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
