Granite-3.3-8B-Instruct (GGUF / Quantized)
Granite-3.3-8B-Instruct is an instruction-tuned large language model designed for strong conversational ability, instruction compliance, and efficient inference. This repository contains quantized GGUF formats of the model, allowing for practical local use even on limited hardware.
Quantized variants significantly lower memory requirements while preserving generation quality, making Granite-3.3-8B-Instruct suitable for research, experimentation, and on-device deployments.
Model Overview
- Model Name: Granite-3.3-8B-Instruct
- Base Model: ibm-granite/granite-3.3-8b-instruct
- Architecture: Decoder-only Transformer
- Parameter Count: 8 Billion
- Supported Context Length: 128K tokens
- Modalities: Text
- Developer: IBM Granite
- License: Apache 2.0
Quantization Formats
Q4_K_M
- Approx. 71% size reduction (4.60 GB)
- Designed for CPU-only inference
- Good balance of speed vs. quality
- Ideal for machines with limited VRAM
Q5_K_M
- Approx. 66% size reduction (5.40 GB)
- Higher numeric precision than Q4
- Better stability on reasoning tasks
- Stronger general output consistency
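As a rough sanity check, a quantized GGUF's file size can be estimated from the parameter count and the format's average bits per weight. The bits-per-weight figures below (~4.8 for Q4_K_M, ~5.7 for Q5_K_M) are assumed ballpark averages, not official values, so the estimates land near, but not exactly on, the published file sizes:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB from the parameter count
    and the quantization format's average bits per weight."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed per-format averages; actual values vary slightly per tensor.
print(f"Q4_K_M: ~{gguf_size_gb(8e9, 4.8):.1f} GB")
print(f"Q5_K_M: ~{gguf_size_gb(8e9, 5.7):.1f} GB")
```

This is only a planning heuristic for choosing a variant that fits your RAM; the real file sizes are listed above.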
Training Background (Original Model)
Granite-3.3-8B-Instruct is trained with an emphasis on instruction comprehension and versatile generalization across diverse tasks.
Pretraining
- Large pretraining corpus covering diverse domains
- Autoregressive language modeling training regime
- Focus on robust language representation
Instruction Tuning
- Refined using instruction datasets to improve user input compliance
- Promotes clearer answers and more predictable outputs
- Enhanced for multi-step reasoning and conversational flow
Key Capabilities
Instruction Compliance
Handles a wide range of user directives and produces tailored responses.
Conversational Fluency
Generates natural dialogue with contextual awareness.
Reasoning and Explanation
Performs well on tasks requiring multi-step logic and analysis.
Efficient Local Inference
Quantized variants enable practical usage without cloud dependencies.
Flexible Text Generation
Suitable for summarization, Q&A, and creative language tasks.
Usage Example
Using llama.cpp
./llama-cli \
-m granite-3.3-8b-instruct_Q4_K_M.gguf \
-p "Describe what makes efficient LLM inference challenging."
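The same GGUF file can also be driven from Python through the llama-cpp-python bindings. The sketch below assumes the Q4_K_M file is in the working directory, and the prompt formatter hard-codes Granite chat-template tokens (`<|start_of_role|>`, `<|end_of_role|>`, `<|end_of_text|>`) rather than reading them from the GGUF metadata, so verify them against the model's own `chat_template` before relying on it:

```python
def format_granite_prompt(user_message: str) -> str:
    """Wrap a single user turn in Granite-style chat-template tokens.
    Token strings are assumed from the Granite 3.x template; verify
    against the chat_template stored in the GGUF metadata."""
    return (
        f"<|start_of_role|>user<|end_of_role|>{user_message}<|end_of_text|>\n"
        f"<|start_of_role|>assistant<|end_of_role|>"
    )

def run_example(model_path: str = "granite-3.3-8b-instruct_Q4_K_M.gguf") -> str:
    """Load the quantized model and complete one prompt.
    Requires `pip install llama-cpp-python` and the GGUF file on disk."""
    from llama_cpp import Llama

    llm = Llama(
        model_path=model_path,
        n_ctx=4096,   # context window to allocate
        n_threads=8,  # tune to your CPU core count
    )
    out = llm(
        format_granite_prompt(
            "Describe what makes efficient LLM inference challenging."
        ),
        max_tokens=256,
        stop=["<|end_of_text|>"],
    )
    return out["choices"][0]["text"]
```

Calling `run_example()` returns the generated completion as a string; for multi-turn chat, `Llama.create_chat_completion` applies the model's bundled template automatically.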
Recommended Applications
Local Assistants
Build offline chat tools without relying on remote servers.
Instruction-Following Systems
Create task automation helpers, interactive agents, and Q&A systems.
Research and Prototyping
Evaluate model behavior and prompting strategies.
Data Privacy Projects
Run generation fully under your own control without external APIs.
Acknowledgments
This repository is based on the original Granite-3.3-8B-Instruct model released by IBM Granite.
Thanks to:
- The IBM Granite team for contributing an open instruction-tuned model
- The llama.cpp community for enabling efficient GGUF inference
Contact
For questions, feedback, or support, please reach out at support@sandlogic.com or visit https://www.sandlogic.com/
Model tree for SandLogicTechnologies/Granite-3.3-8B-Instruct-GGUF
- Base model: ibm-granite/granite-3.3-8b-base