---
base_model:
- inference-net/Schematron-3B
pipeline_tag: text-generation
tags:
- open4bits
license: llama3.2
---

# Open4bits / Schematron-3B-GGUF

This repository provides the **Schematron-3B model converted to GGUF format**, published by Open4bits to enable efficient local inference with reduced memory usage and broad CPU compatibility.

The underlying base model is **meta-llama/Llama-3.2-3B-Instruct**, fine-tuned by Inference-Net. This repository contains a quantized GGUF conversion, produced by Open4bits, of the fine-tuned model weights. The model is designed for instruction-based text generation and is suitable for resource-constrained and local deployments.

---

## Model Overview

Schematron-3B is an instruction-tuned language model built on the **Llama 3.2 3B architecture**. After fine-tuning by Inference-Net for improved instruction following and generation quality, the model was quantized and released in GGUF format for efficient, CPU-friendly inference.

---

## Model Details

* **Base Model:** meta-llama/Llama-3.2-3B-Instruct
* **Fine-Tuned By:** Inference-Net
* **Parameters:** ~3 billion
* **Format:** GGUF (quantized)
* **Task:** Instruction-based text generation
* **Weight tying:** Preserved
* **Compatibility:** GGUF-compatible inference engines (e.g., llama.cpp) and CPU environments

This quantized release is designed to balance performance and resource efficiency while maintaining strong instruction-following capabilities.

---

## Intended Use

This model is intended for:

* Instruction-guided text generation
* Local and CPU-based inference workflows
* Research, prototyping, and experimentation
* Self-hosted or offline AI systems

---

## Limitations

* Reduced generation quality compared to larger or full-precision variants
* Performance depends on prompt design and inference parameters
* Not fine-tuned for highly specialized or domain-specific tasks

---

## License

This model is distributed under the **original Llama 3.2 licensing terms** as defined by Meta AI. Users must comply with the licensing conditions of both the base model and the fine-tuning provider.

---

## Support

If you find this model valuable, please consider supporting the project. Your support helps Open4bits continue releasing and maintaining high-quality quantized models for the community.
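
---

## Example Usage

As a quick-start illustration, here is a minimal sketch of running the GGUF weights locally with the `llama-cpp-python` bindings (any GGUF-compatible engine such as llama.cpp works equally well). The `.gguf` filename below is a placeholder rather than the actual file name in this repository, and the context size, thread count, and sampling parameters are untuned assumptions to adjust for your hardware:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the quantized model. The filename below is a placeholder:
# substitute the actual .gguf file shipped in this repository.
llm = Llama(
    model_path="schematron-3b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,    # context window; lower this to reduce RAM usage
    n_threads=8,   # CPU threads used for inference
)

# Llama 3.2 Instruct models expect a chat template, so the chat
# completion API is the most reliable way to prompt them.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GGUF in one sentence."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

For a pure command-line workflow, llama.cpp's `llama-cli` tool can load the same `.gguf` file directly, with no Python environment required.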