# Llama-3.1-8B-Instruct-STO-Master-GGUF

This repository contains GGUF quantizations of Llama-3.1-8B-Instruct-STO-Master (Model E), a high-intelligence fine-tune designed to maximize the reasoning capabilities of the 8B-parameter Llama architecture.
## Quick Links
- Full Model Info & Benchmarks: Original Model Card
- Synthetic Data Methodology: LLMResearch.net - Synthetic Data
## Run with Ollama

This model is optimized for local execution. You can find it on Ollama here: Ollama - Llama-3.1-8B-Instruct-STO-Master
To run it immediately, use the following command:
```shell
ollama run aiasistentworld/Llama-3.1-8B-Instruct-STO-Master
```
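Beyond the interactive CLI, a running Ollama server also exposes a local REST API (by default at `http://localhost:11434`), which lets you call the model from your own scripts. A minimal Python sketch using only the standard library, assuming Ollama is running and the model has already been pulled:

```python
import json
import urllib.request

MODEL = "aiasistentworld/Llama-3.1-8B-Instruct-STO-Master"


def build_payload(prompt: str) -> dict:
    """Build the request body for Ollama's /api/generate endpoint."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object instead of a token stream
    }


def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a generation request to a locally running Ollama server."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server):
# print(generate("Explain the Monty Hall problem in two sentences."))
```

The `stream: False` flag trades incremental output for a simpler single-response call; drop it if you want token-by-token streaming, in which case the endpoint returns one JSON object per line.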
## Why this model?
- High IQ Tuning: In the author's internal tests, the model scored 20-30 IQ points higher than the base Llama 3.1 8B Instruct.
- STO Methodology: Trained using Specialized Task Optimization, focusing on logical "proofs" and deep understanding rather than simple memorization.
- Efficiency: Achieved these results using only 800,000 high-tier synthetic training tokens, suggesting that data quality matters more than quantity.
## Credits
- Author: AlexH
- Organization: LLMResearch.net
For detailed benchmark results (MMLU, ARC, Hellaswag) and the full research history, please refer to the Main Model Repository.
## License
This model is subject to the Llama 3.1 Community License Agreement.
## Model tree for AiAsistent/Llama-3.1-8B-Instruct-STO-Master-GGUF

- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned from: meta-llama/Llama-3.1-8B-Instruct