Klear-Reasoner-8B-f32-GGUF

Klear-Reasoner-8B is an 8-billion-parameter language model built on Qwen3-8B-Base and fine-tuned for long chain-of-thought reasoning in math and coding tasks. It combines quality-centric long-CoT supervised fine-tuning with Gradient-Preserving Clipping Policy Optimization (GPPO), a novel reinforcement learning method that retains gradients from clipped tokens instead of discarding them, improving learning and exploration efficiency. The model achieves state-of-the-art results on challenging benchmarks such as AIME 2024/2025 and LiveCodeBench, surpassing many community models that use larger inference budgets. It supports extended inference lengths up to 64K tokens, enabling deeper and more complex reasoning, and is designed for careful deliberation during problem solving in both mathematical reasoning and code generation.
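The gradient-preserving idea behind GPPO can be illustrated with a small sketch. This is not the authors' implementation, only an illustration of the contrast described above: standard PPO clipping zeroes the gradient for tokens whose probability ratio falls outside the clip range, while a gradient-preserving variant keeps the learning signal (straight-through style). The clip range `EPS = 0.2` is a common PPO default, not a value from the Klear paper.

```python
EPS = 0.2  # clipping range; a common PPO default (assumed, not from the paper)

def ppo_clip_grad(ratio, advantage):
    """Gradient of the standard PPO clipped surrogate w.r.t. the ratio.

    The surrogate is min(ratio * A, clip(ratio, 1-EPS, 1+EPS) * A).
    When the clipped branch is active and the ratio is outside the clip
    range, the token contributes zero gradient.
    """
    clipped = max(1 - EPS, min(1 + EPS, ratio))
    if ratio * advantage <= clipped * advantage:
        return advantage  # unclipped branch is active: d(ratio*A)/d(ratio) = A
    # Clipped branch active: d(clip(ratio))/d(ratio) = 0 outside the range.
    return advantage if 1 - EPS <= ratio <= 1 + EPS else 0.0

def gppo_grad(ratio, advantage):
    """Gradient-preserving variant: the forward value is still clipped,
    but the backward pass lets the gradient flow through clipped tokens."""
    return advantage  # the clip never zeroes the learning signal

# A token pushed far outside the trust region: standard PPO silently
# drops its gradient, the gradient-preserving variant still learns from it.
print(ppo_clip_grad(1.5, 1.0))  # 0.0 -- clipped, no learning signal
print(gppo_grad(1.5, 1.0))      # 1.0 -- signal preserved
```

In the sketch the forward objective is unchanged; only the backward behavior differs, which matches the card's description of maintaining gradients from clipped tokens.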

Execute using Ollama

Run:

ollama run hf.co/prithivMLmods/Klear-Reasoner-8B-f32-GGUF:BF16
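To download a smaller file, append a different quant tag to the same repository path. This assumes the tags follow the quant names in the file list below (an assumption based on Ollama's standard Hugging Face pull scheme):

```shell
# F16 halves the download size relative to F32 with near-identical quality.
ollama run hf.co/prithivMLmods/Klear-Reasoner-8B-f32-GGUF:F16

# Full-precision F32 (largest, ~32.8 GB on disk):
ollama run hf.co/prithivMLmods/Klear-Reasoner-8B-f32-GGUF:F32
```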

Model Files

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Klear-Reasoner-8B.BF16.gguf | BF16 | 16.4 GB |
| Klear-Reasoner-8B.F16.gguf | F16 | 16.4 GB |
| Klear-Reasoner-8B.F32.gguf | F32 | 32.8 GB |
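The file sizes above follow directly from parameter count times bytes per weight. A quick sanity check, assuming roughly 8.2B parameters (an assumed figure consistent with the table, not an official count) and ignoring GGUF metadata overhead:

```python
PARAMS = 8.2e9  # assumed parameter count (the card lists "8B params")

def approx_size_gb(bytes_per_weight):
    """Approximate GGUF file size in GB (1 GB = 1e9 bytes)."""
    return PARAMS * bytes_per_weight / 1e9

print(f"F32:  {approx_size_gb(4):.1f} GB")  # 4 bytes/weight -> ~32.8 GB
print(f"F16:  {approx_size_gb(2):.1f} GB")  # 2 bytes/weight -> ~16.4 GB
print(f"BF16: {approx_size_gb(2):.1f} GB")  # same width as F16
```

The same arithmetic explains why BF16 and F16 files are identical in size: both store 2 bytes per weight and differ only in how those bits are split between exponent and mantissa.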

Quants Usage

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).

Format: GGUF
Model size: 8B params
Architecture: qwen3


Model tree for prithivMLmods/Klear-Reasoner-8B-f32-GGUF

Base model: Qwen/Qwen3-8B-Base (quantized as this model)