inference-optimization's Collections
Mixed Precision Models

updated 1 day ago

Collection of Mixed Precision LLaMA and Qwen Models

  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_5.75-bits
    6B • Updated 1 day ago • 15
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_6.0-bits
    6B • Updated 1 day ago • 18
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_5.0-bits
    5B • Updated 1 day ago • 34
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_5.25-bits
    6B • Updated 1 day ago • 14
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_5.5-bits
    6B • Updated 1 day ago • 15
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_6.25-bits
    6B • Updated 1 day ago • 12
  • inference-optimization/Meta-Llama-3.1-8B-Instruct-NVFP4-FP8-Dynamic_6.5-bits
    7B • Updated 1 day ago • 17