Community Quantization Requests

This space is for requesting oQe builds. These quants use multi-stage calibration and Hessian-based error compensation to maintain logic stability, and are tuned specifically for Apple Silicon performance.

How to Request

Open a new Discussion for requests. To ensure a valid build, please include:

  1. Model Link: URL to the official Hugging Face repository. Note that I only process builds starting from original BF16 or FP16 source weights.
  2. Quantization Format: Specify whether you need a BF16- or FP16-based quant.
    • FP16 is generally recommended for M1/M2 series to take advantage of the AMX units for faster prefill.
    • BF16 is recommended for M3/M4 series, which support it natively.
  3. Preferred Tiers: Specify the target quantization tier (e.g., oQ5e, oQ4e) based on your available Unified Memory; a rough sizing sketch follows this list.
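If you are unsure which tier your machine can hold, a quick estimate is parameters × bits-per-weight ÷ 8, plus some headroom for high-precision tensors and the KV cache. The sketch below is a hypothetical helper; the bits-per-weight values shown for oQ4e/oQ5e are illustrative assumptions, not official figures for these builds.

```python
# Hypothetical sizing helper -- not part of the build pipeline.
# The bits-per-weight figures below are rough assumptions for each tier,
# including scale/bias overhead.
def estimate_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Approximate resident weight size in GB, with ~10% headroom for
    high-precision tensors (embeddings, lm_head) and metadata."""
    return params_billions * bits_per_weight / 8 * overhead

for tier, bpw in (("oQ4e", 4.5), ("oQ5e", 5.5)):
    print(f"70B at {tier}: ~{estimate_gb(70, bpw):.0f} GB of Unified Memory")
```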

Guidelines

  • Hardware: Builds are processed on a 192GB M2 Ultra. Models up to 70B parameters (standard dense) are supported. Anything significantly larger (100B+ or large MoE architectures) will exceed memory limits when loading source weights for calibration.
  • Selection Criteria: Priority is given to base models and official instruct tunes. Experimental merges or low-epoch fine-tunes are generally excluded unless there is significant community interest.
  • The Process: Every oQe build undergoes a 600-sample calibration pass. These are not one-pass streaming quants.
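To make the contrast with one-pass streaming quants concrete, the sketch below shows the rough shape of a two-stage flow: a first pass accumulates per-layer activation statistics over the 600 calibration samples, and a second pass quantizes against those statistics. It uses NumPy stand-ins for the model and data and is an assumed illustration, not the actual oQe pipeline.

```python
import numpy as np

N_SAMPLES = 600  # calibration pass size quoted above
rng = np.random.default_rng(0)

# Stand-in "model": layer name -> weight matrix (out_features x in_features).
layers = {f"block{i}.mlp": rng.standard_normal((128, 128)).astype(np.float32)
          for i in range(4)}

# Stage 1: accumulate a diagonal input second-moment per layer (a cheap
# sensitivity proxy); a real pipeline would run actual text through the model.
stats = {name: np.zeros(w.shape[1]) for name, w in layers.items()}
for _ in range(N_SAMPLES):
    x = rng.standard_normal(128)          # stand-in calibration activation
    for name in layers:
        stats[name] += x ** 2
stats = {name: s / N_SAMPLES for name, s in stats.items()}

# Stage 2: quantize each layer and report the sensitivity-weighted error.
def quantize_rows(w, bits=4):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    return np.round(w / scale) * scale

for name, w in layers.items():
    err = (((w - quantize_rows(w)) ** 2) * stats[name]).sum()
    print(f"{name}: weighted quantization error {err:.2f}")
```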

Technical Spec

All fulfilled requests are built with the oMLX Enhanced Quantization process:

  • Sensitivity Mapping: A calibration pass measures per-layer precision requirements to prevent output drift.
  • Hessian-Based Tuning: GPTQ-Hessian error compensation is applied during weight rounding (see the rounding sketch after this list).
  • Precision Anchoring: Native BF16 for routing gates and FP16 for attention heads to maximize Apple Silicon AMX throughput.
  • Logic Floor: The lm_head and critical early blocks are locked at 8-bit to ensure core reasoning stability.
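For readers unfamiliar with the technique, the sketch below shows the core GPTQ update on a single weight row: each column is rounded in turn, and its rounding error is pushed onto the not-yet-quantized columns through the (damped, Cholesky-factored) inverse Hessian of the layer inputs. This is a minimal NumPy illustration of the published method, not the oQe implementation.

```python
import numpy as np

def gptq_round_row(w: np.ndarray, H: np.ndarray, bits: int = 4) -> np.ndarray:
    """Round one weight row column-by-column, compensating each column's
    rounding error on the remaining columns via the inverse input Hessian."""
    d = w.shape[0]
    H = H + 0.01 * np.mean(np.diag(H)) * np.eye(d)      # dampen for stability
    Hinv = np.linalg.cholesky(np.linalg.inv(H)).T       # upper factor of H^-1
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    w, q = w.astype(np.float64).copy(), np.zeros(d)
    for i in range(d):
        q[i] = np.clip(np.round(w[i] / scale), -qmax - 1, qmax) * scale
        err = (w[i] - q[i]) / Hinv[i, i]
        w[i + 1:] -= err * Hinv[i, i + 1:]              # push error forward
    return q

# Tiny usage example with a synthetic input Hessian H = X^T X / n.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 16))
H = X.T @ X / X.shape[0]
print(gptq_round_row(rng.standard_normal(16), H))
```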
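The logic floor is easiest to picture as a per-layer bit policy. The helper below is hypothetical (the layer naming and the four-block threshold are assumptions, not the actual oQe configuration): lm_head and the earliest transformer blocks stay at 8-bit while everything else drops to the requested tier.

```python
def bits_for_layer(name: str, tier_bits: int = 4, floor_blocks: int = 4) -> int:
    """Hypothetical per-layer bit policy: keep lm_head and the earliest
    transformer blocks at 8-bit, quantize the rest at the requested tier."""
    if name == "lm_head":
        return 8
    if name.startswith("model.layers."):
        block = int(name.split(".")[2])
        if block < floor_blocks:          # "critical early blocks"
            return 8
    return tier_bits

# Example: an oQ4e-style request on a toy layer list.
for layer in ["model.layers.0.mlp", "model.layers.1.self_attn",
              "model.layers.30.mlp", "lm_head"]:
    print(f"{layer}: {bits_for_layer(layer)} bits")
```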