πŸ“ Overview

Tensordyne builds advanced AI-inference systems, enabling faster, more affordable, and sustainable generative AI.

This repository provides resources to quickly get started with Qwen3-Coder-Next on the Tensordyne Inference System and its SDK.

🧩 Model Details

  • Quantization: post-training quantization of the base model; no fine-tuning or additional training was performed
  • Supported data types: Tensordyne FP16 (tFP16), Tensordyne FP8 (tFP8), mixed-precision

βš™οΈ Quantization

The Tensordyne SDK offers multiple post-training quantization strategies to convert AI models for efficient inference on the Tensordyne Inference System β€” fully customizable for your optimization targets.
Here we showcase several preselected quantization variants that can be applied on the fly to convert the model to Tensordyne data types. The calibration-based strategies are defined by quantization configurations provided as .json files.
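A calibration-based quantization configuration could look like the sketch below. This is purely illustrative: the field names and structure are hypothetical and do not reflect the Tensordyne SDK's actual schema; refer to the hosted documentation for the real configuration format.

```json
{
  "quantization": {
    "default_dtype": "tFP8",
    "calibration": {
      "dataset": "wikitext-2-raw-v1",
      "num_samples": 128
    },
    "overrides": [
      { "pattern": "lm_head", "dtype": "tFP16" }
    ]
  }
}
```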

The quantized models are evaluated on 10% of the WikiText-2 raw v1 test set. Negative relative perplexity drops indicate that the model performs better than the float base model.

| Model Configuration | Absolute Perplexity | Relative Perplexity Drop vs. BF16 | Details |
|---|---|---|---|
| BF16 | 6.351 | – | The baseline model trained in BF16 |
| layerwise_mixed_precision | 6.365 | 0.23 % | calibration-based mixed-precision: tFP8, outliers in tFP16 |
| calibration_based_tFP8 | 6.498 | 2.33 % | calibration-based tFP8 quantization |
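The relative perplexity drop reported above is the percentage increase of the quantized model's perplexity over the BF16 baseline. A minimal sketch of that metric (the helper name is ours, not the SDK's; results recomputed from the rounded published perplexities may differ slightly from the reported percentages):

```python
def relative_perplexity_drop(ppl_quantized: float, ppl_baseline: float) -> float:
    """Relative perplexity change of a quantized model vs. its float baseline, in percent.

    Negative values mean the quantized model performs better than the baseline.
    """
    return (ppl_quantized - ppl_baseline) / ppl_baseline * 100.0

# Recomputing from the table's (rounded) absolute perplexities:
print(f"{relative_perplexity_drop(6.365, 6.351):.2f} %")  # layerwise_mixed_precision
print(f"{relative_perplexity_drop(6.498, 6.351):.2f} %")  # calibration_based_tFP8
```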

πŸš€ Getting Started

Refer to the Tensordyne Hugging Face Hub tutorial in our hosted documentation for instructions on using the artifacts provided in this repository.
The documentation provides more information on Tensordyne's quantization strategies and introduces you to our SDK.

