# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types, such as 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory, and it speeds up inference.
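To make the idea concrete, here is a minimal, library-free sketch of symmetric per-tensor int8 quantization: floats are mapped to integers in [-127, 127] through a single scale factor, and dequantization recovers an approximation of the original values. The function names are illustrative, not part of the Diffusers API.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of a list of floats to int8 range."""
    # One scale for the whole tensor, chosen so the largest weight maps to 127.
    scale = max(abs(w) for w in weights) / 127
    # Round to the nearest integer and clamp to the int8-symmetric range.
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Map quantized integers back to (approximate) floats."""
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Each restored value is within the quantization step (scale) of the original.
```

The storage saving comes from keeping `q` (one byte per value) plus a single float scale instead of full-precision floats; the price is the small rounding error visible in `restored`.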

Learn how to quantize models in the Quantization guide.

## PipelineQuantizationConfig

[[autodoc]] quantizers.PipelineQuantizationConfig
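As a rough usage sketch, `PipelineQuantizationConfig` lets you quantize only selected pipeline components when loading a full pipeline. The backend kwargs below assume `bitsandbytes` is installed, and the model id is illustrative:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# Quantize only the listed components; everything else stays in full precision.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model id
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```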

## BitsAndBytesConfig

[[autodoc]] BitsAndBytesConfig
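A minimal sketch of loading a single model in 4-bit NF4 precision, assuming `bitsandbytes` is installed; the model id and component class are illustrative:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit NF4 weights via bitsandbytes, with bfloat16 compute.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```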

## GGUFQuantizationConfig

[[autodoc]] GGUFQuantizationConfig
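Unlike the other backends, GGUF is used to load checkpoints that are already quantized. A sketch, assuming the `gguf` package is installed; the checkpoint URL is illustrative:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# Load a pre-quantized GGUF single-file checkpoint; compute runs in compute_dtype.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf",  # illustrative checkpoint
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```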

## QuantoConfig

[[autodoc]] QuantoConfig
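A sketch of int8 weight-only quantization through the Quanto backend, assuming `optimum-quanto` is installed; the model id is illustrative:

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# int8 weight-only quantization via optimum-quanto.
quant_config = QuantoConfig(weights_dtype="int8")

model = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```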

## TorchAoConfig

[[autodoc]] TorchAoConfig
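A sketch of the torchao backend, which takes the quantization type as a string, assuming `torchao` is installed; the model id is illustrative:

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# "int8wo" = int8 weight-only quantization via torchao.
quant_config = TorchAoConfig("int8wo")

model = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model id
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```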

## DiffusersQuantizer

[[autodoc]] quantizers.base.DiffusersQuantizer