Quantization[[quantization]]

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types, such as 8-bit integers (int8). This makes it possible to load larger models that would otherwise not fit in memory and to speed up inference. Transformers supports the AWQ and GPTQ quantization algorithms, as well as 8-bit and 4-bit quantization through bitsandbytes. Quantization techniques that are not supported by Transformers can be added via the HfQuantizer class.

You can learn how to quantize models in the quantization guide.
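
A quantization config is passed to from_pretrained to load a model with reduced precision. A minimal sketch, assuming bitsandbytes is installed and a GPU is available; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load a causal LM with its linear layers quantized to 8-bit via bitsandbytes.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)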

QuantoConfig[[transformers.QuantoConfig]]

transformers.QuantoConfig[[transformers.QuantoConfig]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded using quanto.

post_init[[transformers.QuantoConfig.post_init]]

Source

Safety checker that arguments are correct

Parameters:

weights (str, optional, defaults to "int8") : The target dtype for the weights after quantization. Supported values are ("float8","int8","int4","int2")

activations (str, optional) : The target dtype for the activations after quantization. Supported values are (None,"int8","float8")

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).
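
A minimal usage sketch, assuming the quanto backend (optimum-quanto) is installed; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, QuantoConfig

# Quantize the weights to int8 with quanto, keeping the LM head in full precision.
quantization_config = QuantoConfig(weights="int8", modules_to_not_convert=["lm_head"])
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)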

AqlmConfig[[transformers.AqlmConfig]]

transformers.AqlmConfig[[transformers.AqlmConfig]]

Source

This is a wrapper class for aqlm parameters.

post_init[[transformers.AqlmConfig.post_init]]

Source

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

Parameters:

in_group_size (int, optional, defaults to 8) : The group size along the input dimension.

out_group_size (int, optional, defaults to 1) : The group size along the output dimension. It's recommended to always use 1.

num_codebooks (int, optional, defaults to 1) : Number of codebooks for the Additive Quantization procedure.

nbits_per_codebook (int, optional, defaults to 16) : Number of bits encoding a single codebook vector. Codebook size is 2**nbits_per_codebook.

linear_weights_not_to_quantize (Optional[list[str]], optional) : List of full paths of nn.Linear weight parameters that shall not be quantized.

kwargs (dict[str, Any], optional) : Additional parameters from which to initialize the configuration object.
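
AQLM checkpoints on the Hub are typically already quantized, so this config is usually read from the checkpoint rather than written by hand. A sketch constructing it with the documented defaults:

from transformers import AqlmConfig

# One codebook of 2**16 vectors, grouping 8 input channels per code.
quantization_config = AqlmConfig(
    in_group_size=8,
    out_group_size=1,
    num_codebooks=1,
    nbits_per_codebook=16,
)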

VptqConfig[[transformers.VptqConfig]]

transformers.VptqConfig[[transformers.VptqConfig]]

Source

This is a wrapper class for vptq parameters.

post_init[[transformers.VptqConfig.post_init]]

Source

Safety checker that arguments are correct

Parameters:

enable_proxy_error (bool, optional, defaults to False) : Whether to calculate the proxy error for each layer

config_for_layers (Dict, optional, defaults to {}) : Quantization params for each layer

shared_layer_config (Dict, optional, defaults to {}) : Shared quantization params among layers

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).

kwargs (dict[str, Any], optional) : Additional parameters from which to initialize the configuration object.
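
VPTQ models are likewise pre-quantized, so the config normally comes from the checkpoint. A minimal sketch constructing one by hand, assuming the vptq package is installed:

from transformers import VptqConfig

# Keep the LM head in its original precision.
quantization_config = VptqConfig(modules_to_not_convert=["lm_head"])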

AwqConfig[[transformers.AwqConfig]]

transformers.AwqConfig[[transformers.AwqConfig]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded with AWQ quantization via the auto-awq library, relying on the auto_awq backend.

Parameters:

bits (int, optional, defaults to 4) : The number of bits to quantize to.

group_size (int, optional, defaults to 128) : The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.

zero_point (bool, optional, defaults to True) : Whether to use zero point quantization.

backend (AwqBackend, optional, defaults to AwqBackend.AUTO) : The quantization backend.

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers). Note that you cannot quantize directly with transformers; please refer to the AutoAWQ documentation for quantizing HF models.
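
Since quantization itself must be done with AutoAWQ, a typical use of this config is loading a checkpoint that was already quantized with AWQ. A minimal sketch, assuming autoawq is installed; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, AwqConfig

# 4-bit AWQ with the recommended group size; overrides the config stored in the checkpoint.
quantization_config = AwqConfig(bits=4, group_size=128, zero_point=True)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-AWQ",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)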

EetqConfig[[transformers.EetqConfig]]

transformers.EetqConfig[[transformers.EetqConfig]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded using eetq.

post_init[[transformers.EetqConfig.post_init]]

Source

Safety checker that arguments are correct

Parameters:

weights (str, optional, defaults to "int8") : The target dtype for the weights. The only supported value is "int8"

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.
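
A minimal sketch of on-the-fly int8 quantization with EETQ, assuming the eetq package is installed; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, EetqConfig

# Quantize the weights to int8 while loading.
quantization_config = EetqConfig(weights="int8")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)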

GPTQConfig[[transformers.GPTQConfig]]

transformers.GPTQConfig[[transformers.GPTQConfig]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded using the optimum API for GPTQ quantization, relying on the gptqmodel backend.

from_dict_optimum[[transformers.GPTQConfig.from_dict_optimum]]

Source

Get compatible class with optimum gptq config dict

Parameters:

bits (int) : The number of bits to quantize to, supported numbers are (2, 3, 4, 8).

tokenizer (str or PreTrainedTokenizerBase, optional) : The tokenizer used to process the dataset. You can pass either: - A custom tokenizer object. - A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. - A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.

dataset (Union[list[str]], optional) : The dataset used for quantization. You can provide your own dataset as a list of strings, or use one of the original datasets from the GPTQ paper: ['wikitext2','c4','c4-new']

group_size (int, optional, defaults to 128) : The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.

damp_percent (float, optional, defaults to 0.1) : The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1.

desc_act (bool, optional, defaults to False) : Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference but the perplexity may become slightly worse. Also known as act-order.

act_group_aware (bool, optional, defaults to True) : Use GAR (group aware activation order) during quantization. Has a measurable positive impact on quantization quality. Only applicable when desc_act = False; it will be forced to False when desc_act = True.

sym (bool, optional, defaults to True) : Whether to use symmetric quantization.

true_sequential (bool, optional, defaults to True) : Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes quantization using inputs that have passed through the previously quantized layers.

format (str, optional, defaults to "gptq") : GPTQ weight format. gptq (v1) is supported by gptqmodel. gptq_v2 is gptqmodel only.

meta (dict[str, any], optional) : Properties, such as tooling:version, that do not directly contribute to quantization or quant inference are stored in meta. i.e. meta.quantizer: ["optimum:version", "gptqmodel:version"]

backend (str, optional) : Controls which kernel to use. Valid values for gptqmodel are auto, auto_trainable and more. Ref gptqmodel backends: https://github.com/ModelCloud/GPTQModel/blob/main/gptqmodel/utils/backend.py

model_seqlen (int, optional) : The maximum sequence length that the model can take.

block_name_to_quantize (str, optional) : The transformers block name to quantize. If None, we will infer the block name using common patterns (e.g. model.layers)

module_name_preceding_first_block (list[str], optional) : The layers that are preceding the first Transformer block.

batch_size (int, optional, defaults to 1) : The batch size used when processing the dataset

pad_token_id (int, optional) : The pad token id. Needed to prepare the dataset when batch_size > 1.

max_input_length (int, optional) : The maximum input length. This is needed to initialize a buffer that depends on the maximum expected input length. It is specific to the exllama backend with act-order.

cache_block_outputs (bool, optional, defaults to True) : Whether to cache block outputs to reuse as inputs for the succeeding block.

modules_in_block_to_quantize (list[list[str]], optional) : List of list of module names to quantize in the specified block. This argument is useful to exclude certain linear modules from being quantized. The block to quantize can be specified by setting block_name_to_quantize. We will quantize each list sequentially. If not set, we will quantize all linear layers. Example: modules_in_block_to_quantize =[["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"], ["self_attn.o_proj"]]. In this example, we will first quantize the q,k,v layers simultaneously since they are independent. Then, we will quantize self_attn.o_proj layer with the q,k,v layers quantized. This way, we will get better results since it reflects the real input self_attn.o_proj will get when the model is quantized.
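
A minimal quantization sketch, assuming the optimum and gptqmodel packages are installed; the model id is only a placeholder and calibration uses the built-in c4 dataset:

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-350m"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ with the recommended group size, calibrated on c4.
quantization_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)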

post_init[[transformers.GPTQConfig.post_init]]

Source

Safety checker that arguments are correct

to_dict_optimum[[transformers.GPTQConfig.to_dict_optimum]]

Source

Get compatible dict for optimum gptq config

BitsAndBytesConfig[[transformers.BitsAndBytesConfig]]

transformers.BitsAndBytesConfig[[transformers.BitsAndBytesConfig]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded using bitsandbytes.

Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, then more arguments will be added to this class.

is_quantizable[[transformers.BitsAndBytesConfig.is_quantizable]]

Source

Returns True if the model is quantizable, False otherwise.

Parameters:

load_in_8bit (bool, optional, defaults to False) : This flag is used to enable 8-bit quantization with LLM.int8().

load_in_4bit (bool, optional, defaults to False) : This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.

llm_int8_threshold (float, optional, defaults to 6.0) : This corresponds to the outlier threshold for outlier detection as described in LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale paper: https://huggingface.co/papers/2208.07339 Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).

llm_int8_skip_modules (list[str], optional) : An explicit list of the modules that we do not want to convert in 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example for CausalLM models, the last lm_head is kept in its original dtype.

llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) : This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model in different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.

llm_int8_has_fp16_weight (bool, optional, defaults to False) : This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.

bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) : This sets the computational type which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.

bnb_4bit_quant_type (str, optional, defaults to "fp4") : This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are FP4 and NF4 data types which are specified by fp4 or nf4.

bnb_4bit_use_double_quant (bool, optional, defaults to False) : This flag is used for nested quantization where the quantization constants from the first quantization are quantized again.

bnb_4bit_quant_storage (torch.dtype or str, optional, defaults to torch.uint8) : This sets the storage type to pack the quantized 4-bit params.

kwargs (dict[str, Any], optional) : Additional parameters from which to initialize the configuration object.
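
A minimal sketch of a common 4-bit setup (NF4 with nested quantization and bf16 compute), assuming bitsandbytes is installed; the model id is only a placeholder:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 data type for the 4-bit weights
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants as well
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)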

post_init[[transformers.BitsAndBytesConfig.post_init]]

Source

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

quantization_method[[transformers.BitsAndBytesConfig.quantization_method]]

Source

This method returns the quantization method used for the model. If the model is not quantizable, it returns None.

to_diff_dict[[transformers.BitsAndBytesConfig.to_diff_dict]]

Source

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

Returns:

dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

HfQuantizer[[transformers.quantizers.HfQuantizer]]

transformers.quantizers.HfQuantizer[[transformers.quantizers.HfQuantizer]]

Source

Abstract class of the HuggingFace quantizer. For now it supports quantizing HF transformers models for inference and/or quantization. This class is used only for transformers.PreTrainedModel.from_pretrained and cannot easily be used outside the scope of that method yet.

Attributes:

quantization_config (transformers.utils.quantization_config.QuantizationConfigMixin) : The quantization config that defines the quantization parameters of your model that you want to quantize.

requires_calibration (bool) : Whether the quantization method requires calibrating the model before using it.

adjust_max_memory[[transformers.quantizers.HfQuantizer.adjust_max_memory]]

Source

Adjusts the max_memory argument for infer_auto_device_map() if extra memory is needed for quantization.

dequantize[[transformers.quantizers.HfQuantizer.dequantize]]

Source

Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance. Note not all quantization schemes support this.

get_param_name[[transformers.quantizers.HfQuantizer.get_param_name]]

Source

Override this method if you want to adjust the param_name.

get_state_dict_and_metadata[[transformers.quantizers.HfQuantizer.get_state_dict_and_metadata]]

Source

Get the state dict and metadata. Useful when we need to modify the state dict a bit due to quantization.

param_needs_quantization[[transformers.quantizers.HfQuantizer.param_needs_quantization]]

Source

Check whether a given param needs to be quantized.

postprocess_model[[transformers.quantizers.HfQuantizer.postprocess_model]]

Source

Post-process the model after weights have been loaded. Make sure to override the abstract method _process_model_after_weight_loading.

Parameters:

model (~transformers.PreTrainedModel) : The model to quantize

kwargs (dict, optional) : The keyword arguments that are passed along to _process_model_after_weight_loading.

preprocess_model[[transformers.quantizers.HfQuantizer.preprocess_model]]

Source

Setting model attributes and/or converting model before weights loading. At this point the model should be initialized on the meta device so you can freely manipulate the skeleton of the model in order to replace modules in-place. Make sure to override the abstract method _process_model_before_weight_loading.

Parameters:

model (~transformers.PreTrainedModel) : The model to quantize

kwargs (dict, optional) : The keyword arguments that are passed along to _process_model_before_weight_loading.

remove_quantization_config[[transformers.quantizers.HfQuantizer.remove_quantization_config]]

Source

Remove the quantization config from the model.

update_device_map[[transformers.quantizers.HfQuantizer.update_device_map]]

Source

Override this method if you want to override the existing device map with a new one. E.g. for bitsandbytes, since accelerate is a hard requirement, if no device_map is passed, the device_map is set to "auto".

Parameters:

device_map (Union[dict, str], optional) : The device_map that is passed through the from_pretrained method.

update_dtype[[transformers.quantizers.HfQuantizer.update_dtype]]

Source

Some quantization methods require the dtype of the model to be explicitly set to a target dtype. You need to override this method if you want to make sure that behavior is preserved.

Parameters:

dtype (torch.dtype) : The input dtype that is passed in from_pretrained

update_ep_plan[[transformers.quantizers.HfQuantizer.update_ep_plan]]

Source

Updates the expert-parallel (EP) plan for the scales.

update_tp_plan[[transformers.quantizers.HfQuantizer.update_tp_plan]]

Source

Updates the tensor-parallel (TP) plan for the scales.

update_weight_conversions[[transformers.quantizers.HfQuantizer.update_weight_conversions]]

Source

Give the quantizer a chance to rewrite the weight conversion pipeline.

Loading runs renamings → converters → (dequant → merge → concat). Dequant has to happen before any merge/concat op because those operations aren't aware of per-block scales, so the per-expert (weight, scale) pairs need to be collapsed into full-precision tensors first. Subclasses (e.g. the FP8 quantizer in dequantize=True mode) override this to inject a dequantize op at the start of each model-provided WeightConverter and attach the matching scale source patterns. Default: no-op.

validate_environment[[transformers.quantizers.HfQuantizer.validate_environment]]

Source

This method is used to check for potential conflicts with arguments that are passed in from_pretrained. You need to define it for all future quantizers that are integrated with transformers. If no explicit checks are needed, simply return nothing.
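
A minimal sketch of what a custom quantizer skeleton could look like, assuming only the hooks named in this section; a real integration would also define a matching config class and register both in transformers' auto-quantization mappings:

from transformers.quantizers import HfQuantizer

class MyQuantizer(HfQuantizer):
    # Hypothetical example quantizer; the methods below are the abstract hooks
    # documented above.
    requires_calibration = False

    def validate_environment(self, *args, **kwargs):
        # Check that the backing quantization library is installed, raise otherwise.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model is still on the meta device here: swap modules for their
        # quantized counterparts in-place.
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Finalize quantization once the real weights have been loaded.
        return model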

HqqConfig[[transformers.HqqConfig]]

transformers.HqqConfig[[transformers.HqqConfig]]

Source

This is a wrapper around hqq's BaseQuantizeConfig.

from_dict[[transformers.HqqConfig.from_dict]]

Source

Override from_dict, used in AutoQuantizationConfig.from_dict in quantizers/auto.py

Parameters:

nbits (int, optional, defaults to 4) : Number of bits. Supported values are (8, 4, 3, 2, 1).

group_size (int, optional, defaults to 64) : Group-size value. Supported values are any value such that weight.shape[axis] is divisible by it.

view_as_float (bool, optional, defaults to False) : View the quantized weight as float (used in distributed training) if set to True.

axis (Optional[int], optional) : Axis along which grouping is performed. Supported values are 0 or 1.

dynamic_config (dict, optional) : Parameters for dynamic configuration. The key is the name tag of the layer and the value is a quantization config. If set, each layer specified by its id will use its dedicated quantization configuration.

skip_modules (list[str], optional, defaults to ['lm_head']) : List of nn.Linear layers to skip.

kwargs (dict[str, Any], optional) : Additional parameters from which to initialize the configuration object.
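
A minimal sketch using the documented defaults, assuming the hqq package is installed; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit HQQ with a group size of 64; lm_head is skipped by default.
quantization_config = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)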

post_init[[transformers.HqqConfig.post_init]]

Source

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

to_diff_dict[[transformers.HqqConfig.to_diff_dict]]

Source

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

Returns:

dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

FbgemmFp8Config[[transformers.FbgemmFp8Config]]

transformers.FbgemmFp8Config[[transformers.FbgemmFp8Config]]

Source

This is a wrapper class for all the attributes and features that you can adjust on a model that has been loaded using fbgemm fp8 quantization.

Parameters:

activation_scale_ub (float, optional, defaults to 1200.0) : The activation scale upper bound. This is used when quantizing the input activation.

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.
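
A minimal sketch of FP8 quantization through fbgemm, assuming the fbgemm-gpu package and a GPU with FP8 support; the model id is only a placeholder:

from transformers import AutoModelForCausalLM, FbgemmFp8Config

# Default activation scale upper bound (1200.0), no modules excluded.
quantization_config = FbgemmFp8Config()
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",
)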

CompressedTensorsConfig[[transformers.CompressedTensorsConfig]]

transformers.CompressedTensorsConfig[[transformers.CompressedTensorsConfig]]

Source

This is a wrapper class that handles compressed-tensors quantization config options. It is a wrapper around compressed_tensors.QuantizationConfig

from_dict[[transformers.CompressedTensorsConfig.from_dict]]

Source

Instantiates a CompressedTensorsConfig from a Python dictionary of parameters. Optionally unwraps any args from the nested quantization_config.

Parameters:

config_dict (dict[str, Any]) : Dictionary that will be used to instantiate the configuration object.

return_unused_kwargs (bool, optional, defaults to False) : Whether or not to return a list of unused keyword arguments. Used for the from_pretrained method in PreTrainedModel.

kwargs (dict[str, Any], optional) : Additional parameters from which to initialize the configuration object.

Returns:

QuantizationConfigMixin

The configuration object instantiated from those parameters.

Parameters:

config_groups (dict[str, Union[QuantizationScheme, list[str]]], optional) : Dictionary mapping group name to a quantization scheme definition

format (str, optional, defaults to "dense") : The format the model is represented in. Set run_compressed to True to execute the model in the compressed format if it is not dense

quantization_status (QuantizationStatus, optional, defaults to "initialized") : Status of the model in the quantization lifecycle, i.e. 'initialized', 'calibration', 'frozen'

kv_cache_scheme (Optional[QuantizationArgs], optional) : Specifies quantization of the kv cache. If None, kv cache is not quantized.

global_compression_ratio (Optional[float], optional) : 0-1 float percentage of model compression

ignore (Optional[list[str]], optional) : Layer names or types to not quantize, supports regex prefixed by 're:'

sparsity_config (dict[str, Any], optional) : Configuration for sparsity compression

quant_method (str, optional, defaults to "compressed-tensors") : Do not override, should be compressed-tensors

run_compressed (bool, optional, defaults to True) : Alter submodules (usually linear) in order to emulate compressed model execution if True, otherwise use the default submodule
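
compressed-tensors checkpoints are produced by tools such as llm-compressor and are already quantized, so this config is normally read straight from the checkpoint. A minimal loading sketch; the model id is only a placeholder:

from transformers import AutoModelForCausalLM

# The CompressedTensorsConfig is picked up from the checkpoint's config.json.
model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Meta-Llama-3-8B-Instruct-FP8",  # placeholder model id
    device_map="auto",
)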


to_dict[[transformers.CompressedTensorsConfig.to_dict]]

Source

Quantization config to be added to config.json.

Serializes this instance to a Python dictionary.

Returns:

dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

to_diff_dict[[transformers.CompressedTensorsConfig.to_diff_dict]]

Source

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

Returns:

dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

TorchAoConfig[[transformers.TorchAoConfig]]

transformers.TorchAoConfig[[transformers.TorchAoConfig]]

Source

Config class for torchao quantization/sparsity techniques.

Example:

import torch
from torchao.quantization import Int4WeightOnlyConfig
from transformers import AutoModelForCausalLM, TorchAoConfig

# Quantize the weights to int4 with a group size of 32.
model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model id
quantization_config = TorchAoConfig(Int4WeightOnlyConfig(group_size=32))
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="cuda", torch_dtype=torch.bfloat16, quantization_config=quantization_config
)

from_dict[[transformers.TorchAoConfig.from_dict]]

Source

Create configuration from a dictionary.

Parameters:

quant_type (AOBaseConfig) : A torchao AOBaseConfig instance specifying the quantization type, e.g. Int4WeightOnlyConfig(group_size=32), Int8WeightOnlyConfig(), Int8DynamicActivationInt8WeightConfig(), Float8WeightOnlyConfig(), etc.

modules_to_not_convert (list, optional, defaults to None) : The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.

include_input_output_embeddings (bool, optional, defaults to False) : Whether to include embeddings in quantization or not; the input embedding will also be removed from the modules_to_not_convert list if this flag is set.

untie_embedding_weights (bool, optional, defaults to False) : Whether to untie the weights when quantizing input embedding weights that are tied to other weights.

get_apply_tensor_subclass[[transformers.TorchAoConfig.get_apply_tensor_subclass]]

Source

Return the quantization config to apply.

post_init[[transformers.TorchAoConfig.post_init]]

Source

Validate configuration and set defaults.

to_dict[[transformers.TorchAoConfig.to_dict]]

Source

Convert configuration to a dictionary.
