| id | prompt | docstring |
|---|---|---|
21,100 | from typing import Optional
from sparseml.pytorch import recipe_template
The provided code snippet includes necessary dependencies for implementing the `create_sparse_transfer_recipe` function. Write a Python function `def create_sparse_transfer_recipe( model: Optional["Module"] = None, # noqa: F821 quant: bool = True, lr_func: str = "linear", **recipe_args, ) -> str` to solve the following problem:
Convenience function to create a sparse transfer recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param quant: `True` if quantization needs to be applied else `False`. Defaults to `True` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe
Here is the function:
def create_sparse_transfer_recipe(
model: Optional["Module"] = None, # noqa: F821
quant: bool = True,
lr_func: str = "linear",
**recipe_args,
) -> str:
"""
Convenience function to create a sparse transfer recipe
:param model: an instantiated PyTorch Module, or the local path to a torch.jit
loadable *.pt file, if supplied then the recipe is built according to this
architecture
:param quant: `True` if quantization needs to be applied else `False`. Defaults
to `True`
:param lr_func: the learning rate schedule function. Defaults to `linear`
:param recipe_args: additional arguments to pass to recipe_template
:return: a valid recipe
"""
return recipe_template(
model=model,
pruning="constant",
quantization=quant,
lr=lr_func,
**recipe_args,
) | Convenience function to create a sparse transfer recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param quant: `True` if quantization needs to be applied else `False`. Defaults to `True` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe |
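The three convenience wrappers in this file differ only in which arguments they pin before forwarding to `recipe_template`. A minimal, sparseml-free sketch of that keyword mapping (the helper name `sparse_transfer_recipe_args` is hypothetical, used only to illustrate the forwarding):

```python
def sparse_transfer_recipe_args(model=None, quant=True, lr_func="linear", **recipe_args):
    """Show the keyword mapping forwarded to recipe_template:
    pruning is pinned to "constant"; quantization and lr pass through."""
    args = dict(model=model, pruning="constant", quantization=quant, lr=lr_func)
    args.update(recipe_args)
    return args

print(sparse_transfer_recipe_args(quant=False, global_sparsity=True))
```

The pruning/quantization wrappers below follow the same shape, pinning `quantization=False` or `pruning=False` respectively.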
21,101 | from typing import Optional
from sparseml.pytorch import recipe_template
The provided code snippet includes necessary dependencies for implementing the `create_pruning_recipe` function. Write a Python function `def create_pruning_recipe( model: Optional["Module"] = None, # noqa: F821 method: str = "true", lr_func: str = "linear", **recipe_args, ) -> str` to solve the following problem:
Convenience function to create a pruning recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param method: pruning algorithm to use in the recipe, can be any of the following, `true` (represents Magnitude/Global-Magnitude pruning according to global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or `constant`. Defaults to `true` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe
Here is the function:
def create_pruning_recipe(
model: Optional["Module"] = None, # noqa: F821
method: str = "true",
lr_func: str = "linear",
**recipe_args,
) -> str:
"""
Convenience function to create a pruning recipe
:param model: an instantiated PyTorch Module, or the local path to a torch.jit
loadable *.pt file, if supplied then the recipe is built according to this
architecture
:param method: pruning algorithm to use in the recipe, can be any of the
following, `true` (represents Magnitude/Global-Magnitude pruning according to
global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or
`constant`. Defaults to `true`
:param lr_func: the learning rate schedule function. Defaults to `linear`
:param recipe_args: additional arguments to pass to recipe_template
:return: a valid recipe
"""
return recipe_template(
model=model,
pruning=method,
quantization=False,
lr=lr_func,
**recipe_args,
) | Convenience function to create a pruning recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param method: pruning algorithm to use in the recipe, can be any of the following, `true` (represents Magnitude/Global-Magnitude pruning according to global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or `constant`. Defaults to `true` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe |
21,102 | from typing import Optional
from sparseml.pytorch import recipe_template
The provided code snippet includes necessary dependencies for implementing the `create_quantization_recipe` function. Write a Python function `def create_quantization_recipe( model: Optional["Module"] = None, # noqa: F821 method: bool = True, lr_func: str = "linear", **recipe_args, ) -> str` to solve the following problem:
Convenience function to create a quantization recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param method: `True` if quantization needs to be applied else `False`. Defaults to `True` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe
Here is the function:
def create_quantization_recipe(
model: Optional["Module"] = None, # noqa: F821
method: bool = True,
lr_func: str = "linear",
**recipe_args,
) -> str:
"""
Convenience function to create a quantization recipe
:param model: an instantiated PyTorch Module, or the local path to a torch.jit
loadable *.pt file, if supplied then the recipe is built according to this
architecture
:param method: `True` if quantization needs to be applied else `False`. Defaults
to `True`
:param lr_func: the learning rate schedule function. Defaults to `linear`
:param recipe_args: additional arguments to pass to recipe_template
:return: a valid recipe
"""
return recipe_template(
model=model,
pruning=False,
quantization=method,
lr=lr_func,
**recipe_args,
) | Convenience function to create a quantization recipe :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param method: `True` if quantization needs to be applied else `False`. Defaults to `True` :param lr_func: the learning rate schedule function. Defaults to `linear` :param recipe_args: additional arguments to pass to recipe_template :return: a valid recipe |
21,103 | import os
from typing import Any, Dict
import numpy
import torch
from torch import device as device_class
from ultralytics.yolo.utils import LOGGER
EP_list = ["CUDAExecutionProvider", "CPUExecutionProvider"]
def preprocess(
batch: Dict[str, Any], device: device_class, half: bool = False
) -> Dict[str, Any]:
"""
Ported from
https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/v8/detect/val.py
"""
batch["img"] = batch["img"].to(device, non_blocking=True)
batch["img"] = (batch["img"].half() if half else batch["img"].float()) / 255
for k in ["batch_idx", "cls", "bboxes"]:
batch[k] = batch[k].to(device)
return batch
def _export_torch_outputs(
image: torch.Tensor, model: torch.nn.Module, sample_out_dir: str, file_idx: str
):
# Run model to get torch outputs
model_out = model(image)
preds = model_out
sample_output_filename = os.path.join(sample_out_dir, f"out-{file_idx}.npz")
seg_prediction = None
# Move to cpu for exporting
# Segmentation currently supports two outputs
if isinstance(preds, tuple):
preds_out = preds[0].detach().to("cpu")
seg_prediction = preds[1].detach().to("cpu")
else:
preds_out = preds.detach().to("cpu")
numpy.savez(sample_output_filename, preds_out, seg_prediction=seg_prediction)
def _export_ort_outputs(
image: numpy.ndarray,
session: "onnxruntime.InferenceSession", # noqa: F821
sample_out_dir: str,
file_idx: str,
):
# Run model to get onnxruntime outputs
ort_inputs = {session.get_inputs()[0].name: image}
ort_outs = session.run(None, ort_inputs)
preds = ort_outs
seg_prediction = None
if len(preds) > 1:
preds_out = preds[0]
seg_prediction = preds[1]
else:
preds_out = preds[0]
preds_out = numpy.squeeze(preds_out, axis=0)
sample_output_filename = os.path.join(sample_out_dir, f"out-{file_idx}.npz")
numpy.savez(sample_output_filename, preds_out, seg_prediction=seg_prediction)
def _export_inputs(image: torch.Tensor, sample_in_dir: str, file_idx: str):
sample_in = image.detach().to("cpu").squeeze(0)
sample_input_filename = os.path.join(sample_in_dir, f"inp-{file_idx}.npz")
numpy.savez(sample_input_filename, sample_in)
def _graph_has_uint8_inputs(onnx_path: str) -> bool:
"""
    Load the onnx model and check whether its first input has elem_type 2 (uint8)
"""
import onnx
onnx_model = onnx.load(str(onnx_path))
return onnx_model.graph.input[0].type.tensor_type.elem_type == 2
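The literal `2` compared against here is the ONNX `TensorProto.DataType` code for `uint8`. A small lookup sketch of the relevant enum values, hardcoded from the ONNX spec so no `onnx` install is needed (`describe_elem_type` is an illustrative helper, not part of this module):

```python
# ONNX TensorProto.DataType codes (from the ONNX spec); elem_type == 2 is uint8.
ONNX_ELEM_TYPES = {
    1: "FLOAT",
    2: "UINT8",
    3: "INT8",
    6: "INT32",
    7: "INT64",
    10: "FLOAT16",
    11: "DOUBLE",
}

def describe_elem_type(elem_type: int) -> str:
    """Human-readable name for an ONNX tensor element type code."""
    return ONNX_ELEM_TYPES.get(elem_type, f"UNKNOWN({elem_type})")

print(describe_elem_type(2))  # UINT8
```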
The provided code snippet includes necessary dependencies for implementing the `export_sample_inputs_outputs` function. Write a Python function `def export_sample_inputs_outputs( data_loader: torch.utils.data.DataLoader, model: torch.nn.Module, save_dir: str, device: device_class, number_export_samples: int, onnx_path: str, )` to solve the following problem:
Export sample model input and output for testing with the DeepSparse Engine :param data_loader: path to data loader to take samples from :param model: model to be exported. Used to generate torch outputs :param save_dir: directory to save samples to :param device: device to run the inference (output generation) on :param number_export_samples: number of samples to export :param onnx_path: path to onnx model. Used to generate ORT outputs
Here is the function:
def export_sample_inputs_outputs(
data_loader: torch.utils.data.DataLoader,
model: torch.nn.Module,
save_dir: str,
device: device_class,
number_export_samples: int,
onnx_path: str,
):
"""
Export sample model input and output for testing with the DeepSparse Engine
    :param data_loader: data loader to take samples from
:param model: model to be exported. Used to generate torch outputs
:param save_dir: directory to save samples to
:param device: device to run the inference (output generation) on
:param number_export_samples: number of samples to export
:param onnx_path: path to onnx model. Used to generate ORT outputs
"""
try:
import onnxruntime
except (ImportError, ModuleNotFoundError) as exception:
raise ValueError(
"onnxruntime is needed to export samples for validation, but the "
"module was not found, try `pip install sparseml[onnxruntime]`"
) from exception
LOGGER.info(
f"Exporting {number_export_samples} sample model inputs and outputs for "
"testing with the DeepSparse Engine"
)
exported_samples = 0
# Sample export directories
sample_in_dir = os.path.join(save_dir, "sample-inputs")
sample_out_dir_torch = os.path.join(save_dir, "sample_outputs_torch")
sample_out_dir_ort = os.path.join(save_dir, "sample_outputs_onnxruntime")
os.makedirs(sample_in_dir, exist_ok=True)
os.makedirs(sample_out_dir_torch, exist_ok=True)
os.makedirs(sample_out_dir_ort, exist_ok=True)
save_inputs_as_uint8 = _graph_has_uint8_inputs(onnx_path) if onnx_path else False
# Prepare model for inference
model = model.to(device)
model.eval()
# Prepare onnxruntime engine for inference
ort_session = onnxruntime.InferenceSession(onnx_path, providers=EP_list)
LOGGER.info(f"Exporting sample inputs to directory {sample_in_dir}")
LOGGER.info(f"Exporting sample torch outputs to directory {sample_out_dir_torch}")
LOGGER.info(
f"Exporting sample onnxruntime outputs to directory {sample_out_dir_ort}"
)
for batch in data_loader:
file_idx = f"{exported_samples}".zfill(4)
preprocessed_batch = preprocess(batch=batch, device=device)
image = preprocessed_batch["img"]
# Save torch outputs as numpy array
_export_torch_outputs(image, model, sample_out_dir_torch, file_idx)
# Convert input data type if needed
if save_inputs_as_uint8:
image = (255 * image).to(dtype=torch.uint8)
# Save inputs as numpy array
_export_inputs(image, sample_in_dir, file_idx)
# Save onnxruntime outputs as numpy array
_export_ort_outputs(
image.cpu().numpy(), ort_session, sample_out_dir_ort, file_idx
)
exported_samples += 1
if exported_samples >= number_export_samples:
break
    if exported_samples < number_export_samples:
        # Logger.info does not accept a `level` kwarg; log at warning level directly
        LOGGER.warning(
            f"Could not export {number_export_samples} samples. Exhausted dataloader "
            f"and exported {exported_samples} samples"
        )
LOGGER.info(
f"Completed the export of {number_export_samples} "
f"input/output samples to {save_dir}"
) | Export sample model input and output for testing with the DeepSparse Engine :param data_loader: path to data loader to take samples from :param model: model to be exported. Used to generate torch outputs :param save_dir: directory to save samples to :param device: device to run the inference (output generation) on :param number_export_samples: number of samples to export :param onnx_path: path to onnx model. Used to generate ORT outputs |
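The exporter names its files `inp-NNNN.npz` / `out-NNNN.npz` with indices zero-padded to four digits so they sort lexicographically. A stdlib-only sketch of that naming scheme (`sample_filenames` is a hypothetical helper, not part of sparseml):

```python
import os

def sample_filenames(save_dir: str, sample_idx: int):
    """Reproduce the exporter's naming: zero-pad the sample index to four
    digits and pair an input file with its torch-output file."""
    file_idx = f"{sample_idx}".zfill(4)
    inp = os.path.join(save_dir, "sample-inputs", f"inp-{file_idx}.npz")
    out = os.path.join(save_dir, "sample_outputs_torch", f"out-{file_idx}.npz")
    return inp, out

inp, out = sample_filenames("export", 7)
print(inp)  # export/sample-inputs/inp-0007.npz (on POSIX)
```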
21,104 | import glob
import os
import warnings
from argparse import Namespace
from typing import Any, Dict
import yaml
from ultralytics.yolo.data.dataloaders.v5loader import create_dataloader
from ultralytics.yolo.data.utils import ROOT
from ultralytics.yolo.engine.model import DetectionModel
from ultralytics.yolo.engine.trainer import BaseTrainer
The provided code snippet includes necessary dependencies for implementing the `check_coco128_segmentation` function. Write a Python function `def check_coco128_segmentation(args: Namespace) -> Namespace` to solve the following problem:
Checks if the argument 'data' is coco128.yaml and if so, replaces it with coco128-seg.yaml. :param args: arguments to check :return: the updated arguments
Here is the function:
def check_coco128_segmentation(args: Namespace) -> Namespace:
"""
Checks if the argument 'data' is coco128.yaml and if so,
replaces it with coco128-seg.yaml.
:param args: arguments to check
:return: the updated arguments
"""
if args.data == "coco128.yaml":
dataset_name, dataset_extension = os.path.splitext(args.data)
dataset_yaml = dataset_name + "-seg" + dataset_extension
warnings.warn(
f"Dataset yaml {args.data} is not supported for segmentation. "
f"Attempting to use {dataset_yaml} instead."
)
args.data = dataset_yaml
return args | Checks if the argument 'data' is coco128.yaml and if so, replaces it with coco128-seg.yaml. :param args: arguments to check :return: the updated arguments |
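The yaml-name rewrite above can be reproduced with `os.path.splitext` alone; a self-contained sketch (the helper name is illustrative):

```python
import os
import warnings

def swap_to_segmentation_yaml(data: str) -> str:
    """Mirror the check above: swap coco128.yaml for its -seg variant."""
    if data == "coco128.yaml":
        name, ext = os.path.splitext(data)   # ("coco128", ".yaml")
        seg_yaml = name + "-seg" + ext       # "coco128-seg.yaml"
        warnings.warn(
            f"Dataset yaml {data} is not supported for segmentation. "
            f"Attempting to use {seg_yaml} instead."
        )
        return seg_yaml
    return data

print(swap_to_segmentation_yaml("coco128.yaml"))  # coco128-seg.yaml
print(swap_to_segmentation_yaml("voc.yaml"))      # voc.yaml
```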
21,105 | import glob
import os
import warnings
from argparse import Namespace
from typing import Any, Dict
import yaml
from ultralytics.yolo.data.dataloaders.v5loader import create_dataloader
from ultralytics.yolo.data.utils import ROOT
from ultralytics.yolo.engine.model import DetectionModel
from ultralytics.yolo.engine.trainer import BaseTrainer
def create_grad_sampler(
trainer: BaseTrainer, stride: int, model: DetectionModel
) -> Dict[str, Any]:
if not hasattr(trainer, "train_loader"):
# initialize train loader (if not already initialized)
# and set it as the trainer's attribute
train_set_path = trainer.trainset
train_loader, _ = create_dataloader(
path=train_set_path,
imgsz=trainer.args.imgsz,
batch_size=trainer.args.batch,
stride=stride,
)
trainer.train_loader = train_loader
# convert model's arg to a namespace,
# this is expected by the trainer's criterion
model.args = Namespace(**model.args)
trainer.model = model
grad_sampler = dict(
data_loader_builder=trainer._get_data_loader_builder(),
loss_function=lambda preds, batch: trainer.criterion(preds, batch)[0]
/ train_loader.batch_size,
)
return grad_sampler | null |
21,106 | import glob
import os
import warnings
from argparse import Namespace
from typing import Any, Dict
import yaml
from ultralytics.yolo.data.dataloaders.v5loader import create_dataloader
from ultralytics.yolo.data.utils import ROOT
from ultralytics.yolo.engine.model import DetectionModel
from ultralytics.yolo.engine.trainer import BaseTrainer
The provided code snippet includes necessary dependencies for implementing the `data_from_dataset_path` function. Write a Python function `def data_from_dataset_path(data: str, dataset_path: str) -> str` to solve the following problem:
Given a dataset name, fetch the yaml config for the dataset from the Ultralytics dataset repo, overwrite its 'path' attribute (dataset root dir) to point to the `dataset_path` and finally save it to the current working directory. This makes it possible to create data yaml config files that point to arbitrary directories on disk. :param data: name of the dataset (e.g. "coco.yaml") :param dataset_path: path to the dataset directory :return: a path to the new yaml config file (saved in the current working directory)
Here is the function:
def data_from_dataset_path(data: str, dataset_path: str) -> str:
"""
Given a dataset name, fetch the yaml config for the dataset
from the Ultralytics dataset repo, overwrite its 'path'
attribute (dataset root dir) to point to the `dataset_path`
and finally save it to the current working directory.
    This makes it possible to create data yaml config files that point
    to arbitrary directories on disk.
:param data: name of the dataset (e.g. "coco.yaml")
:param dataset_path: path to the dataset directory
:return: a path to the new yaml config file
(saved in the current working directory)
"""
ultralytics_dataset_path = glob.glob(os.path.join(ROOT, "**", data), recursive=True)
if len(ultralytics_dataset_path) != 1:
raise ValueError(
"Expected to find a single path to the "
f"dataset yaml file: {data}, but found {ultralytics_dataset_path}"
)
ultralytics_dataset_path = ultralytics_dataset_path[0]
with open(ultralytics_dataset_path, "r") as f:
yaml_config = yaml.safe_load(f)
yaml_config["path"] = dataset_path
yaml_save_path = os.path.join(os.getcwd(), data)
# save the new dataset yaml file
with open(yaml_save_path, "w") as outfile:
yaml.dump(yaml_config, outfile, default_flow_style=False)
    return yaml_save_path | Given a dataset name, fetch the yaml config for the dataset from the Ultralytics dataset repo, overwrite its 'path' attribute (dataset root dir) to point to the `dataset_path` and finally save it to the current working directory. This makes it possible to create data yaml config files that point to arbitrary directories on disk. :param data: name of the dataset (e.g. "coco.yaml") :param dataset_path: path to the dataset directory :return: a path to the new yaml config file (saved in the current working directory) |
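The lookup above relies on `glob.glob` with a `**` pattern and `recursive=True` to find the dataset yaml anywhere under the Ultralytics root. A self-contained demonstration of that pattern (`find_dataset_yaml` is an illustrative stand-in):

```python
import glob
import os
import tempfile

def find_dataset_yaml(root: str, data: str) -> list:
    """Recursive lookup pattern used above: search every subdirectory of
    root for a file named `data`."""
    return glob.glob(os.path.join(root, "**", data), recursive=True)

# Demo: one nested copy of coco.yaml is found regardless of depth.
with tempfile.TemporaryDirectory() as root:
    nested = os.path.join(root, "cfg", "datasets")
    os.makedirs(nested)
    open(os.path.join(nested, "coco.yaml"), "w").close()
    matches = find_dataset_yaml(root, "coco.yaml")
    print(len(matches))  # 1
```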
21,107 | import os
import re
import shutil
import subprocess
import sys
import tempfile
import warnings
from copy import copy, deepcopy
from datetime import datetime, timedelta
from functools import partial
from pathlib import Path
from typing import List, Optional
import torch
from sparseml.optim.helpers import load_recipe_yaml_str
from sparseml.pytorch.optim.manager import ScheduledModifierManager
from sparseml.pytorch.sparsification.quantization import skip_onnx_input_quantize
from sparseml.pytorch.utils import ModuleExporter
from sparseml.pytorch.utils.helpers import download_framework_model_by_recipe_type
from sparseml.pytorch.utils.logger import LoggerManager, PythonLogger, WANDBLogger
from sparseml.yolov8.modules import Bottleneck, Conv
from sparseml.yolov8.utils import (
check_coco128_segmentation,
create_grad_sampler,
data_from_dataset_path,
)
from sparseml.yolov8.utils.export_samples import export_sample_inputs_outputs
from sparseml.yolov8.validators import (
SparseClassificationValidator,
SparseDetectionValidator,
SparseSegmentationValidator,
)
from sparsezoo import Model
from sparsezoo.utils import validate_onnx
from ultralytics import __version__
from ultralytics.nn.modules import Detect, Segment
from ultralytics.nn.tasks import SegmentationModel, attempt_load_one_weight
from ultralytics.yolo.cfg import get_cfg
from ultralytics.yolo.data.dataloaders.v5loader import create_dataloader
from ultralytics.yolo.engine.model import TASK_MAP, YOLO
from ultralytics.yolo.engine.trainer import BaseTrainer
from ultralytics.yolo.utils import LOGGER, IterableSimpleNamespace, yaml_load
from ultralytics.yolo.utils.checks import check_file, check_imgsz, check_yaml
from ultralytics.yolo.utils.dist import (
USER_CONFIG_DIR,
ddp_cleanup,
find_free_network_port,
)
from ultralytics.yolo.utils.files import get_latest_run
from ultralytics.yolo.utils.torch_utils import (
TORCH_1_9,
de_parallel,
smart_inference_mode,
)
from ultralytics.yolo.v8.classify import ClassificationTrainer, ClassificationValidator
from ultralytics.yolo.v8.detect import DetectionTrainer, DetectionValidator
from ultralytics.yolo.v8.segment import SegmentationTrainer, SegmentationValidator
def generate_ddp_file(trainer):
# NOTE: adapted from ultralytics.yolo.utils.dist.generate_ddp_file
content = f"""if __name__ == "__main__":
from sparseml.yolov8.trainers import {trainer.__class__.__name__}
trainer = {trainer.__class__.__name__}(config={dict(trainer.args)})
trainer.train()
"""
(USER_CONFIG_DIR / "DDP").mkdir(exist_ok=True)
with tempfile.NamedTemporaryFile(
prefix="_temp_",
suffix=f"{id(trainer)}.py",
mode="w+",
encoding="utf-8",
dir=USER_CONFIG_DIR / "DDP",
delete=False,
) as file:
file.write(content)
return file.name
The provided code snippet includes necessary dependencies for implementing the `generate_ddp_command` function. Write a Python function `def generate_ddp_command(world_size, trainer)` to solve the following problem:
Generates and returns command for distributed training.
Here is the function:
def generate_ddp_command(world_size, trainer):
# NOTE: copied from ultralytics.yolo.utils.dist.generate_ddp_command
"""Generates and returns command for distributed training."""
import __main__ # noqa local import to avoid https://github.com/Lightning-AI/lightning/issues/15218
if not trainer.resume:
shutil.rmtree(trainer.save_dir) # remove the save_dir
file = str(Path(sys.argv[0]).resolve())
safe_pattern = re.compile(
r"^[a-zA-Z0-9_. /\\-]{1,128}$"
    ) # allowed characters and a maximum of 128 characters
if not (
safe_pattern.match(file) and Path(file).exists() and file.endswith(".py")
): # using CLI
file = generate_ddp_file(trainer)
dist_cmd = "torch.distributed.run" if TORCH_1_9 else "torch.distributed.launch"
port = find_free_network_port()
cmd = [
sys.executable,
"-m",
dist_cmd,
"--nproc_per_node",
f"{world_size}",
"--master_port",
f"{port}",
file,
]
return cmd, file | Generates and returns command for distributed training. |
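The allow-list regex above is what decides whether `sys.argv[0]` can be spliced straight into the `torch.distributed` launch command. A sketch isolating that check (the existence test on the real filesystem is omitted here; `is_safe_script_path` is a hypothetical helper):

```python
import re

# Same allow-list as above: safe characters only, at most 128 of them.
SAFE_PATTERN = re.compile(r"^[a-zA-Z0-9_. /\\-]{1,128}$")

def is_safe_script_path(path: str) -> bool:
    """True if path looks safe to splice into a DDP launch command.
    (The real check also requires the file to exist on disk.)"""
    return bool(SAFE_PATTERN.match(path)) and path.endswith(".py")

print(is_safe_script_path("scripts/train.py"))    # True
print(is_safe_script_path("train.py; rm -rf /"))  # False: ';' is not allowed
```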
21,108 | import os
import re
import shutil
import subprocess
import sys
import tempfile
import warnings
from copy import copy, deepcopy
from datetime import datetime, timedelta
from functools import partial
from pathlib import Path
from typing import List, Optional
import torch
from sparseml.optim.helpers import load_recipe_yaml_str
from sparseml.pytorch.optim.manager import ScheduledModifierManager
from sparseml.pytorch.sparsification.quantization import skip_onnx_input_quantize
from sparseml.pytorch.utils import ModuleExporter
from sparseml.pytorch.utils.helpers import download_framework_model_by_recipe_type
from sparseml.pytorch.utils.logger import LoggerManager, PythonLogger, WANDBLogger
from sparseml.yolov8.modules import Bottleneck, Conv
from sparseml.yolov8.utils import (
check_coco128_segmentation,
create_grad_sampler,
data_from_dataset_path,
)
from sparseml.yolov8.utils.export_samples import export_sample_inputs_outputs
from sparseml.yolov8.validators import (
SparseClassificationValidator,
SparseDetectionValidator,
SparseSegmentationValidator,
)
from sparsezoo import Model
from sparsezoo.utils import validate_onnx
from ultralytics import __version__
from ultralytics.nn.modules import Detect, Segment
from ultralytics.nn.tasks import SegmentationModel, attempt_load_one_weight
from ultralytics.yolo.cfg import get_cfg
from ultralytics.yolo.data.dataloaders.v5loader import create_dataloader
from ultralytics.yolo.engine.model import TASK_MAP, YOLO
from ultralytics.yolo.engine.trainer import BaseTrainer
from ultralytics.yolo.utils import LOGGER, IterableSimpleNamespace, yaml_load
from ultralytics.yolo.utils.checks import check_file, check_imgsz, check_yaml
from ultralytics.yolo.utils.dist import (
USER_CONFIG_DIR,
ddp_cleanup,
find_free_network_port,
)
from ultralytics.yolo.utils.files import get_latest_run
from ultralytics.yolo.utils.torch_utils import (
TORCH_1_9,
de_parallel,
smart_inference_mode,
)
from ultralytics.yolo.v8.classify import ClassificationTrainer, ClassificationValidator
from ultralytics.yolo.v8.detect import DetectionTrainer, DetectionValidator
from ultralytics.yolo.v8.segment import SegmentationTrainer, SegmentationValidator
def _get_submodule(module: torch.nn.Module, path: List[str]) -> torch.nn.Module:
if not path:
return module
return _get_submodule(getattr(module, path[0]), path[1:])
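`_get_submodule` is plain recursive `getattr` over the dotted names produced by `named_modules()`. The same recursion, demonstrated torch-free on `SimpleNamespace` trees:

```python
from types import SimpleNamespace

def get_submodule(module, path):
    """Same recursion as _get_submodule: walk attribute names one at a time."""
    if not path:
        return module
    return get_submodule(getattr(module, path[0]), path[1:])

# named_modules() yields dotted names like "backbone.conv1"; split on "."
leaf = SimpleNamespace(kind="conv")
model = SimpleNamespace(backbone=SimpleNamespace(conv1=leaf))
print(get_submodule(model, "backbone.conv1".split(".")) is leaf)  # True
print(get_submodule(model, []) is model)                          # True
```

An empty path returns the module itself, which is what lets `_modify_arch_for_quantization` resolve the parent of a top-level layer.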
class Conv(nn.Module):
"""
Slightly modified version of ultralytics Conv with SiLU instantiated
    for each instance. This is to help with SiLU naming in SparseML recipes
"""
def __init__(self, layer: ulm.Conv):
super().__init__()
self.conv = layer.conv
self.bn = layer.bn
for attr in ["i", "f", "type"]:
if hasattr(layer, attr):
setattr(self, attr, getattr(layer, attr))
is_silu = isinstance(layer.act, nn.SiLU)
self.act = nn.SiLU() if is_silu else layer.act
def forward(self, x):
return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x):
return self.act(self.conv(x))
class Bottleneck(nn.Module):
"""
    Modified version of ultralytics Bottleneck with inputs of the residual
adds being marked for potential quantization
"""
def __init__(self, layer: ulm.Bottleneck):
super().__init__()
self.cv1 = layer.cv1
self.cv2 = layer.cv2
self.add = layer.add
for attr in ["i", "f", "type"]:
if hasattr(layer, attr):
setattr(self, attr, getattr(layer, attr))
self.add_input_0 = AddInput()
self.add_input_1 = AddInput()
def forward(self, x):
return (
self.add_input_0(x) + self.add_input_1(self.cv2(self.cv1(x)))
if self.add
else self.cv2(self.cv1(x))
)
def _modify_arch_for_quantization(model):
layer_map = {"Bottleneck": Bottleneck, "Conv": Conv}
for name, layer in model.named_modules():
cls_name = layer.__class__.__name__
if cls_name in layer_map:
submodule_path = name.split(".")
parent_module = _get_submodule(model, submodule_path[:-1])
setattr(parent_module, submodule_path[-1], layer_map[cls_name](layer)) | null |
21,109 | import functools
from typing import Optional
from sparseml.base import check_version
_DEF_TF_MIN_VERSION = "2.1.0"
_DEF_KERAS_MIN_VERSION = "2.4.3"
def check_keras_install(
min_tf_version: Optional[str] = _DEF_TF_MIN_VERSION,
max_tf_version: Optional[str] = None,
min_native_version: Optional[str] = _DEF_KERAS_MIN_VERSION,
require_tensorflow_backend: bool = True,
raise_on_error: bool = True,
) -> bool:
"""
Check that the keras package is installed.
If raise_on_error, will raise an ImportError if it is not installed or
the required version range, if set, is not installed.
If not raise_on_error, will return True if installed with required version
and False otherwise.
    :param min_tf_version: The minimum version for tensorflow that it must be
        greater than or equal to, if unset will require no minimum version
    :type min_tf_version: str
    :param max_tf_version: The maximum version for tensorflow that it must be
        less than or equal to, if unset will require no maximum version.
    :type max_tf_version: str
:param min_native_version: The minimum version for native keras that it must be
greater than or equal to if installed
:type min_native_version: str
:param require_tensorflow_backend: True to require keras to use the tensorflow
backend, False otherwise.
:type require_tensorflow_backend: bool
:param raise_on_error: True to raise any issues such as not installed,
minimum version, or maximum version as ImportError. False to return the result.
:type raise_on_error: bool
:return: If raise_on_error, will return False if keras is not installed
or the version is outside the accepted bounds and True if everything is correct.
:rtype: bool
"""
if keras_err is not None:
if raise_on_error:
raise keras_err
return False
if tensorflow_err is not None and require_tensorflow_backend:
if raise_on_error:
raise tensorflow_err
return False
if require_tensorflow_backend and not check_version(
"tensorflow", min_tf_version, max_tf_version, raise_on_error
):
return False
if is_native_keras and not check_version(
"keras", min_native_version, None, raise_on_error
):
return False
return True
The provided code snippet includes necessary dependencies for implementing the `require_keras` function. Write a Python function `def require_keras( min_tf_version: Optional[str] = _DEF_TF_MIN_VERSION, max_tf_version: Optional[str] = None, min_native_version: Optional[str] = _DEF_KERAS_MIN_VERSION, require_tensorflow_backend: bool = True, )` to solve the following problem:
Decorator function to require use of keras. Will check that keras package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_keras_install` for more info. :param min_tf_version: The minimum version for keras that it must be greater than or equal to, if unset will require no minimum version :type min_tf_version: str :param max_tf_version: The maximum version for keras that it must be less than or equal to, if unset will require no maximum version. :type max_tf_version: str :param min_native_version: The minimum version for native keras that it must be greater than or equal to if installed :type min_native_version: str :param require_tensorflow_backend: True to require keras to use the tensorflow backend, False otherwise. :type require_tensorflow_backend: bool
Here is the function:
def require_keras(
min_tf_version: Optional[str] = _DEF_TF_MIN_VERSION,
max_tf_version: Optional[str] = None,
min_native_version: Optional[str] = _DEF_KERAS_MIN_VERSION,
require_tensorflow_backend: bool = True,
):
"""
Decorator function to require use of keras.
Will check that keras package is installed and within the bounding
ranges of min_version and max_version if they are set before calling
the wrapped function.
See :func:`check_keras_install` for more info.
    :param min_tf_version: The minimum version for tensorflow that it must be
        greater than or equal to, if unset will require no minimum version
    :type min_tf_version: str
    :param max_tf_version: The maximum version for tensorflow that it must be
        less than or equal to, if unset will require no maximum version.
    :type max_tf_version: str
    :param min_native_version: The minimum version for native keras that it must be
        greater than or equal to if installed
    :type min_native_version: str
    :param require_tensorflow_backend: True to require keras to use the tensorflow
        backend, False otherwise.
    :type require_tensorflow_backend: bool
"""
def _decorator(func):
@functools.wraps(func)
def _wrapper(*args, **kwargs):
check_keras_install(
min_tf_version,
max_tf_version,
min_native_version,
require_tensorflow_backend,
)
return func(*args, **kwargs)
return _wrapper
return _decorator | Decorator function to require use of keras. Will check that keras package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_keras_install` for more info. :param min_tf_version: The minimum version for keras that it must be greater than or equal to, if unset will require no minimum version :type min_tf_version: str :param max_tf_version: The maximum version for keras that it must be less than or equal to, if unset will require no maximum version. :type max_tf_version: str :param min_native_version: The minimum version for native keras that it must be greater than or equal to if installed :type min_native_version: str :param require_tensorflow_backend: True to require keras to use the tensorflow backend, False otherwise. :type require_tensorflow_backend: bool :param require_tensorflow_backend: True to require keras to use the tensorflow backend, False otherwise. :type require_tensorflow_backend: bool |
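The `require_keras` decorator follows the usual decorator-factory shape: a check runs before every call, and `functools.wraps` preserves the wrapped function's metadata. A generic, dependency-free sketch of that shape (`require_backend` is illustrative, not a sparseml API):

```python
import functools

def require_backend(check):
    """Decorator factory: run `check` (which raises on failure) before
    every call to the wrapped function."""
    def _decorator(func):
        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            check()
            return func(*args, **kwargs)
        return _wrapper
    return _decorator

calls = []

@require_backend(lambda: calls.append("checked"))
def train():
    return "trained"

print(train())         # trained
print(train.__name__)  # train  (metadata preserved by functools.wraps)
```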
21,110 | import functools
from typing import Optional
from sparseml.base import check_version
_KERAS2ONNX_MIN_VERSION = "1.0.0"
def check_keras2onnx_install(
min_version: Optional[str] = _KERAS2ONNX_MIN_VERSION,
max_version: Optional[str] = None,
raise_on_error: bool = True,
) -> bool:
"""
Check that the keras2onnx package is installed.
If raise_on_error, will raise an ImportError if it is not installed or
the required version range, if set, is not installed.
If not raise_on_error, will return True if installed with required version
and False otherwise.
:param min_version: The minimum version for keras2onnx that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for keras2onnx that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
:param raise_on_error: True to raise any issues such as not installed,
minimum version, or maximum version as ImportError. False to return the result.
:type raise_on_error: bool
:return: If raise_on_error, will return False if keras2onnx is not installed
or the version is outside the accepted bounds and True if everything is correct.
:rtype: bool
"""
if keras2onnx_err is not None:
if raise_on_error:
raise keras2onnx_err
return False
return check_version("keras2onnx", min_version, max_version, raise_on_error)
The provided code snippet includes necessary dependencies for implementing the `require_keras2onnx` function. Write a Python function `def require_keras2onnx( min_version: Optional[str] = _KERAS2ONNX_MIN_VERSION, max_version: Optional[str] = None, )` to solve the following problem:
Decorator function to require use of keras2onnx. Will check that keras2onnx package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_keras2onnx_install` for more info. :param min_version: The minimum version for keras2onnx that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for keras2onnx that it must be less than or equal to, if unset will require no maximum version. :type max_version: str
Here is the function:
def require_keras2onnx(
min_version: Optional[str] = _KERAS2ONNX_MIN_VERSION,
max_version: Optional[str] = None,
):
"""
Decorator function to require use of keras2onnx.
Will check that keras2onnx package is installed and within the bounding
ranges of min_version and max_version if they are set before calling
the wrapped function.
See :func:`check_keras2onnx_install` for more info.
    :param min_version: The minimum version for keras2onnx that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for keras2onnx that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
"""
def _decorator(func):
@functools.wraps(func)
def _wrapper(*args, **kwargs):
check_keras2onnx_install(min_version, max_version)
return func(*args, **kwargs)
return _wrapper
    return _decorator | Decorator function to require use of keras2onnx. Will check that keras2onnx package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_keras2onnx_install` for more info. :param min_version: The minimum version for keras2onnx that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for keras2onnx that it must be less than or equal to, if unset will require no maximum version. :type max_version: str
21,111 | import logging
from sparseml.sparsification import SparsificationInfo
_LOGGER = logging.getLogger(__name__)
The provided code snippet includes necessary dependencies for implementing the `sparsification_info` function. Write a Python function `def sparsification_info() -> SparsificationInfo` to solve the following problem:
Load the available setup for sparsifying model within keras. :return: The sparsification info for the keras framework :rtype: SparsificationInfo
Here is the function:
def sparsification_info() -> SparsificationInfo:
"""
Load the available setup for sparsifying model within keras.
:return: The sparsification info for the keras framework
:rtype: SparsificationInfo
"""
_LOGGER.debug("getting sparsification info for keras")
info = SparsificationInfo(modifiers=[]) # TODO: fill in once available
_LOGGER.info("retrieved sparsification info for keras: %s", info)
return info | Load the available setup for sparsifying model within keras. :return: The sparsification info for the keras framework :rtype: SparsificationInfo |
21,112 | from typing import Tuple
import tensorflow
The provided code snippet includes necessary dependencies for implementing the `random_scaling_crop` function. Write a Python function `def random_scaling_crop( scale_range: Tuple[int, int] = (0.8, 1.0), ratio_range: Tuple[int, int] = (3.0 / 4.0, 4.0 / 3.0), )` to solve the following problem:
Random crop implementation which also randomly scales the crop taken as well as the aspect ratio of the crop. :param scale_range: the (min, max) of the crop scales to take from the orig image :param ratio_range: the (min, max) of the aspect ratios to take from the orig image :return: the callable function for random scaling crop op, takes in the image and outputs randomly cropped image
Here is the function:
def random_scaling_crop(
scale_range: Tuple[int, int] = (0.8, 1.0),
ratio_range: Tuple[int, int] = (3.0 / 4.0, 4.0 / 3.0),
):
"""
Random crop implementation which also randomly scales the crop taken
as well as the aspect ratio of the crop.
:param scale_range: the (min, max) of the crop scales to take from the orig image
:param ratio_range: the (min, max) of the aspect ratios to take from the orig image
:return: the callable function for random scaling crop op,
takes in the image and outputs randomly cropped image
"""
def rand_crop(img: tensorflow.Tensor):
orig_shape = tensorflow.shape(img)
scale = tensorflow.random.uniform(
shape=[1], minval=scale_range[0], maxval=scale_range[1]
)[0]
ratio = tensorflow.random.uniform(
shape=[1], minval=ratio_range[0], maxval=ratio_range[1]
)[0]
height = tensorflow.minimum(
tensorflow.cast(
tensorflow.round(
tensorflow.cast(orig_shape[0], dtype=tensorflow.float32)
* scale
/ ratio
),
tensorflow.int32,
),
orig_shape[0],
)
width = tensorflow.minimum(
tensorflow.cast(
tensorflow.round(
tensorflow.cast(orig_shape[1], dtype=tensorflow.float32) * scale
),
tensorflow.int32,
),
orig_shape[1],
)
img = tensorflow.image.random_crop(img, [height, width, orig_shape[2]])
return img
return rand_crop | Random crop implementation which also randomly scales the crop taken as well as the aspect ratio of the crop. :param scale_range: the (min, max) of the crop scales to take from the orig image :param ratio_range: the (min, max) of the aspect ratios to take from the orig image :return: the callable function for random scaling crop op, takes in the image and outputs randomly cropped image |
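The crop dimensions computed inside `rand_crop` reduce to simple arithmetic: height is scaled by `scale / ratio`, width by `scale`, and both are clamped to the original size. A dependency-free sketch of that computation, with `scale` and `ratio` passed in as fixed values rather than sampled:

```python
def crop_size(height, width, scale, ratio):
    # Mirrors the arithmetic in rand_crop: round the scaled sizes and
    # clamp each to the original image dimension
    crop_h = min(round(height * scale / ratio), height)
    crop_w = min(round(width * scale), width)
    return crop_h, crop_w
```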
21,113 | import random
from typing import Tuple, Union
import tensorflow as tf
from sparseml.keras.datasets.classification import (
ImageFolderDataset,
SplitsTransforms,
imagenet_normalizer,
)
from sparseml.keras.datasets.helpers import random_scaling_crop
from sparseml.keras.datasets.registry import DatasetRegistry
from sparseml.keras.utils import keras
from sparseml.utils import clean_path
from sparseml.utils.datasets import (
IMAGENET_RGB_MEANS,
IMAGENET_RGB_STDS,
default_dataset_path,
)
def torch_imagenet_normalizer():
def normalizer(image: tf.Tensor):
return imagenet_normalizer(image, "torch")
return normalizer | null |
21,114 | import random
from typing import Tuple, Union
import tensorflow as tf
from sparseml.keras.datasets.classification import (
ImageFolderDataset,
SplitsTransforms,
imagenet_normalizer,
)
from sparseml.keras.datasets.helpers import random_scaling_crop
from sparseml.keras.datasets.registry import DatasetRegistry
from sparseml.keras.utils import keras
from sparseml.utils import clean_path
from sparseml.utils.datasets import (
IMAGENET_RGB_MEANS,
IMAGENET_RGB_STDS,
default_dataset_path,
)
def imagenet_pre_resize_processor():
def processor(image: tf.Tensor):
image_batch = tf.expand_dims(image, axis=0)
        # Resize the image to match torchvision's Resize transform used by
        # the PyTorch code path for ImageNet:
        # torchvision.transforms.Resize(256)
        # which resizes the shorter side of the image to 256 and scales the
        # other side to preserve the aspect ratio
shape = tf.shape(image)
h, w = shape[0], shape[1]
if h > w:
new_h, new_w = tf.cast(256 * h / w, dtype=tf.uint16), tf.constant(
256, dtype=tf.uint16
)
else:
new_h, new_w = tf.constant(256, dtype=tf.uint16), tf.cast(
256 * w / h, dtype=tf.uint16
)
resizer = keras.layers.experimental.preprocessing.Resizing(new_h, new_w)
image_batch = tf.cast(resizer(image_batch), dtype=tf.uint8)
# Center crop
center_cropper = keras.layers.experimental.preprocessing.CenterCrop(224, 224)
image_batch = tf.cast(center_cropper(image_batch), dtype=tf.uint8)
return image_batch[0, :]
return processor | null |
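The resize step above mirrors `torchvision.transforms.Resize(256)`: the shorter side becomes 256 and the longer side keeps the aspect ratio; the `tf.cast` to uint16 truncates. A plain-Python sketch of just the size computation, with `int()` standing in for the truncating cast:

```python
def resize_shorter_to(h, w, target=256):
    # Scale so the shorter side equals `target`; the other side follows
    # the aspect ratio (truncated, matching the uint16 cast)
    if h > w:
        return int(target * h / w), target
    return target, int(target * w / h)
```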
21,115 | import glob
import os
import random
from typing import Callable, Iterable, NamedTuple, Tuple, Union
import numpy
import tensorflow
from sparseml.keras.datasets.dataset import Dataset
from sparseml.keras.datasets.helpers import random_scaling_crop
from sparseml.keras.datasets.registry import DatasetRegistry
from sparseml.keras.utils.compat import keras
from sparseml.utils import clean_path
from sparseml.utils.datasets import IMAGENET_RGB_MEANS, IMAGENET_RGB_STDS
def imagenet_normalizer(img: tensorflow.Tensor, mode: str):
"""
Normalize an image using mean and std of the imagenet dataset
:param img: The input image to normalize
:param mode: either "tf", "caffe", "torch"
:return: The normalized image
"""
if mode == "tf":
preprocess_input = keras.applications.mobilenet.preprocess_input
elif mode == "caffe":
preprocess_input = keras.applications.resnet.preprocess_input
elif mode == "torch":
preprocess_input = None
else:
raise ValueError("Unknown preprocessing method")
if preprocess_input is not None:
processed_image = preprocess_input(img)
else:
res = tensorflow.cast(img, dtype=tensorflow.float32) / 255.0
means = tensorflow.constant(IMAGENET_RGB_MEANS, dtype=tensorflow.float32)
stds = tensorflow.constant(IMAGENET_RGB_STDS, dtype=tensorflow.float32)
processed_image = (res - means) / stds
return processed_image
def default_imagenet_normalizer():
def normalizer(img: tensorflow.Tensor):
# Default to the same preprocessing used by Keras Applications ResNet
return imagenet_normalizer(img, "caffe")
return normalizer | null |
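For the `"torch"` mode, `imagenet_normalizer` scales pixels to `[0, 1]` and then normalizes each channel with the dataset mean and standard deviation. A single-pixel sketch, assuming the usual ImageNet constants for `IMAGENET_RGB_MEANS` / `IMAGENET_RGB_STDS`:

```python
# Assumed values of IMAGENET_RGB_MEANS / IMAGENET_RGB_STDS
MEANS = (0.485, 0.456, 0.406)
STDS = (0.229, 0.224, 0.225)


def normalize_pixel(rgb):
    # (pixel / 255 - mean) / std, applied per channel
    return tuple((c / 255.0 - m) / s for c, m, s in zip(rgb, MEANS, STDS))
```

A pixel equal to the channel means maps to (approximately) zero in every channel.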
21,116 |
def get_layer_name_from_param(param: str):
    # Expects a TF variable name such as "dense/kernel:0"; returns the layer
    # name portion ("dense") after validating the weight suffix
    known_weights = ["kernel", "bias"]
    pos = param.rfind("/")
    if pos > -1:
        suff = param[pos + 1 :]
        colon_pos = suff.rfind(":")
        if suff[:colon_pos] not in known_weights:
            raise ValueError(f"Unrecognized weight names. Expected: {known_weights}")
        return param[:pos] | null
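Usage of `get_layer_name_from_param`, restated here so the snippet runs standalone: TF variable names look like `"<layer>/<weight>:<index>"`, and the helper returns the layer portion after validating the weight suffix.

```python
def get_layer_name_from_param(param):
    # Restated helper: "<layer>/<weight>:<index>" -> "<layer>",
    # raising on weight suffixes other than kernel/bias
    known_weights = ["kernel", "bias"]
    pos = param.rfind("/")
    if pos > -1:
        suff = param[pos + 1 :]
        if suff[: suff.rfind(":")] not in known_weights:
            raise ValueError(f"Unrecognized weight names. Expected: {known_weights}")
        return param[:pos]
```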
21,117 | import abc
import collections
import inspect
from typing import List, Union
import tensorflow
from sparseml.keras.optim.mask_pruning_creator import (
PruningMaskCreator,
load_mask_creator,
)
from sparseml.keras.utils import keras
_LAYER_PRUNABLE_PARAMS_MAP = {
keras.layers.Conv1D: ["kernel"],
keras.layers.Conv2D: ["kernel"],
keras.layers.Conv2DTranspose: ["kernel"],
keras.layers.Conv3D: ["kernel"],
keras.layers.Conv3DTranspose: ["kernel"],
keras.layers.Dense: ["kernel"],
keras.layers.Embedding: ["embeddings"],
keras.layers.LocallyConnected1D: ["kernel"],
keras.layers.LocallyConnected2D: ["kernel"],
keras.layers.SeparableConv1D: ["pointwise_kernel"],
keras.layers.SeparableConv2D: ["pointwise_kernel"],
}
def _get_default_prunable_params(layer: keras.layers.Layer):
if layer.__class__ in _LAYER_PRUNABLE_PARAMS_MAP:
prunable_param_names = _LAYER_PRUNABLE_PARAMS_MAP[layer.__class__]
return {
"{}/{}".format(layer.name, param_name): getattr(layer, param_name)
for param_name in prunable_param_names
}
else:
expected_layers = [layer.__class__ for layer in _LAYER_PRUNABLE_PARAMS_MAP]
raise ValueError(
"Layer {} cannot be pruned. Expected layers: {}".format(
layer, expected_layers
)
) | null |
21,118 | import abc
import collections
import inspect
from typing import List, Union
import tensorflow
from sparseml.keras.optim.mask_pruning_creator import (
PruningMaskCreator,
load_mask_creator,
)
from sparseml.keras.utils import keras
class MaskedLayer(keras.layers.Wrapper):
"""
Masked layer is a layer wrapping around another layer with a mask; the mask however
is shared if the enclosed layer is again of MaskedLayer type
:param layer: either a MaskedLayer or a keras layer
:param pruning_scheduler: a pruning scheduler
:param mask_creator: a mask creator
:param kwargs: optional params for keras layer constructor, e.g. layer name
"""
def __init__(
self,
layer: keras.layers.Layer,
pruning_scheduler: PruningScheduler,
mask_type: Union[str, List[int]] = "unstructured",
**kwargs,
):
if not isinstance(layer, MaskedLayer) and not isinstance(
layer, keras.layers.Layer
):
raise ValueError(
"Invalid layer passed in, expected MaskedLayer or a keras Layer, "
"but got {}".format(layer)
)
super(MaskedLayer, self).__init__(layer, **kwargs)
self._layer = layer
self._pruning_scheduler = pruning_scheduler
self._mask_type = mask_type
self._mask_creator = None
self._pruning_vars = []
self._global_step = None
self._mask_updater = None
def build(self, input_shape):
super(MaskedLayer, self).build(input_shape)
self._mask_creator = load_mask_creator(self._mask_type)
self._pruning_vars = self._reuse_or_create_pruning_vars()
self._global_step = self.add_weight(
"global_step",
shape=[],
initializer=keras.initializers.Constant(-1),
dtype=tensorflow.int64,
trainable=False,
)
self._mask_updater = MaskAndWeightUpdater(
self._pruning_vars,
self._pruning_scheduler,
self._mask_creator,
self._global_step,
)
def _reuse_or_create_pruning_vars(
self,
) -> List[MaskedParamInfo]:
if isinstance(self._layer, MaskedLayer):
            # All nested masked layers reuse the pruning vars created
# for the "core", inner-most, Keras built-in layer
return self._layer.pruning_vars
assert isinstance(self._layer, keras.layers.Layer)
prunable_params = _get_default_prunable_params(self._layer)
pruning_vars = []
for name, param in prunable_params.items():
mask = self.add_weight(
"mask",
shape=param.shape,
initializer=keras.initializers.get("ones"),
dtype=param.dtype,
trainable=False,
)
sparsity = self.add_weight(
"sparsity",
shape=[],
initializer=keras.initializers.get("zeros"),
dtype=param.dtype,
trainable=False,
)
pruning_vars.append(MaskedParamInfo(name, param, mask, sparsity))
return pruning_vars
def call(self, inputs: tensorflow.Tensor, training=None):
"""
Forward function for calling layer instance as function
"""
training = keras.backend.learning_phase() if training is None else training
def _apply_masks_to_weights():
with tensorflow.control_dependencies([self._mask_updater.apply_masks()]):
return tensorflow.no_op("update")
def _no_apply_masks_to_weights():
return tensorflow.no_op("no_update_masks")
tensorflow.cond(
tensorflow.cast(training, tensorflow.bool),
_apply_masks_to_weights,
_no_apply_masks_to_weights,
)
args = inspect.getfullargspec(self._layer.call).args
if "training" in args:
return self._layer.call(inputs, training=training)
else:
return self._layer.call(inputs)
def get_config(self):
"""
Get layer config
Serialization and deserialization should be done using
keras.serialize/deserialize, which create and retrieve the "class_name"
field automatically.
The resulting config below therefore does not contain the field.
"""
config = super(MaskedLayer, self).get_config()
if "layer" not in config:
raise RuntimeError("Expected 'layer' field not found in config")
config.update(
{
"pruning_scheduler": self._pruning_scheduler.get_config(),
"mask_type": self._mask_type,
}
)
return config
    @classmethod
    def from_config(cls, config):
config = config.copy()
layer = keras.layers.deserialize(
config.pop("layer"), custom_objects={"MaskedLayer": MaskedLayer}
)
if not isinstance(layer, MaskedLayer) and not isinstance(
layer, keras.layers.Layer
):
raise RuntimeError("Unexpected layer created from config")
pruning_scheduler = PruningScheduler.deserialize(
config.pop("pruning_scheduler")
)
if not isinstance(pruning_scheduler, PruningScheduler):
raise RuntimeError("Unexpected pruning scheduler type created from config")
mask_type = config.pop("mask_type")
masked_layer = MaskedLayer(layer, pruning_scheduler, mask_type, **config)
return masked_layer
def compute_output_shape(self, input_shape):
return self._layer.compute_output_shape(input_shape)
    @property
    def global_step(self):
return self._global_step
    @property
    def mask_updater(self):
return self._mask_updater
    @property
    def masks(self):
return [masked_param_info.mask for masked_param_info in self._pruning_vars]
    @property
    def pruning_vars(self):
return self._pruning_vars
    @property
    def pruned_layer(self):
if isinstance(self._layer, MaskedLayer):
return self._layer.pruned_layer
elif isinstance(self._layer, keras.layers.Layer):
return self._layer
else:
raise RuntimeError("Unrecognized layer")
    @property
    def masked_layer(self):
return self._layer
The provided code snippet includes necessary dependencies for implementing the `remove_pruning_masks` function. Write a Python function `def remove_pruning_masks(model: keras.Model)` to solve the following problem:
Remove pruning masks from a model that was pruned using the MaskedLayer logic :param model: a model that was pruned using MaskedLayer :return: the original model with pruned weights
Here is the function:
def remove_pruning_masks(model: keras.Model):
"""
Remove pruning masks from a model that was pruned using the MaskedLayer logic
:param model: a model that was pruned using MaskedLayer
:return: the original model with pruned weights
"""
def _get_pruned_layer(layer):
# If the model is loaded through SavedFormat, the layer of type
# MaskedLayer would belong to a special package, hence the
# second check below based simply on class name
is_masked_layer = isinstance(
layer, MaskedLayer
) or layer.__class__.__name__.endswith("MaskedLayer")
if is_masked_layer:
return _get_pruned_layer(layer.layer)
elif isinstance(layer, keras.layers.Layer):
return layer
else:
raise ValueError("Unknown layer type")
def _remove_pruning_masks(layer):
is_masked_layer = isinstance(
layer, MaskedLayer
) or layer.__class__.__name__.endswith("MaskedLayer")
if is_masked_layer:
return _get_pruned_layer(layer)
return layer
# TODO: while the resulting model could be exported to ONNX, its built status
# is removed
return keras.models.clone_model(
model, input_tensors=None, clone_function=_remove_pruning_masks
) | Remove pruning masks from a model that was pruned using the MaskedLayer logic :param model: a model that was pruned using MaskedLayer :return: the original model with pruned weights |
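`MaskedLayer.call` above forwards `training` to the inner layer only when that layer's `call` declares the argument in its signature. A dependency-free sketch of that dispatch (the two toy layer functions are hypothetical):

```python
import inspect


def call_with_optional_training(fn, inputs, training):
    # Pass `training` only when the callable declares it, mirroring
    # the inspect.getfullargspec check in MaskedLayer.call
    if "training" in inspect.getfullargspec(fn).args:
        return fn(inputs, training=training)
    return fn(inputs)


def layer_with_training(x, training=False):
    return ("train-aware", training)


def plain_layer(x):
    return ("plain",)
```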
21,119 | from typing import List, Tuple, Union
from tensorflow import Tensor
from sparseml.keras.utils import KerasLogger, keras
from sparseml.optim import (
BaseModifier,
BaseScheduled,
BaseUpdate,
ModifierProp,
ModifierYAML,
)
from sparseml.utils import KERAS_FRAMEWORK
The provided code snippet includes necessary dependencies for implementing the `epoch_to_steps` function. Write a Python function `def epoch_to_steps(epoch: float, steps_per_epoch: int, min_epoch: float = 0.0) -> int` to solve the following problem:
:param epoch: the (fractional) epoch to convert to the proper number of steps :param steps_per_epoch: number of steps (batches) taken per epoch while training :param min_epoch: if the epoch is less than this, will be set to it. Default 0 :return: the number of steps representing the epoch and state of the epoch
Here is the function:
def epoch_to_steps(epoch: float, steps_per_epoch: int, min_epoch: float = 0.0) -> int:
"""
:param epoch: the (fractional) epoch to convert to the proper number of steps
:param steps_per_epoch: number of steps (batches) taken per epoch while training
:param min_epoch: if the epoch is less than this, will be set to it. Default 0
:return: the number of steps representing the epoch and state of the epoch
"""
if epoch < min_epoch:
epoch = min_epoch
return round(steps_per_epoch * epoch) | :param epoch: the (fractional) epoch to convert to the proper number of steps :param steps_per_epoch: number of steps (batches) taken per epoch while training :param min_epoch: if the epoch is less than this, will be set to it. Default 0 :return: the number of steps representing the epoch and state of the epoch |
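`epoch_to_steps` is a small clamping-and-rounding helper; restated here so the examples run standalone:

```python
def epoch_to_steps(epoch, steps_per_epoch, min_epoch=0.0):
    # Clamp the (possibly fractional) epoch, then convert to a step count
    if epoch < min_epoch:
        epoch = min_epoch
    return round(steps_per_epoch * epoch)
```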
21,120 | from abc import ABC, abstractmethod
from typing import Any, Callable, Iterable, List, Tuple, Union
import numpy
import tensorflow
class PruningMaskCreator(ABC):
"""
Base abstract class for a sparsity mask creator.
Subclasses should define all methods for creating masks and their initializers
"""
def get_mask_initializer(
self,
tensor: tensorflow.Tensor,
) -> Callable[[], tensorflow.Tensor]:
"""
:param tensor: A tensor of a model layer's weights
:return: Tensor initializer function for this sparsity mask
"""
raise NotImplementedError()
def create_sparsity_mask(
self,
tensor: tensorflow.Tensor,
sparsity: tensorflow.Tensor,
) -> tensorflow.Tensor:
"""
:param tensor: A tensor of a model layer's weights
:param sparsity: the target sparsity to use for assigning the masks
:return: A sparsity mask close to the set sparsity based on the values of
the input tensor
"""
raise NotImplementedError()
class BlockPruningMaskCreator(GroupedPruningMaskCreator):
"""
Structured sparsity mask creator that groups the input tensor into blocks of
shape block_shape.
block_shape must divide the shape of any input tensor evenly and must have exactly
2 elements for the shape of in and out channels in the blocks.
    :param block_shape: The shape of the blocks of in and out channels to
        structure the mask by. -1 represents blocking along the entire dimension.
"""
def __init__(
self,
block_shape: List[int],
grouping_op_name: str = "mean",
):
if len(block_shape) != 2:
raise ValueError(
(
"Invalid block_shape: {}"
" , block_shape must have length == 2 for in and out channels"
).format(block_shape)
)
self._block_shape = block_shape
self._grouping_op = GroupedPruningMaskCreator.get_grouping_op(grouping_op_name)
def group_tensor(self, tensor: tensorflow.Tensor) -> tensorflow.Tensor:
"""
:param tensor: The tensor to transform
:return: The absolute mean values of the tensor grouped by blocks of
shape self._block_shape
"""
blocked_tens_shape, _ = self._get_blocked_tens_shape_and_validate(tensor.shape)
# reorder so that in and out channel dimensions come before kernel
n_dims = len(tensor.shape)
if n_dims >= 3:
tens_trans_dims = [n_dims - 2, n_dims - 1, *range(n_dims - 2)]
tensor = tensorflow.transpose(tensor, tens_trans_dims)
blocked_tens = tensorflow.reshape(tensor, blocked_tens_shape)
reduced_blocks = self._grouping_op(
tensorflow.abs(blocked_tens), 1, keepdims=True
)
return reduced_blocks
def _map_mask_to_tensor(
self,
grouped_mask: tensorflow.Tensor,
original_tensor_shape: tensorflow.TensorShape,
) -> tensorflow.Tensor:
"""
:param grouped_mask: A binary mask the size of a tensor from group_tensor
:param original_tensor_shape: Shape of the original tensor grouped_mask
derives from
:return: The values from grouped_mask mapped to a tensor of size
original_tensor_shape
"""
(
blocked_tens_shape,
original_tensor_shape,
) = self._get_blocked_tens_shape_and_validate(original_tensor_shape)
block_values_shape = [blocked_tens_shape[0], blocked_tens_shape[2]]
# expand so every element has a corresponding value in the original tensor
block_mask = tensorflow.reshape(grouped_mask, block_values_shape)
block_mask = tensorflow.expand_dims(block_mask, 1)
# Recover reduced dimension of block_mask, using tile instead of broadcast_to
# for compatibility with older versions of tf
block_mask_shape = [dim.value for dim in block_mask.shape]
tile_shape = [
int(block_dim / mask_dim)
for (block_dim, mask_dim) in zip(blocked_tens_shape, block_mask_shape)
]
# equivalent to: tensorflow.broadcast_to(block_mask, blocked_tens_shape)
tensor_mask_blocked = tensorflow.tile(block_mask, tile_shape)
mask = tensorflow.reshape(tensor_mask_blocked, original_tensor_shape)
# Undo channel / kernel transpose if applicable
n_dims = len(original_tensor_shape)
if n_dims >= 3:
tens_trans_dims = [*range(2, n_dims), 0, 1]
mask = tensorflow.transpose(mask, tens_trans_dims)
return mask
def _get_blocked_tens_shape_and_validate(
self,
tens_shape: tensorflow.TensorShape,
) -> Tuple[List[int], tensorflow.TensorShape]:
"""
:param tens_shape: The shape of the tensor to group in blocks
:return: shape of tens when blocked by block_shape and the original
tensor shape with any transposes applied to it
:raise: ValueError if we are unable to block tens by shape block_shape
"""
block_shape = self._block_shape
n_dims = len(tens_shape)
if len(tens_shape) >= 3: # conv should have block shape like [1, ..., 1, X, Y]
block_shape = [*[1] * (n_dims - 2), *block_shape]
tens_shape = [dim.value for dim in tens_shape]
for idx, shape in enumerate(block_shape):
if shape == -1:
block_shape[idx] = int(tens_shape[idx])
# Validate
if n_dims < 2:
raise ValueError(
"Invalid tensor shape {}."
" BlockSparsityMaskCreator can only create masks from tensors with 2 or"
" more dimensions, tensor has {}.".format(tens_shape, n_dims)
)
for tens_dim, block_dim in zip(tens_shape, block_shape):
if tens_dim % block_dim != 0:
raise ValueError(
f"Invalid block_shape {block_shape} for parameter shape "
f"{tens_shape}. Elements of block_shape must divide parameter "
f"shape evenly"
)
# If this is a series of conv filters, reorder so in and out channels are first
if n_dims >= 3:
transpose_idx = [n_dims - 2, n_dims - 1, *range(n_dims - 2)]
block_shape = [block_shape[idx] for idx in transpose_idx]
tens_shape = [tens_shape[idx] for idx in transpose_idx]
# Compute blocked tensor shape
if len(block_shape) > 1 and block_shape[1] > 1:
blocked_tens_shape = [
tens_shape[0] * tens_shape[1] // (block_shape[0] * block_shape[1]),
block_shape[0] * block_shape[1],
-1,
]
else:
blocked_tens_shape = [tens_shape[0] // block_shape[0], block_shape[0], -1]
tens_size = numpy.prod(tens_shape)
num_block_elements = blocked_tens_shape[0] * blocked_tens_shape[1]
blocked_tens_shape[2] = tens_size // num_block_elements
return blocked_tens_shape, tens_shape
def __str__(self):
return str(self._block_shape)
def __repr__(self):
return str(self)
mask_creator_name_to_constructor_lambda = {
"unstructured": lambda: UnstructuredPruningMaskCreator(),
"channel": lambda: DimensionPruningMaskCreator("channel"),
"filter": lambda: DimensionPruningMaskCreator("filter"),
}
The provided code snippet includes necessary dependencies for implementing the `load_mask_creator` function. Write a Python function `def load_mask_creator(obj: Union[str, Iterable[int]]) -> PruningMaskCreator` to solve the following problem:
:param obj: Formatted string or iterable of block_shape specifying SparsityMaskCreator object to return :return: SparsityMaskCreator object created from obj
Here is the function:
def load_mask_creator(obj: Union[str, Iterable[int]]) -> PruningMaskCreator:
"""
:param obj: Formatted string or iterable of block_shape specifying
SparsityMaskCreator object to return
:return: SparsityMaskCreator object created from obj
"""
if isinstance(obj, str) and obj in mask_creator_name_to_constructor_lambda:
constructor_lambda = mask_creator_name_to_constructor_lambda[obj]
return constructor_lambda()
# Checking for a BlockSparsityMaskCreator string
if ("[" in obj and "]" in obj) or ("(" in obj and ")" in obj):
stripped_str = obj.strip("[|]|(|)")
block_shape = [int(s) for s in stripped_str.split(",")]
return BlockPruningMaskCreator(block_shape)
if isinstance(obj, list) or isinstance(obj, tuple):
return BlockPruningMaskCreator(obj)
raise ValueError(
"Invalid mask type string: {}, could not map to an object".format(obj)
) | :param obj: Formatted string or iterable of block_shape specifying SparsityMaskCreator object to return :return: SparsityMaskCreator object created from obj |
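The string branch of `load_mask_creator` strips brackets and splits on commas to recover a block shape. A standalone sketch of just that parse:

```python
def parse_block_shape(obj):
    # Mirrors the BlockPruningMaskCreator string branch: strip
    # "["/"]"/"("/")" (and "|") characters, then split on commas
    stripped = obj.strip("[|]|(|)")
    return [int(s) for s in stripped.split(",")]
```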
21,121 | import logging
from typing import Any
from sparseml.base import Framework, get_version
from sparseml.framework import FrameworkInferenceProviderInfo, FrameworkInfo
from sparseml.keras.base import check_keras_install, is_native_keras, keras, tensorflow
from sparseml.keras.sparsification import sparsification_info
from sparseml.sparsification import SparsificationInfo
def detect_framework(item: Any) -> Framework:
"""
Detect the supported ML framework for a given item specifically for the
keras package.
Supported input types are the following:
- A Framework enum
- A string of any case representing the name of the framework
(deepsparse, onnx, keras, pytorch, tensorflow_v1)
- A supported file type within the framework such as model files:
(onnx, pth, h5, pb)
- An object from a supported ML framework such as a model instance
If the framework cannot be determined, will return Framework.unknown
:param item: The item to detect the ML framework for
:type item: Any
:return: The detected framework from the given item
:rtype: Framework
"""
framework = Framework.unknown
if isinstance(item, Framework):
_LOGGER.debug("framework detected from Framework instance")
framework = item
elif isinstance(item, str) and item.lower().strip() in Framework.__members__:
_LOGGER.debug("framework detected from Framework string instance")
framework = Framework[item.lower().strip()]
elif isinstance(item, str) and "keras" in item.lower().strip():
_LOGGER.debug("framework detected from keras text")
# string, check if it's a string saying keras first
framework = Framework.keras
elif isinstance(item, str) and (
".h5" in item.lower().strip() or ".pb" in item.lower().strip()
):
_LOGGER.debug("framework detected from .h5 or .pb")
# string, check if it's a file url or path that ends with h5 extension
framework = Framework.keras
elif check_keras_install(raise_on_error=False):
if isinstance(item, keras.Model):
_LOGGER.debug("framework detected from Keras instance")
# keras native support
framework = Framework.keras
return framework
class Framework(Enum):
"""
Framework types known of/supported within the sparseml/deepsparse ecosystem
"""
unknown = "unknown"
deepsparse = "deepsparse"
onnx = "onnx"
keras = "keras"
pytorch = "pytorch"
tensorflow_v1 = "tensorflow_v1"
try:
import keras
keras_err = None
is_native_keras = True
except Exception as err:
keras = object()
keras_err = err
is_native_keras = False
The provided code snippet includes necessary dependencies for implementing the `is_supported` function. Write a Python function `def is_supported(item: Any) -> bool` to solve the following problem:
:param item: The item to detect the support for :type item: Any :return: True if the item is supported by keras, False otherwise :rtype: bool
Here is the function:
def is_supported(item: Any) -> bool:
"""
:param item: The item to detect the support for
:type item: Any
:return: True if the item is supported by keras, False otherwise
:rtype: bool
"""
framework = detect_framework(item)
return framework == Framework.keras | :param item: The item to detect the support for :type item: Any :return: True if the item is supported by keras, False otherwise :rtype: bool |
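`detect_framework` falls back to simple substring checks on string inputs (`"keras"`, `".h5"`, `".pb"`). A minimal sketch of that filename heuristic:

```python
def looks_like_keras_item(item: str) -> bool:
    # Substring checks mirroring the string branches of detect_framework
    item = item.lower().strip()
    return "keras" in item or ".h5" in item or ".pb" in item
```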
21,122 | import logging
from typing import Any
from sparseml.base import Framework, get_version
from sparseml.framework import FrameworkInferenceProviderInfo, FrameworkInfo
from sparseml.keras.base import check_keras_install, is_native_keras, keras, tensorflow
from sparseml.keras.sparsification import sparsification_info
from sparseml.sparsification import SparsificationInfo
class Framework(Enum):
"""
Framework types known of/supported within the sparseml/deepsparse ecosystem
"""
unknown = "unknown"
deepsparse = "deepsparse"
onnx = "onnx"
keras = "keras"
pytorch = "pytorch"
tensorflow_v1 = "tensorflow_v1"
def get_version(
package_name: str,
raise_on_error: bool,
alternate_package_names: Optional[List[str]] = None,
) -> Optional[str]:
"""
:param package_name: The name of the full package, as it would be imported,
to get the version for
:type package_name: str
:param raise_on_error: True to raise an error if package is not installed
or couldn't be imported, False to return None
:type raise_on_error: bool
:param alternate_package_names: List of alternate names to look for the package
under if package_name is not found. Useful for nightly builds.
:type alternate_package_names: Optional[List[str]]
:return: the version of the desired package if detected, otherwise raises an error
:rtype: str
"""
current_version: Optional[str] = None
version_err = None
try:
current_version = pkg_resources.get_distribution(package_name).version
except Exception as err:
version_err = err
if version_err and alternate_package_names:
next_package = alternate_package_names.pop()
return get_version(next_package, raise_on_error, alternate_package_names)
if version_err and raise_on_error:
raise ImportError(
f"error while getting current version for {package_name}: {version_err}"
)
return current_version if not version_err else None
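The alternate-package-names fallback in `get_version` can also be expressed with the standard-library `importlib.metadata` instead of `pkg_resources` (which is deprecated); a sketch under that assumption, with illustrative package names:

```python
from importlib import metadata

def safe_version(package_name, alternate_names=()):
    """Try the primary distribution name, then each alternate
    (e.g. a '-nightly' build), returning None if none is installed."""
    for candidate in (package_name, *alternate_names):
        try:
            return metadata.version(candidate)
        except metadata.PackageNotFoundError:
            continue
    return None
```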
try:
import keras
keras_err = None
is_native_keras = True
except Exception as err:
keras = object()
keras_err = err
is_native_keras = False
try:
import tensorflow
tensorflow_err = None
if keras_err:
from tensorflow import keras
keras_err = None
except Exception as err:
tensorflow = object() # TODO: populate with fake object for necessary improvements
tensorflow_err = err
def check_keras_install(
min_tf_version: Optional[str] = _DEF_TF_MIN_VERSION,
max_tf_version: Optional[str] = None,
min_native_version: Optional[str] = _DEF_KERAS_MIN_VERSION,
require_tensorflow_backend: bool = True,
raise_on_error: bool = True,
) -> bool:
"""
Check that the keras package is installed.
If raise_on_error, will raise an ImportError if it is not installed or
the required version range, if set, is not installed.
If not raise_on_error, will return True if installed with required version
and False otherwise.
:param min_tf_version: The minimum version for keras that it must be greater than
or equal to, if unset will require no minimum version
:type min_tf_version: str
:param max_tf_version: The maximum version for keras that it must be less than
or equal to, if unset will require no maximum version.
:type max_tf_version: str
:param min_native_version: The minimum version for native keras that it must be
greater than or equal to if installed
:type min_native_version: str
:param require_tensorflow_backend: True to require keras to use the tensorflow
backend, False otherwise.
:type require_tensorflow_backend: bool
:param raise_on_error: True to raise any issues such as not installed,
minimum version, or maximum version as ImportError. False to return the result.
:type raise_on_error: bool
:return: If raise_on_error, will return False if keras is not installed
or the version is outside the accepted bounds and True if everything is correct.
:rtype: bool
"""
if keras_err is not None:
if raise_on_error:
raise keras_err
return False
if tensorflow_err is not None and require_tensorflow_backend:
if raise_on_error:
raise tensorflow_err
return False
if require_tensorflow_backend and not check_version(
"tensorflow", min_tf_version, max_tf_version, raise_on_error
):
return False
if is_native_keras and not check_version(
"keras", min_native_version, None, raise_on_error
):
return False
return True
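The install check above boils down to gating several version-range tests and either raising or returning False depending on `raise_on_error`. A condensed, framework-free sketch of that control flow, assuming simple dotted version strings:

```python
def _as_tuple(version):
    # "2.4.1" -> (2, 4, 1); assumes plain numeric dotted versions
    return tuple(int(p) for p in version.split("."))

def in_range(version, minimum=None, maximum=None):
    """True if version is within the inclusive [minimum, maximum] bounds."""
    v = _as_tuple(version)
    if minimum is not None and v < _as_tuple(minimum):
        return False
    if maximum is not None and v > _as_tuple(maximum):
        return False
    return True

def check_install(installed, minimum, maximum=None, raise_on_error=True):
    """Mirror of check_keras_install's flow: None means the import itself
    failed; otherwise enforce the version bounds."""
    if installed is None:
        if raise_on_error:
            raise ImportError("package not installed")
        return False
    if not in_range(installed, minimum, maximum):
        if raise_on_error:
            raise ImportError(f"version {installed} outside accepted bounds")
        return False
    return True
```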
The provided code snippet includes necessary dependencies for implementing the `framework_info` function. Write a Python function `def framework_info() -> FrameworkInfo` to solve the following problem:
Detect the information for the keras framework such as package versions, availability for core actions such as training and inference, sparsification support, and inference provider support. :return: The framework info for keras :rtype: FrameworkInfo
Here is the function:
def framework_info() -> FrameworkInfo:
"""
Detect the information for the keras framework such as package versions,
availability for core actions such as training and inference,
sparsification support, and inference provider support.
:return: The framework info for keras
:rtype: FrameworkInfo
"""
cpu_provider = FrameworkInferenceProviderInfo(
name="cpu",
description="Base CPU provider within Keras",
device="cpu",
supported_sparsification=SparsificationInfo(), # TODO: fill in when available
available=check_keras_install(raise_on_error=False),
properties={},
warnings=[],
)
gpu_provider = FrameworkInferenceProviderInfo(
name="cuda",
description="Base GPU CUDA provider within Keras",
device="gpu",
supported_sparsification=SparsificationInfo(), # TODO: fill in when available
available=(
check_keras_install(raise_on_error=False)
and tensorflow.test.is_gpu_available()
),
properties={},
warnings=[],
)
return FrameworkInfo(
framework=Framework.keras,
package_versions={
"keras": (
get_version(package_name="keras", raise_on_error=False)
if is_native_keras
else get_version(package_name="tensorflow", raise_on_error=False)
),
"tensorflow": get_version(package_name="tensorflow", raise_on_error=False),
"onnx": get_version(package_name="onnx", raise_on_error=False),
"keras2onnx": get_version(package_name="keras2onnx", raise_on_error=False),
"tf2onnx": get_version(package_name="tf2onnx", raise_on_error=False),
"sparsezoo": get_version(
package_name="sparsezoo",
raise_on_error=False,
alternate_package_names=["sparsezoo-nightly"],
),
"sparseml": get_version(
package_name="sparseml",
raise_on_error=False,
alternate_package_names=["sparseml-nightly"],
),
},
sparsification=sparsification_info(),
inference_providers=[cpu_provider, gpu_provider],
properties={
"is_native_keras": is_native_keras,
},
training_available=True,
sparsification_available=True,
exporting_onnx_available=True,
inference_available=True,
) | Detect the information for the keras framework such as package versions, availability for core actions such as training and inference, sparsification support, and inference provider support. :return: The framework info for keras :rtype: FrameworkInfo |
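`framework_info` assembles a static report: per-package versions plus one availability record per inference provider, where GPU availability requires both the framework install and a visible device. A minimal dataclass sketch of the same shape (field names abbreviated from the real `FrameworkInfo`, purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ProviderInfo:
    name: str
    device: str
    available: bool

@dataclass
class Info:
    framework: str
    package_versions: dict
    inference_providers: list = field(default_factory=list)

def build_info(keras_installed: bool, gpu_present: bool) -> Info:
    """The cuda provider needs both conditions, mirroring the `and`
    in the gpu_provider construction above."""
    return Info(
        framework="keras",
        package_versions={"keras": "2.x" if keras_installed else None},
        inference_providers=[
            ProviderInfo("cpu", "cpu", keras_installed),
            ProviderInfo("cuda", "gpu", keras_installed and gpu_present),
        ],
    )
```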
21,123 | from inspect import getmembers, isfunction
from typing import Union
from sparseml import get_main_logger
from sparseml.keras.models.registry import ModelRegistry
from sparseml.keras.utils import keras
_supported_model_funcs = ["ResNet50"]
def _registry_constructor_wrapper(key, model_func):
# wraps the keras_applications model constructor function to be compatible
# with sparseml model registry loading
def wrapper(
pretrained_path: str = None,
pretrained: Union[bool, str] = False,
pretrained_dataset: str = None,
**kwargs,
):
"""
:param pretrained_path: A path to the pretrained weights to load,
if provided will override the pretrained param
:param pretrained: True to load the default pretrained weights,
a string to load a specific pretrained weight
(ex: base, pruned-moderate),
or False to not load any pretrained weights
:param pretrained_dataset: The dataset to load pretrained weights for
(ex: imagenet, mnist, etc).
If not supplied will default to the one preconfigured for the model.
"""
if isinstance(pretrained, str):
if pretrained.lower() == "true":
pretrained = True
elif pretrained.lower() in ["false", "none"]:
pretrained = False
weights = None
if pretrained:
if pretrained_dataset == "imagenet":
weights = "imagenet" # Imagenet pretrained weights
elif pretrained_path is not None:
weights = pretrained_path # Path to a weight file
if weights is not None:
_LOGGER.info("Model being created with weights from {}".format(weights))
else:
_LOGGER.info("Model being created with random weights")
try:
model = model_func(weights=weights, **kwargs)
except ValueError:
# Load the model directly assuming "weights" contain all information
# about the model
_LOGGER.info("Loading model directly from {}".format(weights))
model = keras.models.load_model(weights)
return model
return wrapper
def _get_architecture(model_name):
if model_name == "ResNet50":
return "resnet_v1", "50"
else:
raise ValueError("Model {} unknown or not supported".format(model_name))
class ModelRegistry(object):
"""
Registry class for creating models
"""
_CONSTRUCTORS = {} # type: Dict[str, Callable]
_ATTRIBUTES = {} # type: Dict[str, _ModelAttributes]
def available_keys() -> List[str]:
"""
:return: the keys (models) currently available in the registry
"""
return list(ModelRegistry._CONSTRUCTORS.keys())
def create(
key: str,
pretrained: Union[bool, str] = False,
pretrained_path: str = None,
pretrained_dataset: str = None,
**kwargs,
) -> keras.Model:
"""
Create a new model for the given key
:param key: the model key (name) to create
:param pretrained: True to load pretrained weights; to load a specific version
give a string with the name of the version (pruned-moderate, base).
Default None
:param pretrained_path: A model file path to load into the created model
:param pretrained_dataset: The dataset to load for the model
:param kwargs: any keyword args to supply to the model constructor
:return: the instantiated model
"""
if key not in ModelRegistry._CONSTRUCTORS:
raise ValueError(
"key {} is not in the model registry; available: {}".format(
key, ModelRegistry._CONSTRUCTORS
)
)
return ModelRegistry._CONSTRUCTORS[key](
pretrained=pretrained,
pretrained_path=pretrained_path,
pretrained_dataset=pretrained_dataset,
**kwargs,
)
def create_zoo_model(
key: str,
pretrained: Union[bool, str] = True,
pretrained_dataset: str = None,
) -> Model:
"""
Create a sparsezoo Model for the desired model in the zoo
:param key: the model key (name) to retrieve
:param pretrained: True to load pretrained weights; to load a specific version
give a string with the name of the version (optim, optim-perf), default True
:param pretrained_dataset: The dataset to load for the model
:return: the sparsezoo Model reference for the given model
"""
if key not in ModelRegistry._CONSTRUCTORS:
raise ValueError(
"key {} is not in the model registry; available: {}".format(
key, ModelRegistry._CONSTRUCTORS
)
)
attributes = ModelRegistry._ATTRIBUTES[key]
sparse_name, sparse_category, sparse_target = parse_optimization_str(
pretrained if isinstance(pretrained, str) else attributes.default_desc
)
model_dict = {
"domain": attributes.domain,
"sub_domain": attributes.sub_domain,
"architecture": attributes.architecture,
"sub_architecture": attributes.sub_architecture,
"framework": KERAS_FRAMEWORK,
"repo": attributes.repo_source,
"dataset": attributes.default_dataset
if pretrained_dataset is None
else pretrained_dataset,
"sparse_tag": f"{sparse_name}-{sparse_category}",
}
stub = model_args_to_stub(**model_dict)
return Model(stub)
def input_shape(key: str) -> Any:
"""
:param key: the model key (name) to create
:return: the specified input shape for the model
"""
if key not in ModelRegistry._CONSTRUCTORS:
raise ValueError(
"key {} is not in the model registry; available: {}".format(
key, ModelRegistry._CONSTRUCTORS
)
)
return ModelRegistry._ATTRIBUTES[key].input_shape
def register(
key: Union[str, List[str]],
input_shape: Any,
domain: str,
sub_domain: str,
architecture: str,
sub_architecture: str,
default_dataset: str,
default_desc: str,
repo_source: str = "sparseml",
):
"""
Register a model with the registry. Should be used as a decorator
:param key: the model key (name) to create
:param input_shape: the specified input shape for the model
:param domain: the domain the model belongs to; ex: cv, nlp, etc
:param sub_domain: the sub domain the model belongs to;
ex: classification, detection, etc
:param architecture: the architecture the model belongs to;
ex: resnet, mobilenet, etc
:param sub_architecture: the sub architecture the model belongs to;
ex: 50, 101, etc
:param default_dataset: the dataset to use by default for loading
pretrained if not supplied
:param default_desc: the description to use by default for loading
pretrained if not supplied
:param repo_source: the source repo for the model, default is sparseml
:return: the decorator
"""
if not isinstance(key, List):
key = [key]
def decorator(const_func):
wrapped_constructor = ModelRegistry._registered_wrapper(key[0], const_func)
ModelRegistry.register_wrapped_model_constructor(
wrapped_constructor,
key,
input_shape,
domain,
sub_domain,
architecture,
sub_architecture,
default_dataset,
default_desc,
repo_source,
)
return wrapped_constructor
return decorator
def register_wrapped_model_constructor(
wrapped_constructor: Callable,
key: Union[str, List[str]],
input_shape: Any,
domain: str,
sub_domain: str,
architecture: str,
sub_architecture: str,
default_dataset: str,
default_desc: str,
repo_source: str,
):
"""
Register a model with the registry from a model constructor or provider function
:param wrapped_constructor: Model constructor wrapped to be compatible
by call from ModelRegistry.create should have pretrained, pretrained_path,
pretrained_dataset, load_strict, ignore_error_tensors, and kwargs as
arguments
:param key: the model key (name) to create
:param input_shape: the specified input shape for the model
:param domain: the domain the model belongs to; ex: cv, nlp, etc
:param sub_domain: the sub domain the model belongs to;
ex: classification, detection, etc
:param architecture: the architecture the model belongs to;
ex: resnet, mobilenet, etc
:param sub_architecture: the sub architecture the model belongs to;
ex: 50, 101, etc
:param default_dataset: the dataset to use by default for loading
pretrained if not supplied
:param default_desc: the description to use by default for loading
pretrained if not supplied
:param repo_source: the source repo for the model; ex: sparseml, torchvision
:return: The constructor wrapper registered with the registry
"""
if not isinstance(key, List):
key = [key]
for r_key in key:
if r_key in ModelRegistry._CONSTRUCTORS:
raise ValueError("key {} is already registered".format(r_key))
ModelRegistry._CONSTRUCTORS[r_key] = wrapped_constructor
ModelRegistry._ATTRIBUTES[r_key] = _ModelAttributes(
input_shape,
domain,
sub_domain,
architecture,
sub_architecture,
default_dataset,
default_desc,
repo_source,
)
def _registered_wrapper(
key: str,
const_func: Callable,
):
def wrapper(
pretrained_path: str = None,
pretrained: Union[bool, str] = False,
pretrained_dataset: str = None,
*args,
**kwargs,
):
"""
:param pretrained_path: A path to the pretrained weights to load,
if provided will override the pretrained param
:param pretrained: True to load the default pretrained weights,
a string to load a specific pretrained weight
(ex: base, optim, optim-perf),
or False to not load any pretrained weights
:param pretrained_dataset: The dataset to load pretrained weights for
(ex: imagenet, mnist, etc).
If not supplied will default to the one preconfigured for the model.
"""
if isinstance(pretrained, str):
if pretrained.lower() == "true":
pretrained = True
elif pretrained.lower() in ["false", "none"]:
pretrained = False
if pretrained_path:
model = const_func(*args, **kwargs)
try:
model.load_weights(pretrained_path)
except ValueError:
_LOGGER.info("Loading model from {}".format(pretrained_path))
model = keras.models.load_model(pretrained_path)
elif pretrained:
zoo_model = ModelRegistry.create_zoo_model(
key, pretrained, pretrained_dataset
)
model_file_paths = zoo_model.download_framework_files(
extensions=[".h5"]
)
if not model_file_paths:
model_file_paths = zoo_model.download_framework_files(
extensions=[".tf"]
)
if not model_file_paths:
raise RuntimeError("Error downloading model from SparseZoo")
model_file_path = model_file_paths[0]
model = keras.models.load_model(model_file_path)
else:
model = const_func(*args, **kwargs)
return model
return wrapper
def _register_classification_models():
# find model functions in keras.applications
for model_func_name, model_func in getmembers(keras.applications, isfunction):
if model_func_name not in _supported_model_funcs:
continue
key = "keras_applications.{}".format(model_func_name)
input_shape = (
(224, 224, 3) if model_func_name != "InceptionV3" else (299, 299, 3)
)
arch, sub_arch = _get_architecture(model_func_name)
# wrap model constructor for registry compatibility
wrapped_constructor = _registry_constructor_wrapper(key, model_func)
ModelRegistry.register_wrapped_model_constructor(
wrapped_constructor,
key=key,
input_shape=input_shape,
domain="cv",
sub_domain="classification",
architecture=arch,
sub_architecture=sub_arch,
default_dataset="imagenet",
default_desc="base",
repo_source="keras.applications",
) | null |
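`ModelRegistry` follows a common decorator-registry pattern: a class-level dict maps string keys to wrapped constructor functions, and duplicate registration is an error. A stripped-down, framework-free sketch of that pattern (all names illustrative):

```python
class Registry:
    _constructors = {}

    @classmethod
    def register(cls, keys):
        """Decorator that records a constructor under one or more keys."""
        if isinstance(keys, str):
            keys = [keys]
        def decorator(func):
            for key in keys:
                if key in cls._constructors:
                    raise ValueError(f"key {key} is already registered")
                cls._constructors[key] = func
            return func
        return decorator

    @classmethod
    def create(cls, key, **kwargs):
        """Look up and invoke the registered constructor."""
        if key not in cls._constructors:
            raise ValueError(
                f"key {key} not in registry; available: {list(cls._constructors)}"
            )
        return cls._constructors[key](**kwargs)

@Registry.register(["toy", "toy-v1"])
def build_toy(width=8):
    return {"arch": "toy", "width": width}
```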
21,124 | from typing import List, Union
import tensorflow
from tensorflow.keras import backend as K
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from sparseml.keras.models.registry import ModelRegistry
from sparseml.keras.utils import keras
BN_EPSILON = 1e-5
def _expand_name(prefix: str, suffix: str, sep: str = "."):
return prefix + sep + suffix
def _identity_modifier(
name: str,
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
out_channels: int,
stride: int,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> tensorflow.Tensor:
bn_axis = 3 if K.image_data_format() == "channels_last" else 1
shortcut = layers.Conv2D(
out_channels, 1, strides=stride, name=_expand_name(name, "conv")
)(x_tens)
shortcut = layers.BatchNormalization(
axis=bn_axis, epsilon=BN_EPSILON, name=_expand_name(name, "bn")
)(shortcut)
return shortcut
def _bottleneck_block(
name: str,
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
out_channels: int,
proj_channels: int,
stride: int,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> tensorflow.Tensor:
bn_axis = 3 if K.image_data_format() == "channels_last" else 1
x = layers.Conv2D(proj_channels, 1, name=_expand_name(name, "conv1"))(x_tens)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=BN_EPSILON, name=_expand_name(name, "bn1")
)(x)
x = layers.Activation("relu", name=_expand_name(name, "relu1"))(x)
x = layers.ZeroPadding2D(
padding=((1, 1), (1, 1)), name=_expand_name(name, "pad_conv2")
)(x)
x = layers.Conv2D(
proj_channels, 3, strides=stride, name=_expand_name(name, "conv2")
)(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=BN_EPSILON, name=_expand_name(name, "bn2")
)(x)
x = layers.Activation("relu", name=_expand_name(name, "relu2"))(x)
x = layers.Conv2D(out_channels, 1, name=_expand_name(name, "conv3"))(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=BN_EPSILON, name=_expand_name(name, "bn3")
)(x)
if stride > 1 or int(x_tens.shape[3]) != out_channels:
shortcut = _identity_modifier(
_expand_name(name, "identity"),
x_tens,
training,
out_channels,
stride,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
else:
shortcut = x_tens
x = layers.Add(name=_expand_name(name, "add"))([shortcut, x])
x = layers.Activation("relu", name=_expand_name(name, "out"))(x)
return x | null |
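The projection-shortcut rule in `_bottleneck_block` above (use a 1x1 conv on the shortcut whenever the stride or channel count changes, otherwise pass the input through) can be checked in isolation. A pure-Python sketch of the decision and the output-shape arithmetic, assuming channels-last NHWC shapes:

```python
def needs_projection(in_channels, out_channels, stride):
    """A 1x1-conv shortcut is required whenever the residual branch
    changes the spatial size (stride > 1) or the channel count."""
    return stride > 1 or in_channels != out_channels

def bottleneck_output_shape(shape, out_channels, stride):
    """NHWC shape after the block: the strided 3x3 conv shrinks H and W,
    and the final 1x1 conv sets the channel count."""
    n, h, w, _ = shape
    return (n, h // stride, w // stride, out_channels)
```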
21,125 | from typing import List, Union
import tensorflow
from tensorflow.keras import backend as K
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from sparseml.keras.models.registry import ModelRegistry
from sparseml.keras.utils import keras
class ResNetSection(object):
"""
Settings to describe how to put together a ResNet based architecture
using user supplied configurations.
:param num_blocks: the number of blocks to put in the section
(ie Basic or Bottleneck blocks)
:param out_channels: the number of output channels from the section
:param downsample: True to apply stride 2 for downsampling of the input,
False otherwise
:param proj_channels: The number of channels in the projection for a
bottleneck block, if < 0 then uses basic
"""
def __init__(
self,
num_blocks: int,
out_channels: int,
downsample: bool,
proj_channels: int = -1,
):
self.num_blocks = num_blocks
self.out_channels = out_channels
self.downsample = downsample
self.proj_channels = proj_channels
def create(
self,
name: str,
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> tensorflow.Tensor:
"""
Create the section in the current graph and scope
:param name: the name for the scope to create the section under
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the section
"""
out = x_tens
stride = 2 if self.downsample else 1
for block in range(self.num_blocks):
block_name = _expand_name(name, "{}".format(block))
if self.proj_channels > 0:
out = _bottleneck_block(
name=block_name,
x_tens=out,
training=training,
out_channels=self.out_channels,
proj_channels=self.proj_channels,
stride=stride,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
stride = 1
return out
def resnet_const(
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
sec_settings: List[ResNetSection],
num_classes: int,
class_type: str,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> keras.models.Model:
"""
Graph constructor for ResNet implementation.
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param sec_settings: The settings for each section in the ResNet model
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
channels_last = K.image_data_format() == "channels_last"
if x_tens is None:
input_shape = (224, 224, 3) if channels_last else (3, 224, 224)
x_tens = layers.Input(shape=input_shape)
out = _input(
"input",
x_tens,
training,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
for sec_index, section in enumerate(sec_settings):
out = section.create(
name="sections.{}".format(sec_index),
x_tens=out,
training=training,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
outputs = _classifier(
"classifier",
out,
training,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
return Model(inputs=x_tens, outputs=outputs)
@ModelRegistry.register(
    key=["resnet50", "resnet_50", "resnet-50", "resnetv1_50", "resnetv1-50"],
    input_shape=(224, 224, 3),
    domain="cv",
    sub_domain="classification",
    architecture="resnet_v1",
    sub_architecture="50",
    default_dataset="imagenet",
    default_desc="base",
)
The provided code snippet includes necessary dependencies for implementing the `resnet50` function. Write a Python function `def resnet50( inputs: tensorflow.Tensor = None, training: Union[bool, tensorflow.Tensor] = True, num_classes: int = 1000, class_type: str = None, kernel_initializer=keras.initializers.GlorotUniform(), bias_initializer=keras.initializers.GlorotUniform(), beta_initializer=keras.initializers.GlorotUniform(), gamma_initializer=keras.initializers.GlorotUniform(), ) -> keras.models.Model` to solve the following problem:
Standard ResNet50 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph
Here is the function:
def resnet50(
inputs: tensorflow.Tensor = None,
training: Union[bool, tensorflow.Tensor] = True,
num_classes: int = 1000,
class_type: str = None,
kernel_initializer=keras.initializers.GlorotUniform(),
bias_initializer=keras.initializers.GlorotUniform(),
beta_initializer=keras.initializers.GlorotUniform(),
gamma_initializer=keras.initializers.GlorotUniform(),
) -> keras.models.Model:
"""
Standard ResNet50 implementation;
expected input shape is (B, 224, 224, 3)
:param inputs: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
sec_settings = [
ResNetSection(
num_blocks=3,
out_channels=256,
downsample=False,
proj_channels=64,
),
ResNetSection(
num_blocks=4,
out_channels=512,
downsample=True,
proj_channels=128,
),
ResNetSection(
num_blocks=6,
out_channels=1024,
downsample=True,
proj_channels=256,
),
ResNetSection(
num_blocks=3,
out_channels=2048,
downsample=True,
proj_channels=512,
),
]
return resnet_const(
inputs,
training,
sec_settings,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) | Standard ResNet50 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph |
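The `sec_settings` list in `resnet50` above is what gives the network its name: each bottleneck block holds three convolutions, plus one stem conv and the final fully connected layer. The depth can be verified with simple arithmetic:

```python
# Blocks per section, as configured in resnet50 above.
blocks_per_section = [3, 4, 6, 3]

convs_per_bottleneck = 3   # 1x1 reduce, 3x3, 1x1 expand
stem_and_classifier = 2    # input 7x7 conv + final dense layer

total_weighted_layers = (
    sum(blocks_per_section) * convs_per_bottleneck + stem_and_classifier
)
# 16 bottleneck blocks * 3 convs + 2 = 50 weighted layers
```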
21,126 | from typing import List, Union
import tensorflow
from tensorflow.keras import backend as K
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from sparseml.keras.models.registry import ModelRegistry
from sparseml.keras.utils import keras
class ResNetSection(object):
"""
Settings to describe how to put together a ResNet based architecture
using user supplied configurations.
:param num_blocks: the number of blocks to put in the section
(ie Basic or Bottleneck blocks)
:param out_channels: the number of output channels from the section
:param downsample: True to apply stride 2 for downsampling of the input,
False otherwise
:param proj_channels: The number of channels in the projection for a
bottleneck block, if < 0 then uses basic
"""
def __init__(
self,
num_blocks: int,
out_channels: int,
downsample: bool,
proj_channels: int = -1,
):
self.num_blocks = num_blocks
self.out_channels = out_channels
self.downsample = downsample
self.proj_channels = proj_channels
def create(
self,
name: str,
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> tensorflow.Tensor:
"""
Create the section in the current graph and scope
:param name: the name for the scope to create the section under
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the section
"""
out = x_tens
stride = 2 if self.downsample else 1
for block in range(self.num_blocks):
block_name = _expand_name(name, "{}".format(block))
if self.proj_channels > 0:
out = _bottleneck_block(
name=block_name,
x_tens=out,
training=training,
out_channels=self.out_channels,
proj_channels=self.proj_channels,
stride=stride,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
stride = 1
return out
def resnet_const(
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
sec_settings: List[ResNetSection],
num_classes: int,
class_type: str,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> keras.models.Model:
"""
Graph constructor for ResNet implementation.
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param sec_settings: The settings for each section in the ResNet model
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
channels_last = K.image_data_format() == "channels_last"
if x_tens is None:
input_shape = (224, 224, 3) if channels_last else (3, 224, 224)
x_tens = layers.Input(shape=input_shape)
out = _input(
"input",
x_tens,
training,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
for sec_index, section in enumerate(sec_settings):
out = section.create(
name="sections.{}".format(sec_index),
x_tens=out,
training=training,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
outputs = _classifier(
"classifier",
out,
training,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
return Model(inputs=x_tens, outputs=outputs)
@ModelRegistry.register(
    key=["resnet50", "resnet_50", "resnet-50", "resnetv1_50", "resnetv1-50"],
    input_shape=(224, 224, 3),
    domain="cv",
    sub_domain="classification",
    architecture="resnet_v1",
    sub_architecture="50",
    default_dataset="imagenet",
    default_desc="base",
)
The provided code snippet includes necessary dependencies for implementing the `resnet101` function. Write a Python function `def resnet101( inputs: tensorflow.Tensor = None, training: Union[bool, tensorflow.Tensor] = True, num_classes: int = 1000, class_type: str = None, kernel_initializer=keras.initializers.GlorotUniform(), bias_initializer=keras.initializers.GlorotUniform(), beta_initializer=keras.initializers.GlorotUniform(), gamma_initializer=keras.initializers.GlorotUniform(), ) -> keras.models.Model` to solve the following problem:
Standard ResNet101 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph
Here is the function:
def resnet101(
inputs: tensorflow.Tensor = None,
training: Union[bool, tensorflow.Tensor] = True,
num_classes: int = 1000,
class_type: str = None,
kernel_initializer=keras.initializers.GlorotUniform(),
bias_initializer=keras.initializers.GlorotUniform(),
beta_initializer=keras.initializers.GlorotUniform(),
gamma_initializer=keras.initializers.GlorotUniform(),
) -> keras.models.Model:
"""
Standard ResNet101 implementation;
expected input shape is (B, 224, 224, 3)
:param inputs: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
sec_settings = [
ResNetSection(
num_blocks=3,
out_channels=256,
downsample=False,
proj_channels=64,
),
ResNetSection(
num_blocks=4,
out_channels=512,
downsample=True,
proj_channels=128,
),
ResNetSection(
num_blocks=23,
out_channels=1024,
downsample=True,
proj_channels=256,
),
ResNetSection(
num_blocks=3,
out_channels=2048,
downsample=True,
proj_channels=512,
),
]
return resnet_const(
inputs,
training,
sec_settings,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) | Standard ResNet101 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph |
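A quick way to sanity-check the `sec_settings` block counts above is the classic bottleneck depth formula (three convolutions per block, plus the stem convolution and the final fully connected layer). The helper below is illustrative only and is not part of the SparseML API:

```python
def resnet_depth(blocks_per_section):
    # 3 convolutions per bottleneck block, plus the stem conv and the FC head
    return sum(blocks_per_section) * 3 + 2

depth_101 = resnet_depth([3, 4, 23, 3])  # block counts from resnet101 above
depth_152 = resnet_depth([3, 8, 36, 3])  # block counts used by resnet152
```

The counts recover the names of the architectures: 33 blocks give depth 101, 50 blocks give depth 152.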
21,127 | from typing import List, Union
import tensorflow
from tensorflow.keras import backend as K
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from sparseml.keras.models.registry import ModelRegistry
from sparseml.keras.utils import keras
class ResNetSection(object):
"""
Settings to describe how to put together a ResNet based architecture
using user supplied configurations.
:param num_blocks: the number of blocks to put in the section
(i.e. Basic or Bottleneck blocks)
:param out_channels: the number of output channels from the section
:param downsample: True to apply stride 2 for downsampling of the input,
False otherwise
:param proj_channels: The number of channels in the projection for a
bottleneck block, if < 0 then uses basic
"""
def __init__(
self,
num_blocks: int,
out_channels: int,
downsample: bool,
proj_channels: int = -1,
):
self.num_blocks = num_blocks
self.out_channels = out_channels
self.downsample = downsample
self.proj_channels = proj_channels
def create(
self,
name: str,
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> tensorflow.Tensor:
"""
Create the section in the current graph and scope
:param name: the name for the scope to create the section under
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the section
"""
out = x_tens
stride = 2 if self.downsample else 1
for block in range(self.num_blocks):
block_name = _expand_name(name, "{}".format(block))
if self.proj_channels > 0:
out = _bottleneck_block(
name=block_name,
x_tens=out,
training=training,
out_channels=self.out_channels,
proj_channels=self.proj_channels,
stride=stride,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
stride = 1
return out
def resnet_const(
x_tens: tensorflow.Tensor,
training: Union[bool, tensorflow.Tensor],
sec_settings: List[ResNetSection],
num_classes: int,
class_type: str,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) -> keras.models.Model:
"""
Graph constructor for ResNet implementation.
:param x_tens: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param sec_settings: The settings for each section in the ResNet model
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
channels_last = K.image_data_format() == "channels_last"
if x_tens is None:
input_shape = (224, 224, 3) if channels_last else (3, 224, 224)
x_tens = layers.Input(shape=input_shape)
out = _input(
"input",
x_tens,
training,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
for sec_index, section in enumerate(sec_settings):
out = section.create(
name="sections.{}".format(sec_index),
x_tens=out,
training=training,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
beta_initializer=beta_initializer,
gamma_initializer=gamma_initializer,
)
outputs = _classifier(
"classifier",
out,
training,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
)
return Model(inputs=x_tens, outputs=outputs)
key=["resnet50", "resnet_50", "resnet-50", "resnetv1_50", "resnetv1-50"],
input_shape=(224, 224, 3),
domain="cv",
sub_domain="classification",
architecture="resnet_v1",
sub_architecture="50",
default_dataset="imagenet",
default_desc="base",
The provided code snippet includes necessary dependencies for implementing the `resnet152` function. Write a Python function `def resnet152( inputs: tensorflow.Tensor = None, training: Union[bool, tensorflow.Tensor] = True, num_classes: int = 1000, class_type: str = None, kernel_initializer=keras.initializers.GlorotUniform(), bias_initializer=keras.initializers.GlorotUniform(), beta_initializer=keras.initializers.GlorotUniform(), gamma_initializer=keras.initializers.GlorotUniform(), ) -> keras.models.Model` to solve the following problem:
Standard ResNet152 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph
Here is the function:
def resnet152(
inputs: tensorflow.Tensor = None,
training: Union[bool, tensorflow.Tensor] = True,
num_classes: int = 1000,
class_type: str = None,
kernel_initializer=keras.initializers.GlorotUniform(),
bias_initializer=keras.initializers.GlorotUniform(),
beta_initializer=keras.initializers.GlorotUniform(),
gamma_initializer=keras.initializers.GlorotUniform(),
) -> keras.models.Model:
"""
Standard ResNet152 implementation;
expected input shape is (B, 224, 224, 3)
:param inputs: The input tensor to the ResNet architecture
:param training: bool or Tensor to specify if the model should be run
in training or inference mode
:param num_classes: The number of classes to classify
:param class_type: One of [single, multi, None] to support multi class training.
Default single. If None, then will not add the fully connected at the end.
:param kernel_initializer: Initializer to use for the conv and
fully connected kernels
:param bias_initializer: Initializer to use for the bias in the fully connected
:param beta_initializer: Initializer to use for the batch norm beta variables
:param gamma_initializer: Initializer to use for the batch norm gamma variables
:return: the output tensor from the created graph
"""
sec_settings = [
ResNetSection(
num_blocks=3,
out_channels=256,
downsample=False,
proj_channels=64,
),
ResNetSection(
num_blocks=8,
out_channels=512,
downsample=True,
proj_channels=128,
),
ResNetSection(
num_blocks=36,
out_channels=1024,
downsample=True,
proj_channels=256,
),
ResNetSection(
num_blocks=3,
out_channels=2048,
downsample=True,
proj_channels=512,
),
]
return resnet_const(
inputs,
training,
sec_settings,
num_classes,
class_type,
kernel_initializer,
bias_initializer,
beta_initializer,
gamma_initializer,
) | Standard ResNet152 implementation; expected input shape is (B, 224, 224, 3) :param inputs: The input tensor to the ResNet architecture :param training: bool or Tensor to specify if the model should be run in training or inference mode :param num_classes: The number of classes to classify :param class_type: One of [single, multi, None] to support multi class training. Default single. If None, then will not add the fully connected at the end. :param kernel_initializer: Initializer to use for the conv and fully connected kernels :param bias_initializer: Initializer to use for the bias in the fully connected :param beta_initializer: Initializer to use for the batch norm beta variables :param gamma_initializer: Initializer to use for the batch norm gamma variables :return: the output tensor from the created graph |
21,128 | import tensorflow
from sparseml.keras.utils import keras
The provided code snippet includes necessary dependencies for implementing the `sparsity` function. Write a Python function `def sparsity(model: keras.Model)` to solve the following problem:
Retrieve sparsity of a Keras model :param model: a Keras model :return: (1) model sparsity, (2) dictionary of layer sparsity
Here is the function:
def sparsity(model: keras.Model):
"""
Retrieve sparsity of a Keras model
:param model: a Keras model
:return: (1) model sparsity, (2) dictionary of layer sparsity
"""
zero = tensorflow.constant(0, dtype=tensorflow.float32)
model_weight_size = 0
model_zeros = 0
sparsity_dict = {}
for layer in model.layers:
layer_sparsity_dict = {}
for i, weight in enumerate(layer.trainable_weights):
mask = tensorflow.cast(tensorflow.equal(weight, zero), tensorflow.uint8)
weight_size = tensorflow.size(weight)
zeros = tensorflow.cast(
tensorflow.math.count_nonzero(mask), tensorflow.int32
)
layer_sparsity_dict[weight.name] = zeros / weight_size
model_weight_size += weight_size
model_zeros += zeros
sparsity_dict[layer.name] = layer_sparsity_dict
model_sparsity = model_zeros / model_weight_size
return model_sparsity, sparsity_dict | Retrieve sparsity of a Keras model :param model: a Keras model :return: (1) model sparsity, (2) dictionary of layer sparsity |
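The bookkeeping in `sparsity` can be shown without TensorFlow; the sketch below does the same per-weight zero counting and aggregation over plain Python lists (the nested-dict input format is an assumption made for the example, not the actual Keras weight layout):

```python
def list_sparsity(weights):
    # weights: {layer_name: {weight_name: flat list of values}} (hypothetical)
    total_size, total_zeros, per_layer = 0, 0, {}
    for layer_name, tensors in weights.items():
        layer_dict = {}
        for weight_name, values in tensors.items():
            # count exact zeros, mirroring the tensorflow.equal(weight, 0) mask
            zeros = sum(1 for v in values if v == 0)
            layer_dict[weight_name] = zeros / len(values)
            total_size += len(values)
            total_zeros += zeros
        per_layer[layer_name] = layer_dict
    return total_zeros / total_size, per_layer

model_sparsity, layer_sparsity = list_sparsity(
    {"dense": {"kernel": [0.0, 1.5, 0.0, 2.0], "bias": [0.0, 0.3]}}
)
```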
21,129 | import tensorflow
def assign(lhs, rhs, name=None):
if hasattr(tensorflow, "assign"):
return tensorflow.assign(lhs, rhs, name=name)
else:
return lhs.assign(rhs, name=name) | null |
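The `hasattr` dispatch in `assign` bridges TF1 (module-level `tf.assign`) and TF2 (`Variable.assign` method). The pattern can be exercised without TensorFlow using stand-in objects; both stand-ins below are hypothetical, not real TensorFlow types:

```python
from types import SimpleNamespace

class FakeVariable:
    # stands in for a TF2 tf.Variable, which carries its own assign method
    def assign(self, rhs, name=None):
        return ("method", rhs)

def compat_assign(tf_module, lhs, rhs, name=None):
    # same dispatch as the helper above: prefer the module-level API if present
    if hasattr(tf_module, "assign"):
        return tf_module.assign(lhs, rhs, name=name)
    return lhs.assign(rhs, name=name)

tf1_like = SimpleNamespace(assign=lambda lhs, rhs, name=None: ("module", rhs))
tf2_like = SimpleNamespace()  # no module-level assign, so the method is used
```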
21,130 | import threading
from contextlib import contextmanager
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Union
from sparseml.core.event import EventType
from sparseml.core.framework import Framework
from sparseml.core.helpers import log_model_info, should_log_model_info
from sparseml.core.lifecycle import SparsificationLifecycle
from sparseml.core.logger import BaseLogger, LoggerManager
from sparseml.core.recipe import Recipe
from sparseml.core.state import ModifiedState, State
def active_session() -> SparseSession:
"""
:return: the active session for sparsification
"""
global _local_storage
return getattr(_local_storage, "session", _global_session)
class Framework(Enum):
"""
An Enum to represent different frameworks recognized by SparseML
"""
general = "general"
pytorch = "pytorch"
tensorflow = "tensorflow"
onnx = "onnx"
keras = "keras"
jax = "jax"
def from_str(cls, framework: str) -> "Framework":
"""
Factory method for creating a framework enum from a string.
The string is case insensitive and whitespace is stripped before
checking for a match.
:param framework: The string to convert to a framework
:return: The corresponding framework enum for the given string
"""
framework = framework.lower().strip()
if framework == "general":
return cls.general
if framework == "pytorch":
return cls.pytorch
if framework == "tensorflow":
return cls.tensorflow
if framework == "onnx":
return cls.onnx
if framework == "keras":
return cls.keras
if framework == "jax":
return cls.jax
raise ValueError(f"Unknown framework: {framework}")
def __str__(self):
"""
:return: The string representation of the framework
"""
return self.value
def formatted(self) -> str:
"""
:return: The formatted string representation of the framework
"""
if self == self.general:
return "General"
if self == self.pytorch:
return "PyTorch"
if self == self.tensorflow:
return "TensorFlow"
if self == self.onnx:
return "ONNX"
if self == self.keras:
return "Keras"
if self == self.jax:
return "JAX"
raise ValueError(f"Unknown framework: {self}")
def class_name(self) -> str:
"""
Get the class name for the framework.
This is the formatted string representation of the framework.
If the framework is `general`, an empty string is returned
:return: The class name for the framework
"""
return self.formatted() if self != self.general else ""
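Because each `Framework` member's value is already the lowercase string that `from_str` matches against, the if/elif chain can be collapsed into the `Enum` value-lookup constructor. A minimal sketch with a hypothetical two-member enum:

```python
from enum import Enum

class Fw(Enum):
    general = "general"
    pytorch = "pytorch"

    @classmethod
    def from_str(cls, framework: str) -> "Fw":
        normalized = framework.lower().strip()
        try:
            # Enum(value) looks a member up by its value string
            return cls(normalized)
        except ValueError:
            raise ValueError(f"Unknown framework: {framework}") from None

parsed = Fw.from_str("  PyTorch ")
```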
class ModifiedState:
"""
A dataclass to represent a modified model,
optimizer, and loss
:param model: The modified model
:param optimizer: The modified optimizer
:param loss: The modified loss
:param modifier_data: The modifier data used to modify the
model, optimizer, and loss
"""
model: Optional[Any] = None
optimizer: Optional[Any] = None
loss: Optional[Any] = None
modifier_data: Optional[List[Dict[str, Any]]] = None
def __init__(self, model, optimizer, loss, modifier_data):
self.model = model
self.optimizer = optimizer
self.loss = loss
self.modifier_data = modifier_data
The provided code snippet includes necessary dependencies for implementing the `initialize` function. Write a Python function `def initialize( framework: Optional[Framework] = None, recipe: Union[str, List[str], "Recipe", List["Recipe"], None] = None, recipe_stage: Union[str, List[str], None] = None, recipe_args: Optional[Dict[str, Any]] = None, model: Optional[Any] = None, teacher_model: Optional[Any] = None, optimizer: Optional[Any] = None, attach_optim_callbacks: bool = True, train_data: Optional[Any] = None, val_data: Optional[Any] = None, test_data: Optional[Any] = None, calib_data: Optional[Any] = None, copy_data: bool = True, start: Optional[float] = None, steps_per_epoch: Optional[int] = None, batches_per_step: Optional[int] = None, **kwargs, ) -> ModifiedState` to solve the following problem:
A method to initialize the active session for sparsification :param framework: the framework to use for the sparsification :param recipe: the recipe to use for the sparsification, can be a path to a recipe file, a raw recipe string, a recipe object, or a list of recipe objects. :param recipe_stage: the stage to target for the sparsification :param recipe_args: the args to use for overriding the recipe defaults :param model: the model to sparsify :param teacher_model: the teacher model to use for knowledge distillation :param optimizer: the optimizer to use for the sparsification :param attach_optim_callbacks: True to attach the optimizer callbacks to the sparsification lifecycle, False otherwise :param train_data: the training data to use for the sparsification :param val_data: the validation data to use for the sparsification :param test_data: the testing data to use for the sparsification :param calib_data: the calibration data to use for the sparsification :param copy_data: True to copy the data, False otherwise :param start: the start epoch to use for the sparsification :param steps_per_epoch: the number of steps per epoch to use for the sparsification :param batches_per_step: the number of batches per step to use for sparsification :param kwargs: additional kwargs to pass to the lifecycle's initialize method :return: the modified state of the active session after initializing
Here is the function:
def initialize(
framework: Optional[Framework] = None,
recipe: Union[str, List[str], "Recipe", List["Recipe"], None] = None,
recipe_stage: Union[str, List[str], None] = None,
recipe_args: Optional[Dict[str, Any]] = None,
model: Optional[Any] = None,
teacher_model: Optional[Any] = None,
optimizer: Optional[Any] = None,
attach_optim_callbacks: bool = True,
train_data: Optional[Any] = None,
val_data: Optional[Any] = None,
test_data: Optional[Any] = None,
calib_data: Optional[Any] = None,
copy_data: bool = True,
start: Optional[float] = None,
steps_per_epoch: Optional[int] = None,
batches_per_step: Optional[int] = None,
**kwargs,
) -> ModifiedState:
"""
A method to initialize the active session for sparsification
:param framework: the framework to use for the sparsification
:param recipe: the recipe to use for the sparsification, can be a path to a
recipe file, a raw recipe string, a recipe object, or a list of recipe objects.
:param recipe_stage: the stage to target for the sparsification
:param recipe_args: the args to use for overriding the recipe defaults
:param model: the model to sparsify
:param teacher_model: the teacher model to use for knowledge distillation
:param optimizer: the optimizer to use for the sparsification
:param attach_optim_callbacks: True to attach the optimizer callbacks to the
sparsification lifecycle, False otherwise
:param train_data: the training data to use for the sparsification
:param val_data: the validation data to use for the sparsification
:param test_data: the testing data to use for the sparsification
:param calib_data: the calibration data to use for the sparsification
:param copy_data: True to copy the data, False otherwise
:param start: the start epoch to use for the sparsification
:param steps_per_epoch: the number of steps per epoch to use for the
sparsification
:param batches_per_step: the number of batches per step to use for
sparsification
:param kwargs: additional kwargs to pass to the lifecycle's initialize method
:return: the modified state of the active session after initializing
"""
return active_session().initialize(
framework=framework,
recipe=recipe,
recipe_stage=recipe_stage,
recipe_args=recipe_args,
model=model,
teacher_model=teacher_model,
optimizer=optimizer,
attach_optim_callbacks=attach_optim_callbacks,
train_data=train_data,
val_data=val_data,
test_data=test_data,
calib_data=calib_data,
copy_data=copy_data,
start=start,
steps_per_epoch=steps_per_epoch,
batches_per_step=batches_per_step,
**kwargs,
) | A method to initialize the active session for sparsification :param framework: the framework to use for the sparsification :param recipe: the recipe to use for the sparsification, can be a path to a recipe file, a raw recipe string, a recipe object, or a list of recipe objects. :param recipe_stage: the stage to target for the sparsification :param recipe_args: the args to use for overriding the recipe defaults :param model: the model to sparsify :param teacher_model: the teacher model to use for knowledge distillation :param optimizer: the optimizer to use for the sparsification :param attach_optim_callbacks: True to attach the optimizer callbacks to the sparsification lifecycle, False otherwise :param train_data: the training data to use for the sparsification :param val_data: the validation data to use for the sparsification :param test_data: the testing data to use for the sparsification :param calib_data: the calibration data to use for the sparsification :param copy_data: True to copy the data, False otherwise :param start: the start epoch to use for the sparsification :param steps_per_epoch: the number of steps per epoch to use for the sparsification :param batches_per_step: the number of batches per step to use for sparsification :param kwargs: additional kwargs to pass to the lifecycle's initialize method :return: the modified state of the active session after initializing |
21,131 | import threading
from contextlib import contextmanager
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional, Union
from sparseml.core.event import EventType
from sparseml.core.framework import Framework
from sparseml.core.helpers import log_model_info, should_log_model_info
from sparseml.core.lifecycle import SparsificationLifecycle
from sparseml.core.logger import BaseLogger, LoggerManager
from sparseml.core.recipe import Recipe
from sparseml.core.state import ModifiedState, State
def active_session() -> SparseSession:
"""
:return: the active session for sparsification
"""
global _local_storage
return getattr(_local_storage, "session", _global_session)
class ModifiedState:
"""
A dataclass to represent a modified model,
optimizer, and loss
:param model: The modified model
:param optimizer: The modified optimizer
:param loss: The modified loss
:param modifier_data: The modifier data used to modify the
model, optimizer, and loss
"""
model: Optional[Any] = None
optimizer: Optional[Any] = None
loss: Optional[Any] = None
modifier_data: Optional[List[Dict[str, Any]]] = None
def __init__(self, model, optimizer, loss, modifier_data):
self.model = model
self.optimizer = optimizer
self.loss = loss
self.modifier_data = modifier_data
The provided code snippet includes necessary dependencies for implementing the `finalize` function. Write a Python function `def finalize(**kwargs) -> ModifiedState` to solve the following problem:
Method to finalize the active session for sparsification :param kwargs: additional kwargs to pass to the lifecycle's finalize method :return: the modified state of the active session after finalizing
Here is the function:
def finalize(**kwargs) -> ModifiedState:
"""
Method to finalize the active session for sparsification
:param kwargs: additional kwargs to pass to the lifecycle's finalize method
:return: the modified state of the active session after finalizing
"""
return active_session().finalize(**kwargs) | Method to finalize the active session for sparsification :param kwargs: additional kwargs to pass to the lifecycle's finalize method :return: the modified state of the active session after finalizing |
21,132 | import json
import logging
import os
import re
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Union
import yaml
from pydantic import Field, root_validator
from sparseml.core.framework import Framework
from sparseml.core.modifier import StageModifiers
from sparseml.core.modifier.modifier import Modifier
from sparseml.core.recipe.args import RecipeArgs
from sparseml.core.recipe.base import RecipeBase
from sparseml.core.recipe.metadata import RecipeMetaData
from sparseml.core.recipe.stage import RecipeStage
from sparsezoo import Model
def _load_json_or_yaml_string(content: str) -> Dict[str, Any]:
# try loading as json first, then yaml
# if both fail, raise a ValueError
try:
return json.loads(content)
except json.JSONDecodeError:
try:
return yaml.safe_load(content)
except yaml.YAMLError as err:
raise ValueError(f"Could not parse recipe from string {content}") from err | null |
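The try-one-parser-then-the-other shape of `_load_json_or_yaml_string` generalizes. In this stdlib-only sketch, `ast.literal_eval` stands in for `yaml.safe_load` so the fallback path can be shown without PyYAML installed:

```python
import ast
import json

def load_config(content: str):
    # try JSON first, then a second parser; wrap both failures in ValueError
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        try:
            return ast.literal_eval(content)
        except (ValueError, SyntaxError) as err:
            raise ValueError(
                f"Could not parse config from string {content!r}"
            ) from err

as_json = load_config('{"lr": 0.1}')      # parsed on the first attempt
as_fallback = load_config("{'lr': 0.1}")  # single quotes are invalid JSON
```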
21,133 | import json
import logging
import os
import re
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Union
import yaml
from pydantic import Field, root_validator
from sparseml.core.framework import Framework
from sparseml.core.modifier import StageModifiers
from sparseml.core.modifier.modifier import Modifier
from sparseml.core.recipe.args import RecipeArgs
from sparseml.core.recipe.base import RecipeBase
from sparseml.core.recipe.metadata import RecipeMetaData
from sparseml.core.recipe.stage import RecipeStage
from sparsezoo import Model
The provided code snippet includes necessary dependencies for implementing the `_parse_recipe_from_md` function. Write a Python function `def _parse_recipe_from_md(file_path, yaml_str)` to solve the following problem:
extract YAML front matter from markdown recipe card. Copied from sparseml.optim.helpers:_load_yaml_str_from_file :param file_path: path to recipe file :param yaml_str: string read from file_path :return: parsed yaml_str with README info removed
Here is the function:
def _parse_recipe_from_md(file_path, yaml_str):
"""
extract YAML front matter from markdown recipe card. Copied from
sparseml.optim.helpers:_load_yaml_str_from_file
:param file_path: path to recipe file
:param yaml_str: string read from file_path
:return: parsed yaml_str with README info removed
"""
# extract YAML front matter from markdown recipe card
# adapted from
# https://github.com/jonbeebe/frontmatter/blob/master/frontmatter
yaml_delim = r"(?:---|\+\+\+)"
yaml = r"(.*?)"
re_pattern = r"^\s*" + yaml_delim + yaml + yaml_delim
regex = re.compile(re_pattern, re.S | re.M)
result = regex.search(yaml_str)
if result:
yaml_str = result.group(1)
else:
# fail when we know we should have extracted front matter
raise RuntimeError(
"Could not extract YAML front matter from recipe card:"
" {}".format(file_path)
)
return yaml_str | extract YAML front matter from markdown recipe card. Copied from sparseml.optim.helpers:_load_yaml_str_from_file :param file_path: path to recipe file :param yaml_str: string read from file_path :return: parsed yaml_str with README info removed |
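The front-matter regex above can be exercised on a tiny markdown recipe card (the card text below is made up for the example, not taken from the SparseML repo):

```python
import re

card = """---
modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: 10.0
---

# Recipe card

Prose that must be stripped before the YAML is parsed.
"""

# same pattern as _parse_recipe_from_md: lazy group between --- or +++ fences
yaml_delim = r"(?:---|\+\+\+)"
regex = re.compile(r"^\s*" + yaml_delim + r"(.*?)" + yaml_delim, re.S | re.M)
front_matter = regex.search(card).group(1)
```

Only the YAML between the delimiters survives; the markdown prose after the closing `---` is dropped.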
21,134 | import json
import logging
import os
import re
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Union
import yaml
from pydantic import Field, root_validator
from sparseml.core.framework import Framework
from sparseml.core.modifier import StageModifiers
from sparseml.core.modifier.modifier import Modifier
from sparseml.core.recipe.args import RecipeArgs
from sparseml.core.recipe.base import RecipeBase
from sparseml.core.recipe.metadata import RecipeMetaData
from sparseml.core.recipe.stage import RecipeStage
from sparsezoo import Model
class Modifier(BaseModel, ModifierInterface, MultiFrameworkObject):
"""
A base class for all modifiers to inherit from.
Modifiers are used to modify the training process for a model.
Defines base attributes and methods available to all modifiers
:param index: The index of the modifier in the list of modifiers
for the model
:param group: The group name for the modifier
:param start: The start step for the modifier
:param end: The end step for the modifier
:param update: The update step for the modifier
"""
index: int = None
group: str = None
start: float = None
end: Optional[float] = None
update: Optional[float] = None
initialized_structure_: bool = False
initialized_: bool = False
finalized_: bool = False
started_: bool = False
ended_: bool = False
def initialized_structure(self) -> bool:
"""
:return: True if the modifier structure has been
applied to the model
"""
return self.initialized_structure_
def initialized(self) -> bool:
"""
:return: True if the modifier has been initialized
"""
return self.initialized_
def finalized(self) -> bool:
"""
:return: True if the modifier has been finalized
"""
return self.finalized_
def check_initialized(self):
"""
:raises RuntimeError: if the modifier has not been initialized
"""
if not self.initialized_:
raise RuntimeError("modifier has not been initialized")
def calculate_start(self) -> float:
"""
Calculate and return the start epoch for the modifier.
:return: the start epoch for the modifier if set, else -1
"""
return self.start if self.start is not None else -1
def calculate_end(self) -> float:
"""
:return: the end epoch for the modifier if set, else -1
"""
return self.end if self.end is not None else -1
def pre_initialize_structure(self, state: State, **kwargs):
"""
:param state: The current state of the model
:param kwargs: Additional arguments for initializing the structure
of the model in question
"""
self.on_initialize_structure(state, **kwargs)
self.initialized_structure_ = True
def initialize(self, state: State, **kwargs):
"""
Initialize the modifier for the given model and state.
:raises RuntimeError: if the modifier has already been finalized
:param state: The current state of the model
:param kwargs: Additional arguments for initializing the modifier
"""
if self.initialized_:
return
if self.finalized_:
raise RuntimeError("cannot initialize a finalized modifier")
if state.start_event is None:
return
# ignore modifier structure initialized from one-shot
if state.start_event.current_index >= 0 and self.calculate_start() < 0:
return
# if modifier should have ended by current index, don't initialize
if (
self.calculate_end() >= 0
and state.start_event.current_index >= self.calculate_end()
):
return
initialized = self.on_initialize(state=state, **kwargs)
if not isinstance(initialized, bool):
raise ValueError(
"on_initialize must return a boolean value; "
"True for success, False for not initialized"
)
self.initialized_ = initialized
if self.should_start(state.start_event):
self.on_start(state, state.start_event, **kwargs)
self.started_ = True
def finalize(self, state: State, **kwargs):
"""
Finalize the modifier for the given model and state.
:raises RuntimeError: if the modifier has not been initialized
:param state: The current state of the model
:param kwargs: Additional arguments for finalizing the modifier
"""
if self.finalized_:
return
if not self.initialized_:
raise RuntimeError("cannot finalize an uninitialized modifier")
finalized = self.on_finalize(state=state, **kwargs)
if not isinstance(finalized, bool):
raise ValueError(
"on_finalize must return a boolean value; "
"True for success, False for not finalized"
)
self.finalized_ = finalized
def update_event(self, state: State, event: Event, **kwargs):
"""
Update modifier based on the given event. In turn calls
on_start, on_update, and on_end based on the event and
modifier settings. Returns immediately if the modifier is
not initialized
:raises RuntimeError: if the modifier has been finalized
:param state: The current state of sparsification
:param event: The event to update the modifier with
:param kwargs: Additional arguments for updating the modifier
"""
if not self.initialized_:
return
if self.finalized_:
raise RuntimeError("cannot update a finalized modifier")
self.on_event(state, event, **kwargs)
# handle starting the modifier if needed
if (
event.type_ == EventType.BATCH_START
and not self.started_
and self.should_start(event)
):
self.on_start(state, event, **kwargs)
self.started_ = True
self.on_update(state, event, **kwargs)
return
# handle ending the modifier if needed
if (
event.type_ == EventType.BATCH_END
and not self.ended_
and self.should_end(event)
):
self.on_end(state, event, **kwargs)
self.ended_ = True
self.on_update(state, event, **kwargs)
return
if self.started_ and not self.ended_:
self.on_update(state, event, **kwargs)
def should_start(self, event: Event) -> bool:
"""
:param event: The event to check if the modifier should start
:return: True if the modifier should start based on the given event
"""
if self.start is None:
return False
current = event.current_index
return self.start <= current and (self.end is None or current < self.end)
def should_end(self, event: Event):
"""
:param event: The event to check if the modifier should end
:return: True if the modifier should end based on the given event
"""
current = event.current_index
return self.end is not None and current >= self.end
def on_initialize_structure(self, state: State, **kwargs):
"""
on_initialize_structure is called before the model is initialized
with the modifier structure. Must be implemented by the inheriting
modifier.
:param state: The current state of the model
:param kwargs: Additional arguments for initializing the structure
of the model in question
"""
raise NotImplementedError()
def on_initialize(self, state: State, **kwargs) -> bool:
"""
on_initialize is called on modifier initialization and
must be implemented by the inheriting modifier.
:param state: The current state of the model
:param kwargs: Additional arguments for initializing the modifier
:return: True if the modifier was initialized successfully,
False otherwise
"""
raise NotImplementedError()
def on_finalize(self, state: State, **kwargs) -> bool:
"""
on_finalize is called on modifier finalization and
must be implemented by the inheriting modifier.
:param state: The current state of the model
:param kwargs: Additional arguments for finalizing the modifier
:return: True if the modifier was finalized successfully,
False otherwise
"""
raise NotImplementedError()
def on_start(self, state: State, event: Event, **kwargs):
"""
on_start is called when the modifier starts and
must be implemented by the inheriting modifier.
:param state: The current state of the model
:param event: The event that triggered the start
:param kwargs: Additional arguments for starting the modifier
"""
raise NotImplementedError()
def on_update(self, state: State, event: Event, **kwargs):
"""
on_update is called when the model in question must be
updated based on passed in event. Must be implemented by the
inheriting modifier.
:param state: The current state of the model
:param event: The event that triggered the update
:param kwargs: Additional arguments for updating the model
"""
raise NotImplementedError()
def on_end(self, state: State, event: Event, **kwargs):
"""
on_end is called when the modifier ends and must be implemented
by the inheriting modifier.
:param state: The current state of the model
:param event: The event that triggered the end
:param kwargs: Additional arguments for ending the modifier
"""
raise NotImplementedError()
def on_event(self, state: State, event: Event, **kwargs):
"""
on_event is called whenever an event is triggered
:param state: The current state of the model
:param event: The event that triggered the update
:param kwargs: Additional arguments for updating the model
"""
pass
The provided code snippet includes necessary dependencies for implementing the `create_recipe_string_from_modifiers` function. Write a Python function `def create_recipe_string_from_modifiers( modifiers: List[Modifier], modifier_group_name: Optional[str] = None, ) -> str` to solve the following problem:
Create a recipe string from a list of Modifier instances (Note: this pathway assumes there's only one stage in the recipe associated by the modifier_group_name, if None, a dummy default group_name will be assigned.) :param modifiers: The list of Modifier instances :param modifier_group_name: The stage_name of the recipe, if `oneshot` or `train` the run_type of the recipe will be inferred from the modifier_group_name, if None, a dummy default group_name will be assigned. :return: A string in yaml format from which the recipe can be created
Here is the function:
def create_recipe_string_from_modifiers(
modifiers: List[Modifier],
modifier_group_name: Optional[str] = None,
) -> str:
"""
Create a recipe string from a list of Modifier instances
(Note: this pathway assumes there's only one stage in the recipe
associated by the modifier_group_name, if None, a dummy default
group_name will be assigned.)
:param modifiers: The list of Modifier instances
:param modifier_group_name: The stage_name of the recipe,
if `oneshot` or `train` the run_type of the recipe will be
inferred from the modifier_group_name, if None, a dummy default
group_name will be assigned.
:return: A string in yaml format from which the recipe can be created
"""
# Recipe(s) are yaml/json strings of the following format:
# run_type_stage: # should contain oneshot/train
# modifiers:
# ModifierTypeOne:
# start: 0.0
# end: 2.0
# ...
# ModifierTypeTwo:
# ...
# Create a recipe string from the modifiers
default_group_name: str = "DEFAULT"
modifier_group_name: str = modifier_group_name or default_group_name
recipe_dict = {
f"{modifier_group_name}_stage": {
f"{default_group_name}_modifiers": {
modifier.__class__.__name__: modifier.dict() for modifier in modifiers
}
}
}
recipe_str: str = yaml.dump(recipe_dict)
return recipe_str | Create a recipe string from a list of Modifier instances (Note: this pathway assumes there's only one stage in the recipe associated by the modifier_group_name, if None, a dummy default group_name will be assigned.) :param modifiers: The list of Modifier instances :param modifier_group_name: The stage_name of the recipe, if `oneshot` or `train` the run_type of the recipe will be inferred from the modifier_group_name, if None, a dummy default group_name will be assigned. :return: A string in yaml format from which the recipe can be created |
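The nesting produced by `create_recipe_string_from_modifiers` can be checked without sparseml; `PruningModifier` here is a hypothetical stand-in exposing only the two things the function uses (`__class__.__name__` and `.dict()`):

```python
class PruningModifier:  # hypothetical stand-in for a sparseml Modifier
    def dict(self):
        return {"start": 0.0, "end": 2.0}

modifiers = [PruningModifier()]
group_name = "oneshot"  # falls back to "DEFAULT" when None is passed
recipe_dict = {
    f"{group_name}_stage": {
        "DEFAULT_modifiers": {m.__class__.__name__: m.dict() for m in modifiers}
    }
}
# the real function then serializes this dict with yaml.dump(recipe_dict)
assert recipe_dict["oneshot_stage"]["DEFAULT_modifiers"]["PruningModifier"]["end"] == 2.0
```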
21,135 | import logging
import os
import time
import warnings
from abc import ABC
from contextlib import contextmanager
from datetime import datetime
from logging import CRITICAL, DEBUG, ERROR, INFO, WARN, Logger
from pathlib import Path
from types import ModuleType
from typing import Any, Callable, Dict, List, Optional, Union
from sparseml.core.logger.utils import (
FrequencyManager,
FrequencyType,
LoggingModeType,
LogStepType,
)
from sparseml.utils import is_package_available
def _create_dirs(path: str):
path = Path(path).expanduser().absolute()
path.mkdir(parents=True, exist_ok=True) | null |
21,136 | from typing import Literal, Optional, Union
LogStepType = Union[int, float, None]
The provided code snippet includes necessary dependencies for implementing the `log_ready` function. Write a Python function `def log_ready( current_log_step: Optional[LogStepType], last_log_step: Optional[LogStepType], log_frequency: Optional[LogStepType], last_model_update_step: Optional[LogStepType] = None, check_model_update: bool = False, )` to solve the following problem:
Check if we are ready to log again based on the given parameters (Stateless version of FrequencyManager().log_ready) Conditions for readiness: - log frequency is not None - current log step is None - current log step greater than or equal to the last log step plus the log frequency - if check_model_update is True, then the last model update step must be greater than or equal to the last log step, and the current log step must be greater than or equal to the last model update step plus the log frequency :param current_log_step: The current log step :param last_log_step: The last step at which logging occurred :param log_frequency: The frequency to log at :param last_model_update_step: The last step at which the model was updated :param check_model_update: If True, will check if the model has been updated since the last log step and if log_frequency steps have passed since the last model update; Defaults to False. :return: True if logging cadence has been reached again False otherwise
Here is the function:
def log_ready(
current_log_step: Optional[LogStepType],
last_log_step: Optional[LogStepType],
log_frequency: Optional[LogStepType],
last_model_update_step: Optional[LogStepType] = None,
check_model_update: bool = False,
):
"""
Check if we are ready to log again based on the given parameters
(Stateless version of FrequencyManager().log_ready)
Conditions for readiness:
- log frequency is not None
- current log step is None
- current log step greater than or equal to the last log step
plus the log frequency
- if check_model_update is True, then the last model update step
must be greater than or equal to the last log step, and the
current log step must be greater than or equal to the
last model update step plus the log frequency
:param current_log_step: The current log step
:param last_log_step: The last step at which logging occurred
:param log_frequency: The frequency to log at
:param last_model_update_step: The last step at which the model was updated
:param check_model_update: If True, will check if the model has been updated
since the last log step and if log_frequency steps have passed since the
last model update; Defaults to False.
:return: True if logging cadence has been reached again False otherwise
"""
# format is used to avoid floating point errors
# e.g. 0.1 + 0.2 != 0.3
# format(0.1 + 0.2, ".4f") == format(0.3, ".4f")
cadence_reached: bool = log_frequency is not None and (
current_log_step is None
or last_log_step is None
or current_log_step >= float(format(last_log_step + log_frequency, ".4f"))
)
if not cadence_reached or not check_model_update:
# early return if cadence not reached or,
# model update check not requested
return cadence_reached
model_updated_since_last_log: bool = (
last_model_update_step is None
or last_log_step is None
or current_log_step is None
or (
last_model_update_step >= last_log_step
and current_log_step
>= float(format(log_frequency + last_model_update_step, ".4f"))
)
)
return cadence_reached and model_updated_since_last_log | Check if we are ready to log again based on the given parameters (Stateless version of FrequencyManager().log_ready) Conditions for readiness: - log frequency is not None - current log step is None - current log step greater than or equal to the last log step plus the log frequency - if check_model_update is True, then the last model update step must be greater than or equal to the last log step, and the current log step must be greater than or equal to the last model update step plus the log frequency :param current_log_step: The current log step :param last_log_step: The last step at which logging occurred :param log_frequency: The frequency to log at :param last_model_update_step: The last step at which the model was updated :param check_model_update: If True, will check if the model has been updated since the last log step and if log_frequency steps have passed since the last model update; Defaults to False. :return: True if logging cadence has been reached again False otherwise |
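The `format(..., ".4f")` guard matters because a naive float sum can overshoot the threshold and under-trigger the cadence; a minimal sketch of just that comparison:

```python
def cadence_reached(current, last, freq):
    # round through a fixed-point string so 0.1 + 0.2 compares equal to 0.3
    return current >= float(format(last + freq, ".4f"))

assert 0.3 < 0.1 + 0.2                  # raw sum is 0.30000000000000004
assert cadence_reached(0.3, 0.1, 0.2)   # rounded comparison still passes
assert not cadence_reached(0.25, 0.1, 0.2)
```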
21,137 | from typing import Literal, Optional, Union
The provided code snippet includes necessary dependencies for implementing the `_basic_normalization` function. Write a Python function `def _basic_normalization(value: str) -> str` to solve the following problem:
Basic normalization for string values. Removes leading and trailing whitespace and converts to lowercase. :param value: The value to normalize :return: The normalized value
Here is the function:
def _basic_normalization(value: str) -> str:
"""
Basic normalization for string values.
Removes leading and trailing whitespace and converts to lowercase.
:param value: The value to normalize
:return: The normalized value
"""
return value.strip().lower() | Basic normalization for string values. Removes leading and trailing whitespace and converts to lowercase. :param value: The value to normalize :return: The normalized value |
21,138 | from typing import Any, Generator, Optional, Tuple, Union
from sparseml.core.logger import LoggerManager
from sparseml.core.model.base import ModifiableModel
from sparseml.core.state import State
class ModifiableModel(Generic[MT, LT, PT], MultiFrameworkObject):
"""
A MultiFrameWorkObject for holding a model. Also defines the
contract that must be followed for framework specific implementations.
Automatically instantiates the correct subclass object based on the
specified framework if it exists. If the framework is not specified,
the default "general" framework will be used. The inheritors of this class
must be named in the following format: ModifiableModel{framework.class_name()}
to be searchable by the MultiFrameworkObject factory method.
:param framework: the framework the model is in
:param layer_prefix: name of model attribute that contains the list of layers, i.e.
model.decoder for OPT or just model for Llama
:param model: the model object
"""
model: MT = None
def __init__(
self,
framework: Optional[Framework] = None,
model=None,
layer_prefix: Optional[str] = None,
):
self.model = model
self._layer_prefix = layer_prefix
def get_layers_params(
self, targets: Union[str, List[str]]
) -> Dict[str, ModelParameterizedLayer[LT, PT]]:
"""
:param targets: the targets to get the layers and params for
:return: a dictionary of the layer name to ModelParameterizedLayer instance
for that layer
"""
raise NotImplementedError()
def get_layers(self, targets: Union[str, List[str]]) -> Dict[str, LT]:
"""
:param targets: the targets to get the layers for
:return: a dictionary of the layer name to layer instance for that layer
"""
raise NotImplementedError()
def get_layer(self, target: str) -> LT:
"""
:param target: the target to get the layer for
:return: the layer instance corresponding to the target
"""
raise NotImplementedError()
def set_layer(self, target: str, layer: LT):
"""
:param target: the target to set the layer for
:param layer: the layer instance to set
"""
raise NotImplementedError()
def get_params(self, targets: Union[str, List[str]]) -> Dict[str, PT]:
"""
:param targets: the targets to get the params for
:return: a dictionary of the param name to param instance for that param
"""
raise NotImplementedError()
def get_param(self, target: str) -> PT:
"""
:param target: the target to get the param for
:return: the param instance corresponding to the target
"""
raise NotImplementedError()
def set_param(self, target: str, param: PT):
"""
:param target: the target to set the param for
:param param: the param instance to set
"""
raise NotImplementedError()
def loggable_items(self) -> Generator[Tuple[str, Any], None, None]:
"""
Model level information to be logged for the model
:return a generator that yields a tuple of:
- the name of the loggable item
- the value of the loggable item
"""
@property
def layer_prefix(self) -> Optional[str]:
"""
:return: the name of model attribute that contains the list of layers, i.e.
model.decoder for OPT or just model for Llama
"""
return self._layer_prefix
@layer_prefix.setter
def layer_prefix(self, value: Optional[str]):
"""
:param value: the name of model attribute that contains the list of layers, i.e.
model.decoder for OPT or just model for Llama
"""
self._layer_prefix = value
def get_matching_layer(
self, target: str, name_to_match: str, model: LT
) -> Optional[Tuple[str, LT]]:
"""
:param target: regex layer name to target when searching model
:param name_to_match: name to match targets to
:param model: model to search for targets
"""
raise NotImplementedError()
def qat_active(self) -> bool:
"""
Checks if quantization aware training is set up in the model
:return: True if QAT is active in any layer, False otherwise
"""
raise NotImplementedError()
def get_no_split_params(self) -> Union[str, List[str]]:
"""
Get list of module classes that shouldn't be split when sharding
:return: list of class names that shouldn't be split
"""
raise NotImplementedError()
The provided code snippet includes necessary dependencies for implementing the `should_log_model_info` function. Write a Python function `def should_log_model_info( model: ModifiableModel, loggers: LoggerManager, current_log_step: float, last_log_step: Optional[float] = None, ) -> bool` to solve the following problem:
Check if we should log model level info Criteria: - model has a loggable_items method - state has a logger manager - logger manager is ready to log based on cadence and last log epoch :param model: The model whose info we want to log :param loggers: The logger manager to log to :param current_log_step: The current epoch :param last_log_step: The last step we logged model info at :return: True if we should log model level info, False otherwise
Here is the function:
def should_log_model_info(
model: ModifiableModel,
loggers: LoggerManager,
current_log_step: float,
last_log_step: Optional[float] = None,
) -> bool:
"""
Check if we should log model level info
Criteria:
- model has a loggable_items method
- state has a logger manager
- logger manager is ready to log based on cadence and last log epoch
:param model: The model whose info we want to log
:param loggers: The logger manager to log to
:param current_log_step: The current epoch
:param last_log_step: The last step we logged model info at
:return: True if we should log model level info, False otherwise
"""
return (
hasattr(model, "loggable_items")
and isinstance(loggers, LoggerManager)
and loggers.log_ready(
current_log_step=current_log_step, last_log_step=last_log_step
)
) | Check if we should log model level info Criteria: - model has a loggable_items method - state has a logger manager - logger manager is ready to log based on cadence and last log epoch :param model: The model whose info we want to log :param loggers: The logger manager to log to :param current_log_step: The current epoch :param last_log_step: The last step we logged model info at :return: True if we should log model level info, False otherwise |
21,139 | from typing import Any, Generator, Optional, Tuple, Union
from sparseml.core.logger import LoggerManager
from sparseml.core.model.base import ModifiableModel
from sparseml.core.state import State
def _log_current_step(
logger_manager: LoggerManager, current_log_step: Union[float, int]
):
"""
Log the Current Log Step to the logger_manager
:param logger_manager: The logger manager to log to
:param current_log_step: The logging step
"""
tag = logger_manager.frequency_manager.frequency_type
logger_manager.log_scalar(tag=tag, value=current_log_step, step=current_log_step)
def _log_model_loggable_items(
logger_manager: LoggerManager,
loggable_items: Generator[Tuple[str, Any], None, None],
epoch: float,
):
"""
Log the model level loggable items to the logger_manager
:param logger_manager: The logger manager to log to
:param loggable_items: The loggable items to log, must be a generator of tuples
of the loggable item name and value
:param epoch: The epoch to log
"""
for loggable_item in loggable_items:
log_tag, log_value = loggable_item
if isinstance(log_value, dict):
logger_manager.log_scalars(tag=log_tag, values=log_value, step=epoch)
elif isinstance(log_value, (int, float)):
logger_manager.log_scalar(tag=log_tag, value=log_value, step=epoch)
else:
logger_manager.log_string(tag=log_tag, string=log_value, step=epoch)
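The type-based routing in `_log_model_loggable_items` (dict goes to `log_scalars`, numeric to `log_scalar`, everything else to `log_string`) can be sketched in isolation; `route_loggable` is an illustrative name, not part of sparseml:

```python
def route_loggable(value):
    # mirrors the isinstance dispatch used for loggable items
    if isinstance(value, dict):
        return "log_scalars"
    if isinstance(value, (int, float)):
        return "log_scalar"
    return "log_string"

assert route_loggable({"sparsity": 0.9}) == "log_scalars"
assert route_loggable(0.9) == "log_scalar"
assert route_loggable("pruned") == "log_string"
```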
class State:
"""
State class holds information about the current sparsification state
:param framework: The framework being used
:param model: The model being used for training
:param teacher_model: The teacher model being used for training
:param optimizer: The optimizer being used for training
:param optim_wrapped: Whether or not the optimizer has been wrapped
:param loss: The loss function being used for training
:param batch_data: The current batch of data being used for training
:param data: The data sets being used for training, validation, testing,
and/or calibration, wrapped in a Data instance
:param hardware: Hardware Instance holding info about the target hardware being used
:param start_event: The start event to begin training
:param last_event: The last event to stop training
:param loggers: LoggerManager instance holding all the loggers to log
:param model_log_cadence: The cadence to log model information w.r.t epochs.
If 1, logs every epoch. If 2, logs every other epoch, etc. Default is 1.
"""
framework: Framework
model: ModifiableModel = None
teacher_model: ModifiableModel = None
optimizer: ModifiableOptimizer = None
optim_wrapped: bool = None
loss: Any = None
batch_data: Any = None
data = Data()
hardware = Hardware()
start_event: Event = None
last_event: Event = None
loggers: Optional[LoggerManager] = None
model_log_cadence: Optional[float] = None
_last_log_step: Union[float, int, None] = None
def sparsification_ready(self) -> bool:
return (
self.model is not None
and self.optimizer is not None
# and self.loss is not None
# and self.batch_data is not None
)
def update(
self,
model: Any = None,
teacher_model: Any = None,
optimizer: Any = None,
attach_optim_callbacks: bool = True,
train_data: Any = None,
val_data: Any = None,
test_data: Any = None,
calib_data: Any = None,
copy_data: bool = True,
start: float = None,
steps_per_epoch: int = None,
batches_per_step: int = None,
loggers: Union[None, LoggerManager, List[BaseLogger]] = None,
model_log_cadence: Optional[float] = None,
**kwargs,
) -> Dict:
"""
Update the state with the given parameters
:param model: The model to update the state with
:param teacher_model: The teacher model to update the state with
:param optimizer: The optimizer to update the state with
:param attach_optim_callbacks: Whether or not to attach optimizer callbacks
:param train_data: The training data to update the state with
:param val_data: The validation data to update the state with
:param test_data: The testing data to update the state with
:param calib_data: The calibration data to update the state with
:param copy_data: Whether or not to copy the data
:param start: The start index to update the state with
:param steps_per_epoch: The steps per epoch to update the state with
:param batches_per_step: The batches per step to update the state with
:param loggers: the logger manager to setup logging important info and
milestones to, also accepts a list of BaseLogger(s)
:param model_log_cadence: The cadence to log model information w.r.t epochs.
If 1, logs every epoch. If 2, logs every other epoch, etc. Default is 1.
:param kwargs: Additional keyword arguments to update the state with
"""
if model is not None:
self.model = ModifiableModel(framework=self.framework, model=model)
if teacher_model is not None:
self.teacher_model = ModifiableModel(
framework=self.framework, model=teacher_model
)
if optimizer is not None:
self.optim_wrapped = attach_optim_callbacks
self.optimizer = ModifiableOptimizer(
framework=self.framework, optimizer=optimizer
)
if train_data is not None:
self.data.train = train_data if not copy_data else deepcopy(train_data)
if val_data is not None:
self.data.val = val_data if not copy_data else deepcopy(val_data)
if test_data is not None:
self.data.test = test_data if not copy_data else deepcopy(test_data)
if calib_data is not None:
self.data.calib = calib_data if not copy_data else deepcopy(calib_data)
if "device" in kwargs:
self.hardware.device = kwargs["device"]
if (
start is not None
or steps_per_epoch is not None
or batches_per_step is not None
):
if self.start_event is None:
self.start_event = Event()
if start is not None:
self.start_event.current_index = start
if steps_per_epoch is not None:
self.start_event.steps_per_epoch = steps_per_epoch
if batches_per_step is not None:
self.start_event.batches_per_step = batches_per_step
loggers = loggers or []
if isinstance(loggers, List):
loggers = LoggerManager(loggers)
self.loggers = loggers
if model_log_cadence is not None:
self.model_log_cadence = model_log_cadence
return kwargs
The provided code snippet includes necessary dependencies for implementing the `log_model_info` function. Write a Python function `def log_model_info(state: State, current_log_step)` to solve the following problem:
Log model level info to the logger Relies on `state.model` having a `loggable_items` method that returns a generator of tuples of the loggable item name and value. Also relies on `state.loggers` being a `LoggerManager` instance. :param state: The current state of sparsification :param current_log_step: The current log step to log model info at
Here is the function:
def log_model_info(state: State, current_log_step):
"""
Log model level info to the logger
Relies on `state.model` having a `loggable_items` method
that returns a generator of tuples of the loggable item
name and value. Also relies on `state.loggers` being a
`LoggerManager` instance.
:param state: The current state of sparsification
:param current_log_step: The current log step to log
model info at
"""
_log_current_step(logger_manager=state.loggers, current_log_step=current_log_step)
_log_model_loggable_items(
logger_manager=state.loggers,
loggable_items=state.model.loggable_items(),
epoch=current_log_step,
) | Log model level info to the logger Relies on `state.model` having a `loggable_items` method that returns a generator of tuples of the loggable item name and value. Also relies on `state.loggers` being a `LoggerManager` instance. :param state: The current state of sparsification :param current_log_step: The current log step to log model info at |
21,140 | from contextlib import contextmanager
import sparseml.core.session as session_manager
The provided code snippet includes necessary dependencies for implementing the `session_context_manager` function. Write a Python function `def session_context_manager()` to solve the following problem:
A context manager to setup a fresh session and reset it after the context is exited.
Here is the function:
@contextmanager
def session_context_manager():
"""
A context manager to setup a fresh session and reset it after the context
is exited.
"""
active_session = session_manager.active_session()
active_session.reset()
yield
# reset the session after each context
active_session.reset() | A context manager to setup a fresh session and reset it after the context is exited. |
21,141 | import copy
import logging
from typing import List, Optional, Set, Tuple, Union
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, ValueInfoProto, numpy_helper
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
ALLOWED_NODES_FOLLOWING_CONCAT = ["Transpose", "QuantizeLinear"]
def reshape_kv_cache_inputs_outputs(
model: ModelProto,
cache_input_name: str,
cache_output_name: str,
cache_input_dims: List[Union[int, str]],
batch_size: int,
num_attention_heads: int,
) -> Tuple[ModelProto, List[Union[int, str]], str, str]:
"""
Reshapes the input and output of a kv cache in the model, so that the dimensions
`batch_size` and `num_attention_heads` are multiplied together.
Transform:
```
| cache_input_name
| |
| ...
| |
| cache_output_name
```
to:
```
| cache_input_name
| |
| cache_input_name_reshaped
| |
| ...
| |
| cache_output_name_reshaped
| |
| cache_output_name
```
:param model: The model to update
:param cache_input_name: The name of the input to the submodel
:param cache_output_name: The name of the output from the submodel
:param cache_input_dims: The dimensions of the input to the submodel
:param batch_size: The batch size of the model
:param num_attention_heads: The number of attention heads in the model
:return: The updated model, the updated input dimensions,
the updated input name, and the updated output name
"""
cache_input_name_reshaped = f"{cache_input_name}_reshaped"
cache_output_name_reshaped = f"{cache_output_name}_reshaped"
reshape_in_initializer_name = f"reshape_in.{cache_input_name}"
reshape_out_initializer_name = f"reshape_out.{cache_output_name}"
reshape_input_dims_in = copy.deepcopy(cache_input_dims)
reshape_input_dims_out = copy.deepcopy(cache_input_dims)
# "squash" the batch_size and num_attention_heads dimensions together
reshape_input_dims_in[0] = batch_size * num_attention_heads
reshape_input_dims_in.remove(num_attention_heads)
reshape_in_array = numpy.array(
[dim if isinstance(dim, int) else -1 for dim in reshape_input_dims_in],
dtype=numpy.int64,
)
reshape_out_array = numpy.array(
[dim if isinstance(dim, int) else -1 for dim in reshape_input_dims_out],
dtype=numpy.int64,
)
reshape_in_initializer = numpy_helper.from_array(
numpy.array(
reshape_in_array,
dtype=numpy.int64,
),
reshape_in_initializer_name,
)
reshape_out_initializer = numpy_helper.from_array(
numpy.array(
reshape_out_array,
dtype=numpy.int64,
),
reshape_out_initializer_name,
)
reshape_node_in = onnx.helper.make_node(
op_type="Reshape",
inputs=[cache_input_name, reshape_in_initializer_name],
outputs=[cache_input_name_reshaped],
name=f"reshape.{cache_input_name}",
)
reshape_node_out = onnx.helper.make_node(
op_type="Reshape",
inputs=[cache_output_name_reshaped, reshape_out_initializer_name],
outputs=[cache_output_name],
name=f"reshape.{cache_output_name}",
)
graph = ONNXGraph(model)
graph.add_node(reshape_node_in)
graph.add_node(reshape_node_out)
model.graph.initializer.extend([reshape_in_initializer, reshape_out_initializer])
return (
model,
reshape_input_dims_in,
cache_input_name_reshaped,
cache_output_name_reshaped,
)
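The dimension bookkeeping above (squash `batch_size` and `num_attention_heads` into one axis, then map symbolic dims to -1 for the Reshape initializer) can be checked in isolation with example dims:

```python
batch_size, num_heads = 2, 16
cache_input_dims = [batch_size, num_heads, "sequence_len", 64]

dims_in = list(cache_input_dims)
dims_in[0] = batch_size * num_heads   # [32, 16, "sequence_len", 64]
dims_in.remove(num_heads)             # [32, "sequence_len", 64]

# symbolic dims become -1 so the onnx Reshape op infers them at runtime
reshape_target = [d if isinstance(d, int) else -1 for d in dims_in]
assert reshape_target == [32, -1, 64]
```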
def transpose_kv_cache_inputs_outputs(
graph: ONNXGraph,
cache_input_name: str,
cache_output_name: str,
cache_input_dims: List[Union[int, str]],
transpose_input: Tuple[int, int, int, int],
) -> Tuple[ModelProto, List[Union[int, str]], str, str]:
"""
Transposes the input and output of a kv cache in the model
according to the transpose_input sequence
Transform:
```
| cache_input_name
| |
| ...
| |
| cache_output_name
```
to:
```
| cache_input_name
| |
| cache_input_name_transposed
| |
| ...
| |
| cache_output_name_transposed
| |
| cache_output_name
```
:param graph: The graph to update
:param cache_input_name: The name of the input to the submodel
:param cache_output_name: The name of the output from the submodel
:param transpose_input: The permutation of the input dimensions
:param cache_input_dims: The dimensions of the input to the submodel
:return: The updated model, the updated input dimensions,
the updated input name, and the updated output name
"""
cache_input_name_transposed = f"{cache_input_name}_transposed"
cache_output_name_transposed = f"{cache_output_name}_transposed"
transpose_node_in = onnx.helper.make_node(
op_type="Transpose",
inputs=[cache_input_name],
outputs=[cache_input_name_transposed],
name=f"transpose.{cache_input_name}",
perm=transpose_input,
)
transpose_node_out = onnx.helper.make_node(
op_type="Transpose",
inputs=[cache_output_name_transposed],
outputs=[cache_output_name],
name=f"transpose.{cache_output_name}",
perm=transpose_input,
)
transposed_input_dims = [cache_input_dims[i] for i in transpose_input]
graph.add_node(transpose_node_in)
graph.add_node(transpose_node_out)
return (
graph,
transposed_input_dims,
cache_input_name_transposed,
cache_output_name_transposed,
)
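The permutation applied to the cached dims (`transposed_input_dims = [cache_input_dims[i] for i in transpose_input]`) works the same way on a plain list; the dims below are example values:

```python
cache_input_dims = ["batch", 16, "sequence_len", 64]
transpose_input = (0, 2, 1, 3)  # swap the heads and sequence axes

transposed = [cache_input_dims[i] for i in transpose_input]
assert transposed == ["batch", "sequence_len", 16, 64]
```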
def move_quantize_linear_node(
quantize_linear: NodeProto,
quantize_linear_parent: NodeProto,
concat: NodeProto,
cache_input_idx: str,
graph: ONNXGraph,
) -> NodeProto:
"""
Moves a QuantizeLinear node before the `concat` node, so
that the data that arrives from `ConcatNodeParent` to `concat`
is already quantized (see the diagram below). This is required
so that the `concat` node joins the quantized data from the
`ConcatNodeParent` with the quantized kv cache input.
Transforms
```
| ConcatNodeParent
| |
| | Key/Value Cache(uint8)
| | |
| | ...
| | |
| | |
| |
| Concat
| |
| ...
| |
| QuantizeLinear
| |
| | ...
| | |
| |
| QLinearMatMul
```
to
```
| ConcatNodeParent
| |
| QuantizeLinear
| |
| | Key/Value Cache (uint8)
| | |
| | ...
| | |
| | |
| |
| Concat
| |
| |
| | ...
| | |
| |
| QLinearMatMul
```
:param quantize_linear: The QuantizeLinear node to move.
In reality, this node will be removed and a new node,
that inherits attributes from this node, will be created
in the proper place.
:param quantize_linear_parent: The parent of the QuantizeLinear node.
:param concat: The concat node to move the QuantizeLinear node before.
:param cache_input_idx: The index of the cache input in the concat node.
:param graph: The graph to update.
:return: The updated Concat node.
"""
if quantize_linear.op_type != "QuantizeLinear":
raise ValueError(
f"It is expected that the node: {quantize_linear.name} "
f"has op_type: QuantizeLinear, but it has op_type: {quantize_linear.op_type}"
)
quantize_linear_child = graph.get_node_single_child(quantize_linear)
if quantize_linear_child.op_type != "MatMulInteger":
raise ValueError(
f"It is expected that the node: {quantize_linear_child.name} "
"has op_type: MatMulInteger, but it has "
f"op_type: {quantize_linear_child.op_type}"
)
# remove the dependency on the QuantizeLinear node from its
# neighbouring nodes by connecting output of its parent to its child
quantize_linear_child.input[cache_input_idx] = quantize_linear_parent.output[0]
# get the node that precedes the concat node and does not come from
# the kv cache input, then place the QuantizeLinear node after it
concat_node_parent = graph.get_node_parents(concat)[1]
quantize_linear.input[0] = concat_node_parent.output[0]
concat.input[1] = quantize_linear.output[0]
return concat
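The rewiring performed above can be sketched with toy nodes (the `Node` class and the names below are illustrative stand-ins, not ONNX protos): the QuantizeLinear is first bypassed at its old location, then spliced between the Concat's non-cache parent and the Concat itself.

```python
class Node:
    """Toy stand-in for an ONNX NodeProto with input/output id lists."""
    def __init__(self, name, inputs, outputs):
        self.name, self.input, self.output = name, list(inputs), list(outputs)

parent = Node("parent", ["x"], ["p_out"])
quant = Node("quantize_linear", ["p_out"], ["q_out"])
matmul = Node("matmul_integer", ["q_out", "w"], ["m_out"])
concat_parent = Node("concat_parent", ["y"], ["cp_out"])
concat = Node("concat", ["cache_in", "cp_out"], ["c_out"])

# bypass quant at its old location: its child now reads the parent's output
matmul.input[0] = parent.output[0]
# splice quant between concat_parent and concat
quant.input[0] = concat_parent.output[0]
concat.input[1] = quant.output[0]
```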
The provided code snippet includes necessary dependencies for implementing the `create_cache` function. Write a Python function `def create_cache( model: ModelProto, node: NodeProto, cache_input_idx: int, cache_input_name: str, cache_output_name: str, num_attention_heads: int, hidden_size_kv_cache: int, use_uint8_if_quantized: bool = True, batch_size: int = 1, multiply_batch_by_num_att_heads: bool = True, transpose_input: Optional[Tuple[int, int, int, int]] = None, ) -> Tuple[NodeProto, ValueInfoProto, ValueInfoProto]` to solve the following problem:
Injects a cache (value or key) into the graph for a given Matmul node. :param model: Model to update :param node: MatMul node that follows the cache injection point :param cache_input_idx: Index of the input (where the cache will be injected) to the MatMul :param cache_input_name: Name of cache input :param cache_output_name: Name of cache output :param num_attention_heads: number of attention heads of the model :param hidden_size_kv_cache: hidden size of the key/value cache :param use_uint8_if_quantized: True if quantized MatMuls should have uint8 inputs, if False, uses int8 :param batch_size: batch size of the kv cache. By default, this is 1. :param multiply_batch_by_num_att_heads: If True, the batch size of the kv cache is multiplied by the number of attention heads before the concat node. :param transpose_input: If not None, transpose the input to the cache before the concat node. If `multiply_batch_by_num_att_heads` is True, the transpose is applied after the batch size is multiplied by the number of attention heads. :return: tuple of concat node to add, cache input to add, and cache output to add, updates existing nodes in-place
Here is the function:
def create_cache(
model: ModelProto,
node: NodeProto,
cache_input_idx: int,
cache_input_name: str,
cache_output_name: str,
num_attention_heads: int,
hidden_size_kv_cache: int,
use_uint8_if_quantized: bool = True,
batch_size: int = 1,
multiply_batch_by_num_att_heads: bool = True,
transpose_input: Optional[Tuple[int, int, int, int]] = None,
) -> Tuple[NodeProto, ValueInfoProto, ValueInfoProto]:
"""
Injects a cache (value or key) into the graph for a given Matmul node.
:param model: Model to update
:param node: MatMul node that follows the cache injection point
:param cache_input_idx: Index of the input
(where the cache will be injected) to the MatMul
:param cache_input_name: Name of cache input
:param cache_output_name: Name of cache output
:param num_attention_heads: number of attention heads of the model
:param hidden_size_kv_cache: hidden size of the key/value cache
:param use_uint8_if_quantized: True if quantized MatMuls should have uint8
inputs, if False, uses int8
:param batch_size: batch size of the kv cache. By default, this is 1.
:param multiply_batch_by_num_att_heads: If True, the batch size of the
kv cache is multiplied by the number of attention heads before the
concat node.
:param transpose_input: If not None, transpose the input to the cache
before the concat node. If `multiply_batch_by_num_att_heads` is True,
the transpose is applied after the batch size is multiplied by the
number of attention heads.
:return: tuple of concat node to add, cache input to add, and cache output to add,
updates existing nodes in-place
"""
CACHE_INPUT_DIMS = [
batch_size,
num_attention_heads,
"past_sequence_len",
hidden_size_kv_cache,
]
CACHE_OUTPUT_DIMS = [
batch_size,
num_attention_heads,
"past_sequence_len + 1",
hidden_size_kv_cache,
]
graph = ONNXGraph(model)
cache_data_type = (
TensorProto.FLOAT
if node.op_type not in ["MatMulInteger", "QLinearMatMul"]
else TensorProto.UINT8
if use_uint8_if_quantized
else TensorProto.INT8
)
# create graph input info proto
cache_input_info = onnx.helper.make_tensor_value_info(
cache_input_name,
cache_data_type,
CACHE_INPUT_DIMS,
)
# create graph output info proto
cache_output_info = onnx.helper.make_tensor_value_info(
cache_output_name,
cache_data_type,
CACHE_OUTPUT_DIMS,
)
if node.op_type == "QLinearMatMul" and cache_input_idx == 1:
cache_input_idx = 3 # QLinearMatMul B matrix is at idx 3, not 1
cache_parent = graph.get_node_single_parent(node, index=cache_input_idx)
if (
isinstance(cache_parent, NodeProto)
and cache_parent.op_type in ALLOWED_NODES_FOLLOWING_CONCAT
):
while (
isinstance(cache_parent, NodeProto)
and cache_parent.op_type in ALLOWED_NODES_FOLLOWING_CONCAT
):
pre_cache_input_id = cache_parent.input[0]
cache_parent = graph.get_node_single_parent(cache_parent, index=0)
else:
pre_cache_input_id = node.input[cache_input_idx]
cache_input_name_concat = cache_input_name
cache_output_name_concat = cache_output_name
cache_input_dims_concat = CACHE_INPUT_DIMS
if transpose_input:
(
graph,
cache_input_dims_concat,
cache_input_name_concat,
cache_output_name_concat,
) = transpose_kv_cache_inputs_outputs(
graph=graph,
cache_input_name=cache_input_name_concat,
cache_output_name=cache_output_name_concat,
cache_input_dims=cache_input_dims_concat,
transpose_input=transpose_input,
)
if multiply_batch_by_num_att_heads:
(
model,
cache_input_dims_concat,
cache_input_name_concat,
cache_output_name_concat,
) = reshape_kv_cache_inputs_outputs(
model=model,
cache_input_name=cache_input_name_concat,
cache_output_name=cache_output_name_concat,
cache_input_dims=cache_input_dims_concat,
batch_size=batch_size,
num_attention_heads=num_attention_heads,
)
concat_axis = [
idx
for (idx, dim) in enumerate(cache_input_dims_concat)
if dim == "past_sequence_len"
][0]
concat_node = onnx.helper.make_node(
op_type="Concat",
inputs=[cache_input_name_concat, pre_cache_input_id],
outputs=[cache_output_name_concat],
axis=concat_axis,
name=f"concat.{cache_input_name_concat}",
)
for _node in model.graph.node:
for input_idx, input_id in enumerate(_node.input):
if input_id == pre_cache_input_id and _node.name != concat_node.name:
_node.input[input_idx] = cache_output_name_concat
if node.op_type == "MatMulInteger":
quantize_linear = graph.get_node_single_parent(node, cache_input_idx)
quantize_linear_parent = graph.get_node_single_parent(quantize_linear, 0)
if quantize_linear_parent is None:
quantize_linear_parent = concat_node
concat_node = move_quantize_linear_node(
quantize_linear=quantize_linear,
quantize_linear_parent=quantize_linear_parent,
concat=concat_node,
cache_input_idx=cache_input_idx,
graph=graph,
)
graph.add_node(concat_node)
return concat_node, cache_input_info, cache_output_info | Injects a cache (value or key) into the graph for a given Matmul node. :param model: Model to update :param node: MatMul node that follows the cache injection point :param cache_input_idx: Index of the input (where the cache will be injected) to the MatMul :param cache_input_name: Name of cache input :param cache_output_name: Name of cache output :param num_attention_heads: number of attention heads of the model :param hidden_size_kv_cache: hidden size of the key/value cache :param use_uint8_if_quantized: True if quantized MatMuls should have uint8 inputs, if False, uses int8 :param batch_size: batch size of the kv cache. By default, this is 1. :param multiply_batch_by_num_att_heads: If True, the batch size of the kv cache is multiplied by the number of attention heads before the concat node. :param transpose_input: If not None, transpose the input to the cache before the concat node. If `multiply_batch_by_num_att_heads` is True, the transpose is applied after the batch size is multiplied by the number of attention heads. :return: tuple of concat node to add, cache input to add, and cache output to add, updates existing nodes in-place |
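The concat axis above is located by finding the symbolic sequence-length dimension; a standalone sketch of that lookup (function name is illustrative):

```python
def find_concat_axis(dims, seq_dim="past_sequence_len"):
    """Return the index of the symbolic sequence-length dimension,
    i.e. the axis along which new key/value entries are concatenated."""
    return [idx for idx, dim in enumerate(dims) if dim == seq_dim][0]

axis = find_concat_axis([1, 16, "past_sequence_len", 64])
```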
21,142 | import copy
import logging
from typing import List, Optional, Set, Tuple, Union
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, ValueInfoProto, numpy_helper
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
def is_value_matmul(
node: NodeProto,
graph: ONNXGraph,
allowed_nodes_before_softmax: Set[str] = ALLOWED_NODES_BEFORE_SOFTMAX,
) -> bool:
"""
Returns True if the node is a "value" MatMul, i.e. a MatMul that meets
the following criteria:
- is_matmul(node) is True
- node has no parameters
- node has a single `Softmax` parent node
or
the parent node `Softmax` is preceded by
a set of nodes that are specified in the
`allowed_nodes_before_softmax` set
:param node: node to check
:param graph: graph containing the node
:param allowed_nodes_before_softmax: set of node types that are allowed
to be located between the node in question a Softmax node, so that
the node can still be considered a "value" MatMul
"""
if not is_matmul(node) or _is_parameterized_matmul(node, graph):
# not a matmul or MatMul op has a parameter
return False
parent = graph.get_node_single_parent(node, index=0)
# walk the linear chain between this MatMul and a potential Softmax
# ancestor; the isinstance guard terminates the walk cleanly when an
# initializer or a missing/ambiguous parent is reached
while isinstance(parent, NodeProto) and parent.op_type in allowed_nodes_before_softmax:
parent = graph.get_node_single_parent(parent, index=0)
if parent is None:
raise ValueError(
"While traversing the graph to find a Softmax that precedes "
f"the candidate for a `value` MatMul: {node.name}, encountered "
"a node with multiple parents. It is assumed that the graph "
"that connects the Softmax node and the `value` MatMul node is "
"a linear chain of nodes and thus none of the encountered nodes "
"should have multiple parents"
)
if parent.op_type == "Softmax":
# a parent is a Softmax node, assume this is a "value" MatMul
return True
# no parents are a softmax node
return False
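The upward walk above reduces to scanning a linear chain of op types; a self-contained sketch of the same decision, assuming the chain is listed child-to-parent (names are illustrative):

```python
def chain_reaches_softmax(op_chain, allowed):
    """Walk a linear op-type chain upward; True only if a Softmax is
    reached through op types in `allowed`."""
    for op in op_chain:
        if op in allowed:
            continue
        # first op outside the allowed set must be the Softmax itself
        return op == "Softmax"
    return False
```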
def _find_key_matmul_from_value_matmul(
value_matmul: NodeProto,
graph: ONNXGraph,
value_matmul_names: Set[str],
) -> Optional[NodeProto]:
# Perform a BFS up the model DAG from the "value" MatMul until
# we find the corresponding "key" MatMul.
# The "key" MatMul is assumed to be the first non-parameterized
# MatMul we reach during the search.
# We return None if no such matmul is found, or there is an indication that
# we have traversed outside the self attention module (found another
# "value" MatMul)
seen_node_names = {value_matmul.name}
node_queue = [value_matmul]
while node_queue:
current_node = node_queue.pop(0)
node_parents = graph.get_node_parents(current_node)
if (
is_matmul(current_node)
and (current_node.name != value_matmul.name)
and not _is_parameterized_matmul(current_node, graph)
):
# treat root node as regular, non MatMul node
if current_node.name in value_matmul_names:
_LOGGER.info(
f"First MatMul node found for value matmul {value_matmul.name} "
f"was another value matmul {current_node.name}",
)
return None
else:
# Success case -
# first found matmul is non-parameterized
return current_node
for parent in node_parents:
if not isinstance(parent, NodeProto):
continue
if parent.name not in seen_node_names:
seen_node_names.add(parent.name)
node_queue.append(parent)
# no matching MatMul found before the upward search bottomed out
_LOGGER.info(
f"No key matmul found for value matmul {value_matmul.name}",
)
return None
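The search above is a breadth-first traversal up the model DAG; a minimal sketch over a parents-adjacency dict (all names below are illustrative):

```python
from collections import deque

def bfs_first_match(start, parents, predicate):
    """BFS upward from `start`; return the first other node satisfying
    `predicate`, or None if the search bottoms out."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node != start and predicate(node):
            return node
        for parent in parents.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return None

# value matmul -> softmax -> key matmul, plus an unrelated mask branch
parents = {"value_mm": ["softmax"], "softmax": ["key_mm", "mask"], "key_mm": [], "mask": []}
found = bfs_first_match("value_mm", parents, lambda n: n.endswith("_mm"))
```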
def _find_key_value_matmul_pairs(
graph: ONNXGraph,
) -> List[Tuple[NodeProto, NodeProto]]:
# Find pairs of "key" and "value" MatMuls.
# Each attention block contains a pair of MatMuls:
# - key MatMul that computes Q x K^T
# - value MatMul that computes Softmax(Q x K^T) x V
# The function returns:
# [(key_matmul_0, value_matmul_0), (key_matmul_1, value_matmul_1), ...]
key_value_matmul_pairs = []
value_matmuls = [node for node in graph.nodes if is_value_matmul(node, graph)]
value_matmul_names = {node.name for node in value_matmuls}
# for every value matmul, find the corresponding key matmul
for value_matmul in value_matmuls:
key_matmul = _find_key_matmul_from_value_matmul(
value_matmul, graph, value_matmul_names
)
if key_matmul is not None:
key_value_matmul_pairs.append((key_matmul, value_matmul))
else:
raise RuntimeError(
f"Could not find key matmul for value matmul {value_matmul.name}"
)
return key_value_matmul_pairs | null |
21,143 | import copy
import logging
from typing import List, Optional, Set, Tuple, Union
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, ValueInfoProto, numpy_helper
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
def _value_input_idx(value_matmul: NodeProto, model: ModelProto) -> int:
graph = ONNXGraph(model)
# get the index of the matmul input that the value tensor enters on
expected_num_inputs = 4 if value_matmul.op_type == "MatMulInteger" else 2
if len(value_matmul.input) != expected_num_inputs:
raise ValueError(
f"Expected value matmul to have {expected_num_inputs} "
f"inputs, got {len(value_matmul.input)}"
)
softmax_input_idx = 0 # default to softmax being on left hand side
for idx, parent in enumerate(graph.get_node_parents(value_matmul)):
if isinstance(parent, NodeProto):
# if a parent is a softmax or the parent of value matmul is a direct
# child of a softmax (quantized scenario), then the softmax is the
# idx'th input to the value matmul
grandparent = graph.get_node_single_parent(parent, 0)
if parent.op_type == "Softmax" or (
isinstance(grandparent, NodeProto)
and grandparent.op_type == "Softmax"
):
softmax_input_idx = idx
break
return 1 - softmax_input_idx # return index that isn't the softmax | null |
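The `1 - softmax_input_idx` arithmetic at the end relies on the MatMul having exactly two data inputs, so the value input is simply the one that is not the softmax; a sketch (function name illustrative):

```python
def value_input_index(parent_op_types):
    """Given the op types of a two-input MatMul's parents, return the
    index of the non-Softmax ('value') input."""
    softmax_idx = 0  # default: softmax on the left-hand side
    for idx, op_type in enumerate(parent_op_types):
        if op_type == "Softmax":
            softmax_idx = idx
            break
    return 1 - softmax_idx

idx = value_input_index(["Transpose", "Softmax"])
```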
21,144 | import copy
import logging
from typing import List, Optional, Set, Tuple, Union
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, ValueInfoProto, numpy_helper
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
def _use_uint8_if_quantized(graph: ONNXGraph) -> bool:
use_uint8_if_quantized = True # default to True
quantize_nodes = [node for node in graph.nodes if node.op_type == "QuantizeLinear"]
if quantize_nodes:
zero_point_example = graph.get_init_by_name(quantize_nodes[0].input[2])
if zero_point_example and zero_point_example.data_type == TensorProto.INT8:
# quantize node exists and its zero point is INT8
use_uint8_if_quantized = False
return use_uint8_if_quantized | null |
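The decision above defaults to uint8 and flips to int8 only when a QuantizeLinear zero-point initializer carries the INT8 dtype; a sketch using the ONNX `TensorProto` enum values (UINT8 = 2, INT8 = 3):

```python
UINT8, INT8 = 2, 3  # onnx.TensorProto data type enum values

def cache_uses_uint8(zero_point_dtype=None):
    """Default to uint8; use int8 only when an INT8 zero point is seen.
    `zero_point_dtype` is None when the model has no QuantizeLinear nodes."""
    return zero_point_dtype != INT8
```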
21,145 | import copy
import logging
from typing import List, Optional, Set, Tuple, Union
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, ValueInfoProto, numpy_helper
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
def _set_attention_mask_to_dynamic(model: ModelProto) -> ModelProto:
# set the attention mask to be of the dynamic shape
attention_mask_input = [
input.name for input in model.graph.input if input.name == "attention_mask"
]
if not attention_mask_input:
raise ValueError("Could not find `attention_mask` input in model")
if len(attention_mask_input) > 1:
raise ValueError(
"Found multiple `attention_mask` inputs in model, expected only one"
)
# the checks above guarantee exactly one `attention_mask` input;
# resolve it by name rather than assuming it sits at index 1
attention_mask = next(
inp for inp in model.graph.input if inp.name == "attention_mask"
)
attention_mask.type.tensor_type.shape.dim[1].dim_param = "past_sequence_len + 1"
return model | null |
21,146 | import json
import logging
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from pydantic import BaseModel, Field
from sparseml.exporters.transforms import OnnxTransform
from sparseml.exporters.transforms.kv_cache.transforms_codegen import (
AdditionalTransformsCodeGen,
)
from sparseml.exporters.transforms.kv_cache.transforms_llama import (
AdditionalTransformsLLAMA,
)
from sparseml.exporters.transforms.kv_cache.transforms_mpt import (
AdditionalTransformsMPT,
)
from sparseml.exporters.transforms.kv_cache.transforms_opt import (
AdditionalTransformsOPT,
)
_LOGGER = logging.getLogger(__name__)
class KeyValueCacheConfig(BaseModel):
model_name: str = Field(
description="The name of the model type. This should correspond to "
"the `model_type` field in the transformer's `config.json` file."
)
additional_transforms: Union[
List[Type[OnnxTransform]], Type[OnnxTransform], None
] = Field(
description="A transform class (or list thereof) to use for additional "
"transforms to the model required for finalizing the kv cache injection."
)
key_num_attention_heads: str = Field(
description="The key to use to get the number of attention heads from the "
"transformer's `config.json` file."
)
key_num_embedding_hidden_size: str = Field(
description="The key to use to get the hidden size "
"from the transformer's `config.json` file."
)
num_attention_heads: Optional[int] = Field(
description="The number of attention heads."
)
hidden_size_kv_cache: Optional[int] = Field(
description="The hidden size of the key/value cache. "
)
multiply_batch_by_num_att_heads: bool = Field(
default=False,
description="Whether or not to internally multiply "
"the batch size by the number of attention heads. "
"This is used to reduce the number of dimensions in "
"the key/value cache.",
)
transpose_value_input: Optional[Tuple[int, int, int, int]] = Field(
default=None,
description="The transpose indices to apply to the value of "
"the kv cache. If this is not provided, no transpose will "
"be applied.",
)
transpose_key_input: Optional[Tuple[int, int, int, int]] = Field(
default=None,
description="The transpose indices to apply to the key of "
"the kv cache. If this is not provided, no transpose will "
"be applied.",
)
class Config:
arbitrary_types_allowed = True
OPT_CONFIG = KeyValueCacheConfig(
model_name="opt",
additional_transforms=AdditionalTransformsOPT,
key_num_attention_heads="num_attention_heads",
key_num_embedding_hidden_size="hidden_size",
transpose_value_input=None,
transpose_key_input=None,
multiply_batch_by_num_att_heads=True,
)
CODEGEN_CONFIG = KeyValueCacheConfig(
model_name="codegen",
additional_transforms=AdditionalTransformsCodeGen,
key_num_attention_heads="n_head",
key_num_embedding_hidden_size="n_embd",
transpose_value_input=(0, 2, 1, 3),
transpose_key_input=None,
multiply_batch_by_num_att_heads=False,
)
MPT_CONFIG = KeyValueCacheConfig(
model_name="mpt",
additional_transforms=AdditionalTransformsMPT,
key_num_attention_heads="n_heads",
key_num_embedding_hidden_size="d_model",
transpose_value_input=(0, 2, 1, 3),
transpose_key_input=(0, 2, 1, 3),
multiply_batch_by_num_att_heads=False,
)
BLOOM_CONFIG = KeyValueCacheConfig(
model_name="bloom",
additional_transforms=None,
key_num_attention_heads="num_attention_heads",
key_num_embedding_hidden_size="n_embed",
transpose_value_input=None,
transpose_key_input=(0, 1, 3, 2),
multiply_batch_by_num_att_heads=True,
)
LLAMA_CONFIG = KeyValueCacheConfig(
model_name="llama",
additional_transforms=AdditionalTransformsLLAMA,
key_num_attention_heads="num_attention_heads",
key_num_embedding_hidden_size="hidden_size",
transpose_value_input=(0, 2, 1, 3),
transpose_key_input=None,
multiply_batch_by_num_att_heads=False,
)
MISTRAL_CONFIG = KeyValueCacheConfig(
model_name="mistral",
additional_transforms=AdditionalTransformsLLAMA,
key_num_attention_heads="num_attention_heads",
key_num_embedding_hidden_size="hidden_size",
transpose_value_input=None,
transpose_key_input=None,
multiply_batch_by_num_att_heads=False,
)
GPT_NEO_CONFIG = KeyValueCacheConfig(
model_name="gpt_neo",
additional_transforms=additional_transforms_gpt_neo,
key_num_attention_heads="num_heads",
key_num_embedding_hidden_size="hidden_size",
transpose_value_input=(0, 2, 1, 3),
transpose_key_input=None,
multiply_batch_by_num_att_heads=False,
)
def adapt_cache_structure_for_gqa(
kv_cache_config: KeyValueCacheConfig,
transformers_config: Dict[str, Any],
model_names: List[str] = ["llama"],
) -> KeyValueCacheConfig:
"""
Potentially adapts the kv_cache_config, so that it
properly works with Grouped Query Attention (GQA).
For now, this function only supports the llama model.
Llama uses:
Multi Head Attention (MHA) if `num_key_value_heads==num_attention_heads` (default),
Grouped Query Attention (GQA) if `num_key_value_heads<num_attention_heads`,
Multi Query Attention (MQA) if `num_key_value_heads==1`,
:param kv_cache_config: The kv cache config for the model.
:param transformers_config: The transformers config for
the model. If contains the key:`num_key_value_heads`,
the model may be potentially using GQA instead of
MHA and thus the kv_cache_config needs to be adapted.
:param model_names: The list of model names that may use
GQA instead of MHA.
:return: Potentially adapted kv cache config for the model.
If the model does not use GQA, the kv_cache_config is
returned unchanged.
"""
# For now, we only support GQA for LLAMA.
model_name = kv_cache_config.model_name
num_attention_heads = kv_cache_config.num_attention_heads
num_key_value_heads = transformers_config.get("num_key_value_heads")
if num_key_value_heads is not None and model_name in model_names:
if num_key_value_heads > 1 and num_key_value_heads != num_attention_heads:
# introduce the modification the config to support GQA for LLAMA.
kv_cache_config.transpose_value_input = None
_LOGGER.info(
f"Adapted the model: {transformers_config['model_type']} "
f"to work with GQA."
)
return kv_cache_config
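The branch condition above can be isolated into a small predicate (name illustrative): the GQA adaptation applies only when the kv-head count is known, greater than one (so not MQA), and different from the attention-head count (so not plain MHA).

```python
def uses_gqa(num_attention_heads, num_key_value_heads):
    """True only for grouped-query attention; plain MHA (equal counts)
    and MQA (a single kv head) return False."""
    return (
        num_key_value_heads is not None
        and num_key_value_heads > 1
        and num_key_value_heads != num_attention_heads
    )
```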
def _get_transformers_config(model_path: Union[str, Path]) -> Dict[str, Any]:
# from the model path, get the config.json file and return it as a dict.
model_path = Path(model_path) if isinstance(model_path, str) else model_path
if not model_path.is_dir():
raise ValueError(
f"`model_path` is expected to be a directory, found {model_path}"
)
config_file = [file for file in model_path.iterdir() if file.name == "config.json"]
if len(config_file) == 0:
raise ValueError(f"Unable to find config.json in model_path: {model_path}")
config_file = config_file[0]
with open(config_file) as f:
config = json.load(f)
_LOGGER.info(f"Loaded config file {config_file} for model: {config['model_type']}")
return config
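The lookup above boils down to reading `config.json` from the model directory; a self-contained sketch exercised against a temporary directory (helper name illustrative):

```python
import json
import tempfile
from pathlib import Path

def read_model_config(model_path):
    """Load config.json from a transformers model directory."""
    config_file = Path(model_path) / "config.json"
    if not config_file.is_file():
        raise ValueError(f"Unable to find config.json in model_path: {model_path}")
    with open(config_file) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as model_dir:
    (Path(model_dir) / "config.json").write_text(json.dumps({"model_type": "opt"}))
    config = read_model_config(model_dir)
```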
The provided code snippet includes necessary dependencies for implementing the `get_kv_cache_config` function. Write a Python function `def get_kv_cache_config( model_path: str, supported_configs: List[BaseModel] = [ OPT_CONFIG, CODEGEN_CONFIG, BLOOM_CONFIG, MPT_CONFIG, LLAMA_CONFIG, MISTRAL_CONFIG, GPT_NEO_CONFIG, ], ) -> KeyValueCacheConfig` to solve the following problem:
Get the kv cache config for the model at the given path. :param model_path: The path to the directory containing the transformers model. It is assumed that the `config.json` file (as supplied by the transformers models) is in this directory. :param supported_configs: The list of supported configs. If the model type is not in this list, a warning will be logged and None will be returned. :return: The kv cache config for the model.
Here is the function:
def get_kv_cache_config(
model_path: str,
supported_configs: List[BaseModel] = [
OPT_CONFIG,
CODEGEN_CONFIG,
BLOOM_CONFIG,
MPT_CONFIG,
LLAMA_CONFIG,
MISTRAL_CONFIG,
GPT_NEO_CONFIG,
],
) -> KeyValueCacheConfig:
"""
Get the kv cache config for the model at the given path.
:param model_path: The path to the directory containing
the transformers model. It is assumed that
the `config.json` file (as supplied by the
transformers models) is in this directory.
:param supported_configs: The list of supported configs.
If the model type is not in this list,
a warning will be logged and None
will be returned.
:return: The kv cache config for the model.
"""
transformers_config = _get_transformers_config(model_path)
model_name = transformers_config["model_type"]
kv_cache_config = [
kv_cache_config
for kv_cache_config in supported_configs
if kv_cache_config.model_name == model_name
]
if len(kv_cache_config) == 0:
_LOGGER.warning(
f"Could not find a kv cache config for model type: {model_name}."
)
return None
kv_cache_config = kv_cache_config[0]
# set the number of attention heads and the hidden size of the kv cache
num_attention_heads = transformers_config.get(
kv_cache_config.key_num_attention_heads
)
hidden_size_kv_cache = (
transformers_config.get(kv_cache_config.key_num_embedding_hidden_size)
// num_attention_heads
)
kv_cache_config.num_attention_heads = num_attention_heads
kv_cache_config.hidden_size_kv_cache = hidden_size_kv_cache
kv_cache_config = adapt_cache_structure_for_gqa(
kv_cache_config, transformers_config
)
_LOGGER.info("Properly configured arguments for KV Cache Transformation")
return kv_cache_config | Get the kv cache config for the model at the given path. :param model_path: The path to the directory containing the transformers model. It is assumed that the `config.json` file (as supplied by the transformers models) is in this directory. :param supported_configs: The list of supported configs. If the model type is not in this list, a warning will be logged and the first config in the list will be returned. :return: The kv cache config for the model. |
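The selection and head-size arithmetic above can be sketched with plain dicts standing in for the pydantic configs (all names illustrative):

```python
def select_kv_cache_config(supported, model_name):
    """Pick the config whose model_name matches, or None."""
    matches = [cfg for cfg in supported if cfg["model_name"] == model_name]
    return matches[0] if matches else None

def per_head_hidden_size(hidden_size, num_attention_heads):
    """Hidden size of each key/value head."""
    return hidden_size // num_attention_heads

configs = [{"model_name": "opt"}, {"model_name": "llama"}]
chosen = select_kv_cache_config(configs, "llama")
head_size = per_head_hidden_size(4096, 32)
```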
21,147 | import logging
from typing import List
from onnx import ModelProto, NodeProto
from sparseml.exporters.transforms.onnx_transform import OnnxTransform
from sparseml.onnx.utils import ONNXGraph
_LOGGER = logging.getLogger(__name__)
def _delete_quantize_nodes(graph: ONNXGraph, quantize_nodes: List[NodeProto]):
# delete given quantize nodes and forward their inputs to the next graph layer
for quantize_node in quantize_nodes:
quantize_children = graph.get_node_children(quantize_node)
quantize_node_id = quantize_node.output[0]
for child_node in quantize_children:
input_idx = [
idx
for idx, inp in enumerate(child_node.input)
if inp == quantize_node_id
]
if not input_idx:
continue
input_idx = input_idx[0]
graph.update_node_input(child_node, quantize_node.input[0], input_idx)
_LOGGER.debug(
f"set node with output id {child_node.output[0]} as initial node in "
"graph"
)
_LOGGER.debug(
f"deleting QuantizeLinear node(s) with output id(s): "
f"{[n.output for n in quantize_nodes]}"
)
graph.delete_nodes(quantize_nodes)  # remove the now-bypassed QuantizeLinear nodes
graph.delete_unused_initializers()  # clean up their orphaned scale/zero-point initializers
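Forwarding a deleted node's input to its children is the core of the cleanup above; a minimal sketch over plain input-id lists (function name illustrative):

```python
def bypass_deleted_node(node_input, node_output, child_inputs):
    """Rewrite a child's input list so every reference to the deleted
    node's output reads the deleted node's own input instead."""
    return [node_input if inp == node_output else inp for inp in child_inputs]
```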
21,148 | from typing import List, Optional, Union
from onnx import NodeProto, TensorProto
from sparseml.onnx.utils import ONNXGraph
_OPTIONAL_TAG = "Optional-"
The provided code snippet includes necessary dependencies for implementing the `optional_node` function. Write a Python function `def optional_node(op_type: str) -> str` to solve the following problem:
Tells :func:`get_structural_matches` that this op type is an optional one. e.g. ```python get_structural_matches( ..., children_ops=[ [ optional_node("Transpose") "QuantizeLinear", ] ] ) ```
Here is the function:
def optional_node(op_type: str) -> str:
"""
Tells :func:`get_structural_matches` that this op type is an optional one.
e.g.
```python
get_structural_matches(
...,
children_ops=[
[
optional_node("Transpose")
"QuantizeLinear",
]
]
)
```
"""
return _OPTIONAL_TAG + op_type | Tells :func:`get_structural_matches` that this op type is an optional one. e.g. ```python get_structural_matches( ..., children_ops=[ [ optional_node("Transpose") "QuantizeLinear", ] ] ) ``` |
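The tag is a plain string prefix, so a matcher can detect and strip it cheaply; a sketch of helpers a consumer such as `get_structural_matches` might use (helper names are illustrative; only `_OPTIONAL_TAG` comes from the source):

```python
_OPTIONAL_TAG = "Optional-"

def is_optional(op_tag):
    """True if the tag was produced by optional_node()."""
    return op_tag.startswith(_OPTIONAL_TAG)

def strip_optional(op_tag):
    """Recover the bare op type from an optional tag."""
    return op_tag[len(_OPTIONAL_TAG):] if is_optional(op_tag) else op_tag
```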
21,149 | from typing import List, Optional, Union
from onnx import NodeProto, TensorProto
from sparseml.onnx.utils import ONNXGraph
_ANY_TAG = "Any-"
The provided code snippet includes necessary dependencies for implementing the `any_of` function. Write a Python function `def any_of(*op_type: str) -> str` to solve the following problem:
Tells :func:`get_structural_matches` that this can be a set of op types ```python get_structural_matches( ..., children_ops=[ [ any_of("QuantizeLinear", "DequantizeLinear"), ] ] ) ```
Here is the function:
def any_of(*op_type: str) -> str:
"""
Tells :func:`get_structural_matches` that this can be a set of op types
```python
get_structural_matches(
...,
children_ops=[
[
any_of("QuantizeLinear", "DequantizeLinear"),
]
]
)
```
"""
return _ANY_TAG + "-".join(op_type) | Tells :func:`get_structural_matches` that this can be a set of op types ```python get_structural_matches( ..., children_ops=[ [ any_of("QuantizeLinear", "DequantizeLinear"), ] ] ) ``` |
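Decoding the tag back into the allowed op types is a string split on the same separator (ONNX op types contain no hyphens, so the split is unambiguous); a sketch (helper name illustrative; `_ANY_TAG` comes from the source):

```python
_ANY_TAG = "Any-"

def split_any(op_tag):
    """Recover the list of allowed op types from an any_of() tag."""
    assert op_tag.startswith(_ANY_TAG)
    return op_tag[len(_ANY_TAG):].split("-")
```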
21,150 | from typing import List, Optional, Union
from onnx import NodeProto, TensorProto
from sparseml.onnx.utils import ONNXGraph
def _match_structure(
graph: ONNXGraph,
node: Union[NodeProto, TensorProto],
op_type: str,
parent_ops: Optional[List[List[str]]] = None,
children_ops: Optional[List[List[str]]] = None,
) -> Optional[MatchResult]:
match = MatchResult(node)
if not _match_op_type(match, graph, node, op_type):
return None
if parent_ops and not _match_parents(match, graph, node, parent_ops):
return None
if children_ops and not _match_children(match, graph, node, children_ops):
return None
return match
The provided code snippet includes necessary dependencies for implementing the `get_structural_matches` function. Write a Python function `def get_structural_matches( graph: ONNXGraph, op_type: str, parent_ops: Optional[List[List[str]]] = None, children_ops: Optional[List[List[str]]] = None, ) -> List["MatchResult"]` to solve the following problem:
Gathers all nodes in the `graph` that match the `op_type` and the have the specified parent/children structure, controlled via parent_ops/children_ops. ### op_type example A simple example just matching against op_type: ```python matches = get_structural_matches(graph, op_type="Identity") for match in matches: id_node = match.node assert isinstance(id_node, onnx.NodeProto) and id_node.op_type == "Identity" assert match.parents == [] assert match.children == [] ``` `parent_ops` and `children_ops` are list of list of op type strings. It's a list of lists because nodes can have multiple inputs and multiple outputs. So each sub list in `parent_ops` and `children_ops` will be matched against the corresponding entry in node.input/node.output. I.e. - parent_ops[0] will be compared against node.input[0] - parent_ops[2] will be compared against node.input[2] - children_ops[3] will be compared against node.output[3] Because of this, if the length of node.input/node.output is shorter than parent_ops/children_ops, it is not a possible match and will be discarded. ### `parent_ops` and `INITIALIZER_MATCH` example Here's a simple example of matching against an identity node with a single parent (because there is a single list in parent_ops), and the parent must be an initializer (the INITIALIZER_MATCH value): ```python matches = get_structural_matches( graph, op_type="Identity", parent_ops=[[INITIALIZER_MATCH]] ) for match in matches: id_node = match.node (init, ) = match.parents[0] assert isinstance(init, onnx.TensorProto) ``` ### Empty `parent_ops` example Another example is matching against a single parent branch of a node. 
In this case you can just specify `[]` as one of the parent ops: ```python matches = get_structural_matches( graph, op_type="Add", parent_ops=[ [], ["QuantizeLinear", "DequantizeLinear"] ] ) for match in matches: add_node = match.node assert len(match.parents[0]) == 0 parent_1_quant, parent_1_dequant = match.parents[1] ``` ### `optional_node` example Here is a really complicated example with optional nodes and multiple parents. Here we match against MatMul nodes that have at least 2 inputs from Quant/Dequant sequences. After the MatMul node there are two optional nodes followed by a Quant/Dequant sequence. If the optional_nodes are found, then the values in children_ops will be the nodes. If the optional nodes are NOT found, then those entries in children_ops will be None. In both cases, the length of match.children[0] will be the same as the length of children_ops[0]. ```python matches = get_structural_matches( graph, op_type="MatMul", parent_ops=[ ["QuantizeLinear", "DequantizeLinear"], ["QuantizeLinear", "DequantizeLinear"], ], children_ops=[ [ optional_node("Transpose"), optional_node("Reshape"), "QuantizeLinear", "DequantizeLinear", ] ] ) for match in matches: matmul_node = match.node parent_0_quant, parent_0_dequant = match.parents[0] parent_1_quant, parent_1_dequant = match.parents[1] opt_transpose, opt_reshape, child_quant, child_dequant = match.children[0] ``` ### `any_of` example Here is an example using the `any_of` function to match a parent node against a set of op_types: ```python matches = get_structural_matches( graph, op_type="MatMul", parent_ops=[[any_of("QuantizeLinear", "DequantizeLinear")]] ) for match in matches: (quant_or_dequant, ) = match.parents[0] assert quant_or_dequant.op_type in ["QuantizeLinear", "DequantizeLinear"] ``` :param graph: the graph to search in :param op_type: The `NodeProto.op_type` to match against :param parent_ops: List of List of `NodeProto.op_type` or `INITIALIZER_MATCH` to match against :param children_ops: List of 
List of `NodeProto.op_type` or `INITIALIZER_MATCH` to match against
Here is the function:
def get_structural_matches(
graph: ONNXGraph,
op_type: str,
parent_ops: Optional[List[List[str]]] = None,
children_ops: Optional[List[List[str]]] = None,
) -> List["MatchResult"]:
"""
Gathers all nodes in the `graph` that match the `op_type` and
have the specified parent/children structure,
controlled via parent_ops/children_ops.
### op_type example
A simple example just matching against op_type:
```python
matches = get_structural_matches(graph, op_type="Identity")
for match in matches:
id_node = match.node
assert isinstance(id_node, onnx.NodeProto) and id_node.op_type == "Identity"
assert match.parents == []
assert match.children == []
```
`parent_ops` and `children_ops` are lists of lists of op type strings.
It's a list of lists because nodes can have multiple inputs and multiple outputs.
So each sub list in `parent_ops` and `children_ops` will be
matched against the corresponding entry in node.input/node.output. I.e.
- parent_ops[0] will be compared against node.input[0]
- parent_ops[2] will be compared against node.input[2]
- children_ops[3] will be compared against node.output[3]
Because of this, if the length of node.input/node.output is shorter
than parent_ops/children_ops, it is not a possible match and will
be discarded.
### `parent_ops` and `INITIALIZER_MATCH` example
Here's a simple example of matching against an identity node
with a single parent (because there is a single list in parent_ops),
and the parent must be an initializer (the INITIALIZER_MATCH value):
```python
matches = get_structural_matches(
graph,
op_type="Identity",
parent_ops=[[INITIALIZER_MATCH]]
)
for match in matches:
id_node = match.node
(init, ) = match.parents[0]
assert isinstance(init, onnx.TensorProto)
```
### Empty `parent_ops` example
Another example is matching against a single parent branch of a node. In this
case you can just specify `[]` as one of the parent ops:
```python
matches = get_structural_matches(
graph,
op_type="Add",
parent_ops=[
[],
["QuantizeLinear", "DequantizeLinear"]
]
)
for match in matches:
add_node = match.node
assert len(match.parents[0]) == 0
parent_1_quant, parent_1_dequant = match.parents[1]
```
### `optional_node` example
Here is a really complicated example with optional nodes and multiple parents. Here
we match against MatMul nodes that have at least 2 inputs from Quant/Dequant
sequences. After the MatMul node there are two optional nodes followed by
a Quant/Dequant sequence.
If the optional nodes are found, then the corresponding entries in
match.children will be the nodes. If the optional nodes are NOT found,
then those entries in match.children will be None.
In both cases, the length of match.children[0] will be the same as
the length of children_ops[0].
```python
matches = get_structural_matches(
graph,
op_type="MatMul",
parent_ops=[
["QuantizeLinear", "DequantizeLinear"],
["QuantizeLinear", "DequantizeLinear"],
],
children_ops=[
[
optional_node("Transpose"),
optional_node("Reshape"),
"QuantizeLinear",
"DequantizeLinear",
]
]
)
for match in matches:
matmul_node = match.node
parent_0_quant, parent_0_dequant = match.parents[0]
parent_1_quant, parent_1_dequant = match.parents[1]
opt_transpose, opt_reshape, child_quant, child_dequant = match.children[0]
```
### `any_of` example
Here is an example using the `any_of` function to match a parent node against
a set of op_types:
```python
matches = get_structural_matches(
graph,
op_type="MatMul",
parent_ops=[[any_of("QuantizeLinear", "DequantizeLinear")]]
)
for match in matches:
(quant_or_dequant, ) = match.parents[0]
assert quant_or_dequant.op_type in ["QuantizeLinear", "DequantizeLinear"]
```
:param graph: the graph to search in
:param op_type: The `NodeProto.op_type` to match against
:param parent_ops: List of List of `NodeProto.op_type` or `INITIALIZER_MATCH`
to match against
:param children_ops: List of List of `NodeProto.op_type` or `INITIALIZER_MATCH`
to match against
"""
# NOTE: gather matches completely first, so we don't have to worry about
# updates to the graph messing with the iteration
matches = []
for node in graph._model.graph.node:
match = _match_structure(graph, node, op_type, parent_ops, children_ops)
if match is not None:
matches.append(match)
return matches | Gathers all nodes in the `graph` that match the `op_type` and the have the specified parent/children structure, controlled via parent_ops/children_ops. ### op_type example A simple example just matching against op_type: ```python matches = get_structural_matches(graph, op_type="Identity") for match in matches: id_node = match.node assert isinstance(id_node, onnx.NodeProto) and id_node.op_type == "Identity" assert match.parents == [] assert match.children == [] ``` `parent_ops` and `children_ops` are list of list of op type strings. It's a list of lists because nodes can have multiple inputs and multiple outputs. So each sub list in `parent_ops` and `children_ops` will be matched against the corresponding entry in node.input/node.output. I.e. - parent_ops[0] will be compared against node.input[0] - parent_ops[2] will be compared against node.input[2] - children_ops[3] will be compared against node.output[3] Because of this, if the length of node.input/node.output is shorter than parent_ops/children_ops, it is not a possible match and will be discarded. ### `parent_ops` and `INITIALIZER_MATCH` example Here's a simple example of matching against an identity node with a single parent (because there is a single list in parent_ops), and the parent must be an initializer (the INITIALIZER_MATCH value): ```python matches = get_structural_matches( graph, op_type="Identity", parent_ops=[[INITIALIZER_MATCH]] ) for match in matches: id_node = match.node (init, ) = match.parents[0] assert isinstance(init, onnx.TensorProto) ``` ### Empty `parent_ops` example Another example is matching against a single parent branch of a node. 
In this case you can just specify `[]` as one of the parent ops: ```python matches = get_structural_matches( graph, op_type="Add", parent_ops=[ [], ["QuantizeLinear", "DequantizeLinear"] ] ) for match in matches: add_node = match.node assert len(match.parents[0]) == 0 parent_1_quant, parent_1_dequant = match.parents[1] ``` ### `optional_node` example Here is a really complicated example with optional nodes and multiple parents. Here we match against MatMul nodes that have at least 2 inputs from Quant/Dequant sequences. After the MatMul node there are two optional nodes followed by a Quant/Dequant sequence. If the optional_nodes are found, then the values in children_ops will be the nodes. If the optional nodes are NOT found, then those entries in children_ops will be None. In both cases, the length of match.children[0] will be the same as the length of children_ops[0]. ```python matches = get_structural_matches( graph, op_type="MatMul", parent_ops=[ ["QuantizeLinear", "DequantizeLinear"], ["QuantizeLinear", "DequantizeLinear"], ], children_ops=[ [ optional_node("Transpose"), optional_node("Reshape"), "QuantizeLinear", "DequantizeLinear", ] ] ) for match in matches: matmul_node = match.node parent_0_quant, parent_0_dequant = match.parents[0] parent_1_quant, parent_1_dequant = match.parents[1] opt_transpose, opt_reshape, child_quant, child_dequant = match.children[0] ``` ### `any_of` example Here is an example using the `any_of` function to match a parent node against a set of op_types: ```python matches = get_structural_matches( graph, op_type="MatMul", parent_ops=[[any_of("QuantizeLinear", "DequantizeLinear")]] ) for match in matches: (quant_or_dequant, ) = match.parents[0] assert quant_or_dequant.op_type in ["QuantizeLinear", "DequantizeLinear"] ``` :param graph: the graph to search in :param op_type: The `NodeProto.op_type` to match against :param parent_ops: List of List of `NodeProto.op_type` or `INITIALIZER_MATCH` to match against :param children_ops: List of 
List of `NodeProto.op_type` or `INITIALIZER_MATCH` to match against |
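The positional alignment described above (each `parent_ops[i]` branch is walked backwards from `node.input[i]`) can be sketched without ONNX at all. The `toy_node` and `match_parents` helpers below are hypothetical stand-ins for `NodeProto` and the matcher's parent-walk, not part of sparseml:

```python
from types import SimpleNamespace


def toy_node(op_type, inputs, output):
    # minimal stand-in for onnx.NodeProto: op_type, input id list, one output id
    return SimpleNamespace(op_type=op_type, input=list(inputs), output=[output])


def match_parents(node, parent_ops, producers):
    # discard if the node has fewer inputs than parent branch specs
    if len(node.input) < len(parent_ops):
        return None
    parents = []
    for input_id, branch in zip(node.input, parent_ops):
        chain, current = [], input_id
        # walk backwards through producers; nearest parent ends up last in chain
        for expected in reversed(branch):
            parent = producers.get(current)
            if parent is None or parent.op_type != expected:
                return None
            chain.insert(0, parent)
            current = parent.input[0] if parent.input else None
        parents.append(chain)
    return parents


# tiny graph: x -> QuantizeLinear -> DequantizeLinear -> MatMul
quant = toy_node("QuantizeLinear", ["x"], "q_out")
dequant = toy_node("DequantizeLinear", ["q_out"], "dq_out")
matmul = toy_node("MatMul", ["dq_out", "w"], "y")
producers = {n.output[0]: n for n in (quant, dequant, matmul)}

matched = match_parents(
    matmul, [["QuantizeLinear", "DequantizeLinear"], []], producers
)
```

An empty branch (`[]`) matches unconditionally, mirroring the "Empty `parent_ops`" example in the docstring.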
21,151 | import logging
from typing import Any, List, NamedTuple, Set, Union
import numpy
from onnx import AttributeProto, ModelProto, NodeProto, numpy_helper
from sparseml.onnx.utils import ONNXGraph, remove_node_and_params_from_graph
QUANTIZE_OP_NAMES = ["QuantizeLinear", "DequantizeLinear"]
QuantizationParams = NamedTuple(
"QuantizationParams",
[("scale", float), ("zero_point", int), ("target", Union[numpy.ndarray, None])],
)
The provided code snippet includes necessary dependencies for implementing the `get_quantization_params` function. Write a Python function `def get_quantization_params( model: Union[ModelProto, ONNXGraph], node: NodeProto, include_target: bool = False, ) -> QuantizationParams` to solve the following problem:
:param model: ONNX model to read from or ONNXGraph object :param node: A QuantizeLinear or DequantizeLinear Node :param include_target: Set True to include the quantization target. If False, target value will be returned as None. Default is False :return: QuantizationParams object with scale and zero point, will include the quantization target if it is an initializer otherwise target will be None
Here is the function:
def get_quantization_params(
model: Union[ModelProto, ONNXGraph],
node: NodeProto,
include_target: bool = False,
) -> QuantizationParams:
"""
:param model: ONNX model to read from or ONNXGraph object
:param node: A QuantizeLinear or DequantizeLinear Node
:param include_target: Set True to include the quantization target. If False,
target value will be returned as None. Default is False
:return: QuantizationParams object with scale and zero point, will include the
quantization target if it is an initializer otherwise target will be None
"""
assert (
node.op_type in QUANTIZE_OP_NAMES
), "Op Type must be either QuantizeLinear or DequantizeLinear, found {} ".format(
node.op_type
)
graph = model if isinstance(model, ONNXGraph) else ONNXGraph(model)
scale = graph.get_init_by_name(node.input[1])
if scale is None:
scale_const = graph.get_node_by_output_id(node.input[1])
if scale_const:
scale = scale_const.attribute[0].t
assert scale, "Quantization scale {} not found".format(node.input[1])
zero_point = graph.get_init_by_name(node.input[2])
if zero_point is None:
zero_point_const = graph.get_node_by_output_id(node.input[2])
if zero_point_const:
zero_point = zero_point_const.attribute[0].t
assert zero_point, "Quantization zero point {} not found".format(node.input[2])
scale = numpy_helper.to_array(scale)
zero_point = numpy_helper.to_array(zero_point)
target = None
if include_target:
target = graph.get_init_by_name(node.input[0])
if target is not None:
target = numpy_helper.to_array(target)
return QuantizationParams(scale=scale, zero_point=zero_point, target=target) | :param model: ONNX model to read from or ONNXGraph object :param node: A QuantizeLinear or DequantizeLinear Node :param include_target: Set True to include the quantization target. If False, target value will be returned as None. Default is False :return: QuantizationParams object with scale and zero point, will include the quantization target if it is an initializer otherwise target will be None
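The scale/zero-point pair returned above defines the usual affine mapping `real ≈ scale * (quantized - zero_point)`. A quick pure-Python sanity check of that arithmetic (toy scalar values, no ONNX required):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # round to the nearest representable level and clamp to the uint8 range
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))


def dequantize(q, scale, zero_point):
    # affine dequantization: recover the approximate real value
    return scale * (q - zero_point)


scale, zero_point = 0.05, 128
q = quantize(0.5, scale, zero_point)        # 10 levels above the zero point
restored = dequantize(q, scale, zero_point)
```

Values that fall outside the representable range saturate at `qmin`/`qmax`, which is why `scale` must be chosen to cover the tensor's dynamic range.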
21,152 | import logging
from typing import Any, List, NamedTuple, Set, Union
import numpy
from onnx import AttributeProto, ModelProto, NodeProto, numpy_helper
from sparseml.onnx.utils import ONNXGraph, remove_node_and_params_from_graph
QUANTIZE_OP_NAMES = ["QuantizeLinear", "DequantizeLinear"]
The provided code snippet includes necessary dependencies for implementing the `delete_quant_node` function. Write a Python function `def delete_quant_node( model: ModelProto, node: NodeProto, keep_weight: bool = False, )` to solve the following problem:
Deletes a QuantizeLinear or DequantizeLinear and its parameters from the model :param model: ONNX model to modify :param node: the QuantizeLinear or DequantizeLinear node to delete :param keep_weight: set true to not delete the weight param possibly stored as an initializer to the first input of this node
Here is the function:
def delete_quant_node(
model: ModelProto,
node: NodeProto,
keep_weight: bool = False,
):
"""
Deletes a QuantizeLinear or DequantizeLinear and its parameters from the model
:param model: ONNX model to modify
:param node: the QuantizeLinear or DequantizeLinear node to delete
:param keep_weight: set true to not delete the weight param possibly stored as an
initializer to the first input of this node
"""
assert (
node.op_type in QUANTIZE_OP_NAMES
), "Op Type must be either QuantizeLinear or DequantizeLinear, found {} ".format(
node.op_type
)
if keep_weight:
del node.input[0]
remove_node_and_params_from_graph(model, node) | Deletes a QuantizeLinear or DequantizeLinear and its parameters from the model :param model: ONNX model to modify :param node: the QuantizeLinear or DequantizeLinear node to delete :param keep_weight: set true to not delete the weight param possibly stored as an initializer to the first input of this node |
21,153 | import logging
from typing import Any, List, NamedTuple, Set, Union
import numpy
from onnx import AttributeProto, ModelProto, NodeProto, numpy_helper
from sparseml.onnx.utils import ONNXGraph, remove_node_and_params_from_graph
The provided code snippet includes necessary dependencies for implementing the `assert_node_type` function. Write a Python function `def assert_node_type(node: NodeProto, op: Union[List[str], Set[str], str]) -> bool` to solve the following problem:
Checks if a node is of the given op type :param node: the node to check :param op: the operation type to check for :return: True if the node has the given op type, False otherwise
Here is the function:
def assert_node_type(node: NodeProto, op: Union[List[str], Set[str], str]) -> bool:
"""
Checks if a node is of the given op type
:param node: the node to check
:param op: the operation type to check for
:return: True if the node has the given op type, False otherwise
"""
if node is None:
return False
if isinstance(op, str):
return node.op_type == op
else:
return node.op_type in op | Checks if a node is of the given op type :param node: the node to check :param op: the operation type to check for :return: True if the node has the given op type, False otherwise |
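The membership check above works identically for a single op string or a collection of them; a minimal sketch with a `SimpleNamespace` stand-in for `NodeProto` (the helper name is hypothetical):

```python
from types import SimpleNamespace


def node_type_matches(node, op):
    # mirrors assert_node_type: accept one op string or a list/set of them
    if node is None:
        return False
    if isinstance(op, str):
        return node.op_type == op
    return node.op_type in op


node = SimpleNamespace(op_type="QuantizeLinear")
single = node_type_matches(node, "QuantizeLinear")
multi = node_type_matches(node, {"QuantizeLinear", "DequantizeLinear"})
missing = node_type_matches(None, "Add")
```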
21,154 | from typing import Optional, Tuple
import numpy
import onnx
from onnx import ModelProto, NodeProto, TensorProto, numpy_helper
from sparseml.exporters.transforms.utils.helpers import (
QuantizationParams,
attribute_to_kwarg,
quantize_array,
)
def _create_mul_node(
cast_node_output: str,
rescale_scale_name: str,
input_node_name: str,
target_output: str,
model: ModelProto,
node_output_orig: str,
) -> NodeProto:
mul_node_inputs = [
cast_node_output, # a
rescale_scale_name, # b -> rescale factor
]
mul_node_name = "{}_rescale_mul".format(input_node_name)
if target_output is None:
target_output = mul_node_name
# since we skip the add conversion,
# update model to point all outputs of matmul/conv to the rescale mul
_update_model_input_id(model, node_output_orig, target_output)
mul_node = onnx.helper.make_node(
"Mul",
mul_node_inputs,
[target_output],
mul_node_name,
)
return mul_node
def _create_cast_node(
quant_add_name: str, output_quantize_node: Optional[NodeProto] = None
) -> NodeProto:
quant_add_output = (
output_quantize_node.output[0]
if output_quantize_node
else f"{quant_add_name}_output"
)
cast_node_name = "{}_cast".format(quant_add_name)
cast_node_output = "{}_cast".format(quant_add_output)
cast_node = onnx.helper.make_node(
"Cast",
[quant_add_output],
[cast_node_output],
cast_node_name,
to=getattr(onnx.TensorProto, "FLOAT"), # get Float32 enum id
)
return cast_node
def _create_qadd_node(
node: NodeProto,
integer_op_output: str,
quantized_bias_name: str,
output_quantize_node: Optional[NodeProto] = None,
) -> NodeProto:
quant_add_inputs = [
integer_op_output, # MatMul/Conv integer outputs (INT32)
quantized_bias_name, # Quantized bias (INT32)
]
quant_add_name = "{}_bias_add_quant".format(node.name)
quant_add_output = (
output_quantize_node.output[0]
if output_quantize_node
else f"{quant_add_name}_output"
)
# create Add node and add it to graph
qadd_node = onnx.helper.make_node(
"Add",
quant_add_inputs,
[quant_add_output],
quant_add_name,
)
return qadd_node
def _create_integer_op_node(
node: NodeProto,
input_quantize_node: NodeProto,
weight_quantize_node: NodeProto,
quantized_weight_name: str,
) -> NodeProto:
integer_op_inputs = [
input_quantize_node.input[0], # input matrix (replaces previous dequant node)
quantized_weight_name, # quantized weight
input_quantize_node.input[2], # input zero point
weight_quantize_node.input[2], # weight zero point
]
integer_op_output = "{}_quant".format(node.output[0])
integer_op_name = "{}_quant".format(node.name)
# create MatMulInteger/ConvInteger node and add it to graph
if node.op_type == "Conv":
# get conv attributes as kwargs
conv_kwargs = {}
for attribute in node.attribute:
conv_kwargs.update(attribute_to_kwarg(attribute))
# create ConvInteger node and add it to graph
integer_op_node = onnx.helper.make_node(
"ConvInteger",
integer_op_inputs,
[integer_op_output],
integer_op_name,
**conv_kwargs,
)
else:
integer_op_node = onnx.helper.make_node(
"MatMulInteger",
integer_op_inputs,
[integer_op_output],
integer_op_name,
)
return integer_op_node
def _quantize_bias(
node, bias_initializer, input_quantize_params, weight_quantize_params, bias_add_name
) -> Tuple[TensorProto, TensorProto, TensorProto]:
bias_initializer = numpy_helper.to_array(bias_initializer)
bias_scale = input_quantize_params.scale * weight_quantize_params.scale
bias_zero_point = numpy.zeros(bias_scale.shape, dtype=numpy.int32)
quantized_bias = quantize_array(
bias_initializer, bias_scale, bias_zero_point, dtype=numpy.int32
)
if node.op_type == "Conv" and len(quantized_bias.shape) == 1:
# reshape for bias add broadcasting
quantized_bias = quantized_bias.reshape(1, quantized_bias.shape[0], 1, 1)
quantized_bias_name = "{}.bias_quantized".format(bias_add_name)
quantized_bias_initializer = numpy_helper.from_array(
quantized_bias, name=quantized_bias_name
)
quantized_bias_scale_name = "{}.scale".format(quantized_bias_name)
quantized_bias_zero_point_name = "{}.zero_point".format(quantized_bias_name)
return (
quantized_bias_initializer,
numpy_helper.from_array(
numpy.asarray(bias_scale), name=quantized_bias_scale_name
),
numpy_helper.from_array(
numpy.asarray(bias_zero_point, dtype=numpy.uint8),
name=quantized_bias_zero_point_name,
),
)
def _create_rescale_init(
node, input_quantize_params, weight_quantize_params, reshape=False
) -> TensorProto:
output_scale = input_quantize_params.scale * weight_quantize_params.scale
if reshape: # for channel-wise Conv
output_scale = output_scale.reshape(1, output_scale.shape[0], 1, 1)
return numpy_helper.from_array(
numpy.asarray(output_scale), name=f"{node.name}_quant.rescale.scale"
)
def _quantize_weight_initializer(
node: NodeProto,
weight_quantize_params: QuantizationParams,
transpose_weight: bool = False,
) -> TensorProto:
quantized_weight = quantize_array(
weight_quantize_params.target,
weight_quantize_params.scale,
weight_quantize_params.zero_point,
weight_quantize_params.zero_point.dtype,
)
if transpose_weight:
quantized_weight = quantized_weight.transpose()
quantized_weight_name = "{}.weight_quantized".format(node.name)
quantized_weight_initializer = numpy_helper.from_array(
quantized_weight, name=quantized_weight_name
)
return quantized_weight_initializer
QuantizationParams = NamedTuple(
"QuantizationParams",
[("scale", float), ("zero_point", int), ("target", Union[numpy.ndarray, None])],
)
The provided code snippet includes necessary dependencies for implementing the `add_quantized_conv_matmul_add_ops` function. Write a Python function `def add_quantized_conv_matmul_add_ops( model: ModelProto, node: NodeProto, input_quantize_node: NodeProto, weight_quantize_node: NodeProto, input_quantize_params: QuantizationParams, weight_quantize_params: QuantizationParams, bias_initializer: Optional[TensorProto], bias_add_name: str, target_output: str, transpose_weight: bool, output_quantize_node: Optional[NodeProto] = None, output_dequantize_node: Optional[NodeProto] = None, ) -> ModelProto` to solve the following problem:
Helper function for conversion of qat parameterized gemms, matmuls, or convs to conv/matmul integer add blocks. Adds new quantized ops to graph, does not perform any checks or deletions (should be called by the operator main conversion function)
Here is the function:
def add_quantized_conv_matmul_add_ops(
model: ModelProto,
node: NodeProto,
input_quantize_node: NodeProto,
weight_quantize_node: NodeProto,
input_quantize_params: QuantizationParams,
weight_quantize_params: QuantizationParams,
bias_initializer: Optional[TensorProto],
bias_add_name: str,
target_output: str,
transpose_weight: bool,
output_quantize_node: Optional[NodeProto] = None,
output_dequantize_node: Optional[NodeProto] = None,
) -> ModelProto:
"""
Helper function for conversion of qat parameterized gemms, matmuls, or convs to
conv/matmul integer add blocks.
Adds new quantized ops to graph, does not perform any checks or deletions
(should be called by the operator main conversion function)
"""
node_output_orig = node.output[0]
if not target_output and (
any(output.name == node_output_orig for output in model.graph.output)
):
# original node output is a graph output, make that the quant block
# output target id
target_output = node_output_orig
# Quantize weights and add to graph
quantized_weight_initializer = _quantize_weight_initializer(
node, weight_quantize_params, transpose_weight
)
model.graph.initializer.append(quantized_weight_initializer)
# Create MatMulInteger/ConvInteger node and add it to graph
integer_op_node = _create_integer_op_node(
node,
input_quantize_node,
weight_quantize_node,
quantized_weight_initializer.name,
)
model.graph.node.append(integer_op_node)
if bias_initializer is not None:
# Add bias + zero point correction; quantize bias and add it to graph
(
quantized_bias_initializer,
quantized_bias_scale,
quantize_bias_zero_point,
) = _quantize_bias(
node,
bias_initializer,
input_quantize_params,
weight_quantize_params,
bias_add_name,
)
model.graph.initializer.append(quantized_bias_initializer)
model.graph.initializer.append(quantized_bias_scale)
model.graph.initializer.append(quantize_bias_zero_point)
# Create Quantized Bias Add node and add it to graph
qadd_node = _create_qadd_node(
node,
integer_op_output="{}_quant".format(node.output[0]),
quantized_bias_name=quantized_bias_initializer.name,
output_quantize_node=output_quantize_node,
)
model.graph.node.append(qadd_node)
mul_input_node_name = qadd_node.name
# bias has same scale as future rescale op, unless doing channel-wise Conv
if weight_quantize_params.scale.size > 1 and node.op_type == "Conv":
# channel-wise Conv
rescale_scale = _create_rescale_init(
node, input_quantize_params, weight_quantize_params, reshape=True
)
model.graph.initializer.append(rescale_scale)
else:
rescale_scale = quantized_bias_scale
else:
rescale_scale = _create_rescale_init(
node, input_quantize_params, weight_quantize_params
)
model.graph.initializer.append(rescale_scale)
# cast node should come directly after quantize op output instead of add
output_quantize_node = output_quantize_node or integer_op_node
mul_input_node_name = output_quantize_node.name
# create Cast node and add it to graph
cast_node = _create_cast_node(
quant_add_name="{}_bias_add_quant".format(node.name),
output_quantize_node=output_quantize_node,
)
model.graph.node.append(cast_node)
# create Mul node for rescale
mul_node = _create_mul_node(
cast_node_output=cast_node.output[0],
rescale_scale_name=rescale_scale.name,
input_node_name=mul_input_node_name,
target_output=target_output,
model=model,
node_output_orig=node_output_orig,
)
model.graph.node.append(mul_node)
return model | Helper function for conversion of qat parameterized gemms, matmuls, or convs to conv/matmul integer add blocks. Adds new quantized ops to graph, does not perform any checks or deletions (should be called by the operator main conversion function) |
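The integer-op + cast + mul pipeline above relies on the identity `(x/s_in)·(w/s_w)·(s_in·s_w) ≈ x·w` when zero points are 0, with the bias quantized at the combined scale so it can be added in the integer domain. A scalar pure-Python sketch of that bookkeeping (toy numbers, zero points assumed 0, no ONNX):

```python
def int_matmul_with_rescale(x, w, b, s_in, s_w):
    # quantize the operands (zero points taken as 0 for brevity)
    qx = round(x / s_in)              # input -> int, as the input QuantizeLinear would
    qw = round(w / s_w)               # weight -> int initializer
    qb = round(b / (s_in * s_w))      # bias uses the combined input*weight scale
    acc = qx * qw + qb                # MatMulInteger output + quantized bias Add (INT32)
    return float(acc) * (s_in * s_w)  # Cast to float, then Mul by the rescale factor


approx = int_matmul_with_rescale(0.8, -0.4, 0.1, s_in=0.01, s_w=0.005)
exact = 0.8 * -0.4 + 0.1
```

With scales that divide the inputs evenly the rescaled result matches the float computation to rounding error, which is the whole point of the `Cast` + `Mul` tail that `add_quantized_conv_matmul_add_ops` appends.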
21,155 | from typing import Union
import torch.nn.functional as TF
from torch import Tensor, clamp
from torch.nn import LeakyReLU, Module, PReLU
from torch.nn import ReLU as TReLU
from torch.nn import ReLU6 as TReLU6
The provided code snippet includes necessary dependencies for implementing the `swish` function. Write a Python function `def swish(x_tens: Tensor)` to solve the following problem:
Swish layer functional implementation: x * sigmoid(x). More information can be found in the paper `here <https://arxiv.org/abs/1710.05941>`__. :param x_tens: the input tensor to perform the swish op on :return: the output of x_tens * sigmoid(x_tens)
Here is the function:
def swish(x_tens: Tensor):
"""
Swish layer functional implementation: x * sigmoid(x).
More information can be found in the paper
`here <https://arxiv.org/abs/1710.05941>`__.
:param x_tens: the input tensor to perform the swish op on
:return: the output of x_tens * sigmoid(x_tens)
"""
return x_tens * TF.sigmoid(x_tens) | Swish layer functional implementation: x * sigmoid(x). More information can be found in the paper `here <https://arxiv.org/abs/1710.05941>`__. :param x_tens: the input tensor to perform the swish op on :return: the output of x_tens * sigmoid(x_tens) |
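The same formula can be checked without torch, using `math.exp` for the sigmoid; this scalar sketch is only a numeric illustration of `x * sigmoid(x)`, not the tensor implementation:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def swish_scalar(x):
    # swish(x) = x * sigmoid(x)
    return x * sigmoid(x)


at_zero = swish_scalar(0.0)  # sigmoid(0) = 0.5, times x = 0 -> 0.0
```

Note the characteristic shape: near-identity for large positive x, and a small negative dip for moderately negative x before decaying to 0.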
21,156 | from typing import Union
import torch.nn.functional as TF
from torch import Tensor, clamp
from torch.nn import LeakyReLU, Module, PReLU
from torch.nn import ReLU as TReLU
from torch.nn import ReLU6 as TReLU6
The provided code snippet includes necessary dependencies for implementing the `hard_swish` function. Write a Python function `def hard_swish(x_tens: Tensor, inplace: bool = False)` to solve the following problem:
| Hardswish layer implementation: | 0 for x <= -3 | x for x >= 3 | x * (x + 3) / 6 otherwise More information can be found in the paper `here <https://arxiv.org/abs/1905.02244>`__. :param x_tens: the input tensor to perform the swish op on :param inplace: True to run the operation in place in memory, False otherwise :return: 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 otherwise
Here is the function:
def hard_swish(x_tens: Tensor, inplace: bool = False):
"""
| Hardswish layer implementation:
| 0 for x <= -3
| x for x >= 3
| x * (x + 3) / 6 otherwise
More information can be found in the paper
`here <https://arxiv.org/abs/1905.02244>`__.
:param x_tens: the input tensor to perform the swish op on
:param inplace: True to run the operation in place in memory, False otherwise
:return: 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 otherwise
"""
if inplace:
x_tens.mul_(clamp(x_tens + 3, 0, 6))
x_tens.div_(6)
else:
relu_6 = x_tens + 3
relu_6 = relu_6.clamp(0, 6)
x_tens = x_tens * relu_6
x_tens = x_tens / 6
return x_tens | | Hardswish layer implementation: | 0 for x <= -3 | x for x >= 3 | x * (x + 3) / 6 otherwise More information can be found in the paper `here <https://arxiv.org/abs/1905.02244>`__. :param x_tens: the input tensor to perform the swish op on :param inplace: True to run the operation in place in memory, False otherwise :return: 0 for x <= -3, x for x >= 3, x * (x + 3) / 6 otherwise |
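The three-piece definition in the docstring can be verified with a scalar pure-Python version (the `clamp(x + 3, 0, 6)` term is exactly ReLU6 of `x + 3`); `hard_swish_scalar` is an illustrative stand-in, not the tensor code:

```python
def hard_swish_scalar(x):
    # piecewise form: 0 below -3, identity above 3, x * (x + 3) / 6 in between
    relu6 = min(max(x + 3.0, 0.0), 6.0)
    return x * relu6 / 6.0


below = hard_swish_scalar(-4.0)   # clamp gives 0 -> output 0
above = hard_swish_scalar(5.0)    # clamp saturates at 6 -> x * 6 / 6 = x
middle = hard_swish_scalar(1.0)   # 1 * (1 + 3) / 6
```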
21,157 | from typing import Union
import torch.nn.functional as TF
from torch import Tensor, clamp
from torch.nn import LeakyReLU, Module, PReLU
from torch.nn import ReLU as TReLU
from torch.nn import ReLU6 as TReLU6
def create_activation(
act_type: str, inplace: bool, num_channels: int, **kwargs
) -> Module:
"""
Create an activation function using the given parameters.
:param act_type: the type of activation to replace with; options:
[relu, relu6, prelu, lrelu, swish, hardswish, silu]
:param inplace: True to create the activation as an inplace, False otherwise
:param num_channels: The number of channels to create the activation for
:param kwargs: Additional kwargs to pass to the activation constructor
:return: the created activation layer
"""
act_type = act_type.lower()
if act_type == "relu":
return ReLU(num_channels=num_channels, inplace=inplace)
if act_type == "relu6":
return ReLU6(num_channels=num_channels, inplace=inplace)
if act_type == "prelu":
return PReLU(num_parameters=num_channels, **kwargs)
if act_type == "lrelu":
return LeakyReLU(inplace=inplace, **kwargs)
if act_type == "swish":
return Swish(num_channels=num_channels)
if act_type == "hardswish":
return Hardswish(num_channels=num_channels, inplace=inplace)
if act_type == "silu":
return SiLU(**kwargs)
raise ValueError("unknown act_type given of {}".format(act_type))
The provided code snippet includes necessary dependencies for implementing the `replace_activation` function. Write a Python function `def replace_activation( module: Module, name: str, act_type: str, inplace: bool = False, num_channels: Union[int, None] = None, **kwargs, ) -> Module` to solve the following problem:
General function to replace the activation for a specific layer in a Module with a new one. :param module: the module to replace the activation function in :param name: the name of the layer to replace the activation for :param act_type: the type of activation to replace with; options: [relu, relu6, prelu, lrelu, swish, silu] :param inplace: True to create the activation as an inplace, False otherwise :param num_channels: The number of channels to create the activation for :param kwargs: Additional kwargs to pass to the activation constructor :return: the created activation layer
Here is the function:
def replace_activation(
module: Module,
name: str,
act_type: str,
inplace: bool = False,
num_channels: Union[int, None] = None,
**kwargs,
) -> Module:
"""
General function to replace the activation for a specific layer in a Module
with a new one.
:param module: the module to replace the activation function in
:param name: the name of the layer to replace the activation for
:param act_type: the type of activation to replace with; options:
[relu, relu6, prelu, lrelu, swish, hardswish, silu]
:param inplace: True to create the activation as an inplace, False otherwise
:param num_channels: The number of channels to create the activation for
:param kwargs: Additional kwargs to pass to the activation constructor
:return: the created activation layer
"""
layer = module
layers = name.split(".")
for lay in layers[:-1]:
layer = layer.__getattr__(lay)
cur = layer.__getattr__(layers[-1])
if num_channels is None and hasattr(cur, "num_channels"):
num_channels = cur.num_channels
elif num_channels is None and hasattr(cur, "num_parameters"):
num_channels = cur.num_parameters
act = create_activation(
act_type, inplace=inplace, num_channels=num_channels, **kwargs
)
layer.__setattr__(layers[-1], act)
return act | General function to replace the activation for a specific layer in a Module with a new one. :param module: the module to replace the activation function in :param name: the name of the layer to replace the activation for :param act_type: the type of activation to replace with; options: [relu, relu6, prelu, lrelu, swish, silu] :param inplace: True to create the activation as an inplace, False otherwise :param num_channels: The number of channels to create the activation for :param kwargs: Additional kwargs to pass to the activation constructor :return: the created activation layer |
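The dotted-path traversal in `replace_activation` can be sketched without torch; the `Node` class below is a hypothetical stand-in for a `torch.nn.Module` holding nested children, and `replace_attr` mirrors the getattr/setattr loop above.

```python
class Node:
    """Hypothetical stand-in for a torch.nn.Module holding child attributes."""
    pass

def replace_attr(root, name, new_value):
    # Walk every path component except the last to reach the parent layer,
    # then overwrite the final attribute -- the same pattern replace_activation
    # uses to swap an activation layer in place.
    obj = root
    parts = name.split(".")
    for part in parts[:-1]:
        obj = getattr(obj, part)
    setattr(obj, parts[-1], new_value)
    return new_value

root = Node()
root.block = Node()
root.block.act = "relu"
replace_attr(root, "block.act", "swish")
print(root.block.act)  # -> swish
```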
21,158 | from typing import Union
import torch.nn.functional as TF
from torch import Tensor, clamp
from torch.nn import LeakyReLU, Module, PReLU
from torch.nn import ReLU as TReLU
from torch.nn import ReLU6 as TReLU6
def create_activation(
act_type: str, inplace: bool, num_channels: int, **kwargs
) -> Module:
"""
Create an activation function using the given parameters.
:param act_type: the type of activation to replace with; options:
[relu, relu6, prelu, lrelu, swish, hardswish, silu]
:param inplace: True to create the activation as an inplace, False otherwise
:param num_channels: The number of channels to create the activation for
:param kwargs: Additional kwargs to pass to the activation constructor
:return: the created activation layer
"""
act_type = act_type.lower()
if act_type == "relu":
return ReLU(num_channels=num_channels, inplace=inplace)
if act_type == "relu6":
return ReLU6(num_channels=num_channels, inplace=inplace)
if act_type == "prelu":
return PReLU(num_parameters=num_channels, **kwargs)
if act_type == "lrelu":
return LeakyReLU(inplace=inplace, **kwargs)
if act_type == "swish":
return Swish(num_channels=num_channels)
if act_type == "hardswish":
return Hardswish(num_channels=num_channels, inplace=inplace)
if act_type == "silu":
return SiLU(**kwargs)
raise ValueError("unknown act_type given of {}".format(act_type))
def is_activation(module: Module) -> bool:
"""
:param module: the module to check whether it is a common activation function or not
:return: True if the module is an instance of a common activation function,
False otherwise
"""
return (
isinstance(module, TReLU)
or isinstance(module, TReLU6)
or isinstance(module, ReLU)
or isinstance(module, ReLU6)
or isinstance(module, PReLU)
or isinstance(module, LeakyReLU)
or isinstance(module, Swish)
or isinstance(module, Hardswish)
or (SiLU is not None and isinstance(module, SiLU))
)
The provided code snippet includes necessary dependencies for implementing the `replace_activations` function. Write a Python function `def replace_activations( module: Module, act_type: str, inplace: bool = False, num_channels: Union[int, None] = None, **kwargs, ) -> Module` to solve the following problem:
General function to replace all activation functions in a Module with a new one. :param module: the module to replace the activation function in :param act_type: the type of activation to replace with; options: [relu, relu6, prelu, lrelu, swish, silu] :param inplace: True to create the activation as an inplace, False otherwise :param num_channels: The number of channels to create the activation for :param kwargs: Additional kwargs to pass to the activation constructor :return: the updated module
Here is the function:
def replace_activations(
module: Module,
act_type: str,
inplace: bool = False,
num_channels: Union[int, None] = None,
**kwargs,
) -> Module:
"""
General function to replace all activation functions in a Module
with a new one.
:param module: the module to replace the activation function in
:param act_type: the type of activation to replace with; options:
[relu, relu6, prelu, lrelu, swish, hardswish, silu]
:param inplace: True to create the activation as an inplace, False otherwise
:param num_channels: The number of channels to create the activation for
:param kwargs: Additional kwargs to pass to the activation constructor
:return: the updated module
"""
if is_activation(module):
return create_activation(
act_type, inplace=inplace, num_channels=num_channels, **kwargs
)
for child_name, child_module in module.named_children():
setattr(
module,
child_name,
replace_activations(
child_module, act_type, inplace, num_channels, **kwargs
),
)
return module | General function to replace all activation functions in a Module with a new one. :param module: the module to replace the activation function in :param act_type: the type of activation to replace with; options: [relu, relu6, prelu, lrelu, swish, silu] :param inplace: True to create the activation as an inplace, False otherwise :param num_channels: The number of channels to create the activation for :param kwargs: Additional kwargs to pass to the activation constructor :return: the updated module |
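The recursion in `replace_activations` follows a standard tree-rewrite pattern: replace the node itself if it matches, otherwise reassign each rewritten child. A minimal dict-based sketch (plain strings stand in for activation modules):

```python
def replace_leaves(tree, old, new):
    # Base case mirrors is_activation(): a matching leaf is replaced outright.
    if tree == old:
        return new
    # Recursive case mirrors named_children(): rebuild each child.
    if isinstance(tree, dict):
        return {key: replace_leaves(child, old, new) for key, child in tree.items()}
    return tree

model = {"conv": "conv3x3", "act": "relu", "block": {"act": "relu", "fc": "linear"}}
print(replace_leaves(model, "relu", "swish"))
# -> {'conv': 'conv3x3', 'act': 'swish', 'block': {'act': 'swish', 'fc': 'linear'}}
```

Unlike the torch version, this sketch returns a new tree; the real function mutates the module via `setattr` so parameter references stay attached to the original object.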
21,159 | from typing import Dict, List, Union
import torch
import torch.nn.functional as TF
from torch import Tensor
from torch.nn import Module, Parameter, ReLU
def _apply_permuted_channels(apply_fn, tens: Tensor, **kwargs):
if len(tens.shape) < 3:
return apply_fn(tens, **kwargs)
perm = [ind for ind in range(len(tens.shape))]
# swap the channel and the last element so we can broadcast across the channels
perm[1] = perm[-1]
perm[-1] = 1
return apply_fn(tens.permute(perm), **kwargs).permute(perm) | null |
21,160 | from typing import Dict, List, Union
import torch
import torch.nn.functional as TF
from torch import Tensor
from torch.nn import Module, Parameter, ReLU
def fat_relu(tens: Tensor, threshold: Union[Tensor, float], inplace: bool) -> Tensor:
"""
Apply a FATReLU function to a tensor (forced activation threshold):
f(x, t) = 0 if x < t; x if x >= t
:param tens: the tensor to apply the fat relu to
:param threshold: the threshold to apply. if not a single value then
the dimension to broadcast across must be last in the tensor
:param inplace: False to create a new tensor,
True to overwrite the current tensor's values
:return: f(x, t) = 0 if x < t; x if x >= t
"""
if isinstance(threshold, float):
# not channelwise, can get by with using a threshold
return TF.threshold(tens, threshold, 0.0, inplace)
mask = (tens >= threshold).float()
out = tens * mask if not inplace else tens.mul_(mask)
return out
The provided code snippet includes necessary dependencies for implementing the `fat_pw_relu` function. Write a Python function `def fat_pw_relu( tens: Tensor, threshold: Tensor, compression: Tensor, inplace: bool ) -> Tensor` to solve the following problem:
Apply a piecewise separable FATReLU function to a tensor (forced activation threshold): f(x, t, c) = 0 if x <= (t - t/c); x if x >= t; c(x - (t - t/c)) if x > (t - t/c) and x < t :param tens: the tensor to apply the piecewise fat relu to :param threshold: the threshold at which all values will be zero or interpolated between threshold and 0 :param compression: the compression or slope to interpolate between 0 and the threshold with :param inplace: false to create a new tensor, true to overwrite the current tensor's values :return: f(x, t, c) = 0 if x <= (t - t/c); x if x >= t; c(x - (t - t/c)) if x > (t - t/c) and x < t
Here is the function:
def fat_pw_relu(
tens: Tensor, threshold: Tensor, compression: Tensor, inplace: bool
) -> Tensor:
"""
Apply a piecewise separable FATReLU function to a tensor
(forced activation threshold):
f(x, t, c) = 0 if x <= (t - t/c); x if x >= t;
c(x - (t - t/c)) if x > (t - t/c) and x < t
:param tens: the tensor to apply the piecewise fat relu to
:param threshold: the threshold at which all values will be zero or interpolated
between threshold and 0
:param compression: the compression or slope to interpolate between 0
and the threshold with
:param inplace: false to create a new tensor, true to overwrite the
current tensor's values
:return: f(x, t, c) = 0 if x <= (t - t/c); x if x >= t;
c(x - (t - t/c)) if x > (t - t/c) and x < t
"""
x_offset = threshold - threshold / compression
# apply the fat relu up until our x_offset (where our compression region starts)
out = fat_relu(tens, x_offset, inplace)
# calculate the compression region values
    comp_mask = ((tens < threshold).float() * (tens > x_offset).float())
comp_tens = compression * (out - x_offset)
# reassign the compression values in the output
out = (
(-1.0 * comp_mask + 1.0) * out + comp_tens * comp_mask
if not inplace
else out.mul_(-1.0 * comp_mask + 1.0).add_(comp_tens * comp_mask)
)
return out | Apply a piecewise separable FATReLU function to a tensor (forced activation threshold): f(x, t, c) = 0 if x <= (t - t/c); x if x >= t; c(x - (t - t/c)) if x > (t - t/c) and x < t :param tens: the tensor to apply the piecewise fat relu to :param threshold: the threshold at which all values will be zero or interpolated between threshold and 0 :param compression: the compression or slope to interpolate between 0 and the threshold with :param inplace: false to create a new tensor, true to overwrite the current tensor's values :return: f(x, t, c) = 0 if x <= (t - t/c); x if x >= t; c(x - (t - t/c)) if x > (t - t/c) and x < t |
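The piecewise definition above is easiest to check on scalars. A plain-Python reference (not the tensor implementation), assuming compression c > 1 so the offset t - t/c is positive:

```python
def fat_pw_relu_scalar(x, t, c):
    # Offset where the linear compression region begins.
    x_offset = t - t / c
    if x <= x_offset:          # dead region: forced to zero
        return 0.0
    if x >= t:                 # identity region: pass through
        return x
    return c * (x - x_offset)  # compression region: slope-c line up to (t, t)

# The pieces meet continuously: c * (t - x_offset) = c * (t / c) = t.
print(fat_pw_relu_scalar(0.25, 1.0, 2.0))  # -> 0.0  (below offset 0.5)
print(fat_pw_relu_scalar(0.75, 1.0, 2.0))  # -> 0.5  (interpolated)
print(fat_pw_relu_scalar(2.0, 1.0, 2.0))   # -> 2.0  (above threshold)
```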
21,161 | from typing import Dict, List, Union
import torch
import torch.nn.functional as TF
from torch import Tensor
from torch.nn import Module, Parameter, ReLU
The provided code snippet includes necessary dependencies for implementing the `fat_sig_relu` function. Write a Python function `def fat_sig_relu(tens: Tensor, threshold: Tensor, compression: Tensor) -> Tensor` to solve the following problem:
Create a sigmoid approximated FATReLU function to a tensor (forced activation threshold): f(x, t, c) = x / e^(c*(t-x)) Note: there is no option for inplace with this function. :param tens: the tensor to apply the sigmoid fat relu to :param threshold: the threshold at which all values will be zero or approximated in the sigmoid region :param compression: the compression or slope to use in the sigmoid region :return: f(x, t, c) = x / e^(c*(t-x))
Here is the function:
def fat_sig_relu(tens: Tensor, threshold: Tensor, compression: Tensor) -> Tensor:
"""
Create a sigmoid approximated FATReLU function to a tensor
(forced activation threshold):
f(x, t, c) = x / (1 + e^(c*(t-x)))
Note: there is no option for inplace with this function.
:param tens: the tensor to apply the sigmoid fat relu to
:param threshold: the threshold at which all values will be zero or approximated
in the sigmoid region
:param compression: the compression or slope to use in the sigmoid region
:return: f(x, t, c) = x / (1 + e^(c*(t-x)))
"""
out = tens / (1.0 + torch.exp(compression * (threshold - tens)))
out = TF.relu(
out, inplace=True
) # make sure the negative region is always zero activation with a regular ReLU
return out | Create a sigmoid approximated FATReLU function to a tensor (forced activation threshold): f(x, t, c) = x / e^(c*(t-x)) Note: there is no option for inplace with this function. :param tens: the tensor to apply the sigmoid fat relu to :param threshold: the threshold at which all values will be zero or approximated in the sigmoid region :param compression: the compression or slope to use in the sigmoid region :return: f(x, t, c) = x / e^(c*(t-x)) |
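A scalar sketch of the sigmoid-gated FATReLU as implemented: x divided by the logistic term 1 + e^(c*(t-x)), followed by a plain ReLU to pin the negative region at zero.

```python
import math

def fat_sig_relu_scalar(x, t, c):
    # Logistic gate: exactly x/2 at x = t, approaching x far above t.
    out = x / (1.0 + math.exp(c * (t - x)))
    # Regular ReLU keeps the negative region at exactly zero activation.
    return max(out, 0.0)

print(fat_sig_relu_scalar(2.0, 2.0, 5.0))             # -> 1.0 (half at threshold)
print(round(fat_sig_relu_scalar(10.0, 2.0, 5.0), 6))  # -> 10.0 (gate saturated open)
print(fat_sig_relu_scalar(-1.0, 2.0, 5.0))            # -> 0.0
```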
21,162 | from typing import Dict, List, Union
import torch
import torch.nn.functional as TF
from torch import Tensor
from torch.nn import Module, Parameter, ReLU
The provided code snippet includes necessary dependencies for implementing the `fat_exp_relu` function. Write a Python function `def fat_exp_relu(tens: Tensor, threshold: Tensor, compression: Tensor) -> Tensor` to solve the following problem:
Create a piecewise separable exp approximated FATReLU function to a tensor (forced activation threshold): f(x, t, c) = 0 if x <= 0; = x if x >= t; = x * e^(c(x-t)) if x > 0 and x < t Note: there is no option for inplace with this function :param tens: the tensor to apply the exponential fat relu to :param threshold: the threshold at which all values will be zero or approximated in the exponential region :param compression: the compression or slope to use in the exponential region :return: f(x, t, c) = 0 if x <= 0; = x if x >= t; = x * e^(c(x-t)) if x > 0 and x < t
Here is the function:
def fat_exp_relu(tens: Tensor, threshold: Tensor, compression: Tensor) -> Tensor:
"""
Create a piecewise separable exp approximated FATReLU function to a tensor
(forced activation threshold):
f(x, t, c) = 0 if x <= 0; = x if x >= t;
= x * e^(c(x-t)) if x > 0 and x < t
Note: there is no option for inplace with this function
:param tens: the tensor to apply the exponential fat relu to
:param threshold: the threshold at which all values will be zero or approximated
in the exponential region
:param compression: the compression or slope to use in the exponential region
:return: f(x, t, c) = 0 if x <= 0; = x if x >= t;
= x * e^(c(x-t)) if x > 0 and x < t
"""
# remove the negative values
out = TF.relu(tens)
# calculate the compression region values
comp_mask = ((out < threshold) * (out > 0.0)).float()
comp_tens = out * torch.exp(compression * (out - threshold))
# reassign the compression values in the output
out = (-1.0 * comp_mask + 1.0) * out + comp_tens * comp_mask
return out | Create a piecewise separable exp approximated FATReLU function to a tensor (forced activation threshold): f(x, t, c) = 0 if x <= 0; = x if x >= t; = x * e^(c(x-t)) if x > 0 and x < t Note: there is no option for inplace with this function :param tens: the tensor to apply the exponential fat relu to :param threshold: the threshold at which all values will be zero or approximated in the exponential region :param compression: the compression or slope to use in the exponential region :return: f(x, t, c) = 0 if x <= 0; = x if x >= t; = x * e^(c(x-t)) if x > 0 and x < t |
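The same piecewise structure, sketched on scalars; inside (0, t) the output is damped by the factor e^(c*(x-t)) < 1, meeting the identity branch continuously at x = t since exp(0) = 1.

```python
import math

def fat_exp_relu_scalar(x, t, c):
    if x <= 0.0:
        return 0.0                    # negative region: plain ReLU
    if x >= t:
        return x                      # identity region above the threshold
    return x * math.exp(c * (x - t))  # exponential damping in (0, t)

print(fat_exp_relu_scalar(-0.5, 1.0, 4.0))  # -> 0.0
print(fat_exp_relu_scalar(3.0, 1.0, 4.0))   # -> 3.0
print(fat_exp_relu_scalar(1.0, 1.0, 4.0))   # -> 1.0 (continuous at x = t)
```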
21,163 | from typing import Dict, List, Union
import torch
import torch.nn.functional as TF
from torch import Tensor
from torch.nn import Module, Parameter, ReLU
class FATReLU(Module):
"""
Applies a FAT ReLU (forced activation threshold) over the input.
Instead of setting all negative values to 0 like with ReLU,
this sets all values < threshold equal to 0
:param threshold: the threshold that all values < threshold will be set to 0.
if type float then f(x) = x if x >= threshold else 0.
if type list then f(x[:, chan]) = x[:, chan]
if x[:, chan] >= threshold[chan] else 0.
if type list and empty, applies activation as the list option
but dynamically initializes to the num chan
:param inplace: perform the operation inplace or create a new tensor
"""
def __init__(
self, threshold: Union[float, List[float]] = 0.0, inplace: bool = False
):
super(FATReLU, self).__init__()
self._dynamic = False
self._channel_wise = False
self._num_channels = None
if isinstance(threshold, List):
self._channel_wise = True
self._num_channels = len(threshold)
if len(threshold) == 0:
# can be dynamic only at init (before first data)
# NB: _num_channels set dynamically - at first pass
self._dynamic = True
self.threshold = Parameter(torch.tensor(threshold))
self.threshold.requires_grad = False
self.inplace = inplace
def dynamic(self) -> bool:
"""
:return: True if the layer is in dynamic mode
(gathering the number of channels), False otherwise
"""
return self._dynamic
def channel_wise(self) -> bool:
"""
:return: True if the FATReLU is applied per channel, False otherwise
"""
return self._channel_wise
def num_channels(self):
"""
:return: The number of channels the FATReLU is acting on
"""
if self._dynamic:
raise Exception(
"number of channels not yet allocated. "
"function should be called only after allocation"
)
return self._num_channels
def set_threshold(self, threshold: Union[float, List[float]]):
"""
:param threshold: the threshold value to set for the activation
"""
if self._dynamic:
raise RuntimeError(
"cannot set threshold, threshold was set up as dynamic "
"(constructor given empty list)"
)
if self._channel_wise and isinstance(threshold, float):
raise ValueError(
"cannot set threshold to float value, "
"constructor setup with list of channels len {}".format(
self._num_channels
)
)
if self._channel_wise and self._num_channels != len(threshold):
raise ValueError(
"cannot set threshold to list of "
"len({}), constructor setup with list of len({})".format(
len(threshold), self._num_channels
)
)
current_tens = self.threshold.data # type: Tensor
new_tens = current_tens.new_tensor(threshold)
current_tens.copy_(new_tens)
def get_threshold(self) -> Union[float, List[float]]:
"""
:return: the current threshold being applied for the activation
"""
return (
self.threshold.data.cpu().item()
if not self._channel_wise
else self.threshold.data.cpu().tolist()
)
def forward(self, inp: Tensor):
if not self._channel_wise:
threshold = self.threshold.data.item()
return fat_relu(inp, threshold, self.inplace)
if self._dynamic:
thresh = [0.0] * inp.shape[1]
self.threshold.data = torch.tensor(thresh)
self._dynamic = False
self._num_channels = len(thresh)
assert (
inp.shape[1] == self._num_channels
) # runtime test that #channels equals expected #channels
return _apply_permuted_channels(
fat_relu, inp, threshold=self.threshold, inplace=self.inplace
)
def extra_repr(self):
inplace_str = ", inplace" if self.inplace else ""
return "threshold={}{}".format(self.threshold, inplace_str)
def load_state_dict(self, state_dict, strict=True):
if self._dynamic:
raise Exception(
"attempt to load state_dict, but FATReLU is not initialized yet. "
"need to pass data through once to initialize channels, since it was "
"constructed with dynamic allocation of the number of channels"
)
super().load_state_dict(state_dict, strict)
def set_relu_to_fat(module: Module, layer_name: str, **kwargs) -> FATReLU:
"""
Replace a given layer in a module to a FATReLU instance.
:param module: the module to replace the given layer with a FATReLU implementation
:param layer_name: the name of the layer to replace with a FATReLU
:param kwargs: the kwargs to pass to the FATReLU constructor
:return: the created FATReLU instance
"""
layer = module
layers = layer_name.split(".")
for lay in layers[:-1]:
layer = layer.__getattr__(lay)
fat = layer.__getattr__(layers[-1])
if not isinstance(fat, FATReLU):
fat = FATReLU(**kwargs)
layer.__setattr__(layers[-1], fat)
return fat
The provided code snippet includes necessary dependencies for implementing the `convert_relus_to_fat` function. Write a Python function `def convert_relus_to_fat(module: Module, **kwargs) -> Dict[str, FATReLU]` to solve the following problem:
Replace all of the ReLUs in a module with FATReLU instances. Note: only works if the ReLUs are layers in the module, will not work with torch.functional ones. :param module: the module to replace all ReLUs with FATReLU :param kwargs: the kwargs to pass to the FATReLU constructor :return: a dictionary containing a mapping from the names of the replaced layers to the replaced FATReLU
Here is the function:
def convert_relus_to_fat(module: Module, **kwargs) -> Dict[str, FATReLU]:
"""
Replace all of the ReLUs in a module with FATReLU instances.
Note: only works if the ReLUs are layers in the module,
will not work with torch.functional ones.
:param module: the module to replace all ReLUs with FATReLU
:param kwargs: the kwargs to pass to the FATReLU constructor
:return: a dictionary containing a mapping from the names of the replaced layers
to the replaced FATReLU
"""
relu_keys = []
for name, mod in module.named_modules():
if isinstance(mod, ReLU):
relu_keys.append(name)
added = {}
for key in relu_keys:
added[key] = set_relu_to_fat(module, key, **kwargs)
return added | Replace all of the ReLUs in a module with FATReLU instances. Note: only works if the ReLUs are layers in the module, will not work with torch.functional ones. :param module: the module to replace all ReLUs with FATReLU :param kwargs: the kwargs to pass to the FATReLU constructor :return: a dictionary containing a mapping from the names of the replaced layers to the replaced FATReLU |
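The two-phase pattern in `convert_relus_to_fat` -- collect the names of matching layers first, then mutate -- avoids changing the module tree while iterating over it. A dict-based sketch with strings standing in for layers:

```python
def convert_leaves(tree, match, make_new):
    # Phase 1: collect the dotted names of every matching leaf.
    names = []
    def walk(node, prefix):
        for key, child in node.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(child, dict):
                walk(child, path)
            elif child == match:
                names.append(path)
    walk(tree, "")
    # Phase 2: replace each collected leaf, recording name -> replacement.
    added = {}
    for name in names:
        node = tree
        parts = name.split(".")
        for part in parts[:-1]:
            node = node[part]
        added[name] = node[parts[-1]] = make_new()
    return added

model = {"act": "relu", "block": {"act": "relu", "fc": "linear"}}
print(convert_leaves(model, "relu", lambda: "fatrelu"))
# -> {'act': 'fatrelu', 'block.act': 'fatrelu'}
```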
21,164 | import logging
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from sparseml.pytorch.recipe_template.description import DESCRIPTION
from sparseml.pytorch.sparsification import (
ACDCPruningModifier,
ConstantPruningModifier,
DistillationModifier,
EpochRangeModifier,
GlobalMagnitudePruningModifier,
LearningRateFunctionModifier,
MagnitudePruningModifier,
MFACPruningModifier,
MovementPruningModifier,
OBSPruningModifier,
)
from sparseml.pytorch.sparsification.quantization.legacy_modifier_quantization import (
QuantizationModifier,
)
from sparseml.pytorch.utils import get_prunable_layers
from sparseml.sparsification import ModifierYAMLBuilder, RecipeYAMLBuilder
def _validate_quantization(quantization: Union[bool, str]) -> bool:
if isinstance(quantization, str):
quantization = quantization.lower() == "true"
if not isinstance(quantization, bool):
raise ValueError("`quantization` must be a bool")
return quantization
def _validate_pruning(
pruning: Union[str, bool, None] = None, quantization: bool = False
) -> str:
# normalize pruning algo name
if isinstance(pruning, bool):
pruning = str(pruning)
pruning = (pruning or "").lower().replace("/", "")
if pruning == "true":
pruning = "magnitude"
elif pruning == "false":
pruning = "constant" if quantization else ""
return pruning
def _validate_mask_type(
mask_type: str = "unstructured", target: Optional[str] = None
) -> str:
target_to_mask_type = defaultdict(
lambda: "unstructured",
{"vnni": "block4", "tensorrt": "2:4"},
)
if target is not None and mask_type != target_to_mask_type[target]:
raise ValueError(
f"The specified mask type {mask_type} and target {target} are "
f"incompatible, try overriding mask_type to {target_to_mask_type[target]}"
)
return mask_type
def _build_recipe_template(
pruning: str,
quantization: bool,
lr_func: str,
mask_type: str,
global_sparsity: bool = True,
target: Optional[str] = None,
model: Union[Module, None] = None,
num_epochs: float = 20.0,
init_lr: float = 0.001,
final_lr: float = 0.0,
sparsity: float = 0.8,
distillation: bool = False,
hardness: float = 0.5,
temperature: float = 2.0,
) -> str:
pruning_was_applied: bool = pruning not in ["constant", ""]
recipe_variables: Dict[str, Any] = _get_base_recipe_variables(
pruning=pruning_was_applied,
quantization=quantization,
lr_func=lr_func,
mask_type=mask_type,
global_sparsity=global_sparsity,
num_epochs=num_epochs,
init_lr=init_lr,
final_lr=final_lr,
sparsity=sparsity,
distillation=distillation,
hardness=hardness,
temperature=temperature,
)
builder_groups = {"training_modifiers": _get_training_builders()}
if pruning:
pruning_builders, pruning_variables = _get_pruning_builders_and_variables(
pruning_algo=pruning,
model=model,
global_sparsity=global_sparsity,
)
recipe_variables.update(pruning_variables)
builder_groups["pruning_modifiers"] = pruning_builders
if quantization:
quant_builders, quant_variables = _get_quantization_builders_and_variables(
post_pruning=pruning_was_applied, target=target
)
recipe_variables.update(quant_variables)
builder_groups["quantization_modifiers"] = quant_builders
if distillation:
builder_groups["distillation_modifiers"] = _get_distillation_builders()
recipe_builder = RecipeYAMLBuilder(
variables=recipe_variables, modifier_groups=builder_groups
)
recipe = recipe_builder.build_yaml_str()
return recipe
def _add_description(recipe: str, description: str = DESCRIPTION) -> str:
return f"---\n{recipe}\n---{description}"
def _write_recipe_to_file(file_name: str, recipe: str):
Path(file_name).parent.mkdir(parents=True, exist_ok=True)
with open(file_name, "w") as file:
file.write(recipe)
_LOGGER.info(f"Recipe written to file {file_name}")
The provided code snippet includes necessary dependencies for implementing the `recipe_template` function. Write a Python function `def recipe_template( pruning: Union[str, bool, None] = None, quantization: Union[bool, str] = False, lr: str = "linear", mask_type: str = "unstructured", global_sparsity: bool = True, target: Optional[str] = None, model: Union[str, Module, None] = None, file_name: Optional[str] = None, num_epochs: float = 20.0, init_lr: float = 0.001, final_lr: float = 0.0, sparsity: float = 0.8, distillation: bool = False, hardness: float = 0.5, temperature: float = 2.0, ) -> str` to solve the following problem:
Returns a valid yaml or md recipe based on specified arguments :param pruning: optional pruning algorithm to use in the recipe, can be any of the following,`true` (represents Magnitude/Global-Magnitude pruning according to global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or `constant`. Can also be a bool. Defaults to None :param quantization: True if quantization needs to be applied else False. Defaults to False. Can also be string representation of boolean values i.e `true` or `false` :param lr: the learning rate schedule function. Defaults to `linear` :param mask_type: the mask_type to use for pruning. Defaults to `unstructured` :param global_sparsity: if set to True then apply sparsity globally, defaults to False :param target: the target hardware, can be set to `vnni` or `tensorrt`. Defaults to None :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param file_name: an optional filename to save this recipe to. If specified the extension is used to determine if file should be written in markdown or yaml syntax. If not specified recipe is not written to a file :param num_epochs: total number of epochs to target in recipe, default 20 :param init_lr: target initial learning rate, default 0.001 :param final_lr: target final learning rate, default 0.0 :param sparsity: target model sparsity, default 0.8 :param distillation: add distillation support to the recipe. default is `False` :param hardness: [only used if distillation is set] how much to weight the distillation loss vs the base loss (e.g. hardness of 0.6 will return 0.6 * distill_loss + 0.4 * base_loss). default is 0.5 :param temperature: [only used if distillation is set] temperature applied to teacher and student softmax for distillation. default is 2.0 :return: A valid string recipe based on the arguments
Here is the function:
def recipe_template(
pruning: Union[str, bool, None] = None,
quantization: Union[bool, str] = False,
lr: str = "linear",
mask_type: str = "unstructured",
global_sparsity: bool = True,
target: Optional[str] = None,
model: Union[str, Module, None] = None,
file_name: Optional[str] = None,
num_epochs: float = 20.0,
init_lr: float = 0.001,
final_lr: float = 0.0,
sparsity: float = 0.8,
distillation: bool = False,
hardness: float = 0.5,
temperature: float = 2.0,
) -> str:
"""
Returns a valid yaml or md recipe based on specified arguments
:param pruning: optional pruning algorithm to use in the recipe, can be any of the
following,`true` (represents Magnitude/Global-Magnitude pruning according to
global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or
`constant`. Can also be a bool. Defaults to None
:param quantization: True if quantization needs to be applied else False. Defaults
to False. Can also be string representation of boolean values i.e `true` or
`false`
:param lr: the learning rate schedule function. Defaults to `linear`
:param mask_type: the mask_type to use for pruning. Defaults to `unstructured`
:param global_sparsity: if set to True then apply sparsity globally, defaults to
True
:param target: the target hardware, can be set to `vnni` or `tensorrt`. Defaults to
None
:param model: an instantiated PyTorch Module, or the local path to a torch.jit
loadable *.pt file, if supplied then the recipe is built according to this
architecture
:param file_name: an optional filename to save this recipe to. If specified the
extension is used to determine if file should be written in markdown
or yaml syntax. If not specified recipe is not written to a file
:param num_epochs: total number of epochs to target in recipe, default 20
:param init_lr: target initial learning rate, default 0.001
:param final_lr: target final learning rate, default 0.0
:param sparsity: target model sparsity, default 0.8
:param distillation: add distillation support to the recipe. default is
`False`
:param hardness: [only used if distillation is set] how much to weight the
distillation loss vs the base loss (e.g. hardness of 0.6 will return
0.6 * distill_loss + 0.4 * base_loss). default is 0.5
:param temperature: [only used if distillation is set] temperature applied
to teacher and student softmax for distillation. default is 2.0
:return: A valid string recipe based on the arguments
"""
if isinstance(model, str):
# load model file to in memory Module using torch.jit
model = torch.jit.load(model)
quantization: bool = _validate_quantization(quantization=quantization)
mask_type: str = _validate_mask_type(mask_type=mask_type, target=target)
pruning: str = _validate_pruning(pruning=pruning, quantization=quantization)
recipe: str = _build_recipe_template(
pruning=pruning,
quantization=quantization,
lr_func=lr,
mask_type=mask_type,
global_sparsity=global_sparsity,
target=target,
model=model,
num_epochs=num_epochs,
init_lr=init_lr,
final_lr=final_lr,
sparsity=sparsity,
distillation=distillation,
hardness=hardness,
temperature=temperature,
)
if file_name is not None:
if Path(file_name).suffix == ".md":
recipe = _add_description(recipe=recipe)
_write_recipe_to_file(file_name=file_name, recipe=recipe)
return recipe | Returns a valid yaml or md recipe based on specified arguments :param pruning: optional pruning algorithm to use in the recipe, can be any of the following,`true` (represents Magnitude/Global-Magnitude pruning according to global_sparsity), `false` (No pruning), `acdc`, `mfac`, `movement`, `obs` or `constant`. Can also be a bool. Defaults to None :param quantization: True if quantization needs to be applied else False. Defaults to False. Can also be string representation of boolean values i.e `true` or `false` :param lr: the learning rate schedule function. Defaults to `linear` :param mask_type: the mask_type to use for pruning. Defaults to `unstructured` :param global_sparsity: if set to True then apply sparsity globally, defaults to False :param target: the target hardware, can be set to `vnni` or `tensorrt`. Defaults to None :param model: an instantiated PyTorch Module, or the local path to a torch.jit loadable *.pt file, if supplied then the recipe is built according to this architecture :param file_name: an optional filename to save this recipe to. If specified the extension is used to determine if file should be written in markdown or yaml syntax. If not specified recipe is not written to a file :param num_epochs: total number of epochs to target in recipe, default 20 :param init_lr: target initial learning rate, default 0.001 :param final_lr: target final learning rate, default 0.0 :param sparsity: target model sparsity, default 0.8 :param distillation: add distillation support to the recipe. default is `False` :param hardness: [only used if distillation is set] how much to weight the distillation loss vs the base loss (e.g. hardness of 0.6 will return 0.6 * distill_loss + 0.4 * base_loss). default is 0.5 :param temperature: [only used if distillation is set] temperature applied to teacher and student softmax for distillation. default is 2.0 :return: A valid string recipe based on the arguments |
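The flag normalization performed by `_validate_pruning` can be exercised standalone; the function below restates the helper shown above in plain Python (it is not a new API).

```python
def normalize_pruning(pruning, quantization):
    # Booleans are stringified so True, "true", and "True" all take one path.
    if isinstance(pruning, bool):
        pruning = str(pruning)
    pruning = (pruning or "").lower().replace("/", "")
    if pruning == "true":
        return "magnitude"  # default pruning algorithm
    if pruning == "false":
        # No pruning requested; quantized recipes still pin sparsity constant.
        return "constant" if quantization else ""
    return pruning

print(normalize_pruning(True, False))   # -> magnitude
print(normalize_pruning(False, True))   # -> constant
print(normalize_pruning(None, False))   # -> (empty string)
print(normalize_pruning("OBS", False))  # -> obs
```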
21,165 | import json
import os
from typing import Any, Dict, Optional, Union
import torch
from torch.nn import Module
from torch.utils.data import DataLoader
from tqdm import tqdm
import click
from sparseml import get_main_logger
from sparseml.pytorch.image_classification.utils import cli_helpers, helpers
from sparseml.pytorch.opset import TORCH_DEFAULT_ONNX_OPSET
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import ModuleExporter, load_model
LOGGER = get_main_logger()
The provided code snippet includes necessary dependencies for implementing the `export` function. Write a Python function `def export( model: Module, val_loader: "DataLoader", save_dir: str, use_zipfile_serialization_if_available: bool, num_samples: int, onnx_opset: int = TORCH_DEFAULT_ONNX_OPSET, convert_qat: bool = True, image_size: int = 224, labels_to_class_mapping: Optional[Union[str, Dict[int, str]]] = None, ) -> None` to solve the following problem:
Utility method to export the model and data :param model: loaded model architecture to export :param val_loader: A DataLoader for validation data :param save_dir: Directory to store checkpoints at during exporting process :param use_zipfile_serialization_if_available: Whether to use zipfile serialization during export :param num_samples: Number of samples to export :param onnx_opset: ONNX opset version to use :param convert_qat: set True to convert QAT export to fully quantized representation :param image_size: size of image to export :param labels_to_class_mapping: information about the mapping from integer labels to string class names. Can be either a string (path to the .json serialized dictionary) or a dictionary.
Here is the function:
def export(
model: Module,
val_loader: "DataLoader",
save_dir: str,
use_zipfile_serialization_if_available: bool,
num_samples: int,
onnx_opset: int = TORCH_DEFAULT_ONNX_OPSET,
convert_qat: bool = True,
image_size: int = 224,
labels_to_class_mapping: Optional[Union[str, Dict[int, str]]] = None,
) -> None:
"""
Utility method to export the model and data
:param model: loaded model architecture to export
:param val_loader: A DataLoader for validation data
:param save_dir: Directory to store checkpoints at during exporting process
:param use_zipfile_serialization_if_available: Whether to use zipfile
serialization during export
:param num_samples: Number of samples to export
:param onnx_opset: ONNX opset version to use
:param convert_qat: set True to convert QAT export to fully quantized
representation
:param image_size: size of image to export
:param labels_to_class_mapping: information about the mapping
from integer labels to string class names. Can be either a string
(path to the .json serialized dictionary) or a dictionary.
"""
exporter = ModuleExporter(model, save_dir)
if not val_loader:
# create fake data for export
val_loader = [[torch.randn(1, 3, image_size, image_size)]]
onnx_exported = False
for batch, data in tqdm(
enumerate(val_loader),
desc="Exporting samples",
total=num_samples if num_samples > 1 else 1,
):
if not onnx_exported:
# export onnx file using first sample for graph freezing
LOGGER.info(f"exporting onnx in {save_dir}")
exporter.export_onnx(data[0], opset=onnx_opset, convert_qat=convert_qat)
onnx_exported = True
if num_samples > 0:
exporter.export_samples(
sample_batches=[data[0]], sample_labels=[data[1]], exp_counter=batch
)
exporter.create_deployment_folder(labels_to_class_mapping=labels_to_class_mapping) | Utility method to export the model and data :param model: loaded model architecture to export :param val_loader: A DataLoader for validation data :param save_dir: Directory to store checkpoints at during exporting process :param use_zipfile_serialization_if_available: Whether to use zipfile serialization during export :param num_samples: Number of samples to export :param onnx_opset: ONNX opset version to use :param convert_qat: set True to convert QAT export to fully quantized representation :param image_size: size of image to export :param labels_to_class_mapping: information about the mapping from integer labels to string class names. Can be either a string (path to the .json serialized dictionary) or a dictionary. |
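The loop body above follows a "do once, then per-batch" pattern: the ONNX graph is exported from the first batch only, while sample batches are exported on every iteration. A minimal sketch of that control flow, with `StubExporter` and the list-of-batches loader as hypothetical stand-ins for `ModuleExporter` and a real `DataLoader`:

```python
class StubExporter:
    """Hypothetical stand-in for ModuleExporter: records what was exported."""

    def __init__(self):
        self.onnx_exports = 0
        self.sample_batches = []

    def export_onnx(self, sample):
        self.onnx_exports += 1

    def export_samples(self, sample, counter):
        self.sample_batches.append(counter)


def export_loop(loader, exporter, num_samples):
    # mirrors export(): ONNX exactly once, sample batches on every iteration
    onnx_exported = False
    for batch, data in enumerate(loader):
        if not onnx_exported:
            exporter.export_onnx(data[0])
            onnx_exported = True
        if num_samples > 0:
            exporter.export_samples(data[0], batch)


loader = [([1, 2], "a"), ([3, 4], "b"), ([5, 6], "c")]
exporter = StubExporter()
export_loop(loader, exporter, num_samples=3)
```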
21,166 | import json
import os
from typing import Any, Dict, Optional, Union
import torch
from torch.nn import Module
from torch.utils.data import DataLoader
from tqdm import tqdm
import click
from sparseml import get_main_logger
from sparseml.pytorch.image_classification.utils import cli_helpers, helpers
from sparseml.pytorch.opset import TORCH_DEFAULT_ONNX_OPSET
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import ModuleExporter, load_model
def _validate_dataset_num_classes(
dataset: str,
dataset_path: str,
num_samples: int,
num_classes: int,
):
if dataset and not dataset_path:
raise ValueError(f"found dataset {dataset} but dataset_path not specified")
if dataset_path and not dataset:
raise ValueError(f"found dataset_path {dataset_path} but dataset not specified")
if num_classes is None and (not dataset or not dataset_path):
raise ValueError(
"If num_classes is not provided, both dataset and dataset_path must be "
"set to infer num_classes"
)
if num_samples > 0 and (not dataset or not dataset_path):
raise ValueError(
"If num_samples > 0, both dataset and dataset_path must be "
"set to load samples for export"
) | null |
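The pairing rules enforced above are easy to check in isolation; a pure-Python restatement (function and message names here are illustrative, not the module's API):

```python
def validate(dataset, dataset_path, num_samples, num_classes):
    # same pairing rules as _validate_dataset_num_classes
    if dataset and not dataset_path:
        raise ValueError("dataset given without dataset_path")
    if dataset_path and not dataset:
        raise ValueError("dataset_path given without dataset")
    if num_classes is None and (not dataset or not dataset_path):
        raise ValueError("cannot infer num_classes without a dataset")
    if num_samples > 0 and (not dataset or not dataset_path):
        raise ValueError("cannot load samples without a dataset")


def raises_value_error(fn):
    try:
        fn()
        return False
    except ValueError:
        return True


# both halves of the dataset pair must be set before samples can be exported
needs_path = raises_value_error(lambda: validate("cifar10", None, 0, 10))
complete = not raises_value_error(lambda: validate("cifar10", "/data", 5, None))
```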
21,167 | import json
import os
from typing import Any, Dict, Optional, Tuple, Union
import torch
import click
from sparseml import get_main_logger
from sparseml.pytorch.image_classification.utils import (
DEFAULT_OPTIMIZER,
OPTIMIZERS,
ImageClassificationTrainer,
cli_helpers,
helpers,
)
from sparseml.pytorch.utils import default_device, get_prunable_layers, tensor_sparsity
from sparseml.pytorch.utils.distributed import record
LOGGER = get_main_logger()
The provided code snippet includes necessary dependencies for implementing the `train` function. Write a Python function `def train( trainer: ImageClassificationTrainer, save_dir: str, max_eval_steps: int, eval_mode: bool, is_main_process: bool, save_best_after: int, save_epochs: Tuple[int, ...], rank: int, )` to solve the following problem:
Utility function to run the training loop :param trainer: The ImageClassificationTrainer object :param save_dir: The directory to save checkpoints to :param max_eval_steps: The number of steps to run for validation :param eval_mode: Whether to run in evaluation mode :param is_main_process: Whether this is the main process :param save_best_after: The number of epochs to wait before saving a new best model :param save_epochs: The epochs to save checkpoints for :param rank: The rank of the process
Here is the function:
def train(
trainer: ImageClassificationTrainer,
save_dir: str,
max_eval_steps: int,
eval_mode: bool,
is_main_process: bool,
save_best_after: int,
save_epochs: Tuple[int, ...],
rank: int,
):
"""
Utility function to run the training loop
:param trainer: The ImageClassificationTrainer object
:param save_dir: The directory to save checkpoints to
:param max_eval_steps: The number of steps to run for validation
:param eval_mode: Whether to run in evaluation mode
:param is_main_process: Whether this is the main process
:param save_best_after: The number of epochs to wait before saving
a new best model
:param save_epochs: The epochs to save checkpoints for
:param rank: The rank of the process
"""
val_res = None
if not trainer.one_shot:
# Baseline eval run
val_res = trainer.run_one_epoch(
mode="val",
max_steps=max_eval_steps,
baseline_run=True,
)
LOGGER.info(f"\nInitial validation results: {val_res}")
if eval_mode:
eval_results_path = os.path.join(save_dir, "eval.txt")
helpers.write_validation_results(eval_results_path, val_res)
if not (eval_mode or trainer.one_shot):
LOGGER.info(f"Starting training from epoch {trainer.epoch}")
val_metric = best_metric = None
while trainer.epoch < trainer.max_epochs:
train_res = trainer.run_one_epoch(
mode="train",
max_steps=trainer.max_train_steps,
)
LOGGER.info(f"\nEpoch {trainer.epoch} training results: {train_res}")
# testing steps
if is_main_process:
val_res = trainer.run_one_epoch(
mode="val",
max_steps=max_eval_steps,
)
val_metric = val_res.result_mean(trainer.target_metric).item()
should_save_epoch = trainer.epoch >= save_best_after and (
best_metric is None
or (
val_metric <= best_metric
if trainer.target_metric != "top1acc"
else val_metric >= best_metric
)
)
if should_save_epoch:
helpers.save_model_training(
model=trainer.model,
optim=trainer.optim,
manager=trainer.manager,
checkpoint_manager=trainer.checkpoint_manager,
save_name="checkpoint-best",
save_dir=save_dir,
epoch=trainer.epoch,
val_res=val_res,
arch_key=trainer.key,
)
# Best metric is based on validation results
best_metric = val_metric
# save checkpoints
should_save_epoch = (
is_main_process and save_epochs and trainer.epoch in save_epochs
)
if should_save_epoch:
save_name = (
f"checkpoint-{trainer.epoch:04d}-{val_metric:.04f}"
if val_metric
else f"checkpoint-{trainer.epoch:04d}"
)
helpers.save_model_training(
model=trainer.model,
optim=trainer.optim,
manager=trainer.manager,
checkpoint_manager=trainer.checkpoint_manager,
save_name=save_name,
save_dir=save_dir,
epoch=trainer.epoch,
val_res=val_res,
arch_key=trainer.key,
)
trainer.epoch += 1
# export the final model
LOGGER.info("completed...")
if is_main_process and not eval_mode:
# Convert QAT -> quantized ONNX graph for finalized model only
save_name = "model" if not trainer.one_shot else "model-one-shot"
helpers.save_model_training(
model=trainer.model,
optim=trainer.optim,
manager=trainer.manager,
checkpoint_manager=trainer.checkpoint_manager,
save_name=save_name,
save_dir=save_dir,
epoch=trainer.epoch - 1 if not trainer.one_shot else None,
val_res=val_res,
)
LOGGER.info("layer sparsities:")
for (name, layer) in get_prunable_layers(trainer.model):
LOGGER.info(f"{name}.weight: {tensor_sparsity(layer.weight).item():.4f}")
# close DDP
if rank != -1:
assert hasattr(torch, "distributed")
torch.distributed.destroy_process_group() | Utility function to run the training loop :param trainer: The ImageClassificationTrainer object :param save_dir: The directory to save checkpoints to :param max_eval_steps: The number of steps to run for validation :param eval_mode: Whether to run in evaluation mode :param is_main_process: Whether this is the main process :param save_best_after: The number of epochs to wait before saving a new best model :param save_epochs: The epochs to save checkpoints for :param rank: The rank of the process |
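The best-checkpoint condition inside the loop compares in opposite directions depending on the target metric; pulled out on its own (the `epoch >= save_best_after` gate is omitted here) it is:

```python
def is_new_best(val_metric, best_metric, target_metric):
    # mirrors the condition in train(): top-1 accuracy improves upward,
    # any other target metric (e.g. a loss) improves downward
    if best_metric is None:
        return True  # first measured epoch always becomes the best
    if target_metric == "top1acc":
        return val_metric >= best_metric
    return val_metric <= best_metric


keep = is_new_best(0.91, 0.90, "top1acc")
```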
21,168 | import json
import os
from typing import Any, Dict, List, Optional, Union
from torch.nn import Module
from torch.utils.data import DataLoader
import click
from sparseml import get_main_logger
from sparseml.pytorch.image_classification.utils import cli_helpers, helpers
from sparseml.pytorch.optim import (
pruning_loss_sens_magnitude,
pruning_loss_sens_one_shot,
)
from sparseml.pytorch.utils import (
CrossEntropyLossWrapper,
default_device,
model_to_device,
)
LOGGER = get_main_logger()
The provided code snippet includes necessary dependencies for implementing the `pruning_loss_sensitivity` function. Write a Python function `def pruning_loss_sensitivity( model: Module, train_loader: DataLoader, save_dir: str, loggers: List[Any], approximate: bool, device: Union[str, int], steps_per_measurement: int, ) -> None` to solve the following problem:
Utility function for pruning sensitivity analysis :param model: loaded model architecture to analyse :param train_loader: A DataLoader for training data :param save_dir: Directory to save results :param loggers: List of loggers to use during analysis :param approximate: Whether to use one shot analysis :param device: Device to use for analysis :param steps_per_measurement: Number of steps to run for each measurement
Here is the function:
def pruning_loss_sensitivity(
model: Module,
train_loader: DataLoader,
save_dir: str,
loggers: List[Any],
approximate: bool,
device: Union[str, int],
steps_per_measurement: int,
) -> None:
"""
Utility function for pruning sensitivity analysis
:param model: loaded model architecture to analyse
:param train_loader: A DataLoader for training data
:param save_dir: Directory to save results
:param loggers: List of loggers to use during analysis
:param approximate: Whether to use one shot analysis
:param device: Device to use for analysis
:param steps_per_measurement: Number of steps to run for each measurement
"""
# loss setup
if not approximate:
loss = CrossEntropyLossWrapper()
LOGGER.info(f"created loss: {loss}")
else:
loss = None
# device setup
if not approximate:
module, device, device_ids = model_to_device(
model=model,
device=device,
)
else:
device = None
# kernel sparsity analysis
if approximate:
analysis = pruning_loss_sens_magnitude(model)
else:
analysis = pruning_loss_sens_one_shot(
module=model,
data=train_loader,
loss=loss,
device=device,
steps_per_measurement=steps_per_measurement,
tester_loggers=loggers,
)
# saving and printing results
LOGGER.info("completed...")
LOGGER.info(f"Saving results in {save_dir}")
output_name = (
    "ks_approx_sensitivity" if approximate else "ks_one_shot_sensitivity"
)
analysis.save_json(os.path.join(save_dir, f"{output_name}.json"))
analysis.plot(
    os.path.join(save_dir, f"{output_name}.png"),
    plot_integral=True,
)
analysis.print_res() | Utility function for pruning sensitivity analysis :param model: loaded model architecture to analyse :param train_loader: A DataLoader for training data :param save_dir: Directory to save results :param loggers: List of loggers to use during analysis :param approximate: Whether to use one shot analysis :param device: Device to use for analysis :param steps_per_measurement: Number of steps to run for each measurement |
21,169 | import json
import os
from typing import Any, Dict, Optional, Union
from torch.nn import Module
from torch.optim import SGD
from torch.utils.data import DataLoader
import click
from sparseml import get_main_logger
from sparseml.pytorch.image_classification.utils import cli_helpers, helpers
from sparseml.pytorch.optim import default_exponential_check_lrs, lr_loss_sensitivity
from sparseml.pytorch.utils import (
CrossEntropyLossWrapper,
PythonLogger,
default_device,
model_to_device,
)
LOGGER = get_main_logger()
The provided code snippet includes necessary dependencies for implementing the `lr_sensitivity` function. Write a Python function `def lr_sensitivity( model: Module, train_loader: DataLoader, save_dir: str, init_lr: float, optim_args: Dict[str, Any], steps_per_measurement: int, device: Union[str, int], final_lr: float, ) -> None` to solve the following problem:
Utility function to run learning rate sensitivity analysis :param model: loaded model architecture to analyse :param train_loader: A DataLoader for training data :param save_dir: Directory to save results :param init_lr: Initial learning rate to use for analysis :param optim_args: Additional arguments to pass to the optimizer :param steps_per_measurement: Number of steps to run for each measurement :param device: Device to use for analysis :param final_lr: Final learning rate to use for analysis
Here is the function:
def lr_sensitivity(
model: Module,
train_loader: DataLoader,
save_dir: str,
init_lr: float,
optim_args: Dict[str, Any],
steps_per_measurement: int,
device: Union[str, int],
final_lr: float,
) -> None:
"""
Utility function to run learning rate sensitivity analysis
:param model: loaded model architecture to analyse
:param train_loader: A DataLoader for training data
:param save_dir: Directory to save results
:param init_lr: Initial learning rate to use for analysis
:param optim_args: Additional arguments to pass to the optimizer
:param steps_per_measurement: Number of steps to run for each measurement
:param device: Device to use for analysis
:param final_lr: Final learning rate to use for analysis
"""
# optimizer setup
optim = SGD(model.parameters(), lr=init_lr, **optim_args)
LOGGER.info(f"created optimizer: {optim}")
# loss setup
loss = CrossEntropyLossWrapper()
LOGGER.info(f"created loss: {loss}")
# device setup
model, device, device_ids = model_to_device(model, device)
# learning rate analysis
LOGGER.info(f"running analysis: {loss}")
analysis = lr_loss_sensitivity(
module=model,
data=train_loader,
loss=loss,
optim=optim,
device=device,
steps_per_measurement=steps_per_measurement,
check_lrs=default_exponential_check_lrs(init_lr, final_lr),
trainer_loggers=[PythonLogger()],
)
# saving and printing results
LOGGER.info("completed...")
LOGGER.info(f"Saving results in {save_dir}")
analysis.save_json(os.path.join(save_dir, "lr_sensitivity.json"))
analysis.plot(os.path.join(save_dir, "lr_sensitivity.png"))
analysis.print_res() | Utility function to run learning rate sensitivity analysis :param model: loaded model architecture to analyse :param train_loader: A DataLoader for training data :param save_dir: Directory to save results :param init_lr: Initial learning rate to use for analysis :param optim_args: Additional arguments to pass to the optimizer :param steps_per_measurement: Number of steps to run for each measurement :param device: Device to use for analysis :param final_lr: Final learning rate to use for analysis |
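`default_exponential_check_lrs` supplies the sweep of learning rates to measure. The shape of such a sweep — multiply by a fixed factor from `init_lr` up to `final_lr` — can be sketched as follows; the 1.1 multiplier and the endpoint handling are assumptions for illustration, not the library's exact code:

```python
def exponential_check_lrs(init_lr, final_lr, lr_mult=1.1):
    # build a strictly increasing list of LRs to probe, ending at final_lr
    lrs = []
    lr = init_lr
    while lr < final_lr:
        lrs.append(lr)
        lr *= lr_mult
    lrs.append(final_lr)
    return lrs


lrs = exponential_check_lrs(1e-5, 0.1)
```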
21,170 | import os
from pathlib import Path
from typing import Any, Callable, Dict, Optional, Tuple, Union
import torch
from pydantic import Field
from sparseml.export.export_data import create_data_samples as create_data_samples_
from sparseml.integration_helper_functions import (
IntegrationHelperFunctions,
Integrations,
)
from sparseml.pytorch.image_classification.utils.helpers import (
create_model as create_image_classification_model,
)
from sparseml.pytorch.image_classification.utils.helpers import (
get_dataset_and_dataloader,
)
The provided code snippet includes necessary dependencies for implementing the `create_model` function. Write a Python function `def create_model( source_path: Union[Path, str], **kwargs ) -> Tuple[torch.nn.Module, Dict]` to solve the following problem:
A contract to create a model and optional dictionary of loaded_model_kwargs (any relevant objects created along with the model) :param source_path: Path to the model files :return: A tuple of the - torch model - (optionally) loaded_model_kwargs (any relevant objects created along with the model)
Here is the function:
def create_model(
source_path: Union[Path, str], **kwargs
) -> Tuple[torch.nn.Module, Dict]:
"""
A contract to create a model and optional dictionary of
loaded_model_kwargs (any relevant objects created along with the model)
:param source_path: Path to the model files
:return: A tuple of the
- torch model
- (optionally) loaded_model_kwargs
(any relevant objects created along with the model)
"""
checkpoint_path = (
os.path.join(source_path, "model.pth")
if not os.path.isfile(source_path)
else source_path
)
return (
create_image_classification_model(checkpoint_path=checkpoint_path, **kwargs)[0],
{},
) | A contract to create a model and optional dictionary of loaded_model_kwargs (any relevant objects created along with the model) :param source_path: Path to the model files :return: A tuple of the - torch model - (optionally) loaded_model_kwargs (any relevant objects created along with the model) |
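The checkpoint-path resolution at the top of `create_model` — a file path is used directly, anything else is treated as a directory holding `model.pth` — reduces to:

```python
import os
import tempfile


def resolve_checkpoint(source_path):
    # a file path is used as-is; a directory implies <dir>/model.pth
    if os.path.isfile(source_path):
        return source_path
    return os.path.join(source_path, "model.pth")


with tempfile.TemporaryDirectory() as tmp:
    file_path = os.path.join(tmp, "weights.pth")
    open(file_path, "w").close()  # create an empty checkpoint-like file
    from_file = resolve_checkpoint(file_path)
    from_dir = resolve_checkpoint(tmp)
```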
21,171 | import os
from pathlib import Path
from typing import Any, Callable, Dict, Optional, Tuple, Union
import torch
from pydantic import Field
from sparseml.export.export_data import create_data_samples as create_data_samples_
from sparseml.integration_helper_functions import (
IntegrationHelperFunctions,
Integrations,
)
from sparseml.pytorch.image_classification.utils.helpers import (
create_model as create_image_classification_model,
)
from sparseml.pytorch.image_classification.utils.helpers import (
get_dataset_and_dataloader,
)
def get_dataset_and_dataloader(
dataset_name: str,
dataset_path: str,
batch_size: int,
image_size: int,
dataset_kwargs: Optional[Dict[str, Any]] = None,
training: bool = False,
rank: int = -1,
local_rank: int = -1,
loader_num_workers: int = 0,
loader_pin_memory: bool = False,
max_samples: Optional[int] = None,
ffcv: bool = False,
device: Optional[torch.device] = default_device(),
) -> Tuple[Dataset, Union[DataLoader, Any]]:
"""
:param dataset_name: The name of the dataset
:param dataset_path: The path to the dataset
:param batch_size: The batch size
:param image_size: A tuple of ints representing the image size
:param dataset_kwargs: A dict of kwargs for dataset creation
:param training: Whether this is training or validation
:param rank: The rank of the current process
:param local_rank: The local rank of the current process
:param loader_num_workers: The number of workers to use for the data loader
:param loader_pin_memory: Whether to pin memory for the data loader
:param max_samples: The maximum number of samples to use
:param ffcv: Whether to use ffcv dataset and data loaders
:param device: The device to use for the data loader. Required for ffcv
:return: Tuple with the following format (dataset, dataloader)
"""
download_context = (
torch_distributed_zero_first(local_rank) # only download once locally
if training
else nullcontext()
)
dataset_kwargs = dataset_kwargs or {}
with download_context:
try:
dataset = DatasetRegistry.create(
key=dataset_name,
root=dataset_path,
train=training,
rand_trans=training,
image_size=image_size,
**dataset_kwargs,
)
except Exception as registry_exception:
if dataset_name == "imagefolder" and (
dataset_path in DatasetRegistry.registered_datasets()
):
# user attempting to run imagefolder of pre-supported
# dataset, without a local copy,
# use the dataset_path as registry key instead to attempt
# auto download
dataset = DatasetRegistry.create(
key=dataset_path,
train=training,
rand_trans=training,
image_size=image_size,
**dataset_kwargs,
)
# still treated as image folder dataset, so num_classes attr
# should be set at the object level instead of registry
dataset.num_classes = DatasetRegistry.attributes(dataset_path).get(
"num_classes"
)
else:
raise registry_exception
sampler = (
torch.utils.data.distributed.DistributedSampler(dataset)
if rank != -1 and training # only run on DDP + training
else None
)
shuffle = sampler is None and training
if ffcv:
if not isinstance(dataset, FFCVCompatibleDataset):
raise ValueError(f"Dataset {dataset} must implement FFCVCompatibleDataset")
dataset_type = "train" if training else "val"
write_path = os.path.join(
dataset_path,
"ffcv_cache",
f"{dataset_type}.beton",
)
data_loader = dataset.get_ffcv_loader(
write_path=write_path,
batch_size=batch_size,
num_workers=loader_num_workers,
device=device,
)
else:
data_loader = DataLoader(
dataset=dataset,
batch_size=batch_size,
shuffle=shuffle,
num_workers=loader_num_workers,
pin_memory=loader_pin_memory,
sampler=sampler,
)
if max_samples is not None:
data_loader = early_stop_data_loader(
data_loader, max_samples if max_samples > 1 else 1
)
return dataset, data_loader
The provided code snippet includes necessary dependencies for implementing the `create_data_loader` function. Write a Python function `def create_data_loader( model: "torch.nn.Module", batch_size: Optional[int] = 1, device: Optional[str] = None, **kwargs, ) -> Tuple[Optional["torch.utils.data.DataLoader"], Dict[str, Any]]` to solve the following problem:
A contract to create a data loader and optional dictionary of loaded_data_loader_kwargs (any relevant objects created along with the data_loader) :param batch_size: The batch size to use for the dataloader creation :param device: The device to use for the model and dataloader instantiation :return: A tuple of the - a data_loader - (optionally) loaded_data_loader_kwargs (any relevant objects created along with the data_loader)
Here is the function:
def create_data_loader(
model: "torch.nn.Module",
batch_size: Optional[int] = 1,
device: Optional[str] = None,
**kwargs,
) -> Tuple[Optional["torch.utils.data.DataLoader"], Dict[str, Any]]:
"""
A contract to create a data loader and optional dictionary of
loaded_data_loader_kwargs (any relevant objects created along with the data_loader)
:param batch_size: The batch size to use for the dataloader creation
:param device: The device to use for the model and dataloader instantiation
:return: A tuple of the
- a data_loader
- (optionally) loaded_data_loader_kwargs
(any relevant objects created along with the model)
"""
dataset_path = kwargs.get("dataset_path", None)
dataset_name = kwargs.get("dataset_name", None)
image_size = kwargs.get("image_size", None)
if dataset_path is not None:
dataset, dataloader = get_dataset_and_dataloader(
dataset_name=dataset_name,
dataset_path=dataset_path,
batch_size=batch_size,
image_size=image_size,
training=False,
loader_num_workers=1,
loader_pin_memory=False,
device=device,
)
else:
dataloader = None
return dataloader, dict(image_size=image_size) | A contract to create a model and optional dictionary of loaded_data_loader_kwargs (any relevant objects created along with the data_loader) :param batch_size: The batch size to use for the dataloader creation :param device: The device to use for the model and dataloader instantiation :return: A tuple of the - a data_loader - (optionally) loaded_data_loader_kwargs (any relevant objects created along with the model) |
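When `max_samples` is set, `get_dataset_and_dataloader` wraps the loader with `early_stop_data_loader`; the effect is similar to capping iteration with `itertools.islice`. A simplified sketch — note this version counts batches, which is an assumption, not necessarily what the SparseML helper counts:

```python
from itertools import islice


def early_stop(loader, max_batches):
    # yield at most max_batches batches from any iterable loader
    return islice(loader, max_batches)


batches = [f"batch-{i}" for i in range(10)]
capped = list(early_stop(batches, 3))
```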
21,172 | import os
from pathlib import Path
from typing import Any, Callable, Dict, Optional, Tuple, Union
import torch
from pydantic import Field
from sparseml.export.export_data import create_data_samples as create_data_samples_
from sparseml.integration_helper_functions import (
IntegrationHelperFunctions,
Integrations,
)
from sparseml.pytorch.image_classification.utils.helpers import (
create_model as create_image_classification_model,
)
from sparseml.pytorch.image_classification.utils.helpers import (
get_dataset_and_dataloader,
)
The provided code snippet includes necessary dependencies for implementing the `create_dummy_input` function. Write a Python function `def create_dummy_input( data_loader: Optional[torch.utils.data.DataLoader] = None, image_size: Optional[int] = None, **kwargs, ) -> torch.Tensor` to solve the following problem:
A contract to create a dummy input for a model :param data_loader: The validation dataloader to get a batch from. If None, a fake batch will be created :param image_size: The image size to use for the dummy input :return: The dummy input as a torch tensor
Here is the function:
def create_dummy_input(
data_loader: Optional[torch.utils.data.DataLoader] = None,
image_size: Optional[int] = None,
**kwargs,
) -> torch.Tensor:
"""
A contract to create a dummy input for a model
:param data_loader: The validation dataloader to get a batch from.
If None, a fake batch will be created
:param image_size: The image size to use for the dummy input
:return: The dummy input as a torch tensor
"""
if not data_loader:
# create fake data for export
if image_size is None:
raise ValueError(
"In the absence of validation_dataloader, the "
"image_size must be provided to create a dummy input"
)
data_loader = [[torch.randn(1, 3, image_size, image_size)]]
return next(iter(data_loader))[0] | A contract to create a dummy input for a model :param data_loader: The validation dataloader to get a batch from. If None, a fake batch will be created :param image_size: The image size to use for the dummy input :return: The dummy input as a torch tensor |
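The fallback in `create_dummy_input` wraps a single fake batch in a nested list so that `next(iter(loader))[0]` works for both real and fake loaders; the pattern itself is torch-free (plain lists stand in for tensors here):

```python
def first_batch_or_fake(data_loader, fake_sample):
    # mirror create_dummy_input: fall back to a one-batch "loader"
    if not data_loader:
        data_loader = [[fake_sample]]
    return next(iter(data_loader))[0]


fake = [[0.0] * 4] * 4  # stands in for torch.randn(1, 3, H, W)
from_fake = first_batch_or_fake(None, fake)
from_real = first_batch_or_fake([("real-sample", "label")], fake)
```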
21,173 | import os
from pathlib import Path
from typing import Any, Callable, Dict, Optional, Tuple, Union
import torch
from pydantic import Field
from sparseml.export.export_data import create_data_samples as create_data_samples_
from sparseml.integration_helper_functions import (
IntegrationHelperFunctions,
Integrations,
)
from sparseml.pytorch.image_classification.utils.helpers import (
create_model as create_image_classification_model,
)
from sparseml.pytorch.image_classification.utils.helpers import (
get_dataset_and_dataloader,
)
def create_data_samples(
num_samples: int,
data_loader: Optional[torch.utils.data.DataLoader] = None,
model: Optional["torch.nn.Module"] = None,
**kwargs,
):
if data_loader is None:
raise ValueError(
"Attempting to create data samples without a validation dataloader."
)
return create_data_samples_(
data_loader=data_loader, model=model, num_samples=num_samples
) | null |
21,174 | import json
import os
from typing import Any, Dict, Tuple
import click
The provided code snippet includes necessary dependencies for implementing the `parse_json_callback` function. Write a Python function `def parse_json_callback(ctx, params, value: str) -> Dict` to solve the following problem:
Parse a json string into a dictionary :param ctx: The click context :param params: The click params :param value: The json string to parse :return: The parsed dictionary
Here is the function:
def parse_json_callback(ctx, params, value: str) -> Dict:
"""
Parse a json string into a dictionary
:param ctx: The click context
:param params: The click params
:param value: The json string to parse
:return: The parsed dictionary
"""
# JSON string -> dict Callback
if isinstance(value, str):
return json.loads(value)
return value | Parse a json string into a dictionary :param ctx: The click context :param params: The click params :param value: The json string to parse :return: The parsed dictionary |
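The callback's behavior — decode JSON strings, pass already-parsed values through untouched — for example:

```python
import json


def parse_json(value):
    # strings are decoded; anything else passes through unchanged
    if isinstance(value, str):
        return json.loads(value)
    return value


from_string = parse_json('{"num_workers": 4, "pin_memory": true}')
passthrough = parse_json({"num_workers": 4})
```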
21,175 | import json
import os
from typing import Any, Dict, Tuple
import click
The provided code snippet includes necessary dependencies for implementing the `create_dir_callback` function. Write a Python function `def create_dir_callback(ctx, params, value: str)` to solve the following problem:
Create and return directory if it doesn't exist. :param ctx: The click context :param params: The click params :param value: The value to create the directory from :returns: The directory path
Here is the function:
def create_dir_callback(ctx, params, value: str):
"""
Create and return directory if it doesn't exist.
:param ctx: The click context
:param params: The click params
:param value: The value to create the directory from
:returns: The directory path
"""
if value is None:
return
os.makedirs(value, exist_ok=True)
return value | Create and return directory if it doesn't exist. :param ctx: The click context :param params: The click params :param value: The value to create the directory from :returns: The directory path |
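`exist_ok=True` is what makes this callback idempotent: repeated invocations, or a pre-existing directory, do not raise. A self-contained check using a temporary directory:

```python
import os
import tempfile


def ensure_dir(path):
    # create the directory tree if needed; None passes through
    if path is None:
        return
    os.makedirs(path, exist_ok=True)
    return path


with tempfile.TemporaryDirectory() as tmp:
    nested = os.path.join(tmp, "a", "b", "c")
    created = ensure_dir(nested)
    again = ensure_dir(nested)  # no error the second time
    existed = os.path.isdir(nested)
```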
21,176 | import json
import os
from typing import Any, Dict, Tuple
import click
The provided code snippet includes necessary dependencies for implementing the `parse_into_tuple_of_ints` function. Write a Python function `def parse_into_tuple_of_ints(ctx, params, value) -> Tuple[int, ...]` to solve the following problem:
Parse a string into a tuple of ints. :param ctx: The click context :param params: The click params :param value: The value to parse :return: Tuple of ints
Here is the function:
def parse_into_tuple_of_ints(ctx, params, value) -> Tuple[int, ...]:
"""
Parse a string into a tuple of ints.
:param ctx: The click context
:param params: The click params
:param value: The value to parse
:return: Tuple of ints
"""
if not value:
return ()
from ast import literal_eval  # accepts only literals, unlike eval
return tuple(int(element) for element in literal_eval(value)) | Parse a string into a tuple of ints. :param ctx: The click context :param params: The click params :param value: The value to parse :return: Tuple of ints
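Evaluating a CLI string such as `"(10, 20, 30)"` into a tuple of ints is safest with `ast.literal_eval`, which accepts only Python literals rather than arbitrary expressions:

```python
from ast import literal_eval


def parse_ints(value):
    # literal_eval handles "(1, 2)" or "[1, 2]" but rejects arbitrary code
    if not value:
        return ()
    return tuple(int(element) for element in literal_eval(value))


epochs = parse_ints("(10, 20, 30)")
empty = parse_ints("")
```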
21,177 | import json
import os
from typing import Any, Dict, Tuple
import click
The provided code snippet includes necessary dependencies for implementing the `parameters_to_dict` function. Write a Python function `def parameters_to_dict(ctx) -> Dict[str, Any]` to solve the following problem:
Grab all the click parameters as a dict (where keys are parameter names and values are parameter values). :param ctx: The click context :return: Dictionary containing parameter names and values
Here is the function:
def parameters_to_dict(ctx) -> Dict[str, Any]:
"""
Grab all the click parameters as a dict
(where keys are parameter names and values are parameter values).
:param ctx: The click context
:return: Dictionary containing parameter names and values
"""
parameters = ctx.params
return parameters | Grab all the click parameters as a dict (where keys are parameter names and values are parameter values). :param ctx: The click context :return: Dictionary containing parameter names and values |
21,178 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
_LOGGER = logging.getLogger(__name__)
The provided code snippet includes necessary dependencies for implementing the `save_zoo_directory` function. Write a Python function `def save_zoo_directory( output_dir: str, training_outputs_dir: str, logs_path: Optional[str] = None )` to solve the following problem:
Takes the `training_outputs_dir` (the directory where the pipeline saves its training artifacts), and saves the training artifacts to `output_dir` as a sparsezoo Model class object. :param output_dir: The output path where the artifacts are saved (adhering to the structure of sparsezoo Model class object) :param training_outputs_dir: The path to the existing directory with the saved training artifacts :param logs_path: Optional directory where the training logs reside
Here is the function:
def save_zoo_directory(
output_dir: str, training_outputs_dir: str, logs_path: Optional[str] = None
):
"""
Takes the `training_outputs_dir`
(the directory where the pipeline saves its training artifacts),
and saves the training artifacts to `output_dir` as a sparsezoo Model class object.
:param output_dir: The output path where the artifacts are saved
(adhering to the structure of sparsezoo Model class object)
:param training_outputs_dir: The path to the existing directory
with the saved training artifacts
:param logs_path: Optional directory where the training logs reside
"""
for root_file in [
"model.onnx",
"sample-inputs",
"sample-outputs",
"sample-labels",
"deployment",
]:
root_file_path = os.path.join(training_outputs_dir, root_file)
if not os.path.exists(root_file_path):
raise ValueError(
f"File {root_file_path} missing. To create this file, "
"make sure that the `export` script (for exporting image "
"classification models) has been evoked."
)
setup_model(
output_dir=output_dir,
training=os.path.join(training_outputs_dir, "training"),
deployment=os.path.join(training_outputs_dir, "deployment"),
onnx_model=os.path.join(training_outputs_dir, "model.onnx"),
sample_inputs=os.path.join(training_outputs_dir, "sample-inputs"),
sample_outputs=os.path.join(training_outputs_dir, "sample-outputs"),
sample_labels=os.path.join(training_outputs_dir, "sample-labels"),
model_card=os.path.join(training_outputs_dir, "model.md"),
logs=logs_path,
sample_originals=None,
analysis=None,
benchmarks=None,
eval_results=None,
recipes=None,
)
_LOGGER.info(f"Created sparsezoo Model directory locally in {output_dir}") | Takes the `training_outputs_dir` (the directory where the pipeline saves its training artifacts), and saves the training artifacts to `output_dir` as a sparsezoo Model class object. :param output_dir: The output path where the artifacts are saved (adhering to the structure of sparsezoo Model class object) :param training_outputs_dir: The path to the existing directory with the saved training artifacts :param logs_path: Optional directory where the training logs reside |
21,179 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
class Tasks(Enum):
"""
A class representing supported image classification/detection tasks
"""
TRAIN = auto()
EXPORT = auto()
ANALYSIS = auto()
LR_ANALYSIS = auto()
PR_SENSITIVITY = auto()
The provided code snippet includes necessary dependencies for implementing the `get_save_dir_and_loggers` function. Write a Python function `def get_save_dir_and_loggers( task: Optional[Tasks] = None, is_main_process: bool = True, save_dir: Optional[str] = None, logs_dir: Optional[str] = None, arch_key: Optional[str] = "", model_tag: Optional[str] = None, dataset_name: Optional[str] = "", ) -> Tuple[Union[str, None], List]` to solve the following problem:
:param task: The current task being performed :param is_main_process: Whether this is the main process or not :param save_dir: The directory to save the model :param logs_dir: The directory to save logs :param arch_key: The architecture key of the image classification model :param model_tag: A str tag to optionally tag this model with :param dataset_name: The name of the dataset used to tag a model if model_tag not given :return: A tuple of the save directory and a list of loggers
Here is the function:
def get_save_dir_and_loggers(
task: Optional[Tasks] = None,
is_main_process: bool = True,
save_dir: Optional[str] = None,
logs_dir: Optional[str] = None,
arch_key: Optional[str] = "",
model_tag: Optional[str] = None,
dataset_name: Optional[str] = "",
) -> Tuple[Union[str, None], List]:
"""
:param task: The current task being performed
:param is_main_process: Whether this is the main process or not
:param save_dir: The directory to save the model
:param logs_dir: The directory to save logs
:param arch_key: The architecture key of the image classification model
:param model_tag: A str tag to optionally tag this model with
:param dataset_name: The name of the dataset used to tag a model
if model_tag not given
:return: A tuple of the save directory and a list of loggers
"""
arch_key = arch_key or ""
if is_main_process:
save_dir = os.path.abspath(os.path.expanduser(save_dir))
logs_dir = (
os.path.abspath(os.path.expanduser(logs_dir))
if task == Tasks.TRAIN
else None
)
arch_key_save_name = f"{arch_key.replace('/', '.')}"
if not model_tag:
model_tag = (
f"{arch_key_save_name}_{dataset_name}"
if dataset_name
else arch_key_save_name
)
model_id = model_tag
model_inc = 0
# set location to check for models with same name
model_main_dir = logs_dir or save_dir
while os.path.exists(os.path.join(model_main_dir, model_id)):
model_inc += 1
model_id = f"{model_tag}__{model_inc:02d}"
else:
model_id = model_tag
save_dir = os.path.join(save_dir, model_id)
create_dirs(save_dir)
# loggers setup
loggers = [PythonLogger()]
if task == Tasks.TRAIN:
logs_dir = os.path.join(logs_dir, model_id)
create_dirs(logs_dir)
try:
loggers.append(TensorBoardLogger(log_path=logs_dir))
except AttributeError:
warnings.warn(
"Failed to initialize TensorBoard logger, "
"it will not be used for logging",
)
print(f"Model id is set to {model_id}")
else:
# do not log for non main processes
save_dir = None
loggers = []
return save_dir, loggers | :param task: The current task being performed :param is_main_process: Whether this is the main process or not :param save_dir: The directory to save the model :param logs_dir: The directory to save logs :param arch_key: The architecture key of the image classification model :param model_tag: A str tag to optionally tag this model with :param dataset_name: The name of the dataset used to tag a model if model_tag not given :return: A tuple of the save directory and a list of loggers |
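The collision-avoidance loop that derives `model_id` from `model_tag` is self-contained enough to sketch against an in-memory set of existing ids; the helper below is hypothetical, not part of sparseml:

```python
from typing import Set


def unique_model_id(model_tag: str, existing_ids: Set[str]) -> str:
    """Mirror the while-loop above: append __NN suffixes until unused."""
    model_id = model_tag
    model_inc = 0
    while model_id in existing_ids:
        model_inc += 1
        model_id = f"{model_tag}__{model_inc:02d}"
    return model_id
```
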
21,180 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
_LOGGER = logging.getLogger(__name__)
AVAILABLE_DATASETS = ["cifar", "imagenet", "imagenette"]
The provided code snippet includes necessary dependencies for implementing the `label_to_class_mapping_from_dataset` function. Write a Python function `def label_to_class_mapping_from_dataset(dataset: str) -> Optional[Dict[int, str]]` to solve the following problem:
Retrieve the label-to-class-mapping for the chosen dataset If dataset is not recognized, returns None :param dataset: string identifier of the dataset (e.g. "imagenet") :return: mapping from labels to class strings if found. Otherwise None
Here is the function:
def label_to_class_mapping_from_dataset(dataset: str) -> Optional[Dict[int, str]]:
"""
Retrieve the label-to-class-mapping for the chosen dataset
If dataset is not recognized, returns None
:param dataset: string identifier of the dataset (e.g. "imagenet")
:return: mapping from labels to class strings if found. Otherwise None
"""
if dataset not in AVAILABLE_DATASETS:
_LOGGER.warning(f"Dataset: {dataset} not recognized.")
return None
else:
if dataset == "cifar":
return cifar.CIFAR_10_CLASSES
elif dataset == "imagenette":
return imagenette.IMAGENETTE_CLASSES
else:
return imagenet.IMAGENET_CLASSES | Retrieve the label-to-class-mapping for the chosen dataset If dataset is not recognized, returns None :param dataset: string identifier of the dataset (e.g. "imagenet") :return: mapping from labels to class strings if found. Otherwise None |
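The if/elif dispatch above is a table lookup in disguise. A dict-based sketch, with stand-in class mappings since the real ones live in `sparseml.utils.datasets` and are much larger:

```python
import logging
from typing import Dict, Optional

# Stand-ins for cifar.CIFAR_10_CLASSES, imagenette.IMAGENETTE_CLASSES, etc.
_CLASS_MAPPINGS: Dict[str, Dict[int, str]] = {
    "cifar": {0: "airplane", 1: "automobile"},
    "imagenette": {0: "tench", 1: "English springer"},
    "imagenet": {0: "tench", 1: "goldfish"},
}


def label_mapping(dataset: str) -> Optional[Dict[int, str]]:
    """dict.get collapses the membership test and branch into one lookup."""
    mapping = _CLASS_MAPPINGS.get(dataset)
    if mapping is None:
        logging.warning("Dataset: %s not recognized.", dataset)
    return mapping
```
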
21,181 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
def _download_model_from_zoo_using_recipe(
recipe_stub: str,
recipe_type: Optional[str],
) -> Optional[str]:
"""
Download a model from the zoo using a recipe stub and return the
path to the downloaded model.
:param recipe_stub: Path to a valid recipe stub
:param recipe_type: recipe type override in zoo stub
:return: Path to the downloaded model
"""
valid_recipe_stub = recipe_stub and recipe_stub.startswith("zoo:")
if not valid_recipe_stub:
raise ValueError(
"The recipe path must start with 'zoo:' to download from the zoo"
f" but got {recipe_stub} instead"
)
return download_framework_model_by_recipe_type(
Model(recipe_stub),
recipe_name=recipe_type,
)
The provided code snippet includes necessary dependencies for implementing the `create_model` function. Write a Python function `def create_model( checkpoint_path: str, num_classes: int, recipe_path: Optional[str] = None, arch_key: Optional[str] = None, pretrained: Union[bool, str] = False, pretrained_dataset: Optional[str] = None, one_shot: Optional[str] = None, local_rank: int = -1, model_kwargs: Optional[Dict[str, Any]] = None, **kwargs, ) -> Tuple[Module, str, str]` to solve the following problem:
:param checkpoint_path: Path to the checkpoint to load. `zoo` for downloading weights with respect to a SparseZoo recipe :param num_classes: Integer representing the number of output classes :param recipe_path: Path or SparseZoo stub to the recipe for downloading, respective model. Defaults to `None` :param arch_key: The architecture key of the image classification model. Defaults to `None` :param pretrained: Whether to use pretrained weights or not. Defaults to False :param pretrained_dataset: The dataset used for pretraining. Defaults to None :param one_shot: The recipe to be applied in a one-shot manner, before exporting. Defaults to None :param local_rank: The local rank of the process. Defaults to -1 :param model_kwargs: Additional keyword arguments to pass to the model :returns: A tuple containing the model, the model's arch_key, and the checkpoint path
Here is the function:
def create_model(
checkpoint_path: str,
num_classes: int,
recipe_path: Optional[str] = None,
arch_key: Optional[str] = None,
pretrained: Union[bool, str] = False,
pretrained_dataset: Optional[str] = None,
one_shot: Optional[str] = None,
local_rank: int = -1,
model_kwargs: Optional[Dict[str, Any]] = None,
**kwargs,
) -> Tuple[Module, str, str]:
"""
:param checkpoint_path: Path to the checkpoint to load. `zoo` for
downloading weights with respect to a SparseZoo recipe
:param num_classes: Integer representing the number of output classes
:param recipe_path: Path or SparseZoo stub to the recipe for downloading,
respective model. Defaults to `None`
:param arch_key: The architecture key of the image classification model.
Defaults to `None`
:param pretrained: Whether to use pretrained weights or not. Defaults to
False
:param pretrained_dataset: The dataset used for pretraining. Defaults to
None
:param one_shot: The recipe to be applied in a one-shot manner,
before exporting. Defaults to None
:param local_rank: The local rank of the process. Defaults to -1
:param model_kwargs: Additional keyword arguments to pass to the model
:returns: A tuple containing the model, the model's arch_key, and the
checkpoint path
"""
model_kwargs = model_kwargs or {}
with torch_distributed_zero_first(local_rank):
# only download once locally
if checkpoint_path and checkpoint_path.startswith("zoo"):
recipe_type = None
if recipe_path and "recipe_type=" in recipe_path:
# override recipe type from recipe path
recipe_type = recipe_path.split("recipe_type=")[1]
recipe_type = recipe_type.split("&")[0]
if checkpoint_path.lower() == "zoo":
checkpoint_path = recipe_path
checkpoint_path = _download_model_from_zoo_using_recipe(
recipe_stub=checkpoint_path, recipe_type=recipe_type
)
result = ModelRegistry.create(
key=arch_key,
pretrained=pretrained,
pretrained_path=checkpoint_path,
pretrained_dataset=pretrained_dataset,
num_classes=num_classes,
**model_kwargs,
)
if not isinstance(result, tuple):
model, arch_key = result, arch_key
else:
model, arch_key = result
# TODO: discuss how this is related to the above application of recipes
if recipe_path is not None:
# TODO: replace this with a new manager introduced by @satrat
ScheduledModifierManager.from_yaml(recipe_path).apply_structure(model)
if checkpoint_path:
load_model(checkpoint_path, model, strict=True)
if one_shot is not None:
ScheduledModifierManager.from_yaml(file_path=one_shot).apply(module=model)
return model, arch_key, checkpoint_path | :param checkpoint_path: Path to the checkpoint to load. `zoo` for downloading weights with respect to a SparseZoo recipe :param num_classes: Integer representing the number of output classes :param recipe_path: Path or SparseZoo stub to the recipe for downloading, respective model. Defaults to `None` :param arch_key: The architecture key of the image classification model. Defaults to `None` :param pretrained: Whether to use pretrained weights or not. Defaults to False :param pretrained_dataset: The dataset used for pretraining. Defaults to None :param one_shot: The recipe to be applied in a one-shot manner, before exporting. Defaults to None :param local_rank: The local rank of the process. Defaults to -1 :param model_kwargs: Additional keyword arguments to pass to the model :returns: A tuple containing the model, the model's arch_key, and the checkpoint path
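Within `create_model`, the `recipe_type` override is parsed out of the stub's query string by hand; that step can be sketched in isolation (the function name is hypothetical):

```python
from typing import Optional


def parse_recipe_type(recipe_path: Optional[str]) -> Optional[str]:
    """Extract the recipe_type=... value from a zoo recipe path, as in
    create_model: take the text after 'recipe_type=' up to the next '&'."""
    if recipe_path and "recipe_type=" in recipe_path:
        recipe_type = recipe_path.split("recipe_type=")[1]
        return recipe_type.split("&")[0]
    return None
```
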
21,182 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `infer_num_classes` function. Write a Python function `def infer_num_classes( train_dataset: Optional[Dataset], val_dataset: Optional[Dataset], dataset: str, model_kwargs: Dict[str, Any], ) -> int` to solve the following problem:
:param train_dataset: dataset representing training data :param val_dataset: dataset representing validation data :param dataset: name of the dataset :param model_kwargs: keyword arguments used for model creation :return: An integer representing the number of classes
Here is the function:
def infer_num_classes(
train_dataset: Optional[Dataset],
val_dataset: Optional[Dataset],
dataset: str,
model_kwargs: Dict[str, Any],
) -> int:
"""
:param train_dataset: dataset representing training data
:param val_dataset: dataset representing validation data
:param dataset: name of the dataset
:param model_kwargs: keyword arguments used for model creation
:return: An integer representing the number of classes
"""
if "num_classes" in model_kwargs:
# handle manually overridden num classes
num_classes = model_kwargs["num_classes"]
del model_kwargs["num_classes"]
elif dataset == "imagefolder":
dataset = val_dataset or train_dataset # get non None dataset
num_classes = dataset.num_classes
else:
dataset_attributes = DatasetRegistry.attributes(dataset)
num_classes = dataset_attributes["num_classes"]
return num_classes | :param train_dataset: dataset representing training data :param val_dataset: dataset representing validation data :param dataset: name of the dataset :param model_kwargs: keyword arguments used for model creation :return: An integer representing the number of classes |
21,183 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `get_arch_key` function. Write a Python function `def get_arch_key(arch_key: Optional[str], checkpoint_path: Optional[str]) -> str` to solve the following problem:
Utility method to resolve the arch_key: if it is not passed, it is read from the checkpoint; if it is passed, the passed value is returned :param arch_key: Optional[str] The arch_key to use for the model :param checkpoint_path: Optional[str] The path to the checkpoint :return: str The arch_key to use for the model
Here is the function:
def get_arch_key(arch_key: Optional[str], checkpoint_path: Optional[str]) -> str:
"""
Utility method to resolve the arch_key. If arch_key is not passed, it is
read from the checkpoint; if it is passed, the passed value is returned
as-is
:param arch_key: Optional[str] The arch_key to use for the model
:param checkpoint_path: Optional[str] The path to the checkpoint
:return: str The arch_key to use for the model if present in the
checkpoint
"""
if arch_key is None:
if checkpoint_path:
checkpoint = torch.load(checkpoint_path)
else:
raise ValueError(
"Must provide a checkpoint path if no arch_key is provided"
)
if "arch_key" in checkpoint:
arch_key = checkpoint["arch_key"]
else:
raise ValueError(
"Checkpoint does not contain "
"arch_key, provide one using "
"--arch_key"
)
return arch_key | Utility method to resolve the arch_key: if it is not passed, it is read from the checkpoint; if it is passed, the passed value is returned :param arch_key: Optional[str] The arch_key to use for the model :param checkpoint_path: Optional[str] The path to the checkpoint :return: str The arch_key to use for the model
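The fallback logic in `get_arch_key` (explicit argument, else checkpoint field, else error) works on any dict-like checkpoint; here it is sketched against a plain dict instead of a `torch.load` result, with a hypothetical function name:

```python
from typing import Dict, Optional


def resolve_arch_key(arch_key: Optional[str], checkpoint: Optional[Dict]) -> str:
    """Prefer the explicit arch_key; fall back to checkpoint['arch_key']."""
    if arch_key is not None:
        return arch_key
    if checkpoint is None:
        raise ValueError("Must provide a checkpoint if no arch_key is provided")
    if "arch_key" not in checkpoint:
        raise ValueError("Checkpoint does not contain arch_key")
    return checkpoint["arch_key"]
```
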
21,184 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `set_seeds` function. Write a Python function `def set_seeds(local_rank: int)` to solve the following problem:
Utility method to initialize process group and set seeds :param local_rank: The local rank of the process
Here is the function:
def set_seeds(local_rank: int):
"""
Utility method to initialize process group and set seeds
:param local_rank: The local rank of the process
"""
if local_rank != -1:
torch.distributed.init_process_group(
backend="nccl",
init_method="env://",
)
set_deterministic_seeds(0) | Utility method to initialize process group and set seeds :param local_rank: The local rank of the process |
21,185 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `get_loss_wrapper` function. Write a Python function `def get_loss_wrapper() -> CrossEntropyLossWrapper` to solve the following problem:
:return loss_wrapper: A Cross Entropy Loss Wrapper with extra metrics
Here is the function:
def get_loss_wrapper() -> CrossEntropyLossWrapper:
"""
:return loss_wrapper: A Cross Entropy Loss Wrapper with extra metrics
"""
extras = {"top1acc": TopKAccuracy(1), "top5acc": TopKAccuracy(5)}
return CrossEntropyLossWrapper(extras=extras) | :return loss_wrapper: A Cross Entropy Loss Wrapper with extra metrics |
21,186 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `ddp_aware_model_move` function. Write a Python function `def ddp_aware_model_move( device: Any, local_rank: int, model: Any, rank: int, ) -> Tuple[bool, Any, Any]` to solve the following problem:
Move model to device and wrap in DistributedDataParallel if necessary. :param device: device to move model to :param local_rank: local rank of current process :param model: model to move :param rank: rank of current process :return: A tuple of the following form (ddp_state, device, model)
Here is the function:
def ddp_aware_model_move(
device: Any,
local_rank: int,
model: Any,
rank: int,
) -> Tuple[bool, Any, Any]:
"""
Move model to device and wrap in DistributedDataParallel if necessary.
:param device: device to move model to
:param local_rank: local rank of current process
:param model: model to move
:param rank: rank of current process
:return: A tuple of the following form (ddp_state, device, model)
"""
if rank == -1:
ddp = False
else:
torch.cuda.set_device(local_rank)
device = local_rank
ddp = True
model, device, _ = model_to_device(
model=model,
device=device,
ddp=ddp,
)
return ddp, device, model | Move model to device and wrap in DistributedDataParallel if necessary. :param device: device to move model to :param local_rank: local rank of current process :param model: model to move :param rank: rank of current process :return: A tuple of the following form (ddp_state, device, model) |
21,187 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
The provided code snippet includes necessary dependencies for implementing the `extract_metadata` function. Write a Python function `def extract_metadata( metadata_args: List[str], training_args_dict: Dict[str, Any] ) -> Dict[str, Any]` to solve the following problem:
Extract metadata from the training arguments. :param metadata_args: List of keys we are attempting to retrieve from `training_arg` and pass as metadata :param training_args_dict: Dictionary extracted from the TrainingArguments of the pipeline :return: metadata
Here is the function:
def extract_metadata(
metadata_args: List[str], training_args_dict: Dict[str, Any]
) -> Dict[str, Any]:
"""
Extract metadata from the training arguments.
:param metadata_args: List of keys we are attempting to retrieve from
`training_arg` and pass as metadata
:param training_args_dict: Dictionary extracted from
the TrainingArguments of the pipeline
:return: metadata
"""
# TODO: Possibly share this functionality among
# IC and transformers (and future pipelines)
metadata = {}
for arg in metadata_args:
if arg not in training_args_dict.keys():
logging.warning(
f"Required metadata argument {arg} was not found "
f"in the training arguments. Setting {arg} to None."
)
metadata[arg] = None
else:
metadata[arg] = training_args_dict[arg]
return metadata | Extract metadata from the training arguments. :param metadata_args: List of keys we are attempting to retrieve from `training_arg` and pass as metadata :param training_args_dict: Dictionary extracted from the TrainingArguments of the pipeline :return: metadata |
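A standalone usage sketch of `extract_metadata` (the function is restated from the entry above so the snippet runs on its own): keys absent from the training arguments are logged and mapped to `None`.

```python
import logging

def extract_metadata(metadata_args, training_args_dict):
    # restated from above: absent keys warn and default to None
    metadata = {}
    for arg in metadata_args:
        if arg not in training_args_dict.keys():
            logging.warning(
                f"Required metadata argument {arg} was not found "
                f"in the training arguments. Setting {arg} to None."
            )
            metadata[arg] = None
        else:
            metadata[arg] = training_args_dict[arg]
    return metadata

print(extract_metadata(["epochs", "optimizer"], {"epochs": 10, "lr": 0.1}))
```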
21,188 | import logging
import os
import warnings
from contextlib import nullcontext
from enum import Enum, auto, unique
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import torch
from torch.nn import Module
from torch.optim import Optimizer
from torch.utils.data import DataLoader, Dataset
from sparseml.optim.manager import BaseManager
from sparseml.pytorch.datasets import DatasetRegistry
from sparseml.pytorch.datasets.image_classification.ffcv_dataset import (
FFCVCompatibleDataset,
)
from sparseml.pytorch.image_classification.utils.constants import AVAILABLE_DATASETS
from sparseml.pytorch.models import ModelRegistry
from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import (
DEFAULT_LOSS_KEY,
CrossEntropyLossWrapper,
ModuleExporter,
ModuleRunResults,
PythonLogger,
TensorBoardLogger,
TopKAccuracy,
default_device,
download_framework_model_by_recipe_type,
early_stop_data_loader,
load_model,
model_to_device,
set_deterministic_seeds,
torch_distributed_zero_first,
)
from sparseml.utils import create_dirs
from sparseml.utils.datasets import cifar, imagenet, imagenette
from sparsezoo import Model, setup_model
def _validate_dataset_num_classes(
dataset: str,
dataset_path: str,
num_classes: int,
):
if dataset and not dataset_path:
raise ValueError(f"found dataset {dataset} but dataset_path not specified")
if dataset_path and not dataset:
raise ValueError(f"found dataset_path {dataset_path} but dataset not specified")
if num_classes is None and (not dataset or not dataset_path):
raise ValueError(
"If num_classes is not provided, both dataset and dataset_path must be "
"set to infer num_classes"
) | null |
21,189 | import json
import logging
import os
from typing import Any, Dict, List, Optional
import torch
from torch.nn import Module
import sparseml.core.session as session_manager
from sparseml.core.framework import Framework
from sparseml.pytorch.sparsification.quantization.helpers import (
initialize_channel_wise_scale_zp,
)
from sparseml.pytorch.utils import ModuleSparsificationInfo
_LOGGER = logging.getLogger(__name__)
def reload_model_state(
model: Module, load_path: str, orig_state_dict: Dict[str, Any]
) -> bool:
"""
Reload the weights after model architecture changes due to recipe application.
:param model: PyTorch model to reload
:param load_path: path to model
:param orig_state_dict: state dict of model
:return: True if weights are successfully reloaded; False otherwise.
"""
invalid_load_path = not load_path or not os.path.isdir(load_path)
files = os.listdir(load_path) if not invalid_load_path else []
weight_files = [
os.path.join(load_path, os.path.basename(f))
for f in files
if f.startswith("pytorch_model") and f.endswith("bin")
]
if not weight_files:
_LOGGER.warning(
"Model state was not reloaded for SparseML: "
f"could not find model weights for {load_path}"
)
return False
# PerChannel quantization observers initialize variables
# to dummy shapes that do not match the ones saved in
# state_dict.
# Need to reshape these variables in order to load state_dict
# properly.
initialize_channel_wise_scale_zp(model)
current_state_dict = model.state_dict()
if set(orig_state_dict.keys()) == set(current_state_dict):
# no change in keys, ignore reload
return False
# change in keys due to architecture changes, reload statedict
loaded_state_dict = {}
for f in weight_files:
dd = torch.load(f, map_location="cpu")
loaded_state_dict.update(dd)
_, missing, unexpected, mismatched, _, _ = model._load_pretrained_model(
model=model,
state_dict=loaded_state_dict,
loaded_keys=list(loaded_state_dict.keys()),
resolved_archive_file=None,
pretrained_model_name_or_path=load_path,
_fast_init=False,
)
if missing:
_LOGGER.warning(
"Missing keys found when reloading model state for SparseML recipe:"
f"{missing}"
)
if unexpected:
_LOGGER.warning(
f"Unexpected keys found when reloading model state for SparseML recipe:"
f"{unexpected}"
)
if mismatched:
_LOGGER.warning(
f"Mismatched keys found when reloading model state for SparseML recipe:"
f"{mismatched}"
)
    total_loaded = len(current_state_dict) - len(missing)
_LOGGER.info(
f"Reloaded {total_loaded} model params for SparseML Recipe from {load_path}"
)
log_model_load(
model,
load_path,
model_type="student",
delayed_load=False,
)
return True
The provided code snippet includes necessary dependencies for implementing the `reload_model_from_checkpoint` function. Write a Python function `def reload_model_from_checkpoint(model: Module, checkpoint: Optional[str] = None)` to solve the following problem:
Reload the model state dict from a specified checkpoint if provided :model: loaded pytorch module :checkpoint: path to checkpoint file to load
Here is the function:
def reload_model_from_checkpoint(model: Module, checkpoint: Optional[str] = None):
"""
Reload the model state dict from a specified checkpoint if provided
:model: loaded pytorch module
:checkpoint: path to checkpoint file to load
"""
if checkpoint is None:
return
orig_state_dict = model.state_dict()
# reload the state dict for the model from the checkpoint
if reload_model_state(model, checkpoint, orig_state_dict):
_LOGGER.info(f"Reloaded model state from checkpoint {checkpoint}") | Reload the model state dict from a specified checkpoint if provided :model: loaded pytorch module :checkpoint: path to checkpoint file to load |
21,190 | import json
import logging
import os
from typing import Any, Dict, List, Optional
import torch
from torch.nn import Module
import sparseml.core.session as session_manager
from sparseml.core.framework import Framework
from sparseml.pytorch.sparsification.quantization.helpers import (
initialize_channel_wise_scale_zp,
)
from sparseml.pytorch.utils import ModuleSparsificationInfo
The provided code snippet includes necessary dependencies for implementing the `get_session_model` function. Write a Python function `def get_session_model() -> Module` to solve the following problem:
:return: pytorch module stored by the active SparseSession, or None if no session is active
Here is the function:
def get_session_model() -> Module:
"""
:return: pytorch module stored by the active SparseSession, or None if no session
is active
"""
session = session_manager.active_session()
if not session:
return None
active_model = session.state.model.model
return active_model | :return: pytorch module stored by the active SparseSession, or None if no session is active |
21,191 | import json
import logging
import os
from typing import Any, Dict, List, Optional
import torch
from torch.nn import Module
import sparseml.core.session as session_manager
from sparseml.core.framework import Framework
from sparseml.pytorch.sparsification.quantization.helpers import (
initialize_channel_wise_scale_zp,
)
from sparseml.pytorch.utils import ModuleSparsificationInfo
COMPLETED_STAGES_FILENAME = "completed_stages.json"
The provided code snippet includes necessary dependencies for implementing the `get_completed_stages` function. Write a Python function `def get_completed_stages(checkpoint_dir: Any) -> List[str]` to solve the following problem:
Given a checkpoint directory for a staged run, get the list of stages that have completed in a prior run if the checkpoint_dir is a string :param checkpoint_dir: path to staged checkpoint :return: list of completed stage names
Here is the function:
def get_completed_stages(checkpoint_dir: Any) -> List[str]:
"""
Given a checkpoint directory for a staged run, get the list of stages that
have completed in a prior run if the checkpoint_dir is a string
:param checkpoint_dir: path to staged checkpoint
:return: list of completed stage names
"""
if isinstance(checkpoint_dir, str):
stage_path = os.path.join(checkpoint_dir, COMPLETED_STAGES_FILENAME)
if os.path.exists(stage_path):
with open(stage_path) as stage_file:
stage_data = json.load(stage_file)
return stage_data["completed"]
return [] | Given a checkpoint directory for a staged run, get the list of stages that have completed in a prior run if the checkpoint_dir is a string :param checkpoint_dir: path to staged checkpoint :return: list of completed stage names |
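The fall-through behavior of `get_completed_stages` can be shown standalone (the function is restated with the filename constant inlined): only a string checkpoint path pointing at a directory with an existing stage file yields a non-empty list.

```python
import json
import os

def get_completed_stages(checkpoint_dir):
    # restated from above: non-string checkpoints and missing
    # stage files both fall through to an empty list
    if isinstance(checkpoint_dir, str):
        stage_path = os.path.join(checkpoint_dir, "completed_stages.json")
        if os.path.exists(stage_path):
            with open(stage_path) as stage_file:
                return json.load(stage_file)["completed"]
    return []

# non-string checkpoints (e.g. an in-memory state dict) fall through to []
print(get_completed_stages({"state": "..."}))
print(get_completed_stages("/no/such/dir"))
```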
21,192 | import json
import logging
import os
from typing import Any, Dict, List, Optional
import torch
from torch.nn import Module
import sparseml.core.session as session_manager
from sparseml.core.framework import Framework
from sparseml.pytorch.sparsification.quantization.helpers import (
initialize_channel_wise_scale_zp,
)
from sparseml.pytorch.utils import ModuleSparsificationInfo
COMPLETED_STAGES_FILENAME = "completed_stages.json"
The provided code snippet includes necessary dependencies for implementing the `save_completed_stages` function. Write a Python function `def save_completed_stages(checkpoint_dir: str, completed_stages: List[str])` to solve the following problem:
Save a list of completed stages to a checkpoint directory :param checkpoint_dir: model checkpoint directory to save stages to :param completed_stages: list of stage names that have been run
Here is the function:
def save_completed_stages(checkpoint_dir: str, completed_stages: List[str]):
"""
Save a list of completed stages to a checkpoint directory
:param checkpoint_dir: model checkpoint directory to save stages to
:param completed_stages: list of stage names that have been run
"""
stage_path = os.path.join(checkpoint_dir, COMPLETED_STAGES_FILENAME)
with open(stage_path, "w") as out_file:
json.dump({"completed": completed_stages}, out_file) | Save a list of completed stages to a checkpoint directory :param checkpoint_dir: model checkpoint directory to save stages to :param completed_stages: list of stage names that have been run |
21,193 | import collections
import logging
import os
import warnings
from copy import deepcopy
from typing import Any, Dict, Iterable, List
import onnx
import torch
from packaging import version
from sparseml.exporters import transforms as sparseml_transforms
from sparseml.exporters.base_exporter import BaseExporter
from sparseml.exporters.transforms.base_transform import BaseTransform
from sparseml.pytorch import _PARSED_TORCH_VERSION
from sparseml.pytorch.opset import TORCH_DEFAULT_ONNX_OPSET
from sparseml.pytorch.utils.helpers import (
adjust_quantization_for_onnx_export,
tensors_module_forward,
tensors_to_device,
)
from sparseml.pytorch.utils.model import is_parallel_model
from sparsezoo.utils import save_onnx
try:
    import torch

    _PARSED_TORCH_VERSION = version.parse(torch.__version__)

    if _PARSED_TORCH_VERSION.major >= 2:
        torch_compile_func = torch.compile

        def raise_torch_compile_warning(*args, **kwargs):
            warnings.warn("torch.compile is not supported by sparseml for torch 2.0.x")
            return torch_compile_func(*args, **kwargs)

        torch.compile = raise_torch_compile_warning

    _BYPASS = bool(int(os.environ.get("NM_BYPASS_TORCH_VERSION", "0")))
    if _PARSED_TORCH_VERSION.major == 1 and _PARSED_TORCH_VERSION.minor in [10, 11]:
        if not _BYPASS:
            raise RuntimeError(
                "sparseml does not support torch==1.10.* or 1.11.*. "
                f"Found torch version {torch.__version__}.\n\n"
                "To bypass this error, set environment variable "
                "`NM_BYPASS_TORCH_VERSION` to '1'.\n\n"
                "Bypassing may result in errors or "
                "incorrect behavior, so set at your own risk."
            )
        else:
            warnings.warn(
                "sparseml quantized onnx export does not work "
                "with torch==1.10.* or 1.11.*"
            )
except ImportError:
    pass
The provided code snippet includes necessary dependencies for implementing the `_get_output_names` function. Write a Python function `def _get_output_names(out: Any)` to solve the following problem:
Get name of output tensors :param out: outputs of the model :return: list of names
Here is the function:
def _get_output_names(out: Any):
"""
Get name of output tensors
:param out: outputs of the model
:return: list of names
"""
output_names = None
if isinstance(out, torch.Tensor):
output_names = ["output"]
elif hasattr(out, "keys") and callable(out.keys):
output_names = list(out.keys())
elif isinstance(out, Iterable):
output_names = ["output_{}".format(index) for index, _ in enumerate(iter(out))]
return output_names | Get name of output tensors :param out: outputs of the model :return: list of names |
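A torch-free sketch of the naming logic in `_get_output_names`: the `torch.Tensor` branch (which maps a single tensor to `["output"]`) is dropped here so the snippet has no torch dependency; dict-like outputs keep their keys, and other iterables get positional `output_{i}` names.

```python
from collections.abc import Iterable

def get_output_names(out):
    # dict-like outputs keep their keys
    if hasattr(out, "keys") and callable(out.keys):
        return list(out.keys())
    # other iterables are named positionally
    if isinstance(out, Iterable):
        return ["output_{}".format(i) for i, _ in enumerate(iter(out))]
    return None

print(get_output_names({"logits": 0, "hidden_states": 1}))
print(get_output_names([0.1, 0.2, 0.3]))
```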
21,194 | import collections
import logging
import os
import warnings
from copy import deepcopy
from typing import Any, Dict, Iterable, List
import onnx
import torch
from packaging import version
from sparseml.exporters import transforms as sparseml_transforms
from sparseml.exporters.base_exporter import BaseExporter
from sparseml.exporters.transforms.base_transform import BaseTransform
from sparseml.pytorch import _PARSED_TORCH_VERSION
from sparseml.pytorch.opset import TORCH_DEFAULT_ONNX_OPSET
from sparseml.pytorch.utils.helpers import (
adjust_quantization_for_onnx_export,
tensors_module_forward,
tensors_to_device,
)
from sparseml.pytorch.utils.model import is_parallel_model
from sparsezoo.utils import save_onnx
def _get_submodule(module: torch.nn.Module, path: List[str]) -> torch.nn.Module:
    # walk the attribute path down to the target submodule
    for name in path:
        module = getattr(module, name)
    return module
class _AddNoOpWrapper(torch.nn.Module):
    # wraps a module with an identity add in front of it so the batch norm
    # is not fused into the preceding layer during ONNX export
    def __init__(self, module: torch.nn.Module):
        super().__init__()
        self.module = module
    def forward(self, inp):
        inp = inp + 0  # no-op that blocks batch-norm fusing
        return self.module(inp)
try:
    import torch

    _PARSED_TORCH_VERSION = version.parse(torch.__version__)

    if _PARSED_TORCH_VERSION.major >= 2:
        torch_compile_func = torch.compile

        def raise_torch_compile_warning(*args, **kwargs):
            warnings.warn("torch.compile is not supported by sparseml for torch 2.0.x")
            return torch_compile_func(*args, **kwargs)

        torch.compile = raise_torch_compile_warning

    _BYPASS = bool(int(os.environ.get("NM_BYPASS_TORCH_VERSION", "0")))
    if _PARSED_TORCH_VERSION.major == 1 and _PARSED_TORCH_VERSION.minor in [10, 11]:
        if not _BYPASS:
            raise RuntimeError(
                "sparseml does not support torch==1.10.* or 1.11.*. "
                f"Found torch version {torch.__version__}.\n\n"
                "To bypass this error, set environment variable "
                "`NM_BYPASS_TORCH_VERSION` to '1'.\n\n"
                "Bypassing may result in errors or "
                "incorrect behavior, so set at your own risk."
            )
        else:
            warnings.warn(
                "sparseml quantized onnx export does not work "
                "with torch==1.10.* or 1.11.*"
            )
except ImportError:
    pass
def _wrap_batch_norms(module: torch.nn.Module) -> bool:
# wrap all batch norm layers in module with a trivial wrapper
# to prevent BN fusing during export
batch_norms_wrapped = False
for name, submodule in module.named_modules():
if (
isinstance(submodule, torch.nn.BatchNorm1d)
or isinstance(submodule, torch.nn.BatchNorm2d)
or isinstance(submodule, torch.nn.BatchNorm3d)
):
submodule_path = name.split(".")
parent_module = _get_submodule(module, submodule_path[:-1])
setattr(parent_module, submodule_path[-1], _AddNoOpWrapper(submodule))
batch_norms_wrapped = True
return batch_norms_wrapped | null |
21,195 | import functools
import os
from typing import Optional
from sparseml.base import check_version
_TORCH_MIN_VERSION = "1.0.0"
_TORCH_MAX_VERSION = os.environ.get("MAX_TORCH", "2.1.10")
def check_torch_install(
min_version: Optional[str] = _TORCH_MIN_VERSION,
max_version: Optional[str] = _TORCH_MAX_VERSION,
raise_on_error: bool = True,
) -> bool:
"""
Check that the torch package is installed.
If raise_on_error, will raise an ImportError if it is not installed or
the required version range, if set, is not installed.
If not raise_on_error, will return True if installed with required version
and False otherwise.
:param min_version: The minimum version for torch that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for torch that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
:param raise_on_error: True to raise any issues such as not installed,
minimum version, or maximum version as ImportError. False to return the result.
:type raise_on_error: bool
:return: If raise_on_error, will return False if torch is not installed
or the version is outside the accepted bounds and True if everything is correct.
:rtype: bool
"""
if torch_err is not None:
if raise_on_error:
raise torch_err
return False
return check_version("torch", min_version, max_version, raise_on_error)
The provided code snippet includes necessary dependencies for implementing the `require_torch` function. Write a Python function `def require_torch( min_version: Optional[str] = _TORCH_MIN_VERSION, max_version: Optional[str] = _TORCH_MAX_VERSION, )` to solve the following problem:
Decorator function to require use of torch. Will check that torch package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_torch_install` for more info. :param min_version: The minimum version for torch that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for torch that it must be less than or equal to, if unset will require no maximum version. :type max_version: str
Here is the function:
def require_torch(
min_version: Optional[str] = _TORCH_MIN_VERSION,
max_version: Optional[str] = _TORCH_MAX_VERSION,
):
"""
Decorator function to require use of torch.
Will check that torch package is installed and within the bounding
ranges of min_version and max_version if they are set before calling
the wrapped function.
See :func:`check_torch_install` for more info.
:param min_version: The minimum version for torch that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for torch that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
"""
def _decorator(func):
@functools.wraps(func)
def _wrapper(*args, **kwargs):
check_torch_install(min_version, max_version)
return func(*args, **kwargs)
return _wrapper
return _decorator | Decorator function to require use of torch. Will check that torch package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_torch_install` for more info. :param min_version: The minimum version for torch that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for torch that it must be less than or equal to, if unset will require no maximum version. :type max_version: str |
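`require_torch` follows a standard gate-then-call decorator pattern. A minimal torch-free sketch, with a hypothetical `require_package` that takes any zero-argument check raising on failure:

```python
import functools

def require_package(check):
    # hypothetical analog of require_torch: run `check` before every call
    def _decorator(func):
        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            check()  # raises (e.g. ImportError) if the dependency is unusable
            return func(*args, **kwargs)
        return _wrapper
    return _decorator

checks = []

@require_package(lambda: checks.append("checked"))
def train(epochs):
    return f"trained for {epochs} epochs"

print(train(3))
print(checks)
```

`functools.wraps` keeps the wrapped function's name and docstring intact, which the real decorators rely on as well.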
21,196 | import functools
import os
from typing import Optional
from sparseml.base import check_version
def check_torchvision_install(
min_version: Optional[str] = None,
max_version: Optional[str] = None,
raise_on_error: bool = True,
) -> bool:
"""
Check that the torchvision package is installed.
If raise_on_error, will raise an ImportError if it is not installed or
the required version range, if set, is not installed.
If not raise_on_error, will return True if installed with required version
and False otherwise.
:param min_version: The minimum version for torchvision that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for torchvision that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
:param raise_on_error: True to raise any issues such as not installed,
minimum version, or maximum version as ImportError. False to return the result.
:type raise_on_error: bool
:return: If raise_on_error, will return False if torchvision is not installed
or the version is outside the accepted bounds and True if everything is correct.
:rtype: bool
"""
if torchvision_err is not None:
if raise_on_error:
raise torchvision_err
return False
return check_version("torchvision", min_version, max_version, raise_on_error)
The provided code snippet includes necessary dependencies for implementing the `require_torchvision` function. Write a Python function `def require_torchvision( min_version: Optional[str] = None, max_version: Optional[str] = None )` to solve the following problem:
Decorator function to require use of torchvision. Will check that torchvision package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_torchvision_install` for more info. :param min_version: The minimum version for torchvision that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for torchvision that it must be less than or equal to, if unset will require no maximum version. :type max_version: str
Here is the function:
def require_torchvision(
min_version: Optional[str] = None, max_version: Optional[str] = None
):
"""
Decorator function to require use of torchvision.
Will check that torchvision package is installed and within the bounding
ranges of min_version and max_version if they are set before calling
the wrapped function.
See :func:`check_torchvision_install` for more info.
:param min_version: The minimum version for torchvision that it must be greater than
or equal to, if unset will require no minimum version
:type min_version: str
:param max_version: The maximum version for torchvision that it must be less than
or equal to, if unset will require no maximum version.
:type max_version: str
"""
def _decorator(func):
@functools.wraps(func)
def _wrapper(*args, **kwargs):
check_torchvision_install(min_version, max_version)
return func(*args, **kwargs)
return _wrapper
return _decorator | Decorator function to require use of torchvision. Will check that torchvision package is installed and within the bounding ranges of min_version and max_version if they are set before calling the wrapped function. See :func:`check_torchvision_install` for more info. :param min_version: The minimum version for torchvision that it must be greater than or equal to, if unset will require no minimum version :type min_version: str :param max_version: The maximum version for torchvision that it must be less than or equal to, if unset will require no maximum version. :type max_version: str |
21,197 | import random
from abc import ABC, abstractmethod
from copy import deepcopy
from typing import List, Optional, Union
import torch
from torch import Tensor
from sparseml.pytorch.utils import memory_aware_threshold
class PruningMaskCreator(ABC):
"""
Base abstract class for a sparsity mask creator.
Subclasses should define all methods for creating masks
"""
def create_sparsity_masks(
self,
tensors: List[Tensor],
target: Union[float, List[float]],
global_sparsity: bool = False,
) -> List[Tensor]:
"""
:param tensors: list of tensors to calculate a masks based on their contained
values
:param target: the desired sparsity (decimal fraction of zeros) to reach
within the mask or other float target value to base sparsity masks on.
Can also be a list where each element is a
target for a tensor in the same position in the tensor list. If global
sparsity is enabled, all values of the target list must be the same
:param global_sparsity: if True, sparsity masks will be created such that the
average sparsity across all given tensors is the target sparsity with the
lowest global values masked. If False, each tensor will be masked to the
target sparsity ranking values within each individual tensor. Default is
False
:return: list of masks (0.0 for values that are masked, 1.0 for values that are
unmasked) calculated from the tensors such that the desired number of zeros
matches the sparsity.
"""
raise NotImplementedError()
class UnstructuredPruningMaskCreator(PruningMaskCreator):
"""
Class for creating unstructured sparsity masks.
Masks will be created using unstructured sparsity by pruning weights ranked
by their value. Each mask will correspond to the given tensor.
"""
def create_sparsity_masks(
self,
tensors: List[Tensor],
target: Union[float, List[float]],
global_sparsity: bool = False,
) -> List[Tensor]:
"""
:param tensors: list of tensors to calculate a mask from based on their
contained values
:param target: the desired sparsity (decimal fraction of zeros) to reach
within the mask. Can also be a list where each element is a
target for a tensor in the same position in the tensor list. If global
sparsity is enabled, all values of the target list must be the same
:param global_sparsity: if True, sparsity masks will be created such that the
average sparsity across all given tensors is the target sparsity with the
lowest global values masked. If False, each tensor will be masked to the
target sparsity ranking values within each individual tensor. Default is
False
:return: list of masks (0.0 for values that are masked, 1.0 for values that are
unmasked) calculated from the tensors such that the desired number of zeros
matches the sparsity. If there are more zeros than the desired sparsity,
zeros will be randomly chosen to match the target sparsity
"""
sparsity = target # target should be desired sparsity level
if isinstance(sparsity, float):
sparsity = [sparsity] * len(tensors)
if len(sparsity) != len(tensors):
raise ValueError(
"a sparsity target must be defined for every given Tensor. Received"
f"{len(sparsity)} targets for {len(tensors)} Tensors."
)
if global_sparsity:
# create tensor to make global mask with
original_tensors = tensors
tensors = [self._flatten_and_stack_tensors(tensors)]
if not all(target == sparsity[0] for target in sparsity):
raise ValueError(
"all sparsity targets must be the same for global pruning "
f"received targets: {sparsity}"
)
sparsity = [sparsity[0]]
else:
original_tensors = None
masks = []
for tensor, sparsity_target in zip(tensors, sparsity):
threshold = self._threshold_from_sparsity(tensor, sparsity_target)
if threshold.numel() < 1:
masks.append(tensor.new_ones(tensor.shape))
continue
num_elem = tensor.numel()
target_num_mask = round(num_elem * sparsity_target)
min_val = tensor.min().item()
if threshold.item() > min_val:
threshold_mask = tensor > threshold
num_masked = num_elem - torch.sum(threshold_mask).item()
if num_masked != target_num_mask:
# attempt to reconcile expected number of masked weights
# may occur if multiple values have the threshold weight
num_to_flip = abs(num_masked - target_num_mask)
over_masked = num_masked > target_num_mask
threshold_mask = self._flip_threshold_mask_vals(
threshold_mask, tensor, threshold, num_to_flip, over_masked
)
masks.append(threshold_mask.type(tensor.type()))
continue
# too many zeros so will go over the already given sparsity
# and choose which zeros to not keep in mask at random
zero_indices = (tensor == min_val).nonzero(as_tuple=False)
rand_indices = list(range(zero_indices.shape[0]))
local_rng = random.Random(42)
local_rng.shuffle(rand_indices)
rand_indices = rand_indices[:target_num_mask]
rand_indices = tensor.new_tensor(rand_indices, dtype=torch.int64)
zero_indices = zero_indices[rand_indices, :]
mask = tensor.new_ones(tensor.shape).type(tensor.type())
mask[zero_indices.split(1, dim=1)] = 0
masks.append(mask.type(tensor.type()))
if global_sparsity:
# unpack global mask into tensor-masks with the original shapes
global_mask = masks[0]
masks = self._unstack_flattened_tensors(global_mask, original_tensors)
del global_mask
return masks
def _threshold_from_sparsity(self, tensor: Tensor, sparsity: float) -> Tensor:
"""
:param tensor: the tensor to find a value in for which setting
all values < that value will give desired sparsity
:param sparsity: the desired sparsity to reach within the mask
(decimal fraction of zeros) can also be a list where each element is a
sparsity for a tensor in the same position in the tensor list
:return: the threshold to get to the desired sparsity or an empty tensor
if it was not possible given the inputs
"""
if tensor.numel() < 1 or sparsity <= 0.0 or sparsity > 1.0:
return tensor.new_tensor([])
lookup_index = round(sparsity * tensor.numel()) - 1
if lookup_index < 0:
lookup_index = 0
elif lookup_index > tensor.numel():
lookup_index = tensor.numel()
return memory_aware_threshold(tensor, lookup_index)
def _flatten_and_stack_tensors(self, tensors: List[Tensor]) -> Tensor:
total_elements = sum(tensor.numel() for tensor in tensors)
global_tensor = (
tensors[0].new_zeros(total_elements).detach().requires_grad_(False)
)
curr_element = 0
for idx, tensor in enumerate(tensors):
global_tensor[
curr_element : curr_element + tensor.numel()
] = tensor.reshape(-1)
curr_element += tensor.numel()
return global_tensor
def _unstack_flattened_tensors(
self, stacked_tensor: Tensor, original_tensors: List[Tensor]
) -> List[Tensor]:
unstacked_tensors = []
global_idx = 0
for tensor in original_tensors:
# unpack global tensor into masks matching original tensor shapes
unstacked_tensor = (
tensor.new_empty(tensor.numel()).detach().requires_grad_(False)
)
unstacked_tensor.copy_(
stacked_tensor[global_idx : global_idx + tensor.numel()]
).type(tensor.type())
unstacked_tensor = unstacked_tensor.reshape(tensor.shape)
unstacked_tensors.append(unstacked_tensor)
global_idx += tensor.numel()
return unstacked_tensors
def _flip_threshold_mask_vals(
self,
mask: Tensor,
tensor: Tensor,
threshold: Tensor,
max_flip: int,
over_masked: bool,
) -> Tensor:
# flip mask values where tensor == threshold until mask has desired
# number of 0s/1s
threshold_idxs = torch.nonzero(tensor == threshold, as_tuple=False)
num_flipped = 0
for threshold_elem_idx in threshold_idxs:
# make tensor returned by nonzero() indexable
threshold_elem_idx = threshold_elem_idx.split(1)
threshold_mask_elem = mask[threshold_elem_idx]
# flip mask val at threshold index if necessary
if over_masked and threshold_mask_elem == 0:
mask[threshold_elem_idx] = 1
num_flipped += 1
elif not over_masked and threshold_mask_elem == 1:
mask[threshold_elem_idx] = 0
num_flipped += 1
if num_flipped >= max_flip:
break
return mask
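A plain-Python sketch of the unstructured masking rule above: given per-element scores and a target sparsity, the lowest-scoring `round(n * sparsity)` elements are masked to 0.0. (The class additionally handles threshold ties and random selection among equal minima, which this sketch omits.)

```python
def unstructured_mask(scores, sparsity):
    # mask the `sparsity` fraction of elements with the lowest scores
    n = len(scores)
    num_masked = round(n * sparsity)
    order = sorted(range(n), key=lambda i: scores[i])
    mask = [1.0] * n
    for i in order[:num_masked]:
        mask[i] = 0.0
    return mask

print(unstructured_mask([0.1, 2.0, 0.5, 3.0], 0.5))
```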
class FourBlockMaskCreator(GroupedPruningMaskCreator):
"""
semi-structured sparsity mask creator that groups sparsity blocks in groups of four
along the input-channel dimension (assumed to be dimension 1 for pytorch)
Equivalent to BlockPruningMaskCreator([1, 4]) without restrictions on number
of dimensions, or divisibility
:param grouping_fn_name: The name of the torch grouping function to reduce
dimensions by
"""
def __init__(
self,
grouping_fn_name: str = "mean",
):
self._grouping_fn_name = grouping_fn_name
def group_tensor(self, tensor: Tensor) -> Tensor:
"""
:param tensor: The tensor to transform
:return: The mean values of the tensor grouped by blocks of shape
self._block_shape
"""
if tensor.dim() > 2:
# permute input channel dim to last dimension
permute_val = list(range(tensor.dim()))
del permute_val[1]
permute_val.append(1)
tensor = tensor.permute(*permute_val)
remainder = tensor.size(-1) % 4
if remainder != 0:
# pad with zeros to make masks add to 4
pad_num = 4 - remainder
padded_tensor = torch.zeros(
*tensor.shape[:-1],
tensor.size(-1) + pad_num,
device=tensor.device,
dtype=tensor.dtype,
)
padded_tensor[..., :-pad_num] = tensor
padded_tensor[..., -pad_num:] = torch.mean(
# mean of remainder input channel dims
tensor[..., -remainder:],
dim=-1,
keepdim=True,
)
tensor = padded_tensor
blocked_tensor = tensor.reshape(-1, 4)
reduced_blocks = GroupedPruningMaskCreator.reduce_tensor(
blocked_tensor, 1, self._grouping_fn_name
)
return reduced_blocks.type(tensor.type())
def _map_mask_to_tensor(
self,
grouped_mask: Tensor,
original_tensor_shape: torch.Size,
tensor_idx: Optional[int] = None,
) -> Tensor:
"""
:param grouped_mask: A binary mask the size of a tensor from group_tensor
:param original_tensor_shape: Shape of the original tensor grouped_mask
derives from
:param tensor_idx: optional index this tensor was passed into a tensor
list for mask creation
:return: The values from grouped_mask mapped to a tensor of size
original_tensor_shape
"""
# expand so every element has a corresponding value in the original tensor
block_mask = grouped_mask.expand(-1, 4).contiguous()
# adjust for permuted shape if necessary
original_tensor_shape = list(original_tensor_shape)
if len(original_tensor_shape) > 2:
original_tensor_shape.append(original_tensor_shape[1])
del original_tensor_shape[1]
# adjust for padding if necessary
remainder = original_tensor_shape[-1] % 4
if remainder != 0:
original_tensor_shape[-1] += 4 - remainder
# set to original shape
block_mask = block_mask.reshape(original_tensor_shape)
# remove padding if necessary
if remainder != 0:
pad_num = 4 - remainder
block_mask = block_mask[..., :-pad_num]
# repermute mask if necessary
if len(original_tensor_shape) > 2:
permute_val = list(range(len(original_tensor_shape)))
del permute_val[-1]
permute_val.insert(1, len(permute_val))
block_mask = block_mask.permute(*permute_val)
return block_mask
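The grouping-with-padding step can be illustrated with a small pure-Python sketch (the real class operates on torch tensors and first permutes the input-channel axis to the end; `block4_scores` is a hypothetical helper for a single flattened row):

```python
# Sketch of FourBlockMaskCreator's scoring: values are grouped into
# consecutive blocks of 4, a ragged final block is padded so the padded
# entries repeat the mean of the leftover values, and each block is reduced
# to one score by its mean.
def block4_scores(values):
    rem = len(values) % 4
    if rem:
        fill = sum(values[-rem:]) / rem        # mean of the leftover entries
        values = values + [fill] * (4 - rem)   # pad up to a multiple of 4
    return [sum(values[i:i + 4]) / 4 for i in range(0, len(values), 4)]
```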
class BlockMaskCreator(GroupedPruningMaskCreator):
"""
Structured sparsity mask creator that groups the input tensor into blocks of
shape block_shape.
    :param block_shape: the block shape along the out- and in-channel
        dimensions. Should be a list of exactly two integers that divide the
        input tensor evenly on the channel dimensions; use -1 for a dimension
        to block across that entire dimension
:param grouping_fn_name: The name of the torch grouping function to reduce
dimensions by
"""
def __init__(
self,
block_shape: List[int],
grouping_fn_name: str = "mean",
):
if len(block_shape) < 2:
raise ValueError(
(
"Invalid block_shape: {}, "
"block_shape must have length == 2 for in and out channels"
).format(block_shape)
)
if len(block_shape) > 2 and not all([shape == 1 for shape in block_shape[2:]]):
# after in and out channels, only 1 can be used for other dimensions
raise ValueError(
(
"Invalid block_shape: {}, "
"block_shape for indices not in [0, 1] must be equal to 1"
).format(block_shape)
)
self._block_shape = deepcopy(block_shape)
self._grouping_fn_name = grouping_fn_name
def group_tensor(self, tensor: Tensor) -> Tensor:
"""
:param tensor: The tensor to transform
:return: The mean values of the tensor grouped by blocks of shape
self._block_shape
"""
blocked_tens_shape = self._get_blocked_tens_shape_and_validate(tensor.shape)
blocked_tensor = tensor.reshape(blocked_tens_shape)
reduced_blocks = GroupedPruningMaskCreator.reduce_tensor(
blocked_tensor, 1, self._grouping_fn_name
)
return reduced_blocks.type(tensor.type())
def _map_mask_to_tensor(
self,
grouped_mask: Tensor,
original_tensor_shape: torch.Size,
tensor_idx: Optional[int] = None,
) -> Tensor:
"""
:param grouped_mask: A binary mask the size of a tensor from group_tensor
:param original_tensor_shape: Shape of the original tensor grouped_mask
derives from
:param tensor_idx: optional index this tensor was passed into a tensor
list for mask creation
:return: The values from grouped_mask mapped to a tensor of size
original_tensor_shape
"""
blocked_tens_shape = self._get_blocked_tens_shape_and_validate(
original_tensor_shape
)
# expand so every element has a corresponding value in the original tensor
block_mask = grouped_mask.reshape(blocked_tens_shape[0], blocked_tens_shape[2])
block_mask = block_mask.unsqueeze(1)
block_mask = block_mask.expand(*blocked_tens_shape).contiguous()
return block_mask.reshape(original_tensor_shape)
def _get_blocked_tens_shape_and_validate(
self,
tens_shape: torch.Size,
) -> List[int]:
"""
:param tens_shape: The shape of the tensor to group in blocks
:return: shape of tens when blocked by block_shape
:raise: ValueError if we are unable to block tens by shape block_shape
"""
block_shape = self._block_shape
n_dims = len(tens_shape)
while len(block_shape) < n_dims: # Conv will have block shape [X, Y, 1, ..., 1]
block_shape.append(1)
for idx, shape in enumerate(block_shape):
if shape == -1:
block_shape[idx] = tens_shape[idx]
# Validate
if n_dims < 2:
raise ValueError(
"Invalid tensor shape {}."
                " BlockMaskCreator can only create masks from tensors with 2 or"
" more dimensions, tensor has {}.".format(tens_shape, n_dims)
)
for tens_dim, block_dim in zip(tens_shape, block_shape):
if tens_dim % block_dim != 0:
raise ValueError(
f"Invalid block_shape {block_shape} for parameter shape "
f"{tens_shape}. Elements of block_shape must divide parameter "
f"shape evenly"
)
# Compute blocked tensor shape
if len(block_shape) > 1 and block_shape[1] > 1:
return [
tens_shape[0] * tens_shape[1] // (block_shape[0] * block_shape[1]),
block_shape[0] * block_shape[1],
-1,
]
else:
return [tens_shape[0] // block_shape[0], block_shape[0], -1]
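The shape arithmetic in `_get_blocked_tens_shape_and_validate` can be mirrored in a few lines of plain Python (a sketch with hypothetical name `blocked_shape`; the real method also expands -1 entries and raises `ValueError` rather than asserting):

```python
# For a parameter of shape (out, in, ...) and a block of (b0, b1), the tensor
# is viewed as (num_blocks, block_size, -1) so that a reduction over dim 1
# yields one score per block.
def blocked_shape(tens_shape, block_shape):
    # Conv-style params get trailing block dims of 1
    block = list(block_shape) + [1] * (len(tens_shape) - len(block_shape))
    # -1 means "block across the whole dimension"
    block = [t if b == -1 else b for t, b in zip(tens_shape, block)]
    for t, b in zip(tens_shape, block):
        assert t % b == 0, "block_shape must divide the parameter shape evenly"
    if block[1] > 1:
        return [tens_shape[0] * tens_shape[1] // (block[0] * block[1]),
                block[0] * block[1], -1]
    return [tens_shape[0] // block[0], block[0], -1]
```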
class NMPruningMaskCreator(PruningMaskCreator):
"""
Class for creating N:M sparsity masks.
Masks will be created using the N:M ratio, where for every block of M weights,
N will be pruned based on ranked weight value. Each mask will correspond to the
given tensor.
:param N: The number of weights in a group to keep
:param M: The size of a weight group
"""
def __init__(
self,
N: int = 2,
M: int = 4,
):
self._N = N
self._M = M
def create_sparsity_masks(
self,
tensors: List[Tensor],
target: Union[float, List[float]],
global_sparsity: bool = False,
) -> List[Tensor]:
"""
:param tensors: list of tensors to calculate a masks based on their contained
values
:param target: the desired sparsity (decimal fraction of zeros) to reach
within the mask or other float target value to base sparsity masks on.
Can also be a list where each element is a target for a tensor in the same
position in the tensor list. The target value must be within 1e-2 of the
effective sparsity of the N:M ratio.
:param global_sparsity: typically used to determine pruning masks globally.
Not used here because global sparsity doesn't apply to N:M pruning
:return: list of masks (0.0 for values that are masked, 1.0 for values that are
unmasked) calculated from the tensors such that the desired number of zeros
matches the sparsity.
"""
nm_sparsity = 1 - (self._N / self._M)
if (isinstance(target, float) and target == 0.0) or (
isinstance(target, List) and all([sparsity == 0.0 for sparsity in target])
):
return [torch.ones_like(tensor) for tensor in tensors]
if (isinstance(target, float) and abs(nm_sparsity - target) > 1e-2) or (
isinstance(target, List)
and any([abs(nm_sparsity - sparsity) > 1e-2 for sparsity in target])
):
raise ValueError(
"Sparsity must match N:M ratio. e.g. if using '3:4' then sparsity "
"should be set to 0.25"
)
masks = []
for tensor in tensors:
if tensor.numel() % self._M != 0:
raise ValueError(
f"Tensor of size {tensor.shape} can't be evenly divided into "
f"{self._M} groups"
)
original_tensor = tensor.clone()
num_groups = tensor.numel() // self._M
if len(tensor.shape) == 4:
# N:M sparsity for convolutional layers
tensor_temp = (
tensor.detach()
.abs()
.permute(0, 2, 3, 1)
.reshape(num_groups, self._M)
)
index = torch.argsort(tensor_temp, dim=1)[:, : int(self._M - self._N)]
w_b = torch.ones(tensor_temp.shape, device=tensor_temp.device)
masks.append(
w_b.scatter_(dim=1, index=index, value=0).reshape(
original_tensor.permute(0, 2, 3, 1).shape
)
)
elif len(tensor.shape) == 2:
# N:M sparsity for linear layers
tensor_temp = tensor.detach().abs().reshape(num_groups, self._M)
index = torch.argsort(tensor_temp, dim=1)[:, : int(self._M - self._N)]
w_b = torch.ones(tensor_temp.shape, device=tensor_temp.device)
masks.append(
w_b.scatter_(dim=1, index=index, value=0).reshape(tensor.shape)
)
else:
raise NotImplementedError("Only support layers of dimension 2 or 4")
return masks
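The N:M rule itself is easy to see on a flat list of weights. A pure-Python sketch (the class above uses `torch.argsort`/`scatter_` on reshaped tensors; `nm_mask` is a hypothetical helper):

```python
# Within every consecutive group of M weights, the M - N smallest magnitudes
# are masked (set to 0.0) and the N largest are kept (1.0).
def nm_mask(weights, n=2, m=4):
    assert len(weights) % m == 0, "tensor must divide evenly into groups of M"
    mask = []
    for start in range(0, len(weights), m):
        group = weights[start:start + m]
        # indices of the m - n smallest magnitudes get a 0 in the mask
        pruned = sorted(range(m), key=lambda i: abs(group[i]))[: m - n]
        mask.extend(0.0 if i in pruned else 1.0 for i in range(m))
    return mask
```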
The provided code snippet includes necessary dependencies for implementing the `get_mask_creator_default` function. Write a Python function `def get_mask_creator_default(mask_type: Union[str, List[int]]) -> PruningMaskCreator` to solve the following problem:
:param mask_type: type of mask creator to use, can be 'unstructured', for unstructured mask creator, 'block4' for 1x4 block pruning, 'N:M' where N and M are integers for N:M pruning, or a list of two integers for custom block pruning (does not support padding) :return: mask creator object created from the mask type
Here is the function:
def get_mask_creator_default(mask_type: Union[str, List[int]]) -> PruningMaskCreator:
"""
:param mask_type: type of mask creator to use, can be 'unstructured', for
unstructured mask creator, 'block4' for 1x4 block pruning, 'N:M' where N and M
are integers for N:M pruning, or a list of two integers for custom block
pruning (does not support padding)
:return: mask creator object created from the mask type
"""
if mask_type == "unstructured":
return UnstructuredPruningMaskCreator()
elif mask_type == "block4":
return FourBlockMaskCreator()
elif ":" in mask_type:
nm = mask_type.split(":")
if len(nm) != 2:
raise ValueError(
"N:M pruning must be specified in the format 'N:M' with "
f"2 values, but {len(nm)} values were found"
)
return NMPruningMaskCreator(N=int(nm[0]), M=int(nm[1]))
elif mask_type == "tensorrt":
return NMPruningMaskCreator(N=2, M=4)
elif isinstance(mask_type, List):
if not all(isinstance(val, int) for val in mask_type):
raise ValueError(
"all values in list specification of BlockMaskCreator must be integers "
f"found {mask_type}"
)
if len(mask_type) != 2:
raise ValueError(
"expected list of length 2 for specification of BlockMaskCreator, "
f"got list with length {len(mask_type)}, mask_type={mask_type}"
)
return BlockMaskCreator(mask_type)
else:
raise ValueError(
f"Unknown mask_type {mask_type}. Supported mask types include "
"'unstructured' and 'block4'"
) | :param mask_type: type of mask creator to use, can be 'unstructured', for unstructured mask creator, 'block4' for 1x4 block pruning, 'N:M' where N and M are integers for N:M pruning, or a list of two integers for custom block pruning (does not support padding) :return: mask creator object created from the mask type |
21,198 | import logging
import math
import os
from abc import ABC, abstractmethod
from functools import wraps
from typing import Any, Dict, List, Optional, Union
import torch
import torch.distributed as dist
from torch import Tensor
from torch.nn import Module, Parameter
from torch.nn.parallel.parallel_apply import parallel_apply
import GPUtil
from sparseml.pytorch.sparsification.modifier import ModifierProp, PyTorchModifierYAML
from sparseml.pytorch.sparsification.pruning.mask_creator import (
PruningMaskCreator,
get_mask_creator_default,
)
from sparseml.pytorch.sparsification.pruning.modifier_pruning_base import (
BaseGradualPruningModifier,
)
from sparseml.pytorch.sparsification.pruning.scorer import PruningParamsGradScorer
from sparseml.pytorch.utils import GradSampler
from sparseml.pytorch.utils.logger import BaseLogger
_LOGGER = logging.getLogger(__name__)
BYTES_IN_MIB = 1024**2
class FisherInverse(ABC):
"""
Abstract class for working with the inverse Fisher information matrix. Storing
the full matrix is not a requirement.
"""
def diag(self) -> Tensor:
"""
:return: the entries along the diagonal entries of the inverse Fisher matrix
"""
raise NotImplementedError()
def mul(self, x: Tensor) -> Tensor:
"""
:param x: tensor to multiply with the inverse Fisher matrix
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
raise NotImplementedError()
class FisherInverseFast(FisherInverse):
"""
Base implementation of computing the inverse Fisher matrix values based on the
M-FAC paper. Takes O(d * m) memory and O(d * m^2) time to initialize where d
is the number of parameters and m is the number of gradient samples
:param grads: tensor of gradient samples to compute the inverse Fisher product
with. Dimension should be (num_samples, num_parameters)
:param damp: the dampening factor. Default is 1e-5
"""
def __init__(self, grads, damp=1e-5):
self._device = grads.device
self._dtype = grads.dtype
self._num_samples, self._num_params = grads.shape
self._damp = 1.0 / damp
        self._hinv_g = grads  # overwritten in place with rows u_i = H_{i-1}^-1 g_i
self._denom = torch.zeros(
self._num_samples, device=self._device, dtype=self._dtype
)
grad_sample = grads[0, :].clone()
self._hinv_g[0, :] = self._damp * grad_sample
self._denom[0] = self._num_samples + grad_sample.dot(self._hinv_g[0, :])
for idx in range(1, self._num_samples):
grad_sample = grads[idx, :].clone()
self._hinv_g[idx, :] = self._damp * grad_sample
mul = self._hinv_g[:idx, :].matmul(grad_sample) / self._denom[:idx]
self._hinv_g[idx, :] -= mul.matmul(self._hinv_g[:idx, :])
self._denom[idx] = self._num_samples + grad_sample.dot(self._hinv_g[idx, :])
def diag(self):
"""
:return: the entries along the diagonal entries of the inverse Fisher matrix.
"""
res = self._damp * torch.ones(
self._num_params, device=self._device, dtype=self._dtype
)
for i in range(self._num_samples):
res -= (self._hinv_g[i, :] ** 2) / self._denom[i]
return res
def mul(self, x):
"""
:param x: tensor to multiply with the inverse Fisher matrix
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
res = self._damp * x
mul = self._hinv_g.matmul(x) / self._denom
res -= mul.matmul(self._hinv_g)
return res
def to(self, device):
"""
:param device: device to move intermediate results to
:return: device movement done in place, returns a copy of this object as well
"""
# in-place
self._hinv_g = self._hinv_g.to(device)
self._denom = self._denom.to(device)
self._device = device
return self
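The Sherman-Morrison recursion behind `FisherInverseFast` can be verified on a toy problem in plain Python (a torch-free sketch; `fisher_inverse_mul` is a hypothetical helper). The empirical Fisher F = damp * I + (1/m) * sum_i g_i g_i^T is inverted one rank-1 update at a time, storing only u_i = F_{i-1}^{-1} g_i and denom_i = m + g_i . u_i, exactly as the class stores `_hinv_g` and `_denom`:

```python
# Return F^{-1} x without ever materializing the d x d matrix F.
def fisher_inverse_mul(grads, x, damp=1e-5):
    m = len(grads)
    us, denoms = [], []
    for g in grads:
        # u = F_{k-1}^{-1} g, built from the previously stored rank-1 terms
        u = [gi / damp for gi in g]
        for u_prev, den_prev in zip(us, denoms):
            coef = sum(a * b for a, b in zip(u_prev, g)) / den_prev
            u = [ui - coef * upi for ui, upi in zip(u, u_prev)]
        us.append(u)
        denoms.append(m + sum(a * b for a, b in zip(g, u)))
    # apply the same expansion to the query vector x
    res = [xi / damp for xi in x]
    for u, den in zip(us, denoms):
        coef = sum(a * b for a, b in zip(u, x)) / den
        res = [ri - coef * ui for ri, ui in zip(res, u)]
    return res
```

For grads [[1, 0], [1, 2]] and damp 0.1, F = [[1.1, 1], [1, 2.1]], so F^{-1}[1, 1] is [1.1/1.31, 0.1/1.31], which the recursion reproduces.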
class FisherInverseFastBlock(FisherInverse):
"""
Implementation of computing the inverse Fisher matrix values based on the
M-FAC paper using a given block size to break up computation. Individual
blocks must fit into GPU memory.
:param grads: tensor of gradient samples to compute the inverse Fisher product
with. Dimension should be (num_samples, num_parameters)
:param block_size: size of blocks to form along diagonal of the Fisher matrix
:param damp: the dampening factor. Default is 1e-5
:param devices: list of GPU device ids to use for computation. Default is to use cpu
"""
def __init__(self, grads, block_size, damp=1e-5, devices=None):
self._dtype = grads.dtype
self._block_size = block_size
self._devices = devices or ["cpu"]
self._fisher_inv_blocks = []
_LOGGER.debug("Starting FisherInverseFastBlock")
for block_start_idx in range(0, grads.shape[1], self._block_size):
block = (
grads[:, block_start_idx : (block_start_idx + self._block_size)]
.to(self._devices[0])
.contiguous()
)
fisher_inv_block = FisherInverseFast(block, damp=damp)
self._fisher_inv_blocks.append(fisher_inv_block.to("cpu"))
del block
_LOGGER.debug("FisherInverseFastBlock H^-1 Calculation Complete")
def diag(self):
"""
:return: the entries along the diagonal entries of the inverse Fisher matrix.
"""
res = []
for idx, fisher_inv_block in enumerate(self._fisher_inv_blocks):
device = self._devices[idx % len(self._devices)]
fisher_inv_block = fisher_inv_block.to(device)
res.append(fisher_inv_block.diag().to("cpu"))
res.append(torch.zeros(0, dtype=self._dtype, device="cpu"))
# free GPU mem
fisher_inv_block.to("cpu")
torch.cuda.empty_cache()
return torch.cat(res[:-1])
def mul(self, x):
"""
:param x: tensor to multiply with the inverse Fisher matrix
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
x = x.to("cpu")
res = []
for idx, fisher_inv_block in enumerate(self._fisher_inv_blocks):
device = self._devices[idx % len(self._devices)]
fisher_inv_block = fisher_inv_block.to(device)
x_block = x[(self._block_size * idx) : (self._block_size * (idx + 1))].to(
device
)
res.append(fisher_inv_block.mul(x_block).to("cpu"))
# free GPU mem
fisher_inv_block.to("cpu")
torch.cuda.empty_cache()
return torch.cat(res)
class FisherInverseFastPageSwap(FisherInverse):
"""
Implementation of computing the inverse Fisher matrix values based on the
M-FAC paper using a given page size to break up computation across samples.
Pages of gradients must fit into GPU memory.
:param grads: tensor of gradient samples to compute the inverse Fisher product
with. Dimension should be (num_samples, num_parameters)
:param damp: the dampening factor. Default is 1e-5
:param num_pages: number of pages to break gradient samples into. the number of
gradients must be divisible by num_pages
:param devices: list of GPU device ids to use for computation. Default is to use cpu
"""
def __init__(self, grads, damp=1e-5, num_pages=1, devices=None):
assert torch.cuda.is_available(), (
"CUDA enabled device not available, "
"but is required for using FisherInverseFastPageSwap"
)
self._devices = devices or ["cuda:0"]
self._gpu0 = self._devices[0] # for computations that fit on single GPU
self._dtype = grads.dtype
self._num_samples, self._num_params = grads.shape
self._damp = 1.0 / damp
if self._num_samples < num_pages:
raise ValueError("num_grads cannot be smaller than num_pages")
if self._num_samples % num_pages != 0:
raise ValueError(
f"num_grads {self._num_samples} must be divisible by "
f"num_pages {num_pages}"
)
self._samples_per_page = self._num_samples // num_pages
self._params_per_device = int(math.ceil(self._num_params / len(self._devices)))
self._hinv_g = grads
self._denom = torch.zeros(self._num_samples, dtype=self._dtype, device="cpu")
# compute fisher inverse for first page across all GPUs
self._comp_first_page()
# run updates to fisher inverse on main GPU for remaining pages
self._fisher_update_buffer = torch.zeros(
(self._samples_per_page, self._num_params), dtype=self._dtype, device="cpu"
)
for page_offset in range(
self._samples_per_page, self._num_samples, self._samples_per_page
):
self._comp_page(page_offset)
del self._fisher_update_buffer
torch.cuda.empty_cache()
self._denom = self._denom.to(self._gpu0)
def diag(self):
"""
:return: the entries along the diagonal entries of the inverse Fisher matrix.
"""
res = self._damp * torch.ones(
self._num_params, device=self._gpu0, dtype=self._dtype
)
for page_offset in range(0, self._num_samples, self._samples_per_page):
hinv_g_page = self._hinv_g[
page_offset : (self._samples_per_page + page_offset), :
].to(self._gpu0)
for page_sample_idx in range(self._samples_per_page):
res -= (hinv_g_page[page_sample_idx, :] ** 2) / self._denom[
page_sample_idx + page_offset
]
del hinv_g_page
torch.cuda.empty_cache()
return res
def mul(self, x):
"""
:param x: tensor to multiply with the inverse Fisher matrix
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
x = x.to(self._gpu0)
res = self._damp * x
for page_offset in range(0, self._num_samples, self._samples_per_page):
hinv_g_page = self._hinv_g[
page_offset : (self._samples_per_page + page_offset), :
].to(self._gpu0)
mul = (
hinv_g_page.matmul(x)
/ self._denom[page_offset : (self._samples_per_page + page_offset)]
)
res -= mul.matmul(hinv_g_page)
del hinv_g_page
torch.cuda.empty_cache()
return res
def _comp_first_page(self):
# move first page value to devices across GPUs
def _get_first_page_on_device(params_idx, device):
return self._hinv_g[
: self._samples_per_page,
params_idx : (params_idx + self._params_per_device),
].to(device)
first_page_hinv_g_dist = parallel_apply(
[_get_first_page_on_device] * len(self._devices),
list(
zip(range(0, self._num_params, self._params_per_device), self._devices)
),
)
# compute value for first gradient sample
def _process_first_sample(first_page_hinv_g):
first_grad = first_page_hinv_g[0, :].clone()
first_page_hinv_g[0, :] = self._damp * first_grad
self._denom[0] += first_grad.dot(first_page_hinv_g[0, :]).to("cpu")
parallel_apply(
[_process_first_sample] * len(self._devices),
first_page_hinv_g_dist,
)
self._denom[0] += self._num_samples
for sample_idx in range(1, self._samples_per_page):
# update the other page gradients in parallel with two steps
self._mul_tmp = torch.zeros(sample_idx, device="cpu", dtype=self._dtype)
self._sample_grads_dist = [None] * len(self._devices) # type: List[Tensor]
def _calc_mul_update_dist(device_idx, hinv_g_shard):
self._sample_grads_dist[device_idx] = hinv_g_shard[
sample_idx, :
].clone()
hinv_g_shard[sample_idx, :] = (
self._damp * self._sample_grads_dist[device_idx]
)
self._mul_tmp += (
hinv_g_shard[:sample_idx, :]
.matmul(self._sample_grads_dist[device_idx])
.to("cpu")
)
parallel_apply(
[_calc_mul_update_dist] * len(self._devices),
list(enumerate(first_page_hinv_g_dist)),
)
self._mul_tmp /= self._denom[:sample_idx]
def _apply_mul_update_dist(device_idx, hinv_g_shard):
hinv_g_shard[sample_idx, :] -= self._mul_tmp.to(
hinv_g_shard.device
).matmul(hinv_g_shard[:sample_idx, :])
self._denom[sample_idx] += (
self._sample_grads_dist[device_idx]
.dot(hinv_g_shard[sample_idx, :])
.to("cpu")
)
parallel_apply(
[_apply_mul_update_dist] * len(self._devices),
list(enumerate(first_page_hinv_g_dist)),
)
self._denom[sample_idx] += self._num_samples
del self._mul_tmp
del self._sample_grads_dist
def _update_main_hinv_g(shard_param_idx, hinv_g_shard):
self._hinv_g[
: self._samples_per_page,
shard_param_idx : (shard_param_idx + self._params_per_device),
] = hinv_g_shard.to("cpu")
parallel_apply(
[_update_main_hinv_g] * len(first_page_hinv_g_dist),
list(
zip(
range(0, self._num_params, self._params_per_device),
first_page_hinv_g_dist,
),
),
)
del first_page_hinv_g_dist
def _comp_page(self, page_offset):
# update fisher update buffer
for prev_page_offset in range(0, page_offset, self._samples_per_page):
prev_page_hinv_g = self._hinv_g[
prev_page_offset : (self._samples_per_page + prev_page_offset), :
].to(self._gpu0)
for page_sample_idx in range(self._samples_per_page):
grad_sample = self._hinv_g[page_sample_idx + page_offset, :].to(
self._gpu0
)
mul = prev_page_hinv_g.matmul(grad_sample) / self._denom[
prev_page_offset : (self._samples_per_page + prev_page_offset)
].to(self._gpu0)
mul = mul.matmul(prev_page_hinv_g)
if prev_page_offset == 0:
self._fisher_update_buffer[page_sample_idx, :] = (
self._damp * grad_sample - mul
).to("cpu")
else:
self._fisher_update_buffer[page_sample_idx, :] -= mul.to("cpu")
del prev_page_hinv_g
# move buffer to main GPU and update the fisher inv state
fisher_inv_buf_gpu = self._fisher_update_buffer.to(self._gpu0)
grad_sample = self._hinv_g[page_offset, :].to(self._gpu0)
self._denom[page_offset] = self._num_samples + grad_sample.dot(
fisher_inv_buf_gpu[0, :]
)
for page_sample_idx in range(1, self._samples_per_page):
grad_sample = self._hinv_g[page_sample_idx + page_offset, :].to(self._gpu0)
mul = fisher_inv_buf_gpu[:page_sample_idx, :].matmul(
grad_sample
) / self._denom[page_offset : (page_sample_idx + page_offset)].to(
self._gpu0
)
fisher_inv_buf_gpu[page_sample_idx, :] -= mul.matmul(
fisher_inv_buf_gpu[:page_sample_idx, :]
)
self._denom[
page_sample_idx + page_offset
] = self._num_samples + grad_sample.dot(
fisher_inv_buf_gpu[page_sample_idx, :]
)
# update main tensor
self._hinv_g[
page_offset : (self._samples_per_page + page_offset), :
] = fisher_inv_buf_gpu.to("cpu")
del fisher_inv_buf_gpu
class FisherInverseFastSmallBlocks(FisherInverse):
"""
Implementation of computing the inverse Fisher matrix values based on the
M-FAC paper that is optimized for speed for small block sizes
:param grads: tensor of gradient samples to compute the inverse Fisher product
with. Dimension should be (num_samples, num_parameters)
:param block_size: size of blocks to form along diagonal of the Fisher matrix
:param damp: the dampening factor. Default is 1e-5
:param devices: list of GPU device ids to use for computation. Default is to use cpu
:param alpha: alpha value for add step
"""
def __init__(
self,
grads: Tensor,
block_size: int,
damp: float = 1e-5,
devices: List[torch.device] = None,
alpha: float = 0.0,
):
self._dtype = grads.dtype
self._element_size = grads.element_size()
self._block_size = block_size
self._devices = devices or ["cpu"]
self._alpha = alpha
self._damp = damp
self._num_samples, self._num_params = grads.shape
self._num_blocks = math.ceil(self._num_params / block_size)
self._num_devices = len(self._devices)
self._hinvs = []
block_mem = _block_memory_size(self._block_size, self._element_size)
cpu = self._devices[0] == "cpu"
self.hinv(tensor=grads, block_mem=block_mem, cpu=cpu)
def block_wise_decorator(func):
def wrapper_blocked(
self,
tensor: Tensor,
block_mem: int,
safety_margin: float = 0.1,
cpu: bool = False,
):
"""
Wraps the most memory intensive Fisher computations in a memory-aware block
allocation function. The decorator will allocate a number of blocks which
will maximize GPU memory utilization (if GPUs are utilized) with a safety
margin
Note: currently each device is called in sequence. There is no clear benefit
to this regime over simply re-using one device, but it may lend to easier
parallelization in the future and it upholds the M-FAC "available_devices"
parameter expected behavior.
:param tensor: The input tensor for func, the fisher computation function
:param block_mem: The amount of memory needed (in bytes) for the
computation of one block
:param safety_margin: The total number of blocks allocated per device is
(1 - safety_margin)*max_blocks, where max_blocks is the maximum that could
fit on the device at this time
:param cpu: When true all computation is done on the CPU, without the
memory-aware logic
"""
if cpu:
self._num_blocks_per_device_call = [self._num_blocks]
func(self, tensor, 0, "cpu") # Process all the blocks in one call
else:
self._num_blocks_per_device_call = []
self._remaining_blocks = self._num_blocks
self._device_suite_calls = 0 # Number of calls to the full set of gpus
# Calculate free memory available on each device
free_device_memory = _get_free_gpu_memory(
_cuda_list_to_idx(self._devices)
)
while self._remaining_blocks > 0:
# Allocate blocks based on device memory, until either all blocks
# are allocated or all gpus have been assigned for this iteration
for idx, device in enumerate(self._devices):
self._num_blocks_per_device_call.append(
min(
self._remaining_blocks,
math.floor(
(1 - safety_margin)
* free_device_memory[idx]
* BYTES_IN_MIB
/ block_mem
),
)
)
self._remaining_blocks -= self._num_blocks_per_device_call[-1]
_LOGGER.debug(
f"""
Allocating {self._num_blocks_per_device_call[-1]} blocks to
device {device}. {self._remaining_blocks} blocks remaining
"""
)
if self._remaining_blocks <= 0:
break
# Iterate through each device and perform computation
for idx, device in enumerate(self._devices):
call_idx = idx + self._device_suite_calls * self._num_devices
if call_idx >= len(self._num_blocks_per_device_call):
break
func(self, tensor, call_idx, device)
self._device_suite_calls += 1
# At the end of each iter the net free memory change should be 0
# If the free memory decreases, throw a warning in debug mode
prev_free_memory = free_device_memory
free_device_memory = _get_free_gpu_memory(
_cuda_list_to_idx(self._devices)
)
for i in range(len(free_device_memory)):
mem_diff = prev_free_memory[i] - free_device_memory[i]
if mem_diff > 0:
                            _LOGGER.debug(
                                f"WARNING - GPU memory not cleanly freed. "
                                f"Found {mem_diff / BYTES_IN_MIB} fewer MiB "
                                f"since the last iteration"
                            )
                if sum(self._num_blocks_per_device_call) != self._num_blocks:
                    _LOGGER.debug(
                        "WARNING - Number of blocks processed does not equal "
                        "the total number of blocks. "
                        f"Total blocks - {self._num_blocks}; "
                        f"processed blocks - {sum(self._num_blocks_per_device_call)}"
                    )
return wrapper_blocked
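The allocation rule inside the decorator reduces to a short pure-Python sketch (`allocate_blocks` and its arguments are hypothetical; like the original loop, it assumes each round can fit at least one block, otherwise it would spin):

```python
import math

BYTES_IN_MIB = 1024 ** 2

# Each device is assigned as many blocks as fit in (1 - safety_margin) of its
# currently free memory; the remainder rolls over to the next device / round.
def allocate_blocks(num_blocks, free_mib_per_device, block_mem, safety_margin=0.1):
    remaining = num_blocks
    per_call = []
    while remaining > 0:
        for free_mib in free_mib_per_device:
            fit = math.floor(
                (1 - safety_margin) * free_mib * BYTES_IN_MIB / block_mem
            )
            take = min(remaining, fit)
            per_call.append(take)
            remaining -= take
            if remaining <= 0:
                break
    return per_call
```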
    @block_wise_decorator
    def hinv(self, grads: Tensor, call_idx: int, device: str):
"""
Initialize the H^-1 and compute its result for the given device.
:param grads: The sampled gradients used for H^-1 computation
:param call_idx: The index of the number of single-device calls
:param device: the device on which to perform the computations
"""
# initialize H_invs on each device
num_blocks = self._num_blocks_per_device_call[call_idx]
try:
self._hinvs.append(
self._init_hinv(num_blocks, self._damp, device, self._dtype)
)
_LOGGER.debug(f"Initialized H^-1 for {num_blocks} blocks on {device}")
# As a failsafe for a memory issue, try again with half the number of blocks
# This condition has not been encountered in testing as of yet
except Exception as error_msg:
            _LOGGER.warning(
                f"{error_msg} "
                f"Initialization of H^-1 for {num_blocks} blocks on {device} "
                f"failed. Retrying with {num_blocks // 2} blocks"
            )
self._hinvs.append(
self._init_hinv(num_blocks // 2, self._damp, device, self._dtype)
)
self._num_blocks_per_device_call[call_idx] //= 2
self._remaining_blocks += self._num_blocks_per_device_call[call_idx]
            _LOGGER.debug(
                f"Initialized H^-1 for {num_blocks // 2} blocks on {device}; "
                f"remaining blocks increased to {self._remaining_blocks}"
            )
# build hinv_g values from grad samples
_LOGGER.debug(
f"Calculating H^-1 with {self._num_samples} samples for call {call_idx}"
)
for sample_idx in range(self._num_samples):
self._add(grads[sample_idx, :], device, call_idx)
self._hinvs[call_idx] = self._hinvs[call_idx].to("cpu")
_LOGGER.debug("Finished H^-1 calculation and moved mat to CPU")
return None
def diag(self) -> Tensor:
"""
:return: the entries along the diagonal entries of the inverse Fisher matrix
"""
diag_slices = [
torch.diagonal(self._hinvs[idx], dim1=1, dim2=2).reshape(
-1
) # move all to same device after computation
for idx in range(len(self._num_blocks_per_device_call))
]
return torch.cat(diag_slices)[: self._num_params]
def mul(self, x: Tensor) -> Tensor:
"""
:param x: tensor to multiply with the inverse Fisher matrix
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
x = self._pad(x).reshape((-1, self._block_size)).unsqueeze(2)
self._mul_slices = []
block_mem = _block_memory_size(self._block_size, self._element_size)
cpu = self._devices[0] == "cpu"
self.mul_blocked(tensor=x, block_mem=block_mem, cpu=cpu)
return torch.cat(self._mul_slices)[: self._num_params]
    @block_wise_decorator
    def mul_blocked(self, x: Tensor, call_idx: int, device: str) -> Tensor:
"""
:param x: tensor to multiply with the inverse Fisher matrix
:param call_idx: The index of the number of single-device calls
:param device: the device on which to perform the computations
:return: the matrix multiplied value of x and the inverse Fisher matrix
"""
x_slice = x[
int(
torch.sum(
torch.tensor(self._num_blocks_per_device_call[:call_idx])
).item()
) : int(
torch.sum(
torch.tensor(self._num_blocks_per_device_call[: call_idx + 1])
).item()
)
].to(device)
# Get the H^-1 values corresponding to the number of blocks used here.
# It's clunky compared to torch.cat()[idx], but avoids duplicating
# the memory of H^-1. Most of the logic deals with indexing into a list of
# tensors as one continuous tensor, to grab slices that may span separate
# tensors in the list
block_start = sum(self._num_blocks_per_device_call[:call_idx])
block_end = sum(self._num_blocks_per_device_call[: call_idx + 1])
t_hinv = []
cont_end_idx = 0
for tensor in self._hinvs:
cont_start_idx = cont_end_idx
cont_end_idx += len(tensor)
if block_start > cont_end_idx:
continue
if block_end < cont_end_idx:
t_hinv.append(
tensor[block_start - cont_start_idx : block_end - cont_start_idx]
)
break
else:
t_hinv.append(tensor[block_start - cont_start_idx :])
block_start = cont_end_idx
mul_slice = (
torch.bmm(torch.cat(t_hinv).to(device), x_slice)
.reshape(-1)
.to("cpu") # move all to same device after computation
)
self._mul_slices.append(mul_slice)
def _init_hinv(
self,
num_blocks: int,
damp: float,
device: torch.device,
dtype: torch.dtype,
):
# initialize hinv to num_blocks diagonal blocks of size blocksize
base_block = torch.diag(
torch.full([self._block_size], 1.0 / damp, dtype=dtype, device=device)
)
return torch.repeat_interleave(base_block.unsqueeze(0), num_blocks, 0)
def _add(self, grad_sample: Tensor, device, call_idx):
# add gradient sample into H_invs
num_params_per_device = [
num_blocks_device * self._block_size
for num_blocks_device in self._num_blocks_per_device_call
]
grad_sample_slice = grad_sample[
int(torch.sum(torch.tensor(num_params_per_device[:call_idx])).item()) : int(
torch.sum(torch.tensor(num_params_per_device[: call_idx + 1])).item()
)
]
if len(grad_sample_slice) % self._block_size != 0:
# pad to block size
pad_vals = torch.zeros(
self._block_size - len(grad_sample_slice) % self._block_size
)
grad_sample_slice = torch.cat(
[grad_sample_slice, pad_vals.to(grad_sample.device)]
)
grads_blocked_device = grad_sample_slice.to(device).reshape(
(-1, self._block_size)
)
hinv_g_slice = torch.bmm(
self._hinvs[call_idx], grads_blocked_device.unsqueeze(2)
)
denom = (
self._num_samples
+ torch.bmm(grads_blocked_device.unsqueeze(1), hinv_g_slice)
).squeeze(2)
hinv_g_slice = hinv_g_slice.reshape(-1, self._block_size)
for idx_block in range(self._block_size):
# update h_inv calculation across block dims
self._hinvs[call_idx][:, idx_block, :] -= hinv_g_slice * (
hinv_g_slice[:, idx_block].unsqueeze(1) / denom
)
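`_add` is a batched Sherman-Morrison rank-one update: per block, H^-1 becomes H^-1 - (H^-1 g)(H^-1 g)^T / (N + g^T H^-1 g). A pure-Python check on a single 2x2 block (illustrative, not the class API):

```python
def sherman_morrison_update(hinv, g, num_samples):
    # hinv: 2x2 nested list, g: length-2 gradient sample
    hg = [hinv[r][0] * g[0] + hinv[r][1] * g[1] for r in range(2)]  # H^-1 g
    denom = num_samples + g[0] * hg[0] + g[1] * hg[1]  # N + g^T H^-1 g
    return [
        [hinv[r][c] - hg[r] * hg[c] / denom for c in range(2)]
        for r in range(2)
    ]

hinv = [[10.0, 0.0], [0.0, 10.0]]  # damped identity, damp = 0.1
updated = sherman_morrison_update(hinv, [1.0, 2.0], num_samples=1)
print(updated)  # symmetric; updated[0][0] == 10 - 100/51
```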
def _pad(self, x: Tensor):
# pad 1-d tensor to num_blocks * block_size
padded_x = torch.zeros(
self._num_blocks * self._block_size,
dtype=self._hinvs[0].dtype,
device=self._hinvs[0].device,
)
padded_x[: x.size(0)] = x
return padded_x
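`_pad` zero-extends a vector to `num_blocks * block_size` entries; the same idea in isolation (hypothetical helper, equivalent when `num_blocks` is the ceiling of `len(x) / block_size`):

```python
def pad_to_multiple(x, block_size):
    # zero-pad a 1-d sequence up to the next multiple of block_size
    remainder = len(x) % block_size
    if remainder == 0:
        return list(x)
    return list(x) + [0.0] * (block_size - remainder)

print(pad_to_multiple([1.0, 2.0, 3.0], block_size=2))  # [1.0, 2.0, 3.0, 0.0]
```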
def _get_free_gpu_memory(
device_idx: List[int] = [], clear_cache: bool = True
) -> List[float]:
"""
Get free memory available on device(s)
Note: GPUtil and PyTorch may see different devices and device orders depending on
the value of CUDA_VISIBLE_DEVICES. This function honors the PyTorch device view.
:param device_idx: Devices to retrieve free memory for. If empty, will use
all visible devices
:param clear_cache: Whether to clear pytorch reserved memory before retrieving free
memory. Leaving this flag on will result in a larger (and more accurate) free memory
reading, but comes at a (small) cost to pytorch tensor allocation speed. In the case
of very high frequency calls, it may be better to turn clear_cache off.
"""
if not device_idx:
device_idx = list(range(torch.cuda.device_count()))
if not device_idx:
return [] # An empty list signals to use cpu
if "CUDA_VISIBLE_DEVICES" in os.environ:
if not os.environ["CUDA_VISIBLE_DEVICES"]:
raise ValueError(
                "GPU device specified for M-FAC, but no GPUs "
                "were found in CUDA_VISIBLE_DEVICES"
)
gpu_idx_all = [
int(idx) for idx in os.environ["CUDA_VISIBLE_DEVICES"].split(",")
]
gpu_idx = [gpu_idx_all[idx] for idx in device_idx]
else:
gpu_idx = device_idx
if clear_cache:
torch.cuda.empty_cache()
gpus_all = GPUtil.getGPUs()
return [gpus_all[idx].memoryFree for idx in gpu_idx]
def _cuda_list_to_idx(cuda_device_list: List[str]) -> List[int]:
"""
Convert list of cuda device string names to indices.
e.g. "cuda:0" -> 0
"""
return [
int("".join(filter(str.isdigit, device_str))) for device_str in cuda_device_list
]
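As the docstring notes, the helper simply keeps the digits of each device string. A quick demonstration (note this approach would also pick up digits appearing anywhere else in the name):

```python
device_strs = ["cuda:0", "cuda:3"]
indices = [int("".join(filter(str.isdigit, s))) for s in device_strs]
print(indices)  # [0, 3]
```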
def _block_memory_size(block_size: int, element_size: int) -> int:
"""
Calculate memory needed for H^-1 calculations of one block.
"""
# B^2 * e_size - memory required for H^-1
# 4*B * e_size - memory required for additional comp vectors
return (block_size**2 + 4 * block_size) * element_size
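Assuming the module's `BYTES_IN_MIB` constant is 2**20, the default block size of 2000 with float32 gradients (4-byte elements) puts one block at roughly 15.3 MiB:

```python
BYTES_IN_MIB = 1024 * 1024  # assumed value of the module constant

def block_memory_size(block_size, element_size):
    # B^2 elements for H^-1 itself, plus 4*B for scratch vectors
    return (block_size ** 2 + 4 * block_size) * element_size

mem = block_memory_size(block_size=2000, element_size=4)
print(mem, round(mem / BYTES_IN_MIB, 2))  # 16032000 bytes, ~15.29 MiB
```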
The provided code snippet includes necessary dependencies for implementing the `_compute_hessian_inv` function. Write a Python function `def _compute_hessian_inv( grads: Tensor, damp: float, fisher_block_size: int, num_pages: int, available_devices: Optional[List[str]], ) -> FisherInverse` to solve the following problem:
Determine which FisherInverse algorithm to use. :param grads: tensor of gradient samples to compute the Hessian inverse representation with. Should have shape (num_samples, num_parameters) :param damp: dampening factor, default is 1e-5 :param fisher_block_size: optional value to enable blocked computation of the Fisher matrix. Blocks will be formed consecutively along the diagonal. If None, blocked computation is not used. Default is 2000 :param num_pages: number of pages to break the gradient samples into for GPU computation. Only available when blocked computation is not enabled. Default is 1 :param available_devices: list of device names to perform computation on. Default is empty :return: FisherInverse object with access to the diagonal multiplication of the Fisher approximation of the Hessian inverse
Here is the function:
def _compute_hessian_inv(
grads: Tensor,
damp: float,
fisher_block_size: int,
num_pages: int,
available_devices: Optional[List[str]],
) -> FisherInverse:
"""
Determine which FisherInverse algorithm to use.
:param grads: tensor of gradient samples to compute the Hessian inverse
representation with. Should have shape (num_samples, num_parameters)
:param damp: dampening factor, default is 1e-5
:param fisher_block_size: optional value to enable blocked computation of the
Fisher matrix. Blocks will be formed consecutively along the diagonal. If
None, blocked computation is not used. Default is 2000
:param num_pages: number of pages to break the gradient samples into for GPU
computation. Only available when blocked computation is not enabled.
Default is 1
:param available_devices: list of device names to perform computation on. Default
is empty
:return: FisherInverse object with access to the diagonal multiplication of the
Fisher approximation of the Hessian inverse
"""
# The amount of memory required for the computation of one block is the main
# decider in the FisherInverse algorithm to use
if fisher_block_size:
block_mem_size = _block_memory_size(
block_size=fisher_block_size, element_size=grads.element_size()
)
_LOGGER.debug(
f"""
Calculated Fisher block with size {fisher_block_size}
to occupy {block_mem_size} bytes/ {block_mem_size/BYTES_IN_MIB} MiB
in memory
"""
)
if available_devices != ["cpu"]:
free_device_mem = _get_free_gpu_memory(_cuda_list_to_idx(available_devices))
_LOGGER.debug(
"Free memory on devices:"
+ "\n".join(
[
f"{available_devices[i]}: "
f"{str(free_device_mem[i]/BYTES_IN_MIB)}"
for i in range(len(free_device_mem))
]
)
)
# Determine which of the available gpus have enough free memory to host
# the block computation
available_devices = [
gpu
for i, gpu in enumerate(available_devices)
if free_device_mem[i] > block_mem_size / BYTES_IN_MIB
]
# FisherInverseFastBlock works only in sequential mode. Unless only one block
# or less can fit on the GPU, FisherInverseFastSmallBlocks should be used
if len(available_devices) > 0 or not free_device_mem:
_LOGGER.info("Using Small Block Fast Fisher Inverse Implementation")
_LOGGER.debug(
"Using the following devices for M-FAC:" + "\n".join(available_devices)
)
block_fisher_class = FisherInverseFastSmallBlocks
else:
_LOGGER.info(
"Large block size detected - Using Fast Block Fisher Inverse "
"Implementation"
)
block_fisher_class = FisherInverseFastBlock
return block_fisher_class(
grads,
fisher_block_size,
damp=damp,
devices=available_devices,
)
elif available_devices or num_pages > 1:
return FisherInverseFastPageSwap(
grads,
damp=damp,
num_pages=num_pages,
devices=available_devices,
)
else:
return FisherInverseFast(grads, damp=damp) | Determine which FisherInverse algorithm to use. :param grads: tensor of gradient samples to compute the Hessian inverse representation with. Should have shape (num_samples, num_parameters) :param damp: dampening factor, default is 1e-5 :param fisher_block_size: optional value to enable blocked computation of the Fisher matrix. Blocks will be formed consecutively along the diagonal. If None, blocked computation is not used. Default is 2000 :param num_pages: number of pages to break the gradient samples into for GPU computation. Only available when blocked computation is not enabled. Default is 1 :param available_devices: list of device names to perform computation on. Default is empty :return: FisherInverse object with access to the diagonal multiplication of the Fisher approximation of the Hessian inverse |
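The dispatch above boils down to a small decision tree. A simplified standalone sketch (function name illustrative; CPU-only and boundary details omitted, memory amounts in MiB):

```python
def pick_fisher_impl(block_size, block_mem_mib, free_gpu_mem_mib, num_pages):
    """Name the implementation the dispatch would pick (simplified)."""
    if block_size:
        # keep only devices with enough free memory for one block
        fitting = [m for m in free_gpu_mem_mib if m > block_mem_mib]
        if fitting or not free_gpu_mem_mib:
            return "FisherInverseFastSmallBlocks"
        return "FisherInverseFastBlock"  # blocks too large for any GPU
    if free_gpu_mem_mib or num_pages > 1:
        return "FisherInverseFastPageSwap"
    return "FisherInverseFast"

print(pick_fisher_impl(2000, 15.3, [24000.0], num_pages=1))
```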
21,199 | import logging
import math
import os
from abc import ABC, abstractmethod
from functools import wraps
from typing import Any, Dict, List, Optional, Union
import torch
import torch.distributed as dist
from torch import Tensor
from torch.nn import Module, Parameter
from torch.nn.parallel.parallel_apply import parallel_apply
import GPUtil
from sparseml.pytorch.sparsification.modifier import ModifierProp, PyTorchModifierYAML
from sparseml.pytorch.sparsification.pruning.mask_creator import (
PruningMaskCreator,
get_mask_creator_default,
)
from sparseml.pytorch.sparsification.pruning.modifier_pruning_base import (
BaseGradualPruningModifier,
)
from sparseml.pytorch.sparsification.pruning.scorer import PruningParamsGradScorer
from sparseml.pytorch.utils import GradSampler
from sparseml.pytorch.utils.logger import BaseLogger
def _get_num_grads_for_sparsity(
num_grads: Union[Dict[float, int], int], sparsity: Union[float, List[float]]
) -> int:
if isinstance(num_grads, int):
return num_grads
if isinstance(sparsity, List):
sparsity = sum(sparsity) / len(sparsity)
sparsity_thresholds = list(sorted(num_grads, key=lambda key: float(key)))
if 0.0 not in sparsity_thresholds:
raise ValueError(
"Dictionary of sparsity thresholds to number of grads given for "
"num_grads, but 0 not included as a sparsity threshold. "
"0.0 must be included as a sparsity threshold. Given thresholds "
f"{sparsity_thresholds}"
)
idx = 0
while idx < len(sparsity_thresholds) and float(sparsity_thresholds[idx]) < sparsity:
idx += 1
idx = min(idx, len(num_grads) - 1)
return num_grads[sparsity_thresholds[idx]] | null |
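The lookup walks the sorted thresholds to the first one greater than or equal to the (average) sparsity, clamped to the last entry. Traced in isolation (standalone re-implementation for illustration):

```python
def num_grads_for_sparsity(num_grads, sparsity):
    # int means a fixed sample count regardless of sparsity
    if isinstance(num_grads, int):
        return num_grads
    if isinstance(sparsity, list):
        sparsity = sum(sparsity) / len(sparsity)
    thresholds = sorted(num_grads, key=float)
    idx = 0
    while idx < len(thresholds) and float(thresholds[idx]) < sparsity:
        idx += 1
    idx = min(idx, len(num_grads) - 1)  # clamp past the last threshold
    return num_grads[thresholds[idx]]

schedule = {0.0: 64, 0.5: 128, 0.9: 256}
print(num_grads_for_sparsity(schedule, 0.7))   # 256 (first threshold >= 0.7)
print(num_grads_for_sparsity(schedule, 0.95))  # 256 (clamped to last entry)
```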