| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def infer_framework(model_class):
"""
Infers the framework of a given model without using isinstance(), because we cannot guarantee that the relevant
classes are imported or available.
"""
for base_class in inspect.getmro(model_class):
module = base_class.__module__
name = base_class... |
Infers the framework of a given model without using isinstance(), because we cannot guarantee that the relevant
classes are imported or available.
| infer_framework | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
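The `infer_framework` code cell above is truncated; here is a self-contained sketch of the MRO walk it describes. The module-prefix mapping below (`torch` → `"pt"`, etc.) is an assumption for illustration, not the library's exact rule set, and `infer_framework_sketch` is a made-up name.

```python
import inspect

def infer_framework_sketch(model_class) -> str:
    # Walk the class's method-resolution order and inspect each base class's
    # module name -- no isinstance() against classes that may not be importable.
    for base_class in inspect.getmro(model_class):
        module = base_class.__module__
        if module.startswith("torch"):
            return "pt"
        if module.startswith(("tensorflow", "keras")):
            return "tf"
        if module.startswith(("flax", "jax")):
            return "flax"
    raise TypeError(f"Could not infer framework from class {model_class}.")

# Simulate a torch-based class without installing torch.
class FakeModel:
    pass

FakeModel.__module__ = "torch.nn.modules.module"
print(infer_framework_sketch(FakeModel))  # -> pt
```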
def torch_int(x):
"""
Casts an input to a torch int64 tensor if we are in a tracing context, otherwise to a Python int.
"""
if not is_torch_available():
return int(x)
import torch
return x.to(torch.int64) if torch.jit.is_tracing() and isinstance(x, torch.Tensor) else int(x) |
Casts an input to a torch int64 tensor if we are in a tracing context, otherwise to a Python int.
| torch_int | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def torch_float(x):
"""
Casts an input to a torch float32 tensor if we are in a tracing context, otherwise to a Python float.
"""
if not is_torch_available():
return float(x)
import torch
return x.to(torch.float32) if torch.jit.is_tracing() and isinstance(x, torch.Tensor) else float(x) |
Casts an input to a torch float32 tensor if we are in a tracing context, otherwise to a Python float.
| torch_float | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def filter_out_non_signature_kwargs(extra: Optional[list] = None):
"""
Decorator to filter out named arguments that are not in the function signature.
This decorator ensures that only the keyword arguments that match the function's signature, or are specified in the
`extra` list, are passed to the func... |
Decorator to filter out named arguments that are not in the function signature.
This decorator ensures that only the keyword arguments that match the function's signature, or are specified in the
`extra` list, are passed to the function. Any additional keyword arguments are filtered out and a warning is i... | filter_out_non_signature_kwargs | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
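The decorator's body is cut off in the row above; a hedged sketch of the filtering behavior it describes (not the library's exact code, and the warning wording here is made up) could look like:

```python
import inspect
import warnings
from functools import wraps

def filter_out_non_signature_kwargs(extra=None):
    extra = extra or []
    def decorator(func):
        # Keyword names the wrapped function actually declares, plus `extra`.
        allowed = set(inspect.signature(func).parameters) | set(extra)
        @wraps(func)
        def wrapper(*args, **kwargs):
            passed = {k: v for k, v in kwargs.items() if k in allowed}
            dropped = sorted(set(kwargs) - set(passed))
            if dropped:
                warnings.warn(f"Ignoring unexpected keyword arguments: {dropped}")
            return func(*args, **passed)
        return wrapper
    return decorator

@filter_out_non_signature_kwargs()
def resize(image, size):
    return (image, size)

# `bogus` is filtered out (with a warning) instead of raising TypeError.
print(resize("img", size=4, bogus=1))  # -> ('img', 4)
```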
def is_timm_local_checkpoint(pretrained_model_path: str) -> bool:
"""
Checks whether a checkpoint is a timm model checkpoint.
"""
if pretrained_model_path is None:
return False
# in case it's Path, not str
pretrained_model_path = str(pretrained_model_path)
is_file = os.path.isfile(... |
Checks whether a checkpoint is a timm model checkpoint.
| is_timm_local_checkpoint | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def set_attribute_for_modules(module: "torch.nn.Module", key: str, value: Any):
"""
Set a value to a module and all submodules.
"""
setattr(module, key, value)
for submodule in module.children():
set_attribute_for_modules(submodule, key, value) |
Set a value to a module and all submodules.
| set_attribute_for_modules | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
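The recursion in `set_attribute_for_modules` is simple enough to demonstrate without torch; `Node` below is a hypothetical stand-in for `torch.nn.Module` exposing the same `children()` iteration:

```python
from typing import Any

class Node:
    """Hypothetical stand-in for torch.nn.Module."""
    def __init__(self, *children: "Node"):
        self._children = list(children)
    def children(self):
        return iter(self._children)

def set_attribute_for_modules(module: Node, key: str, value: Any) -> None:
    # Same shape as the helper above: set on this node, then recurse.
    setattr(module, key, value)
    for submodule in module.children():
        set_attribute_for_modules(submodule, key, value)

leaf = Node()
root = Node(Node(leaf))
set_attribute_for_modules(root, "flag", True)
print(root.flag, leaf.flag)  # -> True True
```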
def del_attribute_from_modules(module: "torch.nn.Module", key: str):
"""
Delete a value from a module and all submodules.
"""
# because we might remove it previously in case it's a shared module, e.g. activation function
if hasattr(module, key):
delattr(module, key)
for submodule in mod... |
Delete a value from a module and all submodules.
| del_attribute_from_modules | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def can_return_tuple(func):
"""
Decorator to wrap model method, to call output.to_tuple() if return_dict=False passed as a kwarg or
use_return_dict=False is set in the config.
Note:
output.to_tuple() converts the output to a tuple, skipping all `None` values.
"""
@wraps(func)
def wrapper(s... |
Decorator to wrap model method, to call output.to_tuple() if return_dict=False passed as a kwarg or
use_return_dict=False is set in the config.
Note:
output.to_tuple() converts the output to a tuple, skipping all `None` values.
| can_return_tuple | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
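A minimal sketch of the wrapping behavior described above, covering only the `return_dict=False` kwarg path (the real decorator also consults `config.use_return_dict`); `DemoOutput` is a made-up output type implementing the `to_tuple()` contract:

```python
from dataclasses import dataclass
from functools import wraps
from typing import Optional

@dataclass
class DemoOutput:
    """Hypothetical output type with the to_tuple() contract described above."""
    logits: int
    loss: Optional[int] = None
    def to_tuple(self):
        # Skip `None` values, as the docstring notes.
        return tuple(v for v in (self.loss, self.logits) if v is not None)

def can_return_tuple(func):
    @wraps(func)
    def wrapper(*args, return_dict=True, **kwargs):
        output = func(*args, **kwargs)
        return output if return_dict else output.to_tuple()
    return wrapper

@can_return_tuple
def forward(x):
    return DemoOutput(logits=2 * x)

print(forward(3, return_dict=False))  # -> (6,)
```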
def list_repo_templates(
repo_id: str,
*,
local_files_only: bool,
revision: Optional[str] = None,
cache_dir: Optional[str] = None,
) -> list[str]:
"""List template files from a repo.
A template is a jinja file located under the `additional_chat_templates/` folder.
If working in offline ... | List template files from a repo.
A template is a jinja file located under the `additional_chat_templates/` folder.
If working in offline mode or if internet is down, the method will list jinja templates from the local cache, if any.
| list_repo_templates | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def http_user_agent(user_agent: Union[dict, str, None] = None) -> str:
"""
Formats a user-agent string with basic info about a request.
"""
ua = f"transformers/{__version__}; python/{sys.version.split()[0]}; session_id/{SESSION_ID}"
if is_torch_available():
ua += f"; torch/{_torch_version}"
... |
Formats a user-agent string with basic info about a request.
| http_user_agent | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
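The user-agent assembly is straightforward to sketch; the `transformers/4.x` version string below is a placeholder, and the real helper also appends a session id and framework versions it detects:

```python
import sys
from typing import Union

def http_user_agent(user_agent: Union[dict, str, None] = None) -> str:
    # Base string mirrors the pattern in the row above (version is a placeholder).
    ua = f"transformers/4.x; python/{sys.version.split()[0]}"
    if isinstance(user_agent, dict):
        ua += "; " + "; ".join(f"{k}/{v}" for k, v in user_agent.items())
    elif isinstance(user_agent, str):
        ua += "; " + user_agent
    return ua

print(http_user_agent({"torch": "2.4"}))
```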
def download_url(url, proxies=None):
"""
Downloads a given url in a temporary file. This function is not safe to use in multiple processes. Its only use is
for deprecated behavior allowing to download config/models with a single url instead of using the Hub.
Args:
url (`str`): The url of the fi... |
Downloads a given url in a temporary file. This function is not safe to use in multiple processes. Its only use is
for deprecated behavior allowing to download config/models with a single url instead of using the Hub.
Args:
url (`str`): The url of the file to download.
proxies (`Dict[str, ... | download_url | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def has_file(
path_or_repo: Union[str, os.PathLike],
filename: str,
revision: Optional[str] = None,
proxies: Optional[dict[str, str]] = None,
token: Optional[Union[bool, str]] = None,
*,
local_files_only: bool = False,
cache_dir: Union[str, Path, None] = None,
repo_type: Optional[str... |
Checks if a repo contains a given file without downloading it. Works for remote repos and local folders.
If offline mode is enabled, checks if the file exists in the cache.
<Tip warning={false}>
This function will raise an error if the repository `path_or_repo` is not valid or if `revision` does not... | has_file | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def _create_repo(
self,
repo_id: str,
private: Optional[bool] = None,
token: Optional[Union[bool, str]] = None,
repo_url: Optional[str] = None,
organization: Optional[str] = None,
) -> str:
"""
Create the repo if needed, cleans up repo_id with deprecat... |
Create the repo if needed, cleans up repo_id with deprecated kwargs `repo_url` and `organization`, retrieves
the token.
| _create_repo | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def _upload_modified_files(
self,
working_dir: Union[str, os.PathLike],
repo_id: str,
files_timestamps: dict[str, float],
commit_message: Optional[str] = None,
token: Optional[Union[bool, str]] = None,
create_pr: bool = False,
revision: Optional[str] = Non... |
Uploads all modified files in `working_dir` to `repo_id`, based on `files_timestamps`.
| _upload_modified_files | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def send_example_telemetry(example_name, *example_args, framework="pytorch"):
"""
Sends telemetry that helps track example usage.
Args:
example_name (`str`): The name of the example.
*example_args (dataclasses or `argparse.ArgumentParser`): The arguments to the script. This function w... |
Sends telemetry that helps track example usage.
Args:
example_name (`str`): The name of the example.
*example_args (dataclasses or `argparse.ArgumentParser`): The arguments to the script. This function will only
try to extract the model and dataset name from those. Nothing el... | send_example_telemetry | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def convert_file_size_to_int(size: Union[int, str]):
"""
Converts a size expressed as a string with digits and a unit (like `"5MB"`) to an integer (in bytes).
Args:
size (`int` or `str`): The size to convert. Will be directly returned if an `int`.
Example:
```py
>>> convert_file_size_to_i... |
Converts a size expressed as a string with digits and a unit (like `"5MB"`) to an integer (in bytes).
Args:
size (`int` or `str`): The size to convert. Will be directly returned if an `int`.
Example:
```py
>>> convert_file_size_to_int("1MiB")
1048576
```
| convert_file_size_to_int | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
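The doctest in the row (`"1MiB"` → `1048576`) implies both binary and decimal units are accepted; a sketch honoring that (the unit table is an assumption, not necessarily the library's exact one):

```python
def convert_file_size_to_int(size):
    if isinstance(size, int):
        return size
    s = size.upper()
    # Check binary suffixes before decimal ones so "MIB" is not matched as "B".
    units = {"KIB": 2**10, "MIB": 2**20, "GIB": 2**30,
             "KB": 10**3, "MB": 10**6, "GB": 10**9}
    for suffix, factor in units.items():
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * factor)
    raise ValueError(f"`size` {size!r} is not in a valid format.")

print(convert_file_size_to_int("1MiB"))  # -> 1048576
print(convert_file_size_to_int("5MB"))   # -> 5000000
```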
def get_checkpoint_shard_files(
pretrained_model_name_or_path,
index_filename,
cache_dir=None,
force_download=False,
proxies=None,
resume_download=None,
local_files_only=False,
token=None,
user_agent=None,
revision=None,
subfolder="",
_commit_hash=None,
**deprecated_k... |
For a given model:
- download and cache all the shards of a sharded checkpoint if `pretrained_model_name_or_path` is a model ID on the
Hub
- returns the list of paths to all the shards, as well as some metadata.
For the description of each arg, see [`PreTrainedModel.from_pretrained`]. `index_fi... | get_checkpoint_shard_files | python | huggingface/transformers | src/transformers/utils/hub.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/hub.py | Apache-2.0 |
def is_torch_deterministic():
"""
Check whether PyTorch uses deterministic algorithms by checking if torch.get_deterministic_debug_mode() returns 1 or 2.
"""
if is_torch_available():
import torch
if torch.get_deterministic_debug_mode() == 0:
return False
else:
... |
Check whether PyTorch uses deterministic algorithms by checking if torch.get_deterministic_debug_mode() returns 1 or 2.
| is_torch_deterministic | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_xla_available(check_is_tpu=False, check_is_gpu=False):
"""
Check if `torch_xla` is available. To train a native pytorch job in an environment with torch xla installed, set
the USE_TORCH_XLA environment variable to false.
"""
assert not (check_is_tpu and check_is_gpu), "The check_is_tpu and check_is_gpu cann... |
Check if `torch_xla` is available. To train a native pytorch job in an environment with torch xla installed, set
the USE_TORCH_XLA environment variable to false.
| is_torch_xla_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_npu_available(check_device=False):
"Checks if `torch_npu` is installed and potentially if an NPU is in the environment"
if not _torch_available or importlib.util.find_spec("torch_npu") is None:
return False
import torch
import torch_npu # noqa: F401
if check_device:
tr... | Checks if `torch_npu` is installed and potentially if an NPU is in the environment | is_torch_npu_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_mlu_available(check_device=False):
"""
Checks if `mlu` is available via a `cndev`-based check that won't trigger the drivers and leaves mlu
uninitialized.
"""
if not _torch_available or importlib.util.find_spec("torch_mlu") is None:
return False
import torch
import tor... |
Checks if `mlu` is available via a `cndev`-based check that won't trigger the drivers and leaves mlu
uninitialized.
| is_torch_mlu_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_musa_available(check_device=False):
"Checks if `torch_musa` is installed and potentially if a MUSA is in the environment"
if not _torch_available or importlib.util.find_spec("torch_musa") is None:
return False
import torch
import torch_musa # noqa: F401
torch_musa_min_version... | Checks if `torch_musa` is installed and potentially if a MUSA is in the environment | is_torch_musa_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_hpu_available():
"Checks if `torch.hpu` is available and potentially if an HPU is in the environment"
if (
not _torch_available
or importlib.util.find_spec("habana_frameworks") is None
or importlib.util.find_spec("habana_frameworks.torch") is None
):
return False
... | Checks if `torch.hpu` is available and potentially if an HPU is in the environment | is_torch_hpu_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_ninja_available():
r"""
Code comes from *torch.utils.cpp_extension.is_ninja_available()*. Returns `True` if the
[ninja](https://ninja-build.org/) build system is available on the system, `False` otherwise.
"""
try:
subprocess.check_output("ninja --version".split())
except Exceptio... |
Code comes from *torch.utils.cpp_extension.is_ninja_available()*. Returns `True` if the
[ninja](https://ninja-build.org/) build system is available on the system, `False` otherwise.
| is_ninja_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_xpu_available(check_device=False):
"""
Checks if XPU acceleration is available either via native PyTorch (>=2.6),
`intel_extension_for_pytorch` or via stock PyTorch (>=2.4) and potentially
if an XPU is in the environment.
"""
if not is_torch_available():
return False
tor... |
Checks if XPU acceleration is available either via native PyTorch (>=2.6),
`intel_extension_for_pytorch` or via stock PyTorch (>=2.4) and potentially
if an XPU is in the environment.
| is_torch_xpu_available | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def is_torch_greater_or_equal(library_version: str, accept_dev: bool = False):
"""
Accepts a library version and returns True if the current version of the library is greater than or equal to the
given version. If `accept_dev` is True, it will also accept development versions (e.g. 2.7.0.dev20250320 matches... |
Accepts a library version and returns True if the current version of the library is greater than or equal to the
given version. If `accept_dev` is True, it will also accept development versions (e.g. 2.7.0.dev20250320 matches
2.7.0).
| is_torch_greater_or_equal | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
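The `accept_dev` behavior can be reproduced with a small stdlib-only parser. The real helper reads the installed torch version itself; treat this as an illustrative sketch with made-up names, not the library's implementation:

```python
import re

def _parse(v: str):
    """Split '2.7.0' or '2.7.0.dev20250320' into (release_tuple, is_dev)."""
    m = re.match(r"(\d+(?:\.\d+)*)(\.dev\d+)?$", v)
    release = tuple(int(part) for part in m.group(1).split("."))
    return release, m.group(2) is not None

def is_greater_or_equal(current: str, target: str, accept_dev: bool = False) -> bool:
    (cur, cur_dev), (tgt, _) = _parse(current), _parse(target)
    if cur != tgt:
        return cur >= tgt
    # Same base release: a dev build only counts when accept_dev is True,
    # since 2.7.0.dev20250320 sorts before 2.7.0 proper.
    return accept_dev or not cur_dev

print(is_greater_or_equal("2.7.0.dev20250320", "2.7.0", accept_dev=True))   # -> True
print(is_greater_or_equal("2.7.0.dev20250320", "2.7.0", accept_dev=False))  # -> False
```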
def direct_transformers_import(path: str, file="__init__.py") -> ModuleType:
"""Imports transformers directly
Args:
path (`str`): The path to the source file
file (`str`, *optional*): The file to join with the path. Defaults to "__init__.py".
Returns:
`ModuleType`: The resulting im... | Imports transformers directly
Args:
path (`str`): The path to the source file
file (`str`, *optional*): The file to join with the path. Defaults to "__init__.py".
Returns:
`ModuleType`: The resulting imported module
| direct_transformers_import | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def requires(*, backends=()):
"""
This decorator enables two things:
- Attaching a `__backends` tuple to an object to see which backends are necessary for it
to execute correctly, without instantiating it
- The '@requires' string is used to dynamically import objects
"""
if not isinstan... |
This decorator enables two things:
- Attaching a `__backends` tuple to an object to see which backends are necessary for it
to execute correctly, without instantiating it
- The '@requires' string is used to dynamically import objects
| requires | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def fetch__all__(file_content):
"""
Returns the content of the __all__ variable in the file content.
Returns an empty list if not defined, otherwise returns a list of strings.
"""
if "__all__" not in file_content:
return []
start_index = None
lines = file_content.splitlines()
for index,... |
Returns the content of the __all__ variable in the file content.
Returns an empty list if not defined, otherwise returns a list of strings.
| fetch__all__ | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
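An `ast`-based alternative makes the contract easy to see (the library's version scans lines textually instead; `fetch_all_sketch` is a made-up name):

```python
import ast

def fetch_all_sketch(file_content: str):
    # Same contract as above: empty list when __all__ is absent,
    # otherwise the list of exported names.
    if "__all__" not in file_content:
        return []
    for node in ast.parse(file_content).body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name) and target.id == "__all__"
                        and isinstance(node.value, ast.List)):
                    return [ast.literal_eval(elt) for elt in node.value.elts]
    return []

src = "x = 1\n__all__ = ['foo', 'bar']\n"
print(fetch_all_sketch(src))  # -> ['foo', 'bar']
```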
def create_import_structure_from_path(module_path):
"""
This method takes the path to a file/a folder and returns the import structure.
If a file is given, it will return the import structure of the parent folder.
Import structures are designed to be digestible by `_LazyModule` objects. They are
cr... |
This method takes the path to a file/a folder and returns the import structure.
If a file is given, it will return the import structure of the parent folder.
Import structures are designed to be digestible by `_LazyModule` objects. They are
created from the __all__ definitions in each files as well as... | create_import_structure_from_path | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def spread_import_structure(nested_import_structure):
"""
This method takes as input an unordered import structure and brings the required backends at the top-level,
aggregating modules and objects under their required backends.
Here's an example of an input import structure at the src.transformers.mod... |
This method takes as input an unordered import structure and brings the required backends at the top-level,
aggregating modules and objects under their required backends.
Here's an example of an input import structure at the src.transformers.models level:
{
'albert': {
frozenset()... | spread_import_structure | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def define_import_structure(module_path: str, prefix: Optional[str] = None) -> IMPORT_STRUCTURE_T:
"""
This method takes a module_path as input and creates an import structure digestible by a _LazyModule.
Here's an example of an output import structure at the src.transformers.models level:
{
f... |
This method takes a module_path as input and creates an import structure digestible by a _LazyModule.
Here's an example of an output import structure at the src.transformers.models level:
{
frozenset({'tokenizers'}): {
'albert.tokenization_albert_fast': {'AlbertTokenizerFast'}
... | define_import_structure | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def clear_import_cache():
"""
Clear cached Transformers modules to allow reloading modified code.
This is useful when actively developing/modifying Transformers code.
"""
# Get all transformers modules
transformers_modules = [mod_name for mod_name in sys.modules if mod_name.startswith("transfor... |
Clear cached Transformers modules to allow reloading modified code.
This is useful when actively developing/modifying Transformers code.
| clear_import_cache | python | huggingface/transformers | src/transformers/utils/import_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/import_utils.py | Apache-2.0 |
def _get_default_logging_level():
"""
If TRANSFORMERS_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is
not - fall back to `_default_log_level`
"""
env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
if env_level_str:
if env_level_s... |
If TRANSFORMERS_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is
not - fall back to `_default_log_level`
| _get_default_logging_level | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def get_logger(name: Optional[str] = None) -> logging.Logger:
"""
Return a logger with the specified name.
This function is not supposed to be directly accessed unless you are writing a custom transformers module.
"""
if name is None:
name = _get_library_name()
_configure_library_root... |
Return a logger with the specified name.
This function is not supposed to be directly accessed unless you are writing a custom transformers module.
| get_logger | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def disable_default_handler() -> None:
"""Disable the default handler of the HuggingFace Transformers's root logger."""
_configure_library_root_logger()
assert _default_handler is not None
_get_library_root_logger().removeHandler(_default_handler) | Disable the default handler of the HuggingFace Transformers's root logger. | disable_default_handler | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def enable_default_handler() -> None:
"""Enable the default handler of the HuggingFace Transformers's root logger."""
_configure_library_root_logger()
assert _default_handler is not None
_get_library_root_logger().addHandler(_default_handler) | Enable the default handler of the HuggingFace Transformers's root logger. | enable_default_handler | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def add_handler(handler: logging.Handler) -> None:
"""Adds a handler to the HuggingFace Transformers's root logger."""
_configure_library_root_logger()
assert handler is not None
_get_library_root_logger().addHandler(handler) | Adds a handler to the HuggingFace Transformers's root logger. | add_handler | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def remove_handler(handler: logging.Handler) -> None:
"""Removes the given handler from the HuggingFace Transformers's root logger."""
_configure_library_root_logger()
assert handler is not None and handler in _get_library_root_logger().handlers
_get_library_root_logger().removeHandler(handler) | Removes the given handler from the HuggingFace Transformers's root logger. | remove_handler | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def disable_propagation() -> None:
"""
Disable propagation of the library log outputs. Note that log propagation is disabled by default.
"""
_configure_library_root_logger()
_get_library_root_logger().propagate = False |
Disable propagation of the library log outputs. Note that log propagation is disabled by default.
| disable_propagation | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def enable_propagation() -> None:
"""
Enable propagation of the library log outputs. Please disable the HuggingFace Transformers's default handler to
prevent double logging if the root logger has been configured.
"""
_configure_library_root_logger()
_get_library_root_logger().propagate = True |
Enable propagation of the library log outputs. Please disable the HuggingFace Transformers's default handler to
prevent double logging if the root logger has been configured.
| enable_propagation | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def enable_explicit_format() -> None:
"""
Enable explicit formatting for every HuggingFace Transformers's logger. The explicit formatter is as follows:
```
[LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
```
All handlers currently bound to the root logger are affected by this method.
""... |
Enable explicit formatting for every HuggingFace Transformers's logger. The explicit formatter is as follows:
```
[LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
```
All handlers currently bound to the root logger are affected by this method.
| enable_explicit_format | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
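The quoted format maps directly onto `logging.Formatter` placeholders; here is a hedged local demo, applied to a throwaway logger rather than the library's root logger:

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)
# [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
handler.setFormatter(logging.Formatter(
    "[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s"))

logger = logging.getLogger("explicit-format-demo")
logger.addHandler(handler)
logger.warning("hello")
print(buf.getvalue().strip())
```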
def reset_format() -> None:
"""
Resets the formatting for HuggingFace Transformers's loggers.
All handlers currently bound to the root logger are affected by this method.
"""
handlers = _get_library_root_logger().handlers
for handler in handlers:
handler.setFormatter(None) |
Resets the formatting for HuggingFace Transformers's loggers.
All handlers currently bound to the root logger are affected by this method.
| reset_format | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def warning_advice(self, *args, **kwargs):
"""
This method is identical to `logger.warning()`, but if env var TRANSFORMERS_NO_ADVISORY_WARNINGS=1 is set, this
warning will not be printed
"""
no_advisory_warnings = os.getenv("TRANSFORMERS_NO_ADVISORY_WARNINGS", False)
if no_advisory_warnings:
... |
This method is identical to `logger.warning()`, but if env var TRANSFORMERS_NO_ADVISORY_WARNINGS=1 is set, this
warning will not be printed
| warning_advice | python | huggingface/transformers | src/transformers/utils/logging.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/logging.py | Apache-2.0 |
def attach_tracer(tracer_name_template=None):
"""
Decorator that attaches a tracer to a class.
This decorator should be applied to classes that need OpenTelemetry tracing.
It adds a tracer attribute to the class instance that can be used by the traced decorator.
Args:
tracer_name_template:... |
Decorator that attaches a tracer to a class.
This decorator should be applied to classes that need OpenTelemetry tracing.
It adds a tracer attribute to the class instance that can be used by the traced decorator.
Args:
tracer_name_template: Optional template string for the tracer name.
... | attach_tracer | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def traced(
func=None,
*,
span_name=None,
standalone=False,
additional_attributes: Optional[List[Tuple[str, str, Union[Any, Callable[[Any], Any]]]]] = None,
):
"""
Decorator to trace function calls with OpenTelemetry.
Can be used as @traced or @traced(span_name="custom_name")
Args:... |
Decorator to trace function calls with OpenTelemetry.
Can be used as @traced or @traced(span_name="custom_name")
Args:
func: The function to trace
span_name: Optional custom name for the span (defaults to function name)
standalone: If True, creates a parentless span
additi... | traced | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def _setup_metrics(self):
"""Initialize OpenTelemetry metrics and tracing if the library is available."""
if not _has_opentelemetry:
logger.info("OpenTelemetry is not installed. Metrics and tracing will not be recorded.")
return
self.meter = metrics.get_meter("transform... | Initialize OpenTelemetry metrics and tracing if the library is available. | _setup_metrics | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def record_ttft_metric(self, created_time: float, request_id: str) -> None:
"""Record Time to First Token (TTFT).
Args:
created_time: The time the request was created
request_id: The ID of the request
"""
if not _has_opentelemetry:
return
ttf... | Record Time to First Token (TTFT).
Args:
created_time: The time the request was created
request_id: The ID of the request
| record_ttft_metric | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def record_batch_metrics(self, requests_in_batch: List) -> None:
"""Record metrics about the batch composition including decode/prefill ratio and batch fill percentage.
Args:
requests_in_batch: List of request states in the current batch
"""
if not _has_opentelemetry or not ... | Record metrics about the batch composition including decode/prefill ratio and batch fill percentage.
Args:
requests_in_batch: List of request states in the current batch
| record_batch_metrics | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def record_kv_cache_memory_metrics(self, cache) -> None:
"""Record memory usage of the PagedAttentionCache without GPU synchronization.
This calculates the theoretical memory usage based on cache configuration
and the number of blocks currently in use.
Args:
cache: The Page... | Record memory usage of the PagedAttentionCache without GPU synchronization.
This calculates the theoretical memory usage based on cache configuration
and the number of blocks currently in use.
Args:
cache: The PagedAttentionCache object to measure
| record_kv_cache_memory_metrics | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def record_queue_metrics(self, active_requests: int, waiting_requests: int) -> None:
"""Record metrics about active and waiting requests.
Args:
active_requests: Number of active requests
waiting_requests: Number of waiting requests
"""
if not _has_opentelemetry:
... | Record metrics about active and waiting requests.
Args:
active_requests: Number of active requests
waiting_requests: Number of waiting requests
| record_queue_metrics | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def record_request_completion(self, created_time: float, request_id: str) -> None:
"""Record metrics about a completed request.
Args:
created_time: The time the request was created
request_id: The ID of the request
"""
if not _has_opentelemetry:
retur... | Record metrics about a completed request.
Args:
created_time: The time the request was created
request_id: The ID of the request
| record_request_completion | python | huggingface/transformers | src/transformers/utils/metrics.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/metrics.py | Apache-2.0 |
def get_device_map(n_layers, devices):
"""Returns a dictionary of layers distributed evenly across all devices."""
layers = list(range(n_layers))
n_blocks = int(ceil(n_layers / len(devices)))
layers_list = [layers[i : i + n_blocks] for i in range(0, n_layers, n_blocks)]
return dict(zip(devices, lay... | Returns a dictionary of layers distributed evenly across all devices. | get_device_map | python | huggingface/transformers | src/transformers/utils/model_parallel_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/model_parallel_utils.py | Apache-2.0 |
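The row above is cut off mid-return; a plausible self-contained reconstruction of the helper (even chunking of layer indices, last device possibly getting fewer layers) is:

```python
from math import ceil

def get_device_map(n_layers, devices):
    """Distribute layer indices as evenly as possible across devices."""
    layers = list(range(n_layers))
    n_blocks = int(ceil(n_layers / len(devices)))
    # Slice the layer list into contiguous blocks, one per device.
    layers_list = [layers[i : i + n_blocks] for i in range(0, n_layers, n_blocks)]
    return dict(zip(devices, layers_list))
```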
def format_time(t):
"Format `t` (in seconds) to (h):mm:ss"
t = int(t)
h, m, s = t // 3600, (t // 60) % 60, t % 60
return f"{h}:{m:02d}:{s:02d}" if h != 0 else f"{m:02d}:{s:02d}" | Format `t` (in seconds) to (h):mm:ss | format_time | python | huggingface/transformers | src/transformers/utils/notebook.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/notebook.py | Apache-2.0 |
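`format_time` is shown complete above; a quick usage sketch (same logic, reproduced here so it runs standalone) shows the hour field is dropped when zero:

```python
def format_time(t):
    """Format `t` (in seconds) to (h):mm:ss."""
    t = int(t)
    h, m, s = t // 3600, (t // 60) % 60, t % 60
    return f"{h}:{m:02d}:{s:02d}" if h != 0 else f"{m:02d}:{s:02d}"

# 3725 s = 1 h 2 min 5 s; 125 s = 2 min 5 s
```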
def text_to_html_table(items):
"Put the texts in `items` in an HTML table."
html_code = """<table border="1" class="dataframe">\n"""
html_code += """ <thead>\n <tr style="text-align: left;">\n"""
for i in items[0]:
html_code += f" <th>{i}</th>\n"
html_code += " </tr>\n </thead>\n ... | Put the texts in `items` in an HTML table. | text_to_html_table | python | huggingface/transformers | src/transformers/utils/notebook.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/notebook.py | Apache-2.0 |
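The `text_to_html_table` row is truncated after the header; a hedged reconstruction of the full builder (header row from `items[0]`, remaining rows as `<td>` cells) might look like:

```python
def text_to_html_table(items):
    """Render rows of text as a minimal HTML table; items[0] is the header."""
    html = '<table border="1" class="dataframe">\n'
    html += '  <thead>\n    <tr style="text-align: left;">\n'
    for col in items[0]:
        html += f"      <th>{col}</th>\n"
    html += "    </tr>\n  </thead>\n  <tbody>\n"
    for row in items[1:]:
        html += "    <tr>\n"
        for cell in row:
            html += f"      <td>{cell}</td>\n"
        html += "    </tr>\n"
    html += "  </tbody>\n</table>"
    return html
```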
def update(self, value: int, force_update: bool = False, comment: Optional[str] = None):
"""
The main method to update the progress bar to `value`.
Args:
value (`int`):
The value to use. Must be between 0 and `total`.
force_update (`bool`, *optional*, def... |
The main method to update the progress bar to `value`.
Args:
value (`int`):
The value to use. Must be between 0 and `total`.
force_update (`bool`, *optional*, defaults to `False`):
                Whether or not to force an update of the internal state and disp...
def write_line(self, values):
"""
Write the values in the inner table.
Args:
values (`Dict[str, float]`): The values to display.
"""
if self.inner_table is None:
self.inner_table = [list(values.keys()), list(values.values())]
else:
col... |
Write the values in the inner table.
Args:
values (`Dict[str, float]`): The values to display.
| write_line | python | huggingface/transformers | src/transformers/utils/notebook.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/notebook.py | Apache-2.0 |
def check_peft_version(min_version: str) -> None:
r"""
Checks if the version of PEFT is compatible.
Args:
version (`str`):
The version of PEFT to check against.
"""
if not is_peft_available():
raise ValueError("PEFT is not installed. Please install it with `pip install p... |
Checks if the version of PEFT is compatible.
Args:
version (`str`):
The version of PEFT to check against.
| check_peft_version | python | huggingface/transformers | src/transformers/utils/peft_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/peft_utils.py | Apache-2.0 |
def from_dict(cls, config_dict, return_unused_kwargs=False, **kwargs):
"""
Instantiates a [`QuantizationConfigMixin`] from a Python dictionary of parameters.
Args:
config_dict (`Dict[str, Any]`):
Dictionary that will be used to instantiate the configuration object.
... |
Instantiates a [`QuantizationConfigMixin`] from a Python dictionary of parameters.
Args:
config_dict (`Dict[str, Any]`):
Dictionary that will be used to instantiate the configuration object.
return_unused_kwargs (`bool`,*optional*, defaults to `False`):
... | from_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_json_file(self, json_file_path: Union[str, os.PathLike]):
"""
Save this instance to a JSON file.
Args:
json_file_path (`str` or `os.PathLike`):
Path to the JSON file in which this configuration instance's parameters will be saved.
use_diff (`bool`,... |
Save this instance to a JSON file.
Args:
json_file_path (`str` or `os.PathLike`):
Path to the JSON file in which this configuration instance's parameters will be saved.
use_diff (`bool`, *optional*, defaults to `True`):
If set to `True`, only the... | to_json_file | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def __iter__(self):
"""allows `dict(obj)` for situations where obj may be a dict or QuantizationConfigMixin"""
for attr, value in copy.deepcopy(self.__dict__).items():
yield attr, value | allows `dict(obj)` for situations where obj may be a dict or QuantizationConfigMixin | __iter__ | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
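The `__iter__` above exists so `dict(obj)` works whether `obj` is a plain dict or a config object. A self-contained sketch wrapping the same generator in a hypothetical class:

```python
import copy

class ConfigLike:
    """Minimal object whose attributes can be read out with dict(obj)."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __iter__(self):
        # Yield (attr, value) pairs from a deep copy so callers cannot
        # mutate the config through the returned values.
        for attr, value in copy.deepcopy(self.__dict__).items():
            yield attr, value

cfg = ConfigLike(bits=4, group_size=128)
```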
def to_json_string(self, use_diff: bool = True) -> str:
"""
Serializes this instance to a JSON string.
Args:
use_diff (`bool`, *optional*, defaults to `True`):
If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
... |
Serializes this instance to a JSON string.
Args:
use_diff (`bool`, *optional*, defaults to `True`):
If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
is serialized to JSON string.
Returns:
... | to_json_string | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def update(self, **kwargs):
"""
Updates attributes of this class instance with attributes from `kwargs` if they match existing attributes,
returning all the unused kwargs.
Args:
kwargs (`Dict[str, Any]`):
Dictionary of attributes to tentatively update this cl... |
Updates attributes of this class instance with attributes from `kwargs` if they match existing attributes,
returning all the unused kwargs.
Args:
kwargs (`Dict[str, Any]`):
Dictionary of attributes to tentatively update this class.
Returns:
`Dic... | update | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
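The `update` contract above (set matching attributes, hand back the rest) can be sketched as a free function; the `Cfg` class is a made-up stand-in for a quantization config:

```python
def update(obj, **kwargs):
    """Set attributes that already exist on `obj`; return the unused kwargs."""
    unused = {}
    for key, value in kwargs.items():
        if hasattr(obj, key):
            setattr(obj, key, value)
        else:
            unused[key] = value  # unknown keys are returned, not silently dropped
    return unused

class Cfg:
    bits = 8
```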
def post_init(self):
r"""Safety checker that arguments are correct."""
if self.bits not in [2, 3, 4, 8]:
raise ValueError(f"Only support quantization to [2,3,4,8] bits but found {self.bits}")
if self.group_size != -1 and self.group_size <= 0:
raise ValueError("group_size ... | Safety checker that arguments are correct. | post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def from_dict(cls, config: Dict[str, Any]):
"""
Override from_dict, used in AutoQuantizationConfig.from_dict in quantizers/auto.py
"""
instance = cls()
instance.quant_config = config["quant_config"]
instance.skip_modules = config["skip_modules"]
return instance |
Override from_dict, used in AutoQuantizationConfig.from_dict in quantizers/auto.py
| from_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_dict(self) -> Dict[str, Any]:
"""
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
"""
return {
"quant_config": self.quant_config,
"quant_metho... |
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
| to_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_diff_dict(self) -> Dict[str, Any]:
"""
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this confi... |
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance,
| to_diff_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
"""
if not isinstance(self.load_in_4bit, bool):
raise TypeError("load_in_4bit must be a boolean")
if not isinstance(self.load_in_8bi... |
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def quantization_method(self):
r"""
This method returns the quantization method used for the model. If the model is not quantizable, it returns
`None`.
"""
if self.load_in_8bit:
return "llm_int8"
elif self.load_in_4bit and self.bnb_4bit_quant_type == "fp4":
... |
This method returns the quantization method used for the model. If the model is not quantizable, it returns
`None`.
| quantization_method | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_dict(self) -> Dict[str, Any]:
"""
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
"""
output = copy.deepcopy(self.__dict__)
output["bnb_4bit_compute_dtype"] =... |
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
| to_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_diff_dict(self) -> Dict[str, Any]:
"""
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this conf... |
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance,
| to_diff_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
if self.bits not in [2, 3, 4, 8]:
raise ValueError(f"Only support quantization to [2,3,4,8] bits but found {self.bits}")
if self.group_size != -1 and self.group_size <= 0:
raise ValueE... |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_dict_optimum(self):
"""
Get compatible dict for optimum gptq config
"""
quant_dict = self.to_dict()
# make it compatible with optimum config
quant_dict["disable_exllama"] = not self.use_exllama
return quant_dict |
Get compatible dict for optimum gptq config
| to_dict_optimum | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def from_dict_optimum(cls, config_dict):
"""
Get compatible class with optimum gptq config dict
"""
if "disable_exllama" in config_dict:
config_dict["use_exllama"] = not config_dict["disable_exllama"]
# switch to None to not trigger the warning
config... |
Get compatible class with optimum gptq config dict
| from_dict_optimum | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
if self.backend not in [AwqBackendPackingMethod.AUTOAWQ, AwqBackendPackingMethod.LLMAWQ]:
raise ValueError(
f"Only supported quantization backends in {AwqBackendPackingMethod.AUTOAWQ} and ... |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
"""
if not isinstance(self.in_group_size, int):
            raise TypeError("in_group_size must be an int")
if not isinstance(self.out_group_siz... |
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
if self.is_indice_packed is False:
raise ValueError("is_indice_packed should always be True") |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
for layer_name, layer_param in self.config_for_layers.items():
VptqLayerConfig(**layer_param)
if self.enable_proxy_error is True:
raise ValueError("enable_proxy_error should always be ... |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
accepted_weights = ["float8", "int8", "int4", "int2"]
accepted_activations = [None, "int8", "float8"]
if self.weights not in accepted_weights:
raise ValueError(f"Only support weights in {a... |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
accepted_weights = ["int8"]
if self.weights not in accepted_weights:
raise ValueError(f"Only support weights in {accepted_weights} but found {self.weights}") |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def from_dict(cls, config_dict, return_unused_kwargs=False, **kwargs):
"""
Instantiates a [`CompressedTensorsConfig`] from a Python dictionary of parameters.
Optionally unwraps any args from the nested quantization_config
Args:
config_dict (`Dict[str, Any]`):
... |
Instantiates a [`CompressedTensorsConfig`] from a Python dictionary of parameters.
Optionally unwraps any args from the nested quantization_config
Args:
config_dict (`Dict[str, Any]`):
Dictionary that will be used to instantiate the configuration object.
... | from_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_dict(self) -> Dict[str, Any]:
"""
Quantization config to be added to config.json
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
"""
quantization_config = {}... |
Quantization config to be added to config.json
Serializes this instance to a Python dictionary. Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
| to_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def to_diff_dict(self) -> Dict[str, Any]:
"""
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this confi... |
Removes all attributes from config which correspond to the default config attributes for better readability and
serializes to a Python dictionary.
Returns:
`Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance,
| to_diff_dict | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
"""
if self.bits not in [2, 3, 4]:
raise ValueError("bits must be 2, 3, or 4")
if self.p not in [1, 2]:
raise ValueError(... |
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def _get_ao_version() -> version.Version:
"""Centralized check for TorchAO availability and version requirements."""
if not is_torchao_available():
raise ValueError("TorchAoConfig requires torchao to be installed. Install with `pip install torchao`")
return version.parse(importlib.m... | Centralized check for TorchAO availability and version requirements. | _get_ao_version | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def _get_torchao_quant_type_to_method(self):
"""Get mapping of quant_type strings to their corresponding methods."""
from torchao.quantization import (
autoquant,
int4_weight_only,
int8_dynamic_activation_int8_weight,
int8_weight_only,
)
r... | Get mapping of quant_type strings to their corresponding methods. | _get_torchao_quant_type_to_method | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def get_apply_tensor_subclass(self):
"""Create the appropriate quantization method based on configuration."""
if isinstance(self.quant_type, str):
methods = self._get_torchao_quant_type_to_method()
quant_type_kwargs = self.quant_type_kwargs.copy()
if (
... | Create the appropriate quantization method based on configuration. | get_apply_tensor_subclass | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
"""
if not isinstance(self.bits, int):
raise TypeError("bits must be an int")
if not isinstance(self.beta1, int):
raise T... |
Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def post_init(self):
r"""
Safety checker that arguments are correct
"""
self.activation_scheme = self.activation_scheme.lower()
if self.activation_scheme not in ["dynamic"]:
raise ValueError(f"Activation scheme {self.activation_scheme} not supported")
if len(s... |
Safety checker that arguments are correct
| post_init | python | huggingface/transformers | src/transformers/utils/quantization_config.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/quantization_config.py | Apache-2.0 |
def require_version(requirement: str, hint: Optional[str] = None) -> None:
"""
Perform a runtime check of the dependency versions, using the exact same syntax used by pip.
The installed module version comes from the *site-packages* dir via *importlib.metadata*.
Args:
requirement (`str`): pip s... |
Perform a runtime check of the dependency versions, using the exact same syntax used by pip.
The installed module version comes from the *site-packages* dir via *importlib.metadata*.
Args:
requirement (`str`): pip style definition, e.g., "tokenizers==0.9.4", "tqdm>=4.27", "numpy"
hint (`... | require_version | python | huggingface/transformers | src/transformers/utils/versions.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/versions.py | Apache-2.0 |
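`require_version` checks installed versions against pip-style specifiers like `"tqdm>=4.27"`. A heavily simplified sketch of that check, handling numeric dotted versions only (the real helper reads versions from `importlib.metadata` and supports full pip syntax, which this does not attempt):

```python
import operator
import re

_OPS = {"==": operator.eq, "!=": operator.ne, ">=": operator.ge,
        "<=": operator.le, ">": operator.gt, "<": operator.lt}

def check_requirement(installed, requirement):
    """Return True if `installed` (e.g. "4.28.1") satisfies `requirement`."""
    match = re.fullmatch(r"([\w\-]+)\s*(==|!=|>=|<=|>|<)\s*([\d.]+)", requirement)
    if match is None:
        raise ValueError(f"unparsable requirement: {requirement!r}")
    _, op, wanted = match.groups()
    # Compare versions as integer tuples: (4, 28, 1) >= (4, 27) etc.
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return _OPS[op](as_tuple(installed), as_tuple(wanted))
```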
def get_available_devices() -> frozenset[str]:
"""
Returns a frozenset of devices available for the current PyTorch installation.
"""
devices = {"cpu"} # `cpu` is always supported as a device in PyTorch
if is_torch_cuda_available():
devices.add("cuda")
if is_torch_mps_available():
... |
Returns a frozenset of devices available for the current PyTorch installation.
| get_available_devices | python | huggingface/transformers | src/transformers/utils/__init__.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/__init__.py | Apache-2.0 |
def create_and_test_config_from_and_save_pretrained_composite(self):
"""
Tests that composite or nested configs can be loaded and saved correctly. In case the config
has a sub-config, we should be able to call `sub_config.from_pretrained('general_config_file')`
and get a result same as i... |
Tests that composite or nested configs can be loaded and saved correctly. In case the config
has a sub-config, we should be able to call `sub_config.from_pretrained('general_config_file')`
and get a result same as if we loaded the whole config and obtained `config.sub_config` from it.
| create_and_test_config_from_and_save_pretrained_composite | python | huggingface/transformers | tests/test_configuration_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_configuration_common.py | Apache-2.0 |
def prepare_image_inputs(
batch_size,
min_resolution,
max_resolution,
num_channels,
size_divisor=None,
equal_resolution=False,
numpify=False,
torchify=False,
):
"""This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of P... | This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
One can specify whether the images are of the same resolution or not.
| prepare_image_inputs | python | huggingface/transformers | tests/test_image_processing_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_image_processing_common.py | Apache-2.0 |
def prepare_video(num_frames, num_channels, width=10, height=10, numpify=False, torchify=False):
"""This function prepares a video as a list of PIL images/NumPy arrays/PyTorch tensors."""
video = []
for i in range(num_frames):
video.append(np.random.randint(255, size=(num_channels, width, height), ... | This function prepares a video as a list of PIL images/NumPy arrays/PyTorch tensors. | prepare_video | python | huggingface/transformers | tests/test_image_processing_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_image_processing_common.py | Apache-2.0 |
def prepare_video_inputs(
batch_size,
num_frames,
num_channels,
min_resolution,
max_resolution,
equal_resolution=False,
numpify=False,
torchify=False,
):
"""This function prepares a batch of videos: a list of list of PIL images, or a list of list of numpy arrays if
one specifies ... | This function prepares a batch of videos: a list of list of PIL images, or a list of list of numpy arrays if
one specifies numpify=True, or a list of list of PyTorch tensors if one specifies torchify=True.
One can specify whether the videos are of the same resolution or not.
| prepare_video_inputs | python | huggingface/transformers | tests/test_image_processing_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_image_processing_common.py | Apache-2.0 |
def test_save_load_fast_slow(self):
"Test that we can load a fast image processor from a slow one and vice-versa."
if self.image_processing_class is None or self.fast_image_processing_class is None:
self.skipTest("Skipping slow/fast save/load test as one of the image processors is not define... | Test that we can load a fast image processor from a slow one and vice-versa. | test_save_load_fast_slow | python | huggingface/transformers | tests/test_image_processing_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_image_processing_common.py | Apache-2.0 |
def test_save_load_fast_slow_auto(self):
"Test that we can load a fast image processor from a slow one and vice-versa using AutoImageProcessor."
if self.image_processing_class is None or self.fast_image_processing_class is None:
self.skipTest("Skipping slow/fast save/load test as one of the ... | Test that we can load a fast image processor from a slow one and vice-versa using AutoImageProcessor. | test_save_load_fast_slow_auto | python | huggingface/transformers | tests/test_image_processing_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_image_processing_common.py | Apache-2.0 |
def test_batching_equivalence(self, atol=1e-5, rtol=1e-5):
"""
        Tests that the model supports batching and that the output is nearly the same for the same input in
different batch sizes.
(Why "nearly the same" not "exactly the same"? Batching uses different matmul shapes, which often ... |
Tests that the model supports batching and that the output is the nearly the same for the same input in
different batch sizes.
(Why "nearly the same" not "exactly the same"? Batching uses different matmul shapes, which often leads to
different results: https://github.com/huggingface/tra... | test_batching_equivalence | python | huggingface/transformers | tests/test_modeling_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py | Apache-2.0 |
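The batching-equivalence idea above — run the same inputs at different batch sizes and require the outputs to agree within a tolerance — can be sketched without torch. The `model` below is a trivial stand-in for a real forward pass:

```python
def allclose(xs, ys, atol=1e-5, rtol=1e-5):
    """Elementwise closeness check in the usual atol + rtol * |ref| style."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(xs, ys))

def model(batch):
    # Stand-in "model": a deterministic per-row computation.
    return [sum(row) / len(row) for row in batch]

batch = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
batched_out = model(batch)                       # one call, batch size 2
single_out = [model([row])[0] for row in batch]  # two calls, batch size 1
```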
def _make_attention_mask_non_null(self, inputs_dict):
"""Make sure no sequence has all zeros as attention mask"""
for k in ["attention_mask", "encoder_attention_mask", "decoder_attention_mask"]:
if k in inputs_dict:
attention_mask = inputs_dict[k]
# Make sur... | Make sure no sequence has all zeros as attention mask | _make_attention_mask_non_null | python | huggingface/transformers | tests/test_modeling_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py | Apache-2.0 |
def _postprocessing_to_ignore_test_cases(self, tf_outputs, pt_outputs, model_class):
"""For temporarily ignoring some failed test cases (issues to be fixed)"""
tf_keys = {k for k, v in tf_outputs.items() if v is not None}
pt_keys = {k for k, v in pt_outputs.items() if v is not None}
ke... | For temporarily ignoring some failed test cases (issues to be fixed) | _postprocessing_to_ignore_test_cases | python | huggingface/transformers | tests/test_modeling_common.py | https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py | Apache-2.0 |