| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def decode_spans(
start: np.ndarray, end: np.ndarray, topk: int, max_answer_len: int, undesired_tokens: np.ndarray
) -> Tuple:
"""
Take the output of any `ModelForQuestionAnswering` and will generate probabilities for each span to be the actual
answer.
In addition, it filters out some unwanted/impo... |
Take the output of any `ModelForQuestionAnswering` and will generate probabilities for each span to be the actual
answer.
In addition, it filters out some unwanted/impossible cases like answer len being greater than max_answer_len or
answer end position being before the starting position. The method s... | decode_spans | python | huggingface/transformers | src/transformers/pipelines/question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/question_answering.py | Apache-2.0 |
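The span-decoding logic described in the row above (outer product of start/end probabilities, then masking spans that end before they start or exceed `max_answer_len`) can be sketched as a simplified re-implementation; `decode_spans_sketch` is an illustrative stand-in, not the library function:

```python
import numpy as np

def decode_spans_sketch(start_probs, end_probs, topk, max_answer_len):
    # candidate[i, j] = P(start=i) * P(end=j) for every token pair
    candidates = np.matmul(np.expand_dims(start_probs, -1),
                           np.expand_dims(end_probs, 0))
    # np.triu keeps spans with end >= start; np.tril(..., k) then caps the length
    candidates = np.tril(np.triu(candidates), max_answer_len - 1)
    flat = candidates.flatten()
    idx = np.argsort(-flat)[:topk]
    starts, ends = np.unravel_index(idx, candidates.shape)
    return starts, ends, flat[idx]

starts, ends, scores = decode_spans_sketch(
    np.array([0.1, 0.7, 0.2]), np.array([0.1, 0.2, 0.7]),
    topk=1, max_answer_len=2)
# Best span: tokens 1..2, score 0.7 * 0.7 = 0.49
```

The `triu`/`tril` pair is what filters out the "impossible cases" the docstring mentions, without any explicit Python loop.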
def select_starts_ends(
start,
end,
p_mask,
attention_mask,
min_null_score=1000000,
top_k=1,
handle_impossible_answer=False,
max_answer_len=15,
):
"""
Takes the raw output of any `ModelForQuestionAnswering` and first normalizes its outputs and then uses
`decode_spans()` to ge... |
Takes the raw output of any `ModelForQuestionAnswering` and first normalizes its outputs and then uses
`decode_spans()` to generate probabilities for each span to be the actual answer.
Args:
start (`np.ndarray`): Individual start logits for each token.
end (`np.ndarray`): Individual end lo... | select_starts_ends | python | huggingface/transformers | src/transformers/pipelines/question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/question_answering.py | Apache-2.0 |
def create_sample(
question: Union[str, List[str]], context: Union[str, List[str]]
) -> Union[SquadExample, List[SquadExample]]:
"""
QuestionAnsweringPipeline leverages the [`SquadExample`] internally. This helper method encapsulate all the
logic for converting question(s) and contex... |
QuestionAnsweringPipeline leverages the [`SquadExample`] internally. This helper method encapsulate all the
logic for converting question(s) and context(s) to [`SquadExample`].
We currently support extractive question answering.
Arguments:
question (`str` or `List[str]`): ... | create_sample | python | huggingface/transformers | src/transformers/pipelines/question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/question_answering.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""
Answer the question(s) given as inputs by using the context(s).
Args:
question (`str` or `List[str]`):
One or several question(s) (must be used in conjunction with the `context` argument).
context (`str` or `List[s... |
Answer the question(s) given as inputs by using the context(s).
Args:
question (`str` or `List[str]`):
One or several question(s) (must be used in conjunction with the `context` argument).
context (`str` or `List[str]`):
One or several context(s)... | __call__ | python | huggingface/transformers | src/transformers/pipelines/question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/question_answering.py | Apache-2.0 |
def span_to_answer(self, text: str, start: int, end: int) -> Dict[str, Union[str, int]]:
"""
When decoding from token probabilities, this method maps token indexes to actual word in the initial context.
Args:
text (`str`): The actual context to extract the answer from.
s... |
When decoding from token probabilities, this method maps token indexes to actual word in the initial context.
Args:
text (`str`): The actual context to extract the answer from.
start (`int`): The answer starting token index.
end (`int`): The answer end token index.
... | span_to_answer | python | huggingface/transformers | src/transformers/pipelines/question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/question_answering.py | Apache-2.0 |
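The token-to-word mapping that `span_to_answer` performs can be sketched under simplifying assumptions (whitespace words, a precomputed token-to-word index map); `span_to_answer_sketch` and its `token_to_word` argument are hypothetical, not the pipeline's actual signature:

```python
def span_to_answer_sketch(text, token_to_word, start_tok, end_tok):
    words = text.split()
    # Widen the answer token span to whole-word boundaries
    word_start = token_to_word[start_tok]
    word_end = token_to_word[end_tok]
    answer = " ".join(words[word_start:word_end + 1])
    # Recover character offsets in the original context (fragile if the
    # answer text repeats earlier in the context; fine for a sketch)
    char_start = text.find(answer)
    return {"answer": answer, "start": char_start, "end": char_start + len(answer)}

res = span_to_answer_sketch("Paris is the capital of France",
                            {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5},
                            start_tok=0, end_tok=0)
```

The real method tracks character offsets while iterating tokens rather than re-searching the text, but the word-boundary widening is the key idea.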
def sequential_inference(self, **inputs):
"""
Inference used for models that need to process sequences in a sequential fashion, like the SQA models which
handle conversational query related to a table.
"""
if self.framework == "pt":
all_logits = []
all_agg... |
Inference used for models that need to process sequences in a sequential fashion, like the SQA models which
handle conversational query related to a table.
| sequential_inference | python | huggingface/transformers | src/transformers/pipelines/table_question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/table_question_answering.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
r"""
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
- `pipeline(table, query)`
- `pipeline(table, [query])`
- `pipeline(table=table, query=query)`
- `pipeline(table=table, ... |
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
- `pipeline(table, query)`
- `pipeline(table, [query])`
- `pipeline(table=table, query=query)`
- `pipeline(table=table, query=[query])`
- `pipeline({"table": tab... | __call__ | python | huggingface/transformers | src/transformers/pipelines/table_question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/table_question_answering.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
r"""
Generate the output text(s) using text(s) given as inputs.
Args:
args (`str` or `List[str]`):
Input text for the encoder.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to... |
Generate the output text(s) using text(s) given as inputs.
Args:
args (`str` or `List[str]`):
Input text for the encoder.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token ind... | __call__ | python | huggingface/transformers | src/transformers/pipelines/text2text_generation.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/text2text_generation.py | Apache-2.0 |
def check_inputs(self, input_length: int, min_length: int, max_length: int) -> bool:
"""
Checks whether there might be something wrong with given input with regard to the model.
"""
if max_length < min_length:
logger.warning(f"Your min_length={min_length} must be inferior tha... |
Checks whether there might be something wrong with given input with regard to the model.
| check_inputs | python | huggingface/transformers | src/transformers/pipelines/text2text_generation.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/text2text_generation.py | Apache-2.0 |
def __call__(self, inputs, **kwargs):
"""
Classify the text(s) given as inputs.
Args:
inputs (`str` or `List[str]` or `Dict[str]`, or `List[Dict[str]]`):
One or several texts to classify. In order to use text pairs for your classification, you can send a
... |
Classify the text(s) given as inputs.
Args:
inputs (`str` or `List[str]` or `Dict[str]`, or `List[Dict[str]]`):
One or several texts to classify. In order to use text pairs for your classification, you can send a
dictionary containing `{"text", "text_pair"}`... | __call__ | python | huggingface/transformers | src/transformers/pipelines/text_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/text_classification.py | Apache-2.0 |
def __call__(self, inputs: Union[str, List[str]], **kwargs):
"""
Classify each token of the text(s) given as inputs.
Args:
inputs (`str` or `List[str]`):
One or several texts (or one list of texts) for token classification.
Return:
A list or a li... |
Classify each token of the text(s) given as inputs.
Args:
inputs (`str` or `List[str]`):
One or several texts (or one list of texts) for token classification.
Return:
A list or a list of list of `dict`: Each result comes as a list of dictionaries (one f... | __call__ | python | huggingface/transformers | src/transformers/pipelines/token_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py | Apache-2.0 |
def gather_pre_entities(
self,
sentence: str,
input_ids: np.ndarray,
scores: np.ndarray,
offset_mapping: Optional[List[Tuple[int, int]]],
special_tokens_mask: np.ndarray,
aggregation_strategy: AggregationStrategy,
) -> List[dict]:
"""Fuse various numpy... | Fuse various numpy arrays into dicts with all the information needed for aggregation | gather_pre_entities | python | huggingface/transformers | src/transformers/pipelines/token_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py | Apache-2.0 |
def aggregate_words(self, entities: List[dict], aggregation_strategy: AggregationStrategy) -> List[dict]:
"""
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as m... |
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft|
company| B-ENT I-ENT
| aggregate_words | python | huggingface/transformers | src/transformers/pipelines/token_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py | Apache-2.0 |
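The `micro|soft` example above (sub-tokens of one word disagreeing on their tag) can be sketched for the "first" strategy, where the first sub-token's entity wins; `aggregate_word_first` is an illustrative helper, not the library method:

```python
def aggregate_word_first(word_tokens):
    """Merge the sub-tokens of one word; the first token's entity wins."""
    word = "".join(t["word"].lstrip("#") for t in word_tokens)
    return {"word": word, "entity": word_tokens[0]["entity"]}

tokens = [{"word": "micro", "entity": "B-ENT"},
          {"word": "##soft", "entity": "I-NAME"}]
merged = aggregate_word_first(tokens)
# The disagreeing I-NAME on "##soft" is overridden by B-ENT, matching the
# rewrite shown in the docstring's example
```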
def group_sub_entities(self, entities: List[dict]) -> dict:
"""
Group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
"""
# Get the first entity in the entity group
entity = entitie... |
Group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
| group_sub_entities | python | huggingface/transformers | src/transformers/pipelines/token_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py | Apache-2.0 |
def group_entities(self, entities: List[dict]) -> List[dict]:
"""
Find and group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
"""
entity_groups = []
entity_group_disagg = []
... |
Find and group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
| group_entities | python | huggingface/transformers | src/transformers/pipelines/token_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py | Apache-2.0 |
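The grouping of adjacent same-entity tokens described above follows the usual BIO convention: an `I-` tag continues the current group, a `B-` tag (or a tag change) starts a new one. A simplified sketch, not the pipeline's implementation:

```python
def group_entities_sketch(entities):
    """Group adjacent tokens that share an entity tag (B-/I- prefix stripped)."""
    groups, current = [], []
    for ent in entities:
        tag = ent["entity"].split("-")[-1]
        same_tag = current and current[-1]["entity"].split("-")[-1] == tag
        if same_tag and not ent["entity"].startswith("B-"):
            current.append(ent)  # I-xxx continues the running group
        else:
            if current:
                groups.append(current)
            current = [ent]      # B-xxx or a tag change starts a new group
    if current:
        groups.append(current)
    return [{"entity_group": g[0]["entity"].split("-")[-1],
             "word": " ".join(e["word"] for e in g)} for g in groups]

ents = [{"entity": "B-PER", "word": "John"},
        {"entity": "I-PER", "word": "Smith"},
        {"entity": "B-LOC", "word": "Paris"}]
grouped = group_entities_sketch(ents)
```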
def __call__(self, inputs: Optional[Union[str, List[str]]] = None, **kwargs):
"""
Assign labels to the video(s) passed as inputs.
Args:
inputs (`str`, `List[str]`):
The pipeline handles three types of videos:
- A string containing a http link pointin... |
Assign labels to the video(s) passed as inputs.
Args:
inputs (`str`, `List[str]`):
The pipeline handles three types of videos:
- A string containing a http link pointing to a video
- A string containing a local path to a video
... | __call__ | python | huggingface/transformers | src/transformers/pipelines/video_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/video_classification.py | Apache-2.0 |
def __call__(
self,
image: Union["Image.Image", str, List["Image.Image"], List[str], "KeyDataset"],
question: Optional[Union[str, List[str]]] = None,
**kwargs,
):
r"""
Answers open-ended questions about images. The pipeline accepts several types of inputs which are de... |
Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed
below:
- `pipeline(image=image, question=question)`
- `pipeline({"image": image, "question": question})`
- `pipeline([{"image": image, "question": question}])`
- `... | __call__ | python | huggingface/transformers | src/transformers/pipelines/visual_question_answering.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/visual_question_answering.py | Apache-2.0 |
def _parse_and_tokenize(
self, sequence_pairs, padding=True, add_special_tokens=True, truncation=TruncationStrategy.ONLY_FIRST, **kwargs
):
"""
Parse arguments and tokenize only_first so that hypothesis (label) is not truncated
"""
return_tensors = self.framework
if s... |
Parse arguments and tokenize only_first so that hypothesis (label) is not truncated
| _parse_and_tokenize | python | huggingface/transformers | src/transformers/pipelines/zero_shot_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py | Apache-2.0 |
def __call__(
self,
sequences: Union[str, List[str]],
*args,
**kwargs,
):
"""
Classify the sequence(s) given as inputs. See the [`ZeroShotClassificationPipeline`] documentation for more
information.
Args:
sequences (`str` or `List[str]`):
... |
Classify the sequence(s) given as inputs. See the [`ZeroShotClassificationPipeline`] documentation for more
information.
Args:
sequences (`str` or `List[str]`):
The sequence(s) to classify, will be truncated if the model input is too large.
candidate_lab... | __call__ | python | huggingface/transformers | src/transformers/pipelines/zero_shot_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py | Apache-2.0 |
def __call__(self, image: Union[str, List[str], "Image", List["Image"]] = None, **kwargs):
"""
Assign labels to the image(s) passed as inputs.
Args:
image (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
... |
Assign labels to the image(s) passed as inputs.
Args:
image (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a lo... | __call__ | python | huggingface/transformers | src/transformers/pipelines/zero_shot_image_classification.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_image_classification.py | Apache-2.0 |
def __call__(
self,
image: Union[str, "Image.Image", List[Dict[str, Any]]],
candidate_labels: Optional[Union[str, List[str]]] = None,
**kwargs,
):
"""
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
image (`str`, ... |
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
image (`str`, `PIL.Image` or `List[Dict[str, Any]]`):
The pipeline handles three types of images:
- A string containing an http url pointing to an image
- A st... | __call__ | python | huggingface/transformers | src/transformers/pipelines/zero_shot_object_detection.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_object_detection.py | Apache-2.0 |
def _get_bounding_box(self, box: "torch.Tensor") -> Dict[str, int]:
"""
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`Dict[str, int]`): Dict co... |
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`Dict[str, int]`): Dict containing the coordinates in corners format.
| _get_bounding_box | python | huggingface/transformers | src/transformers/pipelines/zero_shot_object_detection.py | https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_object_detection.py | Apache-2.0 |
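The corner-format conversion above is a one-liner in spirit; a plain-list sketch (the pipeline receives a `torch.Tensor`, and the docstring's stated `[xmin, xmax, ymin, ymax]` order may differ from the tensor's actual `xmin, ymin, xmax, ymax` layout, so treat the ordering here as an assumption):

```python
def box_to_dict(box):
    """Convert corner-format coordinates to a labeled dict
    (plain-list stand-in for the torch.Tensor the pipeline receives)."""
    xmin, ymin, xmax, ymax = (int(v) for v in box)
    return {"xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax}

bbox = box_to_dict([10.0, 20.0, 30.0, 40.0])
```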
def merge_quantization_configs(
cls,
quantization_config: Union[dict, QuantizationConfigMixin],
quantization_config_from_args: Optional[QuantizationConfigMixin],
):
"""
handles situations where both quantization_config from args and quantization_config from model config are p... |
handles situations where both quantization_config from args and quantization_config from model config are present.
| merge_quantization_configs | python | huggingface/transformers | src/transformers/quantizers/auto.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/auto.py | Apache-2.0 |
def get_special_dtypes_update(self, model, torch_dtype: "torch.dtype") -> Dict[str, "torch.dtype"]:
"""
returns dtypes for modules that are not quantized - used for the computation of the device_map in case
one passes a str as a device_map. The method will use the `modules_to_not_convert` that i... |
returns dtypes for modules that are not quantized - used for the computation of the device_map in case
one passes a str as a device_map. The method will use the `modules_to_not_convert` that is modified
in `_process_model_before_weight_loading`.
Args:
model (`~transformers.... | get_special_dtypes_update | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def check_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
state_dict: Dict[str, Any],
**kwargs,
) -> bool:
"""
checks if a loaded state_dict component is part of quantized param + some validation; only def... |
checks if a loaded state_dict component is part of quantized param + some validation; only defined if
requires_parameters_quantization == True for quantization methods that require to create a new parameters
for quantization.
| check_quantized_param | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def create_quantized_param(self, *args, **kwargs) -> "torch.nn.Parameter":
"""
takes needed components from state_dict and creates quantized param; only applicable if
requires_parameters_quantization == True
"""
if not self.requires_parameters_quantization:
raise Attr... |
takes needed components from state_dict and creates quantized param; only applicable if
requires_parameters_quantization == True
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def preprocess_model(self, model: "PreTrainedModel", **kwargs):
"""
Setting model attributes and/or converting model before weights loading. At this point
the model should be initialized on the meta device so you can freely manipulate the skeleton
of the model in order to replace modules... |
Setting model attributes and/or converting model before weights loading. At this point
the model should be initialized on the meta device so you can freely manipulate the skeleton
of the model in order to replace modules in-place. Make sure to override the abstract method `_process_model_before... | preprocess_model | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def dequantize(self, model):
"""
Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance.
Note not all quantization schemes support this.
"""
model = self._dequantize(model)
# Delete quantizer and quantization config
... |
Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance.
Note not all quantization schemes support this.
| dequantize | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def get_cuda_warm_up_factor(self):
"""
The factor to be used in `caching_allocator_warmup` to get the number of bytes to pre-allocate to warm up cuda.
A factor of 2 means we allocate all bytes in the empty model (since we allocate in fp16), a factor of 4 means
we allocate half the memory... |
The factor to be used in `caching_allocator_warmup` to get the number of bytes to pre-allocate to warm up cuda.
A factor of 2 means we allocate all bytes in the empty model (since we allocate in fp16), a factor of 4 means
we allocate half the memory of the weights residing in the empty model, e... | get_cuda_warm_up_factor | python | huggingface/transformers | src/transformers/quantizers/base.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/base.py | Apache-2.0 |
def is_qat_trainable(self) -> bool:
"""Flag indicating whether the quantized model can carry out quantization aware training"""
return (
self.quantization_config.linear_class == "autobitlinear"
and self.quantization_config.quantization_mode == "online"
) | Flag indicating whether the quantized model can carry out quantization aware training | is_qat_trainable | python | huggingface/transformers | src/transformers/quantizers/quantizer_bitnet.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_bitnet.py | Apache-2.0 |
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
state_dict: Dict[str, Any],
unexpected_keys: Optional[List[str]] = None,
):
"""
combines logic from _load_s... |
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_bnb_4bit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_bnb_4bit.py | Apache-2.0 |
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
state_dict: Dict[str, Any],
unexpected_keys: Optional[List[str]] = None,
):
"""
combines logic from _load_s... |
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
needs aux items from state dicts, if found - removes them from unexpected_keys
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_bnb_8bit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_bnb_8bit.py | Apache-2.0 |
def update_missing_keys_after_loading(self, model, missing_keys: List[str], prefix: str) -> List[str]:
"""
Update missing keys after loading the model. This is necessary for compressed tensors
to load the model correctly. We expect weights to be present in missing keys.
The weight's are ... |
Update missing keys after loading the model. This is necessary for compressed tensors
to load the model correctly. We expect weights to be present in missing keys.
The weight's are re-constructed by ModelCompressor in _process_model_after_weight_loading
This function cleans up expected... | update_missing_keys_after_loading | python | huggingface/transformers | src/transformers/quantizers/quantizer_compressed_tensors.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_compressed_tensors.py | Apache-2.0 |
def update_unexpected_keys(self, model, unexpected_keys: List[str], prefix: str) -> List[str]:
"""
Override this method if you want to adjust the `unexpected_keys`.
Args:
unexpected_keys (`List[str]`, *optional*):
The list of unexpected keys in the checkpoint compare... |
Override this method if you want to adjust the `unexpected_keys`.
Args:
unexpected_keys (`List[str]`, *optional*):
The list of unexpected keys in the checkpoint compared to the state dict of the model
| update_unexpected_keys | python | huggingface/transformers | src/transformers/quantizers/quantizer_compressed_tensors.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_compressed_tensors.py | Apache-2.0 |
def _process_model_after_weight_loading(self, model, **kwargs):
"""Decompress loaded model if necessary - need for qat"""
if (
self.quantization_config.is_quantization_compressed and not self.run_compressed
) or self.quantization_config.is_sparsification_compressed:
conf... | Decompress loaded model if necessary - need for qat | _process_model_after_weight_loading | python | huggingface/transformers | src/transformers/quantizers/quantizer_compressed_tensors.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_compressed_tensors.py | Apache-2.0 |
def is_qat_trainable(self) -> bool:
"""Loaded Models can carry out quantization aware training"""
# models need to be decompressed carry out qat
return not self.run_compressed or not self.quantization_config.is_quantization_compressed | Loaded Models can carry out quantization aware training | is_qat_trainable | python | huggingface/transformers | src/transformers/quantizers/quantizer_compressed_tensors.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_compressed_tensors.py | Apache-2.0 |
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
state_dict: Dict[str, Any],
unexpected_keys: Optional[List[str]] = None,
):
"""
Quantizes weights to FP8 fo... |
Quantizes weights to FP8 format using Block-wise quantization
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_finegrained_fp8.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_finegrained_fp8.py | Apache-2.0 |
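Block-wise quantization, as mentioned in the FP8 row above, computes one scale per contiguous block of weights instead of one scale for the whole tensor. A hedged numpy sketch — int8 stands in for FP8, and shapes are assumed exactly divisible by the block size, unlike the real quantizer:

```python
import numpy as np

def blockwise_quantize(weight, block_size=2, qmax=127.0):
    """Quantize a 1-D weight vector with one scale per block (int8 stand-in)."""
    blocks = weight.reshape(-1, block_size)
    # Per-block absmax scaling: each block gets its own dynamic range
    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    q = np.round(blocks / scales).astype(np.int8)
    return q.reshape(weight.shape), scales.squeeze(1)

w = np.array([0.5, -1.0, 0.25, 0.125])
q, s = blockwise_quantize(w, block_size=2)
# Dequantize: q.reshape(-1, 2) * s[:, None] recovers w up to rounding error
```

Smaller blocks track local dynamic range more tightly at the cost of storing more scales, which is the trade-off fine-grained FP8 schemes exploit.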
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
state_dict: Dict[str, Any],
unexpected_keys: List[str],
):
"""
Each nn.Linear layer is processed here.
... |
Each nn.Linear layer is processed here.
We first check if the corresponding module state_dict contains already HQQ quantized parameters.
If not, we create a temp linear layer with the module state_dict params and use it for quantization
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_hqq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_hqq.py | Apache-2.0 |
def check_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
state_dict: Dict[str, Any],
**kwargs,
) -> bool:
"""
Check if a parameter needs to be quantized.
"""
if is_optimum_quanto_available... |
Check if a parameter needs to be quantized.
| check_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_quanto.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_quanto.py | Apache-2.0 |
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
*args,
**kwargs,
):
"""
Create the quantized parameter by calling .freeze() after setting it to the module.... |
Create the quantized parameter by calling .freeze() after setting it to the module.
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_quanto.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_quanto.py | Apache-2.0 |
def fuzzy_match_size(config_name: str) -> Optional[str]:
"""
Extract the size digit from strings like "4weight", "8weight".
Returns the digit as an integer if found, otherwise None.
"""
config_name = config_name.lower()
str_match = re.search(r"(\d)weight", config_name)
if str_match:
... |
Extract the size digit from strings like "4weight", "8weight".
Returns the digit as an integer if found, otherwise None.
| fuzzy_match_size | python | huggingface/transformers | src/transformers/quantizers/quantizer_torchao.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_torchao.py | Apache-2.0 |
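The regex extraction above can be exercised directly. Note a small inconsistency in the source row: the docstring says "returns the digit as an integer", but the annotated return type is `Optional[str]`, and `re.search(...).group(1)` indeed yields a string:

```python
import re

def fuzzy_match_size(config_name):
    """Extract the bit-width digit from names like 'int4weight'."""
    match = re.search(r"(\d)weight", config_name.lower())
    return match.group(1) if match else None

# "Int4WeightOnly" lowercases to "int4weightonly", matching "4weight"
# "float8" contains no "<digit>weight" substring, so nothing matches
```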
def create_quantized_param(
self,
model: "PreTrainedModel",
param_value: "torch.Tensor",
param_name: str,
target_device: "torch.device",
state_dict: Dict[str, Any],
unexpected_keys: List[str],
):
"""
Each nn.Linear layer that needs to be quanti... |
Each nn.Linear layer that needs to be quantized is processed here.
First, we set the value the weight tensor, then we move it to the target device. Finally, we quantize the module.
| create_quantized_param | python | huggingface/transformers | src/transformers/quantizers/quantizer_torchao.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_torchao.py | Apache-2.0 |
def _process_model_after_weight_loading(self, model, **kwargs):
"""No process required for torchao quantized model"""
if self.quantization_config.quant_type == "autoquant":
from torchao import autoquant
from torchao.quantization import ALL_AUTOQUANT_CLASS_LIST
model ... | No process required for torchao quantized model | _process_model_after_weight_loading | python | huggingface/transformers | src/transformers/quantizers/quantizer_torchao.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_torchao.py | Apache-2.0 |
def get_cuda_warm_up_factor(self):
"""
This factor is used in caching_allocator_warmup to determine how many bytes to pre-allocate for CUDA warmup.
- A factor of 2 means we pre-allocate the full memory footprint of the model.
- A factor of 4 means we pre-allocate half of that, and so on
... |
This factor is used in caching_allocator_warmup to determine how many bytes to pre-allocate for CUDA warmup.
- A factor of 2 means we pre-allocate the full memory footprint of the model.
- A factor of 4 means we pre-allocate half of that, and so on
However, when using TorchAO, calculat... | get_cuda_warm_up_factor | python | huggingface/transformers | src/transformers/quantizers/quantizer_torchao.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_torchao.py | Apache-2.0 |
def _process_model_before_weight_loading(
self,
model: "PreTrainedModel",
keep_in_fp32_modules: Optional[List[str]] = None,
**kwargs,
):
"""
we don't have param like modules_to_not_convert to indicate which layers should not be quantized
because `quantization_... |
we don't have param like modules_to_not_convert to indicate which layers should not be quantized
because `quantization_config` include the layers that should be quantized
| _process_model_before_weight_loading | python | huggingface/transformers | src/transformers/quantizers/quantizer_vptq.py | https://github.com/huggingface/transformers/blob/master/src/transformers/quantizers/quantizer_vptq.py | Apache-2.0 |
def equalize_indent(docstring, indent_level):
"""
Adjust the indentation of a docstring to match the specified indent level.
"""
# fully dedent the docstring
docstring = "\n".join([line.lstrip() for line in docstring.splitlines()])
return textwrap.indent(docstring, " " * indent_level) |
Adjust the indentation of a docstring to match the specified indent level.
| equalize_indent | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
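The `equalize_indent` row shows the full implementation, so its behavior can be demonstrated directly. Worth noting as a design point: because every line is fully `lstrip`ped before re-indenting, any *relative* indentation inside the docstring (e.g. nested `Args:` entries) is flattened, not preserved:

```python
import textwrap

def equalize_indent(docstring, indent_level):
    # Fully dedent every line, then re-indent all of them to the target level
    docstring = "\n".join(line.lstrip() for line in docstring.splitlines())
    return textwrap.indent(docstring, " " * indent_level)

result = equalize_indent("  Args:\n      x: value", 4)
# Both lines end up at exactly 4 spaces; the nested indent of "x: value" is lost
```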
def parse_docstring(docstring, max_indent_level=0):
"""
Parse the docstring to extract the Args section and return it as a dictionary.
The docstring is expected to be in the format:
Args:
arg1 (type): Description of arg1.
arg2 (type): Description of arg2.
# This function will also r... |
Parse the docstring to extract the Args section and return it as a dictionary.
The docstring is expected to be in the format:
Args:
arg1 (type): Description of arg1.
arg2 (type): Description of arg2.
# This function will also return the remaining part of the docstring after the Args se... | parse_docstring | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def contains_type(type_hint, target_type) -> Tuple[bool, Optional[object]]:
"""
Check if a "nested" type hint contains a specific target type,
return the first-level type containing the target_type if found.
"""
args = get_args(type_hint)
if args == ():
try:
return issubclass... |
Check if a "nested" type hint contains a specific target type,
return the first-level type containing the target_type if found.
| contains_type | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
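Based on the visible head of `contains_type`, a hedged sketch of the recursion over `typing.get_args` — the fallback behavior of the truncated body is an assumption here:

```python
from typing import List, Optional, get_args

def contains_type(type_hint, target_type):
    """Sketch: recursively check whether `type_hint` contains `target_type`,
    returning the first-level member type that contains it (assumed behavior)."""
    args = get_args(type_hint)
    if args == ():
        # Leaf of the type tree: compare directly.
        try:
            return issubclass(type_hint, target_type), None
        except TypeError:  # special forms such as NoneType wrappers
            return False, None
    for arg in args:
        found, _ = contains_type(arg, target_type)
        if found:
            return True, arg
    return False, None
```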
def get_placeholders_dict(placeholders: List, model_name: str) -> dict:
"""
Get the dictionary of placeholders for the given model name.
"""
# import here to avoid circular import
from transformers.models import auto as auto_module
placeholders_dict = {}
for placeholder in placeholders:
... |
Get the dictionary of placeholders for the given model name.
| get_placeholders_dict | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def format_args_docstring(args, model_name):
"""
Replaces placeholders such as {image_processor_class} in the docstring with the actual values,
deduced from the model name and the auto modules.
"""
# first check if there are any placeholders in the args, if not return them as is
placeholders = ... |
Replaces placeholders such as {image_processor_class} in the docstring with the actual values,
deduced from the model name and the auto modules.
| format_args_docstring | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def _get_parameter_info(param_name, documented_params, source_args_dict, param_type, optional):
"""
Get parameter documentation details from the appropriate source.
Tensor shape, optional status and description are taken from the custom docstring in priority if available.
Type is taken from the function... |
Get parameter documentation details from the appropriate source.
Tensor shape, optional status and description are taken from the custom docstring in priority if available.
Type is taken from the function signature first, then from the custom docstring if missing from the signature
Args:
param... | _get_parameter_info | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def _process_parameters_section(
func_documentation, sig, func, class_name, model_name_lowercase, parent_class, indent_level
):
"""
Process the parameters section of the docstring.
Args:
func_documentation (`str`): Existing function documentation (manually specified in the docstring)
si... |
Process the parameters section of the docstring.
Args:
func_documentation (`str`): Existing function documentation (manually specified in the docstring)
sig (`inspect.Signature`): Function signature
func (`function`): Function the parameters belong to
class_name (`str`): Name o... | _process_parameters_section | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def _process_returns_section(func_documentation, sig, config_class, indent_level):
"""
Process the returns section of the docstring.
Args:
func_documentation (`str`): Existing function documentation (manually specified in the docstring)
sig (`inspect.Signature`): Function signature
... |
Process the returns section of the docstring.
Args:
func_documentation (`str`): Existing function documentation (manually specified in the docstring)
sig (`inspect.Signature`): Function signature
config_class (`str`): Config class for the model
indent_level (`int`): Indentation... | _process_returns_section | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def auto_class_docstring(cls, custom_intro=None, custom_args=None, checkpoint=None):
"""
Wrapper that automatically generates a docstring for classes based on their attributes and methods.
"""
# import here to avoid circular import
from transformers.models import auto as auto_module
docstring_i... |
Wrapper that automatically generates a docstring for classes based on their attributes and methods.
| auto_class_docstring | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def auto_docstring(obj=None, *, custom_intro=None, custom_args=None, checkpoint=None):
"""
Automatically generates docstrings for classes and methods in the Transformers library.
This decorator can be used in the following forms:
@auto_docstring
def my_function(...):
...
or
@auto_do... |
Automatically generates docstrings for classes and methods in the Transformers library.
This decorator can be used in the following forms:
@auto_docstring
def my_function(...):
...
or
@auto_docstring()
def my_function(...):
...
or
@auto_docstring(custom_intro="Custo... | auto_docstring | python | huggingface/transformers | src/transformers/utils/args_doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/args_doc.py | Apache-2.0 |
def generate_attention_matrix_from_mask(
words, mask, img_token="<img>", sliding_window=None, token_type_ids=None, image_seq_length=None
):
"""
Generates an attention matrix from a given attention mask.
Optionally applies a sliding window mask (e.g., for Gemma2/3) and
marks regions where image toke... |
Generates an attention matrix from a given attention mask.
Optionally applies a sliding window mask (e.g., for Gemma2/3) and
marks regions where image tokens occur based on the specified `img_token`.
| generate_attention_matrix_from_mask | python | huggingface/transformers | src/transformers/utils/attention_visualizer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/attention_visualizer.py | Apache-2.0 |
def verify_out_features_out_indices(
out_features: Optional[Iterable[str]], out_indices: Optional[Iterable[int]], stage_names: Optional[Iterable[str]]
):
"""
Verify that out_indices and out_features are valid for the given stage_names.
"""
if stage_names is None:
raise ValueError("Stage_name... |
Verify that out_indices and out_features are valid for the given stage_names.
| verify_out_features_out_indices | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
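A minimal sketch of the validation this row describes; the real helper performs additional checks (e.g. duplicates and ordering), so treat this as illustrative only:

```python
def verify_out_features_out_indices(out_features, out_indices, stage_names):
    """Sketch: reject out_features/out_indices that are invalid for stage_names."""
    if stage_names is None:
        raise ValueError("stage_names must be set for a transformers backbone")
    if out_features is not None and any(f not in stage_names for f in out_features):
        raise ValueError(f"out_features must be a subset of stage_names: {stage_names}")
    if out_indices is not None and any(
        not -len(stage_names) <= i < len(stage_names) for i in out_indices
    ):
        raise ValueError(f"out_indices must index into stage_names: {stage_names}")
```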
def _align_output_features_output_indices(
out_features: Optional[list[str]],
out_indices: Optional[Union[list[int], tuple[int]]],
stage_names: list[str],
):
"""
Finds the corresponding `out_features` and `out_indices` for the given `stage_names`.
The logic is as follows:
- `out_feature... |
Finds the corresponding `out_features` and `out_indices` for the given `stage_names`.
The logic is as follows:
- `out_features` not set, `out_indices` set: `out_features` is set to the `out_features` corresponding to the
`out_indices`.
- `out_indices` not set, `out_features` set: `out_... | _align_output_features_output_indices | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def get_aligned_output_features_output_indices(
out_features: Optional[list[str]],
out_indices: Optional[Union[list[int], tuple[int]]],
stage_names: list[str],
) -> tuple[list[str], list[int]]:
"""
Get the `out_features` and `out_indices` so that they are aligned.
The logic is as follows:
... |
Get the `out_features` and `out_indices` so that they are aligned.
The logic is as follows:
- `out_features` not set, `out_indices` set: `out_features` is set to the `out_features` corresponding to the
`out_indices`.
- `out_indices` not set, `out_features` set: `out_indices` is set to ... | get_aligned_output_features_output_indices | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
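The alignment rules listed in the docstring can be sketched directly; negative-index normalization, which the real helper also handles, is omitted:

```python
def get_aligned_output_features_output_indices(out_features, out_indices, stage_names):
    """Sketch of the three alignment rules described in the docstring."""
    if out_features is None and out_indices is None:
        # Neither set: default to the last stage.
        return [stage_names[-1]], [len(stage_names) - 1]
    if out_features is None:
        # Derive features from the given indices.
        return [stage_names[i] for i in out_indices], list(out_indices)
    if out_indices is None:
        # Derive indices from the given features.
        return list(out_features), [stage_names.index(f) for f in out_features]
    return list(out_features), list(out_indices)
```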
def _init_timm_backbone(self, config) -> None:
"""
Initialize the backbone model from timm. The backbone must already be loaded to self._backbone
"""
if getattr(self, "_backbone", None) is None:
raise ValueError("self._backbone must be set before calling _init_timm_backbone")
... |
Initialize the backbone model from timm. The backbone must already be loaded to self._backbone
| _init_timm_backbone | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def _init_backbone(self, config) -> None:
"""
Method to initialize the backbone. This method is called by the constructor of the base class after the
pretrained model weights have been loaded.
"""
self.config = config
self.use_timm_backbone = getattr(config, "use_timm_ba... |
Method to initialize the backbone. This method is called by the constructor of the base class after the
pretrained model weights have been loaded.
| _init_backbone | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def out_features(self, out_features: list[str]):
"""
Set the out_features attribute. This will also update the out_indices attribute to match the new out_features.
"""
self._out_features, self._out_indices = get_aligned_output_features_output_indices(
out_features=out_feature... |
Set the out_features attribute. This will also update the out_indices attribute to match the new out_features.
| out_features | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def out_indices(self, out_indices: Union[tuple[int], list[int]]):
"""
Set the out_indices attribute. This will also update the out_features attribute to match the new out_indices.
"""
self._out_features, self._out_indices = get_aligned_output_features_output_indices(
out_feat... |
Set the out_indices attribute. This will also update the out_features attribute to match the new out_indices.
| out_indices | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def to_dict(self):
"""
Serializes this instance to a Python dictionary. Override the default `to_dict()` from `PretrainedConfig` to
include the `out_features` and `out_indices` attributes.
"""
output = super().to_dict()
output["out_features"] = output.pop("_out_features")... |
Serializes this instance to a Python dictionary. Override the default `to_dict()` from `PretrainedConfig` to
include the `out_features` and `out_indices` attributes.
| to_dict | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def out_features(self, out_features: list[str]):
"""
Set the out_features attribute. This will also update the out_indices attribute to match the new out_features.
"""
self._out_features, self._out_indices = get_aligned_output_features_output_indices(
out_features=out_feature... |
Set the out_features attribute. This will also update the out_indices attribute to match the new out_features.
| out_features | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def out_indices(self, out_indices: Union[tuple[int], list[int]]):
"""
Set the out_indices attribute. This will also update the out_features attribute to match the new out_indices.
"""
self._out_features, self._out_indices = get_aligned_output_features_output_indices(
out_feat... |
Set the out_indices attribute. This will also update the out_features attribute to match the new out_indices.
| out_indices | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def to_dict(self):
"""
Serializes this instance to a Python dictionary. Override the default `to_dict()` from `PretrainedConfig` to
include the `out_features` and `out_indices` attributes.
"""
output = super().to_dict()
output["out_features"] = output.pop("_out_features")... |
Serializes this instance to a Python dictionary. Override the default `to_dict()` from `PretrainedConfig` to
include the `out_features` and `out_indices` attributes.
| to_dict | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def load_backbone(config):
"""
Loads the backbone model from a config object.
If the config is from the backbone model itself, then we return a backbone model with randomly initialized
weights.
If the config is from the parent model of the backbone model itself, then we load the pretrained backbon... |
Loads the backbone model from a config object.
If the config is from the backbone model itself, then we return a backbone model with randomly initialized
weights.
If the config is from the parent model of the backbone model itself, then we load the pretrained backbone weights
if specified.
| load_backbone | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def verify_backbone_config_arguments(
use_timm_backbone: bool,
use_pretrained_backbone: bool,
backbone: Optional[str],
backbone_config: Optional[Union[dict, "PretrainedConfig"]],
backbone_kwargs: Optional[dict],
):
"""
Verify that the config arguments to be passed to load_backbone are valid
... |
Verify that the config arguments to be passed to load_backbone are valid
| verify_backbone_config_arguments | python | huggingface/transformers | src/transformers/utils/backbone_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/backbone_utils.py | Apache-2.0 |
def parse_google_format_docstring(docstring: str) -> tuple[Optional[str], Optional[dict], Optional[str]]:
"""
Parses a Google-style docstring to extract the function description,
argument descriptions, and return description.
Args:
docstring (str): The docstring to parse.
Returns:
... |
Parses a Google-style docstring to extract the function description,
argument descriptions, and return description.
Args:
docstring (str): The docstring to parse.
Returns:
The function description, arguments, and return description.
| parse_google_format_docstring | python | huggingface/transformers | src/transformers/utils/chat_template_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/chat_template_utils.py | Apache-2.0 |
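A much-reduced sketch of the parse described above, assuming single-line argument descriptions (the real parser handles multi-line descriptions and more edge cases):

```python
import re

def parse_google_format_docstring(docstring):
    """Sketch: split a Google-style docstring into (description, args dict, returns)."""
    args = returns = None
    description, _, rest = docstring.partition("Args:")
    if rest:
        args_block, _, returns_block = rest.partition("Returns:")
        # One "name (type): text" entry per line.
        args = dict(re.findall(r"^\s*(\w+)[^:]*:\s*(.+)$", args_block, flags=re.M))
        returns = returns_block.strip() or None
    return description.strip(), args, returns
```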
def get_json_schema(func: Callable) -> dict:
"""
This function generates a JSON schema for a given function, based on its docstring and type hints. This is
mostly used for passing lists of tools to a chat template. The JSON schema contains the name and description of
the function, as well as the names, ... |
This function generates a JSON schema for a given function, based on its docstring and type hints. This is
mostly used for passing lists of tools to a chat template. The JSON schema contains the name and description of
the function, as well as the names, types and descriptions for each of its arguments. `g... | get_json_schema | python | huggingface/transformers | src/transformers/utils/chat_template_utils.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/chat_template_utils.py | Apache-2.0 |
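The schema shape `get_json_schema` produces — `{"type": "function", "function": {...}}` — can be sketched with a deliberately simplified extractor. `simple_json_schema` and `_TYPE_MAP` are hypothetical names for this sketch; the real function additionally parses per-argument docstring descriptions, nested and optional types, and enums:

```python
import inspect
from typing import get_type_hints

# Hypothetical reduced type map; the real implementation covers far more cases.
_TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def simple_json_schema(func):
    """Sketch of the tool-calling schema shape: name, description, typed parameters."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    properties = {name: {"type": _TYPE_MAP.get(tp, "object")} for name, tp in hints.items()}
    required = [
        name
        for name, p in inspect.signature(func).parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {"type": "object", "properties": properties, "required": required},
        },
    }
```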
def deprecate_kwarg(
old_name: str,
version: str,
new_name: Optional[str] = None,
warn_if_greater_or_equal_version: bool = False,
raise_if_greater_or_equal_version: bool = False,
raise_if_both_names: bool = False,
additional_message: Optional[str] = None,
):
"""
Function or method de... |
Function or method decorator to notify users about deprecated keyword arguments, replacing them with a new name if specified.
Note that this decorator is `torch.compile`-safe, i.e. it will not cause graph breaks (but no warning will be displayed when compiling).
This decorator allows you to:
- Notify user... | deprecate_kwarg | python | huggingface/transformers | src/transformers/utils/deprecation.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/deprecation.py | Apache-2.0 |
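The renaming behavior can be sketched with a minimal decorator — no `torch.compile` handling and no version comparison, both of which the real `deprecate_kwarg` implements:

```python
import functools
import warnings

def deprecate_kwarg(old_name, version, new_name=None):
    """Sketch: warn on `old_name` and forward its value to `new_name`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old_name in kwargs:
                message = f"`{old_name}` is deprecated and will be removed in version {version}"
                if new_name is not None:
                    message += f", use `{new_name}` instead"
                    kwargs[new_name] = kwargs.pop(old_name)
                warnings.warn(message + ".", FutureWarning)
            return func(*args, **kwargs)
        return wrapper
    return decorator
```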
def get_docstring_indentation_level(func):
"""Return the indentation level of the start of the docstring of a class or function (or method)."""
# We assume classes are always defined in the global scope
if inspect.isclass(func):
return 4
source = inspect.getsource(func)
first_line = source.s... | Return the indentation level of the start of the docstring of a class or function (or method). | get_docstring_indentation_level | python | huggingface/transformers | src/transformers/utils/doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/doc.py | Apache-2.0 |
def _get_indent(t):
"""Returns the indentation in the first line of t"""
search = re.search(r"^(\s*)\S", t)
return "" if search is None else search.groups()[0] | Returns the indentation in the first line of t | _get_indent | python | huggingface/transformers | src/transformers/utils/doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/doc.py | Apache-2.0 |
def _prepare_output_docstrings(output_type, config_class, min_indent=None, add_intro=True):
"""
Prepares the return part of the docstring using `output_type`.
"""
output_docstring = output_type.__doc__
params_docstring = None
if output_docstring is not None:
# Remove the head of the docs... |
Prepares the return part of the docstring using `output_type`.
| _prepare_output_docstrings | python | huggingface/transformers | src/transformers/utils/doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/doc.py | Apache-2.0 |
def filter_outputs_from_example(docstring, **kwargs):
"""
Removes the lines testing an output with the doctest syntax in a code sample when it's set to `None`.
"""
for key, value in kwargs.items():
if value is not None:
continue
doc_key = "{" + key + "}"
docstring = ... |
Removes the lines testing an output with the doctest syntax in a code sample when it's set to `None`.
| filter_outputs_from_example | python | huggingface/transformers | src/transformers/utils/doc.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/doc.py | Apache-2.0 |
def gen_constructor_wrapper(target: Callable) -> tuple[Callable, Callable]:
"""
Wraps `target` to be proxyable. Used for tensor creators like `torch.ones`, `torch.arange` and so on.
"""
wrapper = create_wrapper(target, "call_function")
return wrapper, target |
Wraps `target` to be proxyable. Used for tensor creators like `torch.ones`, `torch.arange` and so on.
| gen_constructor_wrapper | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def _proxies_to_metas(v):
"""Returns the underlying metadata for HFProxies, and behaves like the identity for the others."""
if isinstance(v, MetaDeviceAttribute):
return "meta"
if isinstance(v, torch.fx.Proxy):
if not (isinstance(v, HFProxy) and hasattr(v, "_metadata")):
raise R... | Returns the underlying metadata for HFProxies, and behaves like the identity for the others. | _proxies_to_metas | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def _generate_dummy_input(
self, model: "PreTrainedModel", input_name: str, shape: list[int], input_names: list[str]
) -> dict[str, torch.Tensor]:
"""Generates dummy input for model inference recording."""
# Retrieving the model class, either from the "class_for_deserialization" attribute if... | Generates dummy input for model inference recording. | _generate_dummy_input | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def trace(
self,
root: Union[torch.nn.Module, Callable[..., Any]],
concrete_args: Optional[dict[str, Any]] = None,
dummy_inputs: Optional[dict[str, Any]] = None,
complete_concrete_args_with_inputs_not_in_dummy_inputs: bool = True,
) -> Graph:
"""
Traces `root`... |
Traces `root` and returns the corresponding FX `torch.fx.Graph` representation. `root` can either be a
`torch.nn.Module` instance or a Python callable. Note that after this call, `self.root` may be different from
the `root` passed in here. For example, when a free function is passed to `trace()... | trace | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def _insert_module_as_submodule(self, mod: nn.Module) -> str:
"""
Helper method which tries to insert a module that was not declared as submodule.
"""
# If one of the module attributes is a Proxy, it means that its instantiation is input-dependent.
# It is not possible to insert ... |
Helper method which tries to insert a module that was not declared as submodule.
| _insert_module_as_submodule | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def path_of_module(self, mod: nn.Module) -> str:
"""
Helper method to find the qualified name of `mod` in the Module hierarchy of `root`. For example, if `root` has
a submodule named `foo`, which has a submodule named `bar`, passing `bar` into this function will return the
string "foo.ba... |
Helper method to find the qualified name of `mod` in the Module hierarchy of `root`. For example, if `root` has
a submodule named `foo`, which has a submodule named `bar`, passing `bar` into this function will return the
string "foo.bar".
Args:
mod (str): The `Module` to re... | path_of_module | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def keys(self, obj: "Proxy") -> Any:
"""Called when a proxy object is has the keys() method called.
This is what happens when ** is called on a proxy. This should return an iterator if ** is supposed to work in
your custom tracer.
"""
attribute = HFAttribute(obj, "keys")()
... | Called when a proxy object has the keys() method called.
This is what happens when ** is called on a proxy. This should return an iterator if ** is supposed to work in
your custom tracer.
| keys | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def symbolic_trace(
model: "PreTrainedModel",
input_names: Optional[list[str]] = None,
disable_check: bool = False,
tracer_cls: type[HFTracer] = HFTracer,
) -> GraphModule:
"""
Performs symbolic tracing on the model.
Args:
model ([`PretrainedModel`]):
The model to trace.... |
Performs symbolic tracing on the model.
Args:
model ([`PretrainedModel`]):
The model to trace.
input_names (`List[str]`, *optional*):
The names of the inputs of the traced model. If unset, model.dummy_inputs.keys() are used instead.
disable_check (`bool`, *optio... | symbolic_trace | python | huggingface/transformers | src/transformers/utils/fx.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py | Apache-2.0 |
def strtobool(val):
"""Convert a string representation of truth to true (1) or false (0).
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values are 'n', 'no', 'f', 'false', 'off', and '0'.
Raises ValueError if 'val' is anything else.
"""
val = val.lower()
if val in {"y", "yes", "... | Convert a string representation of truth to true (1) or false (0).
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values are 'n', 'no', 'f', 'false', 'off', and '0'.
Raises ValueError if 'val' is anything else.
| strtobool | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
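The truncated body of `strtobool` can be completed from its docstring alone; a self-contained sketch:

```python
def strtobool(val):
    """Sketch completing the truncated body per the docstring above."""
    val = val.lower()
    if val in {"y", "yes", "t", "true", "on", "1"}:
        return 1
    if val in {"n", "no", "f", "false", "off", "0"}:
        return 0
    raise ValueError(f"invalid truth value {val!r}")
```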
def infer_framework_from_repr(x):
"""
Tries to guess the framework of an object `x` from its repr (brittle but will help in `is_tensor` to try the
frameworks in a smart order, without the need to import the frameworks).
"""
representation = str(type(x))
if representation.startswith("<class 'torc... |
Tries to guess the framework of an object `x` from its repr (brittle but will help in `is_tensor` to try the
frameworks in a smart order, without the need to import the frameworks).
| infer_framework_from_repr | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def _get_frameworks_and_test_func(x):
"""
Returns an (ordered since we are in Python 3.7+) dictionary mapping framework to test function, which places the framework
we can guess from the repr first, then Numpy, then the others.
"""
framework_to_test = {
"pt": is_torch_tensor,
"tf": is_tf_ten... |
Returns an (ordered since we are in Python 3.7+) dictionary mapping framework to test function, which places the framework
we can guess from the repr first, then Numpy, then the others.
| _get_frameworks_and_test_func | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def is_tensor(x):
"""
Tests if `x` is a `torch.Tensor`, `tf.Tensor`, `jaxlib.xla_extension.DeviceArray`, `np.ndarray` or `mlx.array`
in the order defined by `infer_framework_from_repr`
"""
# This gives us a smart order to test the frameworks with the corresponding tests.
framework_to_test_func =... |
Tests if `x` is a `torch.Tensor`, `tf.Tensor`, `jaxlib.xla_extension.DeviceArray`, `np.ndarray` or `mlx.array`
in the order defined by `infer_framework_from_repr`
| is_tensor | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def to_py_obj(obj):
"""
Convert a TensorFlow tensor, PyTorch tensor, Numpy array or python list to a python list.
"""
if isinstance(obj, (int, float)):
return obj
elif isinstance(obj, (dict, UserDict)):
return {k: to_py_obj(v) for k, v in obj.items()}
elif isinstance(obj, (list, ... |
Convert a TensorFlow tensor, PyTorch tensor, Numpy array or python list to a python list.
| to_py_obj | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def to_numpy(obj):
"""
Convert a TensorFlow tensor, PyTorch tensor, Numpy array or python list to a Numpy array.
"""
framework_to_numpy = {
"pt": lambda obj: obj.detach().cpu().numpy(),
"tf": lambda obj: obj.numpy(),
"jax": lambda obj: np.asarray(obj),
"np": lambda obj: ... |
Convert a TensorFlow tensor, PyTorch tensor, Numpy array or python list to a Numpy array.
| to_numpy | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def __init_subclass__(cls) -> None:
"""Register subclasses as pytree nodes.
This is necessary to synchronize gradients when using `torch.nn.parallel.DistributedDataParallel` with
`static_graph=True` with modules that output `ModelOutput` subclasses.
"""
if is_torch_available():
... | Register subclasses as pytree nodes.
This is necessary to synchronize gradients when using `torch.nn.parallel.DistributedDataParallel` with
`static_graph=True` with modules that output `ModelOutput` subclasses.
| __init_subclass__ | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def __post_init__(self):
"""Check the ModelOutput dataclass.
Only occurs if @dataclass decorator has been used.
"""
class_fields = fields(self)
# Safety and consistency checks
if not len(class_fields):
raise ValueError(f"{self.__class__.__name__} has no fiel... | Check the ModelOutput dataclass.
Only occurs if @dataclass decorator has been used.
| __post_init__ | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def can_return_loss(model_class):
"""
Check if a given model can return loss.
Args:
model_class (`type`): The class of the model.
"""
framework = infer_framework(model_class)
if framework == "tf":
signature = inspect.signature(model_class.call) # TensorFlow models
elif fram... |
Check if a given model can return loss.
Args:
model_class (`type`): The class of the model.
| can_return_loss | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def find_labels(model_class):
"""
Find the labels used by a given model.
Args:
model_class (`type`): The class of the model.
"""
model_name = model_class.__name__
framework = infer_framework(model_class)
if framework == "tf":
signature = inspect.signature(model_class.call) ... |
Find the labels used by a given model.
Args:
model_class (`type`): The class of the model.
| find_labels | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def flatten_dict(d: MutableMapping, parent_key: str = "", delimiter: str = "."):
"""Flatten a nested dict into a single level dict."""
def _flatten_dict(d, parent_key="", delimiter="."):
for k, v in d.items():
key = str(parent_key) + delimiter + str(k) if parent_key else k
if v ... | Flatten a nested dict into a single level dict. | flatten_dict | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
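The nested-generator body shown in the `flatten_dict` row can be sketched as a direct recursive version with the same semantics:

```python
from collections.abc import MutableMapping

def flatten_dict(d, parent_key="", delimiter="."):
    """Sketch: flatten a nested dict into a single-level dict with joined keys."""
    items = {}
    for k, v in d.items():
        key = f"{parent_key}{delimiter}{k}" if parent_key else str(k)
        if isinstance(v, MutableMapping):
            items.update(flatten_dict(v, key, delimiter))
        else:
            items[key] = v
    return items
```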
def transpose(array, axes=None):
"""
Framework-agnostic version of `numpy.transpose` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
"""
if is_numpy_array(array):
return np.transpose(array, axes=axes)
elif is_torch_tensor(array):
return array.T if axes is ... |
Framework-agnostic version of `numpy.transpose` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
| transpose | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
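All of the framework-agnostic helpers in this file (`transpose`, `reshape`, `squeeze`, `expand_dims`, `tensor_size`) share one dispatch pattern: test the tensor type, then call the framework-native op. A sketch showing only the NumPy branch — the name `transpose_sketch` is hypothetical, and the real helper adds torch (`array.T` / `array.permute(*axes)`), TensorFlow, and JAX branches:

```python
import numpy as np

def transpose_sketch(array, axes=None):
    # Dispatch on tensor type; only the NumPy branch is implemented here.
    if isinstance(array, np.ndarray):
        return np.transpose(array, axes=axes)
    # The library version raises a similar error for unsupported inputs.
    raise ValueError(f"Type not supported for transpose: {type(array)}.")

x = np.arange(6).reshape(2, 3)
print(transpose_sketch(x).shape)  # (3, 2)
```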
def reshape(array, newshape):
"""
Framework-agnostic version of `numpy.reshape` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
"""
if is_numpy_array(array):
return np.reshape(array, newshape)
elif is_torch_tensor(array):
return array.reshape(*newshape)
... |
Framework-agnostic version of `numpy.reshape` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
| reshape | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def squeeze(array, axis=None):
"""
Framework-agnostic version of `numpy.squeeze` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
"""
if is_numpy_array(array):
return np.squeeze(array, axis=axis)
elif is_torch_tensor(array):
return array.squeeze() if axis i... |
Framework-agnostic version of `numpy.squeeze` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
| squeeze | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def expand_dims(array, axis):
"""
Framework-agnostic version of `numpy.expand_dims` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
"""
if is_numpy_array(array):
return np.expand_dims(array, axis)
elif is_torch_tensor(array):
return array.unsqueeze(dim=axi... |
Framework-agnostic version of `numpy.expand_dims` that will work on torch/TensorFlow/Jax tensors as well as NumPy
arrays.
| expand_dims | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
def tensor_size(array):
"""
Framework-agnostic version of `numpy.size` that will work on torch/TensorFlow/Jax tensors as well as NumPy arrays.
"""
if is_numpy_array(array):
return np.size(array)
elif is_torch_tensor(array):
return array.numel()
elif is_tf_tensor(array):
i... |
Framework-agnostic version of `numpy.size` that will work on torch/TensorFlow/Jax tensors as well as NumPy arrays.
| tensor_size | python | huggingface/transformers | src/transformers/utils/generic.py | https://github.com/huggingface/transformers/blob/master/src/transformers/utils/generic.py | Apache-2.0 |
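To round out the remaining rows, the NumPy semantics that each framework branch mirrors — torch uses `.unsqueeze`/`.numel`, TensorFlow uses `tf.expand_dims`/`tf.size`, and so on. A small illustrative sketch (variable names are my own):

```python
import numpy as np

# NumPy equivalents of squeeze / expand_dims / tensor_size.
x = np.zeros((1, 3, 1))

squeezed = np.squeeze(x, axis=0)        # drop the leading length-1 axis
expanded = np.expand_dims(squeezed, 0)  # add it back
print(squeezed.shape, expanded.shape, np.size(x))
# (3, 1) (1, 3, 1) 3
```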