Dataset columns (one row per extracted class):
| id (int64) | repository_name (string) | file_path (string) | class_name (string) | human_written_code (string) | class_skeleton (string, nullable) | total_program_units (int64) | total_doc_str (int64) | AvgCountLine (float64) | AvgCountLineBlank (float64) | AvgCountLineCode (float64) | AvgCountLineComment (float64) | AvgCyclomatic (float64) | CommentToCodeRatio (float64) | CountClassBase (float64) | CountClassCoupled (float64) | CountClassCoupledModified (float64) | CountClassDerived (float64) | CountDeclInstanceMethod (float64) | CountDeclInstanceVariable (float64) | CountDeclMethod (float64) | CountDeclMethodAll (float64) | CountLine (float64) | CountLineBlank (float64) | CountLineCode (float64) | CountLineCodeDecl (float64) | CountLineCodeExe (float64) | CountLineComment (float64) | CountStmt (float64) | CountStmtDecl (float64) | CountStmtExe (float64) | MaxCyclomatic (float64) | MaxInheritanceTree (float64) | MaxNesting (float64) | SumCyclomatic (float64) |
6,400 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/optimization.py | transformers.optimization.Adafactor |
import torch
import math
from torch.optim import Optimizer
class Adafactor(Optimizer):
"""
    AdaFactor PyTorch implementation that can be used as a drop-in replacement for Adam. Original fairseq code:
    https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py
    Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://huggingface.co/papers/1804.04235. Note that
    this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and
    `warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and
    `relative_step=False`.
Arguments:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*):
The external learning rate.
eps (`tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`):
Regularization constants for square gradient and parameter scale respectively
clip_threshold (`float`, *optional*, defaults to 1.0):
Threshold of root mean square of final gradient update
decay_rate (`float`, *optional*, defaults to -0.8):
Coefficient used to compute running averages of square
beta1 (`float`, *optional*):
Coefficient used for computing running averages of gradient
weight_decay (`float`, *optional*, defaults to 0.0):
Weight decay (L2 penalty)
scale_parameter (`bool`, *optional*, defaults to `True`):
If True, learning rate is scaled by root mean square
relative_step (`bool`, *optional*, defaults to `True`):
If True, time-dependent learning rate is computed instead of external learning rate
warmup_init (`bool`, *optional*, defaults to `False`):
Time-dependent learning rate computation depends on whether warm-up initialization is being used
    This implementation handles low-precision (FP16, bfloat16) values, but it has not been thoroughly tested.
    Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3):
    - Training without LR warmup or `clip_threshold` is not recommended.
    - Use a scheduled LR warm-up to a fixed LR.
    - Use `clip_threshold=1.0` (https://huggingface.co/papers/1804.04235).
    - Disable relative updates (`relative_step=False`).
    - Use `scale_parameter=False`.
    - Do not combine additional optimizer operations, such as gradient clipping, with Adafactor.
Example:
```python
Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3)
```
Others reported the following combination to work well:
```python
Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
```
    When using `lr=None` with [`Trainer`] you will most likely need to use the [`~optimization.AdafactorSchedule`]
    scheduler, as follows:
```python
from transformers.optimization import Adafactor, AdafactorSchedule
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)
trainer = Trainer(..., optimizers=(optimizer, lr_scheduler))
```
Usage:
```python
# replace AdamW with Adafactor
optimizer = Adafactor(
model.parameters(),
lr=1e-3,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False,
)
```"""
def __init__(self, params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False):
if lr is not None and relative_step:
raise ValueError('Cannot combine manual `lr` and `relative_step=True` options')
if warmup_init and (not relative_step):
raise ValueError('`warmup_init=True` requires `relative_step=True`')
defaults = {'lr': lr, 'eps': eps, 'clip_threshold': clip_threshold, 'decay_rate': decay_rate, 'beta1': beta1, 'weight_decay': weight_decay, 'scale_parameter': scale_parameter, 'relative_step': relative_step, 'warmup_init': warmup_init}
super().__init__(params, defaults)
@staticmethod
def _get_lr(param_group, param_state):
rel_step_sz = param_group['lr']
if param_group['relative_step']:
min_step = 1e-06 * param_state['step'] if param_group['warmup_init'] else 0.01
rel_step_sz = min(min_step, 1.0 / math.sqrt(param_state['step']))
param_scale = 1.0
if param_group['scale_parameter']:
param_scale = max(param_group['eps'][1], param_state['RMS'])
return param_scale * rel_step_sz
@staticmethod
def _get_options(param_group, param_shape):
factored = len(param_shape) >= 2
use_first_moment = param_group['beta1'] is not None
return (factored, use_first_moment)
@staticmethod
def _rms(tensor):
return tensor.norm(2) / tensor.numel() ** 0.5
@staticmethod
def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)
c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
return torch.mul(r_factor, c_factor)
@torch.no_grad()
def step(self, closure=None):
"""
Performs a single optimization step
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
if grad.dtype in {torch.float16, torch.bfloat16}:
grad = grad.float()
if grad.is_sparse:
raise RuntimeError('Adafactor does not support sparse gradients.')
state = self.state[p]
grad_shape = grad.shape
factored, use_first_moment = self._get_options(group, grad_shape)
if len(state) == 0:
state['step'] = 0
if use_first_moment:
state['exp_avg'] = torch.zeros_like(grad)
if factored:
state['exp_avg_sq_row'] = torch.zeros(grad_shape[:-1]).to(grad)
state['exp_avg_sq_col'] = torch.zeros(grad_shape[:-2] + grad_shape[-1:]).to(grad)
else:
state['exp_avg_sq'] = torch.zeros_like(grad)
state['RMS'] = 0
else:
if use_first_moment:
state['exp_avg'] = state['exp_avg'].to(grad)
if factored:
state['exp_avg_sq_row'] = state['exp_avg_sq_row'].to(grad)
state['exp_avg_sq_col'] = state['exp_avg_sq_col'].to(grad)
else:
state['exp_avg_sq'] = state['exp_avg_sq'].to(grad)
p_data_fp32 = p
if p.dtype in {torch.float16, torch.bfloat16}:
p_data_fp32 = p_data_fp32.float()
state['step'] += 1
state['RMS'] = self._rms(p_data_fp32)
lr = self._get_lr(group, state)
beta2t = 1.0 - math.pow(state['step'], group['decay_rate'])
update = grad ** 2 + group['eps'][0]
if factored:
exp_avg_sq_row = state['exp_avg_sq_row']
exp_avg_sq_col = state['exp_avg_sq_col']
exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=1.0 - beta2t)
exp_avg_sq_col.mul_(beta2t).add_(update.mean(dim=-2), alpha=1.0 - beta2t)
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
update.mul_(grad)
else:
exp_avg_sq = state['exp_avg_sq']
exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t)
update = exp_avg_sq.rsqrt().mul_(grad)
update.div_((self._rms(update) / group['clip_threshold']).clamp_(min=1.0))
update.mul_(lr)
if use_first_moment:
exp_avg = state['exp_avg']
exp_avg.mul_(group['beta1']).add_(update, alpha=1 - group['beta1'])
update = exp_avg
if group['weight_decay'] != 0:
p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * lr)
p_data_fp32.add_(-update)
if p.dtype in {torch.float16, torch.bfloat16}:
p.copy_(p_data_fp32)
return loss
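The factored second-moment trick in `_approx_sq_grad` above is what gives Adafactor its sublinear memory cost: for a matrix parameter it stores only the row and column means of the squared-gradient accumulator and reconstructs a rank-1 approximation on the fly. A minimal NumPy sketch of that reconstruction (the function name and shapes are illustrative, not part of the transformers API):

```python
import numpy as np

def factored_second_moment(v):
    """Rank-1 reconstruction of a squared-gradient matrix from its
    row and column means, mirroring Adafactor's factored state."""
    row = v.mean(axis=-1)                    # analogue of exp_avg_sq_row
    col = v.mean(axis=-2)                    # analogue of exp_avg_sq_col
    # Reconstruct: outer(row, col) / mean(row). The optimizer applies
    # rsqrt to the two factors instead, but the approximation is the same.
    return np.outer(row, col) / row.mean()

v = np.array([[1.0, 2.0], [3.0, 4.0]])
approx = factored_second_moment(v)
# The approximation preserves the row and column means of the original,
# while storing only n + m numbers instead of n * m.
print(np.allclose(approx.mean(axis=-1), v.mean(axis=-1)))  # True
print(np.allclose(approx.mean(axis=-2), v.mean(axis=-2)))  # True
```

This is why `_get_options` only factors tensors with two or more dimensions: vectors and scalars gain nothing from the decomposition and keep a full `exp_avg_sq`.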
|
class Adafactor(Optimizer):
'''
    AdaFactor PyTorch implementation that can be used as a drop-in replacement for Adam. Original fairseq code:
    https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py
    Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://huggingface.co/papers/1804.04235. Note that
    this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and
    `warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False` and
    `relative_step=False`.
Arguments:
params (`Iterable[nn.parameter.Parameter]`):
Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (`float`, *optional*):
The external learning rate.
eps (`tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`):
Regularization constants for square gradient and parameter scale respectively
clip_threshold (`float`, *optional*, defaults to 1.0):
Threshold of root mean square of final gradient update
decay_rate (`float`, *optional*, defaults to -0.8):
Coefficient used to compute running averages of square
beta1 (`float`, *optional*):
Coefficient used for computing running averages of gradient
weight_decay (`float`, *optional*, defaults to 0.0):
Weight decay (L2 penalty)
scale_parameter (`bool`, *optional*, defaults to `True`):
If True, learning rate is scaled by root mean square
relative_step (`bool`, *optional*, defaults to `True`):
If True, time-dependent learning rate is computed instead of external learning rate
warmup_init (`bool`, *optional*, defaults to `False`):
Time-dependent learning rate computation depends on whether warm-up initialization is being used
    This implementation handles low-precision (FP16, bfloat16) values, but it has not been thoroughly tested.
    Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3):
    - Training without LR warmup or `clip_threshold` is not recommended.
    - Use a scheduled LR warm-up to a fixed LR.
    - Use `clip_threshold=1.0` (https://huggingface.co/papers/1804.04235).
    - Disable relative updates (`relative_step=False`).
    - Use `scale_parameter=False`.
    - Do not combine additional optimizer operations, such as gradient clipping, with Adafactor.
Example:
```python
Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3)
```
Others reported the following combination to work well:
```python
Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
```
    When using `lr=None` with [`Trainer`] you will most likely need to use the [`~optimization.AdafactorSchedule`]
    scheduler, as follows:
```python
from transformers.optimization import Adafactor, AdafactorSchedule
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)
trainer = Trainer(..., optimizers=(optimizer, lr_scheduler))
```
Usage:
```python
# replace AdamW with Adafactor
optimizer = Adafactor(
model.parameters(),
lr=1e-3,
eps=(1e-30, 1e-3),
clip_threshold=1.0,
decay_rate=-0.8,
beta1=None,
weight_decay=0.0,
relative_step=False,
scale_parameter=False,
warmup_init=False,
)
```'''
def __init__(self, params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False):
pass
@staticmethod
def _get_lr(param_group, param_state):
pass
@staticmethod
def _get_options(param_group, param_shape):
pass
@staticmethod
def _rms(tensor):
pass
@staticmethod
def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
pass
@torch.no_grad()
def step(self, closure=None):
'''
Performs a single optimization step
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
'''
pass
| total_program_units: 12 | total_doc_str: 2 | AvgCountLine: 24 | AvgCountLineBlank: 3 | AvgCountLineCode: 19 | AvgCountLineComment: 2 | AvgCyclomatic: 5 | CommentToCodeRatio: 0.67 | CountClassBase: 1 | CountClassCoupled: 3 | CountClassCoupledModified: 0 | CountClassDerived: 0 | CountDeclInstanceMethod: 2 | CountDeclInstanceVariable: 0 | CountDeclMethod: 6 | CountDeclMethodAll: 6 | CountLine: 239 | CountLineBlank: 40 | CountLineCode: 120 | CountLineCodeDecl: 47 | CountLineCodeExe: 96 | CountLineComment: 80 | CountStmt: 89 | CountStmtDecl: 30 | CountStmtExe: 82 | MaxCyclomatic: 17 | MaxInheritanceTree: 1 | MaxNesting: 4 | SumCyclomatic: 27 |
6,401 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/optimization.py | transformers.optimization.AdafactorSchedule |
from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau
class AdafactorSchedule(LambdaLR):
"""
Since [`~optimization.Adafactor`] performs its own scheduling, if the training loop relies on a scheduler (e.g.,
for logging), this class creates a proxy object that retrieves the current lr values from the optimizer.
It returns `initial_lr` during startup and the actual `lr` during stepping.
"""
def __init__(self, optimizer, initial_lr=0.0):
def lr_lambda(_):
return initial_lr
for group in optimizer.param_groups:
group['initial_lr'] = initial_lr
super().__init__(optimizer, lr_lambda)
for group in optimizer.param_groups:
del group['initial_lr']
def get_lr(self):
opt = self.optimizer
lrs = [opt._get_lr(group, opt.state[group['params'][0]]) for group in opt.param_groups if group['params'][0].grad is not None]
if len(lrs) == 0:
lrs = self.base_lrs
return lrs
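The values AdafactorSchedule reads back via `_get_lr` follow the relative-step rule defined in the optimizer: capped at 0.01 (or ramped up from 1e-6 per step when `warmup_init=True`), decaying as 1/sqrt(step), and optionally scaled by the parameter RMS. A standalone sketch of that rule, assuming `relative_step=True` (the function itself is illustrative, not part of the library):

```python
import math

def relative_step_lr(step, warmup_init=False, scale_parameter=False,
                     param_rms=1.0, eps2=1e-3):
    """Mirror of Adafactor._get_lr for the relative_step=True case."""
    # Cap: a fixed 1e-2 ceiling, or a linear 1e-6-per-step warm-up ramp.
    min_step = 1e-6 * step if warmup_init else 1e-2
    rel_step_sz = min(min_step, 1.0 / math.sqrt(step))
    # Optionally scale by the parameter's root mean square (floored at eps2).
    param_scale = max(eps2, param_rms) if scale_parameter else 1.0
    return param_scale * rel_step_sz

# Without warm-up the rate sits at 1e-2 until 1/sqrt(step) drops below it.
print(relative_step_lr(1))       # 0.01
print(relative_step_lr(40000))   # 0.005
```

This also makes the proxy behavior of `get_lr` above concrete: the scheduler does not drive the learning rate, it merely recomputes what the optimizer would use at the current step.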
|
class AdafactorSchedule(LambdaLR):
'''
Since [`~optimization.Adafactor`] performs its own scheduling, if the training loop relies on a scheduler (e.g.,
for logging), this class creates a proxy object that retrieves the current lr values from the optimizer.
It returns `initial_lr` during startup and the actual `lr` during stepping.
'''
def __init__(self, optimizer, initial_lr=0.0):
pass
def lr_lambda(_):
pass
def get_lr(self):
pass
| total_program_units: 4 | total_doc_str: 1 | AvgCountLine: 7 | AvgCountLineBlank: 0 | AvgCountLineCode: 7 | AvgCountLineComment: 0 | AvgCyclomatic: 2 | CommentToCodeRatio: 0.32 | CountClassBase: 1 | CountClassCoupled: 1 | CountClassCoupledModified: 0 | CountClassDerived: 0 | CountDeclInstanceMethod: 2 | CountDeclInstanceVariable: 0 | CountDeclMethod: 2 | CountDeclMethodAll: 14 | CountLine: 28 | CountLineBlank: 4 | CountLineCode: 19 | CountLineCodeDecl: 7 | CountLineCodeExe: 15 | CountLineComment: 6 | CountStmt: 15 | CountStmtDecl: 7 | CountStmtExe: 11 | MaxCyclomatic: 3 | MaxInheritanceTree: 2 | MaxNesting: 1 | SumCyclomatic: 6 |
6,402 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/audio_classification.py | transformers.pipelines.audio_classification.AudioClassificationPipeline |
import numpy as np
import requests
from typing import Any, Union
from .audio_utils import ffmpeg_read
from .base import Pipeline, build_pipeline_init_args
from ..models.auto.modeling_auto import MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES
from ..utils import add_end_docstrings, is_torch_available, is_torchaudio_available, is_torchcodec_available, logging
@add_end_docstrings(build_pipeline_init_args(has_feature_extractor=True))
class AudioClassificationPipeline(Pipeline):
"""
Audio classification pipeline using any `AutoModelForAudioClassification`. This pipeline predicts the class of a
raw waveform or an audio file. In case of an audio file, ffmpeg should be installed to support multiple audio
formats.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="superb/wav2vec2-base-superb-ks")
>>> classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
[{'score': 0.997, 'label': '_unknown_'}, {'score': 0.002, 'label': 'left'}, {'score': 0.0, 'label': 'yes'}, {'score': 0.0, 'label': 'down'}, {'score': 0.0, 'label': 'stop'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"audio-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=audio-classification).
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = True
_load_tokenizer = False
def __init__(self, *args, **kwargs):
        # Default to the 5 highest-scoring labels; an explicit `top_k=None` keeps all labels.
        if 'top_k' not in kwargs:
            kwargs['top_k'] = 5
super().__init__(*args, **kwargs)
self.check_model_type(MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES)
def __call__(self, inputs: Union[np.ndarray, bytes, str, dict], **kwargs: Any) -> list[dict[str, Any]]:
"""
Classify the sequence(s) given as inputs. See the [`AutomaticSpeechRecognitionPipeline`] documentation for more
information.
Args:
inputs (`np.ndarray` or `bytes` or `str` or `dict`):
                The input is either:
- `str` that is the filename of the audio file, the file will be read at the correct sampling rate
to get the waveform using *ffmpeg*. This requires *ffmpeg* to be installed on the system.
- `bytes` it is supposed to be the content of an audio file and is interpreted by *ffmpeg* in the
same way.
- (`np.ndarray` of shape (n, ) of type `np.float32` or `np.float64`)
Raw audio at the correct sampling rate (no further check will be done)
- `dict` form can be used to pass raw audio sampled at arbitrary `sampling_rate` and let this
pipeline do the resampling. The dict must be either be in the format `{"sampling_rate": int,
"raw": np.array}`, or `{"sampling_rate": int, "array": np.array}`, where the key `"raw"` or
`"array"` is used to denote the raw audio waveform.
top_k (`int`, *optional*, defaults to None):
The number of top labels that will be returned by the pipeline. If the provided number is `None` or
higher than the number of labels available in the model configuration, it will default to the number of
labels.
            function_to_apply (`str`, *optional*, defaults to `"softmax"`):
The function to apply to the model output. By default, the pipeline will apply the softmax function to
the output of the model. Valid options: ["softmax", "sigmoid", "none"]. Note that passing Python's
built-in `None` will default to "softmax", so you need to pass the string "none" to disable any
post-processing.
Return:
A list of `dict` with the following keys:
- **label** (`str`) -- The label predicted.
- **score** (`float`) -- The corresponding probability.
"""
return super().__call__(inputs, **kwargs)
def _sanitize_parameters(self, top_k=None, function_to_apply=None, **kwargs):
postprocess_params = {}
if top_k is None:
postprocess_params['top_k'] = self.model.config.num_labels
else:
if top_k > self.model.config.num_labels:
top_k = self.model.config.num_labels
postprocess_params['top_k'] = top_k
if function_to_apply is not None:
if function_to_apply not in ['softmax', 'sigmoid', 'none']:
raise ValueError(f"Invalid value for `function_to_apply`: {function_to_apply}. Valid options are ['softmax', 'sigmoid', 'none']")
postprocess_params['function_to_apply'] = function_to_apply
else:
postprocess_params['function_to_apply'] = 'softmax'
return ({}, {}, postprocess_params)
def preprocess(self, inputs):
if isinstance(inputs, str):
if inputs.startswith('http://') or inputs.startswith('https://'):
inputs = requests.get(inputs).content
else:
with open(inputs, 'rb') as f:
inputs = f.read()
if isinstance(inputs, bytes):
inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)
if is_torch_available():
import torch
if isinstance(inputs, torch.Tensor):
inputs = inputs.cpu().numpy()
if is_torchcodec_available():
import torch
import torchcodec
if isinstance(inputs, torchcodec.decoders.AudioDecoder):
_audio_samples = inputs.get_all_samples()
_array = _audio_samples.data
inputs = {'array': _array, 'sampling_rate': _audio_samples.sample_rate}
if isinstance(inputs, dict):
inputs = inputs.copy()
if not ('sampling_rate' in inputs and ('raw' in inputs or 'array' in inputs)):
raise ValueError('When passing a dictionary to AudioClassificationPipeline, the dict needs to contain a "raw" key containing the numpy array or torch tensor representing the audio and a "sampling_rate" key, containing the sampling_rate associated with that array')
_inputs = inputs.pop('raw', None)
if _inputs is None:
inputs.pop('path', None)
_inputs = inputs.pop('array', None)
in_sampling_rate = inputs.pop('sampling_rate')
inputs = _inputs
if in_sampling_rate != self.feature_extractor.sampling_rate:
import torch
if is_torchaudio_available():
from torchaudio import functional as F
else:
raise ImportError('torchaudio is required to resample audio samples in AudioClassificationPipeline. The torchaudio package can be installed through: `pip install torchaudio`.')
inputs = F.resample(torch.from_numpy(inputs) if isinstance(inputs, np.ndarray) else inputs, in_sampling_rate, self.feature_extractor.sampling_rate).numpy()
if not isinstance(inputs, np.ndarray):
raise TypeError('We expect a numpy ndarray or torch tensor as input')
if len(inputs.shape) != 1:
raise ValueError('We expect a single channel audio input for AudioClassificationPipeline')
processed = self.feature_extractor(inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors='pt')
if self.dtype is not None:
processed = processed.to(dtype=self.dtype)
return processed
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, top_k=5, function_to_apply='softmax'):
if function_to_apply == 'softmax':
probs = model_outputs.logits[0].softmax(-1)
elif function_to_apply == 'sigmoid':
probs = model_outputs.logits[0].sigmoid()
else:
probs = model_outputs.logits[0]
scores, ids = probs.topk(top_k)
scores = scores.tolist()
ids = ids.tolist()
labels = [{'score': score, 'label': self.model.config.id2label[_id]} for score, _id in zip(scores, ids)]
return labels
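The `postprocess` method above reduces to applying `function_to_apply` to the logits and keeping the `top_k` highest scores. A NumPy re-implementation of that step (the label map and function name are made up for illustration; the real pipeline operates on `model_outputs.logits`):

```python
import numpy as np

def postprocess_logits(logits, id2label, top_k=2, function_to_apply="softmax"):
    """Apply softmax / sigmoid / identity, then take the top_k labels."""
    if function_to_apply == "softmax":
        e = np.exp(logits - logits.max())   # shift for numerical stability
        probs = e / e.sum()
    elif function_to_apply == "sigmoid":
        probs = 1.0 / (1.0 + np.exp(-logits))
    else:                                   # "none": raw logits
        probs = logits
    ids = np.argsort(probs)[::-1][:top_k]   # indices of the largest scores
    return [{"score": float(probs[i]), "label": id2label[i]} for i in ids]

id2label = {0: "yes", 1: "no", 2: "_unknown_"}
out = postprocess_logits(np.array([1.0, 3.0, 2.0]), id2label)
print([d["label"] for d in out])  # ['no', '_unknown_']
```

Note the same asymmetry as in the pipeline: softmax scores sum to 1 across labels, while sigmoid scores are independent per label, which is why `"sigmoid"` is the right choice for multi-label audio classification.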
| class_skeleton: null | total_program_units: 8 | total_doc_str: 2 | AvgCountLine: 25 | AvgCountLineBlank: 3 | AvgCountLineCode: 15 | AvgCountLineComment: 6 | AvgCyclomatic: 4 | CommentToCodeRatio: 0.59 | CountClassBase: 1 | CountClassCoupled: 8 | CountClassCoupledModified: 0 | CountClassDerived: 0 | CountDeclInstanceMethod: 6 | CountDeclInstanceVariable: 0 | CountDeclMethod: 6 | CountDeclMethodAll: 48 | CountLine: 178 | CountLineBlank: 31 | CountLineCode: 93 | CountLineCodeDecl: 22 | CountLineCodeExe: 80 | CountLineComment: 55 | CountStmt: 69 | CountStmtDecl: 17 | CountStmtExe: 60 | MaxCyclomatic: 12 | MaxInheritanceTree: 6 | MaxNesting: 3 | SumCyclomatic: 26 |
6,403 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/automatic_speech_recognition.py | transformers.pipelines.automatic_speech_recognition.AutomaticSpeechRecognitionPipeline |
import numpy as np
import requests
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Optional, Union
from .audio_utils import ffmpeg_read
from .base import ChunkPipeline
from ..generation import GenerationConfig
from ..models.auto.modeling_auto import MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES
from ..tokenization_utils import PreTrainedTokenizer
from ..utils import is_torch_available, is_torchaudio_available, is_torchcodec_available, logging
logger = logging.get_logger(__name__)
class AutomaticSpeechRecognitionPipeline(ChunkPipeline):
"""
Pipeline that aims at extracting spoken text contained within some audio.
    The input can be either a raw waveform or an audio file. In the case of an audio file, ffmpeg should be installed
    to support multiple audio formats.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 5
Example:
```python
>>> from transformers import pipeline
>>> transcriber = pipeline(model="openai/whisper-base")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
{'text': ' He hoped there would be stew for dinner, turnips and carrots and bruised potatoes and fat mutton pieces to be ladled out in thick, peppered flour-fatten sauce.'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
Arguments:
model ([`PreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`].
feature_extractor ([`SequenceFeatureExtractor`]):
The feature extractor that will be used by the pipeline to encode waveform for the model.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
decoder (`pyctcdecode.BeamSearchDecoderCTC`, *optional*):
[PyCTCDecode's
BeamSearchDecoderCTC](https://github.com/kensho-technologies/pyctcdecode/blob/2fd33dc37c4111417e08d89ccd23d28e9b308d19/pyctcdecode/decoder.py#L180)
can be passed for language model boosted decoding. See [`Wav2Vec2ProcessorWithLM`] for more information.
chunk_length_s (`float`, *optional*, defaults to 0):
The input length for in each chunk. If `chunk_length_s = 0` then chunking is disabled (default).
<Tip>
For more information on how to effectively use `chunk_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
stride_length_s (`float`, *optional*, defaults to `chunk_length_s / 6`):
The length of stride on the left and right of each chunk. Used only with `chunk_length_s > 0`. This enables
the model to *see* more context and infer letters better than without this context but the pipeline
discards the stride bits at the end to make the final reconstitution as perfect as possible.
<Tip>
For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
device (Union[`int`, `torch.device`], *optional*):
Device ordinal for CPU/GPU supports. Setting this to `None` will leverage CPU, a positive will run the
model on the associated CUDA device id.
"""
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = False
_load_feature_extractor = True
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256, num_beams=5)
def __init__(self, model: 'PreTrainedModel', feature_extractor: Optional[Union['SequenceFeatureExtractor', str]]=None, tokenizer: Optional[PreTrainedTokenizer]=None, decoder: Optional[Union['BeamSearchDecoderCTC', str]]=None, device: Optional[Union[int, 'torch.device']]=None, **kwargs):
if model.config.model_type == 'whisper':
self.type = 'seq2seq_whisper'
elif model.__class__.__name__ in MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES.values():
self.type = 'seq2seq'
elif feature_extractor._processor_class and feature_extractor._processor_class.endswith('WithLM') and (decoder is not None):
self.decoder = decoder
self.type = 'ctc_with_lm'
else:
self.type = 'ctc'
super().__init__(model, tokenizer, feature_extractor, device=device, **kwargs)
def __call__(self, inputs: Union[np.ndarray, bytes, str, dict], **kwargs: Any) -> list[dict[str, Any]]:
"""
Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
documentation for more information.
Args:
inputs (`np.ndarray` or `bytes` or `str` or `dict`):
                The input is either:
- `str` that is either the filename of a local audio file, or a public URL address to download the
audio file. The file will be read at the correct sampling rate to get the waveform using
*ffmpeg*. This requires *ffmpeg* to be installed on the system.
- `bytes` it is supposed to be the content of an audio file and is interpreted by *ffmpeg* in the
same way.
- (`np.ndarray` of shape (n, ) of type `np.float32` or `np.float64`)
Raw audio at the correct sampling rate (no further check will be done)
- `dict` form can be used to pass raw audio sampled at arbitrary `sampling_rate` and let this
pipeline do the resampling. The dict must be in the format `{"sampling_rate": int, "raw":
                  np.array}` with optionally a `"stride": (left: int, right: int)` that can ask the pipeline to
treat the first `left` samples and last `right` samples to be ignored in decoding (but used at
inference to provide more context to the model). Only use `stride` with CTC models.
return_timestamps (*optional*, `str` or `bool`):
Only available for pure CTC models (Wav2Vec2, HuBERT, etc) and the Whisper model. Not available for
other sequence-to-sequence models.
For CTC models, timestamps can take one of two formats:
- `"char"`: the pipeline will return timestamps along the text for every character in the text. For
instance, if you get `[{"text": "h", "timestamp": (0.5, 0.6)}, {"text": "i", "timestamp": (0.7,
0.9)}]`, then it means the model predicts that the letter "h" was spoken after `0.5` and before
`0.6` seconds.
- `"word"`: the pipeline will return timestamps along the text for every word in the text. For
instance, if you get `[{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp":
(1.0, 1.5)}]`, then it means the model predicts that the word "hi" was spoken after `0.5` and
before `0.9` seconds.
For the Whisper model, timestamps can take one of two formats:
- `"word"`: same as above for word-level CTC timestamps. Word-level timestamps are predicted
through the *dynamic-time warping (DTW)* algorithm, an approximation to word-level timestamps
by inspecting the cross-attention weights.
- `True`: the pipeline will return timestamps along the text for *segments* of words in the text.
For instance, if you get `[{"text": " Hi there!", "timestamp": (0.5, 1.5)}]`, then it means the
model predicts that the segment "Hi there!" was spoken after `0.5` and before `1.5` seconds.
Note that a segment of text refers to a sequence of one or more words, rather than individual
words as with word-level timestamps.
generate_kwargs (`dict`, *optional*):
The dictionary of ad-hoc parametrization of `generate_config` to be used for the generation call. For a
complete overview of generate, check the [following
guide](https://huggingface.co/docs/transformers/en/main_classes/text_generation).
Return:
`Dict`: A dictionary with the following keys:
- **text** (`str`): The recognized text.
            - **chunks** (*optional*, `list[dict]`)
When using `return_timestamps`, the `chunks` will become a list containing all the various text
chunks identified by the model, *e.g.* `[{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text":
"there", "timestamp": (1.0, 1.5)}]`. The original full text can roughly be recovered by doing
`"".join(chunk["text"] for chunk in output["chunks"])`.
"""
return super().__call__(inputs, **kwargs)
def _sanitize_parameters(self, chunk_length_s=None, stride_length_s=None, ignore_warning=None, decoder_kwargs=None, return_timestamps=None, return_language=None, **generate_kwargs):
preprocess_params = {}
forward_params = {}
postprocess_params = {}
if chunk_length_s is not None:
if self.type in ['seq2seq', 'seq2seq_whisper'] and (not ignore_warning):
type_warning = 'Using `chunk_length_s` is very experimental with seq2seq models. The results will not necessarily be entirely accurate and will have caveats. More information: https://github.com/huggingface/transformers/pull/20104. Ignore this warning with pipeline(..., ignore_warning=True).'
if self.type == 'seq2seq_whisper':
type_warning += " To use Whisper for long-form transcription, use rather the model's `generate` method directly as the model relies on it's own chunking mechanism (cf. Whisper original paper, section 3.8. Long-form Transcription)."
logger.warning(type_warning)
preprocess_params['chunk_length_s'] = chunk_length_s
if stride_length_s is not None:
preprocess_params['stride_length_s'] = stride_length_s
if 'generate_kwargs' in generate_kwargs:
forward_params.update(generate_kwargs.pop('generate_kwargs'))
forward_params.update(generate_kwargs)
if getattr(self, 'assistant_model', None) is not None:
forward_params['assistant_model'] = self.assistant_model
if getattr(self, 'assistant_tokenizer', None) is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
if decoder_kwargs is not None:
postprocess_params['decoder_kwargs'] = decoder_kwargs
if return_language is not None:
if self.type != 'seq2seq_whisper':
raise ValueError('Only Whisper can return language for now.')
postprocess_params['return_language'] = return_language
if hasattr(self, 'generation_config') and hasattr(self.generation_config, 'return_timestamps'):
return_timestamps = return_timestamps or self.generation_config.return_timestamps
if return_timestamps is not None:
if self.type == 'seq2seq' and return_timestamps:
raise ValueError('We cannot return_timestamps yet on non-CTC models apart from Whisper!')
if self.type == 'ctc_with_lm' and return_timestamps != 'word':
raise ValueError("CTC with LM can only predict word level timestamps, set `return_timestamps='word'`")
if self.type == 'ctc' and return_timestamps not in ['char', 'word']:
raise ValueError("CTC can either predict character level timestamps, or word level timestamps. Set `return_timestamps='char'` or `return_timestamps='word'` as required.")
if self.type == 'seq2seq_whisper' and return_timestamps == 'char':
raise ValueError("Whisper cannot return `char` timestamps, only word level or segment level timestamps. Use `return_timestamps='word'` or `return_timestamps=True` respectively.")
forward_params['return_timestamps'] = return_timestamps
postprocess_params['return_timestamps'] = return_timestamps
return (preprocess_params, forward_params, postprocess_params)
def preprocess(self, inputs, chunk_length_s=0, stride_length_s=None):
if isinstance(inputs, str):
if inputs.startswith('http://') or inputs.startswith('https://'):
inputs = requests.get(inputs).content
else:
with open(inputs, 'rb') as f:
inputs = f.read()
if isinstance(inputs, bytes):
inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)
stride = None
extra = {}
if is_torch_available():
import torch
if isinstance(inputs, torch.Tensor):
inputs = inputs.cpu().numpy()
if is_torchcodec_available():
import torchcodec
if isinstance(inputs, torchcodec.decoders.AudioDecoder):
_audio_samples = inputs.get_all_samples()
_array = _audio_samples.data
_array = _array[0] if _array.ndim == 2 and _array.shape[0] == 1 else _array
inputs = {'array': _array, 'sampling_rate': _audio_samples.sample_rate}
if isinstance(inputs, dict):
stride = inputs.pop('stride', None)
if not ('sampling_rate' in inputs and ('raw' in inputs or 'array' in inputs)):
raise ValueError('When passing a dictionary to AutomaticSpeechRecognitionPipeline, the dict needs to contain a "raw" key containing the numpy array or torch tensor representing the audio and a "sampling_rate" key, containing the sampling_rate associated with that array')
_inputs = inputs.pop('raw', None)
if _inputs is None:
inputs.pop('path', None)
_inputs = inputs.pop('array', None)
in_sampling_rate = inputs.pop('sampling_rate')
extra = inputs
inputs = _inputs
if in_sampling_rate != self.feature_extractor.sampling_rate:
if is_torchaudio_available():
from torchaudio import functional as F
else:
raise ImportError('torchaudio is required to resample audio samples in AutomaticSpeechRecognitionPipeline. The torchaudio package can be installed through: `pip install torchaudio`.')
inputs = F.resample(torch.from_numpy(inputs) if isinstance(inputs, np.ndarray) else inputs, in_sampling_rate, self.feature_extractor.sampling_rate).numpy()
ratio = self.feature_extractor.sampling_rate / in_sampling_rate
else:
ratio = 1
if stride is not None:
if stride[0] + stride[1] > inputs.shape[0]:
raise ValueError('Stride is too large for input')
stride = (inputs.shape[0], int(round(stride[0] * ratio)), int(round(stride[1] * ratio)))
if not isinstance(inputs, (np.ndarray, torch.Tensor)):
raise TypeError(f'We expect a numpy ndarray or torch tensor as input, got `{type(inputs)}`')
if inputs.ndim != 1:
logger.warning(f'We expect a single channel audio input for AutomaticSpeechRecognitionPipeline, got {inputs.ndim}. Taking the mean of the channels for mono conversion.')
inputs = inputs.mean(axis=0)
if chunk_length_s:
if stride_length_s is None:
stride_length_s = chunk_length_s / 6
if isinstance(stride_length_s, (int, float)):
stride_length_s = [stride_length_s, stride_length_s]
align_to = getattr(self.model.config, 'inputs_to_logits_ratio', 1)
chunk_len = int(round(chunk_length_s * self.feature_extractor.sampling_rate / align_to) * align_to)
stride_left = int(round(stride_length_s[0] * self.feature_extractor.sampling_rate / align_to) * align_to)
stride_right = int(round(stride_length_s[1] * self.feature_extractor.sampling_rate / align_to) * align_to)
if chunk_len < stride_left + stride_right:
raise ValueError('Chunk length must be greater than stride length')
for item in chunk_iter(inputs, self.feature_extractor, chunk_len, stride_left, stride_right, self.dtype):
yield {**item, **extra}
else:
if self.type == 'seq2seq_whisper' and inputs.shape[0] > self.feature_extractor.n_samples:
processed = self.feature_extractor(inputs, sampling_rate=self.feature_extractor.sampling_rate, truncation=False, padding='longest', return_tensors='pt', return_attention_mask=True)
elif self.type == 'seq2seq_whisper' and stride is None:
processed = self.feature_extractor(inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors='pt', return_token_timestamps=True, return_attention_mask=True)
extra['num_frames'] = processed.pop('num_frames')
else:
processed = self.feature_extractor(inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors='pt', return_attention_mask=True)
if self.dtype is not None:
processed = processed.to(dtype=self.dtype)
if stride is not None:
if self.type == 'seq2seq':
raise ValueError('Stride is only usable with CTC models, try removing it!')
processed['stride'] = stride
yield {'is_last': True, **processed, **extra}
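The chunking branch above rounds the second-based chunk and stride lengths to multiples of the model's `inputs_to_logits_ratio` so that chunk boundaries align with logits frames. A minimal standalone sketch of that arithmetic (the sampling rate, chunk/stride seconds, and `align_to` value are illustrative, not tied to any particular checkpoint):

```python
# Sketch of the chunk/stride alignment performed in `preprocess`.
# All values below are illustrative assumptions.
sampling_rate = 16000
chunk_length_s = 30.0
stride_length_s = chunk_length_s / 6  # default: one sixth of the chunk
align_to = 320  # hypothetical `inputs_to_logits_ratio`

chunk_len = int(round(chunk_length_s * sampling_rate / align_to) * align_to)
stride_left = int(round(stride_length_s * sampling_rate / align_to) * align_to)
stride_right = stride_left

# The pipeline rejects configurations where the strides swallow the whole chunk.
assert chunk_len >= stride_left + stride_right
print(chunk_len, stride_left, stride_right)
```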
def _forward(self, model_inputs, return_timestamps=False, **generate_kwargs):
attention_mask = model_inputs.pop('attention_mask', None)
stride = model_inputs.pop('stride', None)
num_frames = model_inputs.pop('num_frames', None)
is_last = model_inputs.pop('is_last')
if stride is not None and num_frames is not None:
raise ValueError('num_frames must be used only when stride is None')
if self.type in {'seq2seq', 'seq2seq_whisper'}:
if 'input_features' in model_inputs:
inputs = model_inputs.pop('input_features')
elif 'input_values' in model_inputs:
inputs = model_inputs.pop('input_values')
else:
raise ValueError(f'Seq2Seq speech recognition model requires either a `input_features` or `input_values` key, but only has {model_inputs.keys()}')
return_timestamps = return_timestamps or getattr(self.generation_config, 'return_timestamps', False)
if return_timestamps and self.type == 'seq2seq_whisper':
generate_kwargs['return_timestamps'] = bool(return_timestamps)
if return_timestamps == 'word':
generate_kwargs['return_token_timestamps'] = True
generate_kwargs['return_segments'] = True
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
main_input_name = self.model.main_input_name if hasattr(self.model, 'main_input_name') else 'inputs'
generate_kwargs = {main_input_name: inputs, 'attention_mask': attention_mask, **generate_kwargs}
tokens = self.model.generate(**generate_kwargs)
if return_timestamps == 'word' and self.type == 'seq2seq_whisper':
if 'segments' not in tokens:
out = {'tokens': tokens['sequences'], 'token_timestamps': tokens['token_timestamps']}
else:
token_timestamps = [torch.cat([segment['token_timestamps'] for segment in segment_list]) for segment_list in tokens['segments']]
out = {'tokens': tokens['sequences'], 'token_timestamps': token_timestamps}
else:
out = {'tokens': tokens}
if self.type == 'seq2seq_whisper':
if stride is not None:
out['stride'] = stride
else:
inputs = {self.model.main_input_name: model_inputs.pop(self.model.main_input_name), 'attention_mask': attention_mask}
outputs = self.model(**inputs)
logits = outputs.logits
if self.type == 'ctc_with_lm':
out = {'logits': logits}
else:
out = {'tokens': logits.argmax(dim=-1)}
if stride is not None:
ratio = 1 / self.model.config.inputs_to_logits_ratio
if isinstance(stride, tuple):
out['stride'] = rescale_stride([stride], ratio)[0]
else:
out['stride'] = rescale_stride(stride, ratio)
extra = model_inputs
return {'is_last': is_last, **out, **extra}
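The CTC branch above rescales strides from audio-sample space to logits-frame space via `ratio = 1 / inputs_to_logits_ratio`. A hypothetical sketch of what a `rescale_stride`-style helper does (the real helper is defined elsewhere in this module; values are illustrative):

```python
# Hypothetical reimplementation of the stride rescaling idea: map
# (total, left, right) tuples from audio samples to logits frames.
def rescale_stride_sketch(strides, ratio):
    rescaled = []
    for input_n, left, right in strides:
        rescaled.append((int(round(input_n * ratio)), int(round(left * ratio)), int(round(right * ratio))))
    return rescaled

# With a hypothetical inputs_to_logits_ratio of 320, a chunk of 480000 audio
# samples with 80000-sample strides becomes 1500 frames with 250-frame strides.
ratio = 1 / 320
print(rescale_stride_sketch([(480000, 80000, 80000)], ratio))  # [(1500, 250, 250)]
```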
def postprocess(self, model_outputs, decoder_kwargs: Optional[dict]=None, return_timestamps=None, return_language=None):
optional = {}
final_items = []
key = 'logits' if self.type == 'ctc_with_lm' else 'tokens'
stride = None
for outputs in model_outputs:
if outputs[key].dtype in (torch.bfloat16, torch.float16):
items = outputs[key].to(torch.float32).numpy()
else:
items = outputs[key].numpy()
stride = outputs.get('stride', None)
if stride is not None and self.type in {'ctc', 'ctc_with_lm'}:
total_n, left, right = stride
right_n = total_n - right
items = items[:, left:right_n]
final_items.append(items)
if stride and self.type == 'seq2seq':
items = _find_longest_common_sequence(final_items, self.tokenizer)
elif self.type == 'seq2seq_whisper':
time_precision = self.feature_extractor.chunk_length / self.model.config.max_source_positions
sampling_rate = self.feature_extractor.sampling_rate
for output in model_outputs:
if 'stride' in output:
chunk_len, stride_left, stride_right = output['stride']
chunk_len /= sampling_rate
stride_left /= sampling_rate
stride_right /= sampling_rate
output['stride'] = (chunk_len, stride_left, stride_right)
text, optional = self.tokenizer._decode_asr(model_outputs, return_timestamps=return_timestamps, return_language=return_language, time_precision=time_precision)
else:
items = np.concatenate(final_items, axis=1)
items = items.squeeze(0)
if self.type == 'ctc_with_lm':
if decoder_kwargs is None:
decoder_kwargs = {}
beams = self.decoder.decode_beams(items, **decoder_kwargs)
text = beams[0][0]
if return_timestamps:
chunk_offset = beams[0][2]
offsets = []
for word, (start_offset, end_offset) in chunk_offset:
offsets.append({'word': word, 'start_offset': start_offset, 'end_offset': end_offset})
elif self.type != 'seq2seq_whisper':
skip_special_tokens = self.type != 'ctc'
text = self.tokenizer.decode(items, skip_special_tokens=skip_special_tokens)
if return_timestamps:
offsets = self.tokenizer.decode(items, skip_special_tokens=skip_special_tokens, output_char_offsets=True)['char_offsets']
if return_timestamps == 'word':
offsets = self.tokenizer._get_word_offsets(offsets, self.tokenizer.replace_word_delimiter_char)
if return_timestamps and self.type not in {'seq2seq', 'seq2seq_whisper'}:
chunks = []
for item in offsets:
start = item['start_offset'] * self.model.config.inputs_to_logits_ratio
start /= self.feature_extractor.sampling_rate
stop = item['end_offset'] * self.model.config.inputs_to_logits_ratio
stop /= self.feature_extractor.sampling_rate
chunks.append({'text': item[return_timestamps], 'timestamp': (start, stop)})
optional['chunks'] = chunks
extra = defaultdict(list)
for output in model_outputs:
output.pop('tokens', None)
output.pop('logits', None)
output.pop('is_last', None)
output.pop('stride', None)
output.pop('token_timestamps', None)
for k, v in output.items():
extra[k].append(v)
return {'text': text, **optional, **extra}
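The timestamp conversion in `postprocess` multiplies each logits-frame offset by `inputs_to_logits_ratio` and divides by the sampling rate to obtain seconds. A self-contained sketch with illustrative values (the offsets and ratio are assumptions, not output of a real model):

```python
# Convert CTC word offsets (in logits frames) to second-based timestamps.
# `inputs_to_logits_ratio` and `sampling_rate` are illustrative values.
inputs_to_logits_ratio = 320
sampling_rate = 16000

offsets = [{"word": "hi", "start_offset": 25, "end_offset": 45}]
chunks = []
for item in offsets:
    start = item["start_offset"] * inputs_to_logits_ratio / sampling_rate
    stop = item["end_offset"] * inputs_to_logits_ratio / sampling_rate
    chunks.append({"text": item["word"], "timestamp": (start, stop)})
print(chunks)  # [{'text': 'hi', 'timestamp': (0.5, 0.9)}]
```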
|
class AutomaticSpeechRecognitionPipeline(ChunkPipeline):
'''
Pipeline that aims at extracting spoken text contained within some audio.
    The input can be either a raw waveform or an audio file. For audio files, ffmpeg should be installed
    to support multiple audio formats.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 5
Example:
```python
>>> from transformers import pipeline
>>> transcriber = pipeline(model="openai/whisper-base")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
{'text': ' He hoped there would be stew for dinner, turnips and carrots and bruised potatoes and fat mutton pieces to be ladled out in thick, peppered flour-fatten sauce.'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
Arguments:
model ([`PreTrainedModel`]):
The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from
[`PreTrainedModel`].
feature_extractor ([`SequenceFeatureExtractor`]):
The feature extractor that will be used by the pipeline to encode waveform for the model.
tokenizer ([`PreTrainedTokenizer`]):
The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from
[`PreTrainedTokenizer`].
decoder (`pyctcdecode.BeamSearchDecoderCTC`, *optional*):
[PyCTCDecode's
BeamSearchDecoderCTC](https://github.com/kensho-technologies/pyctcdecode/blob/2fd33dc37c4111417e08d89ccd23d28e9b308d19/pyctcdecode/decoder.py#L180)
can be passed for language model boosted decoding. See [`Wav2Vec2ProcessorWithLM`] for more information.
chunk_length_s (`float`, *optional*, defaults to 0):
The input length of each chunk. If `chunk_length_s = 0` then chunking is disabled (default).
<Tip>
For more information on how to effectively use `chunk_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
stride_length_s (`float`, *optional*, defaults to `chunk_length_s / 6`):
The length of stride on the left and right of each chunk. Used only with `chunk_length_s > 0`. This enables
the model to *see* more context and infer letters better than without this context but the pipeline
discards the stride bits at the end to make the final reconstitution as perfect as possible.
<Tip>
For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking
blog post](https://huggingface.co/blog/asr-chunking).
</Tip>
device (Union[`int`, `torch.device`], *optional*):
Device ordinal for CPU/GPU support. Setting this to `None` will leverage CPU, a positive integer will
run the model on the associated CUDA device id.
'''
def __init__(self, model: 'PreTrainedModel', feature_extractor: Optional[Union['SequenceFeatureExtractor', str]]=None, tokenizer: Optional[PreTrainedTokenizer]=None, decoder: Optional[Union['BeamSearchDecoderCTC', str]]=None, device: Optional[Union[int, 'torch.device']]=None, **kwargs):
pass
def __call__(self, inputs: Union[np.ndarray, bytes, str, dict], **kwargs: Any) -> list[dict[str, Any]]:
'''
Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
documentation for more information.
Args:
inputs (`np.ndarray` or `bytes` or `str` or `dict`):
The inputs is either :
- `str` that is either the filename of a local audio file, or a public URL address to download the
audio file. The file will be read at the correct sampling rate to get the waveform using
*ffmpeg*. This requires *ffmpeg* to be installed on the system.
- `bytes`: the content of an audio file, interpreted by *ffmpeg* in the same way.
- (`np.ndarray` of shape (n, ) of type `np.float32` or `np.float64`)
Raw audio at the correct sampling rate (no further check will be done)
- `dict` form can be used to pass raw audio sampled at arbitrary `sampling_rate` and let this
pipeline do the resampling. The dict must be in the format `{"sampling_rate": int, "raw":
np.array}`, optionally with a `"stride": (left: int, right: int)` that asks the pipeline to
ignore the first `left` samples and last `right` samples in decoding (but use them at
inference to provide more context to the model). Only use `stride` with CTC models.
return_timestamps (*optional*, `str` or `bool`):
Only available for pure CTC models (Wav2Vec2, HuBERT, etc) and the Whisper model. Not available for
other sequence-to-sequence models.
For CTC models, timestamps can take one of two formats:
- `"char"`: the pipeline will return timestamps along the text for every character in the text. For
instance, if you get `[{"text": "h", "timestamp": (0.5, 0.6)}, {"text": "i", "timestamp": (0.7,
0.9)}]`, then it means the model predicts that the letter "h" was spoken after `0.5` and before
`0.6` seconds.
- `"word"`: the pipeline will return timestamps along the text for every word in the text. For
instance, if you get `[{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text": "there", "timestamp":
(1.0, 1.5)}]`, then it means the model predicts that the word "hi" was spoken after `0.5` and
before `0.9` seconds.
For the Whisper model, timestamps can take one of two formats:
- `"word"`: same as above for word-level CTC timestamps. Word-level timestamps are predicted
through the *dynamic-time warping (DTW)* algorithm, an approximation to word-level timestamps
by inspecting the cross-attention weights.
- `True`: the pipeline will return timestamps along the text for *segments* of words in the text.
For instance, if you get `[{"text": " Hi there!", "timestamp": (0.5, 1.5)}]`, then it means the
model predicts that the segment "Hi there!" was spoken after `0.5` and before `1.5` seconds.
Note that a segment of text refers to a sequence of one or more words, rather than individual
words as with word-level timestamps.
generate_kwargs (`dict`, *optional*):
The dictionary of ad-hoc parametrization of `generate_config` to be used for the generation call. For a
complete overview of generate, check the [following
guide](https://huggingface.co/docs/transformers/en/main_classes/text_generation).
Return:
`Dict`: A dictionary with the following keys:
- **text** (`str`): The recognized text.
- **chunks** (*optional*, `list[Dict]`)
When using `return_timestamps`, the `chunks` will become a list containing all the various text
chunks identified by the model, *e.g.* `[{"text": "hi ", "timestamp": (0.5, 0.9)}, {"text":
"there", "timestamp": (1.0, 1.5)}]`. The original full text can roughly be recovered by doing
`"".join(chunk["text"] for chunk in output["chunks"])`.
'''
pass
def _sanitize_parameters(self, chunk_length_s=None, stride_length_s=None, ignore_warning=None, decoder_kwargs=None, return_timestamps=None, return_language=None, **generate_kwargs):
pass
def preprocess(self, inputs, chunk_length_s=0, stride_length_s=None):
pass
def _forward(self, model_inputs, return_timestamps=False, **generate_kwargs):
pass
def postprocess(self, model_outputs, decoder_kwargs: Optional[dict]=None, return_timestamps=None, return_language=None):
pass
| 7
| 2
| 77
| 6
| 57
| 14
| 14
| 0.4
| 1
| 13
| 1
| 0
| 6
| 2
| 6
| 50
| 534
| 58
| 340
| 86
| 307
| 136
| 224
| 60
| 216
| 23
| 7
| 5
| 82
|
6,404
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.ArgumentHandler
|
from abc import ABC, abstractmethod
class ArgumentHandler(ABC):
"""
Base interface for handling arguments for each [`~pipelines.Pipeline`].
"""
@abstractmethod
def __call__(self, *args, **kwargs):
raise NotImplementedError()
|
class ArgumentHandler(ABC):
'''
Base interface for handling arguments for each [`~pipelines.Pipeline`].
'''
@abstractmethod
def __call__(self, *args, **kwargs):
pass
| 3
| 1
| 2
| 0
| 2
| 0
| 1
| 0.75
| 1
| 1
| 0
| 4
| 1
| 0
| 1
| 21
| 8
| 1
| 4
| 3
| 1
| 3
| 3
| 2
| 1
| 1
| 4
| 0
| 1
|
6,405
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.ChunkPipeline
|
import os
class ChunkPipeline(Pipeline):
def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
all_outputs = []
for model_inputs in self.preprocess(inputs, **preprocess_params):
model_outputs = self.forward(model_inputs, **forward_params)
all_outputs.append(model_outputs)
outputs = self.postprocess(all_outputs, **postprocess_params)
return outputs
def get_iterator(self, inputs, num_workers: int, batch_size: int, preprocess_params, forward_params, postprocess_params):
if 'TOKENIZERS_PARALLELISM' not in os.environ:
logger.info("Disabling tokenizer parallelism, we're using DataLoader multithreading already")
os.environ['TOKENIZERS_PARALLELISM'] = 'false'
if num_workers > 1:
logger.warning('For ChunkPipeline using num_workers>1 is likely to result in errors since everything is iterable, setting `num_workers=1` to guarantee correctness.')
num_workers = 1
dataset = PipelineChunkIterator(inputs, self.preprocess, preprocess_params)
feature_extractor = self.feature_extractor if self.feature_extractor is not None else self.image_processor
collate_fn = no_collate_fn if batch_size == 1 else pad_collate_fn(self.tokenizer, feature_extractor)
dataloader = DataLoader(dataset, num_workers=num_workers, batch_size=batch_size, collate_fn=collate_fn)
model_iterator = PipelinePackIterator(dataloader, self.forward, forward_params, loader_batch_size=batch_size)
final_iterator = PipelineIterator(model_iterator, self.postprocess, postprocess_params)
return final_iterator
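The `run_single` loop above captures the essence of chunked inference: `preprocess` yields pieces, `forward` runs on each piece, and `postprocess` merges all outputs. A toy sketch of that control flow, unrelated to any real model:

```python
# Toy chunk pipeline mirroring the preprocess -> forward -> postprocess
# loop of ChunkPipeline.run_single. Each "chunk" is just a text slice.
def preprocess(text, size=4):
    for i in range(0, len(text), size):
        yield {"chunk": text[i:i + size], "is_last": i + size >= len(text)}

def forward(model_inputs):
    # Stand-in for model inference: uppercase the chunk.
    return {"out": model_inputs["chunk"].upper(), "is_last": model_inputs["is_last"]}

def postprocess(all_outputs):
    # Aggregate per-chunk outputs into one result.
    return "".join(o["out"] for o in all_outputs)

all_outputs = [forward(mi) for mi in preprocess("hello world")]
print(postprocess(all_outputs))  # HELLO WORLD
```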
|
class ChunkPipeline(Pipeline):
def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
pass
def get_iterator(self, inputs, num_workers: int, batch_size: int, preprocess_params, forward_params, postprocess_params):
pass
| 3
| 0
| 14
| 1
| 13
| 1
| 4
| 0.04
| 1
| 4
| 0
| 7
| 2
| 1
| 2
| 44
| 30
| 2
| 27
| 15
| 22
| 1
| 22
| 13
| 19
| 5
| 6
| 1
| 7
|
6,406
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.CsvPipelineDataFormat
|
from typing import TYPE_CHECKING, Any, Optional, Union
import csv
class CsvPipelineDataFormat(PipelineDataFormat):
"""
Support for pipelines using CSV data format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
"""
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False):
super().__init__(output_path, input_path, column, overwrite=overwrite)
def __iter__(self):
with open(self.input_path, 'r') as f:
reader = csv.DictReader(f)
for row in reader:
if self.is_multi_columns:
yield {k: row[c] for k, c in self.column}
else:
yield row[self.column[0]]
def save(self, data: list[dict]):
"""
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`list[dict]`): The data to store.
"""
with open(self.output_path, 'w') as f:
if len(data) > 0:
writer = csv.DictWriter(f, list(data[0].keys()))
writer.writeheader()
writer.writerows(data)
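The save/iterate pair above is a plain `csv.DictWriter`/`csv.DictReader` round trip. A self-contained sketch of the same pattern using an in-memory buffer instead of files (the data rows are illustrative):

```python
import csv
import io

# Round-trip sketch of the CsvPipelineDataFormat save/iterate pattern.
data = [{"text": "hello", "label": "greeting"}, {"text": "bye", "label": "farewell"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, list(data[0].keys()), lineterminator="\n")
writer.writeheader()
writer.writerows(data)

buf.seek(0)
# Single-column read, as done when `is_multi_columns` is False.
rows = [row["text"] for row in csv.DictReader(buf)]
print(rows)  # ['hello', 'bye']
```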
|
class CsvPipelineDataFormat(PipelineDataFormat):
'''
Support for pipelines using CSV data format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
'''
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False):
pass
def __iter__(self):
pass
def save(self, data: list[dict]):
'''
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`list[dict]`): The data to store.
'''
pass
| 4
| 2
| 9
| 0
| 7
| 2
| 2
| 0.61
| 1
| 6
| 0
| 0
| 3
| 0
| 3
| 8
| 42
| 5
| 23
| 15
| 13
| 14
| 16
| 7
| 12
| 3
| 1
| 3
| 6
|
6,407
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.JsonPipelineDataFormat
|
import json
from typing import TYPE_CHECKING, Any, Optional, Union
class JsonPipelineDataFormat(PipelineDataFormat):
"""
Support for pipelines using JSON file format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
"""
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False):
super().__init__(output_path, input_path, column, overwrite=overwrite)
with open(input_path, 'r') as f:
self._entries = json.load(f)
def __iter__(self):
for entry in self._entries:
if self.is_multi_columns:
yield {k: entry[c] for k, c in self.column}
else:
yield entry[self.column[0]]
def save(self, data: dict):
"""
Save the provided data object in a json file.
Args:
data (`dict`): The data to store.
"""
with open(self.output_path, 'w') as f:
json.dump(data, f)
|
class JsonPipelineDataFormat(PipelineDataFormat):
'''
Support for pipelines using JSON file format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
'''
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False):
pass
def __iter__(self):
pass
def save(self, data: dict):
'''
Save the provided data object in a json file.
Args:
data (`dict`): The data to store.
'''
pass
| 4
| 2
| 9
| 1
| 6
| 2
| 2
| 0.7
| 1
| 3
| 0
| 0
| 3
| 1
| 3
| 8
| 40
| 6
| 20
| 14
| 10
| 14
| 13
| 6
| 9
| 3
| 1
| 2
| 5
|
6,408
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.PipedPipelineDataFormat
|
import sys
from typing import TYPE_CHECKING, Any, Optional, Union
class PipedPipelineDataFormat(PipelineDataFormat):
"""
    Read data from piped input to the Python process. For multi-column data, columns should be separated by a tab (`\t`).
If columns are provided, then the output will be a dictionary with {column_x: value_x}
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
"""
def __iter__(self):
for line in sys.stdin:
if '\t' in line:
line = line.split('\t')
if self.column:
yield {kwargs: l for (kwargs, _), l in zip(self.column, line)}
else:
yield tuple(line)
else:
yield line
def save(self, data: dict):
"""
Print the data.
Args:
data (`dict`): The data to store.
"""
print(data)
def save_binary(self, data: Union[dict, list[dict]]) -> str:
if self.output_path is None:
raise KeyError('When using piped input on pipeline outputting large object requires an output file path. Please provide such output path through --output argument.')
return super().save_binary(data)
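The `__iter__` above maps tab-separated stdin fields onto the configured columns. A standalone sketch of that parsing step (the column mapping and input line are illustrative assumptions):

```python
# Sketch of the line parsing in PipedPipelineDataFormat.__iter__.
# Columns pair an output key with an input name, e.g. ("question", "q").
column = [("question", "q"), ("context", "c")]
line = "What is ASR?\tAutomatic speech recognition."

if "\t" in line:
    fields = line.split("\t")
    # Map each configured column name onto the corresponding field.
    record = {name: value for (name, _), value in zip(column, fields)}
else:
    record = line
print(record)  # {'question': 'What is ASR?', 'context': 'Automatic speech recognition.'}
```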
|
class PipedPipelineDataFormat(PipelineDataFormat):
'''
    Read data from piped input to the Python process. For multi-column data, columns should be separated by a tab (`\t`).
If columns are provided, then the output will be a dictionary with {column_x: value_x}
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
'''
def __iter__(self):
pass
def save(self, data: dict):
'''
Print the data.
Args:
data (`dict`): The data to store.
'''
pass
def save_binary(self, data: Union[dict, list[dict]]) -> str:
pass
| 4
| 2
| 10
| 1
| 6
| 3
| 2
| 0.9
| 1
| 6
| 0
| 0
| 3
| 0
| 3
| 8
| 46
| 8
| 20
| 6
| 16
| 18
| 15
| 5
| 11
| 4
| 1
| 3
| 7
|
6,409
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py
|
transformers.pipelines.base.Pipeline
|
import types
from collections import UserDict
from typing import TYPE_CHECKING, Any, Optional, Union
from ..modelcard import ModelCard
from ..processing_utils import ProcessorMixin
from abc import ABC, abstractmethod
import os
from contextlib import contextmanager
import copy
from ..feature_extraction_utils import PreTrainedFeatureExtractor
from ..dynamic_module_utils import custom_object_save
from ..utils import ModelOutput, PushToHubMixin, add_end_docstrings, copy_func, is_torch_available, is_torch_cuda_available, is_torch_hpu_available, is_torch_mlu_available, is_torch_mps_available, is_torch_musa_available, is_torch_npu_available, is_torch_xpu_available, logging
from ..generation import GenerationConfig
from ..tokenization_utils import PreTrainedTokenizer
from ..image_processing_utils import BaseImageProcessor
import warnings
import collections
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_feature_extractor=True, has_image_processor=True, has_processor=True))
class Pipeline(_ScikitCompat, PushToHubMixin):
"""
The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across
different pipelines.
Base class implementing pipelined operations. Pipeline workflow is defined as a sequence of the following
operations:
Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output
Pipeline supports running on CPU or GPU through the device argument (see below).
    Some pipelines, such as [`FeatureExtractionPipeline`] (`'feature-extraction'`), output large tensor objects
    as nested lists. To avoid dumping such large structures as textual data, we provide the `binary_output`
    constructor argument. If set to `True`, the output will be stored in pickle format.
"""
_load_processor = None
_load_image_processor = None
_load_feature_extractor = None
_load_tokenizer = None
_pipeline_calls_generate = False
default_input_names = None
def __init__(self, model: 'PreTrainedModel', tokenizer: Optional[PreTrainedTokenizer]=None, feature_extractor: Optional[PreTrainedFeatureExtractor]=None, image_processor: Optional[BaseImageProcessor]=None, processor: Optional[ProcessorMixin]=None, modelcard: Optional[ModelCard]=None, task: str='', device: Optional[Union[int, 'torch.device']]=None, binary_output: bool=False, **kwargs):
_, _, _ = (kwargs.pop('args_parser', None), kwargs.pop('torch_dtype', None), kwargs.pop('dtype', None))
self.task = task
self.model = model
self.tokenizer = tokenizer
self.feature_extractor = feature_extractor
self.image_processor = image_processor
self.processor = processor
self.modelcard = modelcard
hf_device_map = getattr(self.model, 'hf_device_map', None)
if hf_device_map is not None and device is not None:
raise ValueError('The model has been loaded with `accelerate` and therefore cannot be moved to a specific device. Please discard the `device` argument when creating your pipeline object.')
if device is None:
if hf_device_map is not None:
device = next(iter(hf_device_map.values()))
else:
device = 0
if device == -1 and self.model.device is not None:
device = self.model.device
if isinstance(device, torch.device):
if device.type == 'xpu' and (not is_torch_xpu_available(check_device=True)) or (device.type == 'hpu' and (not is_torch_hpu_available())):
raise ValueError(f'{device} is not available, you should use device="cpu" instead')
self.device = device
elif isinstance(device, str):
if 'xpu' in device and (not is_torch_xpu_available(check_device=True)) or ('hpu' in device and (not is_torch_hpu_available())):
raise ValueError(f'{device} is not available, you should use device="cpu" instead')
self.device = torch.device(device)
elif device < 0:
self.device = torch.device('cpu')
elif is_torch_mlu_available():
self.device = torch.device(f'mlu:{device}')
elif is_torch_musa_available():
self.device = torch.device(f'musa:{device}')
elif is_torch_cuda_available():
self.device = torch.device(f'cuda:{device}')
elif is_torch_npu_available():
self.device = torch.device(f'npu:{device}')
elif is_torch_hpu_available():
self.device = torch.device(f'hpu:{device}')
elif is_torch_xpu_available(check_device=True):
self.device = torch.device(f'xpu:{device}')
elif is_torch_mps_available():
self.device = torch.device(f'mps:{device}')
else:
self.device = torch.device('cpu')
if torch.distributed.is_available() and torch.distributed.is_initialized():
self.device = self.model.device
logger.warning(f'Device set to use {self.device}')
self.binary_output = binary_output
if self.model.device != self.device and (not (isinstance(self.device, int) and self.device < 0)) and (hf_device_map is None):
self.model.to(self.device)
if self._pipeline_calls_generate and self.model.can_generate():
self.assistant_model, self.assistant_tokenizer = load_assistant_model(self.model, kwargs.pop('assistant_model', None), kwargs.pop('assistant_tokenizer', None))
self.prefix = self.model.config.prefix if hasattr(self.model.config, 'prefix') else None
default_pipeline_generation_config = getattr(self, '_default_generation_config', GenerationConfig())
if hasattr(self.model, '_prepare_generation_config'):
prepared_generation_config, kwargs = self.model._prepare_generation_config(generation_config=default_pipeline_generation_config, use_model_defaults=True, **kwargs)
self.generation_config = prepared_generation_config
if default_pipeline_generation_config.max_new_tokens is not None and self.generation_config.max_new_tokens == default_pipeline_generation_config.max_new_tokens and (self.generation_config.max_length is not None) and (self.generation_config.max_length != 20):
self.generation_config.max_new_tokens = None
else:
self.generation_config = copy.deepcopy(self.model.generation_config)
task_specific_params = self.model.config.task_specific_params
if task_specific_params is not None and task in task_specific_params:
this_task_params = task_specific_params.get(task)
if 'prefix' in this_task_params:
self.prefix = this_task_params.pop('prefix')
self.generation_config.update(**this_task_params)
if self.tokenizer is not None and self.tokenizer.pad_token_id is not None and (self.generation_config.pad_token_id is None):
self.generation_config.pad_token_id = self.tokenizer.pad_token_id
self.call_count = 0
self._batch_size = kwargs.pop('batch_size', None)
self._num_workers = kwargs.pop('num_workers', None)
self._preprocess_params, self._forward_params, self._postprocess_params = self._sanitize_parameters(**kwargs)
if self.processor is not None and all([self.tokenizer is None, self.feature_extractor is None, self.image_processor is None]):
self.tokenizer = getattr(self.processor, 'tokenizer', None)
self.feature_extractor = getattr(self.processor, 'feature_extractor', None)
self.image_processor = getattr(self.processor, 'image_processor', None)
if self.image_processor is None and self.feature_extractor is not None:
if isinstance(self.feature_extractor, BaseImageProcessor):
self.image_processor = self.feature_extractor
def save_pretrained(self, save_directory: Union[str, os.PathLike], safe_serialization: bool=True, **kwargs):
"""
Save the pipeline's model and tokenizer.
Args:
save_directory (`str` or `os.PathLike`):
A path to the directory in which to save the pipeline. It will be created if it doesn't exist.
safe_serialization (`bool`, *optional*, defaults to `True`):
Whether to save the model using `safetensors` or PyTorch serialization.
kwargs (`dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
use_auth_token = kwargs.pop('use_auth_token', None)
if use_auth_token is not None:
warnings.warn('The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.', FutureWarning)
if kwargs.get('token') is not None:
raise ValueError('`token` and `use_auth_token` are both specified. Please set only the argument `token`.')
kwargs['token'] = use_auth_token
if os.path.isfile(save_directory):
logger.error(f'Provided path ({save_directory}) should be a directory, not a file')
return
os.makedirs(save_directory, exist_ok=True)
if hasattr(self, '_registered_impl'):
pipeline_info = self._registered_impl.copy()
custom_pipelines = {}
for task, info in pipeline_info.items():
if info['impl'] != self.__class__:
continue
info = info.copy()
module_name = info['impl'].__module__
last_module = module_name.split('.')[-1]
info['impl'] = f"{last_module}.{info['impl'].__name__}"
info['pt'] = tuple((c.__name__ for c in info['pt']))
custom_pipelines[task] = info
self.model.config.custom_pipelines = custom_pipelines
custom_object_save(self, save_directory)
kwargs['safe_serialization'] = safe_serialization
self.model.save_pretrained(save_directory, **kwargs)
if self.tokenizer is not None:
self.tokenizer.save_pretrained(save_directory, **kwargs)
if self.feature_extractor is not None:
self.feature_extractor.save_pretrained(save_directory, **kwargs)
if self.image_processor is not None:
self.image_processor.save_pretrained(save_directory, **kwargs)
if self.modelcard is not None:
self.modelcard.save_pretrained(save_directory)
def transform(self, X):
"""
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
"""
return self(X)
def predict(self, X):
"""
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
"""
return self(X)
@property
def dtype(self) -> Optional['torch.dtype']:
"""
Dtype of the model (if it's a PyTorch model), `None` otherwise.
"""
return getattr(self.model, 'dtype', None)
@property
def torch_dtype(self) -> Optional['torch.dtype']:
"""
Torch dtype of the model (if it's a PyTorch model), `None` otherwise.
"""
logger.warning_once('`torch_dtype` attribute is deprecated. Use `dtype` instead!')
return getattr(self.model, 'dtype', None)
@contextmanager
def device_placement(self):
"""
Context Manager allowing tensor allocation on the user-specified device.
Returns:
Context manager
Examples:
```python
# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(..., device=0)
with pipe.device_placement():
# Every tensor allocation will be done on the request device
output = pipe(...)
```"""
if self.device.type == 'cuda':
with torch.cuda.device(self.device):
yield
elif self.device.type == 'mlu':
with torch.mlu.device(self.device):
yield
elif self.device.type == 'musa':
with torch.musa.device(self.device):
yield
elif self.device.type == 'xpu':
with torch.xpu.device(self.device):
yield
else:
yield
def ensure_tensor_on_device(self, **inputs):
"""
Ensure PyTorch tensors are on the specified device.
Args:
inputs (keyword arguments that should be `torch.Tensor`, the rest is ignored):
The tensors to place on `self.device`.
Recursive on lists **only**.
Return:
`dict[str, torch.Tensor]`: The same as `inputs` but on the proper device.
"""
return self._ensure_tensor_on_device(inputs, self.device)
def _ensure_tensor_on_device(self, inputs, device):
if isinstance(inputs, ModelOutput):
return ModelOutput({name: self._ensure_tensor_on_device(tensor, device) for name, tensor in inputs.items()})
elif isinstance(inputs, dict):
return {name: self._ensure_tensor_on_device(tensor, device) for name, tensor in inputs.items()}
elif isinstance(inputs, UserDict):
return UserDict({name: self._ensure_tensor_on_device(tensor, device) for name, tensor in inputs.items()})
elif isinstance(inputs, list):
return [self._ensure_tensor_on_device(item, device) for item in inputs]
elif isinstance(inputs, tuple):
return tuple((self._ensure_tensor_on_device(item, device) for item in inputs))
elif isinstance(inputs, torch.Tensor):
return inputs.to(device)
else:
return inputs
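The recursion above maps `tensor.to(device)` over arbitrarily nested containers. A torch-free sketch of the same container-walk pattern (the name `map_nested` and the `fn` stand-in are illustrative, not part of the library):

```python
# Minimal sketch of the recursive container walk used by _ensure_tensor_on_device,
# with a generic `fn` standing in for `tensor.to(device)`.
def map_nested(fn, inputs):
    if isinstance(inputs, dict):
        return {name: map_nested(fn, value) for name, value in inputs.items()}
    elif isinstance(inputs, list):
        return [map_nested(fn, item) for item in inputs]
    elif isinstance(inputs, tuple):
        return tuple(map_nested(fn, item) for item in inputs)
    else:
        # Leaves (tensors in the real method) get transformed; the real method
        # additionally leaves non-tensor leaves unchanged via an isinstance check.
        return fn(inputs)

print(map_nested(lambda x: x * 10, {"ids": [1, 2], "extra": ({"mask": 3},)}))
```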
def check_model_type(self, supported_models: Union[list[str], dict]):
"""
Check whether the model class is supported by the pipeline.
Args:
supported_models (`list[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
"""
if not isinstance(supported_models, list):
supported_models_names = []
if self.task in SUPPORTED_PEFT_TASKS:
supported_models_names.extend(SUPPORTED_PEFT_TASKS[self.task])
for model_name in supported_models.values():
if isinstance(model_name, tuple):
supported_models_names.extend(list(model_name))
else:
supported_models_names.append(model_name)
if hasattr(supported_models, '_model_mapping'):
for model in supported_models._model_mapping._extra_content.values():
if isinstance(model, tuple):
supported_models_names.extend([m.__name__ for m in model])
else:
supported_models_names.append(model.__name__)
supported_models = supported_models_names
if self.model.__class__.__name__ not in supported_models:
logger.error(f"The model '{self.model.__class__.__name__}' is not supported for {self.task}. Supported models are {supported_models}.")
@abstractmethod
def _sanitize_parameters(self, **pipeline_parameters):
"""
_sanitize_parameters will be called with any excessive named arguments from either `__init__` or `__call__`
methods. It should return 3 dictionaries of the resolved parameters used by the various `preprocess`,
`forward` and `postprocess` methods. Do not fill dictionaries if the caller didn't specify kwargs. This
lets you keep defaults in function signatures, which is more "natural".
It is not meant to be called directly, it will be automatically called and the final parameters resolved by
`__init__` and `__call__`
"""
raise NotImplementedError('_sanitize_parameters not implemented')
@abstractmethod
def preprocess(self, input_: Any, **preprocess_parameters: dict) -> dict[str, GenericTensor]:
"""
Preprocess will take the `input_` of a specific pipeline and return a dictionary of everything necessary for
`_forward` to run properly. It should contain at least one tensor, but might have arbitrary other items.
"""
raise NotImplementedError('preprocess not implemented')
@abstractmethod
def _forward(self, input_tensors: dict[str, GenericTensor], **forward_parameters: dict) -> ModelOutput:
"""
_forward will receive the prepared dictionary from `preprocess` and run it on the model. This method might
involve the GPU or the CPU and should be agnostic to it. Isolating this function is the reason for `preprocess`
and `postprocess` to exist, so that the hot path, this method, can generally run as fast as possible.
It is not meant to be called directly, `forward` is preferred. It is basically the same but contains additional
code surrounding `_forward` making sure tensors and models are on the same device, disabling the training part
of the code (leading to faster inference).
"""
raise NotImplementedError('_forward not implemented')
@abstractmethod
def postprocess(self, model_outputs: ModelOutput, **postprocess_parameters: dict) -> Any:
"""
Postprocess will receive the raw outputs of the `_forward` method, generally tensors, and reformat them into
something more friendly. Generally it will output a list or a dict of results (containing just strings and
numbers).
"""
raise NotImplementedError('postprocess not implemented')
def get_inference_context(self):
return torch.no_grad
def forward(self, model_inputs, **forward_params):
with self.device_placement():
inference_context = self.get_inference_context()
with inference_context():
model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
model_outputs = self._forward(model_inputs, **forward_params)
model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device('cpu'))
return model_outputs
def get_iterator(self, inputs, num_workers: int, batch_size: int, preprocess_params, forward_params, postprocess_params):
if isinstance(inputs, collections.abc.Sized):
dataset = PipelineDataset(inputs, self.preprocess, preprocess_params)
else:
if num_workers > 1:
logger.warning('For iterable dataset using num_workers>1 is likely to result in errors since everything is iterable, setting `num_workers=1` to guarantee correctness.')
num_workers = 1
dataset = PipelineIterator(inputs, self.preprocess, preprocess_params)
if 'TOKENIZERS_PARALLELISM' not in os.environ:
logger.info("Disabling tokenizer parallelism, we're using DataLoader multithreading already")
os.environ['TOKENIZERS_PARALLELISM'] = 'false'
feature_extractor = self.feature_extractor if self.feature_extractor is not None else self.image_processor
collate_fn = no_collate_fn if batch_size == 1 else pad_collate_fn(self.tokenizer, feature_extractor)
dataloader = DataLoader(dataset, num_workers=num_workers, batch_size=batch_size, collate_fn=collate_fn)
model_iterator = PipelineIterator(dataloader, self.forward, forward_params, loader_batch_size=batch_size)
final_iterator = PipelineIterator(model_iterator, self.postprocess, postprocess_params)
return final_iterator
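`get_iterator` chains lazy stages: preprocessing feeds a batching `DataLoader`, whose batches feed `forward`, whose outputs feed `postprocess`. A torch-free sketch of that chained-generator shape (the helper `chain_stages` is illustrative, not library API):

```python
# Torch-free sketch of the chained lazy iterators built in get_iterator:
# preprocess -> batched forward -> postprocess, each stage evaluated lazily.
def chain_stages(inputs, preprocess, forward, postprocess, batch_size=2):
    def batches(items):
        batch = []
        for item in items:
            batch.append(item)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:  # flush the final partial batch
            yield batch

    preprocessed = (preprocess(x) for x in inputs)
    for batch in batches(preprocessed):
        for out in forward(batch):
            yield postprocess(out)

result = list(chain_stages([1, 2, 3], lambda x: x + 1, lambda b: [v * 10 for v in b], lambda y: y - 1))
print(result)  # [19, 29, 39]
```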
def __call__(self, inputs, *args, num_workers=None, batch_size=None, **kwargs):
if args:
logger.warning(f'Ignoring args : {args}')
if num_workers is None:
if self._num_workers is None:
num_workers = 0
else:
num_workers = self._num_workers
if batch_size is None:
if self._batch_size is None:
batch_size = 1
else:
batch_size = self._batch_size
preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs)
preprocess_params = {**self._preprocess_params, **preprocess_params}
forward_params = {**self._forward_params, **forward_params}
postprocess_params = {**self._postprocess_params, **postprocess_params}
self.call_count += 1
if self.call_count > 10 and self.device.type == 'cuda':
logger.warning_once('You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset')
is_dataset = Dataset is not None and isinstance(inputs, Dataset)
is_generator = isinstance(inputs, types.GeneratorType)
is_list = isinstance(inputs, list)
is_iterable = is_dataset or is_generator or is_list
can_use_iterator = is_dataset or is_generator or is_list
if is_list:
if can_use_iterator:
final_iterator = self.get_iterator(inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params)
outputs = list(final_iterator)
return outputs
else:
return self.run_multi(inputs, preprocess_params, forward_params, postprocess_params)
elif can_use_iterator:
return self.get_iterator(inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params)
elif is_iterable:
return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
elif isinstance(self, ChunkPipeline):
return next(iter(self.get_iterator([inputs], num_workers, batch_size, preprocess_params, forward_params, postprocess_params)))
else:
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
return [self.run_single(item, preprocess_params, forward_params, postprocess_params) for item in inputs]
def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
model_inputs = self.preprocess(inputs, **preprocess_params)
model_outputs = self.forward(model_inputs, **forward_params)
outputs = self.postprocess(model_outputs, **postprocess_params)
return outputs
def iterate(self, inputs, preprocess_params, forward_params, postprocess_params):
for input_ in inputs:
yield self.run_single(input_, preprocess_params, forward_params, postprocess_params)
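The `run_single`/`run_multi` methods above pin down the pipeline contract: `preprocess` builds model inputs, `_forward` runs the model, `postprocess` reshapes raw outputs. A toy, dependency-free illustration of that contract (the `ToyPipeline` class is hypothetical, not a transformers class):

```python
# Toy illustration of the three-stage pipeline contract.
class ToyPipeline:
    def preprocess(self, text):
        # Build "model inputs" from the raw input.
        return {"tokens": text.split()}

    def _forward(self, model_inputs):
        # Stand-in for the model call.
        return {"n_tokens": len(model_inputs["tokens"])}

    def postprocess(self, model_outputs):
        # Reshape raw outputs into user-facing results.
        return {"label": "long" if model_outputs["n_tokens"] > 3 else "short"}

    def run_single(self, inputs):
        return self.postprocess(self._forward(self.preprocess(inputs)))

    def run_multi(self, inputs):
        return [self.run_single(item) for item in inputs]

pipe = ToyPipeline()
print(pipe.run_multi(["hi there", "one two three four"]))
```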
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_feature_extractor=True, has_image_processor=True, has_processor=True))
class Pipeline(_ScikitCompat, PushToHubMixin):
'''
The Pipeline class is the class from which all pipelines inherit. Refer to this class for methods shared across
different pipelines.
Base class implementing pipelined operations. Pipeline workflow is defined as a sequence of the following
operations:
Input -> Tokenization -> Model Inference -> Post-Processing (task dependent) -> Output
Pipeline supports running on CPU or GPU through the device argument (see below).
Some pipeline, like for instance [`FeatureExtractionPipeline`] (`'feature-extraction'`) output large tensor object
as nested-lists. In order to avoid dumping such large structure as textual data we provide the `binary_output`
constructor argument. If set to `True`, the output will be stored in the pickle format.
'''
def __init__(self, model: 'PreTrainedModel', tokenizer: Optional[PreTrainedTokenizer]=None, feature_extractor: Optional[PreTrainedFeatureExtractor]=None, image_processor: Optional[BaseImageProcessor]=None, processor: Optional[ProcessorMixin]=None, modelcard: Optional[ModelCard]=None, task: str='', device: Optional[Union[int, 'torch.device']]=None, binary_output: bool=False, **kwargs):
pass
def save_pretrained(self, save_directory: Union[str, os.PathLike], safe_serialization: bool=True, **kwargs):
'''
Save the pipeline's model and tokenizer.
Args:
save_directory (`str` or `os.PathLike`):
A path to the directory in which to save the pipeline. It will be created if it doesn't exist.
safe_serialization (`bool`, *optional*, defaults to `True`):
Whether to save the model using `safetensors` or PyTorch serialization.
kwargs (`dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
'''
pass
def transform(self, X):
'''
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
'''
pass
def predict(self, X):
'''
Scikit / Keras interface to transformers' pipelines. This method will forward to __call__().
'''
pass
@property
def dtype(self) -> Optional['torch.dtype']:
'''
Dtype of the model (if it's Pytorch model), `None` otherwise.
'''
pass
@property
def torch_dtype(self) -> Optional['torch.dtype']:
'''
Torch dtype of the model (if it's Pytorch model), `None` otherwise.
'''
pass
@contextmanager
def device_placement(self):
'''
Context Manager allowing tensor allocation on the user-specified device.
Returns:
Context manager
Examples:
```python
# Explicitly ask for tensor allocation on CUDA device :0
pipe = pipeline(..., device=0)
with pipe.device_placement():
# Every tensor allocation will be done on the request device
output = pipe(...)
```'''
pass
def ensure_tensor_on_device(self, **inputs):
'''
Ensure PyTorch tensors are on the specified device.
Args:
inputs (keyword arguments that should be `torch.Tensor`, the rest is ignored):
The tensors to place on `self.device`.
Recursive on lists **only**.
Return:
`dict[str, torch.Tensor]`: The same as `inputs` but on the proper device.
'''
pass
def _ensure_tensor_on_device(self, inputs, device):
pass
def check_model_type(self, supported_models: Union[list[str], dict]):
'''
Check whether the model class is supported by the pipeline.
Args:
supported_models (`list[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
'''
pass
@abstractmethod
def _sanitize_parameters(self, **pipeline_parameters):
'''
_sanitize_parameters will be called with any excessive named arguments from either `__init__` or `__call__`
methods. It should return 3 dictionaries of the resolved parameters used by the various `preprocess`,
`forward` and `postprocess` methods. Do not fill dictionaries if the caller didn't specify kwargs. This
lets you keep defaults in function signatures, which is more "natural".
It is not meant to be called directly, it will be automatically called and the final parameters resolved by
`__init__` and `__call__`
'''
pass
@abstractmethod
def preprocess(self, input_: Any, **preprocess_parameters: dict) -> dict[str, GenericTensor]:
'''
Preprocess will take the `input_` of a specific pipeline and return a dictionary of everything necessary for
`_forward` to run properly. It should contain at least one tensor, but might have arbitrary other items.
'''
pass
@abstractmethod
def _forward(self, input_tensors: dict[str, GenericTensor], **forward_parameters: dict) -> ModelOutput:
'''
_forward will receive the prepared dictionary from `preprocess` and run it on the model. This method might
involve the GPU or the CPU and should be agnostic to it. Isolating this function is the reason for `preprocess`
and `postprocess` to exist, so that the hot path, this method, can generally run as fast as possible.
It is not meant to be called directly, `forward` is preferred. It is basically the same but contains additional
code surrounding `_forward` making sure tensors and models are on the same device, disabling the training part
of the code (leading to faster inference).
'''
pass
@abstractmethod
def postprocess(self, model_outputs: ModelOutput, **postprocess_parameters: dict) -> Any:
'''
Postprocess will receive the raw outputs of the `_forward` method, generally tensors, and reformat them into
something more friendly. Generally it will output a list or a dict of results (containing just strings and
numbers).
'''
pass
def get_inference_context(self):
pass
def forward(self, model_inputs, **forward_params):
pass
def get_iterator(self, inputs, num_workers: int, batch_size: int, preprocess_params, forward_params, postprocess_params):
pass
def __call__(self, inputs, *args, num_workers=None, batch_size=None, **kwargs):
pass
def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
pass
def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
pass
def iterate(self, inputs, preprocess_params, forward_params, postprocess_params):
pass
| 30 | 13 | 23 | 2 | 16 | 5 | 5 | 0.35 | 2 | 23 | 6 | 21 | 20 | 20 | 20 | 42 | 518 | 64 | 338 | 103 | 289 | 117 | 233 | 75 | 212 | 28 | 5 | 4 | 95 |
6,410 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py | transformers.pipelines.base.PipelineDataFormat |
import os
from os.path import abspath, exists
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, Optional, Union
import pickle
class PipelineDataFormat:
"""
Base class for all the pipeline supported data format both for reading and writing. Supported data formats
currently includes:
- JSON
- CSV
- stdin/stdout (pipe)
`PipelineDataFormat` also includes some utilities to work with multi-columns like mapping from datasets columns to
pipelines keyword arguments through the `dataset_kwarg_1=dataset_column_1` format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
"""
SUPPORTED_FORMATS = ['json', 'csv', 'pipe']
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite: bool=False):
self.output_path = output_path
self.input_path = input_path
self.column = column.split(',') if column is not None else ['']
self.is_multi_columns = len(self.column) > 1
if self.is_multi_columns:
self.column = [tuple(c.split('=')) if '=' in c else (c, c) for c in self.column]
if output_path is not None and (not overwrite):
if exists(abspath(self.output_path)):
raise OSError(f'{self.output_path} already exists on disk')
if input_path is not None:
if not exists(abspath(self.input_path)):
raise OSError(f"{self.input_path} doesn't exist on disk")
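The `__init__` above parses multi-column specs like `dataset_kwarg_1=dataset_column_1` into `(kwarg, column)` pairs. A standalone sketch of just that parsing step (the function name `parse_columns` is illustrative):

```python
# Sketch of the multi-column mapping parsing done in PipelineDataFormat.__init__:
# "kwarg=column" pairs become (kwarg, column) tuples; bare names map to themselves.
def parse_columns(column):
    columns = column.split(',') if column is not None else ['']
    if len(columns) > 1:
        return [tuple(c.split('=')) if '=' in c else (c, c) for c in columns]
    return columns

print(parse_columns("question=q,context=c"))  # [('question', 'q'), ('context', 'c')]
print(parse_columns("text"))                  # ['text']
```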
@abstractmethod
def __iter__(self):
raise NotImplementedError()
@abstractmethod
def save(self, data: Union[dict, list[dict]]):
"""
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`dict` or list of `dict`): The data to store.
"""
raise NotImplementedError()
def save_binary(self, data: Union[dict, list[dict]]) -> str:
"""
Save the provided data object as a pickle-formatted binary data on the disk.
Args:
data (`dict` or list of `dict`): The data to store.
Returns:
`str`: Path where the data has been saved.
"""
path, _ = os.path.splitext(self.output_path)
binary_path = os.path.extsep.join((path, 'pickle'))
with open(binary_path, 'wb+') as f_output:
pickle.dump(data, f_output)
return binary_path
@staticmethod
def from_str(format: str, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False) -> 'PipelineDataFormat':
"""
Creates an instance of the right subclass of [`~pipelines.PipelineDataFormat`] depending on `format`.
Args:
format (`str`):
The format of the desired pipeline. Acceptable values are `"json"`, `"csv"` or `"pipe"`.
output_path (`str`, *optional*):
Where to save the outgoing data.
input_path (`str`, *optional*):
Where to look for the input data.
column (`str`, *optional*):
The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
Returns:
[`~pipelines.PipelineDataFormat`]: The proper data format.
"""
if format == 'json':
return JsonPipelineDataFormat(output_path, input_path, column, overwrite=overwrite)
elif format == 'csv':
return CsvPipelineDataFormat(output_path, input_path, column, overwrite=overwrite)
elif format == 'pipe':
return PipedPipelineDataFormat(output_path, input_path, column, overwrite=overwrite)
else:
raise KeyError(f'Unknown reader {format} (Available reader are json/csv/pipe)')
|
class PipelineDataFormat:
'''
Base class for all the pipeline supported data format both for reading and writing. Supported data formats
currently includes:
- JSON
- CSV
- stdin/stdout (pipe)
`PipelineDataFormat` also includes some utilities to work with multi-columns like mapping from datasets columns to
pipelines keyword arguments through the `dataset_kwarg_1=dataset_column_1` format.
Args:
output_path (`str`): Where to save the outgoing data.
input_path (`str`): Where to look for the input data.
column (`str`): The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
'''
def __init__(self, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite: bool=False):
pass
@abstractmethod
def __iter__(self):
pass
@abstractmethod
def save(self, data: Union[dict, list[dict]]):
'''
Save the provided data object with the representation for the current [`~pipelines.PipelineDataFormat`].
Args:
data (`dict` or list of `dict`): The data to store.
'''
pass
def save_binary(self, data: Union[dict, list[dict]]) -> str:
'''
Save the provided data object as a pickle-formatted binary data on the disk.
Args:
data (`dict` or list of `dict`): The data to store.
Returns:
`str`: Path where the data has been saved.
'''
pass
@staticmethod
def from_str(format: str, output_path: Optional[str], input_path: Optional[str], column: Optional[str], overwrite=False) -> 'PipelineDataFormat':
'''
Creates an instance of the right subclass of [`~pipelines.PipelineDataFormat`] depending on `format`.
Args:
format (`str`):
The format of the desired pipeline. Acceptable values are `"json"`, `"csv"` or `"pipe"`.
output_path (`str`, *optional*):
Where to save the outgoing data.
input_path (`str`, *optional*):
Where to look for the input data.
column (`str`, *optional*):
The column to read.
overwrite (`bool`, *optional*, defaults to `False`):
Whether or not to overwrite the `output_path`.
Returns:
[`~pipelines.PipelineDataFormat`]: The proper data format.
'''
pass
| 9 | 4 | 16 | 2 | 9 | 6 | 3 | 0.88 | 0 | 10 | 3 | 3 | 4 | 4 | 5 | 5 | 111 | 19 | 49 | 29 | 28 | 43 | 31 | 13 | 25 | 8 | 0 | 2 | 15 |
6,411 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py | transformers.pipelines.base.PipelineException |
class PipelineException(Exception):
"""
Raised by a [`Pipeline`] when handling __call__.
Args:
task (`str`): The task of the pipeline.
model (`str`): The model used by the pipeline.
reason (`str`): The error message to display.
"""
def __init__(self, task: str, model: str, reason: str):
super().__init__(reason)
self.task = task
self.model = model
|
class PipelineException(Exception):
'''
Raised by a [`Pipeline`] when handling __call__.
Args:
task (`str`): The task of the pipeline.
model (`str`): The model used by the pipeline.
reason (`str`): The error message to display.
'''
def __init__(self, task: str, model: str, reason: str):
pass
| 2 | 1 | 5 | 1 | 4 | 0 | 1 | 1.4 | 1 | 2 | 0 | 0 | 1 | 2 | 1 | 11 | 15 | 3 | 5 | 4 | 3 | 7 | 5 | 4 | 3 | 1 | 3 | 0 | 1 |
6,412 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py | transformers.pipelines.base.PipelineRegistry |
from typing import TYPE_CHECKING, Any, Optional, Union
class PipelineRegistry:
def __init__(self, supported_tasks: dict[str, Any], task_aliases: dict[str, str]) -> None:
self.supported_tasks = supported_tasks
self.task_aliases = task_aliases
def get_supported_tasks(self) -> list[str]:
supported_task = list(self.supported_tasks.keys()) + list(self.task_aliases.keys())
supported_task.sort()
return supported_task
def check_task(self, task: str) -> tuple[str, dict, Any]:
if task in self.task_aliases:
task = self.task_aliases[task]
if task in self.supported_tasks:
targeted_task = self.supported_tasks[task]
return (task, targeted_task, None)
if task.startswith('translation'):
tokens = task.split('_')
if len(tokens) == 4 and tokens[0] == 'translation' and (tokens[2] == 'to'):
targeted_task = self.supported_tasks['translation']
task = 'translation'
return (task, targeted_task, (tokens[1], tokens[3]))
raise KeyError(f"Invalid translation task {task}, use 'translation_XX_to_YY' format")
raise KeyError(f"Unknown task {task}, available tasks are {self.get_supported_tasks() + ['translation_XX_to_YY']}")
def register_pipeline(self, task: str, pipeline_class: type, pt_model: Optional[Union[type, tuple[type]]]=None, default: Optional[dict]=None, type: Optional[str]=None) -> None:
if task in self.supported_tasks:
logger.warning(f'{task} is already registered. Overwriting pipeline for task {task}...')
if pt_model is None:
pt_model = ()
elif not isinstance(pt_model, tuple):
pt_model = (pt_model,)
task_impl = {'impl': pipeline_class, 'pt': pt_model}
if default is not None:
if 'model' not in default:
default = {'model': default}
task_impl['default'] = default
if type is not None:
task_impl['type'] = type
self.supported_tasks[task] = task_impl
pipeline_class._registered_impl = {task: task_impl}
def to_dict(self):
return self.supported_tasks
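`check_task` above handles dynamic translation task names of the form `translation_XX_to_YY` by splitting on underscores. That parsing step, extracted as a standalone sketch (the function name is illustrative, not library API):

```python
# Standalone sketch of PipelineRegistry.check_task's handling of dynamic
# "translation_XX_to_YY" task names.
def parse_translation_task(task):
    tokens = task.split('_')
    if len(tokens) == 4 and tokens[0] == 'translation' and tokens[2] == 'to':
        # Normalized task name plus the (source, target) language pair.
        return ('translation', (tokens[1], tokens[3]))
    raise KeyError(f"Invalid translation task {task}, use 'translation_XX_to_YY' format")

print(parse_translation_task('translation_en_to_fr'))  # ('translation', ('en', 'fr'))
```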
|
class PipelineRegistry:
def __init__(self, supported_tasks: dict[str, Any], task_aliases: dict[str, str]) -> None:
pass
def get_supported_tasks(self) -> list[str]:
pass
def check_task(self, task: str) -> tuple[str, dict, Any]:
pass
def register_pipeline(self, task: str, pipeline_class: type, pt_model: Optional[Union[type, tuple[type]]]=None, default: Optional[dict]=None, type: Optional[str]=None) -> None:
pass
def to_dict(self):
pass
| 6 | 0 | 12 | 2 | 11 | 0 | 3 | 0 | 0 | 5 | 0 | 0 | 5 | 2 | 5 | 5 | 66 | 12 | 54 | 20 | 40 | 0 | 42 | 12 | 36 | 9 | 0 | 2 | 17 |
6,413 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/base.py | transformers.pipelines.base._ScikitCompat |
from abc import ABC, abstractmethod
class _ScikitCompat(ABC):
"""
Interface layer for the Scikit and Keras compatibility.
"""
@abstractmethod
def transform(self, X):
raise NotImplementedError()
@abstractmethod
def predict(self, X):
raise NotImplementedError()
|
class _ScikitCompat(ABC):
'''
Interface layer for the Scikit and Keras compatibility.
'''
@abstractmethod
def transform(self, X):
pass
@abstractmethod
def predict(self, X):
pass
| 5 | 1 | 2 | 0 | 2 | 0 | 1 | 0.43 | 1 | 1 | 0 | 1 | 2 | 0 | 2 | 22 | 12 | 2 | 7 | 5 | 2 | 3 | 5 | 3 | 2 | 1 | 4 | 0 | 2 |
6,414 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/depth_estimation.py | transformers.pipelines.depth_estimation.DepthEstimationPipeline |
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from typing import Any, Union, overload
from .base import Pipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class DepthEstimationPipeline(Pipeline):
"""
Depth estimation pipeline using any `AutoModelForDepthEstimation`. This pipeline predicts the depth of an image.
Example:
```python
>>> from transformers import pipeline
>>> depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # This is a tensor with the values being the depth expressed in meters for each pixel
>>> output["predicted_depth"].shape
torch.Size([1, 384, 384])
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This depth estimation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"depth-estimation"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=depth-estimation).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_DEPTH_ESTIMATION_MAPPING_NAMES)
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> dict[str, Any]:
...
@overload
def __call__(self, inputs: list[Union[str, 'Image.Image']], **kwargs: Any) -> list[dict[str, Any]]:
...
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
"""
Predict the depth(s) of the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
parameters (`Dict`, *optional*):
A dictionary of argument names to parameter values, to control pipeline behaviour.
The only parameter available right now is `timeout`, which is the length of time, in seconds,
that the pipeline should wait before giving up on trying to download an image.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing result. If the input is a single image, will return a
dictionary, if the input is a list of several images, will return a list of dictionaries corresponding to
the images.
The dictionaries contain the following keys:
- **predicted_depth** (`torch.Tensor`) -- The predicted depth by the model as a `torch.Tensor`.
- **depth** (`PIL.Image`) -- The predicted depth by the model as a `PIL.Image`.
"""
if 'images' in kwargs:
inputs = kwargs.pop('images')
if inputs is None:
raise ValueError('Cannot call the depth-estimation pipeline without an inputs argument!')
return super().__call__(inputs, **kwargs)
def _sanitize_parameters(self, timeout=None, parameters=None, **kwargs):
preprocess_params = {}
if timeout is not None:
preprocess_params['timeout'] = timeout
if isinstance(parameters, dict) and 'timeout' in parameters:
preprocess_params['timeout'] = parameters['timeout']
return (preprocess_params, {}, {})
def preprocess(self, image, timeout=None):
image = load_image(image, timeout)
model_inputs = self.image_processor(images=image, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
model_inputs['target_size'] = image.size[::-1]
return model_inputs
def _forward(self, model_inputs):
target_size = model_inputs.pop('target_size')
model_outputs = self.model(**model_inputs)
model_outputs['target_size'] = target_size
return model_outputs
def postprocess(self, model_outputs):
outputs = self.image_processor.post_process_depth_estimation(model_outputs, [model_outputs['target_size']])
formatted_outputs = []
for output in outputs:
depth = output['predicted_depth'].detach().cpu().numpy()
depth = (depth - depth.min()) / (depth.max() - depth.min())
depth = Image.fromarray((depth * 255).astype('uint8'))
formatted_outputs.append({'predicted_depth': output['predicted_depth'], 'depth': depth})
return formatted_outputs[0] if len(outputs) == 1 else formatted_outputs
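The `postprocess` step above rescales the raw depth map to an 8-bit grayscale image before handing it to `Image.fromarray`. A minimal self-contained sketch of that normalization using NumPy only (`fake_depth` is invented stand-in data, not real model output):

```python
import numpy as np

# Stand-in for output["predicted_depth"].detach().cpu().numpy():
# a tiny 2x2 "depth map" with arbitrary values in meters.
fake_depth = np.array([[1.0, 2.0], [3.0, 5.0]])

# Same min-max normalization as in postprocess: map depths to [0, 1] ...
normalized = (fake_depth - fake_depth.min()) / (fake_depth.max() - fake_depth.min())

# ... then scale to uint8 [0, 255], the format Image.fromarray expects
# for a grayscale image.
gray = (normalized * 255).astype("uint8")

print(gray.min(), gray.max())  # 0 255
```

Note the nearest and farthest pixels always map to 0 and 255, so the output is a relative depth visualization, not absolute meters (those stay in `predicted_depth`).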
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class DepthEstimationPipeline(Pipeline):
'''
Depth estimation pipeline using any `AutoModelForDepthEstimation`. This pipeline predicts the depth of an image.
Example:
```python
>>> from transformers import pipeline
>>> depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
>>> output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # This is a tensor with the values being the depth expressed in meters for each pixel
>>> output["predicted_depth"].shape
torch.Size([1, 384, 384])
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This depth estimation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"depth-estimation"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=depth-estimation).
'''
def __init__(self, *args, **kwargs):
pass
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> dict[str, Any]:
pass
@overload
def __call__(self, inputs: list[Union[str, 'Image.Image']], **kwargs: Any) -> list[dict[str, Any]]:
pass
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
'''
Predict the depth(s) of the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
parameters (`Dict`, *optional*):
A dictionary of argument names to parameter values, to control pipeline behaviour.
The only parameter available right now is `timeout`, which is the length of time, in seconds,
that the pipeline should wait before giving up on trying to download an image.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing result. If the input is a single image, will return a
dictionary, if the input is a list of several images, will return a list of dictionaries corresponding to
the images.
The dictionaries contain the following keys:
- **predicted_depth** (`torch.Tensor`) -- The predicted depth by the model as a `torch.Tensor`.
- **depth** (`PIL.Image`) -- The predicted depth by the model as a `PIL.Image`.
'''
pass
def _sanitize_parameters(self, timeout=None, parameters=None, **kwargs):
pass
def preprocess(self, image, timeout=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs):
pass
| 12
| 2
| 13
| 2
| 7
| 5
| 2
| 1.07
| 1
| 4
| 0
| 0
| 6
| 0
| 6
| 48
| 109
| 22
| 42
| 15
| 35
| 45
| 39
| 15
| 32
| 3
| 6
| 1
| 13
|
6,415
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/document_question_answering.py
|
transformers.pipelines.document_question_answering.DocumentQuestionAnsweringPipeline
|
from ..utils import ExplicitEnum, add_end_docstrings, is_pytesseract_available, is_torch_available, is_vision_available, logging
import re
from typing import Any, Optional, Union, overload
from ..generation import GenerationConfig
import numpy as np
from .question_answering import select_starts_ends
from .base import ChunkPipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True, has_tokenizer=True))
class DocumentQuestionAnsweringPipeline(ChunkPipeline):
"""
Document Question Answering pipeline using any `AutoModelForDocumentQuestionAnswering`. The inputs/outputs are
similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR'd
words/boxes) as input instead of text context.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> document_qa(
... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
... question="What is the invoice number?",
... )
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This document question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"document-question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a document question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=document-question-answering).
"""
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = None
_load_feature_extractor = None
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.tokenizer is not None and (not self.tokenizer.__class__.__name__.endswith('Fast')):
raise ValueError(f'`DocumentQuestionAnsweringPipeline` requires a fast tokenizer, but a slow tokenizer (`{self.tokenizer.__class__.__name__}`) is provided.')
if self.model.config.__class__.__name__ == 'VisionEncoderDecoderConfig':
self.model_type = ModelType.VisionEncoderDecoder
if self.model.config.encoder.model_type != 'donut-swin':
raise ValueError('Currently, the only supported VisionEncoderDecoder model is Donut')
else:
self.check_model_type(MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES)
if self.model.config.__class__.__name__ == 'LayoutLMConfig':
self.model_type = ModelType.LayoutLM
else:
self.model_type = ModelType.LayoutLMv2andv3
def _sanitize_parameters(self, padding=None, doc_stride=None, max_question_len=None, lang: Optional[str]=None, tesseract_config: Optional[str]=None, max_answer_len=None, max_seq_len=None, top_k=None, handle_impossible_answer=None, timeout=None, **kwargs):
preprocess_params, postprocess_params = ({}, {})
if padding is not None:
preprocess_params['padding'] = padding
if doc_stride is not None:
preprocess_params['doc_stride'] = doc_stride
if max_question_len is not None:
preprocess_params['max_question_len'] = max_question_len
if max_seq_len is not None:
preprocess_params['max_seq_len'] = max_seq_len
if lang is not None:
preprocess_params['lang'] = lang
if tesseract_config is not None:
preprocess_params['tesseract_config'] = tesseract_config
if timeout is not None:
preprocess_params['timeout'] = timeout
if top_k is not None:
if top_k < 1:
raise ValueError(f'top_k parameter should be >= 1 (got {top_k})')
postprocess_params['top_k'] = top_k
if max_answer_len is not None:
if max_answer_len < 1:
raise ValueError(f'max_answer_len parameter should be >= 1 (got {max_answer_len})')
postprocess_params['max_answer_len'] = max_answer_len
if handle_impossible_answer is not None:
postprocess_params['handle_impossible_answer'] = handle_impossible_answer
forward_params = {}
if getattr(self, 'assistant_model', None) is not None:
forward_params['assistant_model'] = self.assistant_model
if getattr(self, 'assistant_tokenizer', None) is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
return (preprocess_params, forward_params, postprocess_params)
@overload
def __call__(self, image: Union['Image.Image', str], question: str, word_boxes: Optional[tuple[str, list[float]]]=None, **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: dict[str, Any], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: list[dict[str, Any]], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, image: Union['Image.Image', str, list[dict[str, Any]]], question: Optional[str]=None, word_boxes: Optional[tuple[str, list[float]]]=None, **kwargs: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
"""
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an
optional list of (word, box) tuples which represent the text in the document. If the `word_boxes` are not
provided, it will use the Tesseract OCR engine (if available) to extract the words and boxes automatically for
LayoutLM-like models which require them as input. For Donut, no OCR is run.
You can invoke the pipeline several ways:
- `pipeline(image=image, question=question)`
- `pipeline(image=image, question=question, word_boxes=word_boxes)`
- `pipeline([{"image": image, "question": question}])`
- `pipeline([{"image": image, "question": question, "word_boxes": word_boxes}])`
Args:
image (`str` or `PIL.Image`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be
broadcasted to multiple questions.
question (`str`):
A question to ask of the document.
word_boxes (`list[str, tuple[float, float, float, float]]`, *optional*):
A list of words and bounding boxes (normalized 0->1000). If you provide this optional input, then the
pipeline will use these words and boxes instead of running OCR on the image to derive them for models
that need them (e.g. LayoutLM). This allows you to reuse OCR'd results across many invocations of the
pipeline without having to re-run it each time.
top_k (`int`, *optional*, defaults to 1):
The number of answers to return (will be chosen by order of likelihood). Note that we return less than
top_k answers if there are not enough options available within the context.
doc_stride (`int`, *optional*, defaults to 128):
If the words in the document are too long to fit with the question for the model, it will be split in
several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (`int`, *optional*, defaults to 15):
The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (`int`, *optional*, defaults to 384):
The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
model. The context will be split in several chunks (using `doc_stride` as overlap) if needed.
max_question_len (`int`, *optional*, defaults to 64):
The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (`bool`, *optional*, defaults to `False`):
Whether or not we accept impossible as an answer.
lang (`str`, *optional*):
Language to use while running OCR. Defaults to english.
tesseract_config (`str`, *optional*):
Additional flags to pass to tesseract while running OCR.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
- **score** (`float`) -- The probability associated to the answer.
- **start** (`int`) -- The start word index of the answer (in the OCR'd version of the input or provided
`word_boxes`).
- **end** (`int`) -- The end word index of the answer (in the OCR'd version of the input or provided
`word_boxes`).
- **answer** (`str`) -- The answer to the question.
- **words** (`list[int]`) -- The index of each word/box pair that is in the answer
"""
if isinstance(question, str):
inputs = {'question': question, 'image': image}
if word_boxes is not None:
inputs['word_boxes'] = word_boxes
else:
inputs = image
return super().__call__(inputs, **kwargs)
def preprocess(self, input, padding='do_not_pad', doc_stride=None, max_seq_len=None, word_boxes: Optional[tuple[str, list[float]]]=None, lang=None, tesseract_config='', timeout=None):
if max_seq_len is None:
max_seq_len = self.tokenizer.model_max_length
if doc_stride is None:
doc_stride = min(max_seq_len // 2, 256)
image = None
image_features = {}
if input.get('image', None) is not None:
image = load_image(input['image'], timeout=timeout)
if self.image_processor is not None:
image_inputs = self.image_processor(images=image, return_tensors='pt')
image_inputs = image_inputs.to(self.dtype)
image_features.update(image_inputs)
elif self.feature_extractor is not None:
image_features.update(self.feature_extractor(images=image, return_tensors='pt'))
elif self.model_type == ModelType.VisionEncoderDecoder:
raise ValueError('If you are using a VisionEncoderDecoderModel, you must provide a feature extractor')
words, boxes = (None, None)
if self.model_type != ModelType.VisionEncoderDecoder:
if 'word_boxes' in input:
words = [x[0] for x in input['word_boxes']]
boxes = [x[1] for x in input['word_boxes']]
elif 'words' in image_features and 'boxes' in image_features:
words = image_features.pop('words')[0]
boxes = image_features.pop('boxes')[0]
elif image is not None:
if not TESSERACT_LOADED:
raise ValueError('If you provide an image without word_boxes, then the pipeline will run OCR using Tesseract, but pytesseract is not available')
if TESSERACT_LOADED:
words, boxes = apply_tesseract(image, lang=lang, tesseract_config=tesseract_config)
else:
raise ValueError('You must provide an image or word_boxes. If you provide an image, the pipeline will automatically run OCR to derive words and boxes')
if self.tokenizer.padding_side != 'right':
raise ValueError(f"Document question answering only supports tokenizers whose padding side is 'right', not {self.tokenizer.padding_side}")
if self.model_type == ModelType.VisionEncoderDecoder:
task_prompt = f"<s_docvqa><s_question>{input['question']}</s_question><s_answer>"
encoding = {'inputs': image_features['pixel_values'], 'decoder_input_ids': self.tokenizer(task_prompt, add_special_tokens=False, return_tensors='pt').input_ids, 'return_dict_in_generate': True}
yield {**encoding, 'p_mask': None, 'word_ids': None, 'words': None, 'output_attentions': True, 'is_last': True}
else:
tokenizer_kwargs = {}
if self.model_type == ModelType.LayoutLM:
tokenizer_kwargs['text'] = input['question'].split()
tokenizer_kwargs['text_pair'] = words
tokenizer_kwargs['is_split_into_words'] = True
else:
tokenizer_kwargs['text'] = [input['question']]
tokenizer_kwargs['text_pair'] = [words]
tokenizer_kwargs['boxes'] = [boxes]
encoding = self.tokenizer(padding=padding, max_length=max_seq_len, stride=doc_stride, return_token_type_ids=True, truncation='only_second', return_overflowing_tokens=True, **tokenizer_kwargs)
encoding.pop('overflow_to_sample_mapping', None)
num_spans = len(encoding['input_ids'])
p_mask = [[tok != 1 for tok in encoding.sequence_ids(span_id)] for span_id in range(num_spans)]
for span_idx in range(num_spans):
span_encoding = {k: torch.tensor(v[span_idx:span_idx + 1]) for k, v in encoding.items()}
if 'pixel_values' in image_features:
span_encoding['image'] = image_features['pixel_values']
input_ids_span_idx = encoding['input_ids'][span_idx]
if self.tokenizer.cls_token_id is not None:
cls_indices = np.nonzero(np.array(input_ids_span_idx) == self.tokenizer.cls_token_id)[0]
for cls_index in cls_indices:
p_mask[span_idx][cls_index] = 0
if 'boxes' not in tokenizer_kwargs:
bbox = []
for input_id, sequence_id, word_id in zip(encoding.input_ids[span_idx], encoding.sequence_ids(span_idx), encoding.word_ids(span_idx)):
if sequence_id == 1:
bbox.append(boxes[word_id])
elif input_id == self.tokenizer.sep_token_id:
bbox.append([1000] * 4)
else:
bbox.append([0] * 4)
span_encoding['bbox'] = torch.tensor(bbox).unsqueeze(0)
yield {**span_encoding, 'p_mask': p_mask[span_idx], 'word_ids': encoding.word_ids(span_idx), 'words': words, 'is_last': span_idx == num_spans - 1}
def _forward(self, model_inputs, **generate_kwargs):
p_mask = model_inputs.pop('p_mask', None)
word_ids = model_inputs.pop('word_ids', None)
words = model_inputs.pop('words', None)
is_last = model_inputs.pop('is_last', False)
if self.model_type == ModelType.VisionEncoderDecoder:
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
model_outputs = self.model.generate(**model_inputs, **generate_kwargs)
else:
model_outputs = self.model(**model_inputs)
model_outputs = dict(model_outputs.items())
model_outputs['p_mask'] = p_mask
model_outputs['word_ids'] = word_ids
model_outputs['words'] = words
model_outputs['attention_mask'] = model_inputs.get('attention_mask', None)
model_outputs['is_last'] = is_last
return model_outputs
def postprocess(self, model_outputs, top_k=1, **kwargs):
if self.model_type == ModelType.VisionEncoderDecoder:
answers = [self.postprocess_encoder_decoder_single(o) for o in model_outputs]
else:
answers = self.postprocess_extractive_qa(model_outputs, top_k=top_k, **kwargs)
answers = sorted(answers, key=lambda x: x.get('score', 0), reverse=True)[:top_k]
return answers
def postprocess_encoder_decoder_single(self, model_outputs, **kwargs):
sequence = self.tokenizer.batch_decode(model_outputs['sequences'])[0]
sequence = sequence.replace(self.tokenizer.eos_token, '').replace(self.tokenizer.pad_token, '')
sequence = re.sub('<.*?>', '', sequence, count=1).strip()
ret = {'answer': None}
answer = re.search('<s_answer>(.*)</s_answer>', sequence)
if answer is not None:
ret['answer'] = answer.group(1).strip()
return ret
def postprocess_extractive_qa(self, model_outputs, top_k=1, handle_impossible_answer=False, max_answer_len=15, **kwargs):
min_null_score = 1000000
answers = []
for output in model_outputs:
words = output['words']
if output['start_logits'].dtype in (torch.bfloat16, torch.float16):
output['start_logits'] = output['start_logits'].float()
if output['end_logits'].dtype in (torch.bfloat16, torch.float16):
output['end_logits'] = output['end_logits'].float()
starts, ends, scores, min_null_score = select_starts_ends(start=output['start_logits'], end=output['end_logits'], p_mask=output['p_mask'], attention_mask=output['attention_mask'].numpy() if output.get('attention_mask', None) is not None else None, min_null_score=min_null_score, top_k=top_k, handle_impossible_answer=handle_impossible_answer, max_answer_len=max_answer_len)
word_ids = output['word_ids']
for start, end, score in zip(starts, ends, scores):
word_start, word_end = (word_ids[start], word_ids[end])
if word_start is not None and word_end is not None:
answers.append({'score': float(score), 'answer': ' '.join(words[word_start:word_end + 1]), 'start': word_start, 'end': word_end})
if handle_impossible_answer:
answers.append({'score': min_null_score, 'answer': '', 'start': 0, 'end': 0})
return answers
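For LayoutLM-family models, `preprocess` builds one bounding box per token: context tokens inherit the box of the word they were tokenized from, `[SEP]` tokens get the full-page box `[1000]*4`, and question/special tokens get `[0]*4`. A self-contained sketch of that loop; the token ids, sequence ids and word ids below are invented stand-ins for fast-tokenizer output, not real values:

```python
# Invented stand-ins for one tokenized span:
# [CLS] what [SEP] inv ##oice [SEP]
SEP_TOKEN_ID = 102
input_ids    = [101,  2054, 102,  7327, 7327, 102]
sequence_ids = [None, 0,    None, 1,    1,    None]  # 0 = question, 1 = context
word_ids     = [None, 0,    None, 0,    0,    None]  # both context pieces come from word 0
boxes        = [[57, 12, 110, 30]]  # one normalized (0-1000) box for the word "invoice"

bbox = []
for input_id, sequence_id, word_id in zip(input_ids, sequence_ids, word_ids):
    if sequence_id == 1:                # context token: reuse its source word's box
        bbox.append(boxes[word_id])
    elif input_id == SEP_TOKEN_ID:      # separator token: full-page box
        bbox.append([1000] * 4)
    else:                               # question / special token: zero box
        bbox.append([0] * 4)

print(bbox)
```

The two subword pieces of "invoice" end up sharing one box, which is why the answer decoding later works at word granularity (`word_ids`) rather than token granularity.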
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True, has_tokenizer=True))
class DocumentQuestionAnsweringPipeline(ChunkPipeline):
'''
Document Question Answering pipeline using any `AutoModelForDocumentQuestionAnswering`. The inputs/outputs are
similar to the (extractive) question answering pipeline; however, the pipeline takes an image (and optional OCR'd
words/boxes) as input instead of text context.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> document_qa = pipeline(model="impira/layoutlm-document-qa")
>>> document_qa(
... image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
... question="What is the invoice number?",
... )
[{'score': 0.425, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This document question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"document-question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a document question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=document-question-answering).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, padding=None, doc_stride=None, max_question_len=None, lang: Optional[str]=None, tesseract_config: Optional[str]=None, max_answer_len=None, max_seq_len=None, top_k=None, handle_impossible_answer=None, timeout=None, **kwargs):
pass
@overload
def __call__(self, image: Union['Image.Image', str], question: str, word_boxes: Optional[tuple[str, list[float]]]=None, **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, image: dict[str, Any], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, image: list[dict[str, Any]], **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, image: Union['Image.Image', str, list[dict[str, Any]]], question: Optional[str]=None, word_boxes: Optional[tuple[str, list[float]]]=None, **kwargs: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
'''
Answer the question(s) given as inputs by using the document(s). A document is defined as an image and an
optional list of (word, box) tuples which represent the text in the document. If the `word_boxes` are not
provided, it will use the Tesseract OCR engine (if available) to extract the words and boxes automatically for
LayoutLM-like models which require them as input. For Donut, no OCR is run.
You can invoke the pipeline several ways:
- `pipeline(image=image, question=question)`
- `pipeline(image=image, question=question, word_boxes=word_boxes)`
- `pipeline([{"image": image, "question": question}])`
- `pipeline([{"image": image, "question": question, "word_boxes": word_boxes}])`
Args:
image (`str` or `PIL.Image`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be
broadcasted to multiple questions.
question (`str`):
A question to ask of the document.
word_boxes (`list[str, tuple[float, float, float, float]]`, *optional*):
A list of words and bounding boxes (normalized 0->1000). If you provide this optional input, then the
pipeline will use these words and boxes instead of running OCR on the image to derive them for models
that need them (e.g. LayoutLM). This allows you to reuse OCR'd results across many invocations of the
pipeline without having to re-run it each time.
top_k (`int`, *optional*, defaults to 1):
The number of answers to return (will be chosen by order of likelihood). Note that we return less than
top_k answers if there are not enough options available within the context.
doc_stride (`int`, *optional*, defaults to 128):
If the words in the document are too long to fit with the question for the model, it will be split in
several chunks with some overlap. This argument controls the size of that overlap.
max_answer_len (`int`, *optional*, defaults to 15):
The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (`int`, *optional*, defaults to 384):
The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
model. The context will be split in several chunks (using `doc_stride` as overlap) if needed.
max_question_len (`int`, *optional*, defaults to 64):
The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (`bool`, *optional*, defaults to `False`):
Whether or not we accept impossible as an answer.
lang (`str`, *optional*):
Language to use while running OCR. Defaults to english.
tesseract_config (`str`, *optional*):
Additional flags to pass to tesseract while running OCR.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
- **score** (`float`) -- The probability associated to the answer.
- **start** (`int`) -- The start word index of the answer (in the OCR'd version of the input or provided
`word_boxes`).
- **end** (`int`) -- The end word index of the answer (in the OCR'd version of the input or provided
`word_boxes`).
- **answer** (`str`) -- The answer to the question.
- **words** (`list[int]`) -- The index of each word/box pair that is in the answer
'''
pass
def preprocess(self, input, padding='do_not_pad', doc_stride=None, max_seq_len=None, word_boxes: Optional[tuple[str, list[float]]]=None, lang=None, tesseract_config='', timeout=None):
pass
def _forward(self, model_inputs, **generate_kwargs):
pass
def postprocess(self, model_outputs, top_k=1, **kwargs):
pass
def postprocess_encoder_decoder_single(self, model_outputs, **kwargs):
pass
def postprocess_extractive_qa(self, model_outputs, top_k=1, handle_impossible_answer=False, max_answer_len=15, **kwargs):
pass
| 16
| 2
| 48
| 4
| 35
| 9
| 8
| 0.34
| 1
| 8
| 1
| 0
| 8
| 1
| 8
| 52
| 420
| 46
| 281
| 77
| 241
| 96
| 171
| 46
| 162
| 28
| 7
| 5
| 66
|
6,416
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/document_question_answering.py
|
transformers.pipelines.document_question_answering.ModelType
|
from ..utils import ExplicitEnum, add_end_docstrings, is_pytesseract_available, is_torch_available, is_vision_available, logging
class ModelType(ExplicitEnum):
LayoutLM = 'layoutlm'
LayoutLMv2andv3 = 'layoutlmv2andv3'
VisionEncoderDecoder = 'vision_encoder_decoder'
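`ModelType` derives from `ExplicitEnum`, a str-valued `Enum` that transformers uses so that an invalid value fails with a message listing the valid choices instead of a bare `ValueError`. A minimal stand-in using only the standard library (the class names and error wording here are ours, not the exact transformers implementation):

```python
from enum import Enum

class ExplicitEnumSketch(str, Enum):
    """str-valued Enum whose lookup failure lists the valid values."""
    @classmethod
    def _missing_(cls, value):
        # Called by Enum machinery when value-based lookup fails.
        raise ValueError(
            f"{value!r} is not a valid {cls.__name__}, "
            f"please select one of {[m.value for m in cls]}"
        )

class ModelTypeSketch(ExplicitEnumSketch):
    LayoutLM = "layoutlm"
    LayoutLMv2andv3 = "layoutlmv2andv3"
    VisionEncoderDecoder = "vision_encoder_decoder"

print(ModelTypeSketch("layoutlm").name)
try:
    ModelTypeSketch("donut")
except ValueError as e:
    print(e)
```

Because `_missing_` raises, a typo like `"donut"` fails immediately with the list of supported model types rather than propagating a silently-wrong string.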
|
class ModelType(ExplicitEnum):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 4
| 0
| 4
| 4
| 3
| 0
| 4
| 4
| 3
| 0
| 1
| 0
| 0
|
6,417
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/feature_extraction.py
|
transformers.pipelines.feature_extraction.FeatureExtractionPipeline
|
from .base import GenericTensor, Pipeline, build_pipeline_init_args
from ..utils import add_end_docstrings
from typing import Any, Union
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, supports_binary_output=False), '\n tokenize_kwargs (`dict`, *optional*):\n Additional dictionary of keyword arguments passed along to the tokenizer.\n return_tensors (`bool`, *optional*):\n If `True`, returns a tensor according to the specified framework, otherwise returns a list.')
class FeatureExtractionPipeline(Pipeline):
"""
Feature extraction pipeline uses no model head. This pipeline extracts the hidden states from the base
transformer, which can be used as features in downstream tasks.
Example:
```python
>>> from transformers import pipeline
>>> extractor = pipeline(model="google-bert/bert-base-uncased", task="feature-extraction")
>>> result = extractor("This is a simple test.", return_tensors=True)
>>> result.shape # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input string.
torch.Size([1, 8, 768])
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This feature extraction pipeline can currently be loaded from [`pipeline`] using the task identifier:
`"feature-extraction"`.
All models may be used for this pipeline. See a list of all models, including community-contributed models on
[huggingface.co/models](https://huggingface.co/models).
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
def _sanitize_parameters(self, truncation=None, tokenize_kwargs=None, return_tensors=None, **kwargs):
if tokenize_kwargs is None:
tokenize_kwargs = {}
if truncation is not None:
if 'truncation' in tokenize_kwargs:
raise ValueError('truncation parameter defined twice (given as keyword argument as well as in tokenize_kwargs)')
tokenize_kwargs['truncation'] = truncation
preprocess_params = tokenize_kwargs
postprocess_params = {}
if return_tensors is not None:
postprocess_params['return_tensors'] = return_tensors
return (preprocess_params, {}, postprocess_params)
def preprocess(self, inputs, **tokenize_kwargs) -> dict[str, GenericTensor]:
model_inputs = self.tokenizer(inputs, return_tensors='pt', **tokenize_kwargs)
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, return_tensors=False):
if return_tensors:
return model_outputs[0]
return model_outputs[0].tolist()
def __call__(self, *args: Union[str, list[str]], **kwargs: Any) -> Union[Any, list[Any]]:
"""
Extract the features of the input(s) text.
Args:
args (`str` or `list[str]`): One or several texts (or one list of texts) to get the features of.
Return:
A nested list of `float`: The features computed by the model.
"""
return super().__call__(*args, **kwargs)
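The `postprocess` step above simply converts the first model output (the last hidden state) to nested Python lists unless `return_tensors=True`. A minimal sketch of that conversion with NumPy; the shapes are illustrative, not taken from a real model:

```python
import numpy as np

# Hypothetical hidden states: batch of 1, 8 tokens, hidden size 4
hidden_states = np.arange(32, dtype=np.float32).reshape(1, 8, 4)

# The return_tensors=False path calls .tolist(), producing nested Python lists
features = hidden_states.tolist()
print(len(features), len(features[0]), len(features[0][0]))  # 1 8 4
```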
| null | 7 | 2 | 9 | 1 | 6 | 2 | 2 | 0.83 | 1 | 3 | 0 | 0 | 5 | 0 | 5 | 47 | 72 | 17 | 30 | 10 | 24 | 25 | 27 | 10 | 21 | 5 | 6 | 2 | 12 |
6,418 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/fill_mask.py | transformers.pipelines.fill_mask.FillMaskPipeline |
from .base import GenericTensor, Pipeline, PipelineException, build_pipeline_init_args
from ..utils import add_end_docstrings, is_torch_available, logging
from typing import Any, Union, overload
import numpy as np
import torch
logger = logging.get_logger(__name__)
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True), '\n top_k (`int`, *optional*, defaults to 5):\n The number of predictions to return.\n targets (`str` or `list[str]`, *optional*):\n When passed, the model will limit the scores to the passed targets instead of looking up in the whole\n vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting\n token will be used (with a warning, and that might be slower).\n tokenizer_kwargs (`dict`, *optional*):\n Additional dictionary of keyword arguments passed along to the tokenizer.')
class FillMaskPipeline(Pipeline):
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
'\n    Masked language modeling prediction pipeline using any `ModelWithLMHead`. See the [masked language modeling\n    examples](../task_summary#masked-language-modeling) for more information.\n\n    Example:\n\n    ```python\n    >>> from transformers import pipeline\n\n    >>> fill_masker = pipeline(model="google-bert/bert-base-uncased")\n    >>> fill_masker("This is a simple [MASK].")\n    [{\'score\': 0.042, \'token\': 3291, \'token_str\': \'problem\', \'sequence\': \'this is a simple problem.\'}, {\'score\': 0.031, \'token\': 3160, \'token_str\': \'question\', \'sequence\': \'this is a simple question.\'}, {\'score\': 0.03, \'token\': 8522, \'token_str\': \'equation\', \'sequence\': \'this is a simple equation.\'}, {\'score\': 0.027, \'token\': 2028, \'token_str\': \'one\', \'sequence\': \'this is a simple one.\'}, {\'score\': 0.024, \'token\': 3627, \'token_str\': \'rule\', \'sequence\': \'this is a simple rule.\'}]\n    ```\n\n    Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)\n\n    This mask filling pipeline can currently be loaded from [`pipeline`] using the following task identifier:\n    `"fill-mask"`.\n\n    The models that this pipeline can use are models that have been trained with a masked language modeling objective,\n    which includes the bi-directional models in the library. See the up-to-date list of available models on\n    [huggingface.co/models](https://huggingface.co/models?filter=fill-mask).\n\n    <Tip>\n\n    This pipeline only works for inputs with exactly one token masked. Experimental: We added support for multiple\n    masks. The returned values are raw model output, and correspond to disjoint probabilities where one might expect\n    joint probabilities (See [discussion](https://github.com/huggingface/transformers/pull/10222)).\n\n    </Tip>\n\n    <Tip>\n\n    This pipeline now supports tokenizer_kwargs. For example try:\n\n    ```python\n    >>> from transformers import pipeline\n\n    >>> fill_masker = pipeline(model="google-bert/bert-base-uncased")\n    >>> tokenizer_kwargs = {"truncation": True}\n    >>> fill_masker(\n    ...     "This is a simple [MASK]. " + "...with a large amount of repeated text appended. " * 100,\n    ...     tokenizer_kwargs=tokenizer_kwargs,\n    ... )\n    ```\n\n\n    </Tip>\n\n\n    '
def get_masked_index(self, input_ids: GenericTensor) -> np.ndarray:
masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False)
return masked_index
def _ensure_exactly_one_mask_token(self, input_ids: GenericTensor) -> np.ndarray:
masked_index = self.get_masked_index(input_ids)
numel = np.prod(masked_index.shape)
if numel < 1:
raise PipelineException('fill-mask', self.model.base_model_prefix, f'No mask_token ({self.tokenizer.mask_token}) found on the input')
def ensure_exactly_one_mask_token(self, model_inputs: GenericTensor):
if isinstance(model_inputs, list):
for model_input in model_inputs:
self._ensure_exactly_one_mask_token(model_input['input_ids'][0])
else:
for input_ids in model_inputs['input_ids']:
self._ensure_exactly_one_mask_token(input_ids)
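The mask-token check above boils down to locating the positions equal to `mask_token_id` and failing if there are none. A self-contained sketch of the same logic with NumPy; the token ids are made up:

```python
import numpy as np

MASK_TOKEN_ID = 103  # hypothetical id of the [MASK] token

input_ids = np.array([101, 2023, 2003, 103, 1012, 102])
masked_index = np.nonzero(input_ids == MASK_TOKEN_ID)[0]

# Mirror of the pipeline's check: at least one mask token must be present
if masked_index.size < 1:
    raise ValueError("No mask_token found on the input")
print(masked_index.tolist())  # [3]
```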
def preprocess(self, inputs, return_tensors=None, tokenizer_kwargs=None, **preprocess_parameters) -> dict[str, GenericTensor]:
if return_tensors is None:
return_tensors = 'pt'
if tokenizer_kwargs is None:
tokenizer_kwargs = {}
model_inputs = self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
self.ensure_exactly_one_mask_token(model_inputs)
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
model_outputs['input_ids'] = model_inputs['input_ids']
return model_outputs
def postprocess(self, model_outputs, top_k=5, target_ids=None):
if target_ids is not None and target_ids.shape[0] < top_k:
top_k = target_ids.shape[0]
input_ids = model_outputs['input_ids'][0]
outputs = model_outputs['logits']
masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False).squeeze(-1)
logits = outputs[0, masked_index, :]
probs = logits.softmax(dim=-1)
if target_ids is not None:
probs = probs[..., target_ids]
values, predictions = probs.topk(top_k)
result = []
single_mask = values.shape[0] == 1
for i, (_values, _predictions) in enumerate(zip(values.tolist(), predictions.tolist())):
row = []
for v, p in zip(_values, _predictions):
tokens = input_ids.numpy().copy()
if target_ids is not None:
p = target_ids[p].tolist()
tokens[masked_index[i]] = p
tokens = tokens[np.where(tokens != self.tokenizer.pad_token_id)]
sequence = self.tokenizer.decode(tokens, skip_special_tokens=single_mask)
proposition = {'score': v, 'token': p, 'token_str': self.tokenizer.decode([p]), 'sequence': sequence}
row.append(proposition)
result.append(row)
if single_mask:
return result[0]
return result
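The core of `postprocess` is a softmax over the vocabulary at each masked position followed by a top-k selection. That step can be sketched in plain NumPy; the vocabulary size and logits here are invented for illustration:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits for one masked position over a 5-token vocabulary
logits = np.array([[2.0, 1.0, 0.5, 3.0, -1.0]])
probs = softmax(logits)

top_k = 3
predictions = np.argsort(-probs, axis=-1)[:, :top_k]
values = np.take_along_axis(probs, predictions, axis=-1)
print(predictions[0].tolist())  # [3, 0, 1]
```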
def get_target_ids(self, targets, top_k=None):
if isinstance(targets, str):
targets = [targets]
try:
vocab = self.tokenizer.get_vocab()
except Exception:
vocab = {}
target_ids = []
for target in targets:
id_ = vocab.get(target)
if id_ is None:
input_ids = self.tokenizer(target, add_special_tokens=False, return_attention_mask=False, return_token_type_ids=False, max_length=1, truncation=True)['input_ids']
if len(input_ids) == 0:
logger.warning(f'The specified target token `{target}` does not exist in the model vocabulary. We cannot replace it with anything meaningful, ignoring it')
continue
id_ = input_ids[0]
logger.warning(f'The specified target token `{target}` does not exist in the model vocabulary. Replacing with `{self.tokenizer.convert_ids_to_tokens(id_)}`.')
target_ids.append(id_)
target_ids = list(set(target_ids))
if len(target_ids) == 0:
raise ValueError('At least one target must be provided when passed.')
target_ids = np.array(target_ids)
return target_ids
def _sanitize_parameters(self, top_k=None, targets=None, tokenizer_kwargs=None):
preprocess_params = {}
if tokenizer_kwargs is not None:
preprocess_params['tokenizer_kwargs'] = tokenizer_kwargs
postprocess_params = {}
if targets is not None:
target_ids = self.get_target_ids(targets, top_k)
postprocess_params['target_ids'] = target_ids
if top_k is not None:
postprocess_params['top_k'] = top_k
if self.tokenizer.mask_token_id is None:
raise PipelineException('fill-mask', self.model.base_model_prefix, 'The tokenizer does not define a `mask_token`.')
return (preprocess_params, {}, postprocess_params)
@overload
def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, inputs: Union[str, list[str]], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Fill the masked token in the text(s) given as inputs.
Args:
inputs (`str` or `list[str]`):
One or several texts (or one list of prompts) with masked tokens.
targets (`str` or `list[str]`, *optional*):
When passed, the model will limit the scores to the passed targets instead of looking up in the whole
vocab. If the provided targets are not in the model vocab, they will be tokenized and the first
resulting token will be used (with a warning, and that might be slower).
top_k (`int`, *optional*):
When passed, overrides the number of predictions to return.
Return:
A list or a list of list of `dict`: Each result comes as list of dictionaries with the following keys:
- **sequence** (`str`) -- The corresponding input with the mask token prediction.
- **score** (`float`) -- The corresponding probability.
- **token** (`int`) -- The predicted token id (to replace the masked one).
- **token_str** (`str`) -- The predicted token (to replace the masked one).
"""
outputs = super().__call__(inputs, **kwargs)
if isinstance(inputs, list) and len(inputs) == 1:
return outputs[0]
return outputs
| null | 15 | 1 | 20 | 2 | 15 | 3 | 4 | 0.47 | 1 | 9 | 1 | 0 | 9 | 0 | 9 | 51 | 240 | 43 | 134 | 43 | 122 | 63 | 109 | 41 | 99 | 9 | 6 | 3 | 36 |
6,419 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_classification.py | transformers.pipelines.image_classification.ClassificationFunction |
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
class ClassificationFunction(ExplicitEnum):
SIGMOID = 'sigmoid'
SOFTMAX = 'softmax'
NONE = 'none'
|
class ClassificationFunction(ExplicitEnum):
pass
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 4 | 4 | 3 | 0 | 4 | 4 | 3 | 0 | 1 | 0 | 0 |
6,420 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_classification.py | transformers.pipelines.image_classification.ImageClassificationPipeline |
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from .base import Pipeline, build_pipeline_init_args
from typing import Any, Union, overload
import torch
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True), '\n function_to_apply (`str`, *optional*, defaults to `"default"`):\n The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:\n\n - `"default"`: if the model has a single label, will apply the sigmoid function on the output. If the model\n has several labels, will apply the softmax function on the output.\n - `"sigmoid"`: Applies the sigmoid function on the output.\n - `"softmax"`: Applies the softmax function on the output.\n - `"none"`: Does not apply any function on the output.')
class ImageClassificationPipeline(Pipeline):
"""
Image classification pipeline using any `AutoModelForImageClassification`. This pipeline predicts the class of an
image.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> classifier("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.442, 'label': 'macaw'}, {'score': 0.088, 'label': 'popinjay'}, {'score': 0.075, 'label': 'parrot'}, {'score': 0.073, 'label': 'parodist, lampooner'}, {'score': 0.046, 'label': 'poll, poll_parrot'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-classification).
"""
function_to_apply: ClassificationFunction = ClassificationFunction.NONE
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES)
def _sanitize_parameters(self, top_k=None, function_to_apply=None, timeout=None):
preprocess_params = {}
if timeout is not None:
preprocess_params['timeout'] = timeout
postprocess_params = {}
if top_k is not None:
postprocess_params['top_k'] = top_k
if isinstance(function_to_apply, str):
function_to_apply = ClassificationFunction(function_to_apply.lower())
if function_to_apply is not None:
postprocess_params['function_to_apply'] = function_to_apply
return (preprocess_params, {}, postprocess_params)
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, inputs: Union[list[str], list['Image.Image']], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores. Accepts four different
values:
If this argument is not specified, then it will apply the following functions according to the number
of labels:
- If the model has a single label, will apply the sigmoid function on the output.
- If the model has several labels, will apply the softmax function on the output.
Possible values are:
- `"sigmoid"`: Applies the sigmoid function on the output.
- `"softmax"`: Applies the softmax function on the output.
- `"none"`: Does not apply any function on the output.
top_k (`int`, *optional*, defaults to 5):
The number of top labels that will be returned by the pipeline. If the provided number is higher than
the number of labels available in the model configuration, it will default to the number of labels.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing result. If the input is a single image, will return a
dictionary, if the input is a list of several images, will return a list of dictionaries corresponding to
the images.
The dictionaries contain the following keys:
- **label** (`str`) -- The label identified by the model.
- **score** (`int`) -- The score attributed by the model for that label.
"""
if 'images' in kwargs:
inputs = kwargs.pop('images')
if inputs is None:
raise ValueError('Cannot call the image-classification pipeline without an inputs argument!')
return super().__call__(inputs, **kwargs)
def preprocess(self, image, timeout=None):
image = load_image(image, timeout=timeout)
model_inputs = self.image_processor(images=image, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, function_to_apply=None, top_k=5):
if function_to_apply is None:
if self.model.config.problem_type == 'multi_label_classification' or self.model.config.num_labels == 1:
function_to_apply = ClassificationFunction.SIGMOID
elif self.model.config.problem_type == 'single_label_classification' or self.model.config.num_labels > 1:
function_to_apply = ClassificationFunction.SOFTMAX
elif hasattr(self.model.config, 'function_to_apply') and function_to_apply is None:
function_to_apply = self.model.config.function_to_apply
else:
function_to_apply = ClassificationFunction.NONE
if top_k > self.model.config.num_labels:
top_k = self.model.config.num_labels
outputs = model_outputs['logits'][0]
if outputs.dtype in (torch.bfloat16, torch.float16):
outputs = outputs.to(torch.float32).numpy()
else:
outputs = outputs.numpy()
if function_to_apply == ClassificationFunction.SIGMOID:
scores = sigmoid(outputs)
elif function_to_apply == ClassificationFunction.SOFTMAX:
scores = softmax(outputs)
elif function_to_apply == ClassificationFunction.NONE:
scores = outputs
else:
raise ValueError(f'Unrecognized `function_to_apply` argument: {function_to_apply}')
dict_scores = [{'label': self.model.config.id2label[i], 'score': score.item()} for i, score in enumerate(scores)]
dict_scores.sort(key=lambda x: x['score'], reverse=True)
if top_k is not None:
dict_scores = dict_scores[:top_k]
return dict_scores
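The scoring branches above reduce to applying sigmoid (multi-label, or a single label) or softmax (several labels) to the logits, then sorting label-score pairs. A self-contained sketch with a made-up label map:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

id2label = {0: "cat", 1: "dog", 2: "bird"}  # hypothetical label map
logits = np.array([0.2, 2.5, -1.0])

scores = softmax(logits)  # single-label classification case
dict_scores = [{"label": id2label[i], "score": float(s)} for i, s in enumerate(scores)]
dict_scores.sort(key=lambda x: x["score"], reverse=True)
print(dict_scores[0]["label"])  # dog
```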
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True), '\n function_to_apply (`str`, *optional*, defaults to `"default"`):\n The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:\n\n - `"default"`: if the model has a single label, will apply the sigmoid function on the output. If the model\n has several labels, will apply the softmax function on the output.\n - `"sigmoid"`: Applies the sigmoid function on the output.\n - `"softmax"`: Applies the softmax function on the output.\n - `"none"`: Does not apply any function on the output.')
class ImageClassificationPipeline(Pipeline):
'''
Image classification pipeline using any `AutoModelForImageClassification`. This pipeline predicts the class of an
image.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="microsoft/beit-base-patch16-224-pt22k-ft22k")
>>> classifier("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.442, 'label': 'macaw'}, {'score': 0.088, 'label': 'popinjay'}, {'score': 0.075, 'label': 'parrot'}, {'score': 0.073, 'label': 'parodist, lampooner'}, {'score': 0.046, 'label': 'poll, poll_parrot'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-classification).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, top_k=None, function_to_apply=None, timeout=None):
pass
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, inputs: Union[list[str], list['Image.Image']], **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
'''
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores. Accepts four different
values:
If this argument is not specified, then it will apply the following functions according to the number
of labels:
- If the model has a single label, will apply the sigmoid function on the output.
- If the model has several labels, will apply the softmax function on the output.
Possible values are:
- `"sigmoid"`: Applies the sigmoid function on the output.
- `"softmax"`: Applies the softmax function on the output.
- `"none"`: Does not apply any function on the output.
top_k (`int`, *optional*, defaults to 5):
The number of top labels that will be returned by the pipeline. If the provided number is higher than
the number of labels available in the model configuration, it will default to the number of labels.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing result. If the input is a single image, will return a
dictionary, if the input is a list of several images, will return a list of dictionaries corresponding to
the images.
The dictionaries contain the following keys:
- **label** (`str`) -- The label identified by the model.
- **score** (`int`) -- The score attributed by the model for that label.
'''
pass
def preprocess(self, image, timeout=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs, function_to_apply=None, top_k=5):
pass
| 12 | 2 | 20 | 3 | 11 | 6 | 4 | 0.77 | 1 | 5 | 1 | 0 | 6 | 0 | 6 | 48 | 150 | 28 | 69 | 15 | 62 | 53 | 56 | 15 | 49 | 11 | 6 | 2 | 24 |
6,421 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_feature_extraction.py | transformers.pipelines.image_feature_extraction.ImageFeatureExtractionPipeline |
from typing import Any, Union
from .base import GenericTensor, Pipeline, build_pipeline_init_args
from ..utils import add_end_docstrings, is_vision_available
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True), '\n image_processor_kwargs (`dict`, *optional*):\n Additional dictionary of keyword arguments passed along to the image processor e.g.\n {"size": {"height": 100, "width": 100}}\n pool (`bool`, *optional*, defaults to `False`):\n Whether or not to return the pooled output. If `False`, the model will return the raw hidden states.\n ')
class ImageFeatureExtractionPipeline(Pipeline):
"""
Image feature extraction pipeline uses no model head. This pipeline extracts the hidden states from the base
transformer, which can be used as features in downstream tasks.
Example:
```python
>>> from transformers import pipeline
>>> extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction")
>>> result = extractor("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", return_tensors=True)
>>> result.shape # This is a tensor of shape [1, sequence_length, hidden_dimension] representing the input image.
torch.Size([1, 197, 768])
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image feature extraction pipeline can currently be loaded from [`pipeline`] using the task identifier:
`"image-feature-extraction"`.
All vision models may be used for this pipeline. See a list of all models, including community-contributed models on
[huggingface.co/models](https://huggingface.co/models).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def _sanitize_parameters(self, image_processor_kwargs=None, return_tensors=None, pool=None, **kwargs):
preprocess_params = {} if image_processor_kwargs is None else image_processor_kwargs
postprocess_params = {}
if pool is not None:
postprocess_params['pool'] = pool
if return_tensors is not None:
postprocess_params['return_tensors'] = return_tensors
if 'timeout' in kwargs:
preprocess_params['timeout'] = kwargs['timeout']
return (preprocess_params, {}, postprocess_params)
def preprocess(self, image, timeout=None, **image_processor_kwargs) -> dict[str, GenericTensor]:
image = load_image(image, timeout=timeout)
model_inputs = self.image_processor(image, return_tensors='pt', **image_processor_kwargs)
model_inputs = model_inputs.to(self.dtype)
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, pool=None, return_tensors=False):
pool = pool if pool is not None else False
if pool:
if 'pooler_output' not in model_outputs:
raise ValueError('No pooled output was returned. Make sure the model has a `pooler` layer when using the `pool` option.')
outputs = model_outputs['pooler_output']
else:
outputs = model_outputs[0]
if return_tensors:
return outputs
return outputs.tolist()
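The `pool` branch above selects between the pooled output and the raw hidden states. A minimal sketch of that selection using a plain dict as a stand-in for the model outputs (the real code indexes the non-pooled output as `model_outputs[0]`; the named key here is an assumption for readability):

```python
import numpy as np

# Stand-in for model outputs; keys mirror those checked in postprocess
model_outputs = {
    "last_hidden_state": np.zeros((1, 197, 8)),
    "pooler_output": np.zeros((1, 8)),
}

def select_output(model_outputs, pool=False):
    if pool:
        if "pooler_output" not in model_outputs:
            raise ValueError("No pooled output was returned.")
        return model_outputs["pooler_output"]
    return model_outputs["last_hidden_state"]

print(select_output(model_outputs, pool=True).shape)   # (1, 8)
print(select_output(model_outputs, pool=False).shape)  # (1, 197, 8)
```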
def __call__(self, *args: Union[str, 'Image.Image', list['Image.Image'], list[str]], **kwargs: Any) -> list[Any]:
"""
Extract the features of the input(s).
Args:
images (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and
the call may block forever.
Return:
A nested list of `float`: The features computed by the model.
"""
return super().__call__(*args, **kwargs)
| null | 7 | 2 | 13 | 2 | 7 | 4 | 3 | 0.92 | 1 | 3 | 0 | 0 | 5 | 0 | 5 | 47 | 92 | 19 | 38 | 11 | 32 | 35 | 34 | 11 | 28 | 7 | 6 | 2 | 16 |
6,422 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_segmentation.py | transformers.pipelines.image_segmentation.ImageSegmentationPipeline |
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from .base import Pipeline, build_pipeline_init_args
import numpy as np
from typing import Any, Union, overload
from PIL import Image
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ImageSegmentationPipeline(Pipeline):
"""
Image segmentation pipeline using any `AutoModelForXXXSegmentation`. This pipeline predicts masks of objects and
their classes.
Example:
```python
>>> from transformers import pipeline
>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> len(segments)
2
>>> segments[0]["label"]
'bird'
>>> segments[1]["label"]
'bird'
>>> type(segments[0]["mask"]) # This is a black and white mask showing where is the bird on the original image.
<class 'PIL.Image.Image'>
>>> segments[0]["mask"].size
(768, 512)
```
This image segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-segmentation"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-segmentation).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
mapping = MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES.copy()
mapping.update(MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES)
mapping.update(MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING_NAMES)
mapping.update(MODEL_FOR_UNIVERSAL_SEGMENTATION_MAPPING_NAMES)
self.check_model_type(mapping)
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
postprocess_kwargs = {}
if 'subtask' in kwargs:
postprocess_kwargs['subtask'] = kwargs['subtask']
preprocess_kwargs['subtask'] = kwargs['subtask']
if 'threshold' in kwargs:
postprocess_kwargs['threshold'] = kwargs['threshold']
if 'mask_threshold' in kwargs:
postprocess_kwargs['mask_threshold'] = kwargs['mask_threshold']
if 'overlap_mask_area_threshold' in kwargs:
postprocess_kwargs['overlap_mask_area_threshold'] = kwargs['overlap_mask_area_threshold']
if 'timeout' in kwargs:
preprocess_kwargs['timeout'] = kwargs['timeout']
return (preprocess_kwargs, {}, postprocess_kwargs)
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, inputs: Union[list[str], list['Image.Image']], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, inputs: Union[str, 'Image.Image', list[str], list['Image.Image']], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the
same format: all as HTTP(S) links, all as local paths, or all as PIL images.
subtask (`str`, *optional*):
Segmentation task to be performed, choose from [`semantic`, `instance`, `panoptic`] depending on model
capabilities. If not set, the pipeline will attempt to resolve in the following order:
`panoptic`, `instance`, `semantic`.
threshold (`float`, *optional*, defaults to 0.9):
Probability threshold to filter out predicted masks.
mask_threshold (`float`, *optional*, defaults to 0.5):
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (`float`, *optional*, defaults to 0.5):
Mask overlap threshold to eliminate small, disconnected segments.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
If the input is a single image, will return a list of dictionaries, if the input is a list of several images,
will return a list of list of dictionaries corresponding to each image.
The dictionaries contain the mask, label and score (where applicable) of each detected object and contains
the following keys:
- **label** (`str`) -- The class label identified by the model.
- **mask** (`PIL.Image`) -- A binary mask of the detected object as a PIL Image of shape (width, height) of
the original image. Returns a mask filled with zeros if no object is found.
- **score** (*optional* `float`) -- Optionally, when the model is capable of estimating a confidence of the
"object" described by the label and the mask.
"""
if 'images' in kwargs:
inputs = kwargs.pop('images')
if inputs is None:
raise ValueError('Cannot call the image-segmentation pipeline without an inputs argument!')
return super().__call__(inputs, **kwargs)
def preprocess(self, image, subtask=None, timeout=None):
image = load_image(image, timeout=timeout)
target_size = [(image.height, image.width)]
if self.model.config.__class__.__name__ == 'OneFormerConfig':
if subtask is None:
kwargs = {}
else:
kwargs = {'task_inputs': [subtask]}
inputs = self.image_processor(images=[image], return_tensors='pt', **kwargs)
inputs = inputs.to(self.dtype)
inputs['task_inputs'] = self.tokenizer(inputs['task_inputs'], padding='max_length', max_length=self.model.config.task_seq_len, return_tensors='pt')['input_ids']
else:
inputs = self.image_processor(images=[image], return_tensors='pt')
inputs = inputs.to(self.dtype)
inputs['target_size'] = target_size
return inputs
def _forward(self, model_inputs):
target_size = model_inputs.pop('target_size')
model_outputs = self.model(**model_inputs)
model_outputs['target_size'] = target_size
return model_outputs
def postprocess(self, model_outputs, subtask=None, threshold=0.9, mask_threshold=0.5, overlap_mask_area_threshold=0.5):
fn = None
if subtask in {'panoptic', None} and hasattr(self.image_processor, 'post_process_panoptic_segmentation'):
fn = self.image_processor.post_process_panoptic_segmentation
elif subtask in {'instance', None} and hasattr(self.image_processor, 'post_process_instance_segmentation'):
fn = self.image_processor.post_process_instance_segmentation
if fn is not None:
outputs = fn(model_outputs, threshold=threshold, mask_threshold=mask_threshold, overlap_mask_area_threshold=overlap_mask_area_threshold, target_sizes=model_outputs['target_size'])[0]
annotation = []
segmentation = outputs['segmentation']
for segment in outputs['segments_info']:
mask = (segmentation == segment['id']) * 255
mask = Image.fromarray(mask.numpy().astype(np.uint8), mode='L')
label = self.model.config.id2label[segment['label_id']]
score = segment['score']
annotation.append({'score': score, 'label': label, 'mask': mask})
elif subtask in {'semantic', None} and hasattr(self.image_processor, 'post_process_semantic_segmentation'):
outputs = self.image_processor.post_process_semantic_segmentation(model_outputs, target_sizes=model_outputs['target_size'])[0]
annotation = []
segmentation = outputs.numpy()
labels = np.unique(segmentation)
for label in labels:
mask = (segmentation == label) * 255
mask = Image.fromarray(mask.astype(np.uint8), mode='L')
label = self.model.config.id2label[label]
annotation.append({'score': None, 'label': label, 'mask': mask})
else:
raise ValueError(f'Subtask {subtask} is not supported for model {type(self.model)}')
return annotation
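The semantic branch of `postprocess` above can be sketched in isolation: it turns a `(H, W)` integer label map into one binary mask per class found in it. `masks_from_label_map` and the tiny `id2label` mapping below are illustrative names, not part of the pipeline API, and the masks are kept as uint8 arrays where the real pipeline wraps them in `PIL.Image` ('L' mode):

```python
import numpy as np

def masks_from_label_map(segmentation, id2label):
    """Mirror of the semantic-segmentation postprocessing: one 0/255 binary
    mask per class id present in a (H, W) label map."""
    annotation = []
    for label_id in np.unique(segmentation):
        mask = ((segmentation == label_id) * 255).astype(np.uint8)
        annotation.append({"score": None, "label": id2label[int(label_id)], "mask": mask})
    return annotation

# Tiny 2x2 label map with two classes.
seg = np.array([[0, 0], [1, 1]])
ann = masks_from_label_map(seg, {0: "sky", 1: "grass"})
```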
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ImageSegmentationPipeline(Pipeline):
'''
Image segmentation pipeline using any `AutoModelForXXXSegmentation`. This pipeline predicts masks of objects and
their classes.
Example:
```python
>>> from transformers import pipeline
>>> segmenter = pipeline(model="facebook/detr-resnet-50-panoptic")
>>> segments = segmenter("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
>>> len(segments)
2
>>> segments[0]["label"]
'bird'
>>> segments[1]["label"]
'bird'
>>> type(segments[0]["mask"]) # This is a black and white mask showing where is the bird on the original image.
<class 'PIL.Image.Image'>
>>> segments[0]["mask"].size
(768, 512)
```
This image segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-segmentation"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=image-segmentation).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, **kwargs):
pass
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, inputs: Union[list[str], list['Image.Image']], **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
'''
Perform segmentation (detect masks & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the
same format: all as HTTP(S) links, all as local paths, or all as PIL images.
subtask (`str`, *optional*):
Segmentation task to be performed, choose [`semantic`, `instance` and `panoptic`] depending on model
capabilities. If not set, the pipeline will attempt to resolve in the following order:
`panoptic`, `instance`, `semantic`.
threshold (`float`, *optional*, defaults to 0.9):
Probability threshold to filter out predicted masks.
mask_threshold (`float`, *optional*, defaults to 0.5):
Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (`float`, *optional*, defaults to 0.5):
Mask overlap threshold to eliminate small, disconnected segments.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
If the input is a single image, will return a list of dictionaries; if the input is a list of several images,
will return a list of lists of dictionaries corresponding to each image.
The dictionaries contain the mask, label and score (where applicable) of each detected object and contain
the following keys:
- **label** (`str`) -- The class label identified by the model.
- **mask** (`PIL.Image`) -- A binary mask of the detected object as a PIL Image of shape (width, height) of
the original image. Returns a mask filled with zeros if no object is found.
- **score** (*optional* `float`) -- The confidence score, when the model is capable of estimating one for the
"object" described by the label and the mask.
'''
pass
def preprocess(self, image, subtask=None, timeout=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs, subtask=None, threshold=0.9, mask_threshold=0.5, overlap_mask_area_threshold=0.5):
pass
| 12
| 2
| 25
| 3
| 16
| 6
| 4
| 0.61
| 1
| 2
| 0
| 0
| 6
| 0
| 6
| 48
| 190
| 31
| 99
| 26
| 90
| 60
| 79
| 24
| 72
| 7
| 6
| 2
| 24
|
6,423
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_text_to_text.py
|
transformers.pipelines.image_text_to_text.Chat
|
from typing import Any, Optional, Union, overload
class Chat:
"""This class is intended to just be used internally in this pipeline and not exposed to users. We convert chats
to this format because the rest of the pipeline code tends to assume that lists of messages are
actually a batch of samples rather than messages in the same conversation."""
def __init__(self, messages: dict, images: Optional[Union[str, list[str], 'Image.Image', list['Image.Image']]]=None):
for message in messages:
if not ('role' in message and 'content' in message):
raise ValueError("When passing chat dicts as input, each dict must have a 'role' and 'content' key.")
messages = add_images_to_messages(messages, images)
self.messages = messages
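The validation step in `Chat.__init__` can be sketched as a standalone helper; `validate_messages` is a hypothetical name used here only for illustration:

```python
def validate_messages(messages):
    """Mirror of the check in Chat.__init__: every chat message must be a
    dict carrying both a 'role' and a 'content' key."""
    for message in messages:
        if not ("role" in message and "content" in message):
            raise ValueError(
                "When passing chat dicts as input, each dict must have a 'role' and 'content' key."
            )
    return True

ok = validate_messages([{"role": "user", "content": "hi"}])
error_message = None
try:
    validate_messages([{"role": "user"}])  # missing 'content' -> ValueError
except ValueError as err:
    error_message = str(err)
```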
|
class Chat:
'''This class is intended to just be used internally in this pipeline and not exposed to users. We convert chats
to this format because the rest of the pipeline code tends to assume that lists of messages are
actually a batch of samples rather than messages in the same conversation.'''
def __init__(self, messages: dict, images: Optional[Union[str, list[str], 'Image.Image', list['Image.Image']]]=None):
pass
| 2
| 1
| 8
| 1
| 7
| 0
| 3
| 0.38
| 0
| 2
| 0
| 0
| 1
| 2
| 1
| 1
| 13
| 2
| 8
| 5
| 6
| 3
| 8
| 5
| 6
| 3
| 0
| 2
| 3
|
6,424
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_text_to_text.py
|
transformers.pipelines.image_text_to_text.ImageTextToTextPipeline
|
from ..generation import GenerationConfig
from .base import Pipeline, build_pipeline_init_args
from ..processing_utils import ProcessingKwargs, Unpack
from typing import Any, Optional, Union, overload
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
@add_end_docstrings(build_pipeline_init_args(has_processor=True))
class ImageTextToTextPipeline(Pipeline):
"""
Image-text-to-text pipeline using an `AutoModelForImageTextToText`. This pipeline generates text given an image and text.
When the underlying model is a conversational model, it can also accept one or more chats,
in which case the pipeline will operate in chat mode and will continue the chat(s) by adding its response(s).
Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task="image-text-to-text", model="Salesforce/blip-image-captioning-base")
>>> pipe("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", text="A photo of")
[{'generated_text': 'a photo of two birds'}]
```
```python
>>> from transformers import pipeline
>>> pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-0.5b-hf")
>>> messages = [
>>> {
>>> "role": "user",
>>> "content": [
>>> {
>>> "type": "image",
>>> "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
>>> },
>>> {"type": "text", "text": "Describe this image."},
>>> ],
>>> },
>>> {
>>> "role": "assistant",
>>> "content": [
>>> {"type": "text", "text": "There is a dog and"},
>>> ],
>>> },
>>> ]
>>> pipe(text=messages, max_new_tokens=20, return_full_text=False)
[{'input_text': [{'role': 'user',
'content': [{'type': 'image',
'url': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'type': 'text', 'text': 'Describe this image.'}]},
{'role': 'assistant',
'content': [{'type': 'text', 'text': 'There is a dog and'}]}],
'generated_text': ' a person in the image. The dog is sitting on the sand, and the person is sitting on'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image-text to text pipeline can currently be loaded from pipeline() using the following task identifier:
"image-text-to-text".
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-text-to-text).
"""
_load_processor = True
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = False
_pipeline_calls_generate = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES)
def _sanitize_parameters(self, max_new_tokens=None, generate_kwargs=None, timeout=None, return_full_text=None, return_tensors=None, return_type=None, clean_up_tokenization_spaces=None, stop_sequence=None, continue_final_message=None, skip_special_tokens=None, **kwargs: Unpack[ProcessingKwargs]):
forward_kwargs = {}
preprocess_params = {}
postprocess_params = {}
preprocess_params.update(kwargs)
if timeout is not None:
preprocess_params['timeout'] = timeout
if continue_final_message is not None:
preprocess_params['continue_final_message'] = continue_final_message
if generate_kwargs is not None:
forward_kwargs['generate_kwargs'] = generate_kwargs
if stop_sequence is not None:
stop_sequence_ids = self.processor.tokenizer.encode(stop_sequence, add_special_tokens=False)
if len(stop_sequence_ids) > 1:
logger.warning_once('Stopping on a multiple token sequence is not yet supported on transformers. The first token of the stop sequence will be used as the stop sequence string in the interim.')
generate_kwargs['eos_token_id'] = stop_sequence_ids[0]
if max_new_tokens is not None:
if 'generate_kwargs' not in forward_kwargs:
forward_kwargs['generate_kwargs'] = {}
if 'max_new_tokens' in forward_kwargs['generate_kwargs']:
raise ValueError("'max_new_tokens' is defined twice, once in 'generate_kwargs' and once as a direct parameter, please use only one")
forward_kwargs['generate_kwargs']['max_new_tokens'] = max_new_tokens
if return_full_text is not None and return_type is None:
if return_tensors is not None:
raise ValueError('`return_full_text` is mutually exclusive with `return_tensors`')
return_type = ReturnType.FULL_TEXT if return_full_text else ReturnType.NEW_TEXT
if return_tensors is not None and return_type is None:
return_type = ReturnType.TENSORS
if return_type is not None:
postprocess_params['return_type'] = return_type
if continue_final_message is not None:
postprocess_params['continue_final_message'] = continue_final_message
if clean_up_tokenization_spaces is not None:
postprocess_params['clean_up_tokenization_spaces'] = clean_up_tokenization_spaces
if skip_special_tokens is not None:
postprocess_params['skip_special_tokens'] = skip_special_tokens
return (preprocess_params, forward_kwargs, postprocess_params)
@overload
def __call__(self, image: Optional[Union[str, 'Image.Image']]=None, text: Optional[str]=None, **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: Optional[Union[list[str], list['Image.Image']]]=None, text: Optional[list[str]]=None, **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, images: Optional[Union[str, list[str], list[list[str]], 'Image.Image', list['Image.Image'], list[list['Image.Image']], list[dict]]]=None, text: Optional[Union[str, list[str], list[dict]]]=None, **kwargs) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Generate a text given text and the image(s) passed as inputs.
Args:
images (`str`, `list[str]`, `PIL.Image`, `list[PIL.Image]`, `list[dict[str, Union[str, PIL.Image]]]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Finally, this pipeline also supports
the chat format (see `text`) containing images and text in this argument.
text (`str`, `list[str]`, `list[dict[str, Union[str, PIL.Image]]]`):
The text to be used for generation. If a list of strings is passed, the length of the list should be
the same as the number of images. Text can also follow the chat format: a list of dictionaries where
each dictionary represents a message in a conversation. Each dictionary should have two keys: 'role'
and 'content'. 'role' should be one of 'user', 'system' or 'assistant'. 'content' should be a list of
dictionaries, each containing the text of the message and the type of the message. The type of the
message can be either 'text' or 'image'. If the type is 'image', no text is needed.
return_tensors (`bool`, *optional*, defaults to `False`):
Returns the tensors of predictions (as token indices) in the outputs. If set to
`True`, the decoded text is not returned.
return_text (`bool`, *optional*):
Returns the decoded texts in the outputs.
return_full_text (`bool`, *optional*, defaults to `True`):
If set to `False` only added text is returned, otherwise the full text is returned. Cannot be
specified at the same time as `return_text`.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up the potential extra spaces in the text output.
continue_final_message (`bool`, *optional*): This indicates that you want the model to continue the
last message in the input chat rather than starting a new one, allowing you to "prefill" its response.
By default this is `True` when the final message in the input chat has the `assistant` role and
`False` otherwise, but you can manually override that behaviour by setting this flag.
Return:
A list or a list of lists of `dict`: Each result comes as a dictionary with the following keys (it cannot
return a combination of both `generated_text` and `generated_token_ids`):
- **generated_text** (`str`, present when `return_text=True`) -- The generated text.
- **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the generated text.
- **input_text** (`str`) -- The input text.
"""
if images is None and text is None:
raise ValueError('You must at least provide either text or images.')
def _is_chat(arg):
return isinstance(arg, (list, tuple, KeyDataset)) and isinstance(arg[0], (list, tuple, dict))
if _is_chat(text):
if isinstance(text[0], dict):
return super().__call__(Chat(text, images), **kwargs)
else:
if images is None:
images = [None] * len(text)
chats = [Chat(chat, image) for chat, image in zip(text, images)]
return super().__call__(chats, **kwargs)
elif text is None and _is_chat(images):
if isinstance(images[0], dict):
return super().__call__(Chat(images), **kwargs)
else:
chats = [Chat(image) for image in images]
return super().__call__(chats, **kwargs)
elif images is not None and text is None and (not valid_images(images)):
'\n Supports the following format\n - {"image": image, "text": text}\n - [{"image": image, "text": text}]\n - Generator and datasets\n This is a common pattern in other multimodal pipelines, so we support it here as well.\n '
return super().__call__(images, **kwargs)
if getattr(self.processor, 'chat_template', None) is not None:
logger.warning_once("The input data was not formatted as a chat with dicts containing 'role' and 'content' keys, even though this model supports chat. Consider using the chat format for better results. For more information, see https://huggingface.co/docs/transformers/en/chat_templating")
if images is None:
return super().__call__(text, **kwargs)
if text is None:
raise ValueError('You must provide text for this pipeline.')
return super().__call__({'images': images, 'text': text}, **kwargs)
def preprocess(self, inputs=None, timeout=None, continue_final_message=None, **processing_kwargs):
if isinstance(inputs, Chat):
if continue_final_message is None:
continue_final_message = inputs.messages[-1]['role'] == 'assistant'
model_inputs = self.processor.apply_chat_template(inputs.messages, add_generation_prompt=not continue_final_message, continue_final_message=continue_final_message, return_tensors='pt', tokenize=True, return_dict=True)
model_inputs['text'] = inputs
return model_inputs
if isinstance(inputs, (list, tuple, str)):
images = None
text = inputs
inputs_text = inputs
else:
images = load_images(inputs['images'], timeout=timeout)
text = inputs['text']
inputs_text = inputs['text']
if isinstance(text, (list, tuple)) and len(text) > 1:
processing_kwargs.setdefault('padding', True)
model_inputs = self.processor(images=images, text=text, return_tensors='pt', **processing_kwargs).to(dtype=self.dtype)
model_inputs['text'] = inputs_text
return model_inputs
def _forward(self, model_inputs, generate_kwargs=None):
generate_kwargs = {} if generate_kwargs is None else generate_kwargs
prompt_text = model_inputs.pop('text')
input_ids = model_inputs['input_ids'] if 'input_ids' in model_inputs else model_inputs['decoder_input_ids']
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
generated_sequence = self.model.generate(**model_inputs, **generate_kwargs)
return {'generated_sequence': generated_sequence, 'prompt_text': prompt_text, 'input_ids': input_ids}
def postprocess(self, model_outputs, return_type=ReturnType.FULL_TEXT, continue_final_message=None, skip_special_tokens=None, **postprocess_kwargs):
input_texts = model_outputs['prompt_text']
input_texts = [input_texts] if isinstance(input_texts, (str, Chat)) else input_texts
generated_sequence = model_outputs['generated_sequence']
input_ids = model_outputs['input_ids']
if return_type == ReturnType.TENSORS:
return [{'input_text': input_texts[i], 'generated_token_ids': generated_sequence[i]} for i in range(len(input_texts))]
skip_special_tokens = skip_special_tokens if skip_special_tokens is not None else True
generated_texts = self.processor.post_process_image_text_to_text(generated_sequence, skip_special_tokens=skip_special_tokens, **postprocess_kwargs)
decoded_inputs = self.processor.post_process_image_text_to_text(input_ids, skip_special_tokens=skip_special_tokens, **postprocess_kwargs)
if return_type in {ReturnType.NEW_TEXT, ReturnType.FULL_TEXT}:
new_generated_texts = []
for text_generated, decoded_input in zip(generated_texts, decoded_inputs):
index_input_text = text_generated.find(decoded_input)
if 0 <= index_input_text <= 2:
new_generated_texts.append(text_generated[index_input_text + len(decoded_input):])
else:
new_generated_texts.append(text_generated)
generated_texts = new_generated_texts
if return_type == ReturnType.FULL_TEXT:
full_texts = []
for prompt_text, generated_text in zip(input_texts, generated_texts):
if isinstance(prompt_text, str):
generated_text = prompt_text + generated_text
elif isinstance(prompt_text, Chat):
if continue_final_message is None:
continue_final_message = prompt_text.messages[-1]['role'] == 'assistant'
if continue_final_message:
new_text = dict(prompt_text.messages[-1]['content'][-1].items())
new_text['text'] += generated_text
generated_text = list(prompt_text.messages)[:-1] + [{'role': prompt_text.messages[-1]['role'], 'content': prompt_text.messages[-1]['content'][:-1] + [new_text]}]
else:
generated_text = list(prompt_text.messages) + [{'role': 'assistant', 'content': generated_text}]
full_texts.append(generated_text)
generated_texts = full_texts
records = [{'input_text': input_text.messages if isinstance(input_text, Chat) else input_text, 'generated_text': generated_text} for input_text, generated_text in zip(input_texts, generated_texts)]
return records
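The prompt-stripping step of `postprocess` above (the `find`/slice logic used for `ReturnType.NEW_TEXT`) can be sketched on its own. `strip_prompt` is a hypothetical helper name; the `<= 2` tolerance mirrors the pipeline's allowance for a few leading characters (e.g. special tokens) before the decoded prompt:

```python
def strip_prompt(generated_text: str, decoded_input: str) -> str:
    """If the decoded prompt re-appears at (or within 2 characters of) the
    start of the generated text, return only the newly generated suffix;
    otherwise return the generated text unchanged."""
    index = generated_text.find(decoded_input)
    if 0 <= index <= 2:
        return generated_text[index + len(decoded_input):]
    return generated_text

new = strip_prompt("A photo of two birds", "A photo of")
```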
|
@add_end_docstrings(build_pipeline_init_args(has_processor=True))
class ImageTextToTextPipeline(Pipeline):
'''
Image-text-to-text pipeline using an `AutoModelForImageTextToText`. This pipeline generates text given an image and text.
When the underlying model is a conversational model, it can also accept one or more chats,
in which case the pipeline will operate in chat mode and will continue the chat(s) by adding its response(s).
Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task="image-text-to-text", model="Salesforce/blip-image-captioning-base")
>>> pipe("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", text="A photo of")
[{'generated_text': 'a photo of two birds'}]
```
```python
>>> from transformers import pipeline
>>> pipe = pipeline("image-text-to-text", model="llava-hf/llava-interleave-qwen-0.5b-hf")
>>> messages = [
>>> {
>>> "role": "user",
>>> "content": [
>>> {
>>> "type": "image",
>>> "url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
>>> },
>>> {"type": "text", "text": "Describe this image."},
>>> ],
>>> },
>>> {
>>> "role": "assistant",
>>> "content": [
>>> {"type": "text", "text": "There is a dog and"},
>>> ],
>>> },
>>> ]
>>> pipe(text=messages, max_new_tokens=20, return_full_text=False)
[{'input_text': [{'role': 'user',
'content': [{'type': 'image',
'url': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'type': 'text', 'text': 'Describe this image.'}]},
{'role': 'assistant',
'content': [{'type': 'text', 'text': 'There is a dog and'}]}],
'generated_text': ' a person in the image. The dog is sitting on the sand, and the person is sitting on'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image-text to text pipeline can currently be loaded from pipeline() using the following task identifier:
"image-text-to-text".
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-text-to-text).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, max_new_tokens=None, generate_kwargs=None, timeout=None, return_full_text=None, return_tensors=None, return_type=None, clean_up_tokenization_spaces=None, stop_sequence=None, continue_final_message=None, skip_special_tokens=None, **kwargs: Unpack[ProcessingKwargs]):
pass
@overload
def __call__(self, image: Optional[Union[str, 'Image.Image']]=None, text: Optional[str]=None, **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, image: Optional[Union[list[str], list['Image.Image']]]=None, text: Optional[list[str]]=None, **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, images: Optional[Union[str, list[str], list[list[str]], 'Image.Image', list['Image.Image'], list[list['Image.Image']], list[dict]]]=None, text: Optional[Union[str, list[str], list[dict]]]=None, **kwargs) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
'''
Generate a text given text and the image(s) passed as inputs.
Args:
images (`str`, `list[str]`, `PIL.Image`, `list[PIL.Image]`, `list[dict[str, Union[str, PIL.Image]]]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Finally, this pipeline also supports
the chat format (see `text`) containing images and text in this argument.
text (`str`, `list[str]`, `list[dict[str, Union[str, PIL.Image]]]`):
The text to be used for generation. If a list of strings is passed, the length of the list should be
the same as the number of images. Text can also follow the chat format: a list of dictionaries where
each dictionary represents a message in a conversation. Each dictionary should have two keys: 'role'
and 'content'. 'role' should be one of 'user', 'system' or 'assistant'. 'content' should be a list of
dictionaries, each containing the text of the message and the type of the message. The type of the
message can be either 'text' or 'image'. If the type is 'image', no text is needed.
return_tensors (`bool`, *optional*, defaults to `False`):
Returns the tensors of predictions (as token indices) in the outputs. If set to
`True`, the decoded text is not returned.
return_text (`bool`, *optional*):
Returns the decoded texts in the outputs.
return_full_text (`bool`, *optional*, defaults to `True`):
If set to `False` only added text is returned, otherwise the full text is returned. Cannot be
specified at the same time as `return_text`.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up the potential extra spaces in the text output.
continue_final_message (`bool`, *optional*): This indicates that you want the model to continue the
last message in the input chat rather than starting a new one, allowing you to "prefill" its response.
By default this is `True` when the final message in the input chat has the `assistant` role and
`False` otherwise, but you can manually override that behaviour by setting this flag.
Return:
A list or a list of lists of `dict`: Each result comes as a dictionary with the following keys (it cannot
return a combination of both `generated_text` and `generated_token_ids`):
- **generated_text** (`str`, present when `return_text=True`) -- The generated text.
- **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the generated text.
- **input_text** (`str`) -- The input text.
'''
pass
def _is_chat(arg):
pass
def preprocess(self, inputs=None, timeout=None, continue_final_message=None, **processing_kwargs):
pass
def _forward(self, model_inputs, generate_kwargs=None):
pass
def postprocess(self, model_outputs, return_type=ReturnType.FULL_TEXT, continue_final_message=None, skip_special_tokens=None, **postprocess_kwargs):
pass
| 13
| 2
| 42
| 4
| 27
| 11
| 7
| 0.66
| 1
| 12
| 4
| 0
| 6
| 1
| 6
| 48
| 318
| 40
| 169
| 52
| 145
| 111
| 115
| 34
| 108
| 13
| 6
| 4
| 44
|
6,425
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_to_image.py
|
transformers.pipelines.image_to_image.ImageToImagePipeline
|
import numpy as np
from typing import Any, Union, overload
from .base import Pipeline, build_pipeline_init_args
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ImageToImagePipeline(Pipeline):
"""
Image to Image pipeline using any `AutoModelForImageToImage`. This pipeline generates an image based on a previous
image input.
Example:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import pipeline
>>> upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
>>> img = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> img = img.resize((64, 64))
>>> upscaled_img = upscaler(img)
>>> img.size
(64, 64)
>>> upscaled_img.size
(144, 144)
```
This image to image pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-to-image"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=image-to-image).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES)
def _sanitize_parameters(self, **kwargs):
preprocess_params = {}
postprocess_params = {}
forward_params = {}
if 'timeout' in kwargs:
preprocess_params['timeout'] = kwargs['timeout']
if 'head_mask' in kwargs:
forward_params['head_mask'] = kwargs['head_mask']
return (preprocess_params, forward_params, postprocess_params)
@overload
def __call__(self, images: Union[str, 'Image.Image'], **kwargs: Any) -> 'Image.Image':
...
@overload
def __call__(self, images: Union[list[str], list['Image.Image']], **kwargs: Any) -> list['Image.Image']:
...
def __call__(self, images: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs: Any) -> Union['Image.Image', list['Image.Image']]:
"""
Transform the image(s) passed as inputs.
Args:
images (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the
same format: all as HTTP links, all as local paths, or all as PIL images.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and
the call may block forever.
Return:
An image (`Image.Image`) or a list of images (`list["Image.Image"]`) containing the result(s). If the input
is a single image, the return will also be a single image; if the input is a list of several images, it
will return a list of transformed images.
"""
return super().__call__(images, **kwargs)
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def preprocess(self, image, timeout=None):
image = load_image(image, timeout=timeout)
inputs = self.image_processor(images=[image], return_tensors='pt')
inputs = inputs.to(self.dtype)
return inputs
def postprocess(self, model_outputs):
images = []
if 'reconstruction' in model_outputs:
outputs = model_outputs.reconstruction
for output in outputs:
output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.moveaxis(output, source=0, destination=-1)
output = (output * 255.0).round().astype(np.uint8)
images.append(Image.fromarray(output))
return images if len(images) > 1 else images[0]
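The tensor-to-image conversion inside `postprocess` above can be sketched with plain NumPy (`to_uint8_image` is an illustrative helper name): clip a `(C, H, W)` float array to `[0, 1]`, move channels last, and scale to uint8. The real pipeline additionally detaches from torch and calls `Image.fromarray` on the result:

```python
import numpy as np

def to_uint8_image(output: np.ndarray) -> np.ndarray:
    """Convert a (C, H, W) float array in [0, 1] to a (H, W, C) uint8 image."""
    output = np.clip(output, 0.0, 1.0)
    output = np.moveaxis(output, source=0, destination=-1)  # (C, H, W) -> (H, W, C)
    return (output * 255.0).round().astype(np.uint8)

img = to_uint8_image(np.full((3, 2, 2), 0.5))
```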
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ImageToImagePipeline(Pipeline):
'''
Image to Image pipeline using any `AutoModelForImageToImage`. This pipeline generates an image based on a previous
image input.
Example:
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import pipeline
>>> upscaler = pipeline("image-to-image", model="caidas/swin2SR-classical-sr-x2-64")
>>> img = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
>>> img = img.resize((64, 64))
>>> upscaled_img = upscaler(img)
>>> img.size
(64, 64)
>>> upscaled_img.size
(144, 144)
```
This image to image pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"image-to-image"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=image-to-image).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, **kwargs):
pass
@overload
def __call__(self, images: Union[str, 'Image.Image'], **kwargs: Any) -> 'Image.Image':
pass
@overload
def __call__(self, images: Union[str, 'Image.Image'], **kwargs: Any) -> 'Image.Image':
pass
def __call__(self, images: Union[str, 'Image.Image'], **kwargs: Any) -> 'Image.Image':
'''
Transform the image(s) passed as inputs.
Args:
images (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images, which must then be passed as a string.
Images in a batch must all be in the same format: all as http links, all as local paths, or all as PIL
images.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is used and
the call may block forever.
Return:
An image (Image.Image) or a list of images (list["Image.Image"]) containing result(s). If the input is a
single image, the return will be also a single image, if the input is a list of several images, it will
return a list of transformed images.
'''
pass
def _forward(self, model_inputs):
pass
def preprocess(self, image, timeout=None):
pass
def postprocess(self, model_outputs):
pass
total_program_units: 12
total_doc_str: 2
AvgCountLine: 10
AvgCountLineBlank: 1
AvgCountLineCode: 6
AvgCountLineComment: 3
AvgCyclomatic: 2
CommentToCodeRatio: 1.11
CountClassBase: 1
CountClassCoupled: 2
CountClassCoupledModified: 0
CountClassDerived: 0
CountDeclInstanceMethod: 6
CountDeclInstanceVariable: 0
CountDeclMethod: 6
CountDeclMethodAll: 48
CountLine: 97
CountLineBlank: 20
CountLineCode: 37
CountLineCodeDecl: 17
CountLineCodeExe: 28
CountLineComment: 41
CountStmt: 35
CountStmtDecl: 15
CountStmtExe: 28
MaxCyclomatic: 4
MaxInheritanceTree: 6
MaxNesting: 1
SumCyclomatic: 12
id: 6426
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/image_to_text.py
class_name: transformers.pipelines.image_to_text.ImageToTextPipeline
human_written_code:
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from typing import Any, Union, overload
from ..generation import GenerationConfig
from .base import Pipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_image_processor=True))
class ImageToTextPipeline(Pipeline):
"""
Image To Text pipeline using a `AutoModelForVision2Seq`. This pipeline predicts a caption for a given image.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'generated_text': 'two birds are standing next to each other '}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image to text pipeline can currently be loaded from pipeline() using the following task identifier:
"image-to-text".
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-to-text).
"""
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES)
def _sanitize_parameters(self, max_new_tokens=None, generate_kwargs=None, prompt=None, timeout=None):
forward_params = {}
preprocess_params = {}
if prompt is not None:
preprocess_params['prompt'] = prompt
if timeout is not None:
preprocess_params['timeout'] = timeout
if max_new_tokens is not None:
forward_params['max_new_tokens'] = max_new_tokens
if generate_kwargs is not None:
if max_new_tokens is not None and 'max_new_tokens' in generate_kwargs:
raise ValueError('`max_new_tokens` is defined both as an argument and inside `generate_kwargs` argument, please use only 1 version')
forward_params.update(generate_kwargs)
if self.assistant_model is not None:
forward_params['assistant_model'] = self.assistant_model
if self.assistant_tokenizer is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
return (preprocess_params, forward_params, {})
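`_sanitize_parameters` follows the standard pipeline pattern: route keyword arguments into per-stage dicts and reject conflicting duplicates. A minimal standalone sketch of that routing (function and values are illustrative, not part of the transformers API):

```python
def sanitize(max_new_tokens=None, generate_kwargs=None, prompt=None, timeout=None):
    """Route kwargs to the preprocess and forward stages, mirroring the logic above."""
    preprocess_params, forward_params = {}, {}
    if prompt is not None:
        preprocess_params["prompt"] = prompt
    if timeout is not None:
        preprocess_params["timeout"] = timeout
    if max_new_tokens is not None:
        forward_params["max_new_tokens"] = max_new_tokens
    if generate_kwargs is not None:
        # Same guard as above: the same option must not arrive via two paths.
        if max_new_tokens is not None and "max_new_tokens" in generate_kwargs:
            raise ValueError("`max_new_tokens` is defined twice, please use only 1 version")
        forward_params.update(generate_kwargs)
    return preprocess_params, forward_params, {}

pre, fwd, post = sanitize(max_new_tokens=32, prompt="a photo of", timeout=5.0)
```

Returning three dicts (preprocess, forward, postprocess) is what lets the base `Pipeline` dispatch each argument to exactly one stage.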
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, inputs: Union[list[str], list['Image.Image']], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, inputs: Union[str, list[str], 'Image.Image', list['Image.Image']], **kwargs):
"""
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a HTTP(s) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images.
max_new_tokens (`int`, *optional*):
The amount of maximum tokens to generate. By default it will use `generate` default.
generate_kwargs (`Dict`, *optional*):
Pass it to send all of these arguments directly to `generate` allowing full control of this function.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following key:
- **generated_text** (`str`) -- The generated text.
"""
if 'images' in kwargs:
inputs = kwargs.pop('images')
if inputs is None:
raise ValueError('Cannot call the image-to-text pipeline without an inputs argument!')
return super().__call__(inputs, **kwargs)
def preprocess(self, image, prompt=None, timeout=None):
image = load_image(image, timeout=timeout)
if prompt is not None:
logger.warning_once('Passing `prompt` to the `image-to-text` pipeline is deprecated and will be removed in version 4.48 of 🤗 Transformers. Use the `image-text-to-text` pipeline instead')
if not isinstance(prompt, str):
raise ValueError(f'Received an invalid text input, got - {type(prompt)} - but expected a single string. Note also that one single text can be provided for conditional image to text generation.')
model_type = self.model.config.model_type
if model_type == 'git':
model_inputs = self.image_processor(images=image, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
input_ids = self.tokenizer(text=prompt, add_special_tokens=False).input_ids
input_ids = [self.tokenizer.cls_token_id] + input_ids
input_ids = torch.tensor(input_ids).unsqueeze(0)
model_inputs.update({'input_ids': input_ids})
elif model_type == 'pix2struct':
model_inputs = self.image_processor(images=image, header_text=prompt, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
elif model_type != 'vision-encoder-decoder':
model_inputs = self.image_processor(images=image, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
text_inputs = self.tokenizer(prompt, return_tensors='pt')
model_inputs.update(text_inputs)
else:
raise ValueError(f'Model type {model_type} does not support conditional text generation')
else:
model_inputs = self.image_processor(images=image, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
if self.model.config.model_type == 'git' and prompt is None:
model_inputs['input_ids'] = None
return model_inputs
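`preprocess` dispatches on `model_type` because conditional captioning wires the prompt differently per architecture (GIT prepends token ids, Pix2Struct renders header text, other encoder models tokenize the prompt separately, and `vision-encoder-decoder` rejects prompts). The dispatch shape can be sketched without the real processors; the returned dicts are toy tags, not actual tensors:

```python
def build_inputs(model_type, image, prompt=None):
    """Toy dispatch mirroring the branching above; returns placeholder values."""
    if prompt is None:
        return {"pixel_values": image}
    if model_type == "git":
        return {"pixel_values": image, "input_ids": f"[CLS] {prompt}"}
    if model_type == "pix2struct":
        return {"flattened_patches": image, "header_text": prompt}
    if model_type != "vision-encoder-decoder":
        return {"pixel_values": image, "text": prompt}
    raise ValueError(f"Model type {model_type} does not support conditional text generation")
```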
def _forward(self, model_inputs, **generate_kwargs):
if 'input_ids' in model_inputs and isinstance(model_inputs['input_ids'], list) and all((x is None for x in model_inputs['input_ids'])):
model_inputs['input_ids'] = None
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
inputs = model_inputs.pop(self.model.main_input_name)
model_outputs = self.model.generate(inputs, **model_inputs, **generate_kwargs)
return model_outputs
def postprocess(self, model_outputs):
records = []
for output_ids in model_outputs:
record = {'generated_text': self.tokenizer.decode(output_ids, skip_special_tokens=True)}
records.append(record)
return records
class_skeleton:
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_image_processor=True))
class ImageToTextPipeline(Pipeline):
'''
Image To Text pipeline using a `AutoModelForVision2Seq`. This pipeline predicts a caption for a given image.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> captioner = pipeline(model="ydshieh/vit-gpt2-coco-en")
>>> captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'generated_text': 'two birds are standing next to each other '}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image to text pipeline can currently be loaded from pipeline() using the following task identifier:
"image-to-text".
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?pipeline_tag=image-to-text).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, max_new_tokens=None, generate_kwargs=None, prompt=None, timeout=None):
pass
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
pass
def __call__(self, inputs: Union[str, 'Image.Image'], **kwargs: Any) -> list[dict[str, Any]]:
'''
Assign labels to the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a HTTP(s) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images.
max_new_tokens (`int`, *optional*):
The amount of maximum tokens to generate. By default it will use `generate` default.
generate_kwargs (`Dict`, *optional*):
Pass it to send all of these arguments directly to `generate` allowing full control of this function.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following key:
- **generated_text** (`str`) -- The generated text.
'''
pass
def preprocess(self, image, prompt=None, timeout=None):
pass
def _forward(self, model_inputs, **generate_kwargs):
pass
def postprocess(self, model_outputs):
pass
total_program_units: 12
total_doc_str: 2
AvgCountLine: 25
AvgCountLineBlank: 4
AvgCountLineCode: 16
AvgCountLineComment: 5
AvgCyclomatic: 5
CommentToCodeRatio: 0.45
CountClassBase: 1
CountClassCoupled: 4
CountClassCoupledModified: 0
CountClassDerived: 0
CountDeclInstanceMethod: 6
CountDeclInstanceVariable: 0
CountDeclMethod: 6
CountDeclMethodAll: 48
CountLine: 177
CountLineBlank: 35
CountLineCode: 98
CountLineCodeDecl: 18
CountLineCodeExe: 91
CountLineComment: 44
CountStmt: 74
CountStmtDecl: 18
CountStmtExe: 67
MaxCyclomatic: 11
MaxInheritanceTree: 6
MaxNesting: 3
SumCyclomatic: 29
id: 6427
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/mask_generation.py
class_name: transformers.pipelines.mask_generation.MaskGenerationPipeline
human_written_code:
from collections import defaultdict
from ..utils import add_end_docstrings, is_torch_available, logging, requires_backends
from .base import ChunkPipeline, build_pipeline_init_args
from typing import TYPE_CHECKING, Any, Optional, Union, overload
from ..image_utils import load_image
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True), '\n points_per_batch (*optional*, int, default to 64):\n Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU\n memory.\n output_bboxes_mask (`bool`, *optional*, default to `False`):\n Whether or not to output the bounding box predictions.\n output_rle_masks (`bool`, *optional*, default to `False`):\n Whether or not to output the masks in `RLE` format')
class MaskGenerationPipeline(ChunkPipeline):
"""
Automatic mask generation for images using `SamForMaskGeneration`. This pipeline predicts binary masks for an
image, given an image. It is a `ChunkPipeline` because you can separate the points in a mini-batch in order to
avoid OOM issues. Use the `points_per_batch` argument to control the number of points that will be processed at the
same time. Default is `64`.
The pipeline works in 3 steps:
1. `preprocess`: A grid of 1024 points evenly separated is generated along with bounding boxes and point
labels.
For more details on how the points and bounding boxes are created, check the `_generate_crop_boxes`
function. The image is also preprocessed using the `image_processor`. This function `yields` a minibatch of
`points_per_batch`.
2. `forward`: feeds the outputs of `preprocess` to the model. The image embedding is computed only once.
Calls both `self.model.get_image_embeddings` and makes sure that the gradients are not computed, and the
tensors and models are on the same device.
3. `postprocess`: The most important part of the automatic mask generation happens here. Three steps
are induced:
- image_processor.postprocess_masks (run on each minibatch loop): takes in the raw output masks,
resizes them according
to the image size, and transforms them to binary masks.
- image_processor.filter_masks (on each minibatch loop): uses both `pred_iou_thresh` and
`stability_scores`. Also
applies a variety of filters based on non maximum suppression to remove bad masks.
- image_processor.postprocess_masks_for_amg applies the NSM on the mask to only keep relevant ones.
Example:
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
>>> outputs = generator(
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... )
>>> outputs = generator(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", points_per_batch=128
... )
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"mask-generation"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=mask-generation).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def __init__(self, **kwargs):
super().__init__(**kwargs)
requires_backends(self, 'vision')
requires_backends(self, 'torch')
self.check_model_type(MODEL_FOR_MASK_GENERATION_MAPPING_NAMES)
def _sanitize_parameters(self, **kwargs):
preprocess_kwargs = {}
postprocess_kwargs = {}
forward_params = {}
if 'points_per_batch' in kwargs:
preprocess_kwargs['points_per_batch'] = kwargs['points_per_batch']
if 'points_per_crop' in kwargs:
preprocess_kwargs['points_per_crop'] = kwargs['points_per_crop']
if 'crops_n_layers' in kwargs:
preprocess_kwargs['crops_n_layers'] = kwargs['crops_n_layers']
if 'crop_overlap_ratio' in kwargs:
preprocess_kwargs['crop_overlap_ratio'] = kwargs['crop_overlap_ratio']
if 'crop_n_points_downscale_factor' in kwargs:
preprocess_kwargs['crop_n_points_downscale_factor'] = kwargs['crop_n_points_downscale_factor']
if 'timeout' in kwargs:
preprocess_kwargs['timeout'] = kwargs['timeout']
if 'pred_iou_thresh' in kwargs:
forward_params['pred_iou_thresh'] = kwargs['pred_iou_thresh']
if 'stability_score_offset' in kwargs:
forward_params['stability_score_offset'] = kwargs['stability_score_offset']
if 'mask_threshold' in kwargs:
forward_params['mask_threshold'] = kwargs['mask_threshold']
if 'stability_score_thresh' in kwargs:
forward_params['stability_score_thresh'] = kwargs['stability_score_thresh']
if 'max_hole_area' in kwargs:
forward_params['max_hole_area'] = kwargs['max_hole_area']
if 'max_sprinkle_area' in kwargs:
forward_params['max_sprinkle_area'] = kwargs['max_sprinkle_area']
if 'crops_nms_thresh' in kwargs:
postprocess_kwargs['crops_nms_thresh'] = kwargs['crops_nms_thresh']
if 'output_rle_mask' in kwargs:
postprocess_kwargs['output_rle_mask'] = kwargs['output_rle_mask']
if 'output_bboxes_mask' in kwargs:
postprocess_kwargs['output_bboxes_mask'] = kwargs['output_bboxes_mask']
return (preprocess_kwargs, forward_params, postprocess_kwargs)
@overload
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> dict[str, Any]:
...
@overload
def __call__(self, image: Union[list[str], list['Image.Image']], *args: Any, **kwargs: Any) -> list[dict[str, Any]]:
...
def __call__(self, image: Union[str, 'Image.Image', list[str], list['Image.Image']], *args: Any, **kwargs: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
"""
Generates binary segmentation masks
Args:
image (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
Image or list of images.
mask_threshold (`float`, *optional*, defaults to 0.0):
Threshold to use when turning the predicted masks into binary values.
pred_iou_thresh (`float`, *optional*, defaults to 0.88):
A filtering threshold in `[0,1]` applied on the model's predicted mask quality.
stability_score_thresh (`float`, *optional*, defaults to 0.95):
A filtering threshold in `[0,1]`, using the stability of the mask under changes to the cutoff used to
binarize the model's mask predictions.
stability_score_offset (`int`, *optional*, defaults to 1):
The amount to shift the cutoff when calculated the stability score.
crops_nms_thresh (`float`, *optional*, defaults to 0.7):
The box IoU cutoff used by non-maximal suppression to filter duplicate masks.
crops_n_layers (`int`, *optional*, defaults to 0):
If `crops_n_layers>0`, mask prediction will be run again on crops of the image. Sets the number of
layers to run, where each layer has 2**i_layer number of image crops.
crop_overlap_ratio (`float`, *optional*, defaults to `512 / 1500`):
Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
crop_n_points_downscale_factor (`int`, *optional*, defaults to `1`):
The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
`Dict`: A dictionary with the following keys:
- **mask** (`PIL.Image`) -- A binary mask of the detected object as a PIL Image of shape `(width,
height)` of the original image. Returns a mask filled with zeros if no object is found.
- **score** (*optional* `float`) -- Optionally, when the model is capable of estimating a confidence of
the "object" described by the label and the mask.
"""
num_workers = kwargs.pop('num_workers', None)
batch_size = kwargs.pop('batch_size', None)
return super().__call__(image, *args, num_workers=num_workers, batch_size=batch_size, **kwargs)
def preprocess(self, image, points_per_batch=64, crops_n_layers: int=0, crop_overlap_ratio: float=512 / 1500, points_per_crop: Optional[int]=32, crop_n_points_downscale_factor: Optional[int]=1, timeout: Optional[float]=None):
image = load_image(image, timeout=timeout)
target_size = self.image_processor.size.get('longest_edge', self.image_processor.size.get('height'))
crop_boxes, grid_points, cropped_images, input_labels = self.image_processor.generate_crop_boxes(image, target_size, crops_n_layers, crop_overlap_ratio, points_per_crop, crop_n_points_downscale_factor)
model_inputs = self.image_processor(images=cropped_images, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
with self.device_placement():
inference_context = self.get_inference_context()
with inference_context():
model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
embeddings = self.model.get_image_embeddings(model_inputs.pop('pixel_values'))
if isinstance(embeddings, tuple):
image_embeddings, intermediate_embeddings = embeddings
model_inputs['intermediate_embeddings'] = intermediate_embeddings
else:
image_embeddings = embeddings
model_inputs['image_embeddings'] = image_embeddings
n_points = grid_points.shape[1]
points_per_batch = points_per_batch if points_per_batch is not None else n_points
if points_per_batch <= 0:
raise ValueError('Cannot have points_per_batch<=0. Must be >=1 to returned batched outputs. To return all points at once, set points_per_batch to None')
for i in range(0, n_points, points_per_batch):
batched_points = grid_points[:, i:i + points_per_batch, :, :]
labels = input_labels[:, i:i + points_per_batch]
is_last = i == n_points - points_per_batch
yield {'input_points': batched_points, 'input_labels': labels, 'input_boxes': crop_boxes, 'is_last': is_last, **model_inputs}
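The minibatching loop at the end of `preprocess` chunks the point grid so that only `points_per_batch` prompts hit the model at once, with an `is_last` flag telling the `ChunkPipeline` when aggregation can start. The indexing can be sketched without tensors (sizes are illustrative):

```python
def batch_points(n_points, points_per_batch):
    """Yield (start, end, is_last) index ranges, mirroring the loop above."""
    for i in range(0, n_points, points_per_batch):
        yield i, min(i + points_per_batch, n_points), i == n_points - points_per_batch

batches = list(batch_points(256, 64))
```

Note that, as in the original loop, the `is_last` comparison only fires when `n_points` is divisible by `points_per_batch`; the grid generated by `generate_crop_boxes` satisfies that for the default settings.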
def _forward(self, model_inputs, pred_iou_thresh=0.88, stability_score_thresh=0.95, mask_threshold=0, stability_score_offset=1, max_hole_area=None, max_sprinkle_area=None):
input_boxes = model_inputs.pop('input_boxes')
is_last = model_inputs.pop('is_last')
original_sizes = model_inputs.pop('original_sizes').tolist()
reshaped_input_sizes = model_inputs.pop('reshaped_input_sizes').tolist()
model_outputs = self.model(**model_inputs)
low_resolution_masks = model_outputs['pred_masks']
postprocess_kwargs = {}
if max_hole_area is not None:
postprocess_kwargs['max_hole_area'] = max_hole_area
if max_sprinkle_area is not None and max_sprinkle_area > 0:
postprocess_kwargs['max_sprinkle_area'] = max_sprinkle_area
if postprocess_kwargs:
low_resolution_masks = self.image_processor.post_process_masks(low_resolution_masks, original_sizes, mask_threshold=mask_threshold, reshaped_input_sizes=reshaped_input_sizes, binarize=False, **postprocess_kwargs)
masks = self.image_processor.post_process_masks(low_resolution_masks, original_sizes, mask_threshold=mask_threshold, reshaped_input_sizes=reshaped_input_sizes, binarize=False)
iou_scores = model_outputs['iou_scores']
masks, iou_scores, boxes = self.image_processor.filter_masks(masks[0], iou_scores[0], original_sizes[0], input_boxes[0], pred_iou_thresh, stability_score_thresh, mask_threshold, stability_score_offset)
return {'masks': masks, 'is_last': is_last, 'boxes': boxes, 'iou_scores': iou_scores}
def postprocess(self, model_outputs, output_rle_mask=False, output_bboxes_mask=False, crops_nms_thresh=0.7):
all_scores = []
all_masks = []
all_boxes = []
for model_output in model_outputs:
all_scores.append(model_output.pop('iou_scores'))
all_masks.extend(model_output.pop('masks'))
all_boxes.append(model_output.pop('boxes'))
all_scores = torch.cat(all_scores)
all_boxes = torch.cat(all_boxes)
output_masks, iou_scores, rle_mask, bounding_boxes = self.image_processor.post_process_for_mask_generation(all_masks, all_scores, all_boxes, crops_nms_thresh)
extra = defaultdict(list)
for output in model_outputs:
for k, v in output.items():
extra[k].append(v)
optional = {}
if output_rle_mask:
optional['rle_mask'] = rle_mask
if output_bboxes_mask:
optional['bounding_boxes'] = bounding_boxes
return {'masks': output_masks, 'scores': iou_scores, **optional, **extra}
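`postprocess` merges per-minibatch outputs; after `masks`, `iou_scores`, and `boxes` are popped, the `defaultdict` pass collects whatever keys remain across chunks into lists. A minimal sketch of that aggregation (keys and values are illustrative):

```python
from collections import defaultdict

# Hypothetical per-chunk leftovers after masks/scores/boxes were popped.
model_outputs = [{"crop_idx": 0}, {"crop_idx": 1}]

extra = defaultdict(list)
for output in model_outputs:
    for k, v in output.items():
        extra[k].append(v)

# Leftover keys ride along in the final dict, exactly as in the return above.
result = {"masks": ["m0", "m1"], "scores": [0.9, 0.8], **extra}
```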
class_skeleton:
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True), '\n points_per_batch (*optional*, int, default to 64):\n Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU\n memory.\n output_bboxes_mask (`bool`, *optional*, default to `False`):\n Whether or not to output the bounding box predictions.\n output_rle_masks (`bool`, *optional*, default to `False`):\n Whether or not to output the masks in `RLE` format')
class MaskGenerationPipeline(ChunkPipeline):
'''
Automatic mask generation for images using `SamForMaskGeneration`. This pipeline predicts binary masks for an
image, given an image. It is a `ChunkPipeline` because you can separate the points in a mini-batch in order to
avoid OOM issues. Use the `points_per_batch` argument to control the number of points that will be processed at the
same time. Default is `64`.
The pipeline works in 3 steps:
1. `preprocess`: A grid of 1024 points evenly separated is generated along with bounding boxes and point
labels.
For more details on how the points and bounding boxes are created, check the `_generate_crop_boxes`
function. The image is also preprocessed using the `image_processor`. This function `yields` a minibatch of
`points_per_batch`.
2. `forward`: feeds the outputs of `preprocess` to the model. The image embedding is computed only once.
Calls both `self.model.get_image_embeddings` and makes sure that the gradients are not computed, and the
tensors and models are on the same device.
3. `postprocess`: The most important part of the automatic mask generation happens here. Three steps
are induced:
- image_processor.postprocess_masks (run on each minibatch loop): takes in the raw output masks,
resizes them according
to the image size, and transforms them to binary masks.
- image_processor.filter_masks (on each minibatch loop): uses both `pred_iou_thresh` and
`stability_scores`. Also
applies a variety of filters based on non maximum suppression to remove bad masks.
- image_processor.postprocess_masks_for_amg applies the NSM on the mask to only keep relevant ones.
Example:
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="facebook/sam-vit-base", task="mask-generation")
>>> outputs = generator(
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... )
>>> outputs = generator(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png", points_per_batch=128
... )
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This segmentation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"mask-generation"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=mask-generation).
'''
def __init__(self, **kwargs):
pass
def _sanitize_parameters(self, **kwargs):
pass
@overload
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> dict[str, Any]:
pass
@overload
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> dict[str, Any]:
pass
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> dict[str, Any]:
'''
Generates binary segmentation masks
Args:
image (`str`, `List[str]`, `PIL.Image` or `List[PIL.Image]`):
Image or list of images.
mask_threshold (`float`, *optional*, defaults to 0.0):
Threshold to use when turning the predicted masks into binary values.
pred_iou_thresh (`float`, *optional*, defaults to 0.88):
A filtering threshold in `[0,1]` applied on the model's predicted mask quality.
stability_score_thresh (`float`, *optional*, defaults to 0.95):
A filtering threshold in `[0,1]`, using the stability of the mask under changes to the cutoff used to
binarize the model's mask predictions.
stability_score_offset (`int`, *optional*, defaults to 1):
The amount to shift the cutoff when calculated the stability score.
crops_nms_thresh (`float`, *optional*, defaults to 0.7):
The box IoU cutoff used by non-maximal suppression to filter duplicate masks.
crops_n_layers (`int`, *optional*, defaults to 0):
If `crops_n_layers>0`, mask prediction will be run again on crops of the image. Sets the number of
layers to run, where each layer has 2**i_layer number of image crops.
crop_overlap_ratio (`float`, *optional*, defaults to `512 / 1500`):
Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of
the image length. Later layers with more crops scale down this overlap.
crop_n_points_downscale_factor (`int`, *optional*, defaults to `1`):
The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
`Dict`: A dictionary with the following keys:
- **mask** (`PIL.Image`) -- A binary mask of the detected object as a PIL Image of shape `(width,
height)` of the original image. Returns a mask filled with zeros if no object is found.
- **score** (*optional* `float`) -- Optionally, when the model is capable of estimating a confidence of
the "object" described by the label and the mask.
'''
pass
def preprocess(self, image, points_per_batch=64, crops_n_layers: int=0, crop_overlap_ratio: float=512 / 1500, points_per_crop: Optional[int]=32, crop_n_points_downscale_factor: Optional[int]=1, timeout: Optional[float]=None):
pass
def _forward(self, model_inputs, pred_iou_thresh=0.88, stability_score_thresh=0.95, mask_threshold=0, stability_score_offset=1, max_hole_area=None, max_sprinkle_area=None):
pass
def postprocess(self, model_outputs, output_rle_mask=False, output_bboxes_mask=False, crops_nms_thresh=0.7):
pass
total_program_units: 12
total_doc_str: 2
AvgCountLine: 33
AvgCountLineBlank: 3
AvgCountLineCode: 24
AvgCountLineComment: 6
AvgCyclomatic: 5
CommentToCodeRatio: 0.52
CountClassBase: 1
CountClassCoupled: 6
CountClassCoupledModified: 0
CountClassDerived: 0
CountDeclInstanceMethod: 6
CountDeclInstanceVariable: 0
CountDeclMethod: 6
CountDeclMethodAll: 50
CountLine: 255
CountLineBlank: 32
CountLineCode: 147
CountLineCodeDecl: 60
CountLineCodeExe: 118
CountLineComment: 76
CountStmt: 96
CountStmtDecl: 38
CountStmtExe: 89
MaxCyclomatic: 14
MaxInheritanceTree: 7
MaxNesting: 3
SumCyclomatic: 30
id: 6428
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/object_detection.py
class_name: transformers.pipelines.object_detection.ObjectDetectionPipeline
human_written_code:
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from typing import TYPE_CHECKING, Any, Union, overload
from .base import Pipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ObjectDetectionPipeline(Pipeline):
"""
Object detection pipeline using any `AutoModelForObjectDetection`. This pipeline predicts bounding boxes of objects
and their classes.
Example:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.997, 'label': 'bird', 'box': {'xmin': 69, 'ymin': 171, 'xmax': 396, 'ymax': 507}}, {'score': 0.999, 'label': 'bird', 'box': {'xmin': 398, 'ymin': 105, 'xmax': 767, 'ymax': 507}}]
>>> # x, y are expressed relative to the top left hand corner.
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"object-detection"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=object-detection).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'vision')
mapping = MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES.copy()
mapping.update(MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES)
self.check_model_type(mapping)
def _sanitize_parameters(self, **kwargs):
preprocess_params = {}
if 'timeout' in kwargs:
preprocess_params['timeout'] = kwargs['timeout']
postprocess_kwargs = {}
if 'threshold' in kwargs:
postprocess_kwargs['threshold'] = kwargs['threshold']
return (preprocess_params, {}, postprocess_kwargs)
@overload
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: Union[list[str], list['Image.Image']], *args: Any, **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, *args, **kwargs) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the
same format: all as HTTP(S) links, all as local paths, or all as PIL images.
threshold (`float`, *optional*, defaults to 0.5):
The probability necessary to make a prediction.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of dictionaries or a list of list of dictionaries containing the result. If the input is a single
image, will return a list of dictionaries, if the input is a list of several images, will return a list of
list of dictionaries corresponding to each image.
The dictionaries contain the following keys:
- **label** (`str`) -- The class label identified by the model.
- **score** (`float`) -- The score attributed by the model for that label.
- **box** (`list[dict[str, int]]`) -- The bounding box of detected object in image's original size.
"""
if 'images' in kwargs and 'inputs' not in kwargs:
kwargs['inputs'] = kwargs.pop('images')
return super().__call__(*args, **kwargs)
def preprocess(self, image, timeout=None):
image = load_image(image, timeout=timeout)
target_size = torch.IntTensor([[image.height, image.width]])
inputs = self.image_processor(images=[image], return_tensors='pt')
inputs = inputs.to(self.dtype)
if self.tokenizer is not None:
inputs = self.tokenizer(text=inputs['words'], boxes=inputs['boxes'], return_tensors='pt')
inputs['target_size'] = target_size
return inputs
def _forward(self, model_inputs):
target_size = model_inputs.pop('target_size')
outputs = self.model(**model_inputs)
model_outputs = outputs.__class__({'target_size': target_size, **outputs})
if self.tokenizer is not None:
model_outputs['bbox'] = model_inputs['bbox']
return model_outputs
def postprocess(self, model_outputs, threshold=0.5):
target_size = model_outputs['target_size']
if self.tokenizer is not None:
height, width = target_size[0].tolist()
def unnormalize(bbox):
return self._get_bounding_box(torch.Tensor([width * bbox[0] / 1000, height * bbox[1] / 1000, width * bbox[2] / 1000, height * bbox[3] / 1000]))
scores, classes = model_outputs['logits'].squeeze(0).softmax(dim=-1).max(dim=-1)
labels = [self.model.config.id2label[prediction] for prediction in classes.tolist()]
boxes = [unnormalize(bbox) for bbox in model_outputs['bbox'].squeeze(0)]
keys = ['score', 'label', 'box']
annotation = [dict(zip(keys, vals)) for vals in zip(scores.tolist(), labels, boxes) if vals[0] > threshold]
else:
raw_annotations = self.image_processor.post_process_object_detection(model_outputs, threshold, target_size)
raw_annotation = raw_annotations[0]
scores = raw_annotation['scores']
labels = raw_annotation['labels']
boxes = raw_annotation['boxes']
raw_annotation['scores'] = scores.tolist()
raw_annotation['labels'] = [self.model.config.id2label[label.item()] for label in labels]
raw_annotation['boxes'] = [self._get_bounding_box(box) for box in boxes]
keys = ['score', 'label', 'box']
annotation = [dict(zip(keys, vals)) for vals in zip(raw_annotation['scores'], raw_annotation['labels'], raw_annotation['boxes'])]
return annotation
def _get_bounding_box(self, box: 'torch.Tensor') -> dict[str, int]:
"""
        Turns list [xmin, ymin, xmax, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`dict[str, int]`): Dict containing the coordinates in corners format.
"""
xmin, ymin, xmax, ymax = box.int().tolist()
bbox = {'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax}
return bbox
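As an illustration of the coordinate handling above, here is a torch-free sketch (hypothetical helper names) of how LayoutLM-style boxes normalized to a 0-1000 grid are rescaled to pixel coordinates and packed into the corners dict:

```python
# Sketch only: plain lists stand in for torch tensors; helper names are
# hypothetical, mirroring unnormalize/_get_bounding_box above.
def get_bounding_box(box):
    xmin, ymin, xmax, ymax = [int(v) for v in box]
    return {"xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax}

def unnormalize(bbox, width, height):
    # LayoutLM-style boxes are normalized to a 0-1000 grid, so each
    # coordinate is rescaled by the original image size.
    return get_bounding_box([
        width * bbox[0] / 1000,
        height * bbox[1] / 1000,
        width * bbox[2] / 1000,
        height * bbox[3] / 1000,
    ])

box = unnormalize([100, 200, 500, 800], width=1000, height=500)
# box == {"xmin": 100, "ymin": 100, "xmax": 500, "ymax": 400}
```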
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ObjectDetectionPipeline(Pipeline):
'''
Object detection pipeline using any `AutoModelForObjectDetection`. This pipeline predicts bounding boxes of objects
and their classes.
Example:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="facebook/detr-resnet-50")
>>> detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'score': 0.997, 'label': 'bird', 'box': {'xmin': 69, 'ymin': 171, 'xmax': 396, 'ymax': 507}}, {'score': 0.999, 'label': 'bird', 'box': {'xmin': 398, 'ymin': 105, 'xmax': 767, 'ymax': 507}}]
>>> # x, y are expressed relative to the top left hand corner.
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"object-detection"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=object-detection).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, **kwargs):
pass
@overload
def __call__(self, image: Union[str, 'Image.Image'], *args: Any, **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
    def __call__(self, image: Union[list[str], list['Image.Image']], *args: Any, **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
    def __call__(self, *args, **kwargs) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
'''
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
inputs (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing an HTTP(S) link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. Images in a batch must all be in the
same format: all as HTTP(S) links, all as local paths, or all as PIL images.
threshold (`float`, *optional*, defaults to 0.5):
The probability necessary to make a prediction.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of dictionaries or a list of list of dictionaries containing the result. If the input is a single
image, will return a list of dictionaries, if the input is a list of several images, will return a list of
list of dictionaries corresponding to each image.
The dictionaries contain the following keys:
- **label** (`str`) -- The class label identified by the model.
- **score** (`float`) -- The score attributed by the model for that label.
- **box** (`list[dict[str, int]]`) -- The bounding box of detected object in image's original size.
'''
pass
def preprocess(self, image, timeout=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs, threshold=0.5):
pass
def unnormalize(bbox):
pass
def _get_bounding_box(self, box: 'torch.Tensor') -> dict[str, int]:
'''
        Turns list [xmin, ymin, xmax, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`dict[str, int]`): Dict containing the coordinates in corners format.
'''
pass
| 14
| 3
| 18
| 2
| 12
| 5
| 2
| 0.62
| 1
| 7
| 0
| 0
| 7
| 0
| 7
| 49
| 165
| 29
| 84
| 28
| 75
| 52
| 66
| 28
| 57
| 3
| 6
| 1
| 17
|
6,429
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.KeyDataset
|
from torch.utils.data import Dataset, IterableDataset
class KeyDataset(Dataset):
def __init__(self, dataset: Dataset, key: str):
self.dataset = dataset
self.key = key
def __len__(self):
return len(self.dataset)
def __getitem__(self, i):
return self.dataset[i][self.key]
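A minimal sketch of the projection this class performs, with a plain list of dicts standing in for the torch `Dataset` (both only need `__len__` and `__getitem__`, so torch is not required here):

```python
# Minimal stand-in for KeyDataset: lazily projects one key out of each
# record of an index-able dataset. No torch dependency in this sketch.
class KeyDataset:
    def __init__(self, dataset, key):
        self.dataset = dataset
        self.key = key

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        return self.dataset[i][self.key]


records = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
texts = KeyDataset(records, "text")
# len(texts) == 2, texts[1] == "world"
```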
|
class KeyDataset(Dataset):
def __init__(self, dataset: Dataset, key: str):
pass
def __len__(self):
pass
def __getitem__(self, i):
pass
| 4
| 0
| 2
| 0
| 2
| 0
| 1
| 0
| 1
| 1
| 0
| 0
| 3
| 2
| 3
| 3
| 10
| 2
| 8
| 6
| 4
| 0
| 8
| 6
| 4
| 1
| 1
| 0
| 3
|
6,430
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.KeyPairDataset
|
from torch.utils.data import Dataset, IterableDataset
class KeyPairDataset(Dataset):
def __init__(self, dataset: Dataset, key1: str, key2: str):
self.dataset = dataset
self.key1 = key1
self.key2 = key2
def __len__(self):
return len(self.dataset)
def __getitem__(self, i):
return {'text': self.dataset[i][self.key1], 'text_pair': self.dataset[i][self.key2]}
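The pairing logic can be sketched as a standalone helper (hypothetical name): it builds the `{"text", "text_pair"}` dicts that text-pair tokenizers and pipelines expect, from two keys of each record.

```python
# Sketch of KeyPairDataset's __getitem__ as a plain function; the
# helper name pair_view is hypothetical.
def pair_view(records, key1, key2):
    return [{"text": r[key1], "text_pair": r[key2]} for r in records]


pairs = pair_view([{"q": "hi", "a": "hello"}], "q", "a")
# pairs == [{"text": "hi", "text_pair": "hello"}]
```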
|
class KeyPairDataset(Dataset):
def __init__(self, dataset: Dataset, key1: str, key2: str):
pass
def __len__(self):
pass
def __getitem__(self, i):
pass
| 4
| 0
| 3
| 0
| 3
| 0
| 1
| 0
| 1
| 1
| 0
| 0
| 3
| 3
| 3
| 3
| 11
| 2
| 9
| 7
| 5
| 0
| 9
| 7
| 5
| 1
| 1
| 0
| 3
|
6,431
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.PipelineChunkIterator
|
class PipelineChunkIterator(PipelineIterator):
def __init__(self, loader, infer, params, loader_batch_size=None):
"""
Roughly equivalent to
```
for iterator in loader:
for item in iterator:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
                The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
"""
super().__init__(loader, infer, params)
def __iter__(self):
self.iterator = iter(self.loader)
self.subiterator = None
return self
def __next__(self):
if self.subiterator is None:
            # Subiterator None means we haven't started a `preprocess` iterator, so start it
self.subiterator = self.infer(next(self.iterator), **self.params)
try:
processed = next(self.subiterator)
except StopIteration:
self.subiterator = self.infer(next(self.iterator), **self.params)
processed = next(self.subiterator)
return processed
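The nested-iterator flattening can be sketched without torch (hypothetical stand-in class; `infer` is assumed to return an iterator per chunk):

```python
# Torch-free sketch of PipelineChunkIterator: each element of `loader`
# is expanded by `infer` into an inner iterator, and items are yielded
# from the inner iterators one chunk at a time.
class ChunkIterator:
    def __init__(self, loader, infer, params):
        self.loader = loader
        self.infer = infer
        self.params = params

    def __iter__(self):
        self.iterator = iter(self.loader)
        self.subiterator = None
        return self

    def __next__(self):
        if self.subiterator is None:
            # Start the first inner iterator lazily.
            self.subiterator = self.infer(next(self.iterator), **self.params)
        try:
            return next(self.subiterator)
        except StopIteration:
            # Current chunk exhausted: advance to the next outer element.
            self.subiterator = self.infer(next(self.iterator), **self.params)
            return next(self.subiterator)


def infer(chunk, scale=1):
    return iter([x * scale for x in chunk])


out = list(ChunkIterator([[1, 2], [3]], infer, {"scale": 10}))
# out == [10, 20, 30]
```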
|
class PipelineChunkIterator(PipelineIterator):
def __init__(self, loader, infer, params, loader_batch_size=None):
'''
Roughly equivalent to
```
for iterator in loader:
for item in iterator:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
                The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
'''
pass
def __iter__(self):
pass
def __next__(self):
pass
| 4
| 1
| 13
| 1
| 5
| 8
| 2
| 1.44
| 1
| 2
| 0
| 0
| 3
| 2
| 3
| 13
| 43
| 4
| 16
| 7
| 12
| 23
| 16
| 7
| 12
| 3
| 4
| 1
| 5
|
6,432
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.PipelineDataset
|
from torch.utils.data import Dataset, IterableDataset
class PipelineDataset(Dataset):
def __init__(self, dataset, process, params):
self.dataset = dataset
self.process = process
self.params = params
def __len__(self):
return len(self.dataset)
def __getitem__(self, i):
item = self.dataset[i]
processed = self.process(item, **self.params)
return processed
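The same map-on-access pattern in a standalone sketch (plain Python, no torch; names mirror the class above):

```python
# Sketch of PipelineDataset: lazily applies `process` (with fixed
# keyword params) to each record of an index-able dataset.
class PipelineDataset:
    def __init__(self, dataset, process, params):
        self.dataset = dataset
        self.process = process
        self.params = params

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        return self.process(self.dataset[i], **self.params)


squared = PipelineDataset([1, 2, 3], lambda x, offset: x * x + offset, {"offset": 1})
# squared[2] == 10, len(squared) == 3
```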
|
class PipelineDataset(Dataset):
def __init__(self, dataset, process, params):
pass
def __len__(self):
pass
def __getitem__(self, i):
pass
| 4
| 0
| 3
| 0
| 3
| 0
| 1
| 0
| 1
| 0
| 0
| 0
| 3
| 3
| 3
| 3
| 13
| 2
| 11
| 9
| 7
| 0
| 11
| 9
| 7
| 1
| 1
| 0
| 3
|
6,433
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.PipelineIterator
|
import torch
import numpy as np
from torch.utils.data import Dataset, IterableDataset
from ..utils.generic import ModelOutput
class PipelineIterator(IterableDataset):
def __init__(self, loader, infer, params, loader_batch_size=None):
"""
Roughly equivalent to
```
for item in loader:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
                The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
loader_batch_size (`int`, *optional*):
                If specified, the items of `loader` are supposed to come as a batch, and are loader_batched here
making it roughly behave as
```
for items in loader:
for i in loader_batch_size:
item = items[i]
yield infer(item, **params)
```"""
self.loader = loader
self.infer = infer
self.params = params
if loader_batch_size == 1:
loader_batch_size = None
self.loader_batch_size = loader_batch_size
self._loader_batch_index = None
self._loader_batch_data = None
def __len__(self):
return len(self.loader)
def __iter__(self):
self.iterator = iter(self.loader)
return self
def loader_batch_item(self):
"""
Return item located at `loader_batch_index` within the current `loader_batch_data`.
"""
if isinstance(self._loader_batch_data, torch.Tensor):
result = self._loader_batch_data[self._loader_batch_index].unsqueeze(0)
else:
loader_batched = {}
for k, element in self._loader_batch_data.items():
                if isinstance(element, ModelOutput):
                    element = element.to_tuple()
                    if isinstance(element[0], torch.Tensor):
                        loader_batched[k] = tuple(el[self._loader_batch_index].unsqueeze(0) for el in element)
                    elif isinstance(element[0], np.ndarray):
                        loader_batched[k] = tuple(np.expand_dims(el[self._loader_batch_index], 0) for el in element)
                    continue
                if k in {'hidden_states', 'attentions'} and isinstance(element, tuple):
                    if isinstance(element[0], torch.Tensor):
                        loader_batched[k] = tuple(el[self._loader_batch_index].unsqueeze(0) for el in element)
                    elif isinstance(element[0], np.ndarray):
                        loader_batched[k] = tuple(np.expand_dims(el[self._loader_batch_index], 0) for el in element)
                    continue
if k == 'past_key_values':
continue
if element is None:
loader_batched[k] = None
elif isinstance(element[self._loader_batch_index], torch.Tensor):
loader_batched[k] = element[self._loader_batch_index].unsqueeze(0)
elif isinstance(element[self._loader_batch_index], np.ndarray):
loader_batched[k] = np.expand_dims(element[self._loader_batch_index], 0)
else:
loader_batched[k] = element[self._loader_batch_index]
result = self._loader_batch_data.__class__(loader_batched)
self._loader_batch_index += 1
return result
def __next__(self):
if self._loader_batch_index is not None and self._loader_batch_index < self.loader_batch_size:
return self.loader_batch_item()
item = next(self.iterator)
processed = self.infer(item, **self.params)
if self.loader_batch_size is not None:
if isinstance(processed, torch.Tensor):
first_tensor = processed
elif isinstance(processed, tuple):
first_tensor = processed[0]
else:
key = list(processed.keys())[0]
first_tensor = processed[key]
if isinstance(first_tensor, list):
observed_batch_size = len(first_tensor)
else:
observed_batch_size = first_tensor.shape[0]
if 0 < observed_batch_size < self.loader_batch_size:
self.loader_batch_size = observed_batch_size
self._loader_batch_data = processed[0] if isinstance(processed, tuple) else processed
self._loader_batch_index = 0
return self.loader_batch_item()
else:
return processed
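The loader-batch slicing above can be reduced to a torch-free sketch: a batched output dict is split back into per-item dicts (plain lists stand in for tensors; the helper name is hypothetical):

```python
# Sketch of the re-slicing that loader_batch_item performs, minus the
# tensor/ModelOutput special cases: take index i of every value.
def unbatch(batched):
    keys = list(batched)
    size = len(batched[keys[0]])
    return [{k: batched[k][i] for k in keys} for i in range(size)]


items = unbatch({"logits": [1, 2], "labels": ["a", "b"]})
# items == [{"logits": 1, "labels": "a"}, {"logits": 2, "labels": "b"}]
```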
|
class PipelineIterator(IterableDataset):
def __init__(self, loader, infer, params, loader_batch_size=None):
'''
Roughly equivalent to
```
for item in loader:
yield infer(item, **params)
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
                The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
loader_batch_size (`int`, *optional*):
                If specified, the items of `loader` are supposed to come as a batch, and are loader_batched here
making it roughly behave as
```
for items in loader:
for i in loader_batch_size:
item = items[i]
yield infer(item, **params)
```'''
pass
def __len__(self):
pass
def __iter__(self):
pass
def loader_batch_item(self):
'''
Return item located at `loader_batch_index` within the current `loader_batch_data`.
'''
pass
def __next__(self):
pass
| 6
| 2
| 25
| 2
| 14
| 10
| 5
| 0.7
| 1
| 4
| 1
| 2
| 5
| 7
| 5
| 10
| 129
| 12
| 69
| 21
| 63
| 48
| 59
| 21
| 53
| 12
| 3
| 4
| 24
|
6,434
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/pt_utils.py
|
transformers.pipelines.pt_utils.PipelinePackIterator
|
import torch
class PipelinePackIterator(PipelineIterator):
"""
Roughly equivalent to
```
packed = []
for item in loader:
packed.append(item)
if item["is_last"]:
yield packed
packed = []
```
    but it also handles cases where `item` is batched (meaning it's a dict of Tensors with first dimension > 1). In
that case it does
```
packed = []
for batch in loader:
# item is batched
for item in batch:
packed.append(item)
if item["is_last"]:
yield packed
packed = []
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
            The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
loader_batch_size (`int`, *optional*):
            If specified, the items of `loader` are supposed to come as a batch, and are loader_batched here making
it roughly behave as
```
for items in loader:
for i in loader_batch_size:
item = items[i]
yield infer(item, **params)
```"""
def __iter__(self):
self.iterator = iter(self.loader)
return self
def __next__(self):
is_last = False
accumulator = []
if self._loader_batch_index is not None and self._loader_batch_index < self.loader_batch_size:
while self._loader_batch_index < self.loader_batch_size:
item = self.loader_batch_item()
is_last = item.pop('is_last')
accumulator.append(item)
if is_last:
return accumulator
while not is_last:
processed = self.infer(next(self.iterator), **self.params)
if self.loader_batch_size is not None:
if isinstance(processed, torch.Tensor):
first_tensor = processed
else:
key = list(processed.keys())[0]
first_tensor = processed[key]
if isinstance(first_tensor, list):
observed_batch_size = len(first_tensor)
else:
observed_batch_size = first_tensor.shape[0]
if 0 < observed_batch_size < self.loader_batch_size:
self.loader_batch_size = observed_batch_size
self._loader_batch_data = processed
self._loader_batch_index = 0
while self._loader_batch_index < self.loader_batch_size:
item = self.loader_batch_item()
is_last = item.pop('is_last')
accumulator.append(item)
if is_last:
return accumulator
else:
item = processed
is_last = item.pop('is_last')
accumulator.append(item)
return accumulator
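The `is_last` accumulation logic above, stripped of batching and reduced to a standalone sketch (hypothetical helper name):

```python
# Sketch of PipelinePackIterator's grouping: items are accumulated until
# one carries is_last=True, which closes the current pack. Note that
# is_last is popped, so the input dicts are mutated, as in the original.
def pack(items):
    packed, out = [], []
    for item in items:
        is_last = item.pop("is_last")
        packed.append(item)
        if is_last:
            out.append(packed)
            packed = []
    return out


groups = pack([
    {"a": 1, "is_last": False},
    {"a": 2, "is_last": True},
    {"a": 3, "is_last": True},
])
# groups == [[{"a": 1}, {"a": 2}], [{"a": 3}]]
```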
|
class PipelinePackIterator(PipelineIterator):
'''
Roughly equivalent to
```
packed = []
for item in loader:
packed.append(item)
if item["is_last"]:
yield packed
packed = []
```
    but it also handles cases where `item` is batched (meaning it's a dict of Tensors with first dimension > 1). In
that case it does
```
packed = []
for batch in loader:
# item is batched
for item in batch:
packed.append(item)
if item["is_last"]:
yield packed
packed = []
```
Arguments:
loader (`torch.utils.data.DataLoader` or `Iterable`):
The iterator that will be used to apply `infer` on.
infer (any function):
            The function to apply to each element of `loader`.
params (`dict`):
The parameters passed to `infer` along with every item
loader_batch_size (`int`, *optional*):
            If specified, the items of `loader` are supposed to come as a batch, and are loader_batched here making
it roughly behave as
```
for items in loader:
for i in loader_batch_size:
item = items[i]
yield infer(item, **params)
```'''
def __iter__(self):
pass
def __next__(self):
pass
| 3
| 1
| 26
| 1
| 20
| 5
| 6
| 1.15
| 1
| 2
| 0
| 0
| 2
| 4
| 2
| 12
| 98
| 10
| 41
| 14
| 38
| 47
| 38
| 14
| 35
| 11
| 4
| 4
| 12
|
6,435
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/question_answering.py
|
transformers.pipelines.question_answering.QuestionAnsweringArgumentHandler
|
import warnings
from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features
from .base import ArgumentHandler, ChunkPipeline, build_pipeline_init_args
import types
from collections.abc import Iterable
class QuestionAnsweringArgumentHandler(ArgumentHandler):
"""
QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to
internal [`SquadExample`].
    QuestionAnsweringArgumentHandler manages all the possible ways to create a [`SquadExample`] from the command-line
supplied arguments.
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
def normalize(self, item):
if isinstance(item, SquadExample):
return item
elif isinstance(item, dict):
for k in ['question', 'context']:
if k not in item:
raise KeyError('You need to provide a dictionary with keys {question:..., context:...}')
elif item[k] is None:
raise ValueError(f'`{k}` cannot be None')
elif isinstance(item[k], str) and len(item[k]) == 0:
raise ValueError(f'`{k}` cannot be empty')
return QuestionAnsweringPipeline.create_sample(**item)
raise ValueError(f'{item} argument needs to be of type (SquadExample, dict)')
def __call__(self, *args, **kwargs):
if args is not None and len(args) > 0:
if len(args) == 1:
inputs = args[0]
elif len(args) == 2 and {type(el) for el in args} == {str}:
inputs = [{'question': args[0], 'context': args[1]}]
else:
inputs = list(args)
elif 'X' in kwargs:
warnings.warn('Passing the `X` argument to the pipeline is deprecated and will be removed in v5. Inputs should be passed using the `question` and `context` keyword arguments instead.', FutureWarning)
inputs = kwargs['X']
elif 'data' in kwargs:
warnings.warn('Passing the `data` argument to the pipeline is deprecated and will be removed in v5. Inputs should be passed using the `question` and `context` keyword arguments instead.', FutureWarning)
inputs = kwargs['data']
elif 'question' in kwargs and 'context' in kwargs:
if isinstance(kwargs['question'], list) and isinstance(kwargs['context'], str):
inputs = [{'question': Q, 'context': kwargs['context']} for Q in kwargs['question']]
elif isinstance(kwargs['question'], list) and isinstance(kwargs['context'], list):
if len(kwargs['question']) != len(kwargs['context']):
raise ValueError("Questions and contexts don't have the same lengths")
inputs = [{'question': Q, 'context': C} for Q, C in zip(kwargs['question'], kwargs['context'])]
elif isinstance(kwargs['question'], str) and isinstance(kwargs['context'], str):
inputs = [{'question': kwargs['question'], 'context': kwargs['context']}]
else:
raise ValueError("Arguments can't be understood")
else:
raise ValueError(f'Unknown arguments {kwargs}')
generator_types = (types.GeneratorType, Dataset) if Dataset is not None else (types.GeneratorType,)
if isinstance(inputs, generator_types):
return inputs
if isinstance(inputs, dict):
inputs = [inputs]
elif isinstance(inputs, Iterable):
inputs = list(inputs)
else:
raise ValueError(f'Invalid arguments {kwargs}')
for i, item in enumerate(inputs):
inputs[i] = self.normalize(item)
return inputs
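The question/context broadcasting rules in `__call__` can be sketched as a standalone helper (hypothetical name, mirroring the list/str combinations handled above):

```python
# Sketch of the keyword-argument broadcasting: one context may be shared
# across many questions, lists must be zipped pairwise, and a single
# question/context pair becomes a one-element list.
def build_inputs(question, context):
    if isinstance(question, list) and isinstance(context, str):
        return [{"question": q, "context": context} for q in question]
    if isinstance(question, list) and isinstance(context, list):
        if len(question) != len(context):
            raise ValueError("Questions and contexts don't have the same lengths")
        return [{"question": q, "context": c} for q, c in zip(question, context)]
    return [{"question": question, "context": context}]


shared = build_inputs(["q1", "q2"], "ctx")
# shared == [{"question": "q1", "context": "ctx"}, {"question": "q2", "context": "ctx"}]
```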
|
class QuestionAnsweringArgumentHandler(ArgumentHandler):
'''
QuestionAnsweringPipeline requires the user to provide multiple arguments (i.e. question & context) to be mapped to
internal [`SquadExample`].
    QuestionAnsweringArgumentHandler manages all the possible ways to create a [`SquadExample`] from the command-line
supplied arguments.
'''
def normalize(self, item):
pass
def __call__(self, *args, **kwargs):
pass
| 3
| 1
| 35
| 3
| 29
| 3
| 12
| 0.2
| 1
| 11
| 2
| 0
| 2
| 0
| 2
| 23
| 80
| 9
| 59
| 7
| 56
| 12
| 39
| 7
| 36
| 16
| 5
| 3
| 23
|
6,436
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/question_answering.py
|
transformers.pipelines.question_answering.QuestionAnsweringPipeline
|
from ..tokenization_utils import PreTrainedTokenizer
import numpy as np
from ..modelcard import ModelCard
import warnings
from typing import TYPE_CHECKING, Optional, Union
import inspect
from ..utils import PaddingStrategy, add_end_docstrings, is_tokenizers_available, is_torch_available, logging
from .base import ArgumentHandler, ChunkPipeline, build_pipeline_init_args
from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class QuestionAnsweringPipeline(ChunkPipeline):
"""
Question Answering pipeline using any `ModelForQuestionAnswering`. See the [question answering
examples](../task_summary#question-answering) for more information.
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
{'score': 0.9191, 'start': 34, 'end': 40, 'answer': 'Berlin'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This question answering pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=question-answering).
"""
default_input_names = 'question,context'
handle_impossible_answer = False
def __init__(self, model: 'PreTrainedModel', tokenizer: PreTrainedTokenizer, modelcard: Optional[ModelCard]=None, task: str='', **kwargs):
super().__init__(model=model, tokenizer=tokenizer, modelcard=modelcard, task=task, **kwargs)
self._args_parser = QuestionAnsweringArgumentHandler()
self.check_model_type(MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES)
@staticmethod
def create_sample(question: Union[str, list[str]], context: Union[str, list[str]]) -> Union[SquadExample, list[SquadExample]]:
"""
QuestionAnsweringPipeline leverages the [`SquadExample`] internally. This helper method encapsulate all the
logic for converting question(s) and context(s) to [`SquadExample`].
We currently support extractive question answering.
Arguments:
question (`str` or `list[str]`): The question(s) asked.
context (`str` or `list[str]`): The context(s) in which we will look for the answer.
Returns:
One or a list of [`SquadExample`]: The corresponding [`SquadExample`] grouping question and context.
"""
if isinstance(question, list):
return [SquadExample(None, q, c, None, None, None) for q, c in zip(question, context)]
else:
return SquadExample(None, question, context, None, None, None)
def _sanitize_parameters(self, padding=None, topk=None, top_k=None, doc_stride=None, max_answer_len=None, max_seq_len=None, max_question_len=None, handle_impossible_answer=None, align_to_words=None, **kwargs):
preprocess_params = {}
if padding is not None:
preprocess_params['padding'] = padding
if doc_stride is not None:
preprocess_params['doc_stride'] = doc_stride
if max_question_len is not None:
preprocess_params['max_question_len'] = max_question_len
if max_seq_len is not None:
preprocess_params['max_seq_len'] = max_seq_len
postprocess_params = {}
if topk is not None and top_k is None:
warnings.warn('topk parameter is deprecated, use top_k instead', UserWarning)
top_k = topk
if top_k is not None:
if top_k < 1:
raise ValueError(f'top_k parameter should be >= 1 (got {top_k})')
postprocess_params['top_k'] = top_k
if max_answer_len is not None:
if max_answer_len < 1:
                raise ValueError(f'max_answer_len parameter should be >= 1 (got {max_answer_len})')
postprocess_params['max_answer_len'] = max_answer_len
if handle_impossible_answer is not None:
postprocess_params['handle_impossible_answer'] = handle_impossible_answer
if align_to_words is not None:
postprocess_params['align_to_words'] = align_to_words
return (preprocess_params, {}, postprocess_params)
def __call__(self, *args, **kwargs):
"""
Answer the question(s) given as inputs by using the context(s).
Args:
question (`str` or `list[str]`):
One or several question(s) (must be used in conjunction with the `context` argument).
context (`str` or `list[str]`):
One or several context(s) associated with the question(s) (must be used in conjunction with the
`question` argument).
top_k (`int`, *optional*, defaults to 1):
The number of answers to return (will be chosen by order of likelihood). Note that we return less than
top_k answers if there are not enough options available within the context.
doc_stride (`int`, *optional*, defaults to 128):
If the context is too long to fit with the question for the model, it will be split in several chunks
with some overlap. This argument controls the size of that overlap.
max_answer_len (`int`, *optional*, defaults to 15):
The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (`int`, *optional*, defaults to 384):
The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
model. The context will be split in several chunks (using `doc_stride` as overlap) if needed.
max_question_len (`int`, *optional*, defaults to 64):
The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (`bool`, *optional*, defaults to `False`):
Whether or not we accept impossible as an answer.
align_to_words (`bool`, *optional*, defaults to `True`):
Attempts to align the answer to real words. Improves quality on space separated languages. Might hurt on
non-space-separated languages (like Japanese or Chinese)
Return:
A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
- **score** (`float`) -- The probability associated to the answer.
- **start** (`int`) -- The character start index of the answer (in the tokenized version of the input).
- **end** (`int`) -- The character end index of the answer (in the tokenized version of the input).
- **answer** (`str`) -- The answer to the question.
"""
if args:
warnings.warn('Passing a list of SQuAD examples to the pipeline is deprecated and will be removed in v5. Inputs should be passed using the `question` and `context` keyword arguments instead.', FutureWarning)
examples = self._args_parser(*args, **kwargs)
if isinstance(examples, (list, tuple)) and len(examples) == 1:
return super().__call__(examples[0], **kwargs)
return super().__call__(examples, **kwargs)
def preprocess(self, example, padding='do_not_pad', doc_stride=None, max_question_len=64, max_seq_len=None):
if isinstance(example, dict):
example = SquadExample(None, example['question'], example['context'], None, None, None)
if max_seq_len is None:
max_seq_len = min(self.tokenizer.model_max_length, 384)
if doc_stride is None:
doc_stride = min(max_seq_len // 2, 128)
if doc_stride > max_seq_len:
raise ValueError(f'`doc_stride` ({doc_stride}) is larger than `max_seq_len` ({max_seq_len})')
if not self.tokenizer.is_fast:
features = squad_convert_examples_to_features(examples=[example], tokenizer=self.tokenizer, max_seq_length=max_seq_len, doc_stride=doc_stride, max_query_length=max_question_len, padding_strategy=PaddingStrategy.MAX_LENGTH, is_training=False, tqdm_enabled=False)
else:
question_first = self.tokenizer.padding_side == 'right'
encoded_inputs = self.tokenizer(text=example.question_text if question_first else example.context_text, text_pair=example.context_text if question_first else example.question_text, padding=padding, truncation='only_second' if question_first else 'only_first', max_length=max_seq_len, stride=doc_stride, return_token_type_ids=True, return_overflowing_tokens=True, return_offsets_mapping=True, return_special_tokens_mask=True)
num_spans = len(encoded_inputs['input_ids'])
p_mask = [[tok != 1 if question_first else 0 for tok in encoded_inputs.sequence_ids(span_id)] for span_id in range(num_spans)]
features = []
for span_idx in range(num_spans):
input_ids_span_idx = encoded_inputs['input_ids'][span_idx]
attention_mask_span_idx = encoded_inputs['attention_mask'][span_idx] if 'attention_mask' in encoded_inputs else None
token_type_ids_span_idx = encoded_inputs['token_type_ids'][span_idx] if 'token_type_ids' in encoded_inputs else None
if self.tokenizer.cls_token_id is not None:
cls_indices = np.nonzero(np.array(input_ids_span_idx) == self.tokenizer.cls_token_id)[0]
for cls_index in cls_indices:
p_mask[span_idx][cls_index] = 0
submask = p_mask[span_idx]
features.append(SquadFeatures(input_ids=input_ids_span_idx, attention_mask=attention_mask_span_idx, token_type_ids=token_type_ids_span_idx, p_mask=submask, encoding=encoded_inputs[span_idx], cls_index=None, token_to_orig_map={}, example_index=0, unique_id=0, paragraph_len=0, token_is_max_context=0, tokens=[], start_position=0, end_position=0, is_impossible=False, qas_id=None))
for i, feature in enumerate(features):
fw_args = {}
others = {}
model_input_names = self.tokenizer.model_input_names + ['p_mask', 'token_type_ids']
for k, v in feature.__dict__.items():
if k in model_input_names:
tensor = torch.tensor(v)
if tensor.dtype == torch.int32:
tensor = tensor.long()
fw_args[k] = tensor.unsqueeze(0)
else:
others[k] = v
is_last = i == len(features) - 1
yield {'example': example, 'is_last': is_last, **fw_args, **others}
def _forward(self, inputs):
example = inputs['example']
model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
model_forward = self.model.forward
if 'use_cache' in inspect.signature(model_forward).parameters:
model_inputs['use_cache'] = False
output = self.model(**model_inputs)
if isinstance(output, dict):
return {'start': output['start_logits'], 'end': output['end_logits'], 'example': example, **inputs}
else:
start, end = output[:2]
return {'start': start, 'end': end, 'example': example, **inputs}
def postprocess(self, model_outputs, top_k=1, handle_impossible_answer=False, max_answer_len=15, align_to_words=True):
min_null_score = 1000000
answers = []
for output in model_outputs:
if output['start'].dtype == torch.bfloat16:
start_ = output['start'].to(torch.float32)
end_ = output['end'].to(torch.float32)
else:
start_ = output['start']
end_ = output['end']
example = output['example']
p_mask = output['p_mask']
attention_mask = output['attention_mask'].numpy() if output.get('attention_mask', None) is not None else None
pre_topk = top_k * 2 + 10 if align_to_words else top_k
starts, ends, scores, min_null_score = select_starts_ends(start_, end_, p_mask, attention_mask, min_null_score, pre_topk, handle_impossible_answer, max_answer_len)
if not self.tokenizer.is_fast:
char_to_word = np.array(example.char_to_word_offset)
for s, e, score in zip(starts, ends, scores):
token_to_orig_map = output['token_to_orig_map']
answers.append({'score': score.item(), 'start': np.where(char_to_word == token_to_orig_map[s])[0][0].item(), 'end': np.where(char_to_word == token_to_orig_map[e])[0][-1].item(), 'answer': ' '.join(example.doc_tokens[token_to_orig_map[s]:token_to_orig_map[e] + 1])})
else:
question_first = self.tokenizer.padding_side == 'right'
enc = output['encoding']
if self.tokenizer.padding_side == 'left':
offset = (output['input_ids'] == self.tokenizer.pad_token_id).numpy().sum()
else:
offset = 0
sequence_index = 1 if question_first else 0
for s, e, score in zip(starts, ends, scores):
s = s - offset
e = e - offset
start_index, end_index = self.get_indices(enc, s, e, sequence_index, align_to_words)
target_answer = example.context_text[start_index:end_index]
answer = self.get_answer(answers, target_answer)
if answer:
answer['score'] += score.item()
else:
answers.append({'score': score.item(), 'start': start_index, 'end': end_index, 'answer': example.context_text[start_index:end_index]})
if handle_impossible_answer:
answers.append({'score': min_null_score, 'start': 0, 'end': 0, 'answer': ''})
answers = sorted(answers, key=lambda x: x['score'], reverse=True)[:top_k]
if len(answers) == 1:
return answers[0]
return answers
def get_answer(self, answers: list[dict], target: str) -> Optional[dict]:
for answer in answers:
if answer['answer'].lower() == target.lower():
return answer
return None
def get_indices(self, enc: 'tokenizers.Encoding', s: int, e: int, sequence_index: int, align_to_words: bool) -> tuple[int, int]:
if align_to_words:
try:
start_word = enc.token_to_word(s)
end_word = enc.token_to_word(e)
start_index = enc.word_to_chars(start_word, sequence_index=sequence_index)[0]
end_index = enc.word_to_chars(end_word, sequence_index=sequence_index)[1]
except Exception:
start_index = enc.offsets[s][0]
end_index = enc.offsets[e][1]
else:
start_index = enc.offsets[s][0]
end_index = enc.offsets[e][1]
return (start_index, end_index)
def span_to_answer(self, text: str, start: int, end: int) -> dict[str, Union[str, int]]:
"""
When decoding from token probabilities, this method maps token indexes to actual words in the initial context.
Args:
text (`str`): The actual context to extract the answer from.
start (`int`): The answer starting token index.
end (`int`): The answer end token index.
Returns:
Dictionary like `{'answer': str, 'start': int, 'end': int}`
"""
words = []
token_idx = char_start_idx = char_end_idx = chars_idx = 0
for i, word in enumerate(text.split(' ')):
token = self.tokenizer.tokenize(word)
if start <= token_idx <= end:
if token_idx == start:
char_start_idx = chars_idx
if token_idx == end:
char_end_idx = chars_idx + len(word)
words += [word]
if token_idx > end:
break
token_idx += len(token)
chars_idx += len(word) + 1
return {'answer': ' '.join(words), 'start': max(0, char_start_idx), 'end': min(len(text), char_end_idx)}
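The token-to-character mapping performed by `span_to_answer` can be sketched independently of any real tokenizer. In the standalone version below, the `tokenize` callable is a stand-in assumption (it splits each word into single characters) purely so the token counting has something to count; the clamping with `max`/`min` from the original is dropped for brevity:

```python
def span_to_answer(text, start, end, tokenize=lambda w: list(w)):
    """Map token indexes back to a character span over whitespace-split words.

    `tokenize` stands in for a subword tokenizer; here each word is split
    into single characters, an assumption made only for illustration.
    """
    words = []
    token_idx = char_start_idx = char_end_idx = chars_idx = 0
    for word in text.split(" "):
        n_tokens = len(tokenize(word))
        if start <= token_idx <= end:
            if token_idx == start:
                char_start_idx = chars_idx  # first char of the answer
            if token_idx == end:
                char_end_idx = chars_idx + len(word)  # one past the last char
            words.append(word)
        if token_idx > end:
            break
        token_idx += n_tokens
        chars_idx += len(word) + 1  # +1 for the separating space
    return {"answer": " ".join(words), "start": char_start_idx, "end": char_end_idx}
```

With the char-level stand-in, `"My name is Bob"` tokenizes word-by-word to 2, 4, 2, 3 tokens, so token span (2, 6) recovers the words "name is" and a character span that slices the original text back to the same string.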
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class QuestionAnsweringPipeline(ChunkPipeline):
'''
Question Answering pipeline using any `ModelForQuestionAnswering`. See the [question answering
examples](../task_summary#question-answering) for more information.
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="deepset/roberta-base-squad2")
>>> oracle(question="Where do I live?", context="My name is Wolfgang and I live in Berlin")
{'score': 0.9191, 'start': 34, 'end': 40, 'answer': 'Berlin'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This question answering pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a question answering task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=question-answering).
'''
def __init__(self, model: 'PreTrainedModel', tokenizer: PreTrainedTokenizer, modelcard: Optional[ModelCard]=None, task: str='', **kwargs):
pass
@staticmethod
def create_sample(question: Union[str, list[str]], context: Union[str, list[str]]) -> Union[SquadExample, list[SquadExample]]:
'''
QuestionAnsweringPipeline leverages the [`SquadExample`] internally. This helper method encapsulate all the
logic for converting question(s) and context(s) to [`SquadExample`].
We currently support extractive question answering.
Arguments:
question (`str` or `list[str]`): The question(s) asked.
context (`str` or `list[str]`): The context(s) in which we will look for the answer.
Returns:
One or a list of [`SquadExample`]: The corresponding [`SquadExample`] grouping question and context.
'''
pass
def _sanitize_parameters(self, padding=None, topk=None, top_k=None, doc_stride=None, max_answer_len=None, max_seq_len=None, max_question_len=None, handle_impossible_answer=None, align_to_words=None, **kwargs):
pass
def __call__(self, *args, **kwargs):
'''
Answer the question(s) given as inputs by using the context(s).
Args:
question (`str` or `list[str]`):
One or several question(s) (must be used in conjunction with the `context` argument).
context (`str` or `list[str]`):
One or several context(s) associated with the question(s) (must be used in conjunction with the
`question` argument).
top_k (`int`, *optional*, defaults to 1):
The number of answers to return (will be chosen by order of likelihood). Note that we return fewer than
top_k answers if there are not enough options available within the context.
doc_stride (`int`, *optional*, defaults to 128):
If the context is too long to fit with the question for the model, it will be split in several chunks
with some overlap. This argument controls the size of that overlap.
max_answer_len (`int`, *optional*, defaults to 15):
The maximum length of predicted answers (e.g., only answers with a shorter length are considered).
max_seq_len (`int`, *optional*, defaults to 384):
The maximum length of the total sentence (context + question) in tokens of each chunk passed to the
model. The context will be split in several chunks (using `doc_stride` as overlap) if needed.
max_question_len (`int`, *optional*, defaults to 64):
The maximum length of the question after tokenization. It will be truncated if needed.
handle_impossible_answer (`bool`, *optional*, defaults to `False`):
Whether or not we accept impossible as an answer.
align_to_words (`bool`, *optional*, defaults to `True`):
Attempts to align the answer to real words. Improves quality on space separated languages. Might hurt on
non-space-separated languages (like Japanese or Chinese)
Return:
A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
- **score** (`float`) -- The probability associated to the answer.
- **start** (`int`) -- The character start index of the answer (in the tokenized version of the input).
- **end** (`int`) -- The character end index of the answer (in the tokenized version of the input).
- **answer** (`str`) -- The answer to the question.
'''
pass
def preprocess(self, example, padding='do_not_pad', doc_stride=None, max_question_len=64, max_seq_len=None):
pass
def _forward(self, inputs):
pass
def postprocess(self, model_outputs, top_k=1, handle_impossible_answer=False, max_answer_len=15, align_to_words=True):
pass
def get_answer(self, answers: list[dict], target: str) -> Optional[dict]:
pass
def get_indices(self, enc: 'tokenizers.Encoding', s: int, e: int, sequence_index: int, align_to_words: bool) -> tuple[int, int]:
pass
def span_to_answer(self, text: str, start: int, end: int) -> dict[str, Union[str, int]]:
'''
When decoding from token probabilities, this method maps token indexes to actual words in the initial context.
Args:
text (`str`): The actual context to extract the answer from.
start (`int`): The answer starting token index.
end (`int`): The answer end token index.
Returns:
Dictionary like `{'answer': str, 'start': int, 'end': int}`
'''
pass
| 13
| 4
| 46
| 4
| 32
| 10
| 7
| 0.38
| 1
| 19
| 5
| 0
| 8
| 2
| 9
| 53
| 450
| 52
| 290
| 98
| 248
| 109
| 166
| 65
| 156
| 22
| 7
| 5
| 67
|
6,437
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/table_question_answering.py
|
transformers.pipelines.table_question_answering.TableQuestionAnsweringArgumentHandler
|
from ..utils import add_end_docstrings, is_torch_available, requires_backends
from .base import ArgumentHandler, Dataset, Pipeline, PipelineException, build_pipeline_init_args
import types
class TableQuestionAnsweringArgumentHandler(ArgumentHandler):
"""
Handles arguments for the TableQuestionAnsweringPipeline
"""
def __call__(self, table=None, query=None, **kwargs):
requires_backends(self, 'pandas')
import pandas as pd
if table is None:
raise ValueError('Keyword argument `table` cannot be None.')
elif query is None:
if isinstance(table, dict) and table.get('query') is not None and (table.get('table') is not None):
tqa_pipeline_inputs = [table]
elif isinstance(table, list) and len(table) > 0:
if not all((isinstance(d, dict) for d in table)):
raise ValueError(f'Keyword argument `table` should be a list of dict, but is {[type(d) for d in table]}')
if table[0].get('query') is not None and table[0].get('table') is not None:
tqa_pipeline_inputs = table
else:
raise ValueError(f'If keyword argument `table` is a list of dictionaries, each dictionary should have `table` and `query` keys, but the first dictionary only has keys {table[0].keys()}.')
elif Dataset is not None and isinstance(table, Dataset) or isinstance(table, types.GeneratorType):
return table
else:
raise ValueError(f'Invalid input. Keyword argument `table` should be either of type `dict` or `list`, but is {type(table)})')
else:
tqa_pipeline_inputs = [{'table': table, 'query': query}]
for tqa_pipeline_input in tqa_pipeline_inputs:
if not isinstance(tqa_pipeline_input['table'], pd.DataFrame):
if tqa_pipeline_input['table'] is None:
raise ValueError('Table cannot be None.')
tqa_pipeline_input['table'] = pd.DataFrame(tqa_pipeline_input['table'])
return tqa_pipeline_inputs
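The branching above reduces to one normalization rule: every accepted input shape ends up as a list of `{'table': ..., 'query': ...}` dicts. A minimal standalone sketch of that rule, with the pandas `DataFrame` conversion step intentionally omitted so it stays dependency-free (an assumption for illustration):

```python
def normalize_tqa_inputs(table=None, query=None):
    """Normalize (table, query) inputs to a list of {'table', 'query'} dicts.

    Mirrors the input shapes accepted by the argument handler; the
    pandas DataFrame conversion is deliberately left out.
    """
    if table is None:
        raise ValueError("Keyword argument `table` cannot be None.")
    if query is None:
        # A single dict already carrying both keys, or a list of such dicts.
        if isinstance(table, dict) and table.get("query") is not None and table.get("table") is not None:
            return [table]
        if isinstance(table, list) and table and all(isinstance(d, dict) for d in table):
            return table
        raise ValueError(f"Invalid `table` of type {type(table)} without a `query`.")
    # Plain (table, query) pair: wrap it into the canonical shape.
    return [{"table": table, "query": query}]
```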
|
class TableQuestionAnsweringArgumentHandler(ArgumentHandler):
'''
Handles arguments for the TableQuestionAnsweringPipeline
'''
def __call__(self, table=None, query=None, **kwargs):
pass
| 2
| 1
| 46
| 5
| 35
| 6
| 11
| 0.25
| 1
| 3
| 0
| 0
| 1
| 0
| 1
| 22
| 51
| 6
| 36
| 5
| 33
| 9
| 22
| 5
| 19
| 11
| 5
| 3
| 11
|
6,438
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/table_question_answering.py
|
transformers.pipelines.table_question_answering.TableQuestionAnsweringPipeline
|
import collections
from ..generation import GenerationConfig
import numpy as np
from .base import ArgumentHandler, Dataset, Pipeline, PipelineException, build_pipeline_init_args
from ..utils import add_end_docstrings, is_torch_available, requires_backends
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TableQuestionAnsweringPipeline(Pipeline):
"""
Table Question Answering pipeline using a `ModelForTableQuestionAnswering`. This pipeline is only available in
PyTorch.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="google/tapas-base-finetuned-wtq")
>>> table = {
... "Repository": ["Transformers", "Datasets", "Tokenizers"],
... "Stars": ["36542", "4512", "3934"],
... "Contributors": ["651", "77", "34"],
... "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
... }
>>> oracle(query="How many stars does the transformers repository have?", table=table)
{'answer': 'AVERAGE > 36542', 'coordinates': [(0, 1)], 'cells': ['36542'], 'aggregator': 'AVERAGE'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This tabular question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"table-question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=table-question-answering).
"""
default_input_names = 'table,query'
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, args_parser=TableQuestionAnsweringArgumentHandler(), *args, **kwargs):
super().__init__(*args, **kwargs)
self._args_parser = args_parser
mapping = MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES.copy()
mapping.update(MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES)
self.check_model_type(mapping)
self.aggregate = getattr(self.model.config, 'aggregation_labels', None) and getattr(self.model.config, 'num_aggregation_labels', None)
self.type = 'tapas' if hasattr(self.model.config, 'aggregation_labels') else None
def batch_inference(self, **inputs):
return self.model(**inputs)
def sequential_inference(self, **inputs):
"""
Inference used for models that need to process sequences in a sequential fashion, like the SQA models which
handle conversational queries related to a table.
"""
all_logits = []
all_aggregations = []
prev_answers = None
batch_size = inputs['input_ids'].shape[0]
input_ids = inputs['input_ids'].to(self.device)
attention_mask = inputs['attention_mask'].to(self.device)
token_type_ids = inputs['token_type_ids'].to(self.device)
token_type_ids_example = None
for index in range(batch_size):
if prev_answers is not None:
prev_labels_example = token_type_ids_example[:, 3]
model_labels = np.zeros_like(prev_labels_example.cpu().numpy())
token_type_ids_example = token_type_ids[index]
for i in range(model_labels.shape[0]):
segment_id = token_type_ids_example[:, 0].tolist()[i]
col_id = token_type_ids_example[:, 1].tolist()[i] - 1
row_id = token_type_ids_example[:, 2].tolist()[i] - 1
if row_id >= 0 and col_id >= 0 and (segment_id == 1):
model_labels[i] = int(prev_answers[col_id, row_id])
token_type_ids_example[:, 3] = torch.from_numpy(model_labels).type(torch.long).to(self.device)
input_ids_example = input_ids[index]
attention_mask_example = attention_mask[index]
token_type_ids_example = token_type_ids[index]
outputs = self.model(input_ids=input_ids_example.unsqueeze(0), attention_mask=attention_mask_example.unsqueeze(0), token_type_ids=token_type_ids_example.unsqueeze(0))
logits = outputs.logits
if self.aggregate:
all_aggregations.append(outputs.logits_aggregation)
all_logits.append(logits)
dist_per_token = torch.distributions.Bernoulli(logits=logits)
probabilities = dist_per_token.probs * attention_mask_example.type(torch.float32).to(dist_per_token.probs.device)
coords_to_probs = collections.defaultdict(list)
for i, p in enumerate(probabilities.squeeze().tolist()):
segment_id = token_type_ids_example[:, 0].tolist()[i]
col = token_type_ids_example[:, 1].tolist()[i] - 1
row = token_type_ids_example[:, 2].tolist()[i] - 1
if col >= 0 and row >= 0 and (segment_id == 1):
coords_to_probs[col, row].append(p)
prev_answers = {key: np.array(coords_to_probs[key]).mean() > 0.5 for key in coords_to_probs}
logits_batch = torch.cat(tuple(all_logits), 0)
return (logits_batch,) if not self.aggregate else (logits_batch, torch.cat(tuple(all_aggregations), 0))
def __call__(self, *args, **kwargs):
"""
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
- `pipeline(table, query)`
- `pipeline(table, [query])`
- `pipeline(table=table, query=query)`
- `pipeline(table=table, query=[query])`
- `pipeline({"table": table, "query": query})`
- `pipeline({"table": table, "query": [query]})`
- `pipeline([{"table": table, "query": query}, {"table": table, "query": query}])`
The `table` argument should be a dict or a DataFrame built from that dict, containing the whole table:
Example:
```python
data = {
"actors": ["brad pitt", "leonardo di caprio", "george clooney"],
"age": ["56", "45", "59"],
"number of movies": ["87", "53", "69"],
"date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}
```
This dictionary can be passed in as such, or can be converted to a pandas DataFrame:
Example:
```python
import pandas as pd
table = pd.DataFrame.from_dict(data)
```
Args:
table (`pd.DataFrame` or `Dict`):
Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values.
See above for an example of dictionary.
query (`str` or `list[str]`):
Query or list of queries that will be sent to the model alongside the table.
sequential (`bool`, *optional*, defaults to `False`):
Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
inference to be done sequentially to extract relations within sequences, given their conversational
nature.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
Activates and controls padding. Accepts the following values:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (`bool`, `str` or [`TapasTruncationStrategy`], *optional*, defaults to `False`):
Activates and controls truncation. Accepts the following values:
- `True` or `'drop_rows_to_fit'`: Truncate to a maximum length specified with the argument `max_length`
or to the maximum acceptable input length for the model if that argument is not provided. This will
truncate row by row, removing rows from the table.
- `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
Return:
A dictionary or a list of dictionaries containing results: Each result is a dictionary with the following
keys:
- **answer** (`str`) -- The answer of the query given the table. If there is an aggregator, the answer will
be preceded by `AGGREGATOR >`.
- **coordinates** (`list[tuple[int, int]]`) -- Coordinates of the cells of the answers.
- **cells** (`list[str]`) -- List of strings made up of the answer cell values.
- **aggregator** (`str`) -- If the model has an aggregator, this returns the aggregator.
"""
pipeline_inputs = self._args_parser(*args, **kwargs)
results = super().__call__(pipeline_inputs, **kwargs)
if len(results) == 1:
return results[0]
return results
def _sanitize_parameters(self, sequential=None, padding=None, truncation=None, **kwargs):
preprocess_params = {}
if padding is not None:
preprocess_params['padding'] = padding
if truncation is not None:
preprocess_params['truncation'] = truncation
forward_params = {}
if sequential is not None:
forward_params['sequential'] = sequential
if getattr(self, 'assistant_model', None) is not None:
forward_params['assistant_model'] = self.assistant_model
if getattr(self, 'assistant_tokenizer', None) is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
return (preprocess_params, forward_params, {})
def preprocess(self, pipeline_input, sequential=None, padding=True, truncation=None):
if truncation is None:
if self.type == 'tapas':
truncation = 'drop_rows_to_fit'
else:
truncation = 'do_not_truncate'
table, query = (pipeline_input['table'], pipeline_input['query'])
if table.empty:
raise ValueError('table is empty')
if query is None or query == '':
raise ValueError('query is empty')
inputs = self.tokenizer(table, query, return_tensors='pt', truncation=truncation, padding=padding)
inputs['table'] = table
return inputs
def _forward(self, model_inputs, sequential=False, **generate_kwargs):
table = model_inputs.pop('table')
if self.type == 'tapas':
if sequential:
outputs = self.sequential_inference(**model_inputs)
else:
outputs = self.batch_inference(**model_inputs)
else:
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
outputs = self.model.generate(**model_inputs, **generate_kwargs)
model_outputs = {'model_inputs': model_inputs, 'table': table, 'outputs': outputs}
return model_outputs
def postprocess(self, model_outputs):
inputs = model_outputs['model_inputs']
table = model_outputs['table']
outputs = model_outputs['outputs']
if self.type == 'tapas':
if self.aggregate:
logits, logits_agg = outputs[:2]
predictions = self.tokenizer.convert_logits_to_predictions(inputs, logits, logits_agg)
answer_coordinates_batch, agg_predictions = predictions
aggregators = {i: self.model.config.aggregation_labels[pred] for i, pred in enumerate(agg_predictions)}
no_agg_label_index = self.model.config.no_aggregation_label_index
aggregators_prefix = {i: aggregators[i] + ' > ' for i, pred in enumerate(agg_predictions) if pred != no_agg_label_index}
else:
logits = outputs[0]
predictions = self.tokenizer.convert_logits_to_predictions(inputs, logits)
answer_coordinates_batch = predictions[0]
aggregators = {}
aggregators_prefix = {}
answers = []
for index, coordinates in enumerate(answer_coordinates_batch):
cells = [table.iat[coordinate] for coordinate in coordinates]
aggregator = aggregators.get(index, '')
aggregator_prefix = aggregators_prefix.get(index, '')
answer = {'answer': aggregator_prefix + ', '.join(cells), 'coordinates': coordinates, 'cells': [table.iat[coordinate] for coordinate in coordinates]}
if aggregator:
answer['aggregator'] = aggregator
answers.append(answer)
if len(answers) == 0:
raise PipelineException('Table question answering', self.model.name_or_path, 'Empty answer')
else:
answers = [{'answer': answer} for answer in self.tokenizer.batch_decode(outputs, skip_special_tokens=True)]
return answers if len(answers) > 1 else answers[0]
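The TAPAS branch of `postprocess` boils down to looking up the predicted cell coordinates and prefixing the joined cell values with the aggregator when one was predicted. A self-contained sketch, where a plain nested list stands in for `pd.DataFrame.iat` (an assumption for illustration):

```python
def decode_tapas_answer(table_cells, coordinates, aggregator=""):
    """Join predicted cells into an answer dict, mirroring postprocess().

    `table_cells` is a nested list standing in for a pandas DataFrame,
    indexed as table_cells[row][col].
    """
    cells = [table_cells[row][col] for row, col in coordinates]
    # Prefix with "AGGREGATOR > " only when a real aggregator was predicted.
    prefix = f"{aggregator} > " if aggregator else ""
    answer = {
        "answer": prefix + ", ".join(cells),
        "coordinates": coordinates,
        "cells": cells,
    }
    if aggregator:
        answer["aggregator"] = aggregator
    return answer
```

For the table from the class docstring, the cell at (0, 1) holds "36542", so an AVERAGE prediction over that single cell yields the documented `'AVERAGE > 36542'` answer string.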
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TableQuestionAnsweringPipeline(Pipeline):
'''
Table Question Answering pipeline using a `ModelForTableQuestionAnswering`. This pipeline is only available in
PyTorch.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="google/tapas-base-finetuned-wtq")
>>> table = {
... "Repository": ["Transformers", "Datasets", "Tokenizers"],
... "Stars": ["36542", "4512", "3934"],
... "Contributors": ["651", "77", "34"],
... "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
... }
>>> oracle(query="How many stars does the transformers repository have?", table=table)
{'answer': 'AVERAGE > 36542', 'coordinates': [(0, 1)], 'cells': ['36542'], 'aggregator': 'AVERAGE'}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This tabular question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"table-question-answering"`.
The models that this pipeline can use are models that have been fine-tuned on a tabular question answering task.
See the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=table-question-answering).
'''
def __init__(self, args_parser=TableQuestionAnsweringArgumentHandler(), *args, **kwargs):
pass
def batch_inference(self, **inputs):
pass
def sequential_inference(self, **inputs):
'''
Inference used for models that need to process sequences in a sequential fashion, like the SQA models which
handle conversational queries related to a table.
'''
pass
def __call__(self, *args, **kwargs):
'''
Answers queries according to a table. The pipeline accepts several types of inputs which are detailed below:
- `pipeline(table, query)`
- `pipeline(table, [query])`
- `pipeline(table=table, query=query)`
- `pipeline(table=table, query=[query])`
- `pipeline({"table": table, "query": query})`
- `pipeline({"table": table, "query": [query]})`
- `pipeline([{"table": table, "query": query}, {"table": table, "query": query}])`
The `table` argument should be a dict or a DataFrame built from that dict, containing the whole table:
Example:
```python
data = {
"actors": ["brad pitt", "leonardo di caprio", "george clooney"],
"age": ["56", "45", "59"],
"number of movies": ["87", "53", "69"],
"date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"],
}
```
This dictionary can be passed in as such, or can be converted to a pandas DataFrame:
Example:
```python
import pandas as pd
table = pd.DataFrame.from_dict(data)
```
Args:
table (`pd.DataFrame` or `Dict`):
Pandas DataFrame or dictionary that will be converted to a DataFrame containing all the table values.
See above for an example of dictionary.
query (`str` or `list[str]`):
Query or list of queries that will be sent to the model alongside the table.
sequential (`bool`, *optional*, defaults to `False`):
Whether to do inference sequentially or as a batch. Batching is faster, but models like SQA require the
inference to be done sequentially to extract relations within sequences, given their conversational
nature.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
Activates and controls padding. Accepts the following values:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (`bool`, `str` or [`TapasTruncationStrategy`], *optional*, defaults to `False`):
Activates and controls truncation. Accepts the following values:
- `True` or `'drop_rows_to_fit'`: Truncate to a maximum length specified with the argument `max_length`
or to the maximum acceptable input length for the model if that argument is not provided. This will
truncate row by row, removing rows from the table.
- `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
Return:
A dictionary or a list of dictionaries containing results: Each result is a dictionary with the following
keys:
- **answer** (`str`) -- The answer of the query given the table. If there is an aggregator, the answer will
be preceded by `AGGREGATOR >`.
- **coordinates** (`list[tuple[int, int]]`) -- Coordinates of the cells of the answers.
- **cells** (`list[str]`) -- List of strings made up of the answer cell values.
- **aggregator** (`str`) -- If the model has an aggregator, this returns the aggregator.
'''
pass
def _sanitize_parameters(self, sequential=None, padding=None, truncation=None, **kwargs):
pass
def preprocess(self, pipeline_input, sequential=None, padding=True, truncation=None):
pass
def _forward(self, model_inputs, sequential=False, **generate_kwargs):
pass
def postprocess(self, model_outputs):
pass
| 10
| 3
| 40
| 7
| 25
| 10
| 6
| 0.51
| 1
| 11
| 2
| 0
| 8
| 4
| 8
| 50
| 358
| 68
| 199
| 64
| 190
| 101
| 172
| 64
| 163
| 18
| 6
| 5
| 46
|
6,439
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text2text_generation.py
|
transformers.pipelines.text2text_generation.ReturnType
|
import enum
class ReturnType(enum.Enum):
TENSORS = 0
TEXT = 1
|
class ReturnType(enum.Enum):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 49
| 3
| 0
| 3
| 3
| 2
| 0
| 3
| 3
| 2
| 0
| 4
| 0
| 0
|
6,440
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text2text_generation.py
|
transformers.pipelines.text2text_generation.SummarizationPipeline
|
from ..utils import add_end_docstrings, is_torch_available, logging
from .base import Pipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class SummarizationPipeline(Text2TextGenerationPipeline):
"""
Summarize news articles and other documents.
This summarizing pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"summarization"`.
The models that this pipeline can use are models that have been fine-tuned on a summarization task, which is
currently, '*bart-large-cnn*', '*google-t5/t5-small*', '*google-t5/t5-base*', '*google-t5/t5-large*', '*google-t5/t5-3b*', '*google-t5/t5-11b*'. See the up-to-date
list of available models on [huggingface.co/models](https://huggingface.co/models?filter=summarization). For a list
of available parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Usage:
```python
# use bart
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```"""
return_name = 'summary'
def __call__(self, *args, **kwargs):
"""
Summarize the text(s) given as inputs.
Args:
documents (*str* or `list[str]`):
One or several articles (or one list of articles) to summarize.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **summary_text** (`str`, present when `return_text=True`) -- The summary of the corresponding input.
- **summary_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the summary.
"""
return super().__call__(*args, **kwargs)
def check_inputs(self, input_length: int, min_length: int, max_length: int) -> bool:
"""
Checks whether there might be something wrong with given input with regard to the model.
"""
if max_length < min_length:
logger.warning(f'Your min_length={min_length} must be smaller than your max_length={max_length}.')
if input_length < max_length:
logger.warning(f"Your max_length is set to {max_length}, but your input_length is only {input_length}. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length={input_length // 2})")
return True
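The `check_inputs` method above only emits warnings and never raises. A minimal pure-Python sketch of the same heuristic (the helper name `check_summary_lengths` and the logger setup are illustrative, not part of transformers) makes the two warning conditions explicit:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("summarization-check")

def check_summary_lengths(input_length: int, min_length: int, max_length: int) -> bool:
    """Mirror of the pipeline's check_inputs heuristic: warn, never raise."""
    if max_length < min_length:
        logger.warning(
            "min_length=%d must be smaller than max_length=%d.", min_length, max_length
        )
    if input_length < max_length:
        # Summaries are expected to be shorter than their inputs.
        logger.warning(
            "max_length=%d exceeds input_length=%d; consider something like max_length=%d.",
            max_length, input_length, input_length // 2,
        )
    return True

check_summary_lengths(input_length=10, min_length=5, max_length=20)
```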
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class SummarizationPipeline(Text2TextGenerationPipeline):
'''
Summarize news articles and other documents.
This summarizing pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"summarization"`.
The models that this pipeline can use are models that have been fine-tuned on a summarization task, which is
currently, '*bart-large-cnn*', '*google-t5/t5-small*', '*google-t5/t5-base*', '*google-t5/t5-large*', '*google-t5/t5-3b*', '*google-t5/t5-11b*'. See the up-to-date
list of available models on [huggingface.co/models](https://huggingface.co/models?filter=summarization). For a list
of available parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Usage:
```python
# use bart
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
```'''
def __call__(self, *args, **kwargs):
'''
Summarize the text(s) given as inputs.
Args:
documents (*str* or `list[str]`):
One or several articles (or one list of articles) to summarize.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **summary_text** (`str`, present when `return_text=True`) -- The summary of the corresponding input.
- **summary_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the summary.
'''
pass
def check_inputs(self, input_length: int, min_length: int, max_length: int) -> bool:
'''
Checks whether there might be something wrong with given input with regard to the model.
'''
pass
| 4
| 3
| 19
| 2
| 6
| 12
| 2
| 3.23
| 1
| 3
| 0
| 0
| 2
| 0
| 2
| 52
| 67
| 12
| 13
| 4
| 10
| 42
| 9
| 4
| 6
| 3
| 7
| 1
| 4
|
6,441
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text2text_generation.py
|
transformers.pipelines.text2text_generation.Text2TextGenerationPipeline
|
from .base import Pipeline, build_pipeline_init_args
from ..tokenization_utils import TruncationStrategy
import warnings
from typing import Any, Union
from ..generation import GenerationConfig
from ..utils import add_end_docstrings, is_torch_available, logging
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class Text2TextGenerationPipeline(Pipeline):
"""
Pipeline for text to text generation using seq2seq models.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Example:
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="mrm8488/t5-base-finetuned-question-generation-ap")
>>> generator(
... "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
... )
[{'generated_text': 'question: Who created the RuPERTa-base?'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
text generation parameters in [Text generation strategies](../generation_strategies) and [Text
generation](text_generation).
This Text2TextGenerationPipeline pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"text2text-generation"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text2text-generation). For a list of available
parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Usage:
```python
text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
```"""
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256, num_beams=4)
return_name = 'generated'
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.check_model_type(MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES)
def _sanitize_parameters(self, return_tensors=None, return_text=None, return_type=None, clean_up_tokenization_spaces=None, truncation=None, stop_sequence=None, **generate_kwargs):
preprocess_params = {}
if truncation is not None:
preprocess_params['truncation'] = truncation
forward_params = generate_kwargs
postprocess_params = {}
if return_tensors is not None and return_type is None:
return_type = ReturnType.TENSORS if return_tensors else ReturnType.TEXT
if return_type is not None:
postprocess_params['return_type'] = return_type
if clean_up_tokenization_spaces is not None:
postprocess_params['clean_up_tokenization_spaces'] = clean_up_tokenization_spaces
if stop_sequence is not None:
stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
if len(stop_sequence_ids) > 1:
warnings.warn('Stopping on a multiple token sequence is not yet supported on transformers. The first token of the stop sequence will be used as the stop sequence string in the interim.')
generate_kwargs['eos_token_id'] = stop_sequence_ids[0]
if self.assistant_model is not None:
forward_params['assistant_model'] = self.assistant_model
if self.assistant_tokenizer is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
return (preprocess_params, forward_params, postprocess_params)
def check_inputs(self, input_length: int, min_length: int, max_length: int):
"""
Checks whether there might be something wrong with given input with regard to the model.
"""
return True
def _parse_and_tokenize(self, *args, truncation):
prefix = self.prefix if self.prefix is not None else ''
if isinstance(args[0], list):
if self.tokenizer.pad_token_id is None:
raise ValueError('Please make sure that the tokenizer has a pad_token_id when using a batch input')
args = ([prefix + arg for arg in args[0]],)
padding = True
elif isinstance(args[0], str):
args = (prefix + args[0],)
padding = False
else:
raise TypeError(f' `args[0]`: {args[0]} has the wrong format. It should be either of type `str` or type `list`')
inputs = self.tokenizer(*args, padding=padding, truncation=truncation, return_tensors='pt')
if 'token_type_ids' in inputs:
del inputs['token_type_ids']
return inputs
def __call__(self, *args: Union[str, list[str]], **kwargs: Any) -> list[dict[str, str]]:
"""
Generate the output text(s) using text(s) given as inputs.
Args:
args (`str` or `list[str]`):
Input text for the encoder.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
truncation (`TruncationStrategy`, *optional*, defaults to `TruncationStrategy.DO_NOT_TRUNCATE`):
The truncation strategy for the tokenization within the pipeline. `TruncationStrategy.DO_NOT_TRUNCATE`
(default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's
max_length instead of throwing an error down the line.
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **generated_text** (`str`, present when `return_text=True`) -- The generated text.
- **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the generated text.
"""
result = super().__call__(*args, **kwargs)
if isinstance(args[0], list) and all((isinstance(el, str) for el in args[0])) and all((len(res) == 1 for res in result)):
return [res[0] for res in result]
return result
def preprocess(self, inputs, truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs):
inputs = self._parse_and_tokenize(inputs, truncation=truncation, **kwargs)
return inputs
def _forward(self, model_inputs, **generate_kwargs):
in_b, input_length = model_inputs['input_ids'].shape
self.check_inputs(input_length, generate_kwargs.get('min_length', self.generation_config.min_length), generate_kwargs.get('max_length', self.generation_config.max_length))
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
out_b = output_ids.shape[0]
output_ids = output_ids.reshape(in_b, out_b // in_b, *output_ids.shape[1:])
return {'output_ids': output_ids}
def postprocess(self, model_outputs, return_type=ReturnType.TEXT, clean_up_tokenization_spaces=False):
records = []
for output_ids in model_outputs['output_ids'][0]:
if return_type == ReturnType.TENSORS:
record = {f'{self.return_name}_token_ids': output_ids}
elif return_type == ReturnType.TEXT:
record = {f'{self.return_name}_text': self.tokenizer.decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=clean_up_tokenization_spaces)}
records.append(record)
return records
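In `_forward` above, `generate` returns a flat batch of shape `(in_b * n, seq)` when `num_return_sequences=n`, and the reshape regroups the `n` sequences produced for each input. A pure-Python sketch of that regrouping (nested lists stand in for tensors; `regroup_generations` is an illustrative helper, not a transformers API) shows the `out_b // in_b` arithmetic:

```python
def regroup_generations(output_ids, in_b):
    """Regroup a flat batch of generated sequences so that the n sequences
    produced for each input sit together, mimicking the tensor reshape
    (in_b * n, seq) -> (in_b, n, seq) done in _forward."""
    out_b = len(output_ids)
    n = out_b // in_b  # num_return_sequences per input
    return [output_ids[i * n:(i + 1) * n] for i in range(in_b)]

# Two inputs, two returned sequences each (token ids are made up):
flat = [[1, 2], [3, 4], [5, 6], [7, 8]]
grouped = regroup_generations(flat, in_b=2)
# grouped == [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```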
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class Text2TextGenerationPipeline(Pipeline):
'''
Pipeline for text to text generation using seq2seq models.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Example:
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="mrm8488/t5-base-finetuned-question-generation-ap")
>>> generator(
... "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google"
... )
[{'generated_text': 'question: Who created the RuPERTa-base?'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
text generation parameters in [Text generation strategies](../generation_strategies) and [Text
generation](text_generation).
This Text2TextGenerationPipeline pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"text2text-generation"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text2text-generation). For a list of available
parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Usage:
```python
text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
```'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, return_tensors=None, return_text=None, return_type=None, clean_up_tokenization_spaces=None, truncation=None, stop_sequence=None, **generate_kwargs):
pass
def check_inputs(self, input_length: int, min_length: int, max_length: int):
'''
Checks whether there might be something wrong with given input with regard to the model.
'''
pass
def _parse_and_tokenize(self, *args, truncation):
pass
def __call__(self, *args: Union[str, list[str]], **kwargs: Any) -> list[dict[str, str]]:
'''
Generate the output text(s) using text(s) given as inputs.
Args:
args (`str` or `list[str]`):
Input text for the encoder.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
truncation (`TruncationStrategy`, *optional*, defaults to `TruncationStrategy.DO_NOT_TRUNCATE`):
The truncation strategy for the tokenization within the pipeline. `TruncationStrategy.DO_NOT_TRUNCATE`
(default) will never truncate, but it is sometimes desirable to truncate the input to fit the model's
max_length instead of throwing an error down the line.
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **generated_text** (`str`, present when `return_text=True`) -- The generated text.
- **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the generated text.
'''
pass
def preprocess(self, inputs, truncation=TruncationStrategy.DO_NOT_TRUNCATE, **kwargs):
pass
def _forward(self, model_inputs, **generate_kwargs):
pass
def postprocess(self, model_outputs, return_type=ReturnType.TEXT, clean_up_tokenization_spaces=False):
pass
| 10
| 3
| 19
| 2
| 14
| 4
| 4
| 0.52
| 1
| 7
| 2
| 2
| 8
| 0
| 8
| 50
| 199
| 32
| 110
| 33
| 92
| 57
| 73
| 24
| 64
| 10
| 6
| 2
| 32
|
6,442
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text2text_generation.py
|
transformers.pipelines.text2text_generation.TranslationPipeline
|
from ..tokenization_utils import TruncationStrategy
from ..utils import add_end_docstrings, is_torch_available, logging
from .base import Pipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TranslationPipeline(Text2TextGenerationPipeline):
"""
Translates from one language to another.
This translation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"translation_xx_to_yy"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=translation).
For a list of available parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Usage:
```python
en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")
```"""
return_name = 'translation'
def check_inputs(self, input_length: int, min_length: int, max_length: int):
if input_length > 0.9 * max_length:
logger.warning(f"Your input_length: {input_length} is bigger than 0.9 * max_length: {max_length}. You might consider increasing your max_length manually, e.g. translator('...', max_length=400)")
return True
def preprocess(self, *args, truncation=TruncationStrategy.DO_NOT_TRUNCATE, src_lang=None, tgt_lang=None):
if getattr(self.tokenizer, '_build_translation_inputs', None):
return self.tokenizer._build_translation_inputs(*args, return_tensors='pt', truncation=truncation, src_lang=src_lang, tgt_lang=tgt_lang)
else:
return super()._parse_and_tokenize(*args, truncation=truncation)
def _sanitize_parameters(self, src_lang=None, tgt_lang=None, **kwargs):
preprocess_params, forward_params, postprocess_params = super()._sanitize_parameters(**kwargs)
if src_lang is not None:
preprocess_params['src_lang'] = src_lang
if tgt_lang is not None:
preprocess_params['tgt_lang'] = tgt_lang
if src_lang is None and tgt_lang is None:
task = kwargs.get('task', self.task)
items = task.split('_')
if task and len(items) == 4:
preprocess_params['src_lang'] = items[1]
preprocess_params['tgt_lang'] = items[3]
return (preprocess_params, forward_params, postprocess_params)
def __call__(self, *args, **kwargs):
"""
Translate the text(s) given as inputs.
Args:
args (`str` or `list[str]`):
Texts to be translated.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
src_lang (`str`, *optional*):
The language of the input. Might be required for multilingual models. Will not have any effect for
single pair translation models
tgt_lang (`str`, *optional*):
The language of the desired output. Might be required for multilingual models. Will not have any effect
for single pair translation models
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **translation_text** (`str`, present when `return_text=True`) -- The translation.
- **translation_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The
token ids of the translation.
"""
return super().__call__(*args, **kwargs)
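When neither `src_lang` nor `tgt_lang` is given, `_sanitize_parameters` above recovers them from task identifiers of the form `"translation_xx_to_yy"`. A standalone sketch of that `split('_')` parsing (the function name is illustrative):

```python
def parse_translation_task(task: str):
    """Extract (src_lang, tgt_lang) from a task identifier such as
    "translation_en_to_fr"; returns (None, None) if the pattern doesn't match.
    Mirrors the split('_') logic in _sanitize_parameters."""
    items = task.split("_")
    if len(items) == 4:
        return items[1], items[3]  # items == ["translation", src, "to", tgt]
    return None, None

src, tgt = parse_translation_task("translation_en_to_fr")
# src == "en", tgt == "fr"
```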
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TranslationPipeline(Text2TextGenerationPipeline):
'''
Translates from one language to another.
This translation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"translation_xx_to_yy"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=translation).
For a list of available parameters, see the [following
documentation](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.generation.GenerationMixin.generate)
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- num_beams: 4
Usage:
```python
en_fr_translator = pipeline("translation_en_to_fr")
en_fr_translator("How old are you?")
```'''
def check_inputs(self, input_length: int, min_length: int, max_length: int):
pass
def preprocess(self, *args, truncation=TruncationStrategy.DO_NOT_TRUNCATE, src_lang=None, tgt_lang=None):
pass
def _sanitize_parameters(self, src_lang=None, tgt_lang=None, **kwargs):
pass
def __call__(self, *args, **kwargs):
'''
Translate the text(s) given as inputs.
Args:
args (`str` or `list[str]`):
Texts to be translated.
return_tensors (`bool`, *optional*, defaults to `False`):
Whether or not to include the tensors of predictions (as token indices) in the outputs.
return_text (`bool`, *optional*, defaults to `True`):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
Whether or not to clean up the potential extra spaces in the text output.
src_lang (`str`, *optional*):
The language of the input. Might be required for multilingual models. Will not have any effect for
single pair translation models
tgt_lang (`str`, *optional*):
The language of the desired output. Might be required for multilingual models. Will not have any effect
for single pair translation models
generate_kwargs:
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of list of `dict`: Each result comes as a dictionary with the following keys:
- **translation_text** (`str`, present when `return_text=True`) -- The translation.
- **translation_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The
token ids of the translation.
'''
pass
| 6
| 2
| 15
| 1
| 7
| 7
| 3
| 1.35
| 1
| 3
| 1
| 0
| 4
| 1
| 4
| 54
| 85
| 12
| 31
| 10
| 26
| 42
| 25
| 9
| 20
| 5
| 7
| 2
| 10
|
6,443
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_classification.py
|
transformers.pipelines.text_classification.ClassificationFunction
|
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available
class ClassificationFunction(ExplicitEnum):
SIGMOID = 'sigmoid'
SOFTMAX = 'softmax'
NONE = 'none'
|
class ClassificationFunction(ExplicitEnum):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 4
| 0
| 4
| 4
| 3
| 0
| 4
| 4
| 3
| 0
| 1
| 0
| 0
|
6,444
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_classification.py
|
transformers.pipelines.text_classification.TextClassificationPipeline
|
import warnings
import inspect
from .base import GenericTensor, Pipeline, build_pipeline_init_args
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available
from typing import Any, Union
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True), '\n return_all_scores (`bool`, *optional*, defaults to `False`):\n Whether to return all prediction scores or just the one of the predicted class.\n function_to_apply (`str`, *optional*, defaults to `"default"`):\n The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:\n\n - `"default"`: if the model has a single label, will apply the sigmoid function on the output. If the model\n has several labels, will apply the softmax function on the output. In case of regression tasks, will not\n apply any function on the output.\n - `"sigmoid"`: Applies the sigmoid function on the output.\n - `"softmax"`: Applies the softmax function on the output.\n - `"none"`: Does not apply any function on the output.')
class TextClassificationPipeline(Pipeline):
"""
Text classification pipeline using any `ModelForSequenceClassification`. See the [sequence classification
examples](../task_summary#sequence-classification) for more information.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("This movie is disgustingly good !")
[{'label': 'POSITIVE', 'score': 1.0}]
>>> classifier("Director tried too much.")
[{'label': 'NEGATIVE', 'score': 0.996}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This text classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"sentiment-analysis"` (for classifying sequences according to positive or negative sentiments).
If multiple classification labels are available (`model.config.num_labels >= 2`), the pipeline will run a softmax
over the results. If there is a single label, the pipeline will run a sigmoid over the result. In case of regression
tasks (`model.config.problem_type == "regression"`), will not apply any function on the output.
The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text-classification).
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
return_all_scores = False
function_to_apply = ClassificationFunction.NONE
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.check_model_type(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES)
def _sanitize_parameters(self, return_all_scores=None, function_to_apply=None, top_k='', **tokenizer_kwargs):
preprocess_params = tokenizer_kwargs
postprocess_params = {}
if hasattr(self.model.config, 'return_all_scores') and return_all_scores is None:
return_all_scores = self.model.config.return_all_scores
if isinstance(top_k, int) or top_k is None:
postprocess_params['top_k'] = top_k
postprocess_params['_legacy'] = False
elif return_all_scores is not None:
warnings.warn('`return_all_scores` is now deprecated, if want a similar functionality use `top_k=None` instead of `return_all_scores=True` or `top_k=1` instead of `return_all_scores=False`.', UserWarning)
if return_all_scores:
postprocess_params['top_k'] = None
else:
postprocess_params['top_k'] = 1
if isinstance(function_to_apply, str):
function_to_apply = ClassificationFunction[function_to_apply.upper()]
if function_to_apply is not None:
postprocess_params['function_to_apply'] = function_to_apply
return (preprocess_params, {}, postprocess_params)
def __call__(self, inputs: Union[str, list[str], dict[str, str], list[dict[str, str]]], **kwargs: Any) -> list[dict[str, Any]]:
"""
Classify the text(s) given as inputs.
Args:
inputs (`str` or `list[str]` or `dict[str]`, or `list[dict[str]]`):
One or several texts to classify. In order to use text pairs for your classification, you can send a
dictionary containing `{"text", "text_pair"}` keys, or a list of those.
top_k (`int`, *optional*, defaults to `1`):
How many results to return.
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores. Accepts four different
values:
If this argument is not specified, then it will apply the following functions according to the number
of labels:
- If problem type is regression, will not apply any function on the output.
- If the model has a single label, will apply the sigmoid function on the output.
- If the model has several labels, will apply the softmax function on the output.
Possible values are:
- `"sigmoid"`: Applies the sigmoid function on the output.
- `"softmax"`: Applies the softmax function on the output.
- `"none"`: Does not apply any function on the output.
Return:
A list of `dict`: Each result comes as list of dictionaries with the following keys:
- **label** (`str`) -- The label predicted.
- **score** (`float`) -- The corresponding probability.
If `top_k` is used, one such dictionary is returned per label.
"""
inputs = (inputs,)
result = super().__call__(*inputs, **kwargs)
_legacy = 'top_k' not in kwargs
if isinstance(inputs[0], str) and _legacy:
return [result]
else:
return result
def preprocess(self, inputs, **tokenizer_kwargs) -> dict[str, GenericTensor]:
return_tensors = 'pt'
if isinstance(inputs, dict):
return self.tokenizer(**inputs, return_tensors=return_tensors, **tokenizer_kwargs)
elif isinstance(inputs, list) and len(inputs) == 1 and isinstance(inputs[0], list) and (len(inputs[0]) == 2):
return self.tokenizer(text=inputs[0][0], text_pair=inputs[0][1], return_tensors=return_tensors, **tokenizer_kwargs)
elif isinstance(inputs, list):
raise ValueError('The pipeline received invalid inputs, if you are trying to send text pairs, you can try to send a dictionary `{"text": "My text", "text_pair": "My pair"}` in order to send a text pair.')
return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
def _forward(self, model_inputs):
model_forward = self.model.forward
if 'use_cache' in inspect.signature(model_forward).parameters:
model_inputs['use_cache'] = False
return self.model(**model_inputs)
def postprocess(self, model_outputs, function_to_apply=None, top_k=1, _legacy=True):
if function_to_apply is None:
if self.model.config.problem_type == 'regression':
function_to_apply = ClassificationFunction.NONE
elif self.model.config.problem_type == 'multi_label_classification' or self.model.config.num_labels == 1:
function_to_apply = ClassificationFunction.SIGMOID
elif self.model.config.problem_type == 'single_label_classification' or self.model.config.num_labels > 1:
function_to_apply = ClassificationFunction.SOFTMAX
elif hasattr(self.model.config, 'function_to_apply') and function_to_apply is None:
function_to_apply = self.model.config.function_to_apply
else:
function_to_apply = ClassificationFunction.NONE
outputs = model_outputs['logits'][0]
outputs = outputs.float().numpy()
if function_to_apply == ClassificationFunction.SIGMOID:
scores = sigmoid(outputs)
elif function_to_apply == ClassificationFunction.SOFTMAX:
scores = softmax(outputs)
elif function_to_apply == ClassificationFunction.NONE:
scores = outputs
else:
raise ValueError(f'Unrecognized `function_to_apply` argument: {function_to_apply}')
if top_k == 1 and _legacy:
return {'label': self.model.config.id2label[scores.argmax().item()], 'score': scores.max().item()}
dict_scores = [{'label': self.model.config.id2label[i], 'score': score.item()} for i, score in enumerate(scores)]
if not _legacy:
dict_scores.sort(key=lambda x: x['score'], reverse=True)
if top_k is not None:
dict_scores = dict_scores[:top_k]
return dict_scores
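The `postprocess` method above applies sigmoid or softmax to the logits, pairs scores with labels, and optionally truncates to `top_k`. A minimal dependency-free sketch of the softmax path (pure `math` instead of numpy; `postprocess_scores` is an illustrative helper, not the pipeline's API):

```python
import math

def sigmoid(logits):
    return [1.0 / (1.0 + math.exp(-x)) for x in logits]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def postprocess_scores(logits, id2label, top_k=None):
    """Minimal sketch of the pipeline's postprocess: softmax the logits,
    pair scores with labels, sort descending, optionally truncate to top_k."""
    scores = softmax(logits)
    dict_scores = [{"label": id2label[i], "score": s} for i, s in enumerate(scores)]
    dict_scores.sort(key=lambda x: x["score"], reverse=True)
    return dict_scores[:top_k] if top_k is not None else dict_scores

result = postprocess_scores([2.0, 0.0], {0: "POSITIVE", 1: "NEGATIVE"}, top_k=1)
# result[0]["label"] == "POSITIVE"
```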
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True), '\n return_all_scores (`bool`, *optional*, defaults to `False`):\n Whether to return all prediction scores or just the one of the predicted class.\n function_to_apply (`str`, *optional*, defaults to `"default"`):\n The function to apply to the model outputs in order to retrieve the scores. Accepts four different values:\n\n - `"default"`: if the model has a single label, will apply the sigmoid function on the output. If the model\n has several labels, will apply the softmax function on the output. In case of regression tasks, will not\n apply any function on the output.\n - `"sigmoid"`: Applies the sigmoid function on the output.\n - `"softmax"`: Applies the softmax function on the output.\n - `"none"`: Does not apply any function on the output.')
class TextClassificationPipeline(Pipeline):
'''
Text classification pipeline using any `ModelForSequenceClassification`. See the [sequence classification
examples](../task_summary#sequence-classification) for more information.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
>>> classifier("This movie is disgustingly good !")
[{'label': 'POSITIVE', 'score': 1.0}]
>>> classifier("Director tried too much.")
[{'label': 'NEGATIVE', 'score': 0.996}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This text classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"sentiment-analysis"` (for classifying sequences according to positive or negative sentiments).
If multiple classification labels are available (`model.config.num_labels >= 2`), the pipeline will run a softmax
over the results. If there is a single label, the pipeline will run a sigmoid over the result. In the case of
regression tasks (`model.config.problem_type == "regression"`), no function is applied to the output.
The models that this pipeline can use are models that have been fine-tuned on a sequence classification task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text-classification).
'''
def __init__(self, **kwargs):
pass
def _sanitize_parameters(self, return_all_scores=None, function_to_apply=None, top_k='', **tokenizer_kwargs):
pass
def __call__(self, inputs: Union[str, list[str], dict[str, str], list[dict[str, str]]], **kwargs: Any) -> list[dict[str, Any]]:
'''
Classify the text(s) given as inputs.
Args:
inputs (`str` or `list[str]` or `dict[str]`, or `list[dict[str]]`):
One or several texts to classify. In order to use text pairs for your classification, you can send a
dictionary containing `{"text", "text_pair"}` keys, or a list of those.
top_k (`int`, *optional*, defaults to `1`):
How many results to return.
function_to_apply (`str`, *optional*, defaults to `"default"`):
The function to apply to the model outputs in order to retrieve the scores. Accepts four different
values:
If this argument is not specified, then it will apply the following functions according to the number
of labels:
- If problem type is regression, will not apply any function on the output.
- If the model has a single label, will apply the sigmoid function on the output.
- If the model has several labels, will apply the softmax function on the output.
Possible values are:
- `"sigmoid"`: Applies the sigmoid function on the output.
- `"softmax"`: Applies the softmax function on the output.
- `"none"`: Does not apply any function on the output.
Return:
A list of `dict`: Each result comes as list of dictionaries with the following keys:
- **label** (`str`) -- The label predicted.
- **score** (`float`) -- The corresponding probability.
If `top_k` is used, one such dictionary is returned per label.
'''
pass
def preprocess(self, inputs, **tokenizer_kwargs) -> dict[str, GenericTensor]:
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs, function_to_apply=None, top_k=1, _legacy=True):
pass
| 8
| 2
| 25
| 3
| 15
| 6
| 5
| 0.63
| 1
| 9
| 1
| 0
| 6
| 0
| 6
| 48
| 188
| 33
| 95
| 18
| 88
| 60
| 67
| 18
| 60
| 13
| 6
| 2
| 31
|
6,445
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_generation.py
|
transformers.pipelines.text_generation.Chat
|
class Chat:
"""This class is intended to be used only internally in this pipeline and not exposed to users. We convert chats
    to this format because the rest of the pipeline code tends to assume that lists of messages are
    actually a batch of samples rather than messages in the same conversation."""
def __init__(self, messages: dict):
for message in messages:
if not ('role' in message and 'content' in message):
raise ValueError("When passing chat dicts as input, each dict must have a 'role' and 'content' key.")
self.messages = messages
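A standalone re-statement of the `Chat` wrapper, showing the validation it performs on each message dict:

```python
# Re-statement of the Chat wrapper above, for illustration only.
class Chat:
    def __init__(self, messages):
        for message in messages:
            if not ("role" in message and "content" in message):
                raise ValueError(
                    "When passing chat dicts as input, each dict must have a "
                    "'role' and 'content' key."
                )
        self.messages = messages

ok = Chat([{"role": "user", "content": "hi"}])
try:
    Chat([{"role": "user"}])  # missing 'content' -> ValueError
except ValueError as err:
    print("rejected:", err)
```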
|
class Chat:
'''This class is intended to be used only internally in this pipeline and not exposed to users. We convert chats
to this format because the rest of the pipeline code tends to assume that lists of messages are
actually a batch of samples rather than messages in the same conversation.'''
def __init__(self, messages: dict):
pass
| 2
| 1
| 5
| 0
| 5
| 0
| 3
| 0.5
| 0
| 1
| 0
| 0
| 1
| 1
| 1
| 1
| 10
| 1
| 6
| 4
| 4
| 3
| 6
| 4
| 4
| 3
| 0
| 2
| 3
|
6,446
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_generation.py
|
transformers.pipelines.text_generation.ReturnType
|
import enum
class ReturnType(enum.Enum):
TENSORS = 0
NEW_TEXT = 1
FULL_TEXT = 2
|
class ReturnType(enum.Enum):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 49
| 4
| 0
| 4
| 4
| 3
| 0
| 4
| 4
| 3
| 0
| 4
| 0
| 0
|
6,447
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_generation.py
|
transformers.pipelines.text_generation.TextGenerationPipeline
|
from ..generation import GenerationConfig
from ..utils import ModelOutput, add_end_docstrings, is_torch_available
from .base import Pipeline, build_pipeline_init_args
import types
import itertools
from typing import Any, overload
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class TextGenerationPipeline(Pipeline):
"""
Language generation pipeline using any `ModelWithLMHead` or `ModelForCausalLM`. This pipeline predicts the words
that will follow a specified text prompt. When the underlying model is a conversational model, it can also accept
one or more chats, in which case the pipeline will operate in chat mode and will continue the chat(s) by adding
its response(s). Each chat takes the form of a list of dicts, where each dict contains "role" and "content" keys.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
- do_sample: True
- temperature: 0.7
Examples:
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="openai-community/gpt2")
>>> generator("I can't believe you did such a ", do_sample=False)
[{'generated_text': "I can't believe you did such a icky thing to me. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I'm so sorry. I"}]
>>> # These parameters will return several suggestions, and only the newly created text, making it easier to use them as prompt suggestions.
>>> outputs = generator("My tart needs some", num_return_sequences=4, return_full_text=False)
```
```python
>>> from transformers import pipeline
>>> generator = pipeline(model="HuggingFaceH4/zephyr-7b-beta")
>>> # Zephyr-beta is a conversational model, so let's pass it a chat instead of a single string
>>> generator([{"role": "user", "content": "What is the capital of France? Answer in one word."}], do_sample=False, max_new_tokens=2)
[{'generated_text': [{'role': 'user', 'content': 'What is the capital of France? Answer in one word.'}, {'role': 'assistant', 'content': 'Paris'}]}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial). You can pass text
generation parameters to this pipeline to control stopping criteria, decoding strategy, and more. Learn more about
text generation parameters in [Text generation strategies](../generation_strategies) and [Text
generation](text_generation).
This language generation pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"text-generation"`.
The models that this pipeline can use are models that have been trained with an autoregressive language modeling
objective. See the list of available [text completion models](https://huggingface.co/models?filter=text-generation)
and the list of [conversational models](https://huggingface.co/models?other=conversational)
on [huggingface.co/models](https://huggingface.co/models).
"""
XL_PREFIX = "\n In 1991, the remains of Russian Tsar Nicholas II and his family (except for Alexei and Maria) are discovered. The\n voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the remainder of the story. 1883 Western\n Siberia, a young Grigori Rasputin is asked by his father and a group of men to perform magic. Rasputin has a vision\n and denounces one of the men as a horse thief. Although his father initially slaps him for making such an\n accusation, Rasputin watches as the man is chased outside and beaten. Twenty years later, Rasputin sees a vision of\n the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous, with people, even a bishop,\n begging for his blessing. <eod> </s> <eos>\n "
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256, do_sample=True, temperature=0.7)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.check_model_type(MODEL_FOR_CAUSAL_LM_MAPPING_NAMES)
if 'prefix' not in self._preprocess_params:
prefix = None
if self.prefix is not None:
prefix = self.prefix
if prefix is None and self.model.__class__.__name__ in ['XLNetLMHeadModel', 'TransfoXLLMHeadModel', 'TFXLNetLMHeadModel', 'TFTransfoXLLMHeadModel']:
prefix = self.XL_PREFIX
if prefix is not None:
preprocess_params, forward_params, _ = self._sanitize_parameters(prefix=prefix, **self._forward_params)
self._preprocess_params = {**self._preprocess_params, **preprocess_params}
self._forward_params = {**self._forward_params, **forward_params}
def _sanitize_parameters(self, return_full_text=None, return_tensors=None, return_text=None, return_type=None, clean_up_tokenization_spaces=None, prefix=None, handle_long_generation=None, stop_sequence=None, truncation=None, max_length=None, continue_final_message=None, skip_special_tokens=None, tokenizer_encode_kwargs=None, **generate_kwargs):
preprocess_params = {}
add_special_tokens = False
if 'add_special_tokens' in generate_kwargs:
add_special_tokens = preprocess_params['add_special_tokens'] = generate_kwargs.pop('add_special_tokens')
if 'padding' in generate_kwargs:
preprocess_params['padding'] = generate_kwargs.pop('padding')
if truncation is not None:
preprocess_params['truncation'] = truncation
if max_length is not None:
preprocess_params['max_length'] = max_length
generate_kwargs['max_length'] = max_length
if prefix is not None:
preprocess_params['prefix'] = prefix
if prefix:
prefix_inputs = self.tokenizer(prefix, padding=False, add_special_tokens=add_special_tokens, return_tensors='pt')
generate_kwargs['prefix_length'] = prefix_inputs['input_ids'].shape[-1]
if handle_long_generation is not None:
if handle_long_generation not in {'hole'}:
raise ValueError(f"{handle_long_generation} is not a valid value for `handle_long_generation` parameter expected [None, 'hole']")
preprocess_params['handle_long_generation'] = handle_long_generation
if continue_final_message is not None:
preprocess_params['continue_final_message'] = continue_final_message
if tokenizer_encode_kwargs is not None:
preprocess_params['tokenizer_encode_kwargs'] = tokenizer_encode_kwargs
preprocess_params.update(generate_kwargs)
if stop_sequence is not None:
stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
generate_kwargs['eos_token_id'] = stop_sequence_ids
forward_params = generate_kwargs
if self.assistant_model is not None:
forward_params['assistant_model'] = self.assistant_model
if self.assistant_tokenizer is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
postprocess_params = {}
if return_full_text is not None and return_type is None:
if return_text is not None:
raise ValueError('`return_text` is mutually exclusive with `return_full_text`')
if return_tensors is not None:
raise ValueError('`return_full_text` is mutually exclusive with `return_tensors`')
return_type = ReturnType.FULL_TEXT if return_full_text else ReturnType.NEW_TEXT
if return_tensors is not None and return_type is None:
if return_text is not None:
raise ValueError('`return_text` is mutually exclusive with `return_tensors`')
return_type = ReturnType.TENSORS
if return_type is not None:
postprocess_params['return_type'] = return_type
if clean_up_tokenization_spaces is not None:
postprocess_params['clean_up_tokenization_spaces'] = clean_up_tokenization_spaces
if continue_final_message is not None:
postprocess_params['continue_final_message'] = continue_final_message
if skip_special_tokens is not None:
postprocess_params['skip_special_tokens'] = skip_special_tokens
return (preprocess_params, forward_params, postprocess_params)
def _parse_and_tokenize(self, *args, **kwargs):
"""
Parse arguments and tokenize
"""
if self.model.__class__.__name__ in ['TransfoXLLMHeadModel']:
kwargs.update({'add_space_before_punct_symbol': True})
return super()._parse_and_tokenize(*args, **kwargs)
@overload
def __call__(self, text_inputs: str, **kwargs: Any) -> list[dict[str, str]]:
...
@overload
def __call__(self, text_inputs: list[str], **kwargs: Any) -> list[list[dict[str, str]]]:
...
@overload
def __call__(self, text_inputs: ChatType, **kwargs: Any) -> list[dict[str, ChatType]]:
...
@overload
def __call__(self, text_inputs: list[ChatType], **kwargs: Any) -> list[list[dict[str, ChatType]]]:
...
def __call__(self, text_inputs, **kwargs):
"""
Complete the prompt(s) given as inputs.
Args:
text_inputs (`str`, `list[str]`, list[dict[str, str]], or `list[list[dict[str, str]]]`):
One or several prompts (or one list of prompts) to complete. If strings or a list of string are
passed, this pipeline will continue each prompt. Alternatively, a "chat", in the form of a list
of dicts with "role" and "content" keys, can be passed, or a list of such chats. When chats are passed,
the model's chat template will be used to format them before passing them to the model.
return_tensors (`bool`, *optional*, defaults to `False`):
Returns the tensors of predictions (as token indices) in the outputs. If set to
`True`, the decoded text is not returned.
return_text (`bool`, *optional*):
Returns the decoded texts in the outputs.
return_full_text (`bool`, *optional*, defaults to `True`):
If set to `False` only added text is returned, otherwise the full text is returned. Cannot be
specified at the same time as `return_text`.
clean_up_tokenization_spaces (`bool`, *optional*, defaults to `True`):
Whether or not to clean up the potential extra spaces in the text output.
continue_final_message (`bool`, *optional*): This indicates that you want the model to continue the
last message in the input chat rather than starting a new one, allowing you to "prefill" its response.
By default this is `True` when the final message in the input chat has the `assistant` role and
`False` otherwise, but you can manually override that behaviour by setting this flag.
prefix (`str`, *optional*):
Prefix added to the prompt.
handle_long_generation (`str`, *optional*):
By default, this pipeline does not handle long generation (inputs that, in one form or another,
exceed the model's maximum length). There is no perfect way to address this (more info:
https://github.com/huggingface/transformers/issues/14033#issuecomment-948385227). This provides
common strategies to work around the problem depending on your use case.
- `None`: default strategy where nothing in particular happens
- `"hole"`: Truncates the left of the input and leaves a gap wide enough to let generation happen
(this might truncate a lot of the prompt; not suitable when generation exceeds the model capacity)
tokenizer_encode_kwargs (`dict`, *optional*):
Additional keyword arguments to pass along to the encoding step of the tokenizer. If the text input is
a chat, it is passed to `apply_chat_template`. Otherwise, it is passed to `__call__`.
generate_kwargs (`dict`, *optional*):
Additional keyword arguments to pass along to the generate method of the model (see the generate method
[here](./text_generation)).
Return:
A list or a list of lists of `dict`: Returns one of the following dictionaries (cannot return a combination
of both `generated_text` and `generated_token_ids`):
- **generated_text** (`str`, present when `return_text=True`) -- The generated text.
- **generated_token_ids** (`torch.Tensor`, present when `return_tensors=True`) -- The token
ids of the generated text.
"""
if isinstance(text_inputs, (list, tuple, types.GeneratorType, KeyDataset) if is_torch_available() else (list, tuple, types.GeneratorType)):
if isinstance(text_inputs, types.GeneratorType):
text_inputs, _ = itertools.tee(text_inputs)
text_inputs, first_item = ((x for x in text_inputs), next(_))
else:
first_item = text_inputs[0]
if isinstance(first_item, (list, tuple, dict)):
if isinstance(first_item, dict):
return super().__call__(Chat(text_inputs), **kwargs)
else:
chats = (Chat(chat) for chat in text_inputs)
if isinstance(text_inputs, types.GeneratorType):
return super().__call__(chats, **kwargs)
else:
return super().__call__(list(chats), **kwargs)
return super().__call__(text_inputs, **kwargs)
def preprocess(self, prompt_text, prefix='', handle_long_generation=None, add_special_tokens=None, truncation=None, padding=None, max_length=None, continue_final_message=None, tokenizer_encode_kwargs=None, **generate_kwargs):
tokenizer_kwargs = {'add_special_tokens': add_special_tokens, 'truncation': truncation, 'padding': padding, 'max_length': max_length}
tokenizer_kwargs = {key: value for key, value in tokenizer_kwargs.items() if value is not None}
tokenizer_kwargs.update(tokenizer_encode_kwargs or {})
if isinstance(prompt_text, Chat):
tokenizer_kwargs.pop('add_special_tokens', None)
if continue_final_message is None:
continue_final_message = prompt_text.messages[-1]['role'] == 'assistant'
inputs = self.tokenizer.apply_chat_template(prompt_text.messages, add_generation_prompt=not continue_final_message, continue_final_message=continue_final_message, return_dict=True, return_tensors='pt', **tokenizer_kwargs)
else:
inputs = self.tokenizer(prefix + prompt_text, return_tensors='pt', **tokenizer_kwargs)
inputs['prompt_text'] = prompt_text
if handle_long_generation == 'hole':
cur_len = inputs['input_ids'].shape[-1]
if 'max_new_tokens' in generate_kwargs:
new_tokens = generate_kwargs['max_new_tokens']
else:
new_tokens = generate_kwargs.get('max_length', self.generation_config.max_length) - cur_len
if new_tokens < 0:
raise ValueError('We cannot infer how many new tokens are expected')
if cur_len + new_tokens > self.tokenizer.model_max_length:
keep_length = self.tokenizer.model_max_length - new_tokens
if keep_length <= 0:
raise ValueError('We cannot use `hole` to handle this generation the number of desired tokens exceeds the models max length')
inputs['input_ids'] = inputs['input_ids'][:, -keep_length:]
if 'attention_mask' in inputs:
inputs['attention_mask'] = inputs['attention_mask'][:, -keep_length:]
return inputs
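The `hole` branch of `preprocess` boils down to a small piece of arithmetic: keep only the rightmost tokens so the prompt plus the requested new tokens fit within the model's maximum length. A standalone sketch, with list slicing standing in for tensor slicing:

```python
# Sketch of the "hole" strategy from preprocess above (names are illustrative).
def hole_truncate(input_ids, new_tokens, model_max_length):
    cur_len = len(input_ids)
    if cur_len + new_tokens <= model_max_length:
        return input_ids  # already fits, nothing to do
    keep_length = model_max_length - new_tokens
    if keep_length <= 0:
        raise ValueError(
            "We cannot use `hole` to handle this generation: the number of "
            "desired tokens exceeds the model's max length"
        )
    return input_ids[-keep_length:]

ids = list(range(100))
print(len(hole_truncate(ids, 30, 64)))  # 34 tokens kept: 64 - 30
```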
def _forward(self, model_inputs, **generate_kwargs):
input_ids = model_inputs['input_ids']
attention_mask = model_inputs.get('attention_mask', None)
if input_ids.shape[1] == 0:
input_ids = None
attention_mask = None
in_b = 1
else:
in_b = input_ids.shape[0]
prompt_text = model_inputs.pop('prompt_text')
prefix_length = generate_kwargs.pop('prefix_length', 0)
if prefix_length > 0:
has_max_new_tokens = 'max_new_tokens' in generate_kwargs or ('generation_config' in generate_kwargs and generate_kwargs['generation_config'].max_new_tokens is not None)
if not has_max_new_tokens:
generate_kwargs['max_length'] = generate_kwargs.get('max_length') or self.generation_config.max_length
generate_kwargs['max_length'] += prefix_length
has_min_new_tokens = 'min_new_tokens' in generate_kwargs or ('generation_config' in generate_kwargs and generate_kwargs['generation_config'].min_new_tokens is not None)
if not has_min_new_tokens and 'min_length' in generate_kwargs:
generate_kwargs['min_length'] += prefix_length
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
output = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
if isinstance(output, ModelOutput):
generated_sequence = output.sequences
other_outputs = {k: v for k, v in output.items() if k not in {'sequences', 'past_key_values'}}
out_b = generated_sequence.shape[0]
for key, value in other_outputs.items():
if isinstance(value, torch.Tensor) and value.shape[0] == out_b:
other_outputs[key] = value.reshape(in_b, out_b // in_b, *value.shape[1:])
if isinstance(value, tuple) and len(value[0]) == out_b:
value = torch.stack(value).swapaxes(0, 1)
other_outputs[key] = value
else:
generated_sequence = output
other_outputs = {}
out_b = generated_sequence.shape[0]
generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *generated_sequence.shape[1:])
model_outputs = {'generated_sequence': generated_sequence, 'input_ids': input_ids, 'prompt_text': prompt_text}
if other_outputs:
model_outputs.update({'additional_outputs': other_outputs})
return model_outputs
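The reshape at the end of `_forward` groups the flat batch returned by `generate` (of size `in_b * num_return_sequences`) back into one group per input prompt. A sketch of the same arithmetic, with NumPy standing in for `torch`:

```python
import numpy as np

# Illustrative sizes: 2 prompts, 3 return sequences each, 5 tokens per sequence.
in_b, num_return, seq_len = 2, 3, 5
flat = np.arange(in_b * num_return * seq_len).reshape(in_b * num_return, seq_len)
out_b = flat.shape[0]

# Same expression as in _forward: rows stay in order, grouped per prompt.
grouped = flat.reshape(in_b, out_b // in_b, *flat.shape[1:])
print(grouped.shape)  # (2, 3, 5)
```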
def postprocess(self, model_outputs, return_type=ReturnType.FULL_TEXT, clean_up_tokenization_spaces=True, continue_final_message=None, skip_special_tokens=None):
generated_sequence = model_outputs['generated_sequence'][0]
input_ids = model_outputs['input_ids']
prompt_text = model_outputs['prompt_text']
generated_sequence = generated_sequence.numpy().tolist()
records = []
other_outputs = model_outputs.get('additional_outputs', {})
split_keys = {}
if other_outputs:
for k, v in other_outputs.items():
if isinstance(v, torch.Tensor) and v.shape[0] == len(generated_sequence):
split_keys[k] = v.numpy().tolist()
skip_special_tokens = skip_special_tokens if skip_special_tokens is not None else True
for idx, sequence in enumerate(generated_sequence):
if return_type == ReturnType.TENSORS:
record = {'generated_token_ids': sequence}
elif return_type in {ReturnType.NEW_TEXT, ReturnType.FULL_TEXT}:
text = self.tokenizer.decode(sequence, skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=clean_up_tokenization_spaces)
if input_ids is None:
prompt_length = 0
else:
prompt_length = len(self.tokenizer.decode(input_ids[0], skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=clean_up_tokenization_spaces))
all_text = text[prompt_length:]
if return_type == ReturnType.FULL_TEXT:
if isinstance(prompt_text, str):
all_text = prompt_text + all_text
elif isinstance(prompt_text, Chat):
if continue_final_message is None:
continue_final_message = prompt_text.messages[-1]['role'] == 'assistant'
if continue_final_message:
all_text = list(prompt_text.messages)[:-1] + [{'role': prompt_text.messages[-1]['role'], 'content': prompt_text.messages[-1]['content'] + all_text}]
else:
all_text = list(prompt_text.messages) + [{'role': 'assistant', 'content': all_text}]
record = {'generated_text': all_text}
for key, values in split_keys.items():
record[key] = values[idx]
records.append(record)
return records
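The `NEW_TEXT`/`FULL_TEXT` split in `postprocess` works at the character level: the prompt is decoded on its own and its character length is sliced off the full decode. A minimal sketch with plain strings standing in for decoded token sequences:

```python
# Plain strings stand in here for tokenizer.decode(...) results.
full_text = "Hello world, this is the continuation."
prompt_text = "Hello world,"
prompt_length = len(prompt_text)  # stands in for len(tokenizer.decode(input_ids[0]))

new_text = full_text[prompt_length:]   # ReturnType.NEW_TEXT
full = prompt_text + new_text          # ReturnType.FULL_TEXT
print(repr(new_text))
```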
| null | 17
| 3
| 56
| 5
| 42
| 10
| 12
| 0.34
| 1
| 11
| 3
| 0
| 7
| 2
| 7
| 49
| 456
| 51
| 303
| 84
| 264
| 104
| 193
| 53
| 185
| 22
| 6
| 5
| 81
|
6,448
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/text_to_audio.py
|
transformers.pipelines.text_to_audio.TextToAudioPipeline
|
from typing import Any, Union, overload
from ..generation import GenerationConfig
from .base import Pipeline
class TextToAudioPipeline(Pipeline):
"""
Text-to-audio generation pipeline using any `AutoModelForTextToWaveform` or `AutoModelForTextToSpectrogram`. This
pipeline generates an audio file from an input text and optional other conditional inputs.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")
>>> audio = output["audio"]
>>> sampling_rate = output["sampling_rate"]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
<Tip>
You can specify parameters passed to the model by using [`TextToAudioPipeline.__call__.forward_params`] or
[`TextToAudioPipeline.__call__.generate_kwargs`].
Example:
```python
>>> from transformers import pipeline
>>> music_generator = pipeline(task="text-to-audio", model="facebook/musicgen-small")
>>> # diversify the music generation by adding randomness with a high temperature and set a maximum music length
>>> generate_kwargs = {
... "do_sample": True,
... "temperature": 0.7,
... "max_new_tokens": 35,
... }
>>> outputs = music_generator("Techno music with high melodic riffs", generate_kwargs=generate_kwargs)
```
</Tip>
This pipeline can currently be loaded from [`pipeline`] using the following task identifiers: `"text-to-speech"` or
`"text-to-audio"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=text-to-speech).
"""
_pipeline_calls_generate = True
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, *args, vocoder=None, sampling_rate=None, no_processor=True, **kwargs):
super().__init__(*args, **kwargs)
self.no_processor = no_processor
self.vocoder = None
if self.model.__class__ in MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING.values():
self.vocoder = SpeechT5HifiGan.from_pretrained(DEFAULT_VOCODER_ID).to(self.model.device) if vocoder is None else vocoder
self.sampling_rate = sampling_rate
if self.vocoder is not None:
self.sampling_rate = self.vocoder.config.sampling_rate
if self.sampling_rate is None:
config = self.model.config
gen_config = self.model.__dict__.get('generation_config', None)
if gen_config is not None:
config.update(gen_config.to_dict())
for sampling_rate_name in ['sample_rate', 'sampling_rate']:
sampling_rate = getattr(config, sampling_rate_name, None)
if sampling_rate is not None:
self.sampling_rate = sampling_rate
elif getattr(config, 'codec_config', None) is not None:
sampling_rate = getattr(config.codec_config, sampling_rate_name, None)
if sampling_rate is not None:
self.sampling_rate = sampling_rate
if self.sampling_rate is None and (not self.no_processor) and hasattr(self.processor, 'feature_extractor'):
self.sampling_rate = self.processor.feature_extractor.sampling_rate
def preprocess(self, text, **kwargs):
if isinstance(text, str):
text = [text]
if self.model.config.model_type == 'bark':
new_kwargs = {'max_length': self.generation_config.semantic_config.get('max_input_semantic_length', 256), 'add_special_tokens': False, 'return_attention_mask': True, 'return_token_type_ids': False, 'padding': 'max_length'}
new_kwargs.update(kwargs)
kwargs = new_kwargs
preprocessor = self.tokenizer if self.no_processor else self.processor
output = preprocessor(text, **kwargs, return_tensors='pt')
return output
def _forward(self, model_inputs, **kwargs):
kwargs = self._ensure_tensor_on_device(kwargs, device=self.device)
forward_params = kwargs['forward_params']
generate_kwargs = kwargs['generate_kwargs']
if self.model.can_generate():
generate_kwargs = self._ensure_tensor_on_device(generate_kwargs, device=self.device)
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
forward_params.update(generate_kwargs)
output = self.model.generate(**model_inputs, **forward_params)
else:
if len(generate_kwargs):
raise ValueError(f"You're using the `TextToAudioPipeline` with a forward-only model, but `generate_kwargs` is non empty. For forward-only TTA models, please use `forward_params` instead of `generate_kwargs`. For reference, the `generate_kwargs` used here are: {generate_kwargs.keys()}")
output = self.model(**model_inputs, **forward_params)[0]
if self.vocoder is not None:
output = self.vocoder(output)
return output
@overload
def __call__(self, text_inputs: str, **forward_params: Any) -> dict[str, Any]:
...
@overload
def __call__(self, text_inputs: list[str], **forward_params: Any) -> list[dict[str, Any]]:
...
def __call__(self, text_inputs: Union[str, list[str]], **forward_params) -> Union[dict[str, Any], list[dict[str, Any]]]:
"""
Generates speech/audio from the inputs. See the [`TextToAudioPipeline`] documentation for more information.
Args:
text_inputs (`str` or `list[str]`):
The text(s) to generate.
forward_params (`dict`, *optional*):
Parameters passed to the model generation/forward method. `forward_params` are always passed to the
underlying model.
generate_kwargs (`dict`, *optional*):
The dictionary of ad-hoc parametrization of `generate_config` to be used for the generation call. For a
complete overview of generate, check the [following
guide](https://huggingface.co/docs/transformers/en/main_classes/text_generation). `generate_kwargs` are
only passed to the underlying model if the latter is a generative model.
Return:
A `dict` or a list of `dict`: The dictionaries have two keys:
- **audio** (`np.ndarray` of shape `(nb_channels, audio_length)`) -- The generated audio waveform.
- **sampling_rate** (`int`) -- The sampling rate of the generated audio waveform.
"""
return super().__call__(text_inputs, **forward_params)
def _sanitize_parameters(self, preprocess_params=None, forward_params=None, generate_kwargs=None):
if getattr(self, 'assistant_model', None) is not None:
generate_kwargs['assistant_model'] = self.assistant_model
if getattr(self, 'assistant_tokenizer', None) is not None:
generate_kwargs['tokenizer'] = self.tokenizer
generate_kwargs['assistant_tokenizer'] = self.assistant_tokenizer
params = {'forward_params': forward_params if forward_params else {}, 'generate_kwargs': generate_kwargs if generate_kwargs else {}}
if preprocess_params is None:
preprocess_params = {}
postprocess_params = {}
return (preprocess_params, params, postprocess_params)
def postprocess(self, audio):
output_dict = {}
if self.model.config.model_type == 'csm':
waveform_key = 'audio'
else:
waveform_key = 'waveform'
if self.no_processor:
if isinstance(audio, dict):
waveform = audio[waveform_key]
elif isinstance(audio, tuple):
waveform = audio[0]
else:
waveform = audio
else:
waveform = self.processor.decode(audio)
if isinstance(audio, list):
output_dict['audio'] = [el.to(device='cpu', dtype=torch.float).numpy() for el in waveform]
else:
output_dict['audio'] = waveform.to(device='cpu', dtype=torch.float).numpy()
output_dict['sampling_rate'] = self.sampling_rate
return output_dict
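The sampling-rate resolution in `__init__` probes the model config for either attribute name, then falls back to a nested codec config. A standalone sketch of that lookup, using a dummy config object (attribute names taken from the code above):

```python
class Cfg:
    # Bare attribute holder standing in for a model config object.
    pass

def resolve_sampling_rate(config):
    # Probe both spellings, then the nested codec config, as in __init__ above.
    for name in ("sample_rate", "sampling_rate"):
        rate = getattr(config, name, None)
        if rate is not None:
            return rate
        codec = getattr(config, "codec_config", None)
        if codec is not None:
            rate = getattr(codec, name, None)
            if rate is not None:
                return rate
    return None

cfg = Cfg()
cfg.codec_config = Cfg()
cfg.codec_config.sampling_rate = 24000
print(resolve_sampling_rate(cfg))  # 24000
```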
|
class TextToAudioPipeline(Pipeline):
'''
Text-to-audio generation pipeline using any `AutoModelForTextToWaveform` or `AutoModelForTextToSpectrogram`. This
pipeline generates an audio file from an input text and optional other conditional inputs.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(model="suno/bark-small")
>>> output = pipe("Hey it's HuggingFace on the phone!")
>>> audio = output["audio"]
>>> sampling_rate = output["sampling_rate"]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
<Tip>
You can specify parameters passed to the model by using [`TextToAudioPipeline.__call__.forward_params`] or
[`TextToAudioPipeline.__call__.generate_kwargs`].
Example:
```python
>>> from transformers import pipeline
>>> music_generator = pipeline(task="text-to-audio", model="facebook/musicgen-small")
>>> # diversify the music generation by adding randomness with a high temperature and set a maximum music length
>>> generate_kwargs = {
... "do_sample": True,
... "temperature": 0.7,
... "max_new_tokens": 35,
... }
>>> outputs = music_generator("Techno music with high melodic riffs", generate_kwargs=generate_kwargs)
```
</Tip>
This pipeline can currently be loaded from [`pipeline`] using the following task identifiers: `"text-to-speech"` or
`"text-to-audio"`.
See the list of available models on [huggingface.co/models](https://huggingface.co/models?filter=text-to-speech).
'''
def __init__(self, *args, vocoder=None, sampling_rate=None, no_processor=True, **kwargs):
pass
def preprocess(self, text, **kwargs):
pass
def _forward(self, model_inputs, **kwargs):
pass
@overload
def __call__(self, text_inputs: str, **forward_params: Any) -> dict[str, Any]:
pass
@overload
def __call__(self, text_inputs: list[str], **forward_params: Any) -> list[dict[str, Any]]:
pass
def __call__(self, text_inputs: Union[str, list[str]], **forward_params: Any) -> Union[dict[str, Any], list[dict[str, Any]]]:
'''
Generates speech/audio from the inputs. See the [`TextToAudioPipeline`] documentation for more information.
Args:
text_inputs (`str` or `list[str]`):
The text(s) to generate.
forward_params (`dict`, *optional*):
Parameters passed to the model generation/forward method. `forward_params` are always passed to the
underlying model.
generate_kwargs (`dict`, *optional*):
The dictionary of ad-hoc parametrization of `generate_config` to be used for the generation call. For a
complete overview of generate, check the [following
guide](https://huggingface.co/docs/transformers/en/main_classes/text_generation). `generate_kwargs` are
only passed to the underlying model if the latter is a generative model.
Return:
A `dict` or a list of `dict`: The dictionaries have two keys:
- **audio** (`np.ndarray` of shape `(nb_channels, audio_length)`) -- The generated audio waveform.
- **sampling_rate** (`int`) -- The sampling rate of the generated audio waveform.
'''
pass
def _sanitize_parameters(self, preprocess_params=None, forward_params=None, generate_kwargs=None):
pass
def postprocess(self, audio):
pass
| 11
| 2
| 23
| 4
| 15
| 4
| 5
| 0.64
| 1
| 6
| 1
| 0
| 6
| 2
| 6
| 48
| 193
| 45
| 90
| 25
| 78
| 58
| 66
| 20
| 59
| 9
| 6
| 3
| 27
|
6,449
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/token_classification.py
|
transformers.pipelines.token_classification.AggregationStrategy
|
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available
class AggregationStrategy(ExplicitEnum):
"""All the valid aggregation strategies for TokenClassificationPipeline"""
NONE = 'none'
SIMPLE = 'simple'
FIRST = 'first'
AVERAGE = 'average'
MAX = 'max'
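`ExplicitEnum` is a small transformers utility; a minimal sketch of its usual behavior (the error-message wording here is an assumption) shows why the pipeline can accept a strategy either by value or by upper-cased name:

```python
from enum import Enum

class ExplicitEnum(str, Enum):
    """String-valued enum with a clearer error on invalid values
    (sketch of the transformers utility; details are an assumption)."""
    @classmethod
    def _missing_(cls, value):
        raise ValueError(
            f"{value!r} is not a valid {cls.__name__}, "
            f"please select one of {[m.value for m in cls]}"
        )

class AggregationStrategy(ExplicitEnum):
    NONE = 'none'
    SIMPLE = 'simple'
    FIRST = 'first'
    AVERAGE = 'average'
    MAX = 'max'

# Lookup by value, or by name as done in _sanitize_parameters:
assert AggregationStrategy("simple") is AggregationStrategy.SIMPLE
assert AggregationStrategy["simple".upper()] is AggregationStrategy.SIMPLE
```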
|
class AggregationStrategy(ExplicitEnum):
'''All the valid aggregation strategies for TokenClassificationPipeline'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 0.17
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 8
| 1
| 6
| 6
| 5
| 1
| 6
| 6
| 5
| 0
| 1
| 0
| 0
|
6,450
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/token_classification.py
|
transformers.pipelines.token_classification.TokenClassificationArgumentHandler
|
import types
from .base import ArgumentHandler, ChunkPipeline, Dataset, build_pipeline_init_args
from typing import Any, Optional, Union, overload
class TokenClassificationArgumentHandler(ArgumentHandler):
"""
Handles arguments for token classification.
"""
def __call__(self, inputs: Union[str, list[str]], **kwargs):
is_split_into_words = kwargs.get('is_split_into_words', False)
delimiter = kwargs.get('delimiter')
if inputs is not None and isinstance(inputs, (list, tuple)) and (len(inputs) > 0):
inputs = list(inputs)
batch_size = len(inputs)
elif isinstance(inputs, str):
inputs = [inputs]
batch_size = 1
elif Dataset is not None and isinstance(inputs, Dataset) or isinstance(inputs, types.GeneratorType):
return (inputs, is_split_into_words, None, delimiter)
else:
raise ValueError('At least one input is required.')
offset_mapping = kwargs.get('offset_mapping')
if offset_mapping:
if isinstance(offset_mapping, list) and isinstance(offset_mapping[0], tuple):
offset_mapping = [offset_mapping]
if len(offset_mapping) != batch_size:
raise ValueError('offset_mapping should have the same batch size as the input')
return (inputs, is_split_into_words, offset_mapping, delimiter)
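The handler above normalizes inputs before batching: a list or tuple passes through, a lone string becomes a one-element batch, and generators (or datasets) are returned untouched for lazy iteration. A stripped-down sketch of that branching, without the `Dataset` and `offset_mapping` handling (the function name is illustrative):

```python
import types

def normalize_inputs(inputs):
    """Normalize pipeline inputs to (batch, batch_size); generators
    pass through with an unknown size (sketch of the handler above)."""
    if inputs is not None and isinstance(inputs, (list, tuple)) and len(inputs) > 0:
        return list(inputs), len(inputs)
    if isinstance(inputs, str):
        return [inputs], 1
    if isinstance(inputs, types.GeneratorType):
        return inputs, None
    raise ValueError("At least one input is required.")

assert normalize_inputs("hi") == (["hi"], 1)
assert normalize_inputs(("a", "b")) == (["a", "b"], 2)
```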
|
class TokenClassificationArgumentHandler(ArgumentHandler):
'''
Handles arguments for token classification.
'''
def __call__(self, inputs: Union[str, list[str]], **kwargs):
pass
| 2
| 1
| 19
| 1
| 18
| 0
| 7
| 0.16
| 1
| 4
| 0
| 0
| 1
| 0
| 1
| 22
| 24
| 2
| 19
| 4
| 17
| 3
| 16
| 4
| 14
| 7
| 5
| 2
| 7
|
6,451
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/token_classification.py
|
transformers.pipelines.token_classification.TokenClassificationPipeline
|
import warnings
from ..utils import ExplicitEnum, add_end_docstrings, is_torch_available
from ..models.bert.tokenization_bert import BasicTokenizer
from .base import ArgumentHandler, ChunkPipeline, Dataset, build_pipeline_init_args
from typing import Any, Optional, Union, overload
import numpy as np
import torch
from ..models.auto.modeling_auto import MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True), '\n ignore_labels (`list[str]`, defaults to `["O"]`):\n A list of labels to ignore.\n grouped_entities (`bool`, *optional*, defaults to `False`):\n DEPRECATED, use `aggregation_strategy` instead. Whether or not to group the tokens corresponding to the\n same entity together in the predictions or not.\n stride (`int`, *optional*):\n If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size\n model_max_length. Works only with fast tokenizers and `aggregation_strategy` different from `NONE`. The\n value of this argument defines the number of overlapping tokens between chunks. In other words, the model\n will shift forward by `tokenizer.model_max_length - stride` tokens each step.\n aggregation_strategy (`str`, *optional*, defaults to `"none"`):\n The strategy to fuse (or not) tokens based on the model prediction.\n\n - "none" : Will simply not do any aggregation and simply return raw results from the model\n - "simple" : Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C,\n I-TAG), (D, B-TAG2) (E, B-TAG2) will end up being [{"word": ABC, "entity": "TAG"}, {"word": "D",\n "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}] Notice that two consecutive B tags will end up as\n different entities. On word based languages, we might end up splitting words undesirably : Imagine\n Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity":\n "NAME"}]. Look for FIRST, MAX, AVERAGE for ways to mitigate that and disambiguate words (on languages\n that support that meaning, which is basically tokens separated by a space). These mitigations will\n only work on real words, "New york" might still be tagged with two different entities.\n - "first" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot\n end up with different tags. Words will simply use the tag of the first token of the word when there\n is ambiguity.\n - "average" : (works only on word based models) Will use the `SIMPLE` strategy except that words,\n cannot end up with different tags. scores will be averaged first across tokens, and then the maximum\n label is applied.\n - "max" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot\n end up with different tags. Word entity will simply be the token with the maximum score.')
class TokenClassificationPipeline(ChunkPipeline):
"""
Named Entity Recognition pipeline using any `ModelForTokenClassification`. See the [named entity recognition
examples](../task_summary#named-entity-recognition) for more information.
Example:
```python
>>> from transformers import pipeline
>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]
>>> token = tokens[0]
>>> # Start and end provide an easy way to highlight words in the original text.
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'
>>> # Some models use the same idea to do part of speech.
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This token recognition pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"ner"` (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=token-classification).
"""
default_input_names = 'sequences'
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
def __init__(self, args_parser=TokenClassificationArgumentHandler(), *args, **kwargs):
super().__init__(*args, **kwargs)
self.check_model_type(MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES)
self._basic_tokenizer = BasicTokenizer(do_lower_case=False)
self._args_parser = args_parser
def _sanitize_parameters(self, ignore_labels=None, grouped_entities: Optional[bool]=None, ignore_subwords: Optional[bool]=None, aggregation_strategy: Optional[AggregationStrategy]=None, offset_mapping: Optional[list[tuple[int, int]]]=None, is_split_into_words: Optional[bool]=False, stride: Optional[int]=None, delimiter: Optional[str]=None):
preprocess_params = {}
preprocess_params['is_split_into_words'] = is_split_into_words
if is_split_into_words:
preprocess_params['delimiter'] = ' ' if delimiter is None else delimiter
if offset_mapping is not None:
preprocess_params['offset_mapping'] = offset_mapping
postprocess_params = {}
if grouped_entities is not None or ignore_subwords is not None:
if grouped_entities and ignore_subwords:
aggregation_strategy = AggregationStrategy.FIRST
elif grouped_entities and (not ignore_subwords):
aggregation_strategy = AggregationStrategy.SIMPLE
else:
aggregation_strategy = AggregationStrategy.NONE
if grouped_entities is not None:
warnings.warn(f'`grouped_entities` is deprecated and will be removed in version v5.0.0, defaulted to `aggregation_strategy="{aggregation_strategy}"` instead.')
if ignore_subwords is not None:
warnings.warn(f'`ignore_subwords` is deprecated and will be removed in version v5.0.0, defaulted to `aggregation_strategy="{aggregation_strategy}"` instead.')
if aggregation_strategy is not None:
if isinstance(aggregation_strategy, str):
aggregation_strategy = AggregationStrategy[aggregation_strategy.upper()]
if aggregation_strategy in {AggregationStrategy.FIRST, AggregationStrategy.MAX, AggregationStrategy.AVERAGE} and (not self.tokenizer.is_fast):
raise ValueError('Slow tokenizers cannot handle subwords. Please set the `aggregation_strategy` option to `"simple"` or use a fast tokenizer.')
postprocess_params['aggregation_strategy'] = aggregation_strategy
if ignore_labels is not None:
postprocess_params['ignore_labels'] = ignore_labels
if stride is not None:
if stride >= self.tokenizer.model_max_length:
raise ValueError('`stride` must be less than `tokenizer.model_max_length` (or even lower if the tokenizer adds special tokens)')
if aggregation_strategy == AggregationStrategy.NONE:
raise ValueError(f'`stride` was provided to process all the text but `aggregation_strategy="{aggregation_strategy}"`, please select another one instead.')
elif self.tokenizer.is_fast:
tokenizer_params = {'return_overflowing_tokens': True, 'padding': True, 'stride': stride}
preprocess_params['tokenizer_params'] = tokenizer_params
else:
raise ValueError("`stride` was provided to process all the text but you're using a slow tokenizer. Please use a fast tokenizer.")
return (preprocess_params, {}, postprocess_params)
@overload
def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, str]]:
...
@overload
def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, str]]]:
...
def __call__(self, inputs: Union[str, list[str]], **kwargs: Any) -> Union[list[dict[str, str]], list[list[dict[str, str]]]]:
"""
Classify each token of the text(s) given as inputs.
Args:
inputs (`str` or `List[str]`):
One or several texts (or one list of texts) for token classification. Can be pre-tokenized when
`is_split_into_words=True`.
Return:
A list or a list of list of `dict`: Each result comes as a list of dictionaries (one for each token in the
corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with
the following keys:
- **word** (`str`) -- The token/word classified. This is obtained by decoding the selected tokens. If you
want to have the exact string in the original sentence, use `start` and `end`.
- **score** (`float`) -- The corresponding probability for `entity`.
- **entity** (`str`) -- The entity predicted for that token/word (it is named *entity_group* when
*aggregation_strategy* is not `"none"`).
- **index** (`int`, only present when `aggregation_strategy="none"`) -- The index of the corresponding
token in the sentence.
- **start** (`int`, *optional*) -- The index of the start of the corresponding entity in the sentence. Only
exists if the offsets are available within the tokenizer
- **end** (`int`, *optional*) -- The index of the end of the corresponding entity in the sentence. Only
exists if the offsets are available within the tokenizer
"""
_inputs, is_split_into_words, offset_mapping, delimiter = self._args_parser(inputs, **kwargs)
kwargs['is_split_into_words'] = is_split_into_words
kwargs['delimiter'] = delimiter
if is_split_into_words and (not all((isinstance(input, list) for input in inputs))):
return super().__call__([inputs], **kwargs)
if offset_mapping:
kwargs['offset_mapping'] = offset_mapping
return super().__call__(inputs, **kwargs)
def preprocess(self, sentence, offset_mapping=None, **preprocess_params):
tokenizer_params = preprocess_params.pop('tokenizer_params', {})
truncation = self.tokenizer.model_max_length and self.tokenizer.model_max_length > 0
word_to_chars_map = None
is_split_into_words = preprocess_params['is_split_into_words']
if is_split_into_words:
delimiter = preprocess_params['delimiter']
if not isinstance(sentence, list):
raise ValueError('When `is_split_into_words=True`, `sentence` must be a list of tokens.')
words = sentence
sentence = delimiter.join(words)
word_to_chars_map = []
delimiter_len = len(delimiter)
char_offset = 0
for word in words:
word_to_chars_map.append((char_offset, char_offset + len(word)))
char_offset += len(word) + delimiter_len
text_to_tokenize = words
tokenizer_params['is_split_into_words'] = True
else:
if not isinstance(sentence, str):
raise ValueError('When `is_split_into_words=False`, `sentence` must be an untokenized string.')
text_to_tokenize = sentence
inputs = self.tokenizer(text_to_tokenize, return_tensors='pt', truncation=truncation, return_special_tokens_mask=True, return_offsets_mapping=self.tokenizer.is_fast, **tokenizer_params)
if is_split_into_words and (not self.tokenizer.is_fast):
raise ValueError('is_split_into_words=True is only supported with fast tokenizers.')
inputs.pop('overflow_to_sample_mapping', None)
num_chunks = len(inputs['input_ids'])
for i in range(num_chunks):
model_inputs = {k: v[i].unsqueeze(0) for k, v in inputs.items()}
if offset_mapping is not None:
model_inputs['offset_mapping'] = offset_mapping
model_inputs['sentence'] = sentence if i == 0 else None
model_inputs['is_last'] = i == num_chunks - 1
if word_to_chars_map is not None:
model_inputs['word_ids'] = inputs.word_ids(i)
model_inputs['word_to_chars_map'] = word_to_chars_map
yield model_inputs
def _forward(self, model_inputs):
special_tokens_mask = model_inputs.pop('special_tokens_mask')
offset_mapping = model_inputs.pop('offset_mapping', None)
sentence = model_inputs.pop('sentence')
is_last = model_inputs.pop('is_last')
word_ids = model_inputs.pop('word_ids', None)
word_to_chars_map = model_inputs.pop('word_to_chars_map', None)
output = self.model(**model_inputs)
logits = output['logits'] if isinstance(output, dict) else output[0]
return {'logits': logits, 'special_tokens_mask': special_tokens_mask, 'offset_mapping': offset_mapping, 'sentence': sentence, 'is_last': is_last, 'word_ids': word_ids, 'word_to_chars_map': word_to_chars_map, **model_inputs}
def postprocess(self, all_outputs, aggregation_strategy=AggregationStrategy.NONE, ignore_labels=None):
if ignore_labels is None:
ignore_labels = ['O']
all_entities = []
word_to_chars_map = all_outputs[0].get('word_to_chars_map')
for model_outputs in all_outputs:
if model_outputs['logits'][0].dtype in (torch.bfloat16, torch.float16):
logits = model_outputs['logits'][0].to(torch.float32).numpy()
else:
logits = model_outputs['logits'][0].numpy()
sentence = all_outputs[0]['sentence']
input_ids = model_outputs['input_ids'][0]
offset_mapping = model_outputs['offset_mapping'][0] if model_outputs['offset_mapping'] is not None else None
special_tokens_mask = model_outputs['special_tokens_mask'][0].numpy()
word_ids = model_outputs.get('word_ids')
maxes = np.max(logits, axis=-1, keepdims=True)
shifted_exp = np.exp(logits - maxes)
scores = shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)
pre_entities = self.gather_pre_entities(sentence, input_ids, scores, offset_mapping, special_tokens_mask, aggregation_strategy, word_ids=word_ids, word_to_chars_map=word_to_chars_map)
grouped_entities = self.aggregate(pre_entities, aggregation_strategy)
entities = [entity for entity in grouped_entities if entity.get('entity', None) not in ignore_labels and entity.get('entity_group', None) not in ignore_labels]
all_entities.extend(entities)
num_chunks = len(all_outputs)
if num_chunks > 1:
all_entities = self.aggregate_overlapping_entities(all_entities)
return all_entities
def aggregate_overlapping_entities(self, entities):
if len(entities) == 0:
return entities
entities = sorted(entities, key=lambda x: x['start'])
aggregated_entities = []
previous_entity = entities[0]
for entity in entities:
if previous_entity['start'] <= entity['start'] < previous_entity['end']:
current_length = entity['end'] - entity['start']
previous_length = previous_entity['end'] - previous_entity['start']
if current_length > previous_length or (current_length == previous_length and entity['score'] > previous_entity['score']):
previous_entity = entity
else:
aggregated_entities.append(previous_entity)
previous_entity = entity
aggregated_entities.append(previous_entity)
return aggregated_entities
def gather_pre_entities(self, sentence: str, input_ids: np.ndarray, scores: np.ndarray, offset_mapping: Optional[list[tuple[int, int]]], special_tokens_mask: np.ndarray, aggregation_strategy: AggregationStrategy, word_ids: Optional[list[Optional[int]]]=None, word_to_chars_map: Optional[list[tuple[int, int]]]=None) -> list[dict]:
"""Fuse various numpy arrays into dicts with all the information needed for aggregation"""
pre_entities = []
for idx, token_scores in enumerate(scores):
if special_tokens_mask[idx]:
continue
word = self.tokenizer.convert_ids_to_tokens(int(input_ids[idx]))
if offset_mapping is not None:
start_ind, end_ind = offset_mapping[idx]
if word_ids is not None and word_to_chars_map is not None:
word_index = word_ids[idx]
if word_index is not None:
start_char, _ = word_to_chars_map[word_index]
start_ind += start_char
end_ind += start_char
if not isinstance(start_ind, int):
start_ind = start_ind.item()
end_ind = end_ind.item()
word_ref = sentence[start_ind:end_ind]
if getattr(self.tokenizer, '_tokenizer', None) and getattr(self.tokenizer._tokenizer.model, 'continuing_subword_prefix', None):
is_subword = len(word) != len(word_ref)
else:
if aggregation_strategy in {AggregationStrategy.FIRST, AggregationStrategy.AVERAGE, AggregationStrategy.MAX}:
warnings.warn('Tokenizer does not support real words, using fallback heuristic', UserWarning)
is_subword = start_ind > 0 and ' ' not in sentence[start_ind - 1:start_ind + 1]
if int(input_ids[idx]) == self.tokenizer.unk_token_id:
word = word_ref
is_subword = False
else:
start_ind = None
end_ind = None
is_subword = False
pre_entity = {'word': word, 'scores': token_scores, 'start': start_ind, 'end': end_ind, 'index': idx, 'is_subword': is_subword}
pre_entities.append(pre_entity)
return pre_entities
def aggregate(self, pre_entities: list[dict], aggregation_strategy: AggregationStrategy) -> list[dict]:
if aggregation_strategy in {AggregationStrategy.NONE, AggregationStrategy.SIMPLE}:
entities = []
for pre_entity in pre_entities:
entity_idx = pre_entity['scores'].argmax()
score = pre_entity['scores'][entity_idx]
entity = {'entity': self.model.config.id2label[entity_idx], 'score': score, 'index': pre_entity['index'], 'word': pre_entity['word'], 'start': pre_entity['start'], 'end': pre_entity['end']}
entities.append(entity)
else:
entities = self.aggregate_words(pre_entities, aggregation_strategy)
if aggregation_strategy == AggregationStrategy.NONE:
return entities
return self.group_entities(entities)
def aggregate_word(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> dict:
word = self.tokenizer.convert_tokens_to_string([entity['word'] for entity in entities])
if aggregation_strategy == AggregationStrategy.FIRST:
scores = entities[0]['scores']
idx = scores.argmax()
score = scores[idx]
entity = self.model.config.id2label[idx]
elif aggregation_strategy == AggregationStrategy.MAX:
max_entity = max(entities, key=lambda entity: entity['scores'].max())
scores = max_entity['scores']
idx = scores.argmax()
score = scores[idx]
entity = self.model.config.id2label[idx]
elif aggregation_strategy == AggregationStrategy.AVERAGE:
scores = np.stack([entity['scores'] for entity in entities])
average_scores = np.nanmean(scores, axis=0)
entity_idx = average_scores.argmax()
entity = self.model.config.id2label[entity_idx]
score = average_scores[entity_idx]
else:
raise ValueError('Invalid aggregation_strategy')
new_entity = {'entity': entity, 'score': score, 'word': word, 'start': entities[0]['start'], 'end': entities[-1]['end']}
return new_entity
def aggregate_words(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> list[dict]:
"""
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft|
company| B-ENT I-ENT
"""
if aggregation_strategy in {AggregationStrategy.NONE, AggregationStrategy.SIMPLE}:
raise ValueError('NONE and SIMPLE strategies are invalid for word aggregation')
word_entities = []
word_group = None
for entity in entities:
if word_group is None:
word_group = [entity]
elif entity['is_subword']:
word_group.append(entity)
else:
word_entities.append(self.aggregate_word(word_group, aggregation_strategy))
word_group = [entity]
if word_group is not None:
word_entities.append(self.aggregate_word(word_group, aggregation_strategy))
return word_entities
def group_sub_entities(self, entities: list[dict]) -> dict:
"""
Group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
"""
entity = entities[0]['entity'].split('-', 1)[-1]
scores = np.nanmean([entity['score'] for entity in entities])
tokens = [entity['word'] for entity in entities]
entity_group = {'entity_group': entity, 'score': np.mean(scores), 'word': self.tokenizer.convert_tokens_to_string(tokens), 'start': entities[0]['start'], 'end': entities[-1]['end']}
return entity_group
def get_tag(self, entity_name: str) -> tuple[str, str]:
if entity_name.startswith('B-'):
bi = 'B'
tag = entity_name[2:]
elif entity_name.startswith('I-'):
bi = 'I'
tag = entity_name[2:]
else:
bi = 'I'
tag = entity_name
return (bi, tag)
def group_entities(self, entities: list[dict]) -> list[dict]:
"""
Find and group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
"""
entity_groups = []
entity_group_disagg = []
for entity in entities:
if not entity_group_disagg:
entity_group_disagg.append(entity)
continue
bi, tag = self.get_tag(entity['entity'])
last_bi, last_tag = self.get_tag(entity_group_disagg[-1]['entity'])
if tag == last_tag and bi != 'B':
entity_group_disagg.append(entity)
else:
entity_groups.append(self.group_sub_entities(entity_group_disagg))
entity_group_disagg = [entity]
if entity_group_disagg:
entity_groups.append(self.group_sub_entities(entity_group_disagg))
return entity_groups
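`postprocess` above converts each chunk's logits to probabilities with a max-shifted softmax before aggregation. The same numerically stable trick in isolation (subtracting the row-wise max leaves the result unchanged but keeps `np.exp` from overflowing):

```python
import numpy as np

def stable_softmax(logits):
    # Same max-shift trick as in postprocess above: subtract the
    # row-wise max before exponentiating to avoid overflow in np.exp.
    maxes = np.max(logits, axis=-1, keepdims=True)
    shifted_exp = np.exp(logits - maxes)
    return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True)

# Large logits would overflow a naive exp(); the shifted version is fine.
scores = stable_softmax(np.array([[1000.0, 1001.0], [0.0, 0.0]]))
```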
|
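The `get_tag`/`group_entities` pair above implements standard BIO grouping: adjacent tokens merge while the tag matches and no fresh `B-` prefix appears. A standalone toy version operating on label strings only (function names here are illustrative, not part of the pipeline API):

```python
def get_tag(entity_name):
    # Split "B-PER" / "I-PER" into (prefix, tag); bare tags count as "I".
    if entity_name.startswith("B-"):
        return "B", entity_name[2:]
    if entity_name.startswith("I-"):
        return "I", entity_name[2:]
    return "I", entity_name

def group_bio(labels):
    """Toy version of group_entities above: adjacent labels with the
    same tag (and no fresh B- prefix) are merged into one group."""
    groups, current = [], []
    for label in labels:
        if not current:
            current = [label]
            continue
        bi, tag = get_tag(label)
        _, last_tag = get_tag(current[-1])
        if tag == last_tag and bi != "B":
            current.append(label)
        else:
            groups.append(current)
            current = [label]
    if current:
        groups.append(current)
    return groups

print(group_bio(["B-PER", "I-PER", "O", "B-LOC", "B-LOC"]))
# [['B-PER', 'I-PER'], ['O'], ['B-LOC'], ['B-LOC']]
```

Note how the two consecutive `B-LOC` labels stay separate entities, exactly as the `aggregation_strategy` docstring warns.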
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True), '\n ignore_labels (`list[str]`, defaults to `["O"]`):\n A list of labels to ignore.\n grouped_entities (`bool`, *optional*, defaults to `False`):\n DEPRECATED, use `aggregation_strategy` instead. Whether or not to group the tokens corresponding to the\n same entity together in the predictions or not.\n stride (`int`, *optional*):\n If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size\n model_max_length. Works only with fast tokenizers and `aggregation_strategy` different from `NONE`. The\n value of this argument defines the number of overlapping tokens between chunks. In other words, the model\n will shift forward by `tokenizer.model_max_length - stride` tokens each step.\n aggregation_strategy (`str`, *optional*, defaults to `"none"`):\n The strategy to fuse (or not) tokens based on the model prediction.\n\n - "none" : Will simply not do any aggregation and simply return raw results from the model\n - "simple" : Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C,\n I-TAG), (D, B-TAG2) (E, B-TAG2) will end up being [{"word": ABC, "entity": "TAG"}, {"word": "D",\n "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}] Notice that two consecutive B tags will end up as\n different entities. On word based languages, we might end up splitting words undesirably : Imagine\n Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity":\n "NAME"}]. Look for FIRST, MAX, AVERAGE for ways to mitigate that and disambiguate words (on languages\n that support that meaning, which is basically tokens separated by a space). These mitigations will\n only work on real words, "New york" might still be tagged with two different entities.\n - "first" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot\n end up with different tags. Words will simply use the tag of the first token of the word when there\n is ambiguity.\n - "average" : (works only on word based models) Will use the `SIMPLE` strategy except that words,\n cannot end up with different tags. scores will be averaged first across tokens, and then the maximum\n label is applied.\n - "max" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot\n end up with different tags. Word entity will simply be the token with the maximum score.')
class TokenClassificationPipeline(ChunkPipeline):
'''
Named Entity Recognition pipeline using any `ModelForTokenClassification`. See the [named entity recognition
examples](../task_summary#named-entity-recognition) for more information.
Example:
```python
>>> from transformers import pipeline
>>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
>>> sentence = "Je m'appelle jean-baptiste et je vis à montréal"
>>> tokens = token_classifier(sentence)
>>> tokens
[{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}]
>>> token = tokens[0]
>>> # Start and end provide an easy way to highlight words in the original text.
>>> sentence[token["start"] : token["end"]]
' jean-baptiste'
>>> # Some models use the same idea to do part of speech.
>>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple")
>>> syntaxer("My name is Sarah and I live in London")
[{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This token recognition pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"ner"` (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous).
The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=token-classification).
'''
def __init__(self, args_parser=TokenClassificationArgumentHandler(), *args, **kwargs):
pass
def _sanitize_parameters(self, ignore_labels=None, grouped_entities: Optional[bool]=None, ignore_subwords: Optional[bool]=None, aggregation_strategy: Optional[AggregationStrategy]=None, offset_mapping: Optional[list[tuple[int, int]]]=None, is_split_into_words: Optional[bool]=False, stride: Optional[int]=None, delimiter: Optional[str]=None):
pass
@overload
def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, str]]:
pass
@overload
def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, str]]]:
pass
def __call__(self, inputs: Union[str, list[str]], **kwargs: Any) -> Union[list[dict[str, str]], list[list[dict[str, str]]]]:
'''
Classify each token of the text(s) given as inputs.
Args:
inputs (`str` or `List[str]`):
One or several texts (or one list of texts) for token classification. Can be pre-tokenized when
`is_split_into_words=True`.
Return:
A list or a list of list of `dict`: Each result comes as a list of dictionaries (one for each token in the
corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with
the following keys:
- **word** (`str`) -- The token/word classified. This is obtained by decoding the selected tokens. If you
want to have the exact string in the original sentence, use `start` and `end`.
- **score** (`float`) -- The corresponding probability for `entity`.
- **entity** (`str`) -- The entity predicted for that token/word (it is named *entity_group* when
*aggregation_strategy* is not `"none"`).
- **index** (`int`, only present when `aggregation_strategy="none"`) -- The index of the corresponding
token in the sentence.
- **start** (`int`, *optional*) -- The index of the start of the corresponding entity in the sentence. Only
exists if the offsets are available within the tokenizer
- **end** (`int`, *optional*) -- The index of the end of the corresponding entity in the sentence. Only
exists if the offsets are available within the tokenizer
'''
pass
def preprocess(self, sentence, offset_mapping=None, **preprocess_params):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, all_outputs, aggregation_strategy=AggregationStrategy.NONE, ignore_labels=None):
pass
def aggregate_overlapping_entities(self, entities):
pass
def gather_pre_entities(self, sentence: str, input_ids: np.ndarray, scores: np.ndarray, offset_mapping: Optional[list[tuple[int, int]]], special_tokens_mask: np.ndarray, aggregation_strategy: AggregationStrategy, word_ids: Optional[list[Optional[int]]]=None, word_to_chars_map: Optional[list[tuple[int, int]]]=None) -> list[dict]:
'''Fuse various numpy arrays into dicts with all the information needed for aggregation'''
pass
def aggregate_word(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> dict:
pass
def aggregate_words(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> list[dict]:
'''
Override tokens from a given word that disagree to force agreement on word boundaries.
Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft|
company| B-ENT I-ENT
'''
pass
def group_sub_entities(self, entities: list[dict]) -> dict:
'''
Group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
'''
pass
def get_tag(self, entity_name: str) -> tuple[str, str]:
pass
def group_entities(self, entities: list[dict]) -> list[dict]:
'''
Find and group together the adjacent tokens with the same entity predicted.
Args:
entities (`dict`): The entities predicted by the pipeline.
'''
pass
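The `get_tag`/`group_entities` pair above implements BIO-style grouping. A minimal pure-Python sketch of that logic (a simplification for illustration: words are joined with spaces instead of the tokenizer's detokenizer, and scores/offsets are omitted) might look like:

```python
def get_tag(entity_name):
    # Split a BIO label such as "B-PER"/"I-PER" into (prefix, tag).
    if entity_name.startswith("B-"):
        return "B", entity_name[2:]
    if entity_name.startswith("I-"):
        return "I", entity_name[2:]
    # Bare labels are treated as continuations, mirroring the pipeline.
    return "I", entity_name

def group_entities(tokens):
    # Merge adjacent tokens that share a tag; a "B-" prefix starts a new group.
    groups, current = [], []
    for tok in tokens:
        prefix, tag = get_tag(tok["entity"])
        if current and prefix != "B" and tag == get_tag(current[-1]["entity"])[1]:
            current.append(tok)
        else:
            if current:
                groups.append(current)
            current = [tok]
    if current:
        groups.append(current)
    return [
        {
            "entity_group": get_tag(group[0]["entity"])[1],
            "word": " ".join(tok["word"] for tok in group),
        }
        for group in groups
    ]
```

For example, `B-PER I-PER B-ORG` yields two groups: a `PER` span covering the first two words and an `ORG` span for the third.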
metrics: total_program_units=20, total_doc_str=6, AvgCountLine=30, AvgCountLineBlank=2, AvgCountLineCode=24, AvgCountLineComment=4, AvgCyclomatic=5, CommentToCodeRatio=0.24, CountClassBase=1, CountClassCoupled=12, CountClassCoupledModified=3, CountClassDerived=0, CountDeclInstanceMethod=14, CountDeclInstanceVariable=3, CountDeclMethod=14, CountDeclMethodAll=58, CountLine=477, CountLineBlank=53, CountLineCode=342, CountLineCodeDecl=104, CountLineCodeExe=311, CountLineComment=82, CountStmt=216, CountStmtDecl=87, CountStmtExe=201, MaxCyclomatic=15, MaxInheritanceTree=7, MaxNesting=4, SumCyclomatic=74

id: 6452
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/video_classification.py
class_name: transformers.pipelines.video_classification.VideoClassificationPipeline
import warnings
from io import BytesIO
from typing import Any, Optional, Union, overload

import requests

from ..utils import add_end_docstrings, is_av_available, is_torch_available, logging, requires_backends
from .base import Pipeline, build_pipeline_init_args

# The names below are also used by this class. In the transformers source they
# are guarded by is_av_available()/is_torch_available(), and `read_video_pyav`
# is a module-level helper defined in the same file.
import av
import numpy as np
from ..models.auto.modeling_auto import MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class VideoClassificationPipeline(Pipeline):
"""
Video classification pipeline using any `AutoModelForVideoClassification`. This pipeline predicts the class of a
video.
This video classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"video-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=video-classification).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
requires_backends(self, 'av')
self.check_model_type(MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES)
def _sanitize_parameters(self, top_k=None, num_frames=None, frame_sampling_rate=None, function_to_apply=None):
preprocess_params = {}
if frame_sampling_rate is not None:
preprocess_params['frame_sampling_rate'] = frame_sampling_rate
if num_frames is not None:
preprocess_params['num_frames'] = num_frames
postprocess_params = {}
if top_k is not None:
postprocess_params['top_k'] = top_k
if function_to_apply is not None:
if function_to_apply not in ['softmax', 'sigmoid', 'none']:
raise ValueError(f"Invalid value for `function_to_apply`: {function_to_apply}. Valid options are ['softmax', 'sigmoid', 'none']")
postprocess_params['function_to_apply'] = function_to_apply
else:
postprocess_params['function_to_apply'] = 'softmax'
return (preprocess_params, {}, postprocess_params)
@overload
def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, inputs: Optional[Union[str, list[str]]]=None, **kwargs):
"""
Assign labels to the video(s) passed as inputs.
Args:
inputs (`str`, `list[str]`):
The pipeline handles two types of videos:
- A string containing a http link pointing to a video
- A string containing a local path to a video
The pipeline accepts either a single video or a batch of videos, which must then be passed as a list of strings.
Videos in a batch must all be in the same format: all as http links or all as local paths.
top_k (`int`, *optional*, defaults to 5):
The number of top labels that will be returned by the pipeline. If the provided number is higher than
the number of labels available in the model configuration, it will default to the number of labels.
num_frames (`int`, *optional*, defaults to `self.model.config.num_frames`):
The number of frames sampled from the video to run the classification on. If not provided, will default
to the number of frames specified in the model configuration.
frame_sampling_rate (`int`, *optional*, defaults to 1):
The sampling rate used to select frames from the video. If not provided, will default to 1, i.e. every
frame will be used.
function_to_apply(`str`, *optional*, defaults to "softmax"):
The function to apply to the model output. By default, the pipeline will apply the softmax function to
the output of the model. Valid options: ["softmax", "sigmoid", "none"]. Note that passing Python's
built-in `None` will default to "softmax", so you need to pass the string "none" to disable any
post-processing.
Return:
A list of dictionaries or a list of list of dictionaries containing result. If the input is a single video,
will return a list of `top_k` dictionaries, if the input is a list of several videos, will return a list of list of
`top_k` dictionaries corresponding to the videos.
The dictionaries contain the following keys:
- **label** (`str`) -- The label identified by the model.
- **score** (`float`) -- The score attributed by the model for that label.
"""
if 'videos' in kwargs:
warnings.warn('The `videos` argument has been renamed to `inputs`. In version 5 of Transformers, `videos` will no longer be accepted', FutureWarning)
inputs = kwargs.pop('videos')
if inputs is None:
raise ValueError('Cannot call the video-classification pipeline without an inputs argument!')
return super().__call__(inputs, **kwargs)
def preprocess(self, video, num_frames=None, frame_sampling_rate=1):
if num_frames is None:
num_frames = self.model.config.num_frames
if video.startswith('http://') or video.startswith('https://'):
video = BytesIO(requests.get(video).content)
container = av.open(video)
start_idx = 0
end_idx = num_frames * frame_sampling_rate - 1
indices = np.linspace(start_idx, end_idx, num=num_frames, dtype=np.int64)
video = read_video_pyav(container, indices)
video = list(video)
model_inputs = self.image_processor(video, return_tensors='pt')
model_inputs = model_inputs.to(self.dtype)
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, top_k=5, function_to_apply='softmax'):
if top_k > self.model.config.num_labels:
top_k = self.model.config.num_labels
if function_to_apply == 'softmax':
probs = model_outputs.logits[0].softmax(-1)
elif function_to_apply == 'sigmoid':
probs = model_outputs.logits[0].sigmoid()
else:
probs = model_outputs.logits[0]
scores, ids = probs.topk(top_k)
scores = scores.tolist()
ids = ids.tolist()
return [{'score': score, 'label': self.model.config.id2label[_id]} for score, _id in zip(scores, ids)]
class_skeleton: null

metrics: total_program_units=12, total_doc_str=2, AvgCountLine=19, AvgCountLineBlank=2, AvgCountLineCode=11, AvgCountLineComment=5, AvgCyclomatic=3, CommentToCodeRatio=0.59, CountClassBase=1, CountClassCoupled=6, CountClassCoupledModified=0, CountClassDerived=0, CountDeclInstanceMethod=6, CountDeclInstanceVariable=0, CountDeclMethod=6, CountDeclMethodAll=48, CountLine=130, CountLineBlank=22, CountLineCode=68, CountLineCodeDecl=17, CountLineCodeExe=61, CountLineComment=40, CountStmt=58, CountStmtDecl=17, CountStmtExe=51, MaxCyclomatic=6, MaxInheritanceTree=6, MaxNesting=2, SumCyclomatic=20

id: 6453
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/visual_question_answering.py
class_name: transformers.pipelines.visual_question_answering.VisualQuestionAnsweringPipeline
from typing import Optional, Union

from ..generation import GenerationConfig
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging
from .base import Pipeline, build_pipeline_init_args

# The names below are also used by this class. In the transformers source the
# PIL- and torch-dependent imports are guarded by
# is_vision_available()/is_torch_available().
from PIL import Image
from ..image_utils import load_image
from ..models.auto.modeling_auto import MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES
from .pt_utils import KeyDataset
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_image_processor=True))
class VisualQuestionAnsweringPipeline(Pipeline):
"""
Visual Question Answering pipeline using an `AutoModelForVisualQuestionAnswering`. This pipeline is currently only
available in PyTorch.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(question="What is she wearing ?", image=image_url)
[{'score': 0.948, 'answer': 'hat'}, {'score': 0.009, 'answer': 'fedora'}, {'score': 0.003, 'answer': 'clothes'}, {'score': 0.003, 'answer': 'sun hat'}, {'score': 0.002, 'answer': 'nothing'}]
>>> oracle(question="What is she wearing ?", image=image_url, top_k=1)
[{'score': 0.948, 'answer': 'hat'}]
>>> oracle(question="Is this a person ?", image=image_url, top_k=1)
[{'score': 0.993, 'answer': 'yes'}]
>>> oracle(question="Is this a man ?", image=image_url, top_k=1)
[{'score': 0.996, 'answer': 'no'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This visual question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifiers: `"visual-question-answering", "vqa"`.
The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=visual-question-answering).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = True
_pipeline_calls_generate = True
_default_generation_config = GenerationConfig(max_new_tokens=256)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.check_model_type(MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES)
def _sanitize_parameters(self, top_k=None, padding=None, truncation=None, timeout=None, **kwargs):
preprocess_params, postprocess_params = ({}, {})
if padding is not None:
preprocess_params['padding'] = padding
if truncation is not None:
preprocess_params['truncation'] = truncation
if timeout is not None:
preprocess_params['timeout'] = timeout
if top_k is not None:
postprocess_params['top_k'] = top_k
forward_params = {}
if getattr(self, 'assistant_model', None) is not None:
forward_params['assistant_model'] = self.assistant_model
if getattr(self, 'assistant_tokenizer', None) is not None:
forward_params['tokenizer'] = self.tokenizer
forward_params['assistant_tokenizer'] = self.assistant_tokenizer
return (preprocess_params, forward_params, postprocess_params)
def __call__(self, image: Union['Image.Image', str, list['Image.Image'], list[str], 'KeyDataset'], question: Optional[Union[str, list[str]]]=None, **kwargs):
"""
Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed
below:
- `pipeline(image=image, question=question)`
- `pipeline({"image": image, "question": question})`
- `pipeline([{"image": image, "question": question}])`
- `pipeline([{"image": image, "question": question}, {"image": image, "question": question}])`
Args:
image (`str`, `list[str]`, `PIL.Image`, `list[PIL.Image]` or `KeyDataset`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be
broadcasted to multiple questions.
For dataset: the passed in dataset must be of type `transformers.pipelines.pt_utils.KeyDataset`
Example:
```python
>>> from transformers.pipelines.pt_utils import KeyDataset
>>> from datasets import load_dataset
>>> dataset = load_dataset("detection-datasets/coco")
>>> oracle(image=KeyDataset(dataset, "image"), question="What's in this image?")
```
question (`str`, `list[str]`):
The question(s) asked. If given a single question, it can be broadcasted to multiple images.
If multiple images and questions are given, each and every question will be broadcasted to all images
(same effect as a Cartesian product)
top_k (`int`, *optional*, defaults to 5):
The number of top labels that will be returned by the pipeline. If the provided number is higher than
the number of labels available in the model configuration, it will default to the number of labels.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys:
- **label** (`str`) -- The label identified by the model.
- **score** (`float`) -- The score attributed by the model for that label.
"""
is_dataset = isinstance(image, KeyDataset)
is_image_batch = isinstance(image, list) and all((isinstance(item, (Image.Image, str)) for item in image))
is_question_batch = isinstance(question, list) and all((isinstance(item, str) for item in question))
if isinstance(image, (Image.Image, str)) and isinstance(question, str):
inputs = {'image': image, 'question': question}
elif (is_image_batch or is_dataset) and isinstance(question, str):
inputs = [{'image': im, 'question': question} for im in image]
elif isinstance(image, (Image.Image, str)) and is_question_batch:
inputs = [{'image': image, 'question': q} for q in question]
elif (is_image_batch or is_dataset) and is_question_batch:
question_image_pairs = []
for q in question:
for im in image:
question_image_pairs.append({'image': im, 'question': q})
inputs = question_image_pairs
else:
# Supports the remaining formats: {"image": image, "question": question},
# [{"image": image, "question": question}], generators and datasets.
inputs = image
results = super().__call__(inputs, **kwargs)
return results
def preprocess(self, inputs, padding=False, truncation=False, timeout=None):
image = load_image(inputs['image'], timeout=timeout)
model_inputs = self.tokenizer(inputs['question'], return_tensors='pt', padding=padding, truncation=truncation)
image_features = self.image_processor(images=image, return_tensors='pt')
image_features = image_features.to(self.dtype)
model_inputs.update(image_features)
return model_inputs
def _forward(self, model_inputs, **generate_kwargs):
if self.model.can_generate():
if 'generation_config' not in generate_kwargs:
generate_kwargs['generation_config'] = self.generation_config
model_outputs = self.model.generate(**model_inputs, **generate_kwargs)
else:
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs, top_k=5):
if self.model.can_generate():
return [{'answer': self.tokenizer.decode(output_ids, skip_special_tokens=True).strip()} for output_ids in model_outputs]
else:
if top_k > self.model.config.num_labels:
top_k = self.model.config.num_labels
probs = model_outputs.logits.sigmoid()[0]
scores, ids = probs.topk(top_k)
scores = scores.tolist()
ids = ids.tolist()
return [{'score': score, 'answer': self.model.config.id2label[_id]} for score, _id in zip(scores, ids)]
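The branching in `__call__` reduces to a broadcast rule. A standalone sketch of how image/question combinations are expanded (strings only, omitting PIL images and `KeyDataset` for brevity):

```python
def broadcast_vqa_inputs(image, question):
    # Single image + single question -> one dict; a list on either side is
    # broadcast against the single value; lists on both sides form the
    # Cartesian product, question-major, as in __call__ above.
    if isinstance(image, str) and isinstance(question, str):
        return {"image": image, "question": question}
    if isinstance(image, list) and isinstance(question, str):
        return [{"image": im, "question": question} for im in image]
    if isinstance(image, str) and isinstance(question, list):
        return [{"image": image, "question": q} for q in question]
    return [{"image": im, "question": q} for q in question for im in image]
```

Two images and two questions therefore produce four image/question pairs, one per combination.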
|
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True, has_image_processor=True))
class VisualQuestionAnsweringPipeline(Pipeline):
'''
Visual Question Answering pipeline using an `AutoModelForVisualQuestionAnswering`. This pipeline is currently only
available in PyTorch.
Unless the model you're using explicitly sets these generation parameters in its configuration files
(`generation_config.json`), the following default values will be used:
- max_new_tokens: 256
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa")
>>> image_url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/lena.png"
>>> oracle(question="What is she wearing ?", image=image_url)
[{'score': 0.948, 'answer': 'hat'}, {'score': 0.009, 'answer': 'fedora'}, {'score': 0.003, 'answer': 'clothes'}, {'score': 0.003, 'answer': 'sun hat'}, {'score': 0.002, 'answer': 'nothing'}]
>>> oracle(question="What is she wearing ?", image=image_url, top_k=1)
[{'score': 0.948, 'answer': 'hat'}]
>>> oracle(question="Is this a person ?", image=image_url, top_k=1)
[{'score': 0.993, 'answer': 'yes'}]
>>> oracle(question="Is this a man ?", image=image_url, top_k=1)
[{'score': 0.996, 'answer': 'no'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This visual question answering pipeline can currently be loaded from [`pipeline`] using the following task
identifiers: `"visual-question-answering", "vqa"`.
The models that this pipeline can use are models that have been fine-tuned on a visual question answering task. See
the up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=visual-question-answering).
'''
def __init__(self, *args, **kwargs):
pass
def _sanitize_parameters(self, top_k=None, padding=None, truncation=None, timeout=None, **kwargs):
pass
def __call__(self, image: Union['Image.Image', str, list['Image.Image'], list[str], 'KeyDataset'], question: Optional[Union[str, list[str]]]=None, **kwargs):
'''
Answers open-ended questions about images. The pipeline accepts several types of inputs which are detailed
below:
- `pipeline(image=image, question=question)`
- `pipeline({"image": image, "question": question})`
- `pipeline([{"image": image, "question": question}])`
- `pipeline([{"image": image, "question": question}, {"image": image, "question": question}])`
Args:
image (`str`, `list[str]`, `PIL.Image`, `list[PIL.Image]` or `KeyDataset`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
The pipeline accepts either a single image or a batch of images. If given a single image, it can be
broadcasted to multiple questions.
For dataset: the passed in dataset must be of type `transformers.pipelines.pt_utils.KeyDataset`
Example:
```python
>>> from transformers.pipelines.pt_utils import KeyDataset
>>> from datasets import load_dataset
>>> dataset = load_dataset("detection-datasets/coco")
>>> oracle(image=KeyDataset(dataset, "image"), question="What's in this image?")
```
question (`str`, `list[str]`):
The question(s) asked. If given a single question, it can be broadcasted to multiple images.
If multiple images and questions are given, each and every question will be broadcasted to all images
(same effect as a Cartesian product)
top_k (`int`, *optional*, defaults to 5):
The number of top labels that will be returned by the pipeline. If the provided number is higher than
the number of labels available in the model configuration, it will default to the number of labels.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A dictionary or a list of dictionaries containing the result. The dictionaries contain the following keys:
- **label** (`str`) -- The label identified by the model.
- **score** (`float`) -- The score attributed by the model for that label.
'''
pass
def preprocess(self, inputs, padding=False, truncation=False, timeout=None):
pass
def _forward(self, model_inputs, **generate_kwargs):
pass
def postprocess(self, model_outputs, top_k=5):
pass
metrics: total_program_units=8, total_doc_str=2, AvgCountLine=24, AvgCountLineBlank=2, AvgCountLineCode=14, AvgCountLineComment=8, AvgCyclomatic=4, CommentToCodeRatio=0.82, CountClassBase=1, CountClassCoupled=6, CountClassCoupledModified=1, CountClassDerived=0, CountDeclInstanceMethod=6, CountDeclInstanceVariable=1, CountDeclMethod=6, CountDeclMethodAll=48, CountLine=181, CountLineBlank=28, CountLineCode=84, CountLineCodeDecl=27, CountLineCodeExe=72, CountLineComment=69, CountStmt=64, CountStmtDecl=21, CountStmtExe=57, MaxCyclomatic=7, MaxInheritanceTree=6, MaxNesting=3, SumCyclomatic=24

id: 6454
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/zero_shot_audio_classification.py
class_name: transformers.pipelines.zero_shot_audio_classification.ZeroShotAudioClassificationPipeline
import requests
from collections import UserDict
from typing import Any, Union
from .base import Pipeline, build_pipeline_init_args
import numpy as np
from ..utils import add_end_docstrings, logging
from .audio_classification import ffmpeg_read
@add_end_docstrings(build_pipeline_init_args(has_feature_extractor=True, has_tokenizer=True))
class ZeroShotAudioClassificationPipeline(Pipeline):
"""
Zero shot audio classification pipeline using `ClapModel`. This pipeline predicts the class of an audio when you
provide an audio and a set of `candidate_labels`.
<Tip warning={true}>
The default `hypothesis_template` is: `"This is a sound of {}."`. Make sure you update it for your usage.
</Tip>
Example:
```python
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]
>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
[{'score': 0.9996, 'label': 'Sound of a dog'}, {'score': 0.0004, 'label': 'Sound of vacuum cleaner'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).
This audio classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-audio-classification"`. See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-audio-classification).
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = True
_load_tokenizer = True
def __init__(self, **kwargs):
super().__init__(**kwargs)
def __call__(self, audios: Union[np.ndarray, bytes, str, dict], **kwargs: Any) -> list[dict[str, Any]]:
"""
Assign labels to the audio(s) passed as inputs.
Args:
audios (`str`, `list[str]`, `np.array` or `list[np.array]`):
The pipeline handles three types of inputs:
- A string containing a http link pointing to an audio
- A string containing a local path to an audio
- An audio loaded in numpy
candidate_labels (`list[str]`):
The candidate labels for this audio. They will be formatted using *hypothesis_template*.
hypothesis_template (`str`, *optional*, defaults to `"This is a sound of {}"`):
The format used in conjunction with *candidate_labels* to attempt the audio classification by
replacing the placeholder with the candidate_labels. Pass "{}" if *candidate_labels* are
already formatted.
Return:
A list of dictionaries containing one entry per proposed label. Each dictionary contains the
following keys:
- **label** (`str`) -- One of the suggested *candidate_labels*.
- **score** (`float`) -- The score attributed by the model to that label. It is a value between
0 and 1, computed as the `softmax` of `logits_per_audio`.
"""
return super().__call__(audios, **kwargs)
def _sanitize_parameters(self, **kwargs):
preprocess_params = {}
if 'candidate_labels' in kwargs:
preprocess_params['candidate_labels'] = kwargs['candidate_labels']
if 'hypothesis_template' in kwargs:
preprocess_params['hypothesis_template'] = kwargs['hypothesis_template']
return (preprocess_params, {}, {})
def preprocess(self, audio, candidate_labels=None, hypothesis_template='This is a sound of {}.'):
if isinstance(audio, str):
if audio.startswith('http://') or audio.startswith('https://'):
audio = requests.get(audio).content
else:
with open(audio, 'rb') as f:
audio = f.read()
if isinstance(audio, bytes):
audio = ffmpeg_read(audio, self.feature_extractor.sampling_rate)
if not isinstance(audio, np.ndarray):
raise TypeError('We expect a numpy ndarray as input')
if len(audio.shape) != 1:
raise ValueError('We expect a single channel audio input for ZeroShotAudioClassificationPipeline')
inputs = self.feature_extractor([audio], sampling_rate=self.feature_extractor.sampling_rate, return_tensors='pt')
inputs = inputs.to(self.dtype)
inputs['candidate_labels'] = candidate_labels
sequences = [hypothesis_template.format(x) for x in candidate_labels]
text_inputs = self.tokenizer(sequences, return_tensors='pt', padding=True)
inputs['text_inputs'] = [text_inputs]
return inputs
def _forward(self, model_inputs):
candidate_labels = model_inputs.pop('candidate_labels')
text_inputs = model_inputs.pop('text_inputs')
if isinstance(text_inputs[0], UserDict):
text_inputs = text_inputs[0]
else:
text_inputs = text_inputs[0][0]
outputs = self.model(**text_inputs, **model_inputs)
model_outputs = {'candidate_labels': candidate_labels, 'logits': outputs.logits_per_audio}
return model_outputs
def postprocess(self, model_outputs):
candidate_labels = model_outputs.pop('candidate_labels')
logits = model_outputs['logits'][0]
probs = logits.softmax(dim=0)
scores = probs.tolist()
result = [{'score': score, 'label': candidate_label} for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])]
return result
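The label handling in `preprocess` and `postprocess` is easy to isolate: each candidate label is formatted through the hypothesis template, and the per-label logits are softmaxed and sorted descending. A dependency-free sketch of those two steps:

```python
import math

def rank_candidate_labels(logits_per_audio, candidate_labels,
                          hypothesis_template="This is a sound of {}."):
    # The hypothesis sentences that would be tokenized in preprocess.
    hypotheses = [hypothesis_template.format(label) for label in candidate_labels]
    # Softmax over labels, then sort by descending score as in postprocess.
    m = max(logits_per_audio)
    exps = [math.exp(x - m) for x in logits_per_audio]
    total = sum(exps)
    scores = [e / total for e in exps]
    ranked = [
        {"score": s, "label": lab}
        for s, lab in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])
    ]
    return hypotheses, ranked
```

Because the scores are a softmax over the candidate set, they always sum to 1 regardless of the logit scale.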
|
@add_end_docstrings(build_pipeline_init_args(has_feature_extractor=True, has_tokenizer=True))
class ZeroShotAudioClassificationPipeline(Pipeline):
'''
Zero shot audio classification pipeline using `ClapModel`. This pipeline predicts the class of an audio when you
provide an audio and a set of `candidate_labels`.
<Tip warning={true}>
The default `hypothesis_template` is: `"This is a sound of {}."`. Make sure you update it for your usage.
</Tip>
Example:
```python
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> dataset = load_dataset("ashraq/esc50")
>>> audio = next(iter(dataset["train"]["audio"]))["array"]
>>> classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
>>> classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
[{'score': 0.9996, 'label': 'Sound of a dog'}, {'score': 0.0004, 'label': 'Sound of vacuum cleaner'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial).
This audio classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-audio-classification"`. See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-audio-classification).
'''
def __init__(self, **kwargs):
pass
def __call__(self, audios: Union[np.ndarray, bytes, str, dict], **kwargs: Any) -> list[dict[str, Any]]:
'''
Assign labels to the audio(s) passed as inputs.
Args:
audios (`str`, `list[str]`, `np.array` or `list[np.array]`):
The pipeline handles three types of inputs:
- A string containing a http link pointing to an audio
- A string containing a local path to an audio
- An audio loaded in numpy
candidate_labels (`list[str]`):
The candidate labels for this audio. They will be formatted using *hypothesis_template*.
hypothesis_template (`str`, *optional*, defaults to `"This is a sound of {}"`):
The format used in conjunction with *candidate_labels* to attempt the audio classification by
replacing the placeholder with the candidate_labels. Pass "{}" if *candidate_labels* are
already formatted.
Return:
A list of dictionaries containing one entry per proposed label. Each dictionary contains the
following keys:
- **label** (`str`) -- One of the suggested *candidate_labels*.
- **score** (`float`) -- The score attributed by the model to that label. It is a value between
0 and 1, computed as the `softmax` of `logits_per_audio`.
'''
pass
def _sanitize_parameters(self, **kwargs):
pass
def preprocess(self, audio, candidate_labels=None, hypothesis_template='This is a sound of {}.'):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs):
pass
metrics: total_program_units=8, total_doc_str=2, AvgCountLine=16, AvgCountLineBlank=2, AvgCountLineCode=10, AvgCountLineComment=4, AvgCyclomatic=3, CommentToCodeRatio=0.73, CountClassBase=1, CountClassCoupled=7, CountClassCoupledModified=0, CountClassDerived=0, CountDeclInstanceMethod=6, CountDeclInstanceVariable=1, CountDeclMethod=6, CountDeclMethodAll=48, CountLine=132, CountLineBlank=23, CountLineCode=63, CountLineCodeDecl=21, CountLineCodeExe=56, CountLineComment=46, CountStmt=52, CountStmtDecl=20, CountStmtExe=45, MaxCyclomatic=7, MaxInheritanceTree=6, MaxNesting=3, SumCyclomatic=17

id: 6455
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/zero_shot_classification.py
class_name: transformers.pipelines.zero_shot_classification.ZeroShotClassificationArgumentHandler
from .base import ArgumentHandler, ChunkPipeline, build_pipeline_init_args
class ZeroShotClassificationArgumentHandler(ArgumentHandler):
"""
Handles arguments for zero-shot for text classification by turning each possible label into an NLI
premise/hypothesis pair.
"""
def _parse_labels(self, labels):
if isinstance(labels, str):
labels = [label.strip() for label in labels.split(',') if label.strip()]
return labels
def __call__(self, sequences, labels, hypothesis_template):
if len(labels) == 0 or len(sequences) == 0:
raise ValueError('You must include at least one label and at least one sequence.')
if hypothesis_template.format(labels[0]) == hypothesis_template:
raise ValueError(f'The provided hypothesis_template "{hypothesis_template}" was not able to be formatted with the target labels. Make sure the passed template includes formatting syntax such as {{}} where the label should go.')
if isinstance(sequences, str):
sequences = [sequences]
sequence_pairs = []
for sequence in sequences:
sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])
return (sequence_pairs, sequences)
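Since the argument handler has no external dependencies, its behavior is easy to demonstrate standalone. This sketch reproduces its two steps (label parsing and premise/hypothesis pairing) using the pipeline's default template, `"This example is {}."`:

```python
def build_sequence_pairs(sequences, labels, hypothesis_template="This example is {}."):
    # Parse a comma-separated label string, as _parse_labels does.
    if isinstance(labels, str):
        labels = [label.strip() for label in labels.split(",") if label.strip()]
    if isinstance(sequences, str):
        sequences = [sequences]
    # One (premise, hypothesis) pair per sequence/label combination,
    # mirroring __call__ above.
    pairs = [
        [sequence, hypothesis_template.format(label)]
        for sequence in sequences
        for label in labels
    ]
    return pairs, sequences
```

Each pair is then scored by the NLI model, and the entailment logit for a pair is used as the score for that candidate label.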
|
class ZeroShotClassificationArgumentHandler(ArgumentHandler):
'''
Handles arguments for zero-shot for text classification by turning each possible label into an NLI
premise/hypothesis pair.
'''
def _parse_labels(self, labels):
pass
def __call__(self, sequences, labels, hypothesis_template):
pass
metrics: total_program_units=3, total_doc_str=1, AvgCountLine=12, AvgCountLineBlank=2, AvgCountLineCode=10, AvgCountLineComment=0, AvgCyclomatic=4, CommentToCodeRatio=0.19, CountClassBase=1, CountClassCoupled=2, CountClassCoupledModified=0, CountClassDerived=0, CountDeclInstanceMethod=2, CountDeclInstanceVariable=0, CountDeclMethod=2, CountDeclMethodAll=23, CountLine=30, CountLineBlank=5, CountLineCode=21, CountLineCodeDecl=5, CountLineCodeExe=18, CountLineComment=4, CountStmt=16, CountStmtDecl=5, CountStmtExe=13, MaxCyclomatic=5, MaxInheritanceTree=5, MaxNesting=1, SumCyclomatic=7

id: 6456
repository_name: huggingface/pytorch-pretrained-BERT
file_path: huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/zero_shot_classification.py
class_name: transformers.pipelines.zero_shot_classification.ZeroShotClassificationPipeline
from typing import Union
import numpy as np
from ..utils import add_end_docstrings, logging
from ..tokenization_utils import TruncationStrategy
import inspect
from .base import ArgumentHandler, ChunkPipeline, build_pipeline_init_args
@add_end_docstrings(build_pipeline_init_args(has_tokenizer=True))
class ZeroShotClassificationPipeline(ChunkPipeline):
"""
NLI-based zero-shot classification pipeline using a `ModelForSequenceClassification` trained on NLI (natural
language inference) tasks. Equivalent of `text-classification` pipelines, but these models don't require a
hardcoded number of potential classes, they can be chosen at runtime. It usually means it's slower but it is
**much** more flexible.
Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
pair and passed to the pretrained model. Then, the logit for *entailment* is taken as the logit for the candidate
label being valid. Any NLI model can be used, but the id of the *entailment* label must be included in the model
config's [`~transformers.PretrainedConfig.label2id`].
Example:
```python
>>> from transformers import pipeline
>>> oracle = pipeline(model="facebook/bart-large-mnli")
>>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
>>> oracle(
... "I have a problem with my iphone that needs to be resolved asap!!",
... candidate_labels=["english", "german"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['english', 'german'], 'scores': [0.814, 0.186]}
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This NLI pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-classification"`.
The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list
of available models on [huggingface.co/models](https://huggingface.co/models?search=nli).
"""
_load_processor = False
_load_image_processor = False
_load_feature_extractor = False
_load_tokenizer = True
def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):
self._args_parser = args_parser
super().__init__(*args, **kwargs)
if self.entailment_id == -1:
logger.warning("Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to -1. Define a descriptive label2id mapping in the model config to ensure correct outputs.")
@property
def entailment_id(self):
for label, ind in self.model.config.label2id.items():
if label.lower().startswith('entail'):
return ind
return -1
def _parse_and_tokenize(self, sequence_pairs, padding=True, add_special_tokens=True, truncation=TruncationStrategy.ONLY_FIRST, **kwargs):
"""
Parse arguments and tokenize only_first so that hypothesis (label) is not truncated
"""
return_tensors = 'pt'
if self.tokenizer.pad_token is None:
logger.error('Tokenizer was not supporting padding necessary for zero-shot, attempting to use `pad_token=eos_token`')
self.tokenizer.pad_token = self.tokenizer.eos_token
try:
inputs = self.tokenizer(sequence_pairs, add_special_tokens=add_special_tokens, return_tensors=return_tensors, padding=padding, truncation=truncation)
except Exception as e:
if 'too short' in str(e):
inputs = self.tokenizer(sequence_pairs, add_special_tokens=add_special_tokens, return_tensors=return_tensors, padding=padding, truncation=TruncationStrategy.DO_NOT_TRUNCATE)
else:
raise e
return inputs
def _sanitize_parameters(self, **kwargs):
if kwargs.get('multi_class') is not None:
kwargs['multi_label'] = kwargs['multi_class']
logger.warning('The `multi_class` argument has been deprecated and renamed to `multi_label`. `multi_class` will be removed in a future version of Transformers.')
preprocess_params = {}
if 'candidate_labels' in kwargs:
preprocess_params['candidate_labels'] = self._args_parser._parse_labels(kwargs['candidate_labels'])
if 'hypothesis_template' in kwargs:
preprocess_params['hypothesis_template'] = kwargs['hypothesis_template']
postprocess_params = {}
if 'multi_label' in kwargs:
postprocess_params['multi_label'] = kwargs['multi_label']
return (preprocess_params, {}, postprocess_params)
def __call__(self, sequences: Union[str, list[str]], *args, **kwargs):
"""
Classify the sequence(s) given as inputs. See the [`ZeroShotClassificationPipeline`] documentation for more
information.
Args:
sequences (`str` or `list[str]`):
The sequence(s) to classify, will be truncated if the model input is too large.
candidate_labels (`str` or `list[str]`):
The set of possible class labels to classify each sequence into. Can be a single label, a string of
comma-separated labels, or a list of labels.
hypothesis_template (`str`, *optional*, defaults to `"This example is {}."`):
The template used to turn each label into an NLI-style hypothesis. This template must include a {} or
similar syntax for the candidate label to be inserted into the template. For example, the default
template is `"This example is {}."` With the candidate label `"sports"`, this would be fed into the
model like `"<cls> sequence to classify <sep> This example is sports . <sep>"`. The default template
works well in many cases, but it may be worthwhile to experiment with different templates depending on
the task setting.
multi_label (`bool`, *optional*, defaults to `False`):
Whether or not multiple candidate labels can be true. If `False`, the scores are normalized such that
the sum of the label likelihoods for each sequence is 1. If `True`, the labels are considered
independent and probabilities are normalized for each candidate by doing a softmax of the entailment
score vs. the contradiction score.
Return:
A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
- **sequence** (`str`) -- The sequence for which this is the output.
- **labels** (`list[str]`) -- The labels sorted by order of likelihood.
- **scores** (`list[float]`) -- The probabilities for each of the labels.
"""
if len(args) == 0:
pass
elif len(args) == 1 and 'candidate_labels' not in kwargs:
kwargs['candidate_labels'] = args[0]
else:
raise ValueError(f'Unable to understand extra arguments {args}')
return super().__call__(sequences, **kwargs)
def preprocess(self, inputs, candidate_labels=None, hypothesis_template='This example is {}.'):
sequence_pairs, sequences = self._args_parser(inputs, candidate_labels, hypothesis_template)
for i, (candidate_label, sequence_pair) in enumerate(zip(candidate_labels, sequence_pairs)):
model_input = self._parse_and_tokenize([sequence_pair])
yield {'candidate_label': candidate_label, 'sequence': sequences[0], 'is_last': i == len(candidate_labels) - 1, **model_input}
def _forward(self, inputs):
candidate_label = inputs['candidate_label']
sequence = inputs['sequence']
model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
model_forward = self.model.forward
if 'use_cache' in inspect.signature(model_forward).parameters:
model_inputs['use_cache'] = False
outputs = self.model(**model_inputs)
model_outputs = {'candidate_label': candidate_label, 'sequence': sequence, 'is_last': inputs['is_last'], **outputs}
return model_outputs
def postprocess(self, model_outputs, multi_label=False):
candidate_labels = [outputs['candidate_label'] for outputs in model_outputs]
sequences = [outputs['sequence'] for outputs in model_outputs]
logits = np.concatenate([output['logits'].float().numpy() for output in model_outputs])
N = logits.shape[0]
n = len(candidate_labels)
num_sequences = N // n
reshaped_outputs = logits.reshape((num_sequences, n, -1))
if multi_label or len(candidate_labels) == 1:
entailment_id = self.entailment_id
contradiction_id = -1 if entailment_id == 0 else 0
entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]
scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)
scores = scores[..., 1]
else:
entail_logits = reshaped_outputs[..., self.entailment_id]
scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)
top_inds = list(reversed(scores[0].argsort()))
return {'sequence': sequences[0], 'labels': [candidate_labels[i] for i in top_inds], 'scores': scores[0, top_inds].tolist()}
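The scoring logic in `postprocess` above can be isolated into a small NumPy sketch (a standalone illustration, not the transformers source; `logits` is assumed to have shape `(num_sequences, num_labels, num_nli_classes)`):

```python
import numpy as np

def zero_shot_scores(logits, entailment_id, multi_label=False):
    # Multi-label (or a single candidate): softmax each label's
    # [contradiction, entailment] pair independently, keep P(entailment).
    if multi_label or logits.shape[1] == 1:
        contradiction_id = -1 if entailment_id == 0 else 0
        pair = logits[..., [contradiction_id, entailment_id]]
        scores = np.exp(pair) / np.exp(pair).sum(-1, keepdims=True)
        return scores[..., 1]
    # Single-label: softmax of the entailment logits across candidate labels,
    # so the scores for one sequence sum to 1.
    entail = logits[..., entailment_id]
    return np.exp(entail) / np.exp(entail).sum(-1, keepdims=True)
```

In multi-label mode each score is effectively a sigmoid of the entailment-minus-contradiction margin, which is why the scores need not sum to 1.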
| null | 11
| 3
| 22
| 2
| 15
| 5
| 3
| 0.55
| 1
| 10
| 2
| 0
| 8
| 1
| 8
| 52
| 222
| 28
| 125
| 46
| 108
| 69
| 77
| 37
| 68
| 5
| 7
| 2
| 26
|
6,457
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/zero_shot_image_classification.py
|
transformers.pipelines.zero_shot_image_classification.ZeroShotImageClassificationPipeline
|
from typing import Any, Union, overload
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
import warnings
from .base import Pipeline, build_pipeline_init_args
from collections import UserDict
from ..image_utils import load_image

if is_torch_available():
    import torch

    from ..models.auto.modeling_auto import MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES

if is_vision_available():
    from PIL import Image
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ZeroShotImageClassificationPipeline(Pipeline):
"""
Zero shot image classification pipeline using `CLIPModel`. This pipeline predicts the class of an image when you
provide an image and a set of `candidate_labels`.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="google/siglip-so400m-patch14-384")
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["animals", "humans", "landscape"],
... )
[{'score': 0.965, 'label': 'animals'}, {'score': 0.03, 'label': 'humans'}, {'score': 0.005, 'label': 'landscape'}]
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["black and white", "photorealist", "painting"],
... )
[{'score': 0.996, 'label': 'black and white'}, {'score': 0.003, 'label': 'photorealist'}, {'score': 0.0, 'label': 'painting'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-image-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-image-classification).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = True
def __init__(self, **kwargs):
super().__init__(**kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES)
@overload
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: list[str], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: Union[list[str], list['Image.Image']], candidate_labels: list[str], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, image: Union[str, list[str], 'Image.Image', list['Image.Image']], candidate_labels: list[str], **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Assign labels to the image(s) passed as inputs.
Args:
image (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
candidate_labels (`list[str]`):
The candidate labels for this image. They will be formatted using *hypothesis_template*.
hypothesis_template (`str`, *optional*, defaults to `"This is a photo of {}"`):
The format used in conjunction with *candidate_labels* to attempt the image classification by
replacing the placeholder with the candidate_labels. Pass "{}" if *candidate_labels* are
already formatted.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of dictionaries containing one entry per proposed label. Each dictionary contains the
following keys:
- **label** (`str`) -- One of the suggested *candidate_labels*.
- **score** (`float`) -- The score attributed by the model to that label. It is a value between
0 and 1, computed as the `softmax` of `logits_per_image`.
"""
if 'images' in kwargs:
image = kwargs.pop('images')
if image is None:
raise ValueError('Cannot call the zero-shot-image-classification pipeline without an images argument!')
return super().__call__(image, candidate_labels=candidate_labels, **kwargs)
def _sanitize_parameters(self, tokenizer_kwargs=None, **kwargs):
preprocess_params = {}
if 'candidate_labels' in kwargs:
preprocess_params['candidate_labels'] = kwargs['candidate_labels']
if 'timeout' in kwargs:
preprocess_params['timeout'] = kwargs['timeout']
if 'hypothesis_template' in kwargs:
preprocess_params['hypothesis_template'] = kwargs['hypothesis_template']
if tokenizer_kwargs is not None:
warnings.warn('The `tokenizer_kwargs` argument is deprecated and will be removed in version 5 of Transformers', FutureWarning)
preprocess_params['tokenizer_kwargs'] = tokenizer_kwargs
return (preprocess_params, {}, {})
def preprocess(self, image, candidate_labels=None, hypothesis_template='This is a photo of {}.', timeout=None, tokenizer_kwargs=None):
if tokenizer_kwargs is None:
tokenizer_kwargs = {}
image = load_image(image, timeout=timeout)
inputs = self.image_processor(images=[image], return_tensors='pt')
inputs = inputs.to(self.dtype)
inputs['candidate_labels'] = candidate_labels
sequences = [hypothesis_template.format(x) for x in candidate_labels]
tokenizer_default_kwargs = {'padding': True}
if 'siglip' in self.model.config.model_type:
tokenizer_default_kwargs.update(padding='max_length', max_length=64, truncation=True)
tokenizer_default_kwargs.update(tokenizer_kwargs)
text_inputs = self.tokenizer(sequences, return_tensors='pt', **tokenizer_default_kwargs)
inputs['text_inputs'] = [text_inputs]
return inputs
def _forward(self, model_inputs):
candidate_labels = model_inputs.pop('candidate_labels')
text_inputs = model_inputs.pop('text_inputs')
if isinstance(text_inputs[0], UserDict):
text_inputs = text_inputs[0]
else:
text_inputs = text_inputs[0][0]
outputs = self.model(**text_inputs, **model_inputs)
model_outputs = {'candidate_labels': candidate_labels, 'logits': outputs.logits_per_image}
return model_outputs
def postprocess(self, model_outputs):
candidate_labels = model_outputs.pop('candidate_labels')
logits = model_outputs['logits'][0]
if 'siglip' in self.model.config.model_type:
probs = torch.sigmoid(logits).squeeze(-1)
scores = probs.tolist()
if not isinstance(scores, list):
scores = [scores]
else:
probs = logits.softmax(dim=-1).squeeze(-1)
scores = probs.tolist()
if not isinstance(scores, list):
scores = [scores]
result = [{'score': score, 'label': candidate_label} for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])]
return result
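The `postprocess` method above branches on the model type: SigLIP-style models score each label with an independent sigmoid, while CLIP-style models softmax across labels. A standalone plain-Python sketch of that distinction (hypothetical helper, not the transformers source):

```python
import math

def image_label_scores(logits, use_sigmoid):
    # SigLIP path: each label scored independently, scores need not sum to 1.
    if use_sigmoid:
        return [1.0 / (1.0 + math.exp(-x)) for x in logits]
    # CLIP path: softmax across candidate labels, scores sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The pipeline then zips the scores with the candidate labels and sorts them in descending order.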
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ZeroShotImageClassificationPipeline(Pipeline):
'''
Zero shot image classification pipeline using `CLIPModel`. This pipeline predicts the class of an image when you
provide an image and a set of `candidate_labels`.
Example:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(model="google/siglip-so400m-patch14-384")
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["animals", "humans", "landscape"],
... )
[{'score': 0.965, 'label': 'animals'}, {'score': 0.03, 'label': 'humans'}, {'score': 0.005, 'label': 'landscape'}]
>>> classifier(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["black and white", "photorealist", "painting"],
... )
[{'score': 0.996, 'label': 'black and white'}, {'score': 0.003, 'label': 'photorealist'}, {'score': 0.0, 'label': 'painting'}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This image classification pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-image-classification"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-image-classification).
'''
def __init__(self, **kwargs):
pass
@overload
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: list[str], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, image: Union[list[str], list['Image.Image']], candidate_labels: list[str], **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: list[str], **kwargs: Any) -> list[dict[str, Any]]:
'''
Assign labels to the image(s) passed as inputs.
Args:
image (`str`, `list[str]`, `PIL.Image` or `list[PIL.Image]`):
The pipeline handles three types of images:
- A string containing a http link pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
candidate_labels (`list[str]`):
The candidate labels for this image. They will be formatted using *hypothesis_template*.
hypothesis_template (`str`, *optional*, defaults to `"This is a photo of {}"`):
The format used in conjunction with *candidate_labels* to attempt the image classification by
replacing the placeholder with the candidate_labels. Pass "{}" if *candidate_labels* are
already formatted.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of dictionaries containing one entry per proposed label. Each dictionary contains the
following keys:
- **label** (`str`) -- One of the suggested *candidate_labels*.
- **score** (`float`) -- The score attributed by the model to that label. It is a value between
0 and 1, computed as the `softmax` of `logits_per_image`.
'''
pass
def _sanitize_parameters(self, tokenizer_kwargs=None, **kwargs):
pass
def preprocess(self, image, candidate_labels=None, hypothesis_template='This is a photo of {}.', timeout=None, tokenizer_kwargs=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs):
pass
| 12
| 2
| 20
| 2
| 14
| 4
| 4
| 0.58
| 1
| 7
| 0
| 0
| 6
| 1
| 6
| 48
| 160
| 24
| 86
| 28
| 72
| 50
| 62
| 21
| 55
| 6
| 6
| 2
| 22
|
6,458
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pipelines/zero_shot_object_detection.py
|
transformers.pipelines.zero_shot_object_detection.ZeroShotObjectDetectionPipeline
|
from .base import ChunkPipeline, build_pipeline_init_args
from ..utils import add_end_docstrings, is_torch_available, is_vision_available, logging, requires_backends
from typing import Any, Optional, Union, overload

if is_vision_available():
    from PIL import Image

    from ..image_utils import load_image, valid_images

if is_torch_available():
    import torch

    from transformers.modeling_outputs import BaseModelOutput
    from ..models.auto.modeling_auto import MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ZeroShotObjectDetectionPipeline(ChunkPipeline):
"""
Zero shot object detection pipeline using `OwlViTForObjectDetection`. This pipeline predicts bounding boxes of
objects when you provide an image and a set of `candidate_labels`.
Example:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... candidate_labels=["cat", "couch"],
... )
[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]
>>> detector(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["head", "bird"],
... )
[{'score': 0.119, 'label': 'bird', 'box': {'xmin': 71, 'ymin': 170, 'xmax': 410, 'ymax': 508}}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-object-detection"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-object-detection).
"""
_load_processor = False
_load_image_processor = True
_load_feature_extractor = False
_load_tokenizer = True
def __init__(self, **kwargs):
super().__init__(**kwargs)
requires_backends(self, 'vision')
self.check_model_type(MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES)
@overload
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: Union[str, list[str]], **kwargs: Any) -> list[dict[str, Any]]:
...
@overload
def __call__(self, image: list[dict[str, Any]], **kwargs: Any) -> list[list[dict[str, Any]]]:
...
def __call__(self, image: Union[str, 'Image.Image', list[dict[str, Any]]], candidate_labels: Optional[Union[str, list[str]]]=None, **kwargs: Any) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]:
"""
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
image (`str`, `PIL.Image` or `list[dict[str, Any]]`):
The pipeline handles three types of images:
- A string containing an http url pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
You can use this parameter to send directly a list of images, or a dataset or a generator like so:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
... [
... {
... "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
... "candidate_labels": ["cat", "couch"],
... },
... {
... "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
... "candidate_labels": ["cat", "couch"],
... },
... ]
... )
[[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.25, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}], [{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]]
```
candidate_labels (`str` or `list[str]` or `list[list[str]]`):
What the model should recognize in the image.
threshold (`float`, *optional*, defaults to 0.1):
The probability necessary to make a prediction.
top_k (`int`, *optional*, defaults to None):
The number of top predictions that will be returned by the pipeline. If the provided number is `None`
or higher than the number of predictions available, it will default to the number of predictions.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of lists containing prediction results, one list per input image. Each list contains dictionaries
with the following keys:
- **label** (`str`) -- Text query corresponding to the found object.
- **score** (`float`) -- Score corresponding to the object (between 0 and 1).
- **box** (`dict[str,int]`) -- Bounding box of the detected object in image's original size. It is a
dictionary with `x_min`, `x_max`, `y_min`, `y_max` keys.
"""
if 'text_queries' in kwargs:
candidate_labels = kwargs.pop('text_queries')
if isinstance(image, (str, Image.Image)):
inputs = {'image': image, 'candidate_labels': candidate_labels}
elif isinstance(image, (list, tuple)) and valid_images(image):
return list(super().__call__(({'image': img, 'candidate_labels': labels} for img, labels in zip(image, candidate_labels)), **kwargs))
else:
# Supports the following formats:
#   - {"image": image, "candidate_labels": candidate_labels}
#   - [{"image": image, "candidate_labels": candidate_labels}]
#   - Generator and datasets
# This is a common pattern in other multimodal pipelines, so we support it here as well.
inputs = image
results = super().__call__(inputs, **kwargs)
return results
def _sanitize_parameters(self, **kwargs):
preprocess_params = {}
if 'timeout' in kwargs:
preprocess_params['timeout'] = kwargs['timeout']
postprocess_params = {}
if 'threshold' in kwargs:
postprocess_params['threshold'] = kwargs['threshold']
if 'top_k' in kwargs:
postprocess_params['top_k'] = kwargs['top_k']
return (preprocess_params, {}, postprocess_params)
def preprocess(self, inputs, timeout=None):
image = load_image(inputs['image'], timeout=timeout)
candidate_labels = inputs['candidate_labels']
if isinstance(candidate_labels, str):
candidate_labels = candidate_labels.split(',')
target_size = torch.tensor([[image.height, image.width]], dtype=torch.int32)
for i, candidate_label in enumerate(candidate_labels):
text_inputs = self.tokenizer(candidate_label, return_tensors='pt')
image_features = self.image_processor(image, return_tensors='pt')
image_features = image_features.to(self.dtype)
yield {'is_last': i == len(candidate_labels) - 1, 'target_size': target_size, 'candidate_label': candidate_label, **text_inputs, **image_features}
def _forward(self, model_inputs):
target_size = model_inputs.pop('target_size')
candidate_label = model_inputs.pop('candidate_label')
is_last = model_inputs.pop('is_last')
outputs = self.model(**model_inputs)
model_outputs = {'target_size': target_size, 'candidate_label': candidate_label, 'is_last': is_last, **outputs}
return model_outputs
def postprocess(self, model_outputs, threshold=0.1, top_k=None):
results = []
for model_output in model_outputs:
label = model_output['candidate_label']
model_output = BaseModelOutput(model_output)
outputs = self.image_processor.post_process_object_detection(outputs=model_output, threshold=threshold, target_sizes=model_output['target_size'])[0]
for index in outputs['scores'].nonzero():
score = outputs['scores'][index].item()
box = self._get_bounding_box(outputs['boxes'][index][0])
result = {'score': score, 'label': label, 'box': box}
results.append(result)
results = sorted(results, key=lambda x: x['score'], reverse=True)
if top_k:
results = results[:top_k]
return results
def _get_bounding_box(self, box: 'torch.Tensor') -> dict[str, int]:
"""
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`dict[str, int]`): Dict containing the coordinates in corners format.
"""
xmin, ymin, xmax, ymax = box.int().tolist()
bbox = {'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax}
return bbox
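The final steps of `postprocess` and `_get_bounding_box` above can be sketched without torch (standalone illustration; plain sequences stand in for tensors):

```python
def bbox_to_dict(box):
    # Plain-Python version of _get_bounding_box: takes a
    # [xmin, ymin, xmax, ymax] sequence and truncates to ints,
    # as tensor.int() does.
    xmin, ymin, xmax, ymax = (int(v) for v in box)
    return {"xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax}

def rank_detections(results, top_k=None):
    # Mirrors the end of postprocess: sort detection dicts by score
    # descending and optionally keep only the top_k entries.
    results = sorted(results, key=lambda x: x["score"], reverse=True)
    return results[:top_k] if top_k else results
```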
|
@add_end_docstrings(build_pipeline_init_args(has_image_processor=True))
class ZeroShotObjectDetectionPipeline(ChunkPipeline):
'''
Zero shot object detection pipeline using `OwlViTForObjectDetection`. This pipeline predicts bounding boxes of
objects when you provide an image and a set of `candidate_labels`.
Example:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... candidate_labels=["cat", "couch"],
... )
[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]
>>> detector(
... "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
... candidate_labels=["head", "bird"],
... )
[{'score': 0.119, 'label': 'bird', 'box': {'xmin': 71, 'ymin': 170, 'xmax': 410, 'ymax': 508}}]
```
Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial)
This object detection pipeline can currently be loaded from [`pipeline`] using the following task identifier:
`"zero-shot-object-detection"`.
See the list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=zero-shot-object-detection).
'''
def __init__(self, **kwargs):
pass
@overload
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: Union[str, list[str]], **kwargs: Any) -> list[dict[str, Any]]:
pass
@overload
def __call__(self, image: list[dict[str, Any]], **kwargs: Any) -> list[list[dict[str, Any]]]:
pass
def __call__(self, image: Union[str, 'Image.Image'], candidate_labels: Union[str, list[str]], **kwargs: Any) -> list[dict[str, Any]]:
'''
Detect objects (bounding boxes & classes) in the image(s) passed as inputs.
Args:
image (`str`, `PIL.Image` or `list[dict[str, Any]]`):
The pipeline handles three types of images:
- A string containing an http url pointing to an image
- A string containing a local path to an image
- An image loaded in PIL directly
You can use this parameter to send directly a list of images, or a dataset or a generator like so:
```python
>>> from transformers import pipeline
>>> detector = pipeline(model="google/owlvit-base-patch32", task="zero-shot-object-detection")
>>> detector(
... [
... {
... "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
... "candidate_labels": ["cat", "couch"],
... },
... {
... "image": "http://images.cocodataset.org/val2017/000000039769.jpg",
... "candidate_labels": ["cat", "couch"],
... },
... ]
... )
[[{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.25, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}], [{'score': 0.287, 'label': 'cat', 'box': {'xmin': 324, 'ymin': 20, 'xmax': 640, 'ymax': 373}}, {'score': 0.254, 'label': 'cat', 'box': {'xmin': 1, 'ymin': 55, 'xmax': 315, 'ymax': 472}}, {'score': 0.121, 'label': 'couch', 'box': {'xmin': 4, 'ymin': 0, 'xmax': 642, 'ymax': 476}}]]
```
candidate_labels (`str` or `list[str]` or `list[list[str]]`):
What the model should recognize in the image.
threshold (`float`, *optional*, defaults to 0.1):
The probability necessary to make a prediction.
top_k (`int`, *optional*, defaults to None):
The number of top predictions that will be returned by the pipeline. If the provided number is `None`
or higher than the number of predictions available, it will default to the number of predictions.
timeout (`float`, *optional*, defaults to None):
The maximum time in seconds to wait for fetching images from the web. If None, no timeout is set and
the call may block forever.
Return:
A list of lists containing prediction results, one list per input image. Each list contains dictionaries
with the following keys:
- **label** (`str`) -- Text query corresponding to the found object.
- **score** (`float`) -- Score corresponding to the object (between 0 and 1).
- **box** (`dict[str,int]`) -- Bounding box of the detected object in image's original size. It is a
dictionary with `x_min`, `x_max`, `y_min`, `y_max` keys.
'''
pass
def _sanitize_parameters(self, **kwargs):
pass
def preprocess(self, inputs, timeout=None):
pass
def _forward(self, model_inputs):
pass
def postprocess(self, model_outputs, threshold=0.1, top_k=None):
pass
def _get_bounding_box(self, box: 'torch.Tensor') -> dict[str, int]:
'''
Turns list [xmin, xmax, ymin, ymax] into dict { "xmin": xmin, ... }
Args:
box (`torch.Tensor`): Tensor containing the coordinates in corners format.
Returns:
bbox (`dict[str, int]`): Dict containing the coordinates in corners format.
'''
pass
| 13
| 3
| 25
| 4
| 13
| 8
| 3
| 0.9
| 1
| 10
| 0
| 0
| 7
| 0
| 7
| 51
| 213
| 40
| 91
| 38
| 78
| 82
| 66
| 33
| 58
| 4
| 7
| 2
| 21
|
6,459
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.AllKwargsForChatTemplate
|
from typing import Any, Optional, TypedDict, TypeVar, Union
class AllKwargsForChatTemplate(TypedDict, total=False):
processor_kwargs: ProcessingKwargs
mm_load_kwargs: ChatTemplateLoadKwargs
template_kwargs: ProcessorChatTemplateKwargs
|
class AllKwargsForChatTemplate(TypedDict, total=False):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 6
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 3
| 0
| 3
| 3
| 1
| 0
| 2
| 1
| 1
| 0
| 2
| 0
| 0
|
6,460
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.AudioKwargs
|
from .tokenization_utils_base import PaddingStrategy, PreTokenizedInput, PreTrainedTokenizerBase, TextInput, TruncationStrategy
from typing import Any, Optional, TypedDict, TypeVar, Union
class AudioKwargs(TypedDict, total=False):
"""
Keyword arguments for audio processing.
Attributes:
sampling_rate (`int`, *optional*):
The sampling rate at which the `raw_speech` input was sampled.
raw_speech (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`):
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'`
max_length (`int`, *optional*):
Maximum length of the returned list and optionally padding length (see above).
truncation (`bool`, *optional*):
Activates truncation to cut input sequences longer than *max_length* to *max_length*.
pad_to_multiple_of (`int`, *optional*):
If set, will pad the sequence to a multiple of the provided value.
return_attention_mask (`bool`, *optional*):
Whether or not [`~ASTFeatureExtractor.__call__`] should return `attention_mask`.
"""
sampling_rate: Optional[int]
raw_speech: Optional[Union['np.ndarray', list[float], list['np.ndarray'], list[list[float]]]]
padding: Optional[Union[bool, str, PaddingStrategy]]
max_length: Optional[int]
truncation: Optional[bool]
pad_to_multiple_of: Optional[int]
return_attention_mask: Optional[bool]
|
class AudioKwargs(TypedDict, total=False):
'''
Keyword arguments for audio processing.
Attributes:
sampling_rate (`int`, *optional*):
The sampling rate at which the `raw_speech` input was sampled.
raw_speech (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`):
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'`
max_length (`int`, *optional*):
Maximum length of the returned list and optionally padding length (see above).
truncation (`bool`, *optional*):
Activates truncation to cut input sequences longer than *max_length* to *max_length*.
pad_to_multiple_of (`int`, *optional*):
If set, will pad the sequence to a multiple of the provided value.
return_attention_mask (`bool`, *optional*):
Whether or not [`~ASTFeatureExtractor.__call__`] should return `attention_mask`.
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 3.25
| 2
| 0
| 0
| 2
| 0
| 0
| 0
| 0
| 37
| 3
| 8
| 1
| 7
| 26
| 8
| 1
| 7
| 0
| 1
| 0
| 0
|
6,461
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.CommonKwargs
|
from .utils import AUDIO_TOKENIZER_NAME, CHAT_TEMPLATE_DIR, CHAT_TEMPLATE_FILE, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE, PROCESSOR_NAME, PushToHubMixin, TensorType, cached_file, copy_func, direct_transformers_import, download_url, is_offline_mode, is_remote_url, is_torch_available, list_repo_templates, logging
from typing import Any, Optional, TypedDict, TypeVar, Union
class CommonKwargs(TypedDict, total=False):
return_tensors: Optional[Union[str, TensorType]]
|
class CommonKwargs(TypedDict, total=False):
pass
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 2
| 0
| 0
| 2
| 0
| 0
| 0
| 0
| 2
| 0
| 2
| 1
| 1
| 0
| 2
| 1
| 1
| 0
| 1
| 0
| 0
|
6,462
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.ImagesKwargs
|
from .image_utils import ChannelDimension, ImageInput, is_vision_available
from typing import Any, Optional, TypedDict, TypeVar, Union
class ImagesKwargs(TypedDict, total=False):
"""
Keyword arguments for image processing. For extended documentation, check the appropriate ImageProcessor
class methods and docstrings.
Attributes:
do_resize (`bool`, *optional*):
Whether to resize the image.
size (`dict[str, int]`, *optional*):
Resize the shorter side of the input to `size["shortest_edge"]`.
crop_size (`dict[str, int]`, *optional*):
Desired output size when applying center-cropping.
resample (`PILImageResampling`, *optional*):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*):
Whether to rescale the image by the specified scale `rescale_factor`.
rescale_factor (`int` or `float`, *optional*):
Scale factor to use if rescaling the image.
do_normalize (`bool`, *optional*):
Whether to normalize the image.
image_mean (`float` or `list[float]`, *optional*):
Mean to use if normalizing the image.
image_std (`float` or `list[float]`, *optional*):
Standard deviation to use if normalizing the image.
do_pad (`bool`, *optional*):
Whether to pad the image to the `(max_height, max_width)` of the images in the batch.
pad_size (`dict[str, int]`, *optional*):
The size `{"height": int, "width": int}` to pad the images to.
do_center_crop (`bool`, *optional*):
Whether to center crop the image.
data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the output image.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input image.
device (`str`, *optional*):
The device to use for processing (e.g. "cpu", "cuda"), only relevant for fast image processing.
"""
do_resize: Optional[bool]
size: Optional[dict[str, int]]
crop_size: Optional[dict[str, int]]
resample: Optional[Union['PILImageResampling', int]]
do_rescale: Optional[bool]
rescale_factor: Optional[float]
do_normalize: Optional[bool]
image_mean: Optional[Union[float, list[float]]]
image_std: Optional[Union[float, list[float]]]
do_pad: Optional[bool]
pad_size: Optional[dict[str, int]]
do_center_crop: Optional[bool]
data_format: Optional[ChannelDimension]
input_data_format: Optional[Union[str, ChannelDimension]]
device: Optional[str]
|
class ImagesKwargs(TypedDict, total=False):
'''
Keyword arguments for image processing. For extended documentation, check the appropriate ImageProcessor
class methods and docstrings.
Attributes:
do_resize (`bool`, *optional*):
Whether to resize the image.
size (`dict[str, int]`, *optional*):
Resize the shorter side of the input to `size["shortest_edge"]`.
crop_size (`dict[str, int]`, *optional*):
Desired output size when applying center-cropping.
resample (`PILImageResampling`, *optional*):
Resampling filter to use if resizing the image.
do_rescale (`bool`, *optional*):
Whether to rescale the image by the specified scale `rescale_factor`.
rescale_factor (`int` or `float`, *optional*):
Scale factor to use if rescaling the image.
do_normalize (`bool`, *optional*):
Whether to normalize the image.
image_mean (`float` or `list[float]`, *optional*):
Mean to use if normalizing the image.
image_std (`float` or `list[float]`, *optional*):
Standard deviation to use if normalizing the image.
do_pad (`bool`, *optional*):
Whether to pad the image to the `(max_height, max_width)` of the images in the batch.
pad_size (`dict[str, int]`, *optional*):
The size `{"height": int, "width": int}` to pad the images to.
do_center_crop (`bool`, *optional*):
Whether to center crop the image.
data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the output image.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input image.
device (`str`, *optional*):
The device to use for processing (e.g. "cpu", "cuda"), only relevant for fast image processing.
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 2.18
| 2
| 0
| 0
| 12
| 0
| 0
| 0
| 0
| 56
| 2
| 17
| 1
| 16
| 37
| 17
| 1
| 16
| 0
| 1
| 0
| 0
|
6,463
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.ProcessingKwargs
|
from typing import Any, Optional, TypedDict, TypeVar, Union
class ProcessingKwargs(TypedDict, total=False):
"""
Base class for kwargs passing to processors.
In case a model has specific kwargs that are not present in the base class or default values for existing keys,
it should have its own `ModelProcessorKwargs` class that inherits from `ProcessingKwargs` to provide:
1) Additional typed keys that this model requires to process inputs.
2) Default values for existing keys under a `_defaults` attribute.
New keys have to be defined as follows to ensure type hinting is done correctly.
```python
# adding a new image kwarg for this model
class ModelImagesKwargs(ImagesKwargs, total=False):
new_image_kwarg: Optional[bool]
class ModelProcessorKwargs(ProcessingKwargs, total=False):
images_kwargs: ModelImagesKwargs
_defaults = {
"images_kwargs: {
"new_image_kwarg": False,
}
"text_kwargs": {
"padding": "max_length",
},
}
```
For Python 3.8 compatibility, when inheriting from this class and overriding one of the kwargs,
you need to manually update the __annotations__ dictionary. This can be done as follows:
```python
class CustomProcessorKwargs(ProcessingKwargs, total=False):
images_kwargs: CustomImagesKwargs
CustomProcessorKwargs.__annotations__["images_kwargs"] = CustomImagesKwargs # python 3.8 compatibility
```
"""
_defaults = {}
common_kwargs: CommonKwargs = {**CommonKwargs.__annotations__}
text_kwargs: TextKwargs = {**TextKwargs.__annotations__}
images_kwargs: ImagesKwargs = {**ImagesKwargs.__annotations__}
videos_kwargs: VideosKwargs = {**VideosKwargs.__annotations__}
audio_kwargs: AudioKwargs = {**AudioKwargs.__annotations__}
|
class ProcessingKwargs(TypedDict, total=False):
'''
Base class for kwargs passing to processors.
In case a model has specific kwargs that are not present in the base class or default values for existing keys,
it should have its own `ModelProcessorKwargs` class that inherits from `ProcessingKwargs` to provide:
1) Additional typed keys that this model requires to process inputs.
2) Default values for existing keys under a `_defaults` attribute.
New keys have to be defined as follows to ensure type hinting is done correctly.
```python
# adding a new image kwarg for this model
class ModelImagesKwargs(ImagesKwargs, total=False):
new_image_kwarg: Optional[bool]
class ModelProcessorKwargs(ProcessingKwargs, total=False):
images_kwargs: ModelImagesKwargs
_defaults = {
"images_kwargs: {
"new_image_kwarg": False,
}
"text_kwargs": {
"padding": "max_length",
},
}
```
For Python 3.8 compatibility, when inheriting from this class and overriding one of the kwargs,
you need to manually update the __annotations__ dictionary. This can be done as follows:
```python
class CustomProcessorKwargs(ProcessingKwargs, total=False):
images_kwargs: CustomImagesKwargs
CustomProcessorKwargs.__annotations__["images_kwargs"] = CustomImagesKwargs # python 3.8 compatibility
```
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 1.81
| 6
| 0
| 0
| 37
| 0
| 0
| 0
| 0
| 53
| 8
| 16
| 6
| 15
| 29
| 6
| 6
| 5
| 0
| 2
| 0
| 0
|
6,464
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.ProcessorMixin
|
import warnings
import os
from .audio_utils import AudioInput, load_audio
from typing import Any, Optional, TypedDict, TypeVar, Union
from .tokenization_utils_base import PaddingStrategy, PreTokenizedInput, PreTrainedTokenizerBase, TextInput, TruncationStrategy
from .utils.chat_template_utils import render_jinja_template
from .utils import AUDIO_TOKENIZER_NAME, CHAT_TEMPLATE_DIR, CHAT_TEMPLATE_FILE, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE, PROCESSOR_NAME, PushToHubMixin, TensorType, cached_file, copy_func, direct_transformers_import, download_url, is_offline_mode, is_remote_url, is_torch_available, list_repo_templates, logging
import copy
from huggingface_hub.errors import EntryNotFoundError
from .feature_extraction_utils import BatchFeature
from .image_utils import ChannelDimension, ImageInput, is_vision_available
from .utils.deprecation import deprecate_kwarg
from pathlib import Path
import inspect
import json
import bisect
from .video_utils import VideoInput, VideoMetadata
import numpy as np
from .dynamic_module_utils import custom_object_save
class ProcessorMixin(PushToHubMixin):
"""
This is a mixin used to provide saving/loading functionality for all processor classes.
"""
attributes = ['feature_extractor', 'tokenizer']
optional_attributes = ['chat_template', 'audio_tokenizer']
optional_call_args: list[str] = []
feature_extractor_class = None
tokenizer_class = None
_auto_class = None
valid_processor_kwargs = ProcessingKwargs
def __init__(self, *args, **kwargs):
for optional_attribute in self.optional_attributes:
optional_attribute_value = kwargs.pop(optional_attribute, None)
setattr(self, optional_attribute, optional_attribute_value)
if optional_attribute == 'audio_tokenizer' and optional_attribute_value is not None:
proper_class = self.check_argument_for_proper_class(optional_attribute, optional_attribute_value)
if not (is_torch_available() and isinstance(optional_attribute_value, PreTrainedAudioTokenizerBase)):
raise ValueError(f'Tried to use `{proper_class}` for audio tokenization. However, this class is not registered for audio tokenization.')
for key in kwargs:
if key not in self.attributes:
raise TypeError(f'Unexpected keyword argument {key}.')
for arg, attribute_name in zip(args, self.attributes):
if attribute_name in kwargs:
raise TypeError(f'Got multiple values for argument {attribute_name}.')
else:
kwargs[attribute_name] = arg
if len(kwargs) != len(self.attributes):
raise ValueError(f"This processor requires {len(self.attributes)} arguments: {', '.join(self.attributes)}. Got {len(args)} arguments instead.")
for attribute_name, arg in kwargs.items():
self.check_argument_for_proper_class(attribute_name, arg)
setattr(self, attribute_name, arg)
def __call__(self, images: Optional[ImageInput]=None, text: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, videos: Optional[VideoInput]=None, audio: Optional[AudioInput]=None, **kwargs: Unpack[ProcessingKwargs]):
"""
Main method to prepare inputs for the model. This method forwards each modality argument to its own processor
along with `kwargs`. Please refer to the docstring of each processor attribute for more information.
Args:
images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `list[PIL.Image.Image]`, `list[np.ndarray]`, `list[torch.Tensor]`):
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. Both channels-first and channels-last formats are supported.
text (`TextInput`, `PreTokenizedInput`, `list[TextInput]`, `list[PreTokenizedInput]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
videos (`np.ndarray`, `torch.Tensor`, `List[np.ndarray]`, `List[torch.Tensor]`):
The video or batch of videos to be prepared. Each video can be a 4D NumPy array or PyTorch
tensor, or a nested list of 3D frames. Both channels-first and channels-last formats are supported.
audio (`np.ndarray`, `torch.Tensor`, `list[np.ndarray]`, `list[torch.Tensor]`):
The audio or batch of audio to be prepared. Each audio can be a NumPy array or PyTorch
tensor.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
Returns:
[`BatchFeature`]: A [`BatchFeature`] object with processed inputs in a dict format.
"""
if images is None and text is None and (videos is None) and (audio is None):
raise ValueError(f'You need to provide at least one input to call {self.__class__.__name__}')
kwargs = self._merge_kwargs(self.valid_processor_kwargs, tokenizer_init_kwargs=self.tokenizer.init_kwargs if hasattr(self, 'tokenizer') else {}, **kwargs)
attribute_to_kwargs = {'tokenizer': (text, 'text_kwargs'), 'image_processor': (images, 'images_kwargs'), 'video_processor': (videos, 'videos_kwargs'), 'feature_extractor': (audio, 'audio_kwargs')}
outputs = {}
for attribute_name in self.attributes:
attribute = getattr(self, attribute_name, None)
input_data, input_kwargs = attribute_to_kwargs[attribute_name]
if input_data is not None and attribute is not None:
attribute_output = attribute(input_data, **kwargs[input_kwargs])
outputs.update(attribute_output)
return BatchFeature(outputs)
def check_argument_for_proper_class(self, argument_name, argument):
"""
Checks the passed argument's class against the expected transformers class. In case of an unexpected
mismatch between the expected and actual class, an error is raised. Otherwise, the proper retrieved class
is returned.
"""
class_name = getattr(self, f'{argument_name}_class')
class_name = AUTO_TO_BASE_CLASS_MAPPING.get(class_name, class_name)
if isinstance(class_name, tuple):
proper_class = tuple((self.get_possibly_dynamic_module(n) for n in class_name if n is not None))
else:
proper_class = self.get_possibly_dynamic_module(class_name)
if not isinstance(argument, proper_class):
raise TypeError(f'Received a {type(argument).__name__} for argument {argument_name}, but a {class_name} was expected.')
return proper_class
def to_dict(self, legacy_serialization=True) -> dict[str, Any]:
"""
Serializes this instance to a Python dictionary.
Returns:
`dict[str, Any]`: Dictionary of all the attributes that make up this processor instance.
"""
output = copy.deepcopy(self.__dict__)
sig = inspect.signature(self.__init__)
attrs_to_save = list(sig.parameters)
attrs_to_save += ['auto_map']
if legacy_serialization:
attrs_to_save = [x for x in attrs_to_save if x not in self.__class__.attributes]
if 'tokenizer' in output:
del output['tokenizer']
if 'qformer_tokenizer' in output:
del output['qformer_tokenizer']
if 'protein_tokenizer' in output:
del output['protein_tokenizer']
if 'chat_template' in output:
del output['chat_template']
def cast_array_to_list(dictionary):
"""
Numpy arrays are not serializable but can be in pre-processing dicts.
This function casts arrays to lists, recursing through the nested configs as well.
"""
for key, value in dictionary.items():
if isinstance(value, np.ndarray):
dictionary[key] = value.tolist()
elif isinstance(value, dict):
dictionary[key] = cast_array_to_list(value)
return dictionary
output = {k: v.to_dict() if isinstance(v, PushToHubMixin) else v for k, v in output.items() if k in attrs_to_save and v.__class__.__name__ != 'BeamSearchDecoderCTC' and (legacy_serialization and (not isinstance(v, PushToHubMixin)) or not legacy_serialization)}
output = cast_array_to_list(output)
if not legacy_serialization and 'audio_tokenizer' in output:
audio_tokenizer_dict = {'audio_tokenizer_class': self.audio_tokenizer.__class__.__name__, 'audio_tokenizer_name_or_path': self.audio_tokenizer.name_or_path}
output['audio_tokenizer'] = audio_tokenizer_dict
output['processor_class'] = self.__class__.__name__
return output
def to_json_string(self, legacy_serialization=True) -> str:
"""
Serializes this instance to a JSON string.
Returns:
`str`: String containing all the attributes that make up this feature_extractor instance in JSON format.
"""
dictionary = self.to_dict(legacy_serialization=legacy_serialization)
return json.dumps(dictionary, indent=2, sort_keys=True) + '\n'
def to_json_file(self, json_file_path: Union[str, os.PathLike], legacy_serialization=True):
"""
Save this instance to a JSON file.
Args:
json_file_path (`str` or `os.PathLike`):
Path to the JSON file in which this processor instance's parameters will be saved.
"""
with open(json_file_path, 'w', encoding='utf-8') as writer:
writer.write(self.to_json_string(legacy_serialization=legacy_serialization))
def __repr__(self):
attributes_repr = [f'- {name}: {repr(getattr(self, name))}' for name in self.attributes]
attributes_repr = '\n'.join(attributes_repr)
return f'{self.__class__.__name__}:\n{attributes_repr}\n\n{self.to_json_string()}'
def save_pretrained(self, save_directory, push_to_hub: bool=False, legacy_serialization: bool=True, **kwargs):
"""
Saves the attributes of this processor (feature extractor, tokenizer...) in the specified directory so that it
can be reloaded using the [`~ProcessorMixin.from_pretrained`] method.
<Tip>
This class method is simply calling [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] and
[`~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`]. Please refer to the docstrings of the
methods above for more information.
</Tip>
Args:
save_directory (`str` or `os.PathLike`):
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
legacy_serialization (`bool`, *optional*, defaults to `True`):
Whether or not to save processor attributes in separate config files (legacy) or in processor's config
file as a nested dict. Saving all attributes in a single dict will become the default in future versions.
Set to `legacy_serialization=True` until then.
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
use_auth_token = kwargs.pop('use_auth_token', None)
if use_auth_token is not None:
warnings.warn('The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.', FutureWarning)
if kwargs.get('token') is not None:
raise ValueError('`token` and `use_auth_token` are both specified. Please set only the argument `token`.')
kwargs['token'] = use_auth_token
os.makedirs(save_directory, exist_ok=True)
if push_to_hub:
commit_message = kwargs.pop('commit_message', None)
repo_id = kwargs.pop('repo_id', save_directory.split(os.path.sep)[-1])
repo_id = self._create_repo(repo_id, **kwargs)
files_timestamps = self._get_files_timestamps(save_directory)
if self._auto_class is not None:
attrs = [getattr(self, attribute_name) for attribute_name in self.attributes]
configs = [a.init_kwargs if isinstance(a, PreTrainedTokenizerBase) else a for a in attrs]
configs.append(self)
custom_object_save(self, save_directory, config=configs)
save_jinja_files = kwargs.get('save_jinja_files', True)
for attribute_name in self.attributes:
if attribute_name == 'tokenizer':
attribute = getattr(self, attribute_name)
if hasattr(attribute, '_set_processor_class'):
attribute._set_processor_class(self.__class__.__name__)
attribute.save_pretrained(save_directory, save_jinja_files=save_jinja_files)
elif legacy_serialization:
attribute = getattr(self, attribute_name)
if hasattr(attribute, '_set_processor_class'):
attribute._set_processor_class(self.__class__.__name__)
attribute.save_pretrained(save_directory)
if self._auto_class is not None:
for attribute_name in self.attributes:
attribute = getattr(self, attribute_name)
if isinstance(attribute, PreTrainedTokenizerBase):
del attribute.init_kwargs['auto_map']
output_processor_file = os.path.join(save_directory, PROCESSOR_NAME)
output_chat_template_file_jinja = os.path.join(save_directory, CHAT_TEMPLATE_FILE)
output_chat_template_file_legacy = os.path.join(save_directory, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE)
chat_template_dir = os.path.join(save_directory, CHAT_TEMPLATE_DIR)
if self.chat_template is not None:
save_jinja_files = kwargs.get('save_jinja_files', True)
is_single_template = isinstance(self.chat_template, str)
if save_jinja_files and is_single_template:
with open(output_chat_template_file_jinja, 'w', encoding='utf-8') as f:
f.write(self.chat_template)
logger.info(f'chat template saved in {output_chat_template_file_jinja}')
elif save_jinja_files and (not is_single_template):
for template_name, template in self.chat_template.items():
if template_name == 'default':
with open(output_chat_template_file_jinja, 'w', encoding='utf-8') as f:
f.write(self.chat_template['default'])
logger.info(f'chat template saved in {output_chat_template_file_jinja}')
else:
os.makedirs(chat_template_dir, exist_ok=True)
template_filepath = os.path.join(chat_template_dir, f'{template_name}.jinja')
with open(template_filepath, 'w', encoding='utf-8') as f:
f.write(template)
logger.info(f'chat template saved in {template_filepath}')
elif is_single_template:
chat_template_json_string = json.dumps({'chat_template': self.chat_template}, indent=2, sort_keys=True) + '\n'
with open(output_chat_template_file_legacy, 'w', encoding='utf-8') as writer:
writer.write(chat_template_json_string)
logger.info(f'chat template saved in {output_chat_template_file_legacy}')
elif self.chat_template is not None:
raise ValueError('Multiple chat templates are not supported in the legacy format. Please save them as separate files using the `save_jinja_files` argument.')
if legacy_serialization:
output_audio_tokenizer_file = os.path.join(save_directory, AUDIO_TOKENIZER_NAME)
processor_dict = self.to_dict()
if set(processor_dict.keys()) != {'processor_class'}:
self.to_json_file(output_processor_file)
logger.info(f'processor saved in {output_processor_file}')
if set(processor_dict.keys()) == {'processor_class'}:
return_files = []
else:
return_files = [output_processor_file]
if self.audio_tokenizer is not None:
audio_tokenizer_class = self.audio_tokenizer.__class__.__name__
audio_tokenizer_name_or_path = self.audio_tokenizer.name_or_path
audio_tokenizer_dict = {'audio_tokenizer_class': audio_tokenizer_class, 'audio_tokenizer_name_or_path': audio_tokenizer_name_or_path}
audio_tokenizer_json = json.dumps(audio_tokenizer_dict, indent=2, sort_keys=True) + '\n'
with open(output_audio_tokenizer_file, 'w', encoding='utf-8') as writer:
writer.write(audio_tokenizer_json)
else:
self.to_json_file(output_processor_file, legacy_serialization=False)
logger.info(f'processor saved in {output_processor_file}')
return_files = [output_processor_file]
if push_to_hub:
self._upload_modified_files(save_directory, repo_id, files_timestamps, commit_message=commit_message, token=kwargs.get('token'))
return return_files
@classmethod
def get_processor_dict(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> tuple[dict[str, Any], dict[str, Any]]:
"""
From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
processor of type [`~processing_utils.ProcessorMixin`] using `from_args_and_dict`.
Parameters:
pretrained_model_name_or_path (`str` or `os.PathLike`):
The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.
subfolder (`str`, *optional*, defaults to `""`):
In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
specify the folder name here.
Returns:
`tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the processor object.
"""
audio_tokenizer_kwargs = copy.deepcopy(kwargs)
cache_dir = kwargs.pop('cache_dir', None)
force_download = kwargs.pop('force_download', False)
resume_download = kwargs.pop('resume_download', None)
proxies = kwargs.pop('proxies', None)
token = kwargs.pop('token', None)
local_files_only = kwargs.pop('local_files_only', False)
revision = kwargs.pop('revision', None)
subfolder = kwargs.pop('subfolder', '')
from_pipeline = kwargs.pop('_from_pipeline', None)
from_auto_class = kwargs.pop('_from_auto', False)
user_agent = {'file_type': 'processor', 'from_auto_class': from_auto_class}
if from_pipeline is not None:
user_agent['using_pipeline'] = from_pipeline
if is_offline_mode() and (not local_files_only):
logger.info('Offline mode: forcing local_files_only=True')
local_files_only = True
pretrained_model_name_or_path = str(pretrained_model_name_or_path)
is_local = os.path.isdir(pretrained_model_name_or_path)
if os.path.isdir(pretrained_model_name_or_path):
processor_file = os.path.join(pretrained_model_name_or_path, PROCESSOR_NAME)
additional_chat_template_files = {}
resolved_additional_chat_template_files = {}
if os.path.isfile(pretrained_model_name_or_path):
resolved_processor_file = pretrained_model_name_or_path
resolved_chat_template_file = None
resolved_raw_chat_template_file = None
resolved_audio_tokenizer_file = None
is_local = True
elif is_remote_url(pretrained_model_name_or_path):
processor_file = pretrained_model_name_or_path
resolved_processor_file = download_url(pretrained_model_name_or_path)
resolved_chat_template_file = None
resolved_raw_chat_template_file = None
resolved_audio_tokenizer_file = None
else:
if is_local:
template_dir = Path(pretrained_model_name_or_path, CHAT_TEMPLATE_DIR)
if template_dir.is_dir():
for template_file in template_dir.glob('*.jinja'):
template_name = template_file.stem
additional_chat_template_files[template_name] = f'{CHAT_TEMPLATE_DIR}/{template_file.name}'
else:
try:
for template in list_repo_templates(pretrained_model_name_or_path, local_files_only=local_files_only, revision=revision, cache_dir=cache_dir, token=token):
additional_chat_template_files[template] = f'{CHAT_TEMPLATE_DIR}/{template}.jinja'
except EntryNotFoundError:
pass
processor_file = PROCESSOR_NAME
try:
resolved_processor_file = cached_file(pretrained_model_name_or_path, processor_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False)
resolved_chat_template_file = cached_file(pretrained_model_name_or_path, LEGACY_PROCESSOR_CHAT_TEMPLATE_FILE, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False)
resolved_raw_chat_template_file = cached_file(pretrained_model_name_or_path, CHAT_TEMPLATE_FILE, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False)
resolved_additional_chat_template_files = {template_name: cached_file(pretrained_model_name_or_path, template_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False) for template_name, template_file in additional_chat_template_files.items()}
resolved_audio_tokenizer_file = cached_file(pretrained_model_name_or_path, AUDIO_TOKENIZER_NAME, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False)
except OSError:
raise
except Exception:
raise OSError(f"Can't load processor for '{pretrained_model_name_or_path}'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory containing a {PROCESSOR_NAME} file")
if resolved_chat_template_file is not None:
with open(resolved_chat_template_file, encoding='utf-8') as reader:
chat_template_json = json.loads(reader.read())
chat_templates = {'default': chat_template_json['chat_template']}
if resolved_additional_chat_template_files:
raise ValueError('Cannot load chat template due to conflicting files - this checkpoint combines a legacy chat_template.json file with separate template files, which is not supported. To resolve this error, replace the legacy chat_template.json file with a modern chat_template.jinja file.')
else:
chat_templates = {}
for template_name, template_file in resolved_additional_chat_template_files.items():
with open(template_file, 'r', encoding='utf-8') as reader:
chat_templates[template_name] = reader.read()
if resolved_raw_chat_template_file is not None:
with open(resolved_raw_chat_template_file, 'r', encoding='utf-8') as reader:
chat_templates['default'] = reader.read()
if isinstance(chat_templates, dict) and 'default' in chat_templates and (len(chat_templates) == 1):
chat_templates = chat_templates['default']
if chat_templates:
kwargs['chat_template'] = chat_templates
if resolved_processor_file is None:
processor_dict = {}
else:
try:
with open(resolved_processor_file, encoding='utf-8') as reader:
text = reader.read()
processor_dict = json.loads(text)
except json.JSONDecodeError:
raise OSError(f"It looks like the config file at '{resolved_processor_file}' is not a valid JSON file.")
if is_local:
logger.info(f'loading configuration file {resolved_processor_file}')
else:
logger.info(f'loading configuration file {processor_file} from cache at {resolved_processor_file}')
if 'chat_template' in processor_dict and processor_dict['chat_template'] is not None:
logger.warning_once("Chat templates should be in a 'chat_template.jinja' file but found key='chat_template' in the processor's config. Make sure to move your template to its own file.")
if 'chat_template' in kwargs:
processor_dict['chat_template'] = kwargs.pop('chat_template')
if resolved_audio_tokenizer_file is not None or 'audio_tokenizer' in processor_dict:
if resolved_audio_tokenizer_file is not None:
with open(resolved_audio_tokenizer_file, 'r', encoding='utf-8') as reader:
audio_tokenizer_dict = json.loads(reader.read())
else:
audio_tokenizer_dict = processor_dict['audio_tokenizer']
audio_tokenizer_class = cls.get_possibly_dynamic_module(audio_tokenizer_dict['audio_tokenizer_class'])
audio_tokenizer_path = audio_tokenizer_dict['audio_tokenizer_name_or_path']
processor_dict['audio_tokenizer'] = audio_tokenizer_class.from_pretrained(audio_tokenizer_path, **audio_tokenizer_kwargs)
for attribute in cls.attributes:
processor_dict.pop(attribute, None)
return (processor_dict, kwargs)
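The template-collapsing step near the end of the loading logic above is easy to miss: a dict holding only a `"default"` entry is flattened to a bare string. A minimal standalone sketch (template strings invented for illustration):

```python
# Sketch of the chat-template normalization performed after loading: a dict
# with only a "default" key collapses to a plain string, so single-template
# processors expose chat_template as a string rather than a one-entry dict.
def normalize_templates(chat_templates):
    if isinstance(chat_templates, dict) and 'default' in chat_templates and len(chat_templates) == 1:
        return chat_templates['default']
    return chat_templates

single = normalize_templates({'default': '{{ messages }}'})
multi = normalize_templates({'default': 'a', 'rag': 'b'})
```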
@classmethod
def from_args_and_dict(cls, args, processor_dict: dict[str, Any], **kwargs):
"""
Instantiates a type of [`~processing_utils.ProcessingMixin`] from a Python dictionary of parameters.
Args:
processor_dict (`dict[str, Any]`):
Dictionary that will be used to instantiate the processor object. Such a dictionary can be
retrieved from a pretrained checkpoint by leveraging the
[`~processing_utils.ProcessingMixin.to_dict`] method.
kwargs (`dict[str, Any]`):
Additional parameters from which to initialize the processor object.
Returns:
[`~processing_utils.ProcessingMixin`]: The processor object instantiated from those
parameters.
"""
processor_dict = processor_dict.copy()
return_unused_kwargs = kwargs.pop('return_unused_kwargs', False)
if 'processor_class' in processor_dict:
del processor_dict['processor_class']
if 'auto_map' in processor_dict:
del processor_dict['auto_map']
processor_dict.update(kwargs)
accepted_args_and_kwargs = cls.__init__.__code__.co_varnames[:cls.__init__.__code__.co_argcount][1:]
unused_kwargs, valid_kwargs = cls.validate_init_kwargs(processor_config=processor_dict, valid_kwargs=accepted_args_and_kwargs)
args_to_update = {i: valid_kwargs.pop(arg) for i, arg in enumerate(accepted_args_and_kwargs) if arg in valid_kwargs and i < len(args)}
args = [args_to_update.get(i, arg) for i, arg in enumerate(args)]
processor = cls(*args, **valid_kwargs)
logger.info(f'Processor {processor}')
if return_unused_kwargs:
return (processor, unused_kwargs)
else:
return processor
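The positional-argument override above relies on introspecting `__init__`'s code object. A self-contained sketch with a hypothetical `Toy` class shows the mechanism: parameter names are read from `co_varnames`, and any matching entry in the config dict replaces the corresponding positional slot.

```python
# Sketch of how from_args_and_dict lets processor_dict entries override
# positional args. Toy and its argument values are invented for illustration.
class Toy:
    def __init__(self, tokenizer, image_processor=None):
        self.tokenizer = tokenizer
        self.image_processor = image_processor

# parameter names accepted by __init__, excluding 'self'
accepted = Toy.__init__.__code__.co_varnames[:Toy.__init__.__code__.co_argcount][1:]
args = ['tok_from_args', 'imgproc_from_args']
valid_kwargs = {'tokenizer': 'tok_from_dict'}
# dict values whose name matches an already-filled positional slot win
overrides = {i: valid_kwargs.pop(name) for i, name in enumerate(accepted)
             if name in valid_kwargs and i < len(args)}
args = [overrides.get(i, a) for i, a in enumerate(args)]
```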
def _merge_kwargs(self, ModelProcessorKwargs: ProcessingKwargs, tokenizer_init_kwargs: Optional[dict]=None, **kwargs) -> dict[str, dict]:
"""
Method to merge dictionaries of kwargs cleanly separated by modality within a Processor instance.
The order of operations is as follows:
1) kwargs passed as before have highest priority to preserve BC.
```python
high_priority_kwargs = {"crop_size": {"height": 222, "width": 222}, "padding": "max_length"}
processor(..., **high_priority_kwargs)
```
2) kwargs passed as modality-specific kwargs have second priority. This is the recommended API.
```python
processor(..., text_kwargs={"padding": "max_length"}, images_kwargs={"crop_size": {"height": 222, "width": 222}})
```
3) kwargs passed during instantiation of a modality processor have third priority.
```python
tokenizer = tokenizer_class(..., {"padding": "max_length"})
image_processor = image_processor_class(...)
processor(tokenizer, image_processor) # will pass max_length unless overridden by kwargs at call
```
4) defaults kwargs specified at processor level have lowest priority.
```python
class MyProcessingKwargs(ProcessingKwargs, CommonKwargs, TextKwargs, ImagesKwargs, total=False):
_defaults = {
"text_kwargs": {
"padding": "max_length",
"max_length": 64,
},
}
```
Args:
ModelProcessorKwargs (`ProcessingKwargs`):
Typed dictionary of kwargs specifically required by the model passed.
tokenizer_init_kwargs (`Dict`, *optional*):
Dictionary of kwargs the tokenizer was instantiated with, which need to take precedence over defaults.
Returns:
output_kwargs (`Dict`):
Dictionary of per-modality kwargs to be passed to each modality-specific processor.
"""
output_kwargs = {'text_kwargs': {}, 'images_kwargs': {}, 'audio_kwargs': {}, 'videos_kwargs': {}, 'common_kwargs': {}}
default_kwargs = {'text_kwargs': {}, 'images_kwargs': {}, 'audio_kwargs': {}, 'videos_kwargs': {}, 'common_kwargs': {}}
possible_modality_keywords = {'text', 'audio', 'videos', 'images'}
used_keys = set()
for modality in default_kwargs:
default_kwargs[modality] = ModelProcessorKwargs._defaults.get(modality, {}).copy()
for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__:
if tokenizer_init_kwargs is not None and modality_key in tokenizer_init_kwargs:
value = getattr(self.tokenizer, modality_key) if hasattr(self.tokenizer, modality_key) else tokenizer_init_kwargs[modality_key]
default_kwargs[modality][modality_key] = value
output_kwargs.update(default_kwargs)
non_modality_kwargs = set(kwargs) - set(output_kwargs)
for modality, output_kwarg in output_kwargs.items():
for modality_key in ModelProcessorKwargs.__annotations__[modality].__annotations__:
if modality in kwargs:
kwarg_value = kwargs[modality].pop(modality_key, '__empty__')
if kwarg_value != '__empty__' and modality_key in non_modality_kwargs:
raise ValueError(f'Keyword argument {modality_key} was passed two times:\nin a dictionary for {modality} and as a **kwarg.')
elif modality_key in kwargs:
kwarg_value = kwargs.get(modality_key, '__empty__')
else:
kwarg_value = '__empty__'
if not isinstance(kwarg_value, str) or kwarg_value != '__empty__':
output_kwarg[modality_key] = kwarg_value
used_keys.add(modality_key)
if any((key in default_kwargs for key in kwargs)):
for modality, subdict in kwargs.items():
if modality in default_kwargs:
for subkey, subvalue in subdict.items():
if subkey not in used_keys:
output_kwargs[modality][subkey] = subvalue
used_keys.add(subkey)
else:
for key, kwarg in kwargs.items():
if key not in used_keys:
if key in ModelProcessorKwargs.__annotations__['common_kwargs'].__annotations__:
output_kwargs['common_kwargs'][key] = kwarg
elif key not in possible_modality_keywords:
logger.warning_once(f'Keyword argument `{key}` is not a valid argument for this processor and will be ignored.')
for kwarg in output_kwargs.values():
kwarg.update(output_kwargs['common_kwargs'])
return output_kwargs
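The four-level priority described in the docstring above reduces to a chain of dict updates in which later layers win. A simplified sketch (all values invented; the real method additionally validates conflicts and splits kwargs per modality):

```python
# Simplified illustration of _merge_kwargs priority, lowest to highest:
# processor-level _defaults < tokenizer init kwargs < modality-specific
# call kwargs < flat call kwargs. Later dict.update calls override earlier ones.
def merge(defaults, init_kwargs, modality_kwargs, flat_kwargs):
    merged = dict(defaults)
    merged.update(init_kwargs)      # tokenizer init kwargs
    merged.update(modality_kwargs)  # e.g. text_kwargs={...}
    merged.update(flat_kwargs)      # direct **kwargs, highest priority
    return merged

text_kwargs = merge(
    {'padding': 'max_length', 'max_length': 64},  # _defaults
    {'padding': 'longest'},                       # tokenizer init
    {'max_length': 128},                          # text_kwargs={...}
    {'padding': True},                            # direct **kwarg
)
```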
@classmethod
def from_pretrained(cls: type[SpecificProcessorType], pretrained_model_name_or_path: Union[str, os.PathLike], cache_dir: Optional[Union[str, os.PathLike]]=None, force_download: bool=False, local_files_only: bool=False, token: Optional[Union[str, bool]]=None, revision: str='main', **kwargs) -> SpecificProcessorType:
"""
Instantiate a processor associated with a pretrained model.
<Tip>
This class method is simply calling the feature extractor
[`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`], image processor
[`~image_processing_utils.ImageProcessingMixin`] and the tokenizer
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`] methods. Please refer to the docstrings of the
methods above for more information.
</Tip>
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
This can be either:
- a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on
huggingface.co.
- a path to a *directory* containing a feature extractor file saved using the
[`~SequenceFeatureExtractor.save_pretrained`] method, e.g., `./my_model_directory/`.
- a path or url to a saved feature extractor JSON *file*, e.g.,
`./my_model_directory/preprocessor_config.json`.
**kwargs
Additional keyword arguments passed along to both
[`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] and
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`].
"""
kwargs['cache_dir'] = cache_dir
kwargs['force_download'] = force_download
kwargs['local_files_only'] = local_files_only
kwargs['revision'] = revision
use_auth_token = kwargs.pop('use_auth_token', None)
if use_auth_token is not None:
warnings.warn('The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.', FutureWarning)
if token is not None:
raise ValueError('`token` and `use_auth_token` are both specified. Please set only the argument `token`.')
token = use_auth_token
if token is not None:
kwargs['token'] = token
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
processor_dict, kwargs = cls.get_processor_dict(pretrained_model_name_or_path, **kwargs)
return cls.from_args_and_dict(args, processor_dict, **kwargs)
@classmethod
def register_for_auto_class(cls, auto_class='AutoProcessor'):
"""
Register this class with a given auto class. This should only be used for custom feature extractors as the ones
in the library are already mapped with `AutoProcessor`.
Args:
auto_class (`str` or `type`, *optional*, defaults to `"AutoProcessor"`):
The auto class to register this new feature extractor with.
"""
if not isinstance(auto_class, str):
auto_class = auto_class.__name__
import transformers.models.auto as auto_module
if not hasattr(auto_module, auto_class):
raise ValueError(f'{auto_class} is not a valid auto class.')
cls._auto_class = auto_class
@classmethod
def _get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
"""
Identify and instantiate the subcomponents of Processor classes, like image processors and
tokenizers. This method uses the Processor attributes like `tokenizer_class` to figure out what class those
subcomponents should be. Note that any subcomponents must either be library classes that are accessible in
the `transformers` root, or they must be custom code that has been registered with the relevant autoclass,
via methods like `AutoTokenizer.register()`. If neither of these conditions are fulfilled, this method
will be unable to find the relevant subcomponent class and will raise an error.
"""
args = []
for attribute_name in cls.attributes:
class_name = getattr(cls, f'{attribute_name}_class')
if isinstance(class_name, tuple):
classes = tuple((cls.get_possibly_dynamic_module(n) if n is not None else None for n in class_name))
if attribute_name == 'image_processor':
use_fast = kwargs.get('use_fast')
if use_fast is None:
logger.warning_once("Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.")
else:
use_fast = kwargs.get('use_fast', True)
if use_fast and classes[1] is not None:
attribute_class = classes[1]
else:
attribute_class = classes[0]
else:
attribute_class = cls.get_possibly_dynamic_module(class_name)
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
return args
@staticmethod
def get_possibly_dynamic_module(module_name):
if hasattr(transformers_module, module_name):
return getattr(transformers_module, module_name)
lookup_locations = [transformers_module.IMAGE_PROCESSOR_MAPPING, transformers_module.VIDEO_PROCESSOR_MAPPING, transformers_module.TOKENIZER_MAPPING, transformers_module.FEATURE_EXTRACTOR_MAPPING, transformers_module.MODEL_FOR_AUDIO_TOKENIZATION_MAPPING]
for lookup_location in lookup_locations:
for custom_class in lookup_location._extra_content.values():
if isinstance(custom_class, tuple):
for custom_subclass in custom_class:
if custom_subclass is not None and custom_subclass.__name__ == module_name:
return custom_subclass
elif custom_class is not None and custom_class.__name__ == module_name:
return custom_class
raise ValueError(f'Could not find module {module_name} in `transformers`. If this is a custom class, it should be registered using the relevant `AutoClass.register()` function so that other functions can find it!')
def batch_decode(self, *args, **kwargs):
"""
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
refer to the docstring of this method for more information.
"""
if not hasattr(self, 'tokenizer'):
raise ValueError(f'Cannot batch decode text: {self.__class__.__name__} has no tokenizer.')
return self.tokenizer.batch_decode(*args, **kwargs)
def decode(self, *args, **kwargs):
"""
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to
the docstring of this method for more information.
"""
if not hasattr(self, 'tokenizer'):
raise ValueError(f'Cannot decode text: {self.__class__.__name__} has no tokenizer.')
return self.tokenizer.decode(*args, **kwargs)
@property
def model_input_names(self):
model_input_names = []
for attribute_name in self.attributes:
attribute = getattr(self, attribute_name, None)
attr_input_names = getattr(attribute, 'model_input_names')
model_input_names.extend(attr_input_names)
return model_input_names
@staticmethod
def validate_init_kwargs(processor_config, valid_kwargs):
kwargs_from_config = set(processor_config.keys())
valid_kwargs_set = set(valid_kwargs)
unused_keys = kwargs_from_config - valid_kwargs_set
valid_keys = kwargs_from_config & valid_kwargs_set
unused_kwargs = {k: processor_config[k] for k in unused_keys} if unused_keys else {}
valid_kwargs = {k: processor_config[k] for k in valid_keys} if valid_keys else {}
return (unused_kwargs, valid_kwargs)
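`validate_init_kwargs` is pure set arithmetic: the config keys not accepted by `__init__` become "unused", the intersection becomes the constructor kwargs. A standalone sketch with invented key names:

```python
# Standalone sketch of the set-based kwarg split in validate_init_kwargs:
# keys present in the config but absent from the accepted-parameter list are
# returned as unused; the intersection is returned as valid constructor kwargs.
def split_kwargs(processor_config, valid_kwargs):
    config_keys = set(processor_config)
    valid_set = set(valid_kwargs)
    unused = {k: processor_config[k] for k in config_keys - valid_set}
    valid = {k: processor_config[k] for k in config_keys & valid_set}
    return unused, valid

unused, valid = split_kwargs(
    {'image_seq_length': 576, 'typo_key': 1},
    ['image_seq_length', 'chat_template'],
)
```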
@deprecate_kwarg('video_fps', version='4.58', new_name='fps')
@deprecate_kwarg('video_load_backend', version='4.59', additional_message='. This function will use `torchcodec` by default, or `torchvision` if `torchcodec` is not installed.')
def apply_chat_template(self, conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]], chat_template: Optional[str]=None, **kwargs: Unpack[AllKwargsForChatTemplate]) -> str:
"""
Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
conversations to turn them into a single tokenizable string.
The input is expected to be in the following format, where each message content is a list consisting of text and
optionally image or video inputs. One can also provide an image, video, URL or local path, which will be used to form
`pixel_values` when `return_dict=True`. If not provided, only the formatted text is returned, optionally tokenized.
conversation = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
{"type": "text", "text": "Please describe this image in detail."},
],
},
]
Args:
conversation (`Union[list[dict[str, str]], list[list[dict[str, str]]]]`):
The conversation to format.
chat_template (`Optional[str]`, *optional*):
The Jinja template to use for formatting the conversation. If not provided, the tokenizer's
chat template is used.
"""
if chat_template is None:
if isinstance(self.chat_template, dict) and 'default' in self.chat_template:
chat_template = self.chat_template['default']
elif isinstance(self.chat_template, dict):
raise ValueError(f"""The processor has multiple chat templates but none of them are named "default". You need to specify which one to use by passing the `chat_template` argument. Available templates are: {', '.join(self.chat_template.keys())}""")
elif self.chat_template is not None:
chat_template = self.chat_template
else:
raise ValueError('Cannot use apply_chat_template because this processor does not have a chat template.')
elif isinstance(self.chat_template, dict) and chat_template in self.chat_template:
chat_template = self.chat_template[chat_template]
else:
pass
is_tokenizers_fast = hasattr(self, 'tokenizer') and self.tokenizer.__class__.__name__.endswith('Fast')
if kwargs.get('continue_final_message', False):
if kwargs.get('add_generation_prompt', False):
raise ValueError('continue_final_message and add_generation_prompt are not compatible. Use continue_final_message when you want the model to continue the final message, and add_generation_prompt when you want to add a header that will prompt it to start a new assistant message instead.')
if kwargs.get('return_assistant_tokens_mask', False):
raise ValueError('continue_final_message is not compatible with return_assistant_tokens_mask.')
if kwargs.get('return_assistant_tokens_mask', False):
if not is_tokenizers_fast:
raise ValueError('`return_assistant_tokens_mask` is not possible with slow tokenizers. Make sure you have `tokenizers` installed. If the error persists, open an issue to support a Fast tokenizer for your model.')
else:
kwargs['return_offsets_mapping'] = True
processed_kwargs = {'mm_load_kwargs': {}, 'template_kwargs': {}}
for kwarg_type in processed_kwargs:
for key in AllKwargsForChatTemplate.__annotations__[kwarg_type].__annotations__:
kwarg_type_defaults = AllKwargsForChatTemplate.__annotations__[kwarg_type]
default_value = getattr(kwarg_type_defaults, key, None)
value = kwargs.pop(key, default_value)
if value is not None and (not isinstance(value, dict)):
processed_kwargs[kwarg_type][key] = value
kwargs.pop('video_load_backend', None)
processed_kwargs['template_kwargs'].update(kwargs)
if isinstance(conversation, (list, tuple)) and (isinstance(conversation[0], (list, tuple)) or hasattr(conversation[0], 'content')):
is_batched = True
conversations = conversation
else:
is_batched = False
conversations = [conversation]
tokenize = processed_kwargs['template_kwargs'].pop('tokenize', False)
return_dict = processed_kwargs['template_kwargs'].pop('return_dict', False)
mm_load_kwargs = processed_kwargs['mm_load_kwargs']
if tokenize:
batch_images, batch_videos = ([], [])
batch_audios = []
for conversation in conversations:
images, videos = ([], [])
for message in conversation:
visuals = [content for content in message['content'] if content['type'] in ['image', 'video']]
audio_fnames = [content[key] for content in message['content'] for key in ['audio', 'url', 'path'] if key in content and content['type'] == 'audio']
image_fnames = [vision_info[key] for vision_info in visuals for key in ['image', 'url', 'path', 'base64'] if key in vision_info and vision_info['type'] == 'image']
images.extend(image_fnames)
video_fnames = [vision_info[key] for vision_info in visuals for key in ['video', 'url', 'path'] if key in vision_info and vision_info['type'] == 'video']
videos.extend(video_fnames)
if not mm_load_kwargs['load_audio_from_video']:
for fname in audio_fnames:
batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs['sampling_rate']))
else:
for fname in video_fnames:
batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs['sampling_rate']))
batch_images.append(images)
batch_videos.append(videos)
prompt, generation_indices = render_jinja_template(conversations=conversations, chat_template=chat_template, **processed_kwargs['template_kwargs'], **self.tokenizer.special_tokens_map)
if not is_batched:
prompt = prompt[0]
if tokenize:
single_prompt = prompt[0] if is_batched else prompt
if self.tokenizer.bos_token is not None and single_prompt.startswith(self.tokenizer.bos_token):
kwargs['add_special_tokens'] = False
if 'do_sample_frames' not in kwargs and (kwargs.get('fps') is not None or kwargs.get('num_frames') is not None):
kwargs['do_sample_frames'] = True
images_exist = any((im is not None for im_list in batch_images for im in im_list))
videos_exist = any((vid is not None for vid_list in batch_videos for vid in vid_list))
out = self(text=prompt, images=batch_images if images_exist else None, videos=batch_videos if videos_exist else None, audio=batch_audios if batch_audios else None, **kwargs)
if return_dict:
if processed_kwargs['template_kwargs'].get('return_assistant_tokens_mask', False):
assistant_masks = []
offset_mapping = out.pop('offset_mapping')
input_ids = out['input_ids']
for i in range(len(input_ids)):
current_mask = [0] * len(input_ids[i])
offsets = offset_mapping[i]
offset_starts = [start for start, end in offsets]
for assistant_start_char, assistant_end_char in generation_indices[i]:
start_pos = bisect.bisect_left(offset_starts, assistant_start_char)
end_pos = bisect.bisect_left(offset_starts, assistant_end_char)
if not (start_pos >= 0 and offsets[start_pos][0] <= assistant_start_char < offsets[start_pos][1]):
continue
for token_id in range(start_pos, end_pos if end_pos else len(input_ids[i])):
current_mask[token_id] = 1
assistant_masks.append(current_mask)
out['assistant_masks'] = assistant_masks
out.convert_to_tensors(tensor_type=kwargs.get('return_tensors'))
return out
else:
return out['input_ids']
return prompt
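The assistant-mask construction inside `apply_chat_template` locates the token span covering each assistant character range via `bisect` over token offset starts. A toy reconstruction (offsets invented; the real code also handles a zero `end_pos` by masking to the end of the sequence):

```python
import bisect

# Toy reconstruction of the assistant-token masking: given per-token character
# offsets and an (span_start, span_end) character span of assistant text,
# mark the tokens whose offsets fall inside that span. If the located token
# does not actually cover the span start, nothing is masked (same guard as above).
def mask_span(offsets, span_start, span_end, n_tokens):
    mask = [0] * n_tokens
    starts = [s for s, _ in offsets]
    start_pos = bisect.bisect_left(starts, span_start)
    end_pos = bisect.bisect_left(starts, span_end)
    if start_pos < len(offsets) and offsets[start_pos][0] <= span_start < offsets[start_pos][1]:
        for i in range(start_pos, end_pos if end_pos else n_tokens):
            mask[i] = 1
    return mask

offsets = [(0, 5), (5, 11), (11, 18), (18, 25)]
mask = mask_span(offsets, 11, 25, 4)
```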
def post_process_image_text_to_text(self, generated_outputs, skip_special_tokens=True, **kwargs):
"""
Post-process the output of a vlm to decode the text.
Args:
generated_outputs (`torch.Tensor` or `np.ndarray`):
The output of the model `generate` function. The output is expected to be a tensor of shape `(batch_size, sequence_length)`
or `(sequence_length,)`.
skip_special_tokens (`bool`, *optional*, defaults to `True`):
Whether or not to remove special tokens in the output. Argument passed to the tokenizer's `batch_decode` method.
**kwargs:
Additional arguments to be passed to the tokenizer's `batch_decode method`.
Returns:
`list[str]`: The decoded text.
"""
return self.tokenizer.batch_decode(generated_outputs, skip_special_tokens=skip_special_tokens, **kwargs)
def _check_special_mm_tokens(self, text: list[str], text_inputs: 'BatchFeature', modalities: list[str]):
"""
Checks that the number of special tokens in the raw text matches the number in the processed text. The counts
can differ if the tokenized text was truncated, which causes issues in the model code.
"""
for modality in modalities:
token_str = getattr(self, f'{modality}_token')
token_id = getattr(self, f'{modality}_token_id')
ids_count = [list(ids).count(token_id) for ids in text_inputs['input_ids']]
text_count = [sample.count(token_str) for sample in text]
if ids_count != text_count:
raise ValueError(f"Mismatch in `{modality}` token count between text and `input_ids`. Got ids={ids_count} and text={text_count}. Likely due to `truncation='max_length'`. Please disable truncation or increase `max_length`.")
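The consistency check above is a per-sample count comparison. A minimal sketch with an arbitrary placeholder token string and id:

```python
# Minimal sketch of the special-token consistency check: the number of
# placeholder tokens in each raw text must equal the count of the matching
# token id in the tokenized input_ids. Token string/id values are invented.
def counts_match(texts, input_ids, token_str, token_id):
    ids_count = [list(ids).count(token_id) for ids in input_ids]
    text_count = [t.count(token_str) for t in texts]
    return ids_count == text_count

ok = counts_match(['<image> cat <image>'], [[99, 7, 99]], '<image>', 99)
bad = counts_match(['<image> cat'], [[7, 8]], '<image>', 99)  # truncated ids
```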
|
class ProcessorMixin(PushToHubMixin):
'''
This is a mixin used to provide saving/loading functionality for all processor classes.
'''
def __init__(self, *args, **kwargs):
pass
def __call__(self, images: Optional[ImageInput]=None, text: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, videos: Optional[VideoInput]=None, audio: Optional[AudioInput]=None, **kwargs: Unpack[ProcessingKwargs]):
'''
Main method to prepare model inputs. This method forwards each modality argument to its own processor
along with `kwargs`. Please refer to the docstring of each processor attribute for more information.
Args:
images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `list[PIL.Image.Image]`, `list[np.ndarray]`, `list[torch.Tensor]`):
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. Both channels-first and channels-last formats are supported.
text (`TextInput`, `PreTokenizedInput`, `list[TextInput]`, `list[PreTokenizedInput]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
videos (`np.ndarray`, `torch.Tensor`, `List[np.ndarray]`, `List[torch.Tensor]`):
The video or batch of videos to be prepared. Each video can be a 4D NumPy array or PyTorch
tensor, or a nested list of 3D frames. Both channels-first and channels-last formats are supported.
audio (`np.ndarray`, `torch.Tensor`, `list[np.ndarray]`, `list[torch.Tensor]`):
The audio or batch of audio to be prepared. Each audio can be a NumPy array or PyTorch
tensor.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
Returns:
[`BatchFeature`]: A [`BatchFeature`] object with processed inputs in a dict format.
'''
pass
def check_argument_for_proper_class(self, argument_name, argument):
'''
Checks the passed argument's class against the expected transformers class. In case of an unexpected
mismatch between expected and actual class, an error is raised. Otherwise, the properly retrieved class
is returned.
'''
pass
def to_dict(self, legacy_serialization=True) -> dict[str, Any]:
'''
Serializes this instance to a Python dictionary.
Returns:
`dict[str, Any]`: Dictionary of all the attributes that make up this processor instance.
'''
pass
def cast_array_to_list(dictionary):
'''
NumPy arrays are not serializable but can appear in pre-processing dicts.
This function casts arrays to lists, recursing through nested configs as well.
'''
pass
def to_json_string(self, legacy_serialization=True) -> str:
'''
Serializes this instance to a JSON string.
Returns:
`str`: String containing all the attributes that make up this feature_extractor instance in JSON format.
'''
pass
def to_json_file(self, json_file_path: Union[str, os.PathLike], legacy_serialization=True):
'''
Save this instance to a JSON file.
Args:
json_file_path (`str` or `os.PathLike`):
Path to the JSON file in which this processor instance's parameters will be saved.
'''
pass
def __repr__(self):
pass
def save_pretrained(self, save_directory, push_to_hub: bool=False, legacy_serialization: bool=True, **kwargs):
'''
Saves the attributes of this processor (feature extractor, tokenizer...) in the specified directory so that it
can be reloaded using the [`~ProcessorMixin.from_pretrained`] method.
<Tip>
This class method is simply calling [`~feature_extraction_utils.FeatureExtractionMixin.save_pretrained`] and
[`~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`]. Please refer to the docstrings of the
methods above for more information.
</Tip>
Args:
save_directory (`str` or `os.PathLike`):
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
legacy_serialization (`bool`, *optional*, defaults to `True`):
Whether or not to save processor attributes in separate config files (legacy) or in processor's config
file as a nested dict. Saving all attributes in a single dict will become the default in future versions.
Set to `legacy_serialization=True` until then.
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
'''
pass
@classmethod
def get_processor_dict(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> tuple[dict[str, Any], dict[str, Any]]:
'''
From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
processor of type [`~processing_utils.ProcessingMixin`] using `from_args_and_dict`.
Parameters:
pretrained_model_name_or_path (`str` or `os.PathLike`):
The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.
subfolder (`str`, *optional*, defaults to `""`):
In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
specify the folder name here.
Returns:
`tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the processor object.
'''
pass
@classmethod
def from_args_and_dict(cls, args, processor_dict: dict[str, Any], **kwargs):
'''
Instantiates a type of [`~processing_utils.ProcessingMixin`] from a Python dictionary of parameters.
Args:
processor_dict (`dict[str, Any]`):
Dictionary that will be used to instantiate the processor object. Such a dictionary can be
retrieved from a pretrained checkpoint by leveraging the
[`~processing_utils.ProcessingMixin.to_dict`] method.
kwargs (`dict[str, Any]`):
Additional parameters from which to initialize the processor object.
Returns:
[`~processing_utils.ProcessingMixin`]: The processor object instantiated from those
parameters.
'''
pass
def _merge_kwargs(self, ModelProcessorKwargs: ProcessingKwargs, tokenizer_init_kwargs: Optional[dict]=None, **kwargs) -> dict[str, dict]:
'''
Method to merge dictionaries of kwargs cleanly separated by modality within a Processor instance.
The order of operations is as follows:
1) kwargs passed as before have highest priority to preserve BC.
```python
high_priority_kwargs = {"crop_size": {"height": 222, "width": 222}, "padding": "max_length"}
processor(..., **high_priority_kwargs)
```
2) kwargs passed as modality-specific kwargs have second priority. This is the recommended API.
```python
processor(..., text_kwargs={"padding": "max_length"}, images_kwargs={"crop_size": {"height": 222, "width": 222}})
```
3) kwargs passed during instantiation of a modality processor have third priority.
```python
tokenizer = tokenizer_class(..., {"padding": "max_length"})
image_processor = image_processor_class(...)
processor(tokenizer, image_processor) # will pass max_length unless overridden by kwargs at call
```
4) defaults kwargs specified at processor level have lowest priority.
```python
class MyProcessingKwargs(ProcessingKwargs, CommonKwargs, TextKwargs, ImagesKwargs, total=False):
_defaults = {
"text_kwargs": {
"padding": "max_length",
"max_length": 64,
},
}
```
Args:
ModelProcessorKwargs (`ProcessingKwargs`):
Typed dictionary of kwargs specifically required by the model passed.
tokenizer_init_kwargs (`Dict`, *optional*):
Dictionary of kwargs the tokenizer was instantiated with, which need to take precedence over defaults.
Returns:
output_kwargs (`Dict`):
Dictionary of per-modality kwargs to be passed to each modality-specific processor.
'''
pass
@classmethod
def from_pretrained(cls: type[SpecificProcessorType], pretrained_model_name_or_path: Union[str, os.PathLike], cache_dir: Optional[Union[str, os.PathLike]]=None, force_download: bool=False, local_files_only: bool=False, token: Optional[Union[str, bool]]=None, revision: str='main', **kwargs) -> SpecificProcessorType:
'''
Instantiate a processor associated with a pretrained model.
<Tip>
This class method is simply calling the feature extractor
[`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`], image processor
[`~image_processing_utils.ImageProcessingMixin`] and the tokenizer
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`] methods. Please refer to the docstrings of the
methods above for more information.
</Tip>
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
This can be either:
- a string, the *model id* of a pretrained feature_extractor hosted inside a model repo on
huggingface.co.
- a path to a *directory* containing a feature extractor file saved using the
[`~SequenceFeatureExtractor.save_pretrained`] method, e.g., `./my_model_directory/`.
- a path or url to a saved feature extractor JSON *file*, e.g.,
`./my_model_directory/preprocessor_config.json`.
**kwargs
Additional keyword arguments passed along to both
[`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`] and
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`].
'''
pass
@classmethod
def register_for_auto_class(cls, auto_class='AutoProcessor'):
'''
Register this class with a given auto class. This should only be used for custom feature extractors as the ones
in the library are already mapped with `AutoProcessor`.
Args:
auto_class (`str` or `type`, *optional*, defaults to `"AutoProcessor"`):
The auto class to register this new feature extractor with.
'''
pass
@classmethod
def _get_arguments_from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
'''
Identify and instantiate the subcomponents of Processor classes, like image processors and
tokenizers. This method uses the Processor attributes like `tokenizer_class` to figure out what class those
subcomponents should be. Note that any subcomponents must either be library classes that are accessible in
the `transformers` root, or they must be custom code that has been registered with the relevant autoclass,
via methods like `AutoTokenizer.register()`. If neither of these conditions are fulfilled, this method
will be unable to find the relevant subcomponent class and will raise an error.
'''
pass
@staticmethod
def get_possibly_dynamic_module(module_name):
pass
def batch_decode(self, *args, **kwargs):
'''
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
refer to the docstring of this method for more information.
'''
pass
def decode(self, *args, **kwargs):
'''
This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to
the docstring of this method for more information.
'''
pass
@property
def model_input_names(self):
pass
@staticmethod
def validate_init_kwargs(processor_config, valid_kwargs):
pass
@deprecate_kwarg('video_fps', version='4.58', new_name='fps')
@deprecate_kwarg('video_load_backend', version='4.59', additional_message='. This function will use `torchcodec` by default, or `torchvision` if `torchcodec` is not installed.')
def apply_chat_template(self, conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]], chat_template: Optional[str]=None, **kwargs: Unpack[AllKwargsForChatTemplate]) -> str:
'''
Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
conversations to turn them into a single tokenizable string.
The input is expected to be in the following format, where each message content is a list consisting of text and
optionally image or video inputs. One can also provide an image, video, URL or local path which will be used to form
`pixel_values` when `return_dict=True`. If not provided, one will get only the formatted text, optionally tokenized.
conversation = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
{"type": "text", "text": "Please describe this image in detail."},
],
},
]
Args:
conversation (`Union[list[Dict, [str, str]], list[list[dict[str, str]]]]`):
The conversation to format.
chat_template (`Optional[str]`, *optional*):
The Jinja template to use for formatting the conversation. If not provided, the tokenizer's
chat template is used.
'''
pass
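As a rough illustration of the conversation structure above, the hypothetical flattener below (not the real Jinja-based rendering) turns it into a single prompt string, substituting a placeholder token for media entries:

```python
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "Please describe this image in detail."},
        ],
    },
]

def render(conversation):
    # Flatten each message into "role: ..." with <type> placeholders for media.
    lines = []
    for message in conversation:
        parts = []
        for item in message["content"]:
            if item["type"] == "text":
                parts.append(item["text"])
            else:
                parts.append(f"<{item['type']}>")
        lines.append(f"{message['role']}: {' '.join(parts)}")
    return "\n".join(lines)

prompt = render(conversation)
```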
def post_process_image_text_to_text(self, generated_outputs, skip_special_tokens=True, **kwargs):
'''
Post-process the output of a vlm to decode the text.
Args:
generated_outputs (`torch.Tensor` or `np.ndarray`):
The output of the model `generate` function. The output is expected to be a tensor of shape `(batch_size, sequence_length)`
or `(sequence_length,)`.
skip_special_tokens (`bool`, *optional*, defaults to `True`):
Whether or not to remove special tokens in the output. Argument passed to the tokenizer's `batch_decode` method.
**kwargs:
Additional arguments to be passed to the tokenizer's `batch_decode method`.
Returns:
`list[str]`: The decoded text.
'''
pass
def _check_special_mm_tokens(self, text: list[str], text_inputs: 'BatchFeature', modalities: list[str]):
'''
Checks that the number of special tokens in the text and in the processed text is the same. The counts can differ
if the tokenized text was truncated, leading to issues in model code.
'''
pass
| 34
| 19
| 49
| 6
| 28
| 15
| 7
| 0.52
| 1
| 18
| 5
| 71
| 11
| 0
| 17
| 17
| 870
| 112
| 498
| 143
| 451
| 260
| 324
| 112
| 305
| 20
| 1
| 6
| 115
|
6,465
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.TextKwargs
|
from .tokenization_utils_base import PaddingStrategy, PreTokenizedInput, PreTrainedTokenizerBase, TextInput, TruncationStrategy
from typing import Any, Optional, TypedDict, TypeVar, Union
class TextKwargs(TypedDict, total=False):
"""
Keyword arguments for text processing. For extended documentation, check out tokenization_utils_base methods and
docstrings associated.
Attributes:
add_special_tokens (`bool`, *optional*):
Whether or not to add special tokens when encoding the sequences.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
Activates and controls padding.
truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*):
Activates and controls truncation.
max_length (`int`, *optional*):
Controls the maximum length to use by one of the truncation/padding parameters.
stride (`int`, *optional*):
If set, the overflowing tokens will contain some tokens from the end of the truncated sequence.
is_split_into_words (`bool`, *optional*):
Whether or not the input is already pre-tokenized.
pad_to_multiple_of (`int`, *optional*):
If set, will pad the sequence to a multiple of the provided value.
return_token_type_ids (`bool`, *optional*):
Whether to return token type IDs.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask.
return_overflowing_tokens (`bool`, *optional*):
Whether or not to return overflowing token sequences.
return_special_tokens_mask (`bool`, *optional*):
Whether or not to return special tokens mask information.
return_offsets_mapping (`bool`, *optional*):
Whether or not to return `(char_start, char_end)` for each token.
return_length (`bool`, *optional*):
Whether or not to return the lengths of the encoded inputs.
verbose (`bool`, *optional*):
Whether or not to print more information and warnings.
padding_side (`str`, *optional*):
The side on which padding will be applied.
return_mm_token_type_ids (`bool`, *optional*):
Whether to return multimodal token type ids indicating mm placeholder token positions.
"""
text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]
text_target: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]
text_pair_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]
add_special_tokens: Optional[bool]
padding: Union[bool, str, PaddingStrategy]
truncation: Union[bool, str, TruncationStrategy]
max_length: Optional[int]
stride: Optional[int]
is_split_into_words: Optional[bool]
pad_to_multiple_of: Optional[int]
return_token_type_ids: Optional[bool]
return_attention_mask: Optional[bool]
return_overflowing_tokens: Optional[bool]
return_special_tokens_mask: Optional[bool]
return_offsets_mapping: Optional[bool]
return_length: Optional[bool]
verbose: Optional[bool]
padding_side: Optional[str]
return_mm_token_type_ids: Optional[bool]
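Because `TextKwargs` is declared with `total=False`, every key is optional: callers pass only the fields they care about, and a type checker still validates the key names and value types. A minimal sketch of that pattern, with a hypothetical `tokenize` consumer (not a real tokenizer):

```python
from typing import Optional, TypedDict, Union

class MiniTextKwargs(TypedDict, total=False):
    padding: Union[bool, str]
    max_length: Optional[int]
    truncation: Union[bool, str]

def tokenize(text, **kwargs):
    # A real tokenizer would consume these options; here we just merge them
    # over defaults and echo the result.
    opts: MiniTextKwargs = {"padding": False, "max_length": None}
    opts.update(kwargs)
    return {"text": text, **opts}

out = tokenize("hello", padding="max_length", max_length=64)
```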
|
class TextKwargs(TypedDict, total=False):
'''
Keyword arguments for text processing. For extended documentation, check out tokenization_utils_base methods and
docstrings associated.
Attributes:
add_special_tokens (`bool`, *optional*):
Whether or not to add special tokens when encoding the sequences.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*):
Activates and controls padding.
truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*):
Activates and controls truncation.
max_length (`int`, *optional*):
Controls the maximum length to use by one of the truncation/padding parameters.
stride (`int`, *optional*):
If set, the overflowing tokens will contain some tokens from the end of the truncated sequence.
is_split_into_words (`bool`, *optional*):
Whether or not the input is already pre-tokenized.
pad_to_multiple_of (`int`, *optional*):
If set, will pad the sequence to a multiple of the provided value.
return_token_type_ids (`bool`, *optional*):
Whether to return token type IDs.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask.
return_overflowing_tokens (`bool`, *optional*):
Whether or not to return overflowing token sequences.
return_special_tokens_mask (`bool`, *optional*):
Whether or not to return special tokens mask information.
return_offsets_mapping (`bool`, *optional*):
Whether or not to return `(char_start, char_end)` for each token.
return_length (`bool`, *optional*):
Whether or not to return the lengths of the encoded inputs.
verbose (`bool`, *optional*):
Whether or not to print more information and warnings.
padding_side (`str`, *optional*):
The side on which padding will be applied.
return_mm_token_type_ids (`bool`, *optional*):
Whether to return multimodal token type ids indicating mm placeholder token positions.
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 1.84
| 2
| 0
| 0
| 9
| 0
| 0
| 0
| 0
| 56
| 2
| 19
| 1
| 18
| 35
| 19
| 1
| 18
| 0
| 1
| 0
| 0
|
6,466
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/processing_utils.py
|
transformers.processing_utils.VideosKwargs
|
from .video_utils import VideoInput, VideoMetadata
from .image_utils import ChannelDimension, ImageInput, is_vision_available
from typing import Any, Optional, TypedDict, TypeVar, Union
class VideosKwargs(TypedDict, total=False):
"""
Keyword arguments for video processing.
Attributes:
do_convert_rgb (`bool`):
Whether to convert the video to RGB format.
do_resize (`bool`):
Whether to resize the video.
size (`dict[str, int]`, *optional*):
Resize the shorter side of the input to `size["shortest_edge"]`.
default_to_square (`bool`, *optional*, defaults to `self.default_to_square`):
Whether to default to a square when resizing, if size is an int.
resample (`PILImageResampling`, *optional*):
Resampling filter to use if resizing the video.
do_rescale (`bool`, *optional*):
Whether to rescale the video by the specified scale `rescale_factor`.
rescale_factor (`int` or `float`, *optional*):
Scale factor to use if rescaling the video.
do_normalize (`bool`, *optional*):
Whether to normalize the video.
image_mean (`float` or `list[float]`, *optional*):
Mean to use if normalizing the video.
image_std (`float` or `list[float]`, *optional*):
Standard deviation to use if normalizing the video.
do_center_crop (`bool`, *optional*):
Whether to center crop the video.
do_sample_frames (`bool`, *optional*):
Whether to sample frames from the video before processing or to process the whole video.
video_metadata (`Union[VideoMetadata, dict]`, *optional*):
Metadata of the video containing information about total duration, fps and total number of frames.
num_frames (`int`, *optional*):
Maximum number of frames to sample when `do_sample_frames=True`.
fps (`int` or `float`, *optional*):
Target frames to sample per second when `do_sample_frames=True`.
crop_size (`dict[str, int]`, *optional*):
Desired output size when applying center-cropping.
data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the output video.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input video.
return_metadata (`bool`, *optional*):
Whether to return video metadata or not.
"""
do_convert_rgb: Optional[bool]
do_resize: Optional[bool]
size: Optional[dict[str, int]]
default_to_square: Optional[bool]
resample: Optional['PILImageResampling']
do_rescale: Optional[bool]
rescale_factor: Optional[float]
do_normalize: Optional[bool]
image_mean: Optional[Union[float, list[float]]]
image_std: Optional[Union[float, list[float]]]
do_center_crop: Optional[bool]
crop_size: Optional[dict[str, int]]
data_format: Optional[ChannelDimension]
input_data_format: Optional[Union[str, ChannelDimension]]
device: Optional[str]
do_sample_frames: Optional[bool]
video_metadata: Optional[Union[VideoMetadata, dict]]
fps: Optional[Union[int, float]]
num_frames: Optional[int]
return_metadata: Optional[bool]
|
class VideosKwargs(TypedDict, total=False):
'''
Keyword arguments for video processing.
Attributes:
do_convert_rgb (`bool`):
Whether to convert the video to RGB format.
do_resize (`bool`):
Whether to resize the video.
size (`dict[str, int]`, *optional*):
Resize the shorter side of the input to `size["shortest_edge"]`.
default_to_square (`bool`, *optional*, defaults to `self.default_to_square`):
Whether to default to a square when resizing, if size is an int.
resample (`PILImageResampling`, *optional*):
Resampling filter to use if resizing the video.
do_rescale (`bool`, *optional*):
Whether to rescale the video by the specified scale `rescale_factor`.
rescale_factor (`int` or `float`, *optional*):
Scale factor to use if rescaling the video.
do_normalize (`bool`, *optional*):
Whether to normalize the video.
image_mean (`float` or `list[float]`, *optional*):
Mean to use if normalizing the video.
image_std (`float` or `list[float]`, *optional*):
Standard deviation to use if normalizing the video.
do_center_crop (`bool`, *optional*):
Whether to center crop the video.
do_sample_frames (`bool`, *optional*):
Whether to sample frames from the video before processing or to process the whole video.
video_metadata (`Union[VideoMetadata, dict]`, *optional*):
Metadata of the video containing information about total duration, fps and total number of frames.
num_frames (`int`, *optional*):
Maximum number of frames to sample when `do_sample_frames=True`.
fps (`int` or `float`, *optional*):
Target frames to sample per second when `do_sample_frames=True`.
crop_size (`dict[str, int]`, *optional*):
Desired output size when applying center-cropping.
data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the output video.
input_data_format (`ChannelDimension` or `str`, *optional*):
The channel dimension format for the input video.
return_metadata (`bool`, *optional*):
Whether to return video metadata or not.
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 2.14
| 2
| 0
| 0
| 4
| 0
| 0
| 0
| 0
| 46
| 2
| 14
| 1
| 13
| 30
| 14
| 1
| 13
| 0
| 1
| 0
| 0
|
6,467
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/pytorch_utils.py
|
transformers.pytorch_utils.Conv1D
|
import torch
from torch import nn
class Conv1D(nn.Module):
"""
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
Args:
nf (`int`): The number of output features.
nx (`int`): The number of input features.
"""
def __init__(self, nf, nx):
super().__init__()
self.nf = nf
self.nx = nx
self.weight = nn.Parameter(torch.empty(nx, nf))
self.bias = nn.Parameter(torch.zeros(nf))
nn.init.normal_(self.weight, std=0.02)
def __repr__(self) -> str:
return 'Conv1D(nf={nf}, nx={nx})'.format(**self.__dict__)
def forward(self, x):
size_out = x.size()[:-1] + (self.nf,)
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
x = x.view(size_out)
return x
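A pure-Python sketch of the arithmetic `Conv1D.forward` performs via `torch.addmm`: with the weight stored transposed as `(nx, nf)`, each output component is `bias[j] + sum_i x[i] * W[i][j]` (the weight and bias values below are illustrative):

```python
def conv1d_forward(x_rows, weight, bias):
    # x_rows: list of input vectors of length nx
    # weight: nx x nf (stored transposed, as in Conv1D); bias: length nf
    nf = len(bias)
    return [
        [bias[j] + sum(row[i] * weight[i][j] for i in range(len(row)))
         for j in range(nf)]
        for row in x_rows
    ]

weight = [[1, 0], [0, 1], [1, 1]]  # nx=3, nf=2
bias = [0.5, -0.5]
out = conv1d_forward([[1, 2, 3], [0, 0, 1]], weight, bias)
```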
|
class Conv1D(nn.Module):
'''
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
Args:
nf (`int`): The number of output features.
nx (`int`): The number of input features.
'''
def __init__(self, nf, nx):
pass
def __repr__(self) -> str:
pass
def forward(self, x):
pass
| 4
| 1
| 5
| 0
| 5
| 0
| 1
| 0.47
| 1
| 2
| 0
| 0
| 3
| 4
| 3
| 13
| 27
| 5
| 15
| 9
| 11
| 7
| 15
| 9
| 11
| 1
| 1
| 0
| 3
|
6,468
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/auto.py
|
transformers.quantizers.auto.AutoHfQuantizer
|
import warnings
from ..utils.quantization_config import AqlmConfig, AutoRoundConfig, AwqConfig, BitNetQuantConfig, BitsAndBytesConfig, CompressedTensorsConfig, EetqConfig, FbgemmFp8Config, FineGrainedFP8Config, FPQuantConfig, GPTQConfig, HiggsConfig, HqqConfig, Mxfp4Config, QuantizationConfigMixin, QuantizationMethod, QuantoConfig, QuarkConfig, SpQRConfig, TorchAoConfig, VptqConfig
from typing import Optional, Union
class AutoHfQuantizer:
"""
The Auto-HF quantizer class that takes care of automatically instantiating to the correct
`HfQuantizer` given the `QuantizationConfig`.
"""
@classmethod
def from_config(cls, quantization_config: Union[QuantizationConfigMixin, dict], **kwargs):
if isinstance(quantization_config, dict):
quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
quant_method = quantization_config.quant_method
if quant_method == QuantizationMethod.BITS_AND_BYTES:
if quantization_config.load_in_8bit:
quant_method += '_8bit'
else:
quant_method += '_4bit'
if quant_method not in AUTO_QUANTIZER_MAPPING:
raise ValueError(f'Unknown quantization type, got {quant_method} - supported types are: {list(AUTO_QUANTIZER_MAPPING.keys())}')
target_cls = AUTO_QUANTIZER_MAPPING[quant_method]
return target_cls(quantization_config, **kwargs)
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
quantization_config = AutoQuantizationConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
return cls.from_config(quantization_config)
@classmethod
def merge_quantization_configs(cls, quantization_config: Union[dict, QuantizationConfigMixin], quantization_config_from_args: Optional[QuantizationConfigMixin]):
"""
handles situations where both quantization_config from args and quantization_config from model config are present.
"""
if quantization_config_from_args is not None:
warning_msg = "You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used."
else:
warning_msg = ''
if isinstance(quantization_config, dict):
if isinstance(quantization_config_from_args, AutoRoundConfig):
quantization_config = AutoRoundConfig.from_dict(quantization_config)
else:
quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
if quantization_config_from_args is not None and quantization_config.__class__.__name__ != quantization_config_from_args.__class__.__name__:
raise ValueError(f'The model is quantized with {quantization_config.__class__.__name__} but you are passing a {quantization_config_from_args.__class__.__name__} config. Please make sure to pass the same quantization config class to `from_pretrained` with different loading attributes.')
if isinstance(quantization_config, (GPTQConfig, AwqConfig, AutoRoundConfig, FbgemmFp8Config, CompressedTensorsConfig, Mxfp4Config)) and quantization_config_from_args is not None:
loading_attr_dict = quantization_config_from_args.get_loading_attributes()
for attr, val in loading_attr_dict.items():
setattr(quantization_config, attr, val)
warning_msg += f'However, loading attributes (e.g. {list(loading_attr_dict.keys())}) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored.'
if warning_msg != '' and (not isinstance(quantization_config, Mxfp4Config)):
warnings.warn(warning_msg)
else:
logger.info(warning_msg)
return quantization_config
@staticmethod
def supports_quant_method(quantization_config_dict):
quant_method = quantization_config_dict.get('quant_method', None)
if quantization_config_dict.get('load_in_8bit', False) or quantization_config_dict.get('load_in_4bit', False):
suffix = '_4bit' if quantization_config_dict.get('load_in_4bit', False) else '_8bit'
quant_method = QuantizationMethod.BITS_AND_BYTES + suffix
elif quant_method is None:
raise ValueError("The model's quantization config from the arguments has no `quant_method` attribute. Make sure that the model has been correctly quantized")
if quant_method not in AUTO_QUANTIZATION_CONFIG_MAPPING:
logger.warning(f'Unknown quantization type, got {quant_method} - supported types are: {list(AUTO_QUANTIZER_MAPPING.keys())}. Hence, we will skip the quantization. To remove the warning, you can delete the quantization_config attribute in config.json')
return False
return True
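The dispatch in `from_config` is a registry lookup keyed by `quant_method`, with a suffix appended for the bitsandbytes 8-bit/4-bit split. A self-contained sketch with a stand-in registry (the dummy class and mapping contents are hypothetical, not the real `AUTO_QUANTIZER_MAPPING`):

```python
class DummyBnb8BitQuantizer:
    def __init__(self, config, **kwargs):
        self.config = config

# Stand-in for AUTO_QUANTIZER_MAPPING; the real registry maps many methods.
REGISTRY = {"bitsandbytes_8bit": DummyBnb8BitQuantizer}

def from_config(config):
    quant_method = config["quant_method"]
    if quant_method == "bitsandbytes":
        # bitsandbytes splits into 8-bit / 4-bit variants by its load flags
        quant_method += "_8bit" if config.get("load_in_8bit") else "_4bit"
    if quant_method not in REGISTRY:
        raise ValueError(f"Unknown quantization type: {quant_method}")
    return REGISTRY[quant_method](config)

q = from_config({"quant_method": "bitsandbytes", "load_in_8bit": True})
```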
|
class AutoHfQuantizer:
'''
The Auto-HF quantizer class that takes care of automatically instantiating to the correct
`HfQuantizer` given the `QuantizationConfig`.
'''
@classmethod
def from_config(cls, quantization_config: Union[QuantizationConfigMixin, dict], **kwargs):
pass
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
pass
@classmethod
def merge_quantization_configs(cls, quantization_config: Union[dict, QuantizationConfigMixin], quantization_config_from_args: Optional[QuantizationConfigMixin]):
'''
handles situations where both quantization_config from args and quantization_config from model config are present.
'''
pass
@staticmethod
def supports_quant_method(quantization_config_dict):
pass
| 9
| 2
| 20
| 3
| 15
| 2
| 4
| 0.17
| 0
| 10
| 7
| 0
| 0
| 0
| 4
| 4
| 91
| 14
| 66
| 21
| 53
| 11
| 40
| 13
| 35
| 6
| 0
| 2
| 17
|
6,469
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/auto.py
|
transformers.quantizers.auto.AutoQuantizationConfig
|
from ..models.auto.configuration_auto import AutoConfig
from ..utils.quantization_config import AqlmConfig, AutoRoundConfig, AwqConfig, BitNetQuantConfig, BitsAndBytesConfig, CompressedTensorsConfig, EetqConfig, FbgemmFp8Config, FineGrainedFP8Config, FPQuantConfig, GPTQConfig, HiggsConfig, HqqConfig, Mxfp4Config, QuantizationConfigMixin, QuantizationMethod, QuantoConfig, QuarkConfig, SpQRConfig, TorchAoConfig, VptqConfig
class AutoQuantizationConfig:
"""
The Auto-HF quantization config class that takes care of automatically dispatching to the correct
quantization config given a quantization config stored in a dictionary.
"""
@classmethod
def from_dict(cls, quantization_config_dict: dict):
quant_method = quantization_config_dict.get('quant_method')
if quantization_config_dict.get('load_in_8bit', False) or quantization_config_dict.get('load_in_4bit', False):
suffix = '_4bit' if quantization_config_dict.get('load_in_4bit', False) else '_8bit'
quant_method = QuantizationMethod.BITS_AND_BYTES + suffix
elif quant_method is None:
raise ValueError("The model's quantization config from the arguments has no `quant_method` attribute. Make sure that the model has been correctly quantized")
if quant_method not in AUTO_QUANTIZATION_CONFIG_MAPPING:
raise ValueError(f'Unknown quantization type, got {quant_method} - supported types are: {list(AUTO_QUANTIZER_MAPPING.keys())}')
target_cls = AUTO_QUANTIZATION_CONFIG_MAPPING[quant_method]
return target_cls.from_dict(quantization_config_dict)
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
if getattr(model_config, 'quantization_config', None) is None:
raise ValueError(f'Did not find a `quantization_config` in {pretrained_model_name_or_path}. Make sure that the model is correctly quantized.')
quantization_config_dict = model_config.quantization_config
quantization_config = cls.from_dict(quantization_config_dict)
quantization_config.update(**kwargs)
return quantization_config
|
class AutoQuantizationConfig:
'''
The Auto-HF quantization config class that takes care of automatically dispatching to the correct
quantization config given a quantization config stored in a dictionary.
'''
@classmethod
def from_dict(cls, quantization_config_dict: dict):
pass
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
pass
| 5
| 1
| 15
| 1
| 13
| 1
| 4
| 0.21
| 0
| 4
| 2
| 0
| 0
| 0
| 2
| 2
| 39
| 4
| 29
| 11
| 24
| 6
| 19
| 9
| 16
| 5
| 0
| 1
| 7
|
6,470
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/base.py
|
transformers.quantizers.base.HfQuantizer
|
from typing import TYPE_CHECKING, Any, Optional, Union
from abc import ABC, abstractmethod
from ..utils.quantization_config import QuantizationConfigMixin, QuantizationMethod
from .quantizers_utils import get_module_from_name
class HfQuantizer(ABC):
"""
Abstract base class for HuggingFace quantizers. For now, supports quantizing HF transformers models for inference and/or quantization.
This class is used only by transformers.PreTrainedModel.from_pretrained and cannot yet easily be used outside the scope of that
method.
Attributes
quantization_config (`transformers.utils.quantization_config.QuantizationConfigMixin`):
The quantization config that defines the quantization parameters of your model that you want to quantize.
modules_to_not_convert (`list[str]`, *optional*):
The list of module names to not convert when quantizing the model.
required_packages (`list[str]`, *optional*):
The list of required pip packages to install prior to using the quantizer
requires_calibration (`bool`):
Whether the quantization method requires to calibrate the model before using it.
requires_parameters_quantization (`bool`):
Whether the quantization method requires to create a new Parameter. For example, for bitsandbytes, it is
required to create a new xxxParameter in order to properly quantize the model.
"""
requires_calibration = False
required_packages = None
requires_parameters_quantization = False
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
self.quantization_config = quantization_config
self.modules_to_not_convert = kwargs.pop('modules_to_not_convert', [])
self.pre_quantized = kwargs.pop('pre_quantized', True)
if not self.pre_quantized and self.requires_calibration:
raise ValueError(f'The quantization method {quantization_config.quant_method} does require the model to be pre-quantized. You explicitly passed `pre_quantized=False` meaning your model weights are not quantized. Make sure to pass `pre_quantized=True` while knowing what you are doing.')
def update_torch_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
"""
Deprecated in favor of `update_dtype`!
Args:
dtype (`torch.dtype`):
The input dtype that is passed in `from_pretrained`
"""
logger.warning_once('`update_torch_dtype` is deprecated in favor of `update_dtype`! It will be removed in version v4.57')
return self.update_dtype(dtype)
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
"""
Some quantization methods require to explicitly set the dtype of the model to a
target dtype. You need to override this method in case you want to make sure that behavior is
preserved
Args:
dtype (`torch.dtype`):
The input dtype that is passed in `from_pretrained`
"""
return dtype
def update_device_map(self, device_map: Optional[dict[str, Any]]) -> Optional[dict[str, Any]]:
"""
Override this method if you want to override the existing device map with a new
one. E.g. for bitsandbytes, since `accelerate` is a hard requirement, if no device_map is
passed, the device_map is set to `"auto"`.
Args:
device_map (`Union[dict, str]`, *optional*):
The device_map that is passed through the `from_pretrained` method.
"""
return device_map
def adjust_target_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
"""
Override this method if you want to adjust the `target_dtype` variable used in `from_pretrained`
to compute the device_map in case the device_map is a `str`. E.g. for bitsandbytes we force-set `target_dtype`
to `torch.int8` and for 4-bit we pass a custom enum `accelerate.CustomDtype.int4`.
Args:
dtype (`torch.dtype`, *optional*):
The dtype that is used to compute the device_map.
"""
return dtype
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
"""
Override this method if you want to adjust the `missing_keys`.
Args:
missing_keys (`list[str]`, *optional*):
The list of missing keys in the checkpoint compared to the state dict of the model
"""
return missing_keys
def update_unexpected_keys(self, model, unexpected_keys: list[str], prefix: str) -> list[str]:
"""
Override this method if you want to adjust the `unexpected_keys`.
Args:
unexpected_keys (`list[str]`, *optional*):
The list of unexpected keys in the checkpoint compared to the state dict of the model
"""
return unexpected_keys
def update_missing_keys_after_loading(self, model, missing_keys: list[str], prefix: str) -> list[str]:
"""
Override this method if you want to adjust the `missing_keys` after loading the model params,
but before the model is post-processed.
Args:
missing_keys (`list[str]`, *optional*):
The list of missing keys in the checkpoint compared to the state dict of the model
"""
return missing_keys
def update_expected_keys(self, model, expected_keys: list[str], loaded_keys: list[str]) -> list[str]:
"""
Override this method if you want to adjust the `expected_keys`.
Args:
expected_keys (`list[str]`, *optional*):
The list of the expected keys in the initialized model.
loaded_keys (`list[str]`, *optional*):
The list of the loaded keys in the checkpoint.
"""
return expected_keys
def get_special_dtypes_update(self, model, dtype: 'torch.dtype') -> dict[str, 'torch.dtype']:
"""
returns dtypes for modules that are not quantized - used for the computation of the device_map in case
one passes a str as a device_map. The method will use the `modules_to_not_convert` that is modified
in `_process_model_before_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
dtype (`torch.dtype`):
The dtype passed in `from_pretrained` method.
"""
return {name: dtype for name, _ in model.named_parameters() if any((m in name for m in self.modules_to_not_convert))}
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
"""adjust max_memory argument for infer_auto_device_map() if extra memory is needed for quantization"""
return max_memory
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
"""
Checks whether a loaded state_dict component is part of a quantized param, plus some validation; only defined if
requires_parameters_quantization == True, for quantization methods that require creating new parameters
for quantization.
"""
return False
def create_quantized_param(self, *args, **kwargs) -> 'torch.nn.Parameter':
"""
takes needed components from state_dict and creates quantized param; only applicable if
requires_parameters_quantization == True
"""
if not self.requires_parameters_quantization:
raise AttributeError(f'`.create_quantized_param()` method is not supported by quantizer class {self.__class__.__name__}.')
def validate_environment(self, *args, **kwargs):
"""
This method is used to check for potential conflicts with arguments that are
passed in `from_pretrained`. You need to define it for all future quantizers that are integrated with transformers.
If no explicit check are needed, simply return nothing.
"""
return
def update_tp_plan(self, config):
"""updates the tp plan for the scales"""
return config
def update_ep_plan(self, config):
"""updates the tp plan for the scales"""
return config
def preprocess_model(self, model: 'PreTrainedModel', **kwargs):
"""
Setting model attributes and/or converting model before weights loading. At this point
the model should be initialized on the meta device so you can freely manipulate the skeleton
of the model in order to replace modules in-place. Make sure to override the abstract method `_process_model_before_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
kwargs (`dict`, *optional*):
The keyword arguments that are passed along `_process_model_before_weight_loading`.
"""
model.is_quantized = True
model.quantization_method = self.quantization_config.quant_method
if self.pre_quantized:
self._convert_model_for_quantization(model)
return self._process_model_before_weight_loading(model, **kwargs)
def postprocess_model(self, model: 'PreTrainedModel', **kwargs):
"""
Post-process the model post weights loading.
Make sure to override the abstract method `_process_model_after_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
kwargs (`dict`, *optional*):
The keyword arguments that are passed along `_process_model_after_weight_loading`.
"""
return self._process_model_after_weight_loading(model, **kwargs)
def remove_quantization_config(self, model):
"""
Remove the quantization config from the model.
"""
if hasattr(model, 'hf_quantizer'):
del model.hf_quantizer
if hasattr(model.config, 'quantization_config'):
del model.config.quantization_config
if hasattr(model.config, '_pre_quantization_dtype'):
del model.config._pre_quantization_dtype
if hasattr(model, 'quantization_method'):
del model.quantization_method
model.is_quantized = False
def dequantize(self, model):
"""
Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance.
Note not all quantization schemes support this.
"""
model = self._dequantize(model)
del model.hf_quantizer
del model.config.quantization_config
del model.config._pre_quantization_dtype
del model.quantization_method
model.is_quantized = False
return model
def get_accelerator_warm_up_factor(self):
"""
The factor used in `caching_allocator_warmup` to get the number of bytes to pre-allocate when warming up the accelerator.
A factor of 2 means we allocate all the bytes of the empty model (since we allocate in fp16), a factor of 4 means
we allocate half the memory of the weights residing in the empty model, etc...
"""
return 4
def _dequantize(self, model):
raise NotImplementedError(f'{self.quantization_config.quant_method} has no implementation of `dequantize`, please raise an issue on GitHub.')
def update_param_name(self, param_name: str) -> str:
"""
Override this method if you want to adjust the `param_name`.
"""
return param_name
@staticmethod
def get_modules_to_not_convert(model: 'PreTrainedModel', skip_modules: Optional[list[str]]=None, keep_in_fp32_modules: Optional[list[str]]=None, add_default_skips: bool=False):
from ..integrations import get_keys_to_not_convert
if skip_modules is None or add_default_skips:
modules_to_not_convert = get_keys_to_not_convert(model)
else:
modules_to_not_convert = []
if skip_modules is not None:
modules_to_not_convert.extend(skip_modules)
if keep_in_fp32_modules is not None:
modules_to_not_convert.extend(keep_in_fp32_modules)
return modules_to_not_convert
@property
def is_qat_trainable(self) -> bool:
"""Flag indicating whether the quantized model can carry out quantization aware training"""
return False
@property
def is_compileable(self) -> bool:
"""Flag indicating whether the quantized model can be compiled"""
return False
def get_state_dict_and_metadata(self, model, safe_serialization=False):
"""Get state dict and metadata. Useful when we need to modify a bit the state dict due to quantization"""
return (None, {})
@abstractmethod
def _process_model_before_weight_loading(self, model, **kwargs):
...
@abstractmethod
def _process_model_after_weight_loading(self, model, **kwargs):
...
@abstractmethod
def is_serializable(self, safe_serialization=None):
...
@property
@abstractmethod
def is_trainable(self):
...
def _convert_model_for_quantization(self, model):
from accelerate import init_empty_weights
for name, module in model.named_modules():
module_class_name = module.__class__.__name__
if module_class_name in MODULES_TO_PATCH_FOR_QUANTIZATION and self.quantization_config.quant_method in MODULES_TO_PATCH_FOR_QUANTIZATION[module_class_name]['quantization_methods']:
with init_empty_weights():
parent_module, name = get_module_from_name(model, name)
parent_module._modules[name] = MODULES_TO_PATCH_FOR_QUANTIZATION[module_class_name]['module_name'](model.config.get_text_config())
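The `get_special_dtypes_update` helper above reduces to a simple substring match over parameter names: any parameter whose name contains an entry of `modules_to_not_convert` keeps the non-quantized dtype. A minimal, framework-free sketch of that rule (the parameter names below are hypothetical; real models expose them via `model.named_parameters()`):

```python
# Sketch of the name-matching rule in HfQuantizer.get_special_dtypes_update:
# parameters belonging to skipped modules are mapped to the dtype passed
# to from_pretrained, everything else is left to the quantizer.
def special_dtypes(param_names, modules_to_not_convert, dtype):
    return {
        name: dtype
        for name in param_names
        if any(m in name for m in modules_to_not_convert)
    }

params = ["model.embed_tokens.weight", "model.layers.0.mlp.weight", "lm_head.weight"]
print(special_dtypes(params, ["lm_head"], "float16"))
# -> {'lm_head.weight': 'float16'}
```

The real method iterates `model.named_parameters()` and uses a `torch.dtype`; the matching logic is identical.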
|
class HfQuantizer(ABC):
'''
Abstract class of the HuggingFace quantizer. For now, supports quantizing HF transformers models for inference and/or quantization.
This class is used only by transformers.PreTrainedModel.from_pretrained and cannot easily be used outside the scope of that method
yet.
Attributes
quantization_config (`transformers.utils.quantization_config.QuantizationConfigMixin`):
The quantization config that defines the quantization parameters of your model that you want to quantize.
modules_to_not_convert (`list[str]`, *optional*):
The list of module names to not convert when quantizing the model.
required_packages (`list[str]`, *optional*):
The list of required pip packages to install prior to using the quantizer
requires_calibration (`bool`):
Whether the quantization method requires to calibrate the model before using it.
requires_parameters_quantization (`bool`):
Whether the quantization method requires to create a new Parameter. For example, for bitsandbytes, it is
required to create a new xxxParameter in order to properly quantize the model.
'''
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
pass
def update_torch_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
'''
Deprecated in favor of `update_dtype`!
Args:
dtype (`torch.dtype`):
The input dtype that is passed in `from_pretrained`
'''
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
'''
Some quantization methods require to explicitly set the dtype of the model to a
target dtype. You need to override this method in case you want to make sure that behavior is
preserved
Args:
dtype (`torch.dtype`):
The input dtype that is passed in `from_pretrained`
'''
pass
def update_device_map(self, device_map: Optional[dict[str, Any]]) -> Optional[dict[str, Any]]:
'''
Override this method if you want to override the existing device map with a new
one. E.g. for bitsandbytes, since `accelerate` is a hard requirement, if no device_map is
passed, the device_map is set to `"auto"`.
Args:
device_map (`Union[dict, str]`, *optional*):
The device_map that is passed through the `from_pretrained` method.
'''
pass
def adjust_target_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
'''
Override this method if you want to adjust the `target_dtype` variable used in `from_pretrained`
to compute the device_map in case the device_map is a `str`. E.g. for bitsandbytes we force-set `target_dtype`
to `torch.int8` and for 4-bit we pass a custom enum `accelerate.CustomDtype.int4`.
Args:
dtype (`torch.dtype`, *optional*):
The dtype that is used to compute the device_map.
'''
pass
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
'''
Override this method if you want to adjust the `missing_keys`.
Args:
missing_keys (`list[str]`, *optional*):
The list of missing keys in the checkpoint compared to the state dict of the model
'''
pass
def update_unexpected_keys(self, model, unexpected_keys: list[str], prefix: str) -> list[str]:
'''
Override this method if you want to adjust the `unexpected_keys`.
Args:
unexpected_keys (`list[str]`, *optional*):
The list of unexpected keys in the checkpoint compared to the state dict of the model
'''
pass
def update_missing_keys_after_loading(self, model, missing_keys: list[str], prefix: str) -> list[str]:
'''
Override this method if you want to adjust the `missing_keys` after loading the model params,
but before the model is post-processed.
Args:
missing_keys (`list[str]`, *optional*):
The list of missing keys in the checkpoint compared to the state dict of the model
'''
pass
def update_expected_keys(self, model, expected_keys: list[str], loaded_keys: list[str]) -> list[str]:
'''
Override this method if you want to adjust the `expected_keys`.
Args:
expected_keys (`list[str]`, *optional*):
The list of the expected keys in the initialized model.
loaded_keys (`list[str]`, *optional*):
The list of the loaded keys in the checkpoint.
'''
pass
def get_special_dtypes_update(self, model, dtype: 'torch.dtype') -> dict[str, 'torch.dtype']:
'''
Returns the dtypes for modules that are not quantized - used to compute the device_map when
a str is passed as device_map. The method will use the `modules_to_not_convert` that is modified
in `_process_model_before_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
dtype (`torch.dtype`):
The dtype passed in `from_pretrained` method.
'''
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
'''adjust max_memory argument for infer_auto_device_map() if extra memory is needed for quantization'''
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
'''
Checks if a loaded state_dict component is part of a quantized param + some validation; only defined if
requires_parameters_quantization == True, for quantization methods that require creating new parameters
for quantization.
'''
pass
def create_quantized_param(self, *args, **kwargs) -> 'torch.nn.Parameter':
'''
takes needed components from state_dict and creates quantized param; only applicable if
requires_parameters_quantization == True
'''
pass
def validate_environment(self, *args, **kwargs):
'''
This method checks for potential conflicts with arguments that are
passed in `from_pretrained`. You need to define it for all future quantizers that are integrated with transformers.
If no explicit checks are needed, simply return nothing.
'''
pass
def update_tp_plan(self, config):
'''updates the tp plan for the scales'''
pass
def update_ep_plan(self, config):
'''updates the ep plan for the scales'''
pass
def preprocess_model(self, model: 'PreTrainedModel', **kwargs):
'''
Setting model attributes and/or converting model before weights loading. At this point
the model should be initialized on the meta device so you can freely manipulate the skeleton
of the model in order to replace modules in-place. Make sure to override the abstract method `_process_model_before_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
kwargs (`dict`, *optional*):
The keyword arguments that are passed along `_process_model_before_weight_loading`.
'''
pass
def postprocess_model(self, model: 'PreTrainedModel', **kwargs):
'''
Post-process the model post weights loading.
Make sure to override the abstract method `_process_model_after_weight_loading`.
Args:
model (`~transformers.PreTrainedModel`):
The model to quantize
kwargs (`dict`, *optional*):
The keyword arguments that are passed along `_process_model_after_weight_loading`.
'''
pass
def remove_quantization_config(self, model):
'''
Remove the quantization config from the model.
'''
pass
def dequantize(self, model):
'''
Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance.
Note not all quantization schemes support this.
'''
pass
def get_accelerator_warm_up_factor(self):
'''
The factor used in `caching_allocator_warmup` to get the number of bytes to pre-allocate when warming up the accelerator.
A factor of 2 means we allocate all the bytes of the empty model (since we allocate in fp16), a factor of 4 means
we allocate half the memory of the weights residing in the empty model, etc...
'''
pass
def _dequantize(self, model):
pass
def update_param_name(self, param_name: str) -> str:
'''
Override this method if you want to adjust the `param_name`.
'''
pass
@staticmethod
def get_modules_to_not_convert(model: 'PreTrainedModel', skip_modules: Optional[list[str]]=None, keep_in_fp32_modules: Optional[list[str]]=None, add_default_skips: bool=False):
pass
@property
def is_qat_trainable(self) -> bool:
'''Flag indicating whether the quantized model can carry out quantization aware training'''
pass
@property
def is_compileable(self) -> bool:
'''Flag indicating whether the quantized model can be compiled'''
pass
def get_state_dict_and_metadata(self, model, safe_serialization=False):
'''Get state dict and metadata. Useful when we need to modify a bit the state dict due to quantization'''
pass
@abstractmethod
def _process_model_before_weight_loading(self, model, **kwargs):
pass
@abstractmethod
def _process_model_after_weight_loading(self, model, **kwargs):
pass
@abstractmethod
def is_serializable(self, safe_serialization=None):
pass
@property
@abstractmethod
def is_trainable(self):
pass
def _convert_model_for_quantization(self, model):
pass
| 41
| 25
| 8
| 1
| 3
| 4
| 1
| 1.38
| 1
| 8
| 1
| 14
| 20
| 3
| 20
| 40
| 218
| 35
| 77
| 40
| 47
| 106
| 56
| 27
| 35
| 2
| 4
| 1
| 22
|
6,471
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_aqlm.py
|
transformers.quantizers.quantizer_aqlm.AqlmHfQuantizer
|
from ..utils.quantization_config import QuantizationConfigMixin
import importlib
from .base import HfQuantizer
from packaging import version
from ..integrations import replace_with_aqlm_linear
from ..utils import is_accelerate_available, is_aqlm_available, is_torch_available, logging
class AqlmHfQuantizer(HfQuantizer):
"""
Quantizer of the AQLM method. Enables the loading of prequantized models.
"""
requires_calibration = True
required_packages = ['aqlm']
optimum_quantizer = None
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_accelerate_available():
raise ImportError('Using `aqlm` quantization requires Accelerate: `pip install accelerate`')
if not is_aqlm_available():
raise ImportError('Using `aqlm` quantization requires AQLM: `pip install aqlm[gpu,cpu]`')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
if torch.cuda.is_available():
dtype = torch.float16
logger.info('CUDA available. Assuming AQLM inference on GPU and loading the model in `torch.float16`. To overwrite it, set `dtype` manually.')
else:
dtype = torch.float32
logger.info('CUDA is unavailable. Assuming AQLM inference on CPU and loading the model in `torch.float32`. To overwrite it, set `dtype` manually.')
return dtype
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
replace_with_aqlm_linear(model, quantization_config=self.quantization_config, linear_weights_not_to_quantize=self.quantization_config.linear_weights_not_to_quantize)
model.config.quantization_config = self.quantization_config
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
return model
@property
def is_trainable(self) -> bool:
aqlm_supports_training = version.parse(importlib.metadata.version('aqlm')) >= version.parse('1.0.2')
if aqlm_supports_training:
return True
else:
logger.warning(f"Currently installed `aqlm` version ({importlib.metadata.version('aqlm')}) doesn't support training. If you wish to train a quantized model, please update `aqlm` with `pip install aqlm>=1.0.2`")
return False
def is_serializable(self, safe_serialization=None):
return True
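The `is_trainable` property above gates a capability on the installed `aqlm` version. The real check uses `packaging.version` on `importlib.metadata.version('aqlm')`; the sketch below reproduces the idea with a plain dotted-version tuple comparison (the version strings are hypothetical inputs):

```python
# Simplified version gate, mirroring AqlmHfQuantizer.is_trainable:
# training support is only reported once the installed aqlm reaches 1.0.2.
def parse(v):
    # "1.0.2" -> (1, 0, 2); no pre-release handling, unlike packaging.version
    return tuple(int(part) for part in v.split("."))

def aqlm_supports_training(installed, minimum="1.0.2"):
    return parse(installed) >= parse(minimum)

print(aqlm_supports_training("1.1.0"))  # -> True
print(aqlm_supports_training("0.9.9"))  # -> False
```

In production code prefer `packaging.version.parse`, which also handles pre-release and post-release suffixes correctly.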
|
class AqlmHfQuantizer(HfQuantizer):
'''
Quantizer of the AQLM method. Enables the loading of prequantized models.
'''
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
@property
def is_trainable(self) -> bool:
pass
def is_serializable(self, safe_serialization=None):
pass
| 9
| 1
| 7
| 0
| 6
| 0
| 2
| 0.06
| 1
| 3
| 1
| 0
| 7
| 1
| 7
| 47
| 62
| 9
| 50
| 18
| 37
| 3
| 33
| 13
| 25
| 3
| 5
| 2
| 12
|
6,472
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_awq.py
|
transformers.quantizers.quantizer_awq.AwqQuantizer
|
from typing import TYPE_CHECKING, Optional
import importlib.metadata
from .base import HfQuantizer
from ..utils import is_accelerate_available, is_auto_awq_available, is_torch_available, logging
from packaging import version
from ..utils.quantization_config import AWQLinearVersion
class AwqQuantizer(HfQuantizer):
"""
4-bit quantization for Activation-aware Weight Quantization (AWQ) (https://huggingface.co/papers/2306.00978)
"""
requires_calibration = True
required_packages = ['awq', 'accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
def validate_environment(self, device_map, **kwargs):
if not is_auto_awq_available():
raise ImportError('Loading an AWQ quantized model requires auto-awq library (`pip install autoawq`)')
if not is_accelerate_available():
raise ImportError('Loading an AWQ quantized model requires accelerate (`pip install accelerate`)')
if self.quantization_config.version == AWQLinearVersion.GEMM and (not torch.cuda.is_available()) and (not torch.xpu.is_available()):
logger.warning_once('No CUDA or XPU found, consider switching to the IPEX version for CPU-only execution.')
self.quantization_config.version = AWQLinearVersion.IPEX
if self.quantization_config.version == AWQLinearVersion.IPEX:
if version.parse(importlib.metadata.version('autoawq')) < version.parse('0.2.6'):
raise RuntimeError('To use IPEX backend, you need autoawq>0.2.6. Please install the latest version or from source.')
if device_map is None:
logger.warning_once("You have loaded an AWQ model without setting device_map, please set 'cpu' or 'xpu' or 'auto'")
elif isinstance(device_map, dict) and 'disk' in device_map.values():
raise ValueError('You are attempting to load an IPEX version AWQ model with a device_map that contains disk device. This is not supported. Please make sure only cpu and xpu in the device_map.')
else:
if not torch.cuda.is_available() and (not torch.xpu.is_available()):
raise RuntimeError('GPU is required to run AWQ quantized model. You can use IPEX version AWQ if you have an Intel CPU')
if device_map is None:
logger.warning_once('You have loaded an AWQ model on CPU and have a CUDA/XPU device available, make sure to set your model on a GPU device in order to run your model.')
elif device_map is not None:
if isinstance(device_map, dict) and any((forbidden in device_map.values() for forbidden in ('cpu', torch.device('cpu'), 'disk'))):
raise ValueError('You are attempting to load an AWQ model with a device_map that contains a CPU or disk device. This is not supported. Please remove the CPU or disk device from the device_map.')
def update_dtype(self, dtype):
if dtype is None:
dtype = torch.float16
logger.info('Loading the model in `torch.float16`. To overwrite it, set `dtype` manually.')
elif dtype == torch.bfloat16 and (torch.cuda.is_available() or torch.xpu.is_available()):
logger.warning('`torch.bfloat16` is not supported for AWQ CUDA/XPU kernels yet. Casting to `torch.float16`.')
dtype = torch.float16
elif dtype != torch.float16 and (torch.cuda.is_available() or torch.xpu.is_available()):
logger.warning('We suggest you to set `dtype=torch.float16` for better efficiency on CUDA/XPU with AWQ.')
return dtype
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_quantization_scales, replace_with_awq_linear
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules, add_default_skips=True)
model, has_been_replaced = replace_with_awq_linear(model, quantization_config=self.quantization_config, modules_to_not_convert=self.modules_to_not_convert)
model = replace_quantization_scales(model, model.config.model_type)
if not has_been_replaced:
logger.warning('You are loading an AWQ model but no linear modules were found in your model. Please double check your model architecture, or submit an issue on github if you think this is a bug.')
def _process_model_after_weight_loading(self, model, **kwargs):
if self.quantization_config.do_fuse:
from ..integrations import fuse_awq_modules
model = fuse_awq_modules(model, self.quantization_config)
model._awq_is_fused = True
if self.quantization_config.version == AWQLinearVersion.EXLLAMA:
from ..integrations import post_init_awq_exllama_modules
model = post_init_awq_exllama_modules(model, self.quantization_config.exllama_config)
if self.quantization_config.version == AWQLinearVersion.IPEX:
from ..integrations import post_init_awq_ipex_modules
model = post_init_awq_ipex_modules(model)
def is_serializable(self, safe_serialization=None):
if self.quantization_config.do_fuse:
logger.warning('You cannot save an AWQ model that uses fused modules!')
return False
if self.quantization_config.version == AWQLinearVersion.EXLLAMA:
logger.warning('You cannot save an AWQ model that uses Exllama backend!')
return False
return True
@property
def is_trainable(self):
MIN_AWQ_VERSION_FOR_PEFT = '0.2.0'
return version.parse(importlib.metadata.version('autoawq')) >= version.parse(MIN_AWQ_VERSION_FOR_PEFT)
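The device_map validation in `validate_environment` above boils down to rejecting dict maps that dispatch any module to a forbidden device. A self-contained sketch of that check (the helper name is hypothetical; the device strings are the ones transformers uses):

```python
# Mirrors the GPU-backend branch of AwqQuantizer.validate_environment:
# a dict device_map must not place any module on "cpu" or "disk".
def check_device_map(device_map, forbidden=("cpu", "disk")):
    if isinstance(device_map, dict):
        bad = sorted(set(device_map.values()) & set(forbidden))
        if bad:
            raise ValueError(f"Unsupported devices in device_map: {bad}")

check_device_map({"model.layers.0": 0, "lm_head": 0})  # passes silently
try:
    check_device_map({"model.layers.0": 0, "lm_head": "disk"})
except ValueError as exc:
    print(exc)  # -> Unsupported devices in device_map: ['disk']
```

The real method additionally special-cases the IPEX backend, where CPU placement is allowed and only disk offload is rejected.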
|
class AwqQuantizer(HfQuantizer):
'''
4-bit quantization for Activation-aware Weight Quantization (AWQ) (https://huggingface.co/papers/2306.00978)
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, device_map, **kwargs):
pass
def update_dtype(self, dtype):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def _process_model_after_weight_loading(self, model, **kwargs):
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self):
pass
| 9
| 1
| 14
| 2
| 12
| 0
| 4
| 0.08
| 1
| 6
| 1
| 0
| 7
| 2
| 7
| 47
| 118
| 25
| 87
| 19
| 74
| 7
| 62
| 17
| 50
| 12
| 5
| 3
| 27
|
6,473
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_bitnet.py
|
transformers.quantizers.quantizer_bitnet.BitNetHfQuantizer
|
from ..utils import is_accelerate_available, is_torch_available, logging
from .base import HfQuantizer
from typing import TYPE_CHECKING, Optional, Union
class BitNetHfQuantizer(HfQuantizer):
"""
1.58-bit quantization from BitNet quantization method:
Before loading: it converts the linear layers into BitLinear layers during loading.
Check out the paper introducing this method: https://huggingface.co/papers/2402.17764
"""
requires_parameters_quantization = False
requires_calibration = True
required_packages = ['accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_accelerate_available():
raise ImportError('Loading a BitNet quantized model requires accelerate (`pip install accelerate`)')
if not torch.cuda.is_available():
logger.warning_once("You don't have a GPU available to load the model, the inference will be slow because of weight unpacking")
return
device_map = kwargs.get('device_map')
if device_map is None:
logger.warning_once('You have loaded a BitNet model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model.')
elif device_map is not None:
if isinstance(device_map, dict) and ('cpu' in device_map.values() or 'disk' in device_map.values()):
raise ValueError('You are attempting to load a BitNet model with a device_map that contains a CPU or disk device. This is not supported. Please remove the CPU or disk device from the device_map.')
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
return model
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_bitnet_linear
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
model = replace_with_bitnet_linear(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config, pre_quantized=self.pre_quantized)
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
max_memory = {key: val * 0.9 for key, val in max_memory.items()}
return max_memory
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
target_dtype = torch.int8
return target_dtype
def is_serializable(self, safe_serialization=None):
return True
@property
def is_trainable(self) -> bool:
return self.quantization_config.linear_class == 'autobitlinear' and self.quantization_config.quantization_mode == 'online'
@property
def is_qat_trainable(self) -> bool:
"""Flag indicating whether the quantized model can carry out quantization aware training"""
return self.quantization_config.linear_class == 'autobitlinear' and self.quantization_config.quantization_mode == 'online'
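The `adjust_max_memory` override above reserves roughly 10% headroom on every device by scaling each per-device budget by 0.9. A standalone sketch of the same arithmetic:

```python
# Same computation as BitNetHfQuantizer.adjust_max_memory: scale every
# per-device memory budget down by 10% to leave headroom during loading.
def adjust_max_memory(max_memory):
    return {key: val * 0.9 for key, val in max_memory.items()}

print(adjust_max_memory({0: 1000, "cpu": 2000}))
# -> {0: 900.0, 'cpu': 1800.0}
```

The adjusted dict is what `from_pretrained` hands to accelerate's `infer_auto_device_map` when computing a string device_map.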
|
class BitNetHfQuantizer(HfQuantizer):
'''
1.58-bit quantization from BitNet quantization method:
Before loading: it converts the linear layers into BitLinear layers during loading.
Check out the paper introducing this method: https://huggingface.co/papers/2402.17764
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
pass
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self) -> bool:
pass
@property
def is_qat_trainable(self) -> bool:
'''Flag indicating whether the quantized model can carry out quantization aware training'''
pass
| 12
| 2
| 8
| 1
| 7
| 0
| 2
| 0.08
| 1
| 7
| 0
| 0
| 8
| 2
| 8
| 48
| 84
| 17
| 62
| 23
| 45
| 5
| 38
| 16
| 28
| 7
| 5
| 2
| 15
|
6,474
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_bnb_4bit.py
|
transformers.quantizers.quantizer_bnb_4bit.Bnb4BitHfQuantizer
|
import importlib
from .quantizers_utils import get_module_from_name
from ..utils import ACCELERATE_MIN_VERSION, is_accelerate_available, is_bitsandbytes_available, is_torch_available, is_torch_hpu_available, is_torch_npu_available, is_torch_xpu_available, logging
from functools import cached_property
from packaging import version
from typing import TYPE_CHECKING, Any, Optional, Union
from .base import HfQuantizer
class Bnb4BitHfQuantizer(HfQuantizer):
"""
4-bit quantization from the bitsandbytes quantization method:
before loading: converts transformer layers into Linear4bit
during loading: loads the 16-bit weight and passes it to the layer object
after loading: quantizes individual weights in Linear4bit into 4-bit at the first .cuda() call
saving:
from state dict, as usual; saves weights and `quant_state` components
loading:
need to locate `quant_state` components and pass to the Param4bit constructor
"""
use_keep_in_fp32_modules = True
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['bitsandbytes', 'accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
if self.quantization_config.llm_int8_skip_modules is not None:
self.modules_to_not_convert = self.quantization_config.llm_int8_skip_modules
def validate_environment(self, *args, **kwargs):
if not is_accelerate_available():
raise ImportError(f"Using `bitsandbytes` 4-bit quantization requires Accelerate: `pip install 'accelerate>={ACCELERATE_MIN_VERSION}'`")
if not is_bitsandbytes_available(check_library_only=True):
raise ImportError('Using `bitsandbytes` 4-bit quantization requires the latest version of bitsandbytes: `pip install -U bitsandbytes`')
if not is_torch_available():
raise ImportError('The bitsandbytes library requires PyTorch but it was not found in your environment. You can install it with `pip install torch`.')
if version.parse(importlib.metadata.version('bitsandbytes')) < version.parse('0.43.1'):
if not torch.cuda.is_available():
raise ImportError('The installed version of bitsandbytes (<0.43.1) requires CUDA, but CUDA is not available. You may need to install PyTorch with CUDA support or upgrade bitsandbytes to >=0.43.1.')
from ..integrations import validate_bnb_backend_availability
from ..utils import is_bitsandbytes_multi_backend_available
bnb_multibackend_is_enabled = is_bitsandbytes_multi_backend_available()
validate_bnb_backend_availability(raise_exception=True)
device_map = kwargs.get('device_map')
if device_map is not None and isinstance(device_map, dict) and (not self.quantization_config.llm_int8_enable_fp32_cpu_offload):
device_map_without_lm_head = {key: device_map[key] for key in device_map if key not in self.modules_to_not_convert}
if set(device_map.values()) == {'cpu'} and bnb_multibackend_is_enabled:
pass
elif 'cpu' in device_map_without_lm_head.values() or 'disk' in device_map_without_lm_head.values():
raise ValueError('Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `llm_int8_enable_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details. ')
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
if version.parse(importlib.metadata.version('accelerate')) > version.parse('0.19.0'):
from accelerate.utils import CustomDtype
if target_dtype != torch.int8:
logger.info(f'target_dtype {target_dtype} is replaced by `CustomDtype.INT4` for 4-bit BnB quantization')
return CustomDtype.INT4
else:
raise ValueError("You are using `device_map='auto'` on a 4bit loaded version of the model. To automatically compute the appropriate device map, you should upgrade your `accelerate` library,`pip install --upgrade accelerate` or install it from source to support fp4 auto device mapcalculation. You may encounter unexpected behavior, or pass your own device map")
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
import bitsandbytes as bnb
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module._parameters.get(tensor_name, None), bnb.nn.Params4bit):
return True
elif isinstance(module, bnb.nn.Linear4bit) and tensor_name == 'bias':
return True
else:
return False
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
"""
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
"""
import bitsandbytes as bnb
module, tensor_name = get_module_from_name(model, param_name)
if tensor_name not in module._parameters:
raise ValueError(f'{module} does not have a parameter or a buffer named {tensor_name}.')
old_value = getattr(module, tensor_name)
if isinstance(target_device, int) and is_torch_npu_available():
target_device = f'npu:{target_device}'
if tensor_name == 'bias':
if param_value is None:
new_value = old_value.to(target_device)
else:
new_value = param_value.to(target_device)
new_value = torch.nn.Parameter(new_value, requires_grad=old_value.requires_grad)
module._parameters[tensor_name] = new_value
return
if not isinstance(module._parameters[tensor_name], bnb.nn.Params4bit):
raise ValueError('this function only loads `Linear4bit components`')
if old_value.device == torch.device('meta') and target_device not in ['meta', torch.device('meta')] and (param_value is None):
raise ValueError(f'{tensor_name} is on the meta device, we need a `value` to put it on {target_device}.')
if self.pre_quantized:
if not self.is_serializable:
raise ValueError('Detected int4 weights but the version of bitsandbytes is not compatible with int4 serialization. Make sure to download the latest `bitsandbytes` version. `pip install --upgrade bitsandbytes`.')
if param_name + '.quant_state.bitsandbytes__fp4' not in state_dict and param_name + '.quant_state.bitsandbytes__nf4' not in state_dict:
raise ValueError(f'Supplied state dict for {param_name} does not contain `bitsandbytes__*` and possibly other `quantized_stats` components.')
quantized_stats = {}
for k, v in state_dict.items():
if param_name + '.' in k:
quantized_stats[k] = v
if unexpected_keys is not None and k in unexpected_keys:
unexpected_keys.remove(k)
param_kwargs = {}
if self.is_bnb_supports_quant_storage_module:
param_kwargs['module'] = module
new_value = bnb.nn.Params4bit.from_prequantized(data=param_value, quantized_stats=quantized_stats, requires_grad=False, device=target_device, **param_kwargs)
else:
new_value = param_value.to('cpu')
if issubclass(module.source_cls, Conv1D):
new_value = new_value.T
kwargs = old_value.__dict__
new_value = bnb.nn.Params4bit(new_value, requires_grad=False, **kwargs).to(target_device)
module._parameters[tensor_name] = new_value
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
max_memory = {key: val * 0.9 for key, val in max_memory.items()}
return max_memory
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
logger.info('Overriding dtype=%s with `dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in 8-bit or 4-bit. Pass your own dtype to specify the dtype of the remaining non-linear layers or pass dtype=torch.float16 to remove this warning.', dtype)
dtype = torch.float16
return dtype
def update_device_map(self, device_map):
if device_map is None:
if torch.cuda.is_available():
device_map = {'': torch.cuda.current_device()}
elif is_torch_npu_available():
device_map = {'': f'npu:{torch.npu.current_device()}'}
elif is_torch_hpu_available():
device_map = {'': f'hpu:{torch.hpu.current_device()}'}
elif is_torch_xpu_available():
device_map = {'': torch.xpu.current_device()}
else:
device_map = {'': 'cpu'}
logger.info(f"The device_map was not initialized. Setting device_map to {device_map}. If you want to use the model for inference, please set device_map='auto'.")
return device_map
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', device_map, keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_bnb_linear
llm_int8_enable_fp32_cpu_offload = self.quantization_config.llm_int8_enable_fp32_cpu_offload
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.llm_int8_skip_modules, keep_in_fp32_modules)
if isinstance(device_map, dict) and len(device_map.keys()) > 1:
keys_on_cpu = [key for key, value in device_map.items() if value in ['disk', 'cpu']]
if len(keys_on_cpu) > 0 and (not llm_int8_enable_fp32_cpu_offload):
raise ValueError('If you want to offload some keys to `cpu` or `disk`, you need to set `llm_int8_enable_fp32_cpu_offload=True`. Note that these modules will not be converted to 8-bit but kept in 32-bit.')
self.modules_to_not_convert.extend(keys_on_cpu)
model = replace_with_bnb_linear(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config)
model.config.quantization_config = self.quantization_config
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
model.is_loaded_in_4bit = True
model.is_4bit_serializable = self.is_serializable()
return model
def is_serializable(self, safe_serialization=None):
_is_4bit_serializable = version.parse(importlib.metadata.version('bitsandbytes')) >= version.parse('0.41.3')
if not _is_4bit_serializable:
logger.warning("You are calling `save_pretrained` on a 4-bit converted model, but your `bitsandbytes` version doesn't support it. If you want to save 4-bit models, make sure to have `bitsandbytes>=0.41.3` installed.")
return False
return True
@cached_property
def is_bnb_supports_quant_storage_module(self) -> bool:
"""
Determines whether the installed bitsandbytes version supports
the `module` parameter in `Params4bit.from_prequantized`.
"""
return version.parse(importlib.metadata.version('bitsandbytes')) >= version.parse('0.43.3')
@property
def is_trainable(self) -> bool:
return True
def _dequantize(self, model):
from ..integrations import dequantize_and_replace
model = dequantize_and_replace(model, self.modules_to_not_convert, quantization_config=self.quantization_config)
return model
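The `create_quantized_param` flow above gathers every `quant_state` sidecar entry belonging to a 4-bit weight before rebuilding the `Params4bit` tensor, pruning consumed keys from `unexpected_keys` as it goes. A minimal, dependency-free sketch of that key-gathering step (the function name is illustrative, not part of the transformers API):

```python
def gather_quantized_stats(param_name, state_dict, unexpected_keys=None):
    """Collect the auxiliary `quant_state` entries belonging to one 4-bit
    weight, removing them from `unexpected_keys` as they are consumed."""
    quantized_stats = {}
    for k, v in state_dict.items():
        # e.g. "w.quant_state.bitsandbytes__nf4", "w.absmax" for param "w"
        if param_name + "." in k:
            quantized_stats[k] = v
            if unexpected_keys is not None and k in unexpected_keys:
                unexpected_keys.remove(k)
    return quantized_stats
```

The collected dict is what `Bnb4BitHfQuantizer.create_quantized_param` passes to `bnb.nn.Params4bit.from_prequantized` as `quantized_stats`.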
|
class Bnb4BitHfQuantizer(HfQuantizer):
'''
4-bit quantization from bitsandbytes.py quantization method:
before loading: converts transformer layers into Linear4bit
during loading: load 16bit weight and pass to the layer object
after: quantizes individual weights in Linear4bit into 4bit at the first .cuda() call
saving:
from state dict, as usual; saves weights and `quant_state` components
loading:
need to locate `quant_state` components and pass to Param4bit constructor
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
'''
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
'''
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def update_device_map(self, device_map):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', device_map, keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def is_serializable(self, safe_serialization=None):
pass
@cached_property
def is_bnb_supports_quant_storage_module(self) -> bool:
'''
Determines whether the installed bitsandbytes version supports
the `module` parameter in `Params4bit.from_prequantized`.
'''
pass
@property
def is_trainable(self) -> bool:
pass
def _dequantize(self, model):
pass
| 17
| 3
| 20
| 2
| 16
| 2
| 4
| 0.16
| 1
| 12
| 1
| 0
| 14
| 1
| 14
| 54
| 317
| 50
| 232
| 64
| 187
| 36
| 133
| 41
| 111
| 15
| 5
| 4
| 50
|
6,475
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_bnb_8bit.py
|
transformers.quantizers.quantizer_bnb_8bit.Bnb8BitHfQuantizer
|
from typing import TYPE_CHECKING, Any, Optional, Union
import importlib
from ..utils import ACCELERATE_MIN_VERSION, is_accelerate_available, is_bitsandbytes_available, is_torch_available, is_torch_xpu_available, logging
from .base import HfQuantizer
from packaging import version
from .quantizers_utils import get_module_from_name
class Bnb8BitHfQuantizer(HfQuantizer):
"""
8-bit quantization from bitsandbytes quantization method:
before loading: converts transformer layers into Linear8bitLt
during loading: load 16bit weight and pass to the layer object
after: quantizes individual weights in Linear8bitLt into 8bit at the first .cuda() call
saving:
from state dict, as usual; saves weights and 'SCB' component
loading:
need to locate SCB component and pass to the Linear8bitLt object
"""
use_keep_in_fp32_modules = True
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['bitsandbytes', 'accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
if self.quantization_config.llm_int8_skip_modules is not None:
self.modules_to_not_convert = self.quantization_config.llm_int8_skip_modules
def validate_environment(self, *args, **kwargs):
if not is_accelerate_available():
raise ImportError(f"Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install 'accelerate>={ACCELERATE_MIN_VERSION}'`")
if not is_bitsandbytes_available(check_library_only=True):
raise ImportError('Using `bitsandbytes` 8-bit quantization requires the latest version of bitsandbytes: `pip install -U bitsandbytes`')
if not is_torch_available():
raise ImportError('The bitsandbytes library requires PyTorch but it was not found in your environment. You can install it with `pip install torch`.')
if version.parse(importlib.metadata.version('bitsandbytes')) < version.parse('0.43.1'):
if not torch.cuda.is_available():
raise ImportError('The installed version of bitsandbytes (<0.43.1) requires CUDA, but CUDA is not available. You may need to install PyTorch with CUDA support or upgrade bitsandbytes to >=0.43.1.')
from ..integrations import validate_bnb_backend_availability
from ..utils import is_bitsandbytes_multi_backend_available
bnb_multibackend_is_enabled = is_bitsandbytes_multi_backend_available()
validate_bnb_backend_availability(raise_exception=True)
device_map = kwargs.get('device_map')
if device_map is not None and isinstance(device_map, dict) and (not self.quantization_config.llm_int8_enable_fp32_cpu_offload):
device_map_without_lm_head = {key: device_map[key] for key in device_map if key not in self.modules_to_not_convert}
if set(device_map.values()) == {'cpu'} and bnb_multibackend_is_enabled:
pass
elif 'cpu' in device_map_without_lm_head.values() or 'disk' in device_map_without_lm_head.values():
raise ValueError('Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `llm_int8_enable_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details. ')
if version.parse(importlib.metadata.version('bitsandbytes')) < version.parse('0.37.2'):
raise ValueError('You have a version of `bitsandbytes` that is not compatible with 8-bit inference and training. Make sure you have the latest version of `bitsandbytes` installed.')
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
max_memory = {key: val * 0.9 for key, val in max_memory.items()}
return max_memory
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
logger.info('Overriding dtype=%s with `dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in 8-bit or 4-bit. Pass your own dtype to specify the dtype of the remaining non-linear layers or pass dtype=torch.float16 to remove this warning.', dtype)
dtype = torch.float16
return dtype
def update_device_map(self, device_map):
if device_map is None:
if torch.cuda.is_available():
device_map = {'': torch.cuda.current_device()}
elif is_torch_xpu_available():
device_map = {'': torch.xpu.current_device()}
else:
device_map = {'': 'cpu'}
logger.info(f"The device_map was not initialized. Setting device_map to {device_map}. If you want to use the model for inference, please set device_map='auto'.")
return device_map
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
if target_dtype != torch.int8:
logger.info(f'target_dtype {target_dtype} is replaced by `torch.int8` for 8-bit BnB quantization')
return torch.int8
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
import bitsandbytes as bnb
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module._parameters.get(tensor_name, None), bnb.nn.Int8Params):
if self.pre_quantized:
if param_name.replace('weight', 'SCB') not in state_dict:
raise ValueError('Missing quantization component `SCB`')
if param_value.dtype != torch.int8:
raise ValueError(f'Incompatible dtype `{param_value.dtype}` when loading 8-bit prequantized weight. Expected `torch.int8`.')
return True
return False
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
"""
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
needs aux items from the state dict; if found, removes them from unexpected_keys
"""
import bitsandbytes as bnb
fp16_statistics_key = param_name.replace('weight', 'SCB')
fp16_weights_format_key = param_name.replace('weight', 'weight_format')
fp16_statistics = state_dict.get(fp16_statistics_key)
fp16_weights_format = state_dict.get(fp16_weights_format_key)
module, tensor_name = get_module_from_name(model, param_name)
if tensor_name not in module._parameters:
raise ValueError(f'{module} does not have a parameter or a buffer named {tensor_name}.')
old_value = getattr(module, tensor_name)
if not isinstance(module._parameters[tensor_name], bnb.nn.Int8Params):
raise TypeError(f'Parameter `{tensor_name}` should only be a `bnb.nn.Int8Params` instance.')
if old_value.device == torch.device('meta') and target_device not in ['meta', torch.device('meta')] and (param_value is None):
raise ValueError(f'{tensor_name} is on the meta device, we need a `value` to put it on {target_device}.')
new_value = param_value.to('cpu')
if self.pre_quantized and (not self.is_serializable()):
raise ValueError('Detected int8 weights but the version of bitsandbytes is not compatible with int8 serialization. Make sure to download the latest `bitsandbytes` version. `pip install --upgrade bitsandbytes`.')
if issubclass(module.source_cls, Conv1D):
if fp16_statistics is None:
new_value = new_value.T
kwargs = old_value.__dict__
new_value = bnb.nn.Int8Params(new_value, requires_grad=False, **kwargs).to(target_device)
module._parameters[tensor_name] = new_value
if fp16_statistics is not None:
setattr(module.weight, 'SCB', fp16_statistics.to(target_device))
if unexpected_keys is not None:
unexpected_keys.remove(fp16_statistics_key)
if fp16_weights_format is not None and unexpected_keys is not None:
unexpected_keys.remove(fp16_weights_format_key)
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
model.is_loaded_in_8bit = True
model.is_8bit_serializable = self.is_serializable()
return model
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', device_map, keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_bnb_linear
llm_int8_enable_fp32_cpu_offload = self.quantization_config.llm_int8_enable_fp32_cpu_offload
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.llm_int8_skip_modules, keep_in_fp32_modules)
if isinstance(device_map, dict) and len(device_map.keys()) > 1:
keys_on_cpu = [key for key, value in device_map.items() if value in ['disk', 'cpu']]
if len(keys_on_cpu) > 0 and (not llm_int8_enable_fp32_cpu_offload):
raise ValueError('If you want to offload some keys to `cpu` or `disk`, you need to set `llm_int8_enable_fp32_cpu_offload=True`. Note that these modules will not be converted to 8-bit but kept in 32-bit.')
self.modules_to_not_convert.extend(keys_on_cpu)
model = replace_with_bnb_linear(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config)
model.config.quantization_config = self.quantization_config
def is_serializable(self, safe_serialization=None):
_bnb_supports_8bit_serialization = version.parse(importlib.metadata.version('bitsandbytes')) > version.parse('0.37.2')
if not _bnb_supports_8bit_serialization:
logger.warning("You are calling `save_pretrained` on an 8-bit converted model, but your `bitsandbytes` version doesn't support it. If you want to save 8-bit models, make sure to have `bitsandbytes>0.37.2` installed. You will most likely face errors or unexpected behaviours.")
return False
return True
@property
def is_trainable(self) -> bool:
return version.parse(importlib.metadata.version('bitsandbytes')) >= version.parse('0.37.0')
def _dequantize(self, model):
from ..integrations import dequantize_and_replace
model = dequantize_and_replace(model, self.modules_to_not_convert, quantization_config=self.quantization_config)
return model
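`Bnb8BitHfQuantizer.create_quantized_param` above derives its sidecar key names purely by string substitution on the weight name: the `SCB` quantization statistics and the legacy `weight_format` marker. A small sketch of that derivation (the helper name is hypothetical; the real code inlines the two `replace` calls):

```python
def int8_aux_keys(param_name):
    """Derive the `SCB` statistics key and the legacy `weight_format` key
    that the 8-bit quantizer looks up in the state dict for a weight."""
    return (
        param_name.replace("weight", "SCB"),
        param_name.replace("weight", "weight_format"),
    )
```

Both keys are fetched with `state_dict.get(...)` so a checkpoint that lacks them (i.e. an unquantized one) simply yields `None` for each.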
|
class Bnb8BitHfQuantizer(HfQuantizer):
'''
8-bit quantization from bitsandbytes quantization method:
before loading: converts transformer layers into Linear8bitLt
during loading: load 16bit weight and pass to the layer object
after: quantizes individual weights in Linear8bitLt into 8bit at the first .cuda() call
saving:
from state dict, as usual; saves weights and 'SCB' component
loading:
need to locate SCB component and pass to the Linear8bitLt object
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def update_device_map(self, device_map):
pass
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
'''
combines logic from _load_state_dict_into_meta_model and .integrations.bitsandbytes.py::set_module_quantized_tensor_to_device()
needs aux items from state dicts, if found - removes them from unexpected_keys
'''
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', device_map, keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self) -> bool:
pass
def _dequantize(self, model):
pass
| 15
| 2
| 18
| 2
| 15
| 1
| 3
| 0.11
| 1
| 11
| 1
| 0
| 13
| 1
| 13
| 53
| 267
| 43
| 202
| 62
| 160
| 23
| 120
| 40
| 100
| 10
| 5
| 3
| 44
|
6,476
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_compressed_tensors.py
|
transformers.quantizers.quantizer_compressed_tensors.CompressedTensorsHfQuantizer
|
from ..utils import is_compressed_tensors_available, is_torch_available, logging
from .base import HfQuantizer
from ..utils.quantization_config import CompressedTensorsConfig
class CompressedTensorsHfQuantizer(HfQuantizer):
"""
Quantizer for the compressed_tensors package. Loads and restores models to
quantized state with compressed_tensors
"""
requires_calibration = True
required_packages = ['compressed_tensors']
def __init__(self, quantization_config: CompressedTensorsConfig, **kwargs):
super().__init__(quantization_config, **kwargs)
if not is_compressed_tensors_available():
raise ImportError('Using `compressed_tensors` quantized models requires the compressed-tensors library: `pip install compressed-tensors`')
quantization_config.post_init()
from compressed_tensors.compressors import ModelCompressor
self.compressor = ModelCompressor.from_compression_config(quantization_config)
self.run_compressed = quantization_config.run_compressed
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_compressed_tensors_available():
raise ImportError('Using `compressed_tensors` quantized models requires the compressed-tensors library: `pip install compressed-tensors`')
if not is_torch_available():
raise ImportError('torch is required for using compressed-tensors quantization')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
logger.info('Loading model using torch.float16 for compressed-tensors quantization')
dtype = torch.float16
elif dtype != torch.float16:
logger.info('We suggest setting `dtype=torch.float16` for better efficiency with compressed_tensors.')
return dtype
def _process_model_before_weight_loading(self, model, **kwargs):
from compressed_tensors.quantization import apply_quantization_config
ct_quantization_config = self.compressor.quantization_config
apply_quantization_config(model, ct_quantization_config, self.run_compressed)
if self.quantization_config.is_quantization_compressed or self.quantization_config.is_sparsification_compressed:
self.compressor.compress_model(model=model)
def _process_model_after_weight_loading(self, model, **kwargs):
"""Decompress the loaded model if necessary; needed for QAT."""
if self.quantization_config.is_quantization_compressed and (not self.run_compressed) or self.quantization_config.is_sparsification_compressed:
self.compressor.decompress_model(model=model)
def update_tp_plan(self, config):
additional_plan = {'layers.*.feed_forward.experts.*.gate_proj.weight': 'local_colwise', 'layers.*.feed_forward.experts.*.gate_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.experts.*.up_proj.weight': 'local_colwise', 'layers.*.feed_forward.experts.*.up_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.experts.*.down_proj.weight': 'local_rowwise'}
if config.get_text_config() is not None and config.get_text_config().base_model_tp_plan is not None:
config.get_text_config().base_model_tp_plan.update(additional_plan)
return config
@property
def is_trainable(self):
return True
def is_qat_trainable(self) -> bool:
"""Loaded Models can carry out quantization aware training"""
return not self.run_compressed or not self.quantization_config.is_quantization_compressed
def is_serializable(self, safe_serialization=None) -> bool:
"""Models quantized using compressed tensors can be saved to disk"""
return True
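`update_tp_plan` above merges a fixed set of expert-layer sharding entries into the model's existing tensor-parallel plan; keys use `*` wildcards that match layer and expert indices at dispatch time. A simplified stand-alone sketch of the merge (operating on the plan dict directly rather than on a config object, which is an assumption for illustration):

```python
def merge_tp_plan(base_plan, additional_plan):
    """Merge extra tensor-parallel sharding entries into an existing plan,
    mirroring the dict update in CompressedTensorsHfQuantizer.update_tp_plan."""
    if base_plan is None:
        # no existing plan: the additional entries become the plan
        return dict(additional_plan)
    base_plan.update(additional_plan)
    return base_plan
```

As in the original, entries added later win on key collisions, so quantizer-specific sharding overrides the model default.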
|
class CompressedTensorsHfQuantizer(HfQuantizer):
'''
Quantizer for the compressed_tensors package. Loads and restores models to
quantized state with compressed_tensors
'''
def __init__(self, quantization_config: CompressedTensorsConfig, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def _process_model_before_weight_loading(self, model, **kwargs):
pass
def _process_model_after_weight_loading(self, model, **kwargs):
'''Decompress the loaded model if necessary; needed for QAT.'''
pass
def update_tp_plan(self, config):
pass
@property
def is_trainable(self):
pass
def is_qat_trainable(self) -> bool:
'''Loaded Models can carry out quantization aware training'''
pass
def is_serializable(self, safe_serialization=None) -> bool:
'''Models quantized using compressed tensors can be saved to disk'''
pass
| 11
| 4
| 8
| 1
| 7
| 1
| 2
| 0.13
| 1
| 4
| 1
| 0
| 10
| 3
| 10
| 50
| 103
| 23
| 71
| 29
| 51
| 9
| 52
| 26
| 35
| 4
| 5
| 2
| 20
|
6,477
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_eetq.py
|
transformers.quantizers.quantizer_eetq.EetqHfQuantizer
|
from typing import TYPE_CHECKING, Any, Optional
from .base import HfQuantizer
from .quantizers_utils import get_module_from_name
from ..utils import is_accelerate_available, is_eetq_available, is_torch_available, logging
class EetqHfQuantizer(HfQuantizer):
"""
8-bit quantization from EETQ quantization method:
before loading: converts transformer layers into W8A16Linear
during loading: load 16bit weight and pass to the layer object
after: quantizes individual weights in W8A16Linear into 8bit at the first .cuda() call
"""
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['eetq', 'accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_eetq_available():
raise ImportError('Using `eetq` 8-bit quantization requires eetq.Please install the latest version of eetq from : https://github.com/NetEase-FuXi/EETQ')
try:
import eetq
except ImportError as exc:
if 'shard_checkpoint' in str(exc):
raise ImportError('You are using a version of EETQ that is incompatible with the current transformers version. Either downgrade transformers to <= v4.46.3 or, if available, upgrade EETQ to > v1.0.0.') from exc
else:
raise
if not is_accelerate_available():
raise ImportError('Loading an EETQ quantized model requires accelerate (`pip install accelerate`)')
if not torch.cuda.is_available():
raise RuntimeError('No GPU found. A GPU is needed for quantization.')
device_map = kwargs.get('device_map')
if device_map is None:
logger.warning_once('You have loaded an EETQ model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model.')
elif device_map is not None:
if isinstance(device_map, dict) and ('cpu' in device_map.values() or 'disk' in device_map.values()):
raise ValueError('You are attempting to load an EETQ model with a device_map that contains a CPU or disk device. This is not supported. Please remove the CPU or disk device from the device_map.')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
dtype = torch.float16
logger.info('Overriding dtype=%s with `dtype=torch.float16` due to requirements of `eetq` to enable model loading in 8-bit. Pass your own dtype to specify the dtype of the remaining non-linear layers or pass dtype=torch.float16 to remove this warning.', dtype)
elif dtype != torch.float16:
logger.info('We suggest setting `dtype=torch.float16` for better efficiency with EETQ.')
return dtype
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
from eetq import EetqLinear
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module, EetqLinear):
if self.pre_quantized or tensor_name == 'bias':
if tensor_name == 'weight' and param_value.dtype != torch.int8:
raise ValueError('Expect quantized weights but got an unquantized weight')
return False
else:
if tensor_name == 'weight_scale':
raise ValueError('Expect unquantized weights but got a quantized weight_scale')
return True
return False
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
"""
quantizes weights into qweight and weight_scales
"""
from eetq import quantize_and_preprocess_weights
module, tensor_name = get_module_from_name(model, param_name)
new_value, weight_scale = quantize_and_preprocess_weights(param_value)
module._buffers[tensor_name] = new_value.to(target_device)
module.register_buffer('weight_scales', weight_scale.to(target_device))
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
return model
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_eetq_linear
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
model = replace_with_eetq_linear(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config, pre_quantized=self.pre_quantized)
model.config.quantization_config = self.quantization_config
def is_serializable(self, safe_serialization=None):
return True
@property
def is_trainable(self) -> bool:
return True
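Every quantizer in this file resolves a flat parameter name such as `layer.sub.weight` into its owning module via `get_module_from_name` from `quantizers_utils`. A simplified stand-alone version is sketched below; the plain-attribute walk and the `Node` stand-in are assumptions for illustration (the real helper resolves submodules on an `nn.Module` tree):

```python
class Node:
    """Tiny stand-in for a module tree; real code walks nn.Module children."""
    pass

def get_module_from_name(model, param_name):
    """Split 'layer.sub.weight' into (module at 'layer.sub', 'weight')."""
    if "." not in param_name:
        return model, param_name
    module_path, tensor_name = param_name.rsplit(".", 1)
    module = model
    for part in module_path.split("."):
        module = getattr(module, part)
    return module, tensor_name
```

The returned `(module, tensor_name)` pair is what methods like `check_quantized_param` use to inspect `module._parameters[tensor_name]` in the real quantizers.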
|
class EetqHfQuantizer(HfQuantizer):
'''
8-bit quantization from EETQ quantization method:
before loading: converts transformer layers into W8A16Linear
during loading: load 16bit weight and pass to the layer object
after: quantizes individual weights in W8A16Linear into 8bit at the first .cuda() call
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
'''
quantizes weights into qweight and weight_scales
'''
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self) -> bool:
pass
| 11
| 2
| 14
| 1
| 12
| 1
| 3
| 0.1
| 1
| 8
| 0
| 0
| 9
| 2
| 9
| 49
| 151
| 24
| 116
| 46
| 80
| 12
| 64
| 23
| 50
| 10
| 5
| 3
| 25
|
6,478
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_fbgemm_fp8.py
|
transformers.quantizers.quantizer_fbgemm_fp8.FbgemmFp8HfQuantizer
|
from .quantizers_utils import get_module_from_name
from typing import TYPE_CHECKING, Any, Optional
from .base import HfQuantizer
from ..utils import is_accelerate_available, is_fbgemm_gpu_available, is_torch_available, logging
class FbgemmFp8HfQuantizer(HfQuantizer):
"""
FP8 quantization using fbgemm kernels
"""
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['fbgemm-gpu', 'accelerate']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_torch_available():
raise ImportError('Using fbgemm fp8 quantization requires torch >= 2.1.0. Please install the latest version of torch (`pip install --upgrade torch`).')
if not is_fbgemm_gpu_available():
raise ImportError('Using fbgemm fp8 quantization requires the fbgemm-gpu library. Please install the latest version of fbgemm-gpu by following: https://pytorch.org/FBGEMM/fbgemm_gpu-development/InstallationInstructions.html#fbgemm-gpu-install-libraries')
if not is_accelerate_available('0.32.2'):
raise ImportError('Loading an FP8 quantized model requires accelerate > 0.32.1 (`pip install --upgrade accelerate`)')
if not torch.cuda.is_available():
raise RuntimeError('Using FP8 quantized models with fbgemm kernels requires a GPU')
compute_capability = torch.cuda.get_device_capability()
major, minor = compute_capability
if major < 9:
raise ValueError('FP8 quantized models is only supported on GPUs with compute capability >= 9.0 (e.g H100)')
device_map = kwargs.get('device_map')
if device_map is None:
logger.warning_once("You have loaded an FP8 model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model. To remove this warning, pass device_map = 'cuda'. ")
elif device_map is not None:
if not self.pre_quantized and isinstance(device_map, dict) and ('cpu' in device_map.values() or 'disk' in device_map.values()):
raise ValueError('You are attempting to load an FP8 model with a device_map that contains a CPU or disk device. This is not supported when the model is quantized on the fly. Please use a quantized checkpoint or remove the CPU or disk device from the device_map.')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
dtype = torch.bfloat16
logger.info('Overriding dtype=%s with `dtype=torch.bfloat16` due to requirements of `fbgemm-gpu` to enable model loading in fp8. Pass your own dtype to specify the dtype of the remaining non-linear layers, or pass dtype=torch.bfloat16 to remove this warning.', dtype)
elif dtype == torch.float16:
raise ValueError('You cannot use FP8 with dtype=torch.float16. We recommend passing dtype=torch.bfloat16')
return dtype
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
from ..integrations import FbgemmFp8Linear, FbgemmFp8Llama4TextExperts
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module, FbgemmFp8Linear):
if self.pre_quantized or tensor_name == 'bias':
if tensor_name == 'weight' and param_value.dtype != torch.float8_e4m3fn:
raise ValueError('Expect quantized weights but got an unquantized weight')
return False
else:
if tensor_name == 'weight_scale':
raise ValueError('Expect unquantized weights but got a quantized weight_scale')
return True
if isinstance(module, FbgemmFp8Llama4TextExperts):
if self.pre_quantized or tensor_name == 'bias':
return False
else:
if tensor_name == 'gate_up_proj_scale' or tensor_name == 'down_proj_scale':
raise ValueError('Expect unquantized weights but got a quantized weight_scale')
return True
return False
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
"""
Quantizes weights into weight and weight_scale
"""
from ..integrations import FbgemmFp8Llama4TextExperts
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module, FbgemmFp8Llama4TextExperts):
if tensor_name == 'gate_up_proj':
transposed_param = param_value.transpose(1, 2)
original_shape = transposed_param.shape
flattened_param = transposed_param.reshape(-1, original_shape[-1])
new_value_flat, weight_scale_flat = torch.ops.fbgemm.quantize_fp8_per_row(flattened_param)
new_value = new_value_flat.reshape(original_shape)
new_value = new_value.transpose(1, 2)
weight_scale = weight_scale_flat.reshape(original_shape[0], 1, original_shape[1])
elif tensor_name == 'down_proj':
transposed_param = param_value.transpose(1, 2)
original_shape = transposed_param.shape
flattened_param = transposed_param.reshape(-1, original_shape[-1])
new_value_flat, weight_scale_flat = torch.ops.fbgemm.quantize_fp8_per_row(flattened_param)
new_value = new_value_flat.reshape(original_shape)
new_value = new_value.transpose(1, 2)
weight_scale = weight_scale_flat.reshape(original_shape[0], original_shape[1], 1)
module._parameters[f'{tensor_name}_scale'] = torch.nn.Parameter(weight_scale.to(target_device))
else:
new_value, weight_scale = torch.ops.fbgemm.quantize_fp8_per_row(param_value)
module._parameters[f'{tensor_name}_scale'] = torch.nn.Parameter(weight_scale.view(weight_scale.shape[0], 1).to(target_device))
module._parameters[tensor_name] = torch.nn.Parameter(new_value.to(target_device))
if unexpected_keys is not None and param_name in unexpected_keys:
unexpected_keys.remove(param_name)
del param_name
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
return model
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_fbgemm_fp8_linear
tp_plan = model._tp_plan
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
config = model.config
model = replace_with_fbgemm_fp8_linear(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config, pre_quantized=self.pre_quantized, config=config, tp_plan=tp_plan)
model.config.quantization_config = self.quantization_config
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
from ..integrations import FbgemmFp8Linear, FbgemmFp8Llama4TextExperts
not_missing_keys = []
for name, module in model.named_modules():
if isinstance(module, (FbgemmFp8Linear, FbgemmFp8Llama4TextExperts)):
for missing in missing_keys:
if (name in missing or name in f'{prefix}.{missing}') and (not missing.endswith('.weight')) and (not missing.endswith('.bias')):
not_missing_keys.append(missing)
return [k for k in missing_keys if k not in not_missing_keys]
def update_tp_plan(self, config):
if 'Llama4' in config.__class__.__name__:
text_plan = {'layers.*.self_attn.q_proj.weight': 'local_colwise', 'layers.*.self_attn.q_proj.weight_scale': 'local_colwise', 'layers.*.self_attn.k_proj.weight': 'local_colwise', 'layers.*.self_attn.k_proj.weight_scale': 'local_colwise', 'layers.*.self_attn.v_proj.weight': 'local_colwise', 'layers.*.self_attn.v_proj.weight_scale': 'local_colwise', 'layers.*.self_attn.o_proj.weight': 'local_rowwise', 'layers.*.self_attn': 'gather', 'layers.*.input_layernorm.weight': 'sequence_parallel', 'layers.*.post_attention_layernorm.weight': 'sequence_parallel', 'norm.weight': 'sequence_parallel', 'layers.*.feed_forward.shared_expert.gate_proj.weight': 'local_colwise', 'layers.*.feed_forward.shared_expert.gate_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.shared_expert.up_proj.weight': 'local_colwise', 'layers.*.feed_forward.shared_expert.up_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.shared_expert.down_proj.weight': 'local_rowwise', 'layers.*.feed_forward.experts': 'local', 'layers.*.feed_forward': 'gather', 'layers.*.feed_forward.experts.*.gate_proj.weight': 'local_colwise', 'layers.*.feed_forward.experts.*.gate_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.experts.*.up_proj.weight': 'local_colwise', 'layers.*.feed_forward.experts.*.up_proj.weight_scale': 'local_colwise', 'layers.*.feed_forward.experts.*.down_proj.weight': 'local_rowwise', 'layers.*.feed_forward.experts.gate_up_proj': 'local_packed_rowwise', 'layers.*.feed_forward.experts.gate_up_proj_scale': 'local_packed_rowwise', 'layers.*.feed_forward.experts.down_proj': 'local_colwise'}
if config.get_text_config() is not None:
config.get_text_config().base_model_tp_plan = text_plan
else:
config.base_model_tp_plan = text_plan
return config
return config
def is_serializable(self, safe_serialization=None):
return True
@property
def is_trainable(self) -> bool:
return False
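The `create_quantized_param` implementation above delegates the actual row-wise quantization to `torch.ops.fbgemm.quantize_fp8_per_row`, which returns one scale per output row alongside the quantized weight. A minimal pure-Python sketch of that per-row absmax scheme (the helper names and the standalone form are illustrative, not fbgemm API):

```python
# Per-row absmax scaling, as a sketch of what quantize_fp8_per_row computes:
# each row gets one scale so that its largest value maps to the float8_e4m3fn
# maximum (448.0). Helper names here are illustrative, not part of fbgemm.
E4M3_MAX = 448.0

def quantize_per_row(matrix):
    quantized, scales = [], []
    for row in matrix:
        absmax = max(abs(v) for v in row) or 1.0   # avoid division by zero
        scale = absmax / E4M3_MAX                  # one scale per row
        quantized.append([v / scale for v in row])
        scales.append(scale)
    return quantized, scales

def dequantize_per_row(quantized, scales):
    return [[v * s for v in row] for row, s in zip(quantized, scales)]

if __name__ == "__main__":
    w = [[0.5, -1.0, 2.0], [10.0, -20.0, 5.0]]
    q, s = quantize_per_row(w)
    print(dequantize_per_row(q, s))  # recovers w up to float rounding
```

This mirrors why `create_quantized_param` stores the scales as a `(rows, 1)` parameter named `<tensor_name>_scale` next to the quantized weight.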
class_skeleton:
class FbgemmFp8HfQuantizer(HfQuantizer):
'''
FP8 quantization using fbgemm kernels
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs):
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
'''
Quantizes weights into weight and weight_scale
'''
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
pass
def update_tp_plan(self, config):
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self) -> bool:
pass
metrics: 13 | 2 | 15 | 1 | 13 | 1 | 3 | 0.06 | 1 | 9 | 1 | 0 | 10 | 2 | 10 | 50 | 169 | 25 | 137 | 50 | 101 | 8 | 73 | 28 | 59 | 9 | 5 | 4 | 30
id: 6479 | repository_name: huggingface/pytorch-pretrained-BERT | file_path: huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_gptq.py | class_name: transformers.quantizers.quantizer_gptq.GptqHfQuantizer
from .base import HfQuantizer
from ..utils import is_auto_gptq_available, is_gptqmodel_available, is_optimum_available, is_torch_available, logging
from packaging import version
from ..utils.quantization_config import GPTQConfig, QuantizationConfigMixin
import importlib
if is_torch_available():
    import torch
logger = logging.get_logger(__name__)
class GptqHfQuantizer(HfQuantizer):
"""
Quantizer of the GPTQ method - the quantizer supports calibration of the model through
the `auto_gptq` or `gptqmodel` packages. Quantization is done under the hood for users if they load a non-prequantized model.
"""
requires_calibration = False
required_packages = ['optimum', 'auto_gptq', 'gptqmodel']
optimum_quantizer = None
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
super().__init__(quantization_config, **kwargs)
if not is_optimum_available():
raise ImportError('Loading a GPTQ quantized model requires optimum (`pip install optimum`)')
from optimum.gptq import GPTQQuantizer
self.optimum_quantizer = GPTQQuantizer.from_dict(self.quantization_config.to_dict_optimum())
def validate_environment(self, *args, **kwargs):
if not is_optimum_available():
raise ImportError('Loading a GPTQ quantized model requires optimum (`pip install optimum`)')
if is_auto_gptq_available() and is_gptqmodel_available():
logger.warning('Detected gptqmodel and auto-gptq, will use gptqmodel')
gptq_supports_cpu = (is_auto_gptq_available() and version.parse(importlib.metadata.version('auto-gptq')) > version.parse('0.4.2')) or is_gptqmodel_available()
if not gptq_supports_cpu and (not torch.cuda.is_available()):
raise RuntimeError('GPU is required to quantize or run quantize model.')
elif not (is_auto_gptq_available() or is_gptqmodel_available()):
raise ImportError('Loading a GPTQ quantized model requires gptqmodel (`pip install gptqmodel`) or auto-gptq (`pip install auto-gptq`) library. ')
elif is_auto_gptq_available() and version.parse(importlib.metadata.version('auto_gptq')) < version.parse('0.4.2'):
raise ImportError('You need a version of auto_gptq >= 0.4.2 to use GPTQ: `pip install --upgrade auto-gptq` or use gptqmodel by `pip install gptqmodel>=1.4.3`.')
elif is_gptqmodel_available() and (version.parse(importlib.metadata.version('gptqmodel')) < version.parse('1.4.3') or version.parse(importlib.metadata.version('optimum')) < version.parse('1.23.99')):
raise ImportError('The gptqmodel version should be >= 1.4.3 and the optimum version should be >= 1.24.0')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
dtype = torch.float16
logger.info('Loading the model in `torch.float16`. To overwrite it, set `dtype` manually.')
elif dtype != torch.float16:
logger.info('We suggest you to set `dtype=torch.float16` for better efficiency with GPTQ.')
return dtype
def update_device_map(self, device_map):
if device_map is None:
device_map = {'': torch.device('cpu')}
if not is_gptqmodel_available() and device_map in ('cpu', {'': torch.device('cpu')}):
device_map = {'': 0}
return device_map
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
if model.__class__.main_input_name != 'input_ids':
raise RuntimeError('We can only quantize pure text model.')
if self.pre_quantized:
if version.parse(importlib.metadata.version('optimum')) <= version.parse('1.23.99'):
model = self.optimum_quantizer.convert_model(model)
else:
model = self.optimum_quantizer.convert_model(model, **kwargs)
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
if self.pre_quantized:
model = self.optimum_quantizer.post_init_model(model)
else:
if self.quantization_config.tokenizer is None:
self.quantization_config.tokenizer = model.name_or_path
self.optimum_quantizer.quantize_model(model, self.quantization_config.tokenizer)
model.config.quantization_config = GPTQConfig.from_dict(self.optimum_quantizer.to_dict())
@property
def is_trainable(self) -> bool:
return True
def is_serializable(self, safe_serialization=None):
return True
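`validate_environment` above gates on installed package versions via `version.parse(importlib.metadata.version(...))`. A stripped-down sketch of that gate, using a simplified dotted-version parser instead of `packaging.version` (so no pre-release or dev-segment handling):

```python
# Simplified version gate in the spirit of validate_environment above.
# parse_version handles only plain dotted releases; packaging.version is the
# real implementation and also understands pre-release/dev segments.
def parse_version(v):
    return tuple(int(part) for part in v.split("."))

def check_min_version(installed, minimum, package):
    if parse_version(installed) < parse_version(minimum):
        raise ImportError(
            f"{package} >= {minimum} is required, found {installed}"
        )

if __name__ == "__main__":
    check_min_version("1.4.3", "1.4.3", "gptqmodel")      # passes silently
    try:
        check_min_version("0.4.1", "0.4.2", "auto-gptq")
    except ImportError as exc:
        print(exc)  # auto-gptq >= 0.4.2 is required, found 0.4.1
```

Tuple comparison gives the lexicographic ordering the real check relies on; the string messages above echo the ImportErrors raised in `validate_environment`.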
class_skeleton:
class GptqHfQuantizer(HfQuantizer):
'''
Quantizer of the GPTQ method - the quantizer supports calibration of the model through
the `auto_gptq` or `gptqmodel` packages. Quantization is done under the hood for users if they load a non-prequantized model.
'''
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def update_device_map(self, device_map):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
@property
def is_trainable(self) -> bool:
pass
def is_serializable(self, safe_serialization=None):
pass
metrics: 10 | 1 | 9 | 1 | 8 | 0 | 3 | 0.09 | 1 | 5 | 2 | 0 | 8 | 0 | 8 | 48 | 90 | 14 | 70 | 15 | 59 | 6 | 51 | 14 | 41 | 7 | 5 | 2 | 24
id: 6480 | repository_name: huggingface/pytorch-pretrained-BERT | file_path: huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_higgs.py | class_name: transformers.quantizers.quantizer_higgs.HiggsHfQuantizer
from typing import TYPE_CHECKING, Any, Optional
from ..utils import is_accelerate_available, is_flute_available, is_hadamard_available, is_torch_available, logging
from ..utils.quantization_config import QuantizationConfigMixin
from .quantizers_utils import get_module_from_name
from .base import HfQuantizer
from ..utils.logging import tqdm
if is_torch_available():
    import torch
logger = logging.get_logger(__name__)
class HiggsHfQuantizer(HfQuantizer):
"""
Quantizer of the HIGGS method. Enables the loading of prequantized models and in-flight quantization of full-precision models.
"""
requires_calibration = False
requires_parameters_quantization = True
required_packages = ['flute-kernel', 'fast_hadamard_transform']
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, device_map, **kwargs):
if not torch.cuda.is_available():
raise NotImplementedError('HIGGS quantization is only supported on GPU. Please use a different quantizer.')
if not is_accelerate_available():
raise ImportError('Using `higgs` quantization requires Accelerate: `pip install accelerate`')
if not is_flute_available():
raise ImportError('Using `higgs` quantization requires FLUTE: `pip install flute-kernel>=0.3.0`')
if not is_hadamard_available():
raise ImportError('Using `higgs` quantization requires fast_hadamard_transform: `pip install fast_hadamard_transform`')
if device_map is None:
raise ValueError("You are attempting to load a HIGGS model without setting device_map. Please set device_map comprised of 'cuda' devices.")
elif isinstance(device_map, dict) and ('cpu' in device_map.values() or 'disk' in device_map.values()):
raise ValueError('You are attempting to load a HIGGS model with a device_map that contains a CPU or disk device. This is not supported. Please remove the CPU or disk device from the device_map.')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
logger.info('`dtype` is None. Setting `dtype=torch.float16` for FLUTE compatibility.')
dtype = torch.float16
elif dtype != torch.float16 and dtype != torch.bfloat16:
raise ValueError(f'Invalid `dtype` {dtype}. HIGGS quantization only supports `dtype=torch.float16` or `dtype=torch.bfloat16`.')
return dtype
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
"""
Quantizes weights into weight and weight_scale
"""
from ..integrations import quantize_with_higgs
flute_dict = quantize_with_higgs(param_value.to(target_device), self.quantization_config.bits, self.quantization_config.p, self.quantization_config.group_size, self.quantization_config.hadamard_size)
del param_value
module, _ = get_module_from_name(model, param_name)
module_name = '.'.join(param_name.split('.')[:-1])
for key, value in flute_dict.items():
if key in module._parameters:
module._parameters[key] = torch.nn.Parameter(value, requires_grad=False)
elif key in module._buffers:
module._buffers[key] = torch.nn.Buffer(value)
elif key == 'tune_metadata':
module.tune_metadata = value
self.quantization_config.tune_metadata[module_name] = value.to_dict()
else:
raise ValueError(f'Unexpected key {key} in module {module}')
if unexpected_keys is not None and param_name in unexpected_keys:
unexpected_keys.remove(param_name)
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_higgs_linear
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
replace_with_higgs_linear(model, quantization_config=self.quantization_config, modules_to_not_convert=self.modules_to_not_convert)
model.config.quantization_config = self.quantization_config
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
from flute.tune import TuneMetaData, maybe_tune_and_repack
from flute.utils import make_workspace_streamk
from ..integrations import HiggsLinear
flute_workspaces = {}
flute_modules = {name: module for name, module in model.named_modules() if isinstance(module, HiggsLinear)}
for name, module in tqdm(flute_modules.items(), desc='Repacking HIGGS modules', leave=False):
if module.weight.device not in flute_workspaces:
flute_workspaces[module.weight.device] = make_workspace_streamk(device=module.weight.device)
module.workspace = flute_workspaces[module.weight.device]
module.tune_metadata = TuneMetaData.from_dict(self.quantization_config.tune_metadata[name])
module.weight.data, module.tune_metadata = maybe_tune_and_repack(weight=module.weight.data, scales=module.scales.data, metadata=module.tune_metadata)
self.quantization_config.tune_metadata[name] = module.tune_metadata.to_dict()
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
from ..integrations import HiggsLinear
higgs_names = {name for name, module in model.named_modules() if isinstance(module, HiggsLinear)}
def should_update(key: str) -> bool:
if key.endswith('.weight') or key.endswith('.bias'):
return False
full_key = f'{prefix}.{key}'
return any((name in key or name in full_key for name in higgs_names))
return [key for key in missing_keys if not should_update(key)]
@property
def is_trainable(self) -> bool:
return False
def is_serializable(self, safe_serialization=None):
return True
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
from ..integrations import HiggsLinear
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module, HiggsLinear) and tensor_name == 'weight' and (param_value.dtype != torch.int16):
return True
else:
return False
def _dequantize(self, model):
from ..integrations import dequantize_higgs
model = dequantize_higgs(model)
return model
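The `update_missing_keys` logic above can be isolated into a small pure-Python filter: keys that fall under a HIGGS-quantized module are dropped from `missing_keys` (those tensors are created at quantization time), while plain `.weight`/`.bias` keys are always kept. A dependency-free sketch (the standalone function name is illustrative):

```python
# Dependency-free restatement of HiggsHfQuantizer.update_missing_keys:
# drop keys owned by quantized modules, but never drop .weight/.bias keys.
def filter_missing_keys(missing_keys, higgs_names, prefix):
    def should_keep(key):
        if key.endswith(".weight") or key.endswith(".bias"):
            return True
        full_key = f"{prefix}.{key}"
        return not any(name in key or name in full_key for name in higgs_names)
    return [k for k in missing_keys if should_keep(k)]

if __name__ == "__main__":
    missing = ["layers.0.mlp.weight", "layers.0.mlp.scales", "lm_head.weight"]
    print(filter_missing_keys(missing, {"layers.0.mlp"}, "model"))
    # ['layers.0.mlp.weight', 'lm_head.weight']
```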
class_skeleton:
class HiggsHfQuantizer(HfQuantizer):
'''
Quantizer of the HIGGS method. Enables the loading of prequantized models and in-flight quantization of full-precision models.
'''
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
pass
def validate_environment(self, device_map, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: Optional[list[str]]=None):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
pass
def should_update(key: str) -> bool:
pass
@property
def is_trainable(self) -> bool:
pass
def is_serializable(self, safe_serialization=None):
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
pass
def _dequantize(self, model):
pass
metrics: 14 | 1 | 15 | 2 | 13 | 1 | 3 | 0.08 | 1 | 10 | 2 | 0 | 11 | 1 | 11 | 51 | 186 | 29 | 146 | 54 | 107 | 11 | 79 | 34 | 60 | 7 | 5 | 4 | 32
id: 6481 | repository_name: huggingface/pytorch-pretrained-BERT | file_path: huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_hqq.py | class_name: transformers.quantizers.quantizer_hqq.HqqHfQuantizer
from .base import HfQuantizer
from ..integrations import prepare_for_hqq_linear
from ..utils import is_accelerate_available, is_hqq_available, is_torch_available, logging
from typing import TYPE_CHECKING, Any
from .quantizers_utils import get_module_from_name
if is_torch_available():
    import torch
if is_accelerate_available():
    from accelerate.hooks import remove_hook_from_module
logger = logging.get_logger(__name__)

def find_parent(model, name):
    """Return the parent module of the submodule addressed by dotted `name`."""
    module_tree = name.split('.')[:-1]
    parent = model
    for m in module_tree:
        parent = parent._modules[m]
    return parent
class HqqHfQuantizer(HfQuantizer):
"""
HQQ quantizer base HF class.
nn.Linear modules are first tagged with quant_config in _process_model_before_weight_loading().
The actual quantization and offloading to the GPU is done in check_quantized_param().
"""
use_keep_in_fp32_modules = False
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['hqq']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
self.dtype = None
self.using_multi_gpu = False
def validate_environment(self, *args, **kwargs):
if not is_hqq_available():
raise ImportError('A valid HQQ version (>=0.2.1) is not available. Please follow the instructions to install it: `https://github.com/mobiusml/hqq/`.')
if self.dtype is None:
if 'dtype' in kwargs:
self.dtype = kwargs['dtype']
else:
self.dtype = torch.float32
logger.info('Setting dtype to torch.float32 as the default value since it was not specified.')
device_map = kwargs.get('device_map')
if isinstance(device_map, dict):
if 'cpu' in device_map.values() or 'disk' in device_map.values():
raise ValueError('You are attempting to use an HQQ model with a device_map that contains a CPU or disk device. This is not supported. Please remove the CPU or disk device from the device_map.')
else:
self.using_multi_gpu = len(set(device_map.values())) > 1
def update_missing_keys(self, model: 'PreTrainedModel', missing_keys: list[str], prefix: str, **kwargs) -> list[str]:
if self.pre_quantized:
return [key for key in missing_keys if 'weight' not in key]
else:
return missing_keys
def update_expected_keys(self, model: 'PreTrainedModel', expected_keys: list[str], loaded_keys: list[str]) -> list[str]:
if not self.pre_quantized:
return expected_keys
def _find_hqq_quantizable_layers(model, layers):
for name, module in model.named_children():
if isinstance(module, torch.nn.Linear):
layers.add(module.name)
_find_hqq_quantizable_layers(module, layers)
new_keys = set(expected_keys)
if is_hqq_available():
from hqq.core.quantize import HQQLinear
for name, module in model.named_modules():
module.name = name
_valid_modules = set()
_find_hqq_quantizable_layers(model, _valid_modules)
_skipped_modules = set()
for _module in _valid_modules:
for _skip_module in model.config.quantization_config['skip_modules']:
if _skip_module in _module:
_skipped_modules.add(_module)
_valid_modules -= _skipped_modules
_ref_keys = HQQLinear(linear_layer=None, quant_config=None, compute_dtype=torch.float16, device='cpu', del_orig=False).state_dict_keys() - {'bias'}
_rm_keys = set()
for key in new_keys:
if any((_module in key for _module in _valid_modules)):
_rm_keys.add(key)
new_keys -= _rm_keys
for _module in _valid_modules:
if _module + '.weight' in loaded_keys:
new_keys.add(_module + '.weight')
else:
new_keys.update({_module + '.' + _ref_key for _ref_key in _ref_keys})
if _module + '.bias' in loaded_keys:
new_keys.add(_module + '.bias')
return list(new_keys)
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
if is_hqq_available():
from hqq.core.quantize import HQQLinear
module, tensor_name = get_module_from_name(model, param_name)
if self.pre_quantized:
return isinstance(module, (torch.nn.Linear, HQQLinear)) and tensor_name != 'weight'
else:
return isinstance(module, torch.nn.Linear) and tensor_name == 'weight' or (isinstance(module, HQQLinear) and tensor_name == 'bias')
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: list[str]):
"""
Each nn.Linear layer is processed here.
We first check if the corresponding module state_dict contains already HQQ quantized parameters.
If not, we create a temp linear layer with the module state_dict params and use it for quantization
"""
if is_hqq_available():
from hqq.core.quantize import HQQLinear
@property
def weight(_self: HQQLinear):
return torch.empty(0, dtype=_self.compute_dtype, device=_self.device)
HQQLinear.weight = weight
module, tensor_name = get_module_from_name(model, param_name)
layer_name = '.'.join(param_name.split('.')[:-1])
parent_module = find_parent(model, layer_name)
node = layer_name.split('.')[-1]
if tensor_name == 'bias':
return
module_state_dict = {}
for k, v in state_dict.items():
if layer_name + '.' in k:
module_state_dict[k.split('.')[-1]] = v
if unexpected_keys is not None and k in unexpected_keys:
unexpected_keys.remove(k)
if self.pre_quantized:
if isinstance(module, HQQLinear):
return
else:
hqq_layer = HQQLinear(linear_layer=None, quant_config=None, compute_dtype=self.dtype, device=target_device, del_orig=False)
hqq_layer.load_state_dict(module_state_dict)
if hqq_layer.bias is not None and isinstance(hqq_layer.bias, torch.Tensor):
hqq_layer.bias = torch.nn.Parameter(hqq_layer.bias)
if self.using_multi_gpu:
hqq_layer = self._patch_layer_for_multigpu(hqq_layer)
setattr(parent_module, node, hqq_layer)
del module.__dict__, module
torch.cuda.empty_cache()
return
for key, tensor in module_state_dict.items():
setattr(module, key, torch.nn.Parameter(tensor))
quant_config = model.config.quantization_config['quant_config']
skip_modules = model.config.quantization_config['skip_modules']
module_tag = '.'.join(module.name.split('.')[-2:])
module_quant_config = None
if 'weight_quant_params' in quant_config:
module_quant_config = quant_config
elif module_tag in quant_config:
module_quant_config = quant_config[module_tag]
for skip_module in skip_modules:
if skip_module in module.name:
module_quant_config = None
break
if module_quant_config is not None:
hqq_layer = HQQLinear(module, quant_config=module_quant_config, compute_dtype=self.dtype, device=target_device, del_orig=True)
if hqq_layer.bias is not None and isinstance(hqq_layer.bias, torch.Tensor):
hqq_layer.bias = torch.nn.Parameter(hqq_layer.bias)
if self.using_multi_gpu:
hqq_layer = self._patch_layer_for_multigpu(hqq_layer)
setattr(parent_module, node, hqq_layer)
else:
module = module.to(dtype=self.dtype, device=target_device)
setattr(parent_module, node, module)
torch.cuda.empty_cache()
def _patch_layer_for_multigpu(self, hqq_layer):
hqq_layer = remove_hook_from_module(hqq_layer)
def forward_with_device(self, x):
out = torch.matmul(x.to(self.device), self.dequantize().t())
if self.bias is not None:
out += self.bias
return out
hqq_layer.forward = lambda x: forward_with_device(hqq_layer, x)
return hqq_layer
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
model = prepare_for_hqq_linear(model, quantization_config=self.quantization_config)
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
model.is_hqq_quantized = True
model.is_hqq_serializable = self.is_serializable()
return model
def is_serializable(self, safe_serialization=None):
return True
@property
def is_trainable(self) -> bool:
return True
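`create_quantized_param` above resolves modules by dotted parameter names through `get_module_from_name` and `find_parent`. The traversal is just attribute lookup over the module tree; a sketch with plain objects standing in for `nn.Module` (illustrative, not the real transformers helpers):

```python
# Dotted-name traversal as used by get_module_from_name/find_parent, with
# plain objects standing in for nn.Module (illustrative, not the real API).
class Node:
    """Stand-in for a module that only holds attributes."""

def get_module_from_name(model, param_name):
    parts = param_name.split(".")
    module = model
    for part in parts[:-1]:
        module = getattr(module, part)
    return module, parts[-1]             # (owning module, tensor name)

def find_parent(model, layer_name):
    parent, _ = get_module_from_name(model, layer_name)
    return parent                        # module one level above layer_name

if __name__ == "__main__":
    root = Node()
    root.block = Node()
    root.block.linear = Node()
    root.block.linear.weight = "W"
    module, tensor_name = get_module_from_name(root, "block.linear.weight")
    print(tensor_name)                                      # weight
    print(find_parent(root, "block.linear") is root.block)  # True
```

In the quantizer itself the resolved parent is what `setattr(parent_module, node, hqq_layer)` mutates to swap a plain `nn.Linear` for its `HQQLinear` replacement.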
class_skeleton:
class HqqHfQuantizer(HfQuantizer):
'''
HQQ quantizer base HF class.
nn.Linear modules are first tagged with quant_config in _process_model_before_weight_loading().
The actual quantization and offloading to the GPU is done in check_quantized_param().
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_missing_keys(self, model: 'PreTrainedModel', missing_keys: list[str], prefix: str, **kwargs) -> list[str]:
pass
def update_expected_keys(self, model: 'PreTrainedModel', expected_keys: list[str], loaded_keys: list[str]) -> list[str]:
pass
def _find_hqq_quantizable_layers(model, layers):
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: list[str]):
'''
Each nn.Linear layer is processed here.
We first check if the corresponding module state_dict contains already HQQ quantized parameters.
If not, we create a temp linear layer with the module state_dict params and use it for quantization
'''
pass
@property
def weight(_self: HQQLinear):
pass
def _patch_layer_for_multigpu(self, hqq_layer):
pass
def forward_with_device(self, x):
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
def is_serializable(self, safe_serialization=None):
pass
@property
def is_trainable(self) -> bool:
pass
metrics: 17 | 2 | 18 | 2 | 14 | 1 | 4 | 0.14 | 1 | 11 | 0 | 0 | 11 | 2 | 11 | 51 | 252 | 44 | 182 | 67 | 139 | 26 | 124 | 41 | 107 | 13 | 5 | 3 | 47
id: 6482 | repository_name: huggingface/pytorch-pretrained-BERT | file_path: huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_quanto.py | class_name: transformers.quantizers.quantizer_quanto.QuantoHfQuantizer
from .base import HfQuantizer
from ..utils import is_accelerate_available, is_optimum_quanto_available, is_torch_available, logging
from packaging import version
from typing import TYPE_CHECKING, Any, Optional, Union
import importlib
from ..utils.quantization_config import QuantoConfig
from .quantizers_utils import get_module_from_name
if is_torch_available():
    import torch
logger = logging.get_logger(__name__)
class QuantoHfQuantizer(HfQuantizer):
"""
Quantizer for the quanto library
"""
required_packages = ['quanto', 'accelerate']
requires_parameters_quantization = True
requires_calibration = False
def __init__(self, quantization_config: QuantoConfig, **kwargs):
super().__init__(quantization_config, **kwargs)
self.post_init()
def post_init(self):
"""
Safety checker
"""
if self.quantization_config.activations is not None and (not self.pre_quantized):
raise ValueError("We don't support quantizing the activations with the transformers library. Use the quanto library for more complex use cases such as activations quantization, calibration and quantization-aware training.")
def validate_environment(self, *args, **kwargs):
if not is_optimum_quanto_available():
raise ImportError('Loading an optimum-quanto quantized model requires optimum-quanto library (`pip install optimum-quanto`)')
if not is_accelerate_available():
raise ImportError('Loading an optimum-quanto quantized model requires accelerate library (`pip install accelerate`)')
def update_device_map(self, device_map):
if device_map is None:
device_map = {'': 'cpu'}
logger.info("The device_map was not initialized. Setting device_map to {'':'cpu'}. If you want to use the model for inference, please set device_map='auto'")
return device_map
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
logger.info('You did not specify `dtype` in `from_pretrained`. Setting it to `torch.float32`.')
dtype = torch.float32
return dtype
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
if is_optimum_quanto_available():
from optimum.quanto import QModuleMixin
not_missing_keys = []
for name, module in model.named_modules():
if isinstance(module, QModuleMixin):
for missing in missing_keys:
if (name in missing or name in f'{prefix}.{missing}') and (not missing.endswith('.weight')) and (not missing.endswith('.bias')):
not_missing_keys.append(missing)
return [k for k in missing_keys if k not in not_missing_keys]
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
"""
Check if a parameter needs to be quantized.
"""
if is_optimum_quanto_available():
from optimum.quanto import QModuleMixin
device_map = kwargs.get('device_map')
param_device = kwargs.get('param_device')
if device_map is not None and param_device is not None:
device_map_values = set(device_map.values())
if param_device == 'cpu' and len(device_map_values) > 1:
if not (device_map_values == {'cpu'} or device_map_values == {'cpu', 'disk'}):
return False
module, tensor_name = get_module_from_name(model, param_name)
if isinstance(module, QModuleMixin) and 'weight' in tensor_name:
return not module.frozen
else:
return False
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
max_memory = {key: val * 0.9 for key, val in max_memory.items()}
return max_memory
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', *args, **kwargs):
"""
Create the quantized parameter by calling .freeze() after setting it to the module.
"""
from accelerate.utils import set_module_tensor_to_device
set_module_tensor_to_device(model, param_name, target_device, param_value)
module, _ = get_module_from_name(model, param_name)
module.freeze()
module.weight.requires_grad = False
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
if version.parse(importlib.metadata.version('accelerate')) > version.parse('0.27.0'):
from accelerate.utils import CustomDtype
mapping = {'int8': torch.int8, 'float8': CustomDtype.FP8, 'int4': CustomDtype.INT4, 'int2': CustomDtype.INT2}
target_dtype = mapping[self.quantization_config.weights]
return target_dtype
else:
raise ValueError("You are using `device_map='auto'` on an optimum-quanto quantized model. To automatically compute the appropriate device map, you should upgrade your `accelerate` library,`pip install --upgrade accelerate` or install it from source.")
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
from ..integrations import replace_with_quanto_layers
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
model, _ = replace_with_quanto_layers(model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config)
model.config.quantization_config = self.quantization_config
def _process_model_after_weight_loading(self, model, **kwargs):
return model
@property
def is_trainable(self) -> bool:
return True
def is_serializable(self, safe_serialization=None):
return False
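The `adjust_max_memory` hook above simply leaves ~10% headroom on every device before accelerate computes a device map. A stdlib-only sketch of that scaling (the device names and byte budgets below are made-up illustrations, not real model sizes):

```python
def adjust_max_memory(max_memory):
    # Keep ~10% of each device budget free for quantization overhead.
    return {key: val * 0.9 for key, val in max_memory.items()}

# Hypothetical per-device budgets, in bytes.
budgets = {"cuda:0": 1_000, "cpu": 2_000}
print(adjust_max_memory(budgets))  # {'cuda:0': 900.0, 'cpu': 1800.0}
```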
|
class QuantoHfQuantizer(HfQuantizer):
'''
Quantizer for the quanto library
'''
def __init__(self, quantization_config: QuantoConfig, **kwargs):
pass
def post_init(self):
'''
Safety checker
'''
pass
def validate_environment(self, *args, **kwargs):
pass
def update_device_map(self, device_map):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def update_missing_keys(self, model, missing_keys: list[str], prefix: str) -> list[str]:
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
'''
Check if a parameter needs to be quantized.
'''
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', *args, **kwargs):
'''
Create the quantized parameter by calling .freeze() after setting it to the module.
'''
pass
def adjust_target_dtype(self, target_dtype: 'torch.dtype') -> 'torch.dtype':
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def _process_model_after_weight_loading(self, model, **kwargs):
pass
@property
def is_trainable(self) -> bool:
pass
def is_serializable(self, safe_serialization=None):
pass
| 16 | 4 | 10 | 1 | 9 | 1 | 2 | 0.13 | 1 | 11 | 1 | 0 | 14 | 1 | 14 | 54 | 168 | 24 | 128 | 52 | 90 | 16 | 81 | 34 | 61 | 6 | 5 | 4 | 32 |
| 6,483 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_torchao.py | transformers.quantizers.quantizer_torchao.TorchAoHfQuantizer |
from .quantizers_utils import get_module_from_name
from .base import HfQuantizer
from ..utils.quantization_config import TorchAoConfig
import importlib
from packaging import version
from ..utils import is_torch_available, is_torchao_available, logging
from typing import TYPE_CHECKING, Optional, Union
from typing import Any
import types
class TorchAoHfQuantizer(HfQuantizer):
"""
Quantizer for torchao: https://github.com/pytorch/ao/
"""
requires_parameters_quantization = True
requires_calibration = False
required_packages = ['torchao']
def __init__(self, quantization_config, **kwargs):
super().__init__(quantization_config, **kwargs)
def validate_environment(self, *args, **kwargs):
if not is_torchao_available():
raise ImportError('Loading an torchao quantized model requires torchao library (`pip install torchao`)')
self.offload = False
device_map = kwargs.get('device_map')
if isinstance(device_map, dict):
if ('disk' in device_map.values() or 'cpu' in device_map.values()) and len(device_map) > 1:
self.offload = True
if self.pre_quantized and 'disk' in device_map.values():
raise ValueError('You are attempting to perform disk offload with a pre-quantized torchao model. This is not supported yet. Please remove the disk device from the device_map.')
if self.pre_quantized:
weights_only = kwargs.get('weights_only')
if weights_only:
torch_version = version.parse(importlib.metadata.version('torch'))
if torch_version < version.parse('2.5.0'):
raise RuntimeError(f"In order to use a torchao pre-quantized model, you need to have torch>=2.5.0. However, the current version is {torch_version}. You can also set `weights_only=False` in `from_pretrained` if you don't want to update torch")
def update_dtype(self, dtype):
if self.quantization_config.quant_type == 'int4_weight_only':
if dtype is not None and dtype != torch.bfloat16:
logger.warning_once(f'Setting dtype to {dtype} for int4_weight_only quantization, but only bfloat16 is supported right now. Please set the dtype to bfloat16.')
if dtype is None:
logger.warning_once('Setting dtype to torch.bfloat16 for int4_weight_only quantization since only bfloat16 is supported right now. Please set dtype=torch.bfloat16 to remove this warning.')
dtype = torch.bfloat16
if self.quantization_config.quant_type == 'int8_dynamic_activation_int8_weight':
if dtype is None:
logger.info('Setting dtype to torch.float32 for int8_dynamic_activation_int8_weight quantization as no dtype was specified in from_pretrained')
dtype = torch.float32
return dtype
def adjust_target_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if version.parse(importlib.metadata.version('accelerate')) > version.parse('0.19.0'):
from accelerate.utils import CustomDtype
if self.quantization_config._get_ao_version() > version.Version('0.9.0'):
from torchao.core.config import AOBaseConfig
quant_type = self.quantization_config.quant_type
if isinstance(quant_type, AOBaseConfig):
config_name = quant_type.__class__.__name__
size_digit = fuzzy_match_size(config_name)
if size_digit == '4':
return CustomDtype.INT4
else:
return torch.int8
map_to_target_dtype = {'int4_weight_only': CustomDtype.INT4, 'int8_weight_only': torch.int8, 'int8_dynamic_activation_int8_weight': torch.int8, 'autoquant': None}
return map_to_target_dtype[self.quantization_config.quant_type]
else:
raise ValueError("You are using `device_map='auto'` on a torchao quantized model. To automatically compute the appropriate device map, you should upgrade your `accelerate` library with `pip install --upgrade accelerate`")
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
max_memory = {key: val * 0.9 for key, val in max_memory.items()}
return max_memory
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
if self.quantization_config.include_input_output_embeddings:
input_emb = model.get_input_embeddings()
input_emb_names = [name for name, module in model.named_modules() if id(module) == id(input_emb)]
output_emb = model.get_output_embeddings()
output_emb_names = [name for name, module in model.named_modules() if id(module) == id(output_emb)]
self.modules_to_not_convert = [x for x in self.modules_to_not_convert if x not in input_emb_names + output_emb_names]
return
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
if self.quantization_config.quant_type == 'autoquant':
return False
param_device = kwargs.pop('param_device', None)
if any((key + '.' in param_name or key == param_name for key in self.modules_to_not_convert)):
return False
elif param_device == 'cpu' and self.offload:
return False
else:
module, tensor_name = get_module_from_name(model, param_name)
_QUANTIZABLE = [torch.nn.Linear]
if self.quantization_config.include_input_output_embeddings:
_QUANTIZABLE.append(torch.nn.Embedding)
return isinstance(module, tuple(_QUANTIZABLE)) and tensor_name == 'weight'
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: list[str]):
"""
Each nn.Linear layer that needs to be quantized is processed here.
First, we set the value of the weight tensor, then we move it to the target device. Finally, we quantize the module.
"""
if self.quantization_config.quant_type == 'autoquant':
return
from torchao.quantization import quantize_
module, tensor_name = get_module_from_name(model, param_name)
if self.pre_quantized:
module._parameters[tensor_name] = torch.nn.Parameter(param_value.to(device=target_device), requires_grad=param_value.requires_grad)
if isinstance(module, nn.Linear):
module.extra_repr = types.MethodType(_linear_extra_repr, module)
else:
assert isinstance(self.quantization_config, TorchAoConfig)
module._parameters[tensor_name] = torch.nn.Parameter(param_value, requires_grad=param_value.requires_grad).to(device=target_device)
input_embed = model.get_input_embeddings()
if self.quantization_config.untie_embedding_weights and id(module) == id(input_embed):
model.tie_weights()
setattr(model.config.get_text_config(decoder=True), 'tie_word_embeddings', False)
if self.quantization_config._get_ao_version() >= version.Version('0.12.0'):
from torchao.quantization import ModuleFqnToConfig
config = self.quantization_config.get_apply_tensor_subclass()
if isinstance(config, ModuleFqnToConfig):
module_fqn, _ = param_name.rsplit('.', 1)
c = None
if module_fqn in config.module_fqn_to_config:
c = config.module_fqn_to_config[module_fqn]
else:
c = config.module_fqn_to_config.get('_default', None)
if c is not None:
quantize_(module, c, filter_fn=lambda x, fqn: True)
return
quantize_(module, self.quantization_config.get_apply_tensor_subclass())
def _process_model_after_weight_loading(self, model, **kwargs):
"""No process required for torchao quantized model"""
if self.quantization_config.quant_type == 'autoquant':
from torchao import autoquant
from torchao.quantization import ALL_AUTOQUANT_CLASS_LIST
model = torch.compile(model, mode='max-autotune')
model = autoquant(model, qtensor_class_list=ALL_AUTOQUANT_CLASS_LIST, set_inductor_config=False, **self.quantization_config.quant_type_kwargs)
return model
return
def is_serializable(self, safe_serialization=None) -> bool:
if safe_serialization:
logger.warning('torchao quantized model does not support safe serialization, please set `safe_serialization` to False')
return False
_is_torchao_serializable = version.parse(importlib.metadata.version('huggingface_hub')) >= version.parse('0.25.0')
if not _is_torchao_serializable:
logger.warning('torchao quantized model is only serializable with huggingface_hub >= 0.25.0')
if self.offload and self.quantization_config.modules_to_not_convert is None:
logger.warning("The model contains offloaded modules and these modules are not quantized. We don't recommend saving the model as we won't be able to reload them. If you want to specify modules to not quantize, please specify modules_to_not_convert in the quantization_config.")
return False
return _is_torchao_serializable
def get_accelerator_warm_up_factor(self):
"""
This factor is used in caching_allocator_warmup to determine how many bytes to pre-allocate for accelerator warmup.
- A factor of 2 means we pre-allocate the full memory footprint of the model.
- A factor of 4 means we pre-allocate half of that, and so on
However, when using TorchAO, calculating memory usage with param.numel() * param.element_size() doesn't give the correct size for quantized weights (like int4 or int8).
That's because TorchAO internally represents quantized tensors using subtensors and metadata, and the reported element_size() still corresponds to the dtype,
not the actual bit-width of the quantized data.
To correct for this:
- Use a division factor of 8 for int4 weights
- Use a division factor of 4 for int8 weights
"""
if self.quantization_config._get_ao_version() > version.Version('0.9.0'):
from torchao.core.config import AOBaseConfig
quant_type = self.quantization_config.quant_type
if isinstance(quant_type, AOBaseConfig):
config_name = quant_type.__class__.__name__
size_digit = fuzzy_match_size(config_name)
if size_digit == '4':
return 8
else:
return 4
map_to_target_dtype = {'int4_weight_only': 8, 'int8_weight_only': 4, 'int8_dynamic_activation_int8_weight': 4, 'autoquant': 4}
return map_to_target_dtype[self.quantization_config.quant_type]
@property
def is_trainable(self) -> bool:
supported_quant_types_for_training = ['int8_weight_only', 'int8_dynamic_activation_int8_weight']
return self.quantization_config.quant_type in supported_quant_types_for_training
@property
def is_compileable(self) -> bool:
return True
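The warm-up factor logic in `get_accelerator_warm_up_factor` can be sketched in plain Python for the legacy string quant types; the mapping is copied from the method above, while the byte arithmetic at the end is an illustrative assumption about how `caching_allocator_warmup` uses the factor (a factor of 2 would mean the full footprint):

```python
def warm_up_factor(quant_type: str) -> int:
    # Division factors from get_accelerator_warm_up_factor:
    # int4 weights get 8, int8-style types get 4.
    mapping = {
        "int4_weight_only": 8,
        "int8_weight_only": 4,
        "int8_dynamic_activation_int8_weight": 4,
        "autoquant": 4,
    }
    return mapping[quant_type]

# Illustrative: warmup bytes for a hypothetical 1M-parameter fp16 model,
# assuming allocation scales as footprint * 2 / factor.
numel, element_size = 1_000_000, 2
footprint = numel * element_size
print(footprint * 2 // warm_up_factor("int4_weight_only"))  # 500000
```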
|
class TorchAoHfQuantizer(HfQuantizer):
'''
Quantizer for torchao: https://github.com/pytorch/ao/
'''
def __init__(self, quantization_config, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype):
pass
def adjust_target_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]:
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
pass
def check_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, state_dict: dict[str, Any], **kwargs) -> bool:
pass
def create_quantized_param(self, model: 'PreTrainedModel', param_value: 'torch.Tensor', param_name: str, target_device: 'torch.device', state_dict: dict[str, Any], unexpected_keys: list[str]):
'''
Each nn.Linear layer that needs to be quantized is processed here.
First, we set the value of the weight tensor, then we move it to the target device. Finally, we quantize the module.
'''
pass
def _process_model_after_weight_loading(self, model, **kwargs):
'''No process required for torchao quantized model'''
pass
def is_serializable(self, safe_serialization=None) -> bool:
pass
def get_accelerator_warm_up_factor(self):
'''
This factor is used in caching_allocator_warmup to determine how many bytes to pre-allocate for accelerator warmup.
- A factor of 2 means we pre-allocate the full memory footprint of the model.
- A factor of 4 means we pre-allocate half of that, and so on
However, when using TorchAO, calculating memory usage with param.numel() * param.element_size() doesn't give the correct size for quantized weights (like int4 or int8).
That's because TorchAO internally represents quantized tensors using subtensors and metadata, and the reported element_size() still corresponds to the dtype,
not the actual bit-width of the quantized data.
To correct for this:
- Use a division factor of 8 for int4 weights
- Use a division factor of 4 for int8 weights
'''
pass
@property
def is_trainable(self) -> bool:
pass
@property
def is_compileable(self) -> bool:
pass
| 16 | 4 | 13 | 1 | 12 | 1 | 3 | 0.1 | 1 | 10 | 0 | 0 | 11 | 2 | 11 | 51 | 165 | 19 | 133 | 45 | 102 | 13 | 81 | 29 | 66 | 8 | 5 | 3 | 32 |
| 6,484 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/quantizers/quantizer_vptq.py | transformers.quantizers.quantizer_vptq.VptqHfQuantizer |
from ..utils import is_accelerate_available, is_torch_available, is_vptq_available, logging
from typing import TYPE_CHECKING, Optional
from .base import HfQuantizer
from ..utils.quantization_config import QuantizationConfigMixin
class VptqHfQuantizer(HfQuantizer):
"""
Quantizer of the VPTQ method. Enables the loading of prequantized models.
"""
requires_calibration = True
required_packages = ['vptq']
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
super().__init__(quantization_config, **kwargs)
self.quantization_config = quantization_config
def validate_environment(self, *args, **kwargs):
if not is_accelerate_available():
raise ImportError('Using `vptq` quantization requires Accelerate: `pip install accelerate`')
if not is_vptq_available():
raise ImportError('Using `vptq` quantization requires VPTQ>=0.0.4: `pip install -U vptq`')
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
if dtype is None:
if torch.cuda.is_available():
dtype = torch.float16
logger.info('CUDA available. Assuming VPTQ inference on GPU and loading the model in `torch.float16`. To overwrite it, set `dtype` manually.')
else:
import vptq
device_availability = getattr(vptq, 'device_availability', lambda device: False)
if device_availability('cpu') is True:
raise RuntimeError('No GPU found. Please wait for the next release of VPTQ to use CPU inference')
dtype = torch.float32
logger.info('No GPU found. Assuming VPTQ inference on CPU and loading the model in `torch.float32`.')
return dtype
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
"""
We don't have a param like modules_to_not_convert to indicate which layers should not be quantized,
because `quantization_config` already includes the layers that should be quantized
"""
from ..integrations import replace_with_vptq_linear
self.modules_to_not_convert = self.get_modules_to_not_convert(model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules)
replace_with_vptq_linear(model, quantization_config=self.quantization_config, modules_to_not_convert=self.modules_to_not_convert)
model.config.quantization_config = self.quantization_config
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
return model
@property
def is_trainable(self) -> bool:
return False
def is_serializable(self, safe_serialization=None):
return True
|
class VptqHfQuantizer(HfQuantizer):
'''
Quantizer of the VPTQ method. Enables the loading of prequantized models.
'''
def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs):
pass
def validate_environment(self, *args, **kwargs):
pass
def update_dtype(self, dtype: 'torch.dtype') -> 'torch.dtype':
pass
def _process_model_before_weight_loading(self, model: 'PreTrainedModel', keep_in_fp32_modules: Optional[list[str]]=None, **kwargs):
'''
We don't have a param like modules_to_not_convert to indicate which layers should not be quantized,
because `quantization_config` already includes the layers that should be quantized
'''
pass
def _process_model_after_weight_loading(self, model: 'PreTrainedModel', **kwargs):
pass
@property
def is_trainable(self) -> bool:
pass
def is_serializable(self, safe_serialization=None):
pass
| 9 | 2 | 7 | 1 | 6 | 1 | 2 | 0.15 | 1 | 4 | 1 | 0 | 7 | 1 | 7 | 47 | 67 | 12 | 48 | 20 | 33 | 7 | 34 | 15 | 24 | 4 | 5 | 3 | 12 |
| 6,485 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/sagemaker/trainer_sm.py | transformers.sagemaker.trainer_sm.SageMakerTrainer |
from ..trainer import Trainer
import warnings
class SageMakerTrainer(Trainer):
def __init__(self, args=None, **kwargs):
warnings.warn('`SageMakerTrainer` is deprecated and will be removed in v5 of Transformers. You can use `Trainer` instead.', FutureWarning)
super().__init__(args=args, **kwargs)
|
class SageMakerTrainer(Trainer):
def __init__(self, args=None, **kwargs):
pass
| 2 | 0 | 7 | 0 | 7 | 0 | 1 | 0 | 1 | 2 | 0 | 0 | 1 | 0 | 1 | 86 | 8 | 0 | 8 | 2 | 6 | 0 | 4 | 2 | 2 | 1 | 1 | 0 | 1 |
| 6,486 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/sagemaker/training_args_sm.py | transformers.sagemaker.training_args_sm.SageMakerTrainingArguments |
from ..utils import is_sagemaker_dp_enabled, logging
from ..training_args import TrainingArguments
import warnings
import os
import torch
from functools import cached_property
from dataclasses import dataclass, field
@dataclass
class SageMakerTrainingArguments(TrainingArguments):
mp_parameters: str = field(default='', metadata={'help': 'Used by the SageMaker launcher to send mp-specific args. Ignored in SageMakerTrainer'})
def __post_init__(self):
super().__post_init__()
warnings.warn('`SageMakerTrainingArguments` is deprecated and will be removed in v5 of Transformers. You can use `TrainingArguments` instead.', FutureWarning)
@cached_property
def _setup_devices(self) -> 'torch.device':
logger.info('PyTorch: setting up devices')
if torch.distributed.is_available() and torch.distributed.is_initialized() and (self.local_rank == -1):
logger.warning('torch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch`')
if self.no_cuda:
device = torch.device('cpu')
self._n_gpu = 0
elif is_sagemaker_model_parallel_available():
local_rank = smp.local_rank()
device = torch.device('cuda', local_rank)
self._n_gpu = 1
elif is_sagemaker_dp_enabled():
import smdistributed.dataparallel.torch.torch_smddp
torch.distributed.init_process_group(backend='smddp', timeout=self.ddp_timeout_delta)
self.local_rank = int(os.getenv('SMDATAPARALLEL_LOCAL_RANK'))
device = torch.device('cuda', self.local_rank)
self._n_gpu = 1
elif self.local_rank == -1:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self._n_gpu = torch.cuda.device_count()
else:
if not torch.distributed.is_initialized():
torch.distributed.init_process_group(backend='nccl', timeout=self.ddp_timeout_delta)
device = torch.device('cuda', self.local_rank)
self._n_gpu = 1
if device.type == 'cuda':
torch.cuda.set_device(device)
return device
@property
def world_size(self):
if is_sagemaker_model_parallel_available():
return smp.dp_size()
return super().world_size
@property
def place_model_on_device(self):
return not is_sagemaker_model_parallel_available()
@property
def _no_sync_in_gradient_accumulation(self):
return False
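`_setup_devices` above is a cascade of environment checks. A torch-free sketch of the branch order follows; the function name, parameters, and `(device, n_gpu)` tuple return are simplifications for illustration, not the real API, and all side effects (process-group init, env vars) are omitted:

```python
def pick_device(no_cuda, model_parallel, dp_enabled, local_rank, cuda_available, n_cuda):
    # Mirrors the branch order of _setup_devices, returning (device, n_gpu).
    if no_cuda:
        return ("cpu", 0)
    if model_parallel or dp_enabled:
        return ("cuda", 1)  # one local GPU per process
    if local_rank == -1:
        return ("cuda:0" if cuda_available else "cpu", n_cuda)
    return ("cuda", 1)  # DDP: one GPU per local rank

print(pick_device(False, False, False, -1, True, 4))  # ('cuda:0', 4)
```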
|
@dataclass
class SageMakerTrainingArguments(TrainingArguments):
def __post_init__(self):
pass
@cached_property
def _setup_devices(self) -> 'torch.device':
pass
@property
def world_size(self):
pass
@property
def place_model_on_device(self):
pass
@property
def _no_sync_in_gradient_accumulation(self):
pass
| 11 | 0 | 12 | 1 | 9 | 2 | 3 | 0.2 | 1 | 3 | 0 | 0 | 5 | 2 | 5 | 37 | 74 | 9 | 55 | 16 | 44 | 11 | 37 | 12 | 30 | 9 | 1 | 2 | 14 |
| 6,487 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.AffineTransformed |
from torch.distributions import AffineTransform, Distribution, Independent, NegativeBinomial, Normal, StudentT, TransformedDistribution
class AffineTransformed(TransformedDistribution):
def __init__(self, base_distribution: Distribution, loc=None, scale=None, event_dim=0):
self.scale = 1.0 if scale is None else scale
self.loc = 0.0 if loc is None else loc
super().__init__(base_distribution, [AffineTransform(loc=self.loc, scale=self.scale, event_dim=event_dim)])
@property
def mean(self):
"""
Returns the mean of the distribution.
"""
return self.base_dist.mean * self.scale + self.loc
@property
def variance(self):
"""
Returns the variance of the distribution.
"""
return self.base_dist.variance * self.scale ** 2
@property
def stddev(self):
"""
Returns the standard deviation of the distribution.
"""
return self.variance.sqrt()
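The three properties above encode the standard affine identities E[sX + b] = s·E[X] + b and Var(sX + b) = s²·Var(X). A torch-free sketch that checks them numerically (the helper name and sample values are illustrative):

```python
import math

def affine_moments(base_mean, base_var, loc=0.0, scale=1.0):
    # mean, variance, stddev of scale * X + loc, matching the properties above.
    mean = base_mean * scale + loc
    var = base_var * scale ** 2
    return mean, var, math.sqrt(var)

# Standard normal base (mean 0, variance 1), transformed with loc=2, scale=3.
print(affine_moments(0.0, 1.0, loc=2.0, scale=3.0))  # (2.0, 9.0, 3.0)
```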
|
class AffineTransformed(TransformedDistribution):
def __init__(self, base_distribution: Distribution, loc=None, scale=None, event_dim=0):
pass
@property
def mean(self):
'''
Returns the mean of the distribution.
'''
pass
@property
def variance(self):
'''
Returns the variance of the distribution.
'''
pass
@property
def stddev(self):
'''
Returns the standard deviation of the distribution.
'''
pass
| 8 | 3 | 5 | 0 | 3 | 2 | 2 | 0.64 | 1 | 2 | 0 | 0 | 4 | 2 | 4 | 38 | 27 | 4 | 14 | 10 | 6 | 9 | 11 | 7 | 6 | 3 | 2 | 0 | 6 |
| 6,488 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.DistributionOutput |
from torch import nn
import torch
from torch.distributions import AffineTransform, Distribution, Independent, NegativeBinomial, Normal, StudentT, TransformedDistribution
from typing import Callable, Optional
class DistributionOutput:
distribution_class: type
in_features: int
args_dim: dict[str, int]
def __init__(self, dim: int=1) -> None:
self.dim = dim
self.args_dim = {k: dim * self.args_dim[k] for k in self.args_dim}
def _base_distribution(self, distr_args):
if self.dim == 1:
return self.distribution_class(*distr_args)
else:
return Independent(self.distribution_class(*distr_args), 1)
def distribution(self, distr_args, loc: Optional[torch.Tensor]=None, scale: Optional[torch.Tensor]=None) -> Distribution:
distr = self._base_distribution(distr_args)
if loc is None and scale is None:
return distr
else:
return AffineTransformed(distr, loc=loc, scale=scale, event_dim=self.event_dim)
@property
def event_shape(self) -> tuple:
"""
Shape of each individual event contemplated by the distributions that this object constructs.
"""
return () if self.dim == 1 else (self.dim,)
@property
def event_dim(self) -> int:
"""
Number of event dimensions, i.e., length of the `event_shape` tuple, of the distributions that this object
constructs.
"""
return len(self.event_shape)
@property
def value_in_support(self) -> float:
"""
A float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By
default 0.0. This value will be used when padding data series.
"""
return 0.0
def get_parameter_projection(self, in_features: int) -> nn.Module:
"""
Return the parameter projection layer that maps the input to the appropriate parameters of the distribution.
"""
return ParameterProjection(in_features=in_features, args_dim=self.args_dim, domain_map=LambdaLayer(self.domain_map))
def domain_map(self, *args: torch.Tensor):
"""
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the
correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a
distribution of the right event_shape.
"""
raise NotImplementedError()
@staticmethod
def squareplus(x: torch.Tensor) -> torch.Tensor:
"""
Helper to map inputs to the positive orthant by applying the square-plus operation. Reference:
https://twitter.com/jon_barron/status/1387167648669048833
"""
return (x + torch.sqrt(torch.square(x) + 4.0)) / 2.0
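The `squareplus` helper maps any real input to a strictly positive value and behaves like a smooth ReLU: squareplus(0) = 1 and squareplus(x) ≈ x for large x. A stdlib-only scalar sketch of the same formula:

```python
import math

def squareplus(x: float) -> float:
    # (x + sqrt(x^2 + 4)) / 2: always > 0, ~x for large x, ~0 for very negative x.
    return (x + math.sqrt(x * x + 4.0)) / 2.0

print(squareplus(0.0))    # 1.0
print(squareplus(100.0))  # ~100.01, close to the identity
assert squareplus(-100.0) > 0.0  # still strictly positive
```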
|
class DistributionOutput:
def __init__(self, dim: int=1) -> None:
pass
def _base_distribution(self, distr_args):
pass
def distribution(self, distr_args, loc: Optional[torch.Tensor]=None, scale: Optional[torch.Tensor]=None) -> Distribution:
pass
@property
def event_shape(self) -> tuple:
'''
Shape of each individual event contemplated by the distributions that this object constructs.
'''
pass
@property
def event_dim(self) -> int:
'''
Number of event dimensions, i.e., length of the `event_shape` tuple, of the distributions that this object
constructs.
'''
pass
@property
def value_in_support(self) -> float:
'''
A float that will have a valid numeric value when computing the log-loss of the corresponding distribution. By
default 0.0. This value will be used when padding data series.
'''
pass
def get_parameter_projection(self, in_features: int) -> nn.Module:
'''
Return the parameter projection layer that maps the input to the appropriate parameters of the distribution.
'''
pass
def domain_map(self, *args: torch.Tensor):
'''
Converts arguments to the right shape and domain. The domain depends on the type of distribution, while the
correct shape is obtained by reshaping the trailing axis in such a way that the returned tensors define a
distribution of the right event_shape.
'''
pass
@staticmethod
def squareplus(x: torch.Tensor) -> torch.Tensor:
'''
Helper to map inputs to the positive orthant by applying the square-plus operation. Reference:
https://twitter.com/jon_barron/status/1387167648669048833
'''
pass
| 14 | 6 | 6 | 0 | 4 | 3 | 1 | 0.53 | 0 | 9 | 3 | 3 | 8 | 1 | 9 | 9 | 75 | 9 | 43 | 21 | 24 | 23 | 28 | 12 | 18 | 2 | 0 | 1 | 12 |
| 6,489 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.LambdaLayer |
from torch import nn
class LambdaLayer(nn.Module):
def __init__(self, function):
super().__init__()
self.function = function
def forward(self, x, *args):
return self.function(x, *args)
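`LambdaLayer` just lifts an arbitrary callable into a module. A torch-free sketch of the same idea, using `__call__` in place of `forward` (class name reused for illustration only):

```python
class LambdaLayer:
    # Stores a callable and applies it to the input.
    def __init__(self, function):
        self.function = function

    def __call__(self, x, *args):
        return self.function(x, *args)

double = LambdaLayer(lambda x: x * 2)
print(double(21))  # 42
```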
|
class LambdaLayer(nn.Module):
def __init__(self, function):
pass
def forward(self, x, *args):
pass
| 3 | 0 | 3 | 0 | 3 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 2 | 1 | 2 | 12 | 7 | 1 | 6 | 4 | 3 | 0 | 6 | 4 | 3 | 1 | 1 | 0 | 2 |
| 6,490 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.NegativeBinomialOutput |
import torch
from torch.distributions import AffineTransform, Distribution, Independent, NegativeBinomial, Normal, StudentT, TransformedDistribution
from typing import Callable, Optional
class NegativeBinomialOutput(DistributionOutput):
"""
Negative Binomial distribution output class.
"""
args_dim: dict[str, int] = {'total_count': 1, 'logits': 1}
distribution_class: type = NegativeBinomial
@classmethod
def domain_map(cls, total_count: torch.Tensor, logits: torch.Tensor):
total_count = cls.squareplus(total_count)
return (total_count.squeeze(-1), logits.squeeze(-1))
def _base_distribution(self, distr_args) -> Distribution:
total_count, logits = distr_args
if self.dim == 1:
return self.distribution_class(total_count=total_count, logits=logits)
else:
return Independent(self.distribution_class(total_count=total_count, logits=logits), 1)
def distribution(self, distr_args, loc: Optional[torch.Tensor]=None, scale: Optional[torch.Tensor]=None) -> Distribution:
total_count, logits = distr_args
if scale is not None:
logits += scale.log()
return self._base_distribution((total_count, logits))
|
class NegativeBinomialOutput(DistributionOutput):
'''
Negative Binomial distribution output class.
'''
@classmethod
def domain_map(cls, total_count: torch.Tensor, logits: torch.Tensor):
pass
def _base_distribution(self, distr_args) -> Distribution:
pass
def distribution(self, distr_args, loc: Optional[torch.Tensor]=None, scale: Optional[torch.Tensor]=None) -> Distribution:
pass
| 5 | 1 | 6 | 1 | 5 | 0 | 2 | 0.35 | 1 | 3 | 0 | 0 | 2 | 0 | 3 | 12 | 33 | 6 | 20 | 11 | 13 | 7 | 16 | 8 | 12 | 2 | 1 | 1 | 5 |
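`NegativeBinomialOutput.distribution` folds a positive `scale` into the distribution by adding `scale.log()` to the logits rather than affinely transforming samples (a Negative Binomial is integer-valued, so an `AffineTransform` of its support would be invalid). Under the logits parameterization the NB mean is `total_count * exp(logits)`, so shifting logits by `log(scale)` multiplies the mean by `scale`. A quick numeric check of that identity in plain Python:

```python
import math

def nb_mean(total_count: float, logits: float) -> float:
    # mean of NegativeBinomial(total_count, logits) = total_count * p / (1 - p)
    # with p = sigmoid(logits); note p / (1 - p) == exp(logits)
    return total_count * math.exp(logits)

r, l, scale = 5.0, 0.3, 4.0
scaled = nb_mean(r, l + math.log(scale))
print(scaled / nb_mean(r, l))  # ratio is scale (~4.0): the mean scales linearly
```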
| 6,491 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.NormalOutput |
import torch
from torch.distributions import AffineTransform, Distribution, Independent, NegativeBinomial, Normal, StudentT, TransformedDistribution
class NormalOutput(DistributionOutput):
"""
Normal distribution output class.
"""
args_dim: dict[str, int] = {'loc': 1, 'scale': 1}
distribution_class: type = Normal
@classmethod
def domain_map(cls, loc: torch.Tensor, scale: torch.Tensor):
scale = cls.squareplus(scale).clamp_min(torch.finfo(scale.dtype).eps)
return (loc.squeeze(-1), scale.squeeze(-1))
|
class NormalOutput(DistributionOutput):
'''
Normal distribution output class.
'''
@classmethod
def domain_map(cls, loc: torch.Tensor, scale: torch.Tensor):
pass
| 3 | 1 | 3 | 0 | 3 | 0 | 1 | 0.43 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 10 | 12 | 2 | 7 | 5 | 4 | 3 | 6 | 4 | 4 | 1 | 1 | 0 | 1 |
| 6,492 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.ParameterProjection |
from typing import Callable, Optional
from torch import nn
import torch
class ParameterProjection(nn.Module):
def __init__(self, in_features: int, args_dim: dict[str, int], domain_map: Callable[..., tuple[torch.Tensor]], **kwargs) -> None:
super().__init__(**kwargs)
self.args_dim = args_dim
self.proj = nn.ModuleList([nn.Linear(in_features, dim) for dim in args_dim.values()])
self.domain_map = domain_map
def forward(self, x: torch.Tensor) -> tuple[torch.Tensor]:
params_unbounded = [proj(x) for proj in self.proj]
return self.domain_map(*params_unbounded)
|
class ParameterProjection(nn.Module):
def __init__(self, in_features: int, args_dim: dict[str, int], domain_map: Callable[..., tuple[torch.Tensor]], **kwargs) -> None:
pass
def forward(self, x: torch.Tensor) -> tuple[torch.Tensor]:
pass
| 3 | 0 | 6 | 1 | 5 | 0 | 1 | 0 | 1 | 4 | 0 | 0 | 2 | 3 | 2 | 12 | 13 | 2 | 11 | 9 | 6 | 0 | 9 | 7 | 6 | 1 | 1 | 0 | 2 |
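`ParameterProjection` keeps one unbounded linear head per distribution argument (one `nn.Linear(in_features, dim)` per entry of `args_dim`) and then pushes the raw outputs through `domain_map` to land in each parameter's valid domain. A dependency-free sketch of that data flow, with toy heads standing in for `nn.Linear` (the summing heads and the `abs`-based map are illustrative assumptions, not the library's implementation):

```python
from typing import Callable, Dict, List

def parameter_projection(args_dim: Dict[str, int],
                         domain_map: Callable) -> Callable:
    """One unbounded head per distribution argument, then a domain map."""
    # toy heads: each sums the features and repeats the sum `dim` times
    heads = [lambda feats, d=dim: [sum(feats)] * d for dim in args_dim.values()]

    def forward(feats: List[float]):
        params_unbounded = [head(feats) for head in heads]
        return domain_map(*params_unbounded)
    return forward

# 'loc' passes through unchanged; 'scale' is forced strictly positive
proj = parameter_projection(
    {"loc": 1, "scale": 1},
    lambda loc, scale: (loc, [abs(s) + 1e-6 for s in scale]),
)
loc, scale = proj([1.0, -3.0])
print(loc, scale)  # loc == [-2.0], scale is positive
```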
| 6,493 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/time_series_utils.py | transformers.time_series_utils.StudentTOutput |
from torch.distributions import AffineTransform, Distribution, Independent, NegativeBinomial, Normal, StudentT, TransformedDistribution
import torch
class StudentTOutput(DistributionOutput):
"""
Student-T distribution output class.
"""
args_dim: dict[str, int] = {'df': 1, 'loc': 1, 'scale': 1}
distribution_class: type = StudentT
@classmethod
def domain_map(cls, df: torch.Tensor, loc: torch.Tensor, scale: torch.Tensor):
scale = cls.squareplus(scale).clamp_min(torch.finfo(scale.dtype).eps)
df = 2.0 + cls.squareplus(df)
return (df.squeeze(-1), loc.squeeze(-1), scale.squeeze(-1))
|
class StudentTOutput(DistributionOutput):
'''
Student-T distribution output class.
'''
@classmethod
def domain_map(cls, df: torch.Tensor, loc: torch.Tensor, scale: torch.Tensor):
pass
| 3 | 1 | 4 | 0 | 4 | 0 | 1 | 0.38 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 10 | 13 | 2 | 8 | 5 | 5 | 3 | 7 | 4 | 5 | 1 | 1 | 0 | 1 |
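In `StudentTOutput.domain_map`, `df = 2.0 + cls.squareplus(df)` keeps the degrees of freedom strictly above 2, which is exactly the region where the Student-T has finite variance; `scale` is likewise forced positive and clamped away from zero. A scalar sketch of the df constraint:

```python
import math

def squareplus(x: float) -> float:
    return (x + math.sqrt(x * x + 4.0)) / 2.0

def map_df(raw_df: float) -> float:
    # squareplus(x) > 0 for every real x, so the mapped df is always > 2
    return 2.0 + squareplus(raw_df)

print(all(map_df(x) > 2.0 for x in (-1e6, -1.0, 0.0, 7.5)))  # -> True
```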
| 6,494 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils.py | transformers.tokenization_utils.ExtensionsTrie |
class ExtensionsTrie(Trie):
def __init__(self, *args):
super().__init__(*args)
def extensions(self, prefix: str):
"""
Generates all extensions of a given prefix token in the Trie.
Example:
```python
>>> trie = Trie()
>>> trie.add("apple")
>>> trie.add("app")
>>> trie.add("application")
>>> trie.extensions("app")
['app', 'apple', 'application']
```
"""
prefix_node = self._get_node(prefix)
ret = self._collect_tokens(prefix_node)
return [prefix + token for token in ret]
def _get_node(self, token: str) -> dict:
"""
Retrieves the node corresponding to the given token in the Trie.
Args:
token (str): The token for which the corresponding node needs to be retrieved.
Returns:
dict: The node in the Trie corresponding to the given token.
"""
node = self.data
for char in token:
if char not in node:
break
node = node[char]
return node
def _collect_tokens(self, node: dict) -> list:
"""
Generates all tokens in the Trie starting from a given node.
Args:
node (dict): The node in the Trie from which tokens need to be generated.
Returns:
list: List of tokens generated from the given node.
"""
tokens = [self._termination_char] if self._termination_char in node else []
for token, subtrie_head in node.items():
if token != self._termination_char:
subtokens = self._collect_tokens(subtrie_head)
tokens.extend([token + subtoken for subtoken in subtokens])
return tokens
|
class ExtensionsTrie(Trie):
def __init__(self, *args):
pass
def extensions(self, prefix: str):
'''
Generates all extensions of a given prefix token in the Trie.
Example:
```python
>>> trie = Trie()
>>> trie.add("apple")
>>> trie.add("app")
>>> trie.add("application")
>>> trie.extensions("app")
['app', 'apple', 'application']
```
'''
pass
def _get_node(self, token: str) -> dict:
'''
Retrieves the node corresponding to the given token in the Trie.
Args:
token (str): The token for which the corresponding node needs to be retrieved.
Returns:
dict: The node in the Trie corresponding to the given token.
'''
pass
def _collect_tokens(self, node: dict) -> list:
'''
Generates all tokens in the Trie starting from a given node.
Args:
node (dict): The node in the Trie from which tokens need to be generated.
Returns:
list: List of tokens generated from the given node.
'''
pass
| 5 | 3 | 13 | 2 | 5 | 7 | 2 | 1.24 | 1 | 4 | 0 | 0 | 4 | 1 | 4 | 9 | 57 | 10 | 21 | 12 | 16 | 26 | 21 | 12 | 16 | 4 | 1 | 2 | 9 |
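`ExtensionsTrie` walks to the node for a prefix, then recursively collects every completion below it. A self-contained re-implementation of the same algorithm (the real class inherits `add`, `data`, and `_termination_char` from `Trie`; both are stubbed here, with an empty string assumed as the termination sentinel):

```python
class MiniExtensionsTrie:
    _termination_char = ""  # sentinel marking end-of-token

    def __init__(self):
        self.data = {}

    def add(self, word: str) -> None:
        node = self.data
        for char in word:
            node = node.setdefault(char, {})
        node[self._termination_char] = 1

    def _get_node(self, token: str) -> dict:
        node = self.data
        for char in token:
            if char not in node:
                break
            node = node[char]
        return node

    def _collect_tokens(self, node: dict) -> list:
        tokens = [self._termination_char] if self._termination_char in node else []
        for char, child in node.items():
            if char != self._termination_char:
                tokens.extend(char + suffix for suffix in self._collect_tokens(child))
        return tokens

    def extensions(self, prefix: str) -> list:
        return [prefix + token for token in self._collect_tokens(self._get_node(prefix))]

trie = MiniExtensionsTrie()
for w in ("apple", "app", "application"):
    trie.add(w)
print(sorted(trie.extensions("app")))  # -> ['app', 'apple', 'application']
```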
| 6,495 | huggingface/pytorch-pretrained-BERT | huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils.py | transformers.tokenization_utils.PreTrainedTokenizer |
import re
import itertools
from typing import Any, Optional, Union, overload
from .utils import PaddingStrategy, TensorType, add_end_docstrings, logging
from .tokenization_utils_base import ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING, INIT_TOKENIZER_DOCSTRING, AddedToken, BatchEncoding, EncodedInput, EncodedInputPair, PreTokenizedInput, PreTokenizedInputPair, PreTrainedTokenizerBase, TextInput, TextInputPair, TruncationStrategy
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING)
class PreTrainedTokenizer(PreTrainedTokenizerBase):
"""
Base class for all slow tokenizers.
Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`].
Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading
pretrained tokenizers as well as adding tokens to the vocabulary.
This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the
specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...).
"""
def __init__(self, **kwargs):
self.tokens_trie = Trie()
if not hasattr(self, '_added_tokens_decoder'):
self._added_tokens_decoder: dict[int, AddedToken] = {}
self._added_tokens_decoder.update(kwargs.pop('added_tokens_decoder', {}))
self._added_tokens_encoder: dict[str, int] = {k.content: v for v, k in self._added_tokens_decoder.items()}
super().__init__(**kwargs)
self._add_tokens([token for token in self.all_special_tokens_extended if token not in self._added_tokens_encoder], special_tokens=True)
self._decode_use_source_tokenizer = False
@property
def is_fast(self) -> bool:
return False
@property
def vocab_size(self) -> int:
"""
`int`: Size of the base vocabulary (without the added tokens).
"""
raise NotImplementedError
@property
def added_tokens_encoder(self) -> dict[str, int]:
"""
Returns the sorted mapping from string to index. The added tokens encoder is cached for performance
optimisation in `self._added_tokens_encoder` for the slow tokenizers.
"""
return {k.content: v for v, k in sorted(self._added_tokens_decoder.items(), key=lambda item: item[0])}
@property
def added_tokens_decoder(self) -> dict[int, AddedToken]:
"""
Returns the added tokens in the vocabulary as a dictionary of index to AddedToken.
Returns:
`dict[str, int]`: The added tokens.
"""
return dict(sorted(self._added_tokens_decoder.items(), key=lambda item: item[0]))
@added_tokens_decoder.setter
def added_tokens_decoder(self, value: dict[int, Union[AddedToken, str]]) -> dict[int, AddedToken]:
for index, token in value.items():
if not isinstance(token, (str, AddedToken)) or not isinstance(index, int):
raise TypeError(f'The provided `added_tokens_decoder` has an element of type {(index.__class__, token.__class__)}, should be a dict of {(int, Union[AddedToken, str])}')
self._added_tokens_decoder[index] = AddedToken(token) if isinstance(token, str) else token
self._added_tokens_encoder[str(token)] = index
self._update_total_vocab_size()
def get_added_vocab(self) -> dict[str, int]:
"""
Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from
the fast call because for now we always add the tokens even if they are already in the vocabulary. This is
something we should change.
Returns:
`dict[str, int]`: The added tokens.
"""
return self._added_tokens_encoder
def __len__(self):
"""
Size of the full vocabulary with the added tokens.
"""
return self.total_vocab_size
def _update_total_vocab_size(self):
"""
Update the size of the full vocabulary with the added tokens. Counts the `keys` and not the `values` because
otherwise if there is a hole in the vocab, we will add tokenizers at a wrong index. This operation is slow and
is only updated when adding tokens.
"""
self.total_vocab_size = len(self.get_vocab())
def _add_tokens(self, new_tokens: Union[list[str], list[AddedToken]], special_tokens: bool=False) -> int:
"""
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to
it with indices starting from length of the current vocabulary. Special tokens are sometimes already in the
vocab which is why they have to be handled specifically.
Args:
new_tokens (`list[str]`or `list[tokenizers.AddedToken]`):
Token(s) to add in vocabulary. A token is counted as added if it's not already in the vocabulary
(tested by checking if the tokenizer assign the index of the `unk_token` to them). If a token is part
of the vocabulary then we simply mark this token as an `AddedToken` which allows to control the
stripping and normalization of this token. This is NOT possible in `tokenizers`.
special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the tokens should be added as special tokens.
Returns:
`int`: The number of tokens actually added to the vocabulary.
Examples:
```python
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")
num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Note: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```"""
added_tokens = 0
if new_tokens is None:
return added_tokens
current_vocab = self.get_vocab().copy()
new_idx = len(current_vocab)
for token in new_tokens:
if not isinstance(token, (str, AddedToken)):
raise TypeError(f'Token {token} is not a string but a {type(token)}.')
if str(token) == '':
continue
if isinstance(token, str):
if token in self._added_tokens_encoder:
continue
else:
is_special = token in self.all_special_tokens or special_tokens
token = AddedToken(token, rstrip=False, lstrip=False, normalized=not is_special, special=is_special)
elif special_tokens:
token.__setstate__({'special': True, 'normalized': token.normalized})
if token in self._added_tokens_decoder:
continue
if not token.special and token.normalized and getattr(self, 'do_lower_case', False):
token.content = token.content.lower()
if token.content not in current_vocab:
token_index = new_idx + added_tokens
current_vocab[token.content] = token_index
added_tokens += 1
else:
token_index = current_vocab[token.content]
if token.special and str(token) not in self.all_special_tokens:
self._special_tokens_map['additional_special_tokens'].append(token)
self._added_tokens_decoder[token_index] = token
self._added_tokens_encoder[token.content] = token_index
if self.verbose:
logger.info(f'Adding {token} to the vocabulary')
self._update_trie()
self._update_total_vocab_size()
return added_tokens
def _update_trie(self, unique_no_split_tokens: Optional[str]=[]):
for token in self._added_tokens_decoder.values():
if token.content not in self.tokens_trie._tokens:
self.tokens_trie.add(token.content)
for token in unique_no_split_tokens:
if token not in self.tokens_trie._tokens:
self.tokens_trie.add(token)
def num_special_tokens_to_add(self, pair: bool=False) -> int:
"""
Returns the number of added tokens when encoding a sequence with special tokens.
<Tip>
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put
this inside your training loop.
</Tip>
Args:
pair (`bool`, *optional*, defaults to `False`):
Whether the number of added tokens should be computed in the case of a sequence pair or a single
sequence.
Returns:
`int`: Number of special tokens added to sequences.
"""
token_ids_0 = []
token_ids_1 = []
return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None))
def tokenize(self, text: TextInput, **kwargs) -> list[str]:
"""
Converts a string into a sequence of tokens, using the tokenizer.
Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies
(BPE/SentencePieces/WordPieces). Takes care of added tokens.
Args:
text (`str`):
The sequence to be encoded.
**kwargs (additional keyword arguments):
Passed along to the model-specific `prepare_for_tokenization` preprocessing method.
Returns:
`list[str]`: The list of tokens.
"""
split_special_tokens = kwargs.pop('split_special_tokens', self.split_special_tokens)
text, kwargs = self.prepare_for_tokenization(text, **kwargs)
if kwargs:
logger.warning(f'Keyword arguments {kwargs} not recognized.')
if hasattr(self, 'do_lower_case') and self.do_lower_case:
escaped_special_toks = [re.escape(s_tok) for s_tok in self.all_special_tokens]
escaped_special_toks += [re.escape(s_tok.content) for s_tok in self._added_tokens_decoder.values() if not s_tok.special and s_tok.normalized]
pattern = '(' + '|'.join(escaped_special_toks) + ')|' + '(.+?)'
text = re.sub(pattern, lambda m: m.groups()[0] or m.groups()[1].lower(), text)
if split_special_tokens:
no_split_token = []
tokens = [text]
else:
no_split_token = self._added_tokens_encoder.keys()
tokens = self.tokens_trie.split(text)
for i, token in enumerate(tokens):
if token in no_split_token:
tok_extended = self._added_tokens_decoder.get(self._added_tokens_encoder[token], None)
left = tokens[i - 1] if i > 0 else None
right = tokens[i + 1] if i < len(tokens) - 1 else None
if isinstance(tok_extended, AddedToken):
if tok_extended.rstrip and right:
tokens[i + 1] = right.lstrip()
if tok_extended.lstrip and left:
tokens[i - 1] = left.rstrip()
if tok_extended.single_word and left and (left[-1] != ' '):
tokens[i - 1] += token
tokens[i] = ''
elif tok_extended.single_word and right and (right[0] != ' '):
tokens[i + 1] = token + tokens[i + 1]
tokens[i] = ''
else:
raise ValueError(f'{tok_extended} cannot be tokenized because it was not properly added to the tokenizer. This means that it is not an `AddedToken` but a {type(tok_extended)}')
tokenized_text = []
for token in tokens:
if not token:
continue
if token in no_split_token:
tokenized_text.append(token)
else:
tokenized_text.extend(self._tokenize(token))
return tokenized_text
def _tokenize(self, text, **kwargs):
"""
Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
Do NOT take care of added tokens.
"""
raise NotImplementedError
def convert_tokens_to_ids(self, tokens: Union[str, list[str]]) -> Union[int, list[int]]:
"""
Converts a token string (or a sequence of tokens) in a single integer id (or a sequence of ids), using the
vocabulary.
Args:
tokens (`str` or `list[str]`): One or several token(s) to convert to token id(s).
Returns:
`int` or `list[int]`: The token id or list of token ids.
"""
if tokens is None:
return None
if isinstance(tokens, str):
return self._convert_token_to_id_with_added_voc(tokens)
ids = []
for token in tokens:
ids.append(self._convert_token_to_id_with_added_voc(token))
return ids
def _convert_token_to_id_with_added_voc(self, token):
if token is None:
return None
if token in self._added_tokens_encoder:
return self._added_tokens_encoder[token]
return self._convert_token_to_id(token)
def _convert_token_to_id(self, token):
raise NotImplementedError
def _encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
def get_input_ids(text):
if isinstance(text, str):
tokens = self.tokenize(text, **kwargs)
return self.convert_tokens_to_ids(tokens)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):
if is_split_into_words:
tokens = list(itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
else:
return self.convert_tokens_to_ids(text)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int):
return text
elif is_split_into_words:
raise ValueError(f'Input {text} is not valid. Should be a string or a list/tuple of strings when `is_split_into_words=True`.')
else:
raise ValueError(f'Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.')
if return_offsets_mapping:
raise NotImplementedError('return_offset_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast. More information on available tokenizers at https://github.com/huggingface/transformers/pull/2674')
first_ids = get_input_ids(text)
second_ids = get_input_ids(text_pair) if text_pair is not None else None
return self.prepare_for_model(first_ids, pair_ids=second_ids, add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose)
def _batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
def get_input_ids(text):
if isinstance(text, str):
tokens = self.tokenize(text, **kwargs)
return self.convert_tokens_to_ids(tokens)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str):
if is_split_into_words:
tokens = list(itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text)))
return self.convert_tokens_to_ids(tokens)
else:
return self.convert_tokens_to_ids(text)
elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int):
return text
else:
raise ValueError('Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.')
if return_offsets_mapping:
raise NotImplementedError('return_offset_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.')
input_ids = []
for ids_or_pair_ids in batch_text_or_text_pairs:
if not isinstance(ids_or_pair_ids, (list, tuple)) or (is_split_into_words and (not isinstance(ids_or_pair_ids[0], (list, tuple)))):
ids, pair_ids = (ids_or_pair_ids, None)
else:
ids, pair_ids = ids_or_pair_ids
first_ids = get_input_ids(ids)
second_ids = get_input_ids(pair_ids) if pair_ids is not None else None
input_ids.append((first_ids, second_ids))
batch_outputs = self._batch_prepare_for_model(input_ids, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=return_tensors, verbose=verbose, split_special_tokens=split_special_tokens)
return BatchEncoding(batch_outputs)
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def _batch_prepare_for_model(self, batch_ids_pairs: list[Union[PreTokenizedInputPair, tuple[list[int], None]]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[str]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False) -> BatchEncoding:
"""
Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It
adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user defined stride) for overflowing tokens
Args:
batch_ids_pairs: list of tokenized input ids or input ids pairs
"""
batch_outputs = {}
for first_ids, second_ids in batch_ids_pairs:
outputs = self.prepare_for_model(first_ids, second_ids, add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, padding_side=None, return_attention_mask=False, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, prepend_batch_axis=False, verbose=verbose, split_special_tokens=split_special_tokens)
for key, value in outputs.items():
if key not in batch_outputs:
batch_outputs[key] = []
batch_outputs[key].append(value)
batch_outputs = self.pad(batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask)
batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors)
return batch_outputs
def prepare_for_tokenization(self, text: str, is_split_into_words: bool=False, **kwargs) -> tuple[str, dict[str, Any]]:
"""
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the
`kwargs` at the end of the encoding process to be sure all the arguments have been used.
Args:
text (`str`):
The text to prepare.
is_split_into_words (`bool`, *optional*, defaults to `False`):
Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
kwargs (`dict[str, Any]`, *optional*):
Keyword arguments to use for the tokenization.
Returns:
`tuple[str, dict[str, Any]]`: The prepared text and the unused kwargs.
"""
return (text, kwargs)
def get_special_tokens_mask(self, token_ids_0: list, token_ids_1: Optional[list]=None, already_has_special_tokens: bool=False) -> list[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.
Args:
token_ids_0 (`list[int]`):
List of ids of the first sequence.
token_ids_1 (`list[int]`, *optional*):
List of ids of the second sequence.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
if already_has_special_tokens:
if token_ids_1 is not None:
raise ValueError('You should not supply a second sequence if the provided sequence of ids is already formatted with special tokens for the model.')
return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True)
return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0))
@overload
def convert_ids_to_tokens(self, ids: int, skip_special_tokens: bool=False) -> str:
...
@overload
def convert_ids_to_tokens(self, ids: list[int], skip_special_tokens: bool=False) -> list[str]:
...
def convert_ids_to_tokens(self, ids: Union[int, list[int]], skip_special_tokens: bool=False) -> Union[str, list[str]]:
"""
Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and
added tokens.
Args:
ids (`int` or `list[int]`):
The token id (or token ids) to convert to tokens.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
Returns:
`str` or `list[str]`: The decoded token(s).
"""
if isinstance(ids, int):
if ids in self._added_tokens_decoder:
return self._added_tokens_decoder[ids].content
else:
return self._convert_id_to_token(ids)
tokens = []
for index in ids:
index = int(index)
if skip_special_tokens and index in self.all_special_ids:
continue
if index in self._added_tokens_decoder:
tokens.append(self._added_tokens_decoder[index].content)
else:
tokens.append(self._convert_id_to_token(index))
return tokens
def _convert_id_to_token(self, index: int) -> str:
raise NotImplementedError
def convert_tokens_to_string(self, tokens: list[str]) -> str:
return ' '.join(tokens)
def _decode(self, token_ids: Union[int, list[int]], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, spaces_between_special_tokens: bool=True, **kwargs) -> str:
self._decode_use_source_tokenizer = kwargs.pop('use_source_tokenizer', False)
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
if isinstance(filtered_tokens, str):
filtered_tokens = [filtered_tokens]
legacy_added_tokens = set(self._added_tokens_encoder.keys()) - set(self.all_special_tokens) | {token for token in self.additional_special_tokens if self.convert_tokens_to_ids(token) >= self.vocab_size}
sub_texts = []
current_sub_text = []
for token in filtered_tokens:
if skip_special_tokens and token in self.all_special_tokens:
continue
if token in legacy_added_tokens:
if current_sub_text:
string = self.convert_tokens_to_string(current_sub_text)
if len(string) > 0:
sub_texts.append(string)
current_sub_text = []
sub_texts.append(token)
else:
current_sub_text.append(token)
if current_sub_text:
sub_texts.append(self.convert_tokens_to_string(current_sub_text))
if spaces_between_special_tokens:
text = ' '.join(sub_texts)
else:
text = ''.join(sub_texts)
clean_up_tokenization_spaces = clean_up_tokenization_spaces if clean_up_tokenization_spaces is not None else self.clean_up_tokenization_spaces
if clean_up_tokenization_spaces:
clean_text = self.clean_up_tokenization(text)
return clean_text
else:
return text
|
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING)
class PreTrainedTokenizer(PreTrainedTokenizerBase):
'''
Base class for all slow tokenizers.
Inherits from [`~tokenization_utils_base.PreTrainedTokenizerBase`].
Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading
pretrained tokenizers as well as adding tokens to the vocabulary.
This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the
specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...).
'''
def __init__(self, **kwargs):
pass
@property
def is_fast(self) -> bool:
pass
@property
def vocab_size(self) -> int:
'''
`int`: Size of the base vocabulary (without the added tokens).
'''
pass
@property
def added_tokens_encoder(self) -> dict[str, int]:
'''
Returns the sorted mapping from string to index. The added tokens encoder is cached for performance
optimisation in `self._added_tokens_encoder` for the slow tokenizers.
'''
pass
@property
def added_tokens_decoder(self) -> dict[int, AddedToken]:
'''
Returns the added tokens in the vocabulary as a dictionary of index to AddedToken.
Returns:
`dict[str, int]`: The added tokens.
'''
pass
@added_tokens_decoder.setter
def added_tokens_decoder(self) -> dict[int, AddedToken]:
pass
def get_added_vocab(self) -> dict[str, int]:
'''
Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from
the fast call because for now we always add the tokens even if they are already in the vocabulary. This is
something we should change.
Returns:
`dict[str, int]`: The added tokens.
'''
pass
def __len__(self):
'''
Size of the full vocabulary with the added tokens.
'''
pass
def _update_total_vocab_size(self):
'''
Update the size of the full vocabulary with the added tokens. Counts the `keys` and not the `values` because
otherwise if there is a hole in the vocab, we will add tokens at a wrong index. This operation is slow and
is only updated when adding tokens.
'''
pass
def _add_tokens(self, new_tokens: Union[list[str], list[AddedToken]], special_tokens: bool=False) -> int:
'''
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to
it with indices starting from length of the current vocabulary. Special tokens are sometimes already in the
vocab which is why they have to be handled specifically.
Args:
new_tokens (`list[str]`or `list[tokenizers.AddedToken]`):
Token(s) to add in vocabulary. A token is counted as added if it's not already in the vocabulary
(tested by checking if the tokenizer assigns the index of the `unk_token` to them). If a token is part
of the vocabulary then we simply mark this token as an `AddedToken`, which allows controlling the
stripping and normalization of this token. This is NOT possible in `tokenizers`.
special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the tokens should be added as special tokens.
Returns:
`int`: The number of tokens actually added to the vocabulary.
Examples:
```python
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")
num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")
# Note: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```'''
pass
def _update_trie(self, unique_no_split_tokens: Optional[str]=[]):
pass
def num_special_tokens_to_add(self, pair: bool=False) -> int:
'''
Returns the number of added tokens when encoding a sequence with special tokens.
<Tip>
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put
this inside your training loop.
</Tip>
Args:
pair (`bool`, *optional*, defaults to `False`):
Whether the number of added tokens should be computed in the case of a sequence pair or a single
sequence.
Returns:
`int`: Number of special tokens added to sequences.
'''
pass
def tokenize(self, text: TextInput, **kwargs) -> list[str]:
'''
Converts a string into a sequence of tokens, using the tokenizer.
Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies
(BPE/SentencePieces/WordPieces). Takes care of added tokens.
Args:
text (`str`):
The sequence to be encoded.
**kwargs (additional keyword arguments):
Passed along to the model-specific `prepare_for_tokenization` preprocessing method.
Returns:
`list[str]`: The list of tokens.
'''
pass
def _tokenize(self, text, **kwargs):
'''
Converts a string into a sequence of tokens (string), using the tokenizer. Split in words for word-based
vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces).
Do NOT take care of added tokens.
'''
pass
def convert_tokens_to_ids(self, tokens: Union[str, list[str]]) -> Union[int, list[int]]:
'''
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the
vocabulary.
Args:
tokens (`str` or `list[str]`): One or several token(s) to convert to token id(s).
Returns:
`int` or `list[int]`: The token id or list of token ids.
'''
pass
def _convert_token_to_id_with_added_voc(self, token):
pass
def _convert_token_to_id(self, token):
pass
def _encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
pass
def get_input_ids(text):
pass
def _batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
pass
def get_input_ids(text):
pass
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def _batch_prepare_for_model(self, batch_ids_pairs: list[Union[PreTokenizedInputPair, tuple[list[int], None]]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[str]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False) -> BatchEncoding:
'''
Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model. It
adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with a user-defined stride) for overflowing tokens.
Args:
batch_ids_pairs: list of tokenized input ids or input ids pairs
'''
pass
def prepare_for_tokenization(self, text: str, is_split_into_words: bool=False, **kwargs) -> tuple[str, dict[str, Any]]:
'''
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the
`kwargs` at the end of the encoding process to be sure all the arguments have been used.
Args:
text (`str`):
The text to prepare.
is_split_into_words (`bool`, *optional*, defaults to `False`):
Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
kwargs (`dict[str, Any]`, *optional*):
Keyword arguments to use for the tokenization.
Returns:
`tuple[str, dict[str, Any]]`: The prepared text and the unused kwargs.
'''
pass
def get_special_tokens_mask(self, token_ids_0: list, token_ids_1: Optional[list]=None, already_has_special_tokens: bool=False) -> list[int]:
'''
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.
Args:
token_ids_0 (`list[int]`):
List of ids of the first sequence.
token_ids_1 (`list[int]`, *optional*):
List of ids of the second sequence.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
'''
pass
@overload
def convert_ids_to_tokens(self, ids: int, skip_special_tokens: bool=False) -> str:
pass
@overload
def convert_ids_to_tokens(self, ids: list[int], skip_special_tokens: bool=False) -> list[str]:
pass
def convert_ids_to_tokens(self, ids: Union[int, list[int]], skip_special_tokens: bool=False) -> Union[str, list[str]]:
'''
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and
added tokens.
Args:
ids (`int` or `list[int]`):
The token id (or token ids) to convert to tokens.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
Returns:
`str` or `list[str]`: The decoded token(s).
'''
pass
def _convert_id_to_token(self, index: int) -> str:
pass
def convert_tokens_to_string(self, tokens: list[str]) -> str:
pass
def _decode(self, token_ids: Union[int, list[int]], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, spaces_between_special_tokens: bool=True, **kwargs) -> str:
pass
| 40
| 16
| 24
| 2
| 17
| 6
| 4
| 0.39
| 1
| 18
| 3
| 100
| 28
| 5
| 28
| 89
| 727
| 92
| 464
| 169
| 347
| 179
| 247
| 81
| 216
| 16
| 2
| 4
| 108
|
6,496
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils.py
|
transformers.tokenization_utils.Trie
|
from collections import OrderedDict
from .utils import logging
logger = logging.get_logger(__name__)
class Trie:
"""
Trie in Python. Creates a Trie out of a list of words. The trie is used to split on `added_tokens` in one pass
Loose reference https://en.wikipedia.org/wiki/Trie
"""
def __init__(self, *args):
self.data = {}
self._tokens = set()
self._termination_char = ''
self.update(*args)
def update(self, *args):
"""
Updates the Trie with new tokens provided as arguments.
Args:
*args: Variable number of words to be added to the Trie.
"""
for token in tuple(*args):
self.add(token)
def add(self, word: str):
"""
Passes over every char (utf-8 char) on word and recursively adds it to the internal `data` trie representation.
The special key `""` in `self._termination_char` is used to represent termination.
This function is idempotent: adding the same word twice leaves the trie unchanged.
Example:
```python
>>> trie = Trie()
>>> trie.add("Hello 友達")
>>> trie.data
{"H": {"e": {"l": {"l": {"o": {" ": {"友": {"達": {"": 1}}}}}}}}}
>>> trie.add("Hello")
>>> trie.data
{"H": {"e": {"l": {"l": {"o": {"": 1, " ": {"友": {"達": {"": 1}}}}}}}}}
```
"""
if not word:
return
self._tokens.add(word)
ref = self.data
for char in word:
ref[char] = ref.setdefault(char, {})
ref = ref[char]
ref[self._termination_char] = 1
def split(self, text: str) -> list[str]:
"""
Will look for the words added to the trie within `text`. Output is the original string split along the
boundaries of the words found.
This trie will match the longest possible word first!
Example:
```python
>>> trie = Trie()
>>> trie.split("[CLS] This is a extra_id_100")
["[CLS] This is a extra_id_100"]
>>> trie.add("[CLS]")
>>> trie.add("extra_id_1")
>>> trie.add("extra_id_100")
>>> trie.split("[CLS] This is a extra_id_100")
["[CLS]", " This is a ", "extra_id_100"]
```
"""
states = OrderedDict()
offsets = [0]
skip = 0
for current, current_char in enumerate(text):
if skip and current < skip:
continue
to_remove = set()
reset = False
for start, trie_pointer in states.items():
if '' in trie_pointer:
for lookstart, looktrie_pointer in states.items():
if lookstart > start:
break
elif lookstart < start:
lookahead_index = current + 1
end = current + 1
else:
lookahead_index = current
end = current
next_char = text[lookahead_index] if lookahead_index < len(text) else None
if '' in looktrie_pointer:
start = lookstart
end = lookahead_index
skip = lookahead_index
while next_char in looktrie_pointer:
looktrie_pointer = looktrie_pointer[next_char]
lookahead_index += 1
if '' in looktrie_pointer:
start = lookstart
end = lookahead_index
skip = lookahead_index
if lookahead_index == len(text):
break
next_char = text[lookahead_index]
offsets.append(start)
offsets.append(end)
reset = True
break
elif current_char in trie_pointer:
trie_pointer = trie_pointer[current_char]
states[start] = trie_pointer
else:
to_remove.add(start)
if reset:
states = {}
else:
for start in to_remove:
del states[start]
if current >= skip and current_char in self.data:
states[current] = self.data[current_char]
for start, trie_pointer in states.items():
if '' in trie_pointer:
end = len(text)
offsets.append(start)
offsets.append(end)
break
return self.cut_text(text, offsets)
def cut_text(self, text, offsets):
offsets.append(len(text))
tokens = []
start = 0
for end in offsets:
if start > end:
logger.error('There was a bug in Trie algorithm in tokenization. Attempting to recover. Please report it anyway.')
continue
elif start == end:
continue
tokens.append(text[start:end])
start = end
return tokens
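The contract documented for `Trie.split` (the longest added token wins, and unmatched text is kept intact between matches) can be reproduced with a naive greedy scanner. This is an illustrative re-implementation, not the single-pass Trie algorithm above; it rescans the token list at every position and is therefore much slower:

```python
def greedy_split(text, added_tokens):
    # Try the longest token first at every position, mirroring the
    # longest-possible-word-first behaviour of Trie.split.
    ordered = sorted(added_tokens, key=len, reverse=True)
    out, buf, i = [], "", 0
    while i < len(text):
        for tok in ordered:
            if tok and text.startswith(tok, i):
                if buf:
                    out.append(buf)
                    buf = ""
                out.append(tok)
                i += len(tok)
                break
        else:
            buf += text[i]
            i += 1
    if buf:
        out.append(buf)
    return out

print(greedy_split("[CLS] This is a extra_id_100",
                   ["[CLS]", "extra_id_1", "extra_id_100"]))
# → ['[CLS]', ' This is a ', 'extra_id_100']
```

The point of the Trie version is to get this same output in a single left-to-right pass over `text`, regardless of how many added tokens there are.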
|
class Trie:
'''
Trie in Python. Creates a Trie out of a list of words. The trie is used to split on `added_tokens` in one pass
Loose reference https://en.wikipedia.org/wiki/Trie
'''
def __init__(self, *args):
pass
def update(self, *args):
'''
Updates the Trie with new tokens provided as arguments.
Args:
*args: Variable number of words to be added to the Trie.
'''
pass
def add(self, word: str):
'''
Passes over every char (utf-8 char) on word and recursively adds it to the internal `data` trie representation.
The special key `""` in `self._termination_char` is used to represent termination.
This function is idempotent: adding the same word twice leaves the trie unchanged.
Example:
```python
>>> trie = Trie()
>>> trie.add("Hello 友達")
>>> trie.data
{"H": {"e": {"l": {"l": {"o": {" ": {"友": {"達": {"": 1}}}}}}}}}
>>> trie.add("Hello")
>>> trie.data
{"H": {"e": {"l": {"l": {"o": {"": 1, " ": {"友": {"達": {"": 1}}}}}}}}}
```
'''
pass
def split(self, text: str) -> list[str]:
'''
Will look for the words added to the trie within `text`. Output is the original string split along the
boundaries of the words found.
This trie will match the longest possible word first!
Example:
```python
>>> trie = Trie()
>>> trie.split("[CLS] This is a extra_id_100")
["[CLS] This is a extra_id_100"]
>>> trie.add("[CLS]")
>>> trie.add("extra_id_1")
>>> trie.add("extra_id_100")
>>> trie.split("[CLS] This is a extra_id_100")
["[CLS]", " This is a ", "extra_id_100"]
```
'''
pass
def cut_text(self, text, offsets):
pass
| 6
| 4
| 44
| 5
| 18
| 21
| 6
| 1.16
| 0
| 5
| 0
| 1
| 5
| 3
| 5
| 5
| 229
| 30
| 92
| 26
| 86
| 107
| 83
| 26
| 77
| 19
| 0
| 6
| 29
|
6,497
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils_base.py
|
transformers.tokenization_utils_base.BatchEncoding
|
from collections import UserDict
import numpy as np
import warnings
from .utils import CHAT_TEMPLATE_DIR, CHAT_TEMPLATE_FILE, ExplicitEnum, PaddingStrategy, PushToHubMixin, TensorType, add_end_docstrings, cached_file, copy_func, download_url, extract_commit_hash, is_mlx_available, is_numpy_array, is_offline_mode, is_protobuf_available, is_remote_url, is_tokenizers_available, is_torch_available, is_torch_device, is_torch_tensor, list_repo_templates, logging, requires_backends, to_py_obj
from collections.abc import Mapping, Sequence, Sized
from typing import TYPE_CHECKING, Any, Callable, NamedTuple, Optional, Union
class BatchEncoding(UserDict):
"""
Holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase.__call__`],
[`~tokenization_utils_base.PreTrainedTokenizerBase.encode_plus`] and
[`~tokenization_utils_base.PreTrainedTokenizerBase.batch_encode_plus`] methods (tokens, attention_masks, etc).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes
utility methods to map from word/character space to token space.
Args:
data (`dict`, *optional*):
Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods
('input_ids', 'attention_mask', etc.).
encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, *optional*):
If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character
space to token space the `tokenizers.Encoding` instance or list of instance (for batches) hold this
information.
tensor_type (`Union[None, str, TensorType]`, *optional*):
You can give a tensor_type here to convert the lists of integers in PyTorch/Numpy Tensors at
initialization.
prepend_batch_axis (`bool`, *optional*, defaults to `False`):
Whether or not to add a batch axis when converting to tensors (see `tensor_type` above). Note that this
parameter has an effect if the parameter `tensor_type` is set, *otherwise has no effect*.
n_sequences (`Optional[int]`, *optional*):
The number of sequences (1 for a single sentence, 2 for a pair) used to generate each sample in the
batch.
"""
def __init__(self, data: Optional[dict[str, Any]]=None, encoding: Optional[Union[EncodingFast, Sequence[EncodingFast]]]=None, tensor_type: Union[None, str, TensorType]=None, prepend_batch_axis: bool=False, n_sequences: Optional[int]=None):
super().__init__(data)
if encoding is not None and isinstance(encoding, EncodingFast):
encoding = [encoding]
self._encodings = encoding
if n_sequences is None and encoding is not None and encoding:
n_sequences = encoding[0].n_sequences
self._n_sequences = n_sequences
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
@property
def n_sequences(self) -> Optional[int]:
"""
`Optional[int]`: The number of sequences used to generate each sample from the batch encoded in this
[`BatchEncoding`]. Currently can be one of `None` (unknown), `1` (a single sentence) or `2` (a pair of
sentences)
"""
return self._n_sequences
@property
def is_fast(self) -> bool:
"""
`bool`: Indicate whether this [`BatchEncoding`] was generated from the result of a [`PreTrainedTokenizerFast`]
or not.
"""
return self._encodings is not None
def __getitem__(self, item: Union[int, str]) -> Union[Any, EncodingFast]:
"""
If the key is a string, returns the value of the dict associated to `key` ('input_ids', 'attention_mask',
etc.).
If the key is an integer, get the `tokenizers.Encoding` for batch item with index `key`.
If the key is a slice, returns the value of the dict associated to `key` ('input_ids', 'attention_mask', etc.)
with the constraint of slice.
"""
if isinstance(item, str):
return self.data[item]
elif self._encodings is not None:
return self._encodings[item]
elif isinstance(item, slice):
return {key: self.data[key][item] for key in self.data}
else:
raise KeyError('Invalid key. Only three types of key are available: (1) string, (2) integers for backend Encoding, and (3) slices for data subsetting.')
def __getattr__(self, item: str):
try:
return self.data[item]
except KeyError:
raise AttributeError
def __getstate__(self):
return {'data': self.data, 'encodings': self._encodings}
def __setstate__(self, state):
if 'data' in state:
self.data = state['data']
if 'encodings' in state:
self._encodings = state['encodings']
@property
def encodings(self) -> Optional[list[EncodingFast]]:
"""
`Optional[list[tokenizers.Encoding]]`: The list of all encodings from the tokenization process. Returns `None` if
the input was tokenized through Python (i.e., not a fast) tokenizer.
"""
return self._encodings
def tokens(self, batch_index: int=0) -> list[str]:
"""
Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to
integer indices) at a given batch index (only works for the output of a fast tokenizer).
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[str]`: The list of tokens at that index.
"""
if not self._encodings:
raise ValueError('tokens() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).')
return self._encodings[batch_index].tokens
def sequence_ids(self, batch_index: int=0) -> list[Optional[int]]:
"""
Return a list mapping the tokens to the id of their original sentences:
- `None` for special tokens added around or between sequences,
- `0` for tokens corresponding to words in the first sequence,
- `1` for tokens corresponding to words in the second sequence when a pair of sequences was jointly
encoded.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the sequence id corresponding to each token. Special tokens added
by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding
sequence.
"""
if not self._encodings:
raise ValueError('sequence_ids() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).')
return self._encodings[batch_index].sequence_ids
def words(self, batch_index: int=0) -> list[Optional[int]]:
"""
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added by the
tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word
(several tokens will be mapped to the same word index if they are parts of that word).
"""
if not self._encodings:
raise ValueError('words() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).')
warnings.warn('`BatchEncoding.words()` property is deprecated and should be replaced with the identical, but more self-explanatory `BatchEncoding.word_ids()` property.', FutureWarning)
return self.word_ids(batch_index)
def word_ids(self, batch_index: int=0) -> list[Optional[int]]:
"""
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added by the
tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word
(several tokens will be mapped to the same word index if they are parts of that word).
"""
if not self._encodings:
raise ValueError('word_ids() is not available when using non-fast tokenizers (e.g. instance of a `XxxTokenizerFast` class).')
return self._encodings[batch_index].word_ids
def token_to_sequence(self, batch_or_token_index: int, token_index: Optional[int]=None) -> int:
"""
Get the index of the sequence represented by the given token. In the general use case, this method returns `0`
for a single sequence or the first sequence of a pair, and `1` for the second sequence of a pair
Can be called as:
- `self.token_to_sequence(token_index)` if batch size is 1
- `self.token_to_sequence(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e.,
words are defined by the user). In this case it allows you to easily associate encoded tokens with provided
tokenized words.
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token in the
sequence.
Returns:
`int`: Index of the word in the input sequence.
"""
if not self._encodings:
raise ValueError('token_to_sequence() is not available when using Python based tokenizers')
if token_index is not None:
batch_index = batch_or_token_index
else:
batch_index = 0
token_index = batch_or_token_index
if batch_index < 0:
batch_index = self._batch_size + batch_index
if token_index < 0:
token_index = self._seq_len + token_index
return self._encodings[batch_index].token_to_sequence(token_index)
def token_to_word(self, batch_or_token_index: int, token_index: Optional[int]=None) -> int:
"""
Get the index of the word corresponding to (i.e. comprising) an encoded token in a sequence of the batch.
Can be called as:
- `self.token_to_word(token_index)` if batch size is 1
- `self.token_to_word(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e.,
words are defined by the user). In this case it allows you to easily associate encoded tokens with provided
tokenized words.
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token in the
sequence.
Returns:
`int`: Index of the word in the input sequence.
"""
if not self._encodings:
raise ValueError('token_to_word() is not available when using Python based tokenizers')
if token_index is not None:
batch_index = batch_or_token_index
else:
batch_index = 0
token_index = batch_or_token_index
if batch_index < 0:
batch_index = self._batch_size + batch_index
if token_index < 0:
token_index = self._seq_len + token_index
return self._encodings[batch_index].token_to_word(token_index)
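The `token_to_*` and `word_to_*` helpers above all share the same argument resolution: a single positional index means "token index within batch item 0", two indices mean `(batch_index, token_index)`, and negative indices wrap around. A minimal sketch of that convention, where `batch_size` and `seq_len` are hypothetical stand-ins for the instance attributes:

```python
def resolve_indices(batch_or_token_index, token_index=None,
                    batch_size=1, seq_len=0):
    # One positional argument: it is the token index within batch item 0.
    if token_index is None:
        batch_index, token_index = 0, batch_or_token_index
    else:
        batch_index = batch_or_token_index
    # Negative indices count from the end, like regular Python indexing.
    if batch_index < 0:
        batch_index += batch_size
    if token_index < 0:
        token_index += seq_len
    return batch_index, token_index

print(resolve_indices(5))                                  # → (0, 5)
print(resolve_indices(2, -1, batch_size=4, seq_len=10))    # → (2, 9)
```

Keeping this resolution identical across all the mapping methods is what lets `token_to_word(7)` and `token_to_word(0, 7)` mean the same thing for a batch of size one.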
def word_to_tokens(self, batch_or_word_index: int, word_index: Optional[int]=None, sequence_index: int=0) -> Optional[TokenSpan]:
"""
Get the encoded token span corresponding to a word in a sequence of the batch.
Token spans are returned as a [`~tokenization_utils_base.TokenSpan`] with:
- **start** -- Index of the first token.
- **end** -- Index of the token following the last token.
Can be called as:
- `self.word_to_tokens(word_index, sequence_index: int = 0)` if batch size is 1
- `self.word_to_tokens(batch_index, word_index, sequence_index: int = 0)` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it allows you to easily associate encoded tokens with provided tokenized
words.
Args:
batch_or_word_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the word in the sequence.
word_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the word in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0
or 1) the provided word index belongs to.
Returns:
([`~tokenization_utils_base.TokenSpan`], *optional*): Span of tokens in the encoded sequence. Returns
`None` if no tokens correspond to the word. This can happen especially when the token is a special token
that has been used to format the tokenization. For example when we add a class token at the very beginning
of the tokenization.
"""
if not self._encodings:
raise ValueError('word_to_tokens() is not available when using Python based tokenizers')
if word_index is not None:
batch_index = batch_or_word_index
else:
batch_index = 0
word_index = batch_or_word_index
if batch_index < 0:
batch_index = self._batch_size + batch_index
if word_index < 0:
word_index = self._seq_len + word_index
span = self._encodings[batch_index].word_to_tokens(word_index, sequence_index)
return TokenSpan(*span) if span is not None else None
def token_to_chars(self, batch_or_token_index: int, token_index: Optional[int]=None) -> Optional[CharSpan]:
"""
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a [`~tokenization_utils_base.CharSpan`] with:
- **start** -- Index of the first character in the original string associated to the token.
- **end** -- Index of the character following the last character in the original string associated to the
token.
Can be called as:
- `self.token_to_chars(token_index)` if batch size is 1
- `self.token_to_chars(batch_index, token_index)` if batch size is greater or equal to 1
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token or tokens in
the sequence.
Returns:
[`~tokenization_utils_base.CharSpan`]: Span of characters in the original string, or None, if the token
(e.g. <s>, </s>) doesn't correspond to any chars in the origin string.
"""
if not self._encodings:
raise ValueError('token_to_chars() is not available when using Python based tokenizers')
if token_index is not None:
batch_index = batch_or_token_index
else:
batch_index = 0
token_index = batch_or_token_index
span_indices = self._encodings[batch_index].token_to_chars(token_index)
return CharSpan(*span_indices) if span_indices is not None else None
def char_to_token(self, batch_or_char_index: int, char_index: Optional[int]=None, sequence_index: int=0) -> int:
"""
Get the index of the token in the encoded output comprising a character in the original string for a sequence
of the batch.
Can be called as:
- `self.char_to_token(char_index)` if batch size is 1
- `self.char_to_token(batch_index, char_index)` if batch size is greater or equal to 1
This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it allows you to easily associate encoded tokens with provided tokenized
words.
Args:
batch_or_char_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the word in the sequence.
char_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the word in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair (0
or 1) the provided character index belongs to.
Returns:
`int`: Index of the token, or `None` if the char index refers to a whitespace-only token and whitespace is
trimmed with `trim_offsets=True`.
"""
if not self._encodings:
raise ValueError('char_to_token() is not available when using Python based tokenizers')
if char_index is not None:
batch_index = batch_or_char_index
else:
batch_index = 0
char_index = batch_or_char_index
return self._encodings[batch_index].char_to_token(char_index, sequence_index)
def word_to_chars(self, batch_or_word_index: int, word_index: Optional[int]=None, sequence_index: int=0) -> CharSpan:
"""
Get the character span in the original string corresponding to given word in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
Can be called as:
- `self.word_to_chars(word_index)` if batch size is 1
- `self.word_to_chars(batch_index, word_index)` if batch size is greater or equal to 1
Args:
batch_or_word_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the word in the sequence.
word_index (`int`, *optional*):
If a batch index is provided in *batch_or_word_index*, this can be the index of the word in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided word index belongs to.
or 1) the provided word index belongs to.
Returns:
`CharSpan` or `list[CharSpan]`: Span(s) of the associated character or characters in the string. CharSpan
are NamedTuple with:
- start: index of the first character associated to the token in the original string
- end: index of the character following the last character associated to the token in the original
string
"""
if not self._encodings:
raise ValueError('word_to_chars() is not available when using Python based tokenizers')
if word_index is not None:
batch_index = batch_or_word_index
else:
batch_index = 0
word_index = batch_or_word_index
return CharSpan(*self._encodings[batch_index].word_to_chars(word_index, sequence_index))
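To make the CharSpan contract concrete, here is a small self-contained sketch (the `text` and span values are made up for illustration) showing how a returned span slices the original string:

```python
from typing import NamedTuple

class CharSpan(NamedTuple):
    # start: index of the first character; end: index of the character
    # following the last one, so text[start:end] recovers the word.
    start: int
    end: int

text = "Hello world"
span = CharSpan(6, 11)  # hypothetical span covering the word "world"
print(text[span.start:span.end])  # world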
def char_to_word(self, batch_or_char_index: int, char_index: Optional[int]=None, sequence_index: int=0) -> int:
"""
Get the word in the original string corresponding to a character in the original string of a sequence of the
batch.
Can be called as:
- `self.char_to_word(char_index)` if batch size is 1
- `self.char_to_word(batch_index, char_index)` if batch size is greater than 1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_char_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the character in the original string.
char_index (`int`, *optional*):
If a batch index is provided in *batch_or_char_index*, this can be the index of the character in the
original string.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided character index belongs to.
Returns:
`int` or `list[int]`: Index or indices of the associated encoded token(s).
"""
if not self._encodings:
raise ValueError('char_to_word() is not available when using Python based tokenizers')
if char_index is not None:
batch_index = batch_or_char_index
else:
batch_index = 0
char_index = batch_or_char_index
return self._encodings[batch_index].char_to_word(char_index, sequence_index)
def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]]=None, prepend_batch_axis: bool=False):
"""
Convert the inner content to tensors.
Args:
tensor_type (`str` or [`~utils.TensorType`], *optional*):
The type of tensors to use. If `str`, should be one of the values of the enum [`~utils.TensorType`]. If
`None`, no modification is done.
prepend_batch_axis (`bool`, *optional*, defaults to `False`):
Whether or not to add the batch dimension during the conversion.
"""
if tensor_type is None:
return self
if not isinstance(tensor_type, TensorType):
tensor_type = TensorType(tensor_type)
if tensor_type == TensorType.PYTORCH:
if not is_torch_available():
raise ImportError('Unable to convert output to PyTorch tensors format, PyTorch is not installed.')
import torch
def as_tensor(value, dtype=None):
if isinstance(value, list) and len(value) > 0 and isinstance(value[0], np.ndarray):
return torch.from_numpy(np.array(value))
if len(flatten(value)) == 0 and dtype is None:
dtype = torch.int64
return torch.tensor(value, dtype=dtype)
is_tensor = torch.is_tensor
elif tensor_type == TensorType.MLX:
if not is_mlx_available():
raise ImportError('Unable to convert output to MLX tensors format, MLX is not installed.')
import mlx.core as mx
def as_tensor(value, dtype=None):
if len(flatten(value)) == 0 and dtype is None:
dtype = mx.int32
return mx.array(value, dtype=dtype)
def is_tensor(obj):
return isinstance(obj, mx.array)
else:
def as_tensor(value, dtype=None):
if isinstance(value, (list, tuple)) and len(value) > 0 and isinstance(value[0], (list, tuple, np.ndarray)):
value_lens = [len(val) for val in value]
if len(set(value_lens)) > 1 and dtype is None:
value = as_tensor([np.asarray(val) for val in value], dtype=object)
if len(flatten(value)) == 0 and dtype is None:
dtype = np.int64
return np.asarray(value, dtype=dtype)
is_tensor = is_numpy_array
for key, value in self.items():
try:
if prepend_batch_axis:
value = [value]
if not is_tensor(value):
tensor = as_tensor(value)
self[key] = tensor
except Exception as e:
if key == 'overflowing_tokens':
raise ValueError('Unable to create tensor returning overflowing tokens of different lengths. Please see if a fast version of this tokenizer is available to have this feature available.') from e
raise ValueError(f"Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`{key}` in this case) have excessive nesting (inputs type `list` where type `int` is expected).") from e
return self
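The numpy branch above falls back to an object array when the rows have unequal lengths. A condensed sketch of that behavior (the function name is ours, not the library's):

```python
import numpy as np

def as_numpy(value, dtype=None):
    # Equal-length rows build a regular 2-D array; ragged rows are wrapped
    # as 1-D arrays first so the result is a 1-D object array instead of
    # raising on the inhomogeneous shape.
    if isinstance(value, (list, tuple)) and value and isinstance(value[0], (list, tuple, np.ndarray)):
        if len({len(row) for row in value}) > 1 and dtype is None:
            return np.asarray([np.asarray(row) for row in value], dtype=object)
    return np.asarray(value, dtype=dtype)

print(as_numpy([[1, 2], [3, 4]]).shape)     # (2, 2)
print(as_numpy([[1, 2], [3, 4, 5]]).dtype)  # object
```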
def to(self, device: Union[str, 'torch.device'], *, non_blocking: bool=False) -> 'BatchEncoding':
"""
Send all values to device by calling `v.to(device, non_blocking=non_blocking)` (PyTorch only).
Args:
device (`str` or `torch.device`): The device to put the tensors on.
non_blocking (`bool`): Whether to perform the copy asynchronously.
Returns:
[`BatchEncoding`]: The same instance after modification.
"""
requires_backends(self, ['torch'])
if isinstance(device, str) or is_torch_device(device) or isinstance(device, int):
self.data = {k: v.to(device=device, non_blocking=non_blocking) if hasattr(v, 'to') and callable(v.to) else v for k, v in self.data.items()}
else:
logger.warning(f'Attempting to cast a BatchEncoding to type {str(device)}. This is not supported.')
return self
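The dict comprehension in `to` only moves values that expose a callable `.to`, leaving plain Python values untouched. A torch-free sketch of that duck typing (`FakeTensor` is a stand-in, not a real class):

```python
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device=None, non_blocking=False):
        # Returns a new object "on" the requested device, like torch.Tensor.to.
        return FakeTensor(device)

data = {"input_ids": FakeTensor(), "num_truncated": 7}
# Same pattern as BatchEncoding.to: only values with a callable .to are moved.
moved = {k: v.to(device="cuda:0") if hasattr(v, "to") and callable(v.to) else v
         for k, v in data.items()}
print(moved["input_ids"].device)  # cuda:0
print(moved["num_truncated"])     # 7
```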
|
class BatchEncoding(UserDict):
'''
Holds the output of the [`~tokenization_utils_base.PreTrainedTokenizerBase.__call__`],
[`~tokenization_utils_base.PreTrainedTokenizerBase.encode_plus`] and
[`~tokenization_utils_base.PreTrainedTokenizerBase.batch_encode_plus`] methods (tokens, attention_masks, etc).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes
utility methods to map from word/character space to token space.
Args:
data (`dict`, *optional*):
Dictionary of lists/arrays/tensors returned by the `__call__`/`encode_plus`/`batch_encode_plus` methods
('input_ids', 'attention_mask', etc.).
encoding (`tokenizers.Encoding` or `Sequence[tokenizers.Encoding]`, *optional*):
If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character
space to token space the `tokenizers.Encoding` instance or list of instance (for batches) hold this
information.
tensor_type (`Union[None, str, TensorType]`, *optional*):
You can give a tensor_type here to convert the lists of integers into PyTorch/NumPy tensors at
initialization.
prepend_batch_axis (`bool`, *optional*, defaults to `False`):
Whether or not to add a batch axis when converting to tensors (see `tensor_type` above). Note that this
parameter only has an effect if the parameter `tensor_type` is set, *otherwise it has no effect*.
n_sequences (`Optional[int]`, *optional*):
The number of sequences used to generate each sample from the batch (`1` for a single sentence, `2`
for a pair of sentences).
'''
def __init__(self, data: Optional[dict[str, Any]]=None, encoding: Optional[Union[EncodingFast, Sequence[EncodingFast]]]=None, tensor_type: Union[None, str, TensorType]=None, prepend_batch_axis: bool=False, n_sequences: Optional[int]=None):
pass
@property
def n_sequences(self) -> Optional[int]:
'''
`Optional[int]`: The number of sequences used to generate each sample from the batch encoded in this
[`BatchEncoding`]. Currently can be one of `None` (unknown), `1` (a single sentence) or `2` (a pair of
sentences)
'''
pass
@property
def is_fast(self) -> bool:
'''
`bool`: Indicate whether this [`BatchEncoding`] was generated from the result of a [`PreTrainedTokenizerFast`]
or not.
'''
pass
def __getitem__(self, item: Union[int, str]) -> Union[Any, EncodingFast]:
'''
If the key is a string, returns the value of the dict associated to `key` ('input_ids', 'attention_mask',
etc.).
If the key is an integer, get the `tokenizers.Encoding` for batch item with index `key`.
If the key is a slice, returns the value of the dict associated to `key` ('input_ids', 'attention_mask', etc.)
with the constraint of slice.
'''
pass
def __getattr__(self, item: str):
pass
def __getstate__(self):
pass
def __setstate__(self, state):
pass
@property
def encodings(self) -> Optional[list[EncodingFast]]:
'''
`Optional[list[tokenizers.Encoding]]`: The list all encodings from the tokenization process. Returns `None` if
the input was tokenized through Python (i.e., not a fast) tokenizer.
'''
pass
def tokens(self, batch_index: int=0) -> list[str]:
'''
Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to
integer indices) at a given batch index (only works for the output of a fast tokenizer).
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[str]`: The list of tokens at that index.
'''
pass
def sequence_ids(self, batch_index: int=0) -> list[Optional[int]]:
'''
Return a list mapping the tokens to the id of their original sentences:
- `None` for special tokens added around or between sequences,
- `0` for tokens corresponding to words in the first sequence,
- `1` for tokens corresponding to words in the second sequence when a pair of sequences was jointly
encoded.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the sequence id corresponding to each token. Special tokens added
by the tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding
sequence.
'''
pass
def words(self, batch_index: int=0) -> list[Optional[int]]:
'''
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added by the
tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word
(several tokens will be mapped to the same word index if they are parts of that word).
'''
pass
def word_ids(self, batch_index: int=0) -> list[Optional[int]]:
'''
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
Args:
batch_index (`int`, *optional*, defaults to 0): The index to access in the batch.
Returns:
`list[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added by the
tokenizer are mapped to `None` and other tokens are mapped to the index of their corresponding word
(several tokens will be mapped to the same word index if they are parts of that word).
'''
pass
def token_to_sequence(self, batch_or_token_index: int, token_index: Optional[int]=None) -> int:
'''
Get the index of the sequence represented by the given token. In the general use case, this method returns `0`
for a single sequence or the first sequence of a pair, and `1` for the second sequence of a pair
Can be called as:
- `self.token_to_sequence(token_index)` if batch size is 1
- `self.token_to_sequence(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e., words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token in the
sequence.
Returns:
`int`: Index of the word in the input sequence.
'''
pass
def token_to_word(self, batch_or_token_index: int, token_index: Optional[int]=None) -> int:
'''
Get the index of the word corresponding (i.e. comprising) to an encoded token in a sequence of the batch.
Can be called as:
- `self.token_to_word(token_index)` if batch size is 1
- `self.token_to_word(batch_index, token_index)` if batch size is greater than 1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e., words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token in the
sequence.
Returns:
`int`: Index of the word in the input sequence.
'''
pass
def word_to_tokens(self, batch_or_word_index: int, word_index: Optional[int]=None, sequence_index: int=0) -> Optional[TokenSpan]:
'''
Get the encoded token span corresponding to a word in a sequence of the batch.
Token spans are returned as a [`~tokenization_utils_base.TokenSpan`] with:
- **start** -- Index of the first token.
- **end** -- Index of the token following the last token.
Can be called as:
- `self.word_to_tokens(word_index, sequence_index: int = 0)` if batch size is 1
- `self.word_to_tokens(batch_index, word_index, sequence_index: int = 0)` if batch size is greater or equal to
1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_word_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the word in the sequence.
word_index (`int`, *optional*):
If a batch index is provided in *batch_or_word_index*, this can be the index of the word in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided word index belongs to.
Returns:
([`~tokenization_utils_base.TokenSpan`], *optional*): Span of tokens in the encoded sequence. Returns
`None` if no tokens correspond to the word. This can happen especially when the token is a special token
that has been used to format the tokenization. For example when we add a class token at the very beginning
of the tokenization.
'''
pass
def token_to_chars(self, batch_or_token_index: int, token_index: Optional[int]=None) -> Optional[CharSpan]:
'''
Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a [`~tokenization_utils_base.CharSpan`] with:
- **start** -- Index of the first character in the original string associated to the token.
- **end** -- Index of the character following the last character in the original string associated to the
token.
Can be called as:
- `self.token_to_chars(token_index)` if batch size is 1
- `self.token_to_chars(batch_index, token_index)` if batch size is greater or equal to 1
Args:
batch_or_token_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the token in the sequence.
token_index (`int`, *optional*):
If a batch index is provided in *batch_or_token_index*, this can be the index of the token or tokens in
the sequence.
Returns:
[`~tokenization_utils_base.CharSpan`]: Span of characters in the original string, or None, if the token
(e.g. <s>, </s>) doesn't correspond to any chars in the origin string.
'''
pass
def char_to_token(self, batch_or_char_index: int, char_index: Optional[int]=None, sequence_index: int=0) -> int:
'''
Get the index of the token in the encoded output comprising a character in the original string for a sequence
of the batch.
Can be called as:
- `self.char_to_token(char_index)` if batch size is 1
- `self.char_to_token(batch_index, char_index)` if batch size is greater or equal to 1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_char_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the character in the sequence.
char_index (`int`, *optional*):
If a batch index is provided in *batch_or_char_index*, this can be the index of the character in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided character index belongs to.
Returns:
`int`: Index of the token, or None if the char index refers to a whitespace only token and whitespace is
trimmed with `trim_offsets=True`.
'''
pass
def word_to_chars(self, batch_or_word_index: int, word_index: Optional[int]=None, sequence_index: int=0) -> CharSpan:
'''
Get the character span in the original string corresponding to given word in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
Can be called as:
- `self.word_to_chars(word_index)` if batch size is 1
- `self.word_to_chars(batch_index, word_index)` if batch size is greater or equal to 1
Args:
batch_or_word_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the word in the sequence.
word_index (`int`, *optional*):
If a batch index is provided in *batch_or_word_index*, this can be the index of the word in the
sequence.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided word index belongs to.
Returns:
`CharSpan` or `list[CharSpan]`: Span(s) of the associated character or characters in the string. CharSpan
are NamedTuple with:
- start: index of the first character associated to the token in the original string
- end: index of the character following the last character associated to the token in the original
string
'''
pass
def char_to_word(self, batch_or_char_index: int, char_index: Optional[int]=None, sequence_index: int=0) -> int:
'''
Get the word in the original string corresponding to a character in the original string of a sequence of the
batch.
Can be called as:
- `self.char_to_word(char_index)` if batch size is 1
- `self.char_to_word(batch_index, char_index)` if batch size is greater than 1
This method is particularly suited to inputs provided as pre-tokenized sequences (i.e. words
are defined by the user). In this case it makes it easy to associate encoded tokens with the
provided tokenized words.
Args:
batch_or_char_index (`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, this can be the index of
the character in the original string.
char_index (`int`, *optional*):
If a batch index is provided in *batch_or_char_index*, this can be the index of the character in the
original string.
sequence_index (`int`, *optional*, defaults to 0):
If a pair of sequences is encoded in the batch, this can be used to specify which sequence in the pair
(0 or 1) the provided character index belongs to.
Returns:
`int` or `list[int]`: Index or indices of the associated encoded token(s).
'''
pass
def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]]=None, prepend_batch_axis: bool=False):
'''
Convert the inner content to tensors.
Args:
tensor_type (`str` or [`~utils.TensorType`], *optional*):
The type of tensors to use. If `str`, should be one of the values of the enum [`~utils.TensorType`]. If
`None`, no modification is done.
prepend_batch_axis (`bool`, *optional*, defaults to `False`):
Whether or not to add the batch dimension during the conversion.
'''
pass
def as_tensor(value, dtype=None):
pass
def as_tensor(value, dtype=None):
pass
def is_tensor(obj):
pass
def as_tensor(value, dtype=None):
pass
def to(self, device: Union[str, 'torch.device'], *, non_blocking: bool=False) -> 'BatchEncoding':
'''
Send all values to device by calling `v.to(device, non_blocking=non_blocking)` (PyTorch only).
Args:
device (`str` or `torch.device`): The device to put the tensors on.
non_blocking (`bool`): Whether to perform the copy asynchronously.
Returns:
[`BatchEncoding`]: The same instance after modification.
'''
pass
| 29
| 18
| 22
| 3
| 9
| 9
| 3
| 1.1
| 1
| 18
| 2
| 0
| 24
| 3
| 24
| 79
| 634
| 115
| 248
| 67
| 197
| 272
| 183
| 48
| 150
| 16
| 8
| 3
| 81
|
6,498
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils_base.py
|
transformers.tokenization_utils_base.CharSpan
|
from typing import TYPE_CHECKING, Any, Callable, NamedTuple, Optional, Union
class CharSpan(NamedTuple):
"""
Character span in the original string.
Args:
start (`int`): Index of the first character in the original string.
end (`int`): Index of the character following the last character in the original string.
"""
start: int
end: int
|
class CharSpan(NamedTuple):
'''
Character span in the original string.
Args:
start (`int`): Index of the first character in the original string.
end (`int`): Index of the character following the last character in the original string.
'''
pass
| 1
| 1
| 0
| 0
| 0
| 0
| 0
| 2
| 1
| 0
| 0
| 0
| 0
| 0
| 0
| 0
| 11
| 2
| 3
| 1
| 2
| 6
| 3
| 1
| 2
| 0
| 1
| 0
| 0
|
6,499
|
huggingface/pytorch-pretrained-BERT
|
huggingface_pytorch-pretrained-BERT/src/transformers/tokenization_utils_base.py
|
transformers.tokenization_utils_base.PreTrainedTokenizerBase
|
import copy
from .utils.chat_template_utils import render_jinja_template
from pathlib import Path
import numpy as np
import warnings
from .utils import CHAT_TEMPLATE_DIR, CHAT_TEMPLATE_FILE, ExplicitEnum, PaddingStrategy, PushToHubMixin, TensorType, add_end_docstrings, cached_file, copy_func, download_url, extract_commit_hash, is_mlx_available, is_numpy_array, is_offline_mode, is_protobuf_available, is_remote_url, is_tokenizers_available, is_torch_available, is_torch_device, is_torch_tensor, list_repo_templates, logging, requires_backends, to_py_obj
from contextlib import contextmanager
import os
from .dynamic_module_utils import custom_object_save
import json
from collections.abc import Mapping, Sequence, Sized
from typing import TYPE_CHECKING, Any, Callable, NamedTuple, Optional, Union
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING)
class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
"""
Base class for [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`].
Handles shared (mostly boiler plate) methods for those two classes.
"""
vocab_files_names: dict[str, str] = {}
pretrained_vocab_files_map: dict[str, dict[str, str]] = {}
_auto_class: Optional[str] = None
model_input_names: list[str] = ['input_ids', 'token_type_ids', 'attention_mask']
padding_side: str = 'right'
truncation_side: str = 'right'
slow_tokenizer_class = None
def __init__(self, **kwargs):
self.init_inputs = ()
for key in kwargs:
if hasattr(self, key) and callable(getattr(self, key)):
raise AttributeError(f'{key} conflicts with the method {key} in {self.__class__.__name__}')
self.init_kwargs = copy.deepcopy(kwargs)
self.name_or_path = kwargs.pop('name_or_path', '')
self._processor_class = kwargs.pop('processor_class', None)
model_max_length = kwargs.pop('model_max_length', kwargs.pop('max_len', None))
self.model_max_length = model_max_length if model_max_length is not None else VERY_LARGE_INTEGER
self.padding_side = kwargs.pop('padding_side', self.padding_side)
if self.padding_side not in ['right', 'left']:
raise ValueError(f"Padding side should be selected between 'right' and 'left', current value: {self.padding_side}")
self.truncation_side = kwargs.pop('truncation_side', self.truncation_side)
if self.truncation_side not in ['right', 'left']:
raise ValueError(f"Truncation side should be selected between 'right' and 'left', current value: {self.truncation_side}")
self.model_input_names = kwargs.pop('model_input_names', self.model_input_names)
self.clean_up_tokenization_spaces = kwargs.pop('clean_up_tokenization_spaces', False)
self.split_special_tokens = kwargs.pop('split_special_tokens', False)
self.deprecation_warnings = {}
self._in_target_context_manager = False
self.chat_template = kwargs.pop('chat_template', None)
if isinstance(self.chat_template, (list, tuple)):
self.chat_template = {template['name']: template['template'] for template in self.chat_template}
super().__init__(**kwargs)
self.extra_special_tokens = kwargs.pop('extra_special_tokens', {})
self._set_model_specific_special_tokens(special_tokens=self.extra_special_tokens)
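One step in `__init__` worth highlighting: a `chat_template` passed as a list of named templates is normalized into a `name -> template` dict. A standalone sketch with made-up template strings:

```python
chat_template = [
    {"name": "default", "template": "{{ messages }}"},
    {"name": "tool_use", "template": "{{ tools }}"},
]
# Same normalization as in __init__: a list/tuple of records becomes a dict.
if isinstance(chat_template, (list, tuple)):
    chat_template = {t["name"]: t["template"] for t in chat_template}
print(chat_template["default"])  # {{ messages }}
```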
@property
def max_len_single_sentence(self) -> int:
"""
`int`: The maximum length of a sentence that can be fed to the model.
"""
return self.model_max_length - self.num_special_tokens_to_add(pair=False)
@property
def max_len_sentences_pair(self) -> int:
"""
`int`: The maximum combined length of a pair of sentences that can be fed to the model.
"""
return self.model_max_length - self.num_special_tokens_to_add(pair=True)
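The two properties above are simple arithmetic on `model_max_length`. A self-contained sketch with BERT-like numbers (the special-token counts are illustrative, not read from any real tokenizer):

```python
class TinyTokenizer:
    model_max_length = 512

    def num_special_tokens_to_add(self, pair=False):
        # e.g. [CLS] + [SEP] for a single sentence, [CLS] + 2 x [SEP] for a pair
        return 3 if pair else 2

    @property
    def max_len_single_sentence(self):
        return self.model_max_length - self.num_special_tokens_to_add(pair=False)

    @property
    def max_len_sentences_pair(self):
        return self.model_max_length - self.num_special_tokens_to_add(pair=True)

tok = TinyTokenizer()
print(tok.max_len_single_sentence)  # 510
print(tok.max_len_sentences_pair)   # 509
```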
@max_len_single_sentence.setter
def max_len_single_sentence(self, value) -> int:
if value == self.model_max_length - self.num_special_tokens_to_add(pair=False) and self.verbose:
if not self.deprecation_warnings.get('max_len_single_sentence', False):
logger.warning("Setting 'max_len_single_sentence' is now deprecated. This value is automatically set up.")
self.deprecation_warnings['max_len_single_sentence'] = True
else:
raise ValueError("Setting 'max_len_single_sentence' is now deprecated. This value is automatically set up.")
@max_len_sentences_pair.setter
def max_len_sentences_pair(self, value) -> int:
if value == self.model_max_length - self.num_special_tokens_to_add(pair=True) and self.verbose:
if not self.deprecation_warnings.get('max_len_sentences_pair', False):
logger.warning("Setting 'max_len_sentences_pair' is now deprecated. This value is automatically set up.")
self.deprecation_warnings['max_len_sentences_pair'] = True
else:
raise ValueError("Setting 'max_len_sentences_pair' is now deprecated. This value is automatically set up.")
def _set_processor_class(self, processor_class: str):
"""Sets processor class as an attribute."""
self._processor_class = processor_class
@property
def added_tokens_decoder(self) -> dict[int, AddedToken]:
raise NotImplementedError()
def __repr__(self) -> str:
added_tokens_decoder_rep = '\n\t'.join([f'{k}: {v.__repr__()},' for k, v in self.added_tokens_decoder.items()])
return f"{self.__class__.__name__}(name_or_path='{self.name_or_path}', vocab_size={self.vocab_size}, model_max_length={self.model_max_length}, is_fast={self.is_fast}, padding_side='{self.padding_side}', truncation_side='{self.truncation_side}', special_tokens={self.special_tokens_map}, clean_up_tokenization_spaces={self.clean_up_tokenization_spaces}, added_tokens_decoder={{\n\t" + added_tokens_decoder_rep + '\n}\n)'
def __len__(self) -> int:
raise NotImplementedError()
def get_vocab(self) -> dict[str, int]:
"""
Returns the vocabulary as a dictionary of token to index.
`tokenizer.get_vocab()[token]` is equivalent to `tokenizer.convert_tokens_to_ids(token)` when `token` is in the
vocab.
Returns:
`dict[str, int]`: The vocabulary.
"""
raise NotImplementedError()
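The equivalence stated in the docstring can be illustrated with a toy vocabulary (tokens and ids are made up):

```python
vocab = {"[PAD]": 0, "[UNK]": 1, "hello": 2, "world": 3}

def convert_tokens_to_ids(token):
    # In-vocab tokens map to their id; unknown tokens fall back to [UNK].
    return vocab.get(token, vocab["[UNK]"])

# For any in-vocab token, vocab[token] == convert_tokens_to_ids(token).
print(convert_tokens_to_ids("hello"))    # 2
print(convert_tokens_to_ids("klingon"))  # 1 ([UNK])
```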
def apply_chat_template(self, conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]], tools: Optional[list[Union[dict, Callable]]]=None, documents: Optional[list[dict[str, str]]]=None, chat_template: Optional[str]=None, add_generation_prompt: bool=False, continue_final_message: bool=False, tokenize: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: bool=False, max_length: Optional[int]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_dict: bool=False, return_assistant_tokens_mask: bool=False, tokenizer_kwargs: Optional[dict[str, Any]]=None, **kwargs) -> Union[str, list[int], list[str], list[list[int]], BatchEncoding]:
"""
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token
ids. This method is intended for use with chat models, and will read the tokenizer's chat_template attribute to
determine the format and control tokens to use when converting.
Args:
conversation (Union[list[dict[str, str]], list[list[dict[str, str]]]]): A list of dicts
with "role" and "content" keys, representing the chat history so far.
tools (`list[Union[Dict, Callable]]`, *optional*):
A list of tools (callable functions) that will be accessible to the model. If the template does not
support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema,
giving the name, description and argument types for the tool. See our
[chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use)
for more information.
documents (`list[dict[str, str]]`, *optional*):
A list of dicts representing documents that will be accessible to the model if it is performing RAG
(retrieval-augmented generation). If the template does not support RAG, this argument will have no
effect. We recommend that each document should be a dict containing "title" and "text" keys. Please
see the RAG section of the [chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#arguments-for-RAG)
for examples of passing documents with chat templates.
chat_template (`str`, *optional*):
A Jinja template to use for this conversion. It is usually not necessary to pass anything to this
argument, as the model's template will be used by default.
add_generation_prompt (bool, *optional*):
If this is set, a prompt with the token(s) that indicate
the start of an assistant message will be appended to the formatted output. This is useful when you want to generate a response from the model.
Note that this argument will be passed to the chat template, and so it must be supported in the
template for this argument to have any effect.
continue_final_message (bool, *optional*):
If this is set, the chat will be formatted so that the final
message in the chat is open-ended, without any EOS tokens. The model will continue this message
rather than starting a new one. This allows you to "prefill" part of
the model's response for it. Cannot be used at the same time as `add_generation_prompt`.
tokenize (`bool`, defaults to `True`):
Whether to tokenize the output. If `False`, the output will be a string.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (`bool`, defaults to `False`):
Whether to truncate sequences at the maximum length. Has no effect if tokenize is `False`.
max_length (`int`, *optional*):
Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is `False`. If
not specified, the tokenizer's `max_length` attribute will be used as a default.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Has no effect if tokenize is `False`. Acceptable
values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
return_dict (`bool`, defaults to `False`):
Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
tokenizer_kwargs (`dict[str, Any]`, *optional*): Additional kwargs to pass to the tokenizer.
return_assistant_tokens_mask (`bool`, defaults to `False`):
Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant,
the mask will contain 1. For user and system tokens, the mask will contain 0.
This functionality is only available for chat templates that support it via the `{% generation %}` keyword.
**kwargs: Additional kwargs to pass to the template renderer. Will be accessible by the chat template.
Returns:
`Union[list[int], dict]`: A list of token ids representing the tokenized chat so far, including control tokens. This
output is ready to pass to the model, either directly or via methods like `generate()`. If `return_dict` is
set, will return a dict of tokenizer outputs instead.
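The assistant-token masking can be pictured with a small standalone sketch (all token offsets and the
assistant character span below are invented for illustration, not produced by any real tokenizer):

```python
# Sketch: deriving a 0/1 assistant-token mask from character spans.
# Each token has a (start, end) character range; tokens fully inside the
# span rendered by a {% generation %} block are marked with 1.
offsets = [(0, 5), (5, 9), (9, 14), (14, 20)]  # per-token char ranges (invented)
assistant_span = (9, 20)                        # chars generated by the assistant (invented)

mask = [
    1 if start >= assistant_span[0] and end <= assistant_span[1] else 0
    for start, end in offsets
]
```

The real implementation uses `char_to_token` on the tokenizer output instead of raw offsets, but the
mapping from character spans to a per-token mask follows the same idea.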
"""
if return_dict and (not tokenize):
raise ValueError('`return_dict=True` is incompatible with `tokenize=False`, because there is no dict of tokenizer outputs to return.')
if return_assistant_tokens_mask and (not return_dict):
raise ValueError('`return_assistant_tokens_mask=True` is incompatible with `return_dict=False`')
if tokenizer_kwargs is None:
tokenizer_kwargs = {}
chat_template = self.get_chat_template(chat_template, tools)
if isinstance(conversation, (list, tuple)) and (isinstance(conversation[0], (list, tuple)) or hasattr(conversation[0], 'messages')):
conversations = conversation
is_batched = True
else:
conversations = [conversation]
is_batched = False
if continue_final_message:
if add_generation_prompt:
raise ValueError('continue_final_message and add_generation_prompt are not compatible. Use continue_final_message when you want the model to continue the final message, and add_generation_prompt when you want to add a header that will prompt it to start a new assistant message instead.')
if return_assistant_tokens_mask:
raise ValueError('continue_final_message is not compatible with return_assistant_tokens_mask.')
template_kwargs = {**self.special_tokens_map, **kwargs}
rendered_chat, generation_indices = render_jinja_template(conversations=conversations, tools=tools, documents=documents, chat_template=chat_template, return_assistant_tokens_mask=return_assistant_tokens_mask, continue_final_message=continue_final_message, add_generation_prompt=add_generation_prompt, **template_kwargs)
if not is_batched:
rendered_chat = rendered_chat[0]
if tokenize:
out = self(rendered_chat, padding=padding, truncation=truncation, max_length=max_length, add_special_tokens=False, return_tensors=return_tensors, **tokenizer_kwargs)
if return_dict:
if return_assistant_tokens_mask:
assistant_masks = []
if is_batched or return_tensors:
input_ids = out['input_ids']
else:
input_ids = [out['input_ids']]
for i in range(len(input_ids)):
current_mask = [0] * len(input_ids[i])
for assistant_start_char, assistant_end_char in generation_indices[i]:
start_token = out.char_to_token(i, assistant_start_char)
end_token = out.char_to_token(i, assistant_end_char - 1)
if start_token is None:
break
for token_id in range(start_token, end_token + 1 if end_token is not None else len(input_ids[i])):
current_mask[token_id] = 1
assistant_masks.append(current_mask)
if not is_batched and (not return_tensors):
assistant_masks = assistant_masks[0]
out['assistant_masks'] = assistant_masks
if return_tensors:
out.convert_to_tensors(tensor_type=return_tensors)
return out
else:
return out['input_ids']
else:
return rendered_chat
def encode_message_with_chat_template(self, message: dict[str, str], conversation_history: Optional[list[dict[str, str]]]=None, **kwargs) -> list[int]:
"""
Tokenize a single message. This method is a convenience wrapper around `apply_chat_template` that allows you
to tokenize messages one by one. This is useful for things like token-by-token streaming.
This method is not guaranteed to be perfect. For some models, it may be impossible to robustly tokenize
single messages. For example, if the chat template adds tokens after each message, but also has a prefix that
is added to the entire chat, it will be impossible to distinguish a chat-start-token from a message-start-token.
In these cases, this method will do its best to find the correct tokenization, but it may not be perfect.
**Note:** This method does not support `add_generation_prompt`. If you want to add a generation prompt,
you should do it separately after tokenizing the conversation.
Args:
message (`dict`):
A dictionary with "role" and "content" keys, representing the message to tokenize.
conversation_history (`list[dict]`, *optional*):
A list of dicts with "role" and "content" keys, representing the chat history so far. If you are
tokenizing messages one by one, you should pass the previous messages in the conversation here.
**kwargs:
Additional kwargs to pass to the `apply_chat_template` method.
Returns:
`list[int]`: A list of token ids representing the tokenized message.
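The prefix-matching idea used by this method can be sketched in isolation (token ids below are
made up; in practice both lists come from `apply_chat_template`):

```python
# Sketch of the prefix-diff logic: tokenize the history alone and the
# history plus the new message, then keep only the tokens the new
# message added, starting at the first point where the two diverge.
prefix_tokens = [1, 10, 11, 2]            # chat history tokenized alone (invented ids)
tokens = [1, 10, 11, 12, 20, 21, 2]       # history + new message (invented ids)

new_tokens = tokens[len(prefix_tokens):]  # fallback when the prefix matches exactly
for i in range(min(len(prefix_tokens), len(tokens))):
    if prefix_tokens[i] != tokens[i]:
        new_tokens = tokens[i:]           # first divergence point
        break
```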
"""
if 'add_generation_prompt' in kwargs:
raise ValueError('`encode_message_with_chat_template` does not support `add_generation_prompt`. Please add the generation prompt separately.')
if conversation_history is None or len(conversation_history) == 0:
return self.apply_chat_template([message], add_generation_prompt=False, tokenize=True, **kwargs)
conversation = conversation_history + [message]
tokens = self.apply_chat_template(conversation, add_generation_prompt=False, tokenize=True, **kwargs)
prefix_tokens = self.apply_chat_template(conversation_history, add_generation_prompt=False, tokenize=True, **kwargs)
min_len = min(len(prefix_tokens), len(tokens))
for i in range(min_len):
if prefix_tokens[i] != tokens[i]:
return tokens[i:]
return tokens[min_len:]
def get_chat_template(self, chat_template: Optional[str]=None, tools: Optional[list[dict]]=None) -> str:
"""
Retrieve the chat template string used for tokenizing chat messages. This template is used
internally by the `apply_chat_template` method and can also be used externally to retrieve the model's chat
template for better generation tracking.
Args:
chat_template (`str`, *optional*):
A Jinja template or the name of a template to use for this conversion.
It is usually not necessary to pass anything to this argument,
as the model's template will be used by default.
tools (`list[Dict]`, *optional*):
A list of tools (callable functions) that will be accessible to the model. If the template does not
support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema,
giving the name, description and argument types for the tool. See our
[chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use)
for more information.
Returns:
`str`: The chat template string.
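When a tokenizer carries a dict of named templates, the resolution order can be sketched as follows
(a simplified illustration: template names and contents are invented, and unlike the real method this
sketch does not raise when no default exists):

```python
# Sketch of the resolution order for multiple named templates:
# an explicitly requested name wins, then "tool_use" when tools are
# passed, then "default".
templates = {"default": "<default template>", "tool_use": "<tool template>"}

def resolve(name=None, tools=None):
    if name is not None and name in templates:
        return templates[name]
    if tools is not None and "tool_use" in templates:
        return templates["tool_use"]
    return templates["default"]
```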
"""
if isinstance(self.chat_template, dict):
template_dict = self.chat_template
if chat_template is not None and chat_template in template_dict:
chat_template = template_dict[chat_template]
elif chat_template is None:
if tools is not None and 'tool_use' in template_dict:
chat_template = template_dict['tool_use']
elif 'default' in template_dict:
chat_template = template_dict['default']
else:
raise ValueError(f'This model has multiple chat templates with no default specified! Please either pass a chat template or the name of the template you wish to use to the `chat_template` argument. Available template names are {sorted(template_dict.keys())}.')
elif chat_template is None:
if self.chat_template is not None:
chat_template = self.chat_template
else:
raise ValueError('Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at https://huggingface.co/docs/transformers/main/en/chat_templating')
return chat_template
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], *init_inputs, cache_dir: Optional[Union[str, os.PathLike]]=None, force_download: bool=False, local_files_only: bool=False, token: Optional[Union[str, bool]]=None, revision: str='main', trust_remote_code=False, **kwargs):
"""
Instantiate a [`~tokenization_utils_base.PreTrainedTokenizerBase`] (or a derived class) from a predefined
tokenizer.
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
Can be either:
- A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
- A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
using the [`~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`] method, e.g.,
`./my_model_directory/`.
- (**Deprecated**, not applicable to all derived classes) A path or url to a single saved vocabulary
file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g.,
`./my_model_directory/vocab.txt`.
cache_dir (`str` or `os.PathLike`, *optional*):
Path to a directory in which a downloaded predefined tokenizer vocabulary files should be cached if the
standard cache should not be used.
force_download (`bool`, *optional*, defaults to `False`):
Whether or not to force the (re-)download of the vocabulary files and override the cached versions if
they exist.
resume_download:
Deprecated and ignored. All downloads are now resumed by default when possible.
Will be removed in v5 of Transformers.
proxies (`dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
when running `hf auth login` (stored in `~/.huggingface`).
local_files_only (`bool`, *optional*, defaults to `False`):
Whether or not to only rely on local files and not to attempt to download any files.
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
identifier allowed by git.
subfolder (`str`, *optional*):
In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for
facebook/rag-token-base), specify it here.
inputs (additional positional arguments, *optional*):
Will be passed along to the Tokenizer `__init__` method.
trust_remote_code (`bool`, *optional*, defaults to `False`):
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to `True` for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, *optional*):
Will be passed to the Tokenizer `__init__` method. Can be used to set special tokens like `bos_token`,
`eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
`additional_special_tokens`. See parameters in the `__init__` for more details.
<Tip>
Passing `token=True` is required when you want to use a private model.
</Tip>
Examples:
```python
# We can't instantiate directly the base class *PreTrainedTokenizerBase* so let's show our examples on a derived class: BertTokenizer
# Download vocabulary from huggingface.co and cache.
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Download vocabulary from huggingface.co (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
# If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/")
# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/my_vocab.txt")
# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased", unk_token="<unk>")
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead.
assert tokenizer.unk_token == "<unk>"
```"""
resume_download = kwargs.pop('resume_download', None)
proxies = kwargs.pop('proxies', None)
use_auth_token = kwargs.pop('use_auth_token', None)
subfolder = kwargs.pop('subfolder', None)
from_pipeline = kwargs.pop('_from_pipeline', None)
from_auto_class = kwargs.pop('_from_auto', False)
commit_hash = kwargs.pop('_commit_hash', None)
gguf_file = kwargs.get('gguf_file')
if use_auth_token is not None:
warnings.warn('The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.', FutureWarning)
if token is not None:
raise ValueError('`token` and `use_auth_token` are both specified. Please set only the argument `token`.')
token = use_auth_token
user_agent = {'file_type': 'tokenizer', 'from_auto_class': from_auto_class, 'is_fast': 'Fast' in cls.__name__}
if from_pipeline is not None:
user_agent['using_pipeline'] = from_pipeline
if is_offline_mode() and (not local_files_only):
logger.info('Offline mode: forcing local_files_only=True')
local_files_only = True
pretrained_model_name_or_path = str(pretrained_model_name_or_path)
vocab_files = {}
init_configuration = {}
is_local = os.path.isdir(pretrained_model_name_or_path)
single_file_id = None
if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
if len(cls.vocab_files_names) > 1 and (not gguf_file):
raise ValueError(f'Calling {cls.__name__}.from_pretrained() with the path to a single file or url is not supported for this tokenizer. Use a model identifier or the path to a directory instead.')
warnings.warn(f"Calling {cls.__name__}.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.", FutureWarning)
file_id = list(cls.vocab_files_names.keys())[0]
vocab_files[file_id] = pretrained_model_name_or_path
single_file_id = file_id
elif gguf_file:
vocab_files['vocab_file'] = gguf_file
else:
additional_files_names = {'added_tokens_file': ADDED_TOKENS_FILE, 'special_tokens_map_file': SPECIAL_TOKENS_MAP_FILE, 'tokenizer_config_file': TOKENIZER_CONFIG_FILE, 'tokenizer_file': FULL_TOKENIZER_FILE, 'chat_template_file': CHAT_TEMPLATE_FILE}
vocab_files = {**cls.vocab_files_names, **additional_files_names}
if 'tokenizer_file' in vocab_files:
fast_tokenizer_file = FULL_TOKENIZER_FILE
try:
resolved_config_file = cached_file(pretrained_model_name_or_path, TOKENIZER_CONFIG_FILE, cache_dir=cache_dir, force_download=force_download, resume_download=resume_download, proxies=proxies, token=token, revision=revision, local_files_only=local_files_only, subfolder=subfolder, user_agent=user_agent, _raise_exceptions_for_missing_entries=False, _commit_hash=commit_hash)
except OSError:
raise
except Exception:
raise OSError(f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory containing all relevant files for a {cls.__name__} tokenizer.")
commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
if resolved_config_file is not None:
with open(resolved_config_file, encoding='utf-8') as reader:
tokenizer_config = json.load(reader)
if 'fast_tokenizer_files' in tokenizer_config:
fast_tokenizer_file = get_fast_tokenizer_file(tokenizer_config['fast_tokenizer_files'])
vocab_files['tokenizer_file'] = fast_tokenizer_file
if is_local:
template_dir = Path(pretrained_model_name_or_path, CHAT_TEMPLATE_DIR)
if template_dir.is_dir():
for template_file in template_dir.glob('*.jinja'):
template_name = template_file.name.removesuffix('.jinja')
vocab_files[f'chat_template_{template_name}'] = f'{CHAT_TEMPLATE_DIR}/{template_file.name}'
else:
for template in list_repo_templates(pretrained_model_name_or_path, local_files_only=local_files_only, revision=revision, cache_dir=cache_dir, token=token):
template = template.removesuffix('.jinja')
vocab_files[f'chat_template_{template}'] = f'{CHAT_TEMPLATE_DIR}/{template}.jinja'
resolved_vocab_files = {}
for file_id, file_path in vocab_files.items():
if file_path is None:
resolved_vocab_files[file_id] = None
elif single_file_id == file_id:
if os.path.isfile(file_path):
resolved_vocab_files[file_id] = file_path
elif is_remote_url(file_path):
resolved_vocab_files[file_id] = download_url(file_path, proxies=proxies)
else:
try:
resolved_vocab_files[file_id] = cached_file(pretrained_model_name_or_path, file_path, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download, local_files_only=local_files_only, token=token, user_agent=user_agent, revision=revision, subfolder=subfolder, _raise_exceptions_for_missing_entries=False, _commit_hash=commit_hash)
except OSError:
raise
except Exception:
raise OSError(f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory containing all relevant files for a {cls.__name__} tokenizer.")
commit_hash = extract_commit_hash(resolved_vocab_files[file_id], commit_hash)
for file_id, file_path in vocab_files.items():
if file_id not in resolved_vocab_files:
continue
if is_local:
logger.info(f'loading file {file_path}')
else:
logger.info(f'loading file {file_path} from cache at {resolved_vocab_files[file_id]}')
return cls._from_pretrained(resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, token=token, cache_dir=cache_dir, local_files_only=local_files_only, _commit_hash=commit_hash, _is_local=is_local, trust_remote_code=trust_remote_code, **kwargs)
@classmethod
def _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, token=None, cache_dir=None, local_files_only=False, _commit_hash=None, _is_local=False, trust_remote_code=False, **kwargs):
from_slow = kwargs.get('from_slow', False)
gguf_file = kwargs.get('gguf_file')
has_tokenizer_file = resolved_vocab_files.get('tokenizer_file', None) is not None
if (from_slow or not has_tokenizer_file) and cls.slow_tokenizer_class is not None and (not gguf_file):
slow_tokenizer = cls.slow_tokenizer_class._from_pretrained(copy.deepcopy(resolved_vocab_files), pretrained_model_name_or_path, copy.deepcopy(init_configuration), *init_inputs, token=token, cache_dir=cache_dir, local_files_only=local_files_only, _commit_hash=_commit_hash, **copy.deepcopy(kwargs))
else:
slow_tokenizer = None
tokenizer_config_file = resolved_vocab_files.pop('tokenizer_config_file', None)
if tokenizer_config_file is not None:
with open(tokenizer_config_file, encoding='utf-8') as tokenizer_config_handle:
init_kwargs = json.load(tokenizer_config_handle)
config_tokenizer_class = init_kwargs.get('tokenizer_class')
init_kwargs.pop('tokenizer_class', None)
if not has_tokenizer_file:
init_kwargs.pop('tokenizer_file', None)
saved_init_inputs = init_kwargs.pop('init_inputs', ())
if not init_inputs:
init_inputs = saved_init_inputs
else:
config_tokenizer_class = None
init_kwargs = init_configuration
chat_templates = {}
chat_template_file = resolved_vocab_files.pop('chat_template_file', None)
extra_chat_templates = [key for key in resolved_vocab_files if key.startswith('chat_template_')]
if chat_template_file is not None:
with open(chat_template_file, encoding='utf-8') as chat_template_handle:
chat_templates['default'] = chat_template_handle.read()
for extra_chat_template in extra_chat_templates:
template_file = resolved_vocab_files.pop(extra_chat_template, None)
if template_file is None:
continue
template_name = extra_chat_template.removeprefix('chat_template_')
with open(template_file) as chat_template_handle:
chat_templates[template_name] = chat_template_handle.read()
if len(chat_templates) == 1 and 'default' in chat_templates:
init_kwargs['chat_template'] = chat_templates['default']
elif chat_templates:
init_kwargs['chat_template'] = chat_templates
if not _is_local:
if 'auto_map' in init_kwargs:
if isinstance(init_kwargs['auto_map'], (tuple, list)):
init_kwargs['auto_map'] = {'AutoTokenizer': init_kwargs['auto_map']}
if config_tokenizer_class is None:
from .models.auto.configuration_auto import AutoConfig
try:
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, token=token, cache_dir=cache_dir, local_files_only=local_files_only, trust_remote_code=trust_remote_code, _commit_hash=_commit_hash)
config_tokenizer_class = config.tokenizer_class
except (OSError, ValueError, KeyError):
config = None
if config_tokenizer_class is None:
from .models.auto.tokenization_auto import TOKENIZER_MAPPING_NAMES
if hasattr(config, 'model_type'):
model_type = config.model_type
else:
model_type = None
for pattern in TOKENIZER_MAPPING_NAMES:
if pattern in str(pretrained_model_name_or_path):
model_type = pattern
break
if model_type is not None:
config_tokenizer_class, config_tokenizer_class_fast = TOKENIZER_MAPPING_NAMES.get(model_type, (None, None))
if config_tokenizer_class is None:
config_tokenizer_class = config_tokenizer_class_fast
if config_tokenizer_class is not None:
if cls.__name__.replace('Fast', '') != config_tokenizer_class.replace('Fast', ''):
logger.warning(f"The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. \nThe tokenizer class you load from this checkpoint is '{config_tokenizer_class}'. \nThe class this function is called from is '{cls.__name__}'.")
init_kwargs.update(kwargs)
added_tokens_file = resolved_vocab_files.pop('added_tokens_file', None)
special_tokens_map_file = resolved_vocab_files.pop('special_tokens_map_file', None)
for args_name, file_path in resolved_vocab_files.items():
if args_name not in init_kwargs:
init_kwargs[args_name] = file_path
tokenizer_file = resolved_vocab_files.pop('tokenizer_file', None)
if slow_tokenizer is not None:
init_kwargs['__slow_tokenizer'] = slow_tokenizer
init_kwargs['name_or_path'] = pretrained_model_name_or_path
added_tokens_decoder: dict[int, AddedToken] = {}
added_tokens_map: dict[str, AddedToken] = {}
if 'added_tokens_decoder' in init_kwargs:
for idx, token in init_kwargs['added_tokens_decoder'].items():
if isinstance(token, dict):
token = AddedToken(**token)
if isinstance(token, AddedToken):
added_tokens_decoder[int(idx)] = token
added_tokens_map[str(token)] = token
else:
raise TypeError(f'Found a {token.__class__} in the saved `added_tokens_decoder`, should be a dictionary or an AddedToken instance')
else:
if special_tokens_map_file is not None:
with open(special_tokens_map_file, encoding='utf-8') as special_tokens_map_handle:
special_tokens_map = json.load(special_tokens_map_handle)
for key, value in special_tokens_map.items():
if key in kwargs and kwargs[key]:
continue
if isinstance(value, dict):
value['special'] = True
value = AddedToken(**value)
elif key == 'additional_special_tokens' and isinstance(value, list):
additional_special_tokens = init_kwargs.pop('additional_special_tokens', []) or []
for token in value:
if isinstance(token, dict):
token['special'] = True
token = AddedToken(**token)
if token not in additional_special_tokens:
additional_special_tokens.append(token)
value = additional_special_tokens
init_kwargs[key] = value
if added_tokens_file is not None:
special_tokens = []
for key in cls.SPECIAL_TOKENS_ATTRIBUTES & init_kwargs.keys():
if init_kwargs[key] is not None:
if key == 'additional_special_tokens':
special_tokens += [str(token) for token in init_kwargs[key]]
else:
special_tokens.append(str(init_kwargs[key]))
with open(added_tokens_file, encoding='utf-8') as added_tokens_handle:
added_tok_encoder = json.load(added_tokens_handle)
for str_token, index in added_tok_encoder.items():
special = str_token in special_tokens
added_tokens_decoder[index] = AddedToken(str_token, rstrip=False, lstrip=False, normalized=not special, special=special)
added_tokens_map[str_token] = added_tokens_decoder[index]
if tokenizer_file is not None:
with open(tokenizer_file, encoding='utf-8') as tokenizer_file_handle:
tokenizer_json = json.load(tokenizer_file_handle)
added_tokens = tokenizer_json.pop('added_tokens')
for serialized_tokens in added_tokens:
idx = serialized_tokens.pop('id')
added_tokens_decoder[idx] = AddedToken(**serialized_tokens)
added_tokens_map[str(added_tokens_decoder[idx])] = added_tokens_decoder[idx]
init_kwargs['added_tokens_decoder'] = added_tokens_decoder
init_kwargs = cls.convert_added_tokens(init_kwargs, save=False)
for key in cls.SPECIAL_TOKENS_ATTRIBUTES & init_kwargs.keys():
if added_tokens_map != {} and init_kwargs[key] is not None:
if key != 'additional_special_tokens':
init_kwargs[key] = added_tokens_map.get(str(init_kwargs[key]), init_kwargs[key])
try:
tokenizer = cls(*init_inputs, **init_kwargs)
except import_protobuf_decode_error():
logger.info('Unable to load tokenizer model from SPM, loading from TikToken will be attempted instead.(Google protobuf error: Tried to load SPM model with non-SPM vocab file).')
return False
except RuntimeError as e:
if 'sentencepiece_processor.cc' in str(e):
logger.info('Unable to load tokenizer model from SPM, loading from TikToken will be attempted instead.(SentencePiece RuntimeError: Tried to load SPM model with non-SPM vocab file).')
return False
except OSError:
raise OSError('Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.')
if added_tokens_decoder != {} and max(added_tokens_decoder.keys()) > tokenizer.vocab_size:
logger.info('Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.')
return tokenizer
@staticmethod
def _eventually_correct_t5_max_length(pretrained_model_name_or_path, max_model_length, init_max_model_length):
return max_model_length
@classmethod
def convert_added_tokens(cls, obj: Union[AddedToken, Any], save=False, add_type_field=True):
if isinstance(obj, dict) and '__type' in obj and (obj['__type'] == 'AddedToken'):
obj.pop('__type')
return AddedToken(**obj)
if isinstance(obj, AddedToken) and save:
obj = obj.__getstate__()
if add_type_field:
obj['__type'] = 'AddedToken'
else:
obj.pop('special')
return obj
elif isinstance(obj, (list, tuple)):
return [cls.convert_added_tokens(o, save=save, add_type_field=add_type_field) for o in obj]
elif isinstance(obj, dict):
return {k: cls.convert_added_tokens(v, save=save, add_type_field=add_type_field) for k, v in obj.items()}
return obj
def save_chat_templates(self, save_directory: Union[str, os.PathLike], tokenizer_config: dict, filename_prefix: Optional[str], save_jinja_files: bool):
"""
Writes chat templates out to the save directory if we're using the new format, and removes them from
the tokenizer config if present. If we're using the legacy format, it doesn't write any files, and instead
writes the templates to the tokenizer config in the correct format.
"""
chat_template_file = os.path.join(save_directory, (filename_prefix + '-' if filename_prefix else '') + CHAT_TEMPLATE_FILE)
chat_template_dir = os.path.join(save_directory, (filename_prefix + '-' if filename_prefix else '') + CHAT_TEMPLATE_DIR)
saved_raw_chat_template_files = []
if save_jinja_files and isinstance(self.chat_template, str):
with open(chat_template_file, 'w', encoding='utf-8') as f:
f.write(self.chat_template)
logger.info(f'chat template saved in {chat_template_file}')
saved_raw_chat_template_files.append(chat_template_file)
if 'chat_template' in tokenizer_config:
tokenizer_config.pop('chat_template')
elif save_jinja_files and isinstance(self.chat_template, dict):
for template_name, template in self.chat_template.items():
if template_name == 'default':
with open(chat_template_file, 'w', encoding='utf-8') as f:
f.write(self.chat_template['default'])
logger.info(f'chat template saved in {chat_template_file}')
saved_raw_chat_template_files.append(chat_template_file)
else:
Path(chat_template_dir).mkdir(exist_ok=True)
template_filepath = os.path.join(chat_template_dir, f'{template_name}.jinja')
with open(template_filepath, 'w', encoding='utf-8') as f:
f.write(template)
logger.info(f'chat template saved in {template_filepath}')
saved_raw_chat_template_files.append(template_filepath)
if 'chat_template' in tokenizer_config:
tokenizer_config.pop('chat_template')
elif isinstance(self.chat_template, dict):
tokenizer_config['chat_template'] = [{'name': k, 'template': v} for k, v in self.chat_template.items()]
elif self.chat_template is not None:
tokenizer_config['chat_template'] = self.chat_template
return (tokenizer_config, saved_raw_chat_template_files)
def save_pretrained(self, save_directory: Union[str, os.PathLike], legacy_format: Optional[bool]=None, filename_prefix: Optional[str]=None, push_to_hub: bool=False, **kwargs) -> tuple[str]:
"""
Save the full tokenizer state.
This method makes sure the full tokenizer can then be re-loaded using the
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`] class method.
Warning: This won't save modifications you may have applied to the tokenizer after the instantiation (for
instance, modifying `tokenizer.do_lower_case` after creation).
Args:
save_directory (`str` or `os.PathLike`): The path to a directory where the tokenizer will be saved.
legacy_format (`bool`, *optional*):
Only applicable for a fast tokenizer. If unset (default), will save the tokenizer in the unified JSON
format as well as in legacy format if it exists, i.e. with tokenizer specific vocabulary and a separate
added_tokens files.
If `False`, will only save the tokenizer in the unified JSON format. This format is incompatible with
"slow" tokenizers (not powered by the *tokenizers* library), so the tokenizer will not be able to be
loaded in the corresponding "slow" tokenizer.
If `True`, will save the tokenizer in legacy format. If the "slow" tokenizer doesn't exist, a value
error is raised.
filename_prefix (`str`, *optional*):
A prefix to add to the names of the files saved by the tokenizer.
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
Returns:
A tuple of `str`: The files saved.
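The optional `filename_prefix` is applied by prepending `"{prefix}-"` to each standard file name; a
minimal sketch of that path construction (the helper name is illustrative, not part of the API):

```python
import os

# Sketch of how filename_prefix is combined with the standard file names
# (mirrors the os.path.join pattern used inside save_pretrained).
def prefixed_path(save_directory, filename_prefix, filename):
    prefix = filename_prefix + "-" if filename_prefix else ""
    return os.path.join(save_directory, prefix + filename)
```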
"""
use_auth_token = kwargs.pop('use_auth_token', None)
if use_auth_token is not None:
warnings.warn('The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.', FutureWarning)
if kwargs.get('token') is not None:
raise ValueError('`token` and `use_auth_token` are both specified. Please set only the argument `token`.')
kwargs['token'] = use_auth_token
if os.path.isfile(save_directory):
logger.error(f'Provided path ({save_directory}) should be a directory, not a file')
return
os.makedirs(save_directory, exist_ok=True)
if push_to_hub:
commit_message = kwargs.pop('commit_message', None)
repo_id = kwargs.pop('repo_id', save_directory.split(os.path.sep)[-1])
repo_id = self._create_repo(repo_id, **kwargs)
files_timestamps = self._get_files_timestamps(save_directory)
special_tokens_map_file = os.path.join(save_directory, (filename_prefix + '-' if filename_prefix else '') + SPECIAL_TOKENS_MAP_FILE)
tokenizer_config_file = os.path.join(save_directory, (filename_prefix + '-' if filename_prefix else '') + TOKENIZER_CONFIG_FILE)
tokenizer_config = copy.deepcopy(self.init_kwargs)
target_keys = set(self.init_kwargs.keys())
target_keys.update(['model_max_length', 'clean_up_tokenization_spaces'])
for k in target_keys:
if hasattr(self, k):
tokenizer_config[k] = getattr(self, k)
tokenizer_config.update(self.special_tokens_map)
if 'extra_special_tokens' not in tokenizer_config:
tokenizer_config['extra_special_tokens'] = self.extra_special_tokens
tokenizer_config.update(self.extra_special_tokens)
save_jinja_files = kwargs.get('save_jinja_files', True)
tokenizer_config, saved_raw_chat_template_files = self.save_chat_templates(save_directory, tokenizer_config, filename_prefix, save_jinja_files)
if len(self.init_inputs) > 0:
tokenizer_config['init_inputs'] = copy.deepcopy(self.init_inputs)
for file_id in self.vocab_files_names:
tokenizer_config.pop(file_id, None)
tokenizer_config = self.convert_added_tokens(tokenizer_config, add_type_field=True, save=True)
added_tokens = {}
for key, value in self.added_tokens_decoder.items():
added_tokens[key] = value.__getstate__()
tokenizer_config['added_tokens_decoder'] = added_tokens
tokenizer_class = self.__class__.__name__
if tokenizer_class.endswith('Fast') and getattr(self, 'can_save_slow_tokenizer', False):
tokenizer_class = tokenizer_class[:-4]
tokenizer_config['tokenizer_class'] = tokenizer_class
if getattr(self, '_auto_map', None) is not None:
tokenizer_config['auto_map'] = self._auto_map
if getattr(self, '_processor_class', None) is not None:
tokenizer_config['processor_class'] = self._processor_class
if self._auto_class is not None:
custom_object_save(self, save_directory, config=tokenizer_config)
if 'name_or_path' in tokenizer_config:
tokenizer_config.pop('name_or_path')
tokenizer_config.pop('special_tokens_map_file', None)
tokenizer_config.pop('tokenizer_file', None)
if 'device_map' in tokenizer_config:
tokenizer_config.pop('device_map')
with open(tokenizer_config_file, 'w', encoding='utf-8') as f:
out_str = json.dumps(tokenizer_config, indent=2, sort_keys=True, ensure_ascii=False) + '\n'
f.write(out_str)
logger.info(f'tokenizer config file saved in {tokenizer_config_file}')
write_dict = self.convert_added_tokens(self.special_tokens_map_extended, save=True, add_type_field=False)
with open(special_tokens_map_file, 'w', encoding='utf-8') as f:
out_str = json.dumps(write_dict, indent=2, sort_keys=True, ensure_ascii=False) + '\n'
f.write(out_str)
logger.info(f'Special tokens file saved in {special_tokens_map_file}')
file_names = (tokenizer_config_file, special_tokens_map_file, *saved_raw_chat_template_files)
save_files = self._save_pretrained(save_directory=save_directory, file_names=file_names, legacy_format=legacy_format, filename_prefix=filename_prefix)
if push_to_hub:
self._upload_modified_files(save_directory, repo_id, files_timestamps, commit_message=commit_message, token=kwargs.get('token'))
return save_files
def _save_pretrained(self, save_directory: Union[str, os.PathLike], file_names: tuple[str], legacy_format: Optional[bool]=None, filename_prefix: Optional[str]=None) -> tuple[str]:
"""
Save a tokenizer using the slow-tokenizer/legacy format: vocabulary + added tokens.
Fast tokenizers can also be saved in a unique JSON file containing {config + vocab + added-tokens} using the
specific [`~tokenization_utils_fast.PreTrainedTokenizerFast._save_pretrained`]
"""
if legacy_format is False:
raise ValueError('Only fast tokenizers (instances of PreTrainedTokenizerFast) can be saved in non legacy format.')
save_directory = str(save_directory)
added_tokens_file = os.path.join(save_directory, (filename_prefix + '-' if filename_prefix else '') + ADDED_TOKENS_FILE)
added_vocab = {tok: index for tok, index in self.added_tokens_encoder.items() if index >= self.vocab_size}
if added_vocab:
with open(added_tokens_file, 'w', encoding='utf-8') as f:
out_str = json.dumps(added_vocab, indent=2, sort_keys=True, ensure_ascii=False) + '\n'
f.write(out_str)
logger.info(f'added tokens file saved in {added_tokens_file}')
vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
return file_names + vocab_files + (added_tokens_file,)
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str]=None) -> tuple[str]:
"""
Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won't save the configuration and special token mappings of the tokenizer. Use
[`~PreTrainedTokenizerFast._save_pretrained`] to save the whole state of the tokenizer.
Args:
save_directory (`str`):
The directory in which to save the vocabulary.
filename_prefix (`str`, *optional*):
An optional prefix to add to the names of the saved files.
Returns:
`tuple[str]`: Paths to the files saved.
"""
raise NotImplementedError
def tokenize(self, text: str, pair: Optional[str]=None, add_special_tokens: bool=False, **kwargs) -> list[str]:
"""
Converts a string into a sequence of tokens, replacing unknown tokens with the `unk_token`.
Args:
text (`str`):
The sequence to be encoded.
pair (`str`, *optional*):
A second sequence to be encoded with the first.
add_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to add the special tokens associated with the corresponding model.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific encode method. See details in
[`~PreTrainedTokenizerBase.__call__`]
Returns:
`list[str]`: The list of tokens.
"""
raise NotImplementedError
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, '\n **kwargs: Passed along to the `.tokenize()` method.\n ', '\n Returns:\n `list[int]`, `torch.Tensor`, or `np.ndarray`: The tokenized ids of the text.\n ')
def encode(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, **kwargs) -> list[int]:
"""
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
Args:
text (`str`, `list[str]` or `list[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the
`tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
text_pair (`str`, `list[str]` or `list[int]`, *optional*):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
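Example (a minimal pure-Python sketch of this equivalence; the whitespace tokenizer and the `vocab` dict are illustrative, not part of the API):

```python
def toy_encode(text, vocab, unk_id=0):
    # toy equivalent of `convert_tokens_to_ids(self.tokenize(text))` for a
    # hypothetical whitespace tokenizer: unknown tokens map to `unk_id`
    tokens = text.split()
    return [vocab.get(tok, unk_id) for tok in tokens]

vocab = {"hello": 7, "world": 8}
print(toy_encode("hello there world", vocab))  # [7, 0, 8]
```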
"""
encoded_inputs = self.encode_plus(text, text_pair=text_pair, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, padding_side=padding_side, return_tensors=return_tensors, **kwargs)
return encoded_inputs['input_ids']
def num_special_tokens_to_add(self, pair: bool=False) -> int:
raise NotImplementedError
def _get_padding_truncation_strategies(self, padding=False, truncation=None, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs):
"""
Find the correct padding/truncation strategy
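The resolution order for the `padding` argument can be sketched as follows (a trimmed-down stand-in for the real `PaddingStrategy` enum, for illustration only):

```python
from enum import Enum

class PaddingStrategy(Enum):
    # trimmed-down stand-in for transformers.utils.PaddingStrategy
    LONGEST = "longest"
    MAX_LENGTH = "max_length"
    DO_NOT_PAD = "do_not_pad"

def resolve_padding(padding):
    # mirrors the branch order used below: False/None -> DO_NOT_PAD,
    # True -> LONGEST, a strategy string -> the matching member,
    # an enum member passes through unchanged
    if padding is False or padding is None:
        return PaddingStrategy.DO_NOT_PAD
    if padding is True:
        return PaddingStrategy.LONGEST
    if isinstance(padding, PaddingStrategy):
        return padding
    return PaddingStrategy(padding)
```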
"""
if max_length is not None and padding is False and (truncation is None):
if verbose:
if not self.deprecation_warnings.get('Truncation-not-explicitly-activated', False):
logger.warning("Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.")
self.deprecation_warnings['Truncation-not-explicitly-activated'] = True
truncation = 'longest_first'
if padding is not False:
if padding is True:
if verbose:
if max_length is not None and (truncation is None or truncation is False or truncation == 'do_not_truncate'):
warnings.warn("`max_length` is ignored when `padding`=`True` and there is no truncation strategy. To pad to max length, use `padding='max_length'`.")
padding_strategy = PaddingStrategy.LONGEST
elif not isinstance(padding, PaddingStrategy):
padding_strategy = PaddingStrategy(padding)
elif isinstance(padding, PaddingStrategy):
padding_strategy = padding
else:
padding_strategy = PaddingStrategy.DO_NOT_PAD
if truncation is not False and truncation is not None:
if truncation is True:
truncation_strategy = TruncationStrategy.LONGEST_FIRST
elif not isinstance(truncation, TruncationStrategy):
truncation_strategy = TruncationStrategy(truncation)
elif isinstance(truncation, TruncationStrategy):
truncation_strategy = truncation
else:
truncation_strategy = TruncationStrategy.DO_NOT_TRUNCATE
if max_length is None:
if padding_strategy == PaddingStrategy.MAX_LENGTH:
if self.model_max_length > LARGE_INTEGER:
if verbose:
if not self.deprecation_warnings.get('Asking-to-pad-to-max_length', False):
logger.warning('Asking to pad to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no padding.')
self.deprecation_warnings['Asking-to-pad-to-max_length'] = True
padding_strategy = PaddingStrategy.DO_NOT_PAD
else:
max_length = self.model_max_length
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE:
if self.model_max_length > LARGE_INTEGER:
if verbose:
if not self.deprecation_warnings.get('Asking-to-truncate-to-max_length', False):
logger.warning('Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.')
self.deprecation_warnings['Asking-to-truncate-to-max_length'] = True
truncation_strategy = TruncationStrategy.DO_NOT_TRUNCATE
else:
max_length = self.model_max_length
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):
raise ValueError("Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.")
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and padding_strategy != PaddingStrategy.DO_NOT_PAD and (pad_to_multiple_of is not None) and (max_length is not None) and (max_length % pad_to_multiple_of != 0):
raise ValueError(f'Truncation and padding are both activated but truncation length ({max_length}) is not a multiple of pad_to_multiple_of ({pad_to_multiple_of}).')
return (padding_strategy, truncation_strategy, max_length, kwargs)
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def __call__(self, text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None]=None, text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, text_target: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None]=None, text_pair_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
"""
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
Args:
text (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_pair (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_target (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_pair_target (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
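The input/target flow can be sketched in pure Python (the `tokenize` callable here is a hypothetical stand-in for the real encoding path):

```python
def call(tokenize, text=None, text_target=None):
    # mirrors the control flow below: inputs are encoded in "input mode",
    # targets in "target mode", and target ids are attached as "labels"
    # when both are given
    if text is None and text_target is None:
        raise ValueError("You need to specify either `text` or `text_target`.")
    enc = {"input_ids": tokenize(text)} if text is not None else None
    tgt = {"input_ids": tokenize(text_target)} if text_target is not None else None
    if tgt is None:
        return enc
    if enc is None:
        return tgt
    enc["labels"] = tgt["input_ids"]
    return enc

out = call(str.split, text="a b", text_target="c")
print(out)  # {'input_ids': ['a', 'b'], 'labels': ['c']}
```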
"""
all_kwargs = {'add_special_tokens': add_special_tokens, 'padding': padding, 'truncation': truncation, 'max_length': max_length, 'stride': stride, 'is_split_into_words': is_split_into_words, 'pad_to_multiple_of': pad_to_multiple_of, 'padding_side': padding_side, 'return_tensors': return_tensors, 'return_token_type_ids': return_token_type_ids, 'return_attention_mask': return_attention_mask, 'return_overflowing_tokens': return_overflowing_tokens, 'return_special_tokens_mask': return_special_tokens_mask, 'return_offsets_mapping': return_offsets_mapping, 'return_length': return_length, 'split_special_tokens': kwargs.pop('split_special_tokens', self.split_special_tokens), 'verbose': verbose}
all_kwargs.update(kwargs)
if text is None and text_target is None:
raise ValueError('You need to specify either `text` or `text_target`.')
if text is not None:
if not self._in_target_context_manager:
self._switch_to_input_mode()
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
if text_target is not None:
self._switch_to_target_mode()
target_encodings = self._call_one(text=text_target, text_pair=text_pair_target, **all_kwargs)
self._switch_to_input_mode()
if text_target is None:
return encodings
elif text is None:
return target_encodings
else:
encodings['labels'] = target_encodings['input_ids']
return encodings
def _call_one(self, text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]], text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
def _is_valid_text_input(t):
if isinstance(t, str):
return True
elif isinstance(t, (list, tuple)):
if len(t) == 0:
return True
elif isinstance(t[0], str):
return True
elif isinstance(t[0], (list, tuple)):
return len(t[0]) == 0 or isinstance(t[0][0], str)
else:
return False
else:
return False
if not _is_valid_text_input(text):
raise ValueError('text input must be of type `str` (single example), `list[str]` (batch or single pretokenized example) or `list[list[str]]` (batch of pretokenized examples).')
if text_pair is not None and (not _is_valid_text_input(text_pair)):
raise ValueError('text input must be of type `str` (single example), `list[str]` (batch or single pretokenized example) or `list[list[str]]` (batch of pretokenized examples).')
if is_split_into_words:
is_batched = isinstance(text, (list, tuple)) and text and isinstance(text[0], (list, tuple))
else:
is_batched = isinstance(text, (list, tuple))
if is_batched:
if isinstance(text_pair, str):
raise TypeError('when tokenizing batches of text, `text_pair` must be a list or tuple with the same length as `text`.')
if text_pair is not None and len(text) != len(text_pair):
raise ValueError(f'batch length of `text`: {len(text)} does not match batch length of `text_pair`: {len(text_pair)}.')
batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text
return self.batch_encode_plus(batch_text_or_text_pairs=batch_text_or_text_pairs, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, split_special_tokens=split_special_tokens, **kwargs)
else:
return self.encode_plus(text=text, text_pair=text_pair, add_special_tokens=add_special_tokens, padding=padding, truncation=truncation, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, split_special_tokens=split_special_tokens, **kwargs)
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
"""
Tokenize and prepare for the model a sequence or a pair of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
text (`str`, `list[str]` or (for non-fast tokenizers) `list[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the
`tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
text_pair (`str`, `list[str]` or `list[int]`, *optional*):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
"""
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs)
return self._encode_plus(text=text, text_pair=text_pair, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, split_special_tokens=kwargs.pop('split_special_tokens', self.split_special_tokens), **kwargs)
def _encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
raise NotImplementedError
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
"""
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
batch_text_or_text_pairs (`list[str]`, `list[tuple[str, str]]`, `list[list[str]]`, `list[tuple[list[str], list[str]]]`, and for not-fast tokenizers, also `list[list[int]]`, `list[tuple[list[int], list[int]]]`):
Batch of sequences or pair of sequences to be encoded. This can be a list of
string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see
details in `encode_plus`).
"""
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs)
return self._batch_encode_plus(batch_text_or_text_pairs=batch_text_or_text_pairs, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, is_split_into_words=is_split_into_words, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_tensors=return_tensors, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_offsets_mapping=return_offsets_mapping, return_length=return_length, verbose=verbose, split_special_tokens=split_special_tokens, **kwargs)
def _batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
raise NotImplementedError
def pad(self, encoded_inputs: Union[BatchEncoding, list[BatchEncoding], dict[str, EncodedInput], dict[str, list[EncodedInput]], list[dict[str, EncodedInput]]], padding: Union[bool, str, PaddingStrategy]=True, max_length: Optional[int]=None, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_attention_mask: Optional[bool]=None, return_tensors: Optional[Union[str, TensorType]]=None, verbose: bool=True) -> BatchEncoding:
"""
Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length
in the batch.
The padding side (left/right) and padding token ids are defined at the tokenizer level (with `self.padding_side`,
`self.pad_token_id` and `self.pad_token_type_id`).
Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the
text followed by a call to the `pad` method to get a padded encoding.
<Tip>
If the `encoded_inputs` passed are dictionary of numpy arrays, or PyTorch tensors, the
result will use the same type unless you provide a different tensor type with `return_tensors`. In the case of
PyTorch tensors, you will lose the specific device of your tensors however.
</Tip>
Args:
encoded_inputs ([`BatchEncoding`], list of [`BatchEncoding`], `dict[str, list[int]]`, `dict[str, list[list[int]]` or `list[dict[str, list[int]]]`):
Tokenized inputs. Can represent one input ([`BatchEncoding`] or `dict[str, list[int]]`) or a batch of
tokenized inputs (list of [`BatchEncoding`], *dict[str, list[list[int]]]* or *list[dict[str,
list[int]]]*) so you can use this method during preprocessing as well as in a PyTorch Dataloader
collate function.
Instead of `list[int]` you can have tensors (numpy arrays, or PyTorch tensors), see
the note above for the return type.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different
lengths).
max_length (`int`, *optional*):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
`>= 7.5` (Volta).
padding_side (`str`, *optional*):
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the `return_outputs` attribute.
[What are attention masks?](../glossary#attention-mask)
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return Numpy `np.ndarray` objects.
verbose (`bool`, *optional*, defaults to `True`):
Whether or not to print more information and warnings.
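Example of the core behavior with `padding=True` (`'longest'`), as a toy pure-Python helper (the real method also handles tensors, `pad_to_multiple_of` and single, non-batched inputs):

```python
def toy_pad(batch_ids, pad_id=0, side="right"):
    # pad every sequence to the longest one in the batch and build
    # the matching attention masks (1 = real token, 0 = padding)
    max_len = max(len(ids) for ids in batch_ids)
    padded, masks = [], []
    for ids in batch_ids:
        n = max_len - len(ids)
        if side == "right":
            padded.append(ids + [pad_id] * n)
            masks.append([1] * len(ids) + [0] * n)
        else:
            padded.append([pad_id] * n + ids)
            masks.append([0] * n + [1] * len(ids))
    return {"input_ids": padded, "attention_mask": masks}

print(toy_pad([[1, 2, 3], [4]]))
# {'input_ids': [[1, 2, 3], [4, 0, 0]], 'attention_mask': [[1, 1, 1], [1, 0, 0]]}
```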
"""
if self.__class__.__name__.endswith('Fast'):
if not self.deprecation_warnings.get('Asking-to-pad-a-fast-tokenizer', False):
logger.warning_advice(f"You're using a {self.__class__.__name__} tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.")
self.deprecation_warnings['Asking-to-pad-a-fast-tokenizer'] = True
if isinstance(encoded_inputs, (list, tuple)) and isinstance(encoded_inputs[0], Mapping):
encoded_inputs = {key: [example[key] for example in encoded_inputs] for key in encoded_inputs[0]}
if self.model_input_names[0] not in encoded_inputs:
raise ValueError(f'You should supply an encoding or a list of encodings to this method that includes {self.model_input_names[0]}, but you provided {list(encoded_inputs.keys())}')
required_input = encoded_inputs[self.model_input_names[0]]
if required_input is None or (isinstance(required_input, Sized) and len(required_input) == 0):
if return_attention_mask:
encoded_inputs['attention_mask'] = []
return encoded_inputs
first_element = required_input[0]
if isinstance(first_element, (list, tuple)):
for item in required_input:
if len(item) != 0:
first_element = item[0]
break
if not isinstance(first_element, (int, list, tuple)):
if is_torch_tensor(first_element):
return_tensors = 'pt' if return_tensors is None else return_tensors
elif isinstance(first_element, np.ndarray):
return_tensors = 'np' if return_tensors is None else return_tensors
else:
raise ValueError(f'type of {first_element} unknown: {type(first_element)}. Should be one of a python, numpy, or pytorch object.')
for key, value in encoded_inputs.items():
encoded_inputs[key] = to_py_obj(value)
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(padding=padding, max_length=max_length, verbose=verbose)
required_input = encoded_inputs[self.model_input_names[0]]
if required_input and (not isinstance(required_input[0], (list, tuple))):
encoded_inputs = self._pad(encoded_inputs, max_length=max_length, padding_strategy=padding_strategy, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask)
return BatchEncoding(encoded_inputs, tensor_type=return_tensors)
batch_size = len(required_input)
assert all((len(v) == batch_size for v in encoded_inputs.values())), 'Some items in the output dictionary have a different batch size than others.'
if padding_strategy == PaddingStrategy.LONGEST:
max_length = max((len(inputs) for inputs in required_input))
padding_strategy = PaddingStrategy.MAX_LENGTH
batch_outputs = {}
for i in range(batch_size):
inputs = {k: v[i] for k, v in encoded_inputs.items()}
outputs = self._pad(inputs, max_length=max_length, padding_strategy=padding_strategy, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask)
for key, value in outputs.items():
if key not in batch_outputs:
batch_outputs[key] = []
batch_outputs[key].append(value)
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
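As an illustration of the per-key batching loop above, the standalone helper below (hypothetical, not part of the class) pads a batch of id lists to the longest sequence and builds the matching attention masks, mirroring the `PaddingStrategy.LONGEST` path with right padding:

```python
def pad_batch_longest(batch_ids, pad_token_id=0):
    """Pad variable-length id lists to the longest sequence (right padding)."""
    max_length = max(len(ids) for ids in batch_ids)
    input_ids, attention_mask = [], []
    for ids in batch_ids:
        difference = max_length - len(ids)
        # Real tokens get mask 1, padding tokens get mask 0.
        input_ids.append(ids + [pad_token_id] * difference)
        attention_mask.append([1] * len(ids) + [0] * difference)
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```
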
def create_token_type_ids_from_sequences(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None) -> list[int]:
"""
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in a subclass if the model has a special way of building those.
Args:
token_ids_0 (`list[int]`): The first tokenized sequence.
token_ids_1 (`list[int]`, *optional*): The second tokenized sequence.
Returns:
`list[int]`: The token type ids.
"""
cls_len = int(getattr(self, 'cls_token_id', None) is not None)
sep_len = int(getattr(self, 'sep_token_id', None) is not None)
if token_ids_1 is None:
return [0] * (cls_len + len(token_ids_0) + sep_len)
return [0] * (cls_len + len(token_ids_0) + sep_len) + [1] * (len(token_ids_1) + sep_len)
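For a BERT-style layout (`[CLS] A [SEP] B [SEP]`), the segment-id arithmetic above can be sketched in isolation; the helper below is an illustrative stand-in that assumes at most one CLS and one SEP token:

```python
def token_type_ids(len_a, len_b=None, has_cls=True, has_sep=True):
    """Segment ids: 0 for CLS + first sequence + SEP, 1 for second sequence + SEP."""
    cls_len, sep_len = int(has_cls), int(has_sep)
    first = [0] * (cls_len + len_a + sep_len)
    if len_b is None:
        return first
    return first + [1] * (len_b + sep_len)
```
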
def build_inputs_with_special_tokens(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None) -> list[int]:
"""
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
Args:
token_ids_0 (`list[int]`): The first tokenized sequence.
token_ids_1 (`list[int]`, *optional*): The second tokenized sequence.
Returns:
`list[int]`: The model input with special tokens.
"""
if token_ids_1 is None:
return token_ids_0
return token_ids_0 + token_ids_1
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def prepare_for_model(self, ids: list[int], pair_ids: Optional[list[int]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, prepend_batch_axis: bool=False, **kwargs) -> BatchEncoding:
"""
Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It
adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
        manages a moving window (with user-defined stride) for overflowing tokens. Note that for *pair_ids*
        different from `None` and *truncation_strategy = longest_first* or `True`, it is not possible to return
        overflowing tokens; such a combination of arguments will raise an error.
Args:
ids (`list[int]`):
Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
`convert_tokens_to_ids` methods.
pair_ids (`list[int]`, *optional*):
Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
and `convert_tokens_to_ids` methods.
"""
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(padding=padding, truncation=truncation, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, verbose=verbose, **kwargs)
pair = pair_ids is not None
len_ids = len(ids)
len_pair_ids = len(pair_ids) if pair else 0
if return_token_type_ids and (not add_special_tokens):
raise ValueError('Asking to return token_type_ids while setting add_special_tokens to False results in an undefined behavior. Please set add_special_tokens to True or set return_token_type_ids to None.')
if return_overflowing_tokens and truncation_strategy == TruncationStrategy.LONGEST_FIRST and (pair_ids is not None):
raise ValueError('Not possible to return overflowing tokens for pair of sequences with the `longest_first`. Please select another truncation strategy than `longest_first`, for instance `only_second` or `only_first`.')
if return_token_type_ids is None:
return_token_type_ids = 'token_type_ids' in self.model_input_names
if return_attention_mask is None:
return_attention_mask = 'attention_mask' in self.model_input_names
encoded_inputs = {}
total_len = len_ids + len_pair_ids + (self.num_special_tokens_to_add(pair=pair) if add_special_tokens else 0)
overflowing_tokens = []
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE and max_length and (total_len > max_length):
ids, pair_ids, overflowing_tokens = self.truncate_sequences(ids, pair_ids=pair_ids, num_tokens_to_remove=total_len - max_length, truncation_strategy=truncation_strategy, stride=stride)
if return_overflowing_tokens:
encoded_inputs['overflowing_tokens'] = overflowing_tokens
encoded_inputs['num_truncated_tokens'] = total_len - max_length
if add_special_tokens:
sequence = self.build_inputs_with_special_tokens(ids, pair_ids)
token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids)
else:
sequence = ids + pair_ids if pair else ids
token_type_ids = [0] * len(ids) + ([0] * len(pair_ids) if pair else [])
encoded_inputs['input_ids'] = sequence
if return_token_type_ids:
encoded_inputs['token_type_ids'] = token_type_ids
if return_special_tokens_mask:
if add_special_tokens:
encoded_inputs['special_tokens_mask'] = self.get_special_tokens_mask(ids, pair_ids)
else:
encoded_inputs['special_tokens_mask'] = [0] * len(sequence)
self._eventual_warn_about_too_long_sequence(encoded_inputs['input_ids'], max_length, verbose)
if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask:
encoded_inputs = self.pad(encoded_inputs, max_length=max_length, padding=padding_strategy.value, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=return_attention_mask)
if return_length:
encoded_inputs['length'] = len(encoded_inputs['input_ids'])
batch_outputs = BatchEncoding(encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis)
return batch_outputs
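The truncation decision above hinges on the total length including special tokens: overflow happens when `len_ids + len_pair_ids + num_special_tokens` exceeds `max_length`, and the excess is what gets removed. A minimal sketch of that accounting (illustrative helper, not part of the class):

```python
def truncation_budget(len_ids, len_pair, num_special, max_length):
    """How many tokens must be removed so ids + pair + special tokens fit max_length."""
    total_len = len_ids + len_pair + num_special
    return max(0, total_len - max_length)
```
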
def truncate_sequences(self, ids: list[int], pair_ids: Optional[list[int]]=None, num_tokens_to_remove: int=0, truncation_strategy: Union[str, TruncationStrategy]='longest_first', stride: int=0) -> tuple[list[int], list[int], list[int]]:
"""
        Truncates a sequence or a sequence pair following the strategy; the truncated sequences are returned, the inputs are not modified.
Args:
ids (`list[int]`):
Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
`convert_tokens_to_ids` methods.
pair_ids (`list[int]`, *optional*):
Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
and `convert_tokens_to_ids` methods.
num_tokens_to_remove (`int`, *optional*, defaults to 0):
Number of tokens to remove using the truncation strategy.
truncation_strategy (`str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `'longest_first'`):
The strategy to follow for truncation. Can be:
- `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will truncate
token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a
batch of pairs) is provided.
- `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
                - `'do_not_truncate'`: No truncation (i.e., can output batch with sequence lengths greater
                  than the model maximum admissible input size).
stride (`int`, *optional*, defaults to 0):
If set to a positive number, the overflowing tokens returned will contain some tokens from the main
sequence returned. The value of this argument defines the number of additional tokens.
Returns:
`tuple[list[int], list[int], list[int]]`: The truncated `ids`, the truncated `pair_ids` and the list of
        overflowing tokens. Note: The *longest_first* strategy returns an empty list of overflowing tokens if a pair
        of sequences (or a batch of pairs) is provided.
"""
if num_tokens_to_remove <= 0:
return (ids, pair_ids, [])
if not isinstance(truncation_strategy, TruncationStrategy):
truncation_strategy = TruncationStrategy(truncation_strategy)
overflowing_tokens = []
if truncation_strategy == TruncationStrategy.ONLY_FIRST or (truncation_strategy == TruncationStrategy.LONGEST_FIRST and pair_ids is None):
if len(ids) > num_tokens_to_remove:
window_len = min(len(ids), stride + num_tokens_to_remove)
if self.truncation_side == 'left':
overflowing_tokens = ids[:window_len]
ids = ids[num_tokens_to_remove:]
elif self.truncation_side == 'right':
overflowing_tokens = ids[-window_len:]
ids = ids[:-num_tokens_to_remove]
else:
                    raise ValueError(f"invalid truncation side: {self.truncation_side}, use 'left' or 'right'.")
else:
                error_msg = f'We need to remove {num_tokens_to_remove} tokens to truncate the input but the first sequence has a length {len(ids)}. '
if truncation_strategy == TruncationStrategy.ONLY_FIRST:
error_msg = error_msg + f"Please select another truncation strategy than {truncation_strategy}, for instance 'longest_first' or 'only_second'."
logger.error(error_msg)
elif truncation_strategy == TruncationStrategy.LONGEST_FIRST:
logger.warning(f"Be aware, overflowing tokens are not returned for the setting you have chosen, i.e. sequence pairs with the '{TruncationStrategy.LONGEST_FIRST.value}' truncation strategy. So the returned list will always be empty even if some tokens have been removed.")
len_pair_ids = len(pair_ids) if pair_ids is not None else 0
len_ids = len(ids)
first_remove = min(abs(len_pair_ids - len_ids), num_tokens_to_remove)
second_remove = num_tokens_to_remove - first_remove
if len_ids > len_pair_ids:
ids_to_move = first_remove + second_remove // 2
pair_ids_to_move = second_remove - second_remove // 2
else:
ids_to_move = second_remove // 2
pair_ids_to_move = first_remove + second_remove - second_remove // 2
if self.truncation_side == 'right':
ids = ids[:-ids_to_move] if ids_to_move > 0 else ids
pair_ids = pair_ids[:-pair_ids_to_move] if pair_ids is not None and pair_ids_to_move > 0 else pair_ids
elif self.truncation_side == 'left':
ids = ids[ids_to_move:]
pair_ids = pair_ids[pair_ids_to_move:] if pair_ids is not None else None
else:
                raise ValueError(f"invalid truncation side: {self.truncation_side}, use 'left' or 'right'.")
elif truncation_strategy == TruncationStrategy.ONLY_SECOND and pair_ids is not None:
if len(pair_ids) > num_tokens_to_remove:
window_len = min(len(pair_ids), stride + num_tokens_to_remove)
if self.truncation_side == 'right':
overflowing_tokens = pair_ids[-window_len:]
pair_ids = pair_ids[:-num_tokens_to_remove]
elif self.truncation_side == 'left':
overflowing_tokens = pair_ids[:window_len]
pair_ids = pair_ids[num_tokens_to_remove:]
else:
                    raise ValueError(f"invalid truncation side: {self.truncation_side}, use 'left' or 'right'.")
else:
                logger.error(f"We need to remove {num_tokens_to_remove} tokens to truncate the input but the second sequence has a length {len(pair_ids)}. Please select another truncation strategy than {truncation_strategy}, for instance 'longest_first' or 'only_first'.")
return (ids, pair_ids, overflowing_tokens)
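The `longest_first` branch above balances removals across the pair: it first trims the longer sequence down to the shorter one's length, then splits the remainder between the two. The length arithmetic can be sketched on its own (illustrative helper operating on lengths only, not the actual slicing):

```python
def longest_first_split(len_ids, len_pair_ids, num_tokens_to_remove):
    """Return (tokens to remove from ids, tokens to remove from pair_ids)."""
    # First equalize the two lengths, then split the rest roughly in half.
    first = min(abs(len_pair_ids - len_ids), num_tokens_to_remove)
    second = num_tokens_to_remove - first
    if len_ids > len_pair_ids:
        return first + second // 2, second - second // 2
    return second // 2, first + second - second // 2
```

For example, removing 6 tokens from sequences of lengths 10 and 6 takes 5 from the first and 1 from the second, leaving both at length 5.
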
def _pad(self, encoded_inputs: Union[dict[str, EncodedInput], BatchEncoding], max_length: Optional[int]=None, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_attention_mask: Optional[bool]=None) -> dict:
"""
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs:
Dictionary of tokenized inputs (`list[int]`) or batch of tokenized inputs (`list[list[int]]`).
max_length: maximum length of the returned list and optionally padding length (see below).
Will truncate by taking into account the special tokens.
padding_strategy: PaddingStrategy to use for padding.
- PaddingStrategy.LONGEST Pad to the longest sequence in the batch
- PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
- PaddingStrategy.DO_NOT_PAD: Do not pad
The tokenizer padding sides are defined in `padding_side` argument:
- 'left': pads on the left of the sequences
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
`>= 7.5` (Volta).
padding_side:
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
"""
if return_attention_mask is None:
return_attention_mask = 'attention_mask' in self.model_input_names
required_input = encoded_inputs[self.model_input_names[0]]
if padding_strategy == PaddingStrategy.LONGEST:
max_length = len(required_input)
if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
max_length = (max_length // pad_to_multiple_of + 1) * pad_to_multiple_of
needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length
if return_attention_mask and 'attention_mask' not in encoded_inputs:
encoded_inputs['attention_mask'] = [1] * len(required_input)
if needs_to_be_padded:
difference = max_length - len(required_input)
padding_side = padding_side if padding_side is not None else self.padding_side
if padding_side == 'right':
if return_attention_mask:
encoded_inputs['attention_mask'] = encoded_inputs['attention_mask'] + [0] * difference
if 'token_type_ids' in encoded_inputs:
encoded_inputs['token_type_ids'] = encoded_inputs['token_type_ids'] + [self.pad_token_type_id] * difference
if 'special_tokens_mask' in encoded_inputs:
encoded_inputs['special_tokens_mask'] = encoded_inputs['special_tokens_mask'] + [1] * difference
encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference
elif padding_side == 'left':
if return_attention_mask:
encoded_inputs['attention_mask'] = [0] * difference + encoded_inputs['attention_mask']
if 'token_type_ids' in encoded_inputs:
encoded_inputs['token_type_ids'] = [self.pad_token_type_id] * difference + encoded_inputs['token_type_ids']
if 'special_tokens_mask' in encoded_inputs:
encoded_inputs['special_tokens_mask'] = [1] * difference + encoded_inputs['special_tokens_mask']
encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
else:
                raise ValueError(f"Invalid padding side: {padding_side}, use 'left' or 'right'.")
return encoded_inputs
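The `pad_to_multiple_of` handling above rounds the target length up to the next multiple, which is what makes padded shapes Tensor-Core friendly. A standalone sketch of that rounding (illustrative helper, not part of the class):

```python
def round_up_to_multiple(max_length, pad_to_multiple_of):
    """Round max_length up to the next multiple of pad_to_multiple_of, if set."""
    if pad_to_multiple_of is not None and max_length % pad_to_multiple_of != 0:
        max_length = (max_length // pad_to_multiple_of + 1) * pad_to_multiple_of
    return max_length
```
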
def convert_tokens_to_string(self, tokens: list[str]) -> str:
"""
        Converts a sequence of tokens into a single string. The simplest way to do it is `" ".join(tokens)`, but we
        often want to remove sub-word tokenization artifacts at the same time.
Args:
tokens (`list[str]`): The token to join in a string.
Returns:
`str`: The joined tokens.
"""
raise NotImplementedError
def batch_decode(self, sequences: Union[list[int], list[list[int]], 'np.ndarray', 'torch.Tensor'], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> list[str]:
"""
Convert a list of lists of token ids into a list of strings by calling decode.
Args:
sequences (`Union[list[int], list[list[int]], np.ndarray, torch.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (`bool`, *optional*):
Whether or not to clean up the tokenization spaces. If `None`, will default to
`self.clean_up_tokenization_spaces`.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific decode method.
Returns:
`list[str]`: The list of decoded sentences.
"""
return [self.decode(seq, skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs) for seq in sequences]
def decode(self, token_ids: Union[int, list[int], 'np.ndarray', 'torch.Tensor'], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> str:
"""
        Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special
tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
Args:
token_ids (`Union[int, list[int], np.ndarray, torch.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (`bool`, *optional*):
Whether or not to clean up the tokenization spaces. If `None`, will default to
`self.clean_up_tokenization_spaces`.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific decode method.
Returns:
`str`: The decoded sentence.
"""
token_ids = to_py_obj(token_ids)
return self._decode(token_ids=token_ids, skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
def _decode(self, token_ids: Union[int, list[int]], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> str:
raise NotImplementedError
def get_special_tokens_mask(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None, already_has_special_tokens: bool=False) -> list[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.
Args:
token_ids_0 (`list[int]`):
List of ids of the first sequence.
token_ids_1 (`list[int]`, *optional*):
List of ids of the second sequence.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
assert already_has_special_tokens and token_ids_1 is None, 'You cannot use ``already_has_special_tokens=False`` with this tokenizer. Please use a slow (full python) tokenizer to activate this argument. Or set `return_special_tokens_mask=True` when calling the encoding method to get the special tokens mask in any tokenizer. '
all_special_ids = self.all_special_ids
special_tokens_mask = [1 if token in all_special_ids else 0 for token in token_ids_0]
return special_tokens_mask
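The mask construction above is a plain membership test per token. A standalone sketch (illustrative helper; converting the special ids to a `set` keeps the scan linear):

```python
def special_tokens_mask(token_ids, all_special_ids):
    """1 for special tokens, 0 for regular sequence tokens."""
    special = set(all_special_ids)  # O(1) lookups instead of list scans
    return [1 if token in special else 0 for token in token_ids]
```
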
@staticmethod
def clean_up_tokenization(out_string: str) -> str:
"""
        Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
Args:
out_string (`str`): The text to clean up.
Returns:
`str`: The cleaned-up string.
"""
out_string = out_string.replace(' .', '.').replace(' ?', '?').replace(' !', '!').replace(' ,', ',').replace(" ' ", "'").replace(" n't", "n't").replace(" 'm", "'m").replace(" 's", "'s").replace(" 've", "'ve").replace(" 're", "'re")
return out_string
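The chained `replace` calls above can equivalently be written as a loop over (pattern, replacement) pairs, which makes the application order explicit (punctuation first, then contractions). An illustrative restatement, not part of the class:

```python
# Applied in order; order matters (e.g. " ?" is fixed before " n't").
CLEANUP_PAIRS = [
    (" .", "."), (" ?", "?"), (" !", "!"), (" ,", ","), (" ' ", "'"),
    (" n't", "n't"), (" 'm", "'m"), (" 's", "'s"), (" 've", "'ve"), (" 're", "'re"),
]

def clean_up(out_string):
    for old, new in CLEANUP_PAIRS:
        out_string = out_string.replace(old, new)
    return out_string
```
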
def _eventual_warn_about_too_long_sequence(self, ids: list[int], max_length: Optional[int], verbose: bool):
"""
        Depending on the input and internal state, we might trigger a warning about a sequence that is too long for its
        corresponding model.
Args:
ids (`list[str]`): The ids produced by the tokenization
max_length (`int`, *optional*): The max_length desired (does not trigger a warning if it is set)
verbose (`bool`): Whether or not to print more information and warnings.
"""
if max_length is None and len(ids) > self.model_max_length and verbose and (self.model_max_length != 0):
if not self.deprecation_warnings.get('sequence-length-is-longer-than-the-specified-maximum', False):
logger.warning(f'Token indices sequence length is longer than the specified maximum sequence length for this model ({len(ids)} > {self.model_max_length}). Running this sequence through the model will result in indexing errors')
self.deprecation_warnings['sequence-length-is-longer-than-the-specified-maximum'] = True
def _switch_to_input_mode(self):
"""
Private method to put the tokenizer in input mode (when it has different modes for input/outputs)
"""
pass
def _switch_to_target_mode(self):
"""
Private method to put the tokenizer in target mode (when it has different modes for input/outputs)
"""
pass
@contextmanager
def as_target_tokenizer(self):
"""
Temporarily sets the tokenizer for encoding the targets. Useful for tokenizer associated to
sequence-to-sequence models that need a slightly different processing for the labels.
"""
        warnings.warn('`as_target_tokenizer` is deprecated and will be removed in v5 of Transformers. You can tokenize your labels by using the argument `text_target` of the regular `__call__` method (either in the same call as your input texts if you use the same keyword arguments, or in a separate call).')
self._switch_to_target_mode()
self._in_target_context_manager = True
yield
self._in_target_context_manager = False
self._switch_to_input_mode()
@classmethod
def register_for_auto_class(cls, auto_class='AutoTokenizer'):
"""
Register this class with a given auto class. This should only be used for custom tokenizers as the ones in the
library are already mapped with `AutoTokenizer`.
Args:
auto_class (`str` or `type`, *optional*, defaults to `"AutoTokenizer"`):
The auto class to register this new tokenizer with.
"""
if not isinstance(auto_class, str):
auto_class = auto_class.__name__
import transformers.models.auto as auto_module
if not hasattr(auto_module, auto_class):
raise ValueError(f'{auto_class} is not a valid auto class.')
cls._auto_class = auto_class
def prepare_seq2seq_batch(self, src_texts: list[str], tgt_texts: Optional[list[str]]=None, max_length: Optional[int]=None, max_target_length: Optional[int]=None, padding: str='longest', return_tensors: Optional[str]=None, truncation: bool=True, **kwargs) -> BatchEncoding:
"""
Prepare model inputs for translation. For best performance, translate one sentence at a time.
Arguments:
src_texts (`list[str]`):
List of documents to summarize or source language texts.
tgt_texts (`list`, *optional*):
List of summaries or target language texts.
max_length (`int`, *optional*):
                Controls the maximum length for encoder inputs (documents to summarize or source language texts). If
left unset or set to `None`, this will use the predefined model maximum length if a maximum length is
required by one of the truncation/padding parameters. If the model has no specific maximum input length
(like XLNet) truncation/padding to a maximum length will be deactivated.
max_target_length (`int`, *optional*):
                Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set
to `None`, this will use the max_length value.
            padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `'longest'`):
Activates and controls padding. Accepts the following values:
                - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
                  sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
                - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different
                  lengths).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return Numpy `np.ndarray` objects.
truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `True`):
Activates and controls truncation. Accepts the following values:
- `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
- `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
                - `False` or `'do_not_truncate'`: No truncation (i.e., can output batch with sequence lengths
                  greater than the model maximum admissible input size).
**kwargs:
Additional keyword arguments passed along to `self.__call__`.
Return:
[`BatchEncoding`]: A [`BatchEncoding`] with the following fields:
- **input_ids** -- List of token ids to be fed to the encoder.
- **attention_mask** -- List of indices specifying which tokens should be attended to by the model.
- **labels** -- List of token ids for tgt_texts.
            The full set of keys `[input_ids, attention_mask, labels]` will only be returned if tgt_texts is passed.
            Otherwise, input_ids and attention_mask will be the only keys.
"""
formatted_warning = '\n`prepare_seq2seq_batch` is deprecated and will be removed in version 5 of HuggingFace Transformers. Use the regular\n`__call__` method to prepare your inputs and targets.\n\nHere is a short example:\n\nmodel_inputs = tokenizer(src_texts, text_target=tgt_texts, ...)\n\nIf you either need to use different keyword arguments for the source and target texts, you should do two calls like\nthis:\n\nmodel_inputs = tokenizer(src_texts, ...)\nlabels = tokenizer(text_target=tgt_texts, ...)\nmodel_inputs["labels"] = labels["input_ids"]\n\nSee the documentation of your specific tokenizer for more details on the specific arguments to the tokenizer of choice.\nFor a more complete example, see the implementation of `prepare_seq2seq_batch`.\n'
warnings.warn(formatted_warning, FutureWarning)
kwargs.pop('src_lang', None)
kwargs.pop('tgt_lang', None)
if max_length is None:
max_length = self.model_max_length
model_inputs = self(src_texts, add_special_tokens=True, return_tensors=return_tensors, max_length=max_length, padding=padding, truncation=truncation, **kwargs)
if tgt_texts is None:
return model_inputs
if max_target_length is None:
max_target_length = max_length
with self.as_target_tokenizer():
labels = self(tgt_texts, add_special_tokens=True, return_tensors=return_tensors, padding=padding, max_length=max_target_length, truncation=truncation, **kwargs)
model_inputs['labels'] = labels['input_ids']
return model_inputs
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING)
class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
'''
Base class for [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`].
    Handles shared (mostly boilerplate) methods for those two classes.
'''
def __init__(self, **kwargs):
pass
@property
def max_len_single_sentence(self) -> int:
'''
`int`: The maximum length of a sentence that can be fed to the model.
'''
pass
@property
def max_len_sentences_pair(self) -> int:
'''
`int`: The maximum combined length of a pair of sentences that can be fed to the model.
'''
pass
@max_len_single_sentence.setter
    def max_len_single_sentence(self, value) -> None:
pass
@max_len_sentences_pair.setter
    def max_len_sentences_pair(self, value) -> None:
pass
def _set_processor_class(self, processor_class: str):
'''Sets processor class as an attribute.'''
pass
@property
def added_tokens_decoder(self) -> dict[int, AddedToken]:
pass
def __repr__(self) -> str:
pass
def __len__(self) -> int:
pass
def get_vocab(self) -> dict[str, int]:
'''
Returns the vocabulary as a dictionary of token to index.
`tokenizer.get_vocab()[token]` is equivalent to `tokenizer.convert_tokens_to_ids(token)` when `token` is in the
vocab.
Returns:
`dict[str, int]`: The vocabulary.
'''
pass
def apply_chat_template(self, conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]], tools: Optional[list[Union[dict, Callable]]]=None, documents: Optional[list[dict[str, str]]]=None, chat_template: Optional[str]=None, add_generation_prompt: bool=False, continue_final_message: bool=False, tokenize: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: bool=False, max_length: Optional[int]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_dict: bool=False, return_assistant_tokens_mask: bool=False, tokenizer_kwargs: Optional[dict[str, Any]]=None, **kwargs) -> Union[str, list[int], list[str], list[list[int]], BatchEncoding]:
'''
Converts a list of dictionaries with `"role"` and `"content"` keys to a list of token
ids. This method is intended for use with chat models, and will read the tokenizer's chat_template attribute to
determine the format and control tokens to use when converting.
Args:
conversation (Union[list[dict[str, str]], list[list[dict[str, str]]]]): A list of dicts
with "role" and "content" keys, representing the chat history so far.
tools (`list[Union[Dict, Callable]]`, *optional*):
A list of tools (callable functions) that will be accessible to the model. If the template does not
support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema,
giving the name, description and argument types for the tool. See our
[chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use)
for more information.
documents (`list[dict[str, str]]`, *optional*):
A list of dicts representing documents that will be accessible to the model if it is performing RAG
(retrieval-augmented generation). If the template does not support RAG, this argument will have no
                effect. We recommend that each document be a dict containing "title" and "text" keys. Please
see the RAG section of the [chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#arguments-for-RAG)
for examples of passing documents with chat templates.
chat_template (`str`, *optional*):
A Jinja template to use for this conversion. It is usually not necessary to pass anything to this
argument, as the model's template will be used by default.
add_generation_prompt (bool, *optional*):
If this is set, a prompt with the token(s) that indicate
the start of an assistant message will be appended to the formatted output. This is useful when you want to generate a response from the model.
Note that this argument will be passed to the chat template, and so it must be supported in the
template for this argument to have any effect.
continue_final_message (bool, *optional*):
If this is set, the chat will be formatted so that the final
message in the chat is open-ended, without any EOS tokens. The model will continue this message
rather than starting a new one. This allows you to "prefill" part of
the model's response for it. Cannot be used at the same time as `add_generation_prompt`.
tokenize (`bool`, defaults to `True`):
Whether to tokenize the output. If `False`, the output will be a string.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (`bool`, defaults to `False`):
Whether to truncate sequences at the maximum length. Has no effect if tokenize is `False`.
max_length (`int`, *optional*):
Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is `False`. If
not specified, the tokenizer's `max_length` attribute will be used as a default.
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors of a particular framework. Has no effect if tokenize is `False`. Acceptable
values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return NumPy `np.ndarray` objects.
return_dict (`bool`, defaults to `False`):
Whether to return a dictionary with named outputs. Has no effect if tokenize is `False`.
tokenizer_kwargs (`dict[str, Any]`, *optional*): Additional kwargs to pass to the tokenizer.
return_assistant_tokens_mask (`bool`, defaults to `False`):
Whether to return a mask of the assistant generated tokens. For tokens generated by the assistant,
the mask will contain 1. For user and system tokens, the mask will contain 0.
This functionality is only available for chat templates that support it via the `{% generation %}` keyword.
**kwargs: Additional kwargs to pass to the template renderer. Will be accessible by the chat template.
Returns:
`Union[list[int], Dict]`: A list of token ids representing the tokenized chat so far, including control tokens. This
output is ready to pass to the model, either directly or via methods like `generate()`. If `return_dict` is
set, will return a dict of tokenizer outputs instead.
'''
pass
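As a rough illustration of what applying a chat template produces, here is a minimal, hypothetical sketch that renders a conversation in a ChatML-style layout. The `render_chatml` helper and the `<|im_start|>`/`<|im_end|>` control tokens are illustrative assumptions only; in practice the layout is defined by each model's Jinja `chat_template` attribute.

```python
def render_chatml(conversation, add_generation_prompt=False):
    # Turn a list of {"role", "content"} dicts into a single prompt string.
    # The <|im_start|>/<|im_end|> tokens are one common convention; the
    # actual control tokens are defined by each model's chat template.
    parts = []
    for message in conversation:
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model generates the reply.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

chat = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi!"},
]
prompt = render_chatml(chat, add_generation_prompt=True)
```

With `add_generation_prompt=True` the rendered string ends with an open assistant turn, which is what you would feed to `generate()`.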
def encode_message_with_chat_template(self, message: dict[str, str], conversation_history: Optional[list[dict[str, str]]]=None, **kwargs) -> list[int]:
'''
Tokenize a single message. This method is a convenience wrapper around `apply_chat_template` that allows you
to tokenize messages one by one. This is useful for things like token-by-token streaming.
This method is not guaranteed to be perfect. For some models, it may be impossible to robustly tokenize
single messages. For example, if the chat template adds tokens after each message, but also has a prefix that
is added to the entire chat, it will be impossible to distinguish a chat-start-token from a message-start-token.
In these cases, this method will do its best to find the correct tokenization, but it may not be perfect.
**Note:** This method does not support `add_generation_prompt`. If you want to add a generation prompt,
you should do it separately after tokenizing the conversation.
Args:
message (`dict`):
A dictionary with "role" and "content" keys, representing the message to tokenize.
conversation_history (`list[dict]`, *optional*):
A list of dicts with "role" and "content" keys, representing the chat history so far. If you are
tokenizing messages one by one, you should pass the previous messages in the conversation here.
**kwargs:
Additional kwargs to pass to the `apply_chat_template` method.
Returns:
`list[int]`: A list of token ids representing the tokenized message.
'''
pass
def get_chat_template(self, chat_template: Optional[str]=None, tools: Optional[list[dict]]=None) -> str:
'''
Retrieve the chat template string used for tokenizing chat messages. This template is used
internally by the `apply_chat_template` method and can also be used externally to retrieve the model's chat
template for better generation tracking.
Args:
chat_template (`str`, *optional*):
A Jinja template or the name of a template to use for this conversion.
It is usually not necessary to pass anything to this argument,
as the model's template will be used by default.
tools (`list[Dict]`, *optional*):
A list of tools (callable functions) that will be accessible to the model. If the template does not
support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema,
giving the name, description and argument types for the tool. See our
[chat templating guide](https://huggingface.co/docs/transformers/main/en/chat_templating#automated-function-conversion-for-tool-use)
for more information.
Returns:
`str`: The chat template string.
'''
pass
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], *init_inputs, cache_dir: Optional[Union[str, os.PathLike]]=None, force_download: bool=False, local_files_only: bool=False, token: Optional[Union[str, bool]]=None, revision: str='main', trust_remote_code=False, **kwargs):
'''
Instantiate a [`~tokenization_utils_base.PreTrainedTokenizerBase`] (or a derived class) from a predefined
tokenizer.
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
Can be either:
- A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co.
- A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved
using the [`~tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`] method, e.g.,
`./my_model_directory/`.
- (**Deprecated**, not applicable to all derived classes) A path or url to a single saved vocabulary
file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g.,
`./my_model_directory/vocab.txt`.
cache_dir (`str` or `os.PathLike`, *optional*):
Path to a directory in which a downloaded predefined tokenizer vocabulary files should be cached if the
standard cache should not be used.
force_download (`bool`, *optional*, defaults to `False`):
Whether or not to force the (re-)download the vocabulary files and override the cached versions if they
exist.
resume_download:
Deprecated and ignored. All downloads are now resumed by default when possible.
Will be removed in v5 of Transformers.
proxies (`dict[str, str]`, *optional*):
A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
when running `hf auth login` (stored in `~/.huggingface`).
local_files_only (`bool`, *optional*, defaults to `False`):
Whether or not to only rely on local files and not to attempt to download any files.
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
identifier allowed by git.
subfolder (`str`, *optional*):
In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for
facebook/rag-token-base), specify it here.
inputs (additional positional arguments, *optional*):
Will be passed along to the Tokenizer `__init__` method.
trust_remote_code (`bool`, *optional*, defaults to `False`):
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to `True` for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, *optional*):
Will be passed to the Tokenizer `__init__` method. Can be used to set special tokens like `bos_token`,
`eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`,
`additional_special_tokens`. See parameters in the `__init__` for more details.
<Tip>
Passing `token=True` is required when you want to use a private model.
</Tip>
Examples:
```python
# We can't instantiate directly the base class *PreTrainedTokenizerBase* so let's show our examples on a derived class: BertTokenizer
# Download vocabulary from huggingface.co and cache.
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
# Download vocabulary from huggingface.co (user-uploaded) and cache.
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
# If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/")
# If the tokenizer uses a single vocabulary file, you can point directly to this file
tokenizer = BertTokenizer.from_pretrained("./test/saved_model/my_vocab.txt")
# You can link tokens to special vocabulary when instantiating
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased", unk_token="<unk>")
# You should be sure '<unk>' is in the vocabulary when doing that.
# Otherwise use tokenizer.add_special_tokens({'unk_token': '<unk>'}) instead)
assert tokenizer.unk_token == "<unk>"
```'''
pass
@classmethod
def _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, token=None, cache_dir=None, local_files_only=False, _commit_hash=None, _is_local=False, trust_remote_code=False, **kwargs):
pass
@staticmethod
def _eventually_correct_t5_max_length(pretrained_model_name_or_path, max_model_length, init_max_model_length):
pass
@classmethod
def convert_added_tokens(cls, obj: Union[AddedToken, Any], save=False, add_type_field=True):
pass
def save_chat_templates(self, save_directory: Union[str, os.PathLike], tokenizer_config: dict, filename_prefix: Optional[str], save_jinja_files: bool):
'''
Writes chat templates out to the save directory if we're using the new format, and removes them from
the tokenizer config if present. If we're using the legacy format, it doesn't write any files, and instead
writes the templates to the tokenizer config in the correct format.
'''
pass
def save_pretrained(self, save_directory: Union[str, os.PathLike], legacy_format: Optional[bool]=None, filename_prefix: Optional[str]=None, push_to_hub: bool=False, **kwargs) -> tuple[str]:
'''
Save the full tokenizer state.
This method makes sure the full tokenizer can then be re-loaded using the
[`~tokenization_utils_base.PreTrainedTokenizer.from_pretrained`] class method.
Warning: This won't save modifications you may have applied to the tokenizer after the instantiation (for
instance, modifying `tokenizer.do_lower_case` after creation).
Args:
save_directory (`str` or `os.PathLike`): The path to a directory where the tokenizer will be saved.
legacy_format (`bool`, *optional*):
Only applicable for a fast tokenizer. If unset (default), will save the tokenizer in the unified JSON
format as well as in legacy format if it exists, i.e. with tokenizer specific vocabulary and a separate
added_tokens files.
If `False`, will only save the tokenizer in the unified JSON format. This format is incompatible with
"slow" tokenizers (not powered by the *tokenizers* library), so the tokenizer will not be able to be
loaded in the corresponding "slow" tokenizer.
If `True`, will save the tokenizer in legacy format. If the "slow" tokenizer doesn't exist, a
`ValueError` is raised.
filename_prefix (`str`, *optional*):
A prefix to add to the names of the files saved by the tokenizer.
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
Returns:
A tuple of `str`: The files saved.
'''
pass
def _save_pretrained(self, save_directory: Union[str, os.PathLike], file_names: tuple[str], legacy_format: Optional[bool]=None, filename_prefix: Optional[str]=None) -> tuple[str]:
'''
Save a tokenizer using the slow-tokenizer/legacy format: vocabulary + added tokens.
Fast tokenizers can also be saved in a unique JSON file containing {config + vocab + added-tokens} using the
specific [`~tokenization_utils_fast.PreTrainedTokenizerFast._save_pretrained`]
'''
pass
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str]=None) -> tuple[str]:
'''
Save only the vocabulary of the tokenizer (vocabulary + added tokens).
This method won't save the configuration and special token mappings of the tokenizer. Use
[`~PreTrainedTokenizerFast._save_pretrained`] to save the whole state of the tokenizer.
Args:
save_directory (`str`):
The directory in which to save the vocabulary.
filename_prefix (`str`, *optional*):
An optional prefix to add to the names of the saved files.
Returns:
`Tuple(str)`: Paths to the files saved.
'''
pass
def tokenize(self, text: str, pair: Optional[str]=None, add_special_tokens: bool=False, **kwargs) -> list[str]:
'''
Converts a string into a sequence of tokens, replacing unknown tokens with the `unk_token`.
Args:
text (`str`):
The sequence to be encoded.
pair (`str`, *optional*):
A second sequence to be encoded with the first.
add_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to add the special tokens associated with the corresponding model.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific encode method. See details in
[`~PreTrainedTokenizerBase.__call__`]
Returns:
`list[str]`: The list of tokens.
'''
pass
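A minimal sketch of the unknown-token replacement described above. The whitespace splitting and the `[UNK]` string are illustrative assumptions; real tokenizers use model-specific subword algorithms.

```python
def simple_tokenize(text, vocab, unk_token="[UNK]"):
    # Replace any out-of-vocabulary word with the unk_token.
    return [word if word in vocab else unk_token for word in text.lower().split()]
```

For example, `simple_tokenize("Hello unknown world", {"hello", "world"})` yields `['hello', '[UNK]', 'world']`.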
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, '\n **kwargs: Passed along to the `.tokenize()` method.\n ', '\n Returns:\n `list[int]`, `torch.Tensor`, or `np.ndarray`: The tokenized ids of the text.\n ')
    def encode(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, **kwargs) -> list[int]:
'''
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing `self.convert_tokens_to_ids(self.tokenize(text))`.
Args:
text (`str`, `list[str]` or `list[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the
`tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
text_pair (`str`, `list[str]` or `list[int]`, *optional*):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
'''
pass
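The equivalence `encode(text) == convert_tokens_to_ids(tokenize(text))` can be sketched with a toy vocabulary. The names, ids, and whitespace splitting below are hypothetical, not this class's actual implementation.

```python
def simple_encode(text, vocab, unk_id=0):
    # "tokenize" step: naive whitespace split; real tokenizers are subword-based.
    tokens = text.lower().split()
    # "convert_tokens_to_ids" step: map each token to its id, unk_id otherwise.
    return [vocab.get(token, unk_id) for token in tokens]

vocab = {"hello": 7, "world": 8}
```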
def num_special_tokens_to_add(self, pair: bool=False) -> int:
pass
def _get_padding_truncation_strategies(self, padding=False, truncation=None, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs):
'''
Find the correct padding/truncation strategy
'''
pass
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def __call__(self, text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None]=None, text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, text_target: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None]=None, text_pair_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
'''
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
Args:
text (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_pair (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
`is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_target (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
text_pair_target (`str`, `list[str]`, `list[list[str]]`, *optional*):
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
'''
pass
def _call_one(self, text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]], text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
pass
def _is_valid_text_input(t):
pass
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, **kwargs) -> BatchEncoding:
'''
Tokenize and prepare for the model a sequence or a pair of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
text (`str`, `list[str]` or (for non-fast tokenizers) `list[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the
`tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
text_pair (`str`, `list[str]` or `list[int]`, *optional*):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized string using
the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids`
method).
'''
pass
def _encode_plus(self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]]=None, add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
pass
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
'''
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
<Tip warning={true}>
This method is deprecated, `__call__` should be used instead.
</Tip>
Args:
batch_text_or_text_pairs (`list[str]`, `list[tuple[str, str]]`, `list[list[str]]`, `list[tuple[list[str], list[str]]]`, and for not-fast tokenizers, also `list[list[int]]`, `list[tuple[list[int], list[int]]]`):
Batch of sequences or pair of sequences to be encoded. This can be a list of
string/string-sequences/int-sequences or a list of pair of string/string-sequences/int-sequence (see
details in `encode_plus`).
'''
pass
def _batch_encode_plus(self, batch_text_or_text_pairs: Union[list[TextInput], list[TextInputPair], list[PreTokenizedInput], list[PreTokenizedInputPair], list[EncodedInput], list[EncodedInputPair]], add_special_tokens: bool=True, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy=TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int]=None, stride: int=0, is_split_into_words: bool=False, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, split_special_tokens: bool=False, **kwargs) -> BatchEncoding:
pass
def pad(self, encoded_inputs: Union[BatchEncoding, list[BatchEncoding], dict[str, EncodedInput], dict[str, list[EncodedInput]], list[dict[str, EncodedInput]]], padding: Union[bool, str, PaddingStrategy]=True, max_length: Optional[int]=None, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_attention_mask: Optional[bool]=None, return_tensors: Optional[Union[str, TensorType]]=None, verbose: bool=True) -> BatchEncoding:
'''
Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length
in the batch.
Padding side (left/right) and padding token ids are defined at the tokenizer level (with `self.padding_side`,
`self.pad_token_id` and `self.pad_token_type_id`).
Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the
text followed by a call to the `pad` method to get a padded encoding.
<Tip>
If the `encoded_inputs` passed are dictionary of numpy arrays, or PyTorch tensors, the
result will use the same type unless you provide a different tensor type with `return_tensors`. In the case of
PyTorch tensors, you will lose the specific device of your tensors however.
</Tip>
Args:
encoded_inputs ([`BatchEncoding`], list of [`BatchEncoding`], `dict[str, list[int]]`, `dict[str, list[list[int]]]` or `list[dict[str, list[int]]]`):
Tokenized inputs. Can represent one input ([`BatchEncoding`] or `dict[str, list[int]]`) or a batch of
tokenized inputs (list of [`BatchEncoding`], *dict[str, list[list[int]]]* or *list[dict[str,
list[int]]]*) so you can use this method during preprocessing as well as in a PyTorch Dataloader
collate function.
Instead of `list[int]` you can have tensors (numpy arrays, or PyTorch tensors), see
the note above for the return type.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
- `True` or `'longest'` (default): Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different
lengths).
max_length (`int`, *optional*):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (`int`, *optional*):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
`>= 7.5` (Volta).
padding_side (`str`, *optional*):
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
return_attention_mask (`bool`, *optional*):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the `return_outputs` attribute.
[What are attention masks?](../glossary#attention-mask)
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return Numpy `np.ndarray` objects.
verbose (`bool`, *optional*, defaults to `True`):
Whether or not to print more information and warnings.
'''
pass
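The padding semantics above can be sketched in plain Python. This is a toy stand-in for the real method, which also handles tensors, `pad_to_multiple_of`, and `max_length` padding.

```python
def pad_batch(sequences, pad_id=0, padding_side="right"):
    # "longest" strategy: pad every sequence to the longest one in the batch,
    # and build the matching attention mask (1 = real token, 0 = padding).
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        pad = [pad_id] * (max_len - len(seq))
        if padding_side == "right":
            input_ids.append(seq + pad)
            attention_mask.append([1] * len(seq) + [0] * len(pad))
        else:
            input_ids.append(pad + seq)
            attention_mask.append([0] * len(pad) + [1] * len(seq))
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```

This mirrors why `pad` is usable as a DataLoader collate function: it takes ragged lists of ids and returns rectangular batches plus an attention mask.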
def create_token_type_ids_from_sequences(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None) -> list[int]:
'''
Create the token type IDs corresponding to the sequences passed. [What are token type
IDs?](../glossary#token-type-ids)
Should be overridden in a subclass if the model has a special way of building those.
Args:
token_ids_0 (`list[int]`): The first tokenized sequence.
token_ids_1 (`list[int]`, *optional*): The second tokenized sequence.
Returns:
`list[int]`: The token type ids.
'''
pass
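For models that do override this, a BERT-style sketch looks as follows. The `+2`/`+1` offsets account for the `[CLS]` and `[SEP]` tokens added around each sequence; other models use other conventions.

```python
def bert_style_token_type_ids(token_ids_0, token_ids_1=None):
    # BERT convention: 0s over "[CLS] A [SEP]", 1s over "B [SEP]".
    if token_ids_1 is None:
        return [0] * (len(token_ids_0) + 2)
    return [0] * (len(token_ids_0) + 2) + [1] * (len(token_ids_1) + 1)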
def build_inputs_with_special_tokens(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None) -> list[int]:
'''
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
Args:
token_ids_0 (`list[int]`): The first tokenized sequence.
token_ids_1 (`list[int]`, *optional*): The second tokenized sequence.
Returns:
`list[int]`: The model input with special tokens.
'''
pass
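A BERT-style override, for illustration. The ids `101`/`102` are BERT's `[CLS]`/`[SEP]`; other models add different special tokens (or none, as in this base implementation).

```python
def bert_style_build_inputs(token_ids_0, token_ids_1=None, cls_id=101, sep_id=102):
    # BERT convention: [CLS] A [SEP] for one sequence,
    # [CLS] A [SEP] B [SEP] for a pair.
    if token_ids_1 is None:
        return [cls_id] + token_ids_0 + [sep_id]
    return [cls_id] + token_ids_0 + [sep_id] + token_ids_1 + [sep_id]
```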
@add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING)
def prepare_for_model(self, ids: list[int], pair_ids: Optional[list[int]]=None, add_special_tokens: bool=True, padding: Union[bool, str, PaddingStrategy]=False, truncation: Union[bool, str, TruncationStrategy, None]=None, max_length: Optional[int]=None, stride: int=0, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, return_token_type_ids: Optional[bool]=None, return_attention_mask: Optional[bool]=None, return_overflowing_tokens: bool=False, return_special_tokens_mask: bool=False, return_offsets_mapping: bool=False, return_length: bool=False, verbose: bool=True, prepend_batch_axis: bool=False, **kwargs) -> BatchEncoding:
'''
Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It
adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user defined stride) for overflowing tokens. Please note that for *pair_ids*
different from `None` and *truncation_strategy = longest_first* or `True`, it is not possible to return
overflowing tokens. Such a combination of arguments will raise an error.
Args:
ids (`list[int]`):
Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
`convert_tokens_to_ids` methods.
pair_ids (`list[int]`, *optional*):
Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
and `convert_tokens_to_ids` methods.
'''
pass
def truncate_sequences(self, ids: list[int], pair_ids: Optional[list[int]]=None, num_tokens_to_remove: int=0, truncation_strategy: Union[str, TruncationStrategy]='longest_first', stride: int=0) -> tuple[list[int], list[int], list[int]]:
'''
Truncates a sequence pair in-place following the strategy.
Args:
ids (`list[int]`):
Tokenized input ids of the first sequence. Can be obtained from a string by chaining the `tokenize` and
`convert_tokens_to_ids` methods.
pair_ids (`list[int]`, *optional*):
Tokenized input ids of the second sequence. Can be obtained from a string by chaining the `tokenize`
and `convert_tokens_to_ids` methods.
num_tokens_to_remove (`int`, *optional*, defaults to 0):
Number of tokens to remove using the truncation strategy.
truncation_strategy (`str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `'longest_first'`):
The strategy to follow for truncation. Can be:
- `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will truncate
token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a
batch of pairs) is provided.
- `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `'do_not_truncate'`: No truncation (i.e., can output batch with sequence lengths greater
than the model maximum admissible input size).
stride (`int`, *optional*, defaults to 0):
If set to a positive number, the overflowing tokens returned will contain some tokens from the main
sequence returned. The value of this argument defines the number of additional tokens.
Returns:
`tuple[list[int], list[int], list[int]]`: The truncated `ids`, the truncated `pair_ids` and the list of
overflowing tokens. Note: The *longest_first* strategy returns an empty list of overflowing tokens if a
pair of sequences (or a batch of pairs) is provided.
'''
pass
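The `'longest_first'` strategy above removes tokens one at a time from the end of whichever sequence is currently longer. A minimal sketch of that loop (function name hypothetical; this is not the transformers implementation):

```python
def truncate_longest_first(ids, pair_ids, num_tokens_to_remove):
    """Sketch of the 'longest_first' strategy: drop one token at a time
    from the end of whichever sequence is currently longer."""
    ids, pair_ids = list(ids), list(pair_ids or [])
    for _ in range(num_tokens_to_remove):
        if not pair_ids or len(ids) > len(pair_ids):
            ids.pop()
        else:
            pair_ids.pop()
    return ids, pair_ids
```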
def _pad(self, encoded_inputs: Union[dict[str, EncodedInput], BatchEncoding], max_length: Optional[int]=None, padding_strategy: PaddingStrategy=PaddingStrategy.DO_NOT_PAD, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_attention_mask: Optional[bool]=None) -> dict:
'''
Pad encoded inputs (on left/right and up to predefined length or max length in the batch)
Args:
encoded_inputs:
Dictionary of tokenized inputs (`list[int]`) or batch of tokenized inputs (`list[list[int]]`).
max_length: maximum length of the returned list and optionally padding length (see below).
Will truncate by taking into account the special tokens.
padding_strategy: PaddingStrategy to use for padding.
- PaddingStrategy.LONGEST: Pad to the longest sequence in the batch
- PaddingStrategy.MAX_LENGTH: Pad to the max length
- PaddingStrategy.DO_NOT_PAD: Do not pad (default)
The tokenizer padding sides are defined in `padding_side` argument:
- 'left': pads on the left of the sequences
- 'right': pads on the right of the sequences
pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
`>= 7.5` (Volta).
padding_side:
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
return_attention_mask:
(optional) Set to False to avoid returning attention mask (default: set to model specifics)
'''
pass
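The padding behavior above — round the target length up when `pad_to_multiple_of` is set (the Tensor Core case), then pad on the configured side — can be sketched for a single sequence as follows (names hypothetical, not the transformers API):

```python
def pad_sequence(ids, max_length, pad_id=0, padding_side="right",
                 pad_to_multiple_of=None):
    """Sketch of single-sequence padding: optionally round max_length up
    to the next multiple, then pad on the left or right."""
    if pad_to_multiple_of is not None and max_length % pad_to_multiple_of:
        max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
    diff = max_length - len(ids)
    if diff <= 0:
        return list(ids)  # already long enough, nothing to pad
    padding = [pad_id] * diff
    return padding + list(ids) if padding_side == "left" else list(ids) + padding
```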
def convert_tokens_to_string(self, tokens: list[str]) -> str:
'''
Converts a sequence of tokens into a single string. The simplest way to do it is `" ".join(tokens)`, but we
often want to remove sub-word tokenization artifacts at the same time.
Args:
tokens (`list[str]`): The tokens to join into a string.
Returns:
`str`: The joined tokens.
'''
pass
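One concrete example of the sub-word artifact cleanup mentioned above, assuming WordPiece-style `##` continuation markers (other tokenizers use different artifacts, e.g. SentencePiece's `▁`):

```python
def wordpiece_tokens_to_string(tokens):
    """Sketch: join tokens with spaces, then strip WordPiece-style '##'
    continuation markers so sub-words rejoin their stems."""
    return " ".join(tokens).replace(" ##", "")
```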
def batch_decode(self, sequences: Union[list[int], list[list[int]], 'np.ndarray', 'torch.Tensor'], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> list[str]:
'''
Convert a list of lists of token ids into a list of strings by calling decode.
Args:
sequences (`Union[list[int], list[list[int]], np.ndarray, torch.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (`bool`, *optional*):
Whether or not to clean up the tokenization spaces. If `None`, will default to
`self.clean_up_tokenization_spaces`.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific decode method.
Returns:
`list[str]`: The list of decoded sentences.
'''
pass
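As the docstring says, batch decoding is just per-sequence decoding. A sketch, with `decode_one` standing in for the tokenizer's `decode` method (hypothetical helper):

```python
def batch_decode(decode_one, sequences, **kwargs):
    """Sketch: apply a single-sequence decode function to each sequence,
    forwarding any keyword arguments unchanged."""
    return [decode_one(seq, **kwargs) for seq in sequences]
```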
def decode(self, token_ids: Union[int, list[int], 'np.ndarray', 'torch.Tensor'], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> str:
'''
Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
Args:
token_ids (`Union[int, list[int], np.ndarray, torch.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (`bool`, *optional*):
Whether or not to clean up the tokenization spaces. If `None`, will default to
`self.clean_up_tokenization_spaces`.
kwargs (additional keyword arguments, *optional*):
Will be passed to the underlying model specific decode method.
Returns:
`str`: The decoded sentence.
'''
pass
def _decode(self, token_ids: Union[int, list[int]], skip_special_tokens: bool=False, clean_up_tokenization_spaces: Optional[bool]=None, **kwargs) -> str:
pass
def get_special_tokens_mask(self, token_ids_0: list[int], token_ids_1: Optional[list[int]]=None, already_has_special_tokens: bool=False) -> list[int]:
'''
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` or `encode_plus` methods.
Args:
token_ids_0 (`list[int]`):
List of ids of the first sequence.
token_ids_1 (`list[int]`, *optional*):
List of ids of the second sequence.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
'''
pass
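For intuition, the 0/1 mask described above can be built from the two sequence lengths, assuming a BERT-style layout `[CLS] seq0 [SEP] (seq1 [SEP])` — an assumption, since special-token schemes vary per model:

```python
def bert_style_special_tokens_mask(len_0, len_1=None):
    """Sketch: 1 marks a special token, 0 a sequence token, for the
    assumed layout [CLS] seq0 [SEP] (seq1 [SEP])."""
    mask = [1] + [0] * len_0 + [1]
    if len_1 is not None:
        mask += [0] * len_1 + [1]
    return mask
```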
@staticmethod
def clean_up_tokenization(out_string: str) -> str:
'''
Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
Args:
out_string (`str`): The text to clean up.
Returns:
`str`: The cleaned-up string.
'''
pass
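The cleanup described above amounts to a handful of string replacements: collapse spaces before punctuation and re-attach common English contractions. An approximating sketch, not the exact library rule set:

```python
def clean_up_tokenization(out_string):
    """Sketch: collapse spaces before punctuation and re-attach common
    English contractions left behind by whitespace tokenization."""
    for before, after in [(" .", "."), (" ?", "?"), (" !", "!"), (" ,", ","),
                          (" n't", "n't"), (" 'm", "'m"), (" 's", "'s"),
                          (" 've", "'ve"), (" 're", "'re")]:
        out_string = out_string.replace(before, after)
    return out_string
```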
def _eventual_warn_about_too_long_sequence(self, ids: list[int], max_length: Optional[int], verbose: bool):
'''
Depending on the input and internal state we might trigger a warning about a sequence that is too long for its
corresponding model
Args:
ids (`list[int]`): The ids produced by the tokenization
max_length (`int`, *optional*): The max_length desired (does not trigger a warning if it is set)
verbose (`bool`): Whether or not to print more information and warnings.
'''
pass
def _switch_to_input_mode(self):
'''
Private method to put the tokenizer in input mode (when it has different modes for input/outputs)
'''
pass
def _switch_to_target_mode(self):
'''
Private method to put the tokenizer in target mode (when it has different modes for input/outputs)
'''
pass
@contextmanager
def as_target_tokenizer(self):
'''
Temporarily sets the tokenizer for encoding the targets. Useful for tokenizers associated with
sequence-to-sequence models that need a slightly different processing for the labels.
'''
pass
@classmethod
def register_for_auto_class(cls, auto_class='AutoTokenizer'):
'''
Register this class with a given auto class. This should only be used for custom tokenizers as the ones in the
library are already mapped with `AutoTokenizer`.
Args:
auto_class (`str` or `type`, *optional*, defaults to `"AutoTokenizer"`):
The auto class to register this new tokenizer with.
'''
pass
def prepare_seq2seq_batch(self, src_texts: list[str], tgt_texts: Optional[list[str]]=None, max_length: Optional[int]=None, max_target_length: Optional[int]=None, padding: str='longest', return_tensors: Optional[str]=None, truncation: bool=True, **kwargs) -> BatchEncoding:
'''
Prepare model inputs for translation. For best performance, translate one sentence at a time.
Arguments:
src_texts (`list[str]`):
List of documents to summarize or source language texts.
tgt_texts (`list`, *optional*):
List of summaries or target language texts.
max_length (`int`, *optional*):
Controls the maximum length for encoder inputs (documents to summarize or source language texts). If
left unset or set to `None`, this will use the predefined model maximum length if a maximum length is
required by one of the truncation/padding parameters. If the model has no specific maximum input length
(like XLNet) truncation/padding to a maximum length will be deactivated.
max_target_length (`int`, *optional*):
Controls the maximum length of decoder inputs (target language texts or summaries). If left unset or set
to `None`, this will use the max_length value.
padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `'longest'`):
Activates and controls padding. Accepts the following values:
- `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
- `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
acceptable input length for the model if that argument is not provided.
- `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different
lengths).
return_tensors (`str` or [`~utils.TensorType`], *optional*):
If set, will return tensors instead of list of python integers. Acceptable values are:
- `'pt'`: Return PyTorch `torch.Tensor` objects.
- `'np'`: Return Numpy `np.ndarray` objects.
truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `True`):
Activates and controls truncation. Accepts the following values:
- `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
- `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
- `False` or `'do_not_truncate'`: No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
**kwargs:
Additional keyword arguments passed along to `self.__call__`.
Return:
[`BatchEncoding`]: A [`BatchEncoding`] with the following fields:
- **input_ids** -- List of token ids to be fed to the encoder.
- **attention_mask** -- List of indices specifying which tokens should be attended to by the model.
- **labels** -- List of token ids for tgt_texts.
The full set of keys `[input_ids, attention_mask, labels]`, will only be returned if tgt_texts is passed.
Otherwise, input_ids, attention_mask will be the only keys.
'''
pass
| Metric | Value |
| --- | --- |
| total_program_units | 69 |
| total_doc_str | 36 |
| AvgCountLine | 56 |
| AvgCountLineBlank | 5 |
| AvgCountLineCode | 36 |
| AvgCountLineComment | 15 |
| AvgCyclomatic | 7 |
| CommentToCodeRatio | 0.41 |
| CountClassBase | 2 |
| CountClassCoupled | 26 |
| CountClassCoupledModified | 3 |
| CountClassDerived | 2 |
| CountDeclInstanceMethod | 41 |
| CountDeclInstanceVariable | 12 |
| CountDeclMethod | 47 |
| CountDeclMethodAll | 61 |
| CountLine | 2,744 |
| CountLineBlank | 302 |
| CountLineCode | 1,740 |
| CountLineCodeDecl | 552 |
| CountLineCodeExe | 1,362 |
| CountLineComment | 714 |
| CountStmt | 810 |
| CountStmtDecl | 217 |
| CountStmtExe | 758 |
| MaxCyclomatic | 50 |
| MaxInheritanceTree | 1 |
| MaxNesting | 7 |
| SumCyclomatic | 329 |