# ResNeSt
A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: \\( V = \text{Concat}\{V^{1},V^{2},\cdots,V^{K}\} \\). As in standard residual blocks, the final output \\( Y \\) of our Split-Attention block is produced using a shortcut connection: \\( Y=V+X \\), if the input and output feature maps share the same shape. For blocks with a stride, an appropriate transformation \\( \mathcal{T} \\) is applied to the shortcut connection to align the output shapes: \\( Y=V+\mathcal{T}(X) \\). For example, \\( \mathcal{T} \\) can be a strided convolution or a combined convolution with pooling.
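The aggregation step above can be illustrated with a toy NumPy sketch for a single cardinal group: the radix splits are fused by summation, globally pooled, passed through dense layers, and then recombined with a softmax taken across the radix axis. The random projection matrices here are hypothetical stand-ins for the learned fully-connected layers; this is a shape-level illustration, not the timm implementation.

```python
import numpy as np

def rsoftmax(logits):
    """Softmax across the radix axis (axis 0), per channel."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def split_attention(x, rng=np.random.default_rng(0)):
    """Toy split-attention for one cardinal group.

    x: (radix, C, H, W) -- the radix splits of the feature map.
    Returns the fused map of shape (C, H, W).
    """
    r, c, h, w = x.shape
    u = x.sum(axis=0)                    # fuse splits by summation
    s = u.mean(axis=(1, 2))              # global average pooling -> (C,)
    w1 = rng.standard_normal((c, c))     # stand-in for shared FC weights
    w2 = rng.standard_normal((r, c, c))  # stand-in for per-radix FC weights
    z = np.maximum(s @ w1, 0)            # shared FC + ReLU
    logits = np.stack([z @ w2[i] for i in range(r)])  # (radix, C)
    attn = rsoftmax(logits)              # attention weights sum to 1 per channel
    return (attn[:, :, None, None] * x).sum(axis=0)   # weighted fuse -> V
```

The block output would then be `Y = V + X` (or `Y = V + T(X)` for strided blocks), exactly as in the shortcut formulation above.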
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('resnest101e', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predicted class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `resnest101e`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction); just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('resnest101e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../scripts) for training a new model afresh.
## Citation
```BibTeX
@misc{zhang2020resnest,
title={ResNeSt: Split-Attention Networks},
author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola},
year={2020},
eprint={2004.08955},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: ResNeSt
Paper:
Title: 'ResNeSt: Split-Attention Networks'
URL: https://paperswithcode.com/paper/resnest-split-attention-networks
Models:
- Name: resnest101e
In Collection: ResNeSt
Metadata:
FLOPs: 17423183648
Parameters: 48280000
File Size: 193782911
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest101e
LR: 0.1
Epochs: 270
Layers: 101
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 4096
Image Size: '256'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L182
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.88%
Top 5 Accuracy: 96.31%
- Name: resnest14d
In Collection: ResNeSt
Metadata:
FLOPs: 3548594464
Parameters: 10610000
File Size: 42562639
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest14d
LR: 0.1
Epochs: 270
Layers: 14
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8192
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L148
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest14-9c8fe254.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 75.51%
Top 5 Accuracy: 92.52%
- Name: resnest200e
In Collection: ResNeSt
Metadata:
FLOPs: 45954387872
Parameters: 70200000
File Size: 193782911
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest200e
LR: 0.1
Epochs: 270
Layers: 200
Dropout: 0.2
Crop Pct: '0.909'
Momentum: 0.9
Batch Size: 2048
Image Size: '320'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L194
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.85%
Top 5 Accuracy: 96.89%
- Name: resnest269e
In Collection: ResNeSt
Metadata:
FLOPs: 100830307104
Parameters: 110930000
File Size: 445402691
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest269e
LR: 0.1
Epochs: 270
Layers: 269
Dropout: 0.2
Crop Pct: '0.928'
Momentum: 0.9
Batch Size: 2048
Image Size: '416'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L206
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest269-0cc87c48.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 84.53%
Top 5 Accuracy: 96.99%
- Name: resnest26d
In Collection: ResNeSt
Metadata:
FLOPs: 4678918720
Parameters: 17070000
File Size: 68470242
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest26d
LR: 0.1
Epochs: 270
Layers: 26
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8192
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L159
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest26-50eb607c.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.48%
Top 5 Accuracy: 94.3%
- Name: resnest50d
In Collection: ResNeSt
Metadata:
FLOPs: 6937106336
Parameters: 27480000
File Size: 110273258
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest50d
LR: 0.1
Epochs: 270
Layers: 50
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8192
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L170
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50-528c19ca.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.96%
Top 5 Accuracy: 95.38%
- Name: resnest50d_1s4x24d
In Collection: ResNeSt
Metadata:
FLOPs: 5686764544
Parameters: 25680000
File Size: 103045531
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest50d_1s4x24d
LR: 0.1
Epochs: 270
Layers: 50
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8192
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L229
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_1s4x24d-d4a4f76f.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 81.0%
Top 5 Accuracy: 95.33%
- Name: resnest50d_4s2x40d
In Collection: ResNeSt
Metadata:
FLOPs: 5657064720
Parameters: 30420000
File Size: 122133282
Architecture:
- 1x1 Convolution
- Convolution
- Dense Connections
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Connection
- Softmax
- Split Attention
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- DropBlock
- Label Smoothing
- Mixup
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 64x NVIDIA V100 GPUs
ID: resnest50d_4s2x40d
LR: 0.1
Epochs: 270
Layers: 50
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8192
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L218
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_4s2x40d-41d14ed0.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 81.11%
Top 5 Accuracy: 95.55%
-->
# (Tensorflow) EfficientNet
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice, which scales these factors arbitrarily, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use \\( 2^N \\) times more computational resources, then we can simply increase the network depth by \\( \alpha ^ N \\), width by \\( \beta ^ N \\), and image size by \\( \gamma ^ N \\), where \\( \alpha, \beta, \gamma \\) are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient \\( \phi \\) to uniformly scale network width, depth, and resolution in a principled way.
The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
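The arithmetic of compound scaling is simple enough to show directly. The coefficients below (\\( \alpha=1.2, \beta=1.1, \gamma=1.15 \\)) are the grid-search values reported in the EfficientNet paper, chosen so that \\( \alpha \cdot \beta^2 \cdot \gamma^2 \approx 2 \\), meaning total FLOPs grow by roughly \\( 2^\phi \\):

```python
# EfficientNet compound scaling: depth = alpha^phi, width = beta^phi,
# resolution = gamma^phi. Coefficients are the paper's grid-search values.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

Note that actual EfficientNet variants round these multipliers to valid layer counts, channel widths, and image sizes; the function is only the underlying scaling rule.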
The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).
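The squeeze-and-excitation idea can be sketched in a few lines of NumPy: channels are summarized by global average pooling ("squeeze"), passed through a two-layer bottleneck, and the resulting per-channel gates rescale the feature map ("excite"). The random weights here are hypothetical stand-ins for learned parameters, and a ReLU is used where EfficientNet uses Swish; this is an illustration of the mechanism, not the actual block.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Toy squeeze-and-excitation gate.

    x:  (C, H, W) feature map
    w1: (C, C_r) squeeze FC weights (C_r = reduced channel count)
    w2: (C_r, C) excite FC weights
    """
    s = x.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    z = np.maximum(s @ w1, 0)          # bottleneck FC + ReLU
    g = 1 / (1 + np.exp(-(z @ w2)))    # FC + sigmoid: gates in (0, 1)
    return x * g[:, None, None]        # excite: rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 2))       # reduction ratio 4
w2 = rng.standard_normal((2, 8))
y = squeeze_excite(x, w1, w2)
```

Because the gates lie strictly between 0 and 1, each channel is attenuated according to the pooled global context, which is what lets the block recalibrate channel responses cheaply.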
The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu).
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('tf_efficientnet_b0', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predicted class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `tf_efficientnet_b0`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction); just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('tf_efficientnet_b0', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../scripts) for training a new model afresh.
## Citation
```BibTeX
@misc{tan2020efficientnet,
title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
author={Mingxing Tan and Quoc V. Le},
year={2020},
eprint={1905.11946},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
Type: model-index
Collections:
- Name: TF EfficientNet
Paper:
Title: 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks'
URL: https://paperswithcode.com/paper/efficientnet-rethinking-model-scaling-for
Models:
- Name: tf_efficientnet_b0
In Collection: TF EfficientNet
Metadata:
FLOPs: 488688572
Parameters: 5290000
File Size: 21383997
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
Training Resources: TPUv3 Cloud TPU
ID: tf_efficientnet_b0
LR: 0.256
Epochs: 350
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 2048
Image Size: '224'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1241
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_aa-827b6e33.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 76.85%
Top 5 Accuracy: 93.23%
- Name: tf_efficientnet_b1
In Collection: TF EfficientNet
Metadata:
FLOPs: 883633200
Parameters: 7790000
File Size: 31512534
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b1
LR: 0.256
Epochs: 350
Crop Pct: '0.882'
Momentum: 0.9
Batch Size: 2048
Image Size: '240'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1251
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_aa-ea7a6ee0.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.84%
Top 5 Accuracy: 94.2%
- Name: tf_efficientnet_b2
In Collection: TF EfficientNet
Metadata:
FLOPs: 1234321170
Parameters: 9110000
File Size: 36797929
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b2
LR: 0.256
Epochs: 350
Crop Pct: '0.89'
Momentum: 0.9
Batch Size: 2048
Image Size: '260'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1261
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_aa-60c94f97.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.07%
Top 5 Accuracy: 94.9%
- Name: tf_efficientnet_b3
In Collection: TF EfficientNet
Metadata:
FLOPs: 2275247568
Parameters: 12230000
File Size: 49381362
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b3
LR: 0.256
Epochs: 350
Crop Pct: '0.904'
Momentum: 0.9
Batch Size: 2048
Image Size: '300'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1271
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_aa-84b4657e.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 81.65%
Top 5 Accuracy: 95.72%
- Name: tf_efficientnet_b4
In Collection: TF EfficientNet
Metadata:
FLOPs: 5749638672
Parameters: 19340000
File Size: 77989689
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
Training Resources: TPUv3 Cloud TPU
ID: tf_efficientnet_b4
LR: 0.256
Epochs: 350
Crop Pct: '0.922'
Momentum: 0.9
Batch Size: 2048
Image Size: '380'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1281
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_aa-818f208c.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.03%
Top 5 Accuracy: 96.3%
- Name: tf_efficientnet_b5
In Collection: TF EfficientNet
Metadata:
FLOPs: 13176501888
Parameters: 30390000
File Size: 122403150
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b5
LR: 0.256
Epochs: 350
Crop Pct: '0.934'
Momentum: 0.9
Batch Size: 2048
Image Size: '456'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1291
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ra-9a3e5369.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.81%
Top 5 Accuracy: 96.75%
- Name: tf_efficientnet_b6
In Collection: TF EfficientNet
Metadata:
FLOPs: 24180518488
Parameters: 43040000
File Size: 173232007
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b6
LR: 0.256
Epochs: 350
Crop Pct: '0.942'
Momentum: 0.9
Batch Size: 2048
Image Size: '528'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1301
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_aa-80ba17e4.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 84.11%
Top 5 Accuracy: 96.89%
- Name: tf_efficientnet_b7
In Collection: TF EfficientNet
Metadata:
FLOPs: 48205304880
Parameters: 66349999
File Size: 266850607
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b7
LR: 0.256
Epochs: 350
Crop Pct: '0.949'
Momentum: 0.9
Batch Size: 2048
Image Size: '600'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1312
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ra-6c08e654.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 84.93%
Top 5 Accuracy: 97.2%
- Name: tf_efficientnet_b8
In Collection: TF EfficientNet
Metadata:
FLOPs: 80962956270
Parameters: 87410000
File Size: 351379853
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_b8
LR: 0.256
Epochs: 350
Crop Pct: '0.954'
Momentum: 0.9
Batch Size: 2048
Image Size: '672'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1323
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b8_ra-572d5dd9.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 85.35%
Top 5 Accuracy: 97.39%
- Name: tf_efficientnet_el
In Collection: TF EfficientNet
Metadata:
FLOPs: 9356616096
Parameters: 10590000
File Size: 42800271
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: tf_efficientnet_el
Crop Pct: '0.904'
Image Size: '300'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1551
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_el-5143854e.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.45%
Top 5 Accuracy: 95.17%
- Name: tf_efficientnet_em
In Collection: TF EfficientNet
Metadata:
FLOPs: 3636607040
Parameters: 6900000
File Size: 27933644
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: tf_efficientnet_em
Crop Pct: '0.882'
Image Size: '240'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1541
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_em-e78cfe58.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.71%
Top 5 Accuracy: 94.33%
- Name: tf_efficientnet_es
In Collection: TF EfficientNet
Metadata:
FLOPs: 2057577472
Parameters: 5440000
File Size: 22008479
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: tf_efficientnet_es
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1531
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_es-ca1afbfe.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.28%
Top 5 Accuracy: 93.6%
- Name: tf_efficientnet_l2_ns_475
In Collection: TF EfficientNet
Metadata:
FLOPs: 217795669644
Parameters: 480310000
File Size: 1925950424
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- FixRes
- Label Smoothing
- Noisy Student
- RMSProp
- RandAugment
- Weight Decay
Training Data:
- ImageNet
- JFT-300M
Training Resources: TPUv3 Cloud TPU
ID: tf_efficientnet_l2_ns_475
LR: 0.128
Epochs: 350
Dropout: 0.5
Crop Pct: '0.936'
Momentum: 0.9
Batch Size: 2048
Image Size: '475'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Stochastic Depth Survival: 0.8
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1509
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_l2_ns_475-bebbd00a.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 88.24%
Top 5 Accuracy: 98.55%
-->
""" ONNX export script
Export PyTorch models as ONNX graphs.
This export script originally started as an adaptation of code snippets found at
https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
The default parameters work with PyTorch 1.6 and ONNX 1.7 and produce an optimal ONNX graph
for hosting in the ONNX runtime (see onnx_validate.py). To export an ONNX model compatible
with caffe2 (see caffe2_benchmark.py and caffe2_validate.py), the --keep-init and --aten-fallback
flags are currently required.
Older versions of PyTorch/ONNX (tested PyTorch 1.4, ONNX 1.5) do not need extra flags for
caffe2 compatibility, but they produce a model that isn't as fast running on ONNX runtime.
Most new releases of PyTorch and ONNX cause some sort of breakage in the export / usage of ONNX models.
Please do your research and search the ONNX and PyTorch issue trackers before asking me. Thanks.
Copyright 2020 Ross Wightman
"""
import argparse
import timm
from timm.utils.model import reparameterize_model
from timm.utils.onnx import onnx_export
parser = argparse.ArgumentParser(description='PyTorch ONNX export')
parser.add_argument('output', metavar='ONNX_FILE',
help='output model filename')
parser.add_argument('--model', '-m', metavar='MODEL', default='mobilenetv3_large_100',
help='model architecture (default: mobilenetv3_large_100)')
parser.add_argument('--opset', type=int, default=None,
help='ONNX opset to use (default: 10)')
parser.add_argument('--keep-init', action='store_true', default=False,
help='Keep initializers as input. Needed for Caffe2 compatible export in newer PyTorch/ONNX.')
parser.add_argument('--aten-fallback', action='store_true', default=False,
help='Fallback to ATEN ops. Helps fix AdaptiveAvgPool issue with Caffe2 in newer PyTorch/ONNX.')
parser.add_argument('--dynamic-size', action='store_true', default=False,
                    help='Export model with dynamic width/height. Not recommended for "tf" models with SAME padding.')
parser.add_argument('--check-forward', action='store_true', default=False,
help='Do a full check of torch vs onnx forward after export.')
parser.add_argument('-b', '--batch-size', default=1, type=int,
metavar='N', help='mini-batch size (default: 1)')
parser.add_argument('--img-size', default=None, type=int,
metavar='N', help='Input image dimension, uses model default if empty')
parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN',
help='Override mean pixel value of dataset')
parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD',
                    help='Override std deviation of dataset')
parser.add_argument('--num-classes', type=int, default=1000,
help='Number classes in dataset')
parser.add_argument('--checkpoint', default='', type=str, metavar='PATH',
help='path to checkpoint (default: none)')
parser.add_argument('--reparam', default=False, action='store_true',
help='Reparameterize model')
parser.add_argument('--training', default=False, action='store_true',
help='Export in training mode (default is eval)')
parser.add_argument('--verbose', default=False, action='store_true',
help='Extra stdout output')
parser.add_argument('--dynamo', default=False, action='store_true',
help='Use torch dynamo export.')
def main():
args = parser.parse_args()
args.pretrained = True
if args.checkpoint:
args.pretrained = False
print("==> Creating PyTorch {} model".format(args.model))
# NOTE exportable=True flag disables autofn/jit scripted activations and uses Conv2dSameExport layers
# for models using SAME padding
model = timm.create_model(
args.model,
num_classes=args.num_classes,
in_chans=3,
pretrained=args.pretrained,
checkpoint_path=args.checkpoint,
exportable=True,
)
if args.reparam:
model = reparameterize_model(model)
onnx_export(
model,
args.output,
opset=args.opset,
dynamic_size=args.dynamic_size,
aten_fallback=args.aten_fallback,
keep_initializers=args.keep_init,
check_forward=args.check_forward,
training=args.training,
verbose=args.verbose,
use_dynamo=args.dynamo,
input_size=(3, args.img_size, args.img_size),
batch_size=args.batch_size,
)
if __name__ == '__main__':
main()
|
pytorch-image-models/onnx_export.py/0
|
{
"file_path": "pytorch-image-models/onnx_export.py",
"repo_id": "pytorch-image-models",
"token_count": 1811
}
| 202
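The argument-parsing and pretrained-toggle logic in the export script above can be sketched in isolation. The following is a hypothetical, minimal mirror covering only a few of the flags (names and defaults taken from the script, everything else trimmed), not the full export flow:

```python
import argparse

# Minimal sketch of the export script's CLI: positional output path
# plus the model and checkpoint options.
parser = argparse.ArgumentParser(description='ONNX export sketch')
parser.add_argument('output', metavar='ONNX_FILE', help='output model filename')
parser.add_argument('--model', '-m', default='mobilenetv3_large_100')
parser.add_argument('--checkpoint', default='', type=str)

args = parser.parse_args(['model.onnx', '-m', 'resnet50'])
# An empty --checkpoint means "load pretrained weights", mirroring main() above.
pretrained = not args.checkpoint
```

With a checkpoint path supplied, `pretrained` would flip to `False` and the script would restore weights from disk instead.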
|
import torch
import torch.nn as nn
from timm.layers import create_act_layer, set_layer_config
import importlib
import os
torch_backend = os.environ.get('TORCH_BACKEND')
if torch_backend is not None:
importlib.import_module(torch_backend)
torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
class MLP(nn.Module):
def __init__(self, act_layer="relu", inplace=True):
super(MLP, self).__init__()
self.fc1 = nn.Linear(1000, 100)
self.act = create_act_layer(act_layer, inplace=inplace)
self.fc2 = nn.Linear(100, 10)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.fc2(x)
return x
def _run_act_layer_grad(act_type, inplace=True):
x = torch.rand(10, 1000) * 10
m = MLP(act_layer=act_type, inplace=inplace)
def _run(x, act_layer=''):
if act_layer:
# replace act layer if set
m.act = create_act_layer(act_layer, inplace=inplace)
out = m(x)
l = (out - 0).pow(2).sum()
return l
x = x.to(device=torch_device)
m.to(device=torch_device)
out_me = _run(x)
with set_layer_config(scriptable=True):
out_jit = _run(x, act_type)
assert torch.isclose(out_jit, out_me)
with set_layer_config(no_jit=True):
out_basic = _run(x, act_type)
assert torch.isclose(out_basic, out_jit)
def test_swish_grad():
for _ in range(100):
_run_act_layer_grad('swish')
def test_mish_grad():
for _ in range(100):
_run_act_layer_grad('mish')
def test_hard_sigmoid_grad():
for _ in range(100):
_run_act_layer_grad('hard_sigmoid', inplace=None)
def test_hard_swish_grad():
for _ in range(100):
_run_act_layer_grad('hard_swish')
def test_hard_mish_grad():
for _ in range(100):
_run_act_layer_grad('hard_mish')
|
pytorch-image-models/tests/test_layers.py/0
|
{
"file_path": "pytorch-image-models/tests/test_layers.py",
"repo_id": "pytorch-image-models",
"token_count": 871
}
| 203
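For reference, the activations these tests exercise reduce to simple closed forms. A pure-Python sketch (no torch; definitions assumed from the commonly used formulations, not pulled from timm's implementations):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def swish(x: float) -> float:
    # swish / SiLU: x * sigmoid(x)
    return x * sigmoid(x)

def hard_sigmoid(x: float) -> float:
    # piecewise-linear sigmoid approximation: relu6(x + 3) / 6
    return min(max(x + 3.0, 0.0), 6.0) / 6.0

def hard_swish(x: float) -> float:
    # x * hard_sigmoid(x)
    return x * hard_sigmoid(x)
```

The test file's job is to confirm that scripted, fused, and eager variants of these functions agree numerically, which is why it runs the same input through each layer configuration.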
|
import csv
import os
import pkgutil
import re
from typing import Dict, List, Optional, Union
from .dataset_info import DatasetInfo
# NOTE no ambiguity w.r.t. mapping from # classes to ImageNet subset so far, but likely to change
_NUM_CLASSES_TO_SUBSET = {
1000: 'imagenet-1k',
11221: 'imagenet-21k-miil', # miil subset of fall11
11821: 'imagenet-12k', # timm specific 12k subset of fall11
21841: 'imagenet-22k', # as in fall11.tar
    21842: 'imagenet-22k-ms',  # a Microsoft (for FocalNet) remapping of 22k that moves ImageNet-1k classes to the first 1000
21843: 'imagenet-21k-goog', # Google's ImageNet full has two classes not in fall11
}
_SUBSETS = {
'imagenet1k': 'imagenet_synsets.txt',
'imagenet12k': 'imagenet12k_synsets.txt',
'imagenet22k': 'imagenet22k_synsets.txt',
'imagenet21k': 'imagenet21k_goog_synsets.txt',
'imagenet21kgoog': 'imagenet21k_goog_synsets.txt',
'imagenet21kmiil': 'imagenet21k_miil_synsets.txt',
'imagenet22kms': 'imagenet22k_ms_synsets.txt',
}
_LEMMA_FILE = 'imagenet_synset_to_lemma.txt'
_DEFINITION_FILE = 'imagenet_synset_to_definition.txt'
def infer_imagenet_subset(model_or_cfg) -> Optional[str]:
if isinstance(model_or_cfg, dict):
num_classes = model_or_cfg.get('num_classes', None)
else:
num_classes = getattr(model_or_cfg, 'num_classes', None)
if not num_classes:
pretrained_cfg = getattr(model_or_cfg, 'pretrained_cfg', {})
# FIXME at some point pretrained_cfg should include dataset-tag,
# which will be more robust than a guess based on num_classes
num_classes = pretrained_cfg.get('num_classes', None)
if not num_classes or num_classes not in _NUM_CLASSES_TO_SUBSET:
return None
return _NUM_CLASSES_TO_SUBSET[num_classes]
class ImageNetInfo(DatasetInfo):
def __init__(self, subset: str = 'imagenet-1k'):
super().__init__()
subset = re.sub(r'[-_\s]', '', subset.lower())
assert subset in _SUBSETS, f'Unknown imagenet subset {subset}.'
        # WordNet synsets (part-of-speech + offset) are the unique class label names for ImageNet classifiers
synset_file = _SUBSETS[subset]
synset_data = pkgutil.get_data(__name__, os.path.join('_info', synset_file))
self._synsets = synset_data.decode('utf-8').splitlines()
# WordNet lemmas (canonical dictionary form of word) and definitions are used to build
# the class descriptions. If detailed=True both are used, otherwise just the lemmas.
lemma_data = pkgutil.get_data(__name__, os.path.join('_info', _LEMMA_FILE))
reader = csv.reader(lemma_data.decode('utf-8').splitlines(), delimiter='\t')
self._lemmas = dict(reader)
definition_data = pkgutil.get_data(__name__, os.path.join('_info', _DEFINITION_FILE))
reader = csv.reader(definition_data.decode('utf-8').splitlines(), delimiter='\t')
self._definitions = dict(reader)
def num_classes(self):
return len(self._synsets)
def label_names(self):
return self._synsets
def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
if as_dict:
return {label: self.label_name_to_description(label, detailed=detailed) for label in self._synsets}
else:
return [self.label_name_to_description(label, detailed=detailed) for label in self._synsets]
def index_to_label_name(self, index) -> str:
assert 0 <= index < len(self._synsets), \
f'Index ({index}) out of range for dataset with {len(self._synsets)} classes.'
return self._synsets[index]
def index_to_description(self, index: int, detailed: bool = False) -> str:
label = self.index_to_label_name(index)
return self.label_name_to_description(label, detailed=detailed)
def label_name_to_description(self, label: str, detailed: bool = False) -> str:
if detailed:
description = f'{self._lemmas[label]}: {self._definitions[label]}'
else:
description = f'{self._lemmas[label]}'
return description
|
pytorch-image-models/timm/data/imagenet_info.py/0
|
{
"file_path": "pytorch-image-models/timm/data/imagenet_info.py",
"repo_id": "pytorch-image-models",
"token_count": 1733
}
| 204
|
from multiprocessing import Value
class SharedCount:
def __init__(self, epoch: int = 0):
self.shared_epoch = Value('i', epoch)
@property
def value(self):
return self.shared_epoch.value
@value.setter
def value(self, epoch):
self.shared_epoch.value = epoch
|
pytorch-image-models/timm/data/readers/shared_count.py/0
|
{
"file_path": "pytorch-image-models/timm/data/readers/shared_count.py",
"repo_id": "pytorch-image-models",
"token_count": 122
}
| 205
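`SharedCount` is a thin property wrapper over `multiprocessing.Value`, so that forked DataLoader worker processes observe epoch changes made in the training loop. A usage-style sketch (the `EpochCounter` name is hypothetical; the structure mirrors the class above):

```python
from multiprocessing import Value

class EpochCounter:
    """Property-wrapped multiprocessing.Value ('i' -> shared C int) so
    forked workers see epoch updates made by the parent process."""
    def __init__(self, epoch: int = 0):
        self._shared = Value('i', epoch)

    @property
    def value(self) -> int:
        return self._shared.value

    @value.setter
    def value(self, epoch: int) -> None:
        self._shared.value = epoch

counter = EpochCounter(0)
counter.value = 3  # a worker reading counter.value after fork would see 3
```

A plain Python int would not work here: each forked worker would keep its own stale copy.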
|
""" PyTorch Conditionally Parameterized Convolution (CondConv)
Paper: CondConv: Conditionally Parameterized Convolutions for Efficient Inference
(https://arxiv.org/abs/1904.04971)
Hacked together by / Copyright 2020 Ross Wightman
"""
import math
from functools import partial
import numpy as np
import torch
from torch import nn as nn
from torch.nn import functional as F
from .helpers import to_2tuple
from .conv2d_same import conv2d_same
from .padding import get_padding_value
def get_condconv_initializer(initializer, num_experts, expert_shape):
def condconv_initializer(weight):
"""CondConv initializer function."""
num_params = np.prod(expert_shape)
if (len(weight.shape) != 2 or weight.shape[0] != num_experts or
weight.shape[1] != num_params):
            raise ValueError(
                'CondConv variables must have shape [num_experts, num_params]')
for i in range(num_experts):
initializer(weight[i].view(expert_shape))
return condconv_initializer
class CondConv2d(nn.Module):
""" Conditionally Parameterized Convolution
Inspired by: https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/condconv/condconv_layers.py
Grouped convolution hackery for parallel execution of the per-sample kernel filters inspired by this discussion:
https://github.com/pytorch/pytorch/issues/17983
"""
__constants__ = ['in_channels', 'out_channels', 'dynamic_padding']
def __init__(self, in_channels, out_channels, kernel_size=3,
stride=1, padding='', dilation=1, groups=1, bias=False, num_experts=4):
super(CondConv2d, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = to_2tuple(kernel_size)
self.stride = to_2tuple(stride)
padding_val, is_padding_dynamic = get_padding_value(
padding, kernel_size, stride=stride, dilation=dilation)
        self.dynamic_padding = is_padding_dynamic  # flag used in forward to work with torchscript
self.padding = to_2tuple(padding_val)
self.dilation = to_2tuple(dilation)
self.groups = groups
self.num_experts = num_experts
self.weight_shape = (self.out_channels, self.in_channels // self.groups) + self.kernel_size
weight_num_param = 1
for wd in self.weight_shape:
weight_num_param *= wd
self.weight = torch.nn.Parameter(torch.Tensor(self.num_experts, weight_num_param))
if bias:
self.bias_shape = (self.out_channels,)
self.bias = torch.nn.Parameter(torch.Tensor(self.num_experts, self.out_channels))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init_weight = get_condconv_initializer(
partial(nn.init.kaiming_uniform_, a=math.sqrt(5)), self.num_experts, self.weight_shape)
init_weight(self.weight)
if self.bias is not None:
fan_in = np.prod(self.weight_shape[1:])
bound = 1 / math.sqrt(fan_in)
init_bias = get_condconv_initializer(
partial(nn.init.uniform_, a=-bound, b=bound), self.num_experts, self.bias_shape)
init_bias(self.bias)
def forward(self, x, routing_weights):
B, C, H, W = x.shape
weight = torch.matmul(routing_weights, self.weight)
new_weight_shape = (B * self.out_channels, self.in_channels // self.groups) + self.kernel_size
weight = weight.view(new_weight_shape)
bias = None
if self.bias is not None:
bias = torch.matmul(routing_weights, self.bias)
bias = bias.view(B * self.out_channels)
# move batch elements with channels so each batch element can be efficiently convolved with separate kernel
# reshape instead of view to work with channels_last input
x = x.reshape(1, B * C, H, W)
if self.dynamic_padding:
out = conv2d_same(
x, weight, bias, stride=self.stride, padding=self.padding,
dilation=self.dilation, groups=self.groups * B)
else:
out = F.conv2d(
x, weight, bias, stride=self.stride, padding=self.padding,
dilation=self.dilation, groups=self.groups * B)
out = out.permute([1, 0, 2, 3]).view(B, self.out_channels, out.shape[-2], out.shape[-1])
# Literal port (from TF definition)
# x = torch.split(x, 1, 0)
# weight = torch.split(weight, 1, 0)
# if self.bias is not None:
# bias = torch.matmul(routing_weights, self.bias)
# bias = torch.split(bias, 1, 0)
# else:
# bias = [None] * B
# out = []
# for xi, wi, bi in zip(x, weight, bias):
# wi = wi.view(*self.weight_shape)
# if bi is not None:
# bi = bi.view(*self.bias_shape)
# out.append(self.conv_fn(
# xi, wi, bi, stride=self.stride, padding=self.padding,
# dilation=self.dilation, groups=self.groups))
# out = torch.cat(out, 0)
return out
|
pytorch-image-models/timm/layers/cond_conv2d.py/0
|
{
"file_path": "pytorch-image-models/timm/layers/cond_conv2d.py",
"repo_id": "pytorch-image-models",
"token_count": 2314
}
| 206
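The `forward` above folds the batch into the channel dimension and runs a single grouped convolution with `groups = B * groups`, so each sample is convolved with its own routing-mixed expert kernel. A pure-Python check of just the shape arithmetic (an assumed simplification; no convolution is performed):

```python
def condconv_shapes(B, C_in, C_out, kH, kW, groups=1):
    """Shapes used by the grouped-conv trick: input folded to
    (1, B*C_in, H, W), per-sample mixed weights stacked to
    (B*C_out, C_in//groups, kH, kW), conv run with groups=B*groups."""
    x_channels = B * C_in
    weight_shape = (B * C_out, C_in // groups, kH, kW)
    conv_groups = B * groups
    # sanity: a grouped conv requires in_channels % groups == 0
    assert x_channels % conv_groups == 0
    return x_channels, weight_shape, conv_groups
```

The final `permute` + `view` in the source then unfolds the `(1, B*C_out, H', W')` result back to `(B, C_out, H', W')`.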
|
""" Global Context Attention Block
Paper: `GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond`
- https://arxiv.org/abs/1904.11492
Official code consulted as reference: https://github.com/xvjiarui/GCNet
Hacked together by / Copyright 2021 Ross Wightman
"""
from torch import nn as nn
import torch.nn.functional as F
from .create_act import create_act_layer, get_act_layer
from .helpers import make_divisible
from .mlp import ConvMlp
from .norm import LayerNorm2d
class GlobalContext(nn.Module):
def __init__(self, channels, use_attn=True, fuse_add=False, fuse_scale=True, init_last_zero=False,
rd_ratio=1./8, rd_channels=None, rd_divisor=1, act_layer=nn.ReLU, gate_layer='sigmoid'):
super(GlobalContext, self).__init__()
act_layer = get_act_layer(act_layer)
self.conv_attn = nn.Conv2d(channels, 1, kernel_size=1, bias=True) if use_attn else None
if rd_channels is None:
rd_channels = make_divisible(channels * rd_ratio, rd_divisor, round_limit=0.)
if fuse_add:
self.mlp_add = ConvMlp(channels, rd_channels, act_layer=act_layer, norm_layer=LayerNorm2d)
else:
self.mlp_add = None
if fuse_scale:
self.mlp_scale = ConvMlp(channels, rd_channels, act_layer=act_layer, norm_layer=LayerNorm2d)
else:
self.mlp_scale = None
self.gate = create_act_layer(gate_layer)
self.init_last_zero = init_last_zero
self.reset_parameters()
def reset_parameters(self):
if self.conv_attn is not None:
nn.init.kaiming_normal_(self.conv_attn.weight, mode='fan_in', nonlinearity='relu')
if self.mlp_add is not None:
nn.init.zeros_(self.mlp_add.fc2.weight)
def forward(self, x):
B, C, H, W = x.shape
if self.conv_attn is not None:
attn = self.conv_attn(x).reshape(B, 1, H * W) # (B, 1, H * W)
attn = F.softmax(attn, dim=-1).unsqueeze(3) # (B, 1, H * W, 1)
context = x.reshape(B, C, H * W).unsqueeze(1) @ attn
context = context.view(B, C, 1, 1)
else:
context = x.mean(dim=(2, 3), keepdim=True)
if self.mlp_scale is not None:
mlp_x = self.mlp_scale(context)
x = x * self.gate(mlp_x)
if self.mlp_add is not None:
mlp_x = self.mlp_add(context)
x = x + mlp_x
return x
|
pytorch-image-models/timm/layers/global_context.py/0
|
{
"file_path": "pytorch-image-models/timm/layers/global_context.py",
"repo_id": "pytorch-image-models",
"token_count": 1169
}
| 207
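The context step in `GlobalContext` is attention-weighted spatial pooling: softmax the 1x1-conv logits over the H*W positions and take the weighted sum of features; with `use_attn=False` it degrades to a plain mean. A pure-Python sketch for a single channel (an assumed simplification of the tensor version above):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def context_pool(feat, attn_logits):
    """Weighted spatial pooling over flattened H*W positions.
    Uniform logits reduce this to the mean (the use_attn=False path)."""
    w = softmax(attn_logits)
    return sum(f * a for f, a in zip(feat, w))

pooled = context_pool([1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 0.0, 0.0])
```

The pooled context vector is then passed through the scale and/or add MLP branches to modulate the input feature map.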
|
""" Normalization layers and wrappers
Norm layer definitions that support fast norm and consistent channel arg order (always first arg).
Hacked together by / Copyright 2022 Ross Wightman
"""
import numbers
from typing import Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from .fast_norm import is_fast_norm, fast_group_norm, fast_layer_norm, fast_rms_norm
class GroupNorm(nn.GroupNorm):
def __init__(self, num_channels, num_groups=32, eps=1e-5, affine=True):
# NOTE num_channels is swapped to first arg for consistency in swapping norm layers with BN
super().__init__(num_groups, num_channels, eps=eps, affine=affine)
self.fast_norm = is_fast_norm() # can't script unless we have these flags here (no globals)
def forward(self, x):
if self.fast_norm:
return fast_group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
else:
return F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
class GroupNorm1(nn.GroupNorm):
""" Group Normalization with 1 group.
Input: tensor in shape [B, C, *]
"""
def __init__(self, num_channels, **kwargs):
super().__init__(1, num_channels, **kwargs)
self.fast_norm = is_fast_norm() # can't script unless we have these flags here (no globals)
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self.fast_norm:
return fast_group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
else:
return F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
class LayerNorm(nn.LayerNorm):
""" LayerNorm w/ fast norm option
"""
def __init__(self, num_channels, eps=1e-6, affine=True):
super().__init__(num_channels, eps=eps, elementwise_affine=affine)
self._fast_norm = is_fast_norm() # can't script unless we have these flags here (no globals)
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self._fast_norm:
x = fast_layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
else:
x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
return x
class LayerNorm2d(nn.LayerNorm):
""" LayerNorm for channels of '2D' spatial NCHW tensors """
def __init__(self, num_channels, eps=1e-6, affine=True):
super().__init__(num_channels, eps=eps, elementwise_affine=affine)
self._fast_norm = is_fast_norm() # can't script unless we have these flags here (no globals)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = x.permute(0, 2, 3, 1)
if self._fast_norm:
x = fast_layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
else:
x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
x = x.permute(0, 3, 1, 2)
return x
def _is_contiguous(tensor: torch.Tensor) -> bool:
# jit is oh so lovely :/
if torch.jit.is_scripting():
return tensor.is_contiguous()
else:
return tensor.is_contiguous(memory_format=torch.contiguous_format)
def _layer_norm_cf(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor, eps: float):
s, u = torch.var_mean(x, dim=1, unbiased=False, keepdim=True)
x = (x - u) * torch.rsqrt(s + eps)
x = x * weight[:, None, None] + bias[:, None, None]
return x
def _layer_norm_cf_sqm(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor, eps: float):
u = x.mean(dim=1, keepdim=True)
s = ((x * x).mean(dim=1, keepdim=True) - (u * u)).clamp(0)
x = (x - u) * torch.rsqrt(s + eps)
x = x * weight.view(1, -1, 1, 1) + bias.view(1, -1, 1, 1)
return x
class LayerNormExp2d(nn.LayerNorm):
""" LayerNorm for channels_first tensors with 2d spatial dimensions (ie N, C, H, W).
Experimental implementation w/ manual norm for tensors non-contiguous tensors.
This improves throughput in some scenarios (tested on Ampere GPU), esp w/ channels_last
layout. However, benefits are not always clear and can perform worse on other GPUs.
"""
def __init__(self, num_channels, eps=1e-6):
super().__init__(num_channels, eps=eps)
def forward(self, x) -> torch.Tensor:
if _is_contiguous(x):
x = F.layer_norm(
x.permute(0, 2, 3, 1), self.normalized_shape, self.weight, self.bias, self.eps).permute(0, 3, 1, 2)
else:
x = _layer_norm_cf(x, self.weight, self.bias, self.eps)
return x
class RmsNorm(nn.Module):
""" RmsNorm w/ fast (apex) norm if available
"""
__constants__ = ['normalized_shape', 'eps', 'elementwise_affine']
normalized_shape: Tuple[int, ...]
eps: float
elementwise_affine: bool
def __init__(self, channels, eps=1e-6, affine=True, device=None, dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
super().__init__()
normalized_shape = channels
if isinstance(normalized_shape, numbers.Integral):
# mypy error: incompatible types in assignment
normalized_shape = (normalized_shape,) # type: ignore[assignment]
self.normalized_shape = tuple(normalized_shape) # type: ignore[arg-type]
self.eps = eps
self.elementwise_affine = affine
if self.elementwise_affine:
self.weight = nn.Parameter(torch.empty(self.normalized_shape, **factory_kwargs))
else:
self.register_parameter('weight', None)
self.reset_parameters()
def reset_parameters(self) -> None:
if self.elementwise_affine:
nn.init.ones_(self.weight)
def forward(self, x: torch.Tensor) -> torch.Tensor:
        # NOTE fast norm fallback needs our rms norm impl, so both paths go through here.
        # Since there is no built-in PyTorch impl, always use APEX RmsNorm if it is installed.
x = fast_rms_norm(x, self.normalized_shape, self.weight, self.eps)
return x
|
pytorch-image-models/timm/layers/norm.py/0
|
{
"file_path": "pytorch-image-models/timm/layers/norm.py",
"repo_id": "pytorch-image-models",
"token_count": 2512
}
| 208
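`RmsNorm` above normalizes by the root-mean-square of the activations rather than centering by the mean as LayerNorm does. A pure-Python sketch of the math (an assumed simplification over a flat list; the real layer operates on the trailing `normalized_shape` dims):

```python
import math

def rms_norm(xs, weight=None, eps=1e-6):
    """RMSNorm: x / sqrt(mean(x^2) + eps), optionally scaled by a
    learned per-channel weight. No mean subtraction, unlike LayerNorm."""
    ms = sum(x * x for x in xs) / len(xs)
    inv = 1.0 / math.sqrt(ms + eps)
    out = [x * inv for x in xs]
    if weight is not None:
        out = [o * w for o, w in zip(out, weight)]
    return out
```

After normalization the output has RMS approximately 1, which is what the affine `weight` then rescales.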
|
""" Test Time Pooling (Average-Max Pool)
Hacked together by / Copyright 2020 Ross Wightman
"""
import logging
from torch import nn
import torch.nn.functional as F
from .adaptive_avgmax_pool import adaptive_avgmax_pool2d
_logger = logging.getLogger(__name__)
class TestTimePoolHead(nn.Module):
def __init__(self, base, original_pool=7):
super(TestTimePoolHead, self).__init__()
self.base = base
self.original_pool = original_pool
base_fc = self.base.get_classifier()
if isinstance(base_fc, nn.Conv2d):
self.fc = base_fc
else:
self.fc = nn.Conv2d(
self.base.num_features, self.base.num_classes, kernel_size=1, bias=True)
self.fc.weight.data.copy_(base_fc.weight.data.view(self.fc.weight.size()))
self.fc.bias.data.copy_(base_fc.bias.data.view(self.fc.bias.size()))
self.base.reset_classifier(0) # delete original fc layer
def forward(self, x):
x = self.base.forward_features(x)
x = F.avg_pool2d(x, kernel_size=self.original_pool, stride=1)
x = self.fc(x)
x = adaptive_avgmax_pool2d(x, 1)
return x.view(x.size(0), -1)
def apply_test_time_pool(model, config, use_test_size=False):
test_time_pool = False
if not hasattr(model, 'default_cfg') or not model.default_cfg:
return model, False
if use_test_size and 'test_input_size' in model.default_cfg:
df_input_size = model.default_cfg['test_input_size']
else:
df_input_size = model.default_cfg['input_size']
if config['input_size'][-1] > df_input_size[-1] and config['input_size'][-2] > df_input_size[-2]:
_logger.info('Target input size %s > pretrained default %s, using test time pooling' %
(str(config['input_size'][-2:]), str(df_input_size[-2:])))
model = TestTimePoolHead(model, original_pool=model.default_cfg['pool_size'])
test_time_pool = True
return model, test_time_pool
|
pytorch-image-models/timm/layers/test_time_pool.py/0
|
{
"file_path": "pytorch-image-models/timm/layers/test_time_pool.py",
"repo_id": "pytorch-image-models",
"token_count": 881
}
| 209
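`apply_test_time_pool` only kicks in when both spatial dims of the target input exceed the pretrained default, so there is extra spatial extent left to pool over. A standalone sketch of that predicate (the function name is hypothetical; the comparison mirrors the check above):

```python
def should_pool(config_input_size, default_input_size):
    """True when the target H and W both exceed the pretrained default,
    i.e. test-time pooling over the larger feature map is worthwhile."""
    return (config_input_size[-1] > default_input_size[-1]
            and config_input_size[-2] > default_input_size[-2])
```

When the predicate holds, the model is wrapped in `TestTimePoolHead`, which average-pools at the original window size before the 1x1 classifier conv.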
|
""" Model creation / weight loading / state_dict helpers
Hacked together by / Copyright 2020 Ross Wightman
"""
import logging
import os
from collections import OrderedDict
from typing import Any, Callable, Dict, Optional, Union
import torch
try:
import safetensors.torch
_has_safetensors = True
except ImportError:
_has_safetensors = False
_logger = logging.getLogger(__name__)
__all__ = ['clean_state_dict', 'load_state_dict', 'load_checkpoint', 'remap_state_dict', 'resume_checkpoint']
def _remove_prefix(text, prefix):
# FIXME replace with 3.9 stdlib fn when min at 3.9
if text.startswith(prefix):
return text[len(prefix):]
return text
def clean_state_dict(state_dict: Dict[str, Any]) -> Dict[str, Any]:
# 'clean' checkpoint by removing .module prefix from state dict if it exists from parallel training
cleaned_state_dict = {}
to_remove = (
'module.', # DDP wrapper
'_orig_mod.', # torchcompile dynamo wrapper
)
for k, v in state_dict.items():
for r in to_remove:
k = _remove_prefix(k, r)
cleaned_state_dict[k] = v
return cleaned_state_dict
def load_state_dict(
checkpoint_path: str,
use_ema: bool = True,
device: Union[str, torch.device] = 'cpu',
weights_only: bool = False,
) -> Dict[str, Any]:
if checkpoint_path and os.path.isfile(checkpoint_path):
# Check if safetensors or not and load weights accordingly
if str(checkpoint_path).endswith(".safetensors"):
assert _has_safetensors, "`pip install safetensors` to use .safetensors"
checkpoint = safetensors.torch.load_file(checkpoint_path, device=device)
else:
try:
checkpoint = torch.load(checkpoint_path, map_location=device, weights_only=weights_only)
except TypeError:
checkpoint = torch.load(checkpoint_path, map_location=device)
state_dict_key = ''
if isinstance(checkpoint, dict):
if use_ema and checkpoint.get('state_dict_ema', None) is not None:
state_dict_key = 'state_dict_ema'
elif use_ema and checkpoint.get('model_ema', None) is not None:
state_dict_key = 'model_ema'
elif 'state_dict' in checkpoint:
state_dict_key = 'state_dict'
elif 'model' in checkpoint:
state_dict_key = 'model'
state_dict = clean_state_dict(checkpoint[state_dict_key] if state_dict_key else checkpoint)
_logger.info("Loaded {} from checkpoint '{}'".format(state_dict_key, checkpoint_path))
return state_dict
else:
_logger.error("No checkpoint found at '{}'".format(checkpoint_path))
raise FileNotFoundError()
def load_checkpoint(
model: torch.nn.Module,
checkpoint_path: str,
use_ema: bool = True,
device: Union[str, torch.device] = 'cpu',
strict: bool = True,
remap: bool = False,
filter_fn: Optional[Callable] = None,
weights_only: bool = False,
):
if os.path.splitext(checkpoint_path)[-1].lower() in ('.npz', '.npy'):
# numpy checkpoint, try to load via model specific load_pretrained fn
if hasattr(model, 'load_pretrained'):
model.load_pretrained(checkpoint_path)
else:
raise NotImplementedError('Model cannot load numpy checkpoint')
return
state_dict = load_state_dict(checkpoint_path, use_ema, device=device, weights_only=weights_only)
if remap:
state_dict = remap_state_dict(state_dict, model)
elif filter_fn:
state_dict = filter_fn(state_dict, model)
incompatible_keys = model.load_state_dict(state_dict, strict=strict)
return incompatible_keys
def remap_state_dict(
state_dict: Dict[str, Any],
model: torch.nn.Module,
allow_reshape: bool = True
):
""" remap checkpoint by iterating over state dicts in order (ignoring original keys).
This assumes models (and originating state dict) were created with params registered in same order.
"""
out_dict = {}
for (ka, va), (kb, vb) in zip(model.state_dict().items(), state_dict.items()):
assert va.numel() == vb.numel(), f'Tensor size mismatch {ka}: {va.shape} vs {kb}: {vb.shape}. Remap failed.'
if va.shape != vb.shape:
if allow_reshape:
vb = vb.reshape(va.shape)
else:
assert False, f'Tensor shape mismatch {ka}: {va.shape} vs {kb}: {vb.shape}. Remap failed.'
out_dict[ka] = vb
return out_dict
def resume_checkpoint(
model: torch.nn.Module,
checkpoint_path: str,
optimizer: torch.optim.Optimizer = None,
loss_scaler: Any = None,
log_info: bool = True,
):
resume_epoch = None
if os.path.isfile(checkpoint_path):
checkpoint = torch.load(checkpoint_path, map_location='cpu', weights_only=False)
if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
if log_info:
_logger.info('Restoring model state from checkpoint...')
state_dict = clean_state_dict(checkpoint['state_dict'])
model.load_state_dict(state_dict)
if optimizer is not None and 'optimizer' in checkpoint:
if log_info:
_logger.info('Restoring optimizer state from checkpoint...')
optimizer.load_state_dict(checkpoint['optimizer'])
if loss_scaler is not None and loss_scaler.state_dict_key in checkpoint:
if log_info:
_logger.info('Restoring AMP loss scaler state from checkpoint...')
loss_scaler.load_state_dict(checkpoint[loss_scaler.state_dict_key])
if 'epoch' in checkpoint:
resume_epoch = checkpoint['epoch']
if 'version' in checkpoint and checkpoint['version'] > 1:
resume_epoch += 1 # start at the next epoch, old checkpoints incremented before save
if log_info:
_logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, checkpoint['epoch']))
else:
model.load_state_dict(checkpoint)
if log_info:
_logger.info("Loaded checkpoint '{}'".format(checkpoint_path))
return resume_epoch
else:
_logger.error("No checkpoint found at '{}'".format(checkpoint_path))
raise FileNotFoundError()
|
pytorch-image-models/timm/models/_helpers.py/0
|
{
"file_path": "pytorch-image-models/timm/models/_helpers.py",
"repo_id": "pytorch-image-models",
"token_count": 2801
}
| 210
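The checkpoint-cleaning step above strips the wrapper prefixes that DDP (`module.`) and torch.compile's dynamo wrapper (`_orig_mod.`) add to parameter names. A standalone sketch (the function name is hypothetical; the prefix list and loop mirror `clean_state_dict` above):

```python
def strip_wrapper_prefixes(state_dict):
    """Strip DDP ('module.') and torch.compile ('_orig_mod.') wrapper
    prefixes so weights load into an unwrapped model."""
    to_remove = ('module.', '_orig_mod.')
    out = {}
    for k, v in state_dict.items():
        for prefix in to_remove:
            if k.startswith(prefix):
                k = k[len(prefix):]
        out[k] = v
    return out
```

Without this step, `load_state_dict` on an unwrapped model would report every key as missing/unexpected.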
|
""" ConViT Model
@article{d2021convit,
title={ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases},
author={d'Ascoli, St{\'e}phane and Touvron, Hugo and Leavitt, Matthew and Morcos, Ari and Biroli, Giulio and Sagun, Levent},
journal={arXiv preprint arXiv:2103.10697},
year={2021}
}
Paper link: https://arxiv.org/abs/2103.10697
Original code: https://github.com/facebookresearch/convit, original copyright below
Modifications and additions for timm hacked together by / Copyright 2021, Ross Wightman
"""
# Copyright (c) 2015-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the CC-by-NC license found in the
# LICENSE file in the root directory of this source tree.
#
'''These modules are adapted from those of timm, see
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
'''
from typing import Optional
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import DropPath, trunc_normal_, PatchEmbed, Mlp, LayerNorm, HybridEmbed
from ._builder import build_model_with_cfg
from ._features_fx import register_notrace_module
from ._registry import register_model, generate_default_cfgs
__all__ = ['ConVit']
@register_notrace_module # reason: FX can't symbolically trace control flow in forward method
class GPSA(nn.Module):
def __init__(
self,
dim,
num_heads=8,
qkv_bias=False,
attn_drop=0.,
proj_drop=0.,
locality_strength=1.,
):
super().__init__()
self.num_heads = num_heads
self.dim = dim
head_dim = dim // num_heads
self.scale = head_dim ** -0.5
self.locality_strength = locality_strength
self.qk = nn.Linear(dim, dim * 2, bias=qkv_bias)
self.v = nn.Linear(dim, dim, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.pos_proj = nn.Linear(3, num_heads)
self.proj_drop = nn.Dropout(proj_drop)
self.gating_param = nn.Parameter(torch.ones(self.num_heads))
self.rel_indices: torch.Tensor = torch.zeros(1, 1, 1, 3) # silly torchscript hack, won't work with None
def forward(self, x):
B, N, C = x.shape
if self.rel_indices is None or self.rel_indices.shape[1] != N:
self.rel_indices = self.get_rel_indices(N)
attn = self.get_attention(x)
v = self.v(x).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
def get_attention(self, x):
B, N, C = x.shape
qk = self.qk(x).reshape(B, N, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k = qk[0], qk[1]
pos_score = self.rel_indices.expand(B, -1, -1, -1)
pos_score = self.pos_proj(pos_score).permute(0, 3, 1, 2)
patch_score = (q @ k.transpose(-2, -1)) * self.scale
patch_score = patch_score.softmax(dim=-1)
pos_score = pos_score.softmax(dim=-1)
gating = self.gating_param.view(1, -1, 1, 1)
attn = (1. - torch.sigmoid(gating)) * patch_score + torch.sigmoid(gating) * pos_score
attn /= attn.sum(dim=-1).unsqueeze(-1)
attn = self.attn_drop(attn)
return attn
def get_attention_map(self, x, return_map=False):
attn_map = self.get_attention(x).mean(0) # average over batch
distances = self.rel_indices.squeeze()[:, :, -1] ** .5
dist = torch.einsum('nm,hnm->h', (distances, attn_map)) / distances.size(0)
if return_map:
return dist, attn_map
else:
return dist
def local_init(self):
self.v.weight.data.copy_(torch.eye(self.dim))
locality_distance = 1 # max(1,1/locality_strength**.5)
kernel_size = int(self.num_heads ** .5)
center = (kernel_size - 1) / 2 if kernel_size % 2 == 0 else kernel_size // 2
for h1 in range(kernel_size):
for h2 in range(kernel_size):
position = h1 + kernel_size * h2
self.pos_proj.weight.data[position, 2] = -1
self.pos_proj.weight.data[position, 1] = 2 * (h1 - center) * locality_distance
self.pos_proj.weight.data[position, 0] = 2 * (h2 - center) * locality_distance
self.pos_proj.weight.data *= self.locality_strength
def get_rel_indices(self, num_patches: int) -> torch.Tensor:
img_size = int(num_patches ** .5)
rel_indices = torch.zeros(1, num_patches, num_patches, 3)
ind = torch.arange(img_size).view(1, -1) - torch.arange(img_size).view(-1, 1)
indx = ind.repeat(img_size, img_size)
indy = ind.repeat_interleave(img_size, dim=0).repeat_interleave(img_size, dim=1)
indd = indx ** 2 + indy ** 2
rel_indices[:, :, :, 2] = indd.unsqueeze(0)
rel_indices[:, :, :, 1] = indy.unsqueeze(0)
rel_indices[:, :, :, 0] = indx.unsqueeze(0)
device = self.qk.weight.device
return rel_indices.to(device)
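
# The two helpers below are illustrative sketches only (plain Python, the
# names are invented for this example); they mirror the GPSA logic above on
# small scalar inputs and are not used by the model.

def _rel_indices_sketch(num_patches):
    # Mirrors get_rel_indices: for each pair of patches on a square grid,
    # build (dx, dy, dx^2 + dy^2), matching the (indx, indy, indd) channels.
    n = int(num_patches ** 0.5)
    coords = [(i // n, i % n) for i in range(num_patches)]  # (row, col) per patch
    return [[(c2 - c1, r2 - r1, (c2 - c1) ** 2 + (r2 - r1) ** 2)
             for (r2, c2) in coords] for (r1, c1) in coords]

def _gated_mix_sketch(patch_score, pos_score, gate):
    # Mirrors get_attention's gating: sigmoid(gate) interpolates between the
    # content-based patch score and the positional score.
    import math
    g = 1.0 / (1.0 + math.exp(-gate))
    return (1.0 - g) * patch_score + g * pos_score
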
class MHSA(nn.Module):
def __init__(
self,
dim,
num_heads=8,
qkv_bias=False,
attn_drop=0.,
proj_drop=0.,
):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = head_dim ** -0.5
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def get_attention_map(self, x, return_map=False):
B, N, C = x.shape
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2]
attn_map = (q @ k.transpose(-2, -1)) * self.scale
attn_map = attn_map.softmax(dim=-1).mean(0)
img_size = int(N ** .5)
ind = torch.arange(img_size).view(1, -1) - torch.arange(img_size).view(-1, 1)
indx = ind.repeat(img_size, img_size)
indy = ind.repeat_interleave(img_size, dim=0).repeat_interleave(img_size, dim=1)
indd = indx ** 2 + indy ** 2
distances = indd ** .5
distances = distances.to(x.device)
dist = torch.einsum('nm,hnm->h', (distances, attn_map)) / N
if return_map:
return dist, attn_map
else:
return dist
def forward(self, x):
B, N, C = x.shape
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
class Block(nn.Module):
def __init__(
self,
dim,
num_heads,
mlp_ratio=4.,
qkv_bias=False,
proj_drop=0.,
attn_drop=0.,
drop_path=0.,
act_layer=nn.GELU,
norm_layer=LayerNorm,
use_gpsa=True,
locality_strength=1.,
):
super().__init__()
self.norm1 = norm_layer(dim)
self.use_gpsa = use_gpsa
if self.use_gpsa:
self.attn = GPSA(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
attn_drop=attn_drop,
proj_drop=proj_drop,
locality_strength=locality_strength,
)
else:
self.attn = MHSA(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(
in_features=dim,
hidden_features=mlp_hidden_dim,
act_layer=act_layer,
drop=proj_drop,
)
def forward(self, x):
x = x + self.drop_path(self.attn(self.norm1(x)))
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
class ConVit(nn.Module):
""" Vision Transformer with support for patch or hybrid CNN input stage
"""
def __init__(
self,
img_size=224,
patch_size=16,
in_chans=3,
num_classes=1000,
global_pool='token',
embed_dim=768,
depth=12,
num_heads=12,
mlp_ratio=4.,
qkv_bias=False,
drop_rate=0.,
pos_drop_rate=0.,
proj_drop_rate=0.,
attn_drop_rate=0.,
drop_path_rate=0.,
hybrid_backbone=None,
norm_layer=LayerNorm,
local_up_to_layer=3,
locality_strength=1.,
use_pos_embed=True,
):
super().__init__()
assert global_pool in ('', 'avg', 'token')
embed_dim *= num_heads
self.num_classes = num_classes
self.global_pool = global_pool
self.local_up_to_layer = local_up_to_layer
self.num_features = self.head_hidden_size = self.embed_dim = embed_dim # for consistency with other models
self.locality_strength = locality_strength
self.use_pos_embed = use_pos_embed
if hybrid_backbone is not None:
self.patch_embed = HybridEmbed(
hybrid_backbone, img_size=img_size, in_chans=in_chans, embed_dim=embed_dim)
else:
self.patch_embed = PatchEmbed(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
)
num_patches = self.patch_embed.num_patches
self.num_patches = num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.pos_drop = nn.Dropout(p=pos_drop_rate)
if self.use_pos_embed:
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
trunc_normal_(self.pos_embed, std=.02)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
self.blocks = nn.ModuleList([
Block(
dim=embed_dim,
num_heads=num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
use_gpsa=i < local_up_to_layer,
locality_strength=locality_strength,
) for i in range(depth)])
self.norm = norm_layer(embed_dim)
# Classifier head
self.feature_info = [dict(num_chs=embed_dim, reduction=0, module='head')]
self.head_drop = nn.Dropout(drop_rate)
self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
trunc_normal_(self.cls_token, std=.02)
self.apply(self._init_weights)
for n, m in self.named_modules():
if hasattr(m, 'local_init'):
m.local_init()
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {'pos_embed', 'cls_token'}
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(
stem=r'^cls_token|pos_embed|patch_embed', # stem and embed
blocks=[(r'^blocks\.(\d+)', None), (r'^norm', (99999,))]
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, 'gradient checkpointing not supported'
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
if global_pool is not None:
assert global_pool in ('', 'token', 'avg')
self.global_pool = global_pool
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
def forward_features(self, x):
x = self.patch_embed(x)
if self.use_pos_embed:
x = x + self.pos_embed
x = self.pos_drop(x)
cls_tokens = self.cls_token.expand(x.shape[0], -1, -1)
for u, blk in enumerate(self.blocks):
if u == self.local_up_to_layer:
x = torch.cat((cls_tokens, x), dim=1)
x = blk(x)
x = self.norm(x)
return x
def forward_head(self, x, pre_logits: bool = False):
if self.global_pool:
x = x[:, 1:].mean(dim=1) if self.global_pool == 'avg' else x[:, 0]
x = self.head_drop(x)
return x if pre_logits else self.head(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _create_convit(variant, pretrained=False, **kwargs):
if kwargs.get('features_only', None):
raise RuntimeError('features_only not implemented for Vision Transformer models.')
return build_model_with_cfg(ConVit, variant, pretrained, **kwargs)
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, 'fixed_input_size': True,
'first_conv': 'patch_embed.proj', 'classifier': 'head',
**kwargs
}
default_cfgs = generate_default_cfgs({
# ConViT
'convit_tiny.fb_in1k': _cfg(hf_hub_id='timm/'),
'convit_small.fb_in1k': _cfg(hf_hub_id='timm/'),
'convit_base.fb_in1k': _cfg(hf_hub_id='timm/')
})
@register_model
def convit_tiny(pretrained=False, **kwargs) -> ConVit:
model_args = dict(
local_up_to_layer=10, locality_strength=1.0, embed_dim=48, num_heads=4)
model = _create_convit(variant='convit_tiny', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def convit_small(pretrained=False, **kwargs) -> ConVit:
model_args = dict(
local_up_to_layer=10, locality_strength=1.0, embed_dim=48, num_heads=9)
model = _create_convit(variant='convit_small', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def convit_base(pretrained=False, **kwargs) -> ConVit:
model_args = dict(
local_up_to_layer=10, locality_strength=1.0, embed_dim=48, num_heads=16)
model = _create_convit(variant='convit_base', pretrained=pretrained, **dict(model_args, **kwargs))
return model
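
# Illustrative sketch only (the function name is invented for this example):
# ConVit.__init__ multiplies embed_dim by num_heads, so embed_dim=48 in the
# configs above expands to the final embedding dims computed here.

def _convit_embed_dims_sketch():
    return {name: 48 * heads for name, heads in
            [('convit_tiny', 4), ('convit_small', 9), ('convit_base', 16)]}
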
""" EVA
EVA from https://github.com/baaivision/EVA , paper: https://arxiv.org/abs/2211.07636
@article{EVA,
title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale},
author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang,
Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2211.07636},
year={2022}
}
EVA-02: A Visual Representation for Neon Genesis - https://arxiv.org/abs/2303.11331
@article{EVA02,
title={EVA-02: A Visual Representation for Neon Genesis},
author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue},
journal={arXiv preprint arXiv:2303.11331},
year={2023}
}
This file contains EVA & EVA02 model implementations evolved from BEiT, additional models in vision_transformer.py.
Modifications by / Copyright 2023 Ross Wightman, original copyrights below
"""
# EVA models Copyright (c) 2022 BAAI-Vision
# EVA02 models Copyright (c) 2023 BAAI-Vision
import math
from typing import Callable, List, Optional, Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD, OPENAI_CLIP_MEAN, OPENAI_CLIP_STD
from timm.layers import PatchEmbed, Mlp, GluMlp, SwiGLU, LayerNorm, DropPath, PatchDropout, RotaryEmbeddingCat, \
apply_rot_embed_cat, apply_keep_indices_nlc, trunc_normal_, resample_patch_embed, resample_abs_pos_embed, \
to_2tuple, use_fused_attn
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._registry import generate_default_cfgs, register_model
__all__ = ['Eva']
class EvaAttention(nn.Module):
fused_attn: torch.jit.Final[bool]
def __init__(
self,
dim: int,
num_heads: int = 8,
qkv_bias: bool = True,
qkv_fused: bool = True,
num_prefix_tokens: int = 1,
qkv_bias_separate: bool = False,
attn_drop: float = 0.,
proj_drop: float = 0.,
attn_head_dim: Optional[int] = None,
norm_layer: Optional[Callable] = None,
):
"""
Args:
dim:
num_heads:
qkv_bias:
qkv_fused:
attn_drop:
proj_drop:
attn_head_dim:
norm_layer:
"""
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
if attn_head_dim is not None:
head_dim = attn_head_dim
all_head_dim = head_dim * self.num_heads
self.scale = head_dim ** -0.5
self.num_prefix_tokens = num_prefix_tokens
self.fused_attn = use_fused_attn()
self.qkv_bias_separate = qkv_bias_separate
if qkv_fused:
self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False)
self.q_proj = self.k_proj = self.v_proj = None
if qkv_bias:
self.q_bias = nn.Parameter(torch.zeros(all_head_dim))
self.register_buffer('k_bias', torch.zeros(all_head_dim), persistent=False)
self.v_bias = nn.Parameter(torch.zeros(all_head_dim))
else:
self.q_bias = self.k_bias = self.v_bias = None
else:
self.q_proj = nn.Linear(dim, all_head_dim, bias=qkv_bias)
self.k_proj = nn.Linear(dim, all_head_dim, bias=False)
self.v_proj = nn.Linear(dim, all_head_dim, bias=qkv_bias)
self.qkv = None
self.q_bias = self.k_bias = self.v_bias = None
self.attn_drop = nn.Dropout(attn_drop)
self.norm = norm_layer(all_head_dim) if norm_layer is not None else nn.Identity()
self.proj = nn.Linear(all_head_dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(
self,
x,
rope: Optional[torch.Tensor] = None,
attn_mask: Optional[torch.Tensor] = None,
):
B, N, C = x.shape
if self.qkv is not None:
if self.q_bias is None:
qkv = self.qkv(x)
else:
qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias))
if self.qkv_bias_separate:
qkv = self.qkv(x)
qkv += qkv_bias
else:
qkv = F.linear(x, weight=self.qkv.weight, bias=qkv_bias)
qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0) # B, num_heads, N, head_dim
else:
q = self.q_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2) # B, num_heads, N, C
k = self.k_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
v = self.v_proj(x).reshape(B, N, self.num_heads, -1).transpose(1, 2)
if rope is not None:
npt = self.num_prefix_tokens
q = torch.cat([q[:, :, :npt, :], apply_rot_embed_cat(q[:, :, npt:, :], rope)], dim=2).type_as(v)
k = torch.cat([k[:, :, :npt, :], apply_rot_embed_cat(k[:, :, npt:, :], rope)], dim=2).type_as(v)
if self.fused_attn:
x = F.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_mask,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
if attn_mask is not None:
attn_mask = attn_mask.to(torch.bool)
attn = attn.masked_fill(~attn_mask[:, None, None, :], float("-inf"))
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B, N, C)
x = self.norm(x)
x = self.proj(x)
x = self.proj_drop(x)
return x
class EvaBlock(nn.Module):
def __init__(
self,
dim: int,
num_heads: int,
qkv_bias: bool = True,
qkv_fused: bool = True,
mlp_ratio: float = 4.,
swiglu_mlp: bool = False,
scale_mlp: bool = False,
scale_attn_inner: bool = False,
num_prefix_tokens: int = 1,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: float = 0.,
init_values: Optional[float] = None,
act_layer: Callable = nn.GELU,
norm_layer: Callable = LayerNorm,
attn_head_dim: Optional[int] = None,
):
"""
Args:
dim:
num_heads:
qkv_bias:
qkv_fused:
mlp_ratio:
swiglu_mlp:
scale_mlp:
scale_attn_inner:
proj_drop:
attn_drop:
drop_path:
init_values:
act_layer:
norm_layer:
attn_head_dim:
"""
super().__init__()
self.norm1 = norm_layer(dim)
self.attn = EvaAttention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qkv_fused=qkv_fused,
num_prefix_tokens=num_prefix_tokens,
attn_drop=attn_drop,
proj_drop=proj_drop,
attn_head_dim=attn_head_dim,
norm_layer=norm_layer if scale_attn_inner else None,
)
self.gamma_1 = nn.Parameter(init_values * torch.ones(dim)) if init_values is not None else None
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
hidden_features = int(dim * mlp_ratio)
if swiglu_mlp:
if scale_mlp:
# when norm in SwiGLU used, an impl with separate fc for gate & x is used
self.mlp = SwiGLU(
in_features=dim,
hidden_features=hidden_features,
norm_layer=norm_layer if scale_mlp else None,
drop=proj_drop,
)
else:
# w/o any extra norm, an impl with packed weights is used, matches existing GluMLP
self.mlp = GluMlp(
in_features=dim,
hidden_features=hidden_features * 2,
norm_layer=norm_layer if scale_mlp else None,
act_layer=nn.SiLU,
gate_last=False,
drop=proj_drop,
)
else:
self.mlp = Mlp(
in_features=dim,
hidden_features=hidden_features,
act_layer=act_layer,
norm_layer=norm_layer if scale_mlp else None,
drop=proj_drop,
)
self.gamma_2 = nn.Parameter(init_values * torch.ones(dim)) if init_values is not None else None
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def forward(self, x, rope: Optional[torch.Tensor] = None, attn_mask: Optional[torch.Tensor] = None):
if self.gamma_1 is None:
x = x + self.drop_path1(self.attn(self.norm1(x), rope=rope, attn_mask=attn_mask))
x = x + self.drop_path2(self.mlp(self.norm2(x)))
else:
x = x + self.drop_path1(self.gamma_1 * self.attn(self.norm1(x), rope=rope, attn_mask=attn_mask))
x = x + self.drop_path2(self.gamma_2 * self.mlp(self.norm2(x)))
return x
class EvaBlockPostNorm(nn.Module):
""" EVA block w/ post-norm and support for swiglu, MLP norm scale, ROPE. """
def __init__(
self,
dim: int,
num_heads: int,
qkv_bias: bool = True,
qkv_fused: bool = True,
mlp_ratio: float = 4.,
swiglu_mlp: bool = False,
scale_mlp: bool = False,
scale_attn_inner: bool = False,
num_prefix_tokens: int = 1,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: float = 0.,
init_values: Optional[float] = None, # ignore for post-norm
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
attn_head_dim: Optional[int] = None,
):
"""
Args:
dim:
num_heads:
qkv_bias:
qkv_fused:
mlp_ratio:
swiglu_mlp:
scale_mlp:
scale_attn_inner:
proj_drop:
attn_drop:
drop_path:
init_values:
act_layer:
norm_layer:
attn_head_dim:
"""
super().__init__()
self.attn = EvaAttention(
dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qkv_fused=qkv_fused,
num_prefix_tokens=num_prefix_tokens,
attn_drop=attn_drop,
proj_drop=proj_drop,
attn_head_dim=attn_head_dim,
norm_layer=norm_layer if scale_attn_inner else None,
)
self.norm1 = norm_layer(dim)
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
hidden_features = int(dim * mlp_ratio)
if swiglu_mlp:
if scale_mlp:
# when norm in SwiGLU used, an impl with separate fc for gate & x is used
self.mlp = SwiGLU(
in_features=dim,
hidden_features=hidden_features,
norm_layer=norm_layer if scale_mlp else None,
drop=proj_drop,
)
else:
# w/o any extra norm, an impl with packed fc1 weights is used, matches existing GluMLP
self.mlp = GluMlp(
in_features=dim,
hidden_features=hidden_features * 2,
norm_layer=norm_layer if scale_mlp else None,
act_layer=nn.SiLU,
gate_last=False,
drop=proj_drop,
)
else:
self.mlp = Mlp(
in_features=dim,
hidden_features=hidden_features,
act_layer=act_layer,
norm_layer=norm_layer if scale_mlp else None,
drop=proj_drop,
)
self.norm2 = norm_layer(dim)
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def forward(self, x, rope: Optional[torch.Tensor] = None, attn_mask: Optional[torch.Tensor] = None):
x = x + self.drop_path1(self.norm1(self.attn(x, rope=rope, attn_mask=attn_mask)))
x = x + self.drop_path2(self.norm2(self.mlp(x)))
return x
class Eva(nn.Module):
""" Eva Vision Transformer w/ Abs & Rotary Pos Embed
This class implements the EVA and EVA02 models that were based on the BEiT ViT variant
* EVA - abs pos embed, global avg pool
* EVA02 - abs + rope pos embed, global avg pool, SwiGLU, scale Norm in MLP (ala normformer)
"""
def __init__(
self,
img_size: Union[int, Tuple[int, int]] = 224,
patch_size: Union[int, Tuple[int, int]] = 16,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
embed_dim: int = 768,
depth: int = 12,
num_heads: int = 12,
qkv_bias: bool = True,
qkv_fused: bool = True,
mlp_ratio: float = 4.,
swiglu_mlp: bool = False,
scale_mlp: bool = False,
scale_attn_inner: bool = False,
drop_rate: float = 0.,
pos_drop_rate: float = 0.,
patch_drop_rate: float = 0.,
proj_drop_rate: float = 0.,
attn_drop_rate: float = 0.,
drop_path_rate: float = 0.,
norm_layer: Callable = LayerNorm,
init_values: Optional[float] = None,
class_token: bool = True,
num_reg_tokens: int = 0,
use_abs_pos_emb: bool = True,
use_rot_pos_emb: bool = False,
use_post_norm: bool = False,
dynamic_img_size: bool = False,
dynamic_img_pad: bool = False,
ref_feat_shape: Optional[Union[Tuple[int, int], int]] = None,
head_init_scale: float = 0.001,
):
"""
Args:
img_size:
patch_size:
in_chans:
num_classes:
global_pool:
embed_dim:
depth:
num_heads:
qkv_bias:
qkv_fused:
mlp_ratio:
swiglu_mlp:
scale_mlp:
scale_attn_inner:
drop_rate:
pos_drop_rate:
proj_drop_rate:
attn_drop_rate:
drop_path_rate:
norm_layer:
init_values:
class_token:
use_abs_pos_emb:
use_rot_pos_emb:
use_post_norm:
ref_feat_shape:
head_init_scale:
"""
super().__init__()
self.num_classes = num_classes
self.global_pool = global_pool
self.num_features = self.head_hidden_size = self.embed_dim = embed_dim # for consistency with other models
self.num_prefix_tokens = (1 if class_token else 0) + num_reg_tokens
self.dynamic_img_size = dynamic_img_size
self.grad_checkpointing = False
embed_args = {}
if dynamic_img_size:
# flatten deferred until after pos embed
embed_args.update(dict(strict_img_size=False, output_fmt='NHWC'))
self.patch_embed = PatchEmbed(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
dynamic_img_pad=dynamic_img_pad,
**embed_args,
)
num_patches = self.patch_embed.num_patches
r = self.patch_embed.feat_ratio() if hasattr(self.patch_embed, 'feat_ratio') else patch_size
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if class_token else None
self.reg_token = nn.Parameter(torch.zeros(1, num_reg_tokens, embed_dim)) if num_reg_tokens else None
self.cls_embed = class_token and self.reg_token is None
self.pos_embed = nn.Parameter(
torch.zeros(1, num_patches + self.num_prefix_tokens, embed_dim)) if use_abs_pos_emb else None
self.pos_drop = nn.Dropout(p=pos_drop_rate)
if patch_drop_rate > 0:
self.patch_drop = PatchDropout(
patch_drop_rate,
num_prefix_tokens=self.num_prefix_tokens,
return_indices=True,
)
else:
self.patch_drop = None
if use_rot_pos_emb:
ref_feat_shape = to_2tuple(ref_feat_shape) if ref_feat_shape is not None else None
self.rope = RotaryEmbeddingCat(
embed_dim // num_heads,
in_pixels=False,
feat_shape=None if dynamic_img_size else self.patch_embed.grid_size,
ref_feat_shape=ref_feat_shape,
)
else:
self.rope = None
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
block_fn = EvaBlockPostNorm if use_post_norm else EvaBlock
self.blocks = nn.ModuleList([
block_fn(
dim=embed_dim,
num_heads=num_heads,
qkv_bias=qkv_bias,
qkv_fused=qkv_fused,
mlp_ratio=mlp_ratio,
swiglu_mlp=swiglu_mlp,
scale_mlp=scale_mlp,
scale_attn_inner=scale_attn_inner,
num_prefix_tokens=self.num_prefix_tokens,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
init_values=init_values,
)
for i in range(depth)])
self.feature_info = [
dict(module=f'blocks.{i}', num_chs=embed_dim, reduction=r) for i in range(depth)]
use_fc_norm = self.global_pool == 'avg'
self.norm = nn.Identity() if use_fc_norm else norm_layer(embed_dim)
self.fc_norm = norm_layer(embed_dim) if use_fc_norm else nn.Identity()
self.head_drop = nn.Dropout(drop_rate)
self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
self.apply(self._init_weights)
if self.pos_embed is not None:
trunc_normal_(self.pos_embed, std=.02)
if self.cls_token is not None:
trunc_normal_(self.cls_token, std=.02)
if self.reg_token is not None:
trunc_normal_(self.reg_token, std=.02)
self.fix_init_weight()
if isinstance(self.head, nn.Linear):
trunc_normal_(self.head.weight, std=.02)
self.head.weight.data.mul_(head_init_scale)
self.head.bias.data.mul_(head_init_scale)
def fix_init_weight(self):
def rescale(param, layer_id):
param.div_(math.sqrt(2.0 * layer_id))
for layer_id, layer in enumerate(self.blocks):
rescale(layer.attn.proj.weight.data, layer_id + 1)
rescale(layer.mlp.fc2.weight.data, layer_id + 1)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if m.bias is not None:
nn.init.zeros_(m.bias)
@torch.jit.ignore
def no_weight_decay(self):
nwd = {'pos_embed', 'cls_token'}
return nwd
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
self.grad_checkpointing = enable
@torch.jit.ignore
def group_matcher(self, coarse=False):
matcher = dict(
stem=r'^cls_token|pos_embed|patch_embed', # stem and embed
blocks=[(r'^blocks\.(\d+)', None), (r'^norm', (99999,))],
)
return matcher
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
if global_pool is not None:
self.global_pool = global_pool
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
def _pos_embed(self, x) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
if self.dynamic_img_size:
B, H, W, C = x.shape
if self.pos_embed is not None:
pos_embed = resample_abs_pos_embed(
self.pos_embed,
(H, W),
num_prefix_tokens=self.num_prefix_tokens,
)
else:
pos_embed = None
x = x.view(B, -1, C)
rot_pos_embed = self.rope.get_embed(shape=(H, W)) if self.rope is not None else None
else:
pos_embed = self.pos_embed
rot_pos_embed = self.rope.get_embed() if self.rope is not None else None
if self.cls_token is not None:
x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
if pos_embed is not None:
x = x + pos_embed
if self.reg_token is not None:
to_cat = []
if self.cls_token is not None:
to_cat.append(self.cls_token.expand(x.shape[0], -1, -1))
to_cat.append(self.reg_token.expand(x.shape[0], -1, -1))
x = torch.cat(to_cat + [x], dim=1)
x = self.pos_drop(x)
# obtain shared rotary position embedding and apply patch dropout
if self.patch_drop is not None:
x, keep_indices = self.patch_drop(x)
if rot_pos_embed is not None and keep_indices is not None:
rot_pos_embed = apply_keep_indices_nlc(x, rot_pos_embed, keep_indices)
return x, rot_pos_embed
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
return_prefix_tokens: bool = False,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if an int, if is a sequence, select by matching indices
return_prefix_tokens: Return both prefix and spatial intermediate tokens
norm: Apply norm layer to all intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
"""
assert output_fmt in ('NCHW', 'NLC'), 'Output format for EVA-ViT features must be one of NCHW or NLC.'
reshape = output_fmt == 'NCHW'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
# forward pass
B, _, height, width = x.shape
x = self.patch_embed(x)
x, rot_pos_embed = self._pos_embed(x)
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
blocks = self.blocks
else:
blocks = self.blocks[:max_index + 1]
for i, blk in enumerate(blocks):
x = blk(x, rope=rot_pos_embed)
if i in take_indices:
intermediates.append(self.norm(x) if norm else x)
# process intermediates
if self.num_prefix_tokens:
# split prefix (e.g. class, distill) and spatial feature tokens
prefix_tokens = [y[:, 0:self.num_prefix_tokens] for y in intermediates]
intermediates = [y[:, self.num_prefix_tokens:] for y in intermediates]
if reshape:
# reshape to BCHW output format
H, W = self.patch_embed.dynamic_feat_size((height, width))
intermediates = [y.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() for y in intermediates]
if not torch.jit.is_scripting() and return_prefix_tokens:
            # return_prefix not supported in torchscript due to poor type handling
intermediates = list(zip(intermediates, prefix_tokens))
if intermediates_only:
return intermediates
x = self.norm(x)
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
):
""" Prune layers not required for specified intermediates.
"""
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
self.blocks = self.blocks[:max_index + 1] # truncate blocks
if prune_norm:
self.norm = nn.Identity()
if prune_head:
self.fc_norm = nn.Identity()
self.reset_classifier(0, '')
return take_indices
def forward_features(self, x):
x = self.patch_embed(x)
x, rot_pos_embed = self._pos_embed(x)
for blk in self.blocks:
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint(blk, x, rope=rot_pos_embed)
else:
x = blk(x, rope=rot_pos_embed)
x = self.norm(x)
return x
def forward_head(self, x, pre_logits: bool = False):
if self.global_pool:
x = x[:, self.num_prefix_tokens:].mean(dim=1) if self.global_pool == 'avg' else x[:, 0]
x = self.fc_norm(x)
x = self.head_drop(x)
return x if pre_logits else self.head(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
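
# Illustrative sketch only (the function name is invented for this example):
# fix_init_weight above divides each block's residual-branch weights
# (attn.proj, mlp.fc2) by sqrt(2 * layer_id), 1-indexed, so deeper blocks
# start with smaller residual contributions.

def _branch_rescale_sketch(layer_id):
    import math
    return 1.0 / math.sqrt(2.0 * layer_id)
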
def checkpoint_filter_fn(
state_dict,
model,
interpolation='bicubic',
antialias=True,
):
""" convert patch embedding weight from manual patchify + linear proj to conv"""
out_dict = {}
state_dict = state_dict.get('model_ema', state_dict)
state_dict = state_dict.get('model', state_dict)
state_dict = state_dict.get('module', state_dict)
state_dict = state_dict.get('state_dict', state_dict)
# prefix for loading OpenCLIP compatible weights
if 'visual.trunk.pos_embed' in state_dict:
prefix = 'visual.trunk.'
elif 'visual.pos_embed' in state_dict:
prefix = 'visual.'
else:
prefix = ''
mim_weights = prefix + 'mask_token' in state_dict
no_qkv = prefix + 'blocks.0.attn.q_proj.weight' in state_dict
len_prefix = len(prefix)
for k, v in state_dict.items():
if prefix:
if k.startswith(prefix):
k = k[len_prefix:]
else:
continue
if 'rope' in k:
            # fixed embedding, no need to load buffer from checkpoint
continue
if 'patch_embed.proj.weight' in k:
_, _, H, W = model.patch_embed.proj.weight.shape
if v.shape[-1] != W or v.shape[-2] != H:
v = resample_patch_embed(
v,
(H, W),
interpolation=interpolation,
antialias=antialias,
verbose=True,
)
elif k == 'pos_embed' and v.shape[1] != model.pos_embed.shape[1]:
# To resize pos embedding when using model at different size from pretrained weights
num_prefix_tokens = 0 if getattr(model, 'no_embed_class', False) else getattr(model, 'num_prefix_tokens', 1)
v = resample_abs_pos_embed(
v,
new_size=model.patch_embed.grid_size,
num_prefix_tokens=num_prefix_tokens,
interpolation=interpolation,
antialias=antialias,
verbose=True,
)
k = k.replace('mlp.ffn_ln', 'mlp.norm')
k = k.replace('attn.inner_attn_ln', 'attn.norm')
k = k.replace('mlp.w12', 'mlp.fc1')
k = k.replace('mlp.w1', 'mlp.fc1_g')
k = k.replace('mlp.w2', 'mlp.fc1_x')
k = k.replace('mlp.w3', 'mlp.fc2')
if no_qkv:
k = k.replace('q_bias', 'q_proj.bias')
k = k.replace('v_bias', 'v_proj.bias')
if mim_weights and k in ('mask_token', 'lm_head.weight', 'lm_head.bias', 'norm.weight', 'norm.bias'):
if k == 'norm.weight' or k == 'norm.bias':
# try moving norm -> fc norm on fine-tune, probably a better starting point than new init
k = k.replace('norm', 'fc_norm')
else:
# skip pretrain mask token & head weights
continue
out_dict[k] = v
return out_dict
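
# Illustrative sketch only (the function name is invented for this example):
# distills the key-renaming portion of checkpoint_filter_fn above
# (EVA02 naming -> timm naming). Replacement order matters: 'mlp.w12' must be
# handled before 'mlp.w1'/'mlp.w2', which match it as substrings.

def _remap_key_sketch(k, no_qkv=False):
    k = k.replace('mlp.ffn_ln', 'mlp.norm')
    k = k.replace('attn.inner_attn_ln', 'attn.norm')
    k = k.replace('mlp.w12', 'mlp.fc1')
    k = k.replace('mlp.w1', 'mlp.fc1_g')
    k = k.replace('mlp.w2', 'mlp.fc1_x')
    k = k.replace('mlp.w3', 'mlp.fc2')
    if no_qkv:
        k = k.replace('q_bias', 'q_proj.bias')
        k = k.replace('v_bias', 'v_proj.bias')
    return k
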
def _create_eva(variant, pretrained=False, **kwargs):
out_indices = kwargs.pop('out_indices', 3)
model = build_model_with_cfg(
Eva, variant, pretrained,
pretrained_filter_fn=checkpoint_filter_fn,
feature_cfg=dict(out_indices=out_indices, feature_cls='getter'),
**kwargs,
)
return model
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
'mean': OPENAI_CLIP_MEAN, 'std': OPENAI_CLIP_STD,
'first_conv': 'patch_embed.proj', 'classifier': 'head',
'license': 'mit', **kwargs
}
default_cfgs = generate_default_cfgs({
# EVA 01 CLIP fine-tuned on imagenet-1k
'eva_giant_patch14_224.clip_ft_in1k': _cfg(
# hf_hub_id='BAAI/EVA', hf_hub_filename='eva_clip_vis_enc_sz224_ftcls_89p1.pt',
hf_hub_id='timm/',
),
'eva_giant_patch14_336.clip_ft_in1k': _cfg(
# hf_hub_id='BAAI/EVA', hf_hub_filename='eva_clip_vis_enc_sz336_ftcls_89p4.pt',
hf_hub_id='timm/',
input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'),
# MIM EVA 01 pretrain, ft on in22k -> in1k
'eva_giant_patch14_336.m30m_ft_in22k_in1k': _cfg(
# hf_hub_id='BAAI/EVA', hf_hub_filename='eva_21k_1k_336px_psz14_ema_89p6.pt',
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD,
input_size=(3, 336, 336), crop_pct=1.0, crop_mode='squash'),
'eva_giant_patch14_560.m30m_ft_in22k_in1k': _cfg(
# hf_hub_id='BAAI/EVA', hf_hub_filename='eva_21k_1k_560px_psz14_ema_89p7.pt',
hf_hub_id='timm/',
mean=IMAGENET_DEFAULT_MEAN, std=IMAGENET_DEFAULT_STD,
input_size=(3, 560, 560), crop_pct=1.0, crop_mode='squash'),
# in22k or m38m MIM pretrain w/ intermediate in22k fine-tune and final in1k fine-tune
'eva02_base_patch14_448.mim_in22k_ft_in22k_in1k': _cfg(
# hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k_to_in1k/eva02_B_pt_in21k_medft_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash',
),
'eva02_large_patch14_448.mim_in22k_ft_in22k_in1k': _cfg(
# hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k_to_in1k/eva02_L_pt_in21k_medft_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash',
),
'eva02_large_patch14_448.mim_m38m_ft_in22k_in1k': _cfg(
hf_hub_id='timm/',
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k_to_in1k/eva02_L_pt_m38m_medft_in21k_ft_in1k_p14.pt',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash',
),
    # in22k or m38m MIM pretrain w/ in1k fine-tune
'eva02_tiny_patch14_336.mim_in22k_ft_in1k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in1k/eva02_Ti_pt_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 336, 336), crop_pct=1.0,
),
'eva02_small_patch14_336.mim_in22k_ft_in1k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in1k/eva02_S_pt_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 336, 336), crop_pct=1.0,
),
'eva02_base_patch14_448.mim_in22k_ft_in1k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in1k/eva02_B_pt_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0,
),
'eva02_large_patch14_448.mim_in22k_ft_in1k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in1k/eva02_L_pt_in21k_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0,
),
'eva02_large_patch14_448.mim_m38m_ft_in1k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in1k/eva02_L_pt_m38m_ft_in1k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0,
),
    # in22k or m38m MIM pretrain w/ in22k fine-tune
'eva02_base_patch14_448.mim_in22k_ft_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k/eva02_B_pt_in21k_medft_in21k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash', num_classes=21841,
),
'eva02_large_patch14_448.mim_in22k_ft_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k/eva02_L_pt_in21k_medft_in21k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash', num_classes=21841,
),
'eva02_large_patch14_448.mim_m38m_ft_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/cls/in21k/eva02_L_pt_m38m_medft_in21k_p14.pt',
hf_hub_id='timm/',
input_size=(3, 448, 448), crop_pct=1.0, crop_mode='squash', num_classes=21841,
),
# in22k or m38m MIM pretrain
'eva02_tiny_patch14_224.mim_in22k': _cfg(
# hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/pt/eva02_Ti_pt_in21k_p14.pt',
hf_hub_id='timm/',
num_classes=0,
),
'eva02_small_patch14_224.mim_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/pt/eva02_S_pt_in21k_p14.pt',
hf_hub_id='timm/',
num_classes=0,
),
'eva02_base_patch14_224.mim_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/pt/eva02_B_pt_in21k_p14.pt',
hf_hub_id='timm/',
num_classes=0,
),
'eva02_large_patch14_224.mim_in22k': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/pt/eva02_L_pt_in21k_p14.pt',
hf_hub_id='timm/',
num_classes=0,
),
'eva02_large_patch14_224.mim_m38m': _cfg(
#hf_hub_id='Yuxin-CV/EVA-02', hf_hub_filename='eva02/pt/eva02_L_pt_m38m_p14.pt',
hf_hub_id='timm/',
num_classes=0,
),
# EVA01 and EVA02 CLIP image towers
'eva_giant_patch14_clip_224.laion400m': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA01_CLIP_g_14_plus_psz14_s11B.pt',
hf_hub_id='timm/eva_giant_patch14_clip_224.laion400m_s11b_b41k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=1024,
),
'eva_giant_patch14_clip_224.merged2b': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA01_CLIP_g_14_plus_psz14_s11B.pt',
hf_hub_id='timm/eva_giant_patch14_plus_clip_224.merged2b_s11b_b114k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=1024,
),
'eva02_base_patch16_clip_224.merged2b': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_CLIP_L_psz14_s4B.pt',
hf_hub_id='timm/eva02_base_patch16_clip_224.merged2b_s8b_b131k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=512,
),
'eva02_large_patch14_clip_224.merged2b': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_CLIP_L_psz14_s4B.pt',
hf_hub_id='timm/eva02_large_patch14_clip_224.merged2b_s4b_b131k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=768,
),
'eva02_large_patch14_clip_336.merged2b': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_CLIP_L_psz14_s4B.pt',
hf_hub_id='timm/eva02_large_patch14_clip_336.merged2b_s6b_b61k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
input_size=(3, 336, 336), crop_pct=1.0,
num_classes=768,
),
'eva02_enormous_patch14_clip_224.laion2b': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_CLIP_E_psz14_plus_s9B.pt',
hf_hub_id='timm/eva02_enormous_patch14_clip_224.laion2b_s4b_b115k', # float16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=1024,
),
'eva02_enormous_patch14_clip_224.laion2b_plus': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_CLIP_E_psz14_plus_s9B.pt',
hf_hub_id='timm/eva02_enormous_patch14_plus_clip_224.laion2b_s9b_b144k', # bfloat16 weights
hf_hub_filename='open_clip_pytorch_model.bin',
num_classes=1024,
),
'eva02_enormous_patch14_clip_224.pretrain': _cfg(
# hf_hub_id='QuanSun/EVA-CLIP', hf_hub_filename='EVA02_E_psz14.pt',
num_classes=0,
),
'vit_medium_patch16_rope_reg1_gap_256.sbb_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 256, 256), crop_pct=0.95,
mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)
),
'vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 256, 256), crop_pct=0.95,
mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)
),
'vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 256, 256), crop_pct=0.95,
),
'vit_base_patch16_rope_reg1_gap_256.sbb_in1k': _cfg(
hf_hub_id='timm/',
input_size=(3, 256, 256), crop_pct=0.95,
mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)
),
})
@register_model
def eva_giant_patch14_224(pretrained=False, **kwargs) -> Eva:
""" EVA-g model https://arxiv.org/abs/2211.07636 """
model_args = dict(patch_size=14, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=6144 / 1408)
model = _create_eva('eva_giant_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
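Every factory in this file combines its architecture defaults with caller overrides via `**dict(model_args, **kwargs)`, so caller-supplied values take precedence. A minimal stdlib illustration of that precedence (the values here are just examples, not a real override set):

```py
giant_args = dict(patch_size=14, embed_dim=1408, depth=40, num_heads=16)
overrides = dict(depth=12, drop_rate=0.1)  # hypothetical caller kwargs
merged = dict(giant_args, **overrides)

print(merged['depth'])       # 12 -> the caller's value wins
print(merged['patch_size'])  # 14 -> untouched defaults survive
```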
@register_model
def eva_giant_patch14_336(pretrained=False, **kwargs) -> Eva:
""" EVA-g model https://arxiv.org/abs/2211.07636 """
model_args = dict(patch_size=14, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=6144 / 1408)
model = _create_eva('eva_giant_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva_giant_patch14_560(pretrained=False, **kwargs) -> Eva:
""" EVA-g model https://arxiv.org/abs/2211.07636 """
model_args = dict(patch_size=14, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=6144 / 1408)
model = _create_eva('eva_giant_patch14_560', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_tiny_patch14_224(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=192,
depth=12,
num_heads=3,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_tiny_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
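The `mlp_ratio=4 * 2 / 3` used by the EVA02 variants shrinks the MLP width to compensate for SwiGLU's extra input projection, keeping the parameter count close to a standard 4x MLP. The exact rounding inside timm may differ slightly; this is just a sanity check of the arithmetic for `eva02_tiny`:

```py
embed_dim = 192        # eva02_tiny width
mlp_ratio = 4 * 2 / 3  # SwiGLU-adjusted ratio, ~2.667
hidden_features = round(embed_dim * mlp_ratio)
print(hidden_features)  # 512
```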
@register_model
def eva02_small_patch14_224(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=384,
depth=12,
num_heads=6,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_small_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_base_patch14_224(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=768,
depth=12,
num_heads=12,
qkv_fused=False,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
scale_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_base_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_large_patch14_224(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=1024,
depth=24,
num_heads=16,
mlp_ratio=4 * 2 / 3,
qkv_fused=False,
swiglu_mlp=True,
scale_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_large_patch14_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_tiny_patch14_336(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=336,
patch_size=14,
embed_dim=192,
depth=12,
num_heads=3,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_tiny_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_small_patch14_336(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=336,
patch_size=14,
embed_dim=384,
depth=12,
num_heads=6,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_small_patch14_336', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_base_patch14_448(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=448,
patch_size=14,
embed_dim=768,
depth=12,
num_heads=12,
qkv_fused=False,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
scale_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_base_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_large_patch14_448(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=448,
patch_size=14,
embed_dim=1024,
depth=24,
num_heads=16,
mlp_ratio=4 * 2 / 3,
qkv_fused=False,
swiglu_mlp=True,
scale_mlp=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
)
model = _create_eva('eva02_large_patch14_448', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva_giant_patch14_clip_224(pretrained=False, **kwargs) -> Eva:
""" EVA-g CLIP model (only difference from non-CLIP is the pooling) """
model_args = dict(
patch_size=14, embed_dim=1408, depth=40, num_heads=16, mlp_ratio=6144 / 1408,
global_pool=kwargs.pop('global_pool', 'token'))
model = _create_eva('eva_giant_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_base_patch16_clip_224(pretrained=False, **kwargs) -> Eva:
""" A EVA-CLIP specific variant that adds additional attn scale layernorm to eva02_base """
model_args = dict(
img_size=224,
patch_size=16,
embed_dim=768,
depth=12,
num_heads=12,
qkv_fused=False,
mlp_ratio=4 * 2 / 3,
swiglu_mlp=True,
scale_mlp=True,
scale_attn_inner=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
global_pool=kwargs.pop('global_pool', 'token'),
)
model = _create_eva('eva02_base_patch16_clip_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_large_patch14_clip_224(pretrained=False, **kwargs) -> Eva:
""" A EVA-CLIP specific variant that adds additional attn scale layernorm to eva02_large """
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=1024,
depth=24,
num_heads=16,
mlp_ratio=4 * 2 / 3,
qkv_fused=False,
swiglu_mlp=True,
scale_mlp=True,
scale_attn_inner=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
global_pool=kwargs.pop('global_pool', 'token'),
)
model = _create_eva('eva02_large_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_large_patch14_clip_336(pretrained=False, **kwargs) -> Eva:
""" A EVA-CLIP specific variant that adds additional attn scale layernorm to eva02_large """
model_args = dict(
img_size=336,
patch_size=14,
embed_dim=1024,
depth=24,
num_heads=16,
mlp_ratio=4 * 2 / 3,
qkv_fused=False,
swiglu_mlp=True,
scale_mlp=True,
scale_attn_inner=True,
use_rot_pos_emb=True,
ref_feat_shape=(16, 16), # 224/14
global_pool=kwargs.pop('global_pool', 'token'),
)
model = _create_eva('eva02_large_patch14_clip_336', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def eva02_enormous_patch14_clip_224(pretrained=False, **kwargs) -> Eva:
""" A EVA-CLIP specific variant that uses residual post-norm in blocks """
model_args = dict(
img_size=224,
patch_size=14,
embed_dim=1792,
depth=64,
num_heads=16,
mlp_ratio=15360 / 1792,
use_post_norm=True,
global_pool=kwargs.pop('global_pool', 'token'),
)
model = _create_eva('eva02_enormous_patch14_clip_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_medium_patch16_rope_reg1_gap_256(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=256,
patch_size=16,
embed_dim=512,
depth=12,
num_heads=8,
qkv_fused=True,
qkv_bias=True,
init_values=1e-5,
class_token=False,
num_reg_tokens=1,
use_rot_pos_emb=True,
use_abs_pos_emb=False,
        ref_feat_shape=(16, 16),  # 256/16
)
model = _create_eva('vit_medium_patch16_rope_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_mediumd_patch16_rope_reg1_gap_256(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=256,
patch_size=16,
embed_dim=512,
depth=20,
num_heads=8,
qkv_fused=True,
qkv_bias=False,
init_values=1e-5,
class_token=False,
num_reg_tokens=1,
use_rot_pos_emb=True,
use_abs_pos_emb=False,
        ref_feat_shape=(16, 16),  # 256/16
)
model = _create_eva('vit_mediumd_patch16_rope_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_betwixt_patch16_rope_reg4_gap_256(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=256,
patch_size=16,
embed_dim=640,
depth=12,
num_heads=10,
qkv_fused=True,
qkv_bias=True,
init_values=1e-5,
class_token=False,
num_reg_tokens=4,
use_rot_pos_emb=True,
use_abs_pos_emb=False,
        ref_feat_shape=(16, 16),  # 256/16
)
model = _create_eva('vit_betwixt_patch16_rope_reg4_gap_256', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_base_patch16_rope_reg1_gap_256(pretrained=False, **kwargs) -> Eva:
model_args = dict(
img_size=256,
patch_size=16,
embed_dim=768,
depth=12,
num_heads=12,
qkv_fused=True,
qkv_bias=True,
init_values=1e-5,
class_token=False,
num_reg_tokens=1,
use_rot_pos_emb=True,
use_abs_pos_emb=False,
        ref_feat_shape=(16, 16),  # 256/16
)
model = _create_eva('vit_base_patch16_rope_reg1_gap_256', pretrained=pretrained, **dict(model_args, **kwargs))
return model
# Source: pytorch-image-models/timm/models/eva.py
""" Pytorch Inception-Resnet-V2 implementation
Sourced from https://github.com/Cadene/tensorflow-model-zoo.torch (MIT License) which is
based upon Google's Tensorflow implementation and pretrained weights (Apache 2.0 License)
"""
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.data import IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD
from timm.layers import create_classifier, ConvNormAct
from ._builder import build_model_with_cfg
from ._manipulate import flatten_modules
from ._registry import register_model, generate_default_cfgs, register_model_deprecations
__all__ = ['InceptionResnetV2']
class Mixed_5b(nn.Module):
def __init__(self, conv_block=None):
super(Mixed_5b, self).__init__()
conv_block = conv_block or ConvNormAct
self.branch0 = conv_block(192, 96, kernel_size=1, stride=1)
self.branch1 = nn.Sequential(
conv_block(192, 48, kernel_size=1, stride=1),
conv_block(48, 64, kernel_size=5, stride=1, padding=2)
)
self.branch2 = nn.Sequential(
conv_block(192, 64, kernel_size=1, stride=1),
conv_block(64, 96, kernel_size=3, stride=1, padding=1),
conv_block(96, 96, kernel_size=3, stride=1, padding=1)
)
self.branch3 = nn.Sequential(
nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False),
conv_block(192, 64, kernel_size=1, stride=1)
)
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
x2 = self.branch2(x)
x3 = self.branch3(x)
out = torch.cat((x0, x1, x2, x3), 1)
return out
class Block35(nn.Module):
def __init__(self, scale=1.0, conv_block=None):
super(Block35, self).__init__()
self.scale = scale
conv_block = conv_block or ConvNormAct
self.branch0 = conv_block(320, 32, kernel_size=1, stride=1)
self.branch1 = nn.Sequential(
conv_block(320, 32, kernel_size=1, stride=1),
conv_block(32, 32, kernel_size=3, stride=1, padding=1)
)
self.branch2 = nn.Sequential(
conv_block(320, 32, kernel_size=1, stride=1),
conv_block(32, 48, kernel_size=3, stride=1, padding=1),
conv_block(48, 64, kernel_size=3, stride=1, padding=1)
)
self.conv2d = nn.Conv2d(128, 320, kernel_size=1, stride=1)
self.act = nn.ReLU()
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
x2 = self.branch2(x)
out = torch.cat((x0, x1, x2), 1)
out = self.conv2d(out)
out = out * self.scale + x
out = self.act(out)
return out
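`Block35` (and likewise `Block17` and `Block8`) adds its branch output back onto the input with a small residual scale, `out = out * self.scale + x`, which keeps the residual update small and stabilizes training. A stand-alone sketch of that update on plain numbers (the `scaled_residual` helper is illustrative only; the scale of 0.5 is chosen for readable arithmetic, while the module above defaults to 0.17):

```py
def scaled_residual(x, branch_out, scale=0.5):
    # out = branch_output * scale + identity, as in Block35.forward
    return [b * scale + xi for xi, b in zip(x, branch_out)]

print(scaled_residual([1.0, 2.0], [10.0, 10.0]))  # [6.0, 7.0]
```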
class Mixed_6a(nn.Module):
def __init__(self, conv_block=None):
super(Mixed_6a, self).__init__()
conv_block = conv_block or ConvNormAct
self.branch0 = conv_block(320, 384, kernel_size=3, stride=2)
self.branch1 = nn.Sequential(
conv_block(320, 256, kernel_size=1, stride=1),
conv_block(256, 256, kernel_size=3, stride=1, padding=1),
conv_block(256, 384, kernel_size=3, stride=2)
)
self.branch2 = nn.MaxPool2d(3, stride=2)
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
x2 = self.branch2(x)
out = torch.cat((x0, x1, x2), 1)
return out
class Block17(nn.Module):
def __init__(self, scale=1.0, conv_block=None):
super(Block17, self).__init__()
self.scale = scale
conv_block = conv_block or ConvNormAct
self.branch0 = conv_block(1088, 192, kernel_size=1, stride=1)
self.branch1 = nn.Sequential(
conv_block(1088, 128, kernel_size=1, stride=1),
conv_block(128, 160, kernel_size=(1, 7), stride=1, padding=(0, 3)),
conv_block(160, 192, kernel_size=(7, 1), stride=1, padding=(3, 0))
)
self.conv2d = nn.Conv2d(384, 1088, kernel_size=1, stride=1)
self.act = nn.ReLU()
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
out = torch.cat((x0, x1), 1)
out = self.conv2d(out)
out = out * self.scale + x
out = self.act(out)
return out
class Mixed_7a(nn.Module):
def __init__(self, conv_block=None):
super(Mixed_7a, self).__init__()
conv_block = conv_block or ConvNormAct
self.branch0 = nn.Sequential(
conv_block(1088, 256, kernel_size=1, stride=1),
conv_block(256, 384, kernel_size=3, stride=2)
)
self.branch1 = nn.Sequential(
conv_block(1088, 256, kernel_size=1, stride=1),
conv_block(256, 288, kernel_size=3, stride=2)
)
self.branch2 = nn.Sequential(
conv_block(1088, 256, kernel_size=1, stride=1),
conv_block(256, 288, kernel_size=3, stride=1, padding=1),
conv_block(288, 320, kernel_size=3, stride=2)
)
self.branch3 = nn.MaxPool2d(3, stride=2)
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
x2 = self.branch2(x)
x3 = self.branch3(x)
out = torch.cat((x0, x1, x2, x3), 1)
return out
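The concatenation in `Mixed_7a.forward` fixes the channel width seen by the following `Block8` stage: the three convolutional branches emit 384, 288, and 320 channels, and the max-pool branch passes its 1088 input channels through unchanged. A quick check of that bookkeeping:

```py
branch_channels = [384, 288, 320, 1088]  # Mixed_7a branch output widths
print(sum(branch_channels))  # 2080, the input width Block8 is built for
```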
class Block8(nn.Module):
def __init__(self, scale=1.0, no_relu=False, conv_block=None):
super(Block8, self).__init__()
self.scale = scale
conv_block = conv_block or ConvNormAct
self.branch0 = conv_block(2080, 192, kernel_size=1, stride=1)
self.branch1 = nn.Sequential(
conv_block(2080, 192, kernel_size=1, stride=1),
conv_block(192, 224, kernel_size=(1, 3), stride=1, padding=(0, 1)),
conv_block(224, 256, kernel_size=(3, 1), stride=1, padding=(1, 0))
)
self.conv2d = nn.Conv2d(448, 2080, kernel_size=1, stride=1)
self.relu = None if no_relu else nn.ReLU()
def forward(self, x):
x0 = self.branch0(x)
x1 = self.branch1(x)
out = torch.cat((x0, x1), 1)
out = self.conv2d(out)
out = out * self.scale + x
if self.relu is not None:
out = self.relu(out)
return out
class InceptionResnetV2(nn.Module):
def __init__(
self,
num_classes=1000,
in_chans=3,
drop_rate=0.,
output_stride=32,
global_pool='avg',
norm_layer='batchnorm2d',
norm_eps=1e-3,
act_layer='relu',
):
super(InceptionResnetV2, self).__init__()
self.num_classes = num_classes
self.num_features = self.head_hidden_size = 1536
assert output_stride == 32
conv_block = partial(
ConvNormAct,
padding=0,
norm_layer=norm_layer,
act_layer=act_layer,
norm_kwargs=dict(eps=norm_eps),
act_kwargs=dict(inplace=True),
)
self.conv2d_1a = conv_block(in_chans, 32, kernel_size=3, stride=2)
self.conv2d_2a = conv_block(32, 32, kernel_size=3, stride=1)
self.conv2d_2b = conv_block(32, 64, kernel_size=3, stride=1, padding=1)
self.feature_info = [dict(num_chs=64, reduction=2, module='conv2d_2b')]
self.maxpool_3a = nn.MaxPool2d(3, stride=2)
self.conv2d_3b = conv_block(64, 80, kernel_size=1, stride=1)
self.conv2d_4a = conv_block(80, 192, kernel_size=3, stride=1)
self.feature_info += [dict(num_chs=192, reduction=4, module='conv2d_4a')]
self.maxpool_5a = nn.MaxPool2d(3, stride=2)
self.mixed_5b = Mixed_5b(conv_block=conv_block)
self.repeat = nn.Sequential(*[Block35(scale=0.17, conv_block=conv_block) for _ in range(10)])
self.feature_info += [dict(num_chs=320, reduction=8, module='repeat')]
self.mixed_6a = Mixed_6a(conv_block=conv_block)
self.repeat_1 = nn.Sequential(*[Block17(scale=0.10, conv_block=conv_block) for _ in range(20)])
self.feature_info += [dict(num_chs=1088, reduction=16, module='repeat_1')]
self.mixed_7a = Mixed_7a(conv_block=conv_block)
self.repeat_2 = nn.Sequential(*[Block8(scale=0.20, conv_block=conv_block) for _ in range(9)])
self.block8 = Block8(no_relu=True, conv_block=conv_block)
self.conv2d_7b = conv_block(2080, self.num_features, kernel_size=1, stride=1)
self.feature_info += [dict(num_chs=self.num_features, reduction=32, module='conv2d_7b')]
self.global_pool, self.head_drop, self.classif = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool, drop_rate=drop_rate)
@torch.jit.ignore
def group_matcher(self, coarse=False):
module_map = {k: i for i, (k, _) in enumerate(flatten_modules(self.named_children(), prefix=()))}
module_map.pop(('classif',))
def _matcher(name):
if any([name.startswith(n) for n in ('conv2d_1', 'conv2d_2')]):
return 0
elif any([name.startswith(n) for n in ('conv2d_3', 'conv2d_4')]):
return 1
elif any([name.startswith(n) for n in ('block8', 'conv2d_7')]):
return len(module_map) + 1
else:
for k in module_map.keys():
if k == tuple(name.split('.')[:len(k)]):
return module_map[k]
return float('inf')
return _matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, "checkpointing not supported"
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.classif
def reset_classifier(self, num_classes: int, global_pool: str = 'avg'):
self.num_classes = num_classes
self.global_pool, self.classif = create_classifier(self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x):
x = self.conv2d_1a(x)
x = self.conv2d_2a(x)
x = self.conv2d_2b(x)
x = self.maxpool_3a(x)
x = self.conv2d_3b(x)
x = self.conv2d_4a(x)
x = self.maxpool_5a(x)
x = self.mixed_5b(x)
x = self.repeat(x)
x = self.mixed_6a(x)
x = self.repeat_1(x)
x = self.mixed_7a(x)
x = self.repeat_2(x)
x = self.block8(x)
x = self.conv2d_7b(x)
return x
def forward_head(self, x, pre_logits: bool = False):
x = self.global_pool(x)
x = self.head_drop(x)
return x if pre_logits else self.classif(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _create_inception_resnet_v2(variant, pretrained=False, **kwargs):
return build_model_with_cfg(InceptionResnetV2, variant, pretrained, **kwargs)
default_cfgs = generate_default_cfgs({
# ported from http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz
'inception_resnet_v2.tf_in1k': {
'hf_hub_id': 'timm/',
'num_classes': 1000, 'input_size': (3, 299, 299), 'pool_size': (8, 8),
'crop_pct': 0.8975, 'interpolation': 'bicubic',
'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD,
'first_conv': 'conv2d_1a.conv', 'classifier': 'classif',
},
# As per https://arxiv.org/abs/1705.07204 and
# ported from http://download.tensorflow.org/models/ens_adv_inception_resnet_v2_2017_08_18.tar.gz
'inception_resnet_v2.tf_ens_adv_in1k': {
'hf_hub_id': 'timm/',
'num_classes': 1000, 'input_size': (3, 299, 299), 'pool_size': (8, 8),
'crop_pct': 0.8975, 'interpolation': 'bicubic',
'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD,
'first_conv': 'conv2d_1a.conv', 'classifier': 'classif',
}
})
@register_model
def inception_resnet_v2(pretrained=False, **kwargs) -> InceptionResnetV2:
return _create_inception_resnet_v2('inception_resnet_v2', pretrained=pretrained, **kwargs)
register_model_deprecations(__name__, {
'ens_adv_inception_resnet_v2': 'inception_resnet_v2.tf_ens_adv_in1k',
})
# Source: pytorch-image-models/timm/models/inception_resnet_v2.py
"""
pnasnet5large implementation grabbed from Cadene's pretrained models
Additional credit to https://github.com/creafz
https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/pnasnet.py
"""
from collections import OrderedDict
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.layers import ConvNormAct, create_conv2d, create_pool2d, create_classifier
from ._builder import build_model_with_cfg
from ._registry import register_model, generate_default_cfgs
__all__ = ['PNASNet5Large']
class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''):
super(SeparableConv2d, self).__init__()
self.depthwise_conv2d = create_conv2d(
in_channels, in_channels, kernel_size=kernel_size,
stride=stride, padding=padding, groups=in_channels)
self.pointwise_conv2d = create_conv2d(
in_channels, out_channels, kernel_size=1, padding=padding)
def forward(self, x):
x = self.depthwise_conv2d(x)
x = self.pointwise_conv2d(x)
return x
class BranchSeparables(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, stem_cell=False, padding=''):
super(BranchSeparables, self).__init__()
middle_channels = out_channels if stem_cell else in_channels
self.act_1 = nn.ReLU()
self.separable_1 = SeparableConv2d(
in_channels, middle_channels, kernel_size, stride=stride, padding=padding)
self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001)
self.act_2 = nn.ReLU()
self.separable_2 = SeparableConv2d(
middle_channels, out_channels, kernel_size, stride=1, padding=padding)
self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.act_1(x)
x = self.separable_1(x)
x = self.bn_sep_1(x)
x = self.act_2(x)
x = self.separable_2(x)
x = self.bn_sep_2(x)
return x
class ActConvBn(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''):
super(ActConvBn, self).__init__()
self.act = nn.ReLU()
self.conv = create_conv2d(
in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.act(x)
x = self.conv(x)
x = self.bn(x)
return x
class FactorizedReduction(nn.Module):
def __init__(self, in_channels, out_channels, padding=''):
super(FactorizedReduction, self).__init__()
self.act = nn.ReLU()
self.path_1 = nn.Sequential(OrderedDict([
('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)),
('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)),
]))
self.path_2 = nn.Sequential(OrderedDict([
('pad', nn.ZeroPad2d((-1, 1, -1, 1))), # shift
('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)),
('conv', create_conv2d(in_channels, out_channels // 2, kernel_size=1, padding=padding)),
]))
self.final_path_bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.act(x)
x_path1 = self.path_1(x)
x_path2 = self.path_2(x)
out = self.final_path_bn(torch.cat([x_path1, x_path2], 1))
return out
class CellBase(nn.Module):
def cell_forward(self, x_left, x_right):
x_comb_iter_0_left = self.comb_iter_0_left(x_left)
x_comb_iter_0_right = self.comb_iter_0_right(x_left)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_right)
x_comb_iter_1_right = self.comb_iter_1_right(x_right)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_right)
x_comb_iter_2_right = self.comb_iter_2_right(x_right)
x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right
x_comb_iter_3_left = self.comb_iter_3_left(x_comb_iter_2)
x_comb_iter_3_right = self.comb_iter_3_right(x_right)
x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right
x_comb_iter_4_left = self.comb_iter_4_left(x_left)
if self.comb_iter_4_right is not None:
x_comb_iter_4_right = self.comb_iter_4_right(x_right)
else:
x_comb_iter_4_right = x_right
x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right
x_out = torch.cat([x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class CellStem0(CellBase):
def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''):
super(CellStem0, self).__init__()
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type)
self.comb_iter_0_left = BranchSeparables(
in_chs_left, out_chs_left, kernel_size=5, stride=2, stem_cell=True, padding=pad_type)
self.comb_iter_0_right = nn.Sequential(OrderedDict([
('max_pool', create_pool2d('max', 3, stride=2, padding=pad_type)),
('conv', create_conv2d(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)),
('bn', nn.BatchNorm2d(out_chs_left, eps=0.001)),
]))
self.comb_iter_1_left = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=7, stride=2, padding=pad_type)
self.comb_iter_1_right = create_pool2d('max', 3, stride=2, padding=pad_type)
self.comb_iter_2_left = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=5, stride=2, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=3, stride=2, padding=pad_type)
self.comb_iter_3_left = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=3, padding=pad_type)
self.comb_iter_3_right = create_pool2d('max', 3, stride=2, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(
in_chs_right, out_chs_right, kernel_size=3, stride=2, stem_cell=True, padding=pad_type)
self.comb_iter_4_right = ActConvBn(
out_chs_right, out_chs_right, kernel_size=1, stride=2, padding=pad_type)
def forward(self, x_left):
x_right = self.conv_1x1(x_left)
x_out = self.cell_forward(x_left, x_right)
return x_out
class Cell(CellBase):
def __init__(
self,
in_chs_left,
out_chs_left,
in_chs_right,
out_chs_right,
pad_type='',
is_reduction=False,
match_prev_layer_dims=False,
):
super(Cell, self).__init__()
# If `is_reduction` is set to `True` stride 2 is used for
# convolution and pooling layers to reduce the spatial size of
# the output of a cell approximately by a factor of 2.
stride = 2 if is_reduction else 1
        # If `match_prev_layer_dims` is set to `True`
        # `FactorizedReduction` is used to reduce the spatial size
        # of the left input of a cell approximately by a factor of 2.
self.match_prev_layer_dimensions = match_prev_layer_dims
if match_prev_layer_dims:
self.conv_prev_1x1 = FactorizedReduction(in_chs_left, out_chs_left, padding=pad_type)
else:
self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, kernel_size=1, padding=pad_type)
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, kernel_size=1, padding=pad_type)
self.comb_iter_0_left = BranchSeparables(
out_chs_left, out_chs_left, kernel_size=5, stride=stride, padding=pad_type)
self.comb_iter_0_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
self.comb_iter_1_left = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=7, stride=stride, padding=pad_type)
self.comb_iter_1_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
self.comb_iter_2_left = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=5, stride=stride, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(
out_chs_right, out_chs_right, kernel_size=3, stride=stride, padding=pad_type)
self.comb_iter_3_left = BranchSeparables(out_chs_right, out_chs_right, kernel_size=3)
self.comb_iter_3_right = create_pool2d('max', 3, stride=stride, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(
out_chs_left, out_chs_left, kernel_size=3, stride=stride, padding=pad_type)
if is_reduction:
self.comb_iter_4_right = ActConvBn(
out_chs_right, out_chs_right, kernel_size=1, stride=stride, padding=pad_type)
else:
self.comb_iter_4_right = None
def forward(self, x_left, x_right):
x_left = self.conv_prev_1x1(x_left)
x_right = self.conv_1x1(x_right)
x_out = self.cell_forward(x_left, x_right)
return x_out
class PNASNet5Large(nn.Module):
def __init__(
self,
num_classes=1000,
in_chans=3,
output_stride=32,
drop_rate=0.,
global_pool='avg',
pad_type='',
):
super(PNASNet5Large, self).__init__()
self.num_classes = num_classes
self.num_features = self.head_hidden_size = 4320
assert output_stride == 32
self.conv_0 = ConvNormAct(
in_chans, 96, kernel_size=3, stride=2, padding=0,
norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.1), apply_act=False)
self.cell_stem_0 = CellStem0(
in_chs_left=96, out_chs_left=54, in_chs_right=96, out_chs_right=54, pad_type=pad_type)
self.cell_stem_1 = Cell(
in_chs_left=96, out_chs_left=108, in_chs_right=270, out_chs_right=108, pad_type=pad_type,
match_prev_layer_dims=True, is_reduction=True)
self.cell_0 = Cell(
in_chs_left=270, out_chs_left=216, in_chs_right=540, out_chs_right=216, pad_type=pad_type,
match_prev_layer_dims=True)
self.cell_1 = Cell(
in_chs_left=540, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
self.cell_2 = Cell(
in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
self.cell_3 = Cell(
in_chs_left=1080, out_chs_left=216, in_chs_right=1080, out_chs_right=216, pad_type=pad_type)
self.cell_4 = Cell(
in_chs_left=1080, out_chs_left=432, in_chs_right=1080, out_chs_right=432, pad_type=pad_type,
is_reduction=True)
self.cell_5 = Cell(
in_chs_left=1080, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type,
match_prev_layer_dims=True)
self.cell_6 = Cell(
in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type)
self.cell_7 = Cell(
in_chs_left=2160, out_chs_left=432, in_chs_right=2160, out_chs_right=432, pad_type=pad_type)
self.cell_8 = Cell(
in_chs_left=2160, out_chs_left=864, in_chs_right=2160, out_chs_right=864, pad_type=pad_type,
is_reduction=True)
self.cell_9 = Cell(
in_chs_left=2160, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type,
match_prev_layer_dims=True)
self.cell_10 = Cell(
in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type)
self.cell_11 = Cell(
in_chs_left=4320, out_chs_left=864, in_chs_right=4320, out_chs_right=864, pad_type=pad_type)
self.act = nn.ReLU()
self.feature_info = [
dict(num_chs=96, reduction=2, module='conv_0'),
dict(num_chs=270, reduction=4, module='cell_stem_1.conv_1x1.act'),
dict(num_chs=1080, reduction=8, module='cell_4.conv_1x1.act'),
dict(num_chs=2160, reduction=16, module='cell_8.conv_1x1.act'),
dict(num_chs=4320, reduction=32, module='act'),
]
self.global_pool, self.head_drop, self.last_linear = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool, drop_rate=drop_rate)
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(stem=r'^conv_0|cell_stem_[01]', blocks=r'^cell_(\d+)')
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, 'gradient checkpointing not supported'
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.last_linear
def reset_classifier(self, num_classes: int, global_pool: str = 'avg'):
self.num_classes = num_classes
self.global_pool, self.last_linear = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x):
x_conv_0 = self.conv_0(x)
x_stem_0 = self.cell_stem_0(x_conv_0)
x_stem_1 = self.cell_stem_1(x_conv_0, x_stem_0)
x_cell_0 = self.cell_0(x_stem_0, x_stem_1)
x_cell_1 = self.cell_1(x_stem_1, x_cell_0)
x_cell_2 = self.cell_2(x_cell_0, x_cell_1)
x_cell_3 = self.cell_3(x_cell_1, x_cell_2)
x_cell_4 = self.cell_4(x_cell_2, x_cell_3)
x_cell_5 = self.cell_5(x_cell_3, x_cell_4)
x_cell_6 = self.cell_6(x_cell_4, x_cell_5)
x_cell_7 = self.cell_7(x_cell_5, x_cell_6)
x_cell_8 = self.cell_8(x_cell_6, x_cell_7)
x_cell_9 = self.cell_9(x_cell_7, x_cell_8)
x_cell_10 = self.cell_10(x_cell_8, x_cell_9)
x_cell_11 = self.cell_11(x_cell_9, x_cell_10)
x = self.act(x_cell_11)
return x
def forward_head(self, x, pre_logits: bool = False):
x = self.global_pool(x)
x = self.head_drop(x)
return x if pre_logits else self.last_linear(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _create_pnasnet(variant, pretrained=False, **kwargs):
return build_model_with_cfg(
PNASNet5Large,
variant,
pretrained,
feature_cfg=dict(feature_cls='hook', no_rewrite=True), # not possible to re-write this model
**kwargs,
)
default_cfgs = generate_default_cfgs({
'pnasnet5large.tf_in1k': {
'hf_hub_id': 'timm/',
'input_size': (3, 331, 331),
'pool_size': (11, 11),
'crop_pct': 0.911,
'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5),
'std': (0.5, 0.5, 0.5),
'num_classes': 1000,
'first_conv': 'conv_0.conv',
'classifier': 'last_linear',
},
})
@register_model
def pnasnet5large(pretrained=False, **kwargs) -> PNASNet5Large:
r"""PNASNet-5 model architecture from the
`"Progressive Neural Architecture Search"
<https://arxiv.org/abs/1712.00559>`_ paper.
"""
model_kwargs = dict(pad_type='same', **kwargs)
return _create_pnasnet('pnasnet5large', pretrained, **model_kwargs)
""" Swin Transformer
A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`
- https://arxiv.org/pdf/2103.14030
Code/weights from https://github.com/microsoft/Swin-Transformer, original copyright/license info below
S3 (AutoFormerV2, https://arxiv.org/abs/2111.14725) Swin weights from
- https://github.com/microsoft/Cream/tree/main/AutoFormerV2
Modifications and additions for timm hacked together by / Copyright 2021, Ross Wightman
"""
# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------
import logging
import math
from typing import Callable, List, Optional, Tuple, Union
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import PatchEmbed, Mlp, DropPath, ClassifierHead, to_2tuple, to_ntuple, trunc_normal_, \
_assert, use_fused_attn, resize_rel_pos_bias_table, resample_patch_embed, ndgrid
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._features_fx import register_notrace_function
from ._manipulate import checkpoint_seq, named_apply
from ._registry import generate_default_cfgs, register_model, register_model_deprecations
from .vision_transformer import get_init_weights_vit
__all__ = ['SwinTransformer'] # model_registry will add each entrypoint fn to this
_logger = logging.getLogger(__name__)
_int_or_tuple_2_t = Union[int, Tuple[int, int]]
def window_partition(
x: torch.Tensor,
window_size: Tuple[int, int],
) -> torch.Tensor:
"""
Partition into non-overlapping windows with padding if needed.
Args:
x (tensor): input tokens with [B, H, W, C].
window_size (int): window size.
Returns:
windows: windows after partition with [B * num_windows, window_size, window_size, C].
(Hp, Wp): padded height and width before partition
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows
@register_notrace_function # reason: int argument is a Proxy
def window_reverse(windows, window_size: Tuple[int, int], H: int, W: int):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (Tuple[int, int]): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
C = windows.shape[-1]
x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C)
return x
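`window_partition` and `window_reverse` are exact inverses when H and W are multiples of the window size. A NumPy sketch (hypothetical `_np` helper names, mirroring the reshapes/permutes above) to make the round trip concrete:

```python
import numpy as np

def window_partition_np(x, window_size):
    # (B, H, W, C) -> (B * num_windows, wh, ww, C)
    B, H, W, C = x.shape
    wh, ww = window_size
    x = x.reshape(B, H // wh, wh, W // ww, ww, C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, wh, ww, C)

def window_reverse_np(windows, window_size, H, W):
    # (B * num_windows, wh, ww, C) -> (B, H, W, C)
    wh, ww = window_size
    C = windows.shape[-1]
    x = windows.reshape(-1, H // wh, W // ww, wh, ww, C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, H, W, C)

x = np.arange(2 * 8 * 8 * 3, dtype=np.float32).reshape(2, 8, 8, 3)
windows = window_partition_np(x, (4, 4))
assert windows.shape == (8, 4, 4, 3)  # 4 windows per image, batch of 2
assert np.array_equal(window_reverse_np(windows, (4, 4), 8, 8), x)
```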
def get_relative_position_index(win_h: int, win_w: int):
# get pair-wise relative position index for each token inside the window
coords = torch.stack(ndgrid(torch.arange(win_h), torch.arange(win_w))) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += win_h - 1 # shift to start from 0
relative_coords[:, :, 1] += win_w - 1
relative_coords[:, :, 0] *= 2 * win_w - 1
return relative_coords.sum(-1) # Wh*Ww, Wh*Ww
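The returned index maps each token pair to a row of the bias table; a window of size Wh x Ww has (2*Wh-1) * (2*Ww-1) possible relative offsets. A NumPy sketch of the same arithmetic for a 2x2 window (hypothetical `_np` helper name):

```python
import numpy as np

def relative_position_index_np(win_h, win_w):
    # Same arithmetic as get_relative_position_index, in NumPy.
    ys, xs = np.meshgrid(np.arange(win_h), np.arange(win_w), indexing='ij')
    coords = np.stack([ys, xs]).reshape(2, -1)      # 2, Wh*Ww
    rel = coords[:, :, None] - coords[:, None, :]   # 2, N, N
    rel = rel.transpose(1, 2, 0)                    # N, N, 2
    rel[:, :, 0] += win_h - 1                       # shift to start from 0
    rel[:, :, 1] += win_w - 1
    rel[:, :, 0] *= 2 * win_w - 1
    return rel.sum(-1)                              # N, N

idx = relative_position_index_np(2, 2)
assert idx.shape == (4, 4)
assert (np.diag(idx) == 4).all()   # zero offset maps to the table's center row
assert idx.min() == 0 and idx.max() == (2 * 2 - 1) ** 2 - 1
```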
class WindowAttention(nn.Module):
""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports shifted and non-shifted windows.
"""
fused_attn: torch.jit.Final[bool]
def __init__(
self,
dim: int,
num_heads: int,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
qkv_bias: bool = True,
attn_drop: float = 0.,
proj_drop: float = 0.,
):
"""
Args:
dim: Number of input channels.
num_heads: Number of attention heads.
head_dim: Number of channels per head (dim // num_heads if not set)
window_size: The height and width of the window.
qkv_bias: If True, add a learnable bias to query, key, value.
attn_drop: Dropout ratio of attention weight.
proj_drop: Dropout ratio of output.
"""
super().__init__()
self.dim = dim
self.window_size = to_2tuple(window_size) # Wh, Ww
win_h, win_w = self.window_size
self.window_area = win_h * win_w
self.num_heads = num_heads
head_dim = head_dim or dim // num_heads
attn_dim = head_dim * num_heads
self.scale = head_dim ** -0.5
self.fused_attn = use_fused_attn(experimental=True) # NOTE: experimental, not yet ready for prime time
# define a parameter table of relative position bias, shape: 2*Wh-1 * 2*Ww-1, nH
self.relative_position_bias_table = nn.Parameter(torch.zeros((2 * win_h - 1) * (2 * win_w - 1), num_heads))
# get pair-wise relative position index for each token inside the window
self.register_buffer("relative_position_index", get_relative_position_index(win_h, win_w), persistent=False)
self.qkv = nn.Linear(dim, attn_dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(attn_dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def set_window_size(self, window_size: Tuple[int, int]) -> None:
"""Update window size & interpolate position embeddings
Args:
window_size (Tuple[int, int]): New window size
"""
window_size = to_2tuple(window_size)
if window_size == self.window_size:
return
self.window_size = window_size
win_h, win_w = self.window_size
self.window_area = win_h * win_w
with torch.no_grad():
new_bias_shape = (2 * win_h - 1) * (2 * win_w - 1), self.num_heads
self.relative_position_bias_table = nn.Parameter(
resize_rel_pos_bias_table(
self.relative_position_bias_table,
new_window_size=self.window_size,
new_bias_shape=new_bias_shape,
))
self.register_buffer("relative_position_index", get_relative_position_index(win_h, win_w), persistent=False)
def _get_rel_pos_bias(self) -> torch.Tensor:
relative_position_bias = self.relative_position_bias_table[
self.relative_position_index.view(-1)].view(self.window_area, self.window_area, -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
return relative_position_bias.unsqueeze(0)
def forward(self, x, mask: Optional[torch.Tensor] = None):
"""
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
if self.fused_attn:
attn_mask = self._get_rel_pos_bias()
if mask is not None:
num_win = mask.shape[0]
mask = mask.view(1, num_win, 1, N, N).expand(B_ // num_win, -1, self.num_heads, -1, -1)
attn_mask = attn_mask + mask.reshape(-1, self.num_heads, N, N)
x = torch.nn.functional.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_mask,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = q @ k.transpose(-2, -1)
attn = attn + self._get_rel_pos_bias()
if mask is not None:
num_win = mask.shape[0]
attn = attn.view(-1, num_win, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B_, N, -1)
x = self.proj(x)
x = self.proj_drop(x)
return x
class SwinTransformerBlock(nn.Module):
""" Swin Transformer Block.
"""
def __init__(
self,
dim: int,
input_resolution: _int_or_tuple_2_t,
num_heads: int = 4,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
shift_size: int = 0,
always_partition: bool = False,
dynamic_mask: bool = False,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: float = 0.,
act_layer: Callable = nn.GELU,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
input_resolution: Input resolution.
window_size: Window size.
num_heads: Number of attention heads.
head_dim: Enforce the number of channels per head
shift_size: Shift size for SW-MSA.
always_partition: Always partition into full windows and shift
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
proj_drop: Dropout rate.
attn_drop: Attention dropout rate.
drop_path: Stochastic depth rate.
act_layer: Activation layer.
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.target_shift_size = to_2tuple(shift_size) # store for later resize
self.always_partition = always_partition
self.dynamic_mask = dynamic_mask
self.window_size, self.shift_size = self._calc_window_shift(window_size, shift_size)
self.window_area = self.window_size[0] * self.window_size[1]
self.mlp_ratio = mlp_ratio
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim,
num_heads=num_heads,
head_dim=head_dim,
window_size=self.window_size,
qkv_bias=qkv_bias,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * mlp_ratio),
act_layer=act_layer,
drop=proj_drop,
)
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.register_buffer(
"attn_mask",
None if self.dynamic_mask else self.get_attn_mask(),
persistent=False,
)
def get_attn_mask(self, x: Optional[torch.Tensor] = None) -> Optional[torch.Tensor]:
if any(self.shift_size):
# calculate attention mask for SW-MSA
if x is not None:
H, W = x.shape[1], x.shape[2]
device = x.device
dtype = x.dtype
else:
H, W = self.input_resolution
device = None
dtype = None
H = math.ceil(H / self.window_size[0]) * self.window_size[0]
W = math.ceil(W / self.window_size[1]) * self.window_size[1]
img_mask = torch.zeros((1, H, W, 1), dtype=dtype, device=device) # 1 H W 1
cnt = 0
for h in (
(0, -self.window_size[0]),
(-self.window_size[0], -self.shift_size[0]),
(-self.shift_size[0], None),
):
for w in (
(0, -self.window_size[1]),
(-self.window_size[1], -self.shift_size[1]),
(-self.shift_size[1], None),
):
img_mask[:, h[0]:h[1], w[0]:w[1], :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_area)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
else:
attn_mask = None
return attn_mask
def _calc_window_shift(
self,
target_window_size: Union[int, Tuple[int, int]],
target_shift_size: Optional[Union[int, Tuple[int, int]]] = None,
) -> Tuple[Tuple[int, int], Tuple[int, int]]:
target_window_size = to_2tuple(target_window_size)
if target_shift_size is None:
# if passed value is None, recalculate from default window_size // 2 if it was previously non-zero
target_shift_size = self.target_shift_size
if any(target_shift_size):
target_shift_size = (target_window_size[0] // 2, target_window_size[1] // 2)
else:
target_shift_size = to_2tuple(target_shift_size)
if self.always_partition:
return target_window_size, target_shift_size
window_size = [r if r <= w else w for r, w in zip(self.input_resolution, target_window_size)]
shift_size = [0 if r <= w else s for r, w, s in zip(self.input_resolution, window_size, target_shift_size)]
return tuple(window_size), tuple(shift_size)
def set_input_size(
self,
feat_size: Tuple[int, int],
window_size: Tuple[int, int],
always_partition: Optional[bool] = None,
):
"""
Args:
feat_size: New input resolution
window_size: New window size
always_partition: Change always_partition attribute if not None
"""
self.input_resolution = feat_size
if always_partition is not None:
self.always_partition = always_partition
self.window_size, self.shift_size = self._calc_window_shift(window_size)
self.window_area = self.window_size[0] * self.window_size[1]
self.attn.set_window_size(self.window_size)
self.register_buffer(
"attn_mask",
None if self.dynamic_mask else self.get_attn_mask(),
persistent=False,
)
def _attn(self, x):
B, H, W, C = x.shape
# cyclic shift
has_shift = any(self.shift_size)
if has_shift:
shifted_x = torch.roll(x, shifts=(-self.shift_size[0], -self.shift_size[1]), dims=(1, 2))
else:
shifted_x = x
# pad for resolution not divisible by window size
pad_h = (self.window_size[0] - H % self.window_size[0]) % self.window_size[0]
pad_w = (self.window_size[1] - W % self.window_size[1]) % self.window_size[1]
shifted_x = torch.nn.functional.pad(shifted_x, (0, 0, 0, pad_w, 0, pad_h))
_, Hp, Wp, _ = shifted_x.shape
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_area, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
if getattr(self, 'dynamic_mask', False):
attn_mask = self.get_attn_mask(shifted_x)
else:
attn_mask = self.attn_mask
attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size[0], self.window_size[1], C)
shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
shifted_x = shifted_x[:, :H, :W, :].contiguous()
# reverse cyclic shift
if has_shift:
x = torch.roll(shifted_x, shifts=self.shift_size, dims=(1, 2))
else:
x = shifted_x
return x
def forward(self, x):
B, H, W, C = x.shape
x = x + self.drop_path1(self._attn(self.norm1(x)))
x = x.reshape(B, -1, C)
x = x + self.drop_path2(self.mlp(self.norm2(x)))
x = x.reshape(B, H, W, C)
return x
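The mask built in `get_attn_mask` and consumed in `_attn` relies on a 3x3 region labeling: after the cyclic shift, tokens that originated in different image regions land in the same window and must not attend to each other. A NumPy sketch of just the region labeling (assumed toy sizes H=W=8, window 4, shift 2):

```python
import numpy as np

# Region labeling used by get_attn_mask: the image is cut into a 3x3 grid of
# regions by the window/shift boundaries; attention between tokens with
# different labels is masked out with -100 after window partitioning.
H = W = 8
win, shift = 4, 2
img_mask = np.zeros((H, W), dtype=np.int64)
cnt = 0
for h in ((0, -win), (-win, -shift), (-shift, None)):
    for w in ((0, -win), (-win, -shift), (-shift, None)):
        img_mask[h[0]:h[1], w[0]:w[1]] = cnt
        cnt += 1
assert len(np.unique(img_mask)) == 9   # 3x3 distinct regions
assert img_mask[0, 0] == 0 and img_mask[-1, -1] == 8
```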
class PatchMerging(nn.Module):
""" Patch Merging Layer.
"""
def __init__(
self,
dim: int,
out_dim: Optional[int] = None,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
out_dim: Number of output channels (or 2 * dim if None)
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.out_dim = out_dim or 2 * dim
self.norm = norm_layer(4 * dim)
self.reduction = nn.Linear(4 * dim, self.out_dim, bias=False)
def forward(self, x):
B, H, W, C = x.shape
# F.pad pairs run from the last dim backward: (C, C), (W, W), (H, H)
pad_values = (0, 0, 0, W % 2, 0, H % 2)
x = nn.functional.pad(x, pad_values)
_, H, W, _ = x.shape
x = x.reshape(B, H // 2, 2, W // 2, 2, C).permute(0, 1, 3, 4, 2, 5).flatten(3)
x = self.norm(x)
x = self.reduction(x)
return x
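The reshape in `PatchMerging.forward` concatenates each 2x2 block of neighboring patches channel-wise: (B, H, W, C) becomes (B, H/2, W/2, 4C), which the `nn.Linear` then projects to `out_dim` (2C by default). A NumPy sketch of the same shape arithmetic:

```python
import numpy as np

# 2x2 patch merging: (B, H, W, C) -> (B, H/2, W/2, 4C) before projection.
B, H, W, C = 1, 4, 4, 3
x = np.arange(B * H * W * C, dtype=np.float32).reshape(B, H, W, C)
merged = (x.reshape(B, H // 2, 2, W // 2, 2, C)
           .transpose(0, 1, 3, 4, 2, 5)
           .reshape(B, H // 2, W // 2, 4 * C))
assert merged.shape == (1, 2, 2, 12)
# each output position holds the four pixels of one 2x2 input block
block = np.sort(merged[0, 0, 0].reshape(4, C)[:, 0])
assert np.array_equal(block, np.sort(x[0, :2, :2, 0].ravel()))
```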
class SwinTransformerStage(nn.Module):
""" A basic Swin Transformer layer for one stage.
"""
def __init__(
self,
dim: int,
out_dim: int,
input_resolution: Tuple[int, int],
depth: int,
downsample: bool = True,
num_heads: int = 4,
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
always_partition: bool = False,
dynamic_mask: bool = False,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
proj_drop: float = 0.,
attn_drop: float = 0.,
drop_path: Union[List[float], float] = 0.,
norm_layer: Callable = nn.LayerNorm,
):
"""
Args:
dim: Number of input channels.
out_dim: Number of output channels.
input_resolution: Input resolution.
depth: Number of blocks.
downsample: Downsample layer at the end of the layer.
num_heads: Number of attention heads.
head_dim: Channels per head (dim // num_heads if not set)
window_size: Local window size.
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
proj_drop: Projection dropout rate.
attn_drop: Attention dropout rate.
drop_path: Stochastic depth rate.
norm_layer: Normalization layer.
"""
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.output_resolution = tuple(i // 2 for i in input_resolution) if downsample else input_resolution
self.depth = depth
self.grad_checkpointing = False
window_size = to_2tuple(window_size)
shift_size = tuple([w // 2 for w in window_size])
# patch merging layer
if downsample:
self.downsample = PatchMerging(
dim=dim,
out_dim=out_dim,
norm_layer=norm_layer,
)
else:
assert dim == out_dim
self.downsample = nn.Identity()
# build blocks
self.blocks = nn.Sequential(*[
SwinTransformerBlock(
dim=out_dim,
input_resolution=self.output_resolution,
num_heads=num_heads,
head_dim=head_dim,
window_size=window_size,
shift_size=0 if (i % 2 == 0) else shift_size,
always_partition=always_partition,
dynamic_mask=dynamic_mask,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
proj_drop=proj_drop,
attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer,
)
for i in range(depth)])
def set_input_size(
self,
feat_size: Tuple[int, int],
window_size: int,
always_partition: Optional[bool] = None,
):
""" Updates the resolution, window size and so the pair-wise relative positions.
Args:
feat_size: New input (feature) resolution
window_size: New window size
always_partition: Always partition / shift the window
"""
self.input_resolution = feat_size
if isinstance(self.downsample, nn.Identity):
self.output_resolution = feat_size
else:
self.output_resolution = tuple(i // 2 for i in feat_size)
for block in self.blocks:
block.set_input_size(
feat_size=self.output_resolution,
window_size=window_size,
always_partition=always_partition,
)
def forward(self, x):
x = self.downsample(x)
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint_seq(self.blocks, x)
else:
x = self.blocks(x)
return x
class SwinTransformer(nn.Module):
""" Swin Transformer
A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
https://arxiv.org/pdf/2103.14030
"""
def __init__(
self,
img_size: _int_or_tuple_2_t = 224,
patch_size: int = 4,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
embed_dim: int = 96,
depths: Tuple[int, ...] = (2, 2, 6, 2),
num_heads: Tuple[int, ...] = (3, 6, 12, 24),
head_dim: Optional[int] = None,
window_size: _int_or_tuple_2_t = 7,
always_partition: bool = False,
strict_img_size: bool = True,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
drop_rate: float = 0.,
proj_drop_rate: float = 0.,
attn_drop_rate: float = 0.,
drop_path_rate: float = 0.1,
embed_layer: Callable = PatchEmbed,
norm_layer: Union[str, Callable] = nn.LayerNorm,
weight_init: str = '',
**kwargs,
):
"""
Args:
img_size: Input image size.
patch_size: Patch size.
in_chans: Number of input image channels.
num_classes: Number of classes for classification head.
embed_dim: Patch embedding dimension.
depths: Depth of each Swin Transformer layer.
num_heads: Number of attention heads in different layers.
head_dim: Dimension of self-attention heads.
window_size: Window size.
mlp_ratio: Ratio of mlp hidden dim to embedding dim.
qkv_bias: If True, add a learnable bias to query, key, value.
drop_rate: Head dropout rate.
proj_drop_rate: Projection dropout rate.
attn_drop_rate: Attention dropout rate.
drop_path_rate: Stochastic depth rate.
embed_layer: Patch embedding layer.
norm_layer: Normalization layer.
"""
super().__init__()
assert global_pool in ('', 'avg')
self.num_classes = num_classes
self.global_pool = global_pool
self.output_fmt = 'NHWC'
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.num_features = self.head_hidden_size = int(embed_dim * 2 ** (self.num_layers - 1))
self.feature_info = []
if not isinstance(embed_dim, (tuple, list)):
embed_dim = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
# split image into non-overlapping patches
self.patch_embed = embed_layer(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim[0],
norm_layer=norm_layer,
strict_img_size=strict_img_size,
output_fmt='NHWC',
)
patch_grid = self.patch_embed.grid_size
# build layers
head_dim = to_ntuple(self.num_layers)(head_dim)
if not isinstance(window_size, (list, tuple)):
window_size = to_ntuple(self.num_layers)(window_size)
elif len(window_size) == 2:
window_size = (window_size,) * self.num_layers
assert len(window_size) == self.num_layers
mlp_ratio = to_ntuple(self.num_layers)(mlp_ratio)
dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)]
layers = []
in_dim = embed_dim[0]
scale = 1
for i in range(self.num_layers):
out_dim = embed_dim[i]
layers += [SwinTransformerStage(
dim=in_dim,
out_dim=out_dim,
input_resolution=(
patch_grid[0] // scale,
patch_grid[1] // scale
),
depth=depths[i],
downsample=i > 0,
num_heads=num_heads[i],
head_dim=head_dim[i],
window_size=window_size[i],
always_partition=always_partition,
dynamic_mask=not strict_img_size,
mlp_ratio=mlp_ratio[i],
qkv_bias=qkv_bias,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
)]
in_dim = out_dim
if i > 0:
scale *= 2
self.feature_info += [dict(num_chs=out_dim, reduction=patch_size * scale, module=f'layers.{i}')]
self.layers = nn.Sequential(*layers)
self.norm = norm_layer(self.num_features)
self.head = ClassifierHead(
self.num_features,
num_classes,
pool_type=global_pool,
drop_rate=drop_rate,
input_fmt=self.output_fmt,
)
if weight_init != 'skip':
self.init_weights(weight_init)
@torch.jit.ignore
def init_weights(self, mode=''):
assert mode in ('jax', 'jax_nlhb', 'moco', '')
head_bias = -math.log(self.num_classes) if 'nlhb' in mode else 0.
named_apply(get_init_weights_vit(mode, head_bias=head_bias), self)
@torch.jit.ignore
def no_weight_decay(self):
nwd = set()
for n, _ in self.named_parameters():
if 'relative_position_bias_table' in n:
nwd.add(n)
return nwd
def set_input_size(
self,
img_size: Optional[Tuple[int, int]] = None,
patch_size: Optional[Tuple[int, int]] = None,
window_size: Optional[Tuple[int, int]] = None,
window_ratio: int = 8,
always_partition: Optional[bool] = None,
) -> None:
""" Updates the image resolution and window size.
Args:
img_size: New input resolution, if None current resolution is used
patch_size: New patch size, if None use current patch size
window_size: New window size, if None computed from the patch grid // window_ratio
window_ratio: divisor for calculating window size from grid size
always_partition: always partition into windows and shift (even if window size < feat size)
"""
if img_size is not None or patch_size is not None:
self.patch_embed.set_input_size(img_size=img_size, patch_size=patch_size)
patch_grid = self.patch_embed.grid_size
if window_size is None:
window_size = tuple([pg // window_ratio for pg in patch_grid])
for index, stage in enumerate(self.layers):
stage_scale = 2 ** max(index - 1, 0)
stage.set_input_size(
feat_size=(patch_grid[0] // stage_scale, patch_grid[1] // stage_scale),
window_size=window_size,
always_partition=always_partition,
)
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(
stem=r'^patch_embed', # stem and embed
blocks=r'^layers\.(\d+)' if coarse else [
(r'^layers\.(\d+).downsample', (0,)),
(r'^layers\.(\d+)\.\w+\.(\d+)', None),
(r'^norm', (99999,)),
]
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
for l in self.layers:
l.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
self.head.reset(num_classes, pool_type=global_pool)
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if int, all if None, select matching indices if sequence
norm: Apply norm layer to compatible intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
Returns:
List of intermediate features, or a tuple of (final features, intermediates).
"""
assert output_fmt in ('NCHW',), 'Output shape must be NCHW.'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.layers), indices)
# forward pass
x = self.patch_embed(x)
num_stages = len(self.layers)
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
stages = self.layers
else:
stages = self.layers[:max_index + 1]
for i, stage in enumerate(stages):
x = stage(x)
if i in take_indices:
if norm and i == num_stages - 1:
x_inter = self.norm(x) # apply final norm to the last intermediate
else:
x_inter = x
x_inter = x_inter.permute(0, 3, 1, 2).contiguous()
intermediates.append(x_inter)
if intermediates_only:
return intermediates
x = self.norm(x)
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
):
""" Prune layers not required for specified intermediates.
"""
take_indices, max_index = feature_take_indices(len(self.layers), indices)
self.layers = self.layers[:max_index + 1] # truncate blocks
if prune_norm:
self.norm = nn.Identity()
if prune_head:
self.reset_classifier(0, '')
return take_indices
def forward_features(self, x):
x = self.patch_embed(x)
x = self.layers(x)
x = self.norm(x)
return x
def forward_head(self, x, pre_logits: bool = False):
return self.head(x, pre_logits=True) if pre_logits else self.head(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def checkpoint_filter_fn(state_dict, model):
""" convert patch embedding weight from manual patchify + linear proj to conv"""
old_weights = True
if 'head.fc.weight' in state_dict:
old_weights = False
import re
out_dict = {}
state_dict = state_dict.get('model', state_dict)
state_dict = state_dict.get('state_dict', state_dict)
for k, v in state_dict.items():
if any([n in k for n in ('relative_position_index', 'attn_mask')]):
continue # skip buffers that should not be persistent
if 'patch_embed.proj.weight' in k:
_, _, H, W = model.patch_embed.proj.weight.shape
if v.shape[-2] != H or v.shape[-1] != W:
v = resample_patch_embed(
v,
(H, W),
interpolation='bicubic',
antialias=True,
verbose=True,
)
if k.endswith('relative_position_bias_table'):
m = model.get_submodule(k[:-29])
if v.shape != m.relative_position_bias_table.shape or m.window_size[0] != m.window_size[1]:
v = resize_rel_pos_bias_table(
v,
new_window_size=m.window_size,
new_bias_shape=m.relative_position_bias_table.shape,
)
if old_weights:
k = re.sub(r'layers.(\d+).downsample', lambda x: f'layers.{int(x.group(1)) + 1}.downsample', k)
k = k.replace('head.', 'head.fc.')
out_dict[k] = v
return out_dict
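The trickiest remap in `checkpoint_filter_fn` is the downsample shift for old-style weights: original Swin checkpoints place the downsample at the end of stage `i`, while timm places it at the start of stage `i + 1`. The regex can be exercised in isolation (pattern copied verbatim from above; the sample key is hypothetical):

```python
import re

# Shift 'layers.<i>.downsample' keys to 'layers.<i+1>.downsample',
# exactly as checkpoint_filter_fn does for old-format checkpoints.
def shift_downsample(key):
    return re.sub(
        r'layers.(\d+).downsample',
        lambda m: f'layers.{int(m.group(1)) + 1}.downsample',
        key,
    )

print(shift_downsample('layers.0.downsample.reduction.weight'))
# -> layers.1.downsample.reduction.weight
```

Keys without a `downsample` component pass through unchanged, so the substitution is safe to apply to every entry in the state dict.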
def _create_swin_transformer(variant, pretrained=False, **kwargs):
default_out_indices = tuple(i for i, _ in enumerate(kwargs.get('depths', (1, 1, 3, 1))))
out_indices = kwargs.pop('out_indices', default_out_indices)
model = build_model_with_cfg(
SwinTransformer, variant, pretrained,
pretrained_filter_fn=checkpoint_filter_fn,
feature_cfg=dict(flatten_sequential=True, out_indices=out_indices),
**kwargs)
return model
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'patch_embed.proj', 'classifier': 'head.fc',
'license': 'mit', **kwargs
}
default_cfgs = generate_default_cfgs({
'swin_small_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22kto1k_finetune.pth', ),
'swin_base_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22kto1k.pth',),
'swin_base_patch4_window12_384.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22kto1k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
'swin_large_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22kto1k.pth',),
'swin_large_patch4_window12_384.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22kto1k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
'swin_tiny_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth',),
'swin_small_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth',),
'swin_base_patch4_window7_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224.pth',),
'swin_base_patch4_window12_384.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0),
# tiny 22k pretrain is worse than 1k, so moved after (untagged priority is based on order)
'swin_tiny_patch4_window7_224.ms_in22k_ft_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22kto1k_finetune.pth',),
'swin_tiny_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_tiny_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_small_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.8/swin_small_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_base_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_base_patch4_window12_384.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, num_classes=21841),
'swin_large_patch4_window7_224.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window7_224_22k.pth',
num_classes=21841),
'swin_large_patch4_window12_384.ms_in22k': _cfg(
hf_hub_id='timm/',
url='https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth',
input_size=(3, 384, 384), pool_size=(12, 12), crop_pct=1.0, num_classes=21841),
'swin_s3_tiny_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_t-1d53f6a8.pth'),
'swin_s3_small_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_s-3bb4c69d.pth'),
'swin_s3_base_224.ms_in1k': _cfg(
hf_hub_id='timm/',
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/s3_b-a1e95db4.pth'),
})
@register_model
def swin_tiny_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-T @ 224x224, trained ImageNet-1k
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer(
'swin_tiny_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_small_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer(
'swin_small_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_base_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-B @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32))
return _create_swin_transformer(
'swin_base_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_base_patch4_window12_384(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-B @ 384x384
"""
model_args = dict(patch_size=4, window_size=12, embed_dim=128, depths=(2, 2, 18, 2), num_heads=(4, 8, 16, 32))
return _create_swin_transformer(
'swin_base_patch4_window12_384', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_large_patch4_window7_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-L @ 224x224
"""
model_args = dict(patch_size=4, window_size=7, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48))
return _create_swin_transformer(
'swin_large_patch4_window7_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_large_patch4_window12_384(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-L @ 384x384
"""
model_args = dict(patch_size=4, window_size=12, embed_dim=192, depths=(2, 2, 18, 2), num_heads=(6, 12, 24, 48))
return _create_swin_transformer(
'swin_large_patch4_window12_384', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_tiny_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-T @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(7, 7, 14, 7), embed_dim=96, depths=(2, 2, 6, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_tiny_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_small_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-S @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(14, 14, 14, 7), embed_dim=96, depths=(2, 2, 18, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_small_224', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def swin_s3_base_224(pretrained=False, **kwargs) -> SwinTransformer:
""" Swin-S3-B @ 224x224, https://arxiv.org/abs/2111.14725
"""
model_args = dict(
patch_size=4, window_size=(7, 7, 14, 7), embed_dim=96, depths=(2, 2, 30, 2), num_heads=(3, 6, 12, 24))
return _create_swin_transformer('swin_s3_base_224', pretrained=pretrained, **dict(model_args, **kwargs))
register_model_deprecations(__name__, {
'swin_base_patch4_window7_224_in22k': 'swin_base_patch4_window7_224.ms_in22k',
'swin_base_patch4_window12_384_in22k': 'swin_base_patch4_window12_384.ms_in22k',
'swin_large_patch4_window7_224_in22k': 'swin_large_patch4_window7_224.ms_in22k',
'swin_large_patch4_window12_384_in22k': 'swin_large_patch4_window12_384.ms_in22k',
})
pytorch-image-models/timm/models/swin_transformer.py
"""
Ported to pytorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)
@author: tstandley
Adapted by cadene
Creates an Xception Model as defined in:
Francois Chollet
Xception: Deep Learning with Depthwise Separable Convolutions
https://arxiv.org/pdf/1610.02357.pdf
These weights were ported from the Keras implementation and achieve the following performance on the validation set:
Loss: 0.9173 Prec@1: 78.892 Prec@5: 94.292
REMEMBER to set your image size to 3x299x299 for both test and validation
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
"""
import torch.jit
import torch.nn as nn
import torch.nn.functional as F
from timm.layers import create_classifier
from ._builder import build_model_with_cfg
from ._registry import register_model, generate_default_cfgs, register_model_deprecations
__all__ = ['Xception']
class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding=0, dilation=1):
super(SeparableConv2d, self).__init__()
self.conv1 = nn.Conv2d(
in_channels, in_channels, kernel_size, stride, padding, dilation, groups=in_channels, bias=False)
self.pointwise = nn.Conv2d(in_channels, out_channels, 1, 1, 0, 1, 1, bias=False)
def forward(self, x):
x = self.conv1(x)
x = self.pointwise(x)
return x
class Block(nn.Module):
def __init__(self, in_channels, out_channels, reps, strides=1, start_with_relu=True, grow_first=True):
super(Block, self).__init__()
if out_channels != in_channels or strides != 1:
self.skip = nn.Conv2d(in_channels, out_channels, 1, stride=strides, bias=False)
self.skipbn = nn.BatchNorm2d(out_channels)
else:
self.skip = None
rep = []
for i in range(reps):
if grow_first:
inc = in_channels if i == 0 else out_channels
outc = out_channels
else:
inc = in_channels
outc = in_channels if i < (reps - 1) else out_channels
rep.append(nn.ReLU(inplace=True))
rep.append(SeparableConv2d(inc, outc, 3, stride=1, padding=1))
rep.append(nn.BatchNorm2d(outc))
if not start_with_relu:
rep = rep[1:]
else:
rep[0] = nn.ReLU(inplace=False)
if strides != 1:
rep.append(nn.MaxPool2d(3, strides, 1))
self.rep = nn.Sequential(*rep)
def forward(self, inp):
x = self.rep(inp)
if self.skip is not None:
skip = self.skip(inp)
skip = self.skipbn(skip)
else:
skip = inp
x += skip
return x
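The `inc`/`outc` bookkeeping in `Block.__init__` is easy to misread. Extracted as a pure function (the name `channel_plan` is mine, purely illustrative), it reproduces the per-repetition channel pairs: `grow_first` widens on the first separable conv, otherwise widening is deferred to the last one.

```python
# Reproduce the (in, out) channel pair chosen for each repetition
# inside Block.__init__ above.
def channel_plan(in_c, out_c, reps, grow_first=True):
    plan = []
    for i in range(reps):
        if grow_first:
            inc = in_c if i == 0 else out_c
            outc = out_c
        else:
            inc = in_c
            outc = in_c if i < reps - 1 else out_c
        plan.append((inc, outc))
    return plan

print(channel_plan(64, 128, 2))                     # block1-style: widen first
print(channel_plan(728, 1024, 2, grow_first=False)) # block12-style: widen last
```

For example, `Block(728, 1024, 2, 2, grow_first=False)` (block12 below) keeps 728 channels through the first conv and only expands to 1024 on the final one.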
class Xception(nn.Module):
"""
Xception optimized for the ImageNet dataset, as specified in
https://arxiv.org/pdf/1610.02357.pdf
"""
def __init__(self, num_classes=1000, in_chans=3, drop_rate=0., global_pool='avg'):
""" Constructor
Args:
num_classes: number of classes
"""
super(Xception, self).__init__()
self.drop_rate = drop_rate
self.global_pool = global_pool
self.num_classes = num_classes
self.num_features = self.head_hidden_size = 2048
self.conv1 = nn.Conv2d(in_chans, 32, 3, 2, 0, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.act1 = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 3, bias=False)
self.bn2 = nn.BatchNorm2d(64)
self.act2 = nn.ReLU(inplace=True)
self.block1 = Block(64, 128, 2, 2, start_with_relu=False)
self.block2 = Block(128, 256, 2, 2)
self.block3 = Block(256, 728, 2, 2)
self.block4 = Block(728, 728, 3, 1)
self.block5 = Block(728, 728, 3, 1)
self.block6 = Block(728, 728, 3, 1)
self.block7 = Block(728, 728, 3, 1)
self.block8 = Block(728, 728, 3, 1)
self.block9 = Block(728, 728, 3, 1)
self.block10 = Block(728, 728, 3, 1)
self.block11 = Block(728, 728, 3, 1)
self.block12 = Block(728, 1024, 2, 2, grow_first=False)
self.conv3 = SeparableConv2d(1024, 1536, 3, 1, 1)
self.bn3 = nn.BatchNorm2d(1536)
self.act3 = nn.ReLU(inplace=True)
self.conv4 = SeparableConv2d(1536, self.num_features, 3, 1, 1)
self.bn4 = nn.BatchNorm2d(self.num_features)
self.act4 = nn.ReLU(inplace=True)
self.feature_info = [
dict(num_chs=64, reduction=2, module='act2'),
dict(num_chs=128, reduction=4, module='block2.rep.0'),
dict(num_chs=256, reduction=8, module='block3.rep.0'),
dict(num_chs=728, reduction=16, module='block12.rep.0'),
dict(num_chs=2048, reduction=32, module='act4'),
]
self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool)
# ------- init weights -------
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(
stem=r'^conv[12]|bn[12]',
blocks=[
(r'^block(\d+)', None),
(r'^conv[34]|bn[34]', (99,)),
],
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, "gradient checkpointing not supported"
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.fc
def reset_classifier(self, num_classes: int, global_pool: str = 'avg'):
self.num_classes = num_classes
self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.act1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.act2(x)
x = self.block1(x)
x = self.block2(x)
x = self.block3(x)
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.act3(x)
x = self.conv4(x)
x = self.bn4(x)
x = self.act4(x)
return x
def forward_head(self, x, pre_logits: bool = False):
x = self.global_pool(x)
if self.drop_rate:
x = F.dropout(x, self.drop_rate, training=self.training)
return x if pre_logits else self.fc(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _xception(variant, pretrained=False, **kwargs):
return build_model_with_cfg(
Xception, variant, pretrained,
feature_cfg=dict(feature_cls='hook'),
**kwargs)
default_cfgs = generate_default_cfgs({
'legacy_xception.tf_in1k': {
'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/xception-43020ad28.pth',
'input_size': (3, 299, 299),
'pool_size': (10, 10),
'crop_pct': 0.8975,
'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5),
'std': (0.5, 0.5, 0.5),
'num_classes': 1000,
'first_conv': 'conv1',
'classifier': 'fc'
# The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
}
})
@register_model
def legacy_xception(pretrained=False, **kwargs) -> Xception:
return _xception('legacy_xception', pretrained=pretrained, **kwargs)
register_model_deprecations(__name__, {
'xception': 'legacy_xception',
})
pytorch-image-models/timm/models/xception.py
""" NAdamW Optimizer
Based on simplified algorithm in https://github.com/mlcommons/algorithmic-efficiency/tree/main/baselines/nadamw
Added multi-tensor (foreach) path.
"""
import math
from typing import List, Optional
import torch
from torch import Tensor
# Modified from github.com/pytorch/pytorch/blob/v1.12.1/torch/optim/adamw.py.
class NAdamW(torch.optim.Optimizer):
r"""Implements NAdamW algorithm.
See Table 1 in https://arxiv.org/abs/1910.05446 for the implementation of
the NAdam algorithm (there is also a comment in the code which highlights
the only difference of NAdamW and AdamW).
For further details regarding the algorithm we refer to
`Decoupled Weight Decay Regularization`_.
Args:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay coefficient (default: 1e-2)
.. _Decoupled Weight Decay Regularization:
https://arxiv.org/abs/1711.05101
.. _On the Convergence of Adam and Beyond:
https://openreview.net/forum?id=ryQu7f-RZ
"""
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=1e-2,
maximize: bool = False,
foreach: Optional[bool] = None,
capturable: bool = False,
):
if not 0.0 <= lr:
raise ValueError(f'Invalid learning rate: {lr}')
if not 0.0 <= eps:
raise ValueError(f'Invalid epsilon value: {eps}')
if not 0.0 <= betas[0] < 1.0:
raise ValueError(f'Invalid beta parameter at index 0: {betas[0]}')
if not 0.0 <= betas[1] < 1.0:
raise ValueError(f'Invalid beta parameter at index 1: {betas[1]}')
if not 0.0 <= weight_decay:
raise ValueError(f'Invalid weight_decay value: {weight_decay}')
defaults = dict(
lr=lr,
betas=betas,
eps=eps,
weight_decay=weight_decay,
foreach=foreach,
maximize=maximize,
capturable=capturable,
)
super().__init__(params, defaults)
def __setstate__(self, state):
super().__setstate__(state)
state_values = list(self.state.values())
step_is_tensor = (len(state_values) != 0) and torch.is_tensor(
state_values[0]['step'])
if not step_is_tensor:
for s in state_values:
s['step'] = torch.tensor(float(s['step']))
@torch.no_grad()
def step(self, closure=None):
"""Performs a single optimization step.
Args:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
self._cuda_graph_capture_health_check()
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
for group in self.param_groups:
params_with_grad = []
grads = []
exp_avgs = []
exp_avg_sqs = []
state_steps = []
beta1, beta2 = group['betas']
for p in group['params']:
if p.grad is None:
continue
params_with_grad.append(p)
if p.grad.is_sparse:
raise RuntimeError('NAdamW does not support sparse gradients')
grads.append(p.grad)
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = torch.tensor(0.)
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p, memory_format=torch.preserve_format)
exp_avgs.append(state['exp_avg'])
exp_avg_sqs.append(state['exp_avg_sq'])
state_steps.append(state['step'])
nadamw(
params_with_grad,
grads,
exp_avgs,
exp_avg_sqs,
state_steps,
beta1=beta1,
beta2=beta2,
lr=group['lr'],
weight_decay=group['weight_decay'],
eps=group['eps'],
maximize=group['maximize'],
capturable=group['capturable'],
)
return loss
def nadamw(
params: List[Tensor],
grads: List[Tensor],
exp_avgs: List[Tensor],
exp_avg_sqs: List[Tensor],
state_steps: List[Tensor],
foreach: Optional[bool] = None,
capturable: bool = False,
*,
beta1: float,
beta2: float,
lr: float,
weight_decay: float,
eps: float,
maximize: bool,
) -> None:
r"""Functional API that performs NAdamW algorithm computation.
See NAdamW class for details.
"""
if not all(isinstance(t, torch.Tensor) for t in state_steps):
raise RuntimeError(
'API has changed, `state_steps` argument must contain a list of' +
' singleton tensors')
if foreach is None:
foreach = True
if foreach and not torch.jit.is_scripting():
func = _multi_tensor_nadamw
else:
func = _single_tensor_nadamw
func(
params,
grads,
exp_avgs,
exp_avg_sqs,
state_steps,
beta1=beta1,
beta2=beta2,
lr=lr,
weight_decay=weight_decay,
eps=eps,
maximize=maximize,
capturable=capturable,
)
def _single_tensor_nadamw(
params: List[Tensor],
grads: List[Tensor],
exp_avgs: List[Tensor],
exp_avg_sqs: List[Tensor],
state_steps: List[Tensor],
*,
beta1: float,
beta2: float,
lr: float,
weight_decay: float,
eps: float,
maximize: bool,
capturable: bool
):
for i, param in enumerate(params):
grad = grads[i] if not maximize else -grads[i]
exp_avg = exp_avgs[i]
exp_avg_sq = exp_avg_sqs[i]
step_t = state_steps[i]
# Update step.
step_t += 1
# Perform stepweight decay.
param.mul_(1. - lr * weight_decay)
# Decay the first and second moment running average coefficient.
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
if capturable:
step = step_t
# 1 - beta1 ** step can't be captured in a CUDA graph, even if step is a CUDA tensor
# (incurs "RuntimeError: CUDA error: operation not permitted when stream is capturing")
bias_correction1 = 1 - torch.pow(beta1, step)
bias_correction2 = 1 - torch.pow(beta2, step)
step_size = lr / bias_correction1
step_size_neg = step_size.neg()
bias_correction2_sqrt = bias_correction2.sqrt()
# Only difference between NAdamW and AdamW in this implementation.
# The official PyTorch implementation of NAdam uses a different algorithm.
exp_avg = exp_avg.mul(beta1).add_(grad, alpha=1 - beta1)
denom = (exp_avg_sq.sqrt() / (bias_correction2_sqrt * step_size_neg)).add_(eps / step_size_neg)
param.addcdiv_(exp_avg, denom)
else:
step = step_t.item()
bias_correction1 = 1 - beta1 ** step
bias_correction2 = 1 - beta2 ** step
step_size = lr / bias_correction1
bias_correction2_sqrt = math.sqrt(bias_correction2)
# Only difference between NAdamW and AdamW in this implementation.
# The official PyTorch implementation of NAdam uses a different algorithm.
exp_avg = exp_avg.mul(beta1).add_(grad, alpha=1 - beta1)
denom = (exp_avg_sq.sqrt() / bias_correction2_sqrt).add_(eps)
param.addcdiv_(exp_avg, denom, value=-step_size)
def _multi_tensor_nadamw(
params: List[Tensor],
grads: List[Tensor],
exp_avgs: List[Tensor],
exp_avg_sqs: List[Tensor],
state_steps: List[Tensor],
*,
beta1: float,
beta2: float,
lr: float,
weight_decay: float,
eps: float,
maximize: bool,
capturable: bool,
):
if len(params) == 0:
return
if capturable:
assert all(
p.is_cuda and step.is_cuda for p, step in zip(params, state_steps)
), "If capturable=True, params and state_steps must be CUDA tensors."
if maximize:
grads = torch._foreach_neg(tuple(grads)) # type: ignore[assignment]
grads = [torch.view_as_real(x) if torch.is_complex(x) else x for x in grads]
exp_avgs = [torch.view_as_real(x) if torch.is_complex(x) else x for x in exp_avgs]
exp_avg_sqs = [torch.view_as_real(x) if torch.is_complex(x) else x for x in exp_avg_sqs]
params = [torch.view_as_real(x) if torch.is_complex(x) else x for x in params]
# update steps
torch._foreach_add_(state_steps, 1)
# Perform stepweight decay
torch._foreach_mul_(params, 1 - lr * weight_decay)
# Decay the first and second moment running average coefficient
torch._foreach_mul_(exp_avgs, beta1)
torch._foreach_add_(exp_avgs, grads, alpha=1 - beta1)
torch._foreach_mul_(exp_avg_sqs, beta2)
torch._foreach_addcmul_(exp_avg_sqs, grads, grads, 1 - beta2)
if capturable:
# TODO: use foreach_pow if/when foreach_pow is added
bias_correction1 = [torch.pow(beta1, step) for step in state_steps]
bias_correction2 = [torch.pow(beta2, step) for step in state_steps]
# foreach_sub doesn't allow a scalar as the first arg
torch._foreach_sub_(bias_correction1, 1)
torch._foreach_sub_(bias_correction2, 1)
torch._foreach_neg_(bias_correction1)
torch._foreach_neg_(bias_correction2)
# foreach_div doesn't allow a scalar as the first arg
step_size = torch._foreach_div(bias_correction1, lr)
torch._foreach_reciprocal_(step_size)
torch._foreach_neg_(step_size)
bias_correction2_sqrt = torch._foreach_sqrt(bias_correction2)
# Only difference between NAdamW and AdamW in this implementation.
# The official PyTorch implementation of NAdam uses a different algorithm.
exp_avgs = torch._foreach_mul(exp_avgs, beta1)
torch._foreach_add_(exp_avgs, grads, alpha=1 - beta1)
exp_avg_sq_sqrt = torch._foreach_sqrt(exp_avg_sqs)
torch._foreach_div_(
exp_avg_sq_sqrt, torch._foreach_mul(bias_correction2_sqrt, step_size)
)
eps_over_step_size = torch._foreach_div(step_size, eps)
torch._foreach_reciprocal_(eps_over_step_size)
denom = torch._foreach_add(exp_avg_sq_sqrt, eps_over_step_size)
torch._foreach_addcdiv_(params, exp_avgs, denom)
else:
bias_correction1 = [1 - beta1 ** step.item() for step in state_steps]
bias_correction2 = [1 - beta2 ** step.item() for step in state_steps]
step_size = [(lr / bc) * -1 for bc in bias_correction1]
bias_correction2_sqrt = [math.sqrt(bc) for bc in bias_correction2]
# Only difference between NAdamW and AdamW in this implementation.
# The official PyTorch implementation of NAdam uses a different algorithm.
exp_avgs = torch._foreach_mul(exp_avgs, beta1)
torch._foreach_add_(exp_avgs, grads, alpha=1 - beta1)
exp_avg_sq_sqrt = torch._foreach_sqrt(exp_avg_sqs)
torch._foreach_div_(exp_avg_sq_sqrt, bias_correction2_sqrt)
denom = torch._foreach_add(exp_avg_sq_sqrt, eps)
torch._foreach_addcdiv_(params, exp_avgs, denom, step_size)
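The non-capturable branch of `_single_tensor_nadamw` can be mirrored on plain Python scalars. The sketch below (function and variable names `nadamw_step`, `m`, `v` are mine; hyperparameter defaults copied from the class) highlights the one difference from AdamW: the gradient is folded into the first moment a *second* time before the parameter step (the Nesterov lookahead), while the stored moment keeps only the first update.

```python
import math

# One NAdamW update on a single scalar parameter, mirroring the
# non-capturable path of _single_tensor_nadamw above. Illustrative
# sketch only, not the optimizer itself.
def nadamw_step(p, g, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, wd=1e-2):
    step += 1
    p *= 1. - lr * wd                    # decoupled (stepweight) decay
    m = beta1 * m + (1 - beta1) * g      # first moment (this is what gets stored)
    v = beta2 * v + (1 - beta2) * g * g  # second moment
    bc1 = 1 - beta1 ** step
    bc2 = 1 - beta2 ** step
    m_hat = beta1 * m + (1 - beta1) * g  # Nesterov: re-apply the gradient
    denom = math.sqrt(v) / math.sqrt(bc2) + eps
    return p - (lr / bc1) * m_hat / denom, m, v, step

p, m, v, step = nadamw_step(1.0, 1.0, 0.0, 0.0, 0)
print(round(p, 5))  # -> 0.99809
```

Dropping the `m_hat` line and stepping with `m` directly would recover plain AdamW, which is exactly the "only difference" the comments in the code point at.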
pytorch-image-models/timm/optim/nadamw.py
from .agc import adaptive_clip_grad
from .attention_extract import AttentionExtract
from .checkpoint_saver import CheckpointSaver
from .clip_grad import dispatch_clip_grad
from .cuda import ApexScaler, NativeScaler
from .decay_batch import decay_batch_step, check_batch_size_retry
from .distributed import distribute_bn, reduce_tensor, init_distributed_device,\
world_info_from_env, is_distributed_env, is_primary
from .jit import set_jit_legacy, set_jit_fuser
from .log import setup_default_logging, FormatterNoInfo
from .metrics import AverageMeter, accuracy
from .misc import natural_key, add_bool_arg, ParseKwargs
from .model import unwrap_model, get_state_dict, freeze, unfreeze, reparameterize_model
from .model_ema import ModelEma, ModelEmaV2, ModelEmaV3
from .random import random_seed
from .summary import update_summary, get_outdir
pytorch-image-models/timm/utils/__init__.py
""" Summary utilities
Hacked together by / Copyright 2020 Ross Wightman
"""
import csv
import os
from collections import OrderedDict
try:
import wandb
except ImportError:
pass
def get_outdir(path, *paths, inc=False):
outdir = os.path.join(path, *paths)
if not os.path.exists(outdir):
os.makedirs(outdir)
elif inc:
count = 1
outdir_inc = outdir + '-' + str(count)
while os.path.exists(outdir_inc):
count = count + 1
outdir_inc = outdir + '-' + str(count)
assert count < 100
outdir = outdir_inc
os.makedirs(outdir)
return outdir
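The `inc=True` path of `get_outdir` probes suffixed names (`run-1`, `run-2`, ...) until a free one appears. A standalone sketch of that loop (the name `next_outdir` is mine; it always increments on collision rather than asserting a cap, so it is a simplification), safe to run in a throwaway temp dir:

```python
import os
import tempfile

# Probe '<name>', then '<name>-1', '<name>-2', ... until an unused
# path is found, then create it -- the same idea as get_outdir(inc=True).
def next_outdir(path, name):
    outdir = os.path.join(path, name)
    if not os.path.exists(outdir):
        os.makedirs(outdir)
        return outdir
    count = 1
    while os.path.exists(f'{outdir}-{count}'):
        count += 1
    outdir = f'{outdir}-{count}'
    os.makedirs(outdir)
    return outdir

root = tempfile.mkdtemp()
a = next_outdir(root, 'run')  # creates <root>/run
b = next_outdir(root, 'run')  # 'run' exists -> creates <root>/run-1
print(os.path.basename(b))    # -> run-1
```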
def update_summary(
epoch,
train_metrics,
eval_metrics,
filename,
lr=None,
write_header=False,
log_wandb=False,
):
rowd = OrderedDict(epoch=epoch)
rowd.update([('train_' + k, v) for k, v in train_metrics.items()])
if eval_metrics:
rowd.update([('eval_' + k, v) for k, v in eval_metrics.items()])
if lr is not None:
rowd['lr'] = lr
if log_wandb:
wandb.log(rowd)
with open(filename, mode='a') as cf:
dw = csv.DictWriter(cf, fieldnames=rowd.keys())
if write_header: # first iteration (epoch == 1 can't be used)
dw.writeheader()
dw.writerow(rowd)
pytorch-image-models/timm/utils/summary.py
{
mkPoetryApplication,
pkg-config,
protobuf,
openssl,
}:
mkPoetryApplication {
# name = "text-generation-server";
projectDir = ./server;
# nativeBuildInputs = [ pkg-config ];
# buildInputs = [ openssl.dev protobuf ];
}
text-generation-inference/_server.nix
[package]
name = "text-generation-backends-trtllm"
version.workspace = true
edition.workspace = true
authors.workspace = true
homepage.workspace = true
[dependencies]
async-trait = "0.1"
async-stream = "0.3"
clap = { version = "4.5", features = ["derive"] }
cxx = "1.0"
log = { version = "0.4", features = [] }
text-generation-router = { path = "../../router" }
tokenizers = { version = "0.19", features = ["hf-hub"] }
tokio = { version = "1.38", features = ["rt", "rt-multi-thread", "parking_lot", "signal", "sync"] }
tokio-stream = "0.1.15"
thiserror = "1.0.62"
tracing = "0.1"
tracing-opentelemetry = "0.24"
tracing-subscriber = { version = "0.3", features = ["json", "env-filter"] }
parking_lot = "0.12"
[build-dependencies]
cmake = "0.1"
cxx-build = { version = "1.0", features = ["parallel"] }
pkg-config = "0.3"
text-generation-inference/backends/trtllm/Cargo.toml
//
// Created by mfuntowicz on 6/30/24.
//
#pragma once
#include <cmath>
#include <exception>
#include <filesystem>
#include <limits>
#include <iterator>
#include <vector>
#include <spdlog/spdlog.h>
#include "backends/trtllm/include/ffi.h"
huggingface::tgi::backends::TensorRtLlmBackendImpl::TensorRtLlmBackendImpl(
const std::string_view &engineFolder,
const std::string_view &executorWorker
) : TensorRtLlmBackend(engineFolder, executorWorker) {}
bool huggingface::tgi::backends::TensorRtLlmBackendImpl::IsReady() const {
return TensorRtLlmBackend::IsReady();
}
uint64_t huggingface::tgi::backends::TensorRtLlmBackendImpl::Submit(
rust::Slice<const uint32_t> tokens, int32_t topK, float_t topP, float_t temperature, float_t repetition_penalty,
float_t frequency_penalty, uint64_t seed) {
// This will copy all the items from the initial slice
std::vector<int32_t> tokens_(std::make_move_iterator(tokens.begin()), std::make_move_iterator(tokens.end()));
return TensorRtLlmBackend::Submit(
std::move(tokens_), topK, topP, temperature, repetition_penalty, frequency_penalty, seed);
}
size_t huggingface::tgi::backends::TensorRtLlmBackendImpl::StreamTokens(
const uint64_t requestId,
huggingface::tgi::backends::GenerationContext *ctx,
rust::Fn<void(huggingface::tgi::backends::GenerationContext *,
huggingface::tgi::backends::GenerationStep)> callback) {
size_t numTokens = 0;
for (const auto &item: Poll(requestId)) {
GenerationStep step;
if (!item.hasError()) {
SPDLOG_DEBUG("\tStreamTokens -> Decoding token...");
const auto decoded = item.getResult();
const auto token = decoded.outputTokenIds[0][0];
const auto isFinal = decoded.isFinal;
const auto logProb = decoded.logProbs.value()[0][0];
++numTokens;
SPDLOG_DEBUG(FMT_STRING("\tStreamTokens -> {:d} {:.2f} (final = {})"), token, logProb, isFinal);
step = huggingface::tgi::backends::GenerationStep{
static_cast<uint32_t>(token), logProb, isFinal, false, std::move(std::string())
};
SPDLOG_DEBUG("\tStreamTokens -> Post callback");
} else {
// TODO : Return rest::Result with error
const auto what = item.getErrorMsg();
SPDLOG_WARN("\tStreamTokens -> Got error while decoding: {}", what);
step = huggingface::tgi::backends::GenerationStep{
std::numeric_limits<uint32_t>::max(), 0.0, true, true, std::move(what)
};
}
callback(std::move(ctx), std::move(step));
}
return numTokens;
}
std::unique_ptr<huggingface::tgi::backends::TensorRtLlmBackendImpl>
huggingface::tgi::backends::CreateTensorRtLlmBackend(rust::Str engineFolder, rust::Str executorWorker) {
// Unconditionally call this to initialize and discover TRTLLM plugins
InitializeBackend();
const auto enginePath = std::string_view(engineFolder.begin(), engineFolder.end());
const auto executorPath = std::string_view(executorWorker.begin(), executorWorker.end());
return std::make_unique<TensorRtLlmBackendImpl>(std::move(enginePath), std::move(executorPath));
}
text-generation-inference/backends/trtllm/src/ffi.cpp
[package]
name = "text-generation-benchmark"
description = "Text Generation Benchmarking tool"
version.workspace = true
edition.workspace = true
authors.workspace = true
homepage.workspace = true
[lib]
path = "src/lib.rs"
[[bin]]
name = "text-generation-benchmark"
path = "src/main.rs"
[dependencies]
average = "0.14"
clap = { version = "4.4.5", features = ["derive", "env"] }
crossterm = "0.27"
float-ord = "0.3.2"
serde = {version = "1.0.188", features = ["derive"]}
serde_json = "1.0"
tabled = "0.14.0"
text-generation-client = { path = "../backends/client" }
thiserror = "1.0.48"
tokenizers = { workspace = true }
tokio = { version = "1.32.0", features = ["rt", "rt-multi-thread", "parking_lot", "signal", "sync", "macros"] }
tui = {package = "ratatui", version = "0.23", default-features = false, features = ["crossterm"]}
tracing = "0.1.37"
tracing-subscriber = { version = "0.3.17", features = ["json", "env-filter"] }
hf-hub = { workspace = true }
text-generation-inference/benchmark/Cargo.toml
from text_generation.errors import (
parse_error,
GenerationError,
IncompleteGenerationError,
OverloadedError,
ValidationError,
BadRequestError,
ShardNotReadyError,
ShardTimeoutError,
NotFoundError,
RateLimitExceededError,
UnknownError,
)
def test_generation_error():
payload = {"error_type": "generation", "error": "test"}
assert isinstance(parse_error(400, payload), GenerationError)
def test_incomplete_generation_error():
payload = {"error_type": "incomplete_generation", "error": "test"}
assert isinstance(parse_error(400, payload), IncompleteGenerationError)
def test_overloaded_error():
payload = {"error_type": "overloaded", "error": "test"}
assert isinstance(parse_error(400, payload), OverloadedError)
def test_validation_error():
payload = {"error_type": "validation", "error": "test"}
assert isinstance(parse_error(400, payload), ValidationError)
def test_bad_request_error():
payload = {"error": "test"}
assert isinstance(parse_error(400, payload), BadRequestError)
def test_shard_not_ready_error():
payload = {"error": "test"}
assert isinstance(parse_error(403, payload), ShardNotReadyError)
assert isinstance(parse_error(424, payload), ShardNotReadyError)
def test_shard_timeout_error():
payload = {"error": "test"}
assert isinstance(parse_error(504, payload), ShardTimeoutError)
def test_not_found_error():
payload = {"error": "test"}
assert isinstance(parse_error(404, payload), NotFoundError)
def test_rate_limit_exceeded_error():
payload = {"error": "test"}
assert isinstance(parse_error(429, payload), RateLimitExceededError)
def test_unknown_error():
payload = {"error": "test"}
assert isinstance(parse_error(500, payload), UnknownError)
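The tests above fully pin down the mapping from HTTP status code and `error_type` payload field to exception class. A hypothetical sketch of a `parse_error` satisfying that contract (the real implementation lives in `text_generation.errors`; class bodies elided):

```python
class GenerationError(Exception): ...
class IncompleteGenerationError(Exception): ...
class OverloadedError(Exception): ...
class ValidationError(Exception): ...
class BadRequestError(Exception): ...
class ShardNotReadyError(Exception): ...
class ShardTimeoutError(Exception): ...
class NotFoundError(Exception): ...
class RateLimitExceededError(Exception): ...
class UnknownError(Exception): ...

# error_type in the payload takes precedence over the HTTP status code
_BY_TYPE = {
    "generation": GenerationError,
    "incomplete_generation": IncompleteGenerationError,
    "overloaded": OverloadedError,
    "validation": ValidationError,
}
_BY_STATUS = {
    400: BadRequestError,
    403: ShardNotReadyError,
    424: ShardNotReadyError,
    404: NotFoundError,
    429: RateLimitExceededError,
    504: ShardTimeoutError,
}

def parse_error(status_code: int, payload: dict) -> Exception:
    cls = _BY_TYPE.get(payload.get("error_type"))
    if cls is None:
        cls = _BY_STATUS.get(status_code, UnknownError)
    return cls(payload.get("error", ""))
```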
text-generation-inference/clients/python/tests/test_errors.py
# Non-core Model Serving
TGI supports various LLM architectures (see the full list [here](../supported_models)). If you wish to serve a model that is not one of the supported ones, TGI will fall back to the `transformers` implementation of that model. This means you will be unable to use some of the features introduced by TGI, such as tensor-parallel sharding or flash attention. However, you can still get many of the benefits of TGI, such as continuous batching or streaming outputs.
You can serve these models using the same Docker command-line invocation as with fully supported models 🤗
```bash
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id gpt2
```
If the model you wish to serve is a custom transformers model, and its weights and implementation are available on the Hub, you can still serve the model by passing the `--trust-remote-code` flag to the `docker run` command like below 🤗
```bash
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id <CUSTOM_MODEL_ID> --trust-remote-code
```
Finally, if the model is not on the Hugging Face Hub but available locally, you can pass the path to the folder that contains your model like below 🤗
```bash
# Make sure your model is in the $volume directory
docker run --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id /data/<PATH-TO-FOLDER>
```
You can refer to [transformers docs on custom models](https://huggingface.co/docs/transformers/main/en/custom_models) for more information.
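Once a container from one of the commands above is running, the `/generate` endpoint can be exercised from Python with just the standard library. A minimal sketch (the helper name is hypothetical; it assumes the server listens on port 8080 as mapped above):

```python
import json
import urllib.request

def build_generate_request(base_url: str, inputs: str, max_new_tokens: int = 20):
    """Build a POST request for TGI's /generate endpoint."""
    payload = {"inputs": inputs, "parameters": {"max_new_tokens": max_new_tokens}}
    return urllib.request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the server from the commands above running:
# req = build_generate_request("http://127.0.0.1:8080", "What is deep learning?")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["generated_text"])
```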
text-generation-inference/docs/source/basic_tutorials/non_core_models.md
# Text Generation Inference
Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and T5.

Text Generation Inference implements many optimizations and features, such as:
- Simple launcher to serve most popular LLMs
- Production ready (distributed tracing with OpenTelemetry, Prometheus metrics)
- Tensor Parallelism for faster inference on multiple GPUs
- Token streaming using Server-Sent Events (SSE)
- Continuous batching of incoming requests for increased total throughput
- Optimized transformers code for inference using [Flash Attention](https://github.com/HazyResearch/flash-attention) and [Paged Attention](https://github.com/vllm-project/vllm) on the most popular architectures
- Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) and [GPT-Q](https://arxiv.org/abs/2210.17323)
- [Safetensors](https://github.com/huggingface/safetensors) weight loading
- Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226)
- Logits warper (temperature scaling, top-p, top-k, repetition penalty)
- Stop sequences
- Log probabilities
- Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance.
- [Guidance](../conceptual/guidance): Enable function calling and tool-use by forcing the model to generate structured outputs based on your own predefined output schemas.
Text Generation Inference is used in production by multiple projects, such as:
- [Hugging Chat](https://github.com/huggingface/chat-ui), an open-source interface for open-access models, such as Open Assistant and Llama
- [OpenAssistant](https://open-assistant.io/), an open-source community effort to train LLMs in the open
- [nat.dev](http://nat.dev/), a playground to explore and compare LLMs.
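Token streaming in the feature list above is delivered over Server-Sent Events: each event is a `data:` line whose JSON payload carries the newly generated token. A minimal sketch of extracting token texts from such a stream (the `token.text` field name follows TGI's `/generate_stream` schema; treat the helper as illustrative):

```python
import json

def parse_sse_tokens(raw_stream: str):
    """Extract token texts from a TGI-style SSE response body.

    Each event is a line of the form `data:{...}`; other lines
    (blank keep-alives, comments) are ignored.
    """
    tokens = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = json.loads(line[len("data:"):])
        tokens.append(payload["token"]["text"])
    return tokens

# parse_sse_tokens('data:{"token":{"text":"Hello"}}\ndata:{"token":{"text":" world"}}')
# -> ["Hello", " world"]
```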
text-generation-inference/docs/source/index.md
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 100000,
"logprob": null,
"text": "<๏ฝbeginโofโsentence๏ฝ>"
},
{
"id": 3533,
"logprob": -9.625,
"text": "Test"
},
{
"id": 3102,
"logprob": -11.25,
"text": " request"
}
],
"seed": null,
"tokens": [
{
"id": 185,
"logprob": -1.546875,
"special": false,
"text": "\n"
},
{
"id": 549,
"logprob": -2.859375,
"special": false,
"text": "The"
},
{
"id": 1727,
"logprob": -2.484375,
"special": false,
"text": " test"
},
{
"id": 3102,
"logprob": -0.83203125,
"special": false,
"text": " request"
},
{
"id": 317,
"logprob": -1.1484375,
"special": false,
"text": " is"
},
{
"id": 245,
"logprob": -1.578125,
"special": false,
"text": " a"
},
{
"id": 3412,
"logprob": -2.578125,
"special": false,
"text": " document"
},
{
"id": 344,
"logprob": -1.125,
"special": false,
"text": " that"
},
{
"id": 317,
"logprob": -1.6953125,
"special": false,
"text": " is"
},
{
"id": 1222,
"logprob": -1.71875,
"special": false,
"text": " used"
}
],
"top_tokens": null
},
"generated_text": "\nThe test request is a document that is used"
}
text-generation-inference/integration-tests/models/__snapshots__/test_flash_deepseek_v2/test_flash_deepseek_v2.json
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 1,
"logprob": null,
"text": "<s>"
},
{
"id": 4321,
"logprob": -13.90625,
"text": "Test"
},
{
"id": 2009,
"logprob": -12.328125,
"text": "request"
}
],
"seed": null,
"tokens": [
{
"id": 13,
"logprob": -2.0566406,
"special": false,
"text": "\n"
},
{
"id": 13,
"logprob": -1.5253906,
"special": false,
"text": "\n"
},
{
"id": 29902,
"logprob": -2.7578125,
"special": false,
"text": "I"
},
{
"id": 4966,
"logprob": -1.9033203,
"special": false,
"text": " hope"
},
{
"id": 445,
"logprob": -0.5019531,
"special": false,
"text": " this"
},
{
"id": 6911,
"logprob": -0.21264648,
"special": false,
"text": " helps"
},
{
"id": 29991,
"logprob": -0.5991211,
"special": false,
"text": "!"
},
{
"id": 2803,
"logprob": -0.37475586,
"special": false,
"text": " Let"
},
{
"id": 592,
"logprob": -0.018463135,
"special": false,
"text": " me"
},
{
"id": 1073,
"logprob": -0.0008597374,
"special": false,
"text": " know"
}
],
"top_tokens": null
},
"generated_text": "\n\nI hope this helps! Let me know"
}
text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama/test_flash_llama_grammar.json
{
"details": {
"finish_reason": "length",
"generated_tokens": 40,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 13,
"logprob": -0.27416992,
"special": false,
"text": "\n"
},
{
"id": 13,
"logprob": -0.17016602,
"special": false,
"text": "\n"
},
{
"id": 28737,
"logprob": -2.7109375,
"special": false,
"text": "I"
},
{
"id": 28809,
"logprob": -1.5,
"special": false,
"text": "โ"
},
{
"id": 28719,
"logprob": -0.34204102,
"special": false,
"text": "m"
},
{
"id": 459,
"logprob": -1.6914062,
"special": false,
"text": " not"
},
{
"id": 1864,
"logprob": -0.69140625,
"special": false,
"text": " sure"
},
{
"id": 513,
"logprob": -1.6171875,
"special": false,
"text": " if"
},
{
"id": 315,
"logprob": -1.3837891,
"special": false,
"text": " I"
},
{
"id": 541,
"logprob": -1.2226562,
"special": false,
"text": " can"
},
{
"id": 1567,
"logprob": -1.8652344,
"special": false,
"text": " come"
},
{
"id": 582,
"logprob": -0.0070228577,
"special": false,
"text": " up"
},
{
"id": 395,
"logprob": -0.0054092407,
"special": false,
"text": " with"
},
{
"id": 28705,
"logprob": -0.62597656,
"special": false,
"text": " "
},
{
"id": 28770,
"logprob": -0.0035572052,
"special": false,
"text": "3"
},
{
"id": 4842,
"logprob": -0.93603516,
"special": false,
"text": " unique"
},
{
"id": 3085,
"logprob": -0.028411865,
"special": false,
"text": " words"
},
{
"id": 369,
"logprob": -1.0400391,
"special": false,
"text": " that"
},
{
"id": 6685,
"logprob": -0.09710693,
"special": false,
"text": " describe"
},
{
"id": 528,
"logprob": -0.066467285,
"special": false,
"text": " me"
},
{
"id": 28725,
"logprob": -1.0722656,
"special": false,
"text": ","
},
{
"id": 562,
"logprob": -0.33422852,
"special": false,
"text": " but"
},
{
"id": 315,
"logprob": -0.5136719,
"special": false,
"text": " I"
},
{
"id": 28809,
"logprob": -0.8989258,
"special": false,
"text": "โ"
},
{
"id": 584,
"logprob": -0.2076416,
"special": false,
"text": "ll"
},
{
"id": 1464,
"logprob": -0.8808594,
"special": false,
"text": " try"
},
{
"id": 28723,
"logprob": -0.88427734,
"special": false,
"text": "."
},
{
"id": 13,
"logprob": -0.91064453,
"special": false,
"text": "\n"
},
{
"id": 13,
"logprob": -0.08105469,
"special": false,
"text": "\n"
},
{
"id": 28740,
"logprob": -1.8486328,
"special": false,
"text": "1"
},
{
"id": 28723,
"logprob": -0.111572266,
"special": false,
"text": "."
},
{
"id": 23626,
"logprob": -3.15625,
"special": false,
"text": " Creative"
},
{
"id": 13,
"logprob": -0.9194336,
"special": false,
"text": "\n"
},
{
"id": 28750,
"logprob": -0.24841309,
"special": false,
"text": "2"
},
{
"id": 28723,
"logprob": -9.393692e-05,
"special": false,
"text": "."
},
{
"id": 6785,
"logprob": -3.1386719,
"special": false,
"text": " Fun"
},
{
"id": 1780,
"logprob": -0.53564453,
"special": false,
"text": "ny"
},
{
"id": 13,
"logprob": -0.09033203,
"special": false,
"text": "\n"
},
{
"id": 28770,
"logprob": -0.00466156,
"special": false,
"text": "3"
},
{
"id": 28723,
"logprob": -0.00016450882,
"special": false,
"text": "."
}
]
},
"generated_text": "\n\nIโm not sure if I can come up with 3 unique words that describe me, but Iโll try.\n\n1. Creative\n2. Funny\n3."
}
text-generation-inference/integration-tests/models/__snapshots__/test_lora_mistral/test_lora_mistral_with_customer_support_adapter.json
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [
{
"id": 1,
"logprob": null,
"text": "<s>"
},
{
"id": 4321,
"logprob": -9.8359375,
"text": "Test"
},
{
"id": 2009,
"logprob": -9.6171875,
"text": "request"
}
],
"seed": null,
"tokens": [
{
"id": 13,
"logprob": -2.3417969,
"special": false,
"text": "\n"
},
{
"id": 3057,
"logprob": -1.8730469,
"special": false,
"text": "Test"
},
{
"id": 2009,
"logprob": -1.2626953,
"special": false,
"text": " request"
},
{
"id": 13,
"logprob": -1.7060547,
"special": false,
"text": "\n"
},
{
"id": 3057,
"logprob": -1.4482422,
"special": false,
"text": "Test"
},
{
"id": 2009,
"logprob": -0.15246582,
"special": false,
"text": " request"
},
{
"id": 13,
"logprob": -0.796875,
"special": false,
"text": "\n"
},
{
"id": 3057,
"logprob": -0.22766113,
"special": false,
"text": "Test"
},
{
"id": 2009,
"logprob": -0.007045746,
"special": false,
"text": " request"
},
{
"id": 13,
"logprob": -0.021759033,
"special": false,
"text": "\n"
}
],
"top_tokens": null
},
"generated_text": "\nTest request\nTest request\nTest request\n"
}
text-generation-inference/integration-tests/models/__snapshots__/test_server_gptq_quantized/test_server_gptq_quantized.json
import pytest
@pytest.fixture(scope="module")
def flash_deepseek_v2_handle(launcher):
with launcher("deepseek-ai/DeepSeek-V2-Lite", num_shard=2) as handle:
yield handle
@pytest.fixture(scope="module")
async def flash_deepseek_v2(flash_deepseek_v2_handle):
await flash_deepseek_v2_handle.health(300)
return flash_deepseek_v2_handle.client
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_deepseek_v2(flash_deepseek_v2, response_snapshot):
response = await flash_deepseek_v2.generate(
"Test request", max_new_tokens=10, decoder_input_details=True
)
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_deepseek_v2_all_params(flash_deepseek_v2, response_snapshot):
response = await flash_deepseek_v2.generate(
"Test request",
max_new_tokens=10,
repetition_penalty=1.2,
return_full_text=True,
stop_sequences=["test"],
temperature=0.5,
top_p=0.9,
top_k=10,
truncate=5,
typical_p=0.9,
watermark=True,
decoder_input_details=True,
seed=0,
)
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_deepseek_v2_load(
flash_deepseek_v2, generate_load, response_snapshot
):
responses = await generate_load(
flash_deepseek_v2, "Test request", max_new_tokens=10, n=4
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_deepseek_v2.py
import pytest
@pytest.fixture(scope="module")
def flash_neox_sharded_handle(launcher):
with launcher("OpenAssistant/oasst-sft-1-pythia-12b", num_shard=2) as handle:
yield handle
@pytest.fixture(scope="module")
async def flash_neox_sharded(flash_neox_sharded_handle):
await flash_neox_sharded_handle.health(300)
return flash_neox_sharded_handle.client
@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_neox(flash_neox_sharded, response_snapshot):
response = await flash_neox_sharded.generate(
"<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>",
max_new_tokens=10,
decoder_input_details=True,
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
async def test_flash_neox_load(flash_neox_sharded, generate_load, response_snapshot):
responses = await generate_load(
flash_neox_sharded,
"<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>",
max_new_tokens=10,
n=4,
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_flash_neox_sharded.py
import pytest
@pytest.fixture(scope="module")
def mt0_base_handle(launcher):
with launcher("bigscience/mt0-base") as handle:
yield handle
@pytest.fixture(scope="module")
async def mt0_base(mt0_base_handle):
await mt0_base_handle.health(300)
return mt0_base_handle.client
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base(mt0_base, response_snapshot):
response = await mt0_base.generate(
"Why is the sky blue?",
max_new_tokens=10,
top_p=0.9,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 5
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_all_params(mt0_base, response_snapshot):
response = await mt0_base.generate(
"Why is the sky blue?",
max_new_tokens=10,
repetition_penalty=1.2,
return_full_text=True,
stop_sequences=["test"],
temperature=0.5,
top_p=0.9,
top_k=10,
truncate=5,
typical_p=0.9,
watermark=True,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_load(mt0_base, generate_load, response_snapshot):
responses = await generate_load(
mt0_base,
"Why is the sky blue?",
max_new_tokens=10,
n=4,
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
text-generation-inference/integration-tests/models/test_mt0_base.py
import json
def main():
with open("./ShareGPT_V3_unfiltered_cleaned_split.json", "r") as f:
data = json.load(f)
    # Select only the first 2000 conversations that start with a human turn.
    max_conversations = 2000
    conversations = []
    for conversation in data:
        conv = conversation.get("conversations")
        if conv and conv[0]["from"] == "human":
            # Keep only the opening human message
            conversation["conversations"] = conversation["conversations"][:1]
            conversations.append(conversation)
        if len(conversations) >= max_conversations:
            break
    with open("./small.json", "w") as f:
        json.dump(conversations, f, indent=4)
if __name__ == "__main__":
main()
text-generation-inference/load_tests/filter.py
use crate::infer::Infer;
use crate::{
default_parameters,
server::{generate_internal, ComputeType},
Deserialize, ErrorResponse, GenerateParameters, GenerateRequest, Serialize, ToSchema,
};
use axum::extract::{Extension, Path};
use axum::http::{HeaderMap, StatusCode};
use axum::response::IntoResponse;
use axum::Json;
use futures::stream::FuturesUnordered;
use futures::TryStreamExt;
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct OutputChunk {
pub name: String,
pub shape: Vec<usize>,
pub datatype: String,
pub data: Vec<u8>,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct InferenceOutput {
pub id: String,
pub outputs: Vec<OutputChunk>,
}
#[derive(Debug, Deserialize, ToSchema)]
pub(crate) struct InferenceRequest {
pub id: String,
#[serde(default = "default_parameters")]
pub parameters: GenerateParameters,
pub inputs: Vec<Input>,
pub outputs: Vec<Output>,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub(crate) struct Input {
pub name: String,
pub shape: Vec<usize>,
pub datatype: String,
pub data: Vec<u8>,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub(crate) struct Output {
pub name: String,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct LiveResponse {
pub live: bool,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct ReadyResponse {
pub live: bool,
}
#[derive(Debug, Serialize, Deserialize, ToSchema)]
pub struct MetadataServerResponse {
pub name: String,
pub version: String,
pub extensions: Vec<String>,
}
#[utoipa::path(
post,
tag = "Text Generation Inference",
path = "/v2/health/live",
responses(
(status = 200, description = "Service is live", body = LiveReponse),
(status = 404, description = "Service not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kserve_health_live() -> Json<LiveResponse> {
let data = LiveResponse { live: true };
Json(data)
}
#[utoipa::path(
get,
tag = "Text Generation Inference",
path = "/v2/health/ready",
responses(
(status = 200, description = "Service is ready", body = ReadyResponse),
(status = 404, description = "Service not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kserve_health_ready() -> Json<ReadyResponse> {
let data = ReadyResponse { live: true };
Json(data)
}
#[utoipa::path(
get,
tag = "Text Generation Inference",
path = "/v2",
responses(
(status = 200, description = "Metadata retrieved", body = MetadataServerResponse),
(status = 404, description = "Service not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kerve_server_metadata() -> Json<MetadataServerResponse> {
let data = MetadataServerResponse {
name: "text-generation-inference".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
extensions: vec![
"health".to_string(),
"models".to_string(),
"metrics".to_string(),
],
};
Json(data)
}
#[utoipa::path(
get,
tag = "Text Generation Inference",
path = "/v2/models/{model_name}/versions/{model_version}",
responses(
(status = 200, description = "Model version metadata retrieved", body = MetadataServerResponse),
(status = 404, description = "Model or version not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kserve_model_metadata(
Path((model_name, model_version)): Path<(String, String)>,
) -> Json<MetadataServerResponse> {
let data = MetadataServerResponse {
name: model_name,
version: model_version,
extensions: vec!["infer".to_string(), "ready".to_string()],
};
Json(data)
}
#[utoipa::path(
get,
tag = "Text Generation Inference",
path = "/v2/models/{model_name}/versions/{model_version}/ready",
responses(
(status = 200, description = "Model version is ready", body = ReadyResponse),
(status = 404, description = "Model or version not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kserve_model_metadata_ready(
Path((_model_name, _model_version)): Path<(String, String)>,
) -> Json<ReadyResponse> {
let data = ReadyResponse { live: true };
Json(data)
}
#[utoipa::path(
post,
tag = "Text Generation Inference",
path = "/v2/models/{model_name}/versions/{model_version}/infer",
request_body = Json<InferenceRequest>,
responses(
(status = 200, description = "Inference executed successfully", body = InferenceOutput),
(status = 404, description = "Model or version not found", body = ErrorResponse,
example = json!({"error": "No response"}))
)
)]
pub async fn kserve_model_infer(
infer: Extension<Infer>,
Extension(compute_type): Extension<ComputeType>,
Json(payload): Json<InferenceRequest>,
) -> Result<impl IntoResponse, (StatusCode, Json<ErrorResponse>)> {
let id = payload.id.clone();
let str_inputs = payload
.inputs
.iter()
.map(|input| {
std::str::from_utf8(&input.data).map_err(|e| {
(
StatusCode::UNPROCESSABLE_ENTITY,
Json(ErrorResponse {
error: e.to_string(),
error_type: "utf8".to_string(),
}),
)
})
})
.collect::<Result<Vec<_>, _>>()?;
if str_inputs.len() != payload.outputs.len() {
return Err((
StatusCode::UNPROCESSABLE_ENTITY,
Json(ErrorResponse {
error: "Inputs and outputs length mismatch".to_string(),
error_type: "length mismatch".to_string(),
}),
));
}
let output_chunks = str_inputs
.iter()
.zip(&payload.outputs)
.map(|(str_input, output)| {
let generate_request = GenerateRequest {
inputs: str_input.to_string(),
parameters: payload.parameters.clone(),
};
let infer = infer.clone();
let compute_type = compute_type.clone();
let span = tracing::Span::current();
async move {
generate_internal(infer, compute_type, Json(generate_request), span)
.await
.map(|(_, Json(generation))| {
let generation_as_bytes = generation.generated_text.as_bytes().to_vec();
OutputChunk {
name: output.name.clone(),
shape: vec![1, generation_as_bytes.len()],
datatype: "BYTES".to_string(),
data: generation_as_bytes,
}
})
.map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(ErrorResponse {
error: "Incomplete generation".into(),
error_type: "Incomplete generation".into(),
}),
)
})
}
})
.collect::<FuturesUnordered<_>>()
.try_collect::<Vec<_>>()
.await?;
let inference_output = InferenceOutput {
id: id.clone(),
outputs: output_chunks,
};
Ok((HeaderMap::new(), Json(inference_output)))
}
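The `InferenceRequest` struct above deserializes from KServe-v2-style JSON, and the handler rejects bodies whose input and output counts differ. A Python sketch of building a compatible request body (the helper itself is hypothetical; field names mirror the structs). Note that serde serializes `Vec<u8>` as a JSON array of integers by default, which is why `data` is a list of byte values:

```python
import json

def build_kserve_infer_payload(request_id, prompts, output_names):
    """Build a JSON body matching the InferenceRequest struct above."""
    # The handler requires len(inputs) == len(outputs).
    assert len(prompts) == len(output_names)
    inputs = [
        {
            "name": f"input{i}",
            "shape": [1, len(p.encode("utf-8"))],
            "datatype": "BYTES",
            # serde reads Vec<u8> from a JSON array of integers
            "data": list(p.encode("utf-8")),
        }
        for i, p in enumerate(prompts)
    ]
    outputs = [{"name": n} for n in output_names]
    # `parameters` may be omitted: serde falls back to default_parameters()
    return json.dumps({"id": request_id, "inputs": inputs, "outputs": outputs})
```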
text-generation-inference/router/src/kserve.rs
flash_att_v2_commit_cuda := v2.6.1
flash_att_v2_commit_rocm := 2554f490101742ccdc56620a938f847f61754be6
build-flash-attention-v2-cuda:
pip install -U packaging wheel
pip install flash-attn==$(flash_att_v2_commit_cuda)
install-flash-attention-v2-cuda: build-flash-attention-v2-cuda
echo "Flash v2 installed"
build-flash-attention-v2-rocm:
if [ ! -d 'flash-attention-v2' ]; then \
pip install -U packaging ninja --no-cache-dir && \
git clone https://github.com/ROCm/flash-attention.git flash-attention-v2 && \
cd flash-attention-v2 && git fetch && git checkout $(flash_att_v2_commit_rocm) && \
git submodule update --init --recursive && GPU_ARCHS="gfx90a;gfx942" PYTORCH_ROCM_ARCH="gfx90a;gfx942" python setup.py build; \
fi
install-flash-attention-v2-rocm: build-flash-attention-v2-rocm
cd flash-attention-v2 && \
GPU_ARCHS="gfx90a;gfx942" PYTORCH_ROCM_ARCH="gfx90a;gfx942" python setup.py install
text-generation-inference/server/Makefile-flash-att-v2
// Adapted from turboderp exllama: https://github.com/turboderp/exllama
#include <ATen/cuda/CUDAContext.h>
#include "q4_matrix.cuh"
#include <vector>
#include "../util.cuh"
#include "../matrix.cuh"
using namespace std;
const int UNSHUF_BLOCKSIZE_X = 64;
const int RECONS_THREADS_X = 64; // Block size and thread count along columns in out, each thread converts 1 column
const int RECONS_THREADS_Y = 1; // Block size and thread count along rows in x and out, each thread converts 8 rows
vector<Q4Matrix*> g_q4_matrices;
void g_q4_keep_matrix(Q4Matrix* m)
{
g_q4_matrices.push_back(m);
}
void g_q4_free_matrices()
{
for (const auto& m : g_q4_matrices) delete m;
g_q4_matrices.clear();
}
Q4Matrix::Q4Matrix
(
const int _height,
const int _width,
const int _groups,
uint32_t* _qweight,
uint32_t* _qzeros,
half* _scales,
uint32_t* _g_idx,
const int _device
) :
height(_height),
width(_width),
groups(_groups),
device(_device)
{
cudaSetDevice(device);
cuda_qweight = _qweight;
cuda_qzeros = _qzeros;
cuda_scales = _scales;
groupsize = height / groups;
if (_g_idx) make_sequential(_g_idx);
}
Q4Matrix::~Q4Matrix()
{
}
// Make sequential
__global__ void make_sequential_kernel
(
const uint32_t* __restrict__ w,
uint32_t* __restrict__ w_new,
const uint32_t* __restrict__ x_map,
const int w_height,
const int w_width
)
{
const uint64_t* w2 = (uint64_t*) w;
uint64_t* w_new2 = (uint64_t*) w_new;
int w2_stride = w_width >> 1;
int w2_column = UNSHUF_BLOCKSIZE_X * blockIdx.x + threadIdx.x;
int w_new2_row = blockIdx.y;
int x_map_idx = w_new2_row << 3;
uint64_t dst = 0;
#pragma unroll
for (int i = 0; i < 8; i++)
{
int source_row = x_map[x_map_idx++];
int w2_row = source_row >> 3;
int w2_subrow = source_row & 0x07;
int w2_row_shift = w2_subrow << 2;
int wnew2_row_shift = i << 2;
uint64_t src = w2[w2_row * w2_stride + w2_column];
src >>= w2_row_shift;
src &= 0x0000000f0000000f;
src <<= wnew2_row_shift;
dst |= src;
}
w_new2[w_new2_row * w2_stride + w2_column] = dst;
}
void Q4Matrix::make_sequential(const uint32_t* cpu_g_idx)
{
uint32_t* cuda_new_qweight = NULL;
cudaMalloc(&cuda_new_qweight, height / 8 * width * sizeof(uint32_t));
cudaMalloc(&cuda_x_map, height * sizeof(uint32_t)); // TODO: Should probably be allocated in PyTorch
uint32_t* cpu_g_idx_map = (uint32_t*) calloc(groups, sizeof(uint32_t));
uint32_t* cpu_x_map = (uint32_t*) malloc(height * sizeof(uint32_t));
uint32_t* cpu_x_map_inv = (uint32_t*) malloc(height * sizeof(uint32_t));
// Group histogram
for (int i = 0; i < height; i++) cpu_g_idx_map[cpu_g_idx[i]]++;
// Group map
for (int i = 0, acc = 0; i < groups; i++)
{
short tmp = cpu_g_idx_map[i];
cpu_g_idx_map[i] = acc;
acc += tmp;
}
// X map (inverse)
for (int row = 0; row < height; row++)
{
uint32_t target_group = cpu_g_idx[row];
uint32_t target_row = cpu_g_idx_map[target_group];
cpu_g_idx_map[target_group]++;
cpu_x_map_inv[row] = target_row;
}
// X map
for (int row = 0; row < height; row++) cpu_x_map[cpu_x_map_inv[row]] = row;
// Move to CUDA
cudaMemcpyAsync(cuda_x_map, cpu_x_map, height * sizeof(uint32_t), cudaMemcpyHostToDevice);
// Rearrange rows in w
dim3 threads(UNSHUF_BLOCKSIZE_X, 1, 1);
dim3 blocks(width / UNSHUF_BLOCKSIZE_X / 2, height / 8, 1);
const cudaStream_t stream = at::cuda::getCurrentCUDAStream();
make_sequential_kernel<<<blocks, threads, 0, stream>>>(cuda_qweight, cuda_new_qweight, cuda_x_map, height / 8, width);
// Replace qweights
cudaMemcpyAsync(cuda_qweight, cuda_new_qweight, height / 8 * width * sizeof(uint32_t), cudaMemcpyDeviceToDevice);
// Cleanup
cudaDeviceSynchronize();
cudaFree(cuda_new_qweight);
free(cpu_g_idx_map);
free(cpu_x_map);
free(cpu_x_map_inv);
}
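The host-side steps above (group histogram, exclusive prefix sum, scatter to build the inverse permutation, then invert it) can be mirrored in a short Python sketch; this is illustrative only, not part of the kernel code:

```python
def build_row_permutation(g_idx, groups):
    """Mirror the host-side logic of make_sequential: given the per-row
    group index g_idx, compute x_map (new row -> old row) and its inverse
    x_map_inv (old row -> new row) so rows become group-sorted."""
    # Group histogram: count rows per group
    counts = [0] * groups
    for g in g_idx:
        counts[g] += 1
    # Exclusive prefix sum: starting offset of each group
    offsets = []
    acc = 0
    for c in counts:
        offsets.append(acc)
        acc += c
    # Inverse map: old row -> new (group-sorted) row
    x_map_inv = [0] * len(g_idx)
    for row, g in enumerate(g_idx):
        x_map_inv[row] = offsets[g]
        offsets[g] += 1
    # Forward map: new row -> old row
    x_map = [0] * len(g_idx)
    for row, new_row in enumerate(x_map_inv):
        x_map[new_row] = row
    return x_map, x_map_inv
```

Gathering rows through `x_map` (as the CUDA kernel does with `cuda_x_map`) yields a weight matrix whose rows are ordered by group, so the act-order GPTQ matrix can be processed sequentially.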
__global__ void reconstruct_kernel
(
const uint32_t* __restrict__ w,
half* __restrict__ out, // (y)
const half* __restrict__ w_scales,
const uint32_t* __restrict__ w_zeros,
const int height,
const int width,
const int groupsize
)
{
// Start of block
int column = RECONS_THREADS_X * blockIdx.x + threadIdx.x;
int row = (RECONS_THREADS_Y * blockIdx.y + threadIdx.y) * 8;
// Views
MatrixView_q4_column w_(w, height, width);
MatrixView_half_rw out_(out, height, width);
MatrixView_half w_scales_(w_scales, height / groupsize, width);
MatrixView_q4_row w_zeros_(w_zeros, height / groupsize, width);
// Groupsize version
int group = row / groupsize;
half w_scale = w_scales_.item(group, column);
uint32_t w_zero = (w_zeros_.item(group, column) + 1) & 0x0F;
uint32_t w_read = w_.item_uint32_t(row, column);
half* out_ptr = out_.item_ptr(row, column);
#pragma unroll
for (int s = 0; s < 32; s += 4)
{
half w_item = __hmul(__int2half_rn((int)((w_read >> s) & 0x0f) - w_zero), w_scale);
*out_ptr = w_item; out_ptr += out_.width;
}
}
void Q4Matrix::reconstruct(half* out)
{
dim3 threads(RECONS_THREADS_X, RECONS_THREADS_Y, 1);
dim3 blocks
(
(width + threads.x - 1) / threads.x,
(height / 8 + threads.y - 1) / threads.y,
1
);
const cudaStream_t stream = at::cuda::getCurrentCUDAStream();
reconstruct_kernel<<<blocks, threads, 0, stream>>>(cuda_qweight, out, cuda_scales, cuda_qzeros, height / 8, width, groupsize);
}
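The inner loop of `reconstruct_kernel` unpacks eight 4-bit weights from each 32-bit word and dequantizes them as `(q - (zero + 1)) * scale`. A minimal Python sketch of that per-word step, for illustration only:

```python
def dequant_4bit_word(word, zero, scale):
    """Unpack one 32-bit word holding 8 4-bit weights (lowest nibble =
    first row) and dequantize each as (q - (zero + 1)) * scale.
    The +1 matches the GPTQ zero-point convention used by the kernel."""
    out = []
    z = (zero + 1) & 0x0F
    for s in range(0, 32, 4):
        q = (word >> s) & 0x0F
        out.append((q - z) * scale)
    return out
```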
|
text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/q4_matrix.cu/0
|
{
"file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/cuda_func/q4_matrix.cu",
"repo_id": "text-generation-inference",
"token_count": 2592
}
| 237
|
#include "q_matrix.cuh"
#include "matrix_view.cuh"
#include "util.cuh"
#include "quant/qdq_2.cuh"
#include "quant/qdq_3.cuh"
#include "quant/qdq_4.cuh"
#include "quant/qdq_5.cuh"
#include "quant/qdq_6.cuh"
#include "quant/qdq_8.cuh"
#define BLOCK_KN_SIZE 128
#define THREADS_X 32
#define THREADS_Y 32
// Shuffle quantized data on load
__global__ void shuffle_kernel
(
uint32_t* __restrict__ b_q_weight,
const int size_k,
const int size_n,
const int rows_8,
const int rows_6,
const int rows_5,
const int rows_4,
const int rows_3,
const int rows_2
)
{
int n = blockIdx.x * THREADS_X + threadIdx.x;
if (n >= size_n) return;
int k = 0;
uint32_t* b_ptr = b_q_weight + n;
while (k < rows_8) { shuffle_8bit_4 (b_ptr, size_n); b_ptr += 1 * size_n; k += 4; }
while (k < rows_6) { shuffle_6bit_16(b_ptr, size_n); b_ptr += 3 * size_n; k += 16; }
while (k < rows_5) { shuffle_5bit_32(b_ptr, size_n); b_ptr += 5 * size_n; k += 32; }
while (k < rows_4) { shuffle_4bit_8 (b_ptr, size_n); b_ptr += 1 * size_n; k += 8; }
while (k < rows_3) { shuffle_3bit_32(b_ptr, size_n); b_ptr += 3 * size_n; k += 32; }
while (k < rows_2) { shuffle_2bit_16(b_ptr, size_n); b_ptr += 1 * size_n; k += 16; }
}
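The pointer strides and `k` increments in `shuffle_kernel` encode how many packed `uint32` rows hold how many quantized rows at each bit width (e.g. 8-bit: 1 word per 4 rows; 3-bit: 3 words per 32 rows). A small Python sketch of that storage arithmetic, illustrative only:

```python
def packed_rows(k_rows, bits):
    """Number of uint32 storage rows needed for k_rows quantized rows at
    the given bit width, matching the (stride, step) pairs in
    shuffle_kernel: 8-bit: 1 word/4 rows, 6-bit: 3/16, 5-bit: 5/32,
    4-bit: 1/8, 3-bit: 3/32, 2-bit: 1/16."""
    words_per_block, rows_per_block = {
        8: (1, 4), 6: (3, 16), 5: (5, 32),
        4: (1, 8), 3: (3, 32), 2: (1, 16),
    }[bits]
    # Sanity: each block stores exactly rows_per_block * bits bits
    assert words_per_block * 32 == rows_per_block * bits
    assert k_rows % rows_per_block == 0
    return k_rows // rows_per_block * words_per_block
```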
// QMatrix constructor
QMatrix::QMatrix
(
const int _device,
const int _height,
const int _width,
const int _groups,
uint32_t* _q_weight,
uint16_t* _q_perm,
uint16_t* _q_invperm,
uint32_t* _q_scale,
half* _q_scale_max,
uint16_t* _q_groups,
uint16_t* _q_group_map,
uint32_t* _gptq_qzeros,
half* _gptq_scales,
uint32_t* _gptq_g_idx,
half* _temp_dq
) :
device(_device),
height(_height),
width(_width),
groups(_groups),
temp_dq(_temp_dq)
{
cudaSetDevice(device);
failed = false;
cuda_q_weight = _q_weight;
cuda_q_perm = _q_perm;
cuda_q_invperm = _q_invperm;
cuda_q_scale = _q_scale;
cuda_q_scale_max = _q_scale_max;
cuda_q_groups = _q_groups;
cuda_q_group_map = _q_group_map;
cuda_gptq_qzeros = _gptq_qzeros;
cuda_gptq_scales = _gptq_scales;
is_gptq = (_gptq_qzeros != NULL);
if (is_gptq)
{
gptq_groupsize = 1;
while (gptq_groupsize * groups < height) gptq_groupsize *= 2;
}
// Create group map
rows_8 = 0;
rows_6 = 0;
rows_5 = 0;
rows_4 = 0;
rows_3 = 0;
rows_2 = 0;
if (!is_gptq)
{
uint16_t* cpu_q_groups = (uint16_t*)calloc(groups * 2, sizeof(uint16_t));
cudaMemcpy(cpu_q_groups, cuda_q_groups, groups * 2 * sizeof(uint16_t), cudaMemcpyDeviceToHost);
int row = 0;
for (int i = 0; i < groups; i++)
{
int bits = cpu_q_groups[i * 2];
int rows;
if (i < groups - 1)
{
int qrows = cpu_q_groups[i * 2 + 3] - cpu_q_groups[i * 2 + 1];
rows = qrows * 32 / bits;
}
else rows = height - row;
if (bits == 8) rows_8 += rows;
if (bits == 6) rows_6 += rows;
if (bits == 5) rows_5 += rows;
if (bits == 4) rows_4 += rows;
if (bits == 3) rows_3 += rows;
if (bits == 2) rows_2 += rows;
row += rows;
}
free(cpu_q_groups);
rows_6 += rows_8;
rows_5 += rows_6;
rows_4 += rows_5;
rows_3 += rows_4;
rows_2 += rows_3;
}
else
{
rows_4 = height;
rows_3 = height;
rows_2 = height;
if (_gptq_g_idx)
{
if (!make_sequential(_gptq_g_idx))
{
failed = true;
//printf("FAIL\n");
return;
}
}
}
// DBGI(rows_8);
// DBGI(rows_6);
// DBGI(rows_5);
// DBGI(rows_4);
// DBGI(rows_3);
// DBGI(rows_2);
// Shuffle quantized data
dim3 blockDim, gridDim;
blockDim.x = THREADS_X;
blockDim.y = 1;
gridDim.x = DIVIDE(width, THREADS_X);
gridDim.y = 1;
const cudaStream_t stream = at::cuda::getCurrentCUDAStream();
shuffle_kernel<<<gridDim, blockDim, 0, stream>>>(cuda_q_weight, height, width, rows_8, rows_6, rows_5, rows_4, rows_3, rows_2);
}
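The `rows_8 … rows_2` accumulation at the end of the constructor turns per-bit-width row counts into cumulative boundaries, so each kernel loop knows where one bit width ends and the next begins. A sketch of that bookkeeping in Python, for illustration:

```python
def cumulative_bit_boundaries(rows_by_bits):
    """Given {bits: row_count}, return (rows_8, rows_6, rows_5, rows_4,
    rows_3, rows_2) as computed at the end of the QMatrix constructor:
    each boundary is the first row index past all groups of that bit
    width or higher (groups are stored in descending bit-width order)."""
    r8 = rows_by_bits.get(8, 0)
    r6 = r8 + rows_by_bits.get(6, 0)
    r5 = r6 + rows_by_bits.get(5, 0)
    r4 = r5 + rows_by_bits.get(4, 0)
    r3 = r4 + rows_by_bits.get(3, 0)
    r2 = r3 + rows_by_bits.get(2, 0)
    return r8, r6, r5, r4, r3, r2
```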
QMatrix::~QMatrix()
{
}
// Reconstruct b[k,n] (GPTQ)
__global__ void reconstruct_gptq_kernel
(
const uint32_t* __restrict__ b_q_weight,
const uint16_t* __restrict__ b_q_perm,
const uint32_t* __restrict__ b_gptq_qzeros,
const half* __restrict__ b_gptq_scales,
//const uint16_t* __restrict__ b_q_groups,
const int size_k,
const int size_n,
const int groupsize,
const int groups,
half* __restrict__ b,
const int rows_4
)
{
MatrixView_half_rw b_(b, size_k, size_n);
MatrixView_q4_row b_gptq_qzeros_(b_gptq_qzeros, groups, size_n);
MatrixView_half b_gptq_scales_(b_gptq_scales, groups, size_n);
int offset_k = BLOCK_KN_SIZE * blockIdx.y;
int offset_n = BLOCK_KN_SIZE * blockIdx.x * 4;
int end_k = min(offset_k + BLOCK_KN_SIZE, size_k);
// Preload remapping table
__shared__ uint16_t perm[BLOCK_KN_SIZE];
int t = threadIdx.x;
if (b_q_perm)
{
if (offset_k + t < size_k)
perm[t] = b_q_perm[offset_k + t];
}
// Column
int n = offset_n + t * 4;
if (n >= size_n) return;
// Find initial group
int group = offset_k / groupsize;
int nextgroup = offset_k + groupsize;
// b offset
int qk = offset_k / (32 / 4);
const uint32_t* b_ptr = b_q_weight + qk * size_n + n;
// Initial zeros/scale
int zeros[4];
half2 scales[4];
half2 z1z16[4][2];
half2 y1y16[4][2];
b_gptq_qzeros_.item4(zeros, group, n);
b_gptq_scales_.item4_h2(scales, group, n);
dequant_4bit_8_prep_zero((zeros[0] + 1) & 0x0F, z1z16[0], y1y16[0]);
dequant_4bit_8_prep_zero((zeros[1] + 1) & 0x0F, z1z16[1], y1y16[1]);
dequant_4bit_8_prep_zero((zeros[2] + 1) & 0x0F, z1z16[2], y1y16[2]);
dequant_4bit_8_prep_zero((zeros[3] + 1) & 0x0F, z1z16[3], y1y16[3]);
__syncthreads();
int k = offset_k;
int lk = 0;
while (k < end_k)
{
if (k == nextgroup)
{
group++;
nextgroup += groupsize;
b_gptq_qzeros_.item4(zeros, group, n);
b_gptq_scales_.item4_h2(scales, group, n);
dequant_4bit_8_prep_zero((zeros[0] + 1) & 0x0F, z1z16[0], y1y16[0]);
dequant_4bit_8_prep_zero((zeros[1] + 1) & 0x0F, z1z16[1], y1y16[1]);
dequant_4bit_8_prep_zero((zeros[2] + 1) & 0x0F, z1z16[2], y1y16[2]);
dequant_4bit_8_prep_zero((zeros[3] + 1) & 0x0F, z1z16[3], y1y16[3]);
}
for (int p = 0; p < 4; p++)
{
half2 dq[4][4];
const int4* b_ptr4 = (int4*) b_ptr;
int4 load_int4 = *b_ptr4;
dequant_4bit_8_gptq(load_int4.x, dq[0], z1z16[0], y1y16[0], size_n, false);
dequant_4bit_8_gptq(load_int4.y, dq[1], z1z16[1], y1y16[1], size_n, false);
dequant_4bit_8_gptq(load_int4.z, dq[2], z1z16[2], y1y16[2], size_n, false);
dequant_4bit_8_gptq(load_int4.w, dq[3], z1z16[3], y1y16[3], size_n, false);
b_ptr += size_n;
//half* dqh = (half*)dq;
if (b_q_perm)
{
for (int j = 0; j < 4; j++)
{
for (int v = 0; v < 4; v++) dq[v][j] = __hmul2(scales[v], dq[v][j]);
b_.set4(perm[lk++], n, __low2half(dq[0][j]), __low2half(dq[1][j]), __low2half(dq[2][j]), __low2half(dq[3][j]));
b_.set4(perm[lk++], n, __high2half(dq[0][j]), __high2half(dq[1][j]), __high2half(dq[2][j]), __high2half(dq[3][j]));
}
}
else
{
for (int j = 0; j < 4; j++)
{
for (int v = 0; v < 4; v++) dq[v][j] = __hmul2(scales[v], dq[v][j]);
b_.set4(offset_k + lk++, n, __low2half(dq[0][j]), __low2half(dq[1][j]), __low2half(dq[2][j]), __low2half(dq[3][j]));
b_.set4(offset_k + lk++, n, __high2half(dq[0][j]), __high2half(dq[1][j]), __high2half(dq[2][j]), __high2half(dq[3][j]));
}
}
}
k += 32;
}
}
// Reconstruct b[k,n]
__global__ void reconstruct_kernel
(
const uint32_t* __restrict__ b_q_weight,
const uint16_t* __restrict__ b_q_perm,
const uint32_t* __restrict__ b_q_scale,
const half* __restrict__ b_q_scale_max,
const uint16_t* __restrict__ b_q_group_map,
const int size_k,
const int size_n,
//const int groupsize,
const int groups,
half* __restrict__ b,
const int rows_8,
const int rows_6,
const int rows_5,
const int rows_4,
const int rows_3,
const int rows_2
)
{
MatrixView_half_rw b_(b, size_k, size_n);
MatrixView_q4_row b_q_scale_(b_q_scale, groups, size_n);
int offset_k = BLOCK_KN_SIZE * blockIdx.y;
int offset_n = BLOCK_KN_SIZE * blockIdx.x;
// Preload remapping table
int t = threadIdx.x;
__shared__ uint16_t perm[BLOCK_KN_SIZE];
if (offset_k + t < size_k)
perm[t] = b_q_perm[offset_k + t];
// Column
int n = offset_n + t;
if (n >= size_n) return;
// Find initial group
// int group = offset_k / groupsize;
int group = b_q_group_map[offset_k * 2];
int pre_rows_8 = min(rows_8, offset_k);
int pre_rows_6 = offset_k > rows_8 ? min(rows_6, offset_k) - rows_8 : 0;
int pre_rows_5 = offset_k > rows_6 ? min(rows_5, offset_k) - rows_6 : 0;
int pre_rows_4 = offset_k > rows_5 ? min(rows_4, offset_k) - rows_5 : 0;
int pre_rows_3 = offset_k > rows_4 ? min(rows_3, offset_k) - rows_4 : 0;
int pre_rows_2 = offset_k > rows_3 ? min(rows_2, offset_k) - rows_3 : 0;
int qk = 0;
qk += pre_rows_8 / 32 * 8;
qk += pre_rows_6 / 32 * 6;
qk += pre_rows_5 / 32 * 5;
qk += pre_rows_4 / 32 * 4;
qk += pre_rows_3 / 32 * 3;
qk += pre_rows_2 / 32 * 2;
const uint32_t* b_ptr = b_q_weight + qk * size_n + n;
half qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]);
half2 qs_h2 = __halves2half2(qs_h, qs_h);
int nextgroup = offset_k + b_q_group_map[offset_k * 2 + 1];
int end_k = min(offset_k + BLOCK_KN_SIZE, size_k);
int k = offset_k;
int lk = 0;
__syncthreads();
while (k < rows_8 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 4; p++)
{
half2 dq[4];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
uint32_t q_1 = *b_ptr; b_ptr += size_n;
dequant_8bit_8(q_0, q_1, dq, size_n);
for (int j = 0; j < 4; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 8; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 32;
}
while (k < rows_6 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 2; p++)
{
half2 dq[8];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
uint32_t q_1 = *b_ptr; b_ptr += size_n;
uint32_t q_2 = *b_ptr; b_ptr += size_n;
dequant_6bit_16(q_0, q_1, q_2, dq, size_n);
for (int j = 0; j < 8; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 16; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 32;
}
while (k < rows_5 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 1; p++)
{
half2 dq[16];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
uint32_t q_1 = *b_ptr; b_ptr += size_n;
uint32_t q_2 = *b_ptr; b_ptr += size_n;
uint32_t q_3 = *b_ptr; b_ptr += size_n;
uint32_t q_4 = *b_ptr; b_ptr += size_n;
dequant_5bit_32(q_0, q_1, q_2, q_3, q_4, dq, size_n);
for (int j = 0; j < 16; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 32; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 32;
}
while (k < rows_4 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 4; p++)
{
half2 dq[4];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
dequant_4bit_8(q_0, dq, size_n);
for (int j = 0; j < 4; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 8; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 32;
}
while (k < rows_3 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 1; p++)
{
half2 dq[16];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
uint32_t q_1 = *b_ptr; b_ptr += size_n;
uint32_t q_2 = *b_ptr; b_ptr += size_n;
dequant_3bit_32(q_0, q_1, q_2, dq, size_n);
for (int j = 0; j < 16; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 32; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 32;
}
while (k < rows_2 && k < end_k)
{
if (k == nextgroup) { group++; qs_h = dq_scale(b_q_scale_.item(group, n), b_q_scale_max[group]); nextgroup += b_q_group_map[k * 2 + 1]; qs_h2 = __halves2half2(qs_h, qs_h); }
for (int p = 0; p < 1; p++)
{
half2 dq[8];
uint32_t q_0 = *b_ptr; b_ptr += size_n;
dequant_2bit_16(q_0, dq, size_n);
for (int j = 0; j < 8; j++) dq[j] = __hmul2(dq[j], qs_h2);
half* dqh = (half*) dq;
for (int j = 0; j < 16; j++) b_.set(perm[lk++], n, dqh[j]);
}
k += 16;
}
}
void QMatrix::reconstruct(half* out)
{
dim3 blockDim, gridDim;
blockDim.x = BLOCK_KN_SIZE;
blockDim.y = 1;
gridDim.y = DIVIDE(height, BLOCK_KN_SIZE);
const cudaStream_t stream = at::cuda::getCurrentCUDAStream();
if (!is_gptq)
{
gridDim.x = DIVIDE(width, BLOCK_KN_SIZE);
reconstruct_kernel<<<gridDim, blockDim, 0, stream>>>
(
cuda_q_weight,
cuda_q_perm,
cuda_q_scale,
cuda_q_scale_max,
cuda_q_group_map,
height,
width,
//groupsize,
groups,
out,
rows_8,
rows_6,
rows_5,
rows_4,
rows_3,
rows_2
);
}
else
{
gridDim.x = DIVIDE(width, BLOCK_KN_SIZE * 4);
reconstruct_gptq_kernel<<<gridDim, blockDim, 0, stream>>>
(
cuda_q_weight,
cuda_q_perm,
cuda_gptq_qzeros,
cuda_gptq_scales,
//const uint16_t* __restrict__ b_q_groups,
height,
width,
gptq_groupsize,
groups,
out,
rows_4
);
}
}
__global__ void make_sequential_kernel
(
const uint32_t* __restrict__ w,
uint32_t* __restrict__ w_new,
const uint16_t* __restrict__ q_perm,
const int w_height,
const int w_width
)
{
const uint64_t* w2 = (uint64_t*) w;
uint64_t* w_new2 = (uint64_t*) w_new;
int w2_stride = w_width >> 1;
int w2_column = THREADS_X * blockIdx.x + threadIdx.x;
if (w2_column >= w2_stride) return;
int w_new2_row = blockIdx.y;
int q_perm_idx = w_new2_row << 3;
uint64_t dst = 0;
#pragma unroll
for (int i = 0; i < 8; i++)
{
int source_row = q_perm[q_perm_idx++];
int w2_row = source_row >> 3;
int w2_subrow = source_row & 0x07;
int w2_row_shift = w2_subrow << 2;
int wnew2_row_shift = i << 2;
uint64_t src = w2[w2_row * w2_stride + w2_column];
src >>= w2_row_shift;
src &= 0x0000000f0000000f;
src <<= wnew2_row_shift;
dst |= src;
}
w_new2[w_new2_row * w2_stride + w2_column] = dst;
}
bool QMatrix::make_sequential(const uint32_t* cpu_g_idx)
{
const cudaStream_t stream = at::cuda::getCurrentCUDAStream();
uint32_t* cuda_new_qweight = NULL;
cudaError_t err = cudaMalloc(&cuda_new_qweight, height / 8 * width * sizeof(uint32_t));
if (err != cudaSuccess) {
cudaError_t cuda_status = cudaGetLastError(); // Clear error
return false;
}
uint32_t* cpu_g_idx_map = (uint32_t*) calloc(groups, sizeof(uint32_t));
uint32_t* cpu_x_map = (uint32_t*) malloc(height * sizeof(uint32_t));
uint32_t* cpu_x_map_inv = (uint32_t*) malloc(height * sizeof(uint32_t));
// Group histogram
for (int i = 0; i < height; i++) cpu_g_idx_map[cpu_g_idx[i]]++;
// Group map
for (int i = 0, acc = 0; i < groups; i++)
{
short tmp = cpu_g_idx_map[i];
cpu_g_idx_map[i] = acc;
acc += tmp;
}
// X map (inverse)
for (int row = 0; row < height; row++)
{
uint32_t target_group = cpu_g_idx[row];
uint32_t target_row = cpu_g_idx_map[target_group];
cpu_g_idx_map[target_group]++;
cpu_x_map_inv[row] = target_row;
}
// X map
for (int row = 0; row < height; row++) cpu_x_map[cpu_x_map_inv[row]] = row;
// Reduce to uint16_t
uint16_t* cpu_x_map16 = (uint16_t*)cpu_x_map;
uint16_t* cpu_x_map_inv16 = (uint16_t*)cpu_x_map_inv;
for (int row = 0; row < height; row++) cpu_x_map16[row] = (uint16_t) cpu_x_map[row];
for (int row = 0; row < height; row++) cpu_x_map_inv16[row] = (uint16_t) cpu_x_map_inv[row];
// Move to CUDA
cudaMemcpyAsync(cuda_q_perm, cpu_x_map16, height * sizeof(uint16_t), cudaMemcpyHostToDevice);
cudaMemcpyAsync(cuda_q_invperm, cpu_x_map_inv16, height * sizeof(uint16_t), cudaMemcpyHostToDevice);
// Rearrange rows in w
dim3 blockDim, gridDim;
blockDim.x = THREADS_X;
blockDim.y = 1;
gridDim.x = DIVIDE(width, THREADS_X);
gridDim.y = height / 8;
make_sequential_kernel<<<gridDim, blockDim, 0, stream>>>
(
cuda_q_weight,
cuda_new_qweight,
cuda_q_perm,
height / 8,
width
);
// Replace qweights
cudaMemcpyAsync(cuda_q_weight, cuda_new_qweight, height / 8 * width * sizeof(uint32_t), cudaMemcpyDeviceToDevice);
// Cleanup
cudaDeviceSynchronize();
cudaFree(cuda_new_qweight);
free(cpu_g_idx_map);
free(cpu_x_map);
free(cpu_x_map_inv);
return true;
}
|
text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_matrix.cu/0
|
{
"file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/q_matrix.cu",
"repo_id": "text-generation-inference",
"token_count": 10524
}
| 238
|
# Origin: https://github.com/predibase/lorax
# Path: lorax/server/lorax_server/adapters/config.py
# License: Apache License Version 2.0, January 2004
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Set, Tuple
import torch
from text_generation_server.adapters.weights import AdapterWeights
@dataclass
class ModuleMap:
module_name: str
module_weights: Dict[str, Tuple[torch.Tensor, str]]
@dataclass
class AdapterConfig(ABC):
base_model_name_or_path: str
@abstractmethod
def map_weights_for_model(
self,
adapter_weights: Dict[int, AdapterWeights],
weight_names: Tuple[str],
) -> Tuple[ModuleMap, Set[str]]:
pass
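A concrete subclass of `AdapterConfig` only needs to implement `map_weights_for_model`. The hypothetical `ToyAdapterConfig` below is a minimal sketch (with local stand-ins so it runs without torch or TGI installed); it is not the real LoRA config from lorax:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Local stand-ins mirroring the interface above (illustrative only).
@dataclass
class ModuleMap:
    module_name: str
    module_weights: dict

@dataclass
class AdapterConfig(ABC):
    base_model_name_or_path: str

    @abstractmethod
    def map_weights_for_model(self, adapter_weights, weight_names):
        ...

@dataclass
class ToyAdapterConfig(AdapterConfig):
    """Hypothetical subclass: keeps only weights whose names appear in
    weight_names and reports the rest as unused."""
    def map_weights_for_model(self, adapter_weights, weight_names):
        kept = {k: v for k, v in adapter_weights.items() if k in weight_names}
        unused = {k for k in adapter_weights if k not in weight_names}
        return ModuleMap("toy", kept), unused
```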
|
text-generation-inference/server/text_generation_server/adapters/config.py/0
|
{
"file_path": "text-generation-inference/server/text_generation_server/adapters/config.py",
"repo_id": "text-generation-inference",
"token_count": 275
}
| 239
|
from dataclasses import dataclass
import bitsandbytes as bnb
import torch
from bitsandbytes.nn import Int8Params, Params4bit
from text_generation_server.utils.weights import UnquantizedWeight
@dataclass
class BNBWeight(UnquantizedWeight):
weight: torch.Tensor
def get_linear(self, bias: torch.Tensor):
return Linear8bitLt(self.weight, bias, has_fp16_weights=False, threshold=6.0)
class Linear8bitLt(torch.nn.Module):
def __init__(
self,
weight,
bias,
has_fp16_weights=True,
memory_efficient_backward=False,
threshold=0.0,
index=None,
):
super().__init__()
assert (
not memory_efficient_backward
), "memory_efficient_backward is no longer required and the argument is deprecated in 0.37.0 and will be removed in 0.39.0"
self.state = bnb.MatmulLtState()
self.index = index
# Necessary for stacked layers
self.state.threshold = threshold
self.state.has_fp16_weights = has_fp16_weights
self.state.memory_efficient_backward = memory_efficient_backward
if threshold > 0.0 and not has_fp16_weights:
self.state.use_pool = True
self.weight = Int8Params(
weight.data,
has_fp16_weights=has_fp16_weights,
requires_grad=has_fp16_weights,
)
self.weight.cuda(weight.device)
self.bias = bias
def init_8bit_state(self):
self.state.CB = self.weight.CB
self.state.SCB = self.weight.SCB
self.weight.CB = None
self.weight.SCB = None
def forward(self, x: torch.Tensor):
self.state.is_training = self.training
if self.weight.CB is not None:
self.init_8bit_state()
# weights are cast automatically as Int8Params, but the bias has to be cast manually
if self.bias is not None and self.bias.dtype != x.dtype:
self.bias.data = self.bias.data.to(x.dtype)
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
if not self.state.has_fp16_weights:
if self.state.CB is not None and self.state.CxB is not None:
# we converted 8-bit row major to turing/ampere format in the first inference pass
# we no longer need the row-major weight
del self.state.CB
self.weight.data = self.state.CxB
return out
@dataclass
class BNBFP4Weight(UnquantizedWeight):
weight: torch.Tensor
def get_linear(self, bias: torch.Tensor):
return Linear4bit(self.weight, bias, quant_type="fp4")
@dataclass
class BNBNF4Weight(UnquantizedWeight):
weight: torch.Tensor
def get_linear(self, bias: torch.Tensor):
return Linear4bit(self.weight, bias, quant_type="nf4")
class Linear4bit(torch.nn.Module):
def __init__(self, weight, bias, quant_type):
super().__init__()
self.weight = Params4bit(
weight.data,
requires_grad=False,
compress_statistics=True,
quant_type=quant_type,
)
self.compute_dtype = None
self.weight.cuda(weight.device)
self.bias = bias
def forward(self, x: torch.Tensor):
        # weights are cast automatically as Params4bit, but the bias has to be cast manually
if self.bias is not None and self.bias.dtype != x.dtype:
self.bias.data = self.bias.data.to(x.dtype)
if getattr(self.weight, "quant_state", None) is None:
print(
"FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first."
)
inp_dtype = x.dtype
if self.compute_dtype is not None:
x = x.to(self.compute_dtype)
bias = None if self.bias is None else self.bias.to(self.compute_dtype)
out = bnb.matmul_4bit(
x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state
)
out = out.to(inp_dtype)
return out
|
text-generation-inference/server/text_generation_server/layers/bnb.py/0
|
{
"file_path": "text-generation-inference/server/text_generation_server/layers/bnb.py",
"repo_id": "text-generation-inference",
"token_count": 1825
}
| 240
|
from typing import Optional
import torch
import torch.nn as nn
from loguru import logger
from text_generation_server.layers.fp8 import fp8_quantize
from text_generation_server.layers.marlin.gptq import _check_valid_shape
from text_generation_server.layers.marlin.util import (
_check_marlin_kernels,
permute_scales,
)
from text_generation_server.utils.log import log_once
try:
import marlin_kernels
except ImportError:
marlin_kernels = None
MARLIN_TILE_SIZE = 16
class GPTQMarlinFP8Linear(nn.Module):
"""
FP8 GPTQ-Marlin linear layer.
"""
def __init__(
self,
qweight: torch.Tensor,
scales: torch.Tensor,
bias: Optional[torch.Tensor],
) -> None:
super().__init__()
_check_marlin_kernels()
assert marlin_kernels is not None
log_once(logger.info, "GPU does not support FP8, using Marlin FP8 kernel")
scales = scales.unsqueeze(0)
if scales.shape[1] == 1:
out_features, in_features = qweight.shape
scales = scales.repeat(1, out_features)
qweight, scales = repack_fp8_for_marlin(qweight, scales)
in_features = qweight.shape[0] * MARLIN_TILE_SIZE
out_features = scales.shape[1]
_check_valid_shape(in_features=in_features, out_features=out_features)
self.qweight = qweight
self.scales = scales
        self.bias = bias
self.workspace = torch.zeros(
out_features // 64 * 16, dtype=torch.int, device=qweight.device
)
@classmethod
def from_unquant(cls, weight, bias, dtype):
qweight, scales = fp8_quantize(weight)
return cls(qweight=qweight, scales=scales.to(dtype), bias=bias)
@classmethod
def from_fp8(cls, weight, scale, _input_scale, bias, dtype):
return cls(qweight=weight, scales=scale.to(dtype), bias=bias)
def forward(self, A: torch.Tensor) -> torch.Tensor:
assert marlin_kernels is not None
A_flat = A.view(-1, A.shape[-1])
C = marlin_kernels.fp8_marlin_gemm(
A_flat,
self.qweight,
self.scales,
self.workspace,
8,
A_flat.shape[0],
self.scales.shape[1],
A_flat.shape[1],
)
C = C.reshape(A.shape[:-1] + (self.scales.shape[1],))
if self.bias is not None:
C += self.bias
return C
def pack_fp8_as_int32(fp8_tensor: torch.Tensor) -> torch.Tensor:
"""
Repack FP8 weights to gptq format (packed int32 elements).
"""
assert fp8_tensor.dtype == torch.float8_e4m3fn
if fp8_tensor.shape[0] % 4 != 0:
raise ValueError(
            f"Leading tensor dimension is not divisible by 4: {fp8_tensor.shape[0]}"
)
# Reshape to prepare for packing
reshaped = fp8_tensor.reshape(-1, 4, *fp8_tensor.shape[1:])
# Convert fp8 to uint8 (byte) representation
byte_tensor = reshaped.view(torch.uint8)
# Pack 4 uint8 values into one int32
packed = torch.zeros(
fp8_tensor.shape[0] // 4,
fp8_tensor.shape[1],
dtype=torch.int32,
device=fp8_tensor.device,
)
for i in range(4):
packed.bitwise_or_(byte_tensor[:, i].to(torch.int32) << i * 8)
return packed
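The byte-packing step above (four consecutive rows into one little-endian int32 per column) can be sketched in pure Python; this mirrors the torch loop for illustration:

```python
def pack_bytes_as_int32(rows):
    """Pure-Python sketch of the packing in pack_fp8_as_int32: every 4
    consecutive byte rows along the leading dimension are packed into one
    32-bit word per column, with the first row in the lowest byte."""
    assert len(rows) % 4 == 0
    n_cols = len(rows[0])
    packed = []
    for r in range(0, len(rows), 4):
        word_row = []
        for c in range(n_cols):
            word = 0
            for i in range(4):
                # Row r+i lands in byte i (little-endian), as in the
                # torch bitwise_or_ loop above.
                word |= rows[r + i][c] << (i * 8)
            word_row.append(word)
        packed.append(word_row)
    return packed
```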
def repack_fp8_for_marlin(weight: torch.Tensor, scales: torch.Tensor):
"""
Repack FP8 tensor for GPTQ-Marlin.
"""
out_features, in_features = weight.shape
    # Torch linear layer weights have shape [out_features, in_features];
    # GPTQ-quantized weights use [in_features / pack_factor, out_features],
    # so transpose before packing.
qweight = pack_fp8_as_int32(weight.t())
perm = torch.empty(0, dtype=torch.int, device=qweight.device)
repacked = marlin_kernels.gptq_marlin_repack(
qweight, perm, in_features, out_features, 8
)
scales = permute_scales(scales)
return repacked, scales
|
text-generation-inference/server/text_generation_server/layers/marlin/fp8.py/0
|
{
"file_path": "text-generation-inference/server/text_generation_server/layers/marlin/fp8.py",
"repo_id": "text-generation-inference",
"token_count": 1787
}
| 241
|
# coding=utf-8
# Copyright 2022 HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.distributed
from torch import nn
from transformers.activations import ACT2FN
from transformers.configuration_utils import PretrainedConfig
from typing import Optional, List, Tuple, Any
from text_generation_server.utils.import_utils import SYSTEM
if SYSTEM != "ipex":
from vllm.model_executor.layers.fused_moe import fused_moe
from text_generation_server.layers.attention import (
paged_attention,
attention,
reshape_and_cache,
Seqlen,
)
from text_generation_server.layers import (
FastLinear,
TensorParallelRowLinear,
TensorParallelColumnLinear,
TensorParallelEmbedding,
SpeculativeHead,
get_linear,
)
from text_generation_server.layers.rotary import (
PositionRotaryEmbedding,
)
from text_generation_server.layers.layernorm import (
FastLayerNorm,
)
class DbrxAttentionConfig(PretrainedConfig):
def __init__(
self,
attn_pdrop: float = 0,
clip_qkv: Optional[float] = None,
kv_n_heads: int = 1,
rope_theta: float = 10000.0,
**kwargs: Any,
):
super().__init__(**kwargs)
self.attn_pdrop = attn_pdrop
self.clip_qkv = clip_qkv
self.kv_n_heads = kv_n_heads
self.rope_theta = rope_theta
for k in ["model_type"]:
if k in kwargs:
kwargs.pop(k)
if len(kwargs) != 0:
raise ValueError(f"Found unknown {kwargs=}")
class DbrxFFNConfig(PretrainedConfig):
def __init__(
self,
ffn_act_fn: Optional[dict] = None,
ffn_hidden_size: int = 3584,
moe_num_experts: int = 4,
moe_top_k: int = 1,
moe_jitter_eps: Optional[float] = None,
moe_loss_weight: float = 0.01,
moe_normalize_expert_weights: Optional[float] = 1,
uniform_expert_assignment: bool = False,
**kwargs: Any,
):
super().__init__()
if ffn_act_fn is None:
ffn_act_fn = {"name": "silu"}
self.ffn_act_fn = ffn_act_fn
self.ffn_hidden_size = ffn_hidden_size
self.moe_num_experts = moe_num_experts
self.moe_top_k = moe_top_k
self.moe_jitter_eps = moe_jitter_eps
self.moe_loss_weight = moe_loss_weight
self.moe_normalize_expert_weights = moe_normalize_expert_weights
self.uniform_expert_assignment = uniform_expert_assignment
if uniform_expert_assignment:
raise ValueError("`uniform_expert_assignment = True` is not supported")
for k in ["model_type"]:
if k in kwargs:
kwargs.pop(k)
if len(kwargs) != 0:
raise ValueError(f"Found unknown {kwargs=}")
class DbrxConfig(PretrainedConfig):
attribute_map = {
"hidden_size": "d_model",
"num_attention_heads": "n_heads",
"num_hidden_layers": "n_layers",
}
def __init__(
self,
d_model: int = 2048,
n_heads: int = 16,
n_layers: int = 24,
max_seq_len: int = 2048,
vocab_size: int = 32000,
resid_pdrop: float = 0.0,
emb_pdrop: float = 0.0,
attn_config: Optional[DbrxAttentionConfig] = None,
ffn_config: Optional[DbrxFFNConfig] = None,
use_cache: bool = True,
initializer_range: float = 0.02,
output_router_logits: bool = False,
router_aux_loss_coef: float = 0.05,
**kwargs: Any,
):
if attn_config is None:
self.attn_config = DbrxAttentionConfig()
elif isinstance(attn_config, dict):
self.attn_config = DbrxAttentionConfig(**attn_config)
else:
self.attn_config = attn_config
if ffn_config is None:
self.ffn_config = DbrxFFNConfig()
elif isinstance(ffn_config, dict):
self.ffn_config = DbrxFFNConfig(**ffn_config)
else:
self.ffn_config = ffn_config
self.d_model = d_model
self.n_heads = n_heads
self.n_layers = n_layers
self.max_seq_len = max_seq_len
self.vocab_size = vocab_size
self.resid_pdrop = resid_pdrop
self.emb_pdrop = emb_pdrop
self.use_cache = use_cache
self.initializer_range = initializer_range
self.output_router_logits = output_router_logits
self.router_aux_loss_coef = router_aux_loss_coef
tie_word_embeddings = kwargs.pop("tie_word_embeddings", False)
if tie_word_embeddings:
raise ValueError("tie_word_embeddings is not supported for Dbrx models.")
super().__init__(
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
@property
def num_key_value_heads(self):
        # We can't use the attribute map, since the number of KV
        # heads is not a top-level config attribute.
return self.attn_config.kv_n_heads
def promote_scalar(x: torch.Tensor) -> torch.Tensor:
return x.view(1) if len(x.size()) == 0 else x
def load_attention(config, prefix, weights):
return TensorParallelColumnLinear.load_qkv(
config,
prefix=f"{prefix}.Wqkv",
weights=weights,
bias=False,
num_heads=config.n_heads,
num_key_value_heads=config.attn_config.kv_n_heads,
)
def _load_experts(config, prefix, weights):
world_size = weights.process_group.size()
rank = weights.process_group.rank()
assert (
config.ffn_config.ffn_hidden_size % world_size == 0
), f"The chosen size {config.ffn_config.ffn_hidden_size} is not compatible with sharding on {world_size} shards"
expert_size = config.ffn_config.ffn_hidden_size
block_size = expert_size // world_size
start = rank * block_size
stop = (rank + 1) * block_size
tensor = torch.empty(
(config.ffn_config.moe_num_experts * block_size, config.d_model),
dtype=weights.dtype,
device=weights.device,
)
slice_ = weights._get_slice(f"{prefix}")
for i in range(config.ffn_config.moe_num_experts):
offset = i * expert_size
expert_slice = slice_[start + offset : stop + offset]
tensor[i * block_size : (i + 1) * block_size] = expert_slice.to(
dtype=weights.dtype
).to(device=weights.device)
return tensor
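The shard-offset arithmetic in `_load_experts` (each rank takes one contiguous `block_size` slice out of every expert's rows) can be sketched in a few lines of Python, for illustration only:

```python
def expert_shard_slices(num_experts, expert_size, world_size, rank):
    """Row slices of the flattened expert weight belonging to `rank`,
    mirroring the offsets in _load_experts: each expert contributes one
    contiguous block of expert_size // world_size rows."""
    assert expert_size % world_size == 0
    block = expert_size // world_size
    start, stop = rank * block, (rank + 1) * block
    return [(start + i * expert_size, stop + i * expert_size)
            for i in range(num_experts)]
```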
def _load_experts_quantized(config, prefix, weights, cls):
world_size = weights.process_group.size()
rank = weights.process_group.rank()
assert (
config.ffn_config.ffn_hidden_size % world_size == 0
), f"The chosen size {config.ffn_config.ffn_hidden_size} is not compatible with sharding on {world_size} shards"
expert_size = config.ffn_config.ffn_hidden_size
block_size = expert_size // world_size
start = rank * block_size
stop = (rank + 1) * block_size
slice_ = weights._get_slice(f"{prefix}")
experts = []
for i in range(config.ffn_config.moe_num_experts):
if config.quantize in ["gptq", "awq"]:
raise NotImplementedError(
"Dbrx does not support gptq/awq quantization yet."
)
else:
offset = i * expert_size
expert_slice = (
slice_[start + offset : stop + offset]
.to(dtype=weights.dtype)
.to(device=weights.device)
)
if cls == TensorParallelRowLinear:
expert_slice = expert_slice.t().contiguous()
linear = get_linear(expert_slice, None)
experts.append(cls(linear, weights.process_group))
else:
linear = get_linear(expert_slice, None)
experts.append(cls(linear))
return experts
class DbrxAttention(torch.nn.Module):
def __init__(
self,
prefix: str,
config,
weights,
):
super().__init__()
self.clip_qkv = config.attn_config.clip_qkv
self.num_heads = config.n_heads
self.hidden_size = config.d_model
self.head_size = self.hidden_size // self.num_heads
self.rotary_emb = PositionRotaryEmbedding.static(
config=config,
dim=self.head_size,
base=config.attn_config.rope_theta,
device=weights.device,
)
self.softmax_scale = self.head_size**-0.5
if self.num_heads % weights.process_group.size() != 0:
raise ValueError(
f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
f"and `num_shards`: {weights.process_group.size()})."
)
self.num_heads = self.num_heads // weights.process_group.size()
self.num_key_value_heads = (
config.attn_config.kv_n_heads // weights.process_group.size()
)
self.query_key_value = load_attention(config, prefix, weights)
self.o_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.out_proj",
weights=weights,
bias=False,
)
self.num_groups = self.num_heads // self.num_key_value_heads
self.kv_head_mapping = torch.arange(
0, self.num_key_value_heads, dtype=torch.int32, device=weights.device
).repeat_interleave(self.num_groups)
def forward(
self,
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
):
qkv = self.query_key_value(hidden_states)
if self.clip_qkv is not None:
qkv = qkv.clamp(min=-self.clip_qkv, max=self.clip_qkv)
query, kv = qkv.split(
[
self.head_size * self.num_heads,
2 * self.head_size * self.num_key_value_heads,
],
dim=1,
)
query = query.view(-1, self.num_heads, self.head_size)
kv = kv.view(-1, 2, self.num_key_value_heads, self.head_size)
self.rotary_emb(query, torch.select(kv, dim=1, index=0), cos, sin)
reshape_and_cache(kv[:, 0], kv[:, 1], kv_cache[0], kv_cache[1], slots)
# Prefill
if cu_seqlen_prefill is not None:
# flash attention
attn_output = attention(
query,
kv_cache[0],
kv_cache[1],
seqlen,
block_tables,
self.softmax_scale,
)
# Decode
else:
attn_output = paged_attention(
query,
kv_cache[0],
kv_cache[1],
self.kv_head_mapping,
self.softmax_scale,
block_tables,
seqlen,
max_s,
)
return self.o_proj(attn_output.view(-1, self.num_heads * self.head_size))
class DbrxNormAttentionNorm(nn.Module):
def __init__(
self,
prefix: str,
config,
weights,
):
super().__init__()
self.norm_1 = FastLayerNorm.load_no_bias(
prefix=f"{prefix}.norm_1", weights=weights, eps=1e-5
)
self.self_attn = DbrxAttention(
prefix=f"{prefix}.attn", config=config, weights=weights
)
self.norm_2 = FastLayerNorm.load_no_bias(
prefix=f"{prefix}.norm_2",
weights=weights,
eps=1e-5,
)
def forward(
self,
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
):
normed_hidden_states, res = self.norm_1(hidden_states, residual)
# Self Attention
attn_output = self.self_attn(
normed_hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
)
# faster post attention layer norm
normed_attn_res_output, attn_res = self.norm_2(attn_output, res)
return normed_attn_res_output, attn_res
@torch.jit.script
def select_experts(
gate_logits: torch.Tensor, top_k: int, moe_normalize_expert_weights: int
):
# all_probs: (sequence_length, n_experts) and upcast for softmax
all_probs = torch.nn.functional.softmax(gate_logits, dim=1, dtype=torch.float)
# weights, selected_experts: (sequence_length, top-k)
weights, selected_experts = torch.topk(all_probs, top_k, dim=-1)
if moe_normalize_expert_weights:
weights = weights / torch.norm(
weights, p=moe_normalize_expert_weights, dim=-1, keepdim=True
)
weights = weights.view(-1)
selected_experts = selected_experts.view(-1)
return selected_experts, weights
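`select_experts` softmaxes the gate logits and keeps the `top_k` highest-probability experts per token. A single-token, pure-Python reference sketch (illustrative only; it assumes `moe_normalize_expert_weights` is falsy, so the selected weights are raw softmax probabilities):

```python
import math

def select_experts_ref(gate_logits, top_k):
    # Softmax over experts, then keep the top_k highest-probability ones.
    # Pure-Python reference for a single token (the torch version is batched).
    exps = [math.exp(g) for g in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    selected = ranked[:top_k]
    weights = [probs[i] for i in selected]
    return selected, weights

selected, weights = select_experts_ref([1.0, 3.0, 2.0, 0.5], top_k=2)
print(selected)  # [1, 2]
```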
@torch.jit.script
def round_up(x: torch.Tensor, value: int):
return torch.div(x + (value - 1), value, rounding_mode="trunc") * value
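`round_up` rounds each element up to the nearest multiple of `value` via truncating division. A torch-free scalar sketch of the same arithmetic (illustrative, not part of this file):

```python
def round_up(x: int, value: int) -> int:
    # Smallest multiple of `value` that is >= x, using the same
    # (x + value - 1) // value trick as the torch.jit version.
    return (x + value - 1) // value * value

print(round_up(5, 8))   # 8
print(round_up(8, 8))   # 8
print(round_up(9, 8))   # 16
```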
class BlockSparseMoE(nn.Module):
def __init__(self, prefix, config: DbrxConfig, weights):
super().__init__()
self.moe_normalize_expert_weights = (
config.ffn_config.moe_normalize_expert_weights
)
self.hidden_dim = config.d_model
self.ffn_dim = config.ffn_config.ffn_hidden_size // weights.process_group.size()
self.num_experts = config.ffn_config.moe_num_experts
self.top_k = config.ffn_config.moe_top_k
act = config.ffn_config.ffn_act_fn["name"]
if "gelu" in act:
self.act = lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
elif "silu" in act:
self.act = torch.nn.functional.silu
else:
self.act = ACT2FN[act]
# gating
self.gate = FastLinear.load(
config, f"{prefix}.router.layer", weights, bias=False
)
# merged expert weights, all of size (n_experts * ffn_dim, hidden_dim)
w1 = _load_experts(config, f"{prefix}.experts.mlp.w1", weights).view(
self.num_experts, self.ffn_dim, self.hidden_dim
)
v1 = _load_experts(config, f"{prefix}.experts.mlp.v1", weights).view(
self.num_experts, self.ffn_dim, self.hidden_dim
)
self.wv1 = torch.cat([w1, v1], dim=1)
self.w2 = (
_load_experts(config, f"{prefix}.experts.mlp.w2", weights)
.view(self.num_experts, self.ffn_dim, self.hidden_dim)
.transpose(1, 2)
.contiguous()
)
self.process_group = weights.process_group
def forward(self, x: torch.Tensor) -> torch.Tensor:
# router_logits: (num_tokens, n_experts)
router_logits = self.gate(x)
out = fused_moe(
x,
self.wv1,
self.w2,
router_logits,
self.top_k,
renormalize=self.moe_normalize_expert_weights,
inplace=True,
)
# Reduce sum
if self.process_group.size() > 1:
torch.distributed.all_reduce(out, group=self.process_group)
return out.view(*x.shape)
class DenseMoE(nn.Module):
def __init__(self, prefix, config: DbrxConfig, weights):
super().__init__()
self.moe_normalize_expert_weights = (
config.ffn_config.moe_normalize_expert_weights
)
self.hidden_dim = config.d_model
self.ffn_dim = config.ffn_config.ffn_hidden_size // weights.process_group.size()
self.num_experts = config.ffn_config.moe_num_experts
self.top_k = config.ffn_config.moe_top_k
act = config.ffn_config.ffn_act_fn["name"]
if "gelu" in act:
self.act = lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
elif "silu" in act:
self.act = torch.nn.functional.silu
else:
self.act = ACT2FN[act]
# gating
self.gate = FastLinear.load(
config, f"{prefix}.router.layer", weights, bias=False
)
self.w1 = _load_experts_quantized(
config,
prefix=f"{prefix}.experts.mlp.w1",
weights=weights,
cls=TensorParallelColumnLinear,
)
self.w2 = _load_experts_quantized(
config,
prefix=f"{prefix}.experts.mlp.w2",
weights=weights,
cls=TensorParallelRowLinear,
)
self.v1 = _load_experts_quantized(
config,
prefix=f"{prefix}.experts.mlp.v1",
weights=weights,
cls=TensorParallelColumnLinear,
)
self.process_group = weights.process_group
def forward(self, x: torch.Tensor) -> torch.Tensor:
"""
x: (sequence_length, model_dim)
gate_logits: (sequence_length, n_experts)
"""
# optional reshape
input_shape = x.shape
x = x.view(-1, input_shape[-1])
# gate_logits: (sequence_length, n_experts)
gate_logits = self.gate(x)
# all_probs: (sequence_length, n_experts) and upcast for softmax
weights = torch.nn.functional.softmax(gate_logits, dim=1, dtype=torch.float)
if self.top_k < self.num_experts:
_, not_selected_experts = torch.topk(
weights,
self.num_experts - self.top_k,
largest=False,
sorted=False,
dim=1,
)
# Mask not selected experts
weights.scatter_(1, not_selected_experts, 0)
# Re-normalize
if self.moe_normalize_expert_weights:
weights = weights / torch.norm(
weights, p=self.moe_normalize_expert_weights, dim=-1, keepdim=True
)
weights = weights.to(x.dtype)
# Final output tensor
out = x.new_zeros(x.shape[0], self.hidden_dim)
for i in range(self.num_experts):
h = self.act(self.w1[i](x)) * self.v1[i](x)
h = self.w2[i](h, reduce=False)
# Add expert output to out with masking
out += h * weights[:, i].view(-1, 1)
# Reduce sum
if self.process_group.size() > 1:
torch.distributed.all_reduce(out, group=self.process_group)
return out
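The `DenseMoE.forward` above runs every expert on the input and mixes their outputs with the (masked) routing weights. A minimal pure-Python sketch of that combination step, using toy stand-in experts (hypothetical, for illustration only):

```python
def dense_moe_ref(x, experts, weights):
    # Every expert runs on the input; outputs are combined with the
    # (possibly zeroed) routing weights, mirroring the DenseMoE loop.
    out = [0.0] * len(x)
    for expert_fn, w in zip(experts, weights):
        h = expert_fn(x)
        out = [o + w * v for o, v in zip(out, h)]
    return out

# Two toy "experts": one doubles, one adds one.
experts = [lambda v: [2 * e for e in v], lambda v: [e + 1 for e in v]]
print(dense_moe_ref([1.0, 2.0], experts, [0.75, 0.25]))  # [2.0, 3.75]
```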
class DbrxLayer(nn.Module):
def __init__(self, prefix: str, layer_id, config, weights):
super().__init__()
prefix = f"{prefix}.blocks.{layer_id}"
self.attn = DbrxNormAttentionNorm(
prefix=f"{prefix}.norm_attn_norm", config=config, weights=weights
)
moe_cls = BlockSparseMoE if config.quantize is None else DenseMoE
self.moe = moe_cls(f"{prefix}.ffn", config, weights)
def forward(
self,
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
):
# Self Attention
attn_output, attn_res = self.attn(
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
)
moe_output = self.moe(attn_output)
return moe_output, attn_res
class DbrxModel(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
self.embed_tokens = TensorParallelEmbedding(
prefix=f"{prefix}.wte", weights=weights
)
self.layers = nn.ModuleList(
[
DbrxLayer(
prefix,
layer_id,
config,
weights,
)
for layer_id in range(config.n_layers)
]
)
self.norm = FastLayerNorm.load_no_bias(
prefix=f"{prefix}.norm_f", weights=weights, eps=1e-5
)
self.head_size = self.layers[0].attn.self_attn.head_size
self.num_heads = self.layers[0].attn.self_attn.num_heads
self.num_key_value_heads = self.layers[0].attn.self_attn.num_key_value_heads
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
block_tables: torch.Tensor,
slots: torch.Tensor,
seqlen: Seqlen,
max_s: int,
) -> torch.Tensor:
hidden_states = self.embed_tokens(input_ids)
# Get rotary cos and sin for this forward
# Avoid indexing in each layer
cos, sin = self.layers[0].attn.self_attn.rotary_emb.get_cos_sin(
position_ids, max_s, hidden_states.dtype
)
residual = None
for i, layer in enumerate(self.layers):
hidden_states, residual = layer(
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache[i],
block_tables,
slots,
seqlen,
max_s,
)
hidden_states, _ = self.norm(hidden_states, residual)
return hidden_states
class FlashDbrxForCausalLM(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
if not prefix:
prefix = "transformer"
else:
prefix = f"{prefix}.transformer"
self.model = DbrxModel(prefix, config, weights)
self.lm_head = SpeculativeHead.load(
config,
prefix="lm_head",
weights=weights,
)
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
block_tables: torch.Tensor,
slots: torch.Tensor,
seqlen: Seqlen,
max_s: int,
prefill_cache_indices: Optional[torch.Tensor],
lm_head_indices: Optional[torch.Tensor] = None,
adapter_data: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
hidden_states = self.model(
input_ids,
position_ids,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
)
if lm_head_indices is not None:
hidden_states = hidden_states[lm_head_indices]
logits, speculative_logits = self.lm_head(hidden_states)
return logits, speculative_logits
# text-generation-inference/server/text_generation_server/models/custom_modeling/flash_dbrx_modeling.py
# coding=utf-8
# Copyright 2024 the HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PyTorch Idefics2 model."""
from typing import List, Optional, Tuple
import torch
import torch.utils.checkpoint
from torch import nn
import math
from transformers.activations import ACT2FN
from text_generation_server.models.custom_modeling.vlm import (
load_text_model,
)
from text_generation_server.layers.attention import Seqlen
from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask
from text_generation_server.layers import (
TensorParallelColumnLinear,
TensorParallelEmbedding,
TensorParallelRowLinear,
)
from text_generation_server.utils.weights import DefaultWeightsLoader, UnquantizedWeight
def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
"""
This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
"""
batch, num_key_value_heads, slen, head_dim = hidden_states.shape
if n_rep == 1:
return hidden_states
hidden_states = hidden_states[:, :, None, :, :].expand(
batch, num_key_value_heads, n_rep, slen, head_dim
)
return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
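`repeat_kv` duplicates each KV head `n_rep` times so grouped-query attention can line KV heads up with the query heads. A list-based sketch of the duplication order (illustrative; not the tensor implementation):

```python
def repeat_kv_lists(heads, n_rep):
    # Mirror of repeat_kv for a list of per-head items: each KV head is
    # repeated n_rep times in place, so head i of the expanded list maps
    # to KV head i // n_rep (same layout as repeat_interleave on dim=1).
    out = []
    for h in heads:
        out.extend([h] * n_rep)
    return out

print(repeat_kv_lists(["kv0", "kv1"], 3))
# ['kv0', 'kv0', 'kv0', 'kv1', 'kv1', 'kv1']
```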
class Idefics2VisionEmbeddings(nn.Module):
"""
This is a modified version of `siglip.modeling_siglip.SiglipVisionEmbeddings` to enable images of variable
resolution.
The modifications are adapted from [Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution](https://arxiv.org/abs/2307.06304)
which allows treating images in their native aspect ratio and without the need to resize them to the same
fixed size. In particular, we start from the original pre-trained SigLIP model
(which uses fixed-size square images) and adapt it by training on images of variable resolutions.
"""
def __init__(self, prefix, config, weights):
super().__init__()
self.embed_dim = config.hidden_size
self.image_size = config.image_size
self.patch_size = config.patch_size
self.patch_embedding = nn.Conv2d(
in_channels=config.num_channels,
out_channels=self.embed_dim,
kernel_size=self.patch_size,
stride=self.patch_size,
padding="valid",
)
self.patch_embedding.weight = nn.Parameter(
weights.get_tensor(f"{prefix}.patch_embedding.weight"), requires_grad=False
)
self.patch_embedding.bias = nn.Parameter(
weights.get_tensor(f"{prefix}.patch_embedding.bias"), requires_grad=False
)
self.num_patches_per_side = self.image_size // self.patch_size
self.num_patches = self.num_patches_per_side**2
self.num_positions = self.num_patches
self.position_embedding = TensorParallelEmbedding(
prefix=f"{prefix}.position_embedding", weights=weights
)
def forward(
self, pixel_values: torch.FloatTensor, patch_attention_mask: torch.BoolTensor
) -> torch.Tensor:
batch_size, _, max_im_h, max_im_w = pixel_values.shape
patch_embeds = self.patch_embedding(pixel_values)
embeddings = patch_embeds.flatten(2).transpose(1, 2)
max_nb_patches_h, max_nb_patches_w = (
max_im_h // self.patch_size,
max_im_w // self.patch_size,
)
boundaries = torch.arange(
1 / self.num_patches_per_side, 1.0, 1 / self.num_patches_per_side
)
position_ids = torch.full(
size=(batch_size, max_nb_patches_h * max_nb_patches_w), fill_value=0
)
for batch_idx, p_attn_mask in enumerate(patch_attention_mask):
nb_patches_h = p_attn_mask[:, 0].sum()
nb_patches_w = p_attn_mask[0].sum()
fractional_coords_h = torch.arange(0, 1 - 1e-6, 1 / nb_patches_h)
fractional_coords_w = torch.arange(0, 1 - 1e-6, 1 / nb_patches_w)
bucket_coords_h = torch.bucketize(
fractional_coords_h, boundaries, right=True
)
bucket_coords_w = torch.bucketize(
fractional_coords_w, boundaries, right=True
)
pos_ids = (
bucket_coords_h[:, None] * self.num_patches_per_side + bucket_coords_w
).flatten()
position_ids[batch_idx][p_attn_mask.view(-1).cpu()] = pos_ids
position_ids = position_ids.to(self.position_embedding.weight.device)
embeddings = embeddings + self.position_embedding(position_ids)
return embeddings
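The forward above buckets fractional patch coordinates into `num_patches_per_side` position buckets with `torch.bucketize(..., right=True)` over evenly spaced boundaries. For coordinates strictly inside [0, 1) this reduces to flooring `f * num_buckets`, sketched here in pure Python (an illustrative approximation, not the library call):

```python
def bucketize_fraction(fracs, num_buckets):
    # Map fractional coordinates in [0, 1) to bucket indices; with evenly
    # spaced boundaries 1/n, 2/n, ... this matches torch.bucketize with
    # right=True for values inside the interval.
    return [min(int(f * num_buckets), num_buckets - 1) for f in fracs]

print(bucketize_fraction([0.0, 0.49, 0.99], 4))  # [0, 1, 3]
```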
class Idefics2VisionAttention(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.config = config
self.embed_dim = config.hidden_size
self.num_heads = config.num_attention_heads
self.head_size = self.embed_dim // self.num_heads
if self.head_size * self.num_heads != self.embed_dim:
raise ValueError(
f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
f" {self.num_heads})."
)
self.scale = self.head_size**-0.5
self.dropout = config.attention_dropout
self.num_heads = self.num_heads // weights.process_group.size()
self.embed_dim = self.embed_dim // weights.process_group.size()
self.qkv = TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
weights=weights,
bias=True,
)
self.out_proj = TensorParallelRowLinear.load(
config=config, prefix=f"{prefix}.out_proj", weights=weights, bias=True
)
self.is_causal = False
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
) -> torch.Tensor:
batch_size, q_len, _ = hidden_states.size()
qkv = self.qkv(hidden_states)
query_states, key_states, value_states = qkv.split(
[
self.head_size * self.num_heads,
self.head_size * self.num_heads,
self.head_size * self.num_heads,
],
dim=2,
)
query_states = query_states.view(
batch_size, q_len, self.num_heads, self.head_size
).transpose(1, 2)
key_states = key_states.view(
batch_size, q_len, self.num_heads, self.head_size
).transpose(1, 2)
value_states = value_states.view(
batch_size, q_len, self.num_heads, self.head_size
).transpose(1, 2)
k_v_seq_len = key_states.shape[-2]
attn_weights = (
torch.matmul(query_states, key_states.transpose(2, 3)) * self.scale
)
if attn_weights.size() != (batch_size, self.num_heads, q_len, k_v_seq_len):
raise ValueError(
f"Attention weights should be of size {(batch_size, self.num_heads, q_len, k_v_seq_len)}, but is"
f" {attn_weights.size()}"
)
if attention_mask is not None:
if attention_mask.size() != (batch_size, 1, q_len, k_v_seq_len):
raise ValueError(
f"Attention mask should be of size {(batch_size, 1, q_len, k_v_seq_len)}, but is {attention_mask.size()}"
)
attn_weights = attn_weights + attention_mask
# upcast attention to fp32
attn_weights = nn.functional.softmax(
attn_weights, dim=-1, dtype=torch.float32
).to(query_states.dtype)
attn_weights = nn.functional.dropout(
attn_weights, p=self.dropout, training=self.training
)
attn_output = torch.matmul(attn_weights, value_states)
if attn_output.size() != (batch_size, self.num_heads, q_len, self.head_size):
raise ValueError(
f"`attn_output` should be of size {(batch_size, self.num_heads, q_len, self.head_size)}, but is"
f" {attn_output.size()}"
)
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(batch_size, q_len, self.embed_dim)
attn_output = self.out_proj(attn_output)
return attn_output
class Idefics2VisionMLP(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.config = config
self.activation_fn = ACT2FN[config.hidden_act]
self.fc1 = TensorParallelColumnLinear.load(
prefix=f"{prefix}.fc1", config=config, weights=weights, bias=True
)
self.fc2 = TensorParallelRowLinear.load(
prefix=f"{prefix}.fc2", config=config, weights=weights, bias=True
)
def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
hidden_states = self.fc1(hidden_states)
hidden_states = self.activation_fn(hidden_states)
hidden_states = self.fc2(hidden_states)
return hidden_states
class Idefics2EncoderLayer(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.embed_dim = config.hidden_size
self.self_attn = Idefics2VisionAttention(
prefix=f"{prefix}.self_attn", config=config, weights=weights
)
self.layer_norm1 = nn.LayerNorm.load(
prefix=f"{prefix}.layer_norm1", eps=config.layer_norm_eps, weights=weights
)
self.layer_norm2 = nn.LayerNorm.load(
prefix=f"{prefix}.layer_norm2", eps=config.layer_norm_eps, weights=weights
)
self.mlp = Idefics2VisionMLP(
prefix=f"{prefix}.mlp", config=config, weights=weights
)
# Copied from transformers.models.siglip.modeling_siglip.SiglipEncoderLayer.forward
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
) -> torch.Tensor:
residual = hidden_states
hidden_states = self.layer_norm1(hidden_states)
hidden_states = self.self_attn(
hidden_states=hidden_states,
attention_mask=attention_mask,
)
hidden_states = residual + hidden_states
residual = hidden_states
hidden_states = self.layer_norm2(hidden_states)
hidden_states = self.mlp(hidden_states)
hidden_states = residual + hidden_states
return hidden_states
class Idefics2Encoder(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.config = config
self.layers = nn.ModuleList(
[
Idefics2EncoderLayer(
prefix=f"{prefix}.layers.{i}", config=config, weights=weights
)
for i in range(config.num_hidden_layers)
]
)
# Ignore copy
def forward(
self,
inputs_embeds,
attention_mask: Optional[torch.Tensor] = None,
):
hidden_states = inputs_embeds
for encoder_layer in self.layers:
hidden_states = encoder_layer(
hidden_states,
attention_mask,
)
return hidden_states
class Idefics2VisionTransformer(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.config = config
self.embeddings = Idefics2VisionEmbeddings(
prefix=f"{prefix}.embeddings", config=config, weights=weights
)
self.encoder = Idefics2Encoder(
prefix=f"{prefix}.encoder", config=config, weights=weights
)
self.post_layernorm = nn.LayerNorm.load(
prefix=f"{prefix}.post_layernorm",
weights=weights,
eps=config.layer_norm_eps,
)
def forward(
self,
pixel_values,
patch_attention_mask: Optional[torch.BoolTensor] = None,
):
batch_size = pixel_values.size(0)
if patch_attention_mask is None:
patch_size = self.config.patch_size
patch_attention_mask = torch.ones(
(
batch_size,
pixel_values.size(2) // patch_size,
pixel_values.size(3) // patch_size,
)
)
patch_attention_mask = patch_attention_mask.to(
dtype=torch.bool, device=pixel_values.device
)
hidden_states = self.embeddings(
pixel_values=pixel_values, patch_attention_mask=patch_attention_mask
)
patch_attention_mask = patch_attention_mask.view(batch_size, -1)
# The call to `_upad_input` in `_flash_attention_forward` is expensive,
# so when `patch_attention_mask` is full of 1s (i.e. attending to the whole sequence)
# we avoid passing it, which is equivalent to attending to the full sequence
if not torch.any(~patch_attention_mask):
patch_attention_mask = None
else:
patch_attention_mask = _prepare_4d_attention_mask(
patch_attention_mask, hidden_states.dtype
)
encoder_outputs = self.encoder(
inputs_embeds=hidden_states,
attention_mask=patch_attention_mask,
)
last_hidden_state = encoder_outputs
last_hidden_state = self.post_layernorm(last_hidden_state)
return last_hidden_state
class Idefics2MLP(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
act = config.text_config.hidden_act
self.act = (
ACT2FN[act]
if "gelu" not in act
else lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
)
self.gate_up_proj = TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"],
weights=weights,
dim=0,
bias=False,
)
self.down_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.down_proj",
weights=weights,
bias=False,
)
def forward(self, hidden_states):
start_shape = hidden_states.shape[:-1]
gate_up_states = self.gate_up_proj(hidden_states)
intermediate_size = gate_up_states.shape[-1] // 2
gate_up_states = gate_up_states.view(-1, 2, intermediate_size)
return self.down_proj(
self.act(gate_up_states[:, 0]) * gate_up_states[:, 1]
).view(*start_shape, -1)
class Idefics2RMSNorm(nn.Module):
def __init__(self, prefix, weights, eps):
"""
Idefics2RMSNorm is equivalent to T5LayerNorm
"""
super().__init__()
self.weight = nn.Parameter(
weights.get_tensor(f"{prefix}.weight"), requires_grad=False
)
self.variance_epsilon = eps
def forward(self, hidden_states):
input_dtype = hidden_states.dtype
hidden_states = hidden_states.to(torch.float32)
variance = hidden_states.pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
return self.weight * hidden_states.to(input_dtype)
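`Idefics2RMSNorm` scales each feature vector by the reciprocal root-mean-square of its elements, with no mean subtraction (unlike LayerNorm), then applies the learned per-feature weight. A scalar-list sketch of the same computation (illustrative, not part of this file):

```python
import math

def rms_norm_ref(x, weight, eps=1e-6):
    # variance here is the mean of squares (no mean subtraction);
    # each element is scaled by 1/sqrt(variance + eps), then by the
    # learned per-feature weight.
    variance = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(variance + eps)
    return [w * v * inv_rms for w, v in zip(weight, x)]

out = rms_norm_ref([3.0, 4.0], [1.0, 1.0])
print(out)  # ~[0.8485, 1.1314]
```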
class Idefics2PerceiverAttention(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.layer_idx = None
self.hidden_size = config.text_config.hidden_size
self.num_heads = config.perceiver_config.resampler_n_heads
self.head_size = config.perceiver_config.resampler_head_dim
self.num_key_value_heads = config.perceiver_config.num_key_value_heads
self.num_key_value_groups = self.num_heads // self.num_key_value_heads
self.attention_dropout = config.perceiver_config.attention_dropout
self.num_heads = self.num_heads // weights.process_group.size()
self.num_key_value_heads = (
self.num_key_value_heads // weights.process_group.size()
)
self.q_proj = TensorParallelColumnLinear.load(
config,
prefix=f"{prefix}.q_proj",
weights=weights,
bias=False,
)
self.kv = TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
weights=weights,
bias=False,
)
self.o_proj = TensorParallelRowLinear.load(
config=config, prefix=f"{prefix}.o_proj", weights=weights, bias=False
)
self.is_causal = False
def forward(
self,
latents: torch.Tensor,
context: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
bsz, q_len, _ = latents.size()
kv_seq_len = q_len + context.size()[1]
hidden_states = torch.concat([context, latents], dim=-2)
query_states = self.q_proj(latents)
kv = self.kv(hidden_states)
key_states, value_states = kv.split(
[
self.head_size * self.num_key_value_heads,
self.head_size * self.num_key_value_heads,
],
dim=2,
)
query_states = query_states.view(
bsz, q_len, self.num_heads, self.head_size
).transpose(1, 2)
key_states = key_states.view(
bsz, kv_seq_len, self.num_key_value_heads, self.head_size
).transpose(1, 2)
value_states = value_states.view(
bsz, kv_seq_len, self.num_key_value_heads, self.head_size
).transpose(1, 2)
# repeat k/v heads if n_kv_heads < n_heads
key_states = repeat_kv(key_states, self.num_key_value_groups)
value_states = repeat_kv(value_states, self.num_key_value_groups)
attn_weights = torch.matmul(
query_states, key_states.transpose(2, 3)
) / math.sqrt(self.head_size)
if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
raise ValueError(
f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
f" {attn_weights.size()}"
)
if attention_mask is not None:
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
raise ValueError(
f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
)
attn_weights = attn_weights + attention_mask
# upcast attention to fp32
attn_weights = nn.functional.softmax(
attn_weights, dim=-1, dtype=torch.float32
).to(query_states.dtype)
attn_output = torch.matmul(attn_weights, value_states)
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_size):
raise ValueError(
f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_size)}, but is"
f" {attn_output.size()}"
)
attn_output = attn_output.transpose(1, 2).contiguous()
attn_output = attn_output.reshape(bsz, q_len, self.num_heads * self.head_size)
attn_output = self.o_proj(attn_output)
return attn_output
class Idefics2PerceiverLayer(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.hidden_size = config.text_config.hidden_size
self.n_latents = config.perceiver_config.resampler_n_latents
self.depth = config.perceiver_config.resampler_depth
self.rms_norm_eps = config.text_config.rms_norm_eps
self.input_latents_norm = Idefics2RMSNorm(
prefix=f"{prefix}.input_latents_norm",
weights=weights,
eps=self.rms_norm_eps,
)
self.input_context_norm = Idefics2RMSNorm(
prefix=f"{prefix}.input_context_norm",
weights=weights,
eps=self.rms_norm_eps,
)
self.self_attn = Idefics2PerceiverAttention(
prefix=f"{prefix}.self_attn", config=config, weights=weights
)
self.post_attention_layernorm = Idefics2RMSNorm(
prefix=f"{prefix}.post_attention_layernorm",
weights=weights,
eps=self.rms_norm_eps,
)
self.mlp = Idefics2MLP(prefix=f"{prefix}.mlp", config=config, weights=weights)
def forward(
self,
latents: torch.Tensor,
context: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
):
"""
Args:
latents (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
context (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, sequence_length)` where padding elements are indicated by 0.
"""
residual = latents
latents = self.input_latents_norm(latents)
context = self.input_context_norm(context)
latents = self.self_attn(
latents=latents,
context=context,
attention_mask=attention_mask,
)
latents = residual + latents
residual = latents
latents = self.post_attention_layernorm(latents)
latents = self.mlp(latents)
latents = residual + latents
return latents
class Idefics2PerceiverResampler(nn.Module):
def __init__(self, prefix, config, weights) -> None:
super().__init__()
self.hidden_size = config.text_config.hidden_size
self.hidden_act = config.perceiver_config.hidden_act
self.n_latents = config.perceiver_config.resampler_n_latents
self.depth = config.perceiver_config.resampler_depth
self.rms_norm_eps = config.text_config.rms_norm_eps
# Create Latents for Perceiver
self.latents = weights.get_tensor(f"{prefix}.latents")
# Create Transformer Blocks
self.layers = nn.ModuleList(
[
Idefics2PerceiverLayer(
prefix=f"{prefix}.layers.{idx}", config=config, weights=weights
)
for idx in range(self.depth)
]
)
self.norm = Idefics2RMSNorm(
prefix=f"{prefix}.norm",
weights=weights,
eps=config.text_config.rms_norm_eps,
)
def forward(
self,
context: torch.Tensor,
attention_mask,
) -> torch.Tensor:
# seq embed -> bsz seq embed
latents = self.latents.unsqueeze(0).expand(
(context.shape[0], *self.latents.size())
)
latent_attention_mask = torch.ones(
(attention_mask.size(0), latents.size(1)),
dtype=attention_mask.dtype,
device=attention_mask.device,
)
attention_mask = torch.cat([attention_mask, latent_attention_mask], dim=-1)
attention_mask = _prepare_4d_attention_mask(
attention_mask, latents.dtype, tgt_len=self.n_latents
)
compressed_context = latents
for perceiver_layer in self.layers:
compressed_context = perceiver_layer(
compressed_context,
context,
attention_mask=attention_mask,
)
compressed_context = self.norm(compressed_context)
return compressed_context
class Idefics2Connector(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
self.modality_projection = Idefics2MLP(
prefix=f"{prefix}.modality_projection", config=config, weights=weights
)
self.perceiver_resampler = Idefics2PerceiverResampler(
prefix=f"{prefix}.perceiver_resampler", config=config, weights=weights
)
def forward(self, image_hidden_states, attention_mask):
image_hidden_states = self.modality_projection(image_hidden_states)
image_hidden_states = self.perceiver_resampler(
context=image_hidden_states, attention_mask=attention_mask
)
return image_hidden_states
class Idefics2ForConditionalGeneration(nn.Module):
def __init__(self, prefix, config, weights):
super().__init__()
config.vision_config.quantize = None
config.vision_config.speculator = config.speculator
config.text_config.quantize = config.quantize
config.text_config.speculator = config.speculator
vision_config = config.vision_config
self.text_model = load_text_model(
prefix="model" if not prefix else f"{prefix}.model",
config=config.text_config,
weights=weights,
name="text_model",
)
self.dtype = weights.dtype
# The vision and connector models are not quantized.
with weights.use_loader(DefaultWeightsLoader(UnquantizedWeight)):
self.vision_model = Idefics2VisionTransformer(
prefix=(
f"{prefix}.model.vision_model" if prefix else "model.vision_model"
),
config=vision_config,
weights=weights,
)
config.quantize = None
self.connector = Idefics2Connector(
prefix=f"{prefix}.model.connector" if prefix else "model.connector",
config=config,
weights=weights,
)
self.config = config
self.image_seq_len = config.perceiver_config.resampler_n_latents
self.image_token_id = config.image_token_id
self.pad_token_id = (
config.pad_token_id if config.pad_token_id is not None else -1
)
def _merge_input_ids_with_image_features(
self,
input_ids: torch.Tensor,
inputs_embeds: torch.Tensor,
image_features: torch.Tensor,
):
"""In place merges in vision_embeddings with inputs_embeds."""
# mask = input_ids == self.config.image_token_index
mask = input_ids == self.config.image_token_id
# Let's pray we have enabled enough slots !
inputs_embeds[mask] = image_features.view(-1, image_features.shape[-1])
return inputs_embeds
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
block_tables: torch.Tensor,
slots: torch.Tensor,
seqlen: Seqlen,
max_s: int,
prefill_cache_indices: Optional[torch.Tensor],
lm_head_indices: Optional[torch.Tensor] = None,
pixel_values: torch.FloatTensor = None,
pixel_attention_mask: Optional[torch.BoolTensor] = None,
# Unused here
image_sizes: Optional[torch.Tensor] = None,
adapter_data: Optional[torch.Tensor] = None,
):
inputs_embeds = self.text_model.embed_tokens(input_ids)
if pixel_values is not None:
batch_size, num_images, num_channels, height, width = pixel_values.shape
all_states = []
all_pixel_values = pixel_values
all_pixel_mask = pixel_attention_mask
for i in range(batch_size):
pixel_values = all_pixel_values.to(
dtype=self.dtype
) # fp16 compatibility
pixel_values = pixel_values[i : i + 1]
pixel_values = pixel_values.view(num_images, *pixel_values.shape[2:])
# Remove padding images - padding images are full 0.
nb_values_per_image = pixel_values.shape[1:].numel()
real_images_inds = (pixel_values == 0.0).sum(
dim=(-1, -2, -3)
) != nb_values_per_image
pixel_values = pixel_values[real_images_inds].contiguous()
# Handle the vision attention mask
if pixel_attention_mask is None:
pixel_attention_mask = torch.ones(
size=(
pixel_values.size(0),
pixel_values.size(2),
pixel_values.size(3),
),
dtype=torch.bool,
device=pixel_values.device,
)
else:
                    # Remove padding images from the mask
pixel_attention_mask = all_pixel_mask[i : i + 1]
pixel_attention_mask = pixel_attention_mask.view(
1 * num_images, *pixel_attention_mask.shape[2:]
)
pixel_attention_mask = pixel_attention_mask[
real_images_inds
].contiguous()
patch_size = self.config.vision_config.patch_size
patches_subgrid = pixel_attention_mask.unfold(
dimension=1, size=patch_size, step=patch_size
)
patches_subgrid = patches_subgrid.unfold(
dimension=2, size=patch_size, step=patch_size
)
patch_attention_mask = (patches_subgrid.sum(dim=(-1, -2)) > 0).bool()
# Get sequence from the vision encoder
image_hidden_states = self.vision_model(
pixel_values=pixel_values,
patch_attention_mask=patch_attention_mask,
)
# Modality projection & resampling
image_hidden_states = self.connector(
image_hidden_states,
attention_mask=patch_attention_mask.view(pixel_values.size(0), -1),
)
all_states.append(image_hidden_states)
image_hidden_states = torch.stack(all_states, dim=0)
# When we generate, we don't want to replace the potential image_token_id that we generated by images
# that simply don't exist
inputs_embeds = self._merge_input_ids_with_image_features(
input_ids, inputs_embeds, image_hidden_states
)
hidden_states = self.text_model.model(
inputs_embeds=inputs_embeds,
position_ids=position_ids,
cu_seqlen_prefill=cu_seqlen_prefill,
kv_cache=kv_cache,
block_tables=block_tables,
slots=slots,
seqlen=seqlen,
max_s=max_s,
true_max_s=max_s,
prefill_cache_indices=None,
adapter_data=adapter_data,
)
if lm_head_indices is not None:
hidden_states = hidden_states[lm_head_indices]
logits, speculative_logits = self.text_model.lm_head(hidden_states)
return logits, speculative_logits
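The `_merge_input_ids_with_image_features` step above relies on a mask-scatter trick: every position holding the image token id receives one flattened image embedding. A self-contained sketch (the token id is a hypothetical placeholder; the merge requires exactly as many image tokens as feature vectors):

```python
import torch

IMAGE_TOKEN_ID = 32001  # hypothetical id, for illustration only

def merge_image_features(input_ids, inputs_embeds, image_features):
    """Sketch of the mask-scatter merge: positions equal to the image
    token id are overwritten, in order, with the image embeddings."""
    mask = input_ids == IMAGE_TOKEN_ID
    inputs_embeds[mask] = image_features.view(-1, image_features.shape[-1])
    return inputs_embeds

input_ids = torch.tensor([5, IMAGE_TOKEN_ID, IMAGE_TOKEN_ID, 9])
embeds = torch.zeros(4, 8)
feats = torch.ones(2, 8)
out = merge_image_features(input_ids, embeds, feats)
# rows 1 and 2 now hold the image features; rows 0 and 3 are untouched
```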
text-generation-inference/server/text_generation_server/models/custom_modeling/idefics2.py
from contextlib import nullcontext
import math
import os
import time
import torch
import torch.distributed
import numpy as np
from loguru import logger
from dataclasses import dataclass
from opentelemetry import trace
from transformers import (
PreTrainedTokenizerBase,
AutoConfig,
AutoTokenizer,
GenerationConfig,
)
from typing import Any, ContextManager, Iterable, Optional, Tuple, List, Type, Dict
from text_generation_server.adapters import AdapterBatchData, AdapterBatchMetadata
from huggingface_hub.constants import HUGGINGFACE_HUB_CACHE
from text_generation_server.utils.chunks import concat_text_chunks
from text_generation_server.utils.import_utils import SYSTEM
from text_generation_server.models import Model
from text_generation_server.utils.log import log_master
from text_generation_server.utils.tokens import batch_top_tokens
from text_generation_server.utils.speculate import get_speculate
from text_generation_server.utils import (
initialize_torch_distributed,
weight_files,
Weights,
)
from text_generation_server.models.types import (
Batch,
Tokens,
Generation,
GeneratedText,
)
from text_generation_server.pb import generate_pb2
from text_generation_server.models.globals import (
MEM_POOL,
ATTENTION,
BLOCK_SIZE,
CUDA_GRAPHS,
TGI_WIGGLE_ROOM,
get_adapter_to_index,
)
from text_generation_server.layers.attention import Seqlen
from text_generation_server.utils import StoppingCriteria, HeterogeneousNextTokenChooser
from text_generation_server.utils.dist import MEMORY_FRACTION
from text_generation_server.utils.quantization import get_loader
from text_generation_server.utils.segments import SegmentConcatBuilder, find_segments
from text_generation_server.utils.import_utils import (
empty_cache,
synchronize,
get_free_memory,
)
tracer = trace.get_tracer(__name__)
# Will be set in init
SLIDING_WINDOW: Optional[int] = None
def set_sliding_window(sliding_window: int):
global SLIDING_WINDOW
SLIDING_WINDOW = sliding_window
def get_sliding_windows() -> Optional[int]:
global SLIDING_WINDOW
return SLIDING_WINDOW
def init_cpu_threads_env(rank_id: int, world_size: int):
import importlib.util
if importlib.util.find_spec("numa") is not None:
import numa
import psutil
nodes = numa.get_max_node() + 1
rank_per_node = math.ceil(world_size / nodes)
num_cpus_per_nodes = int(psutil.cpu_count(logical=False) / nodes)
node_id = int(rank_id / rank_per_node)
rank_offset_per_node = rank_id % rank_per_node
if os.getenv("OMP_NUM_THREADS") is None:
num_cpus_per_rank = max(int(num_cpus_per_nodes / rank_per_node), 1)
else:
num_cpus_per_rank = int(os.getenv("OMP_NUM_THREADS"))
if len(numa.get_membind()) == nodes:
numa.set_membind([node_id])
torch.set_num_threads(num_cpus_per_rank)
if len(numa.get_affinity(0)) == psutil.cpu_count(logical=True):
cpu_start = num_cpus_per_rank * rank_offset_per_node
numa.set_affinity(
0,
list(numa.node_to_cpus(node_id))[
cpu_start : cpu_start + num_cpus_per_rank
],
)
logger.info(f"affinity={numa.get_affinity(0)}, membind = {numa.get_membind()}")
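The affinity logic above packs ranks onto NUMA nodes and carves each node's physical cores into contiguous per-rank slices. The arithmetic can be sketched in isolation (a simplified stand-in, ignoring the `numa`/`psutil` calls):

```python
import math

def plan_cpu_affinity(rank_id, world_size, nodes, cpus_per_node):
    """Sketch of the affinity arithmetic: ranks are packed onto NUMA
    nodes and each rank gets a contiguous CPU slice of its node."""
    rank_per_node = math.ceil(world_size / nodes)
    node_id = rank_id // rank_per_node
    rank_offset = rank_id % rank_per_node
    cpus_per_rank = max(cpus_per_node // rank_per_node, 1)
    start = rank_offset * cpus_per_rank
    return node_id, list(range(start, start + cpus_per_rank))

# 4 ranks on 2 nodes with 16 physical cores each:
# rank 3 lands on node 1, local cores 8..15
print(plan_cpu_affinity(3, 4, 2, 16))
```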
@dataclass
class FlashCausalLMBatch(Batch):
batch_id: int
requests: List[generate_pb2.Request]
# request id -> idx in list mapping
requests_idx_mapping: Dict[int, int]
# Decoder values
input_ids: torch.Tensor
position_ids: torch.Tensor
speculative_ids: Optional[torch.Tensor]
# Flash Attention values
# tensor of length b containing the cumulative sequence lengths of the sequences in the batch, only used in prefill
cu_seqlen_prefill: Optional[torch.Tensor]
# Prefill cache indices is used to slice into the kv tensor before caching it into the paged attention buffers
# as we only keep SLIDING_WINDOW values instead of the whole tensor
prefill_cache_indices: Optional[torch.Tensor]
# Paged Attention values
# Set when creating the batch
# CPU tensor of length b indicating the start of each sequence in slots
start_slots: torch.Tensor
# tensor of indices of the currently used slots, length = \sum_{i=0}^{b} s_i in prefill, length = b in decode
slot_indices: torch.Tensor
# list of length b of list of length s_i // block_size
block_tables: List[List[int]]
# tensor of size [b, max_total_seqlen // block_size] holding the paged attention block tables for all sequences
block_tables_tensor: torch.Tensor
# tensor of length \sum_{i=0}^{b} max_s_i holding the paged attention slots for all sequences
slots: torch.Tensor
# size [b], containing the number of blocks that can be retrieved from the cache
prefix_lens: List[int]
prefix_lens_tensor: torch.Tensor
max_seqlen: int
# Prefill metadata tensors to efficiently compute logprobs
prefill_head_indices: Optional[torch.Tensor]
    prefill_next_token_indices: Optional[torch.Tensor]
prefill_cu_outlens: Optional[List[int]]
# Prefixes
prefix_ids: List[List[int]]
# All tokens
all_input_ids: List[List[int]]
all_input_ids_tensor: torch.Tensor
# Lengths of all generations present in the batch
input_lengths: List[int]
input_lengths_tensor: torch.Tensor
prefix_offsets: List[Optional[int]]
read_offsets: List[Optional[int]]
# Generation helpers
next_token_chooser: HeterogeneousNextTokenChooser
stopping_criterias: List[StoppingCriteria]
top_n_tokens: List[int]
top_n_tokens_tensor: torch.Tensor
# Adapter metadata for each request
adapter_meta: AdapterBatchMetadata
# Number of blocks in this batch
num_blocks: int
# Maximum number of blocks
max_blocks: int
def to_pb(self) -> generate_pb2.CachedBatch:
return generate_pb2.CachedBatch(
id=self.batch_id,
request_ids=[r.id for r in self.requests],
size=len(self),
max_tokens=self.num_blocks * BLOCK_SIZE,
)
@classmethod
def batch_tokenized_inputs(
cls, requests: Iterable[generate_pb2.Request], tokenizer
):
max_length = 0
all_input_ids = []
batch_size = 0
for r in requests:
batch_size += 1
inputs = concat_text_chunks(r.input_chunks.chunks)
input_ids = tokenizer(
inputs,
truncation=True,
max_length=r.truncate,
add_special_tokens=r.add_special_tokens,
)["input_ids"]
max_length = max(max_length, len(input_ids))
all_input_ids.append(input_ids)
return all_input_ids
@classmethod
def from_tokenized(
cls,
pb: generate_pb2.Batch,
tokenizer: PreTrainedTokenizerBase,
batch_tokenized_inputs,
dtype: torch.dtype,
device: torch.device,
) -> "FlashCausalLMBatch":
sliding_window = get_sliding_windows()
position_ids = []
cu_seqlen_prefill = [0]
start_slots = []
slot_indices = []
prefill_cache_indices = []
input_lengths = []
prefix_offsets = []
read_offsets = []
all_input_ids = []
prefix_ids = []
requests_idx_mapping = {}
all_prefill_logprobs = True
no_prefill_logprobs = True
prefill_head_indices = []
prefill_next_token_indices = []
prefill_cu_outlens = [0]
next_token_chooser_parameters = []
stopping_criterias = []
top_n_tokens = []
adapter_indices_list = []
adapter_set = set()
# Cumulative length
cumulative_length = 0
cumulative_slot_tokens = 0
prefill_out_cumulative_length = 0
num_blocks = 0
max_seqlen = 0
max_length = 0
max_blocks = 0
block_tables = []
slots = []
prefix_lens = []
# Parse batch
for i, (r, tokenized_input) in enumerate(
zip(pb.requests, batch_tokenized_inputs)
):
# request id -> idx in list mapping
requests_idx_mapping[r.id] = i
orig_input_length = len(tokenized_input)
prefix_len = r.prefix_len
assert (
prefix_len <= orig_input_length
), f"Prefix {prefix_len} vs input {orig_input_length}"
if prefix_len == orig_input_length:
assert prefix_len > 0
prefix_len -= 1
prefix_ids.append(tokenized_input[:prefix_len])
tokenized_input = tokenized_input[prefix_len:]
input_length = len(tokenized_input)
input_lengths.append(input_length)
prefix_offsets.append(input_length - 5)
read_offsets.append(input_length)
all_input_ids.append(tokenized_input)
# Position ids
request_position_ids = torch.arange(
prefix_len, orig_input_length, dtype=torch.int32
)
position_ids.append(request_position_ids)
# Add cumulative lengths of all previous inputs
cu_seqlen_prefill.append(cumulative_length + input_length)
next_token_chooser_parameters.append(r.parameters)
stopping_criteria = StoppingCriteria.from_pb(
r.stopping_parameters, tokenizer
)
max_new_tokens = stopping_criteria.max_new_tokens
stopping_criterias.append(stopping_criteria)
top_n_tokens.append(r.top_n_tokens)
ADAPTER_TO_INDEX = get_adapter_to_index()
adapter_index = ADAPTER_TO_INDEX.get(r.adapter_id, 0)
adapter_indices_list.append(torch.full((input_length,), adapter_index))
adapter_set.add(adapter_index)
# Paged attention
            # Remove one as the first token does not have a past
speculative_length = get_speculate()
speculative_length = 0 if speculative_length is None else speculative_length
# Tokens that need to be mapped to blocks.
block_tokens = orig_input_length + max_new_tokens - 1 + speculative_length
# Tokens that need to be mapped to slots. We don't need slots for the
# cached prefix (if present).
slot_tokens = input_length + max_new_tokens - 1 + speculative_length
# blocks and slots can be empty (for example in warmup)
if not r.blocks:
needed_blocks = math.ceil(block_tokens / BLOCK_SIZE)
request_blocks = [
b for b in range(num_blocks, num_blocks + needed_blocks)
]
request_slots = [
s
for b in request_blocks
for s in range(b * BLOCK_SIZE, (b + 1) * BLOCK_SIZE)
]
else:
request_blocks = r.blocks
request_slots = r.slots[
prefix_len: #: orig_input_length + max_new_tokens + speculative_length
]
block_tables.append(request_blocks)
slots.extend(request_slots)
prefix_lens.append(prefix_len)
num_blocks += len(request_blocks)
start_slots.append(cumulative_slot_tokens)
request_slot_indices = torch.arange(
cumulative_slot_tokens,
cumulative_slot_tokens + input_length,
dtype=torch.int64,
)
slot_indices.append(request_slot_indices)
# Create tensor to slice into the kv tensor in prefill
if sliding_window is not None:
request_prefill_cache_indices = torch.arange(
cumulative_length + max(0, input_length - sliding_window),
cumulative_length + input_length,
dtype=torch.int64,
)
prefill_cache_indices.append(request_prefill_cache_indices)
all_prefill_logprobs = all_prefill_logprobs and r.prefill_logprobs
no_prefill_logprobs = no_prefill_logprobs and not r.prefill_logprobs
if r.prefill_logprobs:
prefill_head_indices.append(request_position_ids + cumulative_length)
prefill_next_token_indices.append(
prefill_out_cumulative_length + input_length - 1
)
prefill_cu_outlens.append(prefill_out_cumulative_length + input_length)
prefill_out_cumulative_length += input_length
else:
prefill_head_indices.append(
torch.tensor(
[cumulative_length + input_length - 1], dtype=torch.int32
)
)
prefill_next_token_indices.append(prefill_out_cumulative_length)
prefill_cu_outlens.append(prefill_out_cumulative_length + 1)
prefill_out_cumulative_length += 1
# Update
cumulative_length += input_length
cumulative_slot_tokens += slot_tokens
max_seqlen = max(max_seqlen, input_length)
max_blocks = max(max_blocks, len(request_blocks))
max_length = max(
max_length, input_length + max_new_tokens + speculative_length
)
adapter_indices = torch.cat(adapter_indices_list).to(
dtype=torch.int64, device=device
)
next_token_chooser = HeterogeneousNextTokenChooser.from_pb(
next_token_chooser_parameters, dtype, device, tokenizer
)
start_slots = torch.tensor(start_slots, dtype=torch.int64)
# Padded all_input_ids_tensor
all_input_ids_tensor = np.zeros(
(len(all_input_ids), max_length), dtype=np.int64
)
for i, input_ids in enumerate(all_input_ids):
all_input_ids_tensor[i, : len(input_ids)] = input_ids
# Create tensors on device
all_input_ids_tensor = torch.tensor(
all_input_ids_tensor, dtype=torch.int64, device=device
)
if len(pb.requests) > 1:
input_ids = np.concatenate(all_input_ids, dtype=np.int64)
position_ids = torch.cat(position_ids)
slot_indices = torch.cat(slot_indices)
if sliding_window is not None:
prefill_cache_indices = torch.cat(prefill_cache_indices)
else:
input_ids = all_input_ids[0]
position_ids = position_ids[0]
slot_indices = slot_indices[0]
if sliding_window is not None:
prefill_cache_indices = prefill_cache_indices[0]
cu_seqlen_prefill = torch.tensor(
cu_seqlen_prefill, device=device, dtype=torch.int32
)
position_ids = position_ids.to(device)
slot_indices = slot_indices.to(device)
prefill_cache_indices = (
prefill_cache_indices.to(device) if sliding_window is not None else None
)
input_ids = torch.tensor(input_ids, dtype=torch.int64, device=device)
input_lengths_tensor = torch.tensor(
input_lengths, dtype=torch.int32, device=device
)
adapter_segments, adapter_segment_indices = find_segments(adapter_indices)
adapter_segments = torch.tensor(
adapter_segments, dtype=torch.int32, device=device
)
if all_prefill_logprobs:
prefill_head_indices = None
prefill_next_token_indices = cu_seqlen_prefill[1:] - 1
elif no_prefill_logprobs:
prefill_head_indices = cu_seqlen_prefill[1:] - 1
prefill_next_token_indices = None
else:
prefill_head_indices = torch.tensor(
torch.cat(prefill_head_indices), dtype=torch.int64, device=device
)
prefill_next_token_indices = torch.tensor(
prefill_next_token_indices, dtype=torch.int64, device=device
)
top_n_tokens_tensor = torch.tensor(
top_n_tokens, device=device, dtype=torch.int64
)
slots = torch.tensor(slots, dtype=torch.int64, device=device)
block_tables_tensor = torch.zeros(
(len(block_tables), max_blocks), dtype=torch.int32, device="cpu"
)
for i, request_blocks in enumerate(block_tables):
block_tables_tensor[i, : len(request_blocks)] = torch.tensor(request_blocks)
block_tables_tensor = block_tables_tensor.to(device)
prefix_lens_tensor = torch.tensor(prefix_lens, dtype=torch.int32, device=device)
return cls(
batch_id=pb.id,
requests=pb.requests,
requests_idx_mapping=requests_idx_mapping,
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=cu_seqlen_prefill,
prefill_cache_indices=prefill_cache_indices,
start_slots=start_slots,
slot_indices=slot_indices,
block_tables=block_tables,
block_tables_tensor=block_tables_tensor,
slots=slots,
prefix_lens=prefix_lens,
prefix_lens_tensor=prefix_lens_tensor,
max_seqlen=max_seqlen,
prefill_head_indices=prefill_head_indices,
prefill_next_token_indices=prefill_next_token_indices,
prefill_cu_outlens=prefill_cu_outlens,
input_lengths=input_lengths,
input_lengths_tensor=input_lengths_tensor,
prefix_offsets=prefix_offsets,
read_offsets=read_offsets,
all_input_ids=all_input_ids,
all_input_ids_tensor=all_input_ids_tensor,
prefix_ids=prefix_ids,
next_token_chooser=next_token_chooser,
stopping_criterias=stopping_criterias,
top_n_tokens=top_n_tokens,
top_n_tokens_tensor=top_n_tokens_tensor,
num_blocks=num_blocks,
max_blocks=max_blocks,
adapter_meta=AdapterBatchMetadata(
adapter_indices=adapter_indices,
adapter_set=adapter_set,
adapter_segments=adapter_segments,
segment_indices=adapter_segment_indices,
),
speculative_ids=None,
)
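When the router provides no blocks (e.g. during warmup), the code above reserves `ceil(block_tokens / BLOCK_SIZE)` fresh blocks and expands them into their individual slots. A standalone sketch of that allocation (block size chosen for illustration):

```python
import math

BLOCK_SIZE = 16  # illustrative paged-attention block size

def allocate_blocks(num_blocks, block_tokens):
    """Sketch of the fallback allocation: reserve enough fresh blocks
    for block_tokens tokens, then expand them into per-token slots."""
    needed = math.ceil(block_tokens / BLOCK_SIZE)
    blocks = list(range(num_blocks, num_blocks + needed))
    slots = [
        s for b in blocks for s in range(b * BLOCK_SIZE, (b + 1) * BLOCK_SIZE)
    ]
    return blocks, slots

blocks, slots = allocate_blocks(num_blocks=0, block_tokens=20)
# 20 tokens -> 2 blocks -> 32 slots (slots 0..31)
```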
@classmethod
def from_pb(
cls,
pb: generate_pb2.Batch,
tokenizer: PreTrainedTokenizerBase,
dtype: torch.dtype,
device: torch.device,
) -> "FlashCausalLMBatch":
batch_tokenized_inputs = cls.batch_tokenized_inputs(pb.requests, tokenizer)
return cls.from_tokenized(pb, tokenizer, batch_tokenized_inputs, dtype, device)
@tracer.start_as_current_span("filter")
def filter(self, request_ids: List[int]) -> "FlashCausalLMBatch":
if len(request_ids) == 0:
raise ValueError("Batch must have at least one request")
# We assume that if len(requests) == len(self) then the requests are the same
if len(request_ids) == len(self):
return self
device = self.input_ids.device
# New values after filtering
requests_idx_mapping = {}
# Used to index into tensors
indices = []
# slots to keep after filtering
slot_filtering_indices = torch.zeros(
self.slots.shape[0], dtype=torch.bool, device=device
)
# Create on CPU to only move to GPU once instead of at every copy
slot_indices = torch.empty(len(request_ids), dtype=torch.int64)
max_seqlen = 0
requests = []
start_slots = []
block_tables = []
all_input_ids = []
prefix_ids = []
input_lengths = []
prefix_lens = []
prefix_offsets = []
read_offsets = []
stopping_criterias = []
top_n_tokens = []
adapter_set = set()
num_blocks = 0
max_blocks = 0
# Cumulative length
cumulative_max_length = 0
for i, request_id in enumerate(request_ids):
idx = self.requests_idx_mapping[request_id]
indices.append(idx)
requests_idx_mapping[request_id] = i
requests.append(self.requests[idx])
# Get length
request_input_length = self.input_lengths[idx]
prefix_len = self.prefix_lens[idx]
max_seqlen = max(max_seqlen, request_input_length)
all_input_ids.append(self.all_input_ids[idx])
prefix_ids.append(self.prefix_ids[idx])
input_lengths.append(request_input_length)
prefix_lens.append(prefix_len)
prefix_offsets.append(self.prefix_offsets[idx])
read_offsets.append(self.read_offsets[idx])
stopping_criteria = self.stopping_criterias[idx]
stopping_criterias.append(stopping_criteria)
top_n_tokens.append(self.top_n_tokens[idx])
ADAPTER_TO_INDEX = get_adapter_to_index()
adapter_index = ADAPTER_TO_INDEX.get(self.requests[idx].adapter_id, 0)
adapter_set.add(adapter_index)
remaining_tokens = (
stopping_criteria.max_new_tokens - stopping_criteria.current_tokens
)
request_block_table = self.block_tables[idx]
num_blocks += len(request_block_table)
block_tables.append(request_block_table)
start_slots.append(cumulative_max_length)
# Copy to tensor (CPU)
slot_indices[i] = cumulative_max_length + request_input_length - 1
# Set slice
slot_filtering_indices[
self.start_slots[idx] : self.start_slots[idx]
+ request_input_length
+ remaining_tokens
- 1
] = True
cumulative_max_length += request_input_length + remaining_tokens - 1
max_blocks = max(max_blocks, len(request_block_table))
# Index into tensors
input_ids = self.input_ids[indices]
position_ids = self.position_ids[indices]
adapter_indices = self.adapter_meta.adapter_indices[indices]
all_input_ids_tensor = self.all_input_ids_tensor[indices]
block_tables_tensor = self.block_tables_tensor[indices]
input_lengths_tensor = self.input_lengths_tensor[indices]
slots = self.slots[slot_filtering_indices]
prefix_lens_tensor = self.prefix_lens_tensor[indices]
next_token_chooser = self.next_token_chooser.filter(indices)
top_n_tokens_tensor = self.top_n_tokens_tensor[indices]
speculative_ids = (
self.speculative_ids[indices] if self.speculative_ids is not None else None
)
start_slots = torch.tensor(start_slots, dtype=torch.int64)
# Move to GPU now that we have the whole tensor
slot_indices = slot_indices.to(device)
adapter_segments, adapter_segment_indices = find_segments(adapter_indices)
adapter_segments = torch.tensor(
adapter_segments, dtype=torch.int32, device=device
)
return type(self)(
batch_id=self.batch_id,
requests=requests,
requests_idx_mapping=requests_idx_mapping,
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=None,
prefill_cache_indices=None,
start_slots=start_slots,
slot_indices=slot_indices,
block_tables=block_tables,
block_tables_tensor=block_tables_tensor,
slots=slots,
max_seqlen=max_seqlen,
prefill_head_indices=None,
prefill_next_token_indices=None,
prefill_cu_outlens=None,
input_lengths=input_lengths,
input_lengths_tensor=input_lengths_tensor,
prefix_lens=prefix_lens,
prefix_lens_tensor=prefix_lens_tensor,
prefix_offsets=prefix_offsets,
read_offsets=read_offsets,
all_input_ids=all_input_ids,
all_input_ids_tensor=all_input_ids_tensor,
prefix_ids=prefix_ids,
next_token_chooser=next_token_chooser,
stopping_criterias=stopping_criterias,
top_n_tokens=top_n_tokens,
top_n_tokens_tensor=top_n_tokens_tensor,
num_blocks=num_blocks,
max_blocks=max_blocks,
speculative_ids=speculative_ids,
adapter_meta=AdapterBatchMetadata(
adapter_indices=adapter_indices,
adapter_set=adapter_set,
adapter_segments=adapter_segments,
segment_indices=adapter_segment_indices,
),
)
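The `filter` method above keeps only the slot range each surviving request still needs, selected in a single boolean-mask pass. A small sketch of that pattern (simplified: list inputs instead of batch state):

```python
import torch

def filter_slots(slots, start_slots, input_lengths, remaining, keep):
    """Sketch of slot filtering: mark the still-needed slot range of
    each kept request, then select all of them with one boolean mask."""
    mask = torch.zeros(slots.shape[0], dtype=torch.bool)
    for idx in keep:
        start = start_slots[idx]
        mask[start : start + input_lengths[idx] + remaining[idx] - 1] = True
    return slots[mask]

slots = torch.arange(10)
# request 0 owns slots 0..4, request 1 owns slots 5..9; keep request 1
kept = filter_slots(
    slots, start_slots=[0, 5], input_lengths=[3, 3], remaining=[3, 3], keep=[1]
)
# kept -> tensor([5, 6, 7, 8, 9])
```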
@classmethod
@tracer.start_as_current_span("concatenate")
def concatenate(cls, batches: List["FlashCausalLMBatch"]) -> "FlashCausalLMBatch":
# Batch attributes
requests = []
requests_idx_mapping = {}
num_blocks = 0
total_batch_size = 0
total_slots = 0
max_blocks = 0
max_length = 0
max_seqlen = 0
for b in batches:
total_batch_size += len(b)
total_slots += len(b.slots)
num_blocks += b.num_blocks
speculative_length = (
b.speculative_ids.shape[1] if b.speculative_ids is not None else 0
)
max_blocks = max(max_blocks, b.max_blocks)
max_seqlen = max(max_seqlen, b.max_seqlen)
max_length = max(
max_length,
max(
input_length
+ stopping_criteria.max_new_tokens
+ speculative_length
- stopping_criteria.current_tokens
for input_length, stopping_criteria in zip(
b.input_lengths, b.stopping_criterias
)
),
)
input_ids = batches[0].input_ids.new_empty(total_batch_size)
position_ids = batches[0].position_ids.new_empty(total_batch_size)
slots = batches[0].slots.new_empty(total_slots)
slot_indices = batches[0].slot_indices.new_empty(total_batch_size)
input_lengths_tensor = batches[0].input_lengths_tensor.new_empty(
total_batch_size
)
block_tables_tensor = batches[0].block_tables_tensor.new_zeros(
(total_batch_size, max_blocks)
)
prefix_lens_tensor = batches[0].prefix_lens_tensor.new_empty(total_batch_size)
all_input_ids_tensor = batches[0].all_input_ids_tensor.new_zeros(
(total_batch_size, max_length)
)
top_n_tokens_tensor = batches[0].top_n_tokens_tensor.new_zeros(
total_batch_size,
)
total_indices_size = sum(
b.adapter_meta.adapter_indices.shape[0] for b in batches
)
adapter_indices = batches[0].adapter_meta.adapter_indices.new_empty(
total_indices_size
)
adapter_set = set()
adapter_segment_builder = SegmentConcatBuilder()
start_slots = []
block_tables = []
prefix_lens = []
all_input_ids = []
prefix_ids = []
input_lengths = []
prefix_offsets = []
read_offsets = []
next_token_chooser_parameters = []
fsm_grammar_states = []
stopping_criterias = []
top_n_tokens = []
# Cumulative length
cumulative_batch_size = 0
cumulative_slots = 0
cumulative_adapter_indices_size = 0
for i, batch in enumerate(batches):
requests.extend(batch.requests)
if i == 0:
requests_idx_mapping = batch.requests_idx_mapping
else:
# We need to offset the mapping for each batch by the cumulative batch size
for k, v in batch.requests_idx_mapping.items():
requests_idx_mapping[k] = v + cumulative_batch_size
start_index = cumulative_batch_size
end_index = cumulative_batch_size + len(batch)
slots_start_index = cumulative_slots
slots_end_index = cumulative_slots + len(batch.slots)
# Copy tensors (GPU)
input_ids[start_index:end_index] = batch.input_ids
position_ids[start_index:end_index] = batch.position_ids
slot_indices[start_index:end_index] = batch.slot_indices + cumulative_slots
input_lengths_tensor[start_index:end_index] = batch.input_lengths_tensor
top_n_tokens_tensor[start_index:end_index] = batch.top_n_tokens_tensor
slots[slots_start_index:slots_end_index] = batch.slots
# Copy over adapter indices
adapter_start_index = cumulative_adapter_indices_size
adapter_end_index = (
cumulative_adapter_indices_size
+ batch.adapter_meta.adapter_indices.shape[0]
)
adapter_indices[adapter_start_index:adapter_end_index] = (
batch.adapter_meta.adapter_indices
)
cumulative_adapter_indices_size = adapter_end_index
adapter_set.update(batch.adapter_meta.adapter_set)
adapter_segment_builder.concat(
batch.adapter_meta.adapter_segments, batch.adapter_meta.segment_indices
)
all_input_ids_tensor[
start_index:end_index, : batch.all_input_ids_tensor.shape[1]
] = batch.all_input_ids_tensor[:, :max_length]
block_tables_tensor[
start_index:end_index, : batch.block_tables_tensor.shape[1]
] = batch.block_tables_tensor[:, :max_blocks]
prefix_lens_tensor[start_index:end_index] = batch.prefix_lens_tensor
start_slots.append(batch.start_slots + cumulative_slots)
block_tables.extend(batch.block_tables)
prefix_lens.extend(batch.prefix_lens)
all_input_ids.extend(batch.all_input_ids)
prefix_ids.extend(batch.prefix_ids)
input_lengths.extend(batch.input_lengths)
prefix_offsets.extend(batch.prefix_offsets)
read_offsets.extend(batch.read_offsets)
next_token_chooser_parameters.extend([r.parameters for r in batch.requests])
fsm_grammar_states.extend(batch.next_token_chooser.fsm_grammar_states)
stopping_criterias.extend(batch.stopping_criterias)
top_n_tokens.extend(batch.top_n_tokens)
# Update
cumulative_batch_size += len(batch)
cumulative_slots += len(batch.slots)
start_slots = torch.concat(start_slots)
next_token_chooser = HeterogeneousNextTokenChooser.from_pb(
next_token_chooser_parameters,
dtype=batches[0].next_token_chooser.dtype,
device=batches[0].next_token_chooser.device,
tokenizer=batches[0].next_token_chooser.tokenizer,
fsm_grammar_states=fsm_grammar_states,
)
speculative_ids = (
torch.cat([b.speculative_ids for b in batches], dim=0)
if batches[0].speculative_ids is not None
else None
)
adapter_segments, adapter_segment_indices = adapter_segment_builder.build()
return cls(
batch_id=batches[0].batch_id,
requests=requests,
requests_idx_mapping=requests_idx_mapping,
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=None,
prefill_cache_indices=None,
start_slots=start_slots,
slot_indices=slot_indices,
block_tables=block_tables,
block_tables_tensor=block_tables_tensor,
prefix_lens=prefix_lens,
prefix_lens_tensor=prefix_lens_tensor,
slots=slots,
max_seqlen=max_seqlen,
prefill_head_indices=None,
prefill_next_token_indices=None,
prefill_cu_outlens=None,
input_lengths=input_lengths,
input_lengths_tensor=input_lengths_tensor,
prefix_offsets=prefix_offsets,
read_offsets=read_offsets,
all_input_ids=all_input_ids,
all_input_ids_tensor=all_input_ids_tensor,
prefix_ids=prefix_ids,
next_token_chooser=next_token_chooser,
stopping_criterias=stopping_criterias,
top_n_tokens=top_n_tokens,
top_n_tokens_tensor=top_n_tokens_tensor,
num_blocks=num_blocks,
max_blocks=max_blocks,
speculative_ids=speculative_ids,
adapter_meta=AdapterBatchMetadata(
adapter_indices=adapter_indices,
adapter_set=adapter_set,
adapter_segments=adapter_segments,
segment_indices=adapter_segment_indices,
),
)
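Adapter batching above leans on `find_segments`, which collapses the per-token adapter-index vector into run boundaries plus one adapter id per run. An illustrative stand-in (not the actual `find_segments` implementation):

```python
def find_runs(indices):
    """Sketch of run-length segmentation: return the boundaries of each
    run of equal values, plus the value of each run."""
    boundaries = [0]
    run_ids = []
    for i, v in enumerate(indices):
        if i == 0 or v != indices[i - 1]:
            if i != 0:
                boundaries.append(i)
            run_ids.append(v)
    boundaries.append(len(indices))
    return boundaries, run_ids

segs, ids = find_runs([0, 0, 0, 1, 1, 0])
# segs -> [0, 3, 5, 6]; ids -> [0, 1, 0]
```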
def __len__(self):
return len(self.requests)
ADAPTER_LAYERS = [
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
]
ROW_PARALLEL = {"o_proj", "down_proj", "lm_head"}
class FlashCausalLM(Model):
def __init__(
self,
model_id: str,
model_class,
revision: Optional[str] = None,
quantize: Optional[str] = None,
speculator: Optional[str] = None,
dtype: Optional[torch.dtype] = None,
trust_remote_code: bool = False,
lora_adapter_ids: Optional[list] = [],
tokenizer_class: PreTrainedTokenizerBase = AutoTokenizer,
        config_class=AutoConfig,
default_dtype=torch.float16,
aliases=None,
# Used for Santacoder override of config
num_kv_heads: Optional[int] = None,
# Deepseek V2 uses different QK and V dims.
head_size: Optional[int] = None,
skip_special_tokens: bool = True,
):
self.quantize = quantize
self.process_group, rank, world_size = initialize_torch_distributed()
if torch.cuda.is_available():
device = torch.device(f"cuda:{rank}")
dtype = default_dtype if dtype is None else dtype
elif SYSTEM == "ipex":
if hasattr(torch, "xpu") and torch.xpu.is_available():
device = torch.device(f"xpu:{rank}")
dtype = default_dtype if dtype is None else dtype
else:
device = torch.device("cpu")
# Float16 doesn't exist on target.
dtype = torch.bfloat16 if dtype is None else dtype
init_cpu_threads_env(rank_id=rank, world_size=world_size)
else:
raise NotImplementedError(f"{model_class} is only available on GPU")
tokenizer = tokenizer_class.from_pretrained(
model_id,
revision=revision,
padding_side="left",
truncation_side="left",
trust_remote_code=trust_remote_code,
)
try:
generation_config = GenerationConfig.from_pretrained(
model_id, revision=revision, trust_remote_code=trust_remote_code
)
if isinstance(generation_config.eos_token_id, (list, set)):
# TODO Huge hack
tokenizer._eos_token_ids = set(generation_config.eos_token_id)
except Exception:
pass
config = config_class.from_pretrained(
model_id, revision=revision, trust_remote_code=trust_remote_code
)
config.quantize = quantize
config.speculator = speculator
torch.distributed.barrier(group=self.process_group)
weights_loader = get_loader(quantize, model_id, revision)
filenames = weight_files(model_id, revision=revision, extension=".safetensors")
weights = Weights(
filenames,
device,
dtype,
process_group=self.process_group,
aliases=aliases,
weights_loader=weights_loader,
)
prefix = ""
model = model_class(prefix, config, weights)
torch.distributed.barrier(group=self.process_group)
# VLM models define the config we care about in their text_config
text_config = getattr(config, "text_config", None)
if text_config is not None:
config = text_config
if getattr(config, "sliding_window", None) is not None:
set_sliding_window(config.sliding_window)
else:
config.sliding_window = None
self.num_layers = config.num_hidden_layers
self.num_heads = config.num_attention_heads // self.process_group.size()
# Validation is done in the model itself
if num_kv_heads is None:
num_kv_heads = getattr(config, "num_key_value_heads", None)
# GPT-2 workaround
if num_kv_heads is None:
num_kv_heads = getattr(config, "n_head", None)
if num_kv_heads is None:
raise ValueError("Cannot get the number of key/value heads")
self.num_kv_heads = (
num_kv_heads // self.process_group.size()
if num_kv_heads > 1
else num_kv_heads
)
assert self.num_kv_heads > 0
if head_size is None:
            # Some models use GQA and different sizes for o_proj
            # and q_proj; reading head_dim directly allows for that.
if hasattr(config, "head_dim"):
self.head_size = config.head_dim
else:
self.head_size = config.hidden_size // config.num_attention_heads
else:
self.head_size = head_size
self.cuda_graphs = {}
self.kv_cache = []
if ATTENTION == "flashinfer":
from text_generation_server.layers.attention.flashinfer import (
create_prefill_state,
create_decode_state,
create_prefill_with_paged_kv_state,
)
self.prefill_state = create_prefill_state(device=device)
self.prefill_with_paged_kv_state = create_prefill_with_paged_kv_state(
device=device
)
self.decode_state = create_decode_state(
device=device,
num_heads=self.num_heads,
num_kv_heads=self.num_kv_heads,
)
super().__init__(
model_id=model_id,
model=model,
tokenizer=tokenizer,
requires_padding=False,
dtype=dtype,
device=device,
rank=rank,
world_size=world_size,
sliding_window=config.sliding_window,
)
@property
def batch_type(self) -> Type[FlashCausalLMBatch]:
return FlashCausalLMBatch
    def max_past(self) -> Optional[int]:
        return getattr(self.model, "max_past", None)
def init_kv_cache(
self,
num_blocks: int,
num_layers: int,
num_heads: int,
head_size: int,
dtype: torch.dtype,
device: torch.device,
):
self.kv_cache = []
empty_cache()
element_size = torch.tensor([], dtype=dtype).element_size()
if SYSTEM == "ipex" and device.type == "xpu":
x = 1
else:
x = BLOCK_SIZE // element_size
if ATTENTION in {"flashdecoding", "flashinfer"}:
self.kv_cache = [
(
torch.empty(
(num_blocks, BLOCK_SIZE, num_heads, head_size),
dtype=dtype,
device=device,
),
torch.empty(
(num_blocks, BLOCK_SIZE, num_heads, head_size),
dtype=dtype,
device=device,
),
)
for _ in range(num_layers)
]
elif SYSTEM == "ipex" and device == torch.device("cpu"):
self.kv_cache = [
(
torch.empty(
(num_blocks, num_heads, BLOCK_SIZE, head_size),
dtype=dtype,
device=device,
),
torch.empty(
(num_blocks, num_heads, BLOCK_SIZE, head_size),
dtype=dtype,
device=device,
),
)
for _ in range(num_layers)
]
else:
self.kv_cache = [
(
torch.empty(
(num_blocks, num_heads, head_size // x, BLOCK_SIZE, x),
dtype=dtype,
device=device,
),
torch.empty(
(num_blocks, num_heads, head_size, BLOCK_SIZE),
dtype=dtype,
device=device,
),
)
for _ in range(num_layers)
]
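A quick way to sanity-check the memory footprint of these layouts: in the flashdecoding/flashinfer case, the key and the value tensor each hold `num_blocks * BLOCK_SIZE * num_heads * head_size` elements per layer. A minimal sketch of that arithmetic (the concrete numbers are illustrative, not taken from the source):

```py
def kv_cache_bytes(num_blocks, block_size, num_kv_heads, head_size,
                   num_layers, dtype_size=2):
    # Elements in one K (or one V) tensor for a single layer.
    per_tensor = num_blocks * block_size * num_kv_heads * head_size
    # Two tensors (K and V) per layer, `num_layers` layers, `dtype_size` bytes each.
    return per_tensor * 2 * num_layers * dtype_size

# Example: 1024 blocks of 16 slots, 8 KV heads of size 128, 32 layers, fp16.
print(kv_cache_bytes(1024, 16, 8, 128, 32))  # -> 2147483648, i.e. 2 GiB
```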
def cuda_graph_warmup(self, bs: int, max_s: int, max_bt: int):
input_ids = torch.zeros(bs, dtype=torch.int64, device=self.device)
position_ids = torch.zeros(bs, dtype=torch.int32, device=self.device)
slots = torch.arange(bs, dtype=torch.int64, device=self.device)
input_lengths = [max_s] * bs
prefix_lengths = [0] * bs
input_lengths_tensor = (
torch.ones(bs, dtype=torch.int32, device=self.device) * max_s
)
prefix_lengths_tensor = torch.zeros(bs, dtype=torch.int32, device=self.device)
block_tables = torch.arange(
max_bt, dtype=torch.int32, device=self.device
).repeat(bs)
block_tables = block_tables.reshape((bs, max_bt))
if ATTENTION == "flashinfer":
block_tables = block_tables_to_ragged(
block_tables=block_tables,
input_lengths=input_lengths,
prefix_lens=prefix_lengths,
)
self.cuda_graphs[bs] = {
"input_ids": input_ids,
"position_ids": position_ids,
"kv_cache": self.kv_cache,
"block_tables": block_tables,
"slots": slots,
"input_lengths": input_lengths_tensor,
"prefix_lengths": prefix_lengths_tensor,
}
seqlen = Seqlen(
input_lengths=input_lengths_tensor,
prefix_lengths=prefix_lengths_tensor,
cu_seqlen_q=None,
max_q=1,
max_k=max_s,
)
graph = torch.cuda.CUDAGraph()
self.cuda_graphs[bs]["graph"] = graph
if ATTENTION == "flashinfer":
from text_generation_server.layers.attention.flashinfer import (
create_decode_state_cuda_graphs,
)
block_tables_ptr = torch.zeros(
bs + 1, dtype=torch.int32, device=self.device
)
last_page_len = torch.ones(bs, dtype=torch.int32, device=self.device)
state = create_decode_state_cuda_graphs(
device=input_ids.device,
block_tables=block_tables,
block_tables_ptr=block_tables_ptr,
last_page_len=last_page_len,
num_heads=self.num_heads,
num_kv_heads=self.num_kv_heads,
)
self.cuda_graphs[bs]["state"] = state
else:
state = None
torch.cuda.synchronize()
# Run once outside to warmup
with self._forward_context(
block_tables=block_tables,
cu_seqlen_prefill=None,
input_lengths=input_lengths,
input_lengths_tensor=input_lengths_tensor,
state=state,
prefix_lens=prefix_lengths,
prefix_lens_tensor=prefix_lengths_tensor,
):
self.model.forward(
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=None,
kv_cache=self.kv_cache,
block_tables=block_tables,
slots=slots,
seqlen=seqlen,
max_s=max_s,
prefill_cache_indices=None,
lm_head_indices=None,
)
torch.cuda.synchronize()
with torch.cuda.graph(graph, pool=MEM_POOL):
seqlen = Seqlen(
input_lengths=input_lengths_tensor,
prefix_lengths=prefix_lengths_tensor,
cu_seqlen_q=None,
max_q=1,
max_k=max_s,
)
logits, speculative_logits = self.model.forward(
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=None,
kv_cache=self.kv_cache,
block_tables=block_tables,
slots=slots,
seqlen=seqlen,
max_s=max_s,
prefill_cache_indices=None,
lm_head_indices=None,
)
self.cuda_graphs[bs]["logits"] = logits
self.cuda_graphs[bs]["speculative_logits"] = speculative_logits
torch.cuda.synchronize()
def warmup(self, batch: FlashCausalLMBatch):
# The warmup batch is the biggest batch we could ever receive
empty_cache()
try:
self.init_kv_cache(
batch.num_blocks,
self.num_layers,
self.num_kv_heads,
self.head_size,
self.dtype,
self.device,
)
max_bt = batch.max_blocks
max_s = max_bt * BLOCK_SIZE
if SYSTEM == "rocm" and os.environ.get("PYTORCH_TUNABLEOP_ENABLED", False):
torch.cuda.tunable.tuning_enable(False)
_, batch, _ = self.generate_token(batch)
except torch.cuda.OutOfMemoryError as e:
raise RuntimeError(
f"Not enough memory to handle {len(batch.input_ids)} prefill tokens. "
f"You need to decrease `--max-batch-prefill-tokens`"
) from e
synchronize(self.device)
# Inspired by the original implementation in [vllm](https://github.com/vllm-project/vllm)
# Calculate the number of blocks that can be allocated with the free memory
dtype_size = torch.tensor([], dtype=self.dtype).element_size()
cache_block_size = BLOCK_SIZE * self.num_kv_heads * self.head_size
total_cache_size = self.num_layers * cache_block_size * 2 * dtype_size
free_memory = get_free_memory(self.device, MEMORY_FRACTION)
batch_num_blocks = batch.num_blocks if batch is not None else 0
num_blocks = (
# Leave 5% for some wiggle room
int((free_memory * TGI_WIGGLE_ROOM) // total_cache_size)
# Add batch.num_blocks as we allocated it above, so it is included in the peak memory.
+ batch_num_blocks
)
del batch
self.init_kv_cache(
num_blocks,
self.num_layers,
self.num_kv_heads,
self.head_size,
self.dtype,
self.device,
)
if SYSTEM == "rocm":
if (
os.environ.get("PYTORCH_TUNABLEOP_ENABLED") is None
or os.environ.get("PYTORCH_TUNABLEOP_ENABLED") == "1"
):
torch.cuda.tunable.enable()
if os.environ.get("PYTORCH_TUNABLEOP_TUNING") != "0":
torch.cuda.tunable.tuning_enable(True)
if os.environ.get("PYTORCH_TUNABLEOP_SEQLENS") is not None:
tuning_sequences = [
int(val)
for val in os.environ["PYTORCH_TUNABLEOP_SEQLENS"].split(",")
]
elif CUDA_GRAPHS is not None:
tuning_sequences = CUDA_GRAPHS
else:
# For seqlen = 1, we dispatch to LLMM1 kernel.
tuning_sequences = [2, 3, 4, 5, 6, 7]
tunableop_filepath = os.path.join(
HUGGINGFACE_HUB_CACHE,
f"tunableop_{self.model_id.replace('/', '-')}_tp{self.world_size}_rank{self.rank}.csv",
)
log_master(
logger.info,
f"PyTorch TunableOp (https://github.com/fxmarty/pytorch/tree/2.3-patched/aten/src/ATen/cuda/tunable) is enabled. The warmup may take several minutes, picking the ROCm optimal matrix multiplication kernel for the target lengths {', '.join([str(seqlen) for seqlen in tuning_sequences])}, with typical 5-8% latency improvement for small sequence lengths. The picked GEMMs are saved in the file {tunableop_filepath}. To disable TunableOp, please launch TGI with `PYTORCH_TUNABLEOP_ENABLED=0`.",
)
if os.path.isfile(tunableop_filepath):
log_master(
logger.info,
f"The file {tunableop_filepath} already exists and will be reused.",
)
torch.cuda.tunable.read_file(tunableop_filepath)
os.makedirs(HUGGINGFACE_HUB_CACHE, exist_ok=True)
for seqlen in tuning_sequences:
log_master(logger.info, f"Warming up TunableOp for seqlen={seqlen}")
self.tunableop_warmup(seqlen)
torch.cuda.tunable.write_file(tunableop_filepath)
torch.cuda.tunable.tuning_enable(False)
else:
log_master(
logger.info,
"PyTorch ROCm TunableOp (https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/cuda/tunable) is disabled. TunableOp brings an additional 5-8% latency improvement for small sequence lengths but requires a warmup. If necessary, please use the environment variable PYTORCH_TUNABLEOP_ENABLED=1 to enable TunableOp.",
)
if CUDA_GRAPHS:
try:
log_master(
logger.info, f"Cuda Graphs are enabled for sizes {CUDA_GRAPHS}"
)
# Warmup cuda graphs
for bs in CUDA_GRAPHS:
if self.speculate is None or self.speculate + 1 <= bs:
self.cuda_graph_warmup(bs, max_s, max_bt)
except torch.cuda.OutOfMemoryError:
logger.exception("Decode cuda graph warmup failed")
else:
log_master(
logger.info, f"Cuda Graphs are disabled (CUDA_GRAPHS={CUDA_GRAPHS})."
)
return int(num_blocks * BLOCK_SIZE)
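The block-count computation in `warmup` can be condensed into a small standalone sketch. The defaults below (16-slot blocks, fp16, 5% wiggle room) are assumptions mirroring `BLOCK_SIZE`, the dtype size, and `TGI_WIGGLE_ROOM`, not values read from the source:

```py
def available_blocks(free_memory, num_layers, num_kv_heads, head_size,
                     block_size=16, dtype_size=2, wiggle_room=0.95,
                     batch_num_blocks=0):
    # Bytes needed for one KV-cache block across all layers (K and V tensors).
    cache_block_size = block_size * num_kv_heads * head_size
    total_cache_size = num_layers * cache_block_size * 2 * dtype_size
    # Keep some headroom, then add back the blocks the warmup batch allocated,
    # since they were part of the measured peak memory.
    return int((free_memory * wiggle_room) // total_cache_size) + batch_num_blocks

# With 8 GiB free, 32 layers, 8 KV heads of size 128:
print(available_blocks(8 * 1024**3, 32, 8, 128))
```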
def tunableop_warmup(self, seqlen: int):
input_ids = torch.zeros(seqlen, dtype=torch.int64, device=self.device)
position_ids = torch.zeros(seqlen, dtype=torch.int32, device=self.device)
slots = torch.arange(seqlen, dtype=torch.int64, device=self.device)
# Dummy value, some models (starcoder2) don't accept `None`.
input_lengths = torch.ones(seqlen, dtype=torch.int32, device=self.device)
prefix_lens_tensor = torch.zeros(seqlen, dtype=torch.int32, device=self.device)
cu_seqlen_prefill = torch.tensor(
[0, seqlen], device=self.device, dtype=torch.int32
)
seqlen = Seqlen(
input_lengths=input_lengths,
prefix_lengths=prefix_lens_tensor,
cu_seqlen_q=cu_seqlen_prefill,
max_q=1,
max_k=seqlen,
)
# We pass a `cu_seqlen_prefill` in order not to have to deal with paged attention cache allocation/deallocation.
self.model.forward(
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=cu_seqlen_prefill,
kv_cache=self.kv_cache,
block_tables=None,
seqlen=seqlen,
slots=slots,
max_s=seqlen,
lm_head_indices=None,
prefill_cache_indices=None,
)
def forward(
self, batch: FlashCausalLMBatch, adapter_data: AdapterBatchData
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
# Model Forward
if batch.speculative_ids is not None:
input_ids = batch.input_ids
position_ids = batch.position_ids
cu_seqlen_prefill = batch.cu_seqlen_prefill
kv_cache = self.kv_cache
block_tables = batch.block_tables_tensor
slots = batch.slots[batch.slot_indices]
input_lengths = batch.input_lengths_tensor
max_s = batch.max_seqlen
lm_head_indices = batch.prefill_head_indices
speculative_ids = batch.speculative_ids
B, speculative_length = speculative_ids.shape
new_length = speculative_length + 1
new_input_ids = torch.cat(
[input_ids.unsqueeze(-1), speculative_ids], dim=1
).reshape(-1)
arange = torch.arange(new_length, device=position_ids.device).unsqueeze(0)
arange_int = arange.to(dtype=torch.int32)
new_position_ids = (
position_ids.unsqueeze(-1).expand(B, new_length) + arange
).view(-1)
slots = (slots.unsqueeze(-1).expand(B, new_length) + arange_int).view(-1)
input_lengths = (
input_lengths.unsqueeze(-1).expand(B, new_length) + arange_int
).view(-1)
prefix_lens_tensor = (
batch.prefix_lens_tensor.unsqueeze(-1).expand(B, new_length)
).reshape(-1)
            # Copy the block tables for all members of the batch
block_tables = (
block_tables.unsqueeze(1)
.expand(B, new_length, -1)
.reshape(B * new_length, -1)
.contiguous()
)
max_s = max_s + speculative_length
input_ids = new_input_ids
position_ids = new_position_ids
else:
input_ids = batch.input_ids
position_ids = batch.position_ids
cu_seqlen_prefill = batch.cu_seqlen_prefill
kv_cache = self.kv_cache
block_tables = batch.block_tables_tensor
slots = batch.slots[batch.slot_indices]
input_lengths = batch.input_lengths_tensor
prefix_lens_tensor = batch.prefix_lens_tensor
max_s = batch.max_seqlen
lm_head_indices = batch.prefill_head_indices
if cu_seqlen_prefill is None and self.max_past() is not None:
# In decode, not prefill, we're actually overwriting the KV-cache
# in a circular buffer mode.
# This makes sure the max_s for the decode pass is correct.
max_s = min(self.max_past(), max_s)
bs = input_ids.shape[0]
sorted_padded_bs = sorted([k for k in self.cuda_graphs.keys() if k >= bs])
if sorted_padded_bs:
# Get associated cuda graph
cuda_graph = self.cuda_graphs[sorted_padded_bs[0]]
else:
cuda_graph = None
if cu_seqlen_prefill is not None or cuda_graph is None:
if ATTENTION == "flashinfer":
block_tables = block_tables_to_ragged(
block_tables=block_tables,
input_lengths=batch.input_lengths,
prefix_lens=batch.prefix_lens,
)
with self._forward_context(
block_tables=block_tables,
cu_seqlen_prefill=cu_seqlen_prefill,
input_lengths=batch.input_lengths,
input_lengths_tensor=input_lengths + prefix_lens_tensor,
prefix_lens=batch.prefix_lens,
prefix_lens_tensor=prefix_lens_tensor,
):
max_k = (input_lengths + prefix_lens_tensor).max().item()
seqlen = Seqlen(
input_lengths=input_lengths,
prefix_lengths=prefix_lens_tensor,
cu_seqlen_q=cu_seqlen_prefill,
max_q=max_s,
max_k=max_k,
)
logits, speculative_logits = self.model.forward(
input_ids=input_ids,
position_ids=position_ids,
cu_seqlen_prefill=cu_seqlen_prefill,
kv_cache=kv_cache,
block_tables=block_tables,
slots=slots,
seqlen=seqlen,
max_s=max_s,
prefill_cache_indices=batch.prefill_cache_indices,
lm_head_indices=lm_head_indices,
adapter_data=adapter_data,
)
if batch.prefill_cache_indices is not None:
batch.prefill_cache_indices = None
return logits, speculative_logits
# Copy inputs to the static inputs of the cuda graph
# Static inputs are potentially padded
cuda_graph["input_ids"][: input_ids.shape[0]] = input_ids
cuda_graph["position_ids"][: position_ids.shape[0]] = position_ids
if ATTENTION == "flashinfer":
block_tables = block_tables_to_ragged(
block_tables=block_tables,
input_lengths=batch.input_lengths,
prefix_lens=batch.prefix_lens,
)
cuda_graph["block_tables"][: block_tables.shape[0]] = block_tables
else:
cuda_graph["block_tables"][
: block_tables.shape[0], : block_tables.shape[1]
] = block_tables
cuda_graph["slots"].fill_(-1)
cuda_graph["slots"][: slots.shape[0]] = slots
cuda_graph["input_lengths"].zero_()
cuda_graph["input_lengths"][: input_lengths.shape[0]] = (
input_lengths + prefix_lens_tensor
)
with self._forward_context(
block_tables=cuda_graph["block_tables"],
cu_seqlen_prefill=None,
input_lengths=batch.input_lengths,
input_lengths_tensor=cuda_graph["input_lengths"],
prefix_lens=batch.prefix_lens,
prefix_lens_tensor=prefix_lens_tensor,
state=cuda_graph.get("state"),
):
# Replay the graph
cuda_graph["graph"].replay()
# Slice output to the correct shape
speculative_logits = (
cuda_graph["speculative_logits"][:bs]
if cuda_graph["speculative_logits"] is not None
else None
)
logits = cuda_graph["logits"][:bs]
return logits, speculative_logits
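The speculative branch in `forward` interleaves each row's last token with its speculative candidates and extends the positions to match. A plain-Python analogue of that unsqueeze/expand/reshape dance, with hypothetical token values and no torch dependency:

```py
def expand_speculative(input_ids, speculative_ids, position_ids):
    """List-based mirror of the speculative expansion in FlashCausalLM.forward."""
    new_input_ids, new_position_ids = [], []
    for tok, spec, pos in zip(input_ids, speculative_ids, position_ids):
        # Each row becomes [last_token, spec_0, ..., spec_{S-1}] ...
        row = [tok] + list(spec)
        new_input_ids.extend(row)
        # ... with consecutive positions starting at the row's position.
        new_position_ids.extend(pos + j for j in range(len(row)))
    return new_input_ids, new_position_ids

ids, pos = expand_speculative([5, 9], [[6, 7], [1, 2]], [10, 3])
# ids -> [5, 6, 7, 9, 1, 2]; pos -> [10, 11, 12, 3, 4, 5]
```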
@tracer.start_as_current_span("generate_token")
def generate_token(
self, batch: FlashCausalLMBatch
) -> Tuple[List[Generation], Optional[FlashCausalLMBatch], Tuple[int, int]]:
start = time.time_ns()
prefill = batch.cu_seqlen_prefill is not None
prefill_logprobs = batch.prefill_next_token_indices is not None
# Update adapter indices for speculative tokens (if present)
adapter_meta = batch.adapter_meta
if batch.speculative_ids is not None:
B, speculative_length = batch.speculative_ids.shape
new_length = speculative_length + 1
adapter_indices = (
adapter_meta.adapter_indices.unsqueeze(-1)
.expand(B, new_length)
.reshape(-1)
)
adapter_segments = adapter_meta.adapter_segments * new_length
adapter_meta = AdapterBatchMetadata(
adapter_indices=adapter_indices,
adapter_set=adapter_meta.adapter_set,
adapter_segments=adapter_segments,
segment_indices=adapter_meta.segment_indices,
)
# Assign pointers to adapter weights
# TODO(travis): don't update this if indices haven't changed
adapter_data = AdapterBatchData.from_meta(
adapter_meta,
self.layer_to_adapter_weights,
prefill,
batch.prefill_head_indices,
)
out, speculative_logits = self.forward(batch, adapter_data)
if prefill:
next_token_logits = (
out[batch.prefill_next_token_indices] if prefill_logprobs else out
)
if speculative_logits is not None:
speculative_logits = (
speculative_logits[batch.prefill_next_token_indices]
if prefill_logprobs
else speculative_logits
)
next_adapter_indices = batch.adapter_meta.adapter_indices.new_empty(
len(batch)
)
else:
next_token_logits = out
next_adapter_indices = batch.adapter_meta.adapter_indices
speculate = get_speculate()
(
next_input_ids,
next_token_logprobs,
logprobs,
accepted_ids,
speculative_ids,
) = batch.next_token_chooser(
batch.all_input_ids_tensor[:, : batch.max_seqlen],
next_token_logits,
speculate,
batch.speculative_ids,
speculative_logits,
)
batch_top_token_ids, batch_top_token_logprobs = batch_top_tokens(
batch.top_n_tokens, batch.top_n_tokens_tensor, logprobs, accepted_ids
)
if prefill:
if len(batch) > 1 and prefill_logprobs:
# We create the prefill_tokens_indices tensor that will be used to gather prefill logprobs
# When batch == 1, we will just use the batch.input_ids values directly
prefill_tokens_indices = batch.input_ids.new_zeros(len(out))
next_position_ids = batch.position_ids.new_empty(len(batch))
batch.slot_indices = batch.slot_indices[batch.cu_seqlen_prefill[1:] - 1]
# We do not need cu_seqlen_prefill anymore
batch.cu_seqlen_prefill = None
else:
prefill_logprobs = None
next_position_ids = batch.position_ids
# Cumulative length
cumulative_length = 0
# Results
generations: List[Generation] = []
stopped = True
# Zipped iterator
iterator = zip(batch.input_lengths, batch.all_input_ids, accepted_ids)
# We do two for loops as the first one can run completely asynchronously from the GPU while for the second
# one, we need to first do a GPU <-> CPU sync
# It is faster if we delay this sync for the maximum amount of time
# For each member of the batch
index = 0
for i, (input_length, all_input_ids, n_accepted_ids) in enumerate(iterator):
# Indexing metadata
start_index = cumulative_length
end_index = cumulative_length + input_length
if prefill:
# Indexing metadata
out_start_index = batch.prefill_cu_outlens[i]
out_end_index = batch.prefill_cu_outlens[i + 1]
out_length = out_end_index - out_start_index
# Initialize position_ids
# In decode, we do not need this as we can just increment position ids
next_position_ids[i] = batch.position_ids[end_index - 1]
# Initialize adapter indices
# In decode, we only have one token per row in the batch, so grab last index
next_adapter_indices[i] = batch.adapter_meta.adapter_indices[
end_index - 1
]
# Used to gather prefill logprobs
# Copy batch.input_ids to prefill_token_indices
if prefill_logprobs:
if len(batch) > 1:
prefill_tokens_indices[out_start_index : out_end_index - 1] = (
batch.input_ids[start_index + 1 : start_index + out_length]
)
else:
# Set prefill_tokens_indices to the correct slice
prefill_tokens_indices = batch.input_ids[
start_index + 1 : start_index + out_length
]
for j in range(n_accepted_ids):
batch.all_input_ids_tensor[i, input_length + j] = next_input_ids[index]
index += 1
cumulative_length += input_length
# Update values
batch.input_ids = next_input_ids[accepted_ids.cumsum(dim=-1) - 1]
batch.speculative_ids = speculative_ids
batch.position_ids = next_position_ids + accepted_ids
batch.input_lengths_tensor += accepted_ids
batch.slot_indices += accepted_ids
batch.adapter_meta.adapter_indices = next_adapter_indices
if prefill:
# adjust segment lengths to account for all request lengths being 1 during decoding
adapter_segments, _ = find_segments(batch.adapter_meta.adapter_indices)
batch.adapter_meta.adapter_segments = torch.tensor(
adapter_segments,
dtype=torch.int32,
device=batch.adapter_meta.adapter_segments.device,
)
if prefill and prefill_logprobs:
# Get prefill logprobs
prefill_logprobs_tensor = torch.log_softmax(out, -1)
prefill_logprobs = torch.gather(
prefill_logprobs_tensor, 1, prefill_tokens_indices.view(-1, 1)
)
# GPU <-> CPU sync
prefill_logprobs = prefill_logprobs.view(-1).tolist()
# GPU <-> CPU sync
next_token_logprobs = next_token_logprobs.tolist()
next_token_ids = next_input_ids.tolist()
accepted_ids = accepted_ids.tolist()
start_decode = time.time_ns()
# Zipped iterator
iterator = zip(
batch.requests,
batch.input_lengths,
batch.prefix_offsets,
batch.read_offsets,
batch.stopping_criterias,
batch.all_input_ids,
batch.prefix_ids,
batch.next_token_chooser.do_sample,
batch.next_token_chooser.seeds,
batch.top_n_tokens,
accepted_ids,
batch_top_token_ids,
batch_top_token_logprobs,
)
# For each member of the batch
index = 0
for i, (
request,
input_length,
prefix_offset,
read_offset,
stopping_criteria,
all_input_ids,
prefix_ids,
do_sample,
seed,
top_n_tokens,
n_accepted_ids,
top_token_ids,
top_token_logprobs,
) in enumerate(iterator):
# Append next token to all tokens
next_token_texts = []
left = 0
if n_accepted_ids > 1:
log_master(logger.debug, f"Speculated ids {n_accepted_ids - 1}")
current_stopped = False
for j in range(index, index + n_accepted_ids):
# Generated token
next_token_id = next_token_ids[j]
all_input_ids.append(next_token_id)
next_token_text, prefix_offset, read_offset = self.decode_token(
all_input_ids,
prefix_offset,
read_offset,
)
next_token_texts.append(next_token_text)
stop, reason = stopping_criteria(
next_token_id,
next_token_text,
)
if stop:
left = index + n_accepted_ids - j - 1
current_stopped = True
break
else:
current_stopped = False
stopped = stopped and current_stopped
_next_token_ids = next_token_ids[index : index + n_accepted_ids - left]
_next_token_logprobs = next_token_logprobs[
index : index + n_accepted_ids - left
]
index += n_accepted_ids
# Shard generations
# All generations will be appended in the rust sharded client
if i % self.world_size == self.rank:
if stop:
# Decode generated tokens
output_text, _, _ = self.decode_token(
all_input_ids,
prefix_offset=len(all_input_ids)
- stopping_criteria.current_tokens
- 1,
read_offset=len(all_input_ids)
- stopping_criteria.current_tokens,
skip_special_tokens=True,
)
generated_text = GeneratedText(
output_text,
stopping_criteria.current_tokens,
reason,
seed if do_sample else None,
)
else:
generated_text = None
# Prefill
if prefill and request.prefill_logprobs:
out_start_index = batch.prefill_cu_outlens[i]
out_end_index = batch.prefill_cu_outlens[i + 1]
# Remove generated token to only have prefill and add nan for first prompt token
request_prefill_logprobs = (
[float("nan")] * (len(prefix_ids) + 1)
) + prefill_logprobs[out_start_index : out_end_index - 1]
prefill_token_ids = all_input_ids[:-1]
prefill_texts = self.tokenizer.batch_decode(
prefix_ids + prefill_token_ids,
clean_up_tokenization_spaces=False,
skip_special_tokens=False,
)
prefill_tokens = Tokens(
prefix_ids + prefill_token_ids,
request_prefill_logprobs,
prefill_texts,
is_special=[],
)
else:
prefill_tokens = None
if top_n_tokens > 0:
all_top_tokens = []
for top_token_ids, top_token_logprobs in zip(
top_token_ids, top_token_logprobs
):
toptoken_texts = self.tokenizer.batch_decode(
top_token_ids,
clean_up_tokenization_spaces=False,
skip_special_tokens=False,
)
special_toptokens = [
token_id in self.all_special_ids
for token_id in top_token_ids
]
top_tokens = Tokens(
top_token_ids,
top_token_logprobs,
toptoken_texts,
special_toptokens,
)
all_top_tokens.append(top_tokens)
top_tokens = all_top_tokens
else:
top_tokens = None
generation = Generation(
request.id,
prefill_tokens,
Tokens(
_next_token_ids,
_next_token_logprobs,
next_token_texts,
[nid in self.all_special_ids for nid in _next_token_ids],
),
generated_text,
top_tokens,
)
generations.append(generation)
# accept each new token for this specific request since we may
# have more than one new token per request with speculative decoding
for next_token_id in _next_token_ids:
batch.next_token_chooser = (
batch.next_token_chooser.advance_grammar_single(i, next_token_id)
)
# Update values
batch.input_lengths[i] = input_length + n_accepted_ids
if batch.input_lengths[i] > batch.max_seqlen:
batch.max_seqlen = batch.input_lengths[i]
batch.prefix_offsets[i] = prefix_offset
batch.read_offsets[i] = read_offset
batch.all_input_ids[i] = all_input_ids
if stopped:
# No need to return a batch if we know that all requests stopped
forward_ns = start_decode - start
decode_ns = time.time_ns() - start_decode
return generations, None, (forward_ns, decode_ns)
batch.prefill_cu_outlens = None
batch.prefill_head_indices = None
batch.prefill_next_token_indices = None
forward_ns = start_decode - start
decode_ns = time.time_ns() - start_decode
return generations, batch, (forward_ns, decode_ns)
def _forward_context(
self,
*,
block_tables: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
input_lengths: List[int],
input_lengths_tensor: torch.Tensor,
prefix_lens: List[int],
prefix_lens_tensor: torch.Tensor,
state: Optional[Any] = None,
) -> ContextManager:
if ATTENTION != "flashinfer":
return nullcontext()
from text_generation_server.layers.attention.flashinfer import (
use_decode_state,
use_prefill_with_paged_kv_state,
)
# has_prefix_lens = any(prefix_len > 0 for prefix_len in prefix_lens)
if cu_seqlen_prefill is not None:
return use_prefill_with_paged_kv_state(
state=(
state if state is not None else self.prefill_with_paged_kv_state
),
# block_tables=block_tables_to_ragged(
# block_tables=block_tables,
# input_lengths=input_lengths,
# prefix_lens=prefix_lens,
# ),
block_tables=block_tables,
cu_seqlens=cu_seqlen_prefill,
input_lengths=input_lengths_tensor,
num_heads=self.num_heads,
num_kv_heads=self.num_kv_heads,
head_size=self.head_size,
page_size=BLOCK_SIZE,
)
else:
assert input_lengths_tensor is not None
return use_decode_state(
state=state if state is not None else self.decode_state,
input_lengths=input_lengths_tensor,
block_tables=block_tables,
num_heads=self.num_heads,
num_kv_heads=self.num_kv_heads,
head_size=self.head_size,
page_size=BLOCK_SIZE,
)
def block_tables_to_ragged(
*, block_tables: torch.Tensor, input_lengths: List[int], prefix_lens: List[int]
) -> torch.Tensor:
"""Convert block table to ragged format compatible with FlashInfer."""
assert len(input_lengths) == len(prefix_lens)
total_len = sum(input_lengths) + sum(prefix_lens)
block_tables_ragged = torch.empty(
total_len, dtype=torch.int32, device=block_tables.device
)
offset = 0
for i, (input_length, prefix_len) in enumerate(zip(input_lengths, prefix_lens)):
seq_len = prefix_len + input_length
block_tables_ragged[offset : offset + seq_len] = block_tables[i][:seq_len]
offset += seq_len
return block_tables_ragged
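The ragged layout above simply concatenates the first `prefix_len + input_length` block ids of each row. A list-based sketch of the same transform (illustrative values, no torch dependency):

```py
def ragged_block_tables(block_tables, input_lengths, prefix_lens):
    # Concatenate the first (prefix_len + input_length) block ids of each row.
    out = []
    for row, input_len, prefix_len in zip(block_tables, input_lengths, prefix_lens):
        out.extend(row[: prefix_len + input_len])
    return out

print(ragged_block_tables([[0, 1, 2], [3, 4, 5]], [2, 1], [0, 2]))
# -> [0, 1, 3, 4, 5]
```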
|
text-generation-inference/server/text_generation_server/models/flash_causal_lm.py/0
|
{
"file_path": "text-generation-inference/server/text_generation_server/models/flash_causal_lm.py",
"repo_id": "text-generation-inference",
"token_count": 39451
}
| 244
|
from typing import Iterable
from loguru import logger
from text_generation_server.pb import generate_pb2
def concat_text_chunks(chunks: Iterable[generate_pb2.InputChunk]) -> str:
"""
Concatenate text in text chunks. Non-text chunks are dropped.
"""
text = None
for chunk in chunks:
chunk_type = chunk.WhichOneof("chunk")
if chunk_type == "text":
if text is None:
text = chunk.text
else:
raise NotImplementedError("Request contained more than one text chunk")
else:
# We cannot reject this, e.g. warmup sends an image chunk.
logger.debug(f"Encountered non-text chunk type {chunk_type}")
if text is None:
raise NotImplementedError("Request without a text chunk")
return text
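The contract of `concat_text_chunks` is easy to pin down with a dict-based stand-in for the protobuf chunks (the `{"text": ...}` shape here is a hypothetical substitute for the real `generate_pb2.InputChunk` oneof, used only to demonstrate the logic):

```py
def concat_text_chunks_demo(chunks):
    # Same contract as concat_text_chunks: exactly one text chunk is expected,
    # non-text chunks are ignored, and zero text chunks is an error.
    text = None
    for chunk in chunks:
        if "text" in chunk:
            if text is not None:
                raise NotImplementedError("Request contained more than one text chunk")
            text = chunk["text"]
    if text is None:
        raise NotImplementedError("Request without a text chunk")
    return text

print(concat_text_chunks_demo([{"image": b"..."}, {"text": "hello"}]))  # -> hello
```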
|
text-generation-inference/server/text_generation_server/utils/chunks.py/0
|
{
"file_path": "text-generation-inference/server/text_generation_server/utils/chunks.py",
"repo_id": "text-generation-inference",
"token_count": 332
}
| 245
|
import torch
from abc import ABC, abstractmethod
from contextlib import contextmanager
from pathlib import Path
from typing import Dict, List, Optional, Union, Type
from safetensors import safe_open
from dataclasses import dataclass
from text_generation_server.utils.import_utils import SYSTEM
class WeightsLoader(ABC):
"""
Instances of this type implement higher-level weight loading.
At a low-level, every weight is stored in the Safetensors format.
    However, the interpretation of the weights may differ: for instance,
    they could be packed or quantized weights. Loaders are responsible for
    interpreting the raw tensors, sharding tensors in a manner compatible
    with the format, etc.
"""
@abstractmethod
def get_weights(self, weights: "Weights", prefix: str):
"""
        Get weights at the given prefix and apply without tensor parallelism.
"""
...
@abstractmethod
def get_weights_col_packed(
self,
weights: "Weights",
prefix: str,
block_sizes: Union[int, List[int]],
):
"""
Get the packed weights at the given prefix with column-splitting for
tensor parallelism. This method should be used when multiple different
weights are packed into a tensor, for instance, query/key/value
weights or a gate/up projection.
The `block_sizes` determines the proportions of the packed tensors.
The columns are split in equally sized blocks when `block_sizes` is an
        `int`, or in blocks proportional to the given sizes. For instance
`[2, 1, 1]` will divide an input with dimensionality `1024` in
`[512, 256, 256]`.
"""
...
def get_weights_col(self, weights: "Weights", prefix: str):
"""
Get weights at the given prefix and apply column-splitting for tensor
        parallelism.
"""
return weights.get_multi_weights_col([prefix], 0)
@abstractmethod
def get_multi_weights_col(self, weights: "Weights", prefixes: List[str], dim: int):
"""
Get the weights at the given prefixes, column-split them for tensor
        parallelism, and then concatenate the weights along the given dimension.
"""
...
@abstractmethod
def get_weights_row(self, weights: "Weights", prefix: str):
"""
Get the weights at the given prefix and apply row-splitting for tensor
        parallelism.
"""
...
class Weight(ABC):
"""Instances of this type implement unquantized/quantized/to-be
quantized weights."""
@abstractmethod
def get_linear(self, bias: torch.Tensor):
"""Create a linear layer from this weight."""
...
@dataclass
class UnquantizedWeight(Weight):
weight: torch.Tensor
def get_linear(self, bias: torch.Tensor):
from text_generation_server.layers.linear import FastLinear, FastLinearROCm
if SYSTEM == "rocm":
return FastLinearROCm(self.weight, bias)
else:
return FastLinear(self.weight, bias)
class DefaultWeightsLoader(WeightsLoader):
"""Weight loader that loads (unquantized) Torch tensors."""
def __init__(self, weight_class: Type[UnquantizedWeight]):
"""Create a loader. Weights will be wrapped using the given `weights_class`,
normally this will be `UnquantizedWeight`, but a quantizer-specific class
such as `Fp8Weight` can be used to quantize the weights during loading.
"""
self.weight_class = weight_class
"""
Loader that uses tensors as-is with the exception of applying sharding
and/or concatenation.
"""
    def get_weights(self, weights: "Weights", prefix: str):
        return self.weight_class(weights.get_tensor(f"{prefix}.weight"))
def get_weights_col_packed(
self,
weights: "Weights",
prefix: str,
block_sizes: Union[int, List[int]],
):
return self.weight_class(
weights.get_packed_sharded(
f"{prefix}.weight", dim=0, block_sizes=block_sizes
),
)
def get_multi_weights_col(self, weights: "Weights", prefixes: List[str], dim: int):
w = [weights.get_sharded(f"{p}.weight", dim=0) for p in prefixes]
return self.weight_class(torch.cat(w, dim=dim))
def get_weights_row(self, weights: "Weights", prefix: str):
return self.weight_class(
weights.get_sharded(f"{prefix}.weight", dim=1),
)
class Weights:
def __init__(
self,
filenames: List[Path],
device,
dtype,
process_group,
weights_loader: WeightsLoader,
aliases: Optional[Dict[str, List[str]]] = None,
prefix: Optional[str] = None,
):
routing = {}
for filename in filenames:
with safe_open(filename, framework="pytorch") as f:
for k in f.keys():
if k in routing:
raise RuntimeError(
                            f"Key {k} was found in multiple files: {filename} and {routing[k]}"
)
routing[k] = filename
if aliases is None:
aliases = {}
self.aliases = aliases
self.routing = routing
self.device = device
self.dtype = dtype
self.process_group = process_group
self.prefix = prefix
self.weights_loader = weights_loader
self._handles = {}
def _get_handle(self, filename):
if filename not in self._handles:
f = safe_open(filename, framework="pytorch")
self._handles[filename] = f
return self._handles[filename]
def get_filename(self, tensor_name: str) -> (str, str):
names = [tensor_name]
if self.prefix is not None:
prefixed = f"{self.prefix}.{tensor_name}"
names.append(prefixed)
for name in names:
filename = self.routing.get(name, None)
if filename is not None:
return str(filename), name
aliases = self.aliases.get(name, [])
for alias in aliases:
filename = self.routing.get(alias, None)
if filename is not None:
return str(filename), alias
raise RuntimeError(f"weight {tensor_name} does not exist")
def _get_slice(self, tensor_name: str):
filename, tensor_name = self.get_filename(tensor_name)
f = self._get_handle(filename)
slice_ = f.get_slice(tensor_name)
return slice_
def _has_tensor(self, tensor_name: str):
try:
self.get_filename(tensor_name)
except Exception:
return False
return True
def get_shape(self, tensor_name: str):
return self._get_slice(tensor_name).get_shape()
def get_tensor(self, tensor_name: str, to_device=True, to_dtype=True):
filename, tensor_name = self.get_filename(tensor_name)
f = self._get_handle(filename)
tensor = f.get_tensor(tensor_name)
# Special case for gptq which shouldn't convert
# u4 which are disguised as int32. Exl2 uses int16
# as well. FP8 uses torch.float8_e4m3fn
if (
tensor.dtype
not in [
torch.float8_e4m3fn,
torch.int16,
torch.int32,
torch.int64,
]
and to_dtype
):
tensor = tensor.to(dtype=self.dtype)
if to_device:
tensor = tensor.to(device=self.device)
return tensor
def get_partial_sharded(
self, tensor_name: str, dim: int, to_device=True, to_dtype=True
):
filename, tensor_name = self.get_filename(tensor_name)
f = self._get_handle(filename)
slice_ = f.get_slice(tensor_name)
world_size = self.process_group.size()
rank = self.process_group.rank()
size = slice_.get_shape()[dim]
block_size = (size + world_size - 1) // world_size
start = rank * block_size
stop = (rank + 1) * block_size
if dim == 0:
tensor = slice_[start:stop]
elif dim == 1:
tensor = slice_[:, start:stop]
else:
raise NotImplementedError("Let's make that generic when needed")
# Special case for gptq which shouldn't convert
# u4 which are disguised as int32. exl2 uses int16.
# FP8 uses torch.float8_e4m3fn.
if (
tensor.dtype not in (torch.float8_e4m3fn, torch.int16, torch.int32)
and to_dtype
):
tensor = tensor.to(dtype=self.dtype)
if to_device:
tensor = tensor.to(device=self.device)
return tensor
def get_sharded(self, tensor_name: str, dim: int, to_device=True, to_dtype=True):
filename, tensor_name = self.get_filename(tensor_name)
f = self._get_handle(filename)
slice_ = f.get_slice(tensor_name)
world_size = self.process_group.size()
size = slice_.get_shape()[dim]
assert (
size % world_size == 0
        ), f"The chosen size {size} is not compatible with sharding on {world_size} shards"
return self.get_partial_sharded(
tensor_name, dim, to_device=to_device, to_dtype=to_dtype
)
def get_packed_sharded(
self,
tensor_name: str,
dim: int,
block_sizes: Union[int, List[int]],
to_dtype=True,
) -> torch.Tensor:
"""
Get a shard from a tensor that packs multiple tensors.
When a tensor packs multiple tensors (such as QKV or an up
projection + gate projection), sharding with `get_sharded` is not
safe since it would not split the packed tensors across shards.
This method shards a tensor, such that the packed tensors are
split across shards.
        The columns are split in equally sized blocks when blocks is an `int`, or
        proportionally to the given sizes. For instance, `[2, 1, 1]` will
        divide an input with dimensionality `1024` into `[512, 256, 256]`. This is
convenient for e.g. splitting QKV without knowing the storage details of
quantized weights.
"""
slice_ = self._get_slice(tensor_name)
total_size = slice_.get_shape()[dim]
block_sizes = _blocks_to_block_sizes(total_size=total_size, blocks=block_sizes)
world_size = self.process_group.size()
rank = self.process_group.rank()
tensors = []
block_offset = 0
for block_size in block_sizes:
assert (
block_size % world_size == 0
), f"Prepacked tensor cannot be sharded across {world_size} shards"
shard_block_size = block_size // world_size
start = rank * shard_block_size
stop = (rank + 1) * shard_block_size
if dim == 0:
tensor = slice_[block_offset + start : block_offset + stop]
elif dim == 1:
tensor = slice_[:, block_offset + start : block_offset + stop]
else:
raise NotImplementedError("Currently only dim=0 or dim=1 is supported")
tensors.append(tensor)
block_offset += block_size
tensor = torch.cat(tensors, dim=dim)
tensor = tensor.to(device=self.device)
# Avoid casting quantizer dtypes.
if (
tensor.dtype
not in [
torch.float8_e4m3fn,
torch.int16,
torch.int32,
torch.int64,
]
and to_dtype
):
tensor = tensor.to(dtype=self.dtype)
return tensor
def get_weights(self, prefix: str):
return self.weights_loader.get_weights(self, prefix)
def get_weights_col_packed_qkv(
self,
prefix: str,
num_heads: int,
num_key_value_heads: int,
):
return self.get_weights_col_packed(
prefix, [num_heads, num_key_value_heads, num_key_value_heads]
)
def get_weights_col_packed_gate_up(self, prefix: str):
return self.get_weights_col_packed(prefix, 2)
def get_weights_col_packed(self, prefix: str, block_sizes: Union[int, List[int]]):
"""
        The columns are split in equally sized blocks when blocks is an `int`, or
        proportionally to the given sizes. For instance, `[2, 1, 1]` will
        divide an input with dimensionality `1024` into `[512, 256, 256]`. This is
convenient for e.g. splitting QKV without knowing the storage details of
quantized weights.
"""
return self.weights_loader.get_weights_col_packed(self, prefix, block_sizes)
def get_weights_col(self, prefix: str):
return self.weights_loader.get_weights_col(self, prefix)
def get_multi_weights_col(self, prefixes: List[str], dim: int):
return self.weights_loader.get_multi_weights_col(self, prefixes, dim)
def get_tensor_shard(self, var, dim):
world_size = self.process_group.size()
rank = self.process_group.rank()
block_size = var.size()[dim] // world_size
start = rank * block_size
stop = (rank + 1) * block_size
if dim == 0:
tensor = var[start:stop]
elif dim == 1:
tensor = var[:, start:stop]
else:
raise NotImplementedError("Let's make that generic when needed")
tensor = tensor.to(dtype=self.dtype)
tensor = tensor.to(device=self.device)
return tensor
def get_weights_row(self, prefix: str):
return self.weights_loader.get_weights_row(self, prefix)
@contextmanager
def use_loader(self, weights_loader: WeightsLoader):
"""
This method is a context manager that can be used to use `Weights` with
a different loader for the duration of the context.
"""
old_loader = self.weights_loader
self.weights_loader = weights_loader
try:
yield
finally:
self.weights_loader = old_loader
def _blocks_to_block_sizes(total_size: int, blocks: Union[int, List[int]]) -> List[int]:
"""
Convert block count or proportions to block sizes.
This function accepts
- The number of blocks (int), in which case the block size is
total_size//blocks; or
- A list of block sizes (List[int]).
    In the latter case, if sum(blocks) is smaller than (and divides) total_size,
    the blocks are interpreted as proportions and scaled up accordingly. For
    instance, if blocks is [2, 1, 1] and total_size is 1024, the returned
    block sizes are [512, 256, 256].
"""
if isinstance(blocks, list):
total_blocks = sum(blocks)
assert (
total_size % total_blocks == 0
), f"Cannot split {total_size} in proportional blocks: {blocks}"
part_size = total_size // total_blocks
return [part_size * block for block in blocks]
else:
assert total_size % blocks == 0, f"Prepacked is not divisible by {blocks}"
single_size = total_size // blocks
return [single_size] * blocks
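As a sketch of the splitting logic above, here is a standalone reimplementation mirroring `_blocks_to_block_sizes` together with the per-rank column ranges computed in `get_packed_sharded` (function names are illustrative, not part of the module):

```python
from typing import List, Tuple, Union


def blocks_to_block_sizes(total_size: int, blocks: Union[int, List[int]]) -> List[int]:
    """Mirror of `_blocks_to_block_sizes`: turn a block count or a list of
    proportions into absolute block sizes that sum to `total_size`."""
    if isinstance(blocks, list):
        total_blocks = sum(blocks)
        assert total_size % total_blocks == 0, "blocks must divide total_size"
        part_size = total_size // total_blocks
        return [part_size * block for block in blocks]
    assert total_size % blocks == 0, "blocks must divide total_size"
    return [total_size // blocks] * blocks


def packed_shard_ranges(
    block_sizes: List[int], world_size: int, rank: int
) -> List[Tuple[int, int]]:
    """For each packed block, the [start, stop) column range owned by `rank`,
    following the arithmetic in `get_packed_sharded`."""
    ranges = []
    offset = 0
    for block_size in block_sizes:
        assert block_size % world_size == 0, "block not shardable"
        shard = block_size // world_size
        ranges.append((offset + rank * shard, offset + (rank + 1) * shard))
        offset += block_size
    return ranges
```

For a packed QKV tensor split as `[2, 1, 1]` over 1024 columns, rank 0 of a 2-way group owns the first half of each block, which is exactly why `get_sharded` alone would be unsafe here.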
text-generation-inference/server/text_generation_server/utils/weights.py
.PHONY: style check-style test
DATA_DIR = data
dir_guard=@mkdir -p $(@D)
# Format source code automatically
style:
npm run lint
# Check that the source code is formatted correctly
check-style:
npm run lint-check
TESTS_RESOURCES = $(DATA_DIR)/small.txt $(DATA_DIR)/roberta.json $(DATA_DIR)/tokenizer-wiki.json $(DATA_DIR)/bert-wiki.json
# Launch the test suite
test: $(TESTS_RESOURCES)
npm run test
$(DATA_DIR)/big.txt :
$(dir_guard)
wget https://norvig.com/big.txt -O $@
$(DATA_DIR)/small.txt : $(DATA_DIR)/big.txt
head -100 $(DATA_DIR)/big.txt > $@
$(DATA_DIR)/roberta.json :
$(dir_guard)
wget https://huggingface.co/roberta-large/raw/main/tokenizer.json -O $@
$(DATA_DIR)/tokenizer-wiki.json :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-quicktour/tokenizer.json -O $@
$(DATA_DIR)/bert-wiki.json :
$(dir_guard)
wget https://s3.amazonaws.com/models.huggingface.co/bert/anthony/doc-pipeline/tokenizer.json -O $@
tokenizers/bindings/node/Makefile
import {
byteLevelPreTokenizer,
metaspacePreTokenizer,
punctuationPreTokenizer,
sequencePreTokenizer,
splitPreTokenizer,
whitespaceSplitPreTokenizer,
} from '../../'
describe('byteLevelPreTokenizer', () => {
it('instantiates correctly', () => {
const processor = byteLevelPreTokenizer()
expect(processor.constructor.name).toEqual('PreTokenizer')
})
})
describe('metaspacePreTokenizer', () => {
it('instantiates correctly without any parameter', () => {
const processor = metaspacePreTokenizer()
expect(processor.constructor.name).toEqual('PreTokenizer')
})
it('accepts `undefined` as first parameter', () => {
expect(metaspacePreTokenizer(undefined)).toBeDefined()
})
it('accepts `undefined` as second parameter', () => {
expect(metaspacePreTokenizer('t', undefined)).toBeDefined()
})
it('can pre-tokenize strings', () => {
const pretok = metaspacePreTokenizer()
    expect(pretok.preTokenizeString('Hello there friend')).toEqual([
      ['▁Hello', [0, 5]],
      ['▁there', [5, 11]],
      ['▁friend', [11, 18]],
    ])
})
})
describe('punctuationPreTokenizer', () => {
it('instantiates correctly without any parameter', () => {
const processor = punctuationPreTokenizer()
expect(processor.constructor.name).toEqual('PreTokenizer')
})
  it('instantiates correctly with non-default split delimiter', () => {
const processor = punctuationPreTokenizer('removed')
expect(processor.constructor.name).toEqual('PreTokenizer')
})
})
describe('splitPreTokenizer', () => {
it('instantiates correctly with invert parameter', () => {
const processor = splitPreTokenizer(' ', 'mergedWithPrevious', false)
expect(processor.constructor.name).toEqual('PreTokenizer')
})
})
describe('sequencePreTokenizer', () => {
it('instantiates correctly', () => {
const punctuation = punctuationPreTokenizer()
const whitespace = whitespaceSplitPreTokenizer()
const sequence2 = sequencePreTokenizer([])
expect(sequence2.constructor.name).toEqual('PreTokenizer')
const sequence3 = sequencePreTokenizer([punctuation, whitespace])
expect(sequence3.constructor.name).toEqual('PreTokenizer')
})
})
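The offsets the metaspace test above expects can be reproduced with a rough pure-Python sketch of the Metaspace behaviour (prepend scheme "always", words prefixed with U+2581; simplified to single-space delimiters, for illustration only):

```python
from typing import List, Tuple


def metaspace_pretokenize(
    text: str, replacement: str = "\u2581"
) -> List[Tuple[str, Tuple[int, int]]]:
    """Rough sketch: prefix each word with the replacement char and report
    offsets into the original string; each span after the first absorbs the
    space preceding the word, matching the test's expected offsets."""
    out = []
    pos = 0
    for i, word in enumerate(text.split(" ")):
        start = pos if i == 0 else pos - 1  # include the space before the word
        end = pos + len(word)
        out.append((replacement + word, (start, end)))
        pos = end + 1
    return out
```

This is why the spans `[0, 5]`, `[5, 11]`, `[11, 18]` tile the whole input string without gaps.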
tokenizers/bindings/node/lib/bindings/pre-tokenizers.test.ts
{
"name": "tokenizers-linux-arm64-gnu",
"version": "0.13.4-rc1",
"os": [
"linux"
],
"cpu": [
"arm64"
],
"main": "tokenizers.linux-arm64-gnu.node",
"files": [
"tokenizers.linux-arm64-gnu.node"
],
"description": "Tokenizers platform specific bindings",
"keywords": [
"napi-rs",
"NAPI",
"N-API",
"Rust",
"node-addon",
"node-addon-api"
],
"license": "MIT",
"engines": {
"node": ">= 10"
},
"publishConfig": {
"registry": "https://registry.npmjs.org/",
"access": "public"
},
"repository": "tokenizers",
"libc": [
"glibc"
]
}
tokenizers/bindings/node/npm/linux-arm64-gnu/package.json
use crate::arc_rwlock_serde;
use serde::{Deserialize, Serialize};
extern crate tokenizers as tk;
use napi::bindgen_prelude::*;
use napi_derive::napi;
use std::sync::{Arc, RwLock};
use tk::decoders::DecoderWrapper;
/// Decoder
#[derive(Clone, Serialize, Deserialize)]
#[napi]
pub struct Decoder {
#[serde(flatten, with = "arc_rwlock_serde")]
decoder: Option<Arc<RwLock<DecoderWrapper>>>,
}
#[napi]
impl Decoder {
#[napi]
pub fn decode(&self, tokens: Vec<String>) -> Result<String> {
use tk::Decoder;
self
.decoder
.as_ref()
.unwrap()
.read()
.unwrap()
.decode(tokens)
.map_err(|e| Error::from_reason(format!("{}", e)))
}
}
impl tk::Decoder for Decoder {
fn decode_chain(&self, tokens: Vec<String>) -> tk::Result<Vec<String>> {
self
.decoder
.as_ref()
.ok_or("Uninitialized Decoder")?
.read()
.unwrap()
.decode_chain(tokens)
}
}
#[napi]
pub fn bpe_decoder(suffix: Option<String>) -> Decoder {
let suffix = suffix.unwrap_or("</w>".to_string());
let decoder = Some(Arc::new(RwLock::new(
tk::decoders::bpe::BPEDecoder::new(suffix).into(),
)));
Decoder { decoder }
}
#[napi]
pub fn byte_fallback_decoder() -> Decoder {
Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::decoders::byte_fallback::ByteFallback::new().into(),
))),
}
}
#[napi]
pub fn ctc_decoder(
#[napi(ts_arg_type = "string = '<pad>'")] pad_token: Option<String>,
word_delimiter_token: Option<String>,
cleanup: Option<bool>,
) -> Decoder {
let pad_token = pad_token.unwrap_or("<pad>".to_string());
let word_delimiter_token = word_delimiter_token.unwrap_or("|".to_string());
let cleanup = cleanup.unwrap_or(true);
let decoder = Some(Arc::new(RwLock::new(
tk::decoders::ctc::CTC::new(pad_token, word_delimiter_token, cleanup).into(),
)));
Decoder { decoder }
}
#[napi]
pub fn fuse_decoder() -> Decoder {
Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::decoders::fuse::Fuse::new().into(),
))),
}
}
#[napi]
pub fn metaspace_decoder(
  #[napi(ts_arg_type = "string = '▁'")] replacement: Option<String>,
#[napi(ts_arg_type = "prepend_scheme = 'always'")] prepend_scheme: Option<String>,
#[napi(ts_arg_type = "split = true")] split: Option<bool>,
) -> Result<Decoder> {
use tk::pre_tokenizers::metaspace::PrependScheme;
let split = split.unwrap_or(true);
  let replacement = replacement.unwrap_or("▁".to_string());
if replacement.chars().count() != 1 {
return Err(Error::from_reason(
"replacement is supposed to be a single char",
));
}
let replacement = replacement.chars().next().unwrap();
let prepend_scheme: PrependScheme =
match prepend_scheme.unwrap_or(String::from("always")).as_str() {
"always" => PrependScheme::Always,
"first" => PrependScheme::First,
"never" => PrependScheme::Never,
_ => {
return Err(Error::from_reason(
"prepend_scheme is supposed to be either 'always', 'first' or 'never'",
));
}
};
Ok(Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::decoders::metaspace::Metaspace::new(replacement, prepend_scheme, split).into(),
))),
})
}
#[napi]
pub fn replace_decoder(pattern: String, content: String) -> Result<Decoder> {
Ok(Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::normalizers::replace::Replace::new(pattern, content)
.map_err(|e| Error::from_reason(e.to_string()))?
.into(),
))),
})
}
#[napi]
pub fn sequence_decoder(decoders: Vec<&Decoder>) -> Decoder {
let sequence: Vec<tk::DecoderWrapper> = decoders
.into_iter()
.filter_map(|decoder| {
decoder
.decoder
.as_ref()
.map(|decoder| (**decoder).read().unwrap().clone())
})
.collect();
Decoder {
decoder: Some(Arc::new(RwLock::new(tk::DecoderWrapper::Sequence(
tk::decoders::sequence::Sequence::new(sequence),
)))),
}
}
#[napi]
pub fn strip_decoder(content: String, left: u32, right: u32) -> Result<Decoder> {
let content: char = content.chars().next().ok_or(Error::from_reason(
"Expected non empty string for strip pattern",
))?;
Ok(Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::decoders::strip::Strip::new(content, left as usize, right as usize).into(),
))),
})
}
#[napi]
pub fn word_piece_decoder(
#[napi(ts_arg_type = "string = '##'")] prefix: Option<String>,
#[napi(ts_arg_type = "bool = true")] cleanup: Option<bool>,
) -> Decoder {
let prefix = prefix.unwrap_or("##".to_string());
let cleanup = cleanup.unwrap_or(true);
Decoder {
decoder: Some(Arc::new(RwLock::new(
tk::decoders::wordpiece::WordPiece::new(prefix, cleanup).into(),
))),
}
}
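For intuition, `bpe_decoder` with the default `</w>` suffix behaves roughly like this Python sketch (illustrative only: the end-of-word marker becomes a space, except on the last token where it is simply dropped):

```python
from typing import List


def bpe_decode(tokens: List[str], suffix: str = "</w>") -> str:
    """Rough sketch of BPEDecoder: join the tokens, turning the end-of-word
    suffix into a space everywhere except on the final token."""
    pieces = []
    for i, tok in enumerate(tokens):
        replacement = "" if i == len(tokens) - 1 else " "
        pieces.append(tok.replace(suffix, replacement))
    return "".join(pieces)
```

So the word boundaries encoded in the merges are recovered as spaces in the decoded text.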
tokenizers/bindings/node/src/decoders.rs
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
"-C", "link-arg=-mmacosx-version-min=10.11",
]
[target.aarch64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
"-C", "link-arg=-mmacosx-version-min=10.11",
]
tokenizers/bindings/python/.cargo/config.toml
# Generated content DO NOT EDIT
class AddedToken:
"""
    Represents a token that can be added to a :class:`~tokenizers.Tokenizer`.
It can have special options that defines the way it should behave.
Args:
content (:obj:`str`): The content of the token
single_word (:obj:`bool`, defaults to :obj:`False`):
Defines whether this token should only match single words. If :obj:`True`, this
token will never match inside of a word. For example the token ``ing`` would match
on ``tokenizing`` if this option is :obj:`False`, but not if it is :obj:`True`.
The notion of "`inside of a word`" is defined by the word boundaries pattern in
regular expressions (ie. the token should start and end with word boundaries).
lstrip (:obj:`bool`, defaults to :obj:`False`):
Defines whether this token should strip all potential whitespaces on its left side.
If :obj:`True`, this token will greedily match any whitespace on its left. For
example if we try to match the token ``[MASK]`` with ``lstrip=True``, in the text
``"I saw a [MASK]"``, we would match on ``" [MASK]"``. (Note the space on the left).
rstrip (:obj:`bool`, defaults to :obj:`False`):
Defines whether this token should strip all potential whitespaces on its right
side. If :obj:`True`, this token will greedily match any whitespace on its right.
It works just like :obj:`lstrip` but on the right.
normalized (:obj:`bool`, defaults to :obj:`True` with :meth:`~tokenizers.Tokenizer.add_tokens` and :obj:`False` with :meth:`~tokenizers.Tokenizer.add_special_tokens`):
Defines whether this token should match against the normalized version of the input
text. For example, with the added token ``"yesterday"``, and a normalizer in charge of
            lowercasing the text, the token could be extracted from the input ``"I saw a lion
Yesterday"``.
special (:obj:`bool`, defaults to :obj:`False` with :meth:`~tokenizers.Tokenizer.add_tokens` and :obj:`False` with :meth:`~tokenizers.Tokenizer.add_special_tokens`):
Defines whether this token should be skipped when decoding.
"""
def __init__(self, content, single_word=False, lstrip=False, rstrip=False, normalized=True, special=False):
pass
@property
def content(self):
"""
Get the content of this :obj:`AddedToken`
"""
pass
@property
def lstrip(self):
"""
Get the value of the :obj:`lstrip` option
"""
pass
@property
def normalized(self):
"""
Get the value of the :obj:`normalized` option
"""
pass
@property
def rstrip(self):
"""
Get the value of the :obj:`rstrip` option
"""
pass
@property
def single_word(self):
"""
Get the value of the :obj:`single_word` option
"""
pass
@property
def special(self):
"""
Get the value of the :obj:`special` option
"""
pass
class Encoding:
"""
The :class:`~tokenizers.Encoding` represents the output of a :class:`~tokenizers.Tokenizer`.
"""
@property
def attention_mask(self):
"""
The attention mask
This indicates to the LM which tokens should be attended to, and which should not.
        This is especially important when batching sequences, where we need to apply
        padding.
Returns:
:obj:`List[int]`: The attention mask
"""
pass
def char_to_token(self, char_pos, sequence_index=0):
"""
Get the token that contains the char at the given position in the input sequence.
Args:
char_pos (:obj:`int`):
The position of a char in the input string
sequence_index (:obj:`int`, defaults to :obj:`0`):
The index of the sequence that contains the target char
Returns:
:obj:`int`: The index of the token that contains this char in the encoded sequence
"""
pass
def char_to_word(self, char_pos, sequence_index=0):
"""
Get the word that contains the char at the given position in the input sequence.
Args:
char_pos (:obj:`int`):
The position of a char in the input string
sequence_index (:obj:`int`, defaults to :obj:`0`):
The index of the sequence that contains the target char
Returns:
:obj:`int`: The index of the word that contains this char in the input sequence
"""
pass
@property
def ids(self):
"""
The generated IDs
The IDs are the main input to a Language Model. They are the token indices,
the numerical representations that a LM understands.
Returns:
:obj:`List[int]`: The list of IDs
"""
pass
@staticmethod
def merge(encodings, growing_offsets=True):
"""
Merge the list of encodings into one final :class:`~tokenizers.Encoding`
Args:
encodings (A :obj:`List` of :class:`~tokenizers.Encoding`):
The list of encodings that should be merged in one
growing_offsets (:obj:`bool`, defaults to :obj:`True`):
Whether the offsets should accumulate while merging
Returns:
:class:`~tokenizers.Encoding`: The resulting Encoding
"""
pass
@property
def n_sequences(self):
"""
The number of sequences represented
Returns:
:obj:`int`: The number of sequences in this :class:`~tokenizers.Encoding`
"""
pass
@property
def offsets(self):
"""
The offsets associated to each token
        These offsets let you slice the input string, and thus retrieve the original
part that led to producing the corresponding token.
Returns:
A :obj:`List` of :obj:`Tuple[int, int]`: The list of offsets
"""
pass
@property
def overflowing(self):
"""
A :obj:`List` of overflowing :class:`~tokenizers.Encoding`
When using truncation, the :class:`~tokenizers.Tokenizer` takes care of splitting
the output into as many pieces as required to match the specified maximum length.
This field lets you retrieve all the subsequent pieces.
When you use pairs of sequences, the overflowing pieces will contain enough
variations to cover all the possible combinations, while respecting the provided
maximum length.
"""
pass
def pad(self, length, direction="right", pad_id=0, pad_type_id=0, pad_token="[PAD]"):
"""
Pad the :class:`~tokenizers.Encoding` at the given length
Args:
length (:obj:`int`):
The desired length
            direction (:obj:`str`, defaults to :obj:`right`):
The expected padding direction. Can be either :obj:`right` or :obj:`left`
pad_id (:obj:`int`, defaults to :obj:`0`):
The ID corresponding to the padding token
pad_type_id (:obj:`int`, defaults to :obj:`0`):
The type ID corresponding to the padding token
pad_token (:obj:`str`, defaults to `[PAD]`):
The pad token to use
"""
pass
@property
def sequence_ids(self):
"""
The generated sequence indices.
They represent the index of the input sequence associated to each token.
The sequence id can be None if the token is not related to any input sequence,
like for example with special tokens.
Returns:
A :obj:`List` of :obj:`Optional[int]`: A list of optional sequence index.
"""
pass
def set_sequence_id(self, sequence_id):
"""
Set the given sequence index
Set the given sequence index for the whole range of tokens contained in this
:class:`~tokenizers.Encoding`.
"""
pass
@property
def special_tokens_mask(self):
"""
The special token mask
This indicates which tokens are special tokens, and which are not.
Returns:
:obj:`List[int]`: The special tokens mask
"""
pass
def token_to_chars(self, token_index):
"""
Get the offsets of the token at the given index.
The returned offsets are related to the input sequence that contains the
token. In order to determine in which input sequence it belongs, you
must call :meth:`~tokenizers.Encoding.token_to_sequence()`.
Args:
token_index (:obj:`int`):
The index of a token in the encoded sequence.
Returns:
:obj:`Tuple[int, int]`: The token offsets :obj:`(first, last + 1)`
"""
pass
def token_to_sequence(self, token_index):
"""
Get the index of the sequence represented by the given token.
In the general use case, this method returns :obj:`0` for a single sequence or
the first sequence of a pair, and :obj:`1` for the second sequence of a pair
Args:
token_index (:obj:`int`):
The index of a token in the encoded sequence.
Returns:
:obj:`int`: The sequence id of the given token
"""
pass
def token_to_word(self, token_index):
"""
Get the index of the word that contains the token in one of the input sequences.
The returned word index is related to the input sequence that contains
the token. In order to determine in which input sequence it belongs, you
must call :meth:`~tokenizers.Encoding.token_to_sequence()`.
Args:
token_index (:obj:`int`):
The index of a token in the encoded sequence.
Returns:
:obj:`int`: The index of the word in the relevant input sequence.
"""
pass
@property
def tokens(self):
"""
The generated tokens
They are the string representation of the IDs.
Returns:
:obj:`List[str]`: The list of tokens
"""
pass
def truncate(self, max_length, stride=0, direction="right"):
"""
Truncate the :class:`~tokenizers.Encoding` at the given length
If this :class:`~tokenizers.Encoding` represents multiple sequences, when truncating
this information is lost. It will be considered as representing a single sequence.
Args:
max_length (:obj:`int`):
The desired length
stride (:obj:`int`, defaults to :obj:`0`):
The length of previous content to be included in each overflowing piece
direction (:obj:`str`, defaults to :obj:`right`):
Truncate direction
"""
pass
@property
def type_ids(self):
"""
The generated type IDs
Generally used for tasks like sequence classification or question answering,
these tokens let the LM know which input sequence corresponds to each tokens.
Returns:
:obj:`List[int]`: The list of type ids
"""
pass
@property
def word_ids(self):
"""
The generated word indices.
They represent the index of the word associated to each token.
When the input is pre-tokenized, they correspond to the ID of the given input label,
otherwise they correspond to the words indices as defined by the
:class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used.
For special tokens and such (any token that was generated from something that was
not part of the input), the output is :obj:`None`
Returns:
A :obj:`List` of :obj:`Optional[int]`: A list of optional word index.
"""
pass
def word_to_chars(self, word_index, sequence_index=0):
"""
Get the offsets of the word at the given index in one of the input sequences.
Args:
word_index (:obj:`int`):
The index of a word in one of the input sequences.
sequence_index (:obj:`int`, defaults to :obj:`0`):
The index of the sequence that contains the target word
Returns:
:obj:`Tuple[int, int]`: The range of characters (span) :obj:`(first, last + 1)`
"""
pass
def word_to_tokens(self, word_index, sequence_index=0):
"""
Get the encoded tokens corresponding to the word at the given index
in one of the input sequences.
Args:
word_index (:obj:`int`):
The index of a word in one of the input sequences.
sequence_index (:obj:`int`, defaults to :obj:`0`):
The index of the sequence that contains the target word
Returns:
:obj:`Tuple[int, int]`: The range of tokens: :obj:`(first, last + 1)`
"""
pass
@property
def words(self):
"""
The generated word indices.
.. warning::
This is deprecated and will be removed in a future version.
Please use :obj:`~tokenizers.Encoding.word_ids` instead.
They represent the index of the word associated to each token.
When the input is pre-tokenized, they correspond to the ID of the given input label,
otherwise they correspond to the words indices as defined by the
:class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used.
For special tokens and such (any token that was generated from something that was
not part of the input), the output is :obj:`None`
Returns:
A :obj:`List` of :obj:`Optional[int]`: A list of optional word index.
"""
pass
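The relationship between `offsets` and `char_to_token` described above can be illustrated with a small sketch (a hypothetical helper, not part of the API):

```python
from typing import List, Optional, Tuple


def char_to_token(offsets: List[Tuple[int, int]], char_pos: int) -> Optional[int]:
    """Given the per-token (first, last + 1) offsets into the input string,
    return the index of the token whose span contains `char_pos`, or None
    if no token covers that character (e.g. padding or out of range)."""
    for i, (start, end) in enumerate(offsets):
        if start <= char_pos < end:
            return i
    return None
```

The real implementation also resolves a `sequence_index` first; this sketch assumes a single sequence.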
class NormalizedString:
"""
NormalizedString
A NormalizedString takes care of modifying an "original" string, to obtain a "normalized" one.
While making all the requested modifications, it keeps track of the alignment information
between the two versions of the string.
Args:
sequence: str:
The string sequence used to initialize this NormalizedString
"""
def append(self, s):
"""
Append the given sequence to the string
"""
pass
def clear(self):
"""
Clears the string
"""
pass
def filter(self, func):
"""
Filter each character of the string using the given func
"""
pass
def for_each(self, func):
"""
Calls the given function for each character of the string
"""
pass
def lowercase(self):
"""
Lowercase the string
"""
pass
def lstrip(self):
"""
Strip the left of the string
"""
pass
def map(self, func):
"""
Calls the given function for each character of the string
Replaces each character of the string using the returned value. Each
        returned value **must** be a str of length 1 (i.e. a character).
"""
pass
def nfc(self):
"""
Runs the NFC normalization
"""
pass
def nfd(self):
"""
Runs the NFD normalization
"""
pass
def nfkc(self):
"""
Runs the NFKC normalization
"""
pass
def nfkd(self):
"""
Runs the NFKD normalization
"""
pass
@property
def normalized(self):
"""
The normalized part of the string
"""
pass
def prepend(self, s):
"""
Prepend the given sequence to the string
"""
pass
def replace(self, pattern, content):
"""
Replace the content of the given pattern with the provided content
Args:
pattern: Pattern:
A pattern used to match the string. Usually a string or a Regex
content: str:
The content to be used as replacement
"""
pass
def rstrip(self):
"""
Strip the right of the string
"""
pass
def slice(self, range):
"""
Slice the string using the given range
"""
pass
def split(self, pattern, behavior):
"""
Split the NormalizedString using the given pattern and the specified behavior
Args:
pattern: Pattern:
A pattern used to split the string. Usually a string or a regex built with `tokenizers.Regex`
behavior: SplitDelimiterBehavior:
The behavior to use when splitting.
Choices: "removed", "isolated", "merged_with_previous", "merged_with_next",
"contiguous"
Returns:
A list of NormalizedString, representing each split
"""
pass
def strip(self):
"""
Strip both ends of the string
"""
pass
def uppercase(self):
"""
Uppercase the string
"""
pass
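The `nfc`/`nfd`/`nfkc`/`nfkd` methods above apply the standard Unicode normalization forms in place. As a rough illustration of what those forms do (using Python's standard `unicodedata` module, not the bindings themselves):

```python
import unicodedata

# "é" as a single precomposed code point (NFC form)
s = "\u00e9"
# NFD decomposes it into "e" + a combining acute accent
nfd = unicodedata.normalize("NFD", s)
# NFC recomposes the pair back into one code point
nfc = unicodedata.normalize("NFC", nfd)

print(len(s), len(nfd), len(nfc))  # 1 2 1
```

This is why normalization changes string length, and why the `NormalizedString` has to track alignment between the original and normalized versions.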
class PreTokenizedString:
"""
PreTokenizedString
Wrapper over a string, that provides a way to normalize, pre-tokenize, tokenize the
underlying string, while keeping track of the alignment information (offsets).
The PreTokenizedString manages what we call `splits`. Each split represents a substring
which is a subpart of the original string, with the relevant offsets and tokens.
When calling one of the methods used to modify the PreTokenizedString (namely one of
`split`, `normalize` or `tokenize`), only the `splits` that don't have any associated
tokens will get modified.
Args:
sequence: str:
The string sequence used to initialize this PreTokenizedString
"""
def __init__(self, sequence):
pass
def get_splits(self, offset_referential="original", offset_type="char"):
"""
Get the splits currently managed by the PreTokenizedString
Args:
offset_referential: :obj:`str`
Whether the returned splits should have offsets expressed relative
to the original string, or the normalized one. choices: "original", "normalized".
offset_type: :obj:`str`
Whether the returned splits should have offsets expressed in bytes or chars.
When slicing a str, we usually want to use chars, which is the default value.
In some cases it might be interesting to get these offsets expressed in bytes,
so it is possible to change this here.
choices: "char", "bytes"
Returns:
A list of splits
"""
pass
def normalize(self, func):
"""
Normalize each split of the `PreTokenizedString` using the given `func`
Args:
func: Callable[[NormalizedString], None]:
The function used to normalize each underlying split. This function
does not need to return anything, just calling the methods on the provided
NormalizedString allows its modification.
"""
pass
def split(self, func):
"""
Split the PreTokenizedString using the given `func`
Args:
func: Callable[[index, NormalizedString], List[NormalizedString]]:
The function used to split each underlying split.
It is expected to return a list of `NormalizedString`, that represent the new
splits. If the given `NormalizedString` does not need any splitting, we can
just return it directly.
In order for the offsets to be tracked accurately, any returned `NormalizedString`
should come from calling either `.split` or `.slice` on the received one.
"""
pass
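To see why offset tracking matters here, a toy pure-Python pre-tokenizer (an illustration, not the bindings API) that splits on whitespace while recording each piece's `(start, end)` character offsets in the original string:

```python
def whitespace_split(text):
    """Split on runs of whitespace, keeping (start, end) char offsets."""
    splits, start = [], None
    for i, ch in enumerate(text):
        if ch.isspace():
            if start is not None:
                splits.append((text[start:i], (start, i)))
                start = None
        elif start is None:
            start = i
    if start is not None:
        splits.append((text[start:], (start, len(text))))
    return splits

print(whitespace_split("My name is John"))
# [('My', (0, 2)), ('name', (3, 7)), ('is', (8, 10)), ('John', (11, 15))]
```

These are exactly the kinds of offsets that end up in the final `Encoding`, which is why any returned `NormalizedString` should come from `.split` or `.slice` on the received one.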
def to_encoding(self, type_id=0, word_idx=None):
"""
Return an Encoding generated from this PreTokenizedString
Args:
type_id: int = 0:
The type_id to be used on the generated Encoding.
word_idx: Optional[int] = None:
An optional word index to be used for each token of this Encoding. If provided,
all the word indices in the generated Encoding will use this value, instead
of the one automatically tracked during pre-tokenization.
Returns:
An Encoding
"""
pass
def tokenize(self, func):
"""
Tokenize each split of the `PreTokenizedString` using the given `func`
Args:
func: Callable[[str], List[Token]]:
The function used to tokenize each underlying split. This function must return
a list of Token generated from the input str.
"""
pass
class Regex:
"""
Instantiate a new Regex with the given pattern
"""
def __init__(self, pattern):
pass
class Token:
pass
class Tokenizer:
"""
A :obj:`Tokenizer` works as a pipeline. It processes some raw text as input
and outputs an :class:`~tokenizers.Encoding`.
Args:
model (:class:`~tokenizers.models.Model`):
The core algorithm that this :obj:`Tokenizer` should be using.
"""
def __init__(self, model):
pass
def add_special_tokens(self, tokens):
"""
Add the given special tokens to the Tokenizer.
If these tokens are already part of the vocabulary, it just lets the Tokenizer know about
them. If they don't exist, the Tokenizer creates them, giving them a new id.
These special tokens will never be processed by the model (ie won't be split into
multiple tokens), and they can be removed from the output when decoding.
Args:
tokens (A :obj:`List` of :class:`~tokenizers.AddedToken` or :obj:`str`):
The list of special tokens we want to add to the vocabulary. Each token can either
be a string or an instance of :class:`~tokenizers.AddedToken` for more
customization.
Returns:
:obj:`int`: The number of tokens that were created in the vocabulary
"""
pass
def add_tokens(self, tokens):
"""
Add the given tokens to the vocabulary
The given tokens are added only if they don't already exist in the vocabulary.
Each token then gets a newly attributed id.
Args:
tokens (A :obj:`List` of :class:`~tokenizers.AddedToken` or :obj:`str`):
The list of tokens we want to add to the vocabulary. Each token can be either a
string or an instance of :class:`~tokenizers.AddedToken` for more customization.
Returns:
:obj:`int`: The number of tokens that were created in the vocabulary
"""
pass
def decode(self, ids, skip_special_tokens=True):
"""
Decode the given list of ids back to a string
This is used to decode anything coming back from a Language Model
Args:
ids (A :obj:`List/Tuple` of :obj:`int`):
The list of ids that we want to decode
skip_special_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether the special tokens should be removed from the decoded string
Returns:
:obj:`str`: The decoded string
"""
pass
def decode_batch(self, sequences, skip_special_tokens=True):
"""
Decode a batch of ids back to their corresponding string
Args:
sequences (:obj:`List` of :obj:`List[int]`):
The batch of sequences we want to decode
skip_special_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether the special tokens should be removed from the decoded strings
Returns:
:obj:`List[str]`: A list of decoded strings
"""
pass
@property
def decoder(self):
"""
The `optional` :class:`~tokenizers.decoders.Decoder` in use by the Tokenizer
"""
pass
def enable_padding(
self, direction="right", pad_id=0, pad_type_id=0, pad_token="[PAD]", length=None, pad_to_multiple_of=None
):
"""
Enable the padding
Args:
direction (:obj:`str`, `optional`, defaults to :obj:`right`):
The direction in which to pad. Can be either ``right`` or ``left``
pad_to_multiple_of (:obj:`int`, `optional`):
If specified, the padding length should always snap to the next multiple of the
given value. For example, if we were going to pad with a length of 250 but
``pad_to_multiple_of=8`` then we will pad to 256.
pad_id (:obj:`int`, defaults to 0):
The id to be used when padding
pad_type_id (:obj:`int`, defaults to 0):
The type id to be used when padding
pad_token (:obj:`str`, defaults to :obj:`[PAD]`):
The pad token to be used when padding
length (:obj:`int`, `optional`):
If specified, the length at which to pad. If not specified we pad using the size of
the longest sequence in a batch.
"""
pass
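The interaction between `length` and `pad_to_multiple_of` can be sketched as a small pure-Python computation (an illustration of the documented behavior, not the actual implementation):

```python
def padded_length(longest, length=None, pad_to_multiple_of=None):
    """Compute the target length each sequence in a batch is padded to."""
    # Fixed length if given, otherwise the longest sequence in the batch
    target = length if length is not None else longest
    if pad_to_multiple_of is not None and target % pad_to_multiple_of != 0:
        # Snap up to the next multiple of the given value
        target += pad_to_multiple_of - target % pad_to_multiple_of
    return target

print(padded_length(250, pad_to_multiple_of=8))  # 256
```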
def enable_truncation(self, max_length, stride=0, strategy="longest_first", direction="right"):
"""
Enable truncation
Args:
max_length (:obj:`int`):
The max length at which to truncate
stride (:obj:`int`, `optional`):
The length of the previous first sequence to be included in the overflowing
sequence
strategy (:obj:`str`, `optional`, defaults to :obj:`longest_first`):
The strategy used for truncation. Can be one of ``longest_first``, ``only_first`` or
``only_second``.
direction (:obj:`str`, defaults to :obj:`right`):
The direction in which to truncate
"""
pass
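The difference between the truncation strategies can be illustrated with a toy pure-Python sketch (an approximation of the behavior on a pair of token lists, not the real implementation):

```python
def truncate(a, b, max_length, strategy="longest_first"):
    """Truncate a pair of token lists down to max_length total tokens."""
    if strategy == "only_first":
        # Only the first sequence may lose tokens
        a = a[: max(0, max_length - len(b))]
    elif strategy == "only_second":
        # Only the second sequence may lose tokens
        b = b[: max(0, max_length - len(a))]
    else:  # longest_first: trim one token at a time from the longer side
        while len(a) + len(b) > max_length:
            if len(a) >= len(b):
                a = a[:-1]
            else:
                b = b[:-1]
    return a, b

print(truncate([1, 2, 3, 4], [5, 6], 4))  # ([1, 2], [5, 6])
```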
def encode(self, sequence, pair=None, is_pretokenized=False, add_special_tokens=True):
"""
Encode the given sequence and pair. This method can process raw text sequences
as well as already pre-tokenized sequences.
Example:
Here are some examples of the inputs that are accepted::
encode("A single sequence")
encode("A sequence", "And its pair")
encode([ "A", "pre", "tokenized", "sequence" ], is_pretokenized=True)
encode(
[ "A", "pre", "tokenized", "sequence" ], [ "And", "its", "pair" ],
is_pretokenized=True
)
Args:
sequence (:obj:`~tokenizers.InputSequence`):
The main input sequence we want to encode. This sequence can be either raw
text or pre-tokenized, according to the ``is_pretokenized`` argument:
- If ``is_pretokenized=False``: :class:`~tokenizers.TextInputSequence`
- If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedInputSequence`
pair (:obj:`~tokenizers.InputSequence`, `optional`):
An optional input sequence. The expected format is the same that for ``sequence``.
is_pretokenized (:obj:`bool`, defaults to :obj:`False`):
Whether the input is already pre-tokenized
add_special_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether to add the special tokens
Returns:
:class:`~tokenizers.Encoding`: The encoded result
"""
pass
def encode_batch(self, input, is_pretokenized=False, add_special_tokens=True):
"""
Encode the given batch of inputs. This method accepts both raw text sequences
as well as already pre-tokenized sequences.
Example:
Here are some examples of the inputs that are accepted::
encode_batch([
"A single sequence",
("A tuple with a sequence", "And its pair"),
[ "A", "pre", "tokenized", "sequence" ],
([ "A", "pre", "tokenized", "sequence" ], "And its pair")
])
Args:
input (A :obj:`List`/:obj:`Tuple` of :obj:`~tokenizers.EncodeInput`):
A list of single sequences or pair sequences to encode. Each sequence
can be either raw text or pre-tokenized, according to the ``is_pretokenized``
argument:
- If ``is_pretokenized=False``: :class:`~tokenizers.TextEncodeInput`
- If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedEncodeInput`
is_pretokenized (:obj:`bool`, defaults to :obj:`False`):
Whether the input is already pre-tokenized
add_special_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether to add the special tokens
Returns:
A :obj:`List` of :class:`~tokenizers.Encoding`: The encoded batch
"""
pass
def encode_batch_fast(self, input, is_pretokenized=False, add_special_tokens=True):
"""
Encode the given batch of inputs. This method is faster than `encode_batch`
because it doesn't keep track of offsets; they will all be zero.
Example:
Here are some examples of the inputs that are accepted::
encode_batch_fast([
"A single sequence",
("A tuple with a sequence", "And its pair"),
[ "A", "pre", "tokenized", "sequence" ],
([ "A", "pre", "tokenized", "sequence" ], "And its pair")
])
Args:
input (A :obj:`List`/:obj:`Tuple` of :obj:`~tokenizers.EncodeInput`):
A list of single sequences or pair sequences to encode. Each sequence
can be either raw text or pre-tokenized, according to the ``is_pretokenized``
argument:
- If ``is_pretokenized=False``: :class:`~tokenizers.TextEncodeInput`
- If ``is_pretokenized=True``: :class:`~tokenizers.PreTokenizedEncodeInput`
is_pretokenized (:obj:`bool`, defaults to :obj:`False`):
Whether the input is already pre-tokenized
add_special_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether to add the special tokens
Returns:
A :obj:`List` of :class:`~tokenizers.Encoding`: The encoded batch
"""
pass
@property
def encode_special_tokens(self):
"""
Modifies whether the tokenizer encodes the special tokens
during encoding.
Args:
value (:obj:`bool`):
Whether to use the special tokens or not
"""
pass
@staticmethod
def from_buffer(buffer):
"""
Instantiate a new :class:`~tokenizers.Tokenizer` from the given buffer.
Args:
buffer (:obj:`bytes`):
A buffer containing a previously serialized :class:`~tokenizers.Tokenizer`
Returns:
:class:`~tokenizers.Tokenizer`: The new tokenizer
"""
pass
@staticmethod
def from_file(path):
"""
Instantiate a new :class:`~tokenizers.Tokenizer` from the file at the given path.
Args:
path (:obj:`str`):
A path to a local JSON file representing a previously serialized
:class:`~tokenizers.Tokenizer`
Returns:
:class:`~tokenizers.Tokenizer`: The new tokenizer
"""
pass
@staticmethod
def from_pretrained(identifier, revision="main", auth_token=None):
"""
Instantiate a new :class:`~tokenizers.Tokenizer` from an existing file on the
Hugging Face Hub.
Args:
identifier (:obj:`str`):
The identifier of a Model on the Hugging Face Hub, that contains
a tokenizer.json file
revision (:obj:`str`, defaults to `main`):
A branch or commit id
auth_token (:obj:`str`, `optional`, defaults to `None`):
An optional auth token used to access private repositories on the
Hugging Face Hub
Returns:
:class:`~tokenizers.Tokenizer`: The new tokenizer
"""
pass
@staticmethod
def from_str(json):
"""
Instantiate a new :class:`~tokenizers.Tokenizer` from the given JSON string.
Args:
json (:obj:`str`):
A valid JSON string representing a previously serialized
:class:`~tokenizers.Tokenizer`
Returns:
:class:`~tokenizers.Tokenizer`: The new tokenizer
"""
pass
def get_added_tokens_decoder(self):
"""
Get the added tokens decoder, mapping each added token id to its :class:`~tokenizers.AddedToken`
Returns:
:obj:`Dict[int, AddedToken]`: The added tokens, indexed by their id
"""
pass
def get_vocab(self, with_added_tokens=True):
"""
Get the underlying vocabulary
Args:
with_added_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether to include the added tokens
Returns:
:obj:`Dict[str, int]`: The vocabulary
"""
pass
def get_vocab_size(self, with_added_tokens=True):
"""
Get the size of the underlying vocabulary
Args:
with_added_tokens (:obj:`bool`, defaults to :obj:`True`):
Whether to include the added tokens
Returns:
:obj:`int`: The size of the vocabulary
"""
pass
def id_to_token(self, id):
"""
Convert the given id to its corresponding token if it exists
Args:
id (:obj:`int`):
The id to convert
Returns:
:obj:`Optional[str]`: An optional token, :obj:`None` if out of vocabulary
"""
pass
@property
def model(self):
"""
The :class:`~tokenizers.models.Model` in use by the Tokenizer
"""
pass
def no_padding(self):
"""
Disable padding
"""
pass
def no_truncation(self):
"""
Disable truncation
"""
pass
@property
def normalizer(self):
"""
The `optional` :class:`~tokenizers.normalizers.Normalizer` in use by the Tokenizer
"""
pass
def num_special_tokens_to_add(self, is_pair):
"""
Return the number of special tokens that would be added for single/pair sentences.
Args:
is_pair (:obj:`bool`):
Whether the input would be a pair of sequences rather than a single sentence
Returns:
:obj:`int`: The number of special tokens that would be added
"""
pass
@property
def padding(self):
"""
Get the current padding parameters
`Cannot be set, use` :meth:`~tokenizers.Tokenizer.enable_padding` `instead`
Returns:
(:obj:`dict`, `optional`):
A dict with the current padding parameters if padding is enabled
"""
pass
def post_process(self, encoding, pair=None, add_special_tokens=True):
"""
Apply all the post-processing steps to the given encodings.
The various steps are:
1. Truncate according to the set truncation params (provided with
:meth:`~tokenizers.Tokenizer.enable_truncation`)
2. Apply the :class:`~tokenizers.processors.PostProcessor`
3. Pad according to the set padding params (provided with
:meth:`~tokenizers.Tokenizer.enable_padding`)
Args:
encoding (:class:`~tokenizers.Encoding`):
The :class:`~tokenizers.Encoding` corresponding to the main sequence.
pair (:class:`~tokenizers.Encoding`, `optional`):
An optional :class:`~tokenizers.Encoding` corresponding to the pair sequence.
add_special_tokens (:obj:`bool`):
Whether to add the special tokens
Returns:
:class:`~tokenizers.Encoding`: The final post-processed encoding
"""
pass
@property
def post_processor(self):
"""
The `optional` :class:`~tokenizers.processors.PostProcessor` in use by the Tokenizer
"""
pass
@property
def pre_tokenizer(self):
"""
The `optional` :class:`~tokenizers.pre_tokenizers.PreTokenizer` in use by the Tokenizer
"""
pass
def save(self, path, pretty=True):
"""
Save the :class:`~tokenizers.Tokenizer` to the file at the given path.
Args:
path (:obj:`str`):
A path to a file in which to save the serialized tokenizer.
pretty (:obj:`bool`, defaults to :obj:`True`):
Whether the JSON file should be pretty formatted.
"""
pass
def to_str(self, pretty=False):
"""
Gets a serialized string representing this :class:`~tokenizers.Tokenizer`.
Args:
pretty (:obj:`bool`, defaults to :obj:`False`):
Whether the JSON string should be pretty formatted.
Returns:
:obj:`str`: A string representing the serialized Tokenizer
"""
pass
def token_to_id(self, token):
"""
Convert the given token to its corresponding id if it exists
Args:
token (:obj:`str`):
The token to convert
Returns:
:obj:`Optional[int]`: An optional id, :obj:`None` if out of vocabulary
"""
pass
def train(self, files, trainer=None):
"""
Train the Tokenizer using the given files.
Reads the files line by line, while keeping all the whitespace, even new lines.
If you want to train from data stored in-memory, you can check
:meth:`~tokenizers.Tokenizer.train_from_iterator`
Args:
files (:obj:`List[str]`):
A list of paths to the files that we should use for training
trainer (:obj:`~tokenizers.trainers.Trainer`, `optional`):
An optional trainer that should be used to train our Model
"""
pass
def train_from_iterator(self, iterator, trainer=None, length=None):
"""
Train the Tokenizer using the provided iterator.
You can provide anything that is a Python Iterator
* A list of sequences :obj:`List[str]`
* A generator that yields :obj:`str` or :obj:`List[str]`
* A Numpy array of strings
* ...
Args:
iterator (:obj:`Iterator`):
Any iterator over strings or list of strings
trainer (:obj:`~tokenizers.trainers.Trainer`, `optional`):
An optional trainer that should be used to train our Model
length (:obj:`int`, `optional`):
The total number of sequences in the iterator. This is used to
provide meaningful progress tracking
"""
pass
@property
def truncation(self):
"""
Get the currently set truncation parameters
`Cannot be set, use` :meth:`~tokenizers.Tokenizer.enable_truncation` `instead`
Returns:
(:obj:`dict`, `optional`):
A dict with the current truncation parameters if truncation is enabled
"""
pass
|
tokenizers/bindings/python/py_src/tokenizers/__init__.pyi/0
|
{
"file_path": "tokenizers/bindings/python/py_src/tokenizers/__init__.pyi",
"repo_id": "tokenizers",
"token_count": 17199
}
| 252
|
# Generated content DO NOT EDIT
from .. import processors
PostProcessor = processors.PostProcessor
BertProcessing = processors.BertProcessing
ByteLevel = processors.ByteLevel
RobertaProcessing = processors.RobertaProcessing
Sequence = processors.Sequence
TemplateProcessing = processors.TemplateProcessing
|
tokenizers/bindings/python/py_src/tokenizers/processors/__init__.py/0
|
{
"file_path": "tokenizers/bindings/python/py_src/tokenizers/processors/__init__.py",
"repo_id": "tokenizers",
"token_count": 74
}
| 253
|
#![warn(clippy::all)]
#![allow(clippy::upper_case_acronyms)]
// Many false positives with pyo3, it seems: &str and &PyAny get flagged
#![allow(clippy::borrow_deref_ref)]
extern crate tokenizers as tk;
mod decoders;
mod encoding;
mod error;
mod models;
mod normalizers;
mod pre_tokenizers;
mod processors;
mod token;
mod tokenizer;
mod trainers;
mod utils;
use pyo3::prelude::*;
use pyo3::wrap_pymodule;
pub const VERSION: &str = env!("CARGO_PKG_VERSION");
// For users using multiprocessing in python, it is quite easy to fork the process running
// tokenizers, ending up with a deadlock because we internally make use of multithreading. So
// we register a callback to be called in the event of a fork so that we can warn the user.
#[cfg(target_family = "unix")]
static mut REGISTERED_FORK_CALLBACK: bool = false;
#[cfg(target_family = "unix")]
extern "C" fn child_after_fork() {
use tk::parallelism::*;
if has_parallelism_been_used() && !is_parallelism_configured() {
eprintln!(
"huggingface/tokenizers: The current process just got forked, after parallelism has \
already been used. Disabling parallelism to avoid deadlocks..."
);
eprintln!("To disable this warning, you can either:");
eprintln!(
"\t- Avoid using `tokenizers` before the fork if possible\n\
\t- Explicitly set the environment variable {}=(true | false)",
ENV_VARIABLE
);
set_parallelism(false);
}
}
/// Tokenizers Module
#[pymodule]
pub fn tokenizers(m: &Bound<'_, PyModule>) -> PyResult<()> {
let _ = env_logger::try_init_from_env("TOKENIZERS_LOG");
// Register the fork callback
#[cfg(target_family = "unix")]
unsafe {
if !REGISTERED_FORK_CALLBACK {
libc::pthread_atfork(None, None, Some(child_after_fork));
REGISTERED_FORK_CALLBACK = true;
}
}
m.add_class::<tokenizer::PyTokenizer>()?;
m.add_class::<tokenizer::PyAddedToken>()?;
m.add_class::<token::PyToken>()?;
m.add_class::<encoding::PyEncoding>()?;
m.add_class::<utils::PyRegex>()?;
m.add_class::<utils::PyNormalizedString>()?;
m.add_class::<utils::PyPreTokenizedString>()?;
m.add_wrapped(wrap_pymodule!(models::models))?;
m.add_wrapped(wrap_pymodule!(pre_tokenizers::pre_tokenizers))?;
m.add_wrapped(wrap_pymodule!(decoders::decoders))?;
m.add_wrapped(wrap_pymodule!(processors::processors))?;
m.add_wrapped(wrap_pymodule!(normalizers::normalizers))?;
m.add_wrapped(wrap_pymodule!(trainers::trainers))?;
m.add("__version__", env!("CARGO_PKG_VERSION"))?;
Ok(())
}
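The fork callback above warns Python users who fork a process after the library has already used its internal thread pool. On the Python side, this is typically handled by configuring parallelism explicitly via the `TOKENIZERS_PARALLELISM` environment variable (which, to my understanding, is the `ENV_VARIABLE` referenced above) before any fork happens:

```python
import os

# Disable the internal thread pool before any fork (e.g. DataLoader workers),
# so the fork-after-parallelism warning never triggers.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
print(os.environ["TOKENIZERS_PARALLELISM"])  # false
```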
|
tokenizers/bindings/python/src/lib.rs/0
|
{
"file_path": "tokenizers/bindings/python/src/lib.rs",
"repo_id": "tokenizers",
"token_count": 1087
}
| 254
|
from tokenizers import BertWordPieceTokenizer
from ..utils import bert_files, data_dir, multiprocessing_with_parallelism
class TestBertWordPieceTokenizer:
def test_basic_encode(self, bert_files):
tokenizer = BertWordPieceTokenizer.from_file(bert_files["vocab"])
# Encode with special tokens by default
output = tokenizer.encode("My name is John", "pair")
assert output.ids == [101, 2026, 2171, 2003, 2198, 102, 3940, 102]
assert output.tokens == [
"[CLS]",
"my",
"name",
"is",
"john",
"[SEP]",
"pair",
"[SEP]",
]
assert output.offsets == [
(0, 0),
(0, 2),
(3, 7),
(8, 10),
(11, 15),
(0, 0),
(0, 4),
(0, 0),
]
assert output.type_ids == [0, 0, 0, 0, 0, 0, 1, 1]
# Can encode without the special tokens
output = tokenizer.encode("My name is John", "pair", add_special_tokens=False)
assert output.ids == [2026, 2171, 2003, 2198, 3940]
assert output.tokens == ["my", "name", "is", "john", "pair"]
assert output.offsets == [(0, 2), (3, 7), (8, 10), (11, 15), (0, 4)]
assert output.type_ids == [0, 0, 0, 0, 1]
def test_multiprocessing_with_parallelism(self, bert_files):
tokenizer = BertWordPieceTokenizer.from_file(bert_files["vocab"])
multiprocessing_with_parallelism(tokenizer, False)
multiprocessing_with_parallelism(tokenizer, True)
def test_train_from_iterator(self):
text = ["A first sentence", "Another sentence", "And a last one"]
tokenizer = BertWordPieceTokenizer()
tokenizer.train_from_iterator(text, show_progress=False)
output = tokenizer.encode("A sentence")
assert output.tokens == ["a", "sentence"]
|
tokenizers/bindings/python/tests/implementations/test_bert_wordpiece.py/0
|
{
"file_path": "tokenizers/bindings/python/tests/implementations/test_bert_wordpiece.py",
"repo_id": "tokenizers",
"token_count": 914
}
| 255
|
# Post-processors
<tokenizerslangcontent>
<python>
## BertProcessing
[[autodoc]] tokenizers.processors.BertProcessing
## ByteLevel
[[autodoc]] tokenizers.processors.ByteLevel
## RobertaProcessing
[[autodoc]] tokenizers.processors.RobertaProcessing
## TemplateProcessing
[[autodoc]] tokenizers.processors.TemplateProcessing
</python>
<rust>
The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website.
</rust>
<node>
The node API has not been documented yet.
</node>
</tokenizerslangcontent>
|
tokenizers/docs/source-doc-builder/api/post-processors.mdx/0
|
{
"file_path": "tokenizers/docs/source-doc-builder/api/post-processors.mdx",
"repo_id": "tokenizers",
"token_count": 174
}
| 256
|
Crates.io
----------------------------------------------------------------------------------------------------
๐ค Tokenizers is available on `crates.io <https://crates.io/crates/tokenizers>`__.
You just need to add it to your :obj:`Cargo.toml`::
tokenizers = "0.10"
|
tokenizers/docs/source/installation/rust.inc/0
|
{
"file_path": "tokenizers/docs/source/installation/rust.inc",
"repo_id": "tokenizers",
"token_count": 74
}
| 257
|
use tokenizers::Tokenizer;
fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let tokenizer = Tokenizer::from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct", None)?;
let data = std::fs::read_to_string("data/big.txt")?;
let data: Vec<_> = data.lines().collect();
let add_special_tokens = false;
tokenizer.encode_batch_char_offsets(data, add_special_tokens)?;
Ok(())
}
|
tokenizers/tokenizers/examples/encode_batch.rs/0
|
{
"file_path": "tokenizers/tokenizers/examples/encode_batch.rs",
"repo_id": "tokenizers",
"token_count": 165
}
| 258
|
import * as wasm from "unstable_wasm";
console.log(wasm.tokenize("ab"));
console.log(wasm.tokenize("abc"));
|
tokenizers/tokenizers/examples/unstable_wasm/www/index.js/0
|
{
"file_path": "tokenizers/tokenizers/examples/unstable_wasm/www/index.js",
"repo_id": "tokenizers",
"token_count": 43
}
| 259
|
use super::{super::OrderedVocabIter, convert_merges_to_hashmap, BpeBuilder, Pair, BPE};
use serde::{
de::{Error, MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use std::collections::HashMap;
impl Serialize for BPE {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("BPE", 8)?;
// Start by small fields
model.serialize_field("type", "BPE")?;
model.serialize_field("dropout", &self.dropout)?;
model.serialize_field("unk_token", &self.unk_token)?;
model.serialize_field("continuing_subword_prefix", &self.continuing_subword_prefix)?;
model.serialize_field("end_of_word_suffix", &self.end_of_word_suffix)?;
model.serialize_field("fuse_unk", &self.fuse_unk)?;
model.serialize_field("byte_fallback", &self.byte_fallback)?;
model.serialize_field("ignore_merges", &self.ignore_merges)?;
// Then the large ones
let mut merges: Vec<(&Pair, &u32)> = self
.merges
.iter()
.map(|(pair, (rank, _))| (pair, rank))
.collect();
merges.sort_unstable_by_key(|k| *k.1);
let merges = merges
.into_iter()
.map(|(pair, _)| (self.vocab_r[&pair.0].clone(), self.vocab_r[&pair.1].clone()))
.collect::<Vec<_>>();
let ordered_vocab = OrderedVocabIter::new(&self.vocab_r);
model.serialize_field("vocab", &ordered_vocab)?;
model.serialize_field("merges", &merges)?;
model.end()
}
}
impl<'de> Deserialize<'de> for BPE {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"BPE",
&[
"type",
"dropout",
"unk_token",
"continuing_subword_prefix",
"end_of_word_suffix",
"fuse_unk",
"byte_fallback",
"ignore_merges",
"vocab",
"merges",
],
BPEVisitor,
)
}
}
struct BPEVisitor;
impl<'de> Visitor<'de> for BPEVisitor {
type Value = BPE;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct BPE")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = BpeBuilder::new();
let mut vocab: Option<HashMap<String, u32>> = None;
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum MergeType {
Tuple(Vec<(String, String)>),
Legacy(Vec<String>),
}
let mut merges: Option<MergeType> = None;
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"dropout" => {
if let Some(dropout) = map.next_value()? {
builder = builder.dropout(dropout);
}
}
"unk_token" => {
if let Some(unk) = map.next_value()? {
builder = builder.unk_token(unk);
}
}
"continuing_subword_prefix" => {
if let Some(prefix) = map.next_value()? {
builder = builder.continuing_subword_prefix(prefix);
}
}
"end_of_word_suffix" => {
if let Some(suffix) = map.next_value()? {
builder = builder.end_of_word_suffix(suffix);
}
}
"fuse_unk" => {
if let Some(suffix) = map.next_value()? {
builder = builder.fuse_unk(suffix);
}
}
"byte_fallback" => {
if let Some(suffix) = map.next_value()? {
builder = builder.byte_fallback(suffix);
}
}
"ignore_merges" => {
if let Some(suffix) = map.next_value()? {
builder = builder.ignore_merges(suffix);
}
}
"vocab" => vocab = Some(map.next_value()?),
"merges" => merges = Some(map.next_value()?),
"type" => match map.next_value()? {
"BPE" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"BPE",
))
}
},
_ => {}
}
}
if let (Some(vocab), Some(merges)) = (vocab, merges) {
let merges = match merges {
MergeType::Tuple(merges) => merges,
MergeType::Legacy(merges) => {
convert_merges_to_hashmap(merges.into_iter(), &vocab).map_err(Error::custom)?
}
};
builder = builder.vocab_and_merges(vocab, merges);
Ok(builder.build().map_err(Error::custom)?)
} else {
Err(Error::custom("Missing vocab/merges"))
}
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::models::bpe::Vocab;
#[test]
fn test_serialization() {
let vocab: Vocab = [
("<unk>".into(), 0),
("a".into(), 1),
("b".into(), 2),
("ab".into(), 3),
]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![("a".to_string(), "b".to_string())])
.unk_token("<unk>".to_string())
.ignore_merges(true)
.build()
.unwrap();
let legacy = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2,"ab":3},"merges":["a b"]}"#;
let legacy = serde_json::from_str(legacy).unwrap();
assert_eq!(bpe, legacy);
let data = serde_json::to_string(&bpe).unwrap();
assert_eq!(
data,
r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2,"ab":3},"merges":[["a","b"]]}"#
);
let reconstructed = serde_json::from_str(&data).unwrap();
assert_eq!(bpe, reconstructed);
// With a space in the token
let vocab: Vocab = [
("<unk>".into(), 0),
("a".into(), 1),
("b c d".into(), 2),
("ab c d".into(), 3),
]
.iter()
.cloned()
.collect();
let bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![("a".to_string(), "b c d".to_string())])
.unk_token("<unk>".to_string())
.ignore_merges(true)
.build()
.unwrap();
let data = serde_json::to_string(&bpe).unwrap();
assert_eq!(
data,
r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b c d":2,"ab c d":3},"merges":[["a","b c d"]]}"#
);
let reconstructed = serde_json::from_str(&data).unwrap();
assert_eq!(bpe, reconstructed);
}
#[test]
fn test_serialization_ignore_merges() {
let vocab: Vocab = [("<unk>".into(), 0), ("a".into(), 1), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let mut bpe = BpeBuilder::default()
.vocab_and_merges(vocab, vec![])
.unk_token("<unk>".to_string())
.ignore_merges(true)
.build()
.unwrap();
let bpe_string = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"ignore_merges":true,"vocab":{"<unk>":0,"a":1,"b":2},"merges":[]}"#;
assert_eq!(serde_json::from_str::<BPE>(bpe_string).unwrap(), bpe);
bpe.ignore_merges = false;
let bpe_string = r#"{"type":"BPE","dropout":null,"unk_token":"<unk>","continuing_subword_prefix":null,"end_of_word_suffix":null,"fuse_unk":false,"byte_fallback":false,"vocab":{"<unk>":0,"a":1,"b":2},"merges":[]}"#;
assert_eq!(serde_json::from_str::<BPE>(bpe_string).unwrap(), bpe);
}
}
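The `MergeType::Legacy` branch above accepts the old serialization where each merge is a single space-separated string, while the tuple format stores the two tokens separately. A rough pure-Python sketch of that conversion (a hypothetical helper; the real logic lives in `convert_merges_to_hashmap` on the Rust side):

```python
def convert_legacy_merges(merges):
    """Turn legacy 'a b' merge strings into ('a', 'b') tuples."""
    out = []
    for m in merges:
        # Legacy entries split on the first space, so tokens containing
        # spaces cannot be represented -- hence the newer tuple format.
        left, sep, right = m.partition(" ")
        if not sep:
            raise ValueError(f"malformed merge entry: {m!r}")
        out.append((left, right))
    return out

print(convert_legacy_merges(["a b", "ab c"]))  # [('a', 'b'), ('ab', 'c')]
```

This limitation is exactly what the "space in the token" serialization test above exercises with the `"b c d"` vocabulary entry.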
|
tokenizers/tokenizers/src/models/bpe/serialization.rs/0
|
{
"file_path": "tokenizers/tokenizers/src/models/bpe/serialization.rs",
"repo_id": "tokenizers",
"token_count": 4848
}
| 260
|
use crate::tokenizer::{NormalizedString, Normalizer, Result};
use serde::{Deserialize, Serialize};
use unicode_categories::UnicodeCategories;
/// Checks whether a character is whitespace
fn is_whitespace(c: char) -> bool {
// These are technically control characters but we count them as whitespace
match c {
'\t' | '\n' | '\r' => true,
_ => c.is_whitespace(),
}
}
/// Checks whether a character is a control character
fn is_control(c: char) -> bool {
// These are technically control characters but we count them as whitespace
match c {
'\t' | '\n' | '\r' => false,
// The definition of `is_control` here is quite large and contains also
// Cc, Cf, Cn or Co
// cf. https://unicode.org/reports/tr44/ (Table 12)
_ => c.is_other(),
}
}
/// Checks whether a character is chinese
/// This defines a "chinese character" as anything in the CJK Unicode block:
/// https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)
///
/// Note that the CJK Unicode block is NOT all Japanese and Korean characters,
/// despite its name. The modern Korean Hangul alphabet is a different block,
/// as is Japanese Hiragana and Katakana. Those alphabets are used to write
/// space-separated words, so they are not treated specially and handled
/// like all of the other languages.
fn is_chinese_char(c: char) -> bool {
matches!(
c as usize,
0x4E00..=0x9FFF |
0x3400..=0x4DBF |
0x20000..=0x2A6DF |
0x2A700..=0x2B73F |
0x2B740..=0x2B81F |
0x2B820..=0x2CEAF |
0xF900..=0xFAFF |
0x2F800..=0x2FA1F
)
}
#[derive(Copy, Clone, Debug, Deserialize, Serialize)]
#[serde(tag = "type")]
#[non_exhaustive]
pub struct BertNormalizer {
/// Whether to do the bert basic cleaning:
/// 1. Remove any control characters
/// 2. Replace all sorts of whitespace by the classic one ` `
pub clean_text: bool,
/// Whether to put spaces around chinese characters so they get split
pub handle_chinese_chars: bool,
/// Whether to strip accents
pub strip_accents: Option<bool>,
/// Whether to lowercase the input
pub lowercase: bool,
}
impl Default for BertNormalizer {
fn default() -> Self {
Self {
clean_text: true,
handle_chinese_chars: true,
strip_accents: None,
lowercase: true,
}
}
}
impl BertNormalizer {
pub fn new(
clean_text: bool,
handle_chinese_chars: bool,
strip_accents: Option<bool>,
lowercase: bool,
) -> Self {
Self {
clean_text,
handle_chinese_chars,
strip_accents,
lowercase,
}
}
fn do_clean_text(&self, normalized: &mut NormalizedString) {
normalized
.filter(|c| !(c as usize == 0 || c as usize == 0xfffd || is_control(c)))
.map(|c| if is_whitespace(c) { ' ' } else { c });
}
fn do_handle_chinese_chars(&self, normalized: &mut NormalizedString) {
let mut new_chars: Vec<(char, isize)> = vec![];
normalized.for_each(|c| {
if is_chinese_char(c) {
new_chars.extend([(' ', 0), (c, 1), (' ', 1)]);
} else {
new_chars.push((c, 0));
}
});
normalized.transform(new_chars, 0);
}
fn do_strip_accents(&self, normalized: &mut NormalizedString) {
normalized.nfd().filter(|c| !c.is_mark_nonspacing());
}
fn do_lowercase(&self, normalized: &mut NormalizedString) {
normalized.lowercase();
}
}
impl Normalizer for BertNormalizer {
fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
if self.clean_text {
self.do_clean_text(normalized);
}
if self.handle_chinese_chars {
self.do_handle_chinese_chars(normalized);
}
let strip_accents = self.strip_accents.unwrap_or(self.lowercase);
if strip_accents {
self.do_strip_accents(normalized);
}
if self.lowercase {
self.do_lowercase(normalized);
}
Ok(())
}
}
|
tokenizers/tokenizers/src/normalizers/bert.rs/0
|
{
"file_path": "tokenizers/tokenizers/src/normalizers/bert.rs",
"repo_id": "tokenizers",
"token_count": 1856
}
| 261
|
use crate::pre_tokenizers::PreTokenizerWrapper;
use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result};
use crate::utils::macro_rules_attribute;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq)]
#[macro_rules_attribute(impl_serde_type!)]
pub struct Sequence {
pretokenizers: Vec<PreTokenizerWrapper>,
}
impl Sequence {
pub fn new(pretokenizers: Vec<PreTokenizerWrapper>) -> Self {
Self { pretokenizers }
}
pub fn get_pre_tokenizers(&self) -> &[PreTokenizerWrapper] {
&self.pretokenizers
}
pub fn get_pre_tokenizers_mut(&mut self) -> &mut [PreTokenizerWrapper] {
&mut self.pretokenizers
}
}
impl PreTokenizer for Sequence {
fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> {
for pretokenizer in &self.pretokenizers {
pretokenizer.pre_tokenize(pretokenized)?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::pre_tokenizers::{punctuation::Punctuation, whitespace::WhitespaceSplit};
use crate::{OffsetReferential, OffsetType};
#[test]
fn sequence_basic() {
let pretokenizers = vec![
PreTokenizerWrapper::WhitespaceSplit(WhitespaceSplit),
PreTokenizerWrapper::Punctuation(Punctuation::default()),
];
let pretok = Sequence::new(pretokenizers);
let mut pretokenized: PreTokenizedString = "Hey friend!     How are you?!?".into();
pretok.pre_tokenize(&mut pretokenized).unwrap();
assert_eq!(
pretokenized
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, o, _)| (s, o))
.collect::<Vec<_>>(),
vec![
("Hey", (0, 3)),
("friend", (4, 10)),
("!", (10, 11)),
("How", (16, 19)),
("are", (20, 23)),
("you", (24, 27)),
("?", (27, 28)),
("!", (28, 29)),
("?", (29, 30)),
]
);
}
}
|
tokenizers/tokenizers/src/pre_tokenizers/sequence.rs/0
|
{
"file_path": "tokenizers/tokenizers/src/pre_tokenizers/sequence.rs",
"repo_id": "tokenizers",
"token_count": 1011
}
| 262
|
use crate::{
normalizer::Range, Encoding, NormalizedString, OffsetReferential, Offsets, Result, Token,
};
use std::collections::HashMap;
/// Various possible types of offsets
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum OffsetType {
Byte,
Char,
None,
}
/// Wrapper for a subpart of a `NormalizedString`.
///
/// This Split contains the underlying `NormalizedString` as well as its offsets
/// in the original string. These offsets are in the `original` referential.
/// It also contains any `Token` associated to the current split
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Split {
/// The underlying `NormalizedString`. Each SubString is represented by a `NormalizedString`
/// and in the end we might be carrying a lot of SubString representing various parts of the
/// original input string.
normalized: NormalizedString,
/// Optional Tokens associated to this Split
tokens: Option<Vec<Token>>,
}
impl From<NormalizedString> for Split {
fn from(n: NormalizedString) -> Self {
Self {
normalized: n,
tokens: None,
}
}
}
impl From<(NormalizedString, Option<Vec<Token>>)> for Split {
fn from(f: (NormalizedString, Option<Vec<Token>>)) -> Self {
Self {
normalized: f.0,
tokens: f.1,
}
}
}
/// The `PreTokenizedString` is in charge of splitting an underlying string,
/// making sure everything is fine while doing so, and providing ways to normalize
/// and tokenize these splits.
/// Once everything has been normalized and tokenized, the `PreTokenizedString` is able
/// to build an `Encoding` with all the relevant offsets and word ids, relative to the
/// original string.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PreTokenizedString {
original: String,
splits: Vec<Split>,
}
impl PreTokenizedString {
/// Split the `PreTokenizedString` by providing a `split_fn` in charge of splitting
/// each substring (`NormalizedString`) into multiple parts.
///
/// `split_fn` takes a `NormalizedString` and is in charge of returning an iterator
/// over the produced `NormalizedString`. `split_fn` is free to modify these
/// `NormalizedString` as relevant, as long as it respects the constraint stated below.
///
/// There is only one constraint that *MUST* be respected:
/// > The produced `NormalizedString`, if combined back together, must have the
/// > same `original` string as the original one given to `split_fn`. This concretely
/// > means that for the offset tracking to work as expected, `split_fn` must produce
/// > "splits" of the original string.
pub fn split<F, U, R>(&mut self, mut split_fn: F) -> Result<()>
where
F: FnMut(usize, NormalizedString) -> Result<U>,
U: IntoIterator<Item = R>,
R: Into<Split>,
{
// new_splits is at least as big as self.splits
let mut new_splits = Vec::with_capacity(self.splits.len());
for (i, original_split) in self.splits.drain(..).enumerate() {
if original_split.tokens.is_some() {
new_splits.push(original_split);
continue;
}
new_splits.extend(
split_fn(i, original_split.normalized)?
.into_iter()
.filter_map(|split| {
let split: Split = split.into();
if split.normalized.is_empty() {
None
} else {
Some(split)
}
}),
);
}
self.splits = new_splits;
Ok(())
}
/// Normalizes all the splits that do not have attached `Tokens`, using the provided
/// `normalize` function.
pub fn normalize<F>(&mut self, normalize: F) -> Result<()>
where
F: Fn(&mut NormalizedString) -> Result<()>,
{
for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) {
normalize(&mut split.normalized)?;
}
Ok(())
}
/// Tokenizes all the splits that do not have attached `Tokens`, using the provided
/// `tokenize` function.
pub fn tokenize<F>(&mut self, tokenize: F) -> Result<()>
where
F: Fn(&NormalizedString) -> Result<Vec<Token>>,
{
for split in self.splits.iter_mut().filter(|s| s.tokens.is_none()) {
split.tokens = Some(tokenize(&split.normalized)?);
}
Ok(())
}
/// Transform the current `PreTokenizedString` into an `Encoding`.
///
/// If a `word_idx` is provided, any word in the generated `Encoding`
/// will be set to this value. This is generally used with pre-tokenized
/// input, which does not need the `PreTokenizedString` to generate word ids.
///
/// This method will fail if some splits do not have associated `Token`.
pub fn into_encoding(
self,
word_idx: Option<u32>,
type_id: u32,
offset_type: OffsetType,
) -> Result<Encoding> {
if self.splits.is_empty() {
Ok(Encoding::default())
} else if !self.splits.iter().all(|split| split.tokens.is_some()) {
Err("Split has not been tokenized, call `PreTokenizedString::tokenize` first".into())
} else {
let offset_converter = match offset_type {
OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)),
OffsetType::Byte => None,
OffsetType::None => {
let tokens = self
.splits
.into_iter()
.flat_map(|split| {
split.tokens.unwrap().into_iter().map(|token| {
// Offsets are not tracked for `OffsetType::None`: emit the token id
// with an empty value and zero offsets
(token.id, String::with_capacity(0), (0, 0), None, 0)
})
})
.collect();
return Ok(tokens);
}
};
Ok(self
.splits
.into_iter()
.enumerate()
.flat_map(|(idx, split)| {
let normalized = split.normalized;
let offsets = normalized.offsets_original();
let offset_converter = &offset_converter;
split.tokens.unwrap().into_iter().map(move |token| {
let mut offsets = normalized
.convert_offsets(Range::Normalized(token.offsets.0..token.offsets.1))
.map_or(token.offsets, |range| {
(offsets.0 + range.start, offsets.0 + range.end)
});
// Convert to char offsets if relevant
if let Some(converter) = offset_converter {
offsets = converter.convert(offsets).unwrap_or(offsets);
}
(
token.id,
token.value,
offsets,
if word_idx.is_some() {
word_idx
} else {
Some(idx as u32)
},
type_id,
)
})
})
.collect())
}
}
/// Returns a list of splits, each of them being a slice of the normalized
/// string, the associated offsets either in original or normalized
/// referential, as well as the potential tokens
pub fn get_splits(
&self,
offset_ref: OffsetReferential,
offset_type: OffsetType,
) -> Vec<(&str, Offsets, &Option<Vec<Token>>)> {
let offset_converter = match offset_type {
OffsetType::Char => Some(BytesToCharOffsetConverter::new(&self.original)),
OffsetType::Byte => None,
OffsetType::None => None,
};
let mut offset = 0;
self.splits
.iter()
.map(|split| {
let mut offsets = match offset_ref {
OffsetReferential::Original => split.normalized.offsets_original(),
OffsetReferential::Normalized => {
let len = split.normalized.len();
offset += len;
(offset - len, offset)
}
};
// Convert to char offsets if relevant
if let Some(ref converter) = offset_converter {
offsets = converter.convert(offsets).unwrap_or(offsets);
}
(split.normalized.get(), offsets, &split.tokens)
})
.collect()
}
}
impl From<NormalizedString> for PreTokenizedString {
fn from(s: NormalizedString) -> Self {
Self {
original: s.get_original().to_owned(),
splits: vec![Split {
normalized: s,
tokens: None,
}],
}
}
}
impl From<&str> for PreTokenizedString {
fn from(s: &str) -> Self {
let normalized: NormalizedString = s.into();
normalized.into()
}
}
impl From<String> for PreTokenizedString {
fn from(s: String) -> Self {
let normalized: NormalizedString = s.into();
normalized.into()
}
}
struct BytesToCharOffsetConverter {
map: HashMap<usize, usize>,
}
impl BytesToCharOffsetConverter {
pub fn new(sequence: &str) -> Self {
Self {
map: sequence
.char_indices()
.enumerate()
.flat_map(|(i, (b, c))| {
let mut n = 0;
std::iter::repeat_with(move || {
let o = (b + n, i);
n += 1;
o
})
.take(c.len_utf8())
})
.collect(),
}
}
pub fn convert(&self, offsets: Offsets) -> Option<Offsets> {
match (self.map.get(&offsets.0), self.map.get(&offsets.1)) {
(Some(start), Some(end)) => Some((*start, *end)),
// If we reached the end, `end` is not in the map
(Some(start), None) => {
// But the one just before should be
let last = self.map.get(&(offsets.1 - 1)).copied().unwrap_or(start + 1);
Some((*start, last + 1))
}
_ => None,
}
}
}
|
tokenizers/tokenizers/src/tokenizer/pre_tokenizer.rs/0
|
{
"file_path": "tokenizers/tokenizers/src/tokenizer/pre_tokenizer.rs",
"repo_id": "tokenizers",
"token_count": 5310
}
| 263
|
mod common;
use common::*;
use tokenizers::tokenizer::AddedToken;
macro_rules! check_offsets {
($input: expr, $output:expr, $offset:expr, $result:expr) => {
let offsets = $output.get_offsets()[$offset];
assert_eq!(&$input[offsets.0..offsets.1], $result);
};
}
#[test]
fn byte_level_basic() {
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let input = "Hello there, how are you?";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 0, "Hello");
check_offsets!(input, output, 1, " there");
check_offsets!(input, output, 2, ",");
check_offsets!(input, output, 3, " how");
check_offsets!(input, output, 4, " are");
check_offsets!(input, output, 5, " you");
check_offsets!(input, output, 6, "?");
// And when trimming offsets:
let tokenizer = get_byte_level(true, true);
let input = "Hello there, how are you?";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 0, "Hello");
check_offsets!(input, output, 1, "there");
check_offsets!(input, output, 2, ",");
check_offsets!(input, output, 3, "how");
check_offsets!(input, output, 4, "are");
check_offsets!(input, output, 5, "you");
check_offsets!(input, output, 6, "?");
}
#[test]
fn byte_level_unicode() {
let tokenizer = get_byte_level(true, false);
let input = "i⭢j";
let output = tokenizer.encode(input, false).unwrap();
check_offsets!(input, output, 1, "⭢");
check_offsets!(input, output, 2, "⭢");
check_offsets!(input, output, 3, "⭢");
}
#[test]
fn byte_level_double_sequence() {
let input_a = "My name is Anthony";
let input_b = "What is my name?";
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let output = tokenizer.encode((input_a, input_b), false).unwrap();
let offsets = output.get_offsets();
assert_eq!(
offsets,
&[
(0, 2),
(2, 7),
(7, 10),
(10, 18),
(0, 4),
(4, 7),
(7, 10),
(10, 15),
(15, 16)
]
);
assert_eq!(
output.get_word_ids(),
&[
Some(0),
Some(1),
Some(2),
Some(3),
Some(0),
Some(1),
Some(2),
Some(3),
Some(4)
]
);
assert_eq!(output.get_type_ids(), &[0, 0, 0, 0, 1, 1, 1, 1, 1]);
// When trimming offsets
let tokenizer = get_byte_level(true, true);
let output = tokenizer.encode((input_a, input_b), false).unwrap();
let offsets = output.get_offsets();
assert_eq!(
offsets,
&[
(0, 2),
(3, 7),
(8, 10),
(11, 18),
(0, 4),
(5, 7),
(8, 10),
(11, 15),
(15, 16)
]
);
}
#[test]
fn byte_level_pre_tokenized_sequence() {
let input = ["My", "name", "is", "Anthonino"];
// Without trimming offsets
let tokenizer = get_byte_level(true, false);
let output = tokenizer.encode(&input[..], false).unwrap();
assert_eq!(
output.get_tokens(),
&["ĠMy", "Ġname", "Ġis", "ĠAnth", "on", "ino"]
);
assert_eq!(
output.get_word_ids(),
&[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)]
);
assert_eq!(
output.get_offsets(),
&[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)]
);
}
#[test]
#[ignore]
fn byte_level_pre_tokenized_sequence_with_trimming() {
let input = ["My", "name", "is", "Anthonino"];
// When trimming offsets (expect same result)
let tokenizer = get_byte_level(true, true);
let output = tokenizer.encode(&input[..], false).unwrap();
assert_eq!(
output.get_word_ids(),
&[Some(0), Some(1), Some(2), Some(3), Some(3), Some(3)]
);
assert_eq!(
output.get_offsets(),
&[(0, 2), (0, 4), (0, 2), (0, 4), (4, 6), (6, 9)]
);
}
#[test]
fn split_on_added_tokens_bert() {
let input = "Yesterday I saw a [MASK] far away";
let mut tokenizer = get_bert();
tokenizer.add_special_tokens(&[AddedToken::from("[MASK]", true)]);
let output = tokenizer.encode(input, false).unwrap();
assert_eq!(
output.get_offsets(),
&[
(0, 9),
(10, 11),
(12, 15),
(16, 17),
(18, 24),
(25, 28),
(29, 33)
]
);
assert_eq!(
output.get_tokens(),
&["yesterday", "i", "saw", "a", "[MASK]", "far", "away"]
);
assert_eq!(
output.get_word_ids(),
&[
Some(0),
Some(1),
Some(2),
Some(3),
Some(4),
Some(5),
Some(6)
]
);
}
|
tokenizers/tokenizers/tests/offsets.rs/0
|
{
"file_path": "tokenizers/tokenizers/tests/offsets.rs",
"repo_id": "tokenizers",
"token_count": 2497
}
| 264
|
import argparse
import subprocess
def main(config_dir, config_name, args):
subprocess.run(["optimum-benchmark", "--config-dir", f"{config_dir}", "--config-name", f"{config_name}"] + ["hydra/job_logging=disabled", "hydra/hydra_logging=disabled"] + args)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--config-dir", type=str, required=True, help="The path to the config directory.")
parser.add_argument("--config-name", type=str, required=True, help="The config name.")
args, unknown = parser.parse_known_args()
main(args.config_dir, args.config_name, unknown)
|
transformers/benchmark/optimum_benchmark_wrapper.py/0
|
{
"file_path": "transformers/benchmark/optimum_benchmark_wrapper.py",
"repo_id": "transformers",
"token_count": 216
}
| 265
|
FROM python:3.10
LABEL maintainer="Hugging Face"
RUN apt update
RUN git clone https://github.com/huggingface/transformers
RUN python3 -m pip install --no-cache-dir --upgrade pip && python3 -m pip install --no-cache-dir git+https://github.com/huggingface/doc-builder ./transformers[dev]
RUN apt-get -y update && apt-get install -y libsndfile1-dev && apt install -y tesseract-ocr
# Torch needs to be installed before deepspeed
RUN python3 -m pip install --no-cache-dir ./transformers[deepspeed]
RUN python3 -m pip install --no-cache-dir torchvision git+https://github.com/facebookresearch/detectron2.git pytesseract
RUN python3 -m pip install -U "itsdangerous<2.1.0"
# Test if the image could successfully build the doc. before publishing the image
RUN doc-builder build transformers transformers/docs/source/en --build_dir doc-build-dev --notebook_dir notebooks/transformers_doc --clean
RUN rm -rf doc-build-dev
|
transformers/docker/transformers-doc-builder/Dockerfile/0
|
{
"file_path": "transformers/docker/transformers-doc-builder/Dockerfile",
"repo_id": "transformers",
"token_count": 292
}
| 266
|
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers installation
! pip install transformers datasets evaluate accelerate
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""
notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
"{processor_class}": "FakeProcessorClass",
"{model_class}": "FakeModelClass",
"{object_class}": "FakeObjectClass",
}
|
transformers/docs/source/_config.py/0
|
{
"file_path": "transformers/docs/source/_config.py",
"repo_id": "transformers",
"token_count": 157
}
| 267
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Quick tour
[[open-in-colab]]
Get up and running with 🤗 Transformers! Use the [`pipeline`] for rapid inference, and quickly load a pretrained model and tokenizer with an [AutoClass](./model_doc/auto) to solve your text, vision, or audio task.
<Tip>
All code examples presented in the documentation have a toggle on the top left for PyTorch and TensorFlow. If not,
the code is expected to work for both backends without any changes.
</Tip>
## Pipeline
[`pipeline`] is the easiest way to use a pretrained model for a given task.
<Youtube id="tiZFewofSLM"/>
The [`pipeline`] supports many common tasks out-of-the-box:
**Text**:
* Sentiment analysis: classify the polarity of a given text.
* Text generation (in English): generate text from a given input.
* Named entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).
* Question answering: extract the answer from the context, given some context and a question.
* Fill-mask: fill in the blank in a text with masked words.
* Summarization: generate a summary of a long sequence of text or a document.
* Translation: translate text into another language.
* Feature extraction: create a tensor representation of the text.
**Image**:
* Image classification: classify an image.
* Image segmentation: classify every pixel in an image.
* Object detection: detect objects within an image.
**Audio**:
* Audio classification: assign a label to a given segment of audio.
* Automatic speech recognition (ASR): transcribe audio data into text.
<Tip>
For more details about the [`pipeline`] and associated tasks, refer to the documentation [here](./main_classes/pipelines).
</Tip>
### Pipeline usage
In the following example, you will use the [`pipeline`] for sentiment analysis.
Install the following dependencies if you haven't already:
<frameworkcontent>
<pt>
```bash
pip install torch
```
</pt>
<tf>
```bash
pip install tensorflow
```
</tf>
</frameworkcontent>
Import the [`pipeline`] and specify the task you want to solve:
```py
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis")
```
The pipeline downloads and caches a default [pretrained model](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on your target text:
```py
>>> classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```
For more than one sentence, pass a list of sentences to the [`pipeline`] which returns a list of dictionaries:
```py
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
... print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```
The [`pipeline`] can also iterate over an entire dataset. Start by installing the [🤗 Datasets](https://huggingface.co/docs/datasets/) library:
```bash
pip install datasets
```
Create a [`pipeline`] with the task you want to solve and the model you want to use.
```py
>>> import torch
>>> from transformers import pipeline
>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
```
Next, load the dataset (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart) for more details) you'd like to use. For example, load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset:
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT
```
We need to make sure that the sampling rate of the dataset matches the sampling rate `facebook/wav2vec2-base-960h` was trained on.
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate))
```
Audio files are automatically loaded and resampled when the "audio" column is called.
Let's extract the raw waveform arrays of the first 4 samples and pass them as a list to the pipeline:
```py
>>> result = speech_recognizer(dataset[:4]["audio"])
>>> print([d["text"] for d in result])
['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FODING HOW I'D SET UP A JOIN TO HET WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE AP SO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AND I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I THURN A JOIN A COUNT']
```
For a larger dataset where the inputs are big (as with speech or vision), you should pass a generator instead of a list, so that all the inputs are not loaded into memory at once. See the [pipeline documentation](./main_classes/pipelines) for more information.
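A generator produces inputs lazily, one at a time, which is what keeps memory usage flat. A minimal sketch of the pattern, with a stand-in function in place of the real pipeline (the file names and `fake_pipeline` are hypothetical):

```python
def audio_inputs():
    # Yield inputs one at a time instead of building the whole list in memory
    for i in range(4):
        yield f"sample_{i}.wav"  # hypothetical file names

def fake_pipeline(inputs):
    # Stand-in for speech_recognizer(...): consumes the iterable lazily
    return [{"text": name.upper()} for name in inputs]

result = fake_pipeline(audio_inputs())
print([d["text"] for d in result])
```

A real `pipeline` object accepts the generator in exactly the same way as the list shown above.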
### Use another model and tokenizer in the pipeline
The [`pipeline`] can accommodate any model from the [Model Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use cases. For example, if you'd like a model capable of handling French text, use the tags on the Model Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned for sentiment analysis. Great, let's use this model!
```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```
<frameworkcontent>
<pt>
Use the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` below):
```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</pt>
<tf>
Use the [`TFAutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on a `TFAutoClass` below):
```py
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
</tf>
</frameworkcontent>
Then you can specify the model and tokenizer in the [`pipeline`], and apply the `classifier` on your target text:
```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```
If you can't find a model for your use case, you will need to fine-tune a pretrained model on your data. Take a look at our [fine-tuning tutorial](./training) to learn how. Finally, after you've fine-tuned your model, please consider sharing it with the community on the Model Hub (see the tutorial [here](./model_sharing)) to democratize NLP for everyone! 🤗
## AutoClass
<Youtube id="AhChOFRegn4"/>
Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`]. An [`AutoClass`](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and its associated tokenizer with [`AutoTokenizer`].
Let's return to our example and see how you can use the `AutoClass` to replicate the results of the [`pipeline`].
### AutoTokenizer
A tokenizer is responsible for preprocessing text into a format that is understandable to the model. First, the tokenizer splits the text into words called *tokens*. There are multiple rules that govern the tokenization process, such as how and at what level a word is split (learn more about tokenization [here](./tokenizer_summary)). The most important thing to remember, though, is that you need to instantiate the tokenizer with the same model name to ensure you're using the same tokenization rules a model was pretrained with.
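To make the idea of tokenization rules concrete, here is a toy tokenizer (purely illustrative, not the actual BERT algorithm) that lowercases the input and splits it into words and punctuation:

```python
import re

def toy_tokenize(text):
    # Hypothetical rule set: lowercase, then split into word runs and
    # individual punctuation marks
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(toy_tokenize("Hello there, how are you?"))
```

Two tokenizers only produce compatible token sequences if they apply the same rules, which is why the tokenizer must match the pretrained model.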
Load a tokenizer with [`AutoTokenizer`]:
```py
>>> from transformers import AutoTokenizer
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Then, the tokenizer converts the tokens into numbers in order to construct a tensor as input to the model. The set of these numbers is known as the model's *vocabulary*.
Pass your text to the tokenizer:
```py
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary containing:
* [input_ids](./glossary#input-ids): numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.
Just like the [`pipeline`], the tokenizer accepts a list of inputs. In addition, the tokenizer can also pad and truncate the text to return a batch of uniform length:
<frameworkcontent>
<pt>
```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="pt",
... )
```
</pt>
<tf>
```py
>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... max_length=512,
... return_tensors="tf",
... )
```
</tf>
</frameworkcontent>
Read the [preprocessing](./preprocessing) tutorial for more details about tokenization.
### AutoModel
<frameworkcontent>
<pt>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. Since you are doing text (or sequence) classification, load [`AutoModelForSequenceClassification`]:
```py
>>> from transformers import AutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) for which [`AutoModel`] class to use for which task.
</Tip>
Now you can pass your preprocessed batch of inputs directly to the model. You just have to unpack the dictionary by adding `**`:
```py
>>> pt_outputs = pt_model(**pt_batch)
```
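The `**` operator is plain Python keyword-argument unpacking: each key of the dictionary becomes a keyword argument of the call. A minimal illustration with a hypothetical stand-in for the model call:

```python
def forward(input_ids, attention_mask):
    # Stand-in for a model call that expects keyword arguments
    return len(input_ids) + len(attention_mask)

batch = {"input_ids": [101, 102], "attention_mask": [1, 1]}
# forward(**batch) is the same as forward(input_ids=[101, 102], attention_mask=[1, 1])
print(forward(**batch))  # → 4
```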
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```py
>>> from torch import nn
>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
[0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)
```
</pt>
<tf>
🤗 Transformers provides a simple and unified way to load pretrained instances. This means you can load a [`TFAutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`TFAutoModel`] for the task. Since you are doing text or sequence classification, load [`TFAutoModelForSequenceClassification`]:
```py
>>> from transformers import TFAutoModelForSequenceClassification
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```
<Tip>
See the [task summary](./task_summary) for which [`TFAutoModel`] class to use for which task.
</Tip>
Now you can pass your preprocessed batch of inputs directly to the model by passing the dictionary keys directly to the tensors:
```py
>>> tf_outputs = tf_model(tf_batch)
```
The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:
```py
>>> import tensorflow as tf
>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> tf_predictions # doctest: +IGNORE_RESULT
```
</tf>
</frameworkcontent>
<Tip>
All 🤗 Transformers models (PyTorch or TensorFlow) output the tensors *before* the final activation
function (like softmax) because the final activation function is often fused with the loss.
</Tip>
Models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so you can use them in your usual training loop. However, to make things easier, 🤗 Transformers provides a [`Trainer`] class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). See the [training tutorial](./training) for more details.
<Tip>
🤗 Transformers model outputs are special dataclasses, so their attributes are autocompleted in an IDE.
The model outputs also behave like a tuple or a dictionary (you can index with an integer, a slice, or a string), in which case attributes that are `None` are ignored.
</Tip>
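As an illustration (a framework-free sketch, not the actual `ModelOutput` implementation), here is how a dataclass can behave like both an object with attributes and a tuple that skips `None` values:

```py
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ToyOutput:
    # Minimal stand-in for a ModelOutput-like class (illustration only,
    # not the real transformers implementation).
    logits: Optional[list] = None
    hidden_states: Optional[list] = None

    def to_tuple(self):
        # Attributes that are None are skipped, mirroring the behavior
        # described above.
        return tuple(getattr(self, f.name) for f in fields(self) if getattr(self, f.name) is not None)

    def __getitem__(self, index):
        return self.to_tuple()[index]

out = ToyOutput(logits=[0.1, 0.9])
print(out.logits)  # attribute access -> [0.1, 0.9]
print(out[0])      # integer indexing skips the None hidden_states -> [0.1, 0.9]
```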
### Save a model
<frameworkcontent>
<pt>
Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:
```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
```
When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:
```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
```
</pt>
<tf>
Once your model is fine-tuned, you can save it with its tokenizer using [`TFPreTrainedModel.save_pretrained`]:
```py
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```
When you are ready to use the model again, reload it with [`TFPreTrainedModel.from_pretrained`]:
```py
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```
</tf>
</frameworkcontent>
One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other:
<frameworkcontent>
<pt>
```py
>>> from transformers import AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
```
</pt>
<tf>
```py
>>> from transformers import TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```
</tf>
</frameworkcontent>
## Custom model builds
You can modify the model's configuration class to change how a model is built. The configuration specifies a model's attributes, such as the number of hidden layers or attention heads. You start from scratch when you initialize a model from a custom configuration class. The model attributes are randomly initialized, and you need to train the model before you can use it to get meaningful results.
Start by importing [`AutoConfig`], and then load the pretrained model you want to modify. Within [`AutoConfig.from_pretrained`], you can specify the attribute you want to change, such as the number of attention heads:
```py
>>> from transformers import AutoConfig
>>> my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12)
```
<frameworkcontent>
<pt>
Create a model from your custom configuration with [`AutoModel.from_config`]:
```py
>>> from transformers import AutoModel
>>> my_model = AutoModel.from_config(my_config)
```
</pt>
<tf>
Create a model from your custom configuration with [`TFAutoModel.from_config`]:
```py
>>> from transformers import TFAutoModel
>>> my_model = TFAutoModel.from_config(my_config)
```
</tf>
</frameworkcontent>
For more information about building custom configurations, see the [Create a custom architecture](./create_a_model) guide.
## What's next?
Now that you've completed the 🤗 Transformers quick tour, check out our guides and learn how to do more specific things like writing a custom model, fine-tuning a model for a task, and how to train a model with a script. If you're interested in learning more about 🤗 Transformers core concepts, grab a cup of coffee and take a look at our Conceptual Guides!
<!-- Source: transformers/docs/source/de/quicktour.md -->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Instantiate a big model
A barrier to accessing very large pretrained models is the amount of memory required. When loading a pretrained PyTorch model, you usually:
1. Create a model with random weights.
2. Load your pretrained weights.
3. Put those pretrained weights in the model.
The first two steps both require a full version of the model in memory and if the model weighs several GBs, you may not have enough memory for two copies of it. This problem is amplified in distributed training environments because each process loads a pretrained model and stores two copies in memory.
> [!TIP]
> The randomly created model is initialized with "empty" tensors, which take space in memory without filling it. The random values are whatever was in this chunk of memory at the time. To improve loading speed, the [`_fast_init`](https://github.com/huggingface/transformers/blob/c9f6e5e35156e068b227dd9b15521767f6afd4d2/src/transformers/modeling_utils.py#L2710) parameter is set to `True` by default to skip the random initialization for all weights that are correctly loaded.
This guide will show you how Transformers can help you load large pretrained models despite their memory requirements.
## Sharded checkpoints
From Transformers v4.18.0, a checkpoint larger than 10GB is automatically sharded by the [`~PreTrainedModel.save_pretrained`] method. It is split into several smaller partial checkpoints and creates an index file that maps parameter names to the files they're stored in.
The maximum shard size is controlled with the `max_shard_size` parameter; it defaults to 5GB so that sharded models are easier to run on free-tier GPU instances without running out of memory.
For example, let's shard [BioMistral/BioMistral-7B](https://hf.co/BioMistral/BioMistral-7B).
```py
>>> import os
>>> import tempfile
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... print(sorted(os.listdir(tmp_dir)))
['config.json', 'generation_config.json', 'model-00001-of-00006.safetensors', 'model-00002-of-00006.safetensors', 'model-00003-of-00006.safetensors', 'model-00004-of-00006.safetensors', 'model-00005-of-00006.safetensors', 'model-00006-of-00006.safetensors', 'model.safetensors.index.json']
```
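Conceptually, sharding is a greedy packing of tensors into files no larger than `max_shard_size`. The sketch below illustrates the idea with plain Python (sizes in bytes; the real logic in [`~PreTrainedModel.save_pretrained`] additionally handles serialization, file naming, and the index file):

```py
def shard_state_dict(param_sizes, max_bytes):
    # param_sizes: dict mapping parameter name -> size in bytes.
    # Greedily pack parameters into shards no larger than max_bytes
    # (a single oversized parameter still gets its own shard).
    shards, current, current_size = [], {}, 0
    for name, size in param_sizes.items():
        if current and current_size + size > max_bytes:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = size
        current_size += size
    if current:
        shards.append(current)
    return shards

sizes = {"embed": 4, "layer.0": 3, "layer.1": 3, "head": 2}
print(shard_state_dict(sizes, max_bytes=6))
# -> [{'embed': 4}, {'layer.0': 3, 'layer.1': 3}, {'head': 2}]
```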
The sharded checkpoint is reloaded with the [`~PreTrainedModel.from_pretrained`] method.
```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... new_model = AutoModel.from_pretrained(tmp_dir)
```
The main advantage of sharded checkpoints for big models is that each shard is loaded after the previous one, which caps the memory usage to only the model size and the largest shard size.
You could also directly load a sharded checkpoint inside a model without the [`~PreTrainedModel.from_pretrained`] method (similar to PyTorch's `load_state_dict()` method for a full checkpoint). In this case, use the [`~modeling_utils.load_sharded_checkpoint`] method.
```py
>>> from transformers.modeling_utils import load_sharded_checkpoint
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... load_sharded_checkpoint(model, tmp_dir)
```
### Shard metadata
The index file determines which keys are in the checkpoint and where the corresponding weights are stored. This file is loaded like any other JSON file and you can get a dictionary from it.
```py
>>> import json
>>> with tempfile.TemporaryDirectory() as tmp_dir:
... model.save_pretrained(tmp_dir, max_shard_size="5GB")
... with open(os.path.join(tmp_dir, "model.safetensors.index.json"), "r") as f:
... index = json.load(f)
>>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
```
The `metadata` key provides the total model size.
```py
>>> index["metadata"]
{'total_size': 28966928384}
```
The `weight_map` key maps each parameter name (typically `state_dict` in a PyTorch model) to the shard it's stored in.
```py
>>> index["weight_map"]
{'lm_head.weight': 'model-00006-of-00006.safetensors',
'model.embed_tokens.weight': 'model-00001-of-00006.safetensors',
'model.layers.0.input_layernorm.weight': 'model-00001-of-00006.safetensors',
'model.layers.0.mlp.down_proj.weight': 'model-00001-of-00006.safetensors',
...
}
```
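Because the index is plain JSON, standard Python is enough to answer questions such as which parameters live in each shard. A small sketch using a toy `weight_map` in the same shape as above:

```py
from collections import defaultdict

# Toy weight_map in the same shape as model.safetensors.index.json.
weight_map = {
    "lm_head.weight": "model-00006-of-00006.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00006.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
}

# Invert the mapping: shard file -> list of parameters stored in it.
params_per_shard = defaultdict(list)
for param, shard in weight_map.items():
    params_per_shard[shard].append(param)

print(params_per_shard["model-00001-of-00006.safetensors"])
# -> ['model.embed_tokens.weight', 'model.layers.0.input_layernorm.weight']
```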
## Accelerate's Big Model Inference
> [!TIP]
> Make sure you have Accelerate v0.9.0 or later and PyTorch v1.9.0 or later installed.
From Transformers v4.20.0, the [`~PreTrainedModel.from_pretrained`] method is supercharged with Accelerate's [Big Model Inference](https://hf.co/docs/accelerate/usage_guides/big_modeling) feature to efficiently handle really big models! Big Model Inference creates a *model skeleton* on PyTorch's [**meta**](https://pytorch.org/docs/main/meta.html) device. The randomly initialized parameters are only created when the pretrained weights are loaded. This way, you aren't keeping two copies of the model in memory at the same time (one for the randomly initialized model and one for the pretrained weights), and the maximum memory consumed is only the full model size.
To enable Big Model Inference in Transformers, set `low_cpu_mem_usage=True` in the [`~PreTrainedModel.from_pretrained`] method.
```py
from transformers import AutoModelForCausalLM
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", low_cpu_mem_usage=True)
```
Accelerate automatically dispatches the model weights across all available devices, starting with the fastest device (GPU) first and then offloading to the slower devices (CPU and even hard drive). This is enabled by setting `device_map="auto"` in the [`~PreTrainedModel.from_pretrained`] method. When you pass the `device_map` parameter, `low_cpu_mem_usage` is automatically set to `True` so you don't need to specify it.
```py
from transformers import AutoModelForCausalLM
# these loading methods are equivalent
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", low_cpu_mem_usage=True)
```
You can also write your own `device_map` by mapping each layer to a device. It should map all model parameters to a device, but you don't have to detail where all the submodules of a layer go if the entire layer is on the same device.
```python
device_map = {"model.layers.1": 0, "model.layers.14": 1, "model.layers.31": "cpu", "lm_head": "disk"}
```
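For illustration, a parameter is placed on the device of the most specific (longest) `device_map` entry whose dotted name is a prefix of the parameter name. This hypothetical helper mirrors that lookup rule in a simplified form (Accelerate's actual resolution handles more cases):

```py
def lookup_device(param_name, device_map):
    # Pick the longest device_map key that is a dotted prefix of param_name.
    best = None
    for key, device in device_map.items():
        if param_name == key or param_name.startswith(key + "."):
            if best is None or len(key) > len(best[0]):
                best = (key, device)
    if best is None:
        raise KeyError(f"{param_name} is not covered by the device_map")
    return best[1]

device_map = {"model.layers.1": 0, "model.layers.14": 1, "model.layers.31": "cpu", "lm_head": "disk"}
print(lookup_device("model.layers.14.mlp.down_proj.weight", device_map))  # -> 1
```

Note that the dotted-prefix check (`key + "."`) is what keeps `model.layers.14` from accidentally matching the `model.layers.1` entry.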
Access the `hf_device_map` attribute to see how Accelerate split the model across devices.
```py
gemma.hf_device_map
```
```python out
{'model.embed_tokens': 0,
'model.layers.0': 0,
'model.layers.1': 0,
'model.layers.2': 0,
'model.layers.3': 0,
'model.layers.4': 0,
'model.layers.5': 0,
'model.layers.6': 0,
'model.layers.7': 0,
'model.layers.8': 0,
'model.layers.9': 0,
'model.layers.10': 0,
'model.layers.11': 0,
'model.layers.12': 0,
'model.layers.13': 0,
'model.layers.14': 'cpu',
'model.layers.15': 'cpu',
'model.layers.16': 'cpu',
'model.layers.17': 'cpu',
'model.layers.18': 'cpu',
'model.layers.19': 'cpu',
'model.layers.20': 'cpu',
'model.layers.21': 'cpu',
'model.layers.22': 'cpu',
'model.layers.23': 'cpu',
'model.layers.24': 'cpu',
'model.layers.25': 'cpu',
'model.layers.26': 'cpu',
'model.layers.27': 'cpu',
'model.layers.28': 'cpu',
'model.layers.29': 'cpu',
'model.layers.30': 'cpu',
'model.layers.31': 'cpu',
'model.norm': 'cpu',
'lm_head': 'cpu'}
```
## Model data type
PyTorch model weights are normally instantiated as torch.float32, which can be an issue if you want to load a model in a different data type. For example, you'd need twice as much memory to load the weights in torch.float32 and then again to load them in your desired data type, like torch.float16.
> [!WARNING]
> Due to how PyTorch is designed, the `torch_dtype` parameter only supports floating data types.
To avoid wasting memory like this, explicitly set the `torch_dtype` parameter to the desired data type or set `torch_dtype="auto"` to load the weights with the most optimal memory pattern (the data type is automatically derived from the model weights).
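Some back-of-the-envelope math shows the stakes: at 4 bytes per parameter in torch.float32, a 7B-parameter model needs roughly 26 GiB for the weights alone, and half that in torch.float16:

```py
def weights_gb(num_params, bytes_per_param):
    # Rough memory needed just for the weights (ignores activations,
    # optimizer state, and framework overhead).
    return num_params * bytes_per_param / 1024**3

print(round(weights_gb(7e9, 4), 1))  # float32 -> ~26.1 GiB
print(round(weights_gb(7e9, 2), 1))  # float16 -> ~13.0 GiB
```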
<hfoptions id="dtype">
<hfoption id="specific dtype">
```py
import torch
from transformers import AutoModelForCausalLM
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype=torch.float16)
```
</hfoption>
<hfoption id="auto dtype">
```py
from transformers import AutoModelForCausalLM
gemma = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype="auto")
```
</hfoption>
</hfoptions>
You can also set the data type to use for models instantiated from scratch.
```python
import torch
from transformers import AutoConfig, AutoModel
my_config = AutoConfig.from_pretrained("google/gemma-2b", torch_dtype=torch.float16)
model = AutoModel.from_config(my_config)
```
<!-- Source: transformers/docs/source/en/big_models.md -->
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Installation
Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline.
🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:
* [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
* [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.
## Install with pip
You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies.
Start by creating a virtual environment in your project directory:
```bash
python -m venv .env
```
Activate the virtual environment. On Linux and macOS:
```bash
source .env/bin/activate
```
Activate the virtual environment on Windows:
```bash
.env/Scripts/activate
```
Now you're ready to install 🤗 Transformers with the following command:
```bash
pip install transformers
```
For CPU support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:
```bash
pip install 'transformers[torch]'
```
🤗 Transformers and TensorFlow 2.0:
```bash
pip install 'transformers[tf-cpu]'
```
<Tip warning={true}>
M1 / ARM Users
You will need to install the following before installing TensorFlow 2.0:
```bash
brew install cmake
brew install pkg-config
```
</Tip>
🤗 Transformers and Flax:
```bash
pip install 'transformers[flax]'
```
Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```
Then print out the label and score:
```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
## Install from source
Install 🤗 Transformers from source with the following command:
```bash
pip install git+https://github.com/huggingface/transformers
```
This command installs the bleeding edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up-to-date with the latest developments. For instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!
Check if 🤗 Transformers has been properly installed by running the following command:
```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"
```
## Editable install
You will need an editable install if you'd like to:
* Use the `main` version of the source code.
* Contribute to 🤗 Transformers and need to test changes in the code.
Clone the repository and install 🤗 Transformers with the following commands:
```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```
These commands link the folder you cloned the repository to with your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`.
<Tip warning={true}>
You must keep the `transformers` folder if you want to keep using the library.
</Tip>
Now you can easily update your clone to the latest version of 🤗 Transformers with the following command:
```bash
cd ~/transformers/
git pull
```
Your Python environment will find the `main` version of 🤗 Transformers on the next run.
## Install with conda
Install from the conda channel `conda-forge`:
```bash
conda install conda-forge::transformers
```
## Cache setup
Pretrained models are downloaded and locally cached at: `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory:
1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`.
2. Shell environment variable: `HF_HOME`.
3. Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`.
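That priority order can be expressed as a small resolver. The sketch below is an illustration only (not the exact function Transformers uses) and takes any mapping such as `os.environ`:

```py
import os

def resolve_cache_dir(env):
    # Priority mirrors the list above (simplified sketch).
    for var in ("HUGGINGFACE_HUB_CACHE", "TRANSFORMERS_CACHE"):
        if env.get(var):
            return env[var]
    if env.get("HF_HOME"):
        return os.path.join(env["HF_HOME"], "hub")
    if env.get("XDG_CACHE_HOME"):
        return os.path.join(env["XDG_CACHE_HOME"], "huggingface", "hub")
    # Fall back to the default location.
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")

print(resolve_cache_dir({"HF_HOME": "/mnt/hf"}))
```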
<Tip>
🤗 Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`.
</Tip>
## Offline mode
Run ๐ค Transformers in a firewalled or offline environment with locally cached files by setting the environment variable `HF_HUB_OFFLINE=1`.
<Tip>
Add [🤗 Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow with the environment variable `HF_DATASETS_OFFLINE=1`.
</Tip>
```bash
HF_DATASETS_OFFLINE=1 HF_HUB_OFFLINE=1 \
python examples/pytorch/translation/run_translation.py --model_name_or_path google-t5/t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
This script should run without hanging or waiting to timeout because it won't attempt to download the model from the Hub.
You can also bypass loading a model from the Hub from each [`~PreTrainedModel.from_pretrained`] call with the `local_files_only` parameter. When set to `True`, only local files are loaded:
```py
from transformers import T5Model
model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True)
```
### Fetch models and tokenizers to use offline
Another option for using 🤗 Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this:
* Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the ↓ icon.

* Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow:
1. Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
```
2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:
```py
>>> tokenizer.save_pretrained("./your/path/bigscience_t0")
>>> model.save_pretrained("./your/path/bigscience_t0")
```
3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory:
```py
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
>>> model = AutoModel.from_pretrained("./your/path/bigscience_t0")
```
* Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:
1. Install the `huggingface_hub` library in your virtual environment:
```bash
python -m pip install huggingface_hub
```
2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path:
```py
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
```
Once your file is downloaded and locally cached, specify its local path to load and use it:
```py
>>> from transformers import AutoConfig
>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```
<Tip>
See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub.
</Tip>
<!-- Source: transformers/docs/source/en/installation.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Callbacks
Callbacks are objects that can customize the behavior of the training loop in the PyTorch
[`Trainer`] (this feature is not yet implemented in TensorFlow). They can inspect the training loop
state (for progress reporting, logging on TensorBoard or other ML platforms...) and take decisions (like early
stopping).
Callbacks are "read only" pieces of code: apart from the [`TrainerControl`] object they return, they
cannot change anything in the training loop. For customizations that require changes in the training loop, you should
subclass [`Trainer`] and override the methods you need (see [trainer](trainer) for examples).
By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] will use the following callbacks.
- [`DefaultFlowCallback`] which handles the default behavior for logging, saving and evaluation.
- [`PrinterCallback`] or [`ProgressCallback`] to display progress and print the
logs (the first one is used if you deactivate tqdm through the [`TrainingArguments`], otherwise
it's the second one).
- [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4
or tensorboardX).
- [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed.
- [`~integrations.CometCallback`] if [comet_ml](https://www.comet.com/site/) is installed.
- [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed.
- [`~integrations.NeptuneCallback`] if [neptune](https://neptune.ai/) is installed.
- [`~integrations.AzureMLCallback`] if [azureml-sdk](https://pypi.org/project/azureml-sdk/) is
installed.
- [`~integrations.CodeCarbonCallback`] if [codecarbon](https://pypi.org/project/codecarbon/) is
installed.
- [`~integrations.ClearMLCallback`] if [clearml](https://github.com/allegroai/clearml) is installed.
- [`~integrations.DagsHubCallback`] if [dagshub](https://dagshub.com/) is installed.
- [`~integrations.FlyteCallback`] if [flyte](https://flyte.org/) is installed.
- [`~integrations.DVCLiveCallback`] if [dvclive](https://dvc.org/doc/dvclive) is installed.
If a package is installed but you don't wish to use the accompanying integration, you can change `TrainingArguments.report_to` to a list of just those integrations you want to use (e.g. `["azure_ml", "wandb"]`).
The main class that implements callbacks is [`TrainerCallback`]. It gets the
[`TrainingArguments`] used to instantiate the [`Trainer`], can access that
Trainer's internal state via [`TrainerState`], and can take some actions on the training loop via
[`TrainerControl`].
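The decision such a callback communicates through [`TrainerControl`] (for example, [`EarlyStoppingCallback`] flipping `control.should_training_stop`) usually reduces to simple bookkeeping over the metric history tracked in [`TrainerState`]. A framework-free sketch of that patience logic (an illustration, not the actual implementation):

```python
def should_stop(metric_history, patience):
    # Stop when the metric has not improved (here: decreased) for
    # `patience` consecutive evaluations. Simplified illustration of the
    # decision an early-stopping callback makes after each evaluation.
    best = float("inf")
    bad_rounds = 0
    for value in metric_history:
        if value < best:
            best = value
            bad_rounds = 0
        else:
            bad_rounds += 1
            if bad_rounds >= patience:
                return True
    return False

print(should_stop([0.9, 0.7, 0.71, 0.72, 0.73], patience=3))  # -> True
```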
## Available Callbacks
Here is the list of the available [`TrainerCallback`] in the library:
[[autodoc]] integrations.CometCallback
- setup
[[autodoc]] DefaultFlowCallback
[[autodoc]] PrinterCallback
[[autodoc]] ProgressCallback
[[autodoc]] EarlyStoppingCallback
[[autodoc]] integrations.TensorBoardCallback
[[autodoc]] integrations.WandbCallback
- setup
[[autodoc]] integrations.MLflowCallback
- setup
[[autodoc]] integrations.AzureMLCallback
[[autodoc]] integrations.CodeCarbonCallback
[[autodoc]] integrations.NeptuneCallback
[[autodoc]] integrations.ClearMLCallback
[[autodoc]] integrations.DagsHubCallback
[[autodoc]] integrations.FlyteCallback
[[autodoc]] integrations.DVCLiveCallback
- setup
## TrainerCallback
[[autodoc]] TrainerCallback
Here is an example of how to register a custom callback with the PyTorch [`Trainer`]:
```python
from transformers import Trainer, TrainerCallback

class MyCallback(TrainerCallback):
"A callback that prints a message at the beginning of training"
def on_train_begin(self, args, state, control, **kwargs):
print("Starting training")
trainer = Trainer(
model,
args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
callbacks=[MyCallback], # We can either pass the callback class this way or an instance of it (MyCallback())
)
```
Another way to register a callback is to call `trainer.add_callback()` as follows:
```python
trainer = Trainer(...)
trainer.add_callback(MyCallback)
# Alternatively, we can pass an instance of the callback class
trainer.add_callback(MyCallback())
```
## TrainerState
[[autodoc]] TrainerState
## TrainerControl
[[autodoc]] TrainerControl
<!-- Source: transformers/docs/source/en/main_classes/callback.md -->
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Tokenizer
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library [🤗 Tokenizers](https://github.com/huggingface/tokenizers). The "Fast" implementations allow:
1. a significant speed-up in particular when doing batched tokenization and
2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the
index of the token comprising a given character or the span of characters corresponding to a given token).
The base classes [`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`]
implement the common methods for encoding string inputs into model inputs (see below) and instantiating/saving python and
"Fast" tokenizers either from a local file or directory or from a pretrained tokenizer provided by the library
(downloaded from HuggingFace's AWS S3 repository). They both rely on
[`~tokenization_utils_base.PreTrainedTokenizerBase`] that contains the common methods, and
[`~tokenization_utils_base.SpecialTokensMixin`].
[`PreTrainedTokenizer`] and [`PreTrainedTokenizerFast`] thus implement the main
methods for using all the tokenizers:
- Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and
encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
tokenizer for easy access and making sure they are not split during tokenization.
[`BatchEncoding`] holds the output of the
[`~tokenization_utils_base.PreTrainedTokenizerBase`]'s encoding methods (`__call__`,
`encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by
these methods (`input_ids`, `attention_mask`...). When the tokenizer is a "Fast" tokenizer (i.e., backed by
HuggingFace [tokenizers library](https://github.com/huggingface/tokenizers)), this class provides in addition
several advanced alignment methods which can be used to map between the original string (characters and words) and the
token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding
to a given token).
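Under the hood, a fast tokenizer's alignment information comes from the 🤗 Tokenizers `Encoding` object. As a minimal sketch (using a toy, hypothetical `WordLevel` vocabulary built in memory rather than a real pretrained checkpoint), the offsets returned for each token map it back to a character span in the original string:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# Toy vocabulary for illustration only; real tokenizers are loaded from a checkpoint
vocab = {"[UNK]": 0, "hello": 1, "world": 2}
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

encoding = tokenizer.encode("hello world")
print(encoding.ids)      # token ids
print(encoding.tokens)   # token strings
print(encoding.offsets)  # character spans in the original string
```

The same `ids`/`tokens`/`offsets` information is what powers methods like `char_to_token` on a [`BatchEncoding`] backed by a fast tokenizer.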
## PreTrainedTokenizer
[[autodoc]] PreTrainedTokenizer
- __call__
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## PreTrainedTokenizerFast
The [`PreTrainedTokenizerFast`] depends on the [tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 Tokenizers library can be
loaded very simply into 🤗 Transformers. Take a look at the [Using tokenizers from 🤗 Tokenizers](../fast_tokenizers) page to understand how this is done.
[[autodoc]] PreTrainedTokenizerFast
- __call__
- add_tokens
- add_special_tokens
- apply_chat_template
- batch_decode
- decode
- encode
- push_to_hub
- all
## BatchEncoding
[[autodoc]] BatchEncoding
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BERTweet
## Overview
The BERTweet model was proposed in [BERTweet: A pre-trained language model for English Tweets](https://www.aclweb.org/anthology/2020.emnlp-demos.2.pdf) by Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen.
The abstract from the paper is the following:
*We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having
the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et
al., 2019). Experiments show that BERTweet outperforms strong baselines RoBERTa-base and XLM-R-base (Conneau et al.,
2020), producing better performance results than the previous state-of-the-art models on three Tweet NLP tasks:
Part-of-speech tagging, Named-entity recognition and text classification.*
This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen). The original code can be found [here](https://github.com/VinAIResearch/BERTweet).
## Usage example
```python
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> bertweet = AutoModel.from_pretrained("vinai/bertweet-base")
>>> # For transformers v4.x+:
>>> tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
>>> # For transformers v3.x:
>>> # tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
>>> # INPUT TWEET IS ALREADY NORMALIZED!
>>> line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"
>>> input_ids = torch.tensor([tokenizer.encode(line)])
>>> with torch.no_grad():
... features = bertweet(input_ids) # Models outputs are now tuples
>>> # With TensorFlow 2.0+:
>>> # from transformers import TFAutoModel
>>> # bertweet = TFAutoModel.from_pretrained("vinai/bertweet-base")
```
<Tip>
This implementation is the same as BERT, except for the tokenization method. Refer to the [BERT documentation](bert) for
API reference information.
</Tip>
## BertweetTokenizer
[[autodoc]] BertweetTokenizer
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Chameleon
## Overview
The Chameleon model was proposed in [Chameleon: Mixed-Modal Early-Fusion Foundation Models
](https://arxiv.org/abs/2405.09818v1) by the META AI Chameleon Team. Chameleon is a vision-language model that uses vector quantization to tokenize images, which enables the model to generate multimodal output. The model takes images and text as input, including in an interleaved format, and generates a textual response. The image generation module has not been released yet.
The abstract from the paper is the following:
*We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training
approach from inception, an alignment recipe, and an architectural parameterization tailored for the
early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range
of tasks, including visual question answering, image captioning, text generation, image generation, and
long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including
state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while
being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image
generation, all in a single model. It also matches or exceeds the performance of much larger models,
including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal
generation evaluation, where either the prompt or outputs contain mixed sequences of both images and
text. Chameleon marks a significant step forward in unified modeling of full multimodal documents*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/chameleon_arch.png"
alt="drawing" width="600"/>
<small> Chameleon incorporates a vector quantizer module to transform images into discrete tokens. That also enables image generation using an auto-regressive transformer. Taken from the <a href="https://arxiv.org/abs/2405.09818v1">original paper.</a> </small>
This model was contributed by [joaogante](https://huggingface.co/joaogante) and [RaushanTurganbay](https://huggingface.co/RaushanTurganbay).
The original code can be found [here](https://github.com/facebookresearch/chameleon).
## Usage tips
- We advise users to set `padding_side="left"` for batched generation, as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating.
- Note that Chameleon was tuned for safety alignment. If the model is refusing to answer, consider asking a more concrete question instead of an open-ended one.
- Chameleon generates in chat format which means that the generated text will always be the "assistant's turn". You can enable a text completion generation by passing `return_for_text_completion=True` when calling the processor.
> [!NOTE]
> The Chameleon implementation in Transformers uses a special image token to indicate where to merge image embeddings. For the special image token we didn't add a new one but used one of the reserved tokens: `<reserved08707>`. You have to add `<image>` to your prompt in the place where the image should be embedded for correct generation.
## Usage example
### Single image inference
Chameleon is a gated model so make sure to have access and login to Hugging Face Hub using a token.
Here's how to load the model and perform inference in half-precision (`torch.bfloat16`):
```python
from transformers import ChameleonProcessor, ChameleonForConditionalGeneration
import torch
from PIL import Image
import requests
processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", torch_dtype=torch.bfloat16, device_map="cuda")
# prepare image and text prompt
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What do you see in this image?<image>"
inputs = processor(prompt, image, return_tensors="pt").to(model.device)
# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```
### Multi image inference
Chameleon can perform inference with multiple images as input, where images either belong to the same prompt or different prompts (in batched inference). Here is how you can do it:
```python
from transformers import ChameleonProcessor, ChameleonForConditionalGeneration
import torch
from PIL import Image
import requests
processor = ChameleonProcessor.from_pretrained("facebook/chameleon-7b")
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", torch_dtype=torch.bfloat16, device_map="cuda")
# Get three different images
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image_stop = Image.open(requests.get(url, stream=True).raw)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_cats = Image.open(requests.get(url, stream=True).raw)
url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
image_snowman = Image.open(requests.get(url, stream=True).raw)
# Prepare a batched prompt, where the first one is a multi-image prompt and the second is not
prompts = [
"What do these images have in common?<image><image>",
"<image>What is shown in this image?"
]
# We can simply feed images in the order they have to be used in the text prompt
# Each "<image>" token uses one image leaving the next for the subsequent "<image>" tokens
inputs = processor(text=prompts, images=[image_stop, image_cats, image_snowman], padding=True, return_tensors="pt").to(device="cuda", dtype=torch.bfloat16)
# Generate
generate_ids = model.generate(**inputs, max_new_tokens=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
```
## Model optimization
### Quantization using Bitsandbytes
The model can be loaded in 8-bit or 4-bit precision, greatly reducing the memory requirements while maintaining the performance of the original model. First make sure to install bitsandbytes (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Simply change the snippet above with:
```python
from transformers import ChameleonForConditionalGeneration, BitsAndBytesConfig
import torch
# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = ChameleonForConditionalGeneration.from_pretrained("facebook/chameleon-7b", quantization_config=quantization_config, device_map="cuda")
```
### Use Flash-Attention 2 and SDPA to further speed-up generation
The model supports both Flash Attention 2 and PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html), which can be enabled for optimization. SDPA is the default option when you load the model. If you want to switch to Flash Attention 2, first make sure to install flash-attn. Refer to the [original repository](https://github.com/Dao-AILab/flash-attention) for instructions on installing that package. Simply change the snippet above with:
```python
from transformers import ChameleonForConditionalGeneration
import torch
model_id = "facebook/chameleon-7b"
model = ChameleonForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2"
).to(0)
```
## ChameleonConfig
[[autodoc]] ChameleonConfig
## ChameleonVQVAEConfig
[[autodoc]] ChameleonVQVAEConfig
## ChameleonProcessor
[[autodoc]] ChameleonProcessor
## ChameleonImageProcessor
[[autodoc]] ChameleonImageProcessor
- preprocess
## ChameleonVQVAE
[[autodoc]] ChameleonVQVAE
- forward
## ChameleonModel
[[autodoc]] ChameleonModel
- forward
## ChameleonForConditionalGeneration
[[autodoc]] ChameleonForConditionalGeneration
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Convolutional Vision Transformer (CvT)
## Overview
The CvT model was proposed in [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the [Vision Transformer (ViT)](vit) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.
The abstract from the paper is the following:
*We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT)
in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through
two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs)
to the ViT architecture (i.e., shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e., dynamic attention,
global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition,
performance gains are maintained when pretrained on larger datasets (e.g., ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on
ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.*
This model was contributed by [anugunj](https://huggingface.co/anugunj). The original code can be found [here](https://github.com/microsoft/CvT).
## Usage tips
- CvT models are regular Vision Transformers, but trained with convolutions. They outperform the [original model (ViT)](vit) when fine-tuned on ImageNet-1K and CIFAR-100.
- You can check out demo notebooks regarding inference as well as fine-tuning on custom data [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) (you can just replace [`ViTFeatureExtractor`] by [`AutoImageProcessor`] and [`ViTForImageClassification`] by [`CvtForImageClassification`]).
- The available checkpoints are either (1) pre-trained on [ImageNet-22k](http://www.image-net.org/) (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/) (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.
<PipelineTag pipeline="image-classification"/>
- [`CvtForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## CvtConfig
[[autodoc]] CvtConfig
<frameworkcontent>
<pt>
## CvtModel
[[autodoc]] CvtModel
- forward
## CvtForImageClassification
[[autodoc]] CvtForImageClassification
- forward
</pt>
<tf>
## TFCvtModel
[[autodoc]] TFCvtModel
- call
## TFCvtForImageClassification
[[autodoc]] TFCvtForImageClassification
- call
</tf>
</frameworkcontent>
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer. -->
# ImageGPT
## Overview
The ImageGPT model was proposed in [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt) by Mark
Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like
model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
The abstract from the paper is the following:
*Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels,
without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels,
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide
ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
top-1 accuracy on a linear probe of our features.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/imagegpt_architecture.png"
alt="drawing" width="600"/>
<small> Summary of the approach. Taken from the [original paper](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf). </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr), based on [this issue](https://github.com/openai/image-gpt/issues/7). The original code can be found
[here](https://github.com/openai/image-gpt).
## Usage tips
- ImageGPT is almost exactly the same as [GPT-2](gpt2), with the exception that a different activation
function is used (namely "quick gelu"), and the layer normalization layers don't mean-center the inputs. ImageGPT
also doesn't have tied input and output embeddings.
- As the time and memory requirements of the attention mechanism of Transformers scale quadratically in the sequence
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors
applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
embedding matrix. In other words, the vocabulary size of ImageGPT is 512, plus 1 for a special "start of sentence" (SOS)
token, used at the beginning of every sequence. One can use [`ImageGPTImageProcessor`] to prepare
images for the model.
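The color-clustering step can be sketched with plain NumPy. This is an illustrative toy only (a random palette and a random image, not the actual k-means centroids from the paper), but it shows how each RGB pixel is mapped to a single integer token in the range 0..511:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 512-entry color palette learned with k-means in the paper
palette = rng.integers(0, 256, size=(512, 3)).astype(np.float32)

# A random 32x32 RGB "image", flattened to a sequence of 1024 pixels
image = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float32)
pixels = image.reshape(-1, 3)  # (1024, 3)

# Map each pixel to the index of its nearest palette color (squared Euclidean distance)
distances = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (1024, 512)
tokens = distances.argmin(axis=1)  # (1024,) integers in 0..511

print(tokens.shape)
```

The resulting 1024-long integer sequence (plus the SOS token) is what the Transformer is trained on, which is exactly why the embedding matrix has 512 + 1 rows.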
- Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
performant image features useful for downstream tasks, such as image classification. The authors showed that the
features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as
a sklearn logistic regression model, for example). This is also referred to as "linear probing". Features can be
easily obtained by forwarding the image through the model with `output_hidden_states=True`, and then
average-pooling the hidden states at whatever layer you like.
- Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can
use [`ImageGPTForImageClassification`].
- ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors did also
train an XL variant, which they didn't release. The differences in size are summarized in the following table:
| **Model variant** | **Layers** | **Hidden size** | **Params** |
|---|---|---|---|
| ImageGPT-small | 24 | 512 | 76M |
| ImageGPT-medium | 36 | 1024 | 455M |
| ImageGPT-large | 48 | 1536 | 1.4B |
| ImageGPT-XL (not released) | 60 | 3072 | 6.8B |
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT.
<PipelineTag pipeline="image-classification"/>
- Demo notebooks for ImageGPT can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ImageGPT).
- [`ImageGPTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## ImageGPTConfig
[[autodoc]] ImageGPTConfig
## ImageGPTFeatureExtractor
[[autodoc]] ImageGPTFeatureExtractor
- __call__
## ImageGPTImageProcessor
[[autodoc]] ImageGPTImageProcessor
- preprocess
## ImageGPTModel
[[autodoc]] ImageGPTModel
- forward
## ImageGPTForCausalImageModeling
[[autodoc]] ImageGPTForCausalImageModeling
- forward
## ImageGPTForImageClassification
[[autodoc]] ImageGPTForImageClassification
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# T5
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=t5">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-t5-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/t5-base">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
<a href="https://huggingface.co/papers/1910.10683">
<img alt="Paper page" src="https://img.shields.io/badge/Paper%20page-1910.10683-green">
</a>
</div>
## Overview
The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by [Colin Raffel](https://huggingface.co/craffel), Noam Shazeer, [Adam Roberts](https://huggingface.co/adarob), Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, [Peter J. Liu](https://huggingface.co/peterjliu).
The abstract from the paper is the following:
*Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.*
All checkpoints can be found on the [hub](https://huggingface.co/models?search=t5).
This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/google-research/text-to-text-transfer-transformer).
## Usage tips
- T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
different prefix to the input corresponding to each task, e.g., for translation: *translate English to German: ...*,
for summarization: *summarize: ...*.
- The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above).
- Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens.
- T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.
- See the [training](#training), [inference](#inference) and [resources](#resources) sections below for all details regarding usage.
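The span-corruption objective described above can be sketched in plain Python. This is a minimal, hypothetical illustration (real preprocessing samples the masked spans randomly; here the spans are fixed for clarity):

```python
def corrupt(words, masked_spans):
    """Build an (input, target) pair in the T5 span-corruption format.

    masked_spans is a list of (start, end) word-index ranges to drop.
    Sentinels <extra_id_0>, <extra_id_1>, ... replace each span in the input;
    the target lists each sentinel followed by the words it replaced.
    """
    inp, tgt = [], []
    cursor = 0
    for i, (start, end) in enumerate(masked_spans):
        sentinel = f"<extra_id_{i}>"
        inp += words[cursor:start] + [sentinel]
        tgt += [sentinel] + words[start:end]
        cursor = end
    inp += words[cursor:]
    return " ".join(inp), " ".join(tgt)

words = "The cute dog walks in the park".split()
inp, tgt = corrupt(words, [(1, 3), (5, 6)])
print(inp)  # The <extra_id_0> walks in <extra_id_1> park
print(tgt)  # <extra_id_0> cute dog <extra_id_1> the
```

Note how consecutive masked words ("cute dog") collapse into a single sentinel in the input, matching the description above.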
T5 comes in different sizes:
- [google-t5/t5-small](https://huggingface.co/google-t5/t5-small)
- [google-t5/t5-base](https://huggingface.co/google-t5/t5-base)
- [google-t5/t5-large](https://huggingface.co/google-t5/t5-large)
- [google-t5/t5-3b](https://huggingface.co/google-t5/t5-3b)
- [google-t5/t5-11b](https://huggingface.co/google-t5/t5-11b).
Based on the original T5 model, Google has released some follow-up works:
- **T5v1.1**: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without
mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found [here](t5v1.1).
- **mT5**: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to
the documentation of mT5 which can be found [here](mt5).
- **byT5**: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer
to the documentation of byT5 which can be found [here](byt5).
- **UL2**: UL2 is a T5-like model pretrained on various denoising objectives.
- **Flan-T5**: Flan is a pretraining method based on prompting. The Flan-T5 models are T5 models trained on the Flan collection of
datasets, which includes: `taskmaster2`, `djaym7/wiki_dialog`, `deepmind/code_contests`, `lambada`, `gsm8k`, `aqua_rat`, `esnli`, `quasc` and `qed`.
- **Flan-UL2**: the UL2 model finetuned using the "Flan" prompt tuning and dataset collection.
- **UMT5**: UMT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 languages, using a new sampling method, UniMax. Refer to
the documentation of UMT5 which can be found [here](umt5).
## Training
T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher
forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input
sequence is fed to the model using `input_ids`. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token and fed to the decoder using the `decoder_input_ids`. In teacher-forcing style, the target
sequence is then appended by the EOS token and corresponds to the `labels`. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned both in a supervised and unsupervised fashion.
One can use [`T5ForConditionalGeneration`] (or the Tensorflow/Flax variant), which includes the
language modeling head on top of the decoder.
- Unsupervised denoising training
In this setup, spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and
the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens. Each
sentinel token represents a unique mask token for this sentence and should start with `<extra_id_0>`,
`<extra_id_1>`, ... up to `<extra_id_99>`. As a default, 100 sentinel tokens are available in
[`T5Tokenizer`].
For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be
processed as follows:
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
>>> labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_ids=input_ids, labels=labels).loss
>>> loss.item()
3.7837
```
If you're interested in pre-training T5 on a new corpus, check out the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) script in the Examples
directory.
- Supervised training
In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping.
Suppose that we want to fine-tune the model for translation for example, and we have a training example: the input
sequence "The house is wonderful." and output sequence "Das Haus ist wunderbar.", then they should be prepared for
the model as follows:
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_ids=input_ids, labels=labels).loss
>>> loss.item()
0.2542
```
As you can see, only 2 inputs are required for the model in order to compute a loss: `input_ids` (which are the
`input_ids` of the encoded input sequence) and `labels` (which are the `input_ids` of the encoded
target sequence). The model will automatically create the `decoder_input_ids` based on the `labels`, by
shifting them one position to the right and prepending the `config.decoder_start_token_id`, which for T5 is
equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate
English to German: ' before encoding it. This will help in improving the performance, as this task prefix was used
during T5's pre-training.
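The right-shift described above can be sketched in plain Python (the token ids below are illustrative, not the actual `t5-small` ids):

```python
# Sketch of how the model derives decoder_input_ids from labels:
# prepend decoder_start_token_id (0, the pad token id for T5) and drop the last token,
# i.e. shift everything one position to the right.
def shift_right(labels, decoder_start_token_id=0):
    return [decoder_start_token_id] + labels[:-1]

labels = [644, 4598, 229, 19250, 5, 1]  # illustrative ids for "Das Haus ist wunderbar.</s>"
decoder_input_ids = shift_right(labels)
print(decoder_input_ids)  # [0, 644, 4598, 229, 19250, 5]
```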
However, the example above only shows a single training example. In practice, one trains deep learning models in
batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one
typically defines a `max_source_length` and `max_target_length`, which determine the maximum length of the
input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on
the task.
In addition, we must make sure that padding token id's of the `labels` are not taken into account by the loss
function. In PyTorch and Tensorflow, this can be done by replacing them with -100, which is the `ignore_index`
of the `CrossEntropyLoss`. In Flax, one can use the `decoder_attention_mask` to ignore padded tokens from
the loss (see the [Flax summarization script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization) for details). We also pass
`attention_mask` as additional input to the model, which makes sure that padding tokens of the inputs are
ignored. The code example below illustrates all of this.
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> import torch
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> # the following 2 hyperparameters are task-specific
>>> max_source_length = 512
>>> max_target_length = 128
>>> # Suppose we have the following 2 training examples:
>>> input_sequence_1 = "Welcome to NYC"
>>> output_sequence_1 = "Bienvenue à NYC"
>>> input_sequence_2 = "HuggingFace is a company"
>>> output_sequence_2 = "HuggingFace est une entreprise"
>>> # encode the inputs
>>> task_prefix = "translate English to French: "
>>> input_sequences = [input_sequence_1, input_sequence_2]
>>> encoding = tokenizer(
... [task_prefix + sequence for sequence in input_sequences],
... padding="longest",
... max_length=max_source_length,
... truncation=True,
... return_tensors="pt",
... )
>>> input_ids, attention_mask = encoding.input_ids, encoding.attention_mask
>>> # encode the targets
>>> target_encoding = tokenizer(
... [output_sequence_1, output_sequence_2],
... padding="longest",
... max_length=max_target_length,
... truncation=True,
... return_tensors="pt",
... )
>>> labels = target_encoding.input_ids
>>> # replace padding token id's of the labels by -100 so it's ignored by the loss
>>> labels[labels == tokenizer.pad_token_id] = -100
>>> # forward pass
>>> loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
>>> loss.item()
0.188
```
Additional training tips:
- T5 models need a slightly higher learning rate than the default one set in the `Trainer` when using the AdamW
optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question
answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer.
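As a hedged sketch, the AdaFactor optimizer is available directly in Transformers as `Adafactor`; note that when you pass a fixed learning rate, `relative_step` (and typically `scale_parameter`) must be disabled. The tiny `torch.nn.Linear` below is only a stand-in for a T5 model:

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(4, 4)  # stand-in for a T5 model
# With a fixed learning rate, Adafactor requires relative_step=False.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```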
According to [this forum post](https://discuss.huggingface.co/t/t5-finetuning-tips/684), task prefixes matter when
(1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5's
pre-training mixture (see Appendix D of the [paper](https://arxiv.org/pdf/1910.10683.pdf) for the task prefixes
used).
If training on TPU, it is recommended to pad all examples of the dataset to the same length or make use of
*pad_to_multiple_of* to have a small number of predefined bucket sizes to fit all examples in. Dynamically padding
batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape
encountered during training, which significantly slows down the training.
## Inference
At inference time, it is recommended to use [`~generation.GenerationMixin.generate`]. This
method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder
and auto-regressively generates the decoder output. Check out [this blog post](https://huggingface.co/blog/how-to-generate) to know all the details about generating text with Transformers.
There's also [this blog post](https://huggingface.co/blog/encoder-decoder#encoder-decoder) which explains how
generation works in general in encoder-decoder models.
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> outputs = model.generate(input_ids)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Das Haus ist wunderbar.
```
Note that T5 uses the `pad_token_id` as the `decoder_start_token_id`, so when doing generation without using
[`~generation.GenerationMixin.generate`], make sure you start it with the `pad_token_id`.
The example above only shows a single example. You can also do batched inference, like so:
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> task_prefix = "translate English to German: "
>>> # use different length sentences to test batching
>>> sentences = ["The house is wonderful.", "I like to work in NYC."]
>>> inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True)
>>> output_sequences = model.generate(
... input_ids=inputs["input_ids"],
... attention_mask=inputs["attention_mask"],
... do_sample=False, # disable sampling to test if batching affects output
... )
>>> print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True))
['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.']
```
Because T5 has been trained with the span-mask denoising objective,
it can be used to predict the sentinel (masked-out) tokens during inference.
The predicted tokens will then be placed between the sentinel tokens.
```python
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
>>> sequence_ids = model.generate(input_ids)
>>> sequences = tokenizer.batch_decode(sequence_ids)
>>> sequences
['<pad> <extra_id_0> park offers <extra_id_1> the <extra_id_2> park.</s>']
```
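The decoded string can be split back into per-sentinel predictions with a little post-processing (a plain-Python sketch, not part of the Transformers API):

```python
import re

decoded = "<pad> <extra_id_0> park offers <extra_id_1> the <extra_id_2> park.</s>"

# split on the sentinel markers; the first chunk precedes <extra_id_0> and is dropped
chunks = re.split(r"<extra_id_\d+>", decoded)
predictions = [c.replace("<pad>", "").replace("</s>", "").strip() for c in chunks[1:]]
print(predictions)  # ['park offers', 'the', 'park.']
```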
## Performance
If you'd like faster training and inference performance, install [NVIDIA APEX](https://github.com/NVIDIA/apex#quick-start) for NVIDIA GPUs, or [ROCm APEX](https://github.com/ROCmSoftwarePlatform/apex) for AMD GPUs, and then the model will automatically use `apex.normalization.FusedRMSNorm` instead of `T5LayerNorm`. The former uses an optimized fused kernel which is several times faster than the latter.
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="text-classification"/>
- A notebook for how to [finetune T5 for classification and multiple choice](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb).
- A notebook for how to [finetune T5 for sentiment span extraction](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb). 🌎
<PipelineTag pipeline="token-classification"/>
- A notebook for how to [finetune T5 for named entity recognition](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing). 🌎
<PipelineTag pipeline="text-generation"/>
- A notebook for [Finetuning CodeT5 for generating docstrings from Ruby code](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tune_CodeT5_for_generating_docstrings_from_Ruby_code.ipynb).
<PipelineTag pipeline="summarization"/>
- A notebook to [Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/T5/Fine_tuning_Dutch_T5_base_on_CNN_Daily_Mail_for_summarization_(on_TPU_using_HuggingFace_Accelerate).ipynb).
- A notebook for how to [finetune T5 for summarization in PyTorch and track experiments with WandB](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb#scrollTo=OKRpFvYhBauC). 🌎
- A blog post on [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq).
- [`T5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb).
- [`TFT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb).
- [`FlaxT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/summarization).
- [Summarization](https://huggingface.co/course/chapter7/5?fw=pt#summarization) chapter of the 🤗 Hugging Face course.
- [Summarization task guide](../tasks/summarization)
<PipelineTag pipeline="fill-mask"/>
- [`FlaxT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#t5-like-span-masked-language-modeling) for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. [`FlaxT5ForConditionalGeneration`] is also supported by this [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb).
<PipelineTag pipeline="translation"/>
- [`T5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb).
- [`TFT5ForConditionalGeneration`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/translation) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).
- [Translation task guide](../tasks/translation)
<PipelineTag pipeline="question-answering"/>
- A notebook on how to [finetune T5 for question answering with TensorFlow 2](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb). 🌎
- A notebook on how to [finetune T5 for question answering on a TPU](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil).
🚀 **Deploy**
- A blog post on how to deploy [T5 11B for inference for less than $500](https://www.philschmid.de/deploy-t5-11b).
## T5Config
[[autodoc]] T5Config
## T5Tokenizer
[[autodoc]] T5Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## T5TokenizerFast
[[autodoc]] T5TokenizerFast
<frameworkcontent>
<pt>
## T5Model
[[autodoc]] T5Model
- forward
## T5ForConditionalGeneration
[[autodoc]] T5ForConditionalGeneration
- forward
## T5EncoderModel
[[autodoc]] T5EncoderModel
- forward
## T5ForSequenceClassification
[[autodoc]] T5ForSequenceClassification
- forward
## T5ForTokenClassification
[[autodoc]] T5ForTokenClassification
- forward
## T5ForQuestionAnswering
[[autodoc]] T5ForQuestionAnswering
- forward
</pt>
<tf>
## TFT5Model
[[autodoc]] TFT5Model
- call
## TFT5ForConditionalGeneration
[[autodoc]] TFT5ForConditionalGeneration
- call
## TFT5EncoderModel
[[autodoc]] TFT5EncoderModel
- call
</tf>
<jax>
## FlaxT5Model
[[autodoc]] FlaxT5Model
- __call__
- encode
- decode
## FlaxT5ForConditionalGeneration
[[autodoc]] FlaxT5ForConditionalGeneration
- __call__
- encode
- decode
## FlaxT5EncoderModel
[[autodoc]] FlaxT5EncoderModel
- __call__
</jax>
</frameworkcontent>
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# UniSpeech
## Overview
The UniSpeech model was proposed in [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael
Zeng, Xuedong Huang.
The abstract from the paper is the following:
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both
unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture
information more correlated with phonetic structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The
results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech
recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all
testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task,
i.e., a relative word error rate reduction of 6% against the previous approach.*
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The Authors' code can be
found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech).
## Usage tips
- UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please
use [`Wav2Vec2Processor`] for the feature extraction.
- The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using [`Wav2Vec2CTCTokenizer`].
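As a hedged sketch of the full CTC pipeline (the checkpoint name is illustrative; any CTC-fine-tuned UniSpeech checkpoint with a matching processor works the same way, and the silent waveform below is only a stand-in for real audio):

```python
import torch
from transformers import Wav2Vec2Processor, UniSpeechForCTC

checkpoint = "microsoft/unispeech-1350-en-353-fr-ft-1h"  # illustrative checkpoint
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = UniSpeechForCTC.from_pretrained(checkpoint)

waveform = torch.zeros(16000).numpy()  # stand-in for one second of 16 kHz audio
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the argmax per frame, then collapse repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```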
## Resources
- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)
## UniSpeechConfig
[[autodoc]] UniSpeechConfig
## UniSpeech specific outputs
[[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput
## UniSpeechModel
[[autodoc]] UniSpeechModel
- forward
## UniSpeechForCTC
[[autodoc]] UniSpeechForCTC
- forward
## UniSpeechForSequenceClassification
[[autodoc]] UniSpeechForSequenceClassification
- forward
## UniSpeechForPreTraining
[[autodoc]] UniSpeechForPreTraining
- forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# XLNet
<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=xlnet">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlnet-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/xlnet-base-cased">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview
The XLNet model was proposed in [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov,
Quoc V. Le. XLNet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn
bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization
order.
The abstract from the paper is the following:
*With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves
better performance than pretraining approaches based on autoregressive language modeling. However, relying on
corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a
pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into
pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large
margin, including question answering, natural language inference, sentiment analysis, and document ranking.*
This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/zihangdai/xlnet/).
## Usage tips
- The specific attention pattern can be controlled at training and test time using the `perm_mask` input.
- Due to the difficulty of training a fully auto-regressive model over various factorization orders, XLNet is pretrained
  using only a subset of the output tokens as targets, which are selected with the `target_mapping` input.
- To use XLNet for sequential decoding (i.e. not in fully bi-directional setting), use the `perm_mask` and
`target_mapping` inputs to control the attention span and outputs (see examples in
*examples/pytorch/text-generation/run_generation.py*)
- XLNet is one of the few models that has no sequence length limit.
- XLNet is not a traditional autoregressive model but uses a training strategy that builds on that. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed in the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,…,sequence length.
- XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## XLNetConfig
[[autodoc]] XLNetConfig
## XLNetTokenizer
[[autodoc]] XLNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## XLNetTokenizerFast
[[autodoc]] XLNetTokenizerFast
## XLNet specific outputs
[[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput
[[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput
[[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput
<frameworkcontent>
<pt>
## XLNetModel
[[autodoc]] XLNetModel
- forward
## XLNetLMHeadModel
[[autodoc]] XLNetLMHeadModel
- forward
## XLNetForSequenceClassification
[[autodoc]] XLNetForSequenceClassification
- forward
## XLNetForMultipleChoice
[[autodoc]] XLNetForMultipleChoice
- forward
## XLNetForTokenClassification
[[autodoc]] XLNetForTokenClassification
- forward
## XLNetForQuestionAnsweringSimple
[[autodoc]] XLNetForQuestionAnsweringSimple
- forward
## XLNetForQuestionAnswering
[[autodoc]] XLNetForQuestionAnswering
- forward
</pt>
<tf>
## TFXLNetModel
[[autodoc]] TFXLNetModel
- call
## TFXLNetLMHeadModel
[[autodoc]] TFXLNetLMHeadModel
- call
## TFXLNetForSequenceClassification
[[autodoc]] TFXLNetForSequenceClassification
- call
## TFXLNetForMultipleChoice
[[autodoc]] TFXLNetForMultipleChoice
- call
## TFXLNetForTokenClassification
[[autodoc]] TFXLNetForTokenClassification
- call
## TFXLNetForQuestionAnsweringSimple
[[autodoc]] TFXLNetForQuestionAnsweringSimple
- call
</tf>
</frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# GPU inference
GPUs are the standard choice of hardware for machine learning because, unlike CPUs, they are optimized for memory bandwidth and parallelism. To keep up with the larger sizes of modern models or to run these large models on existing and older hardware, there are several optimizations you can use to speed up GPU inference. In this guide, you'll learn how to use FlashAttention-2 (a more memory-efficient attention mechanism), BetterTransformer (a PyTorch native fastpath execution), and bitsandbytes to quantize your model to a lower precision. Finally, learn how to use 🤗 Optimum to accelerate inference with ONNX Runtime on Nvidia and AMD GPUs.
<Tip>
The majority of the optimizations described here also apply to multi-GPU setups!
</Tip>
## FlashAttention-2
<Tip>
FlashAttention-2 is experimental and may change considerably in future versions.
</Tip>
[FlashAttention-2](https://huggingface.co/papers/2205.14135) is a faster and more efficient implementation of the standard attention mechanism that can significantly speedup inference by:
1. additionally parallelizing the attention computation over sequence length
2. partitioning the work between GPU threads to reduce communication and shared memory reads/writes between them
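Enabling it is a one-argument change at load time (a sketch only: the checkpoint name is illustrative, and running this requires the `flash-attn` package, a supported GPU, and fp16/bf16 weights):

```python
import torch
from transformers import AutoModelForCausalLM

# Load any supported architecture with the FlashAttention-2 kernels.
# Requires: pip install flash-attn, a compatible NVIDIA/AMD GPU, half precision.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # illustrative checkpoint
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```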
FlashAttention-2 is currently supported for the following architectures:
* [Bark](https://huggingface.co/docs/transformers/model_doc/bark#transformers.BarkModel)
* [Bart](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartModel)
* [Chameleon](https://huggingface.co/docs/transformers/model_doc/chameleon#transformers.Chameleon)
* [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel)
* [Cohere](https://huggingface.co/docs/transformers/model_doc/cohere#transformers.CohereModel)
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
* [Gemma](https://huggingface.co/docs/transformers/model_doc/gemma#transformers.GemmaModel)
* [Gemma2](https://huggingface.co/docs/transformers/model_doc/gemma2#transformers.Gemma2Model)
* [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
* [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode#transformers.GPTBigCodeModel)
* [GPTNeo](https://huggingface.co/docs/transformers/model_doc/gpt_neo#transformers.GPTNeoModel)
* [GPTNeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox#transformers.GPTNeoXModel)
* [GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj#transformers.GPTJModel)
* [Granite](https://huggingface.co/docs/transformers/model_doc/granite#transformers.GraniteModel)
* [Idefics2](https://huggingface.co/docs/transformers/model_doc/idefics2#transformers.Idefics2Model)
* [Falcon](https://huggingface.co/docs/transformers/model_doc/falcon#transformers.FalconModel)
* [JetMoe](https://huggingface.co/docs/transformers/model_doc/jetmoe#transformers.JetMoeModel)
* [Jamba](https://huggingface.co/docs/transformers/model_doc/jamba#transformers.JambaModel)
* [Llama](https://huggingface.co/docs/transformers/model_doc/llama#transformers.LlamaModel)
* [Llava](https://huggingface.co/docs/transformers/model_doc/llava)
* [Llava-NeXT](https://huggingface.co/docs/transformers/model_doc/llava_next)
* [Llava-NeXT-Video](https://huggingface.co/docs/transformers/model_doc/llava_next_video)
* [VipLlava](https://huggingface.co/docs/transformers/model_doc/vipllava)
* [VideoLlava](https://huggingface.co/docs/transformers/model_doc/video_llava)
* [M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)
* [MBart](https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartModel)
* [Mistral](https://huggingface.co/docs/transformers/model_doc/mistral#transformers.MistralModel)
* [Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral#transformers.MixtralModel)
* [Musicgen](https://huggingface.co/docs/transformers/model_doc/musicgen#transformers.MusicgenModel)
* [MusicGen Melody](https://huggingface.co/docs/transformers/model_doc/musicgen_melody#transformers.MusicgenMelodyModel)
* [Nemotron](https://huggingface.co/docs/transformers/model_doc/nemotron)
* [NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)
* [OLMo](https://huggingface.co/docs/transformers/model_doc/olmo#transformers.OlmoModel)
* [OPT](https://huggingface.co/docs/transformers/model_doc/opt#transformers.OPTModel)
* [Phi](https://huggingface.co/docs/transformers/model_doc/phi#transformers.PhiModel)
* [Phi3](https://huggingface.co/docs/transformers/model_doc/phi3#transformers.Phi3Model)
* [StableLm](https://huggingface.co/docs/transformers/model_doc/stablelm#transformers.StableLmModel)
* [Starcoder2](https://huggingface.co/docs/transformers/model_doc/starcoder2#transformers.Starcoder2Model)
* [Qwen2](https://huggingface.co/docs/transformers/model_doc/qwen2#transformers.Qwen2Model)
* [Qwen2Audio](https://huggingface.co/docs/transformers/model_doc/qwen2_audio#transformers.Qwen2AudioEncoder)
* [Qwen2MoE](https://huggingface.co/docs/transformers/model_doc/qwen2_moe#transformers.Qwen2MoeModel)
* [Qwen2VL](https://huggingface.co/docs/transformers/model_doc/qwen2_vl#transformers.Qwen2VLModel)
* [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperModel)
* [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2Model)
* [Hubert](https://huggingface.co/docs/transformers/model_doc/hubert#transformers.HubertModel)
* [data2vec_audio](https://huggingface.co/docs/transformers/main/en/model_doc/data2vec#transformers.Data2VecAudioModel)
* [Sew](https://huggingface.co/docs/transformers/main/en/model_doc/sew#transformers.SEWModel)
* [SigLIP](https://huggingface.co/docs/transformers/model_doc/siglip)
* [UniSpeech](https://huggingface.co/docs/transformers/v4.39.3/en/model_doc/unispeech#transformers.UniSpeechModel)
* [unispeech_sat](https://huggingface.co/docs/transformers/v4.39.3/en/model_doc/unispeech-sat#transformers.UniSpeechSatModel)
You can request to add FlashAttention-2 support for another model by opening a GitHub Issue or Pull Request.
Before you begin, make sure you have FlashAttention-2 installed.
<hfoptions id="install">
<hfoption id="NVIDIA">
```bash
pip install flash-attn --no-build-isolation
```
We strongly suggest referring to the detailed [installation instructions](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#installation-and-features) to learn more about supported hardware and data types!
</hfoption>
<hfoption id="AMD">
FlashAttention-2 is also supported on AMD GPUs and current support is limited to **Instinct MI210**, **Instinct MI250** and **Instinct MI300**. We strongly suggest using this [Dockerfile](https://github.com/huggingface/optimum-amd/tree/main/docker/transformers-pytorch-amd-gpu-flash/Dockerfile) to use FlashAttention-2 on AMD GPUs.
</hfoption>
</hfoptions>
To enable FlashAttention-2, pass the argument `attn_implementation="flash_attention_2"` to [`~AutoModelForCausalLM.from_pretrained`]:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
<Tip>
FlashAttention-2 can only be used when the model's dtype is `fp16` or `bf16`. Make sure to cast your model to the appropriate dtype and load it on a supported device before using FlashAttention-2.
<br>
You can also set `use_flash_attention_2=True` to enable FlashAttention-2 but it is deprecated in favor of `attn_implementation="flash_attention_2"`.
</Tip>
FlashAttention-2 can be combined with other optimization techniques like quantization to further speedup inference. For example, you can combine FlashAttention-2 with 8-bit or 4-bit quantization:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load in 8bit
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
attn_implementation="flash_attention_2",
)
# load in 4bit
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
attn_implementation="flash_attention_2",
)
```
### Expected speedups
You can benefit from considerable speedups for inference, especially for inputs with long sequences. However, since FlashAttention-2 does not support computing attention scores with padding tokens, you must manually pad/unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generations with padding tokens.
To overcome this, you should use FlashAttention-2 without padding tokens in the sequence during training (by packing a dataset or [concatenating sequences](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516) until reaching the maximum sequence length).
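The packing idea can be sketched in a few lines (a simplified illustration with made-up token ids, not the actual `run_clm.py` implementation):

```python
# Sketch of dataset packing: concatenate tokenized sequences and split the
# stream into fixed-length blocks so batches contain no padding tokens.
def pack_sequences(tokenized_sequences, block_size):
    # Flatten all token ids into one long stream
    stream = [tok for seq in tokenized_sequences for tok in seq]
    # Drop the tail that doesn't fill a complete block
    total = (len(stream) // block_size) * block_size
    return [stream[i:i + block_size] for i in range(0, total, block_size)]

sequences = [[101, 7, 8, 102], [101, 5, 102], [101, 9, 9, 9, 102]]
blocks = pack_sequences(sequences, block_size=4)
print(blocks)  # [[101, 7, 8, 102], [101, 5, 102, 101], [9, 9, 9, 102]]
```

Every block is exactly `block_size` tokens long, so no padding (and no pad/unpad overhead) is needed during training.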
For a single forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/falcon-7b-inference-large-seqlen.png">
</div>
For a single forward pass on [meta-llama/Llama-7b-hf](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes without padding tokens, the expected speedup is:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-7b-inference-large-seqlen.png">
</div>
For sequences with padding tokens (generating with padding tokens), you need to unpad/pad the input sequences to correctly compute the attention scores. With a relatively small sequence length, a single forward pass creates overhead leading to a small speedup (in the example below, 30% of the input is filled with padding tokens):
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-small-seqlen-padding.png">
</div>
But for larger sequence lengths, you can expect even more speedup benefits:
<Tip>
FlashAttention is more memory efficient, meaning you can train on much larger sequence lengths without running into out-of-memory issues. You can potentially reduce memory usage up to 20x for larger sequence lengths. Take a look at the [flash-attention](https://github.com/Dao-AILab/flash-attention) repository for more details.
</Tip>
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-large-seqlen-padding.png">
</div>
## PyTorch scaled dot product attention
PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA) can also call FlashAttention and memory-efficient attention kernels under the hood. SDPA support is currently being added natively in Transformers and is used by default for `torch>=2.1.1` when an implementation is available. You may also set `attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.
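For intuition, the computation SDPA dispatches to these kernels is `softmax(Q @ K^T / sqrt(d)) @ V`. A naive pure-Python version (for illustration only, this is nothing like the fused kernels, which avoid materializing the full attention matrix):

```python
import math

# Naive scaled dot-product attention over plain lists, for intuition only.
# Q, K, V are lists of row vectors; d is the head dimension.
def scaled_dot_product_attention(Q, K, V):
    d = len(Q[0])
    # Attention scores: Q @ K^T / sqrt(d)
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K] for q in Q]
    # Row-wise softmax (subtract the max for numerical stability)
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Weighted sum of values: weights @ V
    return [[sum(w * v[j] for w, v in zip(row, V)) for j in range(len(V[0]))] for row in weights]

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = scaled_dot_product_attention(Q, K, V)
```

The optimized kernels produce the same result while tiling the computation to stay in fast on-chip memory.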
For now, Transformers supports SDPA inference and training for the following architectures:
* [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTModel)
* [Bart](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartModel)
* [Bert](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel)
* [CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert#transformers.CamembertModel)
* [Chameleon](https://huggingface.co/docs/transformers/model_doc/chameleon#transformers.Chameleon)
* [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel)
* [Cohere](https://huggingface.co/docs/transformers/model_doc/cohere#transformers.CohereModel)
* [data2vec_audio](https://huggingface.co/docs/transformers/main/en/model_doc/data2vec#transformers.Data2VecAudioModel)
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DeiT](https://huggingface.co/docs/transformers/model_doc/deit#transformers.DeiTModel)
* [Dpr](https://huggingface.co/docs/transformers/model_doc/dpr#transformers.DprReader)
* [Falcon](https://huggingface.co/docs/transformers/model_doc/falcon#transformers.FalconModel)
* [Gemma](https://huggingface.co/docs/transformers/model_doc/gemma#transformers.GemmaModel)
* [Gemma2](https://huggingface.co/docs/transformers/model_doc/gemma2#transformers.Gemma2Model)
* [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)
* [GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode#transformers.GPTBigCodeModel)
* [GPTNeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox#transformers.GPTNeoXModel)
* [Hubert](https://huggingface.co/docs/transformers/model_doc/hubert#transformers.HubertModel)
* [Idefics](https://huggingface.co/docs/transformers/model_doc/idefics#transformers.IdeficsModel)
* [Granite](https://huggingface.co/docs/transformers/model_doc/granite#transformers.GraniteModel)
* [JetMoe](https://huggingface.co/docs/transformers/model_doc/jetmoe#transformers.JetMoeModel)
* [Jamba](https://huggingface.co/docs/transformers/model_doc/jamba#transformers.JambaModel)
* [Llama](https://huggingface.co/docs/transformers/model_doc/llama#transformers.LlamaModel)
* [Mistral](https://huggingface.co/docs/transformers/model_doc/mistral#transformers.MistralModel)
* [Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral#transformers.MixtralModel)
* [Musicgen](https://huggingface.co/docs/transformers/model_doc/musicgen#transformers.MusicgenModel)
* [MusicGen Melody](https://huggingface.co/docs/transformers/model_doc/musicgen_melody#transformers.MusicgenMelodyModel)
* [OLMo](https://huggingface.co/docs/transformers/model_doc/olmo#transformers.OlmoModel)
* [PaliGemma](https://huggingface.co/docs/transformers/model_doc/paligemma#transformers.PaliGemmaForConditionalGeneration)
* [Phi](https://huggingface.co/docs/transformers/model_doc/phi#transformers.PhiModel)
* [Phi3](https://huggingface.co/docs/transformers/model_doc/phi3#transformers.Phi3Model)
* [StableLm](https://huggingface.co/docs/transformers/model_doc/stablelm#transformers.StableLmModel)
* [Starcoder2](https://huggingface.co/docs/transformers/model_doc/starcoder2#transformers.Starcoder2Model)
* [Qwen2](https://huggingface.co/docs/transformers/model_doc/qwen2#transformers.Qwen2Model)
* [Qwen2Audio](https://huggingface.co/docs/transformers/model_doc/qwen2_audio#transformers.Qwen2AudioEncoder)
* [Qwen2MoE](https://huggingface.co/docs/transformers/model_doc/qwen2_moe#transformers.Qwen2MoeModel)
* [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaModel)
* [Sew](https://huggingface.co/docs/transformers/main/en/model_doc/sew#transformers.SEWModel)
* [SigLIP](https://huggingface.co/docs/transformers/model_doc/siglip)
* [UniSpeech](https://huggingface.co/docs/transformers/v4.39.3/en/model_doc/unispeech#transformers.UniSpeechModel)
* [unispeech_sat](https://huggingface.co/docs/transformers/v4.39.3/en/model_doc/unispeech-sat#transformers.UniSpeechSatModel)
* [Qwen2VL](https://huggingface.co/docs/transformers/model_doc/qwen2_vl#transformers.Qwen2VLModel)
* [Nemotron](https://huggingface.co/docs/transformers/model_doc/nemotron)
* [ViT](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTModel)
* [ViTHybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid#transformers.ViTHybridModel)
* [ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae#transformers.ViTMAEModel)
* [ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn#transformers.ViTMSNModel)
* [VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae#transformers.VideoMAEModel)
* [wav2vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2Model)
* [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperModel)
* [XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaModel)
* [XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl#transformers.XLMRobertaXLModel)
* [YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos#transformers.YolosModel)
<Tip>
FlashAttention can only be used for models with the `fp16` or `bf16` torch dtype, so make sure to cast your model to the appropriate dtype first. The memory-efficient attention backend is able to handle `fp32` models.
</Tip>
<Tip>
SDPA does not support certain sets of attention parameters, such as `head_mask` and `output_attentions=True`.
In that case, you will see a warning message and Transformers will fall back to the (slower) eager implementation.
</Tip>
By default, SDPA selects the most performant kernel available but you can check whether a backend is available in a given setting (hardware, problem size) with [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager:
```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If you see a bug with the traceback below, try using the nightly version of PyTorch which may have broader coverage for FlashAttention:
```bash
RuntimeError: No available kernel. Aborting execution.
# install PyTorch nightly
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```
## BetterTransformer
<Tip warning={true}>
Some BetterTransformer features are being upstreamed to Transformers with default support for native `torch.nn.scaled_dot_product_attention`. BetterTransformer still has a wider coverage than the Transformers SDPA integration, but you can expect more and more architectures to natively support SDPA in Transformers.
</Tip>
<Tip>
Check out our benchmarks with BetterTransformer and scaled dot product attention in the [Out of the box acceleration and memory savings of 🤗 decoder models with PyTorch 2.0](https://pytorch.org/blog/out-of-the-box-acceleration/) and learn more about the fastpath execution in the [BetterTransformer](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) blog post.
</Tip>
BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are:
1. fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps
2. exploiting the inherent sparsity of padding tokens to skip unnecessary computation, using nested tensors
BetterTransformer also converts all attention operations to use the more memory-efficient [scaled dot product attention (SDPA)](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention), and it calls optimized kernels like [FlashAttention](https://huggingface.co/papers/2205.14135) under the hood.
Before you start, make sure you have 🤗 Optimum [installed](https://huggingface.co/docs/optimum/installation).
Then you can enable BetterTransformer with the [`PreTrainedModel.to_bettertransformer`] method:
```python
model = model.to_bettertransformer()
```
You can return the original Transformers model with the [`~PreTrainedModel.reverse_bettertransformer`] method. You should use this before saving your model to use the canonical Transformers modeling:
```py
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")
```
## bitsandbytes
bitsandbytes is a quantization library that includes support for 4-bit and 8-bit quantization. Quantization reduces your model size compared to its native full precision version, making it easier to fit large models onto GPUs with limited memory.
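As a back-of-the-envelope illustration of the savings (weights only; activations, the KV cache, and quantization overhead are ignored):

```python
# Rough memory footprint of model weights alone for an illustrative
# 7B-parameter model at different precisions.
def weight_memory_gb(num_params, bits_per_param):
    return num_params * bits_per_param / 8 / 1024**3

params = 7_000_000_000
print(f"fp16: {weight_memory_gb(params, 16):.1f} GB")  # ~13.0 GB
print(f"int8: {weight_memory_gb(params, 8):.1f} GB")   # ~6.5 GB
print(f"4bit: {weight_memory_gb(params, 4):.1f} GB")   # ~3.3 GB
```

Halving the bits per weight halves the weight memory, which is why 4-bit loading lets much larger models fit on a single GPU.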
Make sure you have bitsandbytes and 🤗 Accelerate installed:
```bash
# these versions support 8-bit and 4-bit
pip install bitsandbytes>=0.39.0 accelerate>=0.20.0
# install Transformers
pip install transformers
```
### 4-bit
To load a model in 4-bit for inference, use the `load_in_4bit` parameter. The `device_map` parameter is optional, but we recommend setting it to `"auto"` to allow ๐ค Accelerate to automatically and efficiently allocate the model given the available resources in the environment.
```py
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
To load a model in 4-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 600MB of memory to the first GPU and 1GB of memory to the second GPU:
```py
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
```
### 8-bit
<Tip>
If you're curious and interested in learning more about the concepts underlying 8-bit quantization, read the [Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration) blog post.
</Tip>
To load a model in 8-bit for inference, use the `load_in_8bit` parameter. The `device_map` parameter is optional, but we recommend setting it to `"auto"` to allow ๐ค Accelerate to automatically and efficiently allocate the model given the available resources in the environment:
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=BitsAndBytesConfig(load_in_8bit=True))
```
If you're loading a model in 8-bit for text generation, you should use the [`~transformers.GenerationMixin.generate`] method instead of the [`Pipeline`] function which is not optimized for 8-bit models and will be slower. Some sampling strategies, like nucleus sampling, are also not supported by the [`Pipeline`] for 8-bit models. You should also place all inputs on the same device as the model:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", quantization_config=BitsAndBytesConfig(load_in_8bit=True))
prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
To load a model in 8-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU:
```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
<Tip>
Feel free to try running an 11 billion parameter [T5 model](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) or the 3 billion parameter [BLOOM model](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing) for inference on Google Colab's free tier GPUs!
</Tip>
## 🤗 Optimum
<Tip>
Learn more details about using ORT with 🤗 Optimum in the [Accelerated inference on NVIDIA GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#accelerated-inference-on-nvidia-gpus) and [Accelerated inference on AMD GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu#accelerated-inference-on-amd-gpus) guides. This section only provides a brief and simple example.
</Tip>
ONNX Runtime (ORT) is a model accelerator that supports accelerated inference on NVIDIA GPUs, and on AMD GPUs that use the [ROCm](https://www.amd.com/en/products/software/rocm.html) stack. ORT uses optimization techniques like fusing common operations into a single node and constant folding to reduce the number of computations performed and speed up inference. ORT also places the most computationally intensive operations on the GPU and the rest on the CPU to intelligently distribute the workload between the two devices.
ORT is supported by 🤗 Optimum which can be used in 🤗 Transformers. You'll need to use an [`~optimum.onnxruntime.ORTModel`] for the task you're solving, and specify the `provider` parameter which can be set to either [`CUDAExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#cudaexecutionprovider), [`ROCMExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu) or [`TensorrtExecutionProvider`](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). If you want to load a model that was not yet exported to ONNX, you can set `export=True` to convert your model on-the-fly to the ONNX format:
```py
from optimum.onnxruntime import ORTModelForSequenceClassification
ort_model = ORTModelForSequenceClassification.from_pretrained(
"distilbert/distilbert-base-uncased-finetuned-sst-2-english",
export=True,
provider="CUDAExecutionProvider",
)
```
Now you're free to use the model for inference:
```py
from optimum.pipelines import pipeline
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased-finetuned-sst-2-english")
pipeline = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0")
result = pipeline("Both the music and visual were astounding, not to mention the actors performance.")
```
## Combine optimizations
It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model. For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# load model in 4-bit
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
# enable BetterTransformer
model = model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
# enable FlashAttention
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# AWQ
<Tip>
Try AWQ quantization with this [notebook](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)!
</Tip>
[Activation-aware Weight Quantization (AWQ)](https://hf.co/papers/2306.00978) doesn't quantize all the weights in a model, and instead, it preserves a small percentage of weights that are important for LLM performance. This significantly reduces quantization loss such that you can run models in 4-bit precision without experiencing any performance degradation.
There are several libraries for quantizing models with the AWQ algorithm, such as [llm-awq](https://github.com/mit-han-lab/llm-awq), [autoawq](https://github.com/casper-hansen/AutoAWQ) or [optimum-intel](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc). Transformers supports loading models quantized with the llm-awq and autoawq libraries. This guide will show you how to load models quantized with autoawq, but the process is similar for llm-awq quantized models.
Make sure you have autoawq installed:
```bash
pip install autoawq
```
AWQ-quantized models can be identified by checking the `quantization_config` attribute in the model's [config.json](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/blob/main/config.json) file:
```json
{
"_name_or_path": "/workspace/process/huggingfaceh4_zephyr-7b-alpha/source",
"architectures": [
"MistralForCausalLM"
],
...
...
...
"quantization_config": {
"quant_method": "awq",
"zero_point": true,
"group_size": 128,
"bits": 4,
"version": "gemm"
}
}
```
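A quick programmatic check of the same thing (a sketch; assumes you already have the parsed `config.json` as a dict):

```python
import json

# Sketch: detect whether a parsed config.json describes an AWQ-quantized model
# by looking for quant_method == "awq" in its quantization_config.
def is_awq_quantized(config: dict) -> bool:
    return config.get("quantization_config", {}).get("quant_method") == "awq"

config = json.loads("""
{
  "architectures": ["MistralForCausalLM"],
  "quantization_config": {"quant_method": "awq", "bits": 4, "group_size": 128}
}
""")
print(is_awq_quantized(config))  # True
```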
A quantized model is loaded with the [`~PreTrainedModel.from_pretrained`] method. If you loaded your model on the CPU, make sure to move it to a GPU device first. Use the `device_map` parameter to specify where to place the model:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "TheBloke/zephyr-7B-alpha-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")
```
Loading an AWQ-quantized model automatically sets other weights to fp16 by default for performance reasons. If you want to load these other weights in a different format, use the `torch_dtype` parameter:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "TheBloke/zephyr-7B-alpha-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
```
AWQ quantization can also be combined with [FlashAttention-2](../perf_infer_gpu_one#flashattention-2) to further accelerate inference:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", attn_implementation="flash_attention_2", device_map="cuda:0")
```
## Fused modules
Fused modules offer improved accuracy and performance, and they are supported out-of-the-box for AWQ modules of the [Llama](https://huggingface.co/meta-llama) and [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) architectures, but you can also fuse AWQ modules for unsupported architectures.
<Tip warning={true}>
Fused modules cannot be combined with other optimization techniques such as FlashAttention-2.
</Tip>
<hfoptions id="fuse">
<hfoption id="supported architectures">
To enable fused modules for supported architectures, create an [`AwqConfig`] and set the parameters `fuse_max_seq_len` and `do_fuse=True`. The `fuse_max_seq_len` parameter is the total sequence length and it should include the context length and the expected generation length. You can set it to a larger value to be safe.
For example, to fuse the AWQ modules of the [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model:
```python
import torch
from transformers import AwqConfig, AutoModelForCausalLM
model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ"
quantization_config = AwqConfig(
bits=4,
fuse_max_seq_len=512,
do_fuse=True,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)
```
The [TheBloke/Mistral-7B-OpenOrca-AWQ](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-AWQ) model was benchmarked with `batch_size=1` with and without fused modules.
<figcaption class="text-center text-gray-500 text-lg">Unfused module</figcaption>
| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------|
| 1 | 32 | 32 | 60.0984 | 38.4537 | 4.50 GB (5.68%) |
| 1 | 64 | 64 | 1333.67 | 31.6604 | 4.50 GB (5.68%) |
| 1 | 128 | 128 | 2434.06 | 31.6272 | 4.50 GB (5.68%) |
| 1 | 256 | 256 | 3072.26 | 38.1731 | 4.50 GB (5.68%) |
| 1 | 512 | 512 | 3184.74 | 31.6819 | 4.59 GB (5.80%) |
| 1 | 1024 | 1024 | 3148.18 | 36.8031 | 4.81 GB (6.07%) |
| 1 | 2048 | 2048 | 2927.33 | 35.2676 | 5.73 GB (7.23%) |
<figcaption class="text-center text-gray-500 text-lg">Fused module</figcaption>
| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM) |
|-------------:|-----------------:|----------------:|-------------------:|------------------:|:----------------|
| 1 | 32 | 32 | 81.4899 | 80.2569 | 4.00 GB (5.05%) |
| 1 | 64 | 64 | 1756.1 | 106.26 | 4.00 GB (5.05%) |
| 1 | 128 | 128 | 2479.32 | 105.631 | 4.00 GB (5.06%) |
| 1 | 256 | 256 | 1813.6 | 85.7485 | 4.01 GB (5.06%) |
| 1 | 512 | 512 | 2848.9 | 97.701 | 4.11 GB (5.19%) |
| 1 | 1024 | 1024 | 3044.35 | 87.7323 | 4.41 GB (5.57%) |
| 1 | 2048 | 2048 | 2715.11 | 89.4709 | 5.57 GB (7.04%) |
The speed and throughput of fused and unfused modules were also tested with the [optimum-benchmark](https://github.com/huggingface/optimum-benchmark) library.
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_forward_memory_plot.png" alt="generate throughput per batch size" />
<figcaption class="mt-2 text-center text-sm text-gray-500">forward peak memory/batch size</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/fused_generate_throughput_plot.png" alt="generate throughput per batch size" />
<figcaption class="mt-2 text-center text-sm text-gray-500">generate throughput/batch size</figcaption>
</div>
</div>
</hfoption>
<hfoption id="unsupported architectures">
For architectures that don't support fused modules yet, you need to create a custom fusing mapping to define which modules need to be fused with the `modules_to_fuse` parameter. For example, here's how to fuse the AWQ modules of the [TheBloke/Yi-34B-AWQ](https://huggingface.co/TheBloke/Yi-34B-AWQ) model:
```python
import torch
from transformers import AwqConfig, AutoModelForCausalLM
model_id = "TheBloke/Yi-34B-AWQ"
quantization_config = AwqConfig(
bits=4,
fuse_max_seq_len=512,
modules_to_fuse={
"attention": ["q_proj", "k_proj", "v_proj", "o_proj"],
"layernorm": ["ln1", "ln2", "norm"],
"mlp": ["gate_proj", "up_proj", "down_proj"],
"use_alibi": False,
"num_attention_heads": 56,
"num_key_value_heads": 8,
"hidden_size": 7168
}
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)
```
The parameter `modules_to_fuse` should include:
- `"attention"`: The names of the attention layers to fuse in the following order: query, key, value and output projection layer. If you don't want to fuse these layers, pass an empty list.
- `"layernorm"`: The names of all the LayerNorm layers you want to replace with a custom fused LayerNorm. If you don't want to fuse these layers, pass an empty list.
- `"mlp"`: The names of the MLP layers you want to fuse into a single MLP layer in the order: (gate (dense, layer, post-attention) / up / down layers).
- `"use_alibi"`: Whether your model uses ALiBi positional embeddings.
- `"num_attention_heads"`: The number of attention heads.
- `"num_key_value_heads"`: The number of key value heads that should be used to implement Grouped Query Attention (GQA). If `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used.
- `"hidden_size"`: The dimension of the hidden representations.
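The relationship between `num_attention_heads` and `num_key_value_heads` described above can be sketched in a few lines. This is an illustrative helper, not part of the Transformers API:

```python
def attention_variant(num_attention_heads: int, num_key_value_heads: int) -> str:
    """Classify the attention scheme implied by the head counts."""
    if num_key_value_heads == num_attention_heads:
        return "MHA"  # every query head has its own key/value head
    if num_key_value_heads == 1:
        return "MQA"  # all query heads share a single key/value head
    return "GQA"      # query heads are grouped over key/value heads

# the Yi-34B configuration above uses 56 attention heads and 8 key/value heads
print(attention_variant(56, 8))  # prints: GQA
```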
</hfoption>
</hfoptions>
## ExLlama-v2 support
Recent versions of `autoawq` support ExLlama-v2 kernels for faster prefill and decoding. To get started, first install the latest version of `autoawq` by running:
```bash
pip install git+https://github.com/casper-hansen/AutoAWQ.git
```
Get started by passing an `AwqConfig()` with `version="exllama"`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig
quantization_config = AwqConfig(version="exllama")
model = AutoModelForCausalLM.from_pretrained(
"TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
quantization_config=quantization_config,
device_map="auto",
)
input_ids = torch.randint(0, 100, (1, 128), dtype=torch.long, device="cuda")
output = model(input_ids)
print(output.logits)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-AWQ")
input_ids = tokenizer.encode("How to make a cake", return_tensors="pt").to(model.device)
output = model.generate(input_ids, do_sample=True, max_length=50, pad_token_id=50256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<Tip warning={true}>
Note this feature is supported on AMD GPUs.
</Tip>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# Automatic speech recognition
[[open-in-colab]]
<Youtube id="TksaY_FDgnk"/>
Automatic speech recognition (ASR) converts a speech signal to text, mapping a sequence of audio inputs to text outputs. Virtual assistants like Siri and Alexa use ASR models to help users every day, and there are many other useful user-facing applications like live captioning and note-taking during meetings.
This guide will show you how to:
1. Finetune [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) on the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset to transcribe audio to text.
2. Use your finetuned model for inference.
<Tip>
To see all architectures and checkpoints compatible with this task, we recommend checking the [task-page](https://huggingface.co/tasks/automatic-speech-recognition).
</Tip>
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate jiwer
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load MInDS-14 dataset
Start by loading a smaller subset of the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset, Audio
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]")
```
Split the dataset's `train` split into a train and test set with the [`~Dataset.train_test_split`] method:
```py
>>> minds = minds.train_test_split(test_size=0.2)
```
Then take a look at the dataset:
```py
>>> minds
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 16
})
test: Dataset({
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
num_rows: 4
})
})
```
While the dataset contains a lot of useful information, like `lang_id` and `english_transcription`, you'll focus on the `audio` and `transcription` in this guide. Remove the other columns with the [`~datasets.Dataset.remove_columns`] method:
```py
>>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"])
```
Take a look at the example again:
```py
>>> minds["train"][0]
{'audio': {'array': array([-0.00024414, 0. , 0. , ..., 0.00024414,
0.00024414, 0.00024414], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 8000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
There are two fields:
- `audio`: a 1-dimensional `array` of the speech signal that must be called to load and resample the audio file.
- `transcription`: the target text.
## Preprocess
The next step is to load a Wav2Vec2 processor to process the audio signal:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```
The MInDS-14 dataset has a sampling rate of 8kHz (you can find this information in its [dataset card](https://huggingface.co/datasets/PolyAI/minds14)), which means you'll need to resample the dataset to 16kHz to use the pretrained Wav2Vec2 model:
```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'sampling_rate': 16000},
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```
As you can see in the `transcription` above, the text contains a mix of upper and lowercase characters. The Wav2Vec2 tokenizer is only trained on uppercase characters so you'll need to make sure the text matches the tokenizer's vocabulary:
```py
>>> def uppercase(example):
... return {"transcription": example["transcription"].upper()}
>>> minds = minds.map(uppercase)
```
Now create a preprocessing function that:
1. Calls the `audio` column to load and resample the audio file.
2. Extracts the `input_values` from the audio file and tokenizes the `transcription` column with the processor.
```py
>>> def prepare_dataset(batch):
... audio = batch["audio"]
... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"])
... batch["input_length"] = len(batch["input_values"][0])
... return batch
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up `map` by increasing the number of processes with the `num_proc` parameter. Remove the columns you don't need with the [`~datasets.Dataset.remove_columns`] method:
```py
>>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4)
```
🤗 Transformers doesn't have a data collator for ASR, so you'll need to adapt the [`DataCollatorWithPadding`] to create a batch of examples. It'll also dynamically pad your text and labels to the length of the longest element in its batch (instead of the entire dataset) so they are a uniform length. While it is possible to pad your text in the `tokenizer` function by setting `padding=True`, dynamic padding is more efficient.
Unlike other data collators, this specific data collator needs to apply a different padding method to `input_values` and `labels`:
```py
>>> import torch
>>> from dataclasses import dataclass, field
>>> from typing import Any, Dict, List, Optional, Union
>>> @dataclass
... class DataCollatorCTCWithPadding:
... processor: AutoProcessor
... padding: Union[bool, str] = "longest"
... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
... # split inputs and labels since they have to be of different lengths and need
... # different padding methods
... input_features = [{"input_values": feature["input_values"][0]} for feature in features]
... label_features = [{"input_ids": feature["labels"]} for feature in features]
... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")
... # replace padding with -100 to ignore loss correctly
... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
... batch["labels"] = labels
... return batch
```
Now instantiate your `DataCollatorCTCWithPadding`:
```py
>>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
```
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [word error rate](https://huggingface.co/spaces/evaluate-metric/wer) (WER) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> wer = evaluate.load("wer")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the WER:
```py
>>> import numpy as np
>>> def compute_metrics(pred):
... pred_logits = pred.predictions
... pred_ids = np.argmax(pred_logits, axis=-1)
... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
... pred_str = processor.batch_decode(pred_ids)
... label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
... wer_score = wer.compute(predictions=pred_str, references=label_str)
... return {"wer": wer_score}
```
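Under the hood, WER is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. The `evaluate` metric handles this for you; the minimal implementation below is only an illustration of what it computes:

```python
def word_error_rate(reference: str, prediction: str) -> float:
    """Word-level Levenshtein distance divided by the number of reference words."""
    ref, hyp = reference.split(), prediction.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j predicted words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,               # deletion
                dp[i][j - 1] + 1,               # insertion
                dp[i - 1][j - 1] + substitution  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution ("SAT" -> "SIT") and one deletion ("THE") over 6 reference words
print(word_error_rate("THE CAT SAT ON THE MAT", "THE CAT SIT ON MAT"))  # 2/6 ≈ 0.33
```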
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load Wav2Vec2 with [`AutoModelForCTC`]. Specify the reduction to apply with the `ctc_loss_reduction` parameter. It is often better to use the average instead of the default summation:
```py
>>> from transformers import AutoModelForCTC, TrainingArguments, Trainer
>>> model = AutoModelForCTC.from_pretrained(
... "facebook/wav2vec2-base",
... ctc_loss_reduction="mean",
... pad_token_id=processor.tokenizer.pad_token_id,
... )
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the WER and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_asr_mind_model",
... per_device_train_batch_size=8,
... gradient_accumulation_steps=2,
... learning_rate=1e-5,
... warmup_steps=500,
... max_steps=2000,
... gradient_checkpointing=True,
... fp16=True,
... group_by_length=True,
... eval_strategy="steps",
... per_device_eval_batch_size=8,
... save_steps=1000,
... eval_steps=1000,
... logging_steps=25,
... load_best_model_at_end=True,
... metric_for_best_model="wer",
... greater_is_better=False,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=encoded_minds["train"],
... eval_dataset=encoded_minds["test"],
... tokenizer=processor,
... data_collator=data_collator,
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog [post](https://huggingface.co/blog/fine-tune-wav2vec2-english) for English ASR and this [post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for multilingual ASR.
</Tip>
## Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!
```py
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```
The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for automatic speech recognition with your model, and pass your audio file to it:
```py
>>> from transformers import pipeline
>>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_mind_model")
>>> transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
```
<Tip>
The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!
</Tip>
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Load a processor to preprocess the audio file and transcription and return the inputs as PyTorch tensors:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
```
Pass your inputs to the model and return the logits:
```py
>>> import torch
>>> from transformers import AutoModelForCTC
>>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
>>> with torch.no_grad():
... logits = model(**inputs).logits
```
Get the predicted `input_ids` with the highest probability, and use the processor to decode the predicted `input_ids` back into text:
```py
>>> import torch
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
```
</pt>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.
-->
# LLM prompting guide
[[open-in-colab]]
Large Language Models such as Falcon, LLaMA, etc. are pretrained transformer models initially trained to predict the
next token given some input text. They typically have billions of parameters and have been trained on trillions of
tokens for an extended period of time. As a result, these models become quite powerful and versatile, and you can use
them to solve multiple NLP tasks out of the box by instructing the models with natural language prompts.
Designing such prompts to ensure the optimal output is often called "prompt engineering". Prompt engineering is an
iterative process that requires a fair amount of experimentation. Natural languages are much more flexible and expressive
than programming languages, however, they can also introduce some ambiguity. At the same time, prompts in natural language
are quite sensitive to changes. Even minor modifications in prompts can lead to wildly different outputs.
While there is no exact recipe for creating prompts to match all cases, researchers have worked out a number of best
practices that help to achieve optimal results more consistently.
This guide covers the prompt engineering best practices to help you craft better LLM prompts and solve various NLP tasks.
You'll learn:
- [Basics of prompting](#basics-of-prompting)
- [Best practices of LLM prompting](#best-practices-of-llm-prompting)
- [Advanced prompting techniques: few-shot prompting and chain-of-thought](#advanced-prompting-techniques)
- [When to fine-tune instead of prompting](#prompting-vs-fine-tuning)
<Tip>
Prompt engineering is only a part of the LLM output optimization process. Another essential component is choosing the
optimal text generation strategy. You can customize how your LLM selects each of the subsequent tokens when generating
the text without modifying any of the trainable parameters. By tweaking the text generation parameters, you can reduce
repetition in the generated text and make it more coherent and human-sounding.
Text generation strategies and parameters are out of scope for this guide, but you can learn more about these topics in
the following guides:
* [Generation with LLMs](../llm_tutorial)
* [Text generation strategies](../generation_strategies)
</Tip>
## Basics of prompting
### Types of models
The majority of modern LLMs are decoder-only transformers. Some examples include: [LLaMA](../model_doc/llama),
[Llama2](../model_doc/llama2), [Falcon](../model_doc/falcon), [GPT2](../model_doc/gpt2). However, you may encounter
encoder-decoder transformer LLMs as well, for instance, [Flan-T5](../model_doc/flan-t5) and [BART](../model_doc/bart).
Encoder-decoder-style models are typically used in generative tasks where the output **heavily** relies on the input, for
example, in translation and summarization. The decoder-only models are used for all other types of generative tasks.
When using a pipeline to generate text with an LLM, it's important to know what type of LLM you are using, because
they use different pipelines.
Run inference with decoder-only models with the `text-generation` pipeline:
```python
>>> from transformers import pipeline
>>> import torch
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> generator = pipeline('text-generation', model = 'openai-community/gpt2')
>>> prompt = "Hello, I'm a language model"
>>> generator(prompt, max_length = 30)
[{'generated_text': "Hello, I'm a language model programmer so you can use some of my stuff. But you also need some sort of a C program to run."}]
```
To run inference with an encoder-decoder, use the `text2text-generation` pipeline:
```python
>>> text2text_generator = pipeline("text2text-generation", model = 'google/flan-t5-base')
>>> prompt = "Translate from English to French: I'm very happy to see you"
>>> text2text_generator(prompt)
[{'generated_text': 'Je suis très heureuse de vous rencontrer.'}]
```
### Base vs instruct/chat models
Most of the recent LLM checkpoints available on the 🤗 Hub come in two versions: base and instruct (or chat). For example,
[`tiiuae/falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) and [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct).
Base models are excellent at completing the text when given an initial prompt, however, they are not ideal for NLP tasks
where they need to follow instructions, or for conversational use. This is where the instruct (chat) versions come in.
These checkpoints are the result of further fine-tuning of the pre-trained base versions on instructions and conversational data.
This additional fine-tuning makes them a better choice for many NLP tasks.
Let's illustrate some simple prompts that you can use with [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct)
to solve some common NLP tasks.
### NLP tasks
First, let's set up the environment:
```bash
pip install -q transformers accelerate
```
Next, let's load the model with the appropriate pipeline (`"text-generation"`):
```python
>>> from transformers import pipeline, AutoTokenizer
>>> import torch
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> model = "tiiuae/falcon-7b-instruct"
>>> tokenizer = AutoTokenizer.from_pretrained(model)
>>> pipe = pipeline(
... "text-generation",
... model=model,
... tokenizer=tokenizer,
... torch_dtype=torch.bfloat16,
... device_map="auto",
... )
```
<Tip>
Note that Falcon models were trained using the `bfloat16` datatype, so we recommend you use the same. This requires a recent
version of CUDA and works best on modern cards.
</Tip>
Now that we have the model loaded via the pipeline, let's explore how you can use prompts to solve NLP tasks.
#### Text classification
One of the most common forms of text classification is sentiment analysis, which assigns a label like "positive", "negative",
or "neutral" to a sequence of text. Let's write a prompt that instructs the model to classify a given text (a movie review).
We'll start by giving the instruction, and then specifying the text to classify. Note that instead of leaving it at that, we're
also adding the beginning of the response - `"Sentiment: "`:
```python
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> prompt = """Classify the text into neutral, negative or positive.
... Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
... Sentiment:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Classify the text into neutral, negative or positive.
Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
Sentiment:
Positive
```
As a result, the output contains a classification label from the list we have provided in the instructions, and it is a correct one!
<Tip>
You may notice that in addition to the prompt, we pass a `max_new_tokens` parameter. It controls the number of tokens the
model will generate, and it is one of the many text generation parameters that you can learn about
in the [Text generation strategies](../generation_strategies) guide.
</Tip>
#### Named Entity Recognition
Named Entity Recognition (NER) is a task of finding named entities in a piece of text, such as a person, location, or organization.
Let's modify the instructions in the prompt to make the LLM perform this task. Here, let's also set `return_full_text = False`
so that the output doesn't contain the prompt:
```python
>>> torch.manual_seed(1) # doctest: +IGNORE_RESULT
>>> prompt = """Return a list of named entities in the text.
... Text: The Golden State Warriors are an American professional basketball team based in San Francisco.
... Named entities:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=15,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
- Golden State Warriors
- San Francisco
```
As you can see, the model correctly identified two named entities from the given text.
#### Translation
Another task LLMs can perform is translation. You can choose to use encoder-decoder models for this task, however, here,
for the simplicity of the examples, we'll keep using Falcon-7b-instruct, which does a decent job. Once again, here's how
you can write a basic prompt to instruct a model to translate a piece of text from English to Italian:
```python
>>> torch.manual_seed(2) # doctest: +IGNORE_RESULT
>>> prompt = """Translate the English text to Italian.
... Text: Sometimes, I've believed as many as six impossible things before breakfast.
... Translation:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=20,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
A volte, ho creduto a sei impossibili cose prima di colazione.
```
Here we've added `do_sample=True` and `top_k=10` to allow the model to be a bit more flexible when generating output.
#### Text summarization
Similar to the translation, text summarization is another generative task where the output **heavily** relies on the input,
and encoder-decoder models can be a better choice. However, decoder-style models can be used for this task as well.
Previously, we have placed the instructions at the very beginning of the prompt. However, the very end of the prompt can
also be a suitable location for instructions. Typically, it's better to place the instruction on one of the extreme ends.
```python
>>> torch.manual_seed(3) # doctest: +IGNORE_RESULT
>>> prompt = """Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional ecological knowledge of indigenous cultures combined with modern scientific understanding and technological innovations. Permaculture design provides a framework helping individuals and communities develop innovative, creative and effective strategies for meeting basic needs while preparing for and mitigating the projected impacts of climate change.
... Write a summary of the above text.
... Summary:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=30,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"{seq['generated_text']}")
Permaculture is an ecological design mimicking natural ecosystems to meet basic needs and prepare for climate change. It is based on traditional knowledge and scientific understanding.
```
#### Question answering
For the question answering task, we can structure the prompt into the following logical components: instructions, context, question, and
a leading word or phrase (`"Answer:"`) to nudge the model to start generating the answer:
```python
>>> torch.manual_seed(4) # doctest: +IGNORE_RESULT
>>> prompt = """Answer the question using the context below.
... Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. Most gazpacho includes stale bread, tomato, cucumbers, onion, bell peppers, garlic, olive oil, wine vinegar, water, and salt. Northern recipes often include cumin and/or pimentón (smoked sweet paprika). Traditionally, gazpacho was made by pounding the vegetables in a mortar with a pestle; this more laborious method is still sometimes used as it helps keep the gazpacho cool and avoids the foam and silky consistency of smoothie versions made in blenders or food processors.
... Question: What modern tool is used to make gazpacho?
... Answer:
... """
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Modern tools often used to make gazpacho include
```
#### Reasoning
Reasoning is one of the most difficult tasks for LLMs, and achieving good results often requires applying advanced prompting techniques, like
[Chain-of-thought](#chain-of-thought).
Let's see if we can make a model reason about a simple arithmetic task with a basic prompt:
```python
>>> torch.manual_seed(5) # doctest: +IGNORE_RESULT
>>> prompt = """There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class?"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=30,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result:
There are a total of 5 groups, so there are 5 x 4=20 students in the class.
```
Correct! Let's increase the complexity a little and see if we can still get away with a basic prompt:
```python
>>> torch.manual_seed(6) # doctest: +IGNORE_RESULT
>>> prompt = """I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=10,
... do_sample=True,
... top_k=10,
... return_full_text = False,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result:
The total number of muffins now is 21
```
This is a wrong answer; it should be 12. In this case, the failure may be due to the prompt being too basic, or to the choice
of model: after all, we picked the smallest version of Falcon. Reasoning is difficult for models of all sizes, but larger
models are likely to perform better.
## Best practices of LLM prompting
In this section of the guide, we have compiled a list of best practices that tend to improve prompt results:
* When choosing the model to work with, the latest and most capable models are likely to perform better.
* Start with a simple and short prompt, and iterate from there.
* Put the instructions at the beginning of the prompt, or at the very end. When working with large context, models apply various optimizations to prevent attention complexity from scaling quadratically. This can make a model more attentive to the beginning or end of a prompt than to the middle.
* Clearly separate instructions from the text they apply to - more on this in the next section.
* Be specific and descriptive about the task and the desired outcome - its format, length, style, language, etc.
* Avoid ambiguous descriptions and instructions.
* Favor instructions that say "what to do" instead of those that say "what not to do".
* "Lead" the output in the right direction by writing the first word (or even begin the first sentence for the model).
* Use advanced techniques like [Few-shot prompting](#few-shot-prompting) and [Chain-of-thought](#chain-of-thought).
* Test your prompts with different models to assess their robustness.
* Version and track the performance of your prompts.
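The "lead the output" tip above can be illustrated with plain string handling. This is only a sketch: the variable names are ours, and the resulting `prompt` would be passed to the text-generation pipeline created earlier in this guide.

```python
# Sketch of "leading" the model: end the prompt with the first words of the
# desired answer so the continuation starts exactly where we want it to.
task = "Translate the following sentence to French: 'I love prompting.'"
lead = "French translation:"
prompt = f"{task}\n{lead}"

print(prompt)
```

Because the prompt ends mid-pattern, the model is steered toward completing the translation rather than, say, restating the task.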
## Advanced prompting techniques
### Few-shot prompting
The basic prompts in the sections above are examples of "zero-shot" prompts, meaning the model has been given
instructions and context, but no examples with solutions. LLMs that have been fine-tuned on instruction datasets generally
perform well on such "zero-shot" tasks. However, you may find that your task has more complexity or nuance, and perhaps
you have some requirements for the output that the model doesn't catch on to just from the instructions. In this case, you can
try the technique called few-shot prompting.
In few-shot prompting, we provide examples in the prompt, giving the model more context to improve performance.
The examples condition the model to generate output that follows the patterns in the examples.
Here's an example:
```python
>>> torch.manual_seed(0) # doctest: +IGNORE_RESULT
>>> prompt = """Text: The first human went into space and orbited the Earth on April 12, 1961.
... Date: 04/12/1961
... Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
... Date:"""
>>> sequences = pipe(
... prompt,
... max_new_tokens=8,
... do_sample=True,
... top_k=10,
... )
>>> for seq in sequences:
... print(f"Result: {seq['generated_text']}")
Result: Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
Date: 09/28/1960
```
In the above code snippet, we used a single example to demonstrate the desired output to the model, so this can be called
"one-shot" prompting. However, depending on the task's complexity, you may need to use more than one example.
Limitations of the few-shot prompting technique:
- While LLMs can pick up on the patterns in the examples, this technique doesn't work well on complex reasoning tasks.
- Few-shot prompting requires creating lengthy prompts. Prompts with a large number of tokens increase computation and latency, and there is also a limit to prompt length.
- Sometimes, when given a number of examples, models can learn patterns that you didn't intend them to learn, e.g. that the third movie review is always negative.
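When the number of examples grows, it helps to assemble the prompt programmatically instead of editing one long string. Below is a small sketch; the helper `build_few_shot_prompt` is our own name, not a library API, and it reuses the Text:/Date: pattern from the example above.

```python
# Assemble a few-shot prompt from (text, date) example pairs, leaving the
# final "Date:" field open for the model to complete.
def build_few_shot_prompt(examples, query):
    lines = [f"Text: {text}\nDate: {date}" for text, date in examples]
    lines.append(f"Text: {query}\nDate:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [
        (
            "The first human went into space and orbited the Earth on April 12, 1961.",
            "04/12/1961",
        )
    ],
    "The first-ever televised presidential debate in the United States took place "
    "on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.",
)
print(prompt)
```

Keeping the examples in a list also makes it easy to check the prompt length before adding one more shot.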
### Chain-of-thought
Chain-of-thought (CoT) prompting is a technique that nudges a model to produce intermediate reasoning steps, thus improving
the results on complex reasoning tasks.
There are two ways of steering a model to produce the reasoning steps:
- few-shot prompting by illustrating examples with detailed answers to questions, showing the model how to work through a problem.
- by instructing the model to reason by adding phrases like "Let's think step by step" or "Take a deep breath and work through the problem step by step."
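The second approach can be sketched in a couple of lines. Note that `make_cot_prompt` is a name we made up for illustration; the resulting string would be passed to the same text-generation pipeline used earlier in this guide.

```python
# Zero-shot chain-of-thought: append a reasoning trigger phrase to the task.
def make_cot_prompt(task: str) -> str:
    return f"{task}\nLet's think step by step."

prompt = make_cot_prompt(
    "I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. "
    "My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"
)
print(prompt)
```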
If we apply the CoT technique to the muffins example from the [reasoning section](#reasoning) and use a larger model,
such as `tiiuae/falcon-180B-chat`, which you can try in [HuggingChat](https://huggingface.co/chat/),
we'll get a significant improvement in the reasoning result:
```text
Let's go through this step-by-step:
1. You start with 15 muffins.
2. You eat 2 muffins, leaving you with 13 muffins.
3. You give 5 muffins to your neighbor, leaving you with 8 muffins.
4. Your partner buys 6 more muffins, bringing the total number of muffins to 14.
5. Your partner eats 2 muffins, leaving you with 12 muffins.
Therefore, you now have 12 muffins.
```
## Prompting vs fine-tuning
You can achieve great results by optimizing your prompts; however, you may still wonder whether fine-tuning a model
would work better for your case. Here are some scenarios where fine-tuning a smaller model may be the preferred option:
- Your domain is wildly different from what LLMs were pre-trained on and extensive prompt optimization did not yield sufficient results.
- You need your model to work well in a low-resource language.
- You need the model to be trained on sensitive data that is under strict regulations.
- You have to use a small model due to cost, privacy, infrastructure or other limitations.
In all of the above examples, you will need to make sure that you either already have or can easily obtain a large enough
domain-specific dataset at a reasonable cost to fine-tune a model. You will also need to have enough time and resources
to fine-tune a model.
If none of the above applies to your case, optimizing prompts can prove to be more beneficial.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Export to TFLite
[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models
on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices.
TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and
power consumption.
A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension.
🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the `exporters.tflite` module.
For the list of supported model architectures, please refer to the [🤗 Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).
To export a model to TFLite, install the required dependencies:
```bash
pip install optimum[exporters-tf]
```
To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model),
or view the help in the command line:
```bash
optimum-cli export tflite --help
```
To export a model's checkpoint from the 🤗 Hub, for example, `google-bert/bert-base-uncased`, run the following command:
```bash
optimum-cli export tflite --model google-bert/bert-base-uncased --sequence_length 128 bert_tflite/
```
You should see the logs indicating progress and showing where the resulting `model.tflite` is saved, like this:
```bash
Validating TFLite model...
-[✓] TFLite model output names match reference model (logits)
- Validating TFLite Model output "logits":
-[✓] (1, 128, 30522) matches (1, 128, 30522)
-[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
The exported model was saved at: bert_tflite
```
The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you
saved both the model's weights and tokenizer files in the same directory (`local_path`). When using the CLI, pass the
`local_path` to the `model` argument instead of the checkpoint name on the 🤗 Hub.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Create a custom architecture
An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads the pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. However, users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This can be particularly useful for anyone interested in studying, training or experimenting with 🤗 Transformers models. In this guide, we dive deeper into creating custom models without an `AutoClass`. You will learn how to:
- Load and customize a model configuration.
- Create a model architecture.
- Create fast and slow tokenizers for text.
- Create a feature extractor for audio or image tasks.
- Create a processor for multimodal tasks.
## Configuration
A [configuration](main_classes/configuration) is a set of a model's specific attributes. Each model configuration has different attributes; for instance, all NLP models have the `hidden_size`, `num_attention_heads`, `num_hidden_layers` and `vocab_size` attributes in common. These attributes specify the number of attention heads or hidden layers to construct a model with.
Take a closer look at [DistilBERT](model_doc/distilbert) and its attributes by accessing [`DistilBertConfig`]:
```py
>>> from transformers import DistilBertConfig
>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
```
[`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All of these attributes are customizable, creating space for experimentation. For example, you can customize a default model to:
- Try a different activation function with the `activation` parameter.
- Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter.
```py
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)
>>> print(my_config)
DistilBertConfig {
"activation": "relu",
"attention_dropout": 0.4,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"transformers_version": "4.16.2",
"vocab_size": 30522
}
```
Pretrained model attributes can be modified with the [`~PretrainedConfig.from_pretrained`] function:
```py
>>> my_config = DistilBertConfig.from_pretrained("distilbert/distilbert-base-uncased", activation="relu", attention_dropout=0.4)
```
Once you are satisfied with your model configuration, you can save it with the [`~PretrainedConfig.save_pretrained`] function. Your configuration is stored as a JSON file in the directory you specify.
```py
>>> my_config.save_pretrained(save_directory="./your_model_save_path")
```
To reuse the configuration file, load it with [`~PretrainedConfig.from_pretrained`]:
```py
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
```
<Tip>
You can also save your configuration file as a dictionary, or even just the difference between your custom configuration attributes and the default configuration attributes. Check out the [configuration](main_classes/configuration) documentation for more details.
</Tip>
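As a short sketch of the tip above: `PretrainedConfig` exposes `to_dict` and `to_diff_dict` helpers (assuming the current API), which return the full configuration and only the values that differ from the defaults, respectively.

```python
# Compare the full configuration dictionary with the diff-only dictionary.
from transformers import DistilBertConfig

config = DistilBertConfig(activation="relu", attention_dropout=0.4)

full = config.to_dict()        # every attribute, defaults included
diff = config.to_diff_dict()   # only what differs from the default config

print(diff["activation"])      # "relu"
print(full["dim"])             # 768, present in the full dictionary
```

Saving the diff keeps the JSON file minimal while remaining loadable alongside the default configuration.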
## Model
The next step is to create a [model](main_classes/models). The model, also loosely referred to as the architecture, defines what each layer is doing and what operations are happening. Attributes like `num_hidden_layers` from the configuration are used to define the architecture. Every model shares the base class [`PreTrainedModel`] and a few common methods, such as resizing input embeddings and pruning self-attention heads. In addition, all models are also subclasses of [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) or [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html), which means they are compatible with their respective framework.
<frameworkcontent>
<pt>
Load your custom configuration attributes into the model:
```py
>>> from transformers import DistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> model = DistilBertModel(my_config)
```
This creates a model with random values instead of pretrained weights, so you won't be able to use it for anything useful until you train it. Training is a costly and time-consuming process; it is generally better to use a pretrained model to obtain better results faster, while consuming only a fraction of the resources a full training run would require.
Create a pretrained model with [`~PreTrainedModel.from_pretrained`]:
```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```
When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace some, or all, of the default model configuration attributes with your own:
```py
>>> model = DistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</pt>
<tf>
Load your custom configuration attributes into the model:
```py
>>> from transformers import TFDistilBertModel
>>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json")
>>> tf_model = TFDistilBertModel(my_config)
```
This creates a model with random values instead of pretrained weights, so you won't be able to use it for anything useful until you train it. Training is a costly and time-consuming process; it is generally better to use a pretrained model to obtain better results faster, while consuming only a fraction of the resources a full training run would require.
Create a pretrained model with [`~TFPreTrainedModel.from_pretrained`]:
```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased")
```
When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by 🤗 Transformers. However, you can still replace some, or all, of the default model configuration attributes with your own:
```py
>>> tf_model = TFDistilBertModel.from_pretrained("distilbert/distilbert-base-uncased", config=my_config)
```
</tf>
</frameworkcontent>
### Model heads
At this point, you have a base DistilBERT model which outputs the *hidden states*. The hidden states are passed as inputs to a model head to produce the final output. 🤗 Transformers provides a different model head for each task, as long as the model supports the task (i.e., you can't use DistilBERT for a sequence-to-sequence task like translation).
<frameworkcontent>
<pt>
For example, [`DistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.
```py
>>> from transformers import DistilBertForSequenceClassification
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```
Easily reuse this checkpoint for another task by switching to a different model head. For a question-answering task, you would use the [`DistilBertForQuestionAnswering`] model head. The question-answering head is similar to the sequence classification head, except it is a linear layer on top of the hidden-states output.
```py
>>> from transformers import DistilBertForQuestionAnswering
>>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
</pt>
<tf>
For example, [`TFDistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs.
```py
>>> from transformers import TFDistilBertForSequenceClassification
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
```
Easily reuse this checkpoint for another task by switching to a different model head. For a question-answering task, you would use the [`TFDistilBertForQuestionAnswering`] model head. The question-answering head is similar to the sequence classification head, except it is a linear layer on top of the hidden-states output.
```py
>>> from transformers import TFDistilBertForQuestionAnswering
>>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
</tf>
</frameworkcontent>
## Tokenizer
The last base class you need before using a model for textual data is a [tokenizer](main_classes/tokenizer), which converts raw text to tensors. There are two types of tokenizers you can use with 🤗 Transformers:
- [`PreTrainedTokenizer`]: a Python implementation of a tokenizer.
- [`PreTrainedTokenizerFast`]: a tokenizer from our Rust-based [🤗 Tokenizers](https://huggingface.co/docs/tokenizers/python/latest/) library. This tokenizer type is significantly faster, especially during batch tokenization, thanks to its Rust implementation. The fast tokenizer also offers additional methods like *offset mapping*, which maps tokens to their original words or characters.
Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens.
<Tip warning={true}>
Not every model supports a fast tokenizer. Take a look at this [table](index#supported-frameworks) to check if a model has fast tokenizer support.
</Tip>
If you trained your own tokenizer, you can create one from your *vocabulary* file:
```py
>>> from transformers import DistilBertTokenizer
>>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left")
```
It is important to remember that the vocabulary from a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense. Create a tokenizer with a pretrained model's vocabulary with the [`DistilBertTokenizer`] class:
```py
>>> from transformers import DistilBertTokenizer
>>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```
Create a fast tokenizer with the [`DistilBertTokenizerFast`] class:
```py
>>> from transformers import DistilBertTokenizerFast
>>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert/distilbert-base-uncased")
```
<Tip>
By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable this behavior by setting the `use_fast=False` parameter in `from_pretrained`.
</Tip>
## Feature extractor
A feature extractor processes audio or image inputs. It inherits from the base [`~feature_extraction_utils.FeatureExtractionMixin`] class, and may also inherit from the [`ImageFeatureExtractionMixin`] class for processing image features or the [`SequenceFeatureExtractor`] class for processing audio inputs.
Depending on whether you are working on an audio or vision task, create a feature extractor associated with the model you're using. For example, create a default [`ViTFeatureExtractor`] if you are using [ViT](model_doc/vit) for image classification:
```py
>>> from transformers import ViTFeatureExtractor
>>> vit_extractor = ViTFeatureExtractor()
>>> print(vit_extractor)
ViTFeatureExtractor {
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "ViTFeatureExtractor",
"image_mean": [
0.5,
0.5,
0.5
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": 2,
"size": 224
}
```
<Tip>
If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default feature extractor parameters.
</Tip>
Modify any of the [`ViTFeatureExtractor`] parameters to create your custom feature extractor:
```py
>>> from transformers import ViTFeatureExtractor
>>> my_vit_extractor = ViTFeatureExtractor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3])
>>> print(my_vit_extractor)
ViTFeatureExtractor {
"do_normalize": false,
"do_resize": true,
"feature_extractor_type": "ViTFeatureExtractor",
"image_mean": [
0.3,
0.3,
0.3
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": "PIL.Image.BOX",
"size": 224
}
```
For audio inputs, you can create a [`Wav2Vec2FeatureExtractor`] and customize the parameters in a similar way:
```py
>>> from transformers import Wav2Vec2FeatureExtractor
>>> w2v2_extractor = Wav2Vec2FeatureExtractor()
>>> print(w2v2_extractor)
Wav2Vec2FeatureExtractor {
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": false,
"sampling_rate": 16000
}
```
## Processor
For models that support multimodal tasks, 🤗 Transformers offers a processor class that conveniently wraps a feature extractor and a tokenizer into a single object. For example, let's use the [`Wav2Vec2Processor`] for an automatic speech recognition (ASR) task. ASR transcribes audio to text, so you will need a feature extractor and a tokenizer.
Create a feature extractor to handle the audio inputs:
```py
>>> from transformers import Wav2Vec2FeatureExtractor
>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```
Create a tokenizer to handle the text inputs:
```py
>>> from transformers import Wav2Vec2CTCTokenizer
>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```
Combine the feature extractor and tokenizer in the [`Wav2Vec2Processor`]:
```py
>>> from transformers import Wav2Vec2Processor
>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```
With two basic classes - configuration and model - and an additional preprocessing class (tokenizer, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes is configurable, allowing you to use the specific attributes you want. You can easily set up a model for training, or modify an existing pretrained model to fine-tune it.
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Checks on a Pull Request
When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure the patch you are adding is not breaking anything existing. Those checks are of four types:
- regular tests
- documentation build
- code and documentation style
- repository consistency
In this document, we will try to explain what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR.
Note that all of these checks require you to have a dev install:
```bash
pip install transformers[dev]
```
or an editable install:
```bash
pip install -e .[dev]
```
inside the Transformers repo.
## Tests
All the jobs that begin with `ci/circleci: run_tests_` run parts of the Transformers test suite. Each of those jobs focuses on a part of the library in a certain environment: for instance, `ci/circleci: run_tests_pipelines_tf` runs the pipelines tests in an environment where only TensorFlow is installed.
Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the "Files changes" tab) and picks the tests impacted by that diff. That utility can be run locally with:
```bash
python utils/tests_fetcher.py
```
from the root of the Transformers repo. It will:
1. Check, for each file in the diff, whether the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept.
2. Build an internal map that gives, for each file of the source code of the library, all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one.
3. Apply this map to the files gathered in step 1, which gives us the list of model files impacted by the PR.
4. Map each of those files to their corresponding test file(s) to get the list of tests to run.
When executing the script locally, you should get the results of steps 1, 3 and 4 printed and thus know which tests are run. The script will also create a file named `test_list.txt` which contains the list of tests to run, and you can run them locally with the following command:
```bash
python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt)
```
In case something slipped through the cracks, the full test suite is also run daily.
## Documentation build
The `build_pr_documentation` job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation in your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on **Details** next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the `toctree`.
If you're interested in building or previewing the documentation locally, take a look at the [`README.md`](https://github.com/huggingface/transformers/tree/main/docs) in the docs folder.
## Code and documentation style
Code formatting is applied to all the source files, the examples and the tests using `black` and `ruff`. We also have a custom tool taking care of the formatting of docstrings and `rst` files (`utils/style_doc.py`), as well as the order of the lazy imports performed in the `__init__.py` files of Transformers (`utils/custom_init_isort.py`). All of this can be launched by executing
```bash
make style
```
The CI checks that these have been applied inside the `ci/circleci: check_code_quality` check. It also runs `ruff`, which will do a basic check of your code and let you know if it finds an undefined variable, or one that is not used. To run that check locally, use
```bash
make quality
```
This can take a lot of time, so to run the same thing only on the files you modified in the current branch, run
```bash
make fixup
```
This last command will also run all the additional checks for repository consistency. Let's have a look at them.
## Repository consistency
This check regroups all the tests to make sure your PR leaves the repository in a good state, and is performed by `ci/circleci: check_repository_consistency`. You can run this check locally by executing the following:
```bash
make repo-consistency
```
This checks that:
- All objects added to the init are documented (performed by `utils/check_repo.py`)
- All `__init__.py` files have the same content in their two sections (performed by `utils/check_inits.py`)
- All code identified as a copy from another module is consistent with the original (performed by `utils/check_copies.py`)
- All configuration classes have at least one valid checkpoint mentioned in their docstrings (performed by `utils/check_config_docstrings.py`)
- The translations of the READMEs and the index of the doc have the same model list as the main README (performed by `utils/check_copies.py`)
- The auto-generated tables in the documentation are up to date (performed by `utils/check_table.py`)
- The library has all objects available even if not all optional dependencies are installed (performed by `utils/check_dummies.py`)
Should this check fail, the first two items require manual fixing; the last four can be fixed automatically by running the command
```bash
make fix-copies
```
Additional checks concern PRs that add new models, mainly that:
- All models added are in an Auto-mapping (performed by `utils/check_repo.py`)
<!-- TODO Sylvain, add a check that makes sure the common tests are implemented.-->
- All models are properly tested (performed by `utils/check_repo.py`)
<!-- TODO Sylvain, add the following
- All models are added to the main README, inside the main doc
- All checkpoints used actually exist on the Hub
-->
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Distributed training with 🤗 Accelerate
Parallelism has emerged as a strategy for training ever larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help you easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, you will learn how to customize your native PyTorch training loop to enable training in a distributed environment.
## Setup
Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```
Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator) object. `Accelerator` will automatically detect your distributed setup and initialize all the components needed for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator
>>> accelerator = Accelerator()
```
## Prepare to accelerate
The next step is to pass all the relevant training objects to the [`prepare`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.prepare) method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader, model, optimizer
... )
```
## Backward
Finally, replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.backward) method:
```py
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... outputs = model(**batch)
... loss = outputs.loss
... accelerator.backward(loss)
... optimizer.step()
... lr_scheduler.step()
... optimizer.zero_grad()
... progress_bar.update(1)
```
As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler
+ accelerator = Accelerator()
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
optimizer = AdamW(model.parameters(), lr=3e-5)
- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)
+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+ train_dataloader, eval_dataloader, model, optimizer
+ )
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
- batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
- loss.backward()
+ accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
```
## Train
Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
### Train with a script
If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```
Then launch your training with:
```bash
accelerate launch train.py
```
### Train with a notebook
🤗 Accelerate can also be used in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:
```py
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
```
For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).
# Efficient Inference on CPU
This guide focuses on inferencing large models efficiently on CPU.
## `BetterTransformer` for faster inference
We have recently integrated `BetterTransformer` for faster inference on CPU for text, image and audio models. Check the documentation about this integration [here](https://huggingface.co/docs/optimum/bettertransformer/overview) for more details.
## PyTorch JIT-mode (TorchScript)
TorchScript is a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process where there is no Python dependency.
Compared with the default eager mode, jit mode in PyTorch normally yields better performance for model inference thanks to optimization methodologies like operator fusion.
For a gentle introduction to TorchScript, see the [PyTorch TorchScript tutorial](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules).
### IPEX Graph Optimization with JIT-mode
Intel® Extension for PyTorch provides further optimizations in jit mode for Transformers series models. We highly recommend users take advantage of Intel® Extension for PyTorch with jit mode. Some frequently used operator patterns from Transformers models are already supported in Intel® Extension for PyTorch with jit mode fusions. Those fusion patterns, such as Multi-head-attention fusion, Concat Linear, Linear+Add, Linear+Gelu and Add+LayerNorm fusion, are enabled and perform well. The benefit of the fusion is delivered to users in a transparent fashion. According to our analysis, ~70% of the most popular NLP tasks in question-answering, text-classification and token-classification can benefit from these fusion patterns for both Float32 precision and BFloat16 mixed precision.
See more information on [IPEX Graph Optimization](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html).
#### Installing IPEX
IPEX releases follow PyTorch; check the various approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/).
### Using JIT-mode
To enable JIT-mode in the Trainer for evaluation or prediction, you need to add `jit_mode_eval` to the Trainer arguments.
<Tip warning={true}>
For PyTorch >= 1.14.0, JIT-mode can benefit any model for prediction and evaluation, since dict inputs are supported in jit.trace.
For PyTorch < 1.14.0, JIT-mode can benefit models whose forward parameter order matches the tuple input order in jit.trace, such as question-answering models.
In case the forward parameter order does not match the tuple input order in jit.trace, like text-classification models, jit.trace will fail, and we capture this with an exception to make it fall back. Logging is used to notify users.
</Tip>
You can find an example use case in [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)
- Inference using jit mode on CPU:
<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--jit_mode_eval </b></pre>
- Inference with IPEX using jit mode on CPU:
<pre>python run_qa.py \
--model_name_or_path csarron/bert-base-uncased-squad-v1 \
--dataset_name squad \
--do_eval \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/ \
--no_cuda \
<b>--use_ipex \</b>
<b>--jit_mode_eval</b></pre>
# DeepSpeed Integration
[DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for:
1. Optimizer state partitioning (ZeRO stage 1)
2. Gradient partitioning (ZeRO stage 2)
3. Parameter partitioning (ZeRO stage 3)
4. Custom mixed precision training handling
5. A range of fast CUDA-extension-based optimizers
6. ZeRO-Offload to CPU and NVMe
ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe support is described in the paper [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857).
DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference.
DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded onto multiple GPUs, which wouldn't be possible on a single GPU.
🤗 Transformers integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via two options:
1. Integration of the core DeepSpeed features via [`Trainer`]. This is an everything-done-for-you type of integration - just supply your custom config file or use our template and there is nothing else you have to do. Most of this document is focused on this feature.
2. If you don't use [`Trainer`] and want to use your own trainer with DeepSpeed integrated into it, core functions like `from_pretrained` and `from_config` include integration of essential parts of DeepSpeed, such as `zero.Init` for ZeRO stage 3 and higher. To tap into this feature, read the docs on [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration).
What is integrated:
Training:
1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVMe offload).
Inference:
1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn't use an optimizer or an lr scheduler, and only stage 3 is relevant. For more details see:
[ZeRO Inference](#zero-inference).
There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon).
<a id='deepspeed-trainer-integration'></a>
## Trainer Deepspeed Integration
<a id='deepspeed-installation'></a>
### Installation
Install the library via pypi:
```bash
pip install deepspeed
```
or via `transformers`' `extras`:
```bash
pip install transformers[deepspeed]
```
or find more details on [the DeepSpeed's GitHub page](https://github.com/microsoft/deepspeed#installation) and
[advanced install](https://www.deepspeed.ai/tutorials/advanced-install/).
If you're still struggling with the build, first make sure to read [CUDA Extension Installation Notes](trainer#cuda-extension-installation-notes).
If you don't prebuild the extensions and rely on them being built at run time, and you have tried all of the above solutions to no avail, the next thing to try is to pre-build the modules before installing them.
To make a local build for DeepSpeed:
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \
--global-option="build_ext" --global-option="-j8" --no-cache -v \
--disable-pip-version-check 2>&1 | tee build.log
```
If you intend to use NVMe offload you will also need to include `DS_BUILD_AIO=1` in the instructions above (and also
install *libaio-dev* system-wide).
Edit `TORCH_CUDA_ARCH_LIST` to insert the code for the architectures of the GPU cards you intend to use. Assuming all
your cards are the same, you can get the arch via:
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())"
```
So if you get `8, 6`, then use `TORCH_CUDA_ARCH_LIST="8.6"`. If you have multiple different cards, you can list all
of them, e.g. `TORCH_CUDA_ARCH_LIST="6.1;8.6"`.
If you need to use the same setup on multiple machines, make a binary wheel:
```bash
git clone https://github.com/microsoft/DeepSpeed/
cd DeepSpeed
rm -rf build
TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \
python setup.py build_ext -j8 bdist_wheel
```
It will generate something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`, which you can now install
locally or on any other machine as `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl`.
Again, remember to adjust `TORCH_CUDA_ARCH_LIST` to the target architectures.
You can find the complete list of NVIDIA GPUs and their corresponding **Compute Capabilities** (same as arch in this
context) [here](https://developer.nvidia.com/cuda-gpus).
You can check the archs pytorch was built with using:
```bash
python -c "import torch; print(torch.cuda.get_arch_list())"
```
And here is how to find out the arch for one of the installed GPUs. For example, for GPU 0:
```bash
CUDA_VISIBLE_DEVICES=0 python -c "import torch; \
print(torch.cuda.get_device_properties(torch.device('cuda')))"
```
If the output is:
```bash
_CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82)
```
then you know that this card's arch is `8.6`.
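To script this lookup, the `(major, minor)` capability tuples can be turned into the string `TORCH_CUDA_ARCH_LIST` expects. This is a small illustrative helper; the tuples in the example are made up, not queried from real hardware:

```python
def arch_list(capabilities):
    """Turn (major, minor) compute-capability tuples, as returned by
    torch.cuda.get_device_capability(), into the semicolon-separated
    string expected by TORCH_CUDA_ARCH_LIST (de-duplicated and sorted)."""
    return ";".join(sorted({f"{major}.{minor}" for major, minor in capabilities}))


# Two different cards, one of them present twice:
print(arch_list([(8, 6), (6, 1), (8, 6)]))  # -> 6.1;8.6
```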
You can also leave `TORCH_CUDA_ARCH_LIST` out completely, and then the build program will automatically query the
architecture of the GPUs the build is performed on. This may or may not match the GPUs on the target machines, which is
why it's best to specify the desired archs explicitly.
If after trying everything suggested you still encounter build issues, please proceed with the GitHub issues of
[Deepspeed](https://github.com/microsoft/DeepSpeed/issues).
<a id='deepspeed-multi-gpu'></a>
### Deployment with multiple GPUs
To deploy the DeepSpeed integration, adjust the [`Trainer`] command line arguments to include a new argument `--deepspeed ds_config.json`, where `ds_config.json` is the DeepSpeed configuration file as documented
[here](https://www.deepspeed.ai/docs/config-json/). The file naming is up to you.
It's recommended to use DeepSpeed's `add_config_arguments` utility to add the necessary command line arguments to your code.
For more information please see the [DeepSpeed argument parsing](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) documentation.
You can use a launcher of your choice here. You can continue using the pytorch launcher:
```bash
torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
or use the launcher provided by `deepspeed`:
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json
```
As you can see, the arguments aren't the same, but for most needs either of them works. The full details on how to
configure various nodes and GPUs can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).
When you use the `deepspeed` launcher and you want to use all available GPUs, you can just omit the `--num_gpus` flag.
Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs:
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
Note that in the DeepSpeed documentation you are likely to see `--deepspeed --deepspeed_config ds_config.json` - i.e.
two DeepSpeed-related arguments - but for the sake of simplicity, and since there are already so many arguments to deal
with, we combined the two into a single argument.
For some practical usage examples, please see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).
<a id='deepspeed-one-gpu'></a>
### Deployment with one GPU
To deploy DeepSpeed with one GPU, adjust the [`Trainer`] command line arguments as follows:
```bash
deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path google-t5/t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
This is almost the same as with multiple GPUs, but here we tell DeepSpeed explicitly to use just one GPU via
`--num_gpus=1`. By default, DeepSpeed deploys all the GPUs it can see on the given node. If you have only one GPU to
start with, then you don't need this argument. The following [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) discusses the launcher options.
Why would you want to use DeepSpeed with just one GPU?
1. It has a ZeRO-offload feature which can delegate some computations and memory to the host's CPU and RAM, thus
leaving more GPU resources for your model's needs - e.g. a larger batch size, or fitting a very big model which
normally won't fit.
2. It provides a smart GPU memory management system that minimizes memory fragmentation, which again allows you to fit
bigger models and data batches.
While we are going to discuss the configuration in detail next, the key to getting a huge improvement on a single GPU
with DeepSpeed is to have at least the following configuration in the configuration file:
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true
}
}
```
which enables optimizer offload and some other important features. You may want to experiment with the buffer sizes;
you will find more details in the discussion below.
For a practical usage example of this type of deployment, please see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685).
You may also try ZeRO-3 with CPU and NVMe offload, as explained further in this document.
Notes:
- if you need to run on a specific GPU which is different from GPU 0, you can't use `CUDA_VISIBLE_DEVICES` to limit
the visible scope of available GPUs. Instead, you have to use the following syntax:
```bash
deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ...
```
In this example, we tell DeepSpeed to use GPU 1 (the second GPU).
<a id='deepspeed-multi-node'></a>
### Deployment with multiple Nodes
The information in this section isn't specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a `deepspeed` launcher that is easier to use than other launchers, unless you are in a SLURM environment.
For the duration of this section let's assume that you have 2 nodes with 8 GPUs each. You can reach the first node with `ssh hostname1` and the second node with `ssh hostname2`, and both must be able to reach each other via local ssh without a password. Of course, you will need to rename these host (node) names to the actual host names you are working with.
#### The torch.distributed.run launcher
For example, to use `torch.distributed.run`, you could do:
```bash
python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \
--master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json
```
You have to ssh to each node and run this same command on each one of them! There is no rush; the launcher will wait
until both nodes synchronize.
For more information please see [torchrun](https://pytorch.org/docs/stable/elastic/run.html). Incidentally, this is also the launcher that replaced `torch.distributed.launch` a few pytorch versions back.
#### The deepspeed launcher
To use the `deepspeed` launcher instead, you first have to create a `hostfile` file:
```
hostname1 slots=8
hostname2 slots=8
```
and then you can launch it as:
```bash
deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \
your_program.py <normal cl args> --deepspeed ds_config.json
```
Unlike the `torch.distributed.run` launcher, `deepspeed` will automatically launch this command on both nodes!
For more information please see [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).
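The hostfile format above is simple enough to process mechanically. As an illustration, a hypothetical helper (not part of DeepSpeed) that counts the total number of GPU slots a hostfile declares might look like:

```python
def total_slots(hostfile_text):
    """Sum the slots declared in a DeepSpeed-style hostfile, where each
    non-empty, non-comment line has the form '<hostname> slots=<n>'."""
    total = 0
    for line in hostfile_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split off everything up to and including 'slots=' and keep the count.
        _host, _, slots = line.partition("slots=")
        total += int(slots)
    return total


print(total_slots("hostname1 slots=8\nhostname2 slots=8\n"))  # -> 16
```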
#### Launching in a SLURM environment
In the SLURM environment the following approach can be used. Below is a slurm script, `launch.slurm`, which you will
need to adapt to your specific SLURM environment:
```bash
#SBATCH --job-name=test-nodes # name
#SBATCH --nodes=2 # nodes
#SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10 # number of cores per tasks
#SBATCH --gres=gpu:8 # number of gpus
#SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out # output file name
export GPUS_PER_NODE=8
export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
export MASTER_PORT=9901
srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \
--nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \
--master_addr $MASTER_ADDR --master_port $MASTER_PORT \
your_program.py <normal cl args> --deepspeed ds_config.json'
```
All that is left is to schedule it to run:
```bash
sbatch launch.slurm
```
#### Use of Non-shared filesystem
By default, DeepSpeed assumes that a multi-node environment uses shared storage. If this is not the case and each node
can only see the local filesystem, you need to adjust the config file to include a
[`checkpoint` section](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) with the following setting:
```json
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
Alternatively, you can use the [`Trainer`]'s `--save_on_each_node` argument, and the above config will be added automatically for you.
<a id='deepspeed-notebook'></a>
### Deployment in Notebooks
The problem with running notebook cells as a script is that there is no normal `deepspeed` launcher to rely on, so
under certain setups we have to emulate it.
If you're using only one GPU, here is how you'd have to adjust your training code in the notebook to use DeepSpeed:
```python
# DeepSpeed requires a distributed environment even when only one process is used.
# This emulates a launcher in the notebook
import os
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
# Now proceed as normal, plus pass the deepspeed config file
training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json")
trainer = Trainer(...)
trainer.train()
```
Note: `...` stands for the normal arguments that you'd pass to the functions.
If you want to use more than one GPU, you must use a multi-process environment for DeepSpeed to work. That is, you
have to use the launcher for that purpose, and this cannot be accomplished by emulating the distributed environment
presented at the beginning of this section.
If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated
cell with the following content:
```python no-style
%%bash
cat <<'EOT' > ds_config_zero3.json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
EOT
```
If the training script is in a normal file and not in the notebook cells, you can launch `deepspeed` normally from a cell via the shell. For example, to use `run_translation.py` you would launch it with:
```python no-style
!git clone https://github.com/huggingface/transformers
!cd transformers; deepspeed examples/pytorch/translation/run_translation.py ...
```
or with `%%bash` magic, where you can write multi-line code for the shell program to run:
```python no-style
%%bash
git clone https://github.com/huggingface/transformers
cd transformers
deepspeed examples/pytorch/translation/run_translation.py ...
```
In such a case you don't need any of the code presented at the beginning of this section.
Note: while `%%bash` magic is neat, it currently buffers the output, so you won't see the logs until the process completes.
<a id='deepspeed-config'></a>
### Configuration
For the complete guide to the DeepSpeed configuration options that can be used in its configuration file, please refer to the [following documentation](https://www.deepspeed.ai/docs/config-json/).

You can find dozens of DeepSpeed configuration examples that address various practical needs in the [DeepSpeedExamples repository](https://github.com/microsoft/DeepSpeedExamples):
```bash
git clone https://github.com/microsoft/DeepSpeedExamples
cd DeepSpeedExamples
find . -name '*json'
```
Continuing the code from above, let's say you're looking to configure the Lamb optimizer. You can then search through the example `.json` files with:
```bash
grep -i Lamb $(find . -name '*json')
```
Some more examples are to be found in the [main repository](https://github.com/microsoft/DeepSpeed) as well.
When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be set via the command line. You will find the nuances in the rest of this guide.
To get an idea of what the DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer state CPU offload, uses the `AdamW` optimizer and the `WarmupLR` scheduler, and enables mixed precision training if `--fp16` is passed:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
When you execute the program, DeepSpeed logs the configuration it received from the [`Trainer`] to the console, so you can see exactly what final configuration was passed to it.
<a id='deepspeed-config-passing'></a>
### Passing Configuration
As discussed in this document, normally the DeepSpeed configuration is passed as a path to a json file, but if you're not using the command line interface to configure the training, and instead instantiate the [`Trainer`] via [`TrainingArguments`], then for the `deepspeed` argument you can pass a nested `dict`. This allows you to create the configuration on the fly and doesn't require you to write it to the file system before passing it to [`TrainingArguments`].

To summarize, you can do:
```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```
ใพใใฏ๏ผ
```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```
<a id='deepspeed-config-shared'></a>
### Shared Configuration
<Tip warning={true}>

This section is a must-read.

</Tip>
Some configuration values are required by both the [`Trainer`] and DeepSpeed to work correctly. Therefore, to prevent conflicting definitions, which could lead to hard-to-detect errors, we chose to configure those via the [`Trainer`] command line arguments.

Additionally, some configuration values are derived automatically based on the model's configuration, so instead of remembering to manually adjust multiple values, it's best to let the [`Trainer`] do the majority of the configuration for you.

Therefore, in the rest of this guide you will find a special configuration value, `auto`, which when set will be automatically replaced with the correct or most efficient value. Please feel free to ignore this recommendation and set the values explicitly, but in that case be very careful that:
your [`Trainer`] arguments and DeepSpeed configuration agree. For example, are you using the same learning rate, batch size, or gradient accumulation settings? If these mismatch, the training may fail in very difficult-to-detect ways. You have been warned.
There are multiple other values that are specific to DeepSpeed only, and those you will have to set manually to suit your needs.
In your own programs, you can also use the following approach if you'd like to treat the DeepSpeed config as the master and configure [`TrainingArguments`] based on it. The steps are:

1. Create or load the DeepSpeed configuration to be used as the master configuration
2. Create the [`TrainingArguments`] object based on these values

Do note that some values, such as `scheduler.params.total_num_steps`, are calculated by the [`Trainer`] during `train`, but you can of course do the math yourself.
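A minimal sketch of this "config as master" approach, using a plain dict whose key names mirror a real DeepSpeed config (the concrete values below are illustrative assumptions):

```python
# Load/define the master DeepSpeed config, then derive the matching
# Trainer arguments from it so both sides always agree.
ds_master = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 4,
    "optimizer": {"params": {"lr": 3e-5}},
}

# Keyword arguments you would then pass on to TrainingArguments(...):
trainer_kwargs = {
    "per_device_train_batch_size": ds_master["train_micro_batch_size_per_gpu"],
    "gradient_accumulation_steps": ds_master["gradient_accumulation_steps"],
    "learning_rate": ds_master["optimizer"]["params"]["lr"],
    "deepspeed": ds_master,  # pass the very same dict so nothing can drift
}
print(trainer_kwargs["learning_rate"])  # 3e-05
```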
<a id='deepspeed-zero'></a>
### ZeRO
[Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) is the workhorse of DeepSpeed. It supports 3 different levels (stages) of optimization. The first one is not quite interesting for scalability purposes, therefore this document focuses on stages 2 and 3. Stage 3 is further improved by the latest addition of ZeRO-Infinity. You will find more in-depth information in the DeepSpeed documentation.
The `zero_optimization` section of the configuration file is the most important part ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training)), since that is where you define which ZeRO stages you want to enable and how to configure them. You will find the explanation for each parameter in the DeepSpeed docs.

This section has to be configured exclusively via the DeepSpeed configuration - the [`Trainer`] provides no equivalent command line arguments.
Note: currently DeepSpeed doesn't validate parameter names, so if you misspell any, it'll use the default setting for the parameter that got misspelled. You can watch the DeepSpeed engine startup log messages to see what values it is going to use.
<a id='deepspeed-zero2-config'></a>
#### ZeRO-2 Config
The following is an example configuration for ZeRO stage 2:
```json
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
}
}
```
**Performance tuning:**
- enabling `offload_optimizer` should reduce GPU RAM usage (it requires `"stage": 2`)
- `"overlap_comm": true` trades increased GPU RAM usage for lower all-reduce latency. `overlap_comm` uses 4.5x the `allgather_bucket_size` and `reduce_bucket_size` values. So if they are set to `5e8`, this requires a 9GB footprint (`5e8 x 2Bytes x 2 x 4.5`). Therefore, if you have a GPU with 8GB or less RAM, to avoid OOM errors you will need to reduce those parameters to about `2e8`, which would require 3.6GB. You will want to do the same on a larger-capacity GPU as well if you're starting to hit OOM.
- when reducing these buffers you're trading communication speed to free up more GPU RAM. The smaller the buffer size, the slower the communication gets, and the more GPU RAM is available to other tasks. So if a bigger batch size is important, getting a slightly slower training time could be a good trade.
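The buffer arithmetic above can be sketched as a quick back-of-the-envelope estimate (not a DeepSpeed API, just the formula from the bullet):

```python
# Approximate GPU RAM footprint (GB) of the ZeRO-2 communication buffers when
# overlap_comm is enabled: bucket_size x 2 bytes (fp16) x 2 buffers x 4.5.
def comm_buffer_gb(bucket_size: float) -> float:
    return bucket_size * 2 * 2 * 4.5 / 1e9

print(round(comm_buffer_gb(5e8), 1))  # 9.0 -> too large for an 8GB GPU
print(round(comm_buffer_gb(2e8), 1))  # 3.6
```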
Additionally, `deepspeed==0.4.4` added a new option, `round_robin_gradients`, which you can enable with:
```json
{
"zero_optimization": {
"round_robin_gradients": true
}
}
```
This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. The performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism).
<a id='deepspeed-zero3-config'></a>
#### ZeRO-3 Config
The following is an example configuration for ZeRO stage 3:
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
If you're getting OOM because your model or activations don't fit into GPU memory and you have unutilized CPU memory, offloading the optimizer states and parameters to CPU memory with `"device": "cpu"` may solve this limitation. If you don't want to offload to CPU memory, use `none` instead of `cpu` for the `device` entry. Offloading to NVMe is discussed further down.

Pinned memory is enabled with `pin_memory` set to `true`. This feature can improve throughput at the cost of making less memory available to other processes. Pinned memory is set aside for the specific process that requested it, and is typically accessed much faster than normal CPU memory.
**Performance tuning:**
- `stage3_max_live_parameters`: `1e9`
- `stage3_max_reuse_distance`: `1e9`
If hitting OOM, reduce `stage3_max_live_parameters` and `stage3_max_reuse_distance`. They should have minimal impact on performance unless you are doing activation checkpointing. `1e9` would consume ~2GB. The memory is shared by `stage3_max_live_parameters` and `stage3_max_reuse_distance`, so it's not additive; it's just 2GB in total.

`stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. "Reuse distance" is a metric used to figure out when a parameter will be used again in the future, and `stage3_max_reuse_distance` decides whether to throw the parameter away or keep it. If a parameter is going to be used again in the near future (in less than `stage3_max_reuse_distance`), it is kept to reduce communication overhead. This is super helpful when you have activation checkpointing enabled, where a forward recompute and backward pass happen at single-layer granularity and you want to keep the parameter on the GPU from the forward recompute until the backward pass.
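The ~2GB figure quoted above follows from holding those parameter units in half precision; a quick sanity check of the arithmetic:

```python
# Rough memory used by the shared stage3_max_live_parameters /
# stage3_max_reuse_distance budget: each parameter unit is held in
# half precision (2 bytes), and the two knobs draw from one pool.
def live_param_gb(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 1e9

print(live_param_gb(1e9))  # 2.0
```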
The following configuration values depend on the model's hidden size:
- `reduce_bucket_size`: `hidden_size*hidden_size`
- `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
- `stage3_param_persistence_threshold`: `10 * hidden_size`
Therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended values. But, of course, feel free to set these explicitly as well.
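What the `auto` substitution amounts to can be sketched as follows, using a hypothetical `hidden_size` of 1024 (the actual substitution is performed by the [`Trainer`] from the model's config):

```python
# The hidden_size-derived defaults listed above, computed for a
# hypothetical model with hidden_size = 1024.
hidden_size = 1024
auto_values = {
    "reduce_bucket_size": hidden_size * hidden_size,
    "stage3_prefetch_bucket_size": int(0.9 * hidden_size * hidden_size),
    "stage3_param_persistence_threshold": 10 * hidden_size,
}
print(auto_values["reduce_bucket_size"])               # 1048576
print(auto_values["stage3_param_persistence_threshold"])  # 10240
```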
`stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when the model gets saved. With large models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if you plan to resume the training. Watch out for future updates that will remove this limitation and make things more flexible.

If you're migrating from a ZeRO-2 configuration, note that the `allgather_partitions`, `allgather_bucket_size` and `reduce_scatter` configuration parameters are not used in ZeRO-3. If you keep these in the config file they will just be ignored.
- `sub_group_size`: `1e9`
`sub_group_size` controls the granularity in which parameters are updated during optimizer steps. Parameters are grouped into buckets of `sub_group_size`, and each bucket is updated one at a time. When used with NVMe offload in ZeRO-Infinity, `sub_group_size` therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models.

You can leave `sub_group_size` at its default value of *1e9* when not using NVMe offload. You may want to change the default value in the following cases:

1. Running into OOM during the optimizer step: reduce `sub_group_size` to lower the memory utilization of temporary buffers
2. The optimizer step is taking a long time: increase `sub_group_size` to improve bandwidth utilization as a result of the increased data buffers.
#### ZeRO-0 Config
Note that stages 0 and 1 are listed last since they are rarely used.

Stage 0 disables all types of sharding and just uses DeepSpeed as DDP. You can turn it on with:
```json
{
"zero_optimization": {
"stage": 0
}
}
```
This will essentially disable ZeRO, without you needing to change anything else.
#### ZeRO-1 Config
Stage 1 is stage 2 minus gradient sharding. You can always try it to speed things up a tiny bit, sharding only the optimizer states:
```json
{
"zero_optimization": {
"stage": 1
}
}
```
<a id='deepspeed-nvme'></a>
### NVMe Support
ZeRO-Infinity allows training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms, each GPU needs to send and receive only very small amounts of data during offloading, so modern NVMe proved to be fit for making the total memory pool available to your training process even larger. ZeRO-Infinity requires ZeRO-3 to be enabled.

The following configuration example enables NVMe to offload both optimizer states and params:
```json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 4,
"fast_init": false
},
"offload_param": {
"device": "nvme",
"nvme_path": "/local_nvme",
"pin_memory": true,
"buffer_count": 5,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": false,
"overlap_events": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
You can choose to offload both optimizer states and params to NVMe, just one of them, or neither. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only, as it'd be faster (hint: *"device": "cpu"*).
Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).
Make sure that your `nvme_path` is actually an NVMe drive. It will work with a normal hard drive or SSD, but it'll be much, much slower. Fast, scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can get ~3.5 GB/s read and ~3 GB/s write peak speeds).
In order to figure out the optimal `aio` configuration block, you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998).
<a id='deepspeed-zero2-zero3-performance'></a>
#### ZeRO-2 vs ZeRO-3 Performance
ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same, because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs, you may choose to stick with it. It's important to understand that ZeRO-3 enables a much higher scalability capacity, at a cost of speed.
It's possible to adjust the ZeRO-3 configuration to make it perform closer to ZeRO-2:

- set `stage3_param_persistence_threshold` to a very large number - larger than the largest parameter, e.g. `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs.
- turn off `offload_params`, since ZeRO-2 doesn't have that option.

The performance will likely improve significantly with just `offload_params` turned off, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you can train. So these help you trade scalability for speed, depending on your needs.
<a id='deepspeed-zero2-example'></a>
#### ZeRO-2 Example
Here is a full ZeRO-2 auto-configuration file, `ds_config_zero2.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Here is a full ZeRO-2 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the config with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
<a id='deepspeed-zero3-example'></a>
#### ZeRO-3 Example
Here is a full ZeRO-3 auto-configuration file, `ds_config_zero3.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Here is a full ZeRO-3 all-enabled, manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the config with multiple `auto` settings in it.
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
#### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance
So now you know there are all these different stages. How do you decide which of them to use? This section attempts to answer this question.

In general, the following applies:
- Speed-wise (left is faster than right)

Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads

- GPU memory usage-wise (right is more GPU-memory-efficient than left)

Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads
So when you want to get the fastest execution while fitting into the smallest number of GPUs, you can follow this process. Start with the fastest approach, and if you run into a GPU OOM, step down to the next slower approach, which will use less GPU memory. And so on and so forth.

First of all, set the batch size to 1 (you can always use gradient accumulation for any desired effective batch size).
1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then
2. Try ZeRO stage 2 first - if OOM then
3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then
4. Switch to ZeRO stage 3 - if OOM then
5. Enable `offload_param` to `cpu` - if OOM then
6. Enable `offload_optimizer` to `cpu` - if OOM then
7. If you still can't fit a batch size of 1, first check various default values and lower them if you can. For example, if you use `generate` and don't use a wide search beam, make it narrower, as it'd consume a lot of memory.
8. Definitely use mixed half precision over fp32 - so bf16 on Ampere and higher GPUs, and fp16 on older GPU architectures.
9. If you are still OOM, you could add more hardware or enable ZeRO-Infinity - that is, switch the offloads `offload_param` and `offload_optimizer` to `nvme`. You need to make sure it's a very fast NVMe. As an anecdote, I was able to run inference with BLOOM-176B on a tiny GPU using ZeRO-Infinity, except it was extremely slow. But it worked!
You can, of course, work through these steps in reverse by starting with the most GPU-memory-efficient config and then going backwards, or try bisecting.

Once you have a batch size of 1 that doesn't lead to OOM, measure your effective throughput.
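A minimal sketch of such a measurement follows; `train_step` is a stand-in for your real training-step function, and the formula is simply processed samples divided by elapsed time:

```python
import time

# Measure effective throughput (samples/second) over a few steps.
def measure_throughput(train_step, batch_size, world_size, n_steps=10):
    start = time.perf_counter()
    for _ in range(n_steps):
        train_step()
    elapsed = time.perf_counter() - start
    return n_steps * batch_size * world_size / elapsed

# Here a tiny sleep stands in for a real forward/backward/step.
throughput = measure_throughput(lambda: time.sleep(0.001),
                                batch_size=4, world_size=2)
print(throughput > 0)  # True
```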
Next, try to increase the batch size to as large as you can, since the higher the batch size, the more efficient the GPUs are, as they perform best when the matrices they multiply are huge.
Now the performance optimization game starts. You can turn off some offload features, or step down in ZeRO stages, increase/decrease the batch size, and again measure your effective throughput. Rinse and repeat until satisfied.
Don't spend forever on it, but if you're about to start a 3-month training run, do spend a few days to find the most effective throughput-wise setup. That way your training cost will be the lowest and you will finish training faster. In the current fast-paced ML world, if it takes you an extra month to train something, you are likely to miss a golden opportunity. Of course, this is only me sharing an observation, and in no way am I trying to rush you. Before beginning to train BLOOM-176B, I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! This effort saved us more than one month of training time.
These notes were written primarily for the training mode, but they should mostly apply to inference as well. For example, gradient checkpointing is only useful during training and does nothing during inference. Additionally, we found that if you're doing multi-GPU inference and not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should provide superior performance.
Other quick performance-related notes:

- if you are training something from scratch, always try to have tensors with shapes that are divisible by 16 (e.g. hidden size). For batch size, try to be divisible by at least 2. There are hardware-specific [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility rules if you want to squeeze even higher performance out of your GPUs.
### Activation Checkpointing or Gradient Checkpointing
Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It's very confusing, but this is how it is.
Gradient checkpointing allows you to trade speed for GPU memory, which either lets you overcome a GPU OOM or increase your batch size, which often leads to better performance.
HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.

Therefore, you have two ways to take advantage of this very beneficial feature:
1. If you want to use an HF Transformers model, you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` with the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there.
2. If you write your own model and want to use DeepSpeed's activation checkpointing, you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with DeepSpeed's API. The latter is more flexible, since it allows you to offload the forward activations to CPU memory instead of recalculating them.
### Optimizer and Scheduler
As long as you don't enable `offload_optimizer`, you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of a HuggingFace scheduler with a DeepSpeed optimizer:
| Combos | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |
If `offload_optimizer` is enabled, it's possible to use a non-DeepSpeed optimizer as long as it has both CPU and GPU implementations (except LAMB).
<a id='deepspeed-optimizer'></a>
#### Optimizer
DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended. DeepSpeed can, however, also import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters).
If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values, or the defaults, for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`.

Here is an example of the auto-configured `optimizer` entry for `AdamW`:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
}
}
```
Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values, and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. The command line rules. The values that get overridden are:

- `lr` with the value of `--learning_rate`
- `betas` with the value of `--adam_beta1 --adam_beta2`
- `eps` with the value of `--adam_epsilon`
- `weight_decay` with the value of `--weight_decay`

Therefore, please remember to tune the shared hyperparameters on the command line.

You can also set the values explicitly:
```json
{
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
}
}
```
But then you're on your own for keeping the [`Trainer`] command line arguments and the DeepSpeed configuration in sync.
If you want to use another optimizer which is not listed above, you will have to add the following to the top-level configuration:
```json
{
"zero_allow_untested_optimizer": true
}
```
Similarly to `AdamW`, you can configure other officially supported optimizers. Just remember that those may have different config values. E.g. for Adam you will want `weight_decay` to be around `0.01`.
Additionally, offload works best when used with DeepSpeed's CPU Adam optimizer. As of `deepspeed==0.8.3`, if you want to use a different optimizer with offload, you also need to add:
```json
{
"zero_force_ds_cpu_optimizer": false
}
```
to the top-level configuration.
<a id='deepspeed-scheduler'></a>
#### Scheduler
DeepSpeed supports the `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters).
Here is where the schedulers overlap between ๐ค Transformers and DeepSpeed:

- `WarmupLR` via `--lr_scheduler_type constant_with_warmup`
- `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value for `--lr_scheduler_type`, so if you don't configure the scheduler, this is the one that will get configured by default.
If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure the ๐ค Transformers version of it.
Here is an example of the auto-configured `scheduler` entry for `WarmupLR`:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
Since *"auto"* is used, the [`Trainer`] arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values, and to avoid hard-to-find errors when, for example, the learning rate is set to different values in different places. The command line rules. The values that get set are:

- `warmup_min_lr` with the value of `0`
- `warmup_max_lr` with the value of `--learning_rate`
- `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise, `--warmup_ratio` is used, multiplied by the number of training steps and rounded up
- `total_num_steps` with either the value of `--max_steps` or, if it is not provided, derived automatically at run time based on the environment, the size of the dataset, and other command line arguments (needed for `WarmupDecayLR`)
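The `warmup_num_steps` rule above can be sketched as:

```python
import math

# How warmup_num_steps is derived: --warmup_steps wins if given; otherwise
# --warmup_ratio is multiplied by the total number of training steps and
# rounded up (a sketch of the rule described above).
def warmup_num_steps(total_steps, warmup_steps=0, warmup_ratio=0.0):
    if warmup_steps > 0:
        return warmup_steps
    return math.ceil(total_steps * warmup_ratio)

print(warmup_num_steps(10_000, warmup_ratio=0.06))  # 600
print(warmup_num_steps(10_000, warmup_steps=500))   # 500
```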
You can, of course, take over any or all of the configuration values and set them yourself:
```json
{
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 1000
}
}
}
```
But then you're on your own for keeping the [`Trainer`] command line arguments and the DeepSpeed configuration in sync.
For example, for `WarmupDecayLR`, you can use the following entry:
```json
{
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"last_batch_iteration": -1,
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
}
}
```
and `total_num_steps`, `warmup_max_lr` and `warmup_num_steps` will be set at loading time.
<a id='deepspeed-fp32'></a>
### fp32 Precision
DeepSpeed supports full fp32 and fp16 mixed precision.

Because of the much reduced memory needs and the faster speed one gets with fp16 mixed precision, the only time you will want to avoid it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in fp16 mixed precision (e.g. this often happens with bf16-pretrained models). Such models may overflow or underflow, leading to a `NaN` loss. If this is your case, you will want to use the full fp32 mode, explicitly disabling the otherwise-default fp16 mixed precision mode with:
```json
{
"fp16": {
"enabled": false
}
}
```
If you're using an Ampere-architecture based GPU, PyTorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please see [TensorFloat-32 (TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it.

With the ๐ค Trainer, you can use `--tf32` to enable it, or `--tf32 0` or `--no_tf32` to disable it. By default, the PyTorch default is used.
<a id='deepspeed-amp'></a>
### Automatic Mixed Precision
You can use automatic mixed precision with either a PyTorch-AMP-like way or an apex-like way:
### fp16
To configure PyTorch-AMP-like mode with fp16 (float16), set:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of the config values are up to you.

This mode gets enabled when the `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed.
You can also enable/disable this mode explicitly:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
But then you're on your own for keeping the [`Trainer`] command line arguments and the DeepSpeed configuration in sync.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options).
### BF16
If bf16 (bfloat16) is desired instead of fp16, the following configuration section is to be used:
```json
{
"bf16": {
"enabled": "auto"
}
}
```
bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling.
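This is because bf16 keeps fp32's 8 exponent bits, trading mantissa precision instead; a quick comparison of the finite ranges:

```python
# Largest finite values of each format: fp16 overflows past ~65504, while
# bf16's range matches fp32's, which is why no loss scaling is needed.
fp16_max = 65504.0                  # largest finite float16
bf16_max = 3.3895313892515355e38    # largest finite bfloat16, ~= fp32 max
print(bf16_max > 1e30 > fp16_max)   # True
```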
This mode gets enabled when the `--bf16` or `--bf16_full_eval` command line args are passed.
You can also enable/disable this mode explicitly:
```json
{
"bf16": {
"enabled": true
}
}
```
<Tip>

As of `deepspeed==0.6.0`, the bf16 support is new and experimental.

If you use [gradient accumulation](#gradient-accumulation) with bf16 enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it can lead to lossy accumulation.

Work is being done to fix this and provide an option to use a higher-precision `dtype` (fp16 or fp32).

</Tip>
### NCCL Collectives
There is the `dtype` of the training regime, and there is a separate `dtype` that is used for communication collectives like the various reduction and gathering/scattering operations.

All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using a bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation.

Various reduce operations can be quite lossy, for example when gradients are averaged across multiple GPUs. If the communications are done in fp16 or bf16, the outcome is likely to be lossy, since when one adds multiple numbers in low precision the result isn't exact. This is more so with bf16, as it has a lower precision than fp16. Often fp16 is good enough, as the loss is minimal when averaging grads, which are typically very small. Therefore, by default, for half-precision training, fp16 is used as the default for reduction operations. However, you have full control over this functionality: if you choose, you can add a small overhead and ensure that reductions use fp32 as the accumulation dtype, downcasting to the half-precision `dtype` you're training in only once the result is ready.

In order to override the default, you simply add a new configuration entry:
```json
{
"communication_data_type": "fp32"
}
```
ใใฎ่จไบใฎๅท็ญๆ็นใงใฎๆๅนใชๅคใฏใ"fp16"ใ"bfp16"ใ"fp32"ใงใใ
ๆณจ: ในใใผใธ ใผใญ 3 ใซใฏใbf16 ้ไฟกใฟใคใใซ้ขใใใใฐใใใใ`deepspeed==0.8.1`ใงไฟฎๆญฃใใใพใใใ
### apex
To configure an apex-AMP-like mode, set:
```json
"amp": {
"enabled": "auto",
"opt_level": "auto"
}
```
[`Trainer`] ใฏ `args.fp16_backend` ใฎๅคใซๅบใฅใใฆ่ชๅ็ใซ่จญๅฎใใพใใ
`args.fp16_opt_level`ใ
ใใฎใขใผใใฏใ`--fp16 --fp16_backend apex --fp16_opt_level 01`ใณใใณใ ใฉใคใณๅผๆฐใๆธกใใใใจๆๅนใซใชใใพใใ
ใใฎใขใผใใๆ็คบ็ใซๆงๆใใใใจใใงใใพใใ
```json
{
"amp": {
"enabled": true,
"opt_level": "O1"
}
}
```
ใใ ใใ[`Trainer`] ใณใใณใใฉใคใณๅผๆฐใจ DeepSpeed ใ่ชๅใงๅๆใใใใจใซใชใใพใใ
ๆงๆใ
ใใใฏ[ใใญใฅใกใณใ](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options)ใงใใ
<a id='deepspeed-bs'></a>
### Batch Size
ใใใใตใคใบใ่จญๅฎใใใซใฏใๆฌกใไฝฟ็จใใพใใ
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
[`Trainer`] ใฏ่ชๅ็ใซ `train_micro_batch_size_per_gpu` ใๆฌกใฎๅคใซ่จญๅฎใใพใใ
`args.per_device_train_batch_size`ใจ`train_batch_size`ใ`args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`ใซๅคๆดใใพใใ
ๅคใๆ็คบ็ใซ่จญๅฎใใใใจใใงใใพใใ
```json
{
"train_batch_size": 12,
"train_micro_batch_size_per_gpu": 4
}
```
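The relationship the Trainer enforces can be sketched as a tiny helper (a hypothetical function name, for illustration only):

```python
def effective_train_batch_size(world_size, per_device_train_batch_size,
                               gradient_accumulation_steps):
    # Mirrors the rule above: train_batch_size =
    #   args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps
    return world_size * per_device_train_batch_size * gradient_accumulation_steps

# e.g. 2 GPUs, micro-batch of 4 per GPU, 3 accumulation steps:
print(effective_train_batch_size(2, 4, 3))  # -> 24
```

If you set the values explicitly, they must satisfy this relationship for the run you launch.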
ใใ ใใ[`Trainer`] ใณใใณใใฉใคใณๅผๆฐใจ DeepSpeed ใ่ชๅใงๅๆใใใใจใซใชใใพใใ
ๆงๆใ
<a id='deepspeed-grad-acc'></a>
### Gradient Accumulation
ๅพ้
็ดฏ็ฉใปใใใๆงๆใใใซใฏ:
```json
{
"gradient_accumulation_steps": "auto"
}
```
[`Trainer`] ใฏ่ชๅ็ใซใใใ `args.gradient_accumulation_steps` ใฎๅคใซ่จญๅฎใใพใใ
ๅคใๆ็คบ็ใซ่จญๅฎใใใใจใใงใใพใใ
```json
{
"gradient_accumulation_steps": 3
}
```
ใใ ใใ[`Trainer`] ใณใใณใใฉใคใณๅผๆฐใจ DeepSpeed ใ่ชๅใงๅๆใใใใจใซใชใใพใใ
ๆงๆใ
<a id='deepspeed-grad-clip'></a>
### Gradient Clipping
ใฐใฉใใผใทใงใณ ใฐใฉใใผใทใงใณ ใฏใชใใใณใฐ ใปใใใๆงๆใใใซใฏ:
```json
{
"gradient_clipping": "auto"
}
```
[`Trainer`] ใฏ่ชๅ็ใซใใใ `args.max_grad_norm` ใฎๅคใซ่จญๅฎใใพใใ
ๅคใๆ็คบ็ใซ่จญๅฎใใใใจใใงใใพใใ
```json
{
"gradient_clipping": 1.0
}
```
ใใ ใใ[`Trainer`] ใณใใณใใฉใคใณๅผๆฐใจ DeepSpeed ใ่ชๅใงๅๆใใใใจใซใชใใพใใ
ๆงๆใ
<a id='deepspeed-weight-extraction'></a>
### Getting The Model Weights Out
ใใฌใผใใณใฐใ็ถ็ถใใDeepSpeed ใฎไฝฟ็จใๅ้ใใ้ใใไฝใๅฟ้
ใใๅฟ
่ฆใฏใใใพใใใ DeepSpeed ในใใข
fp32 ใฎใซในใฟใ ใใงใใฏใใคใณใ ใชใใใฃใใคใถใผ ใใกใคใซๅ
ใฎใในใฟใผใฎ้ใฟใใใใฏ `global_step*/*optim_states.pt` (ใใใฏ glob
ใใฟใผใณ)ใ้ๅธธใฎใใงใใฏใใคใณใใฎไธใซไฟๅญใใใพใใ
**FP16 Weights:**
When a model is saved under ZeRO-2, you end up with the normal `pytorch_model.bin` file with the model weights, but
they are only the fp16 version of the weights.
Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs.
Therefore, `"stage3_gather_16bit_weights_on_model_save": true` is required for the `Trainer` to save the fp16
version of the weights. If this setting is `False`, `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict`, it wouldn't be possible to load it back.
```json
{
"zero_optimization": {
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
**FP32 Weights:**
While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to
the [models hub](https://huggingface.co/models) or pass it to someone else, you will most likely want the fp32 weights.
This ideally shouldn't be done during training, since it is a process that requires a lot of memory, and is therefore
best performed offline after the training is complete. But if desired, and you have plenty of free CPU memory, remember
that it can be done in the same training script. The following sections discuss both approaches.
**ใฉใคใ FP32 ใฆใงใคใ ใชใซใใช:**
ใขใใซใๅคงใใใใใฌใผใใณใฐใฎ็ตไบๆใซ็ฉบใ CPU ใกใขใชใใปใจใใฉๆฎใฃใฆใใชใๅ ดๅใใใฎใขใใญใผใใฏๆฉ่ฝใใชใๅฏ่ฝๆงใใใใพใใ
ๅฐใชใใจใ 1 ใคใฎใใงใใฏใใคใณใใไฟๅญใใฆใใฆใๆๆฐใฎใใงใใฏใใคใณใใไฝฟ็จใใใๅ ดๅใฏใๆฌกใฎๆ้ ใๅฎ่กใงใใพใใ
```python
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
If you're using the `--load_best_model_at_end` class:*~transformers.TrainingArguments* argument (to track the best
checkpoint), then you can finish the training by first saving the final model explicitly and then doing the same as above:
```python
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```
<Tip>
`load_state_dict_from_zero_checkpoint` ใๅฎ่กใใใใจใ`model` ใฏใใฏใไฝฟ็จใงใใชใใชใใใจใซๆณจๆใใฆใใ ใใใ
ๅใใขใใชใฑใผใทใงใณใฎ DeepSpeed ใณใณใใญในใใใคใพใใdeepspeed ใจใณใธใณใๅๅๆๅใใๅฟ
่ฆใใใใพใใ
`model.load_state_dict(state_dict)` ใฏใใใใใในใฆใฎ DeepSpeed ใใธใใฏใๅ้คใใพใใใใใใฃใฆใใใใฏๆๅพใซใฎใฟๅฎ่กใใฆใใ ใใ
ใใฌใผใใณใฐใฎๆงๅญใ
</Tip>
ใใกใใใclass:*~transformers.Trainer* ใไฝฟ็จใใๅฟ
่ฆใฏใชใใไธ่จใฎไพใ็ฌ่ชใฎใใฎใซ่ชฟๆดใใใใจใใงใใพใใ
ใใฌใผใใผใ
ไฝใใใฎ็็ฑใงใใใซๆน่ฏใใใๅ ดๅใฏใ้ใฟใฎ fp32 `state_dict` ใๆฝๅบใใฆ้ฉ็จใใใใจใใงใใพใใ
ๆฌกใฎไพใซ็คบใใใใซใใใใใฏ่ชๅใงไฝๆใใพใใ
```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```
**ใชใใฉใคใณ FP32 ใฆใงใคใ ใชใซใใช:**
DeepSpeed ใฏ็นๅฅใชๅคๆในใฏใชใใ`zero_to_fp32.py`ใไฝๆใใใใงใใฏใใคใณใใฎๆไธไฝใซ้
็ฝฎใใพใใ
ใใฉใซใใใใฎในใฏใชใใใไฝฟ็จใใใจใใใคใงใ้ใฟใๆฝๅบใงใใพใใในใฏใชใใใฏในใฟใณใใขใญใณใชใฎใงใใใๅฟ
่ฆใใใพใใใ
ๆฝๅบใ่กใใใใฎ่จญๅฎใใกใคใซใพใใฏ `Trainer` ใๅฟ
่ฆใงใใ
ใใงใใฏใใคใณใ ใใฉใซใใผใๆฌกใฎใใใซใชใฃใฆใใใจใใพใใ
```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas 12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas 623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas 339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```
ใใฎไพใงใฏใDeepSpeed ใใงใใฏใใคใณใ ใตใใใฉใซใใผ *global_step1* ใ 1 ใคใ ใใใใพใใใใใใฃใฆใFP32ใๅๆง็ฏใใใซใฏ
้ใฟใๅฎ่กใใใ ใใงใ:
```bash
python zero_to_fp32.py . pytorch_model.bin
```
ใใใ ใใ `pytorch_model.bin`ใซใฏใ่คๆฐใฎ GPU ใใ็ตฑๅใใใๅฎๅ
จใช fp32 ใขใใซใฎ้ใฟใๅซใพใใใใใซใชใใพใใ
ในใฏใชใใใฏใZeRO-2 ใพใใฏ ZeRO-3 ใใงใใฏใใคใณใใ่ชๅ็ใซๅฆ็ใงใใใใใซใชใใพใใ
`python zero_to_fp32.py -h` ใๅฎ่กใใใจใไฝฟ็จๆนๆณใฎ่ฉณ็ดฐใ่กจ็คบใใใพใใ
ในใฏใชใใใฏใใใกใคใซ`latest`ใฎๅ
ๅฎนใไฝฟ็จใใฆ deepspeed ใตใใใฉใซใใผใ่ชๅๆคๅบใใพใใ
ไพใซใฏ`global_step1`ใๅซใพใใพใใ
ๆณจ: ็พๅจใในใฏใชใใใซใฏๆ็ต็ใช fp32 ใขใใซใฎ้ใฟใฎ 2 ๅใฎไธ่ฌ RAM ใๅฟ
่ฆใงใใ
### ZeRO-3 ใจ Infinity Nuances
ZeRO-3 ใฏใใใฉใกใผใฟ ใทใฃใผใใฃใณใฐๆฉ่ฝใฎ็นใง ZeRO-2 ใจใฏๅคงใใ็ฐใชใใพใใ
ZeRO-Infinity ใฏ ZeRO-3 ใใใใซๆกๅผตใใNVMe ใกใขใชใใใฎไปใฎ่คๆฐใฎ้ๅบฆใจในใฑใผใฉใใชใใฃใฎๅไธใใตใใผใใใพใใ
ใขใใซใซ็นๅฅใชๅคๆดใๅ ใใๅฟ
่ฆใใชใใฆใๆญฃๅธธใซๅไฝใใใใใซใใใใๅชๅใๆใใใฆใใพใใใใ็นๅฎใฎ็นใงใฏ
็ถๆณใซใใฃใฆใฏใๆฌกใฎๆ
ๅ ฑใๅฟ
่ฆใซใชใๅ ดๅใใใใพใใ
#### Constructing Massive Models
DeepSpeed/ZeRO-3 can handle models with trillions of parameters, which may not fit into the existing RAM. In such cases,
and also if you want the initialization to happen much faster, initialize the model using the *deepspeed.zero.Init()*
context manager (which is also a function decorator), like so:
```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed
with deepspeed.zero.Init():
config = T5Config.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration(config)
```
ใ่ฆงใฎใจใใใใใใซใใใฉใณใใ ใซๅๆๅใใใใขใใซใๅพใใใพใใ
ไบๅใใฌใผใใณใฐใใใใขใใซใไฝฟ็จใใใๅ ดๅใ`model_class.from_pretrained` ใฏๆฌกใฎๆกไปถใๆบใใ้ใใใฎๆฉ่ฝใๆๅนใซใใพใใ
`is_deepspeed_zero3_enabled()` ใฏ `True` ใ่ฟใใพใใใใใฏ็พๅจใ
[`TrainingArguments`] ใชใใธใงใฏใ (ๆธกใใใ DeepSpeed ๆงๆใใกใคใซใซ ZeRO-3 ๆงๆใๅซใพใใฆใใๅ ดๅ)
ใปใฏใทใงใณใใใใใฃใฆใๅผใณๅบใใฎๅใซ** [`TrainingArguments`] ใชใใธใงใฏใใไฝๆใใๅฟ
่ฆใใใใพใใ
`from_pretrained`ใ่ใใใใใทใผใฑใณในใฎไพใๆฌกใซ็คบใใพใใ
```python
from transformers import AutoModel, Trainer, TrainingArguments
training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("google-t5/t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```
ๅ
ฌๅผใฎใตใณใใซ ในใฏใชใใใไฝฟ็จใใฆใใฆใใณใใณใ ใฉใคใณๅผๆฐใซ `--deepspeed ds_config.json` ใๅซใพใใฆใใๅ ดๅ
ZeRO-3 ่จญๅฎใๆๅนใซใใใจใใใใใตใณใใซ ในใฏใชใใใฎ่จ่ฟฐๆนๆณใงใใใใใใในใฆใใใงใซๅฎไบใใฆใใพใใ
ๆณจ: ใขใใซใฎ fp16 ้ใฟใๅไธใฎ GPU ใฎใกใขใชใซๅใพใใชใๅ ดๅใฏใใใฎๆฉ่ฝใไฝฟ็จใใๅฟ
่ฆใใใใพใใ
ใใฎๆนๆณใจใใฎไปใฎ้ข้ฃๆฉ่ฝใฎ่ฉณ็ดฐใซใคใใฆใฏใ[ๅคง่ฆๆจกใขใใซใฎๆง็ฏ](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models) ใๅ็
งใใฆใใ ใใใ
ใพใใfp16 ใงไบๅ่จ็ทดใใใใขใใซใใญใผใใใใจใใฏใ`from_pretrained` ใซไฝฟ็จใใใใใซๆ็คบใใๅฟ
่ฆใใใใพใใ
`torch_dtype=torch.float16`ใ่ฉณ็ดฐใซใคใใฆใฏใ[from_pretrained-torch-dtype](#from_pretrained-torch-dtype) ใๅ็
งใใฆใใ ใใใ
#### Gathering Parameters
่คๆฐใฎ GPU ไธใฎ ZeRO-3 ใงใฏใ็พๅจใฎ GPU ใฎใใฉใกใผใฟใงใชใ้ใใๅไธใฎ GPU ใใในใฆใฎใใฉใกใผใฟใๆใคใใจใฏใใใพใใใ
ๅฎ่กๅฑคใใใใใฃใฆใใในใฆใฎใฌใคใคใผใฎใในใฆใฎใใฉใกใผใฟใผใซไธๅบฆใซใขใฏใปในใใๅฟ
่ฆใใใๅ ดๅใฏใใใใ่กใใใใฎ็นๅฎใฎๆนๆณใใใใพใใ
ใปใจใใฉใฎๅ ดๅใฏๅฟ
่ฆใใใพใใใใๅฟ
่ฆใชๅ ดๅใฏใ[ใใฉใกใผใฟใฎๅ้](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination) ใๅ็
งใใฆใใ ใใใ
ใใ ใใใใใคใใฎๅ ดๆใงๅ
้จ็ใซไฝฟ็จใใฆใใพใใใใฎไพใฎ 1 ใคใฏใไบๅใใฌใผใใณใฐใใใใขใใซใฎ้ใฟใใญใผใใใใจใใงใใ
`from_pretrained`ใไธๅบฆใซ 1 ใคใฎใฌใคใคใผใใญใผใใใๅๅ ใใฆใใใในใฆใฎ GPU ใซๅณๅบงใซๅๅฒใใพใใ
ๅคง่ฆๆจกใชใขใใซใงใฏใใกใขใชใฎ้ขไฟใงใ1 ใคใฎ GPU ใซใญใผใใใฆใใ่คๆฐใฎ GPU ใซๅๆฃใใใใจใฏใงใใพใใใ
ๅถ้ใ
ใพใใZeRO-3 ใงใฏใ็ฌ่ชใฎใณใผใใไฝๆใใๆฌกใฎใใใชใขใใซ ใใฉใกใผใฟใผใฎ้ใฟใ็บ็ใใใจใใพใใ
```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```
Don't stress when you see `tensor([1.])`, or if you get an error saying the parameter is of size `1` instead of some
much larger multi-dimensional shape. This means that the parameter is partitioned and what you see is a ZeRO-3
placeholder.
<a id='deepspeed-zero-inference'></a>
### ZeRO Inference
ZeRO Inference ใฏใZeRO-3 Training ใจๅใๆงๆใไฝฟ็จใใพใใใชใใใฃใใคใถใผใจในใฑใธใฅใผใฉใผใฎใปใฏใทใงใณใฏๅฟ
่ฆใใใพใใใใง
ๅฎ้ใๅใใใฎใใใฌใผใใณใฐใจๅ
ฑๆใใใๅ ดๅใฏใใใใใ่จญๅฎใใกใคใซใซๆฎใใใจใใงใใพใใๅฝผใใฏใใ ใใใชใใ ใใ
็ก่ฆใใใพใใใ
ใใไปฅๅคใฎๅ ดๅใฏใ้ๅธธใฎ [`TrainingArguments`] ๅผๆฐใๆธกใใ ใใงใใไพใใฐ๏ผ
```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```
ๅฏไธ้่ฆใชใใจใฏใZeRO-2 ใซใฏไฝใฎๅฉ็นใใชใใใใZeRO-3 ๆงๆใไฝฟ็จใใๅฟ
่ฆใใใใจใใใใจใงใใ
ZeRO-3 ใฎใฟใใใฉใกใผใฟใผใฎใทใฃใผใใฃใณใฐใๅฎ่กใใใฎใซๅฏพใใZeRO-1 ใฏๅพ้
ใจใชใใใฃใใคใถใผใฎ็ถๆ
ใใทใฃใผใใฃใณใฐใใใใใๆจ่ซใซๅฝน็ซใกใพใใ
ไปฅไธใฏใๅฉ็จๅฏ่ฝใชใในใฆใฎ GPU ใใใใญใคใใ DeepSpeed ใง`run_translation.py`ใๅฎ่กใใไพใงใใ
```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google-t5/t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50 \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
```
ๆจ่ซใฎใใใซใใชใใใฃใใคใถใผใฎ็ถๆ
ใจๅพ้
ใซใใฃใฆไฝฟ็จใใใ่ฟฝๅ ใฎๅคงใใชใกใขใชใฏๅฟ
่ฆใชใใใใ
ใฏใใใซๅคงใใชใใใใใทใผใฑใณใน้ทใๅใใใผใใฆใงใขใซ้ฉๅใงใใๅฟ
่ฆใใใใพใใ
ใใใซใDeepSpeed ใฏ็พๅจใDeepspeed-Inference ใจๅผใฐใใ้ข้ฃ่ฃฝๅใ้็บใใฆใใพใใใใใใจใฏไฝใฎ้ขไฟใใใใพใใใ
ZeRO ใใฏใใญใธใผใซๆบๆ ใใฆใใพใใใไปฃใใใซใใณใฝใซไธฆๅๅฆ็ใไฝฟ็จใใฆใๅไธใฎ GPU ใซๅใพใใชใใขใใซใในใฑใผใชใณใฐใใพใใใใใฏ
็พๅจ้็บไธญใงใใ่ฃฝๅใๅฎๆใใใ็ตฑๅใๆไพใใไบๅฎใงใใ
### Memory Requirements
Since Deepspeed ZeRO can offload memory to CPU (and NVMe), the framework provides utilities that tell you how much CPU
and GPU memory will be needed depending on the number of GPUs being used.
Let's estimate how much memory is needed to finetune `bigscience/T0_3B` on a single GPU:
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.37GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
15.56GB | 46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```
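As a rough cross-check of the `offload_param=none, offload_optimizer=none` rows above (an approximation, not the estimator's exact formula): mixed-precision Adam training keeps about 18 bytes of model states per parameter, which ZeRO-3 splits evenly across GPUs when nothing is offloaded:

```python
def zero3_gpu_mem_gb(total_params, num_gpus, bytes_per_param=18):
    # 18 bytes/param = 2 (fp16 params) + 2 (fp16 grads) + 4 (fp32 master copy)
    #                + 8 (fp32 Adam first/second moments), sharded across GPUs
    return total_params * bytes_per_param / num_gpus / 2**30

print(round(zero3_gpu_mem_gb(2_783_000_000, 1), 2))  # -> 46.65, close to the 46.91GB estimate
```

The small difference with the tool's output comes from per-GPU bookkeeping the rule of thumb ignores.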
ใใใใฃใฆใๅไธใฎ 80 GB GPU ใง CPU ใชใใญใผใใชใใงๆญ่ผใใใใจใใๅฐใใช 8 GB GPU ใงใๆๅคง 60 GB ใฎ CPU ใกใขใชใๅฟ
่ฆใซใชใใใจใๅฏ่ฝใงใใ (ใใใฏใใฉใกใผใฟใใชใใใฃใใคใถใฎ็ถๆ
ใใใใณๅพ้
ใฎใใใฎใกใขใชใงใใใใจใซๆณจๆใใฆใใ ใใใcuda ใซใผใใซใใขใฏใใฃใใผใทใงใณใใใใณไธๆใกใขใชใซใฏใใๅฐใๅคใใฎใกใขใชใๅฟ
่ฆใงใใ)
ๆฌกใซใใณในใใจ้ๅบฆใฎใใฌใผใใชใใซใชใใพใใใใๅฐใใ GPU ใ่ณผๅ
ฅใพใใฏใฌใณใฟใซใใๆนใๅฎใใชใใพใ (Deepspeed ZeRO ใงใฏ่คๆฐใฎ GPU ใไฝฟ็จใงใใใใใGPU ใฎๆฐใๆธใใใใจใใงใใพใ)ใใใใใใใฎๅ ดๅใฏ้
ใใชใใพใใใใฎใใใไฝใใๅฎ่กใใ้ๅบฆใๆฐใซใใชใใฆใใ้ๅบฆใฎไฝไธใฏ GPU ใฎไฝฟ็จๆ้ใซ็ดๆฅๅฝฑ้ฟใใใณในใใๅขๅคงใใใใใใฉใใๆใๅนๆ็ใใๅฎ้จใใฆๆฏ่ผใใฆใใ ใใใ
ๅๅใช GPU ใกใขใชใใใๅ ดๅใฏใใในใฆใ้ซ้ใซใชใใใใCPU/NVMe ใชใใญใผใใๅฟ
ใ็กๅนใซใใฆใใ ใใใ
ใใจใใฐใ2 ใคใฎ GPU ใซๅฏพใใฆๅใใใจใ็นฐใ่ฟใใฆใฟใพใใใใ
```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
per CPU | per GPU | Options
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
70.00GB | 0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
62.23GB | 2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
0.74GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
31.11GB | 23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```
ใใใใฃใฆใใใใงใฏใCPU ใซใชใใญใผใใใใซ 2x 32GB ไปฅไธใฎ GPU ใๅฟ
่ฆใซใชใใพใใ
่ฉณ็ดฐใซใคใใฆใฏใ[ใกใขใชๆจๅฎใใผใซ](https://deepspeed.readthedocs.io/en/latest/memory.html) ใๅ็
งใใฆใใ ใใใ
### Filing Issues
ใใใงใฏใๅ้กใฎ็็ธใใใใซ่งฃๆใใไฝๆฅญใฎใใญใใฏใ่งฃ้คใงใใใใใๅ้กใๅ ฑๅใใๆนๆณใ่ชฌๆใใพใใ
ใฌใใผใใซใฏๅฟ
ใๆฌกใฎๅ
ๅฎนใๅซใใฆใใ ใใใ
1. ใฌใใผใๅ
ใฎๅฎๅ
จใช Deepspeed ๆงๆใใกใคใซ
2. [`Trainer`] ใไฝฟ็จใใฆใใๅ ดๅใฏใณใใณใใฉใคใณๅผๆฐใใพใใฏ
ใใฌใผใใผใฎใปใใใขใใใ่ชๅใงในใฏใชใใไฝๆใใฆใใๅ ดๅใฏใ[`TrainingArguments`] ๅผๆฐใใใชใใงใใ ใใ
[`TrainingArguments`] ใซใฏ็ก้ขไฟใชใจใณใใชใๅคๆฐๅซใพใใฆใใใใใใใณใใใพใใ
3. ๆฌกใฎๅบๅ:
```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```
4. ๅฏ่ฝใงใใใฐใๅ้กใๅ็พใงใใ Google Colab ใใผใใใใฏใธใฎใชใณใฏใๅซใใฆใใ ใใใใใใไฝฟใใพใ
[ใใผใใใใฏ](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) ใจใใฆ
ๅบ็บ็นใ
5. ไธๅฏ่ฝใงใชใ้ใใใซในใฟใ ใใผใฟใปใใใงใฏใชใใๅธธใซไฝฟ็จใงใใๆจๆบใใผใฟใปใใใไฝฟ็จใใฆใใ ใใใ
6. ๅฏ่ฝใงใใใฐใๆขๅญใฎ [ใตใณใใซ](https://github.com/huggingface/transformers/tree/main/examples/pytorch) ใฎใใใใใไฝฟ็จใใฆๅ้กใๅ็พใใฆใฟใฆใใ ใใใ
- Deepspeed ใๅ้กใฎๅๅ ใงใฏใชใใใจใใใใใใพใใ
ๆๅบใใใๅ้กใฎไธ้จใฏใDeepspeed ใจใฏ็ก้ขไฟใงใใใใจใๅคๆใใพใใใใใใฏใDeepspeed ใใปใใใขใใใใๅ้คใใใๅพใงใใ
ๅ้กใฏใพใ ๆฎใฃใฆใใใ
ใใใใฃใฆใๅฎๅ
จใซๆ็ฝใงใชใๅ ดๅใฏใDeepSpeed ้ข้ฃใฎๅ้กใงใใ
ไพๅคใ็บ็ใใDeepSpeed ใขใธใฅใผใซใ้ขไฟใใฆใใใใจใใใใใพใใใพใใDeepSpeed ใๅซใพใชใใปใใใขใใใๅใในใใใฆใใ ใใใ
ๅ้กใ่งฃๆฑบใใชใๅ ดๅใซใฎใฟใDeepspeed ใซใคใใฆ่จๅใใๅฟ
่ฆใช่ฉณ็ดฐใใในใฆๆไพใใฆใใ ใใใ
- ๅ้กใ็ตฑๅ้จๅใงใฏใชใ DeepSpeed ใณใขใซใใใใจใๆใใใชๅ ดๅใฏใๅ้กใๆๅบใใฆใใ ใใใ
[Deepspeed](https://github.com/microsoft/DeepSpeed/) ใ็ดๆฅไฝฟ็จใใพใใใใใใใใชใๅ ดๅใงใใใๅฎๅฟใใ ใใใ
ใฉใกใใฎๅ้กใใฉใใซใผใงใๅ้กใใใพใใใๆ็จฟใใใใใใใๅคๆญใใๆฌกใฎๅ ดๅใฏๅฅใฎๅ้กใใฉใใซใผใซใชใใคใฌใฏใใใพใใ
ใใใงใใๅฟ
่ฆใใใใ
### Troubleshooting
#### the `deepspeed` process gets killed at startup without a traceback
`deepspeed`ใใญใปในใ่ตทๅๆใซใใฌใผในใใใฏใชใใงๅผทๅถ็ตไบใใใๅ ดๅใใใใฏ้ๅธธใใใญใฐใฉใ ใ่ฉฆ่กใใใใจใๆๅณใใพใใ
ใทในใใ ใๆใฃใฆใใใใใๅคใใฎ CPU ใกใขใชใๅฒใๅฝใฆใใใใใญใปในใๅฒใๅฝใฆใ่จฑๅฏใใใฆใใใใใOS ใซใผใใซใใใใๅผทๅถ็ตไบใใพใใ
ใใญใปในใใใใฏใ่จญๅฎใใกใคใซใซ `offload_optimizer` ใพใใฏ `offload_param` ใๅซใพใใฆใใๅฏ่ฝๆงใ้ซใใใใงใใ
ใฉใกใใ`cpu`ใซใชใใญใผใใใใใใซ่จญๅฎใใใฆใใพใใ NVMe ใไฝฟ็จใใฆใใๅ ดๅใฏใๆฌกใฎ็ฐๅขใงๅฎ่กใใฆใใๅ ดๅใฏ NVMe ใธใฎใชใใญใผใใ่ฉฆใใฆใใ ใใใ
ใผใญ-3ใ [็นๅฎใฎใขใใซใซๅฟ
่ฆใชใกใขใช้ใ่ฆ็ฉใใ]ๆนๆณใฏๆฌกใฎใจใใใงใ(https://deepspeed.readthedocs.io/en/latest/memory.html)ใ
#### training and/or eval/predict loss is `NaN`
ใใใฏใbf16 ๆททๅ็ฒพๅบฆใขใผใใงไบๅใใฌใผใใณใฐใใใใขใใซใๅๅพใใใใใ fp16 (ๆททๅ็ฒพๅบฆใฎๆ็กใซใใใใใ) ใงไฝฟ็จใใใใจใใๅ ดๅใซใใ็บ็ใใพใใ TPU ใงใใฌใผใใณใฐใใใใปใจใใฉใฎใขใใซใใใใณๅคใใฎๅ ดๅใGoogle ใซใใฃใฆใชใชใผในใใใใขใใซใฏใใใฎใซใใดใชใซๅ้กใใใพใ (ใใจใใฐใใปใผใในใฆใฎ t5 ใใผในใฎใขใใซ)ใใใใงใฎ่งฃๆฑบ็ญใฏใใใผใใฆใงใขใใตใใผใใใฆใใๅ ดๅ (TPUใAmpere GPU ไปฅ้)ใfp32 ใพใใฏ bf16 ใไฝฟ็จใใใใจใงใใ
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
}
}
```
ใญใฐใซใฏใDeepspeed ใๆฌกใฎใใใซ`OVERFLOW!`ใๅ ฑๅใใฆใใใใจใใใใใพใใ
```
0%| | 0/189 [00:00<?, ?it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144
1%|โ | 1/189 [00:00<01:26, 2.17it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0
1%|โโ
[...]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
14%|โโโโโโโโโโโโโโโโโ | 27/189 [00:14<01:13, 2.21it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|โโโโโโโโโโโโโโโโโโ | 28/189 [00:14<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
15%|โโโโโโโโโโโโโโโโโโ | 29/189 [00:15<01:13, 2.18it/s]
[deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
[...]
```
ใใใฏใDeepspeed ๆๅคฑในใฑใผใฉใผใๆๅคฑใชใผใใผใใญใผใๅ
ๆใใในใฑใผใชใณใฐไฟๆฐใ่ฆใคใใใใชใใใจใๆๅณใใพใใ
(ใญใฐใฏใใใง่ชญใฟใใใใใใใใซใใใตใผใธใใใฆใใพใใ)
ใใฎๅ ดๅใ้ๅธธใฏ `initial_scale_power` ใฎๅคใไธใใๅฟ
่ฆใใใใพใใ้ๅธธใ`initial_scale_power: 32` ใซ่จญๅฎใใใจๅ้กใ่งฃๆฑบใใพใใ
### Notes
- DeepSpeed ใซใฏ pip ใงใคใณในใใผใซๅฏ่ฝใช PyPI ใใใฑใผใธใใใใพใใใใใผใใฆใงใขใซๆใ้ฉๅใใใใใซใใพใๆๅนใซใใๅฟ
่ฆใใใๅ ดๅใฏใ[ใฝใผใน](https://github.com/microsoft/deepspeed#installation) ใใใคใณในใใผใซใใใใจใๅผทใใๅงใใใพใใ
1 ใใใ Adam ใชใฉใฎ็นๅฎใฎๆฉ่ฝใฏใpypi ใใฃในใใชใใฅใผใทใงใณใงใฏๅฉ็จใงใใพใใใ
- ๐ค Transformers ใง DeepSpeed ใไฝฟ็จใใใใใซ [`Trainer`] ใไฝฟ็จใใๅฟ
่ฆใฏใใใพใใ - ไปปๆใฎใขใใซใไฝฟ็จใงใใพใ
ๅพ่
ใฏ [DeepSpeed ็ตฑๅๆ้ ](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models) ใซๅพใฃใฆ่ชฟๆดใใๅฟ
่ฆใใใใพใใ
## Non-Trainer Deepspeed Integration
[`~integrations.HfDeepSpeedConfig`] is used to integrate Deepspeed into the 🤗 Transformers core
functionality when [`Trainer`] is not used. The only thing it does is handle Deepspeed ZeRO-3 parameter gathering and
automatically split the model onto multiple GPUs during the `from_pretrained` call. Everything else you have to do by
yourself.
When using [`Trainer`], everything is handled automatically.
When not using [`Trainer`], to efficiently deploy DeepSpeed ZeRO-3, you must instantiate the
[`~integrations.HfDeepSpeedConfig`] object before instantiating the model and keep that object alive.
If you're using Deepspeed ZeRO-1 or ZeRO-2, you don't need to use `HfDeepSpeedConfig` at all.
For example, for a pretrained model:
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model = AutoModel.from_pretrained("openai-community/gpt2")
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
ใพใใฏใไบๅใใฌใผใใณใฐใใใฆใใชใใขใใซใฎๅ ดๅ:
```python
from transformers.integrations import HfDeepSpeedConfig
from transformers import AutoModel, AutoConfig
import deepspeed
ds_config = {...} # deepspeed config object or path to the file
# must run before instantiating the model to detect zero 3
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
config = AutoConfig.from_pretrained("openai-community/gpt2")
model = AutoModel.from_config(config)
engine = deepspeed.initialize(model=model, config_params=ds_config, ...)
```
[`Trainer`] ็ตฑๅใไฝฟ็จใใฆใใชใๅ ดๅใฏใๅฎๅ
จใซ็ฌๅใง่กใใใจใซใชใใใจใซๆณจๆใใฆใใ ใใใๅบๆฌ็ใซใฏใ[Deepspeed](https://www.deepspeed.ai/) Web ใตใคใใฎใใญใฅใกใณใใซๅพใฃใฆใใ ใใใใพใใ่จญๅฎใใกใคใซใๆ็คบ็ใซ่จญๅฎใใๅฟ
่ฆใใใใพใใ`"auto"`ๅคใฏไฝฟ็จใงใใใไปฃใใใซๅฎ้ใฎๅคใๅ
ฅๅใใๅฟ
่ฆใใใใพใใ
## HfDeepSpeedConfig
[[autodoc]] integrations.HfDeepSpeedConfig
- all
### Custom DeepSpeed ZeRO Inference
ไปฅไธใฏใๅไธใฎ GPU ใซใขใใซใ้ฉๅใงใใชใๅ ดๅใซใ[`Trainer`] ใไฝฟ็จใใใซ DeepSpeed ZeRO ๆจ่ซใๅฎ่กใใๆนๆณใฎไพใงใใ่งฃๆฑบ็ญใซใฏใ่ฟฝๅ ใฎ GPU ใฎไฝฟ็จใใพใใฏ GPU ใกใขใชใ CPU ใกใขใชใซใชใใญใผใใใใใจใๅซใพใใพใใ
ใใใง็่งฃใในใ้่ฆใชใใฅใขใณในใฏใZeRO ใฎ่จญ่จๆนๆณใซใใใ็ฐใชใ GPU ใง็ฐใชใๅ
ฅๅใไธฆ่กใใฆๅฆ็ใงใใใจใใใใจใงใใ
ใใฎไพใซใฏๅคง้ใฎใกใขใใใใ่ชๅทฑๆๆธๅใใใฆใใพใใ
ๅฟ
ใๆฌกใฎใใจใ่กใฃใฆใใ ใใใ
1. ๅๅใช GPU ใกใขใชใใใๅ ดๅใฏใCPU ใชใใญใผใใ็กๅนใซใใพใ (้ๅบฆใไฝไธใใใใ)ใ
2. Ampere ใพใใฏๆฐใใ GPU ใๆๆใใฆใใๅ ดๅใฏใๅฆ็ใ้ซ้ๅใใใใใซ bf16 ใๆๅนใซใใพใใใใฎใใผใใฆใงใขใใชใๅ ดๅใฏใbf16 ๆททๅ็ฒพๅบฆใงไบๅใใฌใผใใณใฐใใใใขใใซ (ใปใจใใฉใฎ t5 ใขใใซใชใฉ) ใไฝฟ็จใใชใ้ใใfp16 ใๆๅนใซใใใใจใใงใใพใใใใใใฏ้ๅธธใfp16 ใงใชใผใใผใใญใผใใๅบๅใจใใฆใฌใใผใธใ่กจ็คบใใใพใใ
```python
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For in-depth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed
# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
ใใใ`t0.py`ใจใใฆไฟๅญใใฆๅฎ่กใใพใใใใ
```bash
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
ใใใฏ้ๅธธใซๅบๆฌ็ใชไพใงใใใใใผใบใซๅใใใฆ่ชฟๆดใใฆใใ ใใใ
### `generate` nuances
ZeRO Stage-3 ใง่คๆฐใฎ GPU ใไฝฟ็จใใๅ ดๅใ`generate(..., synced_gpus=True)`ใๅผใณๅบใใฆ GPU ใๅๆใใๅฟ
่ฆใใใใพใใใใใ่กใใชใใจใ1 ใคใฎ GPU ใไปใฎ GPU ใใๅ
ใซ็ๆใ็ตไบใใๅ ดๅใๆฎใใฎ GPU ใ็ๆใๅๆญขใใ GPU ใใใฆใงใคใใฎใทใฃใผใใๅไฟกใงใใชใใชใใใใใทในใใ ๅ
จไฝใใใณใฐใใพใใ
`transformers>=4.28` ไปฅ้ใ`synced_gpus` ใๆ็คบ็ใซๆๅฎใใใฆใใชใๅ ดๅใใใใใฎๆกไปถใๆคๅบใใใใจ่ชๅ็ใซ `True` ใซ่จญๅฎใใใพใใใใ ใใๅฟ
่ฆใซๅฟใใฆ `synced_gpus` ใฎๅคใใชใผใใผใฉใคใใใใใจใใงใใพใใ
## Deepspeed ็ตฑๅใฎใในใ
DeepSpeed ็ตฑๅใๅซใ PR ใ้ไฟกใใๅ ดๅใฏใCircleCI PR CI ใปใใใขใใใซใฏ GPU ใใชใใใจใซๆณจๆใใฆใใ ใใใใใฎใใใGPU ใๅฟ
่ฆใจใใใในใใฏๅฅใฎ CI ใงๆฏๆฉใฎใฟๅฎ่กใใใพใใใใใใฃใฆใPR ใง็ท่ฒใฎ CI ใฌใใผใใ่กจ็คบใใใฆใใDeepSpeed ใในใใๅๆ ผใใใใจใๆๅณใใใใใงใฏใใใพใใใ
DeepSpeed ใในใใๅฎ่กใใใซใฏใๅฐใชใใจใไปฅไธใๅฎ่กใใฆใใ ใใใ
```bash
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```
ใขใใชใณใฐใพใใฏ pytorch ใตใณใใซ ใณใผใใฎใใใใใๅคๆดใใๅ ดๅใฏใModel Zoo ใในใใๅฎ่กใใพใใไปฅไธใฏใในใฆใฎ DeepSpeed ใในใใๅฎ่กใใพใใ
```bash
RUN_SLOW=1 pytest tests/deepspeed
```
## Main DeepSpeed Resources
- [ใใญใธใงใฏใใฎ github](https://github.com/microsoft/deepspeed)
- [ไฝฟ็จๆนๆณใใญใฅใกใณใ](https://www.deepspeed.ai/getting-started/)
- [API ใใญใฅใกใณใ](https://deepspeed.readthedocs.io/en/latest/index.html)
- [ใใญใฐๆ็จฟ](https://www.microsoft.com/en-us/research/search/?q=deepspeed)
่ซๆ:
- [ZeRO: ๅ
ใใฉใกใผใฟ ใขใใซใฎใใฌใผใใณใฐใซๅใใใกใขใชใฎๆ้ฉๅ](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: 10 ๅ่ฆๆจกใฎใขใใซ ใใฌใผใใณใฐใฎๆฐไธปๅ](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: ๆฅต้ในใฑใผใซใฎๆทฑๅฑคๅญฆ็ฟใฎใใใฎ GPU ใกใขใชใฎๅฃใๆใก็ ดใ](https://arxiv.org/abs/2104.07857)
ๆๅพใซใHuggingFace [`Trainer`] ใฏ DeepSpeed ใฎใฟใ็ตฑๅใใฆใใใใจใ่ฆใใฆใใใฆใใ ใใใ
DeepSpeed ใฎไฝฟ็จใซ้ขใใฆๅ้กใ่ณชๅใใใๅ ดๅใฏใ[DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues) ใซๅ้กใๆๅบใใฆใใ ใใใ
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ALIGN
## ๆฆ่ฆ
ALIGNใขใใซใฏใใ[Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918)ใใจใใ่ซๆใงChao JiaใYinfei YangใYe XiaใYi-Ting ChenใZarana ParekhใHieu PhamใQuoc V. LeใYunhsuan SungใZhen LiใTom DuerigใซใใฃใฆๆๆกใใใพใใใALIGNใฏใใซใใขใผใใซใช่ฆ่ฆ่จ่ชใขใใซใงใใใใใฏ็ปๅใจใใญในใใฎ้กไผผๅบฆใใใผใญใทใงใใ็ปๅๅ้กใซไฝฟ็จใงใใพใใALIGNใฏ[EfficientNet](efficientnet)ใ่ฆ่ฆใจใณใณใผใใผใจใใฆใ[BERT](bert)ใใใญในใใจใณใณใผใใผใจใใฆๆญ่ผใใใใฅใขใซใจใณใณใผใใผๆง้ ใ็นๅพดใจใใๅฏพ็
งๅญฆ็ฟใซใใฃใฆ่ฆ่ฆใจใใญในใใฎ่กจ็พใๆดๅใใใใใจใๅญฆใณใพใใใใใพใงใฎ็ ็ฉถใจใฏ็ฐใชใใALIGNใฏๅทจๅคงใงใใคใธใผใชใใผใฟใปใใใๆดป็จใใใณใผใในใฎในใฑใผใซใๅฉ็จใใฆๅ็ดใชๆนๆณใชใใๆๅ
็ซฏใฎ่กจ็พใ้ๆใงใใใใจใ็คบใใฆใใพใใ
่ซๆใฎ่ฆๆจใฏไปฅไธใฎ้ใใงใ๏ผ
*ไบๅๅญฆ็ฟใใใ่กจ็พใฏใๅคใใฎ่ช็ถ่จ่ชๅฆ็๏ผNLP๏ผใใใณ็ฅ่ฆใฟในใฏใซใจใฃใฆ้่ฆใซใชใฃใฆใใพใใNLPใซใใใ่กจ็พๅญฆ็ฟใฏใไบบ้ใฎใขใใใผใทใงใณใฎใชใ็ใฎใใญในใใงใฎๅญฆ็ฟใธใจ็งป่กใใฆใใพใใใ่ฆ่ฆใใใณ่ฆ่ฆ่จ่ชใฎ่กจ็พใฏไพ็ถใจใใฆ็ฒพๅทงใชๅญฆ็ฟใใผใฟใปใใใซๅคงใใไพๅญใใฆใใใใใใฏ้ซไพกใงใใฃใใๅฐ้็ฅ่ญใๅฟ
่ฆใจใใใใใพใใ่ฆ่ฆใขใใชใฑใผใทใงใณใฎๅ ดๅใImageNetใOpenImagesใฎใใใชๆ็คบ็ใชใฏใฉในใฉใใซใๆใคใใผใฟใปใใใไฝฟ็จใใฆๅญฆ็ฟใใใใใจใใปใจใใฉใงใใ่ฆ่ฆ่จ่ชใฎๅ ดๅใConceptual CaptionsใMSCOCOใCLIPใชใฉใฎไบบๆฐใฎใใใใผใฟใปใใใฏใในใฆใใใใใ็ก่ฆใงใใชใใใผใฟๅ้๏ผใใใณใฏใชใผใใณใฐ๏ผใใญใปในใๅซใฟใพใใใใฎใณในใใฎใใใใญใฅใฌใผใทใงใณใใญใปในใฏใใผใฟใปใใใฎใตใคใบใๅถ้ใใ่จ็ทดใใใใขใใซใฎในใฑใผใชใณใฐใๅฆจใใพใใๆฌ่ซๆใงใฏใConceptual Captionsใใผใฟใปใใใฎ้ซไพกใชใใฃใซใฟใชใณใฐใๅพๅฆ็ในใใใใชใใงๅพใใใใ10ๅใ่ถ
ใใ็ปๅalt-textใใขใฎใใคใบใฎๅคใใใผใฟใปใใใๆดป็จใใพใใใทใณใใซใชใใฅใขใซใจใณใณใผใใผใขใผใญใใฏใใฃใฏใๅฏพ็
งๆๅคฑใไฝฟ็จใใฆ็ปๅใจใใญในใใใขใฎ่ฆ่ฆ็ใใใณ่จ่ช็่กจ็พใๆดๅใใใใใจใๅญฆ็ฟใใพใใๆใ
ใฏใใณใผใในใฎ่ฆๆจกใใใฎใใคใบใ่ฃใใใใฎใใใชๅ็ดใชๅญฆ็ฟในใญใผใ ใงใๆๅ
็ซฏใฎ่กจ็พใซใคใชใใใใจใ็คบใใพใใๆใ
ใฎ่ฆ่ฆ่กจ็พใฏใImageNetใVTABใชใฉใฎๅ้กใฟในใฏใธใฎ่ปข็งปใซใใใฆๅผทๅใชๆง่ฝใ็บๆฎใใพใใๆดๅใใ่ฆ่ฆ็ใใใณ่จ่ช็่กจ็พใฏใใผใญใทใงใใ็ปๅๅ้กใๅฏ่ฝใซใใใพใใใใๆด็ทดใใใใฏใญในใขใใณใทใงใณใขใใซใจๆฏ่ผใใฆใใFlickr30KใใใณMSCOCO็ปๅใใญในใๆค็ดขใใณใใใผใฏใซใใใฆๆฐใใชๆๅ
็ซฏใฎ็ตๆใ้ๆใใพใใใพใใใใใใฎ่กจ็พใฏใ่ค้ใชใใญในใใใใณใใญในใ+็ปๅใฎใฏใจใชใ็จใใใฏใญในใขใผใใซๆค็ดขใๅฏ่ฝใซใใพใใ*
This model was contributed by [Alara Dirik](https://huggingface.co/adirik). The original code is not released; this implementation is based on the Kakao Brain implementation based on the original paper.
## Usage example

ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.

[`AlignProcessor`] wraps [`EfficientNetImageProcessor`] and [`BertTokenizer`] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [`AlignProcessor`] and [`AlignModel`].
```python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
```
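In addition to the joint similarity scores, the two encoders can also be queried separately through `get_text_features` and `get_image_features`. A minimal sketch (the image URL and text prompt are illustrative choices):

```python
import torch
import requests
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["an image of a cat"], images=image, return_tensors="pt")
with torch.no_grad():
    # each encoder can be run independently; both project into the same latent space
    text_embeds = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_embeds = model.get_image_features(pixel_values=inputs["pixel_values"])

# because the spaces are aligned, cosine similarity between the two is meaningful
sim = torch.nn.functional.cosine_similarity(text_embeds, image_embeds)
print(sim)
```

This is useful when embedding a large image collection once and scoring many text queries against it afterwards.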
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.

- A blog post on [ALIGN and the COYO-700M dataset](https://huggingface.co/blog/vit-align).
- A zero-shot image classification [demo](https://huggingface.co/spaces/adirik/ALIGN-zero-shot-image-classification).
- [Model card](https://huggingface.co/kakaobrain/align-base) of the `kakaobrain/align-base` model.

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## AlignConfig
[[autodoc]] AlignConfig
- from_text_vision_configs
## AlignTextConfig
[[autodoc]] AlignTextConfig
## AlignVisionConfig
[[autodoc]] AlignVisionConfig
## AlignProcessor
[[autodoc]] AlignProcessor
## AlignModel
[[autodoc]] AlignModel
- forward
- get_text_features
- get_image_features
## AlignTextModel
[[autodoc]] AlignTextModel
- forward
## AlignVisionModel
[[autodoc]] AlignVisionModel
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# BioGPT
## Overview
The BioGPT model was proposed in [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
*Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.*
## Usage tips
- BioGPT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
- BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text, as can be observed in the run_generation.py example script.
- The model can take the `past_key_values` (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see the past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/microsoft/BioGPT).
## Documentation resources
- [Causal language modeling task guide](../tasks/language_modeling)
## BioGptConfig
[[autodoc]] BioGptConfig
## BioGptTokenizer
[[autodoc]] BioGptTokenizer
- save_vocabulary
## BioGptModel
[[autodoc]] BioGptModel
- forward
## BioGptForCausalLM
[[autodoc]] BioGptForCausalLM
- forward
## BioGptForTokenClassification
[[autodoc]] BioGptForTokenClassification
- forward
## BioGptForSequenceClassification
[[autodoc]] BioGptForSequenceClassification
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# CLIPSeg
## Overview
The CLIPSeg model was proposed in [Image Segmentation using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen [CLIP](clip) model for zero-shot and one-shot image segmentation.

The abstract from the paper is the following:

*Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clipseg_architecture.png"
alt="drawing" width="600"/>
<small> CLIPSeg overview. Taken from the <a href="https://arxiv.org/abs/2112.10003">original paper.</a> </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/timojl/clipseg).
## Usage tips
- [`CLIPSegForImageSegmentation`] adds a decoder on top of [`CLIPSegModel`]. The latter is identical to [`CLIPModel`].
- [`CLIPSegForImageSegmentation`] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as `input_ids`), an image (provided to the model as `conditional_pixel_values`), or a custom conditional embedding (provided to the model as `conditional_embeddings`).
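A minimal sketch of text-prompted (zero-shot) segmentation, assuming the `CIDAS/clipseg-rd64-refined` checkpoint from the Hub (the image URL and prompts are illustrative):

```python
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

# one (image, text) pair per prompt; the image is repeated for each prompt
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.logits holds one low-resolution mask logit map per prompt
print(outputs.logits.shape)
```

Applying a sigmoid to `outputs.logits` and upsampling to the original image size yields per-prompt soft masks.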
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
<PipelineTag pipeline="image-segmentation"/>
- A notebook that illustrates [zero-shot image segmentation with CLIPSeg](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/CLIPSeg/Zero_shot_image_segmentation_with_CLIPSeg.ipynb).
## CLIPSegConfig
[[autodoc]] CLIPSegConfig
- from_text_vision_configs
## CLIPSegTextConfig
[[autodoc]] CLIPSegTextConfig
## CLIPSegVisionConfig
[[autodoc]] CLIPSegVisionConfig
## CLIPSegProcessor
[[autodoc]] CLIPSegProcessor
## CLIPSegModel
[[autodoc]] CLIPSegModel
- forward
- get_text_features
- get_image_features
## CLIPSegTextModel
[[autodoc]] CLIPSegTextModel
- forward
## CLIPSegVisionModel
[[autodoc]] CLIPSegVisionModel
- forward
## CLIPSegForImageSegmentation
[[autodoc]] CLIPSegForImageSegmentation
- forward
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Deformable DETR
## Overview
The Deformable DETR model was proposed in [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence and limited feature spatial resolution of the original [DETR](detr) by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference.

The abstract from the paper is the following:

*DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png"
alt="drawing" width="600"/>
<small> Deformable DETR architecture. Taken from the <a href="https://arxiv.org/abs/2010.04159">original paper</a>.</small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/fundamentalvision/Deformable-DETR).
## Usage tips
- Training Deformable DETR is equivalent to training the original [DETR](detr) model. See the [resources](#resources) section below for demo notebooks.
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.
<PipelineTag pipeline="object-detection"/>
- Demo notebooks regarding inference + fine-tuning on a custom dataset for [`DeformableDetrForObjectDetection`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Deformable-DETR).
- See also: [Object detection task guide](../tasks/object_detection).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## DeformableDetrImageProcessor
[[autodoc]] DeformableDetrImageProcessor
- preprocess
- post_process_object_detection
## DeformableDetrFeatureExtractor
[[autodoc]] DeformableDetrFeatureExtractor
- __call__
- post_process_object_detection
## DeformableDetrConfig
[[autodoc]] DeformableDetrConfig
## DeformableDetrModel
[[autodoc]] DeformableDetrModel
- forward
## DeformableDetrForObjectDetection
[[autodoc]] DeformableDetrForObjectDetection
- forward
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Efficient Inference on a Single GPU
In addition to this guide, relevant information can be found as well in [the guide for training on a single GPU](perf_train_gpu_one) and [the guide for inference on CPUs](perf_infer_cpu).
## Flash Attention 2
<Tip>
This feature is experimental and might considerably change in future versions. For instance, the Flash Attention 2 API might migrate to the `BetterTransformer` API in the near future.
</Tip>
Flash Attention 2 can considerably speed up the training and inference speed of transformer-based models. Flash Attention 2 has been introduced in the [official Flash Attention repository](https://github.com/Dao-AILab/flash-attention) by Tri Dao. The scientific paper on Flash Attention can be found [here](https://arxiv.org/abs/2205.14135).

Make sure to follow the installation guide on the repository mentioned above to properly install Flash Attention 2.
We natively support Flash Attention 2 for the following models:
- Llama
- Falcon
You can request to add Flash Attention 2 support for more models by raising an issue on GitHub, and even open a Pull Request to integrate the changes. The supported models can be used for inference and training, including training with padding tokens - which is currently not supported for the `BetterTransformer` API.
<Tip>
Flash Attention 2 can only be used when the model's dtype is `fp16` or `bf16`, and it runs only on NVIDIA GPU devices. Make sure to cast your model to the appropriate dtype and load it on a supported device before using this feature.
</Tip>
### Quick usage
To enable Flash Attention 2 in a model, add `attn_implementation="flash_attention_2"` to the arguments of `from_pretrained`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
And use it for generation or fine-tuning.
### Expected speedups
You can benefit from considerable speedups for fine-tuning and inference, especially for long sequences. However, since Flash Attention does not support computing attention scores with padding tokens, we must manually pad/unpad the attention scores for batched inference when the sequence contains padding tokens. This leads to a significant slowdown for batched generation with padding tokens.

To overcome this, one should use Flash Attention without padding tokens in the sequence during training (e.g., by packing a dataset, i.e., concatenating sequences until reaching the maximum sequence length). An example is provided [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py#L516).
Below is the expected speedup you can get for a simple forward pass on [tiiuae/falcon-7b](https://hf.co/tiiuae/falcon-7b) with a sequence length of 4096 and various batch sizes, without padding tokens:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/falcon-7b-inference-large-seqlen.png">
</div>
Below is the expected speedup you can get for a simple forward pass on [`meta-llama/Llama-7b-hf`](https://hf.co/meta-llama/Llama-7b-hf) with a sequence length of 4096 and various batch sizes, without padding tokens:
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-7b-inference-large-seqlen.png">
</div>
For sequences with padding tokens (training with padding tokens or generating with padding tokens), we need to unpad/pad the input sequences to correctly compute the attention scores. For relatively small sequence lengths, a pure forward pass creates overhead leading to a small speedup (in the example below, 30% of the input is filled with padding tokens):
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-small-seqlen-padding.png">
</div>
But for larger sequence lengths, you can benefit from interesting speedups for pure inference (and also training).

Note that Flash Attention makes the attention computation more memory efficient, meaning you can train with much larger sequence lengths without facing CUDA OOM issues. You can expect a memory reduction of up to 20x for large sequence lengths. Check out [the official Flash Attention repository](https://github.com/Dao-AILab/flash-attention) for more details.
<div style="text-align: center">
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/llama-2-large-seqlen-padding.png">
</div>
### Advanced usage
You can combine this feature with many existing features for model optimization. Check out a few examples below:
### Combining Flash Attention 2 and 8-bit models
You can combine this feature with 8-bit quantization:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
attn_implementation="flash_attention_2",
)
```
### Combining Flash Attention 2 and 4-bit models
You can combine this feature with 4-bit quantization:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
attn_implementation="flash_attention_2",
)
```
### Combining Flash Attention 2 and PEFT
You can use this feature together with PEFT for training adapters using Flash Attention 2 as the backbone attention module:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM
from peft import LoraConfig
model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_4bit=True,
attn_implementation="flash_attention_2",
)
lora_config = LoraConfig(
r=8,
task_type="CAUSAL_LM"
)
model.add_adapter(lora_config)
... # train your model
```
## BetterTransformer
[BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview) converts 🤗 Transformers models to use the PyTorch-native fastpath execution, which calls optimized kernels like Flash Attention under the hood.

BetterTransformer also supports faster inference on single and multi-GPU for text, image, and audio models.
<Tip>
Flash Attention can only be used for models using the fp16 or bf16 dtype. Make sure to cast your model to the appropriate dtype before using BetterTransformer.
</Tip>
### Encoder models
PyTorch-native [`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) attention fastpath, called BetterTransformer, can be used with Transformers through the integration in the [🤗 Optimum library](https://huggingface.co/docs/optimum/bettertransformer/overview).

PyTorch's attention fastpath allows speeding up inference through kernel fusions and the use of [nested tensors](https://pytorch.org/docs/stable/nested.html). Detailed benchmarks can be found in [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2).

After installing the [`optimum`](https://github.com/huggingface/optimum) package, to use Better Transformer during inference, the relevant internal modules are replaced by calling [`~PreTrainedModel.to_bettertransformer`]:
```python
model = model.to_bettertransformer()
```
The method [`~PreTrainedModel.reverse_bettertransformer`] allows going back to the canonical transformers modeling and should be used before saving the model:
```python
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")
```
Have a look at [this blog post](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2) to learn more about what is possible with the BetterTransformer API for encoder models.
### Decoder models
For text models, especially decoder-based models (GPT, T5, Llama, etc.), the BetterTransformer API converts all attention operations to use the [`torch.nn.functional.scaled_dot_product_attention` operator](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA), which is only available in PyTorch 2.0 and onwards.

To convert a model to BetterTransformer:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# convert the model to BetterTransformer
model.to_bettertransformer()
# Use it for training or inference
```
SDPA can also call [Flash Attention](https://arxiv.org/abs/2205.14135) kernels under the hood, depending on the hardware and the problem size. To enable Flash Attention, or to check whether it is available in a given setting (hardware, problem size), use [`torch.backends.cuda.sdp_kernel`](https://pytorch.org/docs/master/backends.html#torch.backends.cuda.sdp_kernel) as a context manager:
```diff
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")
# convert the model to BetterTransformer
model.to_bettertransformer()
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
+ with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If you see a bug with a traceback saying
```bash
RuntimeError: No available kernel. Aborting execution.
```
try using a PyTorch nightly version, which may have broader coverage for Flash Attention:
```bash
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```
Or make sure your model is correctly cast to float16 or bfloat16.
Have a look at [this detailed blogpost](https://pytorch.org/blog/out-of-the-box-acceleration/) to read more about what is possible with the `BetterTransformer` + SDPA API.
## `bitsandbytes` integration for FP4 mixed-precision inference
You can install `bitsandbytes` and benefit from easy model compression on GPUs. Using FP4 quantization you can expect to reduce the model size by up to 8x compared to its native full-precision version. Check out below how to get started.
<Tip>
Note that this feature can also be used in a multi GPU setup.
</Tip>
### Requirements [[requirements-for-fp4-mixedprecision-inference]]
- Latest `bitsandbytes` library
`pip install bitsandbytes>=0.39.0`
- Install latest `accelerate` from source
`pip install git+https://github.com/huggingface/accelerate.git`
- Install latest `transformers` from source
`pip install git+https://github.com/huggingface/transformers.git`
### Running FP4 models - single GPU setup - Quickstart
Run the code below to easily run a FP4 model on a single GPU:
```py
from transformers import AutoModelForCausalLM
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
Note that `device_map` is optional, but setting `device_map = 'auto'` is preferred for inference as it will dispatch the model efficiently on the available resources.
### Running FP4 models - multi GPU setup
The way to load your mixed 4-bit model on multiple GPUs is the same as in the single-GPU setup (same command):
```py
model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
```py
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
```
In this example, the first GPU will use 600MB of memory and the second one 1GB.
### Advanced usage
Check out the [quantization](main_classes/quantization) documentation page for more advanced usage of this method.
## `bitsandbytes` integration for Int8 mixed-precision matrix decomposition
<Tip>
Note that this feature can also be used in a multi GPU setup.
</Tip>
From the paper [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339), we support Hugging Face integration for all models in the Hub with a few lines of code. The method reduces `nn.Linear` size by 2 for `float16` and `bfloat16` weights and by 4 for `float32` weights, with close to no impact on quality, by operating on the outliers in half-precision.

Int8 mixed-precision matrix decomposition works by separating a matrix multiplication into two streams: (1) a systematic feature outlier stream matrix-multiplied in fp16 (0.01% of the values), (2) a regular stream of int8 matrix multiplication (99.9% of the values). With this method, int8 inference with no predictive degradation is possible for very large models.
For more details regarding the method, check out the [paper](https://arxiv.org/abs/2208.07339) or our [blog post about the integration](https://huggingface.co/blog/hf-bitsandbytes-integration).
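To make the two-stream decomposition concrete, here is a small NumPy illustration of the idea (this is not the actual `bitsandbytes` kernels; the 2.0 outlier threshold and the per-tensor absmax quantization are simplifying assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8)).astype(np.float16)  # activations
W = rng.normal(size=(8, 3)).astype(np.float16)  # weights

# feature dimensions containing an "outlier" (large magnitude) value
outlier_cols = np.any(np.abs(X) > 2.0, axis=0)

# stream 1: outlier columns multiplied in higher precision (~0.01% in real models)
out_fp16 = X[:, outlier_cols].astype(np.float32) @ W[outlier_cols].astype(np.float32)

# stream 2: remaining columns quantized to int8 via absmax scaling
Xr, Wr = X[:, ~outlier_cols], W[~outlier_cols]
sx = np.abs(Xr).max() / 127.0
sw = np.abs(Wr).max() / 127.0
Xq = np.round(Xr / sx).astype(np.int8)
Wq = np.round(Wr / sw).astype(np.int8)
out_int8 = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)

# recombine the two streams and compare against the full-precision result
approx = out_fp16 + out_int8
exact = X.astype(np.float32) @ W.astype(np.float32)
print(np.max(np.abs(approx - exact)))
```

Keeping the few outlier dimensions in higher precision is what preserves accuracy: quantizing them to int8 as well would dominate the absmax scale and destroy the resolution of all other values.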

Note that you would require a GPU to run mixed-8bit models, as the kernels have been compiled for GPUs only. Make sure that you have enough GPU memory to store a quarter (or half if your model weights are in half precision) of the model before using this feature.

Below are some notes to help you use this module, or follow the demos on [Google Colab](#colab-demos).
### Requirements [[requirements-for-int8-mixedprecision-matrix-decomposition]]
- If you have `bitsandbytes<0.37.0`, make sure you run on NVIDIA GPUs that support 8-bit tensor cores (Turing, Ampere or newer architectures - e.g. T4, RTX20s, RTX30s, A40-A100). For `bitsandbytes>=0.37.0`, all GPUs should be supported.
- Install the correct version of `bitsandbytes` by running:
`pip install bitsandbytes>=0.31.5`
- Install `accelerate`:
`pip install accelerate>=0.12.0`
### Running mixed-Int8 models - single GPU setup
After installing the required libraries, the way to load your mixed 8-bit model is as follows:
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=BitsAndBytesConfig(load_in_8bit=True))
```
For text generation, we recommend:

* using the model's `generate()` method instead of the `pipeline()` function. Although inference is possible with the `pipeline()` function, it is not optimized for mixed-8bit models and will be slower than using the `generate()` method. Moreover, some sampling strategies, like nucleus sampling, are not supported by the `pipeline()` function for mixed-8bit models.
* placing all inputs on the same device as the model.

Here is a simple example:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=BitsAndBytesConfig(load_in_8bit=True))
prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
### Running mixed-int8 models - multi GPU setup
The way to load your mixed 8-bit model on multiple GPUs is as follows (same command as in the single-GPU setup):
```py
model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=BitsAndBytesConfig(load_in_8bit=True))
```
But you can control the GPU RAM you want to allocate on each GPU using `accelerate`. Use the `max_memory` argument as follows:
```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
In this example, the first GPU will use 1GB of memory and the second 2GB.
### Colab demos
With this method you can infer on models that were not possible to infer on a Google Colab before. Check out the demo for running T5-11b (42GB in fp32) on Google Colab using 8-bit quantization:
[](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing)
Or this demo for BLOOM-3B:
[](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
## Advanced usage: mixing FP4 (or Int8) and BetterTransformer
You can combine the different methods described above to get the best performance for your model. For example, you can use BetterTransformer with FP4 mixed-precision inference + Flash Attention:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)
input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<!--
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Preprocess
[[open-in-colab]]
Before you can train a model on a dataset, it needs to be preprocessed into the input format the model expects.
Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors.
๐ค Transformers provides a set of preprocessing classes to help prepare your data for the model.
In this tutorial, you'll learn that:
* For text, use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
* For speech and audio, use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors.
* For image inputs, use an [ImageProcessor](./main_classes/image) to convert images into tensors.
* For multimodal inputs, use a [Processor](./main_classes/processors) to combine a tokenizer with a feature extractor or image processor.
<Tip>
`AutoProcessor` always works and automatically selects the correct class for the model you're using,
whether you're using a tokenizer, image processor, feature extractor, or processor.
</Tip>
Before you begin, install ๐ค Datasets so you can load some datasets to experiment with:
```bash
pip install datasets
```
## Natural Language Processing
<Youtube id="Yffk5aydLzg"/>
The main tool for preprocessing text data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers and then into tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
<Tip>
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus and uses the same corresponding token-to-index mapping (usually referred to as the *vocab*) as during pretraining.
</Tip>
Get started by loading a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method. This downloads the *vocab* the model was pretrained with:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
```
Then pass your text to the tokenizer:
```py
>>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
>>> print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
The tokenizer returns a dictionary with three important items:
* [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence.
* [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not.
* [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence.
Return your input by decoding the `input_ids`:
```python
>>> tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
```
As you can see, the tokenizer added two special tokens, `CLS` (classifier) and `SEP` (separator), to the sentence.
Not all models need special tokens, but if they do, the tokenizer automatically adds them for you.
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
```
### Pad
Sentences aren't always the same length, which can be an issue because tensors (the model inputs) need to have a uniform shape.
Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences.
Set the `padding` parameter to `True` to pad the shorter sequences in the batch to match the longest sequence:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
The first and third sentences are now padded with `0`'s because they are shorter.
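The dynamic padding shown above can be sketched in plain Python. `pad_batch` below is a hypothetical helper for illustration, not part of the tokenizer API:

```python
def pad_batch(sequences, pad_id=0):
    """Pad token-id lists to the longest length in the batch and build attention masks."""
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * n_pad)             # real tokens, then padding
        attention_mask.append([1] * len(seq) + [0] * n_pad)  # 1 = attend, 0 = ignore
    return {"input_ids": input_ids, "attention_mask": attention_mask}

batch = pad_batch([[101, 2023, 102], [101, 2023, 2003, 1037, 102]])
# batch["input_ids"][0] is [101, 2023, 102, 0, 0]; its mask ends in zeros.
```

The attention mask is what lets the model ignore the padding positions during attention.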
### Truncation
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.
Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```
<Tip>
Check out the [Padding and truncation](./pad_truncation) concept guide to learn more about the different padding and truncation arguments.
</Tip>
### Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the `return_tensors` parameter to either `pt` for PyTorch, or `tf` for TensorFlow:
<frameworkcontent>
<pt>
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
</pt>
<tf>
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about second breakfast, Pip.",
... "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
dtype=int32)>,
'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```
</tf>
</frameworkcontent>
## Audio
For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model.
A feature extractor is designed to extract features from raw audio data and convert them into tensors.
Load the [PolyAI/minds14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the ๐ค [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:
```python
>>> from datasets import load_dataset, Audio
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file:
```py
>>> dataset[0]["audio"]
{'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
```
This returns three items:
* `array` is the speech signal loaded, and potentially resampled, as a 1D array.
* `path` points to the location of the audio file.
* `sampling_rate` refers to how many data points in the speech signal are measured per second.
For this tutorial, you'll use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model.
Take a look at the model card, and you'll learn that Wav2Vec2 is pretrained on 16kHz sampled speech audio.
It's important that your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model.
If your data's sampling rate isn't the same, then you need to resample your data.
1. Use ๐ค Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz:
```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```
2. Call the `audio` column again to resample the audio file:
```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
```
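Conceptually, upsampling maps each new sample position back onto the source signal and interpolates between neighboring samples. A naive linear-interpolation sketch (real pipelines such as the one ๐ค Datasets uses apply filtered resampling, which avoids aliasing):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a 1D signal by linear interpolation (illustration only)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # fractional position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Doubling the rate doubles the number of samples.
upsampled = resample_linear([0.0, 1.0, 0.0, -1.0], src_rate=8000, dst_rate=16000)
```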
Next, load a feature extractor to normalize and pad the input. When padding textual data, a `0` is added for shorter sequences. The same idea applies to audio data: the feature extractor adds a `0` - interpreted as silence - to `array`.
Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]:
```python
>>> from transformers import AutoFeatureExtractor
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```
Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur.
```python
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04, 2.7506407e-03, 2.8015103e-03, ...,
5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```
Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:
```python
>>> dataset[0]["audio"]["array"].shape
(173398,)
>>> dataset[1]["audio"]["array"].shape
(106496,)
```
Create a function to preprocess the dataset so the audio samples have the same length. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
```py
>>> def preprocess_function(examples):
... audio_arrays = [x["array"] for x in examples["audio"]]
... inputs = feature_extractor(
... audio_arrays,
... sampling_rate=16000,
... padding=True,
... max_length=100000,
... truncation=True,
... )
... return inputs
```
Apply the `preprocess_function` to the first few examples in the dataset:
```python
>>> processed_dataset = preprocess_function(dataset[:5])
```
The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!
```py
>>> processed_dataset["input_values"][0].shape
(100000,)
>>> processed_dataset["input_values"][1].shape
(100000,)
```
## Computer Vision
For computer vision tasks, you'll need an [image processor](main_classes/image_processor) to prepare your dataset for the model.
Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include, but are not limited to, resizing, normalizing, color channel correction, and converting images to tensors.
<Tip>
Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes:
* Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
* Image preprocessing guarantees that the images match the model's expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained.
You can use any library you like for image augmentation. For image preprocessing, use the `ImageProcessor` associated with the model.
</Tip>
Load the [food101](https://huggingface.co/datasets/food101) dataset (see the ๐ค [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets:
<Tip>
Use ๐ค Datasets' `split` parameter to only load a small sample from the training split since the dataset is quite large!
</Tip>
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("food101", split="train[:100]")
```
Next, take a look at the image with ๐ค Datasets' [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature:
```python
>>> dataset[0]["image"]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/>
</div>
Load the image processor with [`AutoImageProcessor.from_pretrained`]:
```py
>>> from transformers import AutoImageProcessor
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
```
1. First, let's add some image augmentation. You can use any library you prefer, but in this tutorial we'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module. If you're interested in using another data augmentation library, learn how in the [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) or [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb).
Here we use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain together a couple of transforms - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html).
For resizing, you can get the image size requirements from the `image_processor`. Some models expect an exact height and width, while others only define the `shortest_edge`.
```py
>>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
>>> size = (
... image_processor.size["shortest_edge"]
... if "shortest_edge" in image_processor.size
... else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
```
2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input.
`ImageProcessor` can take care of normalizing the images and generating the appropriate tensors.
Create a function that combines image augmentation and image preprocessing for a batch of images and generates `pixel_values`:
```python
>>> def transforms(examples):
... images = [_transforms(img.convert("RGB")) for img in examples["image"]]
... examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
... return examples
```
<Tip>
In the example above, we set `do_resize=False` because we have already resized the images in the image augmentation transformation,
and leveraged the `size` attribute from the appropriate `image_processor`. If you do not resize images during image augmentation, leave this parameter out.
By default, `ImageProcessor` will handle the resizing.
If you wish to normalize images as a part of the augmentation transformation, use the `image_processor.image_mean` and `image_processor.image_std` values.
</Tip>
3. Then use ๐ค Datasets' [`set_transform`](https://huggingface.co/docs/datasets/process#format-transform) to apply the transforms on the fly:
```python
>>> dataset.set_transform(transforms)
```
4. Now when you access the image, you'll notice the image processor has added `pixel_values`. You can pass your processed dataset to the model now!
```python
>>> dataset[0].keys()
```
Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/>
</div>
<Tip>
For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, `ImageProcessor`
offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes or segmentation maps.
</Tip>
### Pad
In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training time.
This may cause images to be of different sizes within a batch. You can use [`DetrImageProcessor.pad`] from [`DetrImageProcessor`]
and define a custom `collate_fn` to batch the images together:
```py
>>> def collate_fn(batch):
... pixel_values = [item["pixel_values"] for item in batch]
... encoding = image_processor.pad(pixel_values, return_tensors="pt")
... labels = [item["labels"] for item in batch]
... batch = {}
... batch["pixel_values"] = encoding["pixel_values"]
... batch["pixel_mask"] = encoding["pixel_mask"]
... batch["labels"] = labels
... return batch
```
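Conceptually, `pad` resizes every image in the batch to the largest height and width present and records which pixels are real in a `pixel_mask`. A pure-Python sketch of that idea (nested lists standing in for single-channel image tensors, for brevity):

```python
def pad_images(images, pad_value=0.0):
    """Pad 2D images (lists of rows) to the batch's max height/width, with pixel masks."""
    max_h = max(len(img) for img in images)
    max_w = max(len(row) for img in images for row in img)
    padded, masks = [], []
    for img in images:
        h, w = len(img), len(img[0])
        # Extend each row to max_w, then append all-padding rows up to max_h.
        padded.append([row + [pad_value] * (max_w - w) for row in img]
                      + [[pad_value] * max_w for _ in range(max_h - h)])
        # Mask is 1 where the pixel comes from the original image, 0 where padded.
        masks.append([[1] * w + [0] * (max_w - w) for _ in range(h)]
                     + [[0] * max_w for _ in range(max_h - h)])
    return padded, masks

# A 1x2 image and a 2x1 image are both padded to 2x2.
padded, masks = pad_images([[[1.0, 1.0]], [[2.0], [2.0]]])
```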
## Multi Modal
For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects, such as a tokenizer and a feature extractor.
Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the ๐ค [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
```python
>>> from datasets import load_dataset
>>> lj_speech = load_dataset("lj_speech", split="train")
```
For ASR, you're mainly focused on `audio` and `text`, so you can remove the other columns:
```python
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```
Now take a look at the `audio` and `text` columns:
```python
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```
Remember that you should always [resample](preprocessing#audio) your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model!
```py
>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```
Load the processor with [`AutoProcessor.from_pretrained`]:
```py
>>> from transformers import AutoProcessor
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```
1. Create a function to process the audio data contained in `array` to `input_values`, and tokenize `text` to `labels`:
```py
>>> def prepare_dataset(example):
... audio = example["audio"]
... example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
... return example
```
2. Apply the `prepare_dataset` function to a sample:
```py
>>> prepare_dataset(lj_speech[0])
```
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Multiple choice
[[open-in-colab]]
A multiple choice task is similar to question answering, except several candidate answers are provided along with a context, and the model is trained to select the correct answer.
This guide will show you how to:
1. Fine-tune [BERT](https://huggingface.co/google-bert/bert-base-uncased) on the `regular` configuration of the [SWAG](https://huggingface.co/datasets/swag) dataset to select the best answer given multiple options and some context.
2. Use your fine-tuned model for inference.
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install transformers datasets evaluate
```
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SWAG dataset
Start by loading the `regular` configuration of the SWAG dataset from the ๐ค Datasets library:
```py
>>> from datasets import load_dataset
>>> swag = load_dataset("swag", "regular")
```
Then take a look at an example:
```py
>>> swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
```
While it looks like there are a lot of fields here, it is actually pretty straightforward:
- `sent1` and `sent2`: these fields show how a sentence starts, and if you put the two together, you get the `startphrase` field.
- `ending`: suggests a possible ending for how a sentence can end, but only one of them is correct.
- `label`: identifies the correct sentence ending.
## Preprocess
The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```
The preprocessing function you want to create needs to:
1. Make four copies of the `sent1` field and combine each of them with `sent2` to recreate how a sentence starts.
2. Combine `sent2` with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has corresponding `input_ids`, `attention_mask`, and `labels` fields.
```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]
>>> def preprocess_function(examples):
... first_sentences = [[context] * 4 for context in examples["sent1"]]
... question_headers = examples["sent2"]
... second_sentences = [
... [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
... ]
... first_sentences = sum(first_sentences, [])
... second_sentences = sum(second_sentences, [])
... tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
... return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```
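The flatten/unflatten step above can be illustrated with plain lists: 2 examples with 4 endings each become 8 flat sequences for the tokenizer, and the tokenizer's output is then regrouped 4 at a time, one group per original example:

```python
# Two examples, each with four candidate sequences (strings stand in for token lists).
grouped = [["a0", "a1", "a2", "a3"], ["b0", "b1", "b2", "b3"]]

flat = sum(grouped, [])  # 8 sequences - this flat list is what the tokenizer sees

# Regroup in chunks of 4, recovering one entry per original example.
regrouped = [flat[i : i + 4] for i in range(0, len(flat), 4)]
# regrouped == grouped
```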
To apply the preprocessing function over the entire dataset, use the ๐ค Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:
```py
>>> tokenized_swag = swag.map(preprocess_function, batched=True)
```
๐ค Transformers doesn't have a data collator for multiple choice, so you'll need to adapt [`DataCollatorWithPadding`] to create a batch of examples. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
`DataCollatorForMultipleChoice` flattens all the model inputs, applies padding, and then unflattens the results:
<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="pt",
... )
... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
... batch["labels"] = torch.tensor(labels, dtype=torch.int64)
... return batch
```
</pt>
<tf>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import tensorflow as tf
>>> @dataclass
... class DataCollatorForMultipleChoice:
... """
... Data collator that will dynamically pad the inputs for multiple choice received.
... """
... tokenizer: PreTrainedTokenizerBase
... padding: Union[bool, str, PaddingStrategy] = True
... max_length: Optional[int] = None
... pad_to_multiple_of: Optional[int] = None
... def __call__(self, features):
... label_name = "label" if "label" in features[0].keys() else "labels"
... labels = [feature.pop(label_name) for feature in features]
... batch_size = len(features)
... num_choices = len(features[0]["input_ids"])
... flattened_features = [
... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
... ]
... flattened_features = sum(flattened_features, [])
... batch = self.tokenizer.pad(
... flattened_features,
... padding=self.padding,
... max_length=self.max_length,
... pad_to_multiple_of=self.pad_to_multiple_of,
... return_tensors="tf",
... )
... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
... return batch
```
</tf>
</frameworkcontent>
## Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the ๐ค [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) metric (see the ๐ค Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):
```py
>>> import evaluate
>>> accuracy = evaluate.load("accuracy")
```
Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the accuracy:
```py
>>> import numpy as np
>>> def compute_metrics(eval_pred):
... predictions, labels = eval_pred
... predictions = np.argmax(predictions, axis=1)
... return accuracy.compute(predictions=predictions, references=labels)
```
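The argmax-and-compare step inside `compute_metrics` can be sketched without NumPy, using made-up logits for three examples with four choices each:

```python
def argmax(row):
    """Index of the largest value in a list (what np.argmax does per row)."""
    return max(range(len(row)), key=row.__getitem__)

# Hypothetical logits for 3 examples x 4 choices, plus the gold labels.
logits = [[0.1, 2.0, 0.3, 0.2], [1.5, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 3.0]]
labels = [1, 0, 2]

predictions = [argmax(row) for row in logits]  # pick the highest-scoring choice
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
# 2 of the 3 predictions match the labels here.
```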
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.
## Train
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with fine-tuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
You're ready to start training your model now! Load BERT with [`AutoModelForMultipleChoice`]:
```py
>>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to fine-tune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_swag_model",
... eval_strategy="epoch",
... save_strategy="epoch",
... load_best_model_at_end=True,
... learning_rate=5e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_swag["train"],
... eval_dataset=tokenized_swag["validation"],
... tokenizer=tokenizer,
... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
... compute_metrics=compute_metrics,
... )
>>> trainer.train()
```
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with fine-tuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To fine-tune a model in TensorFlow, start by setting up an optimizer function, a learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_train_epochs = 2
>>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
>>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```
Then you can load BERT with [`TFAutoModelForMultipleChoice`]:
```py
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_swag["train"],
... shuffle=True,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_swag["validation"],
... shuffle=False,
... batch_size=batch_size,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
>>> model.compile(optimizer=optimizer) # No loss argument!
```
The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).
Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:
```py
>>> from transformers.keras_callbacks import KerasMetricCallback
>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```
Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="my_awesome_model",
... tokenizer=tokenizer,
... )
```
Then bundle your callbacks together:
```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>
<Tip>
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).
</Tip>
# Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
```py
>>> prompt = "France has a bread law, Le Dรฉcret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```
<frameworkcontent>
<pt>
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some `labels`:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```
Pass your inputs and labels to the model and return the `logits`:
```py
>>> from transformers import AutoModelForMultipleChoice
>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```
Get the class with the highest probability:
```py
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```
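If you want the answer text rather than the index, you can map the predicted class back to the candidate list. This is a small illustrative sketch; the candidate strings and predicted index are copied from the steps above:

```python
# Map the predicted index back to the candidate answer text.
candidates = [
    "The law does not apply to croissants and brioche.",  # candidate1
    "The law applies to baguettes.",                      # candidate2
]
predicted_class = 0  # as returned by logits.argmax().item() above
print(candidates[predicted_class])
# The law does not apply to croissants and brioche.
```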
</pt>
<tf>
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```
Pass your inputs to the model and return the `logits`:
```py
>>> import tensorflow as tf
>>> from transformers import TFAutoModelForMultipleChoice
>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```
Get the class with the highest probability:
```py
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
</tf>
</frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# XLA Integration for TensorFlow Models
[[open-in-colab]]
Accelerated Linear Algebra, or XLA, is a compiler that accelerates the runtime of TensorFlow models. From the [official documentation](https://www.tensorflow.org/xla): XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes.
Using XLA in TensorFlow is simple - it comes packaged inside the `tensorflow` library, and it can be triggered with the `jit_compile` argument in any graph-creating function such as [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs). When using Keras methods like `fit()` and `predict()`, you can enable XLA simply by passing the `jit_compile` argument to `model.compile()`. However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary `tf.function`.
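As a minimal sketch of the Keras route (using a toy model rather than a Transformers one), passing `jit_compile=True` to `model.compile()` is all that's needed:

```python
import tensorflow as tf

# A toy model, just to illustrate the jit_compile flag on compile().
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)

# jit_compile=True asks Keras to compile the fit()/predict() steps with XLA.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", jit_compile=True)

preds = model.predict(tf.random.normal((4, 10)), verbose=0)
print(preds.shape)  # (4, 5)
```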
Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation models such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5), and [OPT](https://huggingface.co/docs/transformers/model_doc/opt), as well as speech processing models such as [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper).
While the exact amount of speed-up is very much model-dependent, we have observed a speed-up of roughly 100x for the TensorFlow text generation models inside 🤗 Transformers. This document explains how you can use XLA for these models to get the maximum amount of performance. We'll also provide links to additional resources if you're interested in learning more about the benchmarks and our design philosophy behind the XLA integration.
## Running TF functions with XLA
Let's consider the following model in TensorFlow:
```py
import tensorflow as tf
model = tf.keras.Sequential(
[tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
```
The above model accepts inputs having a dimension of `(10, )`. We can use the model to run a forward pass like so:
```py
# Generate random inputs for the model.
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim))
# Run a forward pass.
_ = model(random_inputs)
```
To run the forward pass with an XLA-compiled function, we'd need to do the following:
```py
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
```
The default `call()` function of the `model` is used to compile the XLA graph. But if there's any other model function you want to compile into XLA, that's also possible:
```py
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
```
## Running a TF text generation model with XLA from ๐ค Transformers
To enable XLA-accelerated generation within 🤗 Transformers, you need a recent version of `transformers` installed. You can install it by running:
```bash
pip install transformers --upgrade
```
And then you can run the following code:
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
# Will error if the minimal version of Transformers is not installed.
from transformers.utils import check_min_version
check_min_version("4.21.0")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]
# One line to create an XLA generation function
xla_generate = tf.function(model.generate, jit_compile=True)
tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
# Generated -- TensorFlow is an open-source, open-source, distributed-source application # framework for the
```
As you can see, enabling XLA on `generate()` is just a single line of code. The rest of the code remains unchanged. However, there are a couple of XLA-specific gotchas in the above code snippet. You need to be aware of them to realize the speed-ups that XLA can bring. We discuss these in the following section.
## Gotchas to be aware of
When you execute an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming. This process is known as ["tracing"](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing).
You might notice that the generation time is not fast at first. Successive calls of `xla_generate()` (or any other XLA-enabled function) won't have to infer the computation graph again, provided the inputs to the function follow the same shape the computation graph was initially built with. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with modalities that have variable input shapes (e.g., text).
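The retracing behavior is easy to see with a toy `tf.function` (an illustrative sketch, unrelated to the generation code in this guide): calls that reuse a previously seen input shape skip tracing, while a new shape triggers it again.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def double(x):
    return x * 2

a = double(tf.ones((2, 3)))   # first call: traced and XLA-compiled
b = double(tf.zeros((2, 3)))  # same shape: reuses the compiled graph
c = double(tf.ones((4, 3)))   # new shape: traced and compiled again
print(a.shape, c.shape)  # (2, 3) (4, 3)
```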
To ensure `xla_generate()` always operates with the same input shapes, you can specify the `padding` arguments when calling the tokenizer.
```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
input_string = ["TensorFlow is"]
xla_generate = tf.function(model.generate, jit_compile=True)
# Here, we call the tokenizer with padding options.
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
```
This way, you can ensure that the inputs to `xla_generate()` always have the shape it was traced with, leading to speed-ups in generation time. You can verify this with the code below:
```py
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
xla_generate = tf.function(model.generate, jit_compile=True)
for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
start = time.time_ns()
generated_tokens = xla_generate(**tokenized_input, num_beams=2)
end = time.time_ns()
print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```
On a Tesla T4 GPU, you can expect output like the following:
```bash
Execution time -- 30819.6 ms
Execution time -- 79.0 ms
Execution time -- 78.9 ms
```
The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing and thus slow down generation.
We didn't cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases.
## Additional Resources
Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general.
* [This Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) provides an interactive demonstration if you'd like to fiddle with the XLA-compatible encoder-decoder (like [T5](https://huggingface.co/docs/transformers/model_doc/t5)) and decoder-only (like [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)) text generation models.
* [This blog post](https://huggingface.co/blog/tf-xla-generate) provides an overview of the comparison benchmarks for XLA-compatible models, along with a friendly introduction to XLA in TensorFlow.
* [This blog post](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
* Recommended posts for learning more about XLA and TensorFlow graphs in general:
    * [XLA: Optimizing Compiler for Machine Learning](https://www.tensorflow.org/xla)
    * [Introduction to graphs and `tf.function`](https://www.tensorflow.org/guide/intro_to_graphs)
    * [Better performance with `tf.function`](https://www.tensorflow.org/guide/function)
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โ ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Templates for Chat Models[[templates-for-chat-models]]
## Introduction[[introduction]]
An increasingly common use case for LLMs is **chat**. In a chat context, rather than continuing a single string of text (as with a standard language model), the model instead continues a conversation that consists of one or more **messages**, each of which includes a **role** such as "user" or "assistant", as well as message text.
Much like tokenization, different models expect very different input formats for chat. This is why we added **chat templates** as a feature. Chat templates are part of the tokenizer. They specify how to convert a conversation, represented as a list of messages, into a single tokenizable string in the format that the model expects.
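Conceptually, a chat template is just a recipe for flattening a message list into one string. The toy sketch below mimics the ChatML format in plain Python; real templates are Jinja strings stored on the tokenizer, not Python functions, so treat this purely as an illustration:

```python
# Illustrative only: flatten a list of {"role", "content"} messages into a
# single ChatML-style string, the job a real chat template performs.
def apply_chatml_template(messages, add_generation_prompt=False):
    text = ""
    for message in messages:
        text += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues with a bot reply.
        text += "<|im_start|>assistant\n"
    return text

chat = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
]
print(apply_chatml_template(chat, add_generation_prompt=True))
```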
Let's make this concrete with a quick example using the `BlenderBot` model. BlenderBot has an extremely simple default template, which mostly just adds whitespace between rounds of dialogue:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
" Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>"
```
Notice how the entire chat is condensed into a single string. If we use `tokenize=True`, which is the default, that string will also be tokenized for us. To see a more complex template in action, let's use the `mistralai/Mistral-7B-Instruct-v0.1` model.
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
>>> chat = [
... {"role": "user", "content": "Hello, how are you?"},
... {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
... {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]
>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```
Note that this time, the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of user messages (but not assistant messages!). Mistral-instruct was trained with these tokens, but BlenderBot was not.
## How do I use chat templates?[[how-do-i-use-chat-templates]]
As you can see in the example above, chat templates are easy to use. Simply build a list of messages with `role` and `content` keys, and then pass it to the [`~PreTrainedTokenizer.apply_chat_template`] method. Once you do that, you'll get output that's ready to go! When using chat templates as input for model generation, it's also a good idea to use `add_generation_prompt=True` to add a [generation prompt](#what-are-generation-prompts).
Here's an example of preparing input for `model.generate()`, using the `Zephyr` assistant model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # You may want to use bfloat16 and/or move to GPU here
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(tokenized_chat[0]))
```
This will yield a string in the input format that Zephyr expects:
```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
```
Now that our input is formatted correctly for Zephyr, we can use the model to generate a response to the user's question:
```python
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
This will yield:
```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
```
Arr, 'twas easy after all!
## Is there an automated pipeline for chat?[[is-there-an-automated-pipeline-for-chat]]
Yes, there is! Our text generation pipelines support chat inputs, making it easy to use chat models. In the past, we used a dedicated "ConversationalPipeline" class, but this functionality has now been merged into the [`TextGenerationPipeline`]. Let's try the `Zephyr` example again, but this time using a pipeline:
```python
from transformers import pipeline
pipe = pipeline("text-generation", "HuggingFaceH4/zephyr-7b-beta")
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(pipe(messages, max_new_tokens=128)[0]['generated_text'][-1])  # Print the assistant's response
```
```text
{'role': 'assistant', 'content': "Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all."}
```
The pipeline takes care of all the details of tokenization and calling `apply_chat_template` for you - once the model has a chat template, all you need to do is initialize the pipeline and pass it the list of messages!
## "์์ฑ ํ๋กฌํํธ"๋ ๋ฌด์์ธ๊ฐ์?[[what-are-generation-prompts]]
You may have noticed that the `apply_chat_template` method has an `add_generation_prompt` argument. This argument tells the template to add tokens that indicate the start of a bot response. For example, consider the following chat:
```python
messages = [
{"role": "user", "content": "Hi there!"},
{"role": "assistant", "content": "Nice to meet you!"},
{"role": "user", "content": "Can I ask a question?"}
]
```
Here's what this looks like without a generation prompt, using the ChatML template we saw in the Zephyr example:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```
And here's what it looks like **with** a generation prompt:
```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
Note that this time, we've added the tokens that indicate the start of a bot response. This ensures that when the model generates text, it will write a bot response instead of continuing the user's message. Remember, chat models are still just language models - chat is just a special kind of text to them! You need to guide them with appropriate control tokens so they know what they're supposed to be doing.
Not all models require generation prompts. Some models, like BlenderBot and LLaMA, don't have any special tokens before bot responses. In these cases, the `add_generation_prompt` argument will have no effect. The exact effect of `add_generation_prompt` depends on the template being used.
## Can I use chat templates in training?[[can-i-use-chat-templates-in-training]]
Yes! This is a good way to ensure that the chat template matches the tokens the model sees during training. We recommend applying the chat template as a preprocessing step for your dataset. After this, you can simply continue like any other language model training task. When training, you should usually set `add_generation_prompt=False`, because the added tokens that prompt an assistant response will not be helpful during training. Let's see an example:
```python
from transformers import AutoTokenizer
from datasets import Dataset
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat1 = [
{"role": "user", "content": "Which is bigger, the moon or the sun?"},
{"role": "assistant", "content": "The sun."}
]
chat2 = [
{"role": "user", "content": "Which is bigger, a virus or a bacterium?"},
{"role": "assistant", "content": "A bacterium."}
]
dataset = Dataset.from_dict({"chat": [chat1, chat2]})
dataset = dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})
print(dataset['formatted_chat'][0])
```
And we get:
```text
<|user|>
Which is bigger, the moon or the sun?</s>
<|assistant|>
The sun.</s>
```
From here, just continue training like you would with a standard language modelling task, using the `formatted_chat` column.
<Tip>
If you format text with `apply_chat_template(tokenize=False)` and then tokenize it in a separate step, you should set the argument `add_special_tokens=False`. If you use `apply_chat_template(tokenize=True)`, you don't need to worry about this!
By default, some tokenizers add special tokens like `<bos>` and `<eos>` to the text they tokenize. Chat templates should always include all the special tokens they need, so adding extra special tokens with the default `add_special_tokens=True` can result in incorrect or duplicated special tokens, which will hurt model performance.
</Tip>
## Advanced: Extra inputs to chat templates[[advanced-extra-inputs-to-chat-templates]]
The only argument that `apply_chat_template` requires is `messages`. However, you can pass any keyword argument to `apply_chat_template` and it will be accessible inside the template. This gives you a lot of freedom to use chat templates for many things. There are no restrictions on the names or the format of these arguments - you can pass strings, lists, dicts, and so on.
That said, there are some common use-cases for these extra arguments, such as passing tools for function calling, or documents for retrieval-augmented generation. For these common cases, we have some opinionated recommendations about the names and formats of these arguments, described in the sections below. We encourage model authors to make their chat templates compatible with this format, to make it easy to transfer tool-calling code between models.
## Advanced: Tool use / function calling[[advanced-tool-use--function-calling]]
"๋๊ตฌ ์ฌ์ฉ" LLM์ ๋ต๋ณ์ ์์ฑํ๊ธฐ ์ ์ ์ธ๋ถ ๋๊ตฌ๋ก์ ํจ์๋ฅผ ํธ์ถํ ์ ์์ต๋๋ค. ๋๊ตฌ ์ฌ์ฉ ๋ชจ๋ธ์ ๋๊ตฌ๋ฅผ ์ ๋ฌํ ๋๋ ๋จ์ํ ํจ์ ๋ชฉ๋ก์ `tools` ์ธ์๋ก ์ ๋ฌํ ์ ์์ต๋๋ค:
```python
from datetime import datetime

def current_time():
    """Get the current local time as a string."""
    return str(datetime.now())

def multiply(a: float, b: float):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return a * b
tools = [current_time, multiply]
model_input = tokenizer.apply_chat_template(
messages,
tools=tools
)
```
For this to work correctly, you should write your functions in the format above, so that they can be parsed correctly as tools. Specifically, you should follow these rules:
- The function should have a descriptive name.
- Every argument must have a type hint.
- The function must have a docstring in the standard Google style (in other words, an initial function description followed by an `Args:` block that describes the arguments).
- Do not include types in the `Args:` block. In other words, write `a: The first number to multiply`, not `a (int): The first number to multiply`. Type hints should go in the function header instead.
- The function can have a return type and a `Returns:` block in the docstring. However, these are optional because most tool-use models ignore them.
### Passing tool results to the model[[passing-tool-results-to-the-model]]
The sample code above is enough to list the available tools for your model, but what happens if it actually wants to use one? If that happens, you should:
1. Parse the model's output to get the tool name(s) and arguments.
2. Add the model's tool call(s) to the conversation.
3. Call the corresponding function(s) with those arguments.
4. Add the result(s) to the conversation.
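Step 1 depends on the model's output format. As a sketch, assuming the Hermes-style format where the model wraps a JSON object in `<tool_call>` tags (as in the example below), the parsing could look like this:

```python
import json
import re

# Hypothetical parser: extract {"name", "arguments"} dicts from
# <tool_call>...</tool_call> blocks in the model's raw output.
def parse_tool_calls(text):
    calls = []
    for payload in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        call = json.loads(payload)
        calls.append({"name": call["name"], "arguments": call["arguments"]})
    return calls

out = '<tool_call>\n{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}\n</tool_call>'
print(parse_tool_calls(out))
# [{'name': 'get_current_temperature', 'arguments': {'location': 'Paris, France', 'unit': 'celsius'}}]
```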
### A complete tool use example[[a-complete-tool-use-example]]
Let's walk through a tool use example, step by step. For this example, we will use an 8B `Hermes-2-Pro` model, as it is one of the highest-performing tool-use models in its size class. If you have the memory, you can consider using a larger model instead, like [Command-R](https://huggingface.co/CohereForAI/c4ai-command-r-v01) or [Mixtral-8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1), both of which also support tool use and offer even stronger performance.
First, let's load our model and tokenizer:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "NousResearch/Hermes-2-Pro-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, revision="pr/13")
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
```
Next, let's define a list of tools:
```python
def get_current_temperature(location: str, unit: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
        unit: The unit to return the temperature in. (choices: ["celsius", "fahrenheit"])
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

def get_current_wind_speed(location: str) -> float:
    """
    Get the current wind speed in km/h at a given location.

    Args:
        location: The location to get the wind speed for, in the format "City, Country"
    Returns:
        The current wind speed at the given location in km/h, as a float.
    """
    return 6.  # A real function should probably actually get the wind speed!

tools = [get_current_temperature, get_current_wind_speed]
```
Now, let's set up a conversation for our bot:
```python
messages = [
{"role": "system", "content": "You are a bot that responds to weather queries. You should reply with the unit used in the queried location."},
{"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
```
Now, let's apply the chat template and generate a response:
```python
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```
And we get:
```text
<tool_call>
{"arguments": {"location": "Paris, France", "unit": "celsius"}, "name": "get_current_temperature"}
</tool_call><|im_end|>
```
The model has called the function with valid arguments, in the format requested by the function docstring. It has inferred that we're most likely referring to the Paris in France, and it remembered that, as the home of SI units, the temperature in France should be displayed in Celsius.
Let's append the model's tool call to the conversation. Note that we generate a random `tool_call_id` here. These IDs are not used by all models, but they allow models to issue multiple tool calls at once and keep track of which response corresponds to which call. The IDs should be unique within a conversation.
```python
tool_call_id = "vAHdf3" # ์์์ ID, ๊ฐ ๋๊ตฌ ํธ์ถ๋ง๋ค ๊ณ ์ ํด์ผ ํจ
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France", "unit": "celsius"}}
messages.append({"role": "assistant", "tool_calls": [{"id": tool_call_id, "type": "function", "function": tool_call}]})
```
Now that we've added the tool call to the conversation, we can call the function and append the result. Since we're just using a dummy function that always returns 22.0 for this example, we can append that result directly. Note again that the `tool_call_id` should match the ID used in the tool call above.
```python
messages.append({"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_temperature", "content": "22.0"})
```
Finally, let's let the assistant read the function outputs and continue chatting with the user:
```python
inputs = tokenizer.apply_chat_template(messages, chat_template="tool_use", tools=tools, add_generation_prompt=True, return_dict=True, return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][len(inputs["input_ids"][0]):]))
```
And we get:
```text
The current temperature in Paris, France is 22.0 ยฐ Celsius.<|im_end|>
```
Although this was a simple demo with dummy tools and a single call, the same technique works with multiple real tools and longer conversations. This can be a powerful way to extend the capabilities of conversational agents with real-time information, computational tools like calculators, or access to large databases.
<Tip>
Not all of the tool-calling features shown above are used by all models. Some models use tool call IDs, others simply use the function name and match tool calls to results by their order, and several models use neither and only issue one tool call at a time to avoid confusion. If you want your code to be compatible with as many models as possible, we recommend structuring your tool calls as shown here, and returning tool results in the order the model issued them. The chat template on each model should handle the rest.
</Tip>
### Understanding tool schemas[[understanding-tool-schemas]]
Each function you pass to the `tools` argument of `apply_chat_template` is converted into a [JSON schema](https://json-schema.org/learn/getting-started-step-by-step). These schemas are then passed to the model chat template. In other words, tool-use models do not see your functions directly, and they never see the actual code inside them. What they care about is the function **definitions** and the **arguments** - they care about what the tools do and how to use them, not how they work! It is up to you to read their outputs, detect whether they have requested to use a tool, pass the arguments to the tool function, and return the response in the chat.
Generating JSON schemas to pass to the template should be automatic and invisible as long as your functions follow the specification above, but if you encounter problems, or you simply want more control over the conversion, you can handle the conversion manually. Here is an example of a manual schema conversion:
```python
from transformers.utils import get_json_schema
def multiply(a: float, b: float):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return a * b
schema = get_json_schema(multiply)
print(schema)
```
This will yield:
```json
{
"type": "function",
"function": {
"name": "multiply",
"description": "A function that multiplies two numbers",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "number",
"description": "The first number to multiply"
},
"b": {
"type": "number",
"description": "The second number to multiply"
}
},
"required": ["a", "b"]
}
}
}
```
If you wish, you can edit these schemas, or even write them from scratch yourself without using `get_json_schema` at all. JSON schemas can be passed directly to the `tools` argument of `apply_chat_template` - this gives you a lot of power to define precise schemas for more complex functions. Be careful, though - the more complex your schemas, the more likely the model is to get confused when dealing with them! We recommend keeping function signatures as simple as possible, and keeping arguments (especially complex, nested arguments) to a minimum.
Here is an example of defining schemas by hand and passing them directly to `apply_chat_template`:
```python
# A simple function that takes no arguments
current_time = {
"type": "function",
"function": {
"name": "current_time",
"description": "Get the current local time as a string.",
"parameters": {
'type': 'object',
'properties': {}
}
}
}
# A more complete function that takes two numerical arguments
multiply = {
'type': 'function',
'function': {
'name': 'multiply',
'description': 'A function that multiplies two numbers',
'parameters': {
'type': 'object',
'properties': {
'a': {
'type': 'number',
'description': 'The first number to multiply'
},
'b': {
'type': 'number', 'description': 'The second number to multiply'
}
},
'required': ['a', 'b']
}
}
}
model_input = tokenizer.apply_chat_template(
messages,
tools = [current_time, multiply]
)
```
## ๊ณ ๊ธ: ๊ฒ์ ์ฆ๊ฐ ์์ฑ[[advanced-retrieval-augmented-generation]]
"๊ฒ์ ์ฆ๊ฐ ์์ฑ" ๋๋ "RAG" LLM์ ์ฟผ๋ฆฌ์ ์๋ตํ๊ธฐ ์ ์ ๋ฌธ์์ ์ฝํผ์ค๋ฅผ ๊ฒ์ํ์ฌ ์ ๋ณด๋ฅผ ์ป์ ์ ์์ต๋๋ค. ์ด๋ฅผ ํตํด ๋ชจ๋ธ์ ์ ํ๋ ์ปจํ
์คํธ ํฌ๊ธฐ ์ด์์ผ๋ก ์ง์ ๊ธฐ๋ฐ์ ํฌ๊ฒ ํ์ฅํ ์ ์์ต๋๋ค. RAG ๋ชจ๋ธ์ ๋ํ ์ฐ๋ฆฌ์ ๊ถ์ฅ ์ฌํญ์ ํ
ํ๋ฆฟ์ด `documents` ์ธ์๋ฅผ ํ์ฉํด์ผ ํ๋ค๋ ๊ฒ์
๋๋ค. ์ด ์ธ์๋ ๊ฐ "๋ฌธ์"๊ฐ `title`๊ณผ `contents` ํค๋ฅผ ๊ฐ์ง๋ ๋จ์ผ dict์ธ ๋ฌธ์ ๋ชฉ๋ก์ด์ด์ผ ํฉ๋๋ค. ์ด ํ์์ ๋๊ตฌ์ ์ฌ์ฉ๋๋ JSON ์คํค๋ง๋ณด๋ค ํจ์ฌ ๊ฐ๋จํ๋ฏ๋ก ๋ณ๋์ ๋์ฐ๋ฏธ ํจ์๊ฐ ํ์ํ์ง ์์ต๋๋ค.
๋ค์์ RAG ํ
ํ๋ฆฟ์ด ์๋ํ๋ ์์ ์
๋๋ค:
```python
document1 = {
"title": "The Moon: Our Age-Old Foe",
"contents": "Man has always dreamed of destroying the moon. In this essay, I shall..."
}
document2 = {
"title": "The Sun: Our Age-Old Friend",
"contents": "Although often underappreciated, the sun provides several notable benefits..."
}
model_input = tokenizer.apply_chat_template(
messages,
documents=[document1, document2]
)
```
## ๊ณ ๊ธ: ์ฑํ
ํ
ํ๋ฆฟ์ ์ด๋ป๊ฒ ์๋ํ๋์?[[advanced-how-do-chat-templates-work]]
๋ชจ๋ธ์ ์ฑํ
ํ
ํ๋ฆฟ์ `tokenizer.chat_template` ์์ฑ์ ์ ์ฅ๋ฉ๋๋ค. ์ฑํ
ํ
ํ๋ฆฟ์ด ์ค์ ๋์ง ์์ ๊ฒฝ์ฐ ํด๋น ๋ชจ๋ธ ํด๋์ค์ ๊ธฐ๋ณธ ํ
ํ๋ฆฟ์ด ๋์ ์ฌ์ฉ๋ฉ๋๋ค. `BlenderBot`์ ํ
ํ๋ฆฟ์ ์ดํด๋ณด๊ฒ ์ต๋๋ค:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer.chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
์ฝ๊ฐ ๋ณต์กํด ๋ณด์ผ ์ ์์ต๋๋ค. ์ฝ๊ธฐ ์ฝ๊ฒ ์ ๋ฆฌํด ๋ณด๊ฒ ์ต๋๋ค. ์ด ๊ณผ์ ์์ ์ถ๊ฐํ๋ ์ค๋ฐ๊ฟ๊ณผ ๋ค์ฌ์ฐ๊ธฐ๊ฐ ํ
ํ๋ฆฟ ์ถ๋ ฅ์ ํฌํจ๋์ง ์๋๋ก ํด์ผ ํฉ๋๋ค. ์๋๋ [๊ณต๋ฐฑ์ ์ ๊ฑฐํ๋](#trimming-whitespace) ํ์
๋๋ค:
```
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{{- ' ' }}
{%- endif %}
{{- message['content'] }}
{%- if not loop.last %}
{{- ' ' }}
{%- endif %}
{%- endfor %}
{{- eos_token }}
```
๋ง์ฝ ์ด์ ๊ฐ์ ํ์์ ์ฒ์ ๋ณธ๋ค๋ฉด, ์ด๊ฒ์ [Jinja ํ
ํ๋ฆฟ](https://jinja.palletsprojects.com/en/3.1.x/templates/)์
๋๋ค.
Jinja๋ ํ
์คํธ๋ฅผ ์์ฑํ๋ ๊ฐ๋จํ ์ฝ๋๋ฅผ ์์ฑํ ์ ์๋ ํ
ํ๋ฆฟ ์ธ์ด์
๋๋ค. ๋ง์ ๋ฉด์์ ์ฝ๋์ ๊ตฌ๋ฌธ์ด ํ์ด์ฌ๊ณผ ์ ์ฌํฉ๋๋ค. ์์ ํ์ด์ฌ์์๋ ์ด ํ
ํ๋ฆฟ์ด ๋ค์๊ณผ ๊ฐ์ด ๋ณด์ผ ๊ฒ์
๋๋ค:
```python
for idx, message in enumerate(messages):
if message['role'] == 'user':
print(' ')
print(message['content'])
if not idx == len(messages) - 1: # Check for the last message in the conversation
print(' ')
print(eos_token)
```
์ด ํ
ํ๋ฆฟ์ ์ธ ๊ฐ์ง ์ผ์ ํฉ๋๋ค:
1. ๊ฐ ๋ฉ์์ง์ ๋ํด, ๋ฉ์์ง๊ฐ ์ฌ์ฉ์ ๋ฉ์์ง์ธ ๊ฒฝ์ฐ ๊ณต๋ฐฑ์ ์ถ๊ฐํ๊ณ , ๊ทธ๋ ์ง ์์ผ๋ฉด ์๋ฌด๊ฒ๋ ์ถ๋ ฅํ์ง ์์ต๋๋ค.
2. ๋ฉ์์ง ๋ด์ฉ์ ์ถ๊ฐํฉ๋๋ค.
3. ๋ฉ์์ง๊ฐ ๋ง์ง๋ง ๋ฉ์์ง๊ฐ ์๋ ๊ฒฝ์ฐ ๋ ๊ฐ์ ๊ณต๋ฐฑ์ ์ถ๊ฐํฉ๋๋ค. ๋ง์ง๋ง ๋ฉ์์ง ํ์๋ EOS ํ ํฐ์ ์ถ๋ ฅํฉ๋๋ค.
์ด๊ฒ์ ๋งค์ฐ ๊ฐ๋จํ ํ
ํ๋ฆฟ์
๋๋ค. ์ ์ด ํ ํฐ์ ์ถ๊ฐํ์ง ์์ผ๋ฉฐ, ์ดํ ๋ํ์์ ๋ชจ๋ธ์ด ์ด๋ป๊ฒ ๋์ํด์ผ ํ๋์ง ์ง์ํ๋ "์์คํ
" ๋ฉ์์ง๋ฅผ ์ง์ํ์ง ์์ต๋๋ค. ํ์ง๋ง Jinja๋ ์ด๋ฌํ ์์
์ ์ํํ ์ ์๋ ๋ง์ ์ ์ฐ์ฑ์ ์ ๊ณตํฉ๋๋ค! LLaMA๊ฐ ์
๋ ฅ์ ํ์ํํ๋ ๋ฐฉ์๊ณผ ์ ์ฌํ ํ์์ Jinja ํ
ํ๋ฆฟ์ ์ดํด๋ณด๊ฒ ์ต๋๋ค(์ค์ LLaMA ํ
ํ๋ฆฟ์ ๊ธฐ๋ณธ ์์คํ
๋ฉ์์ง ์ฒ๋ฆฌ์ ์ผ๋ฐ์ ์ธ ์์คํ
๋ฉ์์ง ์ฒ๋ฆฌ๋ฅผ ํฌํจํ๊ณ ์์ต๋๋ค - ์ค์ ์ฝ๋์์๋ ์ด ํ
ํ๋ฆฟ์ ์ฌ์ฉํ์ง ๋ง์ธ์!).
```
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{{- bos_token + '[INST] ' + message['content'] + ' [/INST]' }}
{%- elif message['role'] == 'system' %}
{{- '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }}
{%- elif message['role'] == 'assistant' %}
{{- ' ' + message['content'] + ' ' + eos_token }}
{%- endif %}
{%- endfor %}
```
์ด ํ
ํ๋ฆฟ์ ์ ์ ์ดํด๋ณด๋ฉด ๋ฌด์์ ํ๋์ง ์ดํดํ ์ ์์ต๋๋ค. ๋จผ์ , ๊ฐ ๋ฉ์์ง์ "role"์ ๋ฐ๋ผ ํน์ ํ ํฐ์ ์ถ๊ฐํ์ฌ ๋๊ฐ ๋ฉ์์ง๋ฅผ ๋ณด๋๋์ง ๋ชจ๋ธ์๊ฒ ๋ช
ํํ๊ฒ ์๋ ค์ค๋๋ค. ๋ํ ์ฌ์ฉ์, ์ด์์คํดํธ ๋ฐ ์์คํ
๋ฉ์์ง๋ ๊ฐ๊ฐ ๊ณ ์ ํ ํ ํฐ์ผ๋ก ๋ํ๋์ด ๋ชจ๋ธ์ด ๋ช
ํํ๊ฒ ๊ตฌ๋ถํ ์ ์์ต๋๋ค.
## ๊ณ ๊ธ: ์ฑํ
ํ
ํ๋ฆฟ ์ถ๊ฐ ๋ฐ ํธ์ง[[advanced-adding-and-editing-chat-templates]]
### ์ฑํ
ํ
ํ๋ฆฟ์ ์ด๋ป๊ฒ ๋ง๋ค ์ ์๋์?[[how-do-i-create-a-chat-template]]
๊ฐ๋จํฉ๋๋ค. Jinja ํ
ํ๋ฆฟ์ ์์ฑํ๊ณ `tokenizer.chat_template`์ ์ค์ ํ๊ธฐ๋ง ํ๋ฉด ๋ฉ๋๋ค. ๋ค๋ฅธ ๋ชจ๋ธ์ ๊ธฐ์กด ํ
ํ๋ฆฟ์ ์์์ ์ผ๋ก ์ฌ์ฉํ๊ณ ํ์์ ๋ง๊ฒ ํธ์งํ๋ ๊ฒ์ด ๋ ์ฌ์ธ ๊ฒ ์
๋๋ค! ์๋ฅผ ๋ค์ด, ์์ LLaMA ํ
ํ๋ฆฟ์ ๊ฐ์ ธ์ ์ด์์คํดํธ ๋ฉ์์ง์ "[ASST]" ๋ฐ "[/ASST]"๋ฅผ ์ถ๊ฐํ ์ ์์ต๋๋ค:
```
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{{- bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}
{%- elif message['role'] == 'system' %}
{{- '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }}
{%- elif message['role'] == 'assistant' %}
{{- '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }}
{%- endif %}
{%- endfor %}
```
์ด์ `tokenizer.chat_template` ์์ฑ์ ์ค์ ํ๊ธฐ๋ง ํ๋ฉด ๋ฉ๋๋ค. ์ด๋ ๊ฒ ํ๋ฉด ๋ค์์ [`~PreTrainedTokenizer.apply_chat_template`]๋ฅผ ์ฌ์ฉํ ๋ ์๋กญ๊ฒ ์ค์ ํ ํ
ํ๋ฆฟ์ด ์ฌ์ฉ๋ฉ๋๋ค! ์ด ์์ฑ์ `tokenizer_config.json` ํ์ผ์ ์ ์ฅ๋๋ฏ๋ก, [`~utils.PushToHubMixin.push_to_hub`]๋ฅผ ์ฌ์ฉํ์ฌ ์ ํ
ํ๋ฆฟ์ ํ๋ธ์ ์
๋ก๋ํ๊ณ ๋ชจ๋ ์ฌ์ฉ์๊ฐ ๋ชจ๋ธ์ ๋ง๋ ํ
ํ๋ฆฟ์ ์ฌ์ฉํ ์ ์๋๋ก ํ ์ ์์ต๋๋ค!
```python
template = tokenizer.chat_template
template = template.replace("SYS", "SYSTEM")  # Change the system token
tokenizer.chat_template = template  # Set the new template
tokenizer.push_to_hub("model_name")  # Upload your new template to the Hub!
```
์ฑํ
ํ
ํ๋ฆฟ์ ์ฌ์ฉํ๋ [`~PreTrainedTokenizer.apply_chat_template`] ๋ฉ์๋๋ [`TextGenerationPipeline`] ํด๋์ค์์ ํธ์ถ๋๋ฏ๋ก, ์ฌ๋ฐ๋ฅธ ์ฑํ
ํ
ํ๋ฆฟ์ ์ค์ ํ๋ฉด ๋ชจ๋ธ์ด ์๋์ผ๋ก [`TextGenerationPipeline`]๊ณผ ํธํ๋ฉ๋๋ค.
<Tip>
๋ชจ๋ธ์ ์ฑํ
์ฉ๋๋ก ๋ฏธ์ธ ์กฐ์ ํ๋ ๊ฒฝ์ฐ, ์ฑํ
ํ
ํ๋ฆฟ์ ์ค์ ํ๋ ๊ฒ ์ธ์๋ ์ ์ฑํ
์ ์ด ํ ํฐ์ ํ ํฌ๋์ด์ ์ ํน๋ณ ํ ํฐ์ผ๋ก ์ถ๊ฐํ๋ ๊ฒ์ด ์ข์ต๋๋ค. ํน๋ณ ํ ํฐ์ ์ ๋๋ก ๋ถํ ๋์ง ์์ผ๋ฏ๋ก, ์ ์ด ํ ํฐ์ด ์ฌ๋ฌ ์กฐ๊ฐ์ผ๋ก ํ ํฐํ๋๋ ๊ฒ์ ๋ฐฉ์งํฉ๋๋ค. ๋ํ, ํ
ํ๋ฆฟ์์ ์ด์์คํดํธ ์์ฑ์ ๋์ ๋ํ๋ด๋ ํ ํฐ์ผ๋ก ํ ํฌ๋์ด์ ์ `eos_token` ์์ฑ์ ์ค์ ํด์ผ ํฉ๋๋ค. ์ด๋ ๊ฒ ํ๋ฉด ํ
์คํธ ์์ฑ ๋๊ตฌ๊ฐ ํ
์คํธ ์์ฑ์ ์ธ์ ์ค์งํด์ผ ํ ์ง ์ ํํ ์ ์ ์์ต๋๋ค.
</Tip>
### ์ ์ผ๋ถ ๋ชจ๋ธ์ ์ฌ๋ฌ ๊ฐ์ ํ
ํ๋ฆฟ์ ๊ฐ์ง๊ณ ์๋์?[[why-do-some-models-have-multiple-templates]]
์ผ๋ถ ๋ชจ๋ธ์ ๋ค๋ฅธ ์ฌ์ฉ ์ฌ๋ก์ ๋ํด ๋ค๋ฅธ ํ
ํ๋ฆฟ์ ์ฌ์ฉํฉ๋๋ค. ์๋ฅผ ๋ค์ด, ์ผ๋ฐ ์ฑํ
์ ์ํ ํ
ํ๋ฆฟ๊ณผ ๋๊ตฌ ์ฌ์ฉ ๋๋ ๊ฒ์ ์ฆ๊ฐ ์์ฑ์ ๋ํ ํ
ํ๋ฆฟ์ ๋ณ๋๋ก ์ฌ์ฉํ ์ ์์ต๋๋ค. ์ด๋ฌํ ๊ฒฝ์ฐ `tokenizer.chat_template`๋ ๋์
๋๋ฆฌ์
๋๋ค. ์ด๊ฒ์ ์ฝ๊ฐ์ ํผ๋์ ์ด๋ํ ์ ์์ผ๋ฉฐ, ๊ฐ๋ฅํ ํ ๋ชจ๋ ์ฌ์ฉ ์ฌ๋ก์ ๋ํด ๋จ์ผ ํ
ํ๋ฆฟ์ ์ฌ์ฉํ๋ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค. `if tools is defined`์ ๊ฐ์ Jinja ๋ฌธ์ฅ๊ณผ `{% macro %}` ์ ์๋ฅผ ์ฌ์ฉํ์ฌ ์ฌ๋ฌ ์ฝ๋ ๊ฒฝ๋ก๋ฅผ ๋จ์ผ ํ
ํ๋ฆฟ์ ์ฝ๊ฒ ๋ํํ ์ ์์ต๋๋ค.
ํ ํฌ๋์ด์ ์ ์ฌ๋ฌ ๊ฐ์ ํ
ํ๋ฆฟ์ด ์๋ ๊ฒฝ์ฐ, `tokenizer.chat_template`๋ ํ
ํ๋ฆฟ ์ด๋ฆ์ด ํค์ธ `๋์
๋๋ฆฌ`์
๋๋ค. `apply_chat_template` ๋ฉ์๋๋ ํน์ ํ
ํ๋ฆฟ ์ด๋ฆ์ ๋ํ ํน๋ณํ ์ฒ๋ฆฌ๋ฅผ ํฉ๋๋ค: ์ผ๋ฐ์ ์ผ๋ก `default`๋ผ๋ ํ
ํ๋ฆฟ์ ์ฐพ๊ณ , ์ฐพ์ ์ ์์ผ๋ฉด ์ค๋ฅ๋ฅผ ๋ฐ์์ํต๋๋ค. ๊ทธ๋ฌ๋ ์ฌ์ฉ์๊ฐ `tools` ์ธ์๋ฅผ ์ ๋ฌํ ๋ `tool_use`๋ผ๋ ํ
ํ๋ฆฟ์ด ์กด์ฌํ๋ฉด ๋์ ๊ทธ๊ฒ์ ์ฌ์ฉํฉ๋๋ค. ๋ค๋ฅธ ์ด๋ฆ์ ํ
ํ๋ฆฟ์ ์ ๊ทผํ๋ ค๋ฉด `apply_chat_template()`์ `chat_template` ์ธ์์ ์ํ๋ ํ
ํ๋ฆฟ ์ด๋ฆ์ ์ ๋ฌํ๋ฉด ๋ฉ๋๋ค.
์ฌ์ฉ์์๊ฒ ์ฝ๊ฐ์ ํผ๋์ ์ค ์ ์์ผ๋ฏ๋ก, ํ
ํ๋ฆฟ์ ์ง์ ์์ฑํ๋ ๊ฒฝ์ฐ ๊ฐ๋ฅํ ํ ๋จ์ผ ํ
ํ๋ฆฟ์ ๋ชจ๋ ๊ฒ์ ๋ฃ๋ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค!
### ์ด๋ค ํ
ํ๋ฆฟ์ ์ฌ์ฉํด์ผ ํ๋์?[[what-template-should-i-use]]
์ด๋ฏธ ์ฑํ
์ฉ์ผ๋ก ํ๋ จ๋ ๋ชจ๋ธ์ ํ
ํ๋ฆฟ์ ์ค์ ํ ๋๋ ํ
ํ๋ฆฟ์ด ํ๋ จ ์ค ๋ชจ๋ธ์ด ๋ณธ ๋ฉ์์ง ํ์๊ณผ ์ ํํ ์ผ์นํ๋๋ก ํด์ผ ํฉ๋๋ค. ๊ทธ๋ ์ง ์์ผ๋ฉด ์ฑ๋ฅ ์ ํ๋ฅผ ๊ฒฝํํ ๊ฐ๋ฅ์ฑ์ด ํฝ๋๋ค. ์ด๋ ๋ชจ๋ธ์ ์ถ๊ฐ๋ก ํ๋ จํ ๋๋ ๋ง์ฐฌ๊ฐ์ง์
๋๋ค. ์ฑํ
ํ ํฐ์ ์ผ์ ํ๊ฒ ์ ์งํ๋ ๊ฒ์ด ์ต์์ ์ฑ๋ฅ์ ์ป๋ ๋ฐฉ๋ฒ์
๋๋ค. ์ด๋ ํ ํฐํ์ ๋งค์ฐ ์ ์ฌํฉ๋๋ค. ํ๋ จ ์ค์ ์ฌ์ฉ๋ ํ ํฐํ๋ฅผ ์ ํํ ์ผ์น์ํฌ ๋ ์ถ๋ก ์ด๋ ๋ฏธ์ธ ์กฐ์ ์์ ์ต๊ณ ์ ์ฑ๋ฅ์ ์ป์ ์ ์์ต๋๋ค.
๋ฐ๋ฉด์ ์ฒ์๋ถํฐ ๋ชจ๋ธ์ ํ๋ จ์ํค๊ฑฐ๋ ์ฑํ
์ฉ์ผ๋ก ๊ธฐ๋ณธ ์ธ์ด ๋ชจ๋ธ์ ๋ฏธ์ธ ์กฐ์ ํ๋ ๊ฒฝ์ฐ, ์ ์ ํ ํ
ํ๋ฆฟ์ ์ ํํ ์ ์๋ ๋ง์ ์์ ๊ฐ ์์ต๋๋ค. LLM์ ๋ค์ํ ์
๋ ฅ ํ์์ ์ฒ๋ฆฌํ ๋งํผ ์ถฉ๋ถํ ๋๋ํฉ๋๋ค. ์ธ๊ธฐ ์๋ ์ ํ ์ค ํ๋๋ `ChatML` ํ์์ด๋ฉฐ, ์ด๋ ๋ง์ ์ฌ์ฉ ์ฌ๋ก์ ์ ์ฐํ๊ฒ ์ฌ์ฉํ ์ ์๋ ์ข์ ์ ํ์
๋๋ค. ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
```
{%- for message in messages %}
{{- '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{%- endfor %}
```
์ด ํ
ํ๋ฆฟ์ด ๋ง์์ ๋ ๋ค๋ฉด, ์ฝ๋์ ๋ฐ๋ก ๋ณต์ฌํ์ฌ ์ฌ์ฉํ ์ ์๋ ํ ์ค ๋ฒ์ ์ ์ ๊ณตํ๊ฒ ์ต๋๋ค. ์ด ํ ์ค ๋ฒ์ ์ [์์ฑ ํ๋กฌํํธ](#what-are-generation-prompts)์ ๋ํ ํธ๋ฆฌํ ์ง์๋ ํฌํจํ๊ณ ์์ง๋ง, BOS๋ EOS ํ ํฐ์ ์ถ๊ฐํ์ง ์๋๋ค๋ ์ ์ ์ ์ํ์ธ์! ๋ชจ๋ธ์ด ํด๋น ํ ํฐ์ ๊ธฐ๋ํ๋๋ผ๋, `apply_chat_template`์ ์ํด ์๋์ผ๋ก ์ถ๊ฐ๋์ง ์์ต๋๋ค. ์ฆ, ํ
์คํธ๋ `add_special_tokens=False`์ ์ํด ํ ํฐํ๋ฉ๋๋ค. ์ด๋ ํ
ํ๋ฆฟ๊ณผ `add_special_tokens` ๋
ผ๋ฆฌ ๊ฐ์ ์ ์ฌ์ ์ธ ์ถฉ๋์ ํผํ๊ธฐ ์ํจ์
๋๋ค. ๋ชจ๋ธ์ด ํน๋ณ ํ ํฐ์ ๊ธฐ๋ํ๋ ๊ฒฝ์ฐ, ํ
ํ๋ฆฟ์ ์ง์ ์ถ๊ฐํด์ผ ํฉ๋๋ค!
```python
tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
```
์ด ํ
ํ๋ฆฟ์ ๊ฐ ๋ฉ์์ง๋ฅผ `<|im_start|>` ์ `<|im_end|>`ํ ํฐ์ผ๋ก ๊ฐ์ธ๊ณ , ์ญํ ์ ๋ฌธ์์ด๋ก ์์ฑํ์ฌ ํ๋ จ ์ ์ฌ์ฉํ๋ ์ญํ ์ ๋ํ ์ ์ฐ์ฑ์ ์ ๊ณตํฉ๋๋ค. ์ถ๋ ฅ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
```text
<|im_start|>system
You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I'm doing great!<|im_end|>
```
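As a quick sanity check, you can render the ChatML template directly with the `jinja2` package (the same engine `transformers` uses to render chat templates); the generation-prompt branch is omitted here for brevity:

```python
from jinja2 import Template

# The ChatML loop from above, minus the add_generation_prompt branch
chatml = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n' }}"
    "{% endfor %}"
)
messages = [
    {"role": "system", "content": "You are a helpful chatbot."},
    {"role": "user", "content": "How are you?"},
]
print(Template(chatml).render(messages=messages))
```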
"์ฌ์ฉ์", "์์คํ
" ๋ฐ "์ด์์คํดํธ" ์ญํ ์ ์ฑํ
์ ํ์ค์ด๋ฉฐ, ๊ฐ๋ฅํ ๋ ์ด๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค. ํนํ ๋ชจ๋ธ์ด [`TextGenerationPipeline`]๊ณผ ์ ์๋ํ๋๋ก ํ๋ ค๋ฉด ๊ทธ๋ ์ต๋๋ค. ๊ทธ๋ฌ๋ ์ด๋ฌํ ์ญํ ์๋ง ๊ตญํ๋์ง ์์ต๋๋ค. ํ
ํ๋ฆฟ์ ๋งค์ฐ ์ ์ฐํ๋ฉฐ, ์ด๋ค ๋ฌธ์์ด์ด๋ ์ญํ ๋ก ์ฌ์ฉํ ์ ์์ต๋๋ค.
### ์ฑํ
ํ
ํ๋ฆฟ์ ์ถ๊ฐํ๊ณ ์ถ์ต๋๋ค! ์ด๋ป๊ฒ ์์ํด์ผ ํ๋์?[[i-want-to-add-some-chat-templates-how-should-i-get-started]]
์ฑํ
๋ชจ๋ธ์ด ์๋ ๊ฒฝ์ฐ, ํด๋น ๋ชจ๋ธ์ `tokenizer.chat_template` ์์ฑ์ ์ค์ ํ๊ณ [`~PreTrainedTokenizer.apply_chat_template`]๋ฅผ ์ฌ์ฉํ์ฌ ํ
์คํธํ ๋ค์ ์
๋ฐ์ดํธ๋ ํ ํฌ๋์ด์ ๋ฅผ ํ๋ธ์ ํธ์ํด์ผ ํฉ๋๋ค. ์ด๋ ๋ชจ๋ธ ์์ ์๊ฐ ์๋ ๊ฒฝ์ฐ์๋ ์ ์ฉ๋ฉ๋๋ค. ๋น ์ฑํ
ํ
ํ๋ฆฟ์ ์ฌ์ฉํ๋ ๋ชจ๋ธ์ด๋ ์ฌ์ ํ ๊ธฐ๋ณธ ํด๋์ค ํ
ํ๋ฆฟ์ ์ฌ์ฉํ๋ ๋ชจ๋ธ์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ, [ํ ๋ฆฌํ์คํธ](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)๋ฅผ ๋ชจ๋ธ ๋ฆฌํฌ์งํ ๋ฆฌ์ ์ด์ด ์ด ์์ฑ์ ์ฌ๋ฐ๋ฅด๊ฒ ์ค์ ํ ์ ์๋๋ก ํ์ธ์!
์์ฑ์ ์ค์ ํ๋ฉด ๋์
๋๋ค! `tokenizer.apply_chat_template`๊ฐ ์ด์ ํด๋น ๋ชจ๋ธ์ ๋ํด ์ฌ๋ฐ๋ฅด๊ฒ ์๋ํ๋ฏ๋ก, `TextGenerationPipeline`๊ณผ ๊ฐ์ ๊ณณ์์๋ ์๋์ผ๋ก ์ง์๋ฉ๋๋ค!
๋ชจ๋ธ์ ์ด ์์ฑ์ ์ค์ ํจ์ผ๋ก์จ, ์คํ ์์ค ๋ชจ๋ธ์ ์ ์ฒด ๊ธฐ๋ฅ์ ์ปค๋ฎค๋ํฐ๊ฐ ์ฌ์ฉํ ์ ์๋๋ก ํ ์ ์์ต๋๋ค. ํ์ ๋ถ์ผ์น๋ ์ด ๋ถ์ผ์์ ์ค๋ซ๋์ ์ฑ๋ฅ์ ์ ํ์ํค๋ ๋ฌธ์ ์์ผ๋ฏ๋ก, ์ด์ ์ด๋ฅผ ๋๋ผ ๋์
๋๋ค!
## ๊ณ ๊ธ: ํ
ํ๋ฆฟ ์์ฑ ํ[[advanced-template-writing-tips]]
Jinja์ ์ต์ํ์ง ์์ ๊ฒฝ์ฐ, ์ฑํ
ํ
ํ๋ฆฟ์ ์์ฑํ๋ ๊ฐ์ฅ ์ฌ์ด ๋ฐฉ๋ฒ์ ๋จผ์ ๋ฉ์์ง๋ฅผ ์ํ๋ ๋ฐฉ์์ผ๋ก ํ์ํํ๋ ์งง์ ํ์ด์ฌ ์คํฌ๋ฆฝํธ๋ฅผ ์์ฑํ ๋ค์, ํด๋น ์คํฌ๋ฆฝํธ๋ฅผ ํ
ํ๋ฆฟ์ผ๋ก ๋ณํํ๋ ๊ฒ์
๋๋ค.
ํ
ํ๋ฆฟ ํธ๋ค๋ฌ๋ `messages`๋ผ๋ ๋ณ์๋ก ๋ํ ๊ธฐ๋ก์ ๋ฐ์ต๋๋ค. ํ์ด์ฌ์์์ ๋ง์ฐฌ๊ฐ์ง๋ก ํ
ํ๋ฆฟ ๋ด์ `messages`์ ์ ๊ทผํ ์ ์์ผ๋ฉฐ, `{% for message in messages %}`๋ก ๋ฐ๋ณตํ๊ฑฐ๋ `{{ messages[0] }}`์ ๊ฐ์ด ๊ฐ๋ณ ๋ฉ์์ง์ ์ ๊ทผํ ์ ์์ต๋๋ค.
๋ค์ ํ์ ์ฌ์ฉํ์ฌ ์ฝ๋๋ฅผ Jinja๋ก ๋ณํํ ์๋ ์์ต๋๋ค:
### ๊ณต๋ฐฑ ์ ๊ฑฐ[[trimming-whitespace]]
๊ธฐ๋ณธ์ ์ผ๋ก Jinja๋ ๋ธ๋ก ์ ํ์ ๊ณต๋ฐฑ์ ์ถ๋ ฅํฉ๋๋ค. ์ด๋ ์ผ๋ฐ์ ์ผ๋ก ๊ณต๋ฐฑ์ ๋งค์ฐ ์ ํํ๊ฒ ๋ค๋ฃจ๊ณ ์ ํ๋ ์ฑํ
ํ
ํ๋ฆฟ์์๋ ๋ฌธ์ ๊ฐ ๋ ์ ์์ต๋๋ค! ์ด๋ฅผ ํผํ๊ธฐ ์ํด ํ
ํ๋ฆฟ์ ๋ค์๊ณผ ๊ฐ์ด ์์ฑํ๋ ๊ฒ์ด ์ข์ต๋๋ค:
```
{%- for message in messages %}
{{- message['role'] + message['content'] }}
{%- endfor %}
```
์๋์ ๊ฐ์ด ์์ฑํ์ง ๋ง์ธ์:
```
{% for message in messages %}
{{ message['role'] + message['content'] }}
{% endfor %}
```
`-`๋ฅผ ์ถ๊ฐํ๋ฉด ๋ธ๋ก ์ ํ์ ๊ณต๋ฐฑ์ด ์ ๊ฑฐ๋ฉ๋๋ค. ๋ ๋ฒ์งธ ์์ ๋ ๋ฌดํดํด ๋ณด์ด์ง๋ง, ์ค๋ฐ๊ฟ๊ณผ ๋ค์ฌ์ฐ๊ธฐ๊ฐ ์ถ๋ ฅ์ ํฌํจ๋ ์ ์์ผ๋ฉฐ, ์ด๋ ์ํ์ง ์๋ ๊ฒฐ๊ณผ์ผ ์ ์์ต๋๋ค!
### ๋ฐ๋ณต๋ฌธ[[for-loops]]
Jinja์์ ๋ฐ๋ณต๋ฌธ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
```
{%- for message in messages %}
{{- message['content'] }}
{%- endfor %}
```
{{ ํํ์ ๋ธ๋ก }} ๋ด๋ถ์ ์๋ ๋ชจ๋ ๊ฒ์ด ์ถ๋ ฅ์ผ๋ก ์ธ์๋ฉ๋๋ค. `+`์ ๊ฐ์ ์ฐ์ฐ์๋ฅผ ์ฌ์ฉํ์ฌ ํํ์ ๋ธ๋ก ๋ด๋ถ์์ ๋ฌธ์์ด์ ๊ฒฐํฉํ ์ ์์ต๋๋ค.
### ์กฐ๊ฑด๋ฌธ[[if-statements]]
Jinja์์ ์กฐ๊ฑด๋ฌธ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
```
{%- if message['role'] == 'user' %}
{{- message['content'] }}
{%- endif %}
```
ํ์ด์ฌ์ด ๊ณต๋ฐฑ์ ์ฌ์ฉํ์ฌ `for` ๋ฐ `if` ๋ธ๋ก์ ์์๊ณผ ๋์ ํ์ํ๋ ๋ฐ๋ฉด, Jinja๋ `{% endfor %}` ๋ฐ `{% endif %}`๋ก ๋ช
์์ ์ผ๋ก ๋์ ํ์ํด์ผ ํฉ๋๋ค.
### ํน์ ๋ณ์[[special-variables]]
ํ
ํ๋ฆฟ ๋ด๋ถ์์๋ `messages` ๋ชฉ๋ก์ ์ ๊ทผํ ์ ์์ ๋ฟ๋ง ์๋๋ผ ์ฌ๋ฌ ๋ค๋ฅธ ํน์ ๋ณ์์๋ ์ ๊ทผํ ์ ์์ต๋๋ค. ์ฌ๊ธฐ์๋ `bos_token` ๋ฐ `eos_token`๊ณผ ๊ฐ์ ํน๋ณ ํ ํฐ๊ณผ ์์ ๋
ผ์ํ `add_generation_prompt` ๋ณ์๊ฐ ํฌํจ๋ฉ๋๋ค. ๋ํ `loop` ๋ณ์๋ฅผ ์ฌ์ฉํ์ฌ ํ์ฌ ๋ฐ๋ณต์ ๋ํ ์ ๋ณด๋ฅผ ์ป์ ์ ์์ผ๋ฉฐ, ์๋ฅผ ๋ค์ด `{% if loop.last %}`๋ฅผ ์ฌ์ฉํ์ฌ ํ์ฌ ๋ฉ์์ง๊ฐ ๋ํ์ ๋ง์ง๋ง ๋ฉ์์ง์ธ์ง ํ์ธํ ์ ์์ต๋๋ค. `add_generation_prompt`๊ฐ `True`์ธ ๊ฒฝ์ฐ ๋ํ ๋์ ์์ฑ ํ๋กฌํํธ๋ฅผ ์ถ๊ฐํ๋ ์์ ๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
```
{%- if loop.last and add_generation_prompt %}
{{- bos_token + 'Assistant:\n' }}
{%- endif %}
```
### ๋นํ์ด์ฌ Jinja์์ ํธํ์ฑ[[compatibility-with-non-python-jinja]]
Jinja์ ์ฌ๋ฌ ๊ตฌํ์ ๋ค์ํ ์ธ์ด๋ก ์ ๊ณต๋ฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ๋์ผํ ๊ตฌ๋ฌธ์ ์ฌ์ฉํ์ง๋ง, ์ฃผ์ ์ฐจ์ด์ ์ ํ์ด์ฌ์์ ํ
ํ๋ฆฟ์ ์์ฑํ ๋ ํ์ด์ฌ ๋ฉ์๋๋ฅผ ์ฌ์ฉํ ์ ์๋ค๋ ์ ์
๋๋ค. ์๋ฅผ ๋ค์ด, ๋ฌธ์์ด์ `.lower()`๋ฅผ ์ฌ์ฉํ๊ฑฐ๋ ๋์
๋๋ฆฌ์ `.items()`๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค. ์ด๋ ๋นํ์ด์ฌ Jinja ๊ตฌํ์์ ํ
ํ๋ฆฟ์ ์ฌ์ฉํ๋ ค๊ณ ํ ๋ ๋ฌธ์ ๊ฐ ๋ฐ์ํ ์ ์์ต๋๋ค. ํนํ JS์ Rust๊ฐ ์ธ๊ธฐ ์๋ ๋ฐฐํฌ ํ๊ฒฝ์์๋ ๋นํ์ด์ฌ ๊ตฌํ์ด ํํฉ๋๋ค.
ํ์ง๋ง ๊ฑฑ์ ํ์ง ๋ง์ธ์! ๋ชจ๋ Jinja ๊ตฌํ์์ ํธํ์ฑ์ ๋ณด์ฅํ๊ธฐ ์ํด ํ
ํ๋ฆฟ์ ์ฝ๊ฒ ๋ณ๊ฒฝํ ์ ์๋ ๋ช ๊ฐ์ง ๋ฐฉ๋ฒ์ด ์์ต๋๋ค:
- ํ์ด์ฌ ๋ฉ์๋๋ฅผ Jinja ํํฐ๋ก ๋์ฒดํ์ธ์. ์ผ๋ฐ์ ์ผ๋ก ๊ฐ์ ์ด๋ฆ์ ๊ฐ์ง๋ฉฐ, ์๋ฅผ ๋ค์ด `string.lower()`๋ `string|lower`๋ก, `dict.items()`๋ `dict|items`๋ก ๋์ฒดํ ์ ์์ต๋๋ค. ์ฃผ๋ชฉํ ๋งํ ๋ณ๊ฒฝ ์ฌํญ์ `string.strip()`์ด `string|trim`์ผ๋ก ๋ฐ๋๋ ๊ฒ์
๋๋ค. ๋ ์์ธํ ๋ด์ฉ์ Jinja ๋ฌธ์์ [๋ด์ฅ ํํฐ ๋ชฉ๋ก](https://jinja.palletsprojects.com/en/3.1.x/templates/#builtin-filters)์ ์ฐธ์กฐํ์ธ์.
- ํ์ด์ฌ์ ํนํ๋ `True`, `False`, `None`์ ๊ฐ๊ฐ `true`, `false`, `none`์ผ๋ก ๋์ฒดํ์ธ์.
- ๋์
๋๋ฆฌ๋ ๋ฆฌ์คํธ๋ฅผ ์ง์ ๋ ๋๋งํ ๋ ๋ค๋ฅธ ๊ตฌํ์์๋ ๊ฒฐ๊ณผ๊ฐ ๋ค๋ฅผ ์ ์์ต๋๋ค(์: ๋ฌธ์์ด ํญ๋ชฉ์ด ๋จ์ผ ๋ฐ์ดํ์์ ์ด์ค ๋ฐ์ดํ๋ก ๋ณ๊ฒฝ๋ ์ ์์ต๋๋ค). `tojson` ํํฐ๋ฅผ ์ถ๊ฐํ๋ฉด ์ผ๊ด์ฑ์ ์ ์งํ๋ ๋ฐ ๋์์ด ๋ฉ๋๋ค.