| """ |
| Dynamic Quantization |
| ==================== |
| |
| In this recipe you will see how to take advantage of Dynamic |
| Quantization to accelerate inference on an LSTM-style recurrent neural |
| network. This reduces the size of the model weights and speeds up model |
| execution. |
| |
| Introduction |
| ------------- |
| |
There are a number of trade-offs that can be made when designing neural
networks. During model development and training you can alter the
number of layers and the number of parameters in a recurrent neural
network and trade off accuracy against model size and/or model latency
or throughput. Such changes can take a lot of time and compute
resources because you are iterating over the model training.
Quantization gives you a way to make a similar trade-off between
performance and model accuracy with a known model after training is
completed.

You can give it a try in a single session and you will certainly reduce
your model size significantly and may also see a meaningful latency
reduction without losing much accuracy.

What is dynamic quantization?
-----------------------------

Quantizing a network means converting it to use a reduced precision
integer representation for the weights and/or activations. This saves on
model size and allows the use of higher throughput math operations on
your CPU or GPU.

When converting from floating point to integer values you are
essentially multiplying the floating point value by some scale factor
and rounding the result to a whole number. The various quantization
approaches differ in how they determine that scale factor.

The key idea with dynamic quantization as described here is that we are
going to determine the scale factor for activations dynamically based on
the data range observed at runtime. This ensures that the scale factor
is "tuned" so that as much signal as possible about each observed
dataset is preserved.
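
As a rough illustration (a simplified symmetric, per-tensor scheme, not
the exact kernels PyTorch uses), mapping a float tensor to 8-bit
integers based on its observed range looks something like this::

    import torch

    x = torch.randn(4)                   # values observed at runtime
    scale = x.abs().max() / 127          # scale factor from the observed range
    q = torch.clamp((x / scale).round(), -128, 127).to(torch.int8)
    x_hat = q.float() * scale            # dequantized approximation of x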

The model parameters, on the other hand, are known during model
conversion, so they are converted ahead of time and stored in INT8
form.

Arithmetic in the quantized model is done using vectorized INT8
instructions. Accumulation is typically done with INT16 or INT32 to
avoid overflow. This higher precision value is scaled back to INT8 if
the next layer is quantized or converted to FP32 for output.
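
For example, the product of two INT8 values can already exceed the INT8
range (127 * 127 = 16129), which is why the running sum is kept in a
wider type. A small sketch of the idea::

    import torch

    a = torch.tensor([100, 100], dtype=torch.int8)
    b = torch.tensor([100, 100], dtype=torch.int8)
    acc = (a.to(torch.int32) * b.to(torch.int32)).sum()   # accumulate in INT32, not INT8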

Dynamic quantization is relatively free of tuning parameters, which
makes it well suited to be added into production pipelines as a
standard part of converting LSTM models to deployment.


.. note::
   Limitations on the approach taken here

This recipe provides a quick introduction to the dynamic quantization
features in PyTorch and the workflow for using it. Our focus is on
explaining the specific functions used to convert the model. We will
make a number of significant simplifications in the interest of brevity
and clarity:

1. You will start with a minimal LSTM network
2. You are simply going to initialize the network with a random hidden
   state
3. You are going to test the network with random inputs
4. You are not going to train the network in this tutorial
5. You will see that the quantized form of this network is smaller and
   runs faster than the floating point network you started with
6. You will see that the output values are generally in the same
   ballpark as the output of the FP32 network, but this recipe does not
   demonstrate the expected accuracy loss on a real trained network

You will see how dynamic quantization is done and be able to see
suggestive reductions in memory use and latency. Demonstrating that the
technique can preserve high levels of model accuracy on a trained LSTM
is left to a more advanced tutorial. If you want to move right away to
that more rigorous treatment, please proceed to the `advanced dynamic
quantization
tutorial <https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html>`__.

Steps
-------------

This recipe has 5 steps.

1. Set Up - Here you define a very simple LSTM, import modules, and
   establish some random input tensors.

2. Do the Quantization - Here you instantiate a floating point model and
   then create a quantized version of it.

3. Look at Model Size - Here you show that the model size gets smaller.

4. Look at Latency - Here you run the two models and compare model runtime (latency).

5. Look at Accuracy - Here you run the two models and compare outputs.


1: Set Up
~~~~~~~~~~~~~~~
This is a straightforward bit of code to set up for the rest of the
recipe.

The unique module we are importing here is ``torch.quantization``, which
includes PyTorch's quantized operators and conversion functions. We also
define a very simple LSTM model and set up some inputs.

"""

import torch
import torch.quantization
import torch.nn as nn
import copy
import os
import time


class lstm_for_demonstration(nn.Module):
    """Elementary Long Short Term Memory style model which simply wraps ``nn.LSTM``.
    Not to be used for anything other than demonstration.
    """
    def __init__(self, in_dim, out_dim, depth):
        super(lstm_for_demonstration, self).__init__()
        self.lstm = nn.LSTM(in_dim, out_dim, depth)

    def forward(self, inputs, hidden):
        out, hidden = self.lstm(inputs, hidden)
        return out, hidden


torch.manual_seed(29592)  # set the seed for reproducibility

# shape parameters for the demonstration
model_dimension = 8
sequence_length = 20
batch_size = 1
lstm_depth = 1

# random input data and an initial hidden state (h_0, c_0) for the LSTM
inputs = torch.randn(sequence_length, batch_size, model_dimension)
hidden = (torch.randn(lstm_depth, batch_size, model_dimension),
          torch.randn(lstm_depth, batch_size, model_dimension))


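######################################################################
# 2: Do the Quantization
# ~~~~~~~~~~~~~~~~~~~~~~~
#
# Now you instantiate the floating point model and create a quantized
# version of it with ``torch.quantization.quantize_dynamic``, which takes
# the model, the set of module types to quantize when they are found
# (here ``nn.LSTM`` and ``nn.Linear``), and the target dtype, and returns
# a new, quantized module.
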
# here is the floating point instance
float_lstm = lstm_for_demonstration(model_dimension, model_dimension, lstm_depth)

# this single call creates the dynamically quantized version
quantized_lstm = torch.quantization.quantize_dynamic(
    float_lstm, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

# show the changes that were made
print('Here is the floating point version of this module:')
print(float_lstm)
print('')
print('and now the quantized version:')
print(quantized_lstm)


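######################################################################
# 3: Look at Model Size
# ~~~~~~~~~~~~~~~~~~~~~~
#
# Here you show that the model size gets smaller: save each model's
# ``state_dict`` to a temporary file, check the file size, and compare.
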
def print_size_of_model(model, label=""):
    # save the state dict to a temporary file and report its size on disk
    torch.save(model.state_dict(), "temp.p")
    size = os.path.getsize("temp.p")
    print("model: ", label, ' \t', 'Size (KB):', size/1e3)
    os.remove("temp.p")
    return size

# compare the sizes
f = print_size_of_model(float_lstm, "fp32")
q = print_size_of_model(quantized_lstm, "int8")
print("{0:.2f} times smaller".format(f/q))


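######################################################################
# 4: Look at Latency
# ~~~~~~~~~~~~~~~~~~~
#
# Here you run the two models and compare how long forward passes take.
# The loops below are only a rough wall-clock sketch (a single input,
# repeated a fixed number of times and timed with ``time.time``), not a
# rigorous benchmark, but they are enough to show the relative speed of
# the two models.
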
| print("Floating point FP32") |
| |
|
|
| print("Quantized INT8") |
| |


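######################################################################
# 5: Look at Accuracy
# ~~~~~~~~~~~~~~~~~~~~
#
# Here you run the two models on the same input and compare the outputs.
# Because the network was not trained (its weights are random), this only
# shows that the quantized outputs are in the same ballpark as the FP32
# outputs, not the accuracy impact on a real trained model.
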
# run the float model
out1, hidden1 = float_lstm(inputs, hidden)
mag1 = torch.mean(torch.abs(out1)).item()
print('mean absolute value of output tensor values in the FP32 model is {0:.5f}'.format(mag1))

# run the quantized model
out2, hidden2 = quantized_lstm(inputs, hidden)
mag2 = torch.mean(torch.abs(out2)).item()
print('mean absolute value of output tensor values in the INT8 model is {0:.5f}'.format(mag2))

# compare the difference between the two outputs
mag3 = torch.mean(torch.abs(out1 - out2)).item()
print('mean absolute value of the difference between the output tensors is {0:.5f} or {1:.2f} percent'.format(mag3, mag3/mag1*100))


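######################################################################
# Learn More
# ~~~~~~~~~~~
#
# This recipe applied dynamic quantization to an untrained, randomly
# initialized LSTM. For a more rigorous treatment that quantizes a
# trained model and measures the effect on accuracy, see the advanced
# dynamic quantization tutorial:
# https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html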