aimlbd / HuggingFace.Quantization_in_Depth / Transcript / 2_Quantize and De-quantize a Tensor_HFQD.txt
WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:02.135 --> 00:00:06.673
In this lesson, you will dive deep
into the theory of linear quantization.
2
00:00:07.107 --> 00:00:11.778
You will implement from scratch the
asymmetric variant of linear quantization.
3
00:00:12.212 --> 00:00:15.749
You will also learn about
the scaling factor and the zero point.
4
00:00:16.149 --> 00:00:19.152
Let's get started.
5
00:00:20.487 --> 00:00:22.522
Quantization refers to the process
6
00:00:22.522 --> 00:00:25.792
of mapping a large
set to a smaller set of values.
7
00:00:25.825 --> 00:00:28.194
There are many quantization techniques.
8
00:00:28.194 --> 00:00:31.931
In this course,
we will focus only on linear quantization.
9
00:00:32.265 --> 00:00:34.300
Let's have a look at an example.
10
00:00:34.300 --> 00:00:38.605
On your left, you can see
the original tensor in torch.float32.
11
00:00:39.072 --> 00:00:42.175
And we have the quantized tensor
on the right.
12
00:00:42.509 --> 00:00:46.012
The quantized tensor is stored
in torch.int8,
13
00:00:46.613 --> 00:00:49.749
and we use linear quantization
to get this tensor.
14
00:00:49.783 --> 00:00:53.953
We will see in this lesson
how we get this quantized tensor.
15
00:00:54.254 --> 00:00:57.924
But also
how do we get back to the original tensor.
16
00:00:58.224 --> 00:01:02.562
Let's have a quick recap on what
we can quantize in a neural network.
17
00:01:02.662 --> 00:01:05.665
In a neural network
you can quantize the weights.
18
00:01:05.665 --> 00:01:08.301
That is to say,
the neural network parameters.
19
00:01:08.301 --> 00:01:11.704
But you can also
quantize the activations.
20
00:01:12.705 --> 00:01:13.873
The activations are
21
00:01:13.873 --> 00:01:17.243
values that propagate
through the layers of the neural network.
22
00:01:17.243 --> 00:01:21.714
And if you quantize a neural network
after it has been trained,
23
00:01:22.315 --> 00:01:25.585
you are doing something called
post-training quantization.
24
00:01:26.119 --> 00:01:29.055
There are multiple advantages
of quantization.
25
00:01:29.055 --> 00:01:33.259
Of course you get a smaller model,
but you can also get speed gains
26
00:01:33.560 --> 00:01:38.560
from reduced memory bandwidth
and faster operations, such as matrix-
27
00:01:39.132 --> 00:01:42.836
to-matrix multiplication
and matrix-to-vector multiplication.
28
00:01:43.002 --> 00:01:46.339
We will see why it is
the case in the next lesson
29
00:01:46.339 --> 00:01:50.343
when we talk about how to perform
inference with a quantized model.
30
00:01:50.543 --> 00:01:53.513
There are many challenges to quantization.
31
00:01:53.646 --> 00:01:58.284
We will deep dive into these challenges
in the last lesson of this short course.
32
00:01:58.651 --> 00:02:02.422
But now I'm going to give you a quick
preview of these challenges.
33
00:02:02.689 --> 00:02:05.925
Now, let's jump on the theory of linear
quantization.
34
00:02:05.992 --> 00:02:10.597
Linear quantization uses a linear
mapping to map the higher precision range.
35
00:02:10.597 --> 00:02:15.597
For example, from float32 to a lower
precision range such as int8.
36
00:02:16.402 --> 00:02:19.772
There are two parameters
in linear quantization.
37
00:02:19.973 --> 00:02:23.443
We have the scale s and the zero point z.
38
00:02:24.010 --> 00:02:28.181
The scale is stored in the same data
type as the original tensor,
39
00:02:28.548 --> 00:02:32.719
and z is stored in the same datatype
as the quantized tensor.
40
00:02:32.952 --> 00:02:35.722
We will see why in the next few slides.
41
00:02:35.722 --> 00:02:37.957
Now let's check a quick example.
42
00:02:37.957 --> 00:02:41.995
Let's say the scale is equal to two
and the zero point is equal to zero.
43
00:02:42.328 --> 00:02:46.232
If we have a quantized value of ten,
the dequantized value
44
00:02:46.266 --> 00:02:50.069
would be equal to 2(q-0),
45
00:02:50.403 --> 00:02:54.274
which will be equal to 2*10,
which will be equal to 20.
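As a quick numeric check of the dequantization formula r = s(q - z), here is the example above in plain Python:

```python
# Dequantization formula r = s * (q - z), with the example values
# from above: scale s = 2, zero point z = 0, quantized value q = 10.
s, z, q = 2, 0, 10
r = s * (q - z)
print(r)  # 20
```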
46
00:02:54.741 --> 00:02:56.743
If we look at the example
47
00:02:56.743 --> 00:03:00.547
we presented in the first few slides,
we would have something like this:
48
00:03:01.748 --> 00:03:06.186
So, here we have the original tensor.
49
00:03:06.219 --> 00:03:08.388
We have the quantized tensor here.
50
00:03:08.388 --> 00:03:12.091
And the zero point is equal to -77.
51
00:03:12.625 --> 00:03:15.995
And the scale is equal to 3.58.
52
00:03:16.529 --> 00:03:19.732
We will see how we get the zero point
and the scale
53
00:03:20.133 --> 00:03:21.534
in the next few slides.
54
00:03:21.534 --> 00:03:23.870
But first, we have the original tensor
55
00:03:23.870 --> 00:03:25.605
and we need to quantize this tensor.
56
00:03:25.605 --> 00:03:28.975
So, how do we get q? If you remember well,
57
00:03:28.975 --> 00:03:32.879
the relationship is r=s(q-z).
58
00:03:33.346 --> 00:03:36.749
So how do we get q?
To get the quantized tensor
59
00:03:36.749 --> 00:03:40.987
we just need to isolate q
and we get the following formula.
60
00:03:41.120 --> 00:03:45.959
So, in order to get the quantized tensor,
as I said before, you need to isolate q.
61
00:03:46.159 --> 00:03:49.996
So first, we have r=s(q-z).
62
00:03:50.530 --> 00:03:55.530
We need to move s to the left side
by dividing both sides by s.
63
00:03:56.369 --> 00:03:59.973
Then we put the zero point
on the other side
64
00:04:00.139 --> 00:04:04.043
by adding a z on this side
and on this side.
65
00:04:04.677 --> 00:04:06.746
So we get the following results.
66
00:04:06.746 --> 00:04:10.316
As you know, the quantized tensor is in
67
00:04:10.316 --> 00:04:14.153
a specific dtype,
which can be eight-bit integers.
68
00:04:14.787 --> 00:04:17.023
So we need to round that number.
69
00:04:17.023 --> 00:04:22.023
And the last step would be to cast this
value to the correct dtype, such as int8.
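Put together, the three steps just described (scale and shift, round, clamp and cast) can be sketched for a single value; the numbers reuse the earlier example with s = 2 and z = 0:

```python
import torch

# Quantize one value following the steps above:
# q = clamp(round(r / s + z), q_min, q_max), then cast to int8.
r = torch.tensor(20.0)  # example value to quantize
s, z = 2.0, 0           # scale and zero point from the earlier example

q = torch.round(r / s + z)             # scale, shift, round -> 10.0
q = q.clamp(-128, 127).to(torch.int8)  # clamp to int8 range, then cast
print(q)  # tensor(10, dtype=torch.int8)
```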
70
00:04:23.630 --> 00:04:27.233
Let's code that. In this classroom
the libraries have
71
00:04:27.233 --> 00:04:28.635
already been installed for you.
72
00:04:28.635 --> 00:04:32.705
But if you are running this
on your own machine, all you need to do
73
00:04:32.705 --> 00:04:36.109
is to type the following
command in order to install torch.
74
00:04:37.744 --> 00:04:40.747
Pip install torch.
75
00:04:41.714 --> 00:04:45.084
Since in this classroom
the libraries have already been installed,
76
00:04:45.118 --> 00:04:48.154
I won't be running this command,
so I will just comment it out.
77
00:04:49.355 --> 00:04:52.358
Now, all we need to do
is to import torch.
78
00:04:52.759 --> 00:04:55.094
Now, let's code the function
79
00:04:55.094 --> 00:04:58.398
that will give us the quantized tensor
80
00:04:58.731 --> 00:05:01.467
knowing the scale and the zero point.
81
00:05:01.467 --> 00:05:05.705
So, we define a function called
82
00:05:07.540 --> 00:05:10.543
linear_q_with_scale_and_zero_point
83
00:05:11.177 --> 00:05:14.247
(q stands for quantization).
84
00:05:18.584 --> 00:05:21.587
This function
will take multiple arguments.
85
00:05:21.854 --> 00:05:24.857
So we have the tensor.
86
00:05:25.892 --> 00:05:28.127
We have the scale.
87
00:05:28.127 --> 00:05:30.530
We have the zero point.
88
00:05:30.530 --> 00:05:35.034
And we also need to define the dtype,
which will be equal
89
00:05:35.268 --> 00:05:38.271
by default to torch.int8.
90
00:05:40.606 --> 00:05:43.876
So, the first step is to get the scaled
and shifted tensor.
91
00:05:44.410 --> 00:05:47.347
As you can see in the formula right here.
92
00:05:47.347 --> 00:05:50.283
So, (r/s+z).
93
00:05:52.051 --> 00:05:55.054
So we are going to first calculate that.
94
00:05:56.255 --> 00:05:58.091
So this specific tensor
95
00:05:58.091 --> 00:06:01.094
will be equal to tensor
96
00:06:01.461 --> 00:06:04.430
divided by scale,
97
00:06:05.098 --> 00:06:08.101
plus the zero point.
98
00:06:11.337 --> 00:06:12.772
We need to round the tensor.
99
00:06:12.772 --> 00:06:15.775
As you can see in the formula.
100
00:06:16.376 --> 00:06:18.311
So we will just create the variable
101
00:06:18.311 --> 00:06:21.314
rounded_tensor.
102
00:06:23.416 --> 00:06:26.419
Which will be equal to torch.round.
103
00:06:26.819 --> 00:06:29.789
Using the torch.round method,
104
00:06:29.789 --> 00:06:32.592
we round
105
00:06:32.592 --> 00:06:35.595
the tensor that we pass to it.
106
00:06:38.231 --> 00:06:41.601
And the last step
is to make sure that our rounded tensor
107
00:06:41.601 --> 00:06:45.938
is between the minimum quantized value
and the maximum quantized value.
108
00:06:46.272 --> 00:06:49.809
And then we can finally cast it
to the specified type.
109
00:06:50.076 --> 00:06:50.977
Let's do that.
110
00:06:50.977 --> 00:06:51.711
So first,
111
00:06:51.711 --> 00:06:55.848
we need to get the minimum quantized value
and the maximum quantized value.
112
00:06:56.749 --> 00:06:59.919
So to get the minimum quantized value
113
00:06:59.952 --> 00:07:02.955
we will use the torch.iinfo method.
114
00:07:03.389 --> 00:07:06.392
We will pass the dtype that we defined
115
00:07:06.959 --> 00:07:09.429
as an argument of the function.
116
00:07:09.429 --> 00:07:12.432
And to get the minimum
we just need to pass min.
117
00:07:12.765 --> 00:07:15.601
We do the same
thing for the maximum value.
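For reference, this is what torch.iinfo returns for int8:

```python
import torch

# torch.iinfo exposes the representable range of an integer dtype;
# these are the q_min and q_max used to clamp the rounded tensor.
q_min = torch.iinfo(torch.int8).min
q_max = torch.iinfo(torch.int8).max
print(q_min, q_max)  # -128 127
```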
118
00:07:18.237 --> 00:07:21.040
Now, we can define the quantized tensor
119
00:07:21.040 --> 00:07:24.043
which will be
120
00:07:26.179 --> 00:07:29.182
equal to rounded_tensor.clamp(q_min, q_max).
121
00:07:33.152 --> 00:07:36.155
And we can cast this tensor
122
00:07:36.222 --> 00:07:38.958
to the quantized dtype you want,
123
00:07:38.958 --> 00:07:41.961
such as int8.
124
00:07:43.062 --> 00:07:44.363
And the last step
125
00:07:44.363 --> 00:07:47.366
is to return the quantized tensor.
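Assembled from the steps walked through above, the function might look like this sketch (the name follows the one used in the lesson; details such as argument order are an assumption):

```python
import torch

# Sketch of the quantization function built up in this lesson:
# q = clamp(round(r / s + z), q_min, q_max), cast to the target dtype.
def linear_q_with_scale_and_zero_point(tensor, scale, zero_point,
                                       dtype=torch.int8):
    # Step 1: scale and shift, r / s + z
    scaled_and_shifted_tensor = tensor / scale + zero_point
    # Step 2: round to the nearest integer
    rounded_tensor = torch.round(scaled_and_shifted_tensor)
    # Step 3: clamp to the range of the target dtype, then cast
    q_min = torch.iinfo(dtype).min
    q_max = torch.iinfo(dtype).max
    quantized_tensor = rounded_tensor.clamp(q_min, q_max).to(dtype)
    return quantized_tensor
```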
126
00:07:48.668 --> 00:07:49.969
Now that we have coded
127
00:07:49.969 --> 00:07:52.972
our function, let's test our implementation.
128
00:07:53.139 --> 00:07:55.775
So we'll define the test tensor.
129
00:07:55.775 --> 00:08:00.775
We will define the same tensor
that you saw in the example on the slides.
130
00:08:01.881 --> 00:08:05.685
And we will assign random values
for scale and zero point.
131
00:08:05.718 --> 00:08:08.688
Since we don't know how to get them yet.
132
00:08:09.789 --> 00:08:13.759
So I'll just put the scale equal to 3.5
133
00:08:13.759 --> 00:08:16.762
and the zero point to -70.
134
00:08:17.797 --> 00:08:22.797
Then let's get our quantized tensor
by calling the
135
00:08:23.803 --> 00:08:27.139
linear_q_with_scale_and_zero_point
function that we just coded.
136
00:08:29.041 --> 00:08:31.644
And we need to pass
137
00:08:31.644 --> 00:08:34.380
the test tensor,
138
00:08:34.380 --> 00:08:37.250
the scale and the zero point
139
00:08:37.250 --> 00:08:40.219
we defined earlier.
140
00:08:42.655 --> 00:08:45.658
And now let's check the quantized tensor.
141
00:08:47.760 --> 00:08:51.063
As you can see
we managed to quantize the tensor.
142
00:08:51.063 --> 00:08:55.801
And we can see that the dtype of
the tensor is indeed torch.int8.
143
00:08:56.035 --> 00:08:59.505
So now that we have our quantized tensor,
let's dequantize it
144
00:08:59.505 --> 00:09:02.708
to see how precise the quantization is.
145
00:09:03.142 --> 00:09:08.142
So, the dequantization
formula is the one we saw in the slides.
146
00:09:09.148 --> 00:09:12.151
We have r=s(q-z).
147
00:09:12.218 --> 00:09:14.153
And we will use just that.
148
00:09:14.153 --> 00:09:19.153
So, to get the dequantized tensor
we will just do
149
00:09:19.191 --> 00:09:23.663
scale * (quantized_tensor.float() - zero_point),
because we need to cast it to a float.
150
00:09:23.729 --> 00:09:27.967
Otherwise we will get weird behaviors
with underflow and overflows.
151
00:09:28.134 --> 00:09:33.134
Since we are doing a subtraction
between two int8 integers.
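The cast matters because int8 arithmetic wraps around on overflow. A small illustration (the values here are illustrative, not the ones from the slides):

```python
import torch

# Why the .float() cast is needed: subtracting two int8 tensors stays
# in int8, and 120 - (-70) = 190 does not fit in int8, so it wraps.
q = torch.tensor(120, dtype=torch.int8)
z = torch.tensor(-70, dtype=torch.int8)

correct = q.float() - z.float()  # 190.0, computed in float
wrong = q - z                    # computed in int8: wraps around

print(correct.item(), wrong.item())
```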
152
00:09:36.475 --> 00:09:39.445
Let's check the results.
153
00:09:39.445 --> 00:09:42.315
So we get these following values.
154
00:09:42.315 --> 00:09:45.551
But let's check
what happens if we don't cast
155
00:09:45.551 --> 00:09:48.554
quantized tensor to float.
156
00:09:49.622 --> 00:09:52.625
What we will get is the following results.
157
00:09:57.229 --> 00:09:58.998
Which is not the same:
158
00:09:58.998 --> 00:10:03.998
as you can see, here we had 686,
and now we have -210.
159
00:10:05.638 --> 00:10:08.774
Now, let's put it into a function
called
160
00:10:08.841 --> 00:10:11.844
linear_dequantization.
161
00:10:12.645 --> 00:10:13.813
So, for the linear
162
00:10:13.813 --> 00:10:17.483
dequantization function
we need to put as arguments
163
00:10:17.850 --> 00:10:21.053
the quantized tensor,
the scale, and the zero point.
164
00:10:24.890 --> 00:10:27.560
And then we just need to return
165
00:10:27.560 --> 00:10:30.563
what we computed above.
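The resulting function is small; a minimal sketch, using the formula r = s(q - z) and the float cast discussed above:

```python
import torch

# Sketch of the linear_dequantization function described above:
# r = s * (q - z), casting the quantized tensor to float first
# to avoid int8 overflow in the subtraction.
def linear_dequantization(quantized_tensor, scale, zero_point):
    return scale * (quantized_tensor.float() - zero_point)
```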
166
00:10:34.066 --> 00:10:35.635
As you can see on the right,
167
00:10:35.635 --> 00:10:38.638
you have the quantization error tensor.
168
00:10:38.704 --> 00:10:41.841
For some entries, we have
pretty small values,
169
00:10:42.141 --> 00:10:44.910
which shows that the quantization worked
pretty well.
170
00:10:44.910 --> 00:10:48.648
But, as you can see here
we have also pretty big values.
171
00:10:48.948 --> 00:10:51.717
To get the quantization error tensor
172
00:10:51.717 --> 00:10:55.488
we just subtract the dequantized tensor
from the original tensor,
173
00:10:55.688 --> 00:10:59.325
and we take the absolute
value of the entire matrix.
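That computation can be sketched as follows (the dequantized values here are illustrative, not the actual ones from the lesson):

```python
import torch

# Quantization error: elementwise absolute difference between the
# original tensor and its dequantized reconstruction.
original = torch.tensor([191.6, -13.5, 728.6])
dequantized = torch.tensor([192.5, -14.0, 689.5])  # illustrative values

quantization_error = (original - dequantized).abs()
print(quantization_error)
```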
174
00:11:06.298 --> 00:11:08.034
And at the end, as you can see,
175
00:11:08.034 --> 00:11:12.004
we end up with a quantization
error of around 170.
176
00:11:12.238 --> 00:11:16.809
The error is quite high
because in this example
177
00:11:16.809 --> 00:11:20.513
we assigned random values to the scale
and zero point.
178
00:11:20.946 --> 00:11:25.751
Let's cover in the next section
how to find those optimal values.