aimlbd / HuggingFace.Quantization_in_Depth /Transcript /3_Get the Scale and Zero Point_HFQD.txt
WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:02.002 --> 00:00:06.573
As we saw in the notebook, the last piece
we are missing is how to determine
2
00:00:06.840 --> 00:00:11.711
the optimal s and z. To obtain the scale
and the zero point,
3
00:00:12.012 --> 00:00:16.649
we need to look at the extreme values:
r_min should map to q_min
4
00:00:16.950 --> 00:00:19.652
and r max should map to q max,
5
00:00:19.652 --> 00:00:22.655
and we get the following two equations.
6
00:00:22.655 --> 00:00:27.327
Since we have two unknowns, s
and z, we can solve these equations.
7
00:00:27.660 --> 00:00:32.660
If we subtract the first equation
from the second one, we can get the scale.
8
00:00:32.866 --> 00:00:37.103
So this equation minus
this one will give us the scale.
9
00:00:37.404 --> 00:00:38.972
And for the zero point.
10
00:00:38.972 --> 00:00:41.941
Since we always determine s,
11
00:00:41.941 --> 00:00:45.812
we just need for example,
to use the first equation and replace
12
00:00:45.812 --> 00:00:50.350
s by the value
we got before to get the zero point.
13
00:00:50.884 --> 00:00:53.987
And at the end
we end up with this specific formula.
14
00:00:54.320 --> 00:00:58.091
We also need to round the value
and cast it to the correct dtype
15
00:00:58.258 --> 00:01:01.761
since we saw that z has the same dtype
as the quantized value.
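For reference, the derivation sketched here can be written out as follows (same notation as the slides):

```latex
% From the dequantization relation r = s (q - z), map the extremes:
\begin{align}
  r_{\min} &= s\,(q_{\min} - z) \\
  r_{\max} &= s\,(q_{\max} - z)
\end{align}
% Subtracting the first equation from the second gives the scale:
\begin{equation}
  s = \frac{r_{\max} - r_{\min}}{q_{\max} - q_{\min}}
\end{equation}
% Substituting s back into the first equation gives the zero point,
% rounded and cast to the quantized dtype:
\begin{equation}
  z = \operatorname{round}\!\left(q_{\min} - \frac{r_{\min}}{s}\right)
\end{equation}
```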
16
00:01:01.928 --> 00:01:06.232
If you want to have a look at the details
of how we derived the scale
17
00:01:06.232 --> 00:01:09.269
and the zero point,
I invite you to pause the video
18
00:01:09.536 --> 00:01:12.439
and take a screenshot
at the following slides.
19
00:01:12.439 --> 00:01:15.375
So this one is for the scale derivation,
20
00:01:15.375 --> 00:01:20.375
and this one is for
21
00:01:21.915 --> 00:01:26.915
the zero point derivation.
22
00:01:29.055 --> 00:01:30.390
As you saw previously, we
23
00:01:30.390 --> 00:01:34.060
make z the same
dtype as the quantized tensor.
24
00:01:34.394 --> 00:01:39.332
For example, as an integer,
which is not the same dtype as the scale.
25
00:01:39.699 --> 00:01:42.936
The goal behind
this choice is to represent zero
26
00:01:43.236 --> 00:01:47.607
in the original range
as an integer in the quantized range.
27
00:01:47.640 --> 00:01:50.743
So thanks to that,
when you quantize the value zero,
28
00:01:51.044 --> 00:01:54.314
it will take the value
z in the quantized range
29
00:01:54.314 --> 00:01:58.251
and what is great
is that if you dequantize the value z,
30
00:01:58.551 --> 00:02:00.787
it will become zero again.
31
00:02:00.787 --> 00:02:03.690
Now let's have a quick
look at how we calculate the scale
32
00:02:03.690 --> 00:02:08.261
and the zero point on this example
that you saw in the previous slides.
33
00:02:09.062 --> 00:02:11.364
So first, we need to get
34
00:02:11.364 --> 00:02:14.834
the maximum
and minimum range of the original tensor.
35
00:02:14.834 --> 00:02:19.834
So we have -184 and 728.6.
36
00:02:21.040 --> 00:02:24.010
So this is the maximum value
and this is the minimum value.
37
00:02:24.010 --> 00:02:28.915
And for the range of the quantized values,
since we are quantizing to torch.int8,
38
00:02:28.948 --> 00:02:33.948
in int8, the minimum value is -128
and the maximum value is 127.
39
00:02:34.787 --> 00:02:38.258
So if you take the formula
we learned before, you get that
40
00:02:38.258 --> 00:02:41.261
the scale is equal to 3.58
41
00:02:41.294 --> 00:02:44.731
and the zero point is equal to -77.
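As a quick arithmetic check of this example (plain Python, using the numbers from the slide):

```python
# Extremes of the original tensor from the example, and the int8 range.
r_min, r_max = -184.0, 728.6
q_min, q_max = -128, 127

# Scale and zero point, following the formulas above.
scale = (r_max - r_min) / (q_max - q_min)
zero_point = int(round(q_min - r_min / scale))

print(round(scale, 2))  # 3.58
print(zero_point)       # -77
```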
42
00:02:44.964 --> 00:02:45.999
The last case
43
00:02:45.999 --> 00:02:50.603
we need to figure out is what happens
when the zero point is out of range.
44
00:02:50.737 --> 00:02:55.542
For example, since we need to cast
z to the quantized datatype,
45
00:02:55.575 --> 00:02:59.345
such as int8,
what should we do when z is out of range?
46
00:02:59.712 --> 00:03:04.050
So if z is less than q_min,
we set z equal to q_min,
47
00:03:04.083 --> 00:03:09.083
and if z is greater than q_max,
we set z equal to q_max.
48
00:03:09.389 --> 00:03:12.325
So this way
we don't have overflow and underflow.
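That clamping logic can be sketched as a small helper (the function name here is illustrative, not from the course notebook):

```python
def clamp_zero_point(zero_point: float, q_min: int, q_max: int) -> int:
    """Clamp z into [q_min, q_max] so it fits the quantized dtype;
    otherwise round it and cast it to an integer."""
    if zero_point < q_min:
        return q_min   # would underflow the quantized dtype
    if zero_point > q_max:
        return q_max   # would overflow the quantized dtype
    return int(round(zero_point))

print(clamp_zero_point(-300.0, -128, 127))  # -128
print(clamp_zero_point(-76.6, -128, 127))   # -77
```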
49
00:03:12.325 --> 00:03:16.596
Now we have everything to code
how to get the scale and the zero point.
50
00:03:16.696 --> 00:03:17.830
Let's do that.
51
00:03:17.830 --> 00:03:19.632
And don't worry about the slide.
52
00:03:19.632 --> 00:03:21.467
We'll code it directly in the notebook.
53
00:03:22.835 --> 00:03:25.838
Now, let's
get the scale and the zero point.
54
00:03:25.972 --> 00:03:27.707
Let's first start with the scale.
55
00:03:27.707 --> 00:03:30.810
As you saw in the formula,
we need r_max, r_min,
56
00:03:31.044 --> 00:03:35.481
q_max, and q_min. We already saw
how to get q_max and q_min.
57
00:03:35.715 --> 00:03:37.951
So, I'll just copy-paste the code.
58
00:03:37.951 --> 00:03:40.787
So q_min will be equal to the minimum
59
00:03:40.787 --> 00:03:44.991
value of the torch.int8 info (torch.iinfo).
60
00:03:45.858 --> 00:03:49.229
And the same for q_max
where we have max here.
61
00:03:49.862 --> 00:03:52.865
And as you saw in the example
q_min should be equal to
62
00:03:53.099 --> 00:03:56.803
minus 128 and q_max should be equal to 127
63
00:03:57.136 --> 00:04:00.139
Let's have a look.
64
00:04:03.176 --> 00:04:05.712
And we indeed have the same results.
65
00:04:05.712 --> 00:04:08.715
Now we need to get r_min and r_max,
66
00:04:08.848 --> 00:04:12.185
To get the minimum value of the tensor,
67
00:04:12.185 --> 00:04:14.621
we can just use the min method.
68
00:04:14.621 --> 00:04:18.358
And we also need to call item
to get the value and not the tensor.
69
00:04:22.228 --> 00:04:24.864
As you can see here we have the tensor.
70
00:04:24.864 --> 00:04:28.635
But we also need to call
item to get only the value.
71
00:04:29.869 --> 00:04:32.472
We do the same thing for r_max,
72
00:04:32.472 --> 00:04:35.475
but this time we can get the maximum value
73
00:04:35.475 --> 00:04:38.478
by calling max.
74
00:04:40.947 --> 00:04:43.750
Now we have everything to get the scale.
75
00:04:43.750 --> 00:04:47.920
As we said earlier, the scale is equal to
76
00:04:48.855 --> 00:04:51.858
(r_max-r_min)
77
00:04:54.460 --> 00:04:57.463
/(q_max-q_min).
78
00:05:04.404 --> 00:05:06.973
And if you remember the example
we just saw before,
79
00:05:06.973 --> 00:05:10.476
we have the right scale around 3.58.
80
00:05:11.244 --> 00:05:14.247
As you can see here.
81
00:05:14.947 --> 00:05:17.950
Now let's get the zero point.
82
00:05:18.251 --> 00:05:19.652
To get the zero point.
83
00:05:19.652 --> 00:05:21.988
We just use the formula.
84
00:05:21.988 --> 00:05:26.988
So zero_point = q_min -
(r_min / scale).
85
00:05:31.898 --> 00:05:33.900
And let's have a look at the zero point.
86
00:05:35.635 --> 00:05:38.638
We have around -76.5.
87
00:05:38.638 --> 00:05:42.909
So we need to round it and cast it to int.
88
00:05:43.376 --> 00:05:48.376
And we get that the zero point is equal
to -77.
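Pulled together, the notebook cell narrated above amounts to something like this (the tensor values are an illustrative stand-in with the same extremes, -184 and 728.6, as the example):

```python
import torch

# Stand-in for the notebook's test tensor (same min/max as the example).
test_tensor = torch.tensor([[191.6, -13.5, 728.6],
                            [92.14, 295.5, -184.0],
                            [0.0, 684.6, 245.5]])

# Range of the quantized dtype via torch.iinfo.
q_min = torch.iinfo(torch.int8).min   # -128
q_max = torch.iinfo(torch.int8).max   # 127

# Range of the original tensor; .item() extracts a Python number, not a tensor.
r_min = test_tensor.min().item()
r_max = test_tensor.max().item()

scale = (r_max - r_min) / (q_max - q_min)        # around 3.58
zero_point = int(round(q_min - r_min / scale))   # around -76.6, rounds to -77
```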
89
00:05:48.715 --> 00:05:50.116
As we saw before,
90
00:05:50.116 --> 00:05:54.954
if the zero point is less than q_min,
we set it to q_min.
91
00:05:54.954 --> 00:05:59.392
And if the zero point is greater than
q_max, we set it to q_max.
92
00:05:59.425 --> 00:06:03.062
Now let's define the general function
to get the scale and the zero point.
93
00:06:03.296 --> 00:06:06.299
We'll call it "get q scale and zero point."
94
00:06:06.399 --> 00:06:09.602
This function takes two arguments:
the tensor
95
00:06:10.103 --> 00:06:12.638
and the dtype.
96
00:06:12.638 --> 00:06:17.343
And we'll set it to torch.int8 by default.
97
00:06:19.445 --> 00:06:21.013
As we saw before,
98
00:06:21.013 --> 00:06:24.784
we need to define the q_min and the q_max.
99
00:06:25.184 --> 00:06:29.088
Then we need to define the r_min
and the r_max of the tensor.
100
00:06:31.657 --> 00:06:34.560
We then define the scale
101
00:06:34.560 --> 00:06:37.563
and the zero point.
102
00:06:37.997 --> 00:06:39.665
For the zero point.
103
00:06:39.665 --> 00:06:41.901
As we saw in the slide.
104
00:06:41.901 --> 00:06:44.604
There are three cases.
105
00:06:44.604 --> 00:06:47.306
In case
the zero point is less than q_min,
106
00:06:47.306 --> 00:06:50.309
We set the zero point to be equal to
q_min.
107
00:06:53.146 --> 00:06:56.149
If the zero point is greater than q_max,
108
00:06:56.849 --> 00:06:59.819
We set it to q_max.
109
00:07:00.186 --> 00:07:03.556
And in the last case, we just round it
110
00:07:03.556 --> 00:07:06.559
and cast it to an integer.
111
00:07:07.293 --> 00:07:10.296
And we just return the scale
and the zero point.
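A minimal sketch of this function as narrated (the notebook's own version may differ in its details):

```python
import torch

def get_q_scale_and_zero_point(tensor, dtype=torch.int8):
    """Compute the scale and zero point for linear quantization of `tensor`."""
    q_min, q_max = torch.iinfo(dtype).min, torch.iinfo(dtype).max
    r_min, r_max = tensor.min().item(), tensor.max().item()

    scale = (r_max - r_min) / (q_max - q_min)
    zero_point = q_min - (r_min / scale)

    # Three cases: clamp when out of range, otherwise round and cast to int.
    if zero_point < q_min:
        zero_point = q_min
    elif zero_point > q_max:
        zero_point = q_max
    else:
        zero_point = int(round(zero_point))

    return scale, zero_point
```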
112
00:07:11.464 --> 00:07:16.269
Now let's test this general function
with the test tensor
113
00:07:16.269 --> 00:07:19.272
we defined earlier.
114
00:07:19.906 --> 00:07:21.174
You can see that
115
00:07:21.174 --> 00:07:25.945
indeed we get the same scale
and the same zero point
116
00:07:26.045 --> 00:07:29.048
as the one we saw in the lecture
and before.
117
00:07:29.048 --> 00:07:34.048
Now, using this new scale and new zero
point, let's quantize our tensor.
118
00:07:34.754 --> 00:07:39.559
So we will call the linear
q with scale and zero point function
119
00:07:40.660 --> 00:07:43.396
by passing
the new scale and the new zero point.
120
00:07:43.396 --> 00:07:47.333
So the quantized tensor is equal
to this function
121
00:07:47.333 --> 00:07:50.336
where we pass this time
122
00:07:50.336 --> 00:07:53.339
the test tensor.
123
00:07:54.340 --> 00:07:56.776
But with the new scale
124
00:07:56.776 --> 00:07:59.779
and the new zero point.
125
00:08:02.849 --> 00:08:04.050
And as we did earlier.
126
00:08:04.050 --> 00:08:07.954
we also need to dequantize
our tensor to compare
127
00:08:07.954 --> 00:08:09.555
with the original tensor.
128
00:08:09.555 --> 00:08:13.092
So we call the linear
dequantization function
129
00:08:13.092 --> 00:08:16.128
where we pass the quantized tensor and
130
00:08:17.430 --> 00:08:20.433
the new scale and the new zero point.
131
00:08:20.500 --> 00:08:23.369
To have a summary of what we just did.
132
00:08:23.369 --> 00:08:28.241
Let's call the plot quantization
error function with the test tensor,
133
00:08:28.274 --> 00:08:31.277
the quantized tensor
and the dequantized tensor.
134
00:08:34.113 --> 00:08:35.882
And as you can see this time,
135
00:08:35.882 --> 00:08:39.552
the original tensor and the dequantized
tensor are very similar,
136
00:08:39.819 --> 00:08:43.923
and the quantization error
tensor also looks much better.
137
00:08:44.090 --> 00:08:45.458
Now let's also have a look
138
00:08:45.458 --> 00:08:48.961
at the quantization error
to see if it has decreased a lot or not.
139
00:08:49.795 --> 00:08:52.798
So if you remember well,
to get the quantization error,
140
00:08:53.199 --> 00:08:56.302
you subtract the dequantized tensor
and the test tensor.
141
00:08:56.569 --> 00:08:58.337
We take the square, and we take the mean.
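In other words, the quantization error used here is a mean squared error; a minimal sketch:

```python
import torch

def quantization_error(original, dequantized):
    """Mean squared error between the original and dequantized tensors."""
    return (dequantized - original).square().mean()

# e.g. one entry off by 0.5 out of two contributes 0.5**2 / 2 = 0.125
err = quantization_error(torch.tensor([1.0, 2.0]), torch.tensor([1.5, 2.0]))
print(err.item())  # 0.125
```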
142
00:09:00.540 --> 00:09:01.407
And this time, as
143
00:09:01.407 --> 00:09:04.410
you can see, compared with the
144
00:09:05.244 --> 00:09:08.247
quantization error of around 170.
145
00:09:08.581 --> 00:09:11.717
Now we only have a quantization
error of around one.
146
00:09:12.118 --> 00:09:16.889
Now let's put everything inside
a linear quantization function
147
00:09:17.490 --> 00:09:21.694
that will only take a tensor
and will return to you
148
00:09:21.694 --> 00:09:25.197
the quantized tensor,
the scale and the zero point.
149
00:09:25.298 --> 00:09:28.301
So we defined the linear quantization
function.
150
00:09:28.501 --> 00:09:31.470
It takes as input a tensor
151
00:09:31.470 --> 00:09:36.470
and a dtype that we will set to torch.int8
by default.
152
00:09:37.109 --> 00:09:42.109
In this function, we will use
the two functions that we coded before.
153
00:09:42.148 --> 00:09:45.851
So the get q scale and zero point
function, to get the scale
154
00:09:45.851 --> 00:09:48.854
and the zero point.
155
00:09:49.422 --> 00:09:51.457
So we just call that function
156
00:09:51.457 --> 00:09:54.460
and we just pass the tensor.
157
00:09:57.396 --> 00:10:00.399
And also the d-type.
158
00:10:02.501 --> 00:10:05.571
Then after getting the scale
and the zero point
159
00:10:05.571 --> 00:10:09.241
we can perform
the quantization of the tensor.
160
00:10:09.575 --> 00:10:11.877
So we will get the quantized tensor.
161
00:10:12.878 --> 00:10:14.814
by using the linear q
162
00:10:14.814 --> 00:10:17.817
scale and zero point function
we coded before
163
00:10:18.417 --> 00:10:21.420
where we passed the tensor and
164
00:10:22.088 --> 00:10:23.422
the scale,
165
00:10:23.422 --> 00:10:26.425
the zero point,
166
00:10:27.760 --> 00:10:30.262
and the d-type.
167
00:10:30.262 --> 00:10:32.465
We just return the quantized tensor,
the scale
the scale
168
00:10:32.465 --> 00:10:35.468
and the zero point.
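Putting the pieces together, a self-contained sketch of linear_quantization, re-stating the two helpers narrated in this lesson so the snippet runs on its own (details may differ from the notebook):

```python
import torch

def get_q_scale_and_zero_point(tensor, dtype=torch.int8):
    """Scale and zero point from the extremes of the tensor and the dtype."""
    q_min, q_max = torch.iinfo(dtype).min, torch.iinfo(dtype).max
    r_min, r_max = tensor.min().item(), tensor.max().item()
    scale = (r_max - r_min) / (q_max - q_min)
    zero_point = min(max(q_min - r_min / scale, q_min), q_max)  # clamp into range
    return scale, int(round(zero_point))

def linear_q_with_scale_and_zero_point(tensor, scale, zero_point, dtype=torch.int8):
    """Quantize: round(r / scale + z), clamped to the dtype range, then cast."""
    q_min, q_max = torch.iinfo(dtype).min, torch.iinfo(dtype).max
    rounded = torch.round(tensor / scale + zero_point)
    return rounded.clamp(q_min, q_max).to(dtype)

def linear_quantization(tensor, dtype=torch.int8):
    """Quantize a tensor end to end; returns (quantized_tensor, scale, zero_point)."""
    scale, zero_point = get_q_scale_and_zero_point(tensor, dtype=dtype)
    quantized_tensor = linear_q_with_scale_and_zero_point(
        tensor, scale, zero_point, dtype=dtype)
    return quantized_tensor, scale, zero_point

r_tensor = torch.randn((4, 4))  # random 4x4 tensor, as in the lesson
quantized_tensor, scale, zero_point = linear_quantization(r_tensor)
```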
169
00:10:37.403 --> 00:10:38.371
Now let's play
170
00:10:38.371 --> 00:10:41.607
with this linear
quantizer on a random matrix.
171
00:10:42.208 --> 00:10:45.177
So we'll define a tensor.
172
00:10:47.013 --> 00:10:48.748
Which will take random values.
173
00:10:48.748 --> 00:10:51.717
And it will be of size 4x4.
174
00:10:53.819 --> 00:10:55.121
As you can see,
175
00:10:55.121 --> 00:10:58.391
we do have a random tensor of size 4x4.
176
00:10:59.258 --> 00:11:01.627
And we can just call the linear
177
00:11:01.627 --> 00:11:04.664
quantization function on our random tensor
178
00:11:04.764 --> 00:11:08.034
to get the quantized tensor, the scale
and the zero point.
179
00:11:09.168 --> 00:11:12.171
Let's have a look at the quantized tensor.
180
00:11:13.005 --> 00:11:13.873
As you can see,
181
00:11:13.873 --> 00:11:18.873
the tensor was quantized
and we also have the following values
182
00:11:20.346 --> 00:11:25.346
for the scale and the zero point. To have
a summary of the quantization process,
183
00:11:25.751 --> 00:11:30.156
let's also dequantize the tensor
by calling the linear dequantization
184
00:11:30.856 --> 00:11:35.494
and by passing the quantized tensor,
the scale and the zero point.
185
00:11:39.231 --> 00:11:42.568
And we can use the plot quantization
error function
186
00:11:42.568 --> 00:11:46.138
to have the summary
of the quantization process.
187
00:11:47.306 --> 00:11:49.709
We passed the random tensor,
188
00:11:49.709 --> 00:11:52.678
the quantized tensor,
and the dequantized tensor.
189
00:11:53.546 --> 00:11:56.549
Oh, and as you can see,
190
00:11:57.416 --> 00:11:59.452
the original tensor here
191
00:11:59.452 --> 00:12:03.689
is pretty much the same
as the dequantized tensor,
192
00:12:03.723 --> 00:12:07.626
and the quantization error tensor
is very small.
193
00:12:09.929 --> 00:12:11.797
And we can also print
194
00:12:11.797 --> 00:12:14.800
the quantization error.
195
00:12:15.201 --> 00:12:18.204
Which is also pretty low.
196
00:12:18.838 --> 00:12:20.706
And now I invite you to pause
197
00:12:20.706 --> 00:12:24.210
the video
and try to play with this quantization
198
00:12:24.210 --> 00:12:27.713
with your own inputs
and see how it performs.
199
00:12:28.013 --> 00:12:30.649
In the next lesson,
we will dive deeper into linear
200
00:12:30.649 --> 00:12:33.786
quantization
by learning its symmetric variant.
201
00:12:33.786 --> 00:12:38.290
And we will also look into quantization
granularity, such as per tensor,
202
00:12:38.457 --> 00:12:41.694
per channel and group quantization.
203
00:12:42.061 --> 00:12:47.061
Finally, we will also look at how to
perform inference with quantized models.