aimlbd / HuggingFace.Quantization_in_Depth /Transcript /9_Custom Build an 8-Bit Quantizer_HFQD.txt
WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:02.035 --> 00:00:02.936
In this lesson,
2
00:00:02.936 --> 00:00:05.872
you will leverage the tools
that you have just built in order
3
00:00:05.872 --> 00:00:09.976
to create your own quantizer to quantize
any model in eight-bit precision.
4
00:00:10.176 --> 00:00:14.080
This quantizer is modality agnostic,
meaning you can apply it on
5
00:00:14.080 --> 00:00:17.817
vision,
audio, text, and even multimodal models.
6
00:00:17.984 --> 00:00:20.987
Let's get started
and quantize some models.
7
00:00:23.089 --> 00:00:23.723
In this lesson,
8
00:00:23.723 --> 00:00:28.194
we will learn together about how to make
our own quantizer to quantize any model
9
00:00:28.194 --> 00:00:31.197
in eight-bit precision
using the per channel linear
10
00:00:31.197 --> 00:00:34.401
quantization scheme that you have seen
in the previous lesson.
11
00:00:34.634 --> 00:00:37.904
So for that, we'll break down our project
into multiple sub-steps.
12
00:00:37.971 --> 00:00:42.971
So we'll first deep dive into creating a
W8A16 linear layer class.
13
00:00:44.511 --> 00:00:48.381
And "W8" stands for eight-bit weights
and "A16"
14
00:00:48.715 --> 00:00:51.384
simply stands for 16-bit activations.
15
00:00:51.384 --> 00:00:54.487
We use this class to store
eight-bit weights and scales,
16
00:00:54.487 --> 00:00:57.557
as you have seen
in the previous lesson,
17
00:00:57.791 --> 00:01:01.628
and then we will see how we can replace
all instances
18
00:01:01.628 --> 00:01:04.964
of torch.nn.Linear layers
with that new class.
19
00:01:04.964 --> 00:01:08.668
And then we will build a quantizer
to quantize our model end to end.
20
00:01:08.668 --> 00:01:12.072
And we will test our quantizer
on many scenarios.
21
00:01:12.072 --> 00:01:16.109
And we will study the impact of eight-bit
quantization on different models.
22
00:01:16.643 --> 00:01:19.979
So let's start with the first subtask
that we have defined before.
23
00:01:20.380 --> 00:01:24.350
So building the W8A16
linear layer class.
24
00:01:24.717 --> 00:01:29.422
So for this task, we'll also break it down
into multiple subtasks.
25
00:01:29.622 --> 00:01:33.293
First of all, you will build a forward
method called w8_a16_forward
26
00:01:33.426 --> 00:01:37.597
that will take as input eight-bit weights,
27
00:01:38.765 --> 00:01:41.367
16-bit inputs,
28
00:01:41.367 --> 00:01:44.337
scales, and an optional bias.
29
00:01:44.337 --> 00:01:46.706
Once you have built this method,
the idea is to call
30
00:01:46.706 --> 00:01:49.809
that method inside the linear
layer's forward pass
31
00:01:50.844 --> 00:01:54.848
and pass
the 8 bit weights of the linear layer,
32
00:01:55.014 --> 00:01:59.752
input, the scales that are stored inside
the layer, and the optional bias as well.
33
00:02:00.487 --> 00:02:01.921
So let's get started.
34
00:02:01.921 --> 00:02:05.925
So what the W8A16 forward
method will do under the hood
35
00:02:06.126 --> 00:02:09.229
is to first cast the eight bit weights
36
00:02:09.229 --> 00:02:12.232
into the same data type as the input.
37
00:02:12.465 --> 00:02:16.336
So for example, in case
the input is in float16 or bfloat16,
38
00:02:16.803 --> 00:02:19.939
cast the weights into that precision
while keeping
39
00:02:20.406 --> 00:02:23.409
the weights into the same range as before.
40
00:02:23.676 --> 00:02:27.580
So between -128 and 127,
41
00:02:27.747 --> 00:02:31.451
we'll just cast the data type of those weights to
42
00:02:31.818 --> 00:02:35.755
half precision so that it will match the data
type of the input.
43
00:02:35.922 --> 00:02:38.892
Then we will perform the linear
operation.
44
00:02:38.892 --> 00:02:40.193
So classic matrix
45
00:02:40.193 --> 00:02:43.830
multiplication
between the input and the casted weights.
46
00:02:44.030 --> 00:02:46.900
We will multiply this result
with the scales
47
00:02:46.900 --> 00:02:49.903
of the model and optionally add the bias
48
00:02:50.103 --> 00:02:51.804
once this is done.
49
00:02:51.804 --> 00:02:52.972
So let's get started.
50
00:02:52.972 --> 00:02:57.972
So let's first import those modules that
we will use for implementing our method.
51
00:02:58.878 --> 00:03:03.878
And I'm also defining here
some random inputs: a random int8 matrix,
52
00:03:04.584 --> 00:03:07.587
random hidden states, random scales,
and random bias.
53
00:03:07.854 --> 00:03:10.590
So, typically the workflow
would be something as follows.
54
00:03:10.590 --> 00:03:13.493
So we first cast the weights
55
00:03:13.493 --> 00:03:16.229
into the same data
type as the hidden states.
56
00:03:16.229 --> 00:03:21.100
Then on top of that,
you will perform the matrix multiplication
57
00:03:21.501 --> 00:03:24.637
by calling F.linear from PyTorch.
58
00:03:26.739 --> 00:03:27.974
All right.
59
00:03:27.974 --> 00:03:31.244
And then we'll multiply that
with the scales
60
00:03:32.679 --> 00:03:35.682
and optionally add a bias term
61
00:03:35.949 --> 00:03:38.952
at the end of the operation.
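Putting those steps together, here is a minimal sketch of such a forward helper (the tensor shapes in the dummy data are illustrative assumptions, not the notebook's exact values):

```python
import torch
import torch.nn.functional as F

def w8_a16_forward(weight, input, scales, bias=None):
    # Cast the int8 weights to the input's dtype; the values stay
    # in the [-128, 127] range, only the data type changes
    casted_weights = weight.to(input.dtype)
    # Classic linear op (matmul), then rescale per output channel
    output = F.linear(input, casted_weights) * scales
    if bias is not None:
        output = output + bias
    return output

# Dummy tensors mirroring the lesson's random inputs (shapes assumed)
random_int8 = torch.randint(-128, 127, (32, 16), dtype=torch.int8)
random_hs = torch.randn(1, 16, dtype=torch.bfloat16)
scales = torch.randn(1, 32, dtype=torch.bfloat16)
bias = torch.randn(1, 32, dtype=torch.bfloat16)

out = w8_a16_forward(random_int8, random_hs, scales, bias)
```

The output keeps the input's half-precision dtype, which is exactly the W8A16 behavior: eight-bit storage, 16-bit compute.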
62
00:03:41.054 --> 00:03:42.455
Perfect.
63
00:03:42.455 --> 00:03:45.491
And notice also the shape of the weight matrix.
64
00:03:45.825 --> 00:03:50.129
So it has the shape (output dimension,
input dimension).
65
00:03:50.363 --> 00:03:53.633
When you perform the matrix multiplication
between the weight matrix
66
00:03:53.633 --> 00:03:58.633
and the input hidden states, you will get
a tensor of shape (batch size, output dimension).
67
00:03:58.805 --> 00:04:00.106
So (1, 32).
68
00:04:00.106 --> 00:04:03.009
So it's important that the scales have
69
00:04:03.009 --> 00:04:06.479
the same shape
as the output dimension of your weight matrix.
70
00:04:06.512 --> 00:04:09.849
And same comment for the bias
so that you can broadcast the operations
71
00:04:09.849 --> 00:04:14.849
between the output from here
and the scales and the whole output here
72
00:04:15.154 --> 00:04:16.456
and the bias. Perfect.
73
00:04:16.456 --> 00:04:19.459
So let's wrap everything
in a single method.
74
00:04:20.827 --> 00:04:22.228
Perfect.
75
00:04:22.228 --> 00:04:24.497
And let's also quickly try it out.
76
00:04:25.698 --> 00:04:27.734
With and without bias.
77
00:04:27.734 --> 00:04:29.969
Great. So it seems to work fine.
78
00:04:29.969 --> 00:04:32.972
So I guess we can move forward
with the next building block
79
00:04:33.039 --> 00:04:36.576
that will leverage the method
that we have just created. To continue
80
00:04:36.576 --> 00:04:38.511
building our linear layer class,
81
00:04:38.511 --> 00:04:42.382
we'll start implementing
the init method of that class.
82
00:04:42.715 --> 00:04:46.753
So recall for this linear layer
we need to store the int8 weights,
83
00:04:47.420 --> 00:04:49.956
the scales and the bias.
84
00:04:49.956 --> 00:04:53.092
Let's first start by implementing
the skeleton of the init method.
85
00:04:53.693 --> 00:04:56.596
So it has to kind of match,
86
00:04:56.596 --> 00:05:01.167
the signature of the init
method of a torch.nn.Linear layer.
87
00:05:01.167 --> 00:05:05.104
So it has to contain
input features, output features
88
00:05:05.138 --> 00:05:08.808
in order to correctly initialize
the weight matrix;
89
00:05:09.175 --> 00:05:13.479
bias, whether
the linear layer has a bias term or not,
90
00:05:13.913 --> 00:05:17.250
and data type which would correspond
to the data type of the bias.
91
00:05:17.550 --> 00:05:22.455
Because our weight matrix
will have torch.int8 as its data type.
92
00:05:22.755 --> 00:05:23.456
So here
93
00:05:23.456 --> 00:05:27.527
we're going to define our int8
weights, together with the scales
94
00:05:27.527 --> 00:05:29.395
that are going to be stored
in the linear layer.
95
00:05:29.395 --> 00:05:32.565
So if you have done any PyTorch
before this lab,
96
00:05:32.565 --> 00:05:36.035
you might be directly trying your hands on
doing something like this
97
00:05:36.035 --> 00:05:39.972
to create your int8 weights,
assigning the new attribute int8_weights
98
00:05:40.807 --> 00:05:43.242
being a parameter.
99
00:05:43.242 --> 00:05:45.144
And then maybe do something like this.
100
00:05:45.144 --> 00:05:50.016
So the issue with this approach
is that when you create
101
00:05:50.049 --> 00:05:53.052
an nn.Parameter, PyTorch expects
102
00:05:53.086 --> 00:05:56.356
to be able to compute
gradients on that parameter.
103
00:05:56.589 --> 00:05:59.859
The issue is that with PyTorch,
you can't explicitly compute gradients
104
00:06:00.526 --> 00:06:04.197
on int8 tensors, yet,
so you should get an error
105
00:06:04.197 --> 00:06:07.200
if you try just to initialize
a dummy layer
106
00:06:07.667 --> 00:06:08.835
with this approach.
107
00:06:08.835 --> 00:06:11.904
So if you try that out, you get an error
108
00:06:12.372 --> 00:06:16.242
saying "Only Tensors of floating point
and complex dtype can require gradients."
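As a quick illustration, this failure can be reproduced in a few lines (a sketch; the int8 tensor here is a dummy):

```python
import torch
import torch.nn as nn

# Wrapping an int8 tensor in nn.Parameter fails, because nn.Parameter
# sets requires_grad=True and gradients are only supported for
# floating point and complex dtypes
try:
    bad = nn.Parameter(torch.zeros(4, 8, dtype=torch.int8))
except RuntimeError as exc:
    message = str(exc)
```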
109
00:06:16.809 --> 00:06:19.812
So, the right approach to store int8
weights
110
00:06:20.046 --> 00:06:24.317
is, instead of saving the attribute as
111
00:06:24.317 --> 00:06:28.421
an nn.Parameter, to call
this method called register_buffer.
112
00:06:28.721 --> 00:06:32.291
That way instead of storing a parameter,
we just store a buffer, meaning
113
00:06:32.291 --> 00:06:35.261
we don't need to compute
gradients on the tensor,
114
00:06:35.261 --> 00:06:38.264
and you can initialize it
with whatever dtype that you want.
115
00:06:38.364 --> 00:06:42.435
So if you try that out and
initialize, it just works.
116
00:06:43.736 --> 00:06:46.272
So let's
continue designing our linear layer.
117
00:06:46.272 --> 00:06:48.875
So we have our int8 weights.
118
00:06:48.875 --> 00:06:51.878
And then we'll do the same thing
for scales
119
00:06:52.512 --> 00:06:55.381
as well by initializing
with the correct shape.
120
00:06:55.381 --> 00:06:58.518
And we're also going to call
register_buffer on scales
121
00:06:58.518 --> 00:07:02.021
because again, here, we're just expecting
to do simple inference.
122
00:07:02.054 --> 00:07:04.157
We're not interested in doing training.
123
00:07:04.157 --> 00:07:07.460
So just calling register_buffer
is sufficient.
124
00:07:07.860 --> 00:07:09.829
And then we're going to store
an optional bias.
125
00:07:09.829 --> 00:07:14.500
So if bias is set to true, we're just
storing a new buffer called bias.
126
00:07:15.067 --> 00:07:17.069
Otherwise we'll set it to none.
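A minimal sketch of what this init method can look like, with random placeholder values until the quantize method exists (the exact class name and shapes are assumptions):

```python
import torch
import torch.nn as nn

class W8A16LinearLayer(nn.Module):
    def __init__(self, in_features, out_features, bias=True, dtype=torch.float32):
        super().__init__()
        # Buffers, not nn.Parameters: int8 tensors can't require
        # gradients, and we only need inference here anyway
        self.register_buffer(
            "int8_weights",
            torch.randint(-128, 127, (out_features, in_features), dtype=torch.int8),
        )
        self.register_buffer("scales", torch.randn(out_features, dtype=dtype))
        if bias:
            self.register_buffer("bias", torch.randn(1, out_features, dtype=dtype))
        else:
            self.bias = None

# Dummy instance to check the attributes were saved correctly
dummy = W8A16LinearLayer(16, 32)
```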
127
00:07:17.069 --> 00:07:21.507
So let's quickly try that out and create
a dummy instance of a linear layer
128
00:07:21.974 --> 00:07:24.977
and see if our attributes
have been correctly saved.
129
00:07:25.611 --> 00:07:27.447
Perfect. So yeah.
130
00:07:27.447 --> 00:07:31.350
So all the expected attributes have
the expected shape.
131
00:07:31.784 --> 00:07:35.688
So (output shape, input shape) for the weights,
and (output shape,) for the scales.
132
00:07:36.255 --> 00:07:39.325
I guess we can move forward with the next task,
133
00:07:39.759 --> 00:07:42.895
which is building the
forward pass of that class.
134
00:07:43.863 --> 00:07:46.666
So we're going to copy
135
00:07:46.666 --> 00:07:48.134
what we did here.
136
00:07:48.134 --> 00:07:52.839
And we're going to call the method that
we have defined in the first sub task.
137
00:07:52.972 --> 00:07:56.709
And we're just simply going to call it on
self.int8_weights,
138
00:07:56.709 --> 00:07:59.712
self.scales, and self.bias.
139
00:07:59.779 --> 00:08:02.048
And this method will do everything
under the hood for us.
140
00:08:02.048 --> 00:08:06.118
And it will take care of casting the weights
into the correct dtype and multiplying
141
00:08:06.118 --> 00:08:09.555
everything with the scales and optionally
adding the bias to the result.
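Here is a compact, self-contained sketch of the class with that forward pass (the helper is repeated so the snippet runs on its own; the hidden-state shapes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def w8_a16_forward(weight, input, scales, bias=None):
    # Cast int8 weights to the input dtype, matmul, rescale, add bias
    output = F.linear(input, weight.to(input.dtype)) * scales
    return output if bias is None else output + bias

class W8A16LinearLayer(nn.Module):
    def __init__(self, in_features, out_features, bias=True, dtype=torch.float32):
        super().__init__()
        self.register_buffer("int8_weights",
            torch.randint(-128, 127, (out_features, in_features), dtype=torch.int8))
        self.register_buffer("scales", torch.randn(out_features, dtype=dtype))
        if bias:
            self.register_buffer("bias", torch.randn(1, out_features, dtype=dtype))
        else:
            self.bias = None

    def forward(self, input):
        return w8_a16_forward(self.int8_weights, input, self.scales, self.bias)

module = W8A16LinearLayer(16, 32)
hidden_states = torch.randn(1, 8, 16)  # (batch, seq_len, hidden)
out = module(hidden_states)
```

The output keeps batch size and sequence length, with the last dimension replaced by the output shape.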
142
00:08:09.789 --> 00:08:11.657
All right. So let's create a new module.
143
00:08:11.657 --> 00:08:14.660
Some dummy hidden states with the shape
batch size,
144
00:08:14.827 --> 00:08:17.897
sequence length, hidden shape,
which should match the input
145
00:08:17.897 --> 00:08:21.033
hidden states shape
that we have passed here.
146
00:08:24.337 --> 00:08:25.338
Perfect.
147
00:08:25.338 --> 00:08:27.707
So we still have batch
size, sequence length.
148
00:08:27.707 --> 00:08:30.042
And here, instead of input
shape, we have output shape.
149
00:08:31.677 --> 00:08:34.180
Let's also check the data type.
150
00:08:34.180 --> 00:08:36.215
So the dtype is correct: float32.
151
00:08:36.215 --> 00:08:38.351
Because we have initialized
a random tensor
152
00:08:38.351 --> 00:08:42.154
and by default PyTorch initialize
everything in torch, not Float32.
153
00:08:42.321 --> 00:08:43.089
Great.
154
00:08:43.089 --> 00:08:46.559
Now that we have a working forward pass
and a linear layer class
155
00:08:46.559 --> 00:08:51.559
that has all the needed attributes,
we need to build a quantize method
156
00:08:52.532 --> 00:08:56.435
in order to perform
the linear quantization algorithm
157
00:08:56.435 --> 00:09:00.139
that you have seen in the previous lesson,
so that the weights get correctly
158
00:09:00.840 --> 00:09:01.741
quantized.
159
00:09:01.741 --> 00:09:03.543
Because right now everything is random.
160
00:09:03.543 --> 00:09:07.413
So if you replace all the layers
with this linear layer right now,
161
00:09:07.647 --> 00:09:10.316
you'll most likely get gibberish output.
162
00:09:10.316 --> 00:09:12.385
So just to give you a better idea,
163
00:09:12.385 --> 00:09:15.821
once we have defined that quantize method,
the workflow will be the following.
164
00:09:15.821 --> 00:09:19.825
So, you have your base model
that is let's say in half precision.
165
00:09:20.092 --> 00:09:23.095
So either FP16 or BF16.
166
00:09:23.329 --> 00:09:26.999
We'll loop over all the linear
layers, replace them
167
00:09:27.300 --> 00:09:30.836
with our new linear class,
and then call quantize
168
00:09:31.404 --> 00:09:36.008
by passing the old weights in order
to quantize the old weights into int8.
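This replace-and-quantize workflow could be sketched as follows (the quantized layer here is a minimal stand-in, and the function name is an assumption; a real model would likely need to exclude some layers from replacement):

```python
import torch
import torch.nn as nn

class W8A16LinearLayer(nn.Module):
    # Minimal stand-in for the quantized layer built in this lesson
    def __init__(self, in_features, out_features, bias=True, dtype=torch.float32):
        super().__init__()
        self.register_buffer("int8_weights",
            torch.randint(-128, 127, (out_features, in_features), dtype=torch.int8))
        self.register_buffer("scales", torch.randn(out_features, dtype=dtype))
        self.bias = None

    def quantize(self, weights):
        # Per-channel abs-max linear quantization of the old weights
        w_fp32 = weights.clone().to(torch.float32)
        scales = w_fp32.abs().max(dim=-1).values / 127
        self.scales = scales.to(weights.dtype)
        self.int8_weights = torch.round(weights / scales.unsqueeze(1)).to(torch.int8)

def replace_linear_with_target_and_quantize(module, target_cls):
    # Recursively swap every nn.Linear child for the quantized layer,
    # quantizing the old weights along the way
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            new_layer = target_cls(child.in_features, child.out_features,
                                   bias=child.bias is not None,
                                   dtype=child.weight.dtype)
            new_layer.quantize(child.weight.data)
            setattr(module, name, new_layer)
        else:
            replace_linear_with_target_and_quantize(child, target_cls)

# Toy "base model" in place of a real half-precision checkpoint
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
replace_linear_with_target_and_quantize(model, W8A16LinearLayer)
```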
169
00:09:36.108 --> 00:09:38.945
So let's redefine our class again.
170
00:09:38.945 --> 00:09:41.247
And here start
thinking about the quantize method.
171
00:09:41.247 --> 00:09:46.247
So as I said the quantize method
will take the original weights as input.
172
00:09:46.752 --> 00:09:49.922
It will quantize the weights in int8
precision,
173
00:09:50.690 --> 00:09:55.690
get the scales of the quantization,
and then manually assign int8
174
00:09:55.861 --> 00:09:59.599
weights and scales to the computed
quantized weights and scales.
175
00:10:00.032 --> 00:10:02.234
So let's do that step by step.
176
00:10:02.234 --> 00:10:07.234
So first of all, I would recommend
upcasting the weights to FP32
177
00:10:08.140 --> 00:10:09.408
for stability.
178
00:10:09.408 --> 00:10:13.546
So we'll get the weights in FP32 first
and then we will use
179
00:10:13.679 --> 00:10:17.183
this simple formula
that you have seen in the previous lesson.
180
00:10:17.483 --> 00:10:21.554
So we'll first
get the absolute values of the weights.
181
00:10:22.054 --> 00:10:26.025
Get the maximum on the last dimension
and divide it by
182
00:10:26.292 --> 00:10:29.295
127 in order to get the scales.
183
00:10:29.295 --> 00:10:32.999
So we're going to assign
that to scales variable.
184
00:10:33.399 --> 00:10:37.036
And make sure that scales
has the same data type as the input weights
185
00:10:38.004 --> 00:10:41.007
by calling .to(weights.dtype).
186
00:10:41.007 --> 00:10:44.477
And to get the int8 weights,
we'll just apply the formula
187
00:10:44.477 --> 00:10:47.747
that you have seen on the previous lesson
on linear quantization.
188
00:10:48.547 --> 00:10:51.017
So this is the per-channel
189
00:10:51.017 --> 00:10:53.552
linear quantization
190
00:10:53.552 --> 00:10:55.454
as you're getting the maximum
191
00:10:55.454 --> 00:10:58.424
along the last dimension for each output channel.
192
00:10:58.724 --> 00:11:01.060
So yeah basically
this is how you get the int8 weights.
193
00:11:01.060 --> 00:11:04.163
Again, it's based on the previous lesson
unrolled.
194
00:11:04.463 --> 00:11:07.233
We're just simply assigning
195
00:11:07.233 --> 00:11:10.236
self.int8_weights and self.scales
196
00:11:10.569 --> 00:11:13.572
with these tensors.
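The core math of that quantize method, shown as a standalone sketch (the function name is mine, and the 4x8 shape is just an example):

```python
import torch

def linear_q_per_channel(weights):
    # Upcast to FP32 for a more stable scale computation
    w_fp32 = weights.clone().to(torch.float32)
    # Per-channel scale: abs-max over the last dimension, divided by 127
    scales = w_fp32.abs().max(dim=-1).values / 127
    # Match the data type of the original weights
    scales = scales.to(weights.dtype)
    # Symmetric linear quantization to int8
    int8_weights = torch.round(weights / scales.unsqueeze(1)).to(torch.int8)
    return int8_weights, scales

original_weights = torch.randn(4, 8)
int8_w, s = linear_q_per_channel(original_weights)
```

Each row's abs-max element maps to ±127, which is why those extreme values always appear in the quantized weights.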
197
00:11:14.774 --> 00:11:15.775
Perfect.
198
00:11:15.775 --> 00:11:18.778
And the forward pass will stay the same.
199
00:11:21.480 --> 00:11:23.683
Perfect.
200
00:11:23.683 --> 00:11:23.949
Okay.
201
00:11:23.949 --> 00:11:26.419
So let's try that out.
202
00:11:26.419 --> 00:11:28.454
So let's first initialize
the dummy module.
203
00:11:28.454 --> 00:11:31.724
Maybe print
the int8 weights before quantizing
204
00:11:32.024 --> 00:11:35.027
and we'll compare the results
afterwards as well.
205
00:11:35.061 --> 00:11:38.064
I might just take a smaller.
206
00:11:38.564 --> 00:11:39.965
Yeah.
207
00:11:39.965 --> 00:11:40.599
All right.
208
00:11:40.599 --> 00:11:43.302
Let's also pass some dummy random
209
00:11:44.303 --> 00:11:45.705
original weights.
210
00:11:45.705 --> 00:11:49.775
So this random matrix will act
as the original weights
211
00:11:49.775 --> 00:11:52.278
that we're going to use
to quantize our module.
212
00:11:52.278 --> 00:11:54.914
So let's call module.quantize.
213
00:11:54.914 --> 00:11:57.016
Perfect.
214
00:11:57.016 --> 00:12:00.419
So as you can see the weights tensor
are completely different.
215
00:12:00.886 --> 00:12:04.156
That's because we quantized the module
with the correct weights.
216
00:12:04.890 --> 00:12:07.426
And also
217
00:12:07.426 --> 00:12:09.028
the int8 weights
218
00:12:09.028 --> 00:12:13.666
are now between -128 and 127.
219
00:12:14.033 --> 00:12:17.670
So those values did not exist before
because the module has been
220
00:12:17.670 --> 00:12:19.305
initialized randomly.
221
00:12:19.305 --> 00:12:22.608
And since here we're performing
abs-max quantization.
222
00:12:22.875 --> 00:12:26.445
We always have these values
in the quantized int8 weights.
223
00:12:26.812 --> 00:12:27.046
Yeah.
224
00:12:27.046 --> 00:12:31.050
We can also inspect the scales
of our quantized module
225
00:12:32.118 --> 00:12:33.152
which look like this.
226
00:12:33.152 --> 00:12:36.155
I want you to quickly inspect
also the shape of scales.
227
00:12:36.722 --> 00:12:41.193
So you have a tensor of size four
which is the expected output shape.
228
00:12:42.194 --> 00:12:45.197
So we're going to do the same for
229
00:12:45.531 --> 00:12:46.432
int8 weights.
230
00:12:46.432 --> 00:12:48.167
So (4, 8).
231
00:12:48.167 --> 00:12:52.538
So if we directly multiply the two tensors
it won't work.
232
00:12:52.571 --> 00:12:55.641
So you have to add a new dimension
here in scales.
233
00:12:57.977 --> 00:12:58.844
All right.
234
00:12:58.844 --> 00:13:02.281
So let's compare
that against our original weights.
235
00:13:02.715 --> 00:13:05.518
Yeah. So as you can see,
236
00:13:05.518 --> 00:13:08.821
if you quickly look into it, the weights
look pretty close.
237
00:13:08.954 --> 00:13:12.124
We can have also a better idea
by computing the quantization error
238
00:13:12.725 --> 00:13:14.994
that can be done through this formula.
239
00:13:14.994 --> 00:13:16.695
So we're just
240
00:13:16.695 --> 00:13:18.998
subtracting both tensors.
241
00:13:18.998 --> 00:13:23.335
so the dequantized weights minus
the original weights, in absolute value.
242
00:13:23.702 --> 00:13:27.473
So this is the average quantization error
for each element
243
00:13:27.473 --> 00:13:32.077
between the original weights
and the dequantized weights.
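A small self-contained sketch of this dequantize-and-compare step (a fresh 4x8 example mirroring the lesson's shapes):

```python
import torch

# Quantize a small random matrix per channel (abs-max, as in the lesson)
original_weights = torch.randn(4, 8)
scales = original_weights.abs().max(dim=-1).values / 127
int8_weights = torch.round(original_weights / scales.unsqueeze(1)).to(torch.int8)

# Dequantize: the extra dimension on scales makes the broadcast work
# against the (4, 8) weight matrix
dequantized_weights = int8_weights.to(torch.float32) * scales.unsqueeze(1)

# Mean absolute quantization error between original and dequantized weights
quant_error = (dequantized_weights - original_weights).abs().mean()
```

The error should be small: with abs-max scaling, each element is off by at most half a quantization step (scale / 2).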
244
00:13:32.211 --> 00:13:32.745
Perfect.