aimlbd / HuggingFace.Quantization_in_Depth /Transcript /10_Replace PyTorch layers with Quantized Layers_HFQD.txt
WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:01.968 --> 00:00:03.069
We have now all
2
00:00:03.069 --> 00:00:06.072
our building blocks
to build our quantizer.
3
00:00:06.272 --> 00:00:10.310
So the quantizer is going to be
a quantization pipeline that will,
4
00:00:11.177 --> 00:00:15.382
iterate over
all linear modules of your original model
5
00:00:15.915 --> 00:00:19.419
and replace them with our new W8A16,
6
00:00:19.519 --> 00:00:24.519
linear layer module, and call quantize
using the original weights.
7
00:00:24.924 --> 00:00:26.993
Yeah. So let's do that step by step.
8
00:00:26.993 --> 00:00:31.993
So let's first build
a method called replace linear with target
9
00:00:33.266 --> 00:00:37.070
that is going to loop over the model,
identify
10
00:00:37.437 --> 00:00:40.840
the modules that are an instance of
11
00:00:40.840 --> 00:00:43.910
torch.nn.Linear
and replace them with the new module.
12
00:00:44.344 --> 00:00:44.611
Yeah.
13
00:00:44.611 --> 00:00:46.913
So this is going to be
the signature of our method.
14
00:00:46.913 --> 00:00:49.916
So it's going to take a module
or also model.
15
00:00:49.983 --> 00:00:52.685
But since the method
is going to be recursive
16
00:00:52.685 --> 00:00:55.021
I decided to call it module so that yeah
17
00:00:55.021 --> 00:00:58.258
it's clear that you can pass a model,
but you can also pass a module.
18
00:00:58.525 --> 00:01:03.196
Target class is, yeah, the target
class, the new class that you're going
19
00:01:03.196 --> 00:01:08.196
to set in replacement of the linear
layer. And module name to exclude
20
00:01:08.401 --> 00:01:13.206
is the name of the module that we're going
to exclude in this replacement logic.
21
00:01:13.239 --> 00:01:16.509
So we're going to see later
for language models that usually it's
22
00:01:16.509 --> 00:01:20.380
better to keep the last module
unquantized for better results.
23
00:01:20.380 --> 00:01:23.883
So this is going to be useful for
you know, these specific use cases.
24
00:01:24.084 --> 00:01:27.087
So we're going to simply loop over
25
00:01:27.153 --> 00:01:29.589
the module's named children.
26
00:01:29.589 --> 00:01:33.059
And if the sub module is an instance of
an nn linear
27
00:01:33.760 --> 00:01:37.630
and you don't have any name
that matches the names
28
00:01:37.630 --> 00:01:40.633
that are inside
the module name to exclude,
29
00:01:41.501 --> 00:01:44.504
then we're going to move forward
with the module replacement.
30
00:01:44.671 --> 00:01:48.441
So we're going to get the bias
of the sub module here,
31
00:01:48.475 --> 00:01:52.078
because we're going to use it
to create our new target class.
32
00:01:52.445 --> 00:01:54.247
And then we can create our new module.
33
00:01:59.519 --> 00:02:02.455
Which is going to be target class of.
34
00:02:02.455 --> 00:02:06.793
So in features, out features should be
the same as the linear layer's,
35
00:02:06.793 --> 00:02:09.395
the original layer's ones. For the bias,
36
00:02:09.395 --> 00:02:12.632
We're just simply going to check
if old bias is not None.
37
00:02:13.266 --> 00:02:17.070
Then we're going to use the same data
type as the submodules weight.
38
00:02:17.770 --> 00:02:22.108
And we're going to call set
attribute on the parent module.
39
00:02:22.442 --> 00:02:25.512
We're going to replace
the current attribute of the module
40
00:02:25.512 --> 00:02:28.515
by calling set attribute module name.
41
00:02:28.948 --> 00:02:32.819
Because name gives you
the name of the current attribute
42
00:02:32.819 --> 00:02:35.822
we're going to modify and then new module.
43
00:02:36.122 --> 00:02:39.993
So this is simply going to replace
the parent modules attributes
44
00:02:39.993 --> 00:02:43.062
that has the name "name",
with the new module.
45
00:02:44.130 --> 00:02:47.133
And if the old module has a bias
46
00:02:47.333 --> 00:02:51.471
we're going to explicitly set the bias
of the new module to old bias.
47
00:02:51.738 --> 00:02:55.141
And yeah, as I said previously, we're
going to call that method recursively.
48
00:02:55.608 --> 00:02:59.012
So if we're not in this case
we're going to call that method again.
49
00:02:59.012 --> 00:03:03.416
But this time on the child module
using the same arguments.
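The walkthrough above can be sketched as follows. This is a minimal reconstruction, assuming the target class takes (in features, out features, bias, dtype) in its constructor, as described in the lesson:

```python
import torch
import torch.nn as nn

def replace_linear_with_target(module, target_class, module_name_to_exclude):
    # Loop over the immediate children; recursion handles nested modules.
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and not any(
            excluded in name for excluded in module_name_to_exclude
        ):
            old_bias = child.bias
            # Same in/out features as the original linear layer,
            # same dtype as its weight.
            new_module = target_class(
                child.in_features,
                child.out_features,
                old_bias is not None,
                child.weight.dtype,
            )
            # Replace the parent's attribute named `name` with the new module.
            setattr(module, name, new_module)
            if old_bias is not None:
                getattr(module, name).bias = old_bias
        else:
            # Not a plain linear layer (or excluded): recurse into the child.
            replace_linear_with_target(child, target_class, module_name_to_exclude)
```

The exact constructor signature of your W8A16 layer may differ; adjust the target_class call accordingly.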
50
00:03:03.850 --> 00:03:06.252
Okay.
So let's try this method out.
51
00:03:06.252 --> 00:03:10.790
So we're going to create a
dummy module for testing purposes.
52
00:03:11.491 --> 00:03:12.192
Yeah with two
53
00:03:12.192 --> 00:03:15.195
linear layers, one language model head,
54
00:03:15.361 --> 00:03:18.264
which is usually the last module
in a transformer model.
55
00:03:18.264 --> 00:03:21.000
Since the method modifies
the model in place,
56
00:03:21.000 --> 00:03:23.036
we're going to create two new models.
57
00:03:23.036 --> 00:03:28.036
So one where we're going to test out
the module name to exclude feature,
58
00:03:28.675 --> 00:03:29.342
and the other one
59
00:03:29.342 --> 00:03:33.513
which is just going to replace all linear
layer instances with the new one.
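A dummy model along those lines might look like this. The layer names (linear_1, linear_2, lm_head) are illustrative, not taken from the lesson notebook:

```python
import torch.nn as nn

class DummyModel(nn.Module):
    # Toy model: two hidden linear layers plus an "lm_head",
    # mimicking the final projection of a transformer model.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 16)
        self.linear_1 = nn.Linear(16, 16)
        self.linear_2 = nn.Linear(16, 16, bias=False)
        self.lm_head = nn.Linear(16, 10, bias=False)

model_1 = DummyModel()  # exclusion test: keep lm_head unquantized
model_2 = DummyModel()  # replace every linear layer
```

The two calls from the lesson then become replace_linear_with_target(model_1, W8A16LinearLayer, ["lm_head"]) and replace_linear_with_target(model_2, W8A16LinearLayer, []), where W8A16LinearLayer is the layer built earlier in the course.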
60
00:03:33.813 --> 00:03:35.982
So let's try out the first case.
61
00:03:35.982 --> 00:03:38.451
So yeah we just have to call replace
62
00:03:38.451 --> 00:03:41.688
linear with target with model one, our target class.
63
00:03:41.988 --> 00:03:46.526
So this time we don't want to replace
the LM head with the new class.
64
00:03:47.393 --> 00:03:48.595
So perfect. It worked.
65
00:03:48.595 --> 00:03:52.131
And we were able to replace
all linear layers with new ones.
66
00:03:52.131 --> 00:03:54.234
Except for the LM head.
67
00:03:54.234 --> 00:03:56.703
And let's see what happens
if we pass an empty list.
68
00:03:57.937 --> 00:03:58.238
Yeah.
69
00:03:58.238 --> 00:04:00.740
So as expected,
70
00:04:00.740 --> 00:04:02.442
for the second case, we replaced
71
00:04:02.442 --> 00:04:05.979
all instances of linear layers
with the target class.
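To sanity-check both cases without reading the printed model, a small helper (hypothetical, not part of the lesson) can count the nn.Linear layers that survived the replacement: one for the model where "lm_head" was excluded, zero for the other.

```python
import torch.nn as nn

def count_remaining_linears(model):
    # Count the nn.Linear layers still present after replacement.
    return sum(isinstance(m, nn.Linear) for m in model.modules())
```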
72
00:04:06.412 --> 00:04:09.148
Yeah. So now let's just tweak a bit
73
00:04:09.148 --> 00:04:12.151
this method. In addition to replacing
74
00:04:13.086 --> 00:04:15.188
all linear layers with target
75
00:04:15.188 --> 00:04:18.391
class, we're
also going to quantize the new module
76
00:04:18.524 --> 00:04:21.894
once we have replaced the old module
with the new one.
77
00:04:22.262 --> 00:04:25.231
So I'm just going to copy this method
78
00:04:25.565 --> 00:04:28.501
and slightly modify it
79
00:04:28.501 --> 00:04:31.170
in order to quantize
the new module as well.
80
00:04:31.170 --> 00:04:34.140
So here
we can also retrieve the old weight.
81
00:04:38.311 --> 00:04:38.778
Perfect.
82
00:04:38.778 --> 00:04:42.015
So I think the quantization should happen
here.
83
00:04:42.048 --> 00:04:44.784
Once we have replaced the module
with the new module
84
00:04:44.784 --> 00:04:48.454
we can get that module again
with get attribute module name.
85
00:04:48.721 --> 00:04:52.492
And at this point
this should return the new module
86
00:04:52.992 --> 00:04:55.928
and call quantize, passing the old weight.
87
00:05:00.066 --> 00:05:00.733
Let's also
88
00:05:00.733 --> 00:05:03.736
update the recursive function call.
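Putting those tweaks together, the quantizing variant might look like this sketch. It assumes the target class exposes a quantize(weights) method, like the W8A16 layer built in the previous lesson:

```python
import torch
import torch.nn as nn

def replace_linear_with_target_and_quantize(
    module, target_class, module_name_to_exclude
):
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and not any(
            excluded in name for excluded in module_name_to_exclude
        ):
            old_bias = child.bias
            old_weight = child.weight  # keep the original weights for quantization
            new_module = target_class(
                child.in_features,
                child.out_features,
                old_bias is not None,
                child.weight.dtype,
            )
            setattr(module, name, new_module)
            # Retrieve the freshly set module and quantize it
            # using the original weights.
            getattr(module, name).quantize(old_weight)
            if old_bias is not None:
                getattr(module, name).bias = old_bias
        else:
            # Updated recursive call to the quantizing variant.
            replace_linear_with_target_and_quantize(
                child, target_class, module_name_to_exclude
            )
```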
89
00:05:04.637 --> 00:05:07.707
So let's try out again
just to see if it works.
90
00:05:08.041 --> 00:05:11.044
Using a new dummy model.
91
00:05:13.813 --> 00:05:14.180
Perfect.
92
00:05:14.180 --> 00:05:17.183
So yeah, it seems that it worked.
93
00:05:17.784 --> 00:05:18.084
Great.