WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:02.002 --> 00:00:02.402
The thing is.
2
00:00:02.402 --> 00:00:05.038
That's right. Now,
we tried everything on a dummy model.
3
00:00:05.038 --> 00:00:09.142
We also want to test it out
on real use cases.
4
00:00:09.909 --> 00:00:12.312
Yeah. Meaning useful models.
5
00:00:12.312 --> 00:00:16.282
So let's test out our implementation
and our quantizer right away
6
00:00:16.816 --> 00:00:19.819
on models that you can find
on Hugging Face Transformers.
7
00:00:20.587 --> 00:00:22.622
So let's get started.
8
00:00:22.622 --> 00:00:25.625
So for this demo
we're going to use this model called
9
00:00:25.625 --> 00:00:28.962
Salesforce/codegen-350M-mono.
10
00:00:29.763 --> 00:00:32.732
So this is a language model
that has been fine-tuned on code.
11
00:00:32.832 --> 00:00:36.236
And it has only 350 million parameters.
12
00:00:36.603 --> 00:00:39.339
Let's use transformers
to load the model together
13
00:00:39.339 --> 00:00:42.342
with the tokenizer
and get some generation.
14
00:00:44.711 --> 00:00:45.879
And let's use
15
00:00:45.879 --> 00:00:48.915
the text-generation pipeline
in order to generate some text.
16
00:00:48.948 --> 00:00:52.786
So we're going to load the
text-generation pipeline and pass the model.
17
00:00:52.786 --> 00:00:53.720
And the tokenizer.
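NOTE
A minimal sketch of the loading step described above, using the standard transformers APIs (the exact import style is an assumption, not shown in the video):
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_path = "Salesforce/codegen-350M-mono"
# Load the code model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Wrap both in a text-generation pipeline.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)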
18
00:00:55.088 --> 00:00:56.189
And then since
19
00:00:56.189 --> 00:00:59.959
it's a model that has been fine-tuned
on code, let's try to battle-
20
00:00:59.959 --> 00:01:03.930
test the model by giving it a code-
21
00:01:03.930 --> 00:01:07.067
completion task, and try to see if we can
22
00:01:08.268 --> 00:01:10.937
generate some coherent code
23
00:01:10.937 --> 00:01:13.873
in order to print "Hello World" in Python.
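NOTE
The code-completion call could look like this; the prompt and token budget are illustrative assumptions:
prompt = "def hello_world():"
# Greedy decoding keeps the completion deterministic.
output = pipe(prompt, max_new_tokens=25, do_sample=False)
print(output[0]["generated_text"])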
24
00:01:13.873 --> 00:01:15.942
So the "Hello World"
25
00:01:15.942 --> 00:01:18.945
print "Hello World",
which seems to be correct.
26
00:01:19.279 --> 00:01:22.582
And then the model has also suggested
to call the method right after.
27
00:01:22.582 --> 00:01:23.049
And then,
28
00:01:23.049 --> 00:01:26.853
well, it ended up with some comments
in perhaps Korean, but that's fine.
29
00:01:26.886 --> 00:01:29.656
Yeah. Overall the model seems good.
30
00:01:29.656 --> 00:01:32.392
And also bear in mind
that the model is a small model.
31
00:01:32.392 --> 00:01:35.395
It's only 350 million parameters.
32
00:01:35.628 --> 00:01:39.632
You might get more impressive results
with larger models, but anyway, yeah.
33
00:01:40.400 --> 00:01:44.838
Let's, just out of curiosity,
print the model before quantization.
34
00:01:44.971 --> 00:01:47.941
So we're going to shrink the model,
quantize it in eight-bit
35
00:01:47.941 --> 00:01:51.478
and try to get some generations
on the quantized model.
36
00:01:51.511 --> 00:01:53.546
So yeah, this is
what the model looks like.
37
00:01:53.546 --> 00:01:57.417
You have a lot of linear layers because
it's a transformer based architecture.
38
00:01:57.584 --> 00:02:02.584
So most of the weights come
from the linear layers of the model.
39
00:02:03.356 --> 00:02:05.492
And let's call our
40
00:02:06.993 --> 00:02:08.761
quantization API
41
00:02:08.761 --> 00:02:12.932
replace_linear_with_target_and_quantize,
passing the model and the
42
00:02:12.932 --> 00:02:14.033
target class.
43
00:02:14.033 --> 00:02:18.438
And as I said we're not going to quantize
the language model head
44
00:02:18.671 --> 00:02:22.175
because the model
is an autoregressive model: it uses
45
00:02:22.542 --> 00:02:26.880
the output from the previous iteration
to get the output of the next iteration.
46
00:02:26.980 --> 00:02:30.817
If you quantize the language
model head, a lot of errors might
47
00:02:30.917 --> 00:02:34.254
accumulate
over the generation steps.
48
00:02:34.654 --> 00:02:38.358
And you will most likely end up
having some gibberish after a few tokens.
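NOTE
A sketch of the quantization call, assuming the helper and layer class built in the earlier lessons (replace_linear_with_target_and_quantize and W8A16LinearLayer):
# Swap every nn.Linear for a W8A16 layer, except the language model head:
# the exclusion list keeps "lm_head" in its original precision.
replace_linear_with_target_and_quantize(pipe.model, W8A16LinearLayer, ["lm_head"])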
49
00:02:38.791 --> 00:02:41.494
Note that this method modifies
the model in place.
50
00:02:41.494 --> 00:02:43.563
So we're just going to inspect
51
00:02:43.563 --> 00:02:47.600
pipe.model and check
if the model has been correctly quantized.
52
00:02:47.867 --> 00:02:51.571
So yeah, we can confirm
the model has been quantized by checking
53
00:02:51.571 --> 00:02:56.476
that the linear layers have been replaced
with our W8A16 linear layers.
54
00:02:56.643 --> 00:03:00.780
We can also confirm that the language
model head is still a torch.nn.Linear,
55
00:03:01.347 --> 00:03:02.916
which is expected and intended.
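NOTE
One way to double-check both claims, assuming the names above:
import torch.nn as nn
# The attention and MLP projections should now be W8A16LinearLayer instances,
print(pipe.model)
# while the head should remain a plain torch.nn.Linear.
assert isinstance(pipe.model.lm_head, nn.Linear)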
56
00:03:04.083 --> 00:03:07.086
So let's
do some generation and see the results.
57
00:03:07.187 --> 00:03:10.190
Perfect. So it's able to generate,
58
00:03:10.223 --> 00:03:12.058
correct methods of printing
59
00:03:12.058 --> 00:03:13.693
"Hello world" in Python.
60
00:03:13.693 --> 00:03:16.729
It also tried to call hello_world
in the main Python file,
61
00:03:16.729 --> 00:03:19.832
but somehow the model has commented out
the call to the method,
62
00:03:19.832 --> 00:03:22.835
but I guess that's fine.
For generative models,
63
00:03:22.969 --> 00:03:26.105
since the model generates
output from past inputs.
64
00:03:26.105 --> 00:03:28.441
So it's an autoregressive model.
65
00:03:28.441 --> 00:03:33.079
All the rounding errors can sum up once
you start generating a lot of tokens,
66
00:03:33.546 --> 00:03:38.084
until maybe all of these errors
get so large that they affect
67
00:03:38.284 --> 00:03:39.485
the model's performance.
68
00:03:39.485 --> 00:03:42.422
This specifically affects larger models.
69
00:03:42.422 --> 00:03:44.958
So maybe greater
than 6 billion parameters.
70
00:03:44.958 --> 00:03:48.428
And no-performance-degradation
71
00:03:48.428 --> 00:03:52.532
quantization for LLMs
is a whole exciting topic.
72
00:03:52.865 --> 00:03:56.569
And it has also been addressed
by many recent papers,
73
00:03:56.569 --> 00:04:00.940
such as LLM.int8,
SmoothQuant, GPTQ, QLoRA,
74
00:04:00.940 --> 00:04:02.508
AWQ and so on, and
75
00:04:02.508 --> 00:04:05.511
probably many,
many more that I forgot to mention.
76
00:04:05.511 --> 00:04:08.281
And we're also going to briefly
77
00:04:08.281 --> 00:04:11.884
explain a little bit the insights
behind these papers in the next lesson.
78
00:04:12.318 --> 00:04:13.253
All right.
79
00:04:13.253 --> 00:04:17.257
We can also try to quantize models
from other modalities.
80
00:04:17.457 --> 00:04:22.128
I wanted to show you how to call the
quantizer on an object detection model.
81
00:04:22.662 --> 00:04:25.231
So the workflow is going to be basically
the same.
82
00:04:25.231 --> 00:04:26.733
We're going to call the API
83
00:04:26.733 --> 00:04:30.970
that we have designed on a model
that we will load from transformers.
84
00:04:31.037 --> 00:04:34.774
So we will use the class
DetrForObjectDetection.
85
00:04:34.774 --> 00:04:39.012
So DETR is an architecture
that has been designed by Facebook AI
86
00:04:40.013 --> 00:04:41.814
that is used for object detection.
87
00:04:41.814 --> 00:04:44.817
So you can inspect
88
00:04:44.917 --> 00:04:48.087
the model card on the Hub
in order to get the code snippets
89
00:04:48.087 --> 00:04:49.789
on how to run the model.
90
00:04:49.789 --> 00:04:54.789
So this is the way to load the processor
and the model from the Hugging Face Hub.
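NOTE
A minimal sketch of the loading step; the "facebook/detr-resnet-50" checkpoint is an assumption based on the standard model-card snippet:
from transformers import DetrImageProcessor, DetrForObjectDetection
# Load the image processor and the DETR object-detection model from the Hub.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")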
91
00:04:54.961 --> 00:04:58.331
And we're going to call our quantizer
on the model itself.
92
00:04:58.598 --> 00:04:58.865
Okay.
93
00:04:58.865 --> 00:05:03.336
So before quantizing the model
we can get the memory footprint
94
00:05:03.336 --> 00:05:06.105
of the model.
95
00:05:06.105 --> 00:05:07.407
So we just have to call
96
00:05:07.407 --> 00:05:09.275
model.get_memory_footprint().
97
00:05:09.275 --> 00:05:13.546
And the size of the model
should be something around 170 MB.
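NOTE
get_memory_footprint is a standard transformers method that returns the size in bytes; a quick check:
footprint_before = model.get_memory_footprint()
# Divide by 1e6 to express the footprint in megabytes.
print(f"Footprint of the model: {footprint_before / 1e6:.1f} MB")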
98
00:05:13.880 --> 00:05:15.315
Perfect. Yeah.
99
00:05:15.315 --> 00:05:19.752
I want to quickly test the model before
quantizing it and visualize some results.
100
00:05:20.787 --> 00:05:22.021
So we recently went to
101
00:05:22.021 --> 00:05:25.024
dinner all together
and we took a nice picture.
102
00:05:25.091 --> 00:05:28.094
So we're going to use this picture
and try to detect as many
103
00:05:28.394 --> 00:05:31.097
objects as possible
using the model.
104
00:05:31.097 --> 00:05:32.198
Perfect. Yeah.
105
00:05:32.198 --> 00:05:37.198
So if you check out the model
card, you can get an idea of
106
00:05:38.071 --> 00:05:42.041
what code snippet
to use in order to run the model.
107
00:05:42.041 --> 00:05:46.713
So we're just going to use that
and call plot_results on the results.
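NOTE
A sketch of the inference step, following the usual DETR model-card pattern; the image path and the plot_results helper are assumptions from the course materials:
import torch
from PIL import Image
image = Image.open("dinner.png")  # hypothetical path to the dinner picture
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Convert raw logits and boxes into labeled detections above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9)[0]
plot_results(model, image, results)  # helper assumed from the course notebook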
108
00:05:47.613 --> 00:05:48.748
So very cool.
109
00:05:48.748 --> 00:05:51.217
It was able to detect
110
00:05:51.217 --> 00:05:53.553
all the people here.
111
00:05:53.553 --> 00:05:55.521
So with the correct class "person",
112
00:05:55.521 --> 00:05:58.758
it was also able to detect the table
here on the left, the phone,
113
00:05:58.758 --> 00:06:02.395
the cups, the knife, and also the car
that is in the background.
114
00:06:02.662 --> 00:06:05.865
And even this laptop
that is a bit far away in the image.
115
00:06:06.399 --> 00:06:07.233
That's really cool.
116
00:06:08.568 --> 00:06:08.868
Yeah.
117
00:06:08.868 --> 00:06:09.268
So let's
118
00:06:09.268 --> 00:06:12.772
try to quantize the model and visualize
the results of the quantized model.
119
00:06:13.172 --> 00:06:17.043
So, before doing that, let's quickly
inspect the model.
120
00:06:17.577 --> 00:06:20.980
So the impact of the quantization
121
00:06:20.980 --> 00:06:24.384
is going to be a little bit lower
than for a language model.
122
00:06:24.717 --> 00:06:26.519
Because, as you can see here,
123
00:06:26.519 --> 00:06:29.389
there are many convolution-based
layers.
124
00:06:29.389 --> 00:06:32.392
So these layers
are not going to be quantized.
125
00:06:32.558 --> 00:06:34.894
But if you go down
126
00:06:34.894 --> 00:06:37.897
the layers that are going to be quantized
are going to be
127
00:06:38.131 --> 00:06:42.001
the linear layers
that are here in the encoder and decoder
128
00:06:43.403 --> 00:06:44.036
here.
129
00:06:44.036 --> 00:06:46.939
And yeah,
so we're still going to keep that practice
130
00:06:46.939 --> 00:06:50.343
of not quantizing the last layer.
131
00:06:50.676 --> 00:06:55.415
So the bounding-box predictor we're going
to keep in its original precision.
132
00:06:55.882 --> 00:07:00.486
So we're going to call replace_linear_with_target_and_quantize
with the model, the target class, and the names "0", "1", "2",
133
00:07:01.320 --> 00:07:03.623
so that we don't quantize these layers,
134
00:07:03.623 --> 00:07:06.192
and the class_labels_classifier as well.
135
00:07:06.192 --> 00:07:08.394
And everything is specified here.
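NOTE
A sketch of that call, assuming the same helper as before; in DETR the bounding-box predictor's inner linear layers are named "0", "1", "2" and the classification head is "class_labels_classifier":
# Quantize the encoder/decoder linears but keep the two output heads in full precision.
replace_linear_with_target_and_quantize(
    model, W8A16LinearLayer, ["0", "1", "2", "class_labels_classifier"])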
136
00:07:08.394 --> 00:07:10.496
Perfect. Let's inspect the model again.
137
00:07:10.496 --> 00:07:13.933
So yeah, as expected,
the convolution layers are still the same.
138
00:07:14.400 --> 00:07:17.670
And the encoder
and decoder layers seem to be
139
00:07:18.671 --> 00:07:21.140
correctly quantized in int8.
140
00:07:21.140 --> 00:07:21.808
Perfect.
141
00:07:21.808 --> 00:07:26.808
And of course we kept the last modules,
in their original precision.
142
00:07:27.480 --> 00:07:30.483
Let's visualize the results
with the quantized model.
143
00:07:31.651 --> 00:07:32.318
Perfect. Yeah.
144
00:07:32.318 --> 00:07:35.888
So yeah, I think we got
pretty much the same results.
145
00:07:36.456 --> 00:07:39.459
I think we were able to detect
the same instances.
146
00:07:40.159 --> 00:07:40.460
Yeah.
147
00:07:40.460 --> 00:07:41.160
Even the computer
148
00:07:41.160 --> 00:07:44.564
that you can see in the background,
the chairs and the car as well.
149
00:07:45.031 --> 00:07:50.031
Let's also try to get a good idea
of how much memory we managed to save.
150
00:07:50.403 --> 00:07:53.673
So this is the new memory footprint.
151
00:07:54.040 --> 00:07:57.176
So yeah, we were able to save around 50 MB.
152
00:07:57.210 --> 00:08:00.413
So before, we were at around 160-170 MB.
153
00:08:00.446 --> 00:08:05.151
So yeah, it's a reduction of maybe around
25 to 30%.
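NOTE
With the footprint saved earlier, the savings are one line of arithmetic:
footprint_after = model.get_memory_footprint()
saved = footprint_before - footprint_after
# e.g. roughly 50 MB saved out of ~170 MB, i.e. a 25-30% reduction.
print(f"Saved {saved / 1e6:.1f} MB ({100 * saved / footprint_before:.0f}%)")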
154
00:08:05.485 --> 00:08:06.719
So that's not bad.
155
00:08:06.719 --> 00:08:08.221
And we managed to keep
156
00:08:08.221 --> 00:08:10.389
most of the capabilities of the model.
157
00:08:10.389 --> 00:08:10.723
So yeah.
158
00:08:10.723 --> 00:08:14.994
Now, I invite you to pause the video
and you can also try out
159
00:08:15.394 --> 00:08:19.432
this approach
on other models and other modalities.
160
00:08:19.465 --> 00:08:23.436
You can also try to maybe break
the quantizer and see what went wrong.
161
00:08:23.436 --> 00:08:27.340
Maybe, I don't know, try to also quantize
the last module to see
162
00:08:27.540 --> 00:08:29.709
how it affects
the model's performance.
163
00:08:29.709 --> 00:08:33.212
And yeah, you can also try that out
on as many modalities as you want.
164
00:08:33.246 --> 00:08:34.981
You can try it on a vision model.
165
00:08:34.981 --> 00:08:38.050
You can also try it on an audio model,
on a multimodal model.
166
00:08:38.251 --> 00:08:40.520
So yeah, feel free to pause the video.
167
00:08:40.520 --> 00:08:43.523
Try out the API we designed together
168
00:08:43.556 --> 00:08:44.557
on other models.
169
00:08:44.557 --> 00:08:48.861
And also bear in mind
that the API modifies the model in place.
170
00:08:48.861 --> 00:08:53.833
So once you have loaded the model
and called the quantizer on the model,
171
00:08:54.100 --> 00:08:58.104
you need to reload the model if you want
to compare it with its original version.
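NOTE
A minimal sketch of that reload, reusing the checkpoint name assumed above:
# The quantized weights live in the mutated object; fetch a fresh full-precision copy.
original_model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")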