WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:00.033 --> 00:00:03.303
We're going to wrap up the whole course
with these explanations.
2
00:00:03.536 --> 00:00:07.307
If you followed the previous lab,
I quickly mentioned
3
00:00:07.874 --> 00:00:11.644
the notion of emergent features
for large language models.
4
00:00:12.045 --> 00:00:15.015
And this is
one of the biggest challenges
5
00:00:15.015 --> 00:00:17.584
when it comes to quantizing
large language models.
6
00:00:17.584 --> 00:00:21.688
Once the open source community
had more and more large language
7
00:00:21.688 --> 00:00:26.688
models, such as OPT, the Open
Pre-trained Transformers from Facebook,
8
00:00:27.260 --> 00:00:32.260
researchers in 2022 started to directly
dive into the capabilities of these models,
9
00:00:33.166 --> 00:00:37.637
and they discovered
some so-called emergent features at scale.
10
00:00:37.771 --> 00:00:40.440
What do we mean
exactly by emergent features?
11
00:00:40.440 --> 00:00:44.911
Simply, some characteristics
or features that appear at scale
12
00:00:45.445 --> 00:00:46.846
that is, when the model is large.
13
00:00:46.846 --> 00:00:50.216
So it turns out that for some models
at scale,
14
00:00:50.984 --> 00:00:54.054
the features produced by the model,
15
00:00:54.788 --> 00:00:57.690
meaning the magnitude of the hidden states
16
00:00:57.690 --> 00:01:01.161
started to get large,
thus making the classic quantization
17
00:01:01.161 --> 00:01:04.531
schemes quite obsolete,
which led to classic
18
00:01:05.198 --> 00:01:07.567
linear quantization algorithms,
19
00:01:07.567 --> 00:01:10.203
just failing on those models.
20
00:01:10.203 --> 00:01:14.541
Since the open sourcing of these
large language models, many papers
21
00:01:14.908 --> 00:01:17.911
have tackled this specific challenge:
how to deal
22
00:01:17.911 --> 00:01:20.914
with outlier features in large language
models.
23
00:01:20.914 --> 00:01:25.914
Again, outlier features simply mean
hidden states with a large magnitude.
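
(To make this concrete, here is a minimal PyTorch sketch of what an outlier feature looks like. The tensor values are made up for illustration, and the 6.0 threshold is the default used by LLM.int8() in bitsandbytes.)

import torch

# Hidden states for 4 tokens with hidden size 6 (toy values).
# Column 2 carries values far larger than the rest: an "outlier feature".
hidden_states = torch.tensor([
    [ 0.3, -0.5,  42.0,  0.1,  0.8, -0.2],
    [-0.7,  0.4, -57.0,  0.9, -0.3,  0.6],
    [ 0.2, -0.1,  38.5, -0.4,  0.5,  0.7],
    [ 0.6,  0.8, -49.3,  0.2, -0.9,  0.1],
])

# A channel is an outlier if any of its values exceeds a threshold
# in absolute value.
threshold = 6.0
outlier_channels = hidden_states.abs().max(dim=0).values > threshold
print(outlier_channels)  # tensor([False, False,  True, False, False, False])
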
24
00:01:26.352 --> 00:01:30.890
So there are some interesting papers,
such as LLM.int8(), SmoothQuant,
25
00:01:32.792 --> 00:01:33.993
AWQ, and I wanted to give
26
00:01:33.993 --> 00:01:37.664
a brief explanation of each paper
to give you some insight into
27
00:01:37.797 --> 00:01:41.101
what could be the potential solutions
to address this specific issue.
28
00:01:43.903 --> 00:01:44.771
So LLM.int8()
29
00:01:44.771 --> 00:01:48.708
proposes to decompose
the underlying
30
00:01:48.741 --> 00:01:52.979
matrix multiplication
of the linear layers into two parts.
31
00:01:53.847 --> 00:01:56.649
So if you consider the input hidden states
32
00:01:56.649 --> 00:01:59.652
that you can see in the big matrix here,
33
00:02:00.019 --> 00:02:04.591
it is possible
to decompose the matmul into two parts:
34
00:02:04.591 --> 00:02:09.062
the outlier part,
all the hidden states that are greater
35
00:02:09.062 --> 00:02:12.799
than a certain threshold,
and the non-outlier part.
36
00:02:13.399 --> 00:02:14.834
The idea is very simple.
37
00:02:14.834 --> 00:02:17.837
So you decompose the input into two parts,
38
00:02:18.872 --> 00:02:20.507
then perform the non-outlier
39
00:02:20.507 --> 00:02:23.543
part's matrix multiplication in int8.
40
00:02:23.910 --> 00:02:28.882
So you quantize, do
the matmul in eight-bit, and then you
41
00:02:28.915 --> 00:02:33.915
dequantize using the scales so that you get
the final result in the input data type.
42
00:02:34.888 --> 00:02:38.158
And the second part, you do it
classically,
43
00:02:38.691 --> 00:02:41.828
with the original dtype
of the hidden states.
44
00:02:41.861 --> 00:02:43.563
So usually in half precision.
45
00:02:43.563 --> 00:02:45.265
And then you combine both results.
46
00:02:45.265 --> 00:02:47.400
This way, it has been proven
that you can
47
00:02:48.334 --> 00:02:49.302
retain the full
48
00:02:49.302 --> 00:02:52.705
performance of the model
without any degradation.
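
(Below is a minimal PyTorch sketch of this two-part decomposition, assuming per-tensor absmax quantization for the int8 branch; the actual LLM.int8() implementation in bitsandbytes uses vector-wise quantization and fused kernels, so treat this as an illustration only.)

import torch

def llm_int8_matmul_sketch(x, w, threshold=6.0):
    # Split the input columns (and the matching weight rows) into
    # outlier and non-outlier channels.
    outlier = x.abs().max(dim=0).values > threshold

    # Outlier part: regular matmul in the original dtype (e.g. fp16).
    y_outlier = x[:, outlier] @ w[outlier, :]

    # Non-outlier part: absmax-quantize to int8, matmul, dequantize.
    x_sub, w_sub = x[:, ~outlier], w[~outlier, :]
    s_x = x_sub.abs().max() / 127
    s_w = w_sub.abs().max() / 127
    x_q = torch.round(x_sub / s_x).to(torch.int8)
    w_q = torch.round(w_sub / s_w).to(torch.int8)
    y_int8 = (x_q.to(torch.int32) @ w_q.to(torch.int32)) * (s_x * s_w)

    # Combine both partial results.
    return y_outlier + y_int8.to(x.dtype)

x = torch.randn(4, 8); x[:, 3] *= 60.0  # inject an outlier channel
w = torch.randn(8, 5)
print(llm_int8_matmul_sketch(x, w).shape)  # torch.Size([4, 5])
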
49
00:02:53.306 --> 00:02:56.442
Another very interesting approach
is called SmoothQuant.
50
00:02:56.509 --> 00:03:01.509
SmoothQuant specifically applies
to W8A8 schemes,
51
00:03:01.814 --> 00:03:05.552
meaning
we also want to quantize the activations,
52
00:03:05.885 --> 00:03:10.190
so both the activations and
the weights are in eight-bit precision.
53
00:03:10.590 --> 00:03:15.395
So the paper also tackles this issue of
outlier features in large language models.
54
00:03:15.895 --> 00:03:17.830
And they proposed to mitigate that
55
00:03:17.830 --> 00:03:21.434
by smoothing
both the activations and the weights,
56
00:03:21.968 --> 00:03:26.072
using a per-channel factor determined
from the input activations,
57
00:03:26.439 --> 00:03:31.411
to migrate part of the quantization
difficulty
58
00:03:31.411 --> 00:03:35.448
from the activations
over to the weights.
59
00:03:35.748 --> 00:03:39.752
So that way, the quantization difficulty
60
00:03:40.353 --> 00:03:43.890
is shared between
the weights and the activations.
61
00:03:44.424 --> 00:03:47.927
And that way you can also retain
the full capabilities of the model.
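
(Here is a minimal sketch of the smoothing step. Per input channel j, the paper sets s_j = max|X_j|^alpha / max|W_j|^(1-alpha), where alpha is the migration strength, 0.5 by default. This is an illustration, not the official implementation.)

import torch

def smooth_sketch(x, w, alpha=0.5):
    # x: (tokens, in_features), w: (in_features, out_features).
    # Per-input-channel smoothing factor balancing both ranges.
    act_max = x.abs().max(dim=0).values.clamp(min=1e-5)
    w_max = w.abs().max(dim=1).values.clamp(min=1e-5)
    s = act_max.pow(alpha) / w_max.pow(1 - alpha)

    # Y = (X diag(s)^-1)(diag(s) W): the output is unchanged, but the
    # scaled activations are much easier to quantize.
    return x / s, w * s.unsqueeze(1)

x = torch.randn(4, 8); x[:, 3] *= 60.0  # outlier activation channel
w = torch.randn(8, 5)
x_s, w_s = smooth_sketch(x, w)
print(torch.allclose(x @ w, x_s @ w_s, atol=1e-3))   # True: same output
print(x.abs().max().item(), x_s.abs().max().item())  # outlier tamed
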
62
00:03:48.228 --> 00:03:51.231
A more recent paper called AWQ,
63
00:03:51.564 --> 00:03:54.567
also treats
the outlier features in a special way.
64
00:03:54.567 --> 00:03:58.738
So the paper, which also came out from
the same lab as the SmoothQuant paper,
65
00:03:59.005 --> 00:04:01.841
proposes to first iterate over a dataset
66
00:04:01.841 --> 00:04:05.612
that we are going to call
a calibration dataset
67
00:04:05.945 --> 00:04:09.182
to get a detailed idea of which channels
68
00:04:10.183 --> 00:04:11.884
of the weights
69
00:04:11.884 --> 00:04:16.589
could be responsible for generating
outlier features; these are called salient weights.
70
00:04:17.123 --> 00:04:20.126
And the idea is
to use that information
71
00:04:20.326 --> 00:04:23.896
to scale the model weights
before quantization,
72
00:04:24.230 --> 00:04:28.468
and also use that scale during inference
to rescale the input as well.
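
(A minimal sketch of that idea follows, assuming per-channel activation statistics gathered from a calibration batch. The real AWQ searches over the scaling exponent to minimize quantization error; that search is omitted here.)

import torch

def awq_scale_sketch(w, calib_acts, ratio=0.5):
    # Average activation magnitude per input channel over the
    # calibration data: large values mark the "salient" channels.
    importance = calib_acts.abs().mean(dim=0).clamp(min=1e-5)

    # Scale up salient weight channels before quantization so they
    # lose less precision when rounded to low bit-width.
    s = importance.pow(ratio)
    w_scaled = w * s.unsqueeze(1)

    # At inference the input is rescaled by 1/s to compensate:
    # (x / s) @ (diag(s) w) == x @ w.
    return w_scaled, s

calib_acts = torch.randn(32, 8); calib_acts[:, 3] *= 50.0
w = torch.randn(8, 5)
w_scaled, s = awq_scale_sketch(w, calib_acts)
x = torch.randn(2, 8)
print(torch.allclose(x @ w, (x / s) @ w_scaled, atol=1e-4))  # True
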
73
00:04:28.701 --> 00:04:30.470
So these are just a few of them.
74
00:04:30.470 --> 00:04:33.973
There are numerous other papers
that specifically address
75
00:04:34.107 --> 00:04:37.910
this issue for effective and efficient
large language model
76
00:04:37.910 --> 00:04:38.678
quantization.
77
00:04:38.678 --> 00:04:42.782
So here is a non-exhaustive list
of those quantization techniques.
78
00:04:42.782 --> 00:04:45.718
But you may find many more
by the time you watch this.
79
00:04:45.718 --> 00:04:49.689
So yeah, if you are curious about this,
I invite you to read these papers
80
00:04:49.689 --> 00:04:50.890
in detail.
81
00:04:50.890 --> 00:04:54.327
And, you know, just dive into them
and try to understand them.
82
00:04:54.661 --> 00:04:57.497
This is one of the challenges
when it comes to quantizing
83
00:04:57.497 --> 00:05:00.566
large language models,
because the models are quite large.
84
00:05:00.733 --> 00:05:03.336
You can get some surprising behavior.
85
00:05:03.336 --> 00:05:04.904
There are also other challenges.
86
00:05:04.904 --> 00:05:07.907
So the quantization-aware
training field
87
00:05:08.107 --> 00:05:11.110
seems to be a little bit
underexplored today.
88
00:05:11.277 --> 00:05:14.113
So training models in low-bit precision
89
00:05:14.113 --> 00:05:17.116
could also be
an interesting topic to dive into.
90
00:05:17.417 --> 00:05:20.920
There is also the challenge of limited
hardware support.
91
00:05:20.920 --> 00:05:25.920
So right now, for this course,
we only focused on the W8A16 scheme,
92
00:05:26.626 --> 00:05:31.197
meaning the weights are in eight bits
but the activations are in 16 bits.
93
00:05:31.230 --> 00:05:34.200
But for a more efficient
quantization scheme, you may
94
00:05:34.200 --> 00:05:39.200
also be interested in other schemes,
such as W8A8.
95
00:05:39.739 --> 00:05:44.177
But not all hardware
supports eight-bit operations.
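
(To make the difference concrete, here is a minimal sketch of the two schemes with per-tensor absmax quantization; it is illustrative only.)

import torch

def w8a16_linear(x, w_q, s_w):
    # W8A16: int8 weights are dequantized to the activation dtype
    # (e.g. fp16), and the matmul runs in that higher precision.
    return x @ (w_q.to(x.dtype) * s_w)

def w8a8_linear(x, w_q, s_w):
    # W8A8: the activations are quantized too, so the matmul itself
    # runs in int8 (fast only if the hardware supports int8 matmuls).
    s_x = x.abs().max() / 127
    x_q = torch.round(x / s_x).to(torch.int8)
    y = x_q.to(torch.int32) @ w_q.to(torch.int32)
    return y.to(x.dtype) * (s_x * s_w)

w = torch.randn(8, 5)
s_w = w.abs().max() / 127
w_q = torch.round(w / s_w).to(torch.int8)
x = torch.randn(2, 8)
print(w8a16_linear(x, w_q, s_w))  # close to x @ w
print(w8a8_linear(x, w_q, s_w))   # close, with extra activation error
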
96
00:05:44.477 --> 00:05:47.313
There is also a challenge around
calibration datasets.
97
00:05:47.313 --> 00:05:51.017
So for some quantization
methods, you need to have
98
00:05:51.351 --> 00:05:54.354
a calibration dataset
to perform some sort of
99
00:05:54.854 --> 00:05:57.990
model pre-processing
to make the quantized model better.
100
00:05:58.191 --> 00:06:01.194
And there are also challenges in terms of
weight packing and unpacking.
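
(In the spirit of the earlier packing labs, here is a minimal sketch that packs four 2-bit values into one uint8 and unpacks them again; it is illustrative only.)

import torch

def pack_2bit(vals):
    # Pack four 2-bit values (0..3) into each uint8 byte.
    packed = torch.zeros(vals.numel() // 4, dtype=torch.uint8)
    for i, v in enumerate(vals):
        packed[i // 4] |= (v & 0b11) << (2 * (i % 4))
    return packed

def unpack_2bit(packed, n):
    # Recover the n original 2-bit values.
    vals = torch.zeros(n, dtype=torch.uint8)
    for i in range(n):
        vals[i] = (packed[i // 4] >> (2 * (i % 4))) & 0b11
    return vals

vals = torch.tensor([1, 0, 3, 2, 0, 1, 2, 3], dtype=torch.uint8)
packed = pack_2bit(vals)
print(packed)                  # 2 bytes instead of 8
print(unpack_2bit(packed, 8))  # tensor([1, 0, 3, 2, 0, 1, 2, 3], ...)
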
101
00:06:01.294 --> 00:06:03.563
So yeah, if you are really interested
in this topic,
102
00:06:03.563 --> 00:06:06.866
I invite you to do some further
reading through,
103
00:06:06.866 --> 00:06:10.536
for example,
the state-of-the-art quantization papers.
104
00:06:10.737 --> 00:06:13.940
There is also a lab called the MIT HAN Lab,
105
00:06:14.273 --> 00:06:18.010
which produced some of these
state-of-the-art quantization papers.
106
00:06:18.511 --> 00:06:23.483
They also have good resources from which
you can learn more about this topic.
107
00:06:23.516 --> 00:06:24.884
You can also check out the
108
00:06:24.884 --> 00:06:28.221
Hugging Face Transformers
quantization documentation and blog post.
109
00:06:28.254 --> 00:06:32.825
You can also
have a look at the llama.cpp repository
110
00:06:32.825 --> 00:06:37.697
discussions, where you can find
some really insightful experiments and talks.
111
00:06:37.730 --> 00:06:39.031
You can also check out Reddit.
112
00:06:39.031 --> 00:06:41.534
So there is a subreddit called r/LocalLlama
113
00:06:41.534 --> 00:06:44.804
where they share a lot of cool insights
about quantization
114
00:06:45.004 --> 00:06:48.374
and where you can also learn more
about the new methods that come up,
115
00:06:48.975 --> 00:06:49.809
and so on.
116
00:06:49.809 --> 00:06:52.812
And then, of course, I'm probably
missing many more resources,
117
00:06:53.212 --> 00:06:55.581
but yeah,
these are the ones that I know.
118
00:06:55.581 --> 00:06:57.517
So that's it for this lesson.
119
00:06:57.517 --> 00:07:02.388
So I hope you learned a lot through this
course, and that you can use
120
00:07:03.389 --> 00:07:06.392
the things that we have shown you
121
00:07:06.426 --> 00:07:09.595
for your work or for your projects
and that all of this
122
00:07:09.595 --> 00:07:13.166
could give you some ideas of cool things
that you can build.
123
00:07:13.599 --> 00:07:16.869
So, yeah,
we're going to move on to the next video.
124
00:07:16.903 --> 00:07:17.236
Yeah.
125
00:07:17.236 --> 00:07:19.439
where we'll say thank you for
126
00:07:19.439 --> 00:07:22.442
going through this course
and suggest potential next steps.
127
00:07:22.942 --> 00:07:23.509
See you there.