WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:02.102 --> 00:00:03.336
In this lesson, you'll get a
2
00:00:03.336 --> 00:00:06.373
sense of some common challenges
when it comes to applying low-bit
3
00:00:06.406 --> 00:00:10.410
quantization, such as 2 or 4 bits,
by diving into weight packing.
4
00:00:11.177 --> 00:00:14.381
In addition,
you'll wrap up the course with some insights
5
00:00:14.381 --> 00:00:17.083
into state-of-the-art quantization
methods.
6
00:00:17.083 --> 00:00:20.086
Let's pack some weights.
7
00:00:21.087 --> 00:00:22.489
In this lesson,
8
00:00:22.489 --> 00:00:25.925
we are going to discuss
the common challenges that you can face
9
00:00:26.092 --> 00:00:30.130
when you want to try out
low-bit quantization, such as 2 or 4 bits.
10
00:00:30.563 --> 00:00:34.000
And we're going to implement
weight packing from scratch.
11
00:00:34.167 --> 00:00:36.036
So specifically in this lesson
you will learn
12
00:00:36.036 --> 00:00:39.039
why weight packing is important
for storing quantized weights.
13
00:00:39.506 --> 00:00:42.409
We'll also store and load two
and four-bit weights
14
00:00:42.409 --> 00:00:45.412
in a packed unsigned int8 tensor.
15
00:00:45.445 --> 00:00:47.113
And we will also see together
16
00:00:47.113 --> 00:00:50.483
other challenges with quantizing
generative models such as LLMs.
17
00:00:50.817 --> 00:00:54.721
And quickly review some state of the art
LLM quantization methods.
18
00:00:54.821 --> 00:00:56.089
So let's get started.
19
00:00:56.089 --> 00:00:59.159
So before starting the lab,
I wanted to give some small context
20
00:00:59.159 --> 00:01:01.828
on why packing is important
and why we need packing
21
00:01:01.828 --> 00:01:03.363
when storing quantized weights.
22
00:01:03.363 --> 00:01:05.131
So assume the following scenario.
23
00:01:05.131 --> 00:01:07.567
You want to quantize your model
in four-bit precision,
24
00:01:07.567 --> 00:01:09.936
and you want to store the weights
in a torch tensor.
25
00:01:09.936 --> 00:01:12.305
So ideally
you want to call something like this.
26
00:01:12.305 --> 00:01:15.408
You want to create a tensor
with some values.
27
00:01:15.408 --> 00:01:19.012
And then probably pass dtype=torch.int4.
28
00:01:19.312 --> 00:01:22.949
Or you can also do it afterwards,
by casting the tensor to int4.
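(A minimal sketch of those two attempts, assuming PyTorch; the tensor values are illustrative.)

```python
import torch

# The calls described here, as you would ideally write them:
# weights = torch.tensor([1, 0, 3, 2], dtype=torch.int4)
# ...or casting after creation:
# weights = torch.tensor([1, 0, 3, 2]).to(torch.int4)
# Both lines fail with an AttributeError, since `torch.int4`
# does not exist natively (as discussed next).
```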
29
00:01:22.949 --> 00:01:25.351
But the problem
is that, at the time we speak,
30
00:01:25.351 --> 00:01:29.889
there is no native support
for four-bit weights in PyTorch.
31
00:01:30.523 --> 00:01:34.794
So we need to find a way to store those
four-bit weights in an efficient manner.
32
00:01:34.794 --> 00:01:39.199
So right now the only possible solution
is instead of saving the tensor
33
00:01:39.199 --> 00:01:44.199
in four-bit, we have to save it in
eight-bit, as currently it's
34
00:01:44.304 --> 00:01:48.341
the data type with the smallest precision
that is available in PyTorch.
35
00:01:48.475 --> 00:01:51.144
So in practice
we need to save the tensor in eight-bit.
36
00:01:51.144 --> 00:01:56.015
But this is not really ideal
because the tensor will occupy eight bits
37
00:01:56.015 --> 00:01:56.983
per data point,
38
00:01:56.983 --> 00:02:00.220
even though in practice
it will only need four bits,
39
00:02:00.220 --> 00:02:03.957
because you have encoded your parameters
in four-bit precision,
40
00:02:03.957 --> 00:02:08.128
so it will definitely add
considerable overhead for large models.
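(A small sketch of that overhead; the four values are illustrative.)

```python
import torch

# Four parameters that only need 4 bits each, stored in the smallest
# natively available dtype: 8 bits (1 byte) per element.
weights = torch.tensor([1, 0, 3, 2], dtype=torch.uint8)

print(weights.element_size())                    # 1 byte per element
print(weights.element_size() * weights.numel())  # 4 bytes, double what
                                                 # true 4-bit storage needs
```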
41
00:02:08.394 --> 00:02:10.730
Therefore,
if we go for the naive approach,
42
00:02:10.730 --> 00:02:13.766
meaning if we store the four-bit weights
in an eight-bit tensor,
43
00:02:13.967 --> 00:02:17.237
there will be no point
quantizing the model into four-bit
44
00:02:17.604 --> 00:02:20.607
because all the parameters will be stored
in eight-bit precision.
45
00:02:20.740 --> 00:02:25.612
So for that, we need to pack the four-bit
weights into an eight-bit tensor.
46
00:02:25.645 --> 00:02:28.648
So how does packing work in detail?
47
00:02:28.648 --> 00:02:31.284
So consider the tensor below that stores
48
00:02:31.284 --> 00:02:34.454
four values that can be represented
in two-bit precision.
49
00:02:34.687 --> 00:02:38.424
So recall in two-bit precision
you can encode four values.
50
00:02:38.424 --> 00:02:43.424
So in base two we can encode 0, 1, 2, 3.
51
00:02:44.063 --> 00:02:47.333
So we can encode at most four values:
two to the power of two.
52
00:02:47.734 --> 00:02:50.537
And those values will be 0, 1, 2, and 3.
53
00:02:50.537 --> 00:02:54.474
So imagine
we have a parameter of a model
54
00:02:54.474 --> 00:02:57.443
which we have encoded
in two-bit precision.
55
00:02:57.844 --> 00:02:59.846
And these are the parameters of the model.
56
00:02:59.846 --> 00:03:03.917
So right now in PyTorch we cannot store
the model weights in two bits.
57
00:03:03.950 --> 00:03:06.920
So we have to store them in eight-bit
precision.
58
00:03:06.920 --> 00:03:11.724
So we'll end up with a tensor
that will take four times
59
00:03:11.958 --> 00:03:14.961
eight bits
in terms of memory footprint.
60
00:03:14.994 --> 00:03:18.464
So currently this weight tensor
is encoded as
61
00:03:19.065 --> 00:03:22.101
follows: 1 in 8 bits, 0 in 8 bits,
62
00:03:22.402 --> 00:03:25.405
3 in 8 bits, and 2 in 8 bits.
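(As a sketch, the same four values printed on full bytes:)

```python
# Each 2-bit value occupies a full 8-bit slot in the naive layout:
for v in [1, 0, 3, 2]:
    print(f"{v:08b}")
# 00000001
# 00000000
# 00000011
# 00000010
```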
63
00:03:25.605 --> 00:03:29.542
So as I said this is not really optimal
because you need to allocate four times
64
00:03:29.542 --> 00:03:32.545
eight-bits in terms of memory
in order to store
65
00:03:32.612 --> 00:03:35.615
weights that can be encoded in only two bits.
66
00:03:35.782 --> 00:03:40.353
So what can we do to ignore these bits
that we don't need?
67
00:03:40.887 --> 00:03:45.558
That's exactly what packing does:
it addresses this challenge by packing
68
00:03:45.625 --> 00:03:50.496
only the relevant bits
all together in a single eight-bit tensor.
69
00:03:50.830 --> 00:03:55.830
So let's say we're going to pack
these four weights into a single eight-bit tensor.
70
00:03:56.169 --> 00:03:58.204
So we're going to start
with the rightmost one.
71
00:03:58.204 --> 00:04:01.307
Then we're going to insert it
in our new eight-bit parameter.
72
00:04:01.307 --> 00:04:04.944
So 2 is one-zero: we're going to put one-zero
73
00:04:04.944 --> 00:04:08.514
in the first bits
of our new eight-bit parameter.
74
00:04:08.548 --> 00:04:12.118
And then 11, 00, and 01.
75
00:04:12.752 --> 00:04:17.123
And if we store that in eight bits,
we'll end up having a new tensor
76
00:04:17.123 --> 00:04:19.859
with only a single value
instead of four values.
77
00:04:19.859 --> 00:04:24.859
But this time this tensor encodes all the
parameters that are stored in two-bits.
78
00:04:25.164 --> 00:04:29.836
So this value in uint8
will end up being 177.
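(A minimal from-scratch sketch of that packing step, using plain Python integers for clarity; the lab does the same thing on tensors.)

```python
values = [1, 0, 3, 2]  # the 2-bit parameters from the example
bits = 2

packed = 0
for i, v in enumerate(values):
    # each value lands `bits` positions higher than the previous one,
    # so the last value (2 = 10) ends up in the highest bits
    packed |= v << (i * bits)

print(packed)           # 177
print(f"{packed:08b}")  # 10110001
```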
79
00:04:30.203 --> 00:04:34.374
So the advantage of packing is that it
reflects the true
80
00:04:34.774 --> 00:04:37.677
or real memory
footprint of the quantized weights.
81
00:04:37.677 --> 00:04:41.714
So again, if we go for the naive approach,
we need to allocate four times
82
00:04:41.714 --> 00:04:44.951
eight-bit precision,
whereas for the packed case
83
00:04:44.951 --> 00:04:48.554
we only need to store a single parameter
in eight-bit precision
84
00:04:48.955 --> 00:04:53.760
that will store all the two bit parameters
that we have.
85
00:04:54.093 --> 00:04:56.362
Of course, this has to come with a price.
86
00:04:56.362 --> 00:05:00.133
Whenever we want to perform inference,
we need to unpack the weights
87
00:05:00.366 --> 00:05:05.038
to come back to this state,
because most of the operations
88
00:05:05.338 --> 00:05:09.075
are not supported in native two-bit
or four-bit in PyTorch.
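(A matching unpacking sketch, reversing the packing example above:)

```python
packed = 177            # 10110001 from the packing example
bits = 2
mask = (1 << bits) - 1  # 0b11 selects one 2-bit value

# read each 2-bit group back out, lowest bits first
unpacked = [(packed >> (i * bits)) & mask for i in range(8 // bits)]
print(unpacked)         # [1, 0, 3, 2]
```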
89
00:05:09.342 --> 00:05:12.879
And also,
the unpacked tensors need to have a number of elements
90
00:05:12.879 --> 00:05:16.182
that is a multiple of eight divided
by the number of bits.
91
00:05:16.215 --> 00:05:21.215
And so if we have five parameters,
we'll need to allocate an extra eight-bit
92
00:05:21.454 --> 00:05:25.692
parameter here that will only encode
a single two-bit value.
93
00:05:25.725 --> 00:05:30.063
So ideally we need to have a multiple of eight
divided by n bits; in the case of two bits,
94
00:05:30.296 --> 00:05:35.101
we need to have a multiple
of four parameters in a single tensor.
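(A sketch of that constraint, with hypothetical zero-padding up to the next multiple:)

```python
import torch

bits = 2
group = 8 // bits  # 4 two-bit values fit in one uint8

vals = torch.tensor([1, 0, 3, 2, 1], dtype=torch.uint8)  # 5 values
pad = (-vals.numel()) % group                            # 3 filler slots
vals = torch.cat([vals, torch.zeros(pad, dtype=torch.uint8)])
print(vals.numel())  # 8 -> packs into exactly two uint8 values
```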
95
00:05:35.234 --> 00:05:35.568
Yeah.
96
00:05:35.568 --> 00:05:38.571
So let's see what it looks like
in terms of implementation.
97
00:05:38.738 --> 00:05:40.406
And we're going to move on to the lab.