WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533

1
00:00:02.102 --> 00:00:03.336
In this lesson, you'll get a

2
00:00:03.336 --> 00:00:06.373
sense of some common challenges
when it comes to applying low-bit

3
00:00:06.406 --> 00:00:10.410
quantization, such as 2 or 4 bits
by diving into weight packing.

4
00:00:11.177 --> 00:00:14.381
In addition,
you'll wrap up the course with some insights

5
00:00:14.381 --> 00:00:17.083
into state-of-the-art quantization
methods.

6
00:00:17.083 --> 00:00:20.086
Let's pack some weights.

7
00:00:21.087 --> 00:00:22.489
In this lesson,

8
00:00:22.489 --> 00:00:25.925
we are going to discuss
the common challenges that you can face

9
00:00:26.092 --> 00:00:30.130
when you want to try out
low-bit quantization, such as 2 or 4 bits.

10
00:00:30.563 --> 00:00:34.000
And we're going to implement weight packing
from scratch.

11
00:00:34.167 --> 00:00:36.036
So specifically in this lesson
you will learn

12
00:00:36.036 --> 00:00:39.039
why weight packing is important
for storing quantized weights.

13
00:00:39.506 --> 00:00:42.409
We'll also store and load two
and four-bit weights

14
00:00:42.409 --> 00:00:45.412
in a packed unsigned int8 tensor.

15
00:00:45.445 --> 00:00:47.113
And we will also see together

16
00:00:47.113 --> 00:00:50.483
other challenges with quantizing
generative models such as LLMs.

17
00:00:50.817 --> 00:00:54.721
And we'll quickly review some state-of-the-art
LLM quantization methods.

18
00:00:54.821 --> 00:00:56.089
So let's get started.

19
00:00:56.089 --> 00:00:59.159
So before starting the lab,
I wanted to give some small context

20
00:00:59.159 --> 00:01:01.828
on why packing is important
and why we need packing

21
00:01:01.828 --> 00:01:03.363
when storing quantized weights.

22
00:01:03.363 --> 00:01:05.131
So assume the following scenario.

23
00:01:05.131 --> 00:01:07.567
You want to quantize your model
in four-bit precision,

24
00:01:07.567 --> 00:01:09.936
and you want to store the weights
in a torch tensor.

25
00:01:09.936 --> 00:01:12.305
So ideally
you want to call something like this.

26
00:01:12.305 --> 00:01:15.408
You want to create a tensor
with some values.

27
00:01:15.408 --> 00:01:19.012
And then probably pass dtype=torch.int4.

28
00:01:19.312 --> 00:01:22.949
Or you can also cast the tensor
to int4 afterwards.

29
00:01:22.949 --> 00:01:25.351
But the problem
is that, at the time we speak,

30
00:01:25.351 --> 00:01:29.889
there is no native support
for four-bit weights in PyTorch.
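As a minimal sketch of the attempt described here (assuming the PyTorch version available at the time of the lesson, with no native four-bit dtype; variable names are illustrative):

    import torch

    # What you would ideally write -- but this fails, because PyTorch
    # has no native four-bit integer dtype at the time of the lesson:
    # weights = torch.tensor([0, 1, 2, 3], dtype=torch.int4)

    # The smallest available integer dtype is eight-bit, so we fall back to it:
    weights = torch.tensor([0, 1, 2, 3], dtype=torch.uint8)
    print(weights.element_size())  # 1 byte per value, even though 4 bits would suffice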

31
00:01:30.523 --> 00:01:34.794
So we need to find a way to store those
four-bit weights in an efficient manner.

32
00:01:34.794 --> 00:01:39.199
So right now the only possible solution
is instead of saving the tensor

33
00:01:39.199 --> 00:01:44.199
in four-bit, we have to save it in
eight-bit, as currently it's

34
00:01:44.304 --> 00:01:48.341
the data type with the smallest precision
that is available in PyTorch.

35
00:01:48.475 --> 00:01:51.144
So in practice
we need to save the tensor in eight-bit.

36
00:01:51.144 --> 00:01:56.015
But this is not really ideal
because the tensor will occupy eight bits

37
00:01:56.015 --> 00:01:56.983
per data point.

38
00:01:56.983 --> 00:02:00.220
even though in practice
it will only need four bits,

39
00:02:00.220 --> 00:02:03.957
because you have encoded your parameters
in four-bit precision,

40
00:02:03.957 --> 00:02:08.128
so it will definitely add
considerable overhead for large models.
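As an illustrative calculation (ignoring scales, zero points, and other quantization metadata): a 7-billion-parameter model stored with one 4-bit value per 8-bit slot occupies about 7 GB, whereas packing two 4-bit values per byte brings it down to about 3.5 GB.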

41
00:02:08.394 --> 00:02:10.730
Therefore,
if we go for the naive approach,

42
00:02:10.730 --> 00:02:13.766
meaning if we store the four-bit weights
in an eight-bit tensor,

43
00:02:13.967 --> 00:02:17.237
there will be no point
quantizing the model into four-bit

44
00:02:17.604 --> 00:02:20.607
because all the parameters will be stored
in eight-bit precision.

45
00:02:20.740 --> 00:02:25.612
So for that, we need to pack the four-bit
weights into an eight-bit tensor.

46
00:02:25.645 --> 00:02:28.648
So how does packing work in detail?

47
00:02:28.648 --> 00:02:31.284
So consider the tensor below that stores

48
00:02:31.284 --> 00:02:34.454
four values that can be represented
in two-bit precision.

49
00:02:34.687 --> 00:02:38.424
So recall in two-bit precision
you can encode four values.

50
00:02:38.424 --> 00:02:43.424
So in the case of base two, we can encode 0, 1, 2, 3.

51
00:02:44.063 --> 00:02:47.333
So we can encode at most four values: two to the power of two.
two to the power of two.

52
00:02:47.734 --> 00:02:50.537
And those values will be 0, 1, 2, and 3.
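In general, n bits can encode 2^n distinct values: 2 bits give 4 values, 4 bits give 16, and 8 bits give 256.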

53
00:02:50.537 --> 00:02:54.474
So imagine
we have the parameters of a model

54
00:02:54.474 --> 00:02:57.443
which we have encoded
in two-bit precision.

55
00:02:57.844 --> 00:02:59.846
And these are the parameters of the model.

56
00:02:59.846 --> 00:03:03.917
So right now in PyTorch we can't store
the model weights in two bits.

57
00:03:03.950 --> 00:03:06.920
So we have to store them in eight-bit
precision.

58
00:03:06.920 --> 00:03:11.724
So we'll have to end up with such a tensor
that will take four times

59
00:03:11.958 --> 00:03:14.961
eight bits
in terms of memory footprint.

60
00:03:14.994 --> 00:03:18.464
So currently this weight tensor
is encoded as follows:

61
00:03:19.065 --> 00:03:22.101
1 in 8 bits, 0 in 8 bits,

62
00:03:22.402 --> 00:03:25.405
3 in 8 bits, and 2 in 8 bits.
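As a concrete sketch of this naive layout (variable names are illustrative):

    import torch

    # Four 2-bit values, each occupying a full 8-bit slot:
    unpacked = torch.tensor([1, 0, 3, 2], dtype=torch.uint8)
    print(unpacked.element_size() * unpacked.nelement())  # 4 bytes for 8 bits of information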

63
00:03:25.605 --> 00:03:29.542
So as I said this is not really optimal
because you need to allocate four times

64
00:03:29.542 --> 00:03:32.545
eight-bits in terms of memory
in order to store

65
00:03:32.612 --> 00:03:35.615
weights that can be encoded in only two bits.

66
00:03:35.782 --> 00:03:40.353
So what can we do to avoid storing these bits
that we don't need?

67
00:03:40.887 --> 00:03:45.558
That's exactly what packing does:
it addresses this challenge by packing

68
00:03:45.625 --> 00:03:50.496
only the relevant bits
all together in a single eight-bit tensor.

69
00:03:50.830 --> 00:03:55.830
So let's say we're going to pack
these four weights into a single eight-bit tensor.

70
00:03:56.169 --> 00:03:58.204
So we're going to start
with the rightmost one.

71
00:03:58.204 --> 00:04:01.307
Then we're going to insert it
in our new eight-bit parameter.

72
00:04:01.307 --> 00:04:04.944
So, 2 is one-zero: we're going to put one zero
73
00:04:04.944 --> 00:04:08.514
in the first bits
of our new eight-bit parameter.

74
00:04:08.548 --> 00:04:12.118
And then 11, 00, and 01 (the bits of 3, 0, and 1).

75
00:04:12.752 --> 00:04:17.123
And if we store that in eight
bits, we'll end up having a new tensor

76
00:04:17.123 --> 00:04:19.859
with only a single value
instead of four values.

77
00:04:19.859 --> 00:04:24.859
But this time this tensor encodes all the
parameters that are stored in two-bits.

78
00:04:25.164 --> 00:04:29.836
So this value in uint8
will end up being 177.
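You can check this arithmetic directly, and sketch the packing step the lab implements from scratch (a minimal version; it assumes the first tensor value goes into the lowest-order bits, matching the result above):

    import torch

    # [1, 0, 3, 2] in 2-bit binary: 01, 00, 11, 10.
    # Concatenated as 0b10110001, this is 177 in uint8.
    print((2 << 6) | (3 << 4) | (0 << 2) | 1)  # 177

    def pack_2bit(unpacked: torch.Tensor) -> torch.Tensor:
        # Pack each group of four 2-bit values into one uint8 value.
        # Assumes the number of elements is a multiple of four.
        groups = unpacked.to(torch.uint8).reshape(-1, 4)
        shifts = torch.tensor([0, 2, 4, 6], dtype=torch.uint8)
        return (groups << shifts).sum(dim=1, dtype=torch.uint8)

    print(pack_2bit(torch.tensor([1, 0, 3, 2])))  # tensor([177], dtype=torch.uint8)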

79
00:04:30.203 --> 00:04:34.374
So the advantage of packing is that it
reflects the true

80
00:04:34.774 --> 00:04:37.677
or real memory
footprint of the quantized weights.

81
00:04:37.677 --> 00:04:41.714
So again, if we go for the naive approach
where we need to allocate four times

82
00:04:41.714 --> 00:04:44.951
eight-bit precision,
whereas for the packed case

83
00:04:44.951 --> 00:04:48.554
we only need to store a single parameter
in eight-bit precision

84
00:04:48.955 --> 00:04:53.760
that will store all the two-bit parameters
that we have.

85
00:04:54.093 --> 00:04:56.362
Of course, this has to come with a price.

86
00:04:56.362 --> 00:05:00.133
Whenever we want to perform inference,
we need to unpack the weights

87
00:05:00.366 --> 00:05:05.038
to come back to this state,
because most of the operations

88
00:05:05.338 --> 00:05:09.075
are not supported in native two-bit
or four-bit in PyTorch.
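And a matching unpacking sketch (again minimal, mirroring the bit order assumed in the packing sketch above):

    import torch

    def unpack_2bit(packed: torch.Tensor) -> torch.Tensor:
        # Recover four 2-bit values from each uint8 by shifting and masking.
        shifts = torch.tensor([0, 2, 4, 6], dtype=torch.uint8)
        return ((packed.unsqueeze(-1) >> shifts) & 0b11).reshape(-1)

    print(unpack_2bit(torch.tensor([177], dtype=torch.uint8)))
    # tensor([1, 0, 3, 2], dtype=torch.uint8)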

89
00:05:09.342 --> 00:05:12.879
And also,
the unpacked tensors need to have a shape

90
00:05:12.879 --> 00:05:16.182
with a multiple of eight divided
by the number of bits.

91
00:05:16.215 --> 00:05:21.215
And so if we have five parameters,
we'll need to allocate an extra eight-bit

92
00:05:21.454 --> 00:05:25.692
parameter here that will only encode
a single two-bit value.

93
00:05:25.725 --> 00:05:30.063
So ideally we need a multiple of eight
divided by the number of bits: in the case of two-bit,

94
00:05:30.296 --> 00:05:35.101
we need to have a multiple
of four parameters in the single tensor.
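A quick sketch of that constraint (the zero-padding strategy here is an illustrative assumption; the lab may handle leftover values differently):

    import torch

    def pad_to_multiple(values: torch.Tensor, nbits: int) -> torch.Tensor:
        # Each uint8 holds 8 // nbits values: four for 2-bit, two for 4-bit.
        group = 8 // nbits
        remainder = values.nelement() % group
        if remainder == 0:
            return values
        pad = torch.zeros(group - remainder, dtype=values.dtype)
        return torch.cat([values, pad])

    # Five 2-bit values get padded to eight, i.e. two packed bytes:
    print(pad_to_multiple(torch.tensor([1, 0, 3, 2, 1], dtype=torch.uint8), nbits=2))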

95
00:05:35.234 --> 00:05:35.568
Yeah.

96
00:05:35.568 --> 00:05:38.571
So let's see what it looks like
in terms of implementation.

97
00:05:38.738 --> 00:05:40.406
And we're going to move on to the lab.