WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533

1
00:00:00.000 --> 00:00:01.533
Welcome to this short course

2
00:00:01.533 --> 00:00:05.100
"Quantization in Depth,"
built in partnership with Hugging Face.

3
00:00:05.800 --> 00:00:09.200
In this course,
you'll dive deep into the core technical

4
00:00:09.200 --> 00:00:13.533
building blocks of quantization,
which is a key part of the AI software

5
00:00:13.533 --> 00:00:17.233
stack for compressing
large language models and other models.

6
00:00:17.500 --> 00:00:18.800
You'll implement from scratch

7
00:00:18.800 --> 00:00:22.533
the most common variants of linear
quantization, called

8
00:00:22.600 --> 00:00:26.400
asymmetric and symmetric modes,
which relate to whether

9
00:00:26.733 --> 00:00:30.500
the compression algorithm maps zero
in the original representation,

10
00:00:30.933 --> 00:00:33.933
to zero in the decompressed representation,

11
00:00:33.933 --> 00:00:37.233
or if it is allowed to shift
the location of that zero.

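A minimal sketch of the two modes in PyTorch may make this concrete
(the function names and the eight-bit ranges are illustrative
assumptions, not the course's own notebooks):

    import torch

    def asymmetric_quantize(x: torch.Tensor, bits: int = 8):
        # Asymmetric mode: map the full [min, max] range of x onto the
        # integer grid; the original 0.0 may land on a nonzero integer,
        # so a zero point is stored alongside the scale.
        qmin, qmax = 0, 2 ** bits - 1
        scale = (x.max() - x.min()) / (qmax - qmin)
        zero_point = qmin - torch.round(x.min() / scale)
        q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
        return q.to(torch.uint8), scale, zero_point

    def symmetric_quantize(x: torch.Tensor, bits: int = 8):
        # Symmetric mode: center the range on zero, so the real value
        # 0.0 always maps exactly to integer 0 and no zero point is needed.
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().max() / qmax
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q.to(torch.int8), scale

Dequantization reverses the mapping: (q - zero_point) * scale in the
asymmetric case, and simply q * scale in the symmetric case.
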
12
00:00:37.933 --> 00:00:42.200
You also implement different forms
of quantization, such as per-tensor,

13
00:00:42.600 --> 00:00:46.500
per-channel, and per-group quantization
using PyTorch,

14
00:00:46.833 --> 00:00:48.100
where you can decide

15
00:00:48.100 --> 00:00:51.600
how big a chunk of your model
you want to quantize at one time.

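As a rough sketch (toy shapes and an eight-bit signed range, both
assumptions of mine), the three granularities differ only in how many
scales you compute:

    import torch

    w = torch.randn(4, 8)  # toy weight matrix

    # Per-tensor: a single scale for the whole tensor.
    scale_tensor = w.abs().max() / 127

    # Per-channel: one scale per output row, kept broadcastable.
    scale_channel = w.abs().amax(dim=1, keepdim=True) / 127

    # Per-group: one scale per group of (here) four consecutive weights.
    group_size = 4
    scale_group = w.reshape(-1, group_size).abs().amax(dim=1, keepdim=True) / 127

Smaller chunks track the local weight distribution more closely, at the
cost of storing more scales.
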
16
00:00:52.500 --> 00:00:56.100
You end up building a quantizer
to quantize any model

17
00:00:56.100 --> 00:01:00.433
in eight-bit precision
using per-channel linear quantization.

18
00:01:00.733 --> 00:01:05.200
If some of the terms I use don't make
sense yet, don't worry about it.

19
00:01:05.233 --> 00:01:06.100
These are all key

20
00:01:06.100 --> 00:01:09.833
technical concepts in quantization
that you learn about in this course.

21
00:01:10.033 --> 00:01:14.300
And in addition to understanding
all these quantization options,

22
00:01:14.400 --> 00:01:19.400
you also hone your intuition
about when to apply which technique.

23
00:01:19.700 --> 00:01:23.000
I'm delighted to introduce our instructors
for this course.

24
00:01:23.433 --> 00:01:26.700
Younes Belkada, a machine
learning engineer at Hugging Face,

25
00:01:27.100 --> 00:01:29.600
has been involved in the open source team,

26
00:01:29.600 --> 00:01:32.933
where he works at the intersection
of many open source tools

27
00:01:33.100 --> 00:01:37.533
developed by Hugging Face
such as transformers, PEFT, and TRL.

28
00:01:38.300 --> 00:01:42.300
And Marc Sun, who's also a machine
learning engineer at Hugging Face.

29
00:01:42.733 --> 00:01:46.100
Marc is part of the open source team,
where he contributes to libraries

30
00:01:46.100 --> 00:01:49.100
such as transformers or Accelerate.

31
00:01:49.633 --> 00:01:52.733
Marc and Younes are also deeply involved
in quantization

32
00:01:52.733 --> 00:01:56.300
in order to make large models
accessible to the community.

33
00:01:57.700 --> 00:01:58.800
Thanks, Andrew.

34
00:01:58.800 --> 00:02:01.200
We are excited to work with you
and your team on this.

35
00:02:01.200 --> 00:02:05.133
In this course, you will directly try
your hand at implementing

36
00:02:05.133 --> 00:02:08.300
from scratch
different variants of linear quantization,

37
00:02:08.400 --> 00:02:10.500
symmetric and asymmetric modes.

38
00:02:10.500 --> 00:02:14.033
You will also implement
different quantization granularities, such

39
00:02:14.033 --> 00:02:19.033
as per-tensor, per-channel,
and per-group quantization in pure PyTorch.

40
00:02:19.033 --> 00:02:23.400
Each of these algorithms has
its own advantages and drawbacks.

41
00:02:23.800 --> 00:02:26.600
After that,
you'll build your own quantizer

42
00:02:26.600 --> 00:02:29.600
in order to quantize any model
in eight-bit precision,

43
00:02:29.633 --> 00:02:33.200
using the per-channel quantization scheme
that you saw right before.

44
00:02:33.600 --> 00:02:35.400
You will see that you'll be able
to apply this

45
00:02:35.400 --> 00:02:39.333
method to any model regardless
of its modality, meaning you can apply it

46
00:02:39.333 --> 00:02:43.033
to a text, vision, audio,
or even a multimodal model.

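A rough sketch of that idea (the buffer names and the bare weight
replacement are illustrative assumptions, not the course's exact code):

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def quantize_model_8bit_(model: nn.Module):
        # Walk every submodule; the modality does not matter because we
        # only touch nn.Linear layers, which appear in text, vision,
        # audio, and multimodal architectures alike.
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                # Per-channel scale: one value per output row.
                scale = w.abs().amax(dim=1, keepdim=True) / 127
                module.register_buffer(
                    "int8_weight", torch.round(w / scale).to(torch.int8))
                module.register_buffer("weight_scale", scale)

At inference time, a quantized layer would either dequantize on the fly
(int8_weight * weight_scale) or feed an int8 matmul kernel directly.
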
47
00:02:43.200 --> 00:02:46.133
Once you are happy with the quantizer,
you will try your hand at

48
00:02:46.133 --> 00:02:49.133
addressing common challenges
in quantization.

49
00:02:49.233 --> 00:02:52.700
At the time we speak, the most common
way of storing low-bit precision

50
00:02:52.700 --> 00:02:56.533
weights, such as four-bit or two-bit,
seems to be weight packing.

51
00:02:57.033 --> 00:03:00.300
With weight packing,
you can pack together 2-bit or 4-bit

52
00:03:00.300 --> 00:03:04.233
tensors into a larger eight-bit tensor
without allocating any extra memory.

53
00:03:04.733 --> 00:03:06.600
We will see together
why this is important,

54
00:03:06.600 --> 00:03:09.900
and you will implement from scratch
packing and unpacking algorithms.

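A minimal sketch of the trick for the 2-bit case (assuming the values
already fit in two bits and the tensor length is a multiple of four;
the function names are mine):

    import torch

    def pack_2bit(x: torch.Tensor) -> torch.Tensor:
        # x holds 2-bit values (0..3) stored wastefully one per byte;
        # four of them fit into every packed uint8.
        x = x.to(torch.uint8).reshape(-1, 4)
        return x[:, 0] | (x[:, 1] << 2) | (x[:, 2] << 4) | (x[:, 3] << 6)

    def unpack_2bit(packed: torch.Tensor) -> torch.Tensor:
        # Recover the four 2-bit values hidden in each byte.
        shifts = torch.tensor([0, 2, 4, 6], dtype=torch.uint8)
        return ((packed.unsqueeze(-1) >> shifts) & 0b11).reshape(-1)

Packing matters because standard PyTorch tensors have no sub-byte
integer dtypes, so 2-bit and 4-bit weights would otherwise each occupy
a full byte.
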
55
00:03:10.100 --> 00:03:12.933
Finally, we will learn together
about other challenges

56
00:03:12.933 --> 00:03:16.000
when it comes to quantizing large models
such as LLMs.

57
00:03:16.400 --> 00:03:20.300
We will review together current state-of-the-art
approaches in order to perform

58
00:03:20.300 --> 00:03:21.800
quantization on LLMs with no performance degradation,

59
00:03:21.800 --> 00:03:26.100
and go through
how to do that within the Hugging Face

60
00:03:26.100 --> 00:03:27.133
ecosystem.

61
00:03:27.133 --> 00:03:29.800
Quantization is a really important part

62
00:03:29.800 --> 00:03:32.800
of practical use of large models today.

63
00:03:32.833 --> 00:03:35.833
So having in-depth knowledge of it
will help you to build,

64
00:03:35.900 --> 00:03:38.900
deploy, and use models more effectively.

65
00:03:39.300 --> 00:03:41.633
Many people have worked
to create this course.

66
00:03:41.633 --> 00:03:43.300
I'd like to thank, on the Hugging Face

67
00:03:43.300 --> 00:03:47.100
side, the entire Hugging Face team
for the review of this course content,

68
00:03:47.200 --> 00:03:50.233
as well as the Hugging Face community
for their contributions

69
00:03:50.233 --> 00:03:54.300
to open source models and quantization
methods. From DeepLearning.AI,

70
00:03:54.533 --> 00:03:57.800
Eddy Shyu
also contributed to this course.

71
00:03:58.000 --> 00:04:01.000
Quantization is a fairly technical topic.

72
00:04:01.600 --> 00:04:04.633
After this course,
I hope you deeply understand it

73
00:04:04.633 --> 00:04:07.100
so you can say to others, "I now get it.

74
00:04:07.100 --> 00:04:09.700
I'm not worried about model compression."

75
00:04:09.700 --> 00:04:13.800
In other words, you can say:
"I'm not sweating the small stuff."

76
00:04:14.633 --> 00:04:16.733
Let's go on to the
next video and get started.