WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:00.000 --> 00:00:03.100
Quantization methods are used to make models smaller,
2
00:00:03.100 --> 00:00:06.200
which makes them more accessible to the AI community.
3
00:00:06.200 --> 00:00:09.800
In this lesson, you'll get an overview of what quantization is,
4
00:00:09.800 --> 00:00:12.500
and how it works. Let's get started.
5
00:00:12.500 --> 00:00:15.500
We have seen previously
that quantization is an exciting topic
6
00:00:15.500 --> 00:00:19.733
as it enables us to shrink models
for better accessibility to the community.
7
00:00:19.900 --> 00:00:22.000
In this lesson,
we will learn how to implement
8
00:00:22.000 --> 00:00:25.400
some quantization primitives from scratch,
and we will also implement
9
00:00:25.400 --> 00:00:29.933
our own model quantizer and cover
some challenges that anyone can face
10
00:00:29.933 --> 00:00:33.700
when it comes to lower-bit quantization,
such as weight packing.
11
00:00:34.300 --> 00:00:35.433
Let's get started.
12
00:00:35.433 --> 00:00:38.300
So let's first take a quick glance
at what we have learned
13
00:00:38.300 --> 00:00:40.200
so far from the first course.
14
00:00:40.200 --> 00:00:44.100
So, in the introduction of the first
course, we listed all available techniques
15
00:00:44.100 --> 00:00:47.433
that one could use in order
to compress a model in general.
16
00:00:48.000 --> 00:00:50.733
So first of all, quantization
aims at representing
17
00:00:50.733 --> 00:00:53.733
parameters of the model
in a lower precision.
18
00:00:53.900 --> 00:00:57.700
With knowledge distillation,
you can train a student model
19
00:00:57.700 --> 00:01:00.700
using the bigger teacher model outputs.
20
00:01:00.833 --> 00:01:04.300
And finally, with pruning,
you can simply remove some connections
21
00:01:04.300 --> 00:01:09.300
inside the model, meaning removing weights
to make the model more sparse.
22
00:01:10.100 --> 00:01:12.533
We also covered
common data types in machine
23
00:01:12.533 --> 00:01:16.700
learning, such as INT8 or floating point.
We also performed
24
00:01:16.800 --> 00:01:21.100
linear quantization using Hugging Face's
Quanto library with a few lines of code.
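NOTE As a reminder, the quick linear quantization from the first course looked roughly like the sketch below. This is illustrative, not the course's exact code; it assumes the quanto package's basic quantize/freeze workflow, and the model name is just a placeholder.
    import torch
    from transformers import AutoModelForCausalLM
    from quanto import quantize, freeze, qint8
    model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
    quantize(model, weights=qint8, activations=None)      # swap in quantized linear layers
    freeze(model)                                         # materialize the int8 weights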
25
00:01:21.400 --> 00:01:24.200
And finally,
we wrapped up the course with an overview
26
00:01:24.200 --> 00:01:27.200
of how quantization can be leveraged
in different use cases,
27
00:01:27.233 --> 00:01:29.900
such as fine-tuning
large language models.
28
00:01:29.900 --> 00:01:32.900
So let's see together what we are going
to cover exactly in this course.
29
00:01:33.333 --> 00:01:37.433
So, first of all, we are going
to deep dive together into the internals
30
00:01:37.433 --> 00:01:41.733
of linear quantization and implement
some of its variants from scratch,
31
00:01:42.000 --> 00:01:46.000
such as per-channel,
per-tensor, or per-group quantization.
32
00:01:46.700 --> 00:01:50.200
We will study the advantages
and drawbacks of each of these methods,
33
00:01:50.200 --> 00:01:53.100
and we will see their impact
on some random tensors.
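NOTE A minimal sketch of what per-tensor symmetric linear quantization looks like in plain PyTorch (illustrative names, not the course's exact code):
    import torch
    def quantize_symmetric_per_tensor(x, dtype=torch.int8):
        q_max = torch.iinfo(dtype).max                     # 127 for int8
        scale = x.abs().max() / q_max                      # one scale for the whole tensor
        q = torch.clamp(torch.round(x / scale), -q_max, q_max).to(dtype)
        return q, scale
    def dequantize(q, scale):
        return q.to(torch.float32) * scale
    x = torch.randn(4, 8)
    q, scale = quantize_symmetric_per_tensor(x)
    print((x - dequantize(q, scale)).abs().max())          # quantization error on a random tensor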
34
00:01:53.100 --> 00:01:56.700
And next,
we will try to build our own quantizer
35
00:01:56.900 --> 00:01:58.800
to quantize any model in eight-bit
36
00:01:58.800 --> 00:02:02.500
precision using one of the quantization
schemes presented before.
37
00:02:02.833 --> 00:02:06.200
Note that the quantization scheme
is agnostic to modalities,
38
00:02:06.433 --> 00:02:11.000
meaning you can apply it to any model as long
as your model contains linear layers.
39
00:02:11.400 --> 00:02:14.400
Technically, you will be able
to use your quantizer to quantize
40
00:02:14.400 --> 00:02:17.900
a vision, text, audio,
or even a multimodal model.
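NOTE A rough sketch of the idea behind such a quantizer, assuming we simply replace every nn.Linear with a module that stores int8 weights plus per-output-channel scales (class and function names here are illustrative, not the course's actual implementation):
    import torch
    import torch.nn as nn
    class W8A16Linear(nn.Module):
        def __init__(self, linear):
            super().__init__()
            w = linear.weight.detach()
            # per-output-channel symmetric scales
            self.register_buffer("scales", (w.abs().max(dim=-1, keepdim=True).values / 127.0).clamp(min=1e-8))
            self.register_buffer("int8_weight", torch.round(w / self.scales).to(torch.int8))
            self.bias = linear.bias
        def forward(self, x):
            # dequantize on the fly and run a regular linear layer
            w = self.int8_weight.to(x.dtype) * self.scales.to(x.dtype)
            return nn.functional.linear(x, w, self.bias)
    def replace_linear_layers(model):
        # recursively swap every nn.Linear for its 8-bit counterpart
        for name, child in model.named_children():
            if isinstance(child, nn.Linear):
                setattr(model, name, W8A16Linear(child))
            else:
                replace_linear_layers(child)
        return model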
41
00:02:18.200 --> 00:02:21.433
And finally, we will wrap up the course
by learning more
42
00:02:21.500 --> 00:02:26.033
about some challenges that you can face
when it comes to extreme quantization,
43
00:02:26.033 --> 00:02:29.533
such as weight packing,
which is a common challenge these days.
44
00:02:30.000 --> 00:02:32.933
As we speak, PyTorch
does not have
45
00:02:32.933 --> 00:02:36.100
native support for two-bit or four-bit
precision weights.
46
00:02:36.400 --> 00:02:40.000
One way to address
this issue is to pack these low precision
47
00:02:40.000 --> 00:02:43.433
weights into a higher precision tensor,
for example INT8.
48
00:02:43.833 --> 00:02:45.600
And we will deep dive into that
49
00:02:45.600 --> 00:02:49.300
and implement
packing and unpacking algorithms together.
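NOTE A rough sketch of the packing idea, assuming 2-bit codes (values 0-3) stored four per byte inside a uint8 tensor; this is illustrative and not necessarily the exact algorithm used later in the course:
    import torch
    def pack_2bit(codes):
        # codes: 1-D uint8 tensor of 2-bit values (0..3), length divisible by 4
        codes = codes.view(-1, 4)
        packed = torch.zeros(codes.shape[0], dtype=torch.uint8)
        for i in range(4):
            packed |= codes[:, i] << (2 * i)   # place each code in its own 2-bit slot
        return packed
    def unpack_2bit(packed):
        # recover the four 2-bit codes stored in every byte
        return torch.stack([(packed >> (2 * i)) & 0b11 for i in range(4)], dim=-1).view(-1)
    codes = torch.tensor([1, 0, 3, 2], dtype=torch.uint8)
    packed = pack_2bit(codes)                  # tensor([177]) == 0b10110001
    assert torch.equal(unpack_2bit(packed), codes)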
50
00:02:49.400 --> 00:02:53.300
And we will end the course by covering
the other common challenges that arise
51
00:02:53.300 --> 00:02:56.333
when it comes to quantizing large models
such as LLMs.
52
00:02:56.500 --> 00:03:00.600
And we will review together some state-of-the-art
quantization methods.
53
00:03:00.833 --> 00:03:03.600
Yeah, so let's try to get started
and shrink some models.