WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533
1
00:00:00.000 --> 00:00:01.533
Welcome to this short course
2
00:00:01.533 --> 00:00:05.100
"Quantization in Depth,"
built in partnership with Hugging Face.
3
00:00:05.800 --> 00:00:09.200
In this course
you'll dive deep into the core technical
4
00:00:09.200 --> 00:00:13.533
building blocks of quantization,
which is a key part of the AI software
5
00:00:13.533 --> 00:00:17.233
stack for compressing
large language models and other models.
6
00:00:17.500 --> 00:00:18.800
You implement from scratch
7
00:00:18.800 --> 00:00:22.533
the most common variants of linear
quantization, called
8
00:00:22.600 --> 00:00:26.400
asymmetric and symmetric modes,
which relate to whether the
9
00:00:26.733 --> 00:00:30.500
compression algorithm maps zero
in the original representation
10
00:00:30.933 --> 00:00:33.933
to zero in the decompressed representation,
11
00:00:33.933 --> 00:00:37.233
or if it is allowed to shift
the location of that zero.
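To make that zero-point distinction concrete, here is a minimal PyTorch sketch of the two modes; the function names and details are illustrative assumptions, not the course's actual code:

import torch

def quantize_asymmetric(x, bits=8):
    # Asymmetric mode: the zero point is allowed to shift, so the full
    # [min, max] range of x maps onto the unsigned integer range.
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(qmin - torch.round(x.min() / scale))
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return q.to(torch.uint8), scale, zero_point

def quantize_symmetric(x, bits=8):
    # Symmetric mode: zero in x maps exactly to integer 0,
    # so no zero point needs to be stored.
    qmax = 2**(bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q.to(torch.int8), scale

Dequantizing with scale * (q - zero_point), or simply scale * q in the symmetric case, recovers an approximation of the original tensor.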
12
00:00:37.933 --> 00:00:42.200
You also implement different forms
of quantization, such as per tensor,
13
00:00:42.600 --> 00:00:46.500
per channel, and per group quantization
using PyTorch,
14
00:00:46.833 --> 00:00:48.100
where you can decide
15
00:00:48.100 --> 00:00:51.600
how big a chunk of your model
you want to quantize at one time.
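As a rough illustration of those granularities (an assumed symmetric int8 scheme on a 2D weight matrix, not the course's exact code), the only thing that changes is how many values share one scale:

import torch

def scale_per_tensor(w):
    # One scale shared by the entire weight tensor.
    return w.abs().max() / 127

def scale_per_channel(w):
    # One scale per output channel, i.e. per row of a 2D weight.
    return w.abs().amax(dim=1, keepdim=True) / 127

def scale_per_group(w, group_size=32):
    # One scale per contiguous group of group_size values in each row.
    rows, cols = w.shape
    groups = w.reshape(rows, cols // group_size, group_size)
    return groups.abs().amax(dim=-1, keepdim=True) / 127

w = torch.randn(4, 64)
q = torch.round(w / scale_per_channel(w)).to(torch.int8)  # one scale per row

Smaller chunks (per group) track the weight distribution more closely but store more scales; per tensor stores a single scale but is the coarsest.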
16
00:00:52.500 --> 00:00:56.100
You end up building a quantizer
to quantize any model
17
00:00:56.100 --> 00:01:00.433
in eight-bit precision
using per channel linear quantization.
18
00:01:00.733 --> 00:01:05.200
If some of the terms I use don't make
sense yet, don't worry about it.
19
00:01:05.233 --> 00:01:06.100
These are all key
20
00:01:06.100 --> 00:01:09.833
technical concepts in quantization
that you learn about in this course.
21
00:01:10.033 --> 00:01:14.300
And in addition to understanding
all these quantization options,
22
00:01:14.400 --> 00:01:19.400
you also hone your intuition
about when to apply which technique.
23
00:01:19.700 --> 00:01:23.000
I'm delighted to introduce our instructors
for this course.
24
00:01:23.433 --> 00:01:26.700
Younes Belkada, a machine
learning engineer at Hugging Face,
25
00:01:27.100 --> 00:01:29.600
has been involved in the open source team,
26
00:01:29.600 --> 00:01:32.933
where he works at the intersection
of many open source tools
27
00:01:33.100 --> 00:01:37.533
developed by Hugging Face
such as transformers, PEFT, and TRL.
28
00:01:38.300 --> 00:01:42.300
And also Marc Sun, who's a machine
learning engineer at Hugging Face.
29
00:01:42.733 --> 00:01:46.100
Marc is part of the open source team,
where he contributes to libraries
30
00:01:46.100 --> 00:01:49.100
such as transformers or Accelerate.
31
00:01:49.633 --> 00:01:52.733
Marc and Younes are also deeply involved
in quantization
32
00:01:52.733 --> 00:01:56.300
in order to make large models
accessible to the community.
33
00:01:57.700 --> 00:01:58.800
Thanks, Andrew.
34
00:01:58.800 --> 00:02:01.200
We are excited to work with you
and your team on this.
35
00:02:01.200 --> 00:02:05.133
In this course, you will directly try
your hand at implementing
36
00:02:05.133 --> 00:02:08.300
from scratch
different variants of linear quantization,
37
00:02:08.400 --> 00:02:10.500
symmetric and asymmetric modes.
38
00:02:10.500 --> 00:02:14.033
You will also implement
different quantization granularities, such
39
00:02:14.033 --> 00:02:19.033
as per tensor, per channel
and per group quantization in pure PyTorch.
40
00:02:19.033 --> 00:02:23.400
Each one of these algorithms has
its own advantages and drawbacks.
41
00:02:23.800 --> 00:02:26.600
After that,
you'll build your own quantizer
42
00:02:26.600 --> 00:02:29.600
in order to quantize any model
in eight-bit precision,
43
00:02:29.633 --> 00:02:33.200
using the per channel quantization scheme
that you have seen right before.
44
00:02:33.600 --> 00:02:35.400
You will see that you'll be able
to apply this
45
00:02:35.400 --> 00:02:39.333
method to any model regardless
of its modality, meaning you can apply it
46
00:02:39.333 --> 00:02:43.033
to a text, vision, audio,
or even a multimodal model.
47
00:02:43.200 --> 00:02:46.133
Once you are happy with the quantizer,
you will try your hand at
48
00:02:46.133 --> 00:02:49.133
addressing common challenges
in quantization.
49
00:02:49.233 --> 00:02:52.700
At the time we speak, the most common
way of storing low-bit precision
50
00:02:52.700 --> 00:02:56.533
weights, such as four-bit or two-bit,
seems to be weights packing.
51
00:02:57.033 --> 00:03:00.300
With weights packing,
you can pack together 2-bit or 4-bit
52
00:03:00.300 --> 00:03:04.233
tensors into a larger eight-bit tensor
without allocating any extra memory.
53
00:03:04.733 --> 00:03:06.600
We will see together
why this is important,
54
00:03:06.600 --> 00:03:09.900
and you will implement from scratch
packing and unpacking algorithms.
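As a simplified sketch of the idea (the helper names are assumptions, not the course's implementation), four 2-bit values can share a single uint8:

import torch

def pack_2bit(values):
    # Pack four 2-bit integers (0..3) into each uint8, low bits first.
    # Assumes values.numel() is a multiple of 4.
    v = values.to(torch.uint8).reshape(-1, 4)
    return v[:, 0] | (v[:, 1] << 2) | (v[:, 2] << 4) | (v[:, 3] << 6)

def unpack_2bit(packed):
    # Recover the four 2-bit values stored in each byte.
    shifts = torch.tensor([0, 2, 4, 6], dtype=torch.uint8)
    return ((packed.unsqueeze(-1) >> shifts) & 0b11).reshape(-1)

packed = pack_2bit(torch.tensor([3, 0, 1, 2]))  # one byte instead of four
assert unpack_2bit(packed).tolist() == [3, 0, 1, 2]

The packed tensor occupies a quarter of the memory, which matters because standard tensors have no 2-bit or 4-bit storage dtype.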
55
00:03:10.100 --> 00:03:12.933
Finally, we will learn together
about other challenges
56
00:03:12.933 --> 00:03:16.000
when it comes to quantizing large models
such as LLMs.
57
00:03:16.400 --> 00:03:20.300
We will review together current state-of-the-art
approaches in order to perform
58
00:03:20.300 --> 00:03:21.800
quantization on LLMs with no performance degradation,
59
00:03:21.800 --> 00:03:26.100
and go through
how to do that within the Hugging Face
60
00:03:26.100 --> 00:03:27.133
ecosystem.
61
00:03:27.133 --> 00:03:29.800
Quantization is a really important part
62
00:03:29.800 --> 00:03:32.800
of the practical use of large models today.
63
00:03:32.833 --> 00:03:35.833
So having in-depth knowledge of it
will help you to build,
64
00:03:35.900 --> 00:03:38.900
deploy, and use models more effectively.
65
00:03:39.300 --> 00:03:41.633
Many people have worked
to create this course.
66
00:03:41.633 --> 00:03:43.300
I'd like to thank, on the Hugging Face
67
00:03:43.300 --> 00:03:47.100
side, the entire Hugging Face team
for the review of this course content,
68
00:03:47.200 --> 00:03:50.233
as well as the Hugging Face community
for their contributions
69
00:03:50.233 --> 00:03:54.300
to open source models and quantization
methods. From DeepLearning.AI,
70
00:03:54.533 --> 00:03:57.800
Eddy Shyu
also contributed to this course.
71
00:03:58.000 --> 00:04:01.000
Quantization is a fairly technical topic.
72
00:04:01.600 --> 00:04:04.633
After this course,
I hope you deeply understand it
73
00:04:04.633 --> 00:04:07.100
so you can say to others, "I now get it.
74
00:04:07.100 --> 00:04:09.700
I'm not worried about model compression."
75
00:04:09.700 --> 00:04:13.800
In other words, you can say:
"I'm not sweating the small stuff."
76
00:04:14.633 --> 00:04:16.733
Let's go on to the
next video and get started.