WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533

1
00:00:02.002 --> 00:00:03.003
Now, let's go

2
00:00:03.003 --> 00:00:05.972
even smaller and do per group
quantization.

3
00:00:05.972 --> 00:00:10.777
In per group quantization we perform
quantization on groups of n elements.

4
00:00:11.144 --> 00:00:13.813
Common values for n are 32,

5
00:00:13.813 --> 00:00:16.883
64, or 128. Per group

6
00:00:16.883 --> 00:00:19.619
quantization can require a lot of memory.

7
00:00:19.619 --> 00:00:20.086
Let's say,

8
00:00:20.086 --> 00:00:25.086
we want to quantize a tensor in four-bit,
and we choose a group size equal to 32.

9
00:00:25.325 --> 00:00:26.459
We use symmetric mode.

10
00:00:26.459 --> 00:00:29.062
That means that the zero point
is equal to zero,

11
00:00:29.062 --> 00:00:32.265
and we store the scales
in floating point 16.

12
00:00:32.599 --> 00:00:37.504
It means that we are actually quantizing
the tensor in 4.5 bits.

13
00:00:37.871 --> 00:00:42.709
Since each element is stored
using four bits,

14
00:00:43.343 --> 00:00:47.414
we add 16 divided by 32 bits per element.

15
00:00:47.747 --> 00:00:51.985
Since we need to store a scale in 16 bits

16
00:00:51.985 --> 00:00:56.823
for every 32 elements: each element
is stored in four bits,

17
00:00:56.923 --> 00:01:01.923
but you also have quantization parameters
and you need to store

18
00:01:02.462 --> 00:01:07.033
a scale in 16 bits once,
so 16 bits every 32 elements.

19
00:01:07.167 --> 00:01:09.069
Now let's jump to the code.

20
00:01:09.069 --> 00:01:12.305
For simplicity,
we will restrict ourselves to the case

21
00:01:12.305 --> 00:01:16.709
where the tensor is of dimension two
and we will be using the symmetric mode.

22
00:01:16.943 --> 00:01:17.210
You don't

23
00:01:17.210 --> 00:01:20.680
need to pay attention to this code
since we will be coding it in the notebook.

24
00:01:20.980 --> 00:01:22.615
Now let's code it.

25
00:01:22.615 --> 00:01:25.518
So we define the following function:

26
00:01:25.518 --> 00:01:27.353
linear q symmetric per group.

27
00:01:28.955 --> 00:01:31.958
This will take as arguments the tensor,

28
00:01:33.126 --> 00:01:36.129
the group size and the d-type.

29
00:01:37.797 --> 00:01:41.000
We set the default value to torch.int8.

30
00:01:42.502 --> 00:01:46.239
First,
we need to get the shape of the tensor.

31
00:01:50.276 --> 00:01:53.413
Then, another restriction for this function

32
00:01:53.613 --> 00:01:57.750
is that we will be performing
quantization along the rows.

33
00:01:57.851 --> 00:02:02.851
This is why we also need to make sure
that each row is divisible by the group size.

34
00:02:02.922 --> 00:02:05.892
To confirm that,
we will just use an assertion

35
00:02:05.892 --> 00:02:10.892
so that the shape of the tensor along the
rows is indeed divisible by the group size.

36
00:02:13.366 --> 00:02:17.637
Then, as I said,
we will be restricting ourselves

37
00:02:17.637 --> 00:02:20.640
to tensors of dimension two.

38
00:02:22.509 --> 00:02:26.646
Now all we need to do
is to reshape the tensor

39
00:02:26.980 --> 00:02:31.084
so that we end up with rows of group
size elements.

40
00:02:31.784 --> 00:02:36.122
To do that, we will use the view function
that we learned about.

41
00:02:40.527 --> 00:02:42.862
So as you can see, what we do here

42
00:02:42.862 --> 00:02:46.499
is to make sure that each row contains
group size elements.

43
00:02:46.799 --> 00:02:49.702
And we put the minus one here
so that it infers

44
00:02:49.702 --> 00:02:53.273
automatically the right size
for the first dimension.

45
00:02:53.540 --> 00:02:57.310
And now if you look at the tensor,
we have the setup

46
00:02:57.577 --> 00:03:00.413
for performing per-channel quantization.

47
00:03:00.413 --> 00:03:04.984
We resized this tensor
so that we have rows of group size

48
00:03:05.084 --> 00:03:09.088
so that we can use the function
that we coded previously.

49
00:03:09.522 --> 00:03:12.959
That is to say, the linear q
symmetric per channel function.

50
00:03:13.126 --> 00:03:16.129
So we have the quantized tensor

51
00:03:16.763 --> 00:03:17.864
and the scale,

52
00:03:17.864 --> 00:03:21.768
which are equal to the linear
q symmetric per channel function.

53
00:03:22.669 --> 00:03:26.506
And we need to pass the tensor
and the right dimension,

54
00:03:26.906 --> 00:03:29.976
so along the rows, and the d-type.

| 00:03:33.179 --> 00:03:35.348 | |
| After quantizing the tensor | |
| 56 | |
| 00:03:35.348 --> 00:03:38.551 | |
| we still need to reshape it | |
| to its original shape. | |
| 57 | |
| 00:03:38.851 --> 00:03:41.688 | |
| So we will use the shape | |
| that we stored before. | |
| 58 | |
| 00:03:41.688 --> 00:03:43.022 | |
| Here the d shape. | |
| 59 | |
| 00:03:46.292 --> 00:03:47.594 | |
| To reshape the tensor, | |
| 60 | |
| 00:03:47.594 --> 00:03:51.130 | |
| we use the view | |
| and we just pass this shape. | |
| 61 | |
| 00:03:51.564 --> 00:03:54.567 | |
| Then we can return the quantized tensor | |
| and the scale. | |
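The function walked through above might be sketched like this (a minimal reconstruction based on the transcript; the function and variable names, including the per-channel helper, are assumptions, not the notebook's exact code):

```python
import torch

def linear_q_symmetric_per_channel(tensor, dim, dtype=torch.int8):
    # Stand-in for the per-channel helper coded earlier in the course:
    # one symmetric scale per slice along `dim`, zero point fixed at 0.
    q_max = torch.iinfo(dtype).max
    reduce_dims = [d for d in range(tensor.dim()) if d != dim]
    scale = tensor.abs().amax(dim=reduce_dims) / q_max
    scale_shape = [1] * tensor.dim()
    scale_shape[dim] = -1
    quantized = torch.round(tensor / scale.view(scale_shape))
    quantized = quantized.clamp(-q_max - 1, q_max).to(dtype)
    return quantized, scale

def linear_q_symmetric_per_group(tensor, group_size, dtype=torch.int8):
    t_shape = tensor.shape
    # Restrictions from the transcript: 2-D tensors only, and each row
    # must split evenly into groups of `group_size` elements.
    assert t_shape[1] % group_size == 0
    assert tensor.dim() == 2
    # Reshape so every row holds exactly `group_size` elements, quantize
    # per channel along the rows (dim=0), then restore the original shape.
    grouped = tensor.view(-1, group_size)
    quantized, scale = linear_q_symmetric_per_channel(grouped, dim=0, dtype=dtype)
    return quantized.view(t_shape), scale
```

The `-1` passed to `view` lets PyTorch infer the number of groups automatically, exactly as described for the first dimension above.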
62
00:03:55.902 --> 00:03:57.570
Now that we have coded

63
00:03:57.570 --> 00:04:02.342
the per group quantization, let's code
the linear dequantization

64
00:04:02.575 --> 00:04:07.380
for the per group quantization
in order to verify our results.

65
00:04:07.947 --> 00:04:10.950
So we need to define

66
00:04:10.950 --> 00:04:11.951
this function.

67
00:04:11.951 --> 00:04:15.021
In that function
we need the quantized tensor

68
00:04:16.022 --> 00:04:18.658
and the scale.

69
00:04:18.658 --> 00:04:21.661
But we also need the group size.

70
00:04:25.098 --> 00:04:29.068
Then we need to get the shape
of the quantized tensor.

71
00:04:29.102 --> 00:04:32.105
That will be useful.

72
00:04:33.106 --> 00:04:35.842
Then we need to reshape

73
00:04:35.842 --> 00:04:39.479
the quantized tensor
so that we have rows that contain

74
00:04:39.479 --> 00:04:42.482
only group size elements.

75
00:04:42.548 --> 00:04:46.986
To do that,
we pass to the view method minus

76
00:04:46.986 --> 00:04:51.190
one for the first value and group size
for the second one.

77
00:04:52.191 --> 00:04:56.062
Then we can reuse the linear

78
00:04:56.162 --> 00:05:00.366
dequantization method
we coded before to dequantize the tensor.

79
00:05:00.600 --> 00:05:05.600
We need to pass the quantized tensor,
the scale and the zero point.

80
00:05:06.072 --> 00:05:08.641
But since we are
doing symmetric quantization,

81
00:05:09.942 --> 00:05:11.878
the zero point is equal to zero.

82
00:05:11.878 --> 00:05:16.382
Then all we need to do is to reshape
the dequantized tensor

83
00:05:16.749 --> 00:05:21.554
with the shape of the original tensor,
and the shape is stored in q shape.

84
00:05:24.223 --> 00:05:27.226
Then we return the dequantized tensor.

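The dequantization steps above could look like this (a minimal sketch; the function name is assumed, not the notebook's exact code):

```python
import torch

def linear_dequantization_per_group(quantized_tensor, scale, group_size):
    # Remember the original shape so we can restore it at the end.
    q_shape = quantized_tensor.shape
    # Rows of `group_size` elements again, so each row matches one scale.
    grouped = quantized_tensor.view(-1, group_size)
    # Symmetric mode: the zero point is zero, so dequantization is
    # simply the quantized value times the scale.
    dequantized = grouped.float() * scale.view(-1, 1)
    return dequantized.view(q_shape)
```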
85
00:05:27.994 --> 00:05:30.997
Now let's test our implementation.

86
00:05:30.997 --> 00:05:33.933
We will test a random tensor of size six

87
00:05:33.933 --> 00:05:36.903
by six and

88
00:05:37.937 --> 00:05:40.840
let's set the group size to be equal to three.

89
00:05:40.840 --> 00:05:45.478
So, we will get the quantized tensor
and the scale

90
00:05:46.546 --> 00:05:50.450
using the linear q symmetric per group function.

91
00:05:50.850 --> 00:05:53.553
And we need to pass the test

92
00:05:53.553 --> 00:05:56.556
tensor as well as the group size.

93
00:05:56.756 --> 00:06:00.293
Then to verify our results
we also need to

94
00:06:00.293 --> 00:06:03.296
dequantize the tensor using

95
00:06:03.296 --> 00:06:07.500
the linear dequantization function,
where we need to pass

96
00:06:08.101 --> 00:06:10.737
the quantized tensor,

97
00:06:10.737 --> 00:06:13.306
the scale, and the group size.

98
00:06:13.306 --> 00:06:16.743
Finally, to have the summary
of the quantization process,

99
00:06:17.377 --> 00:06:20.380
we just need to pass to the plot
quantization error function

100
00:06:20.813 --> 00:06:23.216
the following arguments:

101
00:06:23.216 --> 00:06:26.419
the test tensor, the quantized
tensor and the dequantized tensor.

102
00:06:26.853 --> 00:06:31.224
And as you can see,
if you look at the quantized tensor

103
00:06:31.557 --> 00:06:34.761
you will see that,
for every three elements in each row,

104
00:06:35.161 --> 00:06:38.364
you will have the maximum value, 127.

105
00:06:38.965 --> 00:06:42.135
It shows that we indeed managed
to quantize

106
00:06:42.435 --> 00:06:45.738
every three elements in this matrix
along the rows.

107
00:06:45.905 --> 00:06:49.442
So three elements
here, three here, and so on.

108
00:06:49.742 --> 00:06:51.411
And you have the quantized tensor,

109
00:06:51.411 --> 00:06:52.979
as you can see, on the right.

110
00:06:52.979 --> 00:06:57.979
And you can see also that the quantization
error tensor is very, very low,

111
00:06:58.518 --> 00:07:01.521
and that the dequantized tensor is

112
00:07:01.988 --> 00:07:04.991
practically the same
as the original tensor.

113
00:07:05.057 --> 00:07:09.262
Let's also print the quantization error
using the quantization error function.

114
00:07:09.529 --> 00:07:12.632
And we just need to pass the test tensor
and the dequantized tensor.

115
00:07:13.933 --> 00:07:16.903
And indeed we have a very, very low
quantization error.

116
00:07:16.903 --> 00:07:20.573
Now is a good time to pause the video
and try a couple of things.

117
00:07:20.606 --> 00:07:23.042
You can try to change the test tensor,

118
00:07:23.042 --> 00:07:25.311
or you can also change the group size

119
00:07:25.311 --> 00:07:30.311
to see what the effect of the
group size is on the dequantization process.

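The suggested experiment can be sketched as a self-contained round trip (helper names and the mean-squared-error metric are assumptions, not the notebook's exact code):

```python
import torch

def quantize_per_group(tensor, group_size, dtype=torch.int8):
    # Compact version of the per-group symmetric quantization above.
    q_max = torch.iinfo(dtype).max
    t_shape = tensor.shape
    grouped = tensor.view(-1, group_size)            # rows of group_size
    scale = grouped.abs().amax(dim=1) / q_max        # one scale per group
    q = torch.round(grouped / scale.view(-1, 1)).to(dtype)
    return q.view(t_shape), scale

def dequantize_per_group(q, scale, group_size):
    # Symmetric mode: zero point is zero, so dequantize as q * scale.
    q_shape = q.shape
    return (q.view(-1, group_size).float() * scale.view(-1, 1)).view(q_shape)

torch.manual_seed(0)
test_tensor = torch.randn(6, 6)
for group_size in (2, 3, 6):                         # all divide the row length
    q, s = quantize_per_group(test_tensor, group_size)
    deq = dequantize_per_group(q, s, group_size)
    error = (deq - test_tensor).square().mean()      # mean squared error
    print(f"group size {group_size}: error {error.item():.2e}")
```

Smaller group sizes store more scales but track the local range of the tensor more tightly, which is the trade-off the transcript invites you to explore.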