WEBVTT X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533 1 00:00:02.002 --> 00:00:04.637 Now let's move on to the per channel quantization. 2 00:00:04.637 --> 00:00:08.808 We need to store the scales and the zero point for each row 3 00:00:08.842 --> 00:00:12.545 if we decide to quantize along the rows, and we need to store them 4 00:00:12.545 --> 00:00:15.682 along each column if we decide to quantize along the columns. 5 00:00:15.682 --> 00:00:19.753 The memory needed to store all these linear parameters is pretty small. 6 00:00:19.886 --> 00:00:24.190 We usually use per channel quantization when quantizing models in eight-bit. 7 00:00:24.457 --> 00:00:27.460 You will see that in the next lesson. 8 00:00:27.594 --> 00:00:30.430 Now let's code the per channel quantization. 9 00:00:30.430 --> 00:00:32.098 And don't worry about this slide. 10 00:00:32.098 --> 00:00:33.900 We'll do it in the notebook. 11 00:00:33.900 --> 00:00:36.569 So let's code the per channel quantization. 12 00:00:36.569 --> 00:00:39.406 To simplify the work, we will just restrict 13 00:00:39.406 --> 00:00:42.542 ourselves to the symmetric mode of linear quantization. 14 00:00:42.742 --> 00:00:46.946 So the function will be called linear q symmetric per channel. 15 00:00:46.980 --> 00:00:51.084 We expect this function to take as arguments the tensor; 16 00:00:51.084 --> 00:00:52.552 the dimension, 17 00:00:52.552 --> 00:00:55.321 that is, whether we want to quantize along the rows 18 00:00:55.321 --> 00:00:58.591 or the columns if we are talking about a 2D matrix; 19 00:00:58.725 --> 00:01:02.896 and the dtype, whose default value we set to torch.int8. 20 00:01:04.297 --> 00:01:07.634 At the end we expect to get the quantized tensor and the scale. 21 00:01:07.667 --> 00:01:12.172 We don't need the zero point, since we are doing it in the symmetric mode.
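As a rough sketch of what the symmetric mode implies (assuming, as described in the previous lesson, that the scale maps the largest absolute value in the tensor onto the largest representable integer):

```python
import torch

def get_q_scale_symmetric(tensor, dtype=torch.int8):
    # Symmetric mode: the zero point is fixed at 0, so only a scale is needed.
    # The scale maps the largest magnitude in the tensor onto q_max
    # (127 for int8), so the extreme value quantizes exactly to q_max.
    r_max = tensor.abs().max().item()
    q_max = torch.iinfo(dtype).max
    return r_max / q_max
```

This is a reconstruction from the transcript's description, not the course's exact code.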
22 00:01:12.672 --> 00:01:15.241 So let's define a test tensor 23 00:01:15.241 --> 00:01:18.244 so that we can work through the code. 24 00:01:19.946 --> 00:01:23.283 We will use the test tensor that we defined previously. 25 00:01:23.683 --> 00:01:27.821 The first step is to know how big the scale tensor will be. 26 00:01:28.121 --> 00:01:33.121 Since we are doing per channel, we will have more than one scale, 27 00:01:33.193 --> 00:01:36.796 and we need to create a tensor to store these values. 28 00:01:36.963 --> 00:01:40.867 So the shape of the scale tensor would be equal 29 00:01:40.867 --> 00:01:43.870 to that: 30 00:01:45.371 --> 00:01:46.706 tensor 31 00:01:46.706 --> 00:01:49.709 .shape 32 00:01:53.847 --> 00:01:58.847 at a specific dimension. We need to set the dimension to be equal to zero 33 00:01:59.085 --> 00:02:02.255 if we want to quantize along the rows. 34 00:02:02.589 --> 00:02:04.858 Otherwise we need to set it to one 35 00:02:04.858 --> 00:02:07.026 if we want to quantize along the columns. 36 00:02:08.695 --> 00:02:11.297 So let's check the output dimension. 37 00:02:11.297 --> 00:02:12.799 As you can see, we get three. 38 00:02:12.799 --> 00:02:17.637 And indeed we need three scale values: one for these numbers, 39 00:02:17.637 --> 00:02:20.507 this one, and the last one is this one. 40 00:02:20.507 --> 00:02:24.677 Now we can create the scale tensor using torch.zeros. 41 00:02:24.777 --> 00:02:28.915 This will create a tensor with the shape output dimension, 42 00:02:29.182 --> 00:02:32.185 and each element will be equal to zero. 43 00:02:32.285 --> 00:02:35.288 Let's have a look. 44 00:02:36.523 --> 00:02:38.791 And indeed we get that. 45 00:02:38.791 --> 00:02:41.995 Now what we need to do is to iterate 46 00:02:42.428 --> 00:02:45.431 through each one of these rows 47 00:02:46.633 --> 00:02:49.669 and calculate the scale for each one of them. 48 00:02:50.637 --> 00:02:54.541 To do that, we will loop over the output dimension.
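The steps just described, sizing the scale tensor and looping over the rows, might look like the following sketch. The test tensor values here are illustrative, and get_q_scale_symmetric is the per-tensor helper from the previous lesson (reconstructed here as an assumption):

```python
import torch

def get_q_scale_symmetric(tensor, dtype=torch.int8):
    # assumed helper from the previous lesson: map max|r| onto q_max
    return tensor.abs().max().item() / torch.iinfo(dtype).max

# illustrative 3x3 test tensor
test_tensor = torch.tensor([[191.6, -13.5, 728.6],
                            [92.14, 295.5, -184.0],
                            [0.0,   684.6,  245.5]])

dim = 0                               # 0 -> quantize along the rows
output_dim = test_tensor.shape[dim]   # 3, so we need three scales
scale = torch.zeros(output_dim)       # placeholder for the three scales

for index in range(output_dim):
    sub_tensor = test_tensor.select(dim, index)  # row number `index`
    scale[index] = get_q_scale_symmetric(sub_tensor)
```

With dim=1 instead, the same loop would produce one scale per column.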
49 00:02:54.874 --> 00:02:57.243 Now we need to get a sub row, 50 00:02:57.243 --> 00:03:00.547 for example the first row, the second row or the third row. 51 00:03:00.580 --> 00:03:03.716 To do that we will use the select method, 52 00:03:04.651 --> 00:03:07.587 and we need to set two arguments: 53 00:03:07.587 --> 00:03:11.658 first, the dimension, and second, the index. 54 00:03:13.226 --> 00:03:17.230 And now, just to be sure, let's check out what the sub tensor looks like. 55 00:03:17.297 --> 00:03:20.300 We should have a tensor for each row. 56 00:03:20.967 --> 00:03:22.802 And indeed we were able 57 00:03:22.802 --> 00:03:25.805 to extract each row in a tensor. 58 00:03:26.372 --> 00:03:31.211 Now that we managed to get the sub tensor, all we need to do is to apply 59 00:03:31.211 --> 00:03:35.148 the get q scale symmetric function to that sub tensor 60 00:03:35.148 --> 00:03:38.318 in order to get the scale related to that row 61 00:03:38.885 --> 00:03:41.888 and store it inside the scale tensor. 62 00:03:45.024 --> 00:03:46.326 So we need to set, 63 00:03:46.326 --> 00:03:49.329 in the index position of the scale tensor, 64 00:03:49.495 --> 00:03:52.498 the scale for that particular sub tensor. 65 00:03:52.832 --> 00:03:57.070 To do that, we will use the get q scale symmetric function and we just pass 66 00:03:57.070 --> 00:03:58.104 the sub tensor. 67 00:03:58.104 --> 00:04:00.873 Let's check now what the scale looks like. 68 00:04:02.075 --> 00:04:03.142 We did manage to 69 00:04:03.142 --> 00:04:06.646 store the scales related to each row inside this tensor. 70 00:04:07.046 --> 00:04:11.317 Now that we have stored all scales, we need to do a little bit of processing 71 00:04:11.317 --> 00:04:16.317 in order to reshape the scale so that when we divide the original tensor 72 00:04:16.422 --> 00:04:20.927 by the scale tensor, each row is divided by the correct scale.
73 00:04:21.327 --> 00:04:24.998 To do that, we define the shape that the scale tensor should have. 74 00:04:26.332 --> 00:04:29.335 Let's have a look at this scale shape. 75 00:04:30.570 --> 00:04:32.739 It's full of ones. 76 00:04:32.739 --> 00:04:35.742 Then we need to set the scale shape 77 00:04:36.009 --> 00:04:40.513 at index dim to be equal to minus one, which will give us that. 78 00:04:40.947 --> 00:04:45.118 And the last thing we need to do is to reshape the scale using the view 79 00:04:45.118 --> 00:04:48.121 method, using the scale shape that we just defined. 80 00:04:49.522 --> 00:04:51.758 And we get the following scale. 81 00:04:51.758 --> 00:04:54.927 And this is the scale we need in order to be able 82 00:04:54.927 --> 00:04:59.465 to divide the original tensor by the scale tensor, 83 00:04:59.499 --> 00:05:04.037 so that each row is divided by its own scale value. 84 00:05:04.170 --> 00:05:07.707 I know this is a bit complex, since it involves how to divide tensors 85 00:05:07.774 --> 00:05:09.342 by tensors in PyTorch. 86 00:05:09.342 --> 00:05:11.244 Let's have a look at an example 87 00:05:11.244 --> 00:05:16.244 in order to understand how view works and how to divide a tensor by a tensor 88 00:05:16.549 --> 00:05:21.521 in such a way that you divide each row or each column. 89 00:05:21.888 --> 00:05:24.957 Let's say we have the following matrix. 90 00:05:27.627 --> 00:05:29.329 And we have the following scale, 91 00:05:29.329 --> 00:05:32.332 just like in the previous example. 92 00:05:32.832 --> 00:05:35.601 The shape of the scale is three. 93 00:05:35.601 --> 00:05:40.601 We can reshape that tensor in such a way that the first dimension is of size one, 94 00:05:41.007 --> 00:05:44.010 and the second dimension can contain the rest. 95 00:05:44.277 --> 00:05:46.579 To do that, we can use the view function. 96 00:05:46.579 --> 00:05:50.283 The shape of the scale is three.
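The view-and-broadcast mechanics being described can be sketched as follows, using the values from this example (a 3x3 matrix divided row-wise by 1, 5, 10, and column-wise likewise):

```python
import torch

m = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
s = torch.tensor([1., 5., 10.])   # shape: (3,)

# view can infer one dimension when you pass -1 in its place
s_row = s.view(3, 1)    # shape (3, 1): one scale per ROW
s_col = s.view(1, -1)   # shape (1, 3): one scale per COLUMN

# dividing by a (3, 1) tensor divides each row by its own scale
by_rows = m / s_row     # row 0 untouched, row 1 / 5, row 2 / 10

# dividing by a (1, 3) tensor divides each column by its own scale
by_cols = m / s_col     # column 0 untouched, column 1 / 5, column 2 / 10
```

This is PyTorch's standard broadcasting: the size-1 dimension of the scale is stretched to match the matrix.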
97 00:05:51.150 --> 00:05:55.254 We can reshape that tensor using the view function. 98 00:05:55.888 --> 00:05:58.991 For example, we can reshape it so that the first dimension 99 00:05:58.991 --> 00:06:01.994 is one and the second dimension is three. 100 00:06:02.562 --> 00:06:06.532 And as expected, we get a tensor of size one by three. 101 00:06:06.632 --> 00:06:11.170 An alternative way to do that is just to replace the three by minus one. 102 00:06:11.304 --> 00:06:15.775 What this does is that view will be able to find the right shape 103 00:06:16.209 --> 00:06:18.778 where you put the minus one. 104 00:06:18.778 --> 00:06:23.416 You can also reshape s so that the first dimension 105 00:06:23.416 --> 00:06:28.416 will end up being three, and the last dimension one. 106 00:06:30.356 --> 00:06:31.324 By doing that, we get this shape. 107 00:06:31.324 --> 00:06:35.595 Now let's try to divide the matrix m along the rows. 108 00:06:35.928 --> 00:06:40.928 So, the scale we need in order to divide each row is this one. 109 00:06:42.101 --> 00:06:44.837 As you can see, the scale shape is the following: 110 00:06:44.837 --> 00:06:49.837 we have three as the first dimension and one as the second dimension. 111 00:06:50.877 --> 00:06:53.880 And let's perform the division. 112 00:06:54.647 --> 00:06:56.048 And as you can see, 113 00:06:56.048 --> 00:06:59.719 we managed to divide along the rows. 114 00:07:00.019 --> 00:07:04.357 You can see that this row was untouched since it's always divided by one. 115 00:07:04.557 --> 00:07:07.560 The second one was divided by five, 116 00:07:07.560 --> 00:07:11.130 and the last one, the third one, was divided by ten. 117 00:07:11.531 --> 00:07:14.534 If we use the following scale instead, 118 00:07:17.236 --> 00:07:20.239 with the following shape,
119 00:07:22.041 --> 00:07:23.810 one by three, 120 00:07:23.810 --> 00:07:28.014 and we divide the matrix by this specific scale, we see that 121 00:07:28.014 --> 00:07:32.018 in this case we divided each column by the scales. 122 00:07:32.018 --> 00:07:34.787 So here, as you can see, this column is untouched. 123 00:07:34.787 --> 00:07:36.355 We have one, four and seven. 124 00:07:36.355 --> 00:07:40.760 The second column was divided by five and the last column was divided by ten. 125 00:07:40.827 --> 00:07:43.796 Now let's go back to quantizing our tensor. 126 00:07:43.796 --> 00:07:47.800 If you remember well, the scale that we got at the end was the following. 127 00:07:47.800 --> 00:07:48.801 And if we check 128 00:07:50.102 --> 00:07:52.972 the shape of the scale, 129 00:07:52.972 --> 00:07:56.976 this is the right shape for the scale in order to quantize each row. 130 00:07:57.176 --> 00:08:00.713 Now all we need to do is to quantize 131 00:08:00.713 --> 00:08:03.716 the tensor by 132 00:08:03.749 --> 00:08:06.752 using the linear q with scale and zero point 133 00:08:07.653 --> 00:08:10.256 function that we coded in the previous lesson. 134 00:08:10.256 --> 00:08:13.226 And we just need to pass the test tensor, the scale, 135 00:08:13.626 --> 00:08:16.229 and the zero point, which should be equal to zero, 136 00:08:16.229 --> 00:08:19.232 since we are doing symmetric quantization. 137 00:08:21.634 --> 00:08:24.904 And as you can see, we end up with the following quantized tensor. 138 00:08:25.204 --> 00:08:27.940 Now let's put everything we did in a function 139 00:08:27.940 --> 00:08:30.943 called linear q symmetric per channel. 140 00:08:30.977 --> 00:08:33.980 Okay. 141 00:08:33.980 --> 00:08:36.983 As you can see, here we get the output dimension. 142 00:08:37.216 --> 00:08:42.216 We create the scale tensor with the output dimension shape. 143 00:08:42.688 --> 00:08:44.690 We iterate through the output dimension.
144 00:08:44.690 --> 00:08:47.693 And for each index we get the sub tensor, 145 00:08:47.693 --> 00:08:51.063 and we store the scale in the index position. 146 00:08:51.531 --> 00:08:54.066 Then we reshape the scale 147 00:08:55.067 --> 00:08:55.968 here. 148 00:08:55.968 --> 00:08:59.171 Lastly, we get the quantized tensor using the linear q with 149 00:08:59.238 --> 00:09:01.407 scale and zero point function. 150 00:09:01.407 --> 00:09:02.041 And that's it. 151 00:09:02.041 --> 00:09:04.911 We get the quantized tensor and the scale. 152 00:09:04.911 --> 00:09:07.914 Now that we have our function, let's check 153 00:09:07.914 --> 00:09:11.918 if we were indeed able to quantize along a specific dimension. 154 00:09:12.451 --> 00:09:16.856 So we reuse the test tensor that we defined earlier. 155 00:09:17.056 --> 00:09:21.627 And this time we will quantize along the first dimension 156 00:09:21.627 --> 00:09:23.296 and the second dimension. 157 00:09:23.296 --> 00:09:28.034 So we'll have the quantized tensor zero and the scale zero. 158 00:09:28.034 --> 00:09:33.034 We get that by using the linear q symmetric per channel function. 159 00:09:33.873 --> 00:09:36.876 And we need to pass the test tensor. 160 00:09:38.444 --> 00:09:40.012 And we need to specify 161 00:09:40.012 --> 00:09:43.015 that the dimension that we are quantizing is zero. 162 00:09:43.049 --> 00:09:46.619 Let's do the same for the other dimension. 163 00:09:48.821 --> 00:09:50.523 So we'll call it 164 00:09:50.523 --> 00:09:53.526 quantized tensor underscore one and scale underscore one. 165 00:09:53.960 --> 00:09:57.830 To get the summary, we also need to dequantize each tensor. 166 00:09:58.064 --> 00:10:01.334 So let's first do the case where the dimension is equal to zero. 167 00:10:01.434 --> 00:10:04.103 We have the dequantized tensor underscore zero, 168 00:10:05.404 --> 00:10:08.107 which is equal to linear dequantization.
169 00:10:08.107 --> 00:10:10.843 And we need to specify 170 00:10:10.843 --> 00:10:13.846 the quantized tensor underscore zero, 171 00:10:13.913 --> 00:10:16.282 its scale, and zero, 172 00:10:16.282 --> 00:10:18.584 since the zero point is equal to zero. 173 00:10:18.584 --> 00:10:20.519 Now we have everything to get the summary 174 00:10:20.519 --> 00:10:23.589 using the plot quantization error function. 175 00:10:24.490 --> 00:10:25.725 And that's it. 176 00:10:25.725 --> 00:10:28.728 As you can see, we indeed quantized along the rows. 177 00:10:28.861 --> 00:10:32.932 You can see that we have the maximum quantized value, 178 00:10:33.199 --> 00:10:35.768 127, here, here and here. 179 00:10:35.768 --> 00:10:38.504 And the quantization was pretty good. 180 00:10:38.504 --> 00:10:41.907 As you can see, the original tensor is pretty close to the dequantized tensor, 181 00:10:42.375 --> 00:10:45.878 and the quantization error tensor is not so bad. 182 00:10:46.512 --> 00:10:50.616 Let's have a better metric by computing the quantization error. 183 00:10:51.617 --> 00:10:55.421 And we get a quantization error of 1.8. 184 00:10:55.554 --> 00:10:59.158 If we remember well, when we did the per tensor symmetric linear 185 00:10:59.158 --> 00:11:03.729 quantization, we had a quantization error around 2.5. 186 00:11:04.230 --> 00:11:07.533 Now let's check what happens if we quantize along the columns. 187 00:11:08.067 --> 00:11:10.903 We'll do the same thing as we did before, 188 00:11:10.903 --> 00:11:14.640 but with the quantized tensor underscore one. 189 00:11:15.207 --> 00:11:18.978 So as you can see here, we define the dequantized tensor 190 00:11:18.978 --> 00:11:22.581 underscore one by using the linear dequantization. 191 00:11:22.715 --> 00:11:26.485 And we pass the quantized tensor underscore one and scale one. 192 00:11:26.786 --> 00:11:29.789 And then we plot the quantization error.
193 00:11:29.955 --> 00:11:31.791 This will give us the following summary. 194 00:11:31.791 --> 00:11:36.362 And as you can see here, we indeed managed to quantize along the columns. 195 00:11:36.629 --> 00:11:39.665 This time the quantization error is even lower. 196 00:11:39.899 --> 00:11:43.202 You see that we get a lower quantization error in both cases 197 00:11:43.202 --> 00:11:46.205 compared to per tensor quantization. 198 00:11:46.205 --> 00:11:50.142 This is because an outlier value will only impact the channel 199 00:11:50.142 --> 00:11:53.212 it is in, instead of the entire tensor.
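Putting the whole lesson together: here is a sketch of the complete per channel routine and a check of the outlier claim. The helper functions are reconstructed from how the transcript describes them, so treat the exact names and signatures as assumptions rather than the course's notebook code.

```python
import torch

def get_q_scale_symmetric(tensor, dtype=torch.int8):
    # symmetric mode: zero_point = 0, max|r| maps onto q_max (127 for int8)
    return tensor.abs().max().item() / torch.iinfo(dtype).max

def linear_q_with_scale_and_zero_point(tensor, scale, zero_point, dtype=torch.int8):
    # q = round(r / s + z), clamped to the dtype's representable range
    rounded = torch.round(tensor / scale + zero_point)
    info = torch.iinfo(dtype)
    return rounded.clamp(info.min, info.max).to(dtype)

def linear_dequantization(quantized_tensor, scale, zero_point):
    # r = s * (q - z)
    return scale * (quantized_tensor.float() - zero_point)

def linear_q_symmetric_per_channel(tensor, dim, dtype=torch.int8):
    output_dim = tensor.shape[dim]          # one scale per channel
    scale = torch.zeros(output_dim)
    for index in range(output_dim):
        sub_tensor = tensor.select(dim, index)
        scale[index] = get_q_scale_symmetric(sub_tensor, dtype=dtype)
    # reshape the scale so broadcasting lines up along `dim`
    scale_shape = [1] * tensor.dim()
    scale_shape[dim] = -1
    scale = scale.view(scale_shape)
    quantized_tensor = linear_q_with_scale_and_zero_point(tensor, scale, 0, dtype=dtype)
    return quantized_tensor, scale

# a tensor with an outlier in one row
t = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 1000.0]])

# per tensor: a single scale, stretched by the outlier
s_t = get_q_scale_symmetric(t)
q_t = linear_q_with_scale_and_zero_point(t, s_t, 0)
err_per_tensor = (t - linear_dequantization(q_t, s_t, 0)).square().mean()

# per channel along the rows: the outlier only inflates its own row's scale
q_c, s_c = linear_q_symmetric_per_channel(t, dim=0)
err_per_channel = (t - linear_dequantization(q_c, s_c, 0)).square().mean()
# err_per_channel comes out lower than err_per_tensor
```

The small row's values survive per channel quantization almost intact, while per tensor quantization rounds them all toward zero, which is exactly the effect described above.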