WEBVTT
X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:144533

1
00:00:02.068 --> 00:00:02.302
Okay.

2
00:00:02.302 --> 00:00:04.337
So we're going to do that in pure PyTorch.

3
00:00:04.337 --> 00:00:08.842
Let's first import torch and then define
the signature of the method pack_weights.

4
00:00:09.275 --> 00:00:14.275
That is going to take the unsigned
int8 tensor and the number of bits.

5
00:00:14.514 --> 00:00:19.514
So the reason why we're using unsigned int
instead of int8 is that, for an int8 tensor,

6
00:00:20.120 --> 00:00:24.090
the first bit is used to determine
the sign of the value.

7
00:00:24.224 --> 00:00:27.427
So, just for simplicity
we're going to use unsigned int

8
00:00:27.961 --> 00:00:30.930
so that we don't have to deal
with the sign bit.

9
00:00:30.964 --> 00:00:33.600
So recall also
what I've said at the beginning.

10
00:00:33.600 --> 00:00:38.600
So for simplicity it's better
to have the shape of the input tensor

11
00:00:39.339 --> 00:00:42.942
being a multiple of eight divided by
the number of bits, which is four here.

12
00:00:43.209 --> 00:00:46.046
So we're just going to add
a small condition here checking

13
00:00:46.046 --> 00:00:49.049
that if the tensor does
not have the right shape,

14
00:00:49.049 --> 00:00:53.720
then we're going to raise a small error
saying that the input shape needs

15
00:00:53.720 --> 00:00:58.058
to be a multiple of the expected number
and print it to the users.

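As a sketch, the signature and the shape check just described might look like this (illustrative names; the rest of the body gets filled in as the walkthrough proceeds):

```python
import torch

def pack_weights(uint8tensor, bits):
    # a 2- or 4-bit value stream only packs cleanly when the input length
    # is a multiple of 8 // bits (four values per byte in the 2-bit case)
    if uint8tensor.shape[0] * bits % 8 != 0:
        raise ValueError(
            f"The input shape needs to be a multiple of {8 // bits} - got {uint8tensor.shape[0]}"
        )
```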
16
00:00:58.625 --> 00:00:59.759
So, now let's get the

17
00:01:00.794 --> 00:01:01.294
number of

18
00:01:01.294 --> 00:01:04.297
values, that is, the expected
number of values of the packed tensor.

19
00:01:04.564 --> 00:01:06.933
So if we come back to our example.

20
00:01:06.933 --> 00:01:10.870
So this of course
depends on the number of bits that you use

21
00:01:10.870 --> 00:01:12.405
to encode your values.

22
00:01:12.405 --> 00:01:15.542
So in case of two bits
we have four values.

23
00:01:15.742 --> 00:01:19.712
So the number of expected packed values
is simply going to be

24
00:01:20.246 --> 00:01:24.651
the shape of your input tensor
multiplied by the number of bits.

25
00:01:24.751 --> 00:01:27.520
So here we have two bits per value.

26
00:01:27.520 --> 00:01:28.621
And then we have four values.

27
00:01:28.621 --> 00:01:32.158
So eight bits divided by eight,

28
00:01:33.493 --> 00:01:36.496
because our packed tensor

29
00:01:36.796 --> 00:01:37.997
is going to be eight bits.

30
00:01:37.997 --> 00:01:42.001
So the number of expected values
is simply going to be the number of input

31
00:01:42.302 --> 00:01:46.106
low-bit values
that you're going to retrieve with dot

32
00:01:46.139 --> 00:01:49.142
shape of zero times the number of bits.

33
00:01:49.142 --> 00:01:51.010
whether you are in 2 or 4 bits,

34
00:01:52.245 --> 00:01:55.248
everything divided by eight.

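In numbers, for the running two-bit example (plain Python, illustrative names):

```python
bits = 2           # bits per low-bit value
input_length = 4   # shape[0] of the unpacked uint8 tensor
# four 2-bit values occupy 8 bits, i.e. exactly one packed uint8 value
num_values = input_length * bits // 8
```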
35
00:01:56.716 --> 00:01:57.550
Perfect.

36
00:01:57.550 --> 00:02:00.553
So let's come back again
to the example that we had before.

37
00:02:00.587 --> 00:02:05.587
So in the case of two bits, each packed weight
will contain four two-bit values.

38
00:02:06.326 --> 00:02:09.729
We're processing
four parameters per packed parameter.

39
00:02:10.096 --> 00:02:13.733
So this is going to be our number
of processing steps.

40
00:02:14.100 --> 00:02:17.370
So we're calling this value num steps.

41
00:02:17.403 --> 00:02:20.373
That is simply going to be
the number of bits

42
00:02:20.373 --> 00:02:23.243
of the packed tensor
divided by the number of bits.

43
00:02:23.243 --> 00:02:25.979
Again in our example this should be

44
00:02:25.979 --> 00:02:30.917
four, because we're processing
four two-bit values per packed tensor.

45
00:02:31.050 --> 00:02:33.887
And we're going to

46
00:02:33.887 --> 00:02:36.723
declare this index

47
00:02:36.723 --> 00:02:38.725
because we're going to have a for loop.

48
00:02:38.725 --> 00:02:43.725
And then we initialize our packed tensor
which should contain num values

49
00:02:44.531 --> 00:02:47.767
and which should have a
dtype of uint8.

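The bookkeeping just described, sketched with the names used so far (a sketch, assuming the two-bit toy example):

```python
import torch

bits = 2
num_values = 1         # one packed byte for the 4-element toy example
num_steps = 8 // bits  # low-bit values processed per packed byte: 4 here
unpacked_idx = 0       # index of the next unpacked value to consume
packed_tensor = torch.zeros((num_values,), dtype=torch.uint8)
```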
50
00:02:48.168 --> 00:02:48.868
Perfect.

51
00:02:48.868 --> 00:02:51.871
Let's first loop on each packed value.

52
00:02:52.172 --> 00:02:56.543
And then for each packed
value we pack num steps,

53
00:02:57.810 --> 00:02:59.345
low bit values.

54
00:02:59.345 --> 00:03:02.982
So we need to loop again here
using a new variable.

55
00:03:03.516 --> 00:03:08.154
And then we're going to use unpacked index
in order to keep track of

56
00:03:08.688 --> 00:03:11.925
which values
we are trying to pack in our algorithm.

57
00:03:12.525 --> 00:03:15.728
So, in the first iteration again

58
00:03:15.728 --> 00:03:18.731
if we consider the tensor
that we had in our example.

59
00:03:18.932 --> 00:03:20.934
So I'm going to write it here.

60
00:03:20.934 --> 00:03:23.937
So 1,0,3,2

61
00:03:24.137 --> 00:03:26.673
which should be in two bit.

62
00:03:26.673 --> 00:03:29.676
And then we have our

63
00:03:29.809 --> 00:03:31.878
so this is uint8 tensor.

64
00:03:31.878 --> 00:03:33.479
So it's encoded in uint8.

65
00:03:33.479 --> 00:03:36.349
And we only want to extract those bits.

66
00:03:36.349 --> 00:03:40.620
And then we have our plain packed tensor
which should only contain

67
00:03:41.254 --> 00:03:43.356
one value in uint8.

68
00:03:43.356 --> 00:03:44.057
All right.

69
00:03:44.057 --> 00:03:46.593
For each num steps here.

70
00:03:46.593 --> 00:03:48.661
So for each,

71
00:03:48.661 --> 00:03:50.463
num step.

72
00:03:50.463 --> 00:03:54.300
So for each two bits, we're going to retrieve
the corresponding value.

73
00:03:54.667 --> 00:03:56.402
So here for example it should be one.

74
00:03:58.404 --> 00:03:58.805
And then,

75
00:03:58.805 --> 00:04:01.941
we're going to perform bitwise
shifting on the left

76
00:04:02.642 --> 00:04:05.645
for this tensor but encoded in eight bits.

77
00:04:06.012 --> 00:04:08.514
So let me try to break it down below.

78
00:04:08.514 --> 00:04:10.416
So here,

79
00:04:10.416 --> 00:04:12.752
this value.

80
00:04:12.752 --> 00:04:14.387
So it's encoded in uint8.

81
00:04:14.387 --> 00:04:17.290
So it should give us this value.

82
00:04:17.290 --> 00:04:18.424
All right.

83
00:04:18.424 --> 00:04:21.661
And then the idea is
that we're going to take this value

84
00:04:22.195 --> 00:04:26.366
shifted on the left by bits times J.

85
00:04:26.766 --> 00:04:30.970
So here since we are on
the first iteration it's going to be zero.

86
00:04:31.170 --> 00:04:33.106
So nothing is going to be applied here.

87
00:04:33.106 --> 00:04:35.375
So no shifting on the left.

88
00:04:35.375 --> 00:04:39.245
So this value should stay like this.

89
00:04:39.979 --> 00:04:40.913
All right.

90
00:04:40.913 --> 00:04:43.916
And then we're going to perform bitwise

91
00:04:44.550 --> 00:04:49.550
or operation on the current packed tensor.

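Putting the steps described so far together, the whole routine might look like this in pure PyTorch (a sketch following the walkthrough; names are the ones used above):

```python
import torch

def pack_weights(uint8tensor, bits):
    # the input length must be a multiple of 8 // bits (4 in the 2-bit case)
    if uint8tensor.shape[0] * bits % 8 != 0:
        raise ValueError(
            f"The input shape needs to be a multiple of {8 // bits} - got {uint8tensor.shape[0]}"
        )

    num_values = uint8tensor.shape[0] * bits // 8  # size of the packed tensor
    num_steps = 8 // bits                          # low-bit values per packed byte
    unpacked_idx = 0
    packed_tensor = torch.zeros((num_values,), dtype=torch.uint8)

    for i in range(num_values):
        for j in range(num_steps):
            # shift the next low-bit value left by bits * j and OR it in
            packed_tensor[i] |= uint8tensor[unpacked_idx] << (bits * j)
            unpacked_idx += 1
    return packed_tensor
```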
92
00:04:49.722 --> 00:04:50.990
So let me explain that.

93
00:04:50.990 --> 00:04:55.028
So in the first iteration this is how
the packed tensor would look like.

94
00:04:55.595 --> 00:04:59.499
And this is how the right side
of the equation would look like.

95
00:04:59.499 --> 00:04:59.932
Here.

96
00:05:00.900 --> 00:05:03.903
And again we want to pack this here.

97
00:05:04.037 --> 00:05:07.340
Then this here, this here, and this here.

98
00:05:07.807 --> 00:05:10.843
So we're going to perform
a bitwise OR operation.

99
00:05:11.177 --> 00:05:14.681
So 0 OR 1 would give us 1,

100
00:05:14.681 --> 00:05:17.684
0 OR 0 gives 0, and so on.

101
00:05:17.850 --> 00:05:20.420
So here after the first iteration
the packed tensor

102
00:05:20.420 --> 00:05:23.423
would exactly look like this. Okay.

103
00:05:23.423 --> 00:05:26.726
And then on the second iteration
then we're going to increment

104
00:05:26.993 --> 00:05:29.996
our unpacked index here.

105
00:05:31.297 --> 00:05:32.265
All right.

106
00:05:32.265 --> 00:05:36.002
And then on the second iteration
we're going to take this tensor

107
00:05:36.002 --> 00:05:39.005
but encoded in uint8.

108
00:05:41.374 --> 00:05:42.942
So now the packed tensor

109
00:05:42.942 --> 00:05:46.179
would look like this
because of the bitwise or operation.

110
00:05:46.713 --> 00:05:51.017
And this time the shifting coefficient
is going to be two.

111
00:05:51.784 --> 00:05:53.753
So you're going to take this tensor.

112
00:05:53.753 --> 00:05:55.088
It's 000.

113
00:05:55.088 --> 00:05:57.357
Shift it on the left by two.

114
00:05:57.357 --> 00:05:59.792
So again it's going to be 000.

115
00:05:59.792 --> 00:06:02.995
And then you're going to perform
a bitwise OR operation

116
00:06:02.995 --> 00:06:06.899
between the shifted tensor
and the packed tensor.

117
00:06:07.300 --> 00:06:10.703
And it's still going to be 00000001.

118
00:06:11.070 --> 00:06:14.941
That's fine because we pack
the first value here in two bits.

119
00:06:15.241 --> 00:06:17.143
We pack the second
value here in two bits,

120
00:06:18.344 --> 00:06:20.279
and then
we're ready to move on to the next one.

121
00:06:20.279 --> 00:06:23.282
And then on the next iteration

122
00:06:23.383 --> 00:06:25.685
we're going to have this tensor.

123
00:06:25.685 --> 00:06:28.287
Okay. Again, encoded in uint8.

124
00:06:28.287 --> 00:06:30.790
We're going to shift it on the left
this time by.

125
00:06:30.790 --> 00:06:32.725
So J is going to be equal to two.

126
00:06:32.725 --> 00:06:35.061
So two times two is four.

127
00:06:35.061 --> 00:06:37.330
So we're going to shift that by four bits.

128
00:06:37.330 --> 00:06:39.499
And it's going to look like this.

129
00:06:39.499 --> 00:06:42.602
And then you're going to perform
bitwise or operation.

130
00:06:42.935 --> 00:06:45.872
And the new packed tensor would look like

131
00:06:45.872 --> 00:06:48.341
this. Perfect.

132
00:06:48.341 --> 00:06:50.676
And then on the last iteration
you'll do the same thing

133
00:06:50.676 --> 00:06:54.180
but this time shifting by six on the left.

134
00:06:54.747 --> 00:06:57.183
So one zero would be here.

135
00:06:57.183 --> 00:07:01.921
And then bitwise or at the very end
you would end up like this.

136
00:07:02.422 --> 00:07:07.059
So the final packed tensor
would look theoretically like this.

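The four iterations just described can be traced bit by bit in plain Python (values and shift amounts taken from the example):

```python
packed = 0         # the packed byte starts at 0b00000000
packed |= 1 << 0   # pack 1 (0b01): byte is now 0b00000001
packed |= 0 << 2   # pack 0 (0b00): byte stays  0b00000001
packed |= 3 << 4   # pack 3 (0b11): byte is now 0b00110001
packed |= 2 << 6   # pack 2 (0b10): byte is now 0b10110001
```

0b10110001 is 177 in decimal, which is the value the method produces on the toy example.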
137
00:07:11.664 --> 00:07:12.632
Perfect.

138
00:07:12.632 --> 00:07:12.932
Yeah.

139
00:07:12.932 --> 00:07:16.335
Let's try out quickly
our methods and test it.

140
00:07:16.669 --> 00:07:19.472
And we're going to test it on our toy
example.

141
00:07:19.472 --> 00:07:23.276
Let's say our unpacked tensor is encoded
as follows:

142
00:07:23.776 --> 00:07:26.345
1, 0, 3, 2.

143
00:07:26.345 --> 00:07:28.981
Again, everything encoded in two bits.

144
00:07:28.981 --> 00:07:32.218
And yeah let's pack the weights
to see if it works.

145
00:07:32.452 --> 00:07:33.152
Perfect.

146
00:07:33.152 --> 00:07:36.222
So 177 should be encoded

147
00:07:36.856 --> 00:07:39.459
exactly like this in uint8.

148
00:07:39.459 --> 00:07:41.661
You can try that out
to verify the result.

149
00:07:41.661 --> 00:07:43.930
But yeah that should be
the correct result.

150
00:07:43.930 --> 00:07:46.899
Perfect. So yeah.

151
00:07:46.899 --> 00:07:48.701
You can pause the video

152
00:07:48.701 --> 00:07:51.571
and maybe try to understand
this whole logic.

153
00:07:51.571 --> 00:07:53.406
Try also maybe to enhance it a bit.

154
00:07:53.406 --> 00:07:54.841
Optimize it.

155
00:07:54.841 --> 00:07:58.611
You can also try out with four bits
and yeah, maybe you can also try out

156
00:07:58.611 --> 00:08:00.213
different combinations.

157
00:08:00.213 --> 00:08:03.216
For example, you can also battle test

158
00:08:03.249 --> 00:08:07.053
the method a bit if we add these values.

159
00:08:07.353 --> 00:08:09.388
So threes, each of which is one one in binary.

160
00:08:09.388 --> 00:08:12.391
So we should have ones here everywhere.

161
00:08:12.625 --> 00:08:15.561
And the second packed value should be 255

162
00:08:16.562 --> 00:08:17.363
which is the case.
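For the extended test, the same shift-and-OR logic applied per byte gives both expected values (`pack_byte` is a hypothetical plain-Python helper for illustration, not part of the lesson's code):

```python
def pack_byte(vals, bits=2):
    # hypothetical helper: OR each low-bit value, shifted left by bits * j
    byte = 0
    for j, v in enumerate(vals):
        byte |= v << (bits * j)
    return byte

# first four values 1, 0, 3, 2 pack to 177; four threes (0b11 each) to 255
first = pack_byte([1, 0, 3, 2])
second = pack_byte([3, 3, 3, 3])
```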