srijandas07 committed
Commit 1c990f3 · verified · Parent(s): f01323c

Upload 52 files

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50):
  1. README.md +221 -3
  2. clip/__init__.py +1 -0
  3. clip/__pycache__/__init__.cpython-37.pyc +0 -0
  4. clip/__pycache__/clip.cpython-37.pyc +0 -0
  5. clip/__pycache__/model.cpython-37.pyc +0 -0
  6. clip/__pycache__/simple_tokenizer.cpython-37.pyc +0 -0
  7. clip/bpe_simple_vocab_16e6.txt.gz +3 -0
  8. clip/clip.py +219 -0
  9. clip/model.py +521 -0
  10. clip/simple_tokenizer.py +132 -0
  11. configs/config_challenge_test.yaml +26 -0
  12. configs/config_challenge_train.yaml +23 -0
  13. crop_person.py +162 -0
  14. datasets/__init__.py +0 -0
  15. datasets/__pycache__/__init__.cpython-37.pyc +0 -0
  16. datasets/__pycache__/blending.cpython-37.pyc +0 -0
  17. datasets/__pycache__/build.cpython-37.pyc +0 -0
  18. datasets/__pycache__/pipeline.cpython-37.pyc +0 -0
  19. datasets/__pycache__/rand_augment.cpython-37.pyc +0 -0
  20. datasets/blending.py +214 -0
  21. datasets/build.py +316 -0
  22. datasets/pipeline.py +2362 -0
  23. datasets/rand_augment.py +532 -0
  24. labels/challenge.csv +46 -0
  25. labels/challenge_composite.csv +7 -0
  26. labels/etri_label.csv +56 -0
  27. labels/sh_cs_label.csv +32 -0
  28. main_challenge.py +292 -0
  29. main_train.py +281 -0
  30. merge_results.py +43 -0
  31. requirements.txt +12 -0
  32. script_crop.sh +1 -0
  33. script_test.sh +4 -0
  34. script_train.sh +3 -0
  35. trainers/__pycache__/vificlip.cpython-37.pyc +0 -0
  36. trainers/vificlip.py +248 -0
  37. utils/__init__.py +0 -0
  38. utils/__pycache__/__init__.cpython-37.pyc +0 -0
  39. utils/__pycache__/config.cpython-37.pyc +0 -0
  40. utils/__pycache__/logger.cpython-37.pyc +0 -0
  41. utils/__pycache__/optimizer.cpython-37.pyc +0 -0
  42. utils/__pycache__/tools.cpython-37.pyc +0 -0
  43. utils/config.py +140 -0
  44. utils/logger.py +34 -0
  45. utils/optimizer.py +63 -0
  46. utils/tools.py +123 -0
  47. val.csv +4616 -0
  48. work_dirs/.DS_Store +0 -0
  49. work_dirs/challenge_baseline_new/.DS_Store +0 -0
  50. work_dirs/challenge_baseline_new/best.pth +3 -0
README.md CHANGED
@@ -1,3 +1,221 @@
- ---
- license: cc
- ---
# Fine-tuned CLIP models are efficient video learners [CVPR 2023]

> [**Fine-tuned CLIP models are efficient video learners**](https://arxiv.org/abs/2212.03640)<br>
> [Hanoona Rasheed*](https://scholar.google.com/citations?user=yhDdEuEAAAAJ&hl=en&authuser=1&oi=sra), [Muhammad Uzair Khattak*](https://scholar.google.com/citations?user=M6fFL4gAAAAJ&hl=en&authuser=1), [Muhammad Maaz](https://scholar.google.com/citations?user=vTy9Te8AAAAJ&hl=en&authuser=1&oi=sra), [Salman Khan](https://salman-h-khan.github.io/), [Fahad Shahbaz Khan](https://scholar.google.es/citations?user=zvaeYnUAAAAJ&hl=en)

*Equally contributing first authors

[![Website](https://img.shields.io/badge/Project-Website-87CEEB)](https://muzairkhattak.github.io/ViFi-CLIP/)
[![paper](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2212.03640)
[![video](https://img.shields.io/badge/Video-Presentation-F9D371)](https://www.youtube.com/watch?v=uqPLPIyWBb0)
[![slides](https://img.shields.io/badge/Presentation-Slides-B762C1)](https://drive.google.com/file/d/1_CITKY9u_Fh77iqQDP2TrcbVD5_61ArT/view?usp=sharing)
[![Jupyter Notebook](https://img.shields.io/badge/jupyter-%23FA0F00.svg?style=for-the-badge&logo=jupyter&logoColor=white)](https://github.com/muzairkhattak/ViFi-CLIP/blob/main/ViFi-CLIP_Inference_custom_video.ipynb)

Official implementation of the paper "[Fine-tuned CLIP models are efficient video learners](https://arxiv.org/abs/2212.03640)".
<hr />

[//]: # ([![PWC]&#40;https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/maple-multi-modal-prompt-learning/prompt-engineering-on-imagenet&#41;]&#40;https://paperswithcode.com/sota/prompt-engineering-on-imagenet?p=maple-multi-modal-prompt-learning&#41;)

[//]: # ([![PWC]&#40;https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/maple-multi-modal-prompt-learning/prompt-engineering-on-sun397&#41;]&#40;https://paperswithcode.com/sota/prompt-engineering-on-sun397?p=maple-multi-modal-prompt-learning&#41;)

[//]: # ([![PWC]&#40;https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/maple-multi-modal-prompt-learning/prompt-engineering-on-eurosat&#41;]&#40;https://paperswithcode.com/sota/prompt-engineering-on-eurosat?p=maple-multi-modal-prompt-learning&#41;)

[//]: # ([![PWC]&#40;https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/maple-multi-modal-prompt-learning/prompt-engineering-on-ucf101&#41;]&#40;https://paperswithcode.com/sota/prompt-engineering-on-ucf101?p=maple-multi-modal-prompt-learning&#41;)

[//]: # ([![PWC]&#40;https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/maple-multi-modal-prompt-learning/prompt-engineering-on-fgvc-aircraft&#41;]&#40;https://paperswithcode.com/sota/prompt-engineering-on-fgvc-aircraft?p=maple-multi-modal-prompt-learning&#41;)

[//]: # ()
[//]: # ()
[//]: # (<hr />)

# :rocket: News
* **(Nov 24, 2023)**
  * Interactive notebook released. Run inference with ViFi-CLIP on custom videos without significant installation dependencies!
* **(Feb 28, 2023)**
  * Paper accepted at CVPR 2023 :tada:
* **(Dec 6, 2022)**
  * Training and evaluation code for [ViFi-CLIP](https://arxiv.org/abs/2212.03640), along with pretrained models, is released.

<hr />

## Highlights

![main figure](docs/main_figure.png)
<p align="justify"> This work explores the capability of a simple baseline called ViFi-CLIP (Video Fine-tuned CLIP)
for adapting image-pretrained CLIP to the video domain. The figure compares the zero-shot performance of vanilla CLIP
and several of its variants adapted for videos (trained on Kinetics-400, evaluated on UCF-101 and HMDB-51).
The t-SNE visualizations of video embeddings obtained from ViFi-CLIP (4th col.) are compared with embeddings
from vanilla CLIP (1st col.), individually tuned CLIP text (2nd col.) and image (3rd col.) encoders on videos,
and the recent state-of-the-art work XCLIP (last col.) (∆ represents the difference over XCLIP). The embeddings of
ViFi-CLIP are better separated, indicating that simple fine-tuning of CLIP is sufficient to learn suitable
video-specific inductive biases and can perform competitively with more complex approaches that have dedicated components
designed to model temporal information in videos. </p>

> **<p align="justify"> Abstract:** *Large-scale multi-modal training with image-text pairs imparts
> strong generalization to the CLIP model. Since training on a similar scale for videos is infeasible,
> recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this
> pursuit, new parametric modules are added to learn temporal information and inter-frame relationships,
> which require meticulous design efforts. Furthermore, when the resulting models are learned on videos,
> they tend to overfit on the given task distribution and lack generalization. This begs the
> following question: How to effectively transfer image-level CLIP representations to videos? In this work,
> we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the
> domain gap from images to videos. Our qualitative analysis illustrates that the frame-level processing
> from the CLIP image-encoder followed by feature pooling and similarity matching with corresponding text
> embeddings helps in implicitly modeling the temporal cues within ViFi-CLIP. Such fine-tuning helps
> the model to focus on scene dynamics, moving objects and inter-object relationships. For low-data
> regimes where full fine-tuning is not viable, we propose a `bridge and prompt' approach that first
> uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to
> adapt CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot,
> base-to-novel generalization, few-shot and fully-supervised settings across five video benchmarks.* </p>

## Main Contributions

1) **ViFi-CLIP:** We formulate and show the significance of an often-neglected but simple baseline for transferring the image-based CLIP model to the video domain. ViFi-CLIP (Video Fine-tuned CLIP) shows that simple fine-tuning of CLIP is sufficient to learn suitable video-specific inductive biases and can perform competitively with more complex approaches that have dedicated components designed to model temporal information in videos.
2) **Base-to-novel generalization benchmark:** We introduce a base-to-novel generalization benchmark for the video domain to evaluate the generalization ability of models for video action recognition.
3) **Bridge and Prompt approach:** We show the effectiveness of our proposed ‘bridge and prompt’ approach, which first bridges the modality gap through fine-tuning and then performs prompt learning in both the visual and language branches of the CLIP model, for low-data regimes.

# Model Zoo
NOTE: All models in the experiments below use the publicly available ViT-B/16-based CLIP model. The trained model weights for each experiment are provided in the tables below.

### Zero-shot results
All models are trained on Kinetics-400 and then evaluated directly on downstream datasets. An input of `32x224` denotes 32 frames at 224×224 spatial resolution.

| Name (configs) | Input | HMDB-51 | UCF-101 | Kinetics-600 | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/zero_shot/train/k400/16_16_image_tuned_clip.yaml) | 32x224 | 49.0 | 72.9 | 62.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdA6n7TCQEFAse5X1g1I08AByLCWHM69axTyK9OyVZy86Q?e=NaipU1) |
| [CLIP text-FT](configs/zero_shot/train/k400/16_16_text_tuned_clip.yaml) | 32x224 | 48.5 | 69.8 | 68.5 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Eea6hW-_RBdJo4T5JJ_sWgEBNGFdA91tPTq9MQ-XkO5dMg?e=hGneeQ) |
| [ViFi-CLIP](configs/zero_shot/train/k400/16_16_vifi_clip.yaml) | 32x224 | 51.3 | 76.8 | 71.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EW0shb6XYDxFi3BH6DT70rgBPDwgW_knQ8jDsarxINXezw?e=RbixXc) |

### Base-to-novel generalization results
Here, we divide each dataset into base and novel classes.
All models are trained on base classes and evaluated on both base and novel classes. Results are averaged over 3 seeds for each experiment.
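HM denotes the harmonic mean of the base and novel accuracies, which rewards balanced performance on both splits:

$$\mathrm{HM} = \frac{2 \cdot \mathrm{Base} \cdot \mathrm{Novel}}{\mathrm{Base} + \mathrm{Novel}}$$

For example, ViFi-CLIP on Kinetics-400 below scores 76.4 (base) and 61.1 (novel), giving 2 × 76.4 × 61.1 / 137.5 ≈ 67.9.
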
#### Kinetics-400
| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/base2novel/finetuning_base2novel/k400) | 32x224 | 72.9 | 58.0 | 64.6 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXaQGUrODN9DjxtWuSylHJIBbFtAimZHdubKSHPlTT79eg?e=WiNFM9)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ESKX8BXvQoBHn5jq04EoowEB0zR6iPxlkxjSuWJbHupceg?e=vDniMa)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Ed1D7oXTF6VKtAVSleSJHowBJxsdu1kNNDRk4LBGOfzokg?e=gqt8en) |
| [CLIP text-FT](configs/base2novel/finetuning_base2novel/k400) | 32x224 | 73.4 | 59.7 | 65.8 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EduVCGSp11tFlwyCKg5ee7wBMJQwGHN9gKNBJozpZCgPEg?e=NPeIjf)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ERw9FPot9T9PrVw0kxdsQvkBcpDuDYYUnIFLTjm_xqz8zA?e=8dkLY8)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EU1lFXfDColIuYqGRrujImwBVqz2vP5gpTAM446HPa7erA?e=MCcZ6t) |
| [ViFi-CLIP](configs/base2novel/finetuning_base2novel/k400) | 32x224 | 76.4 | 61.1 | 67.9 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EVEyFxODEvtFt6FVpuIQvNQBgi5bfxce_nqgzqsjuxB48g?e=rOAu0o)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EcCHHh5FvnlPnlQTHLUk2v0Bv6MMTWHpkluBiQ1MdbZWFA?e=d3NkTX)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ETG_gS_l-E1Ai6BkPq8WlzgB8L5PDYDoVrgzia9832j3wg?e=rfJzPs) |

#### HMDB-51
| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/base2novel/finetuning_base2novel/hmdb) | 32x224 | 62.6 | 47.5 | 54.0 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EVNYYAhsZtZMtzoQcfKx7rQBlrEYkvUyDVfauuMobgAA0g?e=GQ2D8z)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EQaX5EzlLfhGhbZEzsgST0cB4HD0saOuoYgBCW7K8bzaBg?e=tKNkqY)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EUiGmlJa3M9Fgx6epCvPiRkBtkON4YKMWEtQSkwqC3dXWw?e=72DTbt) |
| [CLIP text-FT](configs/base2novel/finetuning_base2novel/hmdb) | 32x224 | 70.0 | 51.2 | 59.1 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ETJ12FfB_8RLg22CHxKPHosBPmFL52G9kbKGayQiqoHXYQ?e=hTb1tv)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXVDioTuv6dKgroWI-qmrEUBZV5njUMUndR_XJDZNTcXcw?e=rgPF49)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EQMykf375n1Iqm2IgABFT2oBokF2ooseZITmvyx2RKX4TA?e=1XNgtI) |
| [ViFi-CLIP](configs/base2novel/finetuning_base2novel/hmdb) | 32x224 | 73.8 | 53.3 | 61.9 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ETbI3yeoedBNqvAf3oz-faIBeGDy862_Tx_ZQT1soM6hZQ?e=2Y5Vxg)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EZ-JcyYOVthCu2pU4ou-AWgBHzMYzWsSKC7eL4KBU3xyLg?e=0bj1ed)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EeUfaRWGtEpPn9hVrpb8pCsBJGAMrGZgXLKOOzNNY1DGqA?e=6B7dJy) |

#### UCF-101
| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/base2novel/finetuning_base2novel/ucf) | 32x224 | 86.4 | 65.3 | 74.4 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EexxOxwJE8dHtk8ykBn39k4B9OaJK88L-N4c8AYOvj4LNA?e=ZCsYlc)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdC89wvjprhJgG-Q3DuzF_0BoKT0fxQWSeRLgJ6urodhaw?e=U3gU8U)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Efsw7nYSffZKhzwnIpwX89kBsqSSMhexheB-fb-xFn0fOQ?e=Q69d2d) |
| [CLIP text-FT](configs/base2novel/finetuning_base2novel/ucf) | 32x224 | 90.9 | 67.4 | 77.4 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EZmAd-E7FXZBoOKa9XY8RfcBG9Qk7nhlLwHin8oN89IKMg?e=Xnmtn9)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ERIpU_6ZhUpKjTZ6QQVfKPwBQkUiWLM6yRSOJmFZGOK4-Q?e=pkENDN)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EVoHq04lVOhIpE1pqaI7lmYBhHoh_6Nndgx7xMCZqeXTMw?e=qkcbFm) |
| [ViFi-CLIP](configs/base2novel/finetuning_base2novel/ucf) | 32x224 | 92.9 | 67.7 | 78.3 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXwqEdOLKSdIpY6AfTSbRMQB0UqZdTKiaWjw-2gf8Ctcyw?e=h2MvBZ)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdOmRlCM4zZJpr-Z497OfB4B5YK8qTiApht1StA7xJ3ClA?e=9zYxfS)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdgjBDJ0iXtMpdkNqOE5otcBgWgbfrrQBG1W0wICrD9qiA?e=x7VXl2) |

#### SSv2
| Name (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/base2novel/finetuning_base2novel/ssv2) | 32x224 | 9.2 | 8.5 | 8.8 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EfLcOFvIHK1Hjj-Yw7z_TQ8BSwmptokbOsPuzWnqAm8iTg?e=3gb20s)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EfAL5G3trhJHue-6RTF4HhsBStMma3XEvzWv_0wQnh1YlA?e=sTnbDG)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Eff55gcBtRxDuCGebyc0zTIBoAPgwDusk0U5jg7-ddjDDg?e=bXB25M) |
| [CLIP text-FT](configs/base2novel/finetuning_base2novel/ssv2) | 32x224 | 12.4 | 9.5 | 10.8 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdYLS33jyZZDsIy71Lk3TfwB76xrHL3BIRrUiNeSvWfnWg?e=ndm1JL)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EbpzILaqXJBKgPmTKBA32d0BsFrErjRCAwMwaXNKB39G5w?e=FbLCaN)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EY_VHJNKBhlFuir2dL1frOQB5GbG2UeSoG4p65Wh5wOHNg?e=HncWmy) |
| [ViFi-CLIP](configs/base2novel/finetuning_base2novel/ssv2) | 32x224 | 16.2 | 12.1 | 13.9 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Ee9-LsJzAeROj0rsXZ_Kq2gBWfDTJX9yI3NhsP3Wx9XT7g?e=QTh28B)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ETWroKKSa3VJmktA1qGcrUIBSWdSK8JaclCD7GpxXWMMRw?e=bNM8PS)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Efl1L1g_OdJHvLu24Yzh3w4BMrTcdll8DilX13lB6rXaFw?e=lLvOiJ) |

#### VL Prompting approach: Base-to-Novel
ViFi-CLIP is first trained on K400 and then vision and language prompts are further fine-tuned on the downstream datasets.

| Dataset (configs) | Input | Base Acc. | Novel Acc. | HM | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [HMDB-51](configs/base2novel/prompting_base2novel/hmdb) | 32x224 | 77.1 | 54.9 | 64.1 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Ee1tEk7Tw-dNibQEMVZYBPMBhYj2--lFdIceS1DNN55mUQ?e=qzP1vE)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EWxj-A1_EldJggHhBgVTFPIBdcGAXZn1yiWBATvgTKvLYg?e=WLfYUT)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXT2ezu2RZBEnKuzkEwYb48BE9LYaXoh-cT9dNSruYiKyg?e=b5cbmX) |
| [UCF-101](configs/base2novel/prompting_base2novel/ucf) | 32x224 | 95.9 | 74.1 | 83.6 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EYNvOOiV0qZIj-YIZlIH-dcBr-8eALRnPse189llN7QiPQ?e=wbbxDB)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EeBoMzLQ-YNNtl5YAKS0MmkBoKWpxblQQk3ieT50OtwlQQ?e=16jKbC)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EWwQJkz41o9KgXkgpDoJnjYBCyCD4bV0pBS9XtAD8VpLoQ?e=VKyBNc) |
| [SSv2](configs/base2novel/prompting_base2novel/ssv2) | 32x224 | 15.8 | 11.5 | 13.3 | [seed1](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ESey1Xo8Ka1HoJtu04xsng0BSTFIRgOty4AwIlnQL7iuJQ?e=n27FNI)/[seed2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EeLJ6F4mXxBHgBj0qQEXkjkBOCImmwSns3J51yG9YIkjAQ?e=eoXWyd)/[seed3](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EQ8Vjdf0t8ZEuJFGlTBwP2sBmDRhM7FWuYmyOh0UZJdhPg?e=ZMppVA) |

### Few-shot results
The table below shows few-shot results of ViFi-CLIP for K = 2, 4, 8 and 16 shots.

| Name (configs) | Dataset | K (shots) | Input | Top-1 Acc. | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/hmdb51/16_32_vifi_clip_2_shot.yaml) | HMDB-51 | 2 | 32x224 | 57.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EZfPCFy69GlLms0xE9hacYsBMRDZolyy5-5kh7urW6U5Hg?e=PRR4dj) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/hmdb51/16_32_vifi_clip_4_shot.yaml) | HMDB-51 | 4 | 32x224 | 62.7 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EYSoKhu-CEdFtDIPDB-9mcYBTocR1z6S4pB2prm8M3y86w?e=MgiPpY) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/hmdb51/16_32_vifi_clip_8_shot.yaml) | HMDB-51 | 8 | 32x224 | 64.5 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXLoRgDpJERKnxWf6GGGqzoBy-jbAuO-IcV4QSWmtT2mBg?e=piTDRc) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/hmdb51/16_32_vifi_clip_16_shot.yaml) | HMDB-51 | 16 | 32x224 | 66.8 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdA4jgYynRBHrhy1ftn-s9gBFRFYCPdaD5y9AQBClaziWg?e=x2tHpP) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ucf101/16_32_vifi_clip_2_shot.yaml) | UCF-101 | 2 | 32x224 | 80.7 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ERaxz4xkUBdGkGCKopmsctgBWj0aoxf4eNWRFIQPtZja6A?e=FzpFnl) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ucf101/16_32_vifi_clip_4_shot.yaml) | UCF-101 | 4 | 32x224 | 85.1 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ETa1Ym63eYtDt9Fzlq_5YuEBcNCPlUPbD12zhc4YGusGyg?e=Z1Si0j) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ucf101/16_32_vifi_clip_8_shot.yaml) | UCF-101 | 8 | 32x224 | 90.0 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EaHr57kr7GBGno5v6Qb7sLUBERvoInzco0yfbO81davqWQ?e=V2Odqn) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ucf101/16_32_vifi_clip_16_shot.yaml) | UCF-101 | 16 | 32x224 | 92.7 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ERGWnUJHBiVJluMvaUrbDPcB3iIGXAet0W-AfwDJy1bL2w?e=0fSQJb) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ssv2/16_32_vifi_clip_2_shot.yaml) | SSv2 | 2 | 32x224 | 6.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EfmVXJyo9VxHheDrVrm7b88BJ_MXRyI_dhuI9pWMUpfPww?e=JPmnt2) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ssv2/16_32_vifi_clip_4_shot.yaml) | SSv2 | 4 | 32x224 | 7.4 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ET1MeS3-C_NLpg-rAJMnf0cBruk16K56NDCwySFwse1tsQ?e=1fV3k2) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ssv2/16_32_vifi_clip_8_shot.yaml) | SSv2 | 8 | 32x224 | 8.5 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EWp7ERV-Dn9GiiTgKWyjDyMBUVoLXyPdHcBpAPah3XvZmw?e=r5Xmii) |
| [ViFi-CLIP](configs/few_shot/finetuning_few_shot/ssv2/16_32_vifi_clip_16_shot.yaml) | SSv2 | 16 | 32x224 | 12.4 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EZJB66ssj_VBhZB6e59wI9oB1qHGKujTAhoSKyqvnpEzDw?e=Vdjp5n) |

NOTE: Few-shot results for the other fine-tuned CLIP variants are presented in our main paper (Table 3). Model weights for those variants are provided [here](https://mbzuaiac-my.sharepoint.com/:f:/g/personal/uzair_khattak_mbzuai_ac_ae/Elz1joid4FlAkgDnr_O1ZLMBNxK3jZOlzdAHv5yopYakJQ?e=wyDe8r).

#### VL Prompting approach: Few-shot
ViFi-CLIP is first trained on K400 and then vision and language prompts are further fine-tuned on the downstream datasets in a few-shot manner.

| Dataset (configs) | Input | K=2 | K=4 | K=8 | K=16 | Model |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| [HMDB-51](configs/few_shot/prompting_few_shot/hmdb51) | 32x224 | 63.0 | 65.1 | 69.6 | 72.0 | [K=2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EQ0oDcnAJLtJt4CmdhnApVIBiWD2YwAO5x01TYy0mpEmzA?e=iFvoSV)/[K=4](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Ed3LaBQWcrhLgqStigS5HAsBimR0K6DR5l2x_dI6kWuDCA?e=QfYeRd)/[K=8](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EbpdVbtqUUlLt86s2Ze5gBoBAvG4KgWJVbYFVMMErX7Smw?e=lxRFPs)/[K=16](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EQTj4Xe9veRHqmypVCJ6rRMBEMO6Rky3m_V2Q8f7lqrpEw?e=V1DJRH) |
| [UCF-101](configs/few_shot/prompting_few_shot/ucf101) | 32x224 | 91.0 | 93.7 | 95.0 | 96.4 | [K=2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EXz03SRz-NdCmVcWpSt-GEwBxrBWmlGbitXq9iRGz8EczQ?e=zpongw)/[K=4](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EaFyG9bOXUhEnOsviO0BhowBpjvbRcJb9zCehcgyXdhHRQ?e=fl7H6a)/[K=8](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ERM8RDkpandOshsedBwL0fQBrdQd26zjbaBGGGw1XhuTuQ?e=z8GDng)/[K=16](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EbUEylUsTyBOhPZqy99sY8UBMtE0AA46TdY-MTDs8ma0AA?e=g2038u) |
| [SSv2](configs/few_shot/prompting_few_shot/ssv2) | 32x224 | 6.7 | 7.9 | 10.2 | 13.5 | [K=2](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EbBoLMM3RnNAvoZ3TdoGYSMBFCfsB_gfaz3svxtyKUdxEA?e=KeIl1s)/[K=4](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EbzbcG00RgJFsezk_tnDmQkBCf7wPPIexuKEgUZJKgmMew?e=lEXJ45)/[K=8](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/ESVYfUXIjZ9CppbPt8mgKOABVKSljMNI2JiD9PLkoABSoQ?e=fxI1l1)/[K=16](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/Ec3wlwVsJ4FDprzfChxkZeoBqz4AH7Y4JRF1SjsvMsOWcw?e=ya59zp) |

### Fully-supervised results on Kinetics-400
| Name (configs) | FLOPs (G) | Input | Top-1 Acc. | Top-5 Acc. | Model |
|---|:---:|:---:|:---:|:---:|:---:|
| [CLIP image-FT](configs/fully_supervised/k400/16_16_image_tuned_clip.yaml) | 281 | 16x224 | 82.8 | 96.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EdmXN3BQe79BgW1Tuw3Q--QBPbSc4b1N5-ahEIaK-SxRRA?e=e4bLz7) |
| [CLIP text-FT](configs/fully_supervised/k400/16_16_text_tuned_clip.yaml) | 281 | 16x224 | 73.1 | 91.2 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EeKqDguvX8NPvz5MIKmVPBIBLxL0wkzh0SCmpfs8ZebdZQ?e=2mKBTr) |
| [ViFi-CLIP](configs/fully_supervised/k400/16_16_vifi_clip.yaml) | 281 | 16x224 | 83.9 | 96.3 | [link](https://mbzuaiac-my.sharepoint.com/:u:/g/personal/uzair_khattak_mbzuai_ac_ae/EfqisYTGKlVIiPI0QHG-pxMBuBMA0906jX_kPpaRGw9Ksw?e=TdbaBU) |

## Installation
For installation and other package requirements, please follow the instructions detailed in [INSTALL.md](docs/INSTALL.md).

## Data preparation
Please follow the instructions at [DATASETS.md](docs/DATASETS.md) to prepare all datasets.

# Training
For all experiments shown in the tables above, we provide config files in the `configs` folder. For example, to train ViFi-CLIP (which tunes both the image and text encoders) on Kinetics-400, run the following command:
```
python -m torch.distributed.launch --nproc_per_node=8 \
main.py -cfg configs/fully_supervised/k400/16_16_vifi_clip.yaml --output /PATH/TO/OUTPUT
```

**Note:**
- We recommend keeping the total batch size as specified in the respective config files. Use `--accumulation-steps` to maintain the total batch size. Specifically, here the effective total batch size is 8 (`GPUs_NUM`) × 4 (`TRAIN.BATCH_SIZE`) × 16 (`TRAIN.ACCUMULATION_STEPS`) = 512; see the sketch after these notes.
- After setting up the datasets as instructed in [DATASETS.md](docs/DATASETS.md), the only argument in the config file that needs to be specified is the data path. All other settings in the config files are pre-set.

For detailed training instructions for all experimental setups, please refer to [TRAIN.md](docs/TRAIN.md).

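As a concrete sketch of the batch-size note above (assuming only 4 GPUs are available and the config's `TRAIN.BATCH_SIZE` stays at 4), scaling `--accumulation-steps` keeps the effective total batch size unchanged:
```
# Sketch: 4 (GPUs) x 4 (TRAIN.BATCH_SIZE) x 32 (accumulation steps) = 512 effective batch size
python -m torch.distributed.launch --nproc_per_node=4 \
main.py -cfg configs/fully_supervised/k400/16_16_vifi_clip.yaml \
--output /PATH/TO/OUTPUT --accumulation-steps 32
```
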
# Evaluating models
To evaluate a model, please use a suitable config and the corresponding model weights. For example, to evaluate ViFi-CLIP with 16 frames on Kinetics-400, run the command below:
```
python -m torch.distributed.launch --nproc_per_node=8 main.py \
-cfg configs/fully_supervised/k400/16_16_vifi_clip.yaml --output /PATH/TO/OUTPUT \
--only_test --resume /PATH/TO/CKPT --opts TEST.NUM_CLIP 4 TEST.NUM_CROP 3
```
Here `TEST.NUM_CLIP 4` and `TEST.NUM_CROP 3` evaluate 4 temporal clips × 3 spatial crops, i.e., 12 views per video.

## Contact
If you have any questions, please create an issue on this repository or contact uzair.khattak@mbzuai.ac.ae or hanoona.bangalath@mbzuai.ac.ae.

# Citation
If you use our approach (code, model or dataset splits) in your research, please consider citing:
```
@inproceedings{hanoonavificlip,
    title={Fine-tuned CLIP models are efficient video learners},
    author={Rasheed, Hanoona and Khattak, Muhammad Uzair and Maaz, Muhammad and Khan, Salman and Khan, Fahad Shahbaz},
    booktitle={The IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2023}
}
```

# Acknowledgements
Our code is based on [XCLIP's repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP). We sincerely thank the authors for releasing their code. If you use our model and code, please consider citing XCLIP as well.
clip/__init__.py ADDED
@@ -0,0 +1 @@
from .clip import *
clip/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (153 Bytes).

clip/__pycache__/clip.cpython-37.pyc ADDED
Binary file (7.49 kB).

clip/__pycache__/model.cpython-37.pyc ADDED
Binary file (16.4 kB).

clip/__pycache__/simple_tokenizer.cpython-37.pyc ADDED
Binary file (5.75 kB).
clip/bpe_simple_vocab_16e6.txt.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:924691ac288e54409236115652ad4aa250f48203de50a9e4722a6ecd48d6804a
size 1356917
clip/clip.py ADDED
@@ -0,0 +1,219 @@
import hashlib
import os
import urllib
import warnings
from typing import Union, List

import torch
from PIL import Image
from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
from tqdm import tqdm

from .model import build_model
from .simple_tokenizer import SimpleTokenizer as _Tokenizer

try:
    from torchvision.transforms import InterpolationMode
    BICUBIC = InterpolationMode.BICUBIC
except ImportError:
    BICUBIC = Image.BICUBIC


if torch.__version__.split(".") < ["1", "7", "1"]:
    warnings.warn("PyTorch version 1.7.1 or higher is recommended")


__all__ = ["available_models", "load", "tokenize"]
_tokenizer = _Tokenizer()

_MODELS = {
    "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
    "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
    "ViT-L/14": "https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt",
    "ViT-L/14@336px": "https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt",
}


def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
    os.makedirs(root, exist_ok=True)
    filename = os.path.basename(url)

    expected_sha256 = url.split("/")[-2]
    download_target = os.path.join(root, filename)

    if os.path.exists(download_target) and not os.path.isfile(download_target):
        raise RuntimeError(f"{download_target} exists and is not a regular file")

    if os.path.isfile(download_target):
        if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
            return download_target
        else:
            warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")

    with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
        with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
            while True:
                buffer = source.read(8192)
                if not buffer:
                    break

                output.write(buffer)
                loop.update(len(buffer))

    if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
        raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not match")

    return download_target


def _transform(n_px):
    return Compose([
        Resize(n_px, interpolation=BICUBIC),
        CenterCrop(n_px),
        lambda image: image.convert("RGB"),
        ToTensor(),
        Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
    ])


def available_models() -> List[str]:
    """Returns the names of available CLIP models"""
    return list(_MODELS.keys())


def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=False):
    """Load a CLIP model

    Parameters
    ----------
    name : str
        A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict

    device : Union[str, torch.device]
        The device to put the loaded model

    jit : bool
        Whether to load the optimized JIT model or more hackable non-JIT model (default).

    Returns
    -------
    model : torch.nn.Module
        The CLIP model

    preprocess : Callable[[PIL.Image], torch.Tensor]
        A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
    """
    if name in _MODELS:
        model_path = _download(_MODELS[name])
    elif os.path.isfile(name):
        model_path = name
    else:
        raise RuntimeError(f"Model {name} not found; available models = {available_models()}")

    try:
        # loading JIT archive
        model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
        state_dict = None
    except RuntimeError:
        # loading saved state dict
        if jit:
            warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
            jit = False
        state_dict = torch.load(model_path, map_location="cpu")

    if not jit:
        model = build_model(state_dict or model.state_dict()).to(device)
        if str(device) == "cpu":
            model.float()
        return model, _transform(model.visual.input_resolution)

    # patch the device names
    device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
    device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]

    def patch_device(module):
        try:
            graphs = [module.graph] if hasattr(module, "graph") else []
        except RuntimeError:
            graphs = []

        if hasattr(module, "forward1"):
            graphs.append(module.forward1.graph)

        for graph in graphs:
            for node in graph.findAllNodes("prim::Constant"):
                if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
                    node.copyAttributes(device_node)

    model.apply(patch_device)
    patch_device(model.encode_image)
    patch_device(model.encode_text)

    # patch dtype to float32 on CPU
    if str(device) == "cpu":
        float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
        float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
        float_node = float_input.node()

        def patch_float(module):
            try:
                graphs = [module.graph] if hasattr(module, "graph") else []
            except RuntimeError:
                graphs = []

            if hasattr(module, "forward1"):
                graphs.append(module.forward1.graph)

            for graph in graphs:
                for node in graph.findAllNodes("aten::to"):
                    inputs = list(node.inputs())
                    for i in [1, 2]:  # dtype can be the second or third argument to aten::to()
                        if inputs[i].node()["value"] == 5:
                            inputs[i].node().copyAttributes(float_node)

        model.apply(patch_float)
        patch_float(model.encode_image)
        patch_float(model.encode_text)

        model.float()

    return model, _transform(model.input_resolution.item())


def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> torch.LongTensor:
    """
    Returns the tokenized representation of given input string(s)

    Parameters
    ----------
    texts : Union[str, List[str]]
        An input string or a list of input strings to tokenize

    context_length : int
        The context length to use; all CLIP models use 77 as the context length

    truncate: bool
        Whether to truncate the text in case its encoding is longer than the context length

    Returns
    -------
    A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
    """
    if isinstance(texts, str):
        texts = [texts]

    sot_token = _tokenizer.encoder["<|startoftext|>"]
    eot_token = _tokenizer.encoder["<|endoftext|>"]
    all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
    result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)

    for i, tokens in enumerate(all_tokens):
        if len(tokens) > context_length:
            if truncate:
                tokens = tokens[:context_length]
                tokens[-1] = eot_token
            else:
                raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
        result[i, :len(tokens)] = torch.tensor(tokens)

    return result
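
For reference, a minimal sketch of the tokenizer interface defined above (assuming the package is importable as `clip`; in this repository full models are typically built through the ViFi-CLIP trainer rather than `clip.load`):

```python
import clip

# Each prompt becomes one row of an [N, 77] LongTensor: <|startoftext|>, BPE tokens,
# <|endoftext|>, then zero padding up to the 77-token context length.
tokens = clip.tokenize(["a photo of a person running", "a photo of a dog"])
print(tokens.shape)             # torch.Size([2, 77])
print(clip.available_models())  # ['ViT-B/32', 'ViT-B/16', 'ViT-L/14', 'ViT-L/14@336px']
```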
clip/model.py ADDED
@@ -0,0 +1,521 @@
from collections import OrderedDict
from typing import Tuple, Union

import numpy as np
import torch
import torch.nn.functional as F
from torch import nn


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1):
        super().__init__()

        # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
        self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)

        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()

        self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)

        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        self.stride = stride

        if stride > 1 or inplanes != planes * Bottleneck.expansion:
            # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
            self.downsample = nn.Sequential(OrderedDict([
                ("-1", nn.AvgPool2d(stride)),
                ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
                ("1", nn.BatchNorm2d(planes * self.expansion))
            ]))

    def forward(self, x: torch.Tensor):
        identity = x

        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.avgpool(out)
        out = self.bn3(self.conv3(out))

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out


class AttentionPool2d(nn.Module):
    def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
        super().__init__()
        self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
        self.num_heads = num_heads

    def forward(self, x):
        x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1)  # NCHW -> (HW)NC
        x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0)  # (HW+1)NC
        x = x + self.positional_embedding[:, None, :].to(x.dtype)  # (HW+1)NC
        x, _ = F.multi_head_attention_forward(
            query=x, key=x, value=x,
            embed_dim_to_check=x.shape[-1],
            num_heads=self.num_heads,
            q_proj_weight=self.q_proj.weight,
            k_proj_weight=self.k_proj.weight,
            v_proj_weight=self.v_proj.weight,
            in_proj_weight=None,
            in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
            bias_k=None,
            bias_v=None,
            add_zero_attn=False,
            dropout_p=0,
            out_proj_weight=self.c_proj.weight,
            out_proj_bias=self.c_proj.bias,
            use_separate_proj_weight=True,
            training=self.training,
            need_weights=False
        )

        return x[0]


class ModifiedResNet(nn.Module):
    """
    A ResNet class that is similar to torchvision's but contains the following changes:
    - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
    - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
    - The final pooling layer is a QKV attention instead of an average pool
    """

    def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
        super().__init__()
        self.output_dim = output_dim
        self.input_resolution = input_resolution

        # the 3-layer stem
        self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width // 2)
        self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width // 2)
        self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(width)
        self.avgpool = nn.AvgPool2d(2)
        self.relu = nn.ReLU(inplace=True)

        # residual layers
        self._inplanes = width  # this is a *mutable* variable used during construction
        self.layer1 = self._make_layer(width, layers[0])
        self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
        self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
        self.layer4 = self._make_layer(width * 8, layers[3], stride=2)

        embed_dim = width * 32  # the ResNet feature dimension
        self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)

    def _make_layer(self, planes, blocks, stride=1):
        layers = [Bottleneck(self._inplanes, planes, stride)]

        self._inplanes = planes * Bottleneck.expansion
        for _ in range(1, blocks):
            layers.append(Bottleneck(self._inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        def stem(x):
            for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
                x = self.relu(bn(conv(x)))
            x = self.avgpool(x)
            return x

        x = x.type(self.conv1.weight.dtype)
        x = stem(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.attnpool(x)

        return x


class LayerNorm(nn.LayerNorm):
    """Subclass torch's LayerNorm to handle fp16."""

    def forward(self, x: torch.Tensor):
        orig_type = x.dtype
        ret = super().forward(x.type(torch.float32))
        return ret.type(orig_type)


class QuickGELU(nn.Module):
    def forward(self, x: torch.Tensor):
        return x * torch.sigmoid(1.702 * x)


class ResidualAttentionBlock_ViFi_CLIP(nn.Module):
    def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None, add_prompt=False,
                 text_layer=False, i=0, design_details=None):
        super().__init__()

        self.attn = nn.MultiheadAttention(d_model, n_head)
        self.ln_1 = LayerNorm(d_model)
        self.mlp = nn.Sequential(OrderedDict([
            ("c_fc", nn.Linear(d_model, d_model * 4)),
            ("gelu", QuickGELU()),
            ("c_proj", nn.Linear(d_model * 4, d_model))
        ]))
        self.ln_2 = LayerNorm(d_model)
        # Only add learnable tokens if flag is set True
        # For the first iteration i, we should not add the learnable parameters
        # as it is already been taken care of in the very start, for both text
        # and the visual branch
        self.text_layer = text_layer
        self.attn_mask = attn_mask
        if i != 0:
            self.add_prompt = add_prompt
            if self.add_prompt:
                if self.text_layer:
                    self.n_ctx_text = design_details["language_ctx"]  # hyperparameter
                    ctx_vectors = torch.empty(self.n_ctx_text, d_model)
                else:
                    self.n_ctx_visual = design_details["vision_ctx"]  # hyperparameter
                    ctx_vectors = torch.empty(self.n_ctx_visual, d_model)
                # Code snippet for per layer visual prompts
                nn.init.normal_(ctx_vectors, std=0.02)
                self.VPT_shallow = nn.Parameter(ctx_vectors)
        else:
            self.add_prompt = False

    def attention(self, x: torch.Tensor):
        self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]

    def forward(self, x: torch.Tensor):
        # Will need to append the learnable tokens for this layer here
        # Check if flag was set for this layer or not
        if self.add_prompt:
            # Also see if this is textual transformer layer or not
            if not self.text_layer:
                # Remove the outputs produced by learnable tokens of previous layer
                prefix = x[0:x.shape[0] - self.n_ctx_visual, :, :]
                # Create/configure learnable tokens of this layer
                visual_context = self.VPT_shallow.expand(x.shape[1], -1, -1).permute(1, 0, 2).half()
                # Add the learnable tokens of this layer with the input, by replacing the previous
                # layer learnable tokens
                x = torch.cat([prefix, visual_context], dim=0)
            else:
                # Appending the learnable tokens in different way
                # x -> [77, NCLS, DIM]
                # First remove the learnable tokens from previous layer
                prefix = x[:1, :, :]
                suffix = x[1 + self.n_ctx_text:, :, :]
                # Create/configure learnable tokens of this layer
                textual_context = self.VPT_shallow.expand(x.shape[1], -1, -1).permute(1, 0, 2).half()
                # Add the learnable tokens of this layer with the input, replaced by previous
                # layer learnable tokens
                x = torch.cat([prefix, textual_context, suffix], dim=0)

        x = x + self.attention(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x


class Transformer(nn.Module):
    def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, prompts_needed=0,
                 text_layer=False, design_details=None):
        super().__init__()
        self.width = width
        self.layers = layers
        self.resblocks = nn.Sequential(*[ResidualAttentionBlock_ViFi_CLIP(width, heads, attn_mask, True,
                                                                          text_layer, i,
                                                                          design_details) if prompts_needed > i
                                         else ResidualAttentionBlock_ViFi_CLIP(width, heads, attn_mask, False,
                                                                               text_layer, i, design_details)
                                         for i in range(layers)])

    def forward(self, x: torch.Tensor):
        return self.resblocks(x)


class VisionTransformer(nn.Module):
    def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int,
                 output_dim: int, design_details):
        super().__init__()
        self.input_resolution = input_resolution
        self.output_dim = output_dim
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
        if design_details["vision_depth"] == 0:
            self.VPT_shallow = False
        else:
            self.VPT_shallow = True
        if self.VPT_shallow:
            # Add visual prompt tokens here
            n_ctx = design_details["vision_ctx"]  # hyperparameter
            ctx_vectors = torch.empty(n_ctx, width)
            nn.init.normal_(ctx_vectors, std=0.02)
            self.VPT = nn.Parameter(ctx_vectors)
            # self.VPT.half()
        scale = width ** -0.5
        self.class_embedding = nn.Parameter(scale * torch.randn(width))
        self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
        self.ln_pre = LayerNorm(width)
        # hyper-parameter if need to add prompt embeddings inside to the input
        # of transformer block or not:
        self.prompt_till_layer_visual = design_details["vision_depth"]
        self.transformer = Transformer(width, layers, heads, prompts_needed=self.prompt_till_layer_visual,
                                       design_details=design_details)

        self.ln_post = LayerNorm(width)
        self.proj = nn.Parameter(scale * torch.randn(width, output_dim))

    def forward(self, x: torch.Tensor):
        x = self.conv1(x)  # shape = [*, width, grid, grid]
        x = x.reshape(x.shape[0], x.shape[1], -1)  # shape = [*, width, grid ** 2]
        x = x.permute(0, 2, 1)  # shape = [*, grid ** 2, width]
        x = torch.cat(
            [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device),
             x], dim=1)  # shape = [*, grid ** 2 + 1, width]
        x = x + self.positional_embedding.to(x.dtype)
        # After positional embeddings, we will attach prompts with the model, remember only those
        # are trainable parameters here in whole image encoder.
        if self.VPT_shallow:
            visual_ctx = self.VPT.expand(x.shape[0], -1, -1).half()
            x = torch.cat([x, visual_ctx], dim=1)
        else:
            assert self.prompt_till_layer_visual == 0

        # Normal code as before
        x = self.ln_pre(x)

        x = x.permute(1, 0, 2)  # NLD -> LND
        x = self.transformer(x)
        x = x.permute(1, 0, 2)  # LND -> NLD

        x = self.ln_post(x[:, 0, :])

        if self.proj is not None:
            x = x @ self.proj

        return x


class CLIP(nn.Module):
    def __init__(self,
                 embed_dim: int,
                 # vision
                 image_resolution: int,
                 vision_layers: Union[Tuple[int, int, int, int], int],
                 vision_width: int,
                 vision_patch_size: int,
                 # text
                 context_length: int,
                 vocab_size: int,
                 transformer_width: int,
                 transformer_heads: int,
                 transformer_layers: int,
                 design_details
                 ):
        super().__init__()

        self.context_length = context_length
        trainer = design_details['trainer']

        if isinstance(vision_layers, (tuple, list)):
            vision_heads = vision_width * 32 // 64
            self.visual = ModifiedResNet(
                layers=vision_layers,
                output_dim=embed_dim,
                heads=vision_heads,
                input_resolution=image_resolution,
                width=vision_width
            )
        else:
            vision_heads = vision_width // 64
            self.visual = VisionTransformer(
                input_resolution=image_resolution,
                patch_size=vision_patch_size,
                width=vision_width,
                layers=vision_layers,
                heads=vision_heads,
                output_dim=embed_dim,
                design_details=design_details
            )
        # hyper-parameter if need to add prompt embeddings inside to the input
        # of transformer block or not:
        prompt_till_layer_text = design_details['language_depth']
        self.transformer = Transformer(
            width=transformer_width,
            layers=transformer_layers,
            heads=transformer_heads,
            attn_mask=self.build_attention_mask(),
            prompts_needed=prompt_till_layer_text,
            text_layer=True,
            design_details=design_details
        )

        self.vocab_size = vocab_size
        self.token_embedding = nn.Embedding(vocab_size, transformer_width)
        self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
        self.ln_final = LayerNorm(transformer_width)

        self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
        self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))

        self.initialize_parameters()

    def initialize_parameters(self):
        nn.init.normal_(self.token_embedding.weight, std=0.02)
        nn.init.normal_(self.positional_embedding, std=0.01)

        if isinstance(self.visual, ModifiedResNet):
            if self.visual.attnpool is not None:
                std = self.visual.attnpool.c_proj.in_features ** -0.5
                nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
                nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
                nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
                nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)

            for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
                for name, param in resnet_block.named_parameters():
                    if name.endswith("bn3.weight"):
                        nn.init.zeros_(param)

        proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
        attn_std = self.transformer.width ** -0.5
        fc_std = (2 * self.transformer.width) ** -0.5
        for block in self.transformer.resblocks:
            nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
            nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
            nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
            nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)

        if self.text_projection is not None:
            nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)

    def build_attention_mask(self):
        # lazily create causal attention mask, with full attention between the vision tokens
        # pytorch uses additive attention mask; fill with -inf
        mask = torch.empty(self.context_length, self.context_length)
        mask.fill_(float("-inf"))
        mask.triu_(1)  # zero out the lower diagonal
        return mask

    @property
    def dtype(self):
        return self.visual.conv1.weight.dtype

    def encode_image(self, image):
        return self.visual(image.type(self.dtype))

    def encode_text(self, text):
        x = self.token_embedding(text).type(self.dtype)  # [batch_size, n_ctx, d_model]
424
+
425
+ x = x + self.positional_embedding.type(self.dtype)
426
+ x = x.permute(1, 0, 2) # NLD -> LND
427
+ x = self.transformer(x)
428
+ x = x.permute(1, 0, 2) # LND -> NLD
429
+ x = self.ln_final(x).type(self.dtype)
430
+
431
+ # x.shape = [batch_size, n_ctx, transformer.width]
432
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
433
+ x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
434
+
435
+ return x
436
+
437
+ def forward(self, image, text):
438
+ image_features = self.encode_image(image)
439
+ text_features = self.encode_text(text)
440
+
441
+ # normalized features
442
+ image_features = image_features / image_features.norm(dim=-1, keepdim=True)
443
+ text_features = text_features / text_features.norm(dim=-1, keepdim=True)
444
+
445
+ # cosine similarity as logits
446
+ logit_scale = self.logit_scale.exp()
447
+ logits_per_image = logit_scale * image_features @ text_features.t()
448
+ logits_per_text = logit_scale * text_features @ image_features.t()
449
+
450
+ # shape = [global_batch_size, global_batch_size]
451
+ return logits_per_image, logits_per_text
452
+
453
+
454
+ def convert_weights(model: nn.Module):
455
+ """Convert applicable model parameters to fp16"""
456
+
457
+ def _convert_weights_to_fp16(l):
458
+ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
459
+ l.weight.data = l.weight.data.half()
460
+ if l.bias is not None:
461
+ l.bias.data = l.bias.data.half()
462
+
463
+ if isinstance(l, nn.MultiheadAttention):
464
+ for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
465
+ tensor = getattr(l, attr)
466
+ if tensor is not None:
467
+ tensor.data = tensor.data.half()
468
+
469
+ for name in ["text_projection", "proj"]:
470
+ if hasattr(l, name):
471
+ attr = getattr(l, name)
472
+ if attr is not None:
473
+ attr.data = attr.data.half()
474
+
475
+ model.apply(_convert_weights_to_fp16)
476
+
477
+
478
+ def build_model(state_dict: dict, design_details):
479
+ vit = "visual.proj" in state_dict
480
+
481
+ if vit:
482
+ vision_width = state_dict["visual.conv1.weight"].shape[0]
483
+ vision_layers = len(
484
+ [k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
485
+ vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
486
+ grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
487
+ image_resolution = vision_patch_size * grid_size
488
+ else:
489
+ counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in
490
+ [1, 2, 3, 4]]
491
+ vision_layers = tuple(counts)
492
+ vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
493
+ output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
494
+ vision_patch_size = None
495
+ assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
496
+ image_resolution = output_width * 32
497
+
498
+ embed_dim = state_dict["text_projection"].shape[1]
499
+ context_length = state_dict["positional_embedding"].shape[0]
500
+ vocab_size = state_dict["token_embedding.weight"].shape[0]
501
+ transformer_width = state_dict["ln_final.weight"].shape[0]
502
+ transformer_heads = transformer_width // 64
503
+ transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
504
+
505
+ model = CLIP(
506
+ embed_dim,
507
+ image_resolution, vision_layers, vision_width, vision_patch_size,
508
+ context_length, vocab_size, transformer_width, transformer_heads, transformer_layers, design_details
509
+ )
510
+
511
+ for key in ["input_resolution", "context_length", "vocab_size"]:
512
+ if key in state_dict:
513
+ del state_dict[key]
514
+
515
+ convert_weights(model)
516
+ try:
517
+ model.load_state_dict(state_dict)
518
+ except:
519
+ missing_keys, _ = model.load_state_dict(state_dict, strict=False)
520
+ print('Weights not found for some missing keys: ', missing_keys)
521
+ return model.eval()
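
For orientation, here is a minimal sketch of how `build_model` above can be driven. The `design_details` keys mirror what this file reads; the all-zero prompt depths (which disable prompt tokens, i.e. plain full fine-tuning) and the checkpoint path are illustrative assumptions, not repo-confirmed values.

```python
import torch
from clip.model import build_model

# All-zero prompt settings disable the VPT/text prompt branches,
# i.e. the plain full fine-tuning setting (keys assumed from this file).
design_details = {"trainer": "ViFi_CLIP",
                  "vision_depth": 0, "vision_ctx": 0,
                  "language_depth": 0, "language_ctx": 0}

# Official CLIP checkpoints ship as TorchScript archives; their state_dict
# is what build_model consumes. The local path is illustrative.
jit_model = torch.jit.load("ViT-B-16.pt", map_location="cpu")
model = build_model(jit_model.state_dict(), design_details)
print(model.visual.input_resolution)  # 224 for ViT-B/16
```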
clip/simple_tokenizer.py ADDED
@@ -0,0 +1,132 @@
+ import gzip
+ import html
+ import os
+ from functools import lru_cache
+ 
+ import ftfy
+ import regex as re
+ 
+ 
+ @lru_cache()
+ def default_bpe():
+     return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
+ 
+ 
+ @lru_cache()
+ def bytes_to_unicode():
+     """
+     Returns list of utf-8 byte and a corresponding list of unicode strings.
+     The reversible bpe codes work on unicode strings.
+     This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
+     When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
+     This is a significant percentage of your normal, say, 32K bpe vocab.
+     To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
+     And avoids mapping to whitespace/control characters the bpe code barfs on.
+     """
+     bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
+     cs = bs[:]
+     n = 0
+     for b in range(2**8):
+         if b not in bs:
+             bs.append(b)
+             cs.append(2**8+n)
+             n += 1
+     cs = [chr(n) for n in cs]
+     return dict(zip(bs, cs))
+ 
+ 
+ def get_pairs(word):
+     """Return set of symbol pairs in a word.
+     Word is represented as tuple of symbols (symbols being variable-length strings).
+     """
+     pairs = set()
+     prev_char = word[0]
+     for char in word[1:]:
+         pairs.add((prev_char, char))
+         prev_char = char
+     return pairs
+ 
+ 
+ def basic_clean(text):
+     text = ftfy.fix_text(text)
+     text = html.unescape(html.unescape(text))
+     return text.strip()
+ 
+ 
+ def whitespace_clean(text):
+     text = re.sub(r'\s+', ' ', text)
+     text = text.strip()
+     return text
+ 
+ 
+ class SimpleTokenizer(object):
+     def __init__(self, bpe_path: str = default_bpe()):
+         self.byte_encoder = bytes_to_unicode()
+         self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
+         merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
+         merges = merges[1:49152-256-2+1]
+         merges = [tuple(merge.split()) for merge in merges]
+         vocab = list(bytes_to_unicode().values())
+         vocab = vocab + [v+'</w>' for v in vocab]
+         for merge in merges:
+             vocab.append(''.join(merge))
+         vocab.extend(['<|startoftext|>', '<|endoftext|>'])
+         self.encoder = dict(zip(vocab, range(len(vocab))))
+         self.decoder = {v: k for k, v in self.encoder.items()}
+         self.bpe_ranks = dict(zip(merges, range(len(merges))))
+         self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
+         self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
+ 
+     def bpe(self, token):
+         if token in self.cache:
+             return self.cache[token]
+         word = tuple(token[:-1]) + (token[-1] + '</w>',)
+         pairs = get_pairs(word)
+ 
+         if not pairs:
+             return token+'</w>'
+ 
+         while True:
+             bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
+             if bigram not in self.bpe_ranks:
+                 break
+             first, second = bigram
+             new_word = []
+             i = 0
+             while i < len(word):
+                 try:
+                     j = word.index(first, i)
+                     new_word.extend(word[i:j])
+                     i = j
+                 except ValueError:
+                     new_word.extend(word[i:])
+                     break
+ 
+                 if word[i] == first and i < len(word)-1 and word[i+1] == second:
+                     new_word.append(first+second)
+                     i += 2
+                 else:
+                     new_word.append(word[i])
+                     i += 1
+             new_word = tuple(new_word)
+             word = new_word
+             if len(word) == 1:
+                 break
+             else:
+                 pairs = get_pairs(word)
+         word = ' '.join(word)
+         self.cache[token] = word
+         return word
+ 
+     def encode(self, text):
+         bpe_tokens = []
+         text = whitespace_clean(basic_clean(text)).lower()
+         for token in re.findall(self.pat, text):
+             token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
+             bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
+         return bpe_tokens
+ 
+     def decode(self, tokens):
+         text = ''.join([self.decoder[token] for token in tokens])
+         text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
+         return text
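
A quick usage sketch for `SimpleTokenizer` (illustrative; note that `encode` lower-cases and cleans the text and does not add the `<|startoftext|>`/`<|endoftext|>` tokens, which `clip.tokenize` adds separately):

```python
from clip.simple_tokenizer import SimpleTokenizer

tok = SimpleTokenizer()                        # loads bpe_simple_vocab_16e6.txt.gz from this package
ids = tok.encode("A photo of a person waving")
print(ids)                                     # list of BPE token ids (values depend on the vocab)
print(tok.decode(ids))                         # "a photo of a person waving " ('</w>' becomes a space)
```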
configs/config_challenge_test.yaml ADDED
@@ -0,0 +1,26 @@
+ DATA:
+   ROOT: '/data/vidlab_datasets/challenge_crop/rgb'
+   TRAIN_FILE: '/data/users/sdas/scripts/ViFi-CLIP/merged_dataset.csv'
+   VAL_FILE: '/data/vidlab_datasets/challenge_crop/split/val.csv'
+   DATASET: kinetics400
+   NUM_FRAMES: 16
+   NUM_CLASSES: 45
+   LABEL_LIST: '/data/users/sdas/scripts/ViFi-CLIP/labels/challenge.csv'
+ SAVE_FREQ: 5
+ MODEL:
+   ARCH: ViT-B/16
+   RESUME: './work_dirs/challenge_baseline_new/best.pth'
+ TRAIN:
+   BATCH_SIZE: 1 # BS 512
+   ACCUMULATION_STEPS: 64
+   EPOCHS: 30
+   LR: 2.2e-05
+ TEST:
+   MULTI_VIEW_INFERENCE: False
+   NUM_CLIP: 1
+   NUM_CROP: 1
+   ONLY_TEST: True
+ TRAINER:
+   ViFi_CLIP:
+     ZS_EVAL: False # Make True only during test mode to evaluate zero-shot vanilla CLIP performance
+     USE: "both" # "both" refers to complete fine-tuning of CLIP (text + image encoders)
configs/config_challenge_train.yaml ADDED
@@ -0,0 +1,23 @@
+ DATA:
+   ROOT: ''
+   TRAIN_FILE: '/data/users/sdas/scripts/ViFi-CLIP/merged_dataset_new.csv'
+   VAL_FILE: '/data/users/sdas/scripts/ViFi-CLIP/merged_dataset_new.csv'
+   DATASET: kinetics400
+   NUM_FRAMES: 16
+   NUM_CLASSES: 45
+   LABEL_LIST: '/data/users/sdas/scripts/ViFi-CLIP/labels/challenge.csv'
+ SAVE_FREQ: 30
+ MODEL:
+   ARCH: ViT-B/16
+ TRAIN:
+   BATCH_SIZE: 8 # BS 512 (see the effective-batch-size note below)
+   ACCUMULATION_STEPS: 8
+   EPOCHS: 30
+   LR: 2.2e-05
+ TEST:
+   MULTI_VIEW_INFERENCE: False
+   ONLY_TEST: False
+ TRAINER:
+   ViFi_CLIP:
+     ZS_EVAL: False # Make True only during test mode to evaluate zero-shot vanilla CLIP performance
+     USE: "both" # "both" refers to complete fine-tuning of CLIP (text + image encoders)
crop_person.py ADDED
@@ -0,0 +1,162 @@
+ '''
+ Imports and cropping utils
+ '''
+ import matplotlib.pyplot as plt
+ import numpy as np
+ import cv2
+ 
+ from ultralytics import YOLO
+ 
+ def process_frame(frame, tlc, brc, new_shape=None, slack=48, keypoints=None):
+     '''
+     Crop a frame given a bounding box. Return the cropped frame and the adjusted keypoints.
+     ** Arguments **
+     frame : np.ndarray
+         The frame to crop
+     tlc : tuple[int]
+         Top-left corner of the bounding box
+     brc : tuple[int]
+         Bottom-right corner of the bounding box
+     new_shape : tuple[int] (optional)
+         Shape to resize the crop to
+     slack : int (optional)
+         Padding to add to the bounding box
+     keypoints : np.ndarray (optional)
+         Keypoints associated with the frame
+     '''
+     frame_w, frame_h = frame.shape[1], frame.shape[0]
+     box_w, box_h = (brc[0] - tlc[0]), (brc[1] - tlc[1])
+ 
+     assert slack == 0 or slack % 2 == 0, "Slack must be divisible by 2"
+ 
+     # add slack to the bounding box
+     tlc = (tlc[0] - slack, tlc[1] - slack)
+     brc = (brc[0] + slack, brc[1] + slack)
+ 
+     bbox_contained_in_frame = True
+     if (tlc[0] < 0) or (tlc[1] < 0) or (brc[0] >= frame_w) or (brc[1] >= frame_h):
+         bbox_contained_in_frame = False
+ 
+     # pad the image if the bbox extends past the frame boundaries
+     if not bbox_contained_in_frame:
+         bsz = (box_h, box_h, box_w, box_w)  # border size (top, bot, left, right). We can always assume top=bot and left=right
+         frame = cv2.copyMakeBorder(frame, *bsz, cv2.BORDER_CONSTANT)
+     else:
+         bsz = (0, 0, 0, 0)
+ 
+     # adjust the top-left and bottom-right corners to match the padded image
+     tlc = tlc[0] + bsz[2], tlc[1] + bsz[0]
+     brc = brc[0] + bsz[2], brc[1] + bsz[0]
+ 
+     frame = frame[tlc[1]: brc[1],
+                   tlc[0]: brc[0]]
+ 
+     # adjust the frame keypoints to match the padded image
+     if keypoints is not None:
+         keypoints[:, 0] += bsz[2] - (tlc[0])
+         keypoints[:, 1] += bsz[0] - (brc[1] - box_h)
+ 
+     if new_shape:
+         cur_shape = frame.shape
+         frame = cv2.resize(frame, new_shape)
+ 
+         x_ratio, y_ratio = (new_shape[0] / cur_shape[1]), (new_shape[1] / cur_shape[0])
+ 
+         if keypoints is not None:
+             keypoints[:, 0] *= x_ratio
+             keypoints[:, 1] *= y_ratio
+ 
+     return frame, keypoints
+ 
+ '''
+ Loading the model
+ '''
+ # Load a pretrained YOLO model
+ model = YOLO("yolo11n.pt")
+ 
+ '''
+ !!! This is the code you should change to process the ETRI videos.
+ This code performs the cropping on a single video and saves the processed video to a directory. This is the function you should call for each video in ETRI.
+ '''
+ # vid_path = '/data/vidlab_datasets/eval_FO_ids/1986.mp4'
+ write_dir = '/data/vidlab_datasets/challenge_crop/'  # directory to write processed videos to; the filename will be the same as the original video's
+ write_shape = (224, 224)  # shape to resize the processed video to
+ 
+ def crop_and_save_video_yolo(vid_path, write_dir, write_shape):
+     # error logging
+     num_frames_skipped_nohuman = 0
+ 
+     # load the video
+     cap = cv2.VideoCapture(vid_path)
+     if not cap.isOpened():
+         print(f"Failed to open the video file at {vid_path}")  # fixed: was the undefined name `video_path`
+     else:
+         print("Video file opened successfully")
+     # width, height, FPS and frame count via the numeric CAP_PROP_* ids
+     w, h, frame_rate, num_frames = int(cap.get(3)), int(cap.get(4)), int(cap.get(5)), int(cap.get(7))
+ 
+     vid_filename = vid_path.split('/')[-1].replace('.avi', '.mp4')
+     writer = cv2.VideoWriter(f'{write_dir}/{vid_filename}', cv2.VideoWriter_fourcc(*'mp4v'), frame_rate, write_shape, True)
+ 
+     show_frame = False
+ 
+     while True:
+         ret, frame = cap.read()
+         if not ret:
+             break
+ 
+         if show_frame:  # debugging
+             frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+             plt.imshow(frame)
+             break
+ 
+         det_results = model(frame, verbose=False)
+         class_names = det_results[0].names  # e.g. {0: 'person', 1: 'bicycle', ...}
+ 
+         boxes = det_results[0].boxes.cpu().numpy()
+         boxes_xyxy = boxes.xyxy.astype(int)  # bounding boxes as integers, shape (N, 4) where N is the number of detected boxes
+         box_cls_ids = boxes.cls  # class ids (floats)
+         box_cls_names = [class_names[int(cls_id)] for cls_id in box_cls_ids]  # class names as strings; cast to int so the dict lookup works
+         box_conf = boxes.conf  # confidence scores
+ 
+         human_box_idxs = [1 if cls_name == 'person' else 0 for cls_name in box_cls_names]
+         human_box_idxs = np.array(human_box_idxs).astype(bool)
+         human_boxes = boxes_xyxy[human_box_idxs]
+ 
+         if human_boxes.size == 0:  # no humans detected
+             print("No humans detected in frame, skipping this frame in the video.")
+             num_frames_skipped_nohuman += 1
+             continue
+ 
+         if human_boxes.shape[0] > 2:  # more than 2 humans detected - but NTU contains 2 humans max
+             # print("More than 2 humans detected in frame, naively selecting the first 2")
+             human_boxes = human_boxes[:2]
+ 
+         if human_boxes.shape[0] == 2:  # two humans detected - one of the NTU actions with 2 people. We combine their boxes into a single box that encompasses both people
+             tlc = (min(human_boxes[0, 0], human_boxes[1, 0]), min(human_boxes[0, 1], human_boxes[1, 1]))
+             brc = (max(human_boxes[0, 2], human_boxes[1, 2]), max(human_boxes[0, 3], human_boxes[1, 3]))
+         else:
+             tlc = (human_boxes[0, 0], human_boxes[0, 1])
+             brc = (human_boxes[0, 2], human_boxes[0, 3])
+ 
+         new_frame, _ = process_frame(frame, tlc, brc, new_shape=write_shape)
+ 
+         writer.write(new_frame)
+ 
+     print(f"Number of frames skipped ({vid_path}) due to no humans detected: {num_frames_skipped_nohuman}")
+ 
+     writer.release()
+     cap.release()
+ 
+ import os
+ from pathlib import Path
+ 
+ path = '/data/vidlab_datasets/eval_FO_ids/'
+ vid_file = sorted(os.listdir(path))
+ #vid_ = vid_file.reverse()
+ #print(vid_)
+ for video in vid_file[1:]:
+     print(Path(write_dir+video))
+     if Path(write_dir+video).exists():
+         continue
+     else:
+         crop_and_save_video_yolo(path+video, write_dir, write_shape)
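
A quick, self-contained sanity check of `process_frame`'s padding logic (values are illustrative; this assumes `process_frame` from the file above is in scope, e.g. pasted into a REPL rather than importing the module, since importing would run the driver loop):

```python
import numpy as np

# A box near the image border: the slack expansion pushes it out of
# bounds, which exercises the copyMakeBorder padding path; the output
# is still resized to new_shape.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
crop, _ = process_frame(frame, tlc=(10, 10), brc=(110, 210),
                        new_shape=(224, 224), slack=48)
print(crop.shape)  # (224, 224, 3)
```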
datasets/__init__.py ADDED
File without changes
datasets/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (136 Bytes)
datasets/__pycache__/blending.cpython-37.pyc ADDED
Binary file (8.52 kB)
datasets/__pycache__/build.cpython-37.pyc ADDED
Binary file (10.4 kB)
datasets/__pycache__/pipeline.cpython-37.pyc ADDED
Binary file (74 kB)
datasets/__pycache__/rand_augment.cpython-37.pyc ADDED
Binary file (13.2 kB)
datasets/blending.py ADDED
@@ -0,0 +1,214 @@
+ from abc import ABCMeta, abstractmethod
+ 
+ import torch
+ import torch.nn.functional as F
+ from torch.distributions.beta import Beta
+ import numpy as np
+ 
+ 
+ def one_hot(x, num_classes, on_value=1., off_value=0., device='cuda'):
+     x = x.long().view(-1, 1)
+     return torch.full((x.size()[0], num_classes), off_value, device=device).scatter_(1, x, on_value)
+ 
+ 
+ class BaseMiniBatchBlending(metaclass=ABCMeta):
+     """Base class for Image Aliasing."""
+ 
+     def __init__(self, num_classes, smoothing=0.):
+         self.num_classes = num_classes
+         self.off_value = smoothing / self.num_classes
+         self.on_value = 1. - smoothing + self.off_value
+ 
+     @abstractmethod
+     def do_blending(self, imgs, label, **kwargs):
+         pass
+ 
+     def __call__(self, imgs, label, **kwargs):
+         """Blending data in a mini-batch.
+ 
+         Images are float tensors with the shape of (B, N, C, H, W) for 2D
+         recognizers or (B, N, C, T, H, W) for 3D recognizers.
+ 
+         Besides, labels are converted from hard labels to soft labels.
+         Hard labels are integer tensors with the shape of (B, 1) and all of the
+         elements are in the range [0, num_classes - 1].
+         Soft labels (probability distributions over classes) are float tensors
+         with the shape of (B, 1, num_classes) and all of the elements are in
+         the range [0, 1].
+ 
+         Args:
+             imgs (torch.Tensor): Model input images, float tensor with the
+                 shape of (B, N, C, H, W) or (B, N, C, T, H, W).
+             label (torch.Tensor): Hard labels, integer tensor with the shape
+                 of (B, 1) and all elements are in range [0, num_classes).
+             kwargs (dict, optional): Other keyword arguments used to blend
+                 imgs and labels in a mini-batch.
+ 
+         Returns:
+             mixed_imgs (torch.Tensor): Blended images, float tensor with the
+                 same shape as the input imgs.
+             mixed_label (torch.Tensor): Blended soft labels, float tensor with
+                 the shape of (B, 1, num_classes) and all elements are in range
+                 [0, 1].
+         """
+         one_hot_label = one_hot(label, num_classes=self.num_classes, on_value=self.on_value, off_value=self.off_value, device=label.device)
+ 
+         mixed_imgs, mixed_label = self.do_blending(imgs, one_hot_label,
+                                                    **kwargs)
+ 
+         return mixed_imgs, mixed_label
+ 
+ 
+ class MixupBlending(BaseMiniBatchBlending):
+     """Implementing Mixup in a mini-batch.
+ 
+     This module is proposed in `mixup: Beyond Empirical Risk Minimization
+     <https://arxiv.org/abs/1710.09412>`_.
+     Code Reference https://github.com/open-mmlab/mmclassification/blob/master/mmcls/models/utils/mixup.py # noqa
+ 
+     Args:
+         num_classes (int): The number of classes.
+         alpha (float): Parameters for Beta distribution.
+     """
+ 
+     def __init__(self, num_classes, alpha=.2, smoothing=0.):
+         super().__init__(num_classes=num_classes, smoothing=smoothing)
+         self.beta = Beta(alpha, alpha)
+ 
+     def do_blending(self, imgs, label, **kwargs):
+         """Blending images with mixup."""
+         assert len(kwargs) == 0, f'unexpected kwargs for mixup {kwargs}'
+ 
+         lam = self.beta.sample()
+         batch_size = imgs.size(0)
+         rand_index = torch.randperm(batch_size)
+ 
+         mixed_imgs = lam * imgs + (1 - lam) * imgs[rand_index, :]
+         mixed_label = lam * label + (1 - lam) * label[rand_index, :]
+ 
+         return mixed_imgs, mixed_label
+ 
+ 
+ class CutmixBlending(BaseMiniBatchBlending):
+     """Implementing Cutmix in a mini-batch.
+     This module is proposed in `CutMix: Regularization Strategy to Train Strong
+     Classifiers with Localizable Features <https://arxiv.org/abs/1905.04899>`_.
+     Code Reference https://github.com/clovaai/CutMix-PyTorch
+     Args:
+         num_classes (int): The number of classes.
+         alpha (float): Parameters for Beta distribution.
+     """
+ 
+     def __init__(self, num_classes, alpha=.2, smoothing=0.):
+         super().__init__(num_classes=num_classes, smoothing=smoothing)
+         self.beta = Beta(alpha, alpha)
+ 
+     @staticmethod
+     def rand_bbox(img_size, lam):
+         """Generate a random bounding box."""
+         w = img_size[-1]
+         h = img_size[-2]
+         cut_rat = torch.sqrt(1. - lam)
+         cut_w = torch.tensor(int(w * cut_rat))
+         cut_h = torch.tensor(int(h * cut_rat))
+ 
+         # uniform
+         cx = torch.randint(w, (1, ))[0]
+         cy = torch.randint(h, (1, ))[0]
+ 
+         bbx1 = torch.clamp(cx - cut_w // 2, 0, w)
+         bby1 = torch.clamp(cy - cut_h // 2, 0, h)
+         bbx2 = torch.clamp(cx + cut_w // 2, 0, w)
+         bby2 = torch.clamp(cy + cut_h // 2, 0, h)
+ 
+         return bbx1, bby1, bbx2, bby2
+ 
+     def do_blending(self, imgs, label, **kwargs):
+         """Blending images with cutmix."""
+         assert len(kwargs) == 0, f'unexpected kwargs for cutmix {kwargs}'
+ 
+         batch_size = imgs.size(0)
+         rand_index = torch.randperm(batch_size)
+         lam = self.beta.sample()
+ 
+         bbx1, bby1, bbx2, bby2 = self.rand_bbox(imgs.size(), lam)
+         imgs[:, ..., bby1:bby2, bbx1:bbx2] = imgs[rand_index, ..., bby1:bby2,
+                                                   bbx1:bbx2]
+         lam = 1 - (1.0 * (bbx2 - bbx1) * (bby2 - bby1) /
+                    (imgs.size()[-1] * imgs.size()[-2]))
+ 
+         label = lam * label + (1 - lam) * label[rand_index, :]
+ 
+         return imgs, label
+ 
+ 
+ class LabelSmoothing(BaseMiniBatchBlending):
+     def do_blending(self, imgs, label, **kwargs):
+         return imgs, label
+ 
+ 
+ class CutmixMixupBlending(BaseMiniBatchBlending):
+     def __init__(self, num_classes=400, smoothing=0.1, mixup_alpha=.8, cutmix_alpha=1, switch_prob=0.5):
+         super().__init__(num_classes=num_classes, smoothing=smoothing)
+         self.mixup_beta = Beta(mixup_alpha, mixup_alpha)
+         self.cutmix_beta = Beta(cutmix_alpha, cutmix_alpha)
+         self.switch_prob = switch_prob
+ 
+     @staticmethod
+     def rand_bbox(img_size, lam):
+         """Generate a random bounding box."""
+         w = img_size[-1]
+         h = img_size[-2]
+         cut_rat = torch.sqrt(1. - lam)
+         cut_w = torch.tensor(int(w * cut_rat))
+         cut_h = torch.tensor(int(h * cut_rat))
+ 
+         # uniform
+         cx = torch.randint(w, (1, ))[0]
+         cy = torch.randint(h, (1, ))[0]
+ 
+         bbx1 = torch.clamp(cx - cut_w // 2, 0, w)
+         bby1 = torch.clamp(cy - cut_h // 2, 0, h)
+         bbx2 = torch.clamp(cx + cut_w // 2, 0, w)
+         bby2 = torch.clamp(cy + cut_h // 2, 0, h)
+ 
+         return bbx1, bby1, bbx2, bby2
+ 
+     def do_cutmix(self, imgs, label, **kwargs):
+         """Blending images with cutmix."""
+         assert len(kwargs) == 0, f'unexpected kwargs for cutmix {kwargs}'
+ 
+         batch_size = imgs.size(0)
+         rand_index = torch.randperm(batch_size)
+         lam = self.cutmix_beta.sample()
+ 
+         bbx1, bby1, bbx2, bby2 = self.rand_bbox(imgs.size(), lam)
+         imgs[:, ..., bby1:bby2, bbx1:bbx2] = imgs[rand_index, ..., bby1:bby2,
+                                                   bbx1:bbx2]
+         lam = 1 - (1.0 * (bbx2 - bbx1) * (bby2 - bby1) /
+                    (imgs.size()[-1] * imgs.size()[-2]))
+ 
+         label = lam * label + (1 - lam) * label[rand_index, :]
+         return imgs, label
+ 
+     def do_mixup(self, imgs, label, **kwargs):
+         """Blending images with mixup."""
+         assert len(kwargs) == 0, f'unexpected kwargs for mixup {kwargs}'
+ 
+         lam = self.mixup_beta.sample()
+         batch_size = imgs.size(0)
+         rand_index = torch.randperm(batch_size)
+ 
+         mixed_imgs = lam * imgs + (1 - lam) * imgs[rand_index, :]
+         mixed_label = lam * label + (1 - lam) * label[rand_index, :]
+ 
+         return mixed_imgs, mixed_label
+ 
+     def do_blending(self, imgs, label, **kwargs):
+         """Blending images MViT-style: cutmix for half of the batches, mixup for the other half."""
+         assert len(kwargs) == 0, f'unexpected kwargs for cutmix_half_mixup {kwargs}'
+ 
+         if np.random.rand() < self.switch_prob:
+             return self.do_cutmix(imgs, label)
+         else:
+             return self.do_mixup(imgs, label)
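
A short sketch of how the blending classes above can be exercised on a dummy mini-batch (shapes follow the docstring; the class count of 45 matches the challenge configs, and the hyper-parameters shown are the class defaults):

```python
import torch
from datasets.blending import CutmixMixupBlending

# MViT-style blending: per batch, cutmix or mixup is chosen with switch_prob.
blender = CutmixMixupBlending(num_classes=45, smoothing=0.1,
                              mixup_alpha=0.8, cutmix_alpha=1.0, switch_prob=0.5)
imgs = torch.randn(4, 16, 3, 224, 224)      # (B, N, C, H, W)
labels = torch.randint(0, 45, (4, 1))       # hard labels
mixed_imgs, soft_labels = blender(imgs, labels)
print(mixed_imgs.shape, soft_labels.shape)  # same image shape, (4, 45) soft labels
```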
datasets/build.py ADDED
@@ -0,0 +1,316 @@
+ from logging import Logger
+ from functools import partial
+ import copy
+ import io
+ import os
+ import os.path as osp
+ import random
+ import shutil
+ import tarfile
+ import warnings
+ from abc import ABCMeta, abstractmethod
+ from collections import OrderedDict, defaultdict
+ from collections.abc import Mapping, Sequence
+ 
+ import mmcv
+ import numpy as np
+ import pandas as pd
+ import torch
+ import torch.distributed as dist
+ from mmcv.parallel import collate
+ from mmcv.utils import Registry, build_from_cfg
+ from torch.utils.data import DataLoader, Dataset
+ from torch.utils.data.dataloader import default_collate
+ 
+ from .pipeline import *
+ 
+ PIPELINES = Registry('pipeline')
+ img_norm_cfg = dict(
+     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
+ 
+ 
+ class BaseDataset(Dataset, metaclass=ABCMeta):
+     def __init__(self,
+                  ann_file,
+                  pipeline,
+                  repeat=1,
+                  data_prefix=None,
+                  test_mode=False,
+                  multi_class=False,
+                  num_classes=None,
+                  start_index=1,
+                  modality='RGB',
+                  sample_by_class=False,
+                  power=0,
+                  dynamic_length=False,):
+         super().__init__()
+         self.use_tar_format = True if ".tar" in data_prefix else False
+         data_prefix = data_prefix.replace(".tar", "")
+         self.ann_file = ann_file
+         self.repeat = repeat
+         self.data_prefix = osp.realpath(
+             data_prefix) if data_prefix is not None and osp.isdir(
+                 data_prefix) else data_prefix
+         self.test_mode = test_mode
+         self.multi_class = multi_class
+         self.num_classes = num_classes
+         self.start_index = start_index
+         self.modality = modality
+         self.sample_by_class = sample_by_class
+         self.power = power
+         self.dynamic_length = dynamic_length
+ 
+         assert not (self.multi_class and self.sample_by_class)
+ 
+         self.pipeline = Compose(pipeline)
+         self.video_infos = self.load_annotations()
+         if self.sample_by_class:
+             self.video_infos_by_class = self.parse_by_class()
+ 
+             class_prob = []
+             for _, samples in self.video_infos_by_class.items():
+                 class_prob.append(len(samples) / len(self.video_infos))
+             class_prob = [x**self.power for x in class_prob]
+ 
+             summ = sum(class_prob)
+             class_prob = [x / summ for x in class_prob]
+ 
+             self.class_prob = dict(zip(self.video_infos_by_class, class_prob))
+ 
+     @abstractmethod
+     def load_annotations(self):
+         """Load the annotation according to ann_file into video_infos."""
+ 
+     # json annotations already look like video_infos, so this func
+     # should be the same for every dataset
+     def load_json_annotations(self):
+         """Load a json annotation file to get video information."""
+         video_infos = mmcv.load(self.ann_file)
+         num_videos = len(video_infos)
+         path_key = 'frame_dir' if 'frame_dir' in video_infos[0] else 'filename'
+         for i in range(num_videos):
+             path_value = video_infos[i][path_key]
+             if self.data_prefix is not None:
+                 path_value = osp.join(self.data_prefix, path_value)
+             video_infos[i][path_key] = path_value
+             if self.multi_class:
+                 assert self.num_classes is not None
+             else:
+                 assert len(video_infos[i]['label']) == 1
+                 video_infos[i]['label'] = video_infos[i]['label'][0]
+         return video_infos
+ 
+     def parse_by_class(self):
+         video_infos_by_class = defaultdict(list)
+         for item in self.video_infos:
+             label = item['label']
+             video_infos_by_class[label].append(item)
+         return video_infos_by_class
+ 
+     @staticmethod
+     def label2array(num, label):
+         arr = np.zeros(num, dtype=np.float32)
+         arr[label] = 1.
+         return arr
+ 
+     @staticmethod
+     def dump_results(results, out):
+         """Dump data to json/yaml/pickle strings or files."""
+         return mmcv.dump(results, out)
+ 
+     def prepare_train_frames(self, idx):
+         """Prepare the frames for training given the index."""
+         results = copy.deepcopy(self.video_infos[idx])
+         results['modality'] = self.modality
+         results['start_index'] = self.start_index
+ 
+         # prepare tensor in getitem
+         # If HVU, type(results['label']) is dict
+         if self.multi_class and isinstance(results['label'], list):
+             onehot = torch.zeros(self.num_classes)
+             onehot[results['label']] = 1.
+             results['label'] = onehot
+ 
+         aug1 = self.pipeline(results)
+         if self.repeat > 1:
+             aug2 = self.pipeline(results)
+             ret = {"imgs": torch.cat((aug1['imgs'], aug2['imgs']), 0),
+                    "label": aug1['label'].repeat(2),
+                    }
+             return ret
+         else:
+             return aug1
+ 
+     def prepare_test_frames(self, idx):
+         """Prepare the frames for testing given the index."""
+         results = copy.deepcopy(self.video_infos[idx])
+         results['modality'] = self.modality
+         results['start_index'] = self.start_index
+ 
+         # prepare tensor in getitem
+         # If HVU, type(results['label']) is dict
+         if self.multi_class and isinstance(results['label'], list):
+             onehot = torch.zeros(self.num_classes)
+             onehot[results['label']] = 1.
+             results['label'] = onehot
+ 
+         return self.pipeline(results)
+ 
+     def __len__(self):
+         """Get the size of the dataset."""
+         return len(self.video_infos)
+ 
+     def __getitem__(self, idx):
+         """Get the sample for either training or testing given the index."""
+         if self.test_mode:
+             return self.prepare_test_frames(idx)
+ 
+         return self.prepare_train_frames(idx)
+ 
+ 
+ class VideoDataset(BaseDataset):
+     def __init__(self, ann_file, pipeline, labels_file, start_index=0, **kwargs):
+         super().__init__(ann_file, pipeline, start_index=start_index, **kwargs)
+         self.labels_file = labels_file
+ 
+     @property
+     def classes(self):
+         classes_all = pd.read_csv(self.labels_file)
+         print(classes_all)
+         return classes_all.values.tolist()
+ 
+     def load_annotations(self):
+         """Load the annotation file to get video information."""
+         if self.ann_file.endswith('.json'):
+             return self.load_json_annotations()
+ 
+         video_infos = []
+         with open(self.ann_file, 'r') as fin:
+             for line in fin:
+                 line_split = line.strip().split()
+                 if self.multi_class:
+                     assert self.num_classes is not None
+                     filename, label = line_split[0], line_split[1:]
+                     label = list(map(int, label))
+                 else:
+                     line_split = line_split[0].split(',')
+                     filename, label = line_split
+                     label = int(label)
+                 if self.data_prefix is not None:
+                     filename = osp.join(self.data_prefix, filename)
+                 video_infos.append(dict(filename=filename, label=label, tar=self.use_tar_format))
+         return video_infos
+ 
+ 
+ class SubsetRandomSampler(torch.utils.data.Sampler):
+     r"""Samples elements randomly from a given list of indices, without replacement.
+ 
+     Arguments:
+         indices (sequence): a sequence of indices
+     """
+ 
+     def __init__(self, indices):
+         self.epoch = 0
+         self.indices = indices
+ 
+     def __iter__(self):
+         return (self.indices[i] for i in torch.randperm(len(self.indices)))
+ 
+     def __len__(self):
+         return len(self.indices)
+ 
+     def set_epoch(self, epoch):
+         self.epoch = epoch
+ 
+ 
+ def mmcv_collate(batch, samples_per_gpu=1):
+     if not isinstance(batch, Sequence):
+         raise TypeError(f'{batch.dtype} is not supported.')
+     if isinstance(batch[0], Sequence):
+         transposed = zip(*batch)
+         return [collate(samples, samples_per_gpu) for samples in transposed]
+     elif isinstance(batch[0], Mapping):
+         return {
+             key: mmcv_collate([d[key] for d in batch], samples_per_gpu)
+             for key in batch[0]
+         }
+     else:
+         return default_collate(batch)
+ 
+ 
+ def build_dataloader(logger, config):
+     scale_resize = int(256 / 224 * config.DATA.INPUT_SIZE)
+ 
+     train_pipeline = [
+         dict(type='DecordInit'),
+         dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=config.DATA.NUM_FRAMES),
+         dict(type='DecordDecode'),
+         dict(type='Resize', scale=(-1, scale_resize)),
+         dict(
+             type='MultiScaleCrop',
+             input_size=config.DATA.INPUT_SIZE,
+             scales=(1, 0.875, 0.75, 0.66),
+             random_crop=False,
+             max_wh_scale_gap=1),
+         dict(type='Resize', scale=(config.DATA.INPUT_SIZE, config.DATA.INPUT_SIZE), keep_ratio=False),
+         dict(type='Flip', flip_ratio=0.5),
+         dict(type='ColorJitter', p=config.AUG.COLOR_JITTER),
+         dict(type='GrayScale', p=config.AUG.GRAY_SCALE),
+         dict(type='Normalize', **img_norm_cfg),
+         dict(type='FormatShape', input_format='NCHW'),
+         dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
+         dict(type='ToTensor', keys=['imgs', 'label']),
+     ]
+ 
+     train_data = VideoDataset(ann_file=config.DATA.TRAIN_FILE, data_prefix=config.DATA.ROOT,
+                               labels_file=config.DATA.LABEL_LIST, pipeline=train_pipeline)
+     num_tasks = dist.get_world_size()
+     global_rank = dist.get_rank()
+     sampler_train = torch.utils.data.DistributedSampler(
+         train_data, num_replicas=num_tasks, rank=global_rank, shuffle=True
+     )
+     train_loader = DataLoader(
+         train_data, sampler=sampler_train,
+         batch_size=config.TRAIN.BATCH_SIZE,
+         num_workers=16,
+         pin_memory=True,
+         drop_last=True,
+         collate_fn=partial(mmcv_collate, samples_per_gpu=config.TRAIN.BATCH_SIZE),
+     )
+ 
+     val_pipeline = [
+         dict(type='DecordInit'),
+         dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=config.DATA.NUM_FRAMES, test_mode=True),
+         dict(type='DecordDecode'),
+         dict(type='Resize', scale=(-1, scale_resize)),
+         dict(type='CenterCrop', crop_size=config.DATA.INPUT_SIZE),
+         dict(type='Normalize', **img_norm_cfg),
+         dict(type='FormatShape', input_format='NCHW'),
+         dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
+         dict(type='ToTensor', keys=['imgs'])
+     ]
+     if config.TEST.NUM_CROP == 3:
+         val_pipeline[3] = dict(type='Resize', scale=(-1, config.DATA.INPUT_SIZE))
+         val_pipeline[4] = dict(type='ThreeCrop', crop_size=config.DATA.INPUT_SIZE)
+     if config.TEST.NUM_CLIP > 1:
+         val_pipeline[1] = dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=config.DATA.NUM_FRAMES, multiview=config.TEST.NUM_CLIP)
+ 
+     val_data = VideoDataset(ann_file=config.DATA.VAL_FILE, data_prefix=config.DATA.ROOT, labels_file=config.DATA.LABEL_LIST, pipeline=val_pipeline)
+     indices = np.arange(dist.get_rank(), len(val_data), dist.get_world_size())
+     sampler_val = SubsetRandomSampler(indices)
+     val_loader = DataLoader(
+         val_data, sampler=sampler_val,  # fixed: pass the sampler object, not the raw index array
+         batch_size=2,
+         num_workers=16,
+         pin_memory=True,
+         drop_last=True,
+         collate_fn=partial(mmcv_collate, samples_per_gpu=2),
+     )
+ 
+     return train_data, val_data, train_loader, val_loader
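
`build_dataloader` calls `dist.get_world_size()` and `dist.get_rank()`, so a process group must exist before it runs, even in a single-GPU job. A minimal sketch (the backend, the address, and the `logger`/`config` objects, which normally come from `utils/logger.py` and `utils/config.py`, are assumptions):

```python
import torch.distributed as dist
from datasets.build import build_dataloader

# Single-process setup so the world_size/rank queries succeed
# (the launch scripts normally handle this; "nccl" assumes a GPU box).
if not dist.is_initialized():
    dist.init_process_group(backend="nccl", init_method="tcp://127.0.0.1:29500",
                            world_size=1, rank=0)
train_data, val_data, train_loader, val_loader = build_dataloader(logger, config)
```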
datasets/pipeline.py ADDED
@@ -0,0 +1,2362 @@
1
+ import io
2
+ import os
3
+ import os.path as osp
4
+ import shutil
5
+ import warnings
6
+ from collections.abc import Sequence
7
+ from mmcv.utils import Registry, build_from_cfg
8
+ from torch.utils.data import Dataset
9
+ import copy
10
+ import os.path as osp
11
+ import warnings
12
+ from abc import ABCMeta, abstractmethod
13
+ from collections import OrderedDict, defaultdict
14
+ import os.path as osp
15
+ import mmcv
16
+ import numpy as np
17
+ import torch
18
+ import tarfile
19
+ import timm.data as tdata
20
+ from torch.nn.modules.utils import _pair
21
+ import random
22
+ import torchvision
23
+ from PIL import Image
24
+ from .rand_augment import rand_augment_transform
25
+ from torchvision import transforms
26
+ from mmcv.fileio import FileClient
27
+
28
+ PIPELINES = Registry('pipeline')
29
+
30
+ def _init_lazy_if_proper(results, lazy):
31
+ """Initialize lazy operation properly.
32
+
33
+ Make sure that a lazy operation is properly initialized,
34
+ and avoid a non-lazy operation accidentally getting mixed in.
35
+
36
+ Required keys in results are "imgs" if "img_shape" not in results,
37
+ otherwise, Required keys in results are "img_shape", add or modified keys
38
+ are "img_shape", "lazy".
39
+ Add or modified keys in "lazy" are "original_shape", "crop_bbox", "flip",
40
+ "flip_direction", "interpolation".
41
+
42
+ Args:
43
+ results (dict): A dict stores data pipeline result.
44
+ lazy (bool): Determine whether to apply lazy operation. Default: False.
45
+ """
46
+
47
+ if 'img_shape' not in results:
48
+ results['img_shape'] = results['imgs'][0].shape[:2]
49
+ if lazy:
50
+ if 'lazy' not in results:
51
+ img_h, img_w = results['img_shape']
52
+ lazyop = dict()
53
+ lazyop['original_shape'] = results['img_shape']
54
+ lazyop['crop_bbox'] = np.array([0, 0, img_w, img_h],
55
+ dtype=np.float32)
56
+ lazyop['flip'] = False
57
+ lazyop['flip_direction'] = None
58
+ lazyop['interpolation'] = None
59
+ results['lazy'] = lazyop
60
+ else:
61
+ assert 'lazy' not in results, 'Use Fuse after lazy operations'
62
+
63
+
64
+ def _pil_interp(method):
65
+ if method == "bicubic":
66
+ return Image.BICUBIC
67
+ elif method == "lanczos":
68
+ return Image.LANCZOS
69
+ elif method == "hamming":
70
+ return Image.HAMMING
71
+ else:
72
+ return Image.BILINEAR
73
+
74
+
75
+ class EntityBoxRescale:
76
+
77
+ def __init__(self, scale_factor):
78
+ raise NotImplementedError(
79
+ 'This component should not be used in the '
80
+ 'data pipeline and is removed in PR #782. Details see '
81
+ 'https://github.com/open-mmlab/mmaction2/pull/782')
82
+
83
+
84
+ @PIPELINES.register_module()
85
+ class EntityBoxCrop:
86
+
87
+ def __init__(self, crop_bbox):
88
+ raise NotImplementedError(
89
+ 'This component should not be used in the '
90
+ 'data pipeline and is removed in PR #782. Details see '
91
+ 'https://github.com/open-mmlab/mmaction2/pull/782')
92
+
93
+
94
+ @PIPELINES.register_module()
95
+ class EntityBoxFlip:
96
+
97
+ def __init__(self, img_shape):
98
+ raise NotImplementedError(
99
+ 'This component should not be used in the '
100
+ 'data pipeline and is removed in PR #782. Details see '
101
+ 'https://github.com/open-mmlab/mmaction2/pull/782')
102
+
103
+
104
+ @PIPELINES.register_module()
105
+ class Imgaug:
106
+ """Imgaug augmentation.
107
+ Adds custom transformations from imgaug library.
108
+ Please visit `https://imgaug.readthedocs.io/en/latest/index.html`
109
+ to get more information. Two demo configs could be found in tsn and i3d
110
+ config folder.
111
+ It's better to use uint8 images as inputs since imgaug works best with
112
+ numpy dtype uint8 and isn't well tested with other dtypes. It should be
113
+ noted that not all of the augmenters have the same input and output dtype,
114
+ which may cause unexpected results.
115
+ Required keys are "imgs", "img_shape"(if "gt_bboxes" is not None) and
116
+ "modality", added or modified keys are "imgs", "img_shape", "gt_bboxes"
117
+ and "proposals".
118
+ It is worth mentioning that `Imgaug` will NOT create custom keys like
119
+ "interpolation", "crop_bbox", "flip_direction", etc. So when using
120
+ `Imgaug` along with other mmaction2 pipelines, we should pay more attention
121
+ to required keys.
122
+ Two steps to use `Imgaug` pipeline:
123
+ 1. Create initialization parameter `transforms`. There are three ways
124
+ to create `transforms`.
125
+ 1) string: only support `default` for now.
126
+ e.g. `transforms='default'`
127
+ 2) list[dict]: create a list of augmenters by a list of dicts, each
128
+ dict corresponds to one augmenter. Every dict MUST contain a key
129
+ named `type`. `type` should be a string(iaa.Augmenter's name) or
130
+ an iaa.Augmenter subclass.
131
+ e.g. `transforms=[dict(type='Rotate', rotate=(-20, 20))]`
132
+ e.g. `transforms=[dict(type=iaa.Rotate, rotate=(-20, 20))]`
133
+ 3) iaa.Augmenter: create an imgaug.Augmenter object.
134
+ e.g. `transforms=iaa.Rotate(rotate=(-20, 20))`
135
+ 2. Add `Imgaug` in dataset pipeline. It is recommended to insert imgaug
136
+ pipeline before `Normalize`. A demo pipeline is listed as follows.
137
+ ```
138
+ pipeline = [
139
+ dict(
140
+ type='SampleFrames',
141
+ clip_len=1,
142
+ frame_interval=1,
143
+ num_clips=16,
144
+ ),
145
+ dict(type='RawFrameDecode'),
146
+ dict(type='Resize', scale=(-1, 256)),
147
+ dict(
148
+ type='MultiScaleCrop',
149
+ input_size=224,
150
+ scales=(1, 0.875, 0.75, 0.66),
151
+ random_crop=False,
152
+ max_wh_scale_gap=1,
153
+ num_fixed_crops=13),
154
+ dict(type='Resize', scale=(224, 224), keep_ratio=False),
155
+ dict(type='Flip', flip_ratio=0.5),
156
+ dict(type='Imgaug', transforms='default'),
157
+ # dict(type='Imgaug', transforms=[
158
+ # dict(type='Rotate', rotate=(-20, 20))
159
+ # ]),
160
+ dict(type='Normalize', **img_norm_cfg),
161
+ dict(type='FormatShape', input_format='NCHW'),
162
+ dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
163
+ dict(type='ToTensor', keys=['imgs', 'label'])
164
+ ]
165
+ ```
166
+ Args:
167
+ transforms (str | list[dict] | :obj:`iaa.Augmenter`): Three different
168
+ ways to create imgaug augmenter.
169
+ """
170
+
171
+ def __init__(self, transforms):
172
+ import imgaug.augmenters as iaa
173
+
174
+ if transforms == 'default':
175
+ self.transforms = self.default_transforms()
176
+ elif isinstance(transforms, list):
177
+ assert all(isinstance(trans, dict) for trans in transforms)
178
+ self.transforms = transforms
179
+ elif isinstance(transforms, iaa.Augmenter):
180
+ self.aug = self.transforms = transforms
181
+ else:
182
+ raise ValueError('transforms must be `default` or a list of dicts'
183
+ ' or iaa.Augmenter object')
184
+
185
+ if not isinstance(transforms, iaa.Augmenter):
186
+ self.aug = iaa.Sequential(
187
+ [self.imgaug_builder(t) for t in self.transforms])
188
+
189
+ @staticmethod
190
+ def default_transforms():
191
+ """Default transforms for imgaug.
192
+ Implement RandAugment by imgaug.
193
+ Plase visit `https://arxiv.org/abs/1909.13719` for more information.
194
+ Augmenters and hyper parameters are borrowed from the following repo:
195
+ https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/autoaugment.py # noqa
196
+ Miss one augmenter ``SolarizeAdd`` since imgaug doesn't support this.
197
+ Returns:
198
+ dict: The constructed RandAugment transforms.
199
+ """
200
+ # RandAugment hyper params
201
+ num_augmenters = 2
202
+ cur_magnitude, max_magnitude = 9, 10
203
+ cur_level = 1.0 * cur_magnitude / max_magnitude
204
+
205
+ return [
206
+ dict(
207
+ type='SomeOf',
208
+ n=num_augmenters,
209
+ children=[
210
+ dict(
211
+ type='ShearX',
212
+ shear=17.19 * cur_level * random.choice([-1, 1])),
213
+ dict(
214
+ type='ShearY',
215
+ shear=17.19 * cur_level * random.choice([-1, 1])),
216
+ dict(
217
+ type='TranslateX',
218
+ percent=.2 * cur_level * random.choice([-1, 1])),
219
+ dict(
220
+ type='TranslateY',
221
+ percent=.2 * cur_level * random.choice([-1, 1])),
222
+ dict(
223
+ type='Rotate',
224
+ rotate=30 * cur_level * random.choice([-1, 1])),
225
+ dict(type='Posterize', nb_bits=max(1, int(4 * cur_level))),
226
+ dict(type='Solarize', threshold=256 * cur_level),
227
+ dict(type='EnhanceColor', factor=1.8 * cur_level + .1),
228
+ dict(type='EnhanceContrast', factor=1.8 * cur_level + .1),
229
+ dict(
230
+ type='EnhanceBrightness', factor=1.8 * cur_level + .1),
231
+ dict(type='EnhanceSharpness', factor=1.8 * cur_level + .1),
232
+ dict(type='Autocontrast', cutoff=0),
233
+ dict(type='Equalize'),
234
+ dict(type='Invert', p=1.),
235
+ dict(
236
+ type='Cutout',
237
+ nb_iterations=1,
238
+ size=0.2 * cur_level,
239
+ squared=True)
240
+ ])
241
+ ]
242
+
243
+ def imgaug_builder(self, cfg):
244
+ """Import a module from imgaug.
245
+ It follows the logic of :func:`build_from_cfg`. Use a dict object to
246
+ create an iaa.Augmenter object.
247
+ Args:
248
+ cfg (dict): Config dict. It should at least contain the key "type".
249
+ Returns:
250
+ obj:`iaa.Augmenter`: The constructed imgaug augmenter.
251
+ """
252
+ import imgaug.augmenters as iaa
253
+
254
+ assert isinstance(cfg, dict) and 'type' in cfg
255
+ args = cfg.copy()
256
+
257
+ obj_type = args.pop('type')
258
+ if mmcv.is_str(obj_type):
259
+ obj_cls = getattr(iaa, obj_type) if hasattr(iaa, obj_type) \
260
+ else getattr(iaa.pillike, obj_type)
261
+ elif issubclass(obj_type, iaa.Augmenter):
262
+ obj_cls = obj_type
263
+ else:
264
+ raise TypeError(
265
+ f'type must be a str or valid type, but got {type(obj_type)}')
266
+
267
+ if 'children' in args:
268
+ args['children'] = [
269
+ self.imgaug_builder(child) for child in args['children']
270
+ ]
271
+
272
+ return obj_cls(**args)
273
+
+     def __repr__(self):
+         repr_str = self.__class__.__name__ + f'(transforms={self.aug})'
+         return repr_str
+
+     def __call__(self, results):
+         assert results['modality'] == 'RGB', 'Imgaug only supports RGB images.'
+         in_type = results['imgs'][0].dtype.type
+
+         cur_aug = self.aug.to_deterministic()
+
+         results['imgs'] = [
+             cur_aug.augment_image(frame) for frame in results['imgs']
+         ]
+         img_h, img_w, _ = results['imgs'][0].shape
+
+         out_type = results['imgs'][0].dtype.type
+         assert in_type == out_type, \
+             ('Imgaug input dtype and output dtype are not the same. '
+              f'Convert from {in_type} to {out_type}')
+
+         if 'gt_bboxes' in results:
+             from imgaug.augmentables import bbs
+             bbox_list = [
+                 bbs.BoundingBox(
+                     x1=bbox[0], y1=bbox[1], x2=bbox[2], y2=bbox[3])
+                 for bbox in results['gt_bboxes']
+             ]
+             bboxes = bbs.BoundingBoxesOnImage(
+                 bbox_list, shape=results['img_shape'])
+             bbox_aug, *_ = cur_aug.augment_bounding_boxes([bboxes])
+             results['gt_bboxes'] = [[
+                 max(bbox.x1, 0),
+                 max(bbox.y1, 0),
+                 min(bbox.x2, img_w),
+                 min(bbox.y2, img_h)
+             ] for bbox in bbox_aug.items]
+             if 'proposals' in results:
+                 bbox_list = [
+                     bbs.BoundingBox(
+                         x1=bbox[0], y1=bbox[1], x2=bbox[2], y2=bbox[3])
+                     for bbox in results['proposals']
+                 ]
+                 bboxes = bbs.BoundingBoxesOnImage(
+                     bbox_list, shape=results['img_shape'])
+                 bbox_aug, *_ = cur_aug.augment_bounding_boxes([bboxes])
+                 results['proposals'] = [[
+                     max(bbox.x1, 0),
+                     max(bbox.y1, 0),
+                     min(bbox.x2, img_w),
+                     min(bbox.y2, img_h)
+                 ] for bbox in bbox_aug.items]
+
+         results['img_shape'] = (img_h, img_w)
+
+         return results
+
+
+ @PIPELINES.register_module()
+ class RandomErasing(tdata.random_erasing.RandomErasing):
+
+     def __init__(self, device='cpu', **args):
+         super().__init__(device=device, **args)
+
+     def __call__(self, results):
+         in_type = results['imgs'][0].dtype.type
+
+         rand_state = random.getstate()
+         torchrand_state = torch.get_rng_state()
+         numpyrand_state = np.random.get_state()
+         # not using cuda to preserve determinism
+
+         out_frame = []
+         for frame in results['imgs']:
+             # rewind all RNGs so the same erase region is drawn per frame
+             random.setstate(rand_state)
+             torch.set_rng_state(torchrand_state)
+             np.random.set_state(numpyrand_state)
+             frame = super().__call__(torch.from_numpy(frame).permute(2, 0, 1)).permute(1, 2, 0).numpy()
+             out_frame.append(frame)
+
+         results['imgs'] = out_frame
+         img_h, img_w, _ = results['imgs'][0].shape
+
+         out_type = results['imgs'][0].dtype.type
+         assert in_type == out_type, \
+             ('RandomErasing input dtype and output dtype are not the same. '
+              f'Convert from {in_type} to {out_type}')
+
+         if 'gt_bboxes' in results:
+             raise NotImplementedError('only support recognition now')
+         assert results['img_shape'] == (img_h, img_w)
+
+         return results
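The RNG-state bookkeeping above is what makes the erasing temporally consistent: rewinding the generators before each frame forces the same "random" erase rectangle onto every frame of the clip. A self-contained sketch of the trick:

```python
import torch

# Freezing the RNG state before each frame makes a random operation pick the
# same parameters every time, so one erase rectangle applies to a whole clip.
state = torch.get_rng_state()
draws = []
for _ in range(3):
    torch.set_rng_state(state)   # rewind -> identical "random" numbers
    draws.append(torch.rand(2))  # e.g. the (x, y) of an erase box
assert all(torch.equal(d, draws[0]) for d in draws)
```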
+
+
+ @PIPELINES.register_module()
+ class Fuse:
+     """Fuse lazy operations.
+
+     Fusion order:
+         crop -> resize -> flip
+
+     Required keys are "imgs", "img_shape" and "lazy", added or modified keys
+     are "imgs", "lazy".
+     Required keys in "lazy" are "crop_bbox", "interpolation", "flip_direction".
+     """
+
+     def __call__(self, results):
+         if 'lazy' not in results:
+             raise ValueError('No lazy operation detected')
+         lazyop = results['lazy']
+         imgs = results['imgs']
+
+         # crop
+         left, top, right, bottom = lazyop['crop_bbox'].round().astype(int)
+         imgs = [img[top:bottom, left:right] for img in imgs]
+
+         # resize
+         img_h, img_w = results['img_shape']
+         if lazyop['interpolation'] is None:
+             interpolation = 'bilinear'
+         else:
+             interpolation = lazyop['interpolation']
+         imgs = [
+             mmcv.imresize(img, (img_w, img_h), interpolation=interpolation)
+             for img in imgs
+         ]
+
+         # flip
+         if lazyop['flip']:
+             for img in imgs:
+                 mmcv.imflip_(img, lazyop['flip_direction'])
+
+         results['imgs'] = imgs
+         del results['lazy']
+
+         return results
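For context, `Fuse` is the terminal step of a lazy pipeline: the geometric transforms only record parameters, and the pixels are resampled exactly once here. A hypothetical pipeline fragment (illustrative, not this repo's actual config):

```python
# With lazy=True, crop/resize/flip only record their geometry; Fuse then
# decodes that geometry once per frame, so pixels are resampled a single
# time instead of three.
train_pipeline = [
    dict(type='RandomResizedCrop', lazy=True),
    dict(type='Resize', scale=(224, 224), keep_ratio=False, lazy=True),
    dict(type='Flip', flip_ratio=0.5, lazy=True),
    dict(type='Fuse'),  # crop -> resize -> flip applied here, in one pass
]
```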
+
+
+ @PIPELINES.register_module()
+ class RandomScale:
+     """Resize images by a random scale.
+
+     Required keys are "imgs", "img_shape", "modality", added or modified
+     keys are "imgs", "img_shape", "keep_ratio", "scale_factor", "lazy",
+     "scale", "resize_size". Required keys in "lazy" is None, added or
+     modified key is "interpolation".
+
+     Args:
+         scales (tuple[int]): Tuple of scales to be chosen for resize.
+         mode (str): Selection mode for choosing the scale. Options are "range"
+             and "value". If set to "range", the short edge will be randomly
+             chosen from the range of minimum and maximum on the shorter one
+             in all tuples. Otherwise, the longer edge will be randomly chosen
+             from the range of minimum and maximum on the longer one in all
+             tuples. Default: 'range'.
+     """
+
+     def __init__(self, scales, mode='range', **kwargs):
+         warnings.warn('"RandomScale" is deprecated and will be removed in '
+                       'later versions. It is currently not used in MMAction2')
+         self.mode = mode
+         if self.mode not in ['range', 'value']:
+             raise ValueError(f"mode should be 'range' or 'value', "
+                              f'but got {self.mode}')
+         self.scales = scales
+         self.kwargs = kwargs
+
+     def select_scale(self, scales):
+         num_scales = len(scales)
+         if num_scales == 1:
+             # specify a fixed scale
+             scale = scales[0]
+         elif num_scales == 2:
+             if self.mode == 'range':
+                 scale_long = [max(s) for s in scales]
+                 scale_short = [min(s) for s in scales]
+                 long_edge = np.random.randint(
+                     min(scale_long),
+                     max(scale_long) + 1)
+                 short_edge = np.random.randint(
+                     min(scale_short),
+                     max(scale_short) + 1)
+                 scale = (long_edge, short_edge)
+             elif self.mode == 'value':
+                 scale = random.choice(scales)
+         else:
+             if self.mode != 'value':
+                 raise ValueError("Only 'value' mode supports more than "
+                                  '2 image scales')
+             scale = random.choice(scales)
+
+         return scale
+
+     def __call__(self, results):
+         scale = self.select_scale(self.scales)
+         results['scale'] = scale
+         resize = Resize(scale, **self.kwargs)
+         results = resize(results)
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'scales={self.scales}, mode={self.mode})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class RandomCrop:
+     """Vanilla square random crop that specifies the output size.
+
+     Required keys in results are "img_shape", "keypoint" (optional), "imgs"
+     (optional), added or modified keys are "keypoint", "imgs", "lazy"; Required
+     keys in "lazy" are "flip", "crop_bbox", added or modified key is
+     "crop_bbox".
+
+     Args:
+         size (int): The output size of the images.
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+
+     def __init__(self, size, lazy=False):
+         if not isinstance(size, int):
+             raise TypeError(f'Size must be an int, but got {type(size)}')
+         self.size = size
+         self.lazy = lazy
+
+     @staticmethod
+     def _crop_kps(kps, crop_bbox):
+         return kps - crop_bbox[:2]
+
+     @staticmethod
+     def _crop_imgs(imgs, crop_bbox):
+         x1, y1, x2, y2 = crop_bbox
+         return [img[y1:y2, x1:x2] for img in imgs]
+
+     @staticmethod
+     def _box_crop(box, crop_bbox):
+         """Crop the bounding boxes according to the crop_bbox.
+
+         Args:
+             box (np.ndarray): The bounding boxes.
+             crop_bbox (np.ndarray): The bbox used to crop the original image.
+         """
+
+         x1, y1, x2, y2 = crop_bbox
+         img_w, img_h = x2 - x1, y2 - y1
+
+         box_ = box.copy()
+         box_[..., 0::2] = np.clip(box[..., 0::2] - x1, 0, img_w - 1)
+         box_[..., 1::2] = np.clip(box[..., 1::2] - y1, 0, img_h - 1)
+         return box_
+
+     def _all_box_crop(self, results, crop_bbox):
+         """Crop the gt_bboxes and proposals in results according to crop_bbox.
+
+         Args:
+             results (dict): All information about the sample, which contain
+                 'gt_bboxes' and 'proposals' (optional).
+             crop_bbox (np.ndarray): The bbox used to crop the original image.
+         """
+         results['gt_bboxes'] = self._box_crop(results['gt_bboxes'], crop_bbox)
+         if 'proposals' in results and results['proposals'] is not None:
+             assert results['proposals'].shape[1] == 4
+             results['proposals'] = self._box_crop(results['proposals'],
+                                                   crop_bbox)
+         return results
+
+     def __call__(self, results):
+         """Performs the RandomCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+
+         img_h, img_w = results['img_shape']
+         assert self.size <= img_h and self.size <= img_w
+
+         y_offset = 0
+         x_offset = 0
+         if img_h > self.size:
+             y_offset = int(np.random.randint(0, img_h - self.size))
+         if img_w > self.size:
+             x_offset = int(np.random.randint(0, img_w - self.size))
+
+         if 'crop_quadruple' not in results:
+             results['crop_quadruple'] = np.array(
+                 [0, 0, 1, 1],  # x, y, w, h
+                 dtype=np.float32)
+
+         x_ratio, y_ratio = x_offset / img_w, y_offset / img_h
+         w_ratio, h_ratio = self.size / img_w, self.size / img_h
+
+         old_crop_quadruple = results['crop_quadruple']
+         old_x_ratio, old_y_ratio = old_crop_quadruple[0], old_crop_quadruple[1]
+         old_w_ratio, old_h_ratio = old_crop_quadruple[2], old_crop_quadruple[3]
+         new_crop_quadruple = [
+             old_x_ratio + x_ratio * old_w_ratio,
+             old_y_ratio + y_ratio * old_h_ratio, w_ratio * old_w_ratio,
+             h_ratio * old_h_ratio
+         ]
+         results['crop_quadruple'] = np.array(
+             new_crop_quadruple, dtype=np.float32)
+
+         new_h, new_w = self.size, self.size
+
+         crop_bbox = np.array(
+             [x_offset, y_offset, x_offset + new_w, y_offset + new_h])
+         results['crop_bbox'] = crop_bbox
+
+         results['img_shape'] = (new_h, new_w)
+
+         if not self.lazy:
+             if 'keypoint' in results:
+                 results['keypoint'] = self._crop_kps(results['keypoint'],
+                                                      crop_bbox)
+             if 'imgs' in results:
+                 results['imgs'] = self._crop_imgs(results['imgs'], crop_bbox)
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Put Flip at last for now')
+
+             # record crop_bbox in lazyop dict to ensure only crop once in Fuse
+             lazy_left, lazy_top, lazy_right, lazy_bottom = lazyop['crop_bbox']
+             left = x_offset * (lazy_right - lazy_left) / img_w
+             right = (x_offset + new_w) * (lazy_right - lazy_left) / img_w
+             top = y_offset * (lazy_bottom - lazy_top) / img_h
+             bottom = (y_offset + new_h) * (lazy_bottom - lazy_top) / img_h
+             lazyop['crop_bbox'] = np.array([(lazy_left + left),
+                                             (lazy_top + top),
+                                             (lazy_left + right),
+                                             (lazy_top + bottom)],
+                                            dtype=np.float32)
+
+         # Process entity boxes
+         if 'gt_bboxes' in results:
+             assert not self.lazy
+             results = self._all_box_crop(results, results['crop_bbox'])
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}(size={self.size}, '
+                     f'lazy={self.lazy})')
+         return repr_str
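The `crop_quadruple` update above composes successive crops in the normalized coordinates of the original image: offsets are shifted into the old window and extents are multiplied. A small sketch of the composition rule:

```python
# Sketch: composing two normalized crops the way the quadruple update does.
# Each quadruple is (x, y, w, h) relative to the *original* image.
def compose(old, new):
    ox, oy, ow, oh = old
    nx, ny, nw, nh = new  # relative to the current (already-cropped) image
    return (ox + nx * ow, oy + ny * oh, nw * ow, nh * oh)

full = (0.0, 0.0, 1.0, 1.0)
first = compose(full, (0.25, 0.25, 0.5, 0.5))  # central half
second = compose(first, (0.5, 0.5, 0.5, 0.5))  # its lower-right quarter
print(second)  # (0.5, 0.5, 0.25, 0.25) in original-image coordinates
```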
+
+
+ @PIPELINES.register_module()
+ class RandomResizedCrop(RandomCrop):
+     """Random crop that specifies the area and aspect-ratio range.
+
+     Required keys in results are "img_shape", "crop_bbox", "imgs" (optional),
+     "keypoint" (optional), added or modified keys are "imgs", "keypoint",
+     "crop_bbox" and "lazy"; Required keys in "lazy" are "flip", "crop_bbox",
+     added or modified key is "crop_bbox".
+
+     Args:
+         area_range (Tuple[float]): The candidate area scales range of
+             output cropped images. Default: (0.08, 1.0).
+         aspect_ratio_range (Tuple[float]): The candidate aspect ratio range of
+             output cropped images. Default: (3 / 4, 4 / 3).
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+
+     def __init__(self,
+                  area_range=(0.08, 1.0),
+                  aspect_ratio_range=(3 / 4, 4 / 3),
+                  lazy=False):
+         self.area_range = area_range
+         self.aspect_ratio_range = aspect_ratio_range
+         self.lazy = lazy
+         if not mmcv.is_tuple_of(self.area_range, float):
+             raise TypeError(f'Area_range must be a tuple of float, '
+                             f'but got {type(area_range)}')
+         if not mmcv.is_tuple_of(self.aspect_ratio_range, float):
+             raise TypeError(f'Aspect_ratio_range must be a tuple of float, '
+                             f'but got {type(aspect_ratio_range)}')
+
+     @staticmethod
+     def get_crop_bbox(img_shape,
+                       area_range,
+                       aspect_ratio_range,
+                       max_attempts=10):
+         """Get a crop bbox given the area range and aspect ratio range.
+
+         Args:
+             img_shape (Tuple[int]): Image shape
+             area_range (Tuple[float]): The candidate area scales range of
+                 output cropped images. Default: (0.08, 1.0).
+             aspect_ratio_range (Tuple[float]): The candidate aspect
+                 ratio range of output cropped images. Default: (3 / 4, 4 / 3).
+             max_attempts (int): Max number of attempts to generate a random
+                 candidate bounding box. If no attempt qualifies, the center
+                 square crop is used instead. Default: 10.
+         Returns:
+             (list[int]) A random crop bbox within the area range and aspect
+             ratio range.
+         """
+         assert 0 < area_range[0] <= area_range[1] <= 1
+         assert 0 < aspect_ratio_range[0] <= aspect_ratio_range[1]
+
+         img_h, img_w = img_shape
+         area = img_h * img_w
+
+         min_ar, max_ar = aspect_ratio_range
+         aspect_ratios = np.exp(
+             np.random.uniform(
+                 np.log(min_ar), np.log(max_ar), size=max_attempts))
+         target_areas = np.random.uniform(*area_range, size=max_attempts) * area
+         candidate_crop_w = np.round(np.sqrt(target_areas *
+                                             aspect_ratios)).astype(np.int32)
+         candidate_crop_h = np.round(np.sqrt(target_areas /
+                                             aspect_ratios)).astype(np.int32)
+
+         for i in range(max_attempts):
+             crop_w = candidate_crop_w[i]
+             crop_h = candidate_crop_h[i]
+             if crop_h <= img_h and crop_w <= img_w:
+                 x_offset = random.randint(0, img_w - crop_w)
+                 y_offset = random.randint(0, img_h - crop_h)
+                 return x_offset, y_offset, x_offset + crop_w, y_offset + crop_h
+
+         # Fallback: center square crop
+         crop_size = min(img_h, img_w)
+         x_offset = (img_w - crop_size) // 2
+         y_offset = (img_h - crop_size) // 2
+         return x_offset, y_offset, x_offset + crop_size, y_offset + crop_size
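Note that `get_crop_bbox` samples aspect ratios log-uniformly, which makes a ratio and its reciprocal equally likely; a plain uniform draw over the same interval would be biased toward wide crops. A quick check:

```python
import numpy as np

# Log-uniform draw over [3/4, 4/3]: "wider than tall" and "taller than
# wide" come out equally often.
rng = np.random.default_rng(0)
ratios = np.exp(rng.uniform(np.log(3 / 4), np.log(4 / 3), size=100_000))
print((ratios < 1).mean())  # ~0.5
```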
+
+     def __call__(self, results):
+         """Performs the RandomResizedCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+
+         img_h, img_w = results['img_shape']
+
+         left, top, right, bottom = self.get_crop_bbox(
+             (img_h, img_w), self.area_range, self.aspect_ratio_range)
+         new_h, new_w = bottom - top, right - left
+
+         if 'crop_quadruple' not in results:
+             results['crop_quadruple'] = np.array(
+                 [0, 0, 1, 1],  # x, y, w, h
+                 dtype=np.float32)
+
+         x_ratio, y_ratio = left / img_w, top / img_h
+         w_ratio, h_ratio = new_w / img_w, new_h / img_h
+
+         old_crop_quadruple = results['crop_quadruple']
+         old_x_ratio, old_y_ratio = old_crop_quadruple[0], old_crop_quadruple[1]
+         old_w_ratio, old_h_ratio = old_crop_quadruple[2], old_crop_quadruple[3]
+         new_crop_quadruple = [
+             old_x_ratio + x_ratio * old_w_ratio,
+             old_y_ratio + y_ratio * old_h_ratio, w_ratio * old_w_ratio,
+             h_ratio * old_h_ratio
+         ]
+         results['crop_quadruple'] = np.array(
+             new_crop_quadruple, dtype=np.float32)
+
+         crop_bbox = np.array([left, top, right, bottom])
+         results['crop_bbox'] = crop_bbox
+         results['img_shape'] = (new_h, new_w)
+
+         if not self.lazy:
+             if 'keypoint' in results:
+                 results['keypoint'] = self._crop_kps(results['keypoint'],
+                                                      crop_bbox)
+             if 'imgs' in results:
+                 results['imgs'] = self._crop_imgs(results['imgs'], crop_bbox)
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Put Flip at last for now')
+
+             # record crop_bbox in lazyop dict to ensure only crop once in Fuse
+             lazy_left, lazy_top, lazy_right, lazy_bottom = lazyop['crop_bbox']
+             left = left * (lazy_right - lazy_left) / img_w
+             right = right * (lazy_right - lazy_left) / img_w
+             top = top * (lazy_bottom - lazy_top) / img_h
+             bottom = bottom * (lazy_bottom - lazy_top) / img_h
+             lazyop['crop_bbox'] = np.array([(lazy_left + left),
+                                             (lazy_top + top),
+                                             (lazy_left + right),
+                                             (lazy_top + bottom)],
+                                            dtype=np.float32)
+
+         if 'gt_bboxes' in results:
+             assert not self.lazy
+             results = self._all_box_crop(results, results['crop_bbox'])
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'area_range={self.area_range}, '
+                     f'aspect_ratio_range={self.aspect_ratio_range}, '
+                     f'lazy={self.lazy})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class MultiScaleCrop(RandomCrop):
+     """Crop images with a list of randomly selected scales.
+
+     Randomly select the w and h scales from a list of scales. Scale of 1 means
+     the base size, which is the minimal of image width and height. The scale
+     level of w and h is controlled to be smaller than a certain value to
+     prevent too large or small aspect ratio.
+
+     Required keys are "img_shape", "imgs" (optional), "keypoint" (optional),
+     added or modified keys are "imgs", "crop_bbox", "img_shape", "lazy" and
+     "scales". Required keys in "lazy" are "crop_bbox", added or modified key is
+     "crop_bbox".
+
+     Args:
+         input_size (int | tuple[int]): (w, h) of network input.
+         scales (tuple[float]): width and height scales to be selected.
+         max_wh_scale_gap (int): Maximum gap of w and h scale levels.
+             Default: 1.
+         random_crop (bool): If set to True, the cropping bbox will be randomly
+             sampled, otherwise it will be sampled from fixed regions.
+             Default: False.
+         num_fixed_crops (int): If set to 5, the cropping bbox will keep 5
+             basic fixed regions: "upper left", "upper right", "lower left",
+             "lower right", "center". If set to 13, the cropping bbox will
+             append another 8 fixed regions: "center left", "center right",
+             "lower center", "upper center", "upper left quarter",
+             "upper right quarter", "lower left quarter", "lower right quarter".
+             Default: 5.
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+
+     def __init__(self,
+                  input_size,
+                  scales=(1, ),
+                  max_wh_scale_gap=1,
+                  random_crop=False,
+                  num_fixed_crops=5,
+                  lazy=False):
+         self.input_size = _pair(input_size)
+         if not mmcv.is_tuple_of(self.input_size, int):
+             raise TypeError(f'Input_size must be int or tuple of int, '
+                             f'but got {type(input_size)}')
+
+         if not isinstance(scales, tuple):
+             raise TypeError(f'Scales must be tuple, but got {type(scales)}')
+
+         if num_fixed_crops not in [5, 13]:
+             raise ValueError(f'Num_fix_crops must be in {[5, 13]}, '
+                              f'but got {num_fixed_crops}')
+
+         self.scales = scales
+         self.max_wh_scale_gap = max_wh_scale_gap
+         self.random_crop = random_crop
+         self.num_fixed_crops = num_fixed_crops
+         self.lazy = lazy
+
+     def __call__(self, results):
+         """Performs the MultiScaleCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+
+         img_h, img_w = results['img_shape']
+         base_size = min(img_h, img_w)
+         crop_sizes = [int(base_size * s) for s in self.scales]
+
+         candidate_sizes = []
+         for i, h in enumerate(crop_sizes):
+             for j, w in enumerate(crop_sizes):
+                 if abs(i - j) <= self.max_wh_scale_gap:
+                     candidate_sizes.append([w, h])
+
+         crop_size = random.choice(candidate_sizes)
+         for i in range(2):
+             if abs(crop_size[i] - self.input_size[i]) < 3:
+                 crop_size[i] = self.input_size[i]
+
+         crop_w, crop_h = crop_size
+
+         if self.random_crop:
+             x_offset = random.randint(0, img_w - crop_w)
+             y_offset = random.randint(0, img_h - crop_h)
+         else:
+             w_step = (img_w - crop_w) // 4
+             h_step = (img_h - crop_h) // 4
+             candidate_offsets = [
+                 (0, 0),  # upper left
+                 (4 * w_step, 0),  # upper right
+                 (0, 4 * h_step),  # lower left
+                 (4 * w_step, 4 * h_step),  # lower right
+                 (2 * w_step, 2 * h_step),  # center
+             ]
+             if self.num_fixed_crops == 13:
+                 extra_candidate_offsets = [
+                     (0, 2 * h_step),  # center left
+                     (4 * w_step, 2 * h_step),  # center right
+                     (2 * w_step, 4 * h_step),  # lower center
+                     (2 * w_step, 0 * h_step),  # upper center
+                     (1 * w_step, 1 * h_step),  # upper left quarter
+                     (3 * w_step, 1 * h_step),  # upper right quarter
+                     (1 * w_step, 3 * h_step),  # lower left quarter
+                     (3 * w_step, 3 * h_step)  # lower right quarter
+                 ]
+                 candidate_offsets.extend(extra_candidate_offsets)
+             x_offset, y_offset = random.choice(candidate_offsets)
+
+         new_h, new_w = crop_h, crop_w
+
+         crop_bbox = np.array(
+             [x_offset, y_offset, x_offset + new_w, y_offset + new_h])
+         results['crop_bbox'] = crop_bbox
+         results['img_shape'] = (new_h, new_w)
+         results['scales'] = self.scales
+
+         if 'crop_quadruple' not in results:
+             results['crop_quadruple'] = np.array(
+                 [0, 0, 1, 1],  # x, y, w, h
+                 dtype=np.float32)
+
+         x_ratio, y_ratio = x_offset / img_w, y_offset / img_h
+         w_ratio, h_ratio = new_w / img_w, new_h / img_h
+
+         old_crop_quadruple = results['crop_quadruple']
+         old_x_ratio, old_y_ratio = old_crop_quadruple[0], old_crop_quadruple[1]
+         old_w_ratio, old_h_ratio = old_crop_quadruple[2], old_crop_quadruple[3]
+         new_crop_quadruple = [
+             old_x_ratio + x_ratio * old_w_ratio,
+             old_y_ratio + y_ratio * old_h_ratio, w_ratio * old_w_ratio,
+             h_ratio * old_h_ratio
+         ]
+         results['crop_quadruple'] = np.array(
+             new_crop_quadruple, dtype=np.float32)
+
+         if not self.lazy:
+             if 'keypoint' in results:
+                 results['keypoint'] = self._crop_kps(results['keypoint'],
+                                                      crop_bbox)
+             if 'imgs' in results:
+                 results['imgs'] = self._crop_imgs(results['imgs'], crop_bbox)
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Put Flip at last for now')
+
+             # record crop_bbox in lazyop dict to ensure only crop once in Fuse
+             lazy_left, lazy_top, lazy_right, lazy_bottom = lazyop['crop_bbox']
+             left = x_offset * (lazy_right - lazy_left) / img_w
+             right = (x_offset + new_w) * (lazy_right - lazy_left) / img_w
+             top = y_offset * (lazy_bottom - lazy_top) / img_h
+             bottom = (y_offset + new_h) * (lazy_bottom - lazy_top) / img_h
+             lazyop['crop_bbox'] = np.array([(lazy_left + left),
+                                             (lazy_top + top),
+                                             (lazy_left + right),
+                                             (lazy_top + bottom)],
+                                            dtype=np.float32)
+
+         if 'gt_bboxes' in results:
+             assert not self.lazy
+             results = self._all_box_crop(results, results['crop_bbox'])
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'input_size={self.input_size}, scales={self.scales}, '
+                     f'max_wh_scale_gap={self.max_wh_scale_gap}, '
+                     f'random_crop={self.random_crop}, '
+                     f'num_fixed_crops={self.num_fixed_crops}, '
+                     f'lazy={self.lazy})')
+         return repr_str
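To see what the candidate-size loop produces, here is a worked example with illustrative values (`base_size=224`, `scales=(1, 0.875, 0.75)`, `max_wh_scale_gap=1`):

```python
# Width/height scale indices may differ by at most max_wh_scale_gap, which
# bounds how distorted the crop's aspect ratio can get.
base_size = 224
scales = (1, 0.875, 0.75)
crop_sizes = [int(base_size * s) for s in scales]  # [224, 196, 168]
candidates = [[w, h]
              for i, h in enumerate(crop_sizes)
              for j, w in enumerate(crop_sizes)
              if abs(i - j) <= 1]
print(candidates)  # 7 (w, h) pairs, e.g. [224, 224], [196, 224], ...
```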
+
+
+ @PIPELINES.register_module()
+ class Resize:
+     """Resize images to a specific size.
+
+     Required keys are "img_shape", "modality", "imgs" (optional), "keypoint"
+     (optional), added or modified keys are "imgs", "img_shape", "keep_ratio",
+     "scale_factor", "lazy", "resize_size". Required keys in "lazy" is None,
+     added or modified key is "interpolation".
+
+     Args:
+         scale (float | Tuple[int]): If keep_ratio is True, it serves as scaling
+             factor or maximum size:
+             If it is a float number, the image will be rescaled by this
+             factor, else if it is a tuple of 2 integers, the image will
+             be rescaled as large as possible within the scale.
+             Otherwise, it serves as (w, h) of output size.
+         keep_ratio (bool): If set to True, Images will be resized without
+             changing the aspect ratio. Otherwise, it will resize images to a
+             given size. Default: True.
+         interpolation (str): Algorithm used for interpolation:
+             "nearest" | "bilinear". Default: "bilinear".
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+
+     def __init__(self,
+                  scale,
+                  keep_ratio=True,
+                  interpolation='bilinear',
+                  lazy=False):
+         if isinstance(scale, float):
+             if scale <= 0:
+                 raise ValueError(f'Invalid scale {scale}, must be positive.')
+         elif isinstance(scale, tuple):
+             max_long_edge = max(scale)
+             max_short_edge = min(scale)
+             if max_short_edge == -1:
+                 # assign np.inf to long edge for rescaling short edge later.
+                 scale = (np.inf, max_long_edge)
+         else:
+             raise TypeError(
+                 f'Scale must be float or tuple of int, but got {type(scale)}')
+         self.scale = scale
+         self.keep_ratio = keep_ratio
+         self.interpolation = interpolation
+         self.lazy = lazy
+
+     def _resize_imgs(self, imgs, new_w, new_h):
+         return [
+             mmcv.imresize(
+                 img, (new_w, new_h), interpolation=self.interpolation)
+             for img in imgs
+         ]
+
+     @staticmethod
+     def _resize_kps(kps, scale_factor):
+         return kps * scale_factor
+
+     @staticmethod
+     def _box_resize(box, scale_factor):
+         """Rescale the bounding boxes according to the scale_factor.
+
+         Args:
+             box (np.ndarray): The bounding boxes.
+             scale_factor (np.ndarray): The scale factor used for rescaling.
+         """
+         assert len(scale_factor) == 2
+         scale_factor = np.concatenate([scale_factor, scale_factor])
+         return box * scale_factor
+
+     def __call__(self, results):
+         """Performs the Resize augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+
+         if 'scale_factor' not in results:
+             results['scale_factor'] = np.array([1, 1], dtype=np.float32)
+         img_h, img_w = results['img_shape']
+
+         if self.keep_ratio:
+             new_w, new_h = mmcv.rescale_size((img_w, img_h), self.scale)
+         else:
+             new_w, new_h = self.scale
+         self.scale_factor = np.array([new_w / img_w, new_h / img_h],
+                                      dtype=np.float32)
+
+         results['img_shape'] = (new_h, new_w)
+         results['keep_ratio'] = self.keep_ratio
+         results['scale_factor'] = results['scale_factor'] * self.scale_factor
+
+         if not self.lazy:
+             if 'imgs' in results:
+                 results['imgs'] = self._resize_imgs(results['imgs'], new_w,
+                                                     new_h)
+             if 'keypoint' in results:
+                 results['keypoint'] = self._resize_kps(results['keypoint'],
+                                                        self.scale_factor)
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Put Flip at last for now')
+             lazyop['interpolation'] = self.interpolation
+
+         if 'gt_bboxes' in results:
+             assert not self.lazy
+             results['gt_bboxes'] = self._box_resize(results['gt_bboxes'],
+                                                     self.scale_factor)
+             if 'proposals' in results and results['proposals'] is not None:
+                 assert results['proposals'].shape[1] == 4
+                 results['proposals'] = self._box_resize(
+                     results['proposals'], self.scale_factor)
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'scale={self.scale}, keep_ratio={self.keep_ratio}, '
+                     f'interpolation={self.interpolation}, '
+                     f'lazy={self.lazy})')
+         return repr_str
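The common configuration is short-side resizing: `scale=(-1, 256)` is rewritten in `__init__` to `(np.inf, 256)`, and `mmcv.rescale_size` then scales the short edge to 256 while preserving the aspect ratio. A sketch, assuming mmcv is installed:

```python
import mmcv

# A 1280x720 frame with scale=(-1, 256): the short edge (720) maps to 256
# and the long edge follows proportionally.
new_size = mmcv.rescale_size((1280, 720), (float('inf'), 256))
print(new_size)  # (455, 256)
```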
+
+
+ @PIPELINES.register_module()
+ class RandomRescale:
+     """Randomly resize images so that the short_edge is resized to a specific
+     size in a given range. The aspect ratio is unchanged after resizing.
+
+     Required keys are "imgs", "img_shape", "modality", added or modified
+     keys are "imgs", "img_shape", "keep_ratio", "scale_factor", "resize_size",
+     "short_edge".
+
+     Args:
+         scale_range (tuple[int]): The range of short edge length. A closed
+             interval.
+         interpolation (str): Algorithm used for interpolation:
+             "nearest" | "bilinear". Default: "bilinear".
+     """
+
+     def __init__(self, scale_range, interpolation='bilinear'):
+         self.scale_range = scale_range
+         # make sure scale_range is legal, first make sure the type is OK
+         assert mmcv.is_tuple_of(scale_range, int)
+         assert len(scale_range) == 2
+         assert scale_range[0] < scale_range[1]
+         assert np.all([x > 0 for x in scale_range])
+
+         self.keep_ratio = True
+         self.interpolation = interpolation
+
+     def __call__(self, results):
+         """Performs the Resize augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         short_edge = np.random.randint(self.scale_range[0],
+                                        self.scale_range[1] + 1)
+         resize = Resize((-1, short_edge),
+                         keep_ratio=True,
+                         interpolation=self.interpolation,
+                         lazy=False)
+         results = resize(results)
+
+         results['short_edge'] = short_edge
+         return results
+
+     def __repr__(self):
+         scale_range = self.scale_range
+         repr_str = (f'{self.__class__.__name__}('
+                     f'scale_range=({scale_range[0]}, {scale_range[1]}), '
+                     f'interpolation={self.interpolation})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class Flip:
+     """Flip the input images with a probability.
+
+     Reverse the order of elements in the given imgs with a specific direction.
+     The shape of the imgs is preserved, but the elements are reordered.
+
+     Required keys are "img_shape", "modality", "imgs" (optional), "keypoint"
+     (optional), added or modified keys are "imgs", "keypoint", "lazy" and
+     "flip_direction". Required keys in "lazy" is None, added or modified keys
+     are "flip" and "flip_direction". The Flip augmentation should be placed
+     after any cropping / reshaping augmentations, to make sure crop_quadruple
+     is calculated properly.
+
+     Args:
+         flip_ratio (float): Probability of implementing flip. Default: 0.5.
+         direction (str): Flip imgs horizontally or vertically. Options are
+             "horizontal" | "vertical". Default: "horizontal".
+         flip_label_map (Dict[int, int] | None): Transform the label of the
+             flipped image with the specific label. Default: None.
+         left_kp (list[int]): Indexes of left keypoints, used to flip keypoints.
+             Default: None.
+         right_kp (list[int]): Indexes of right keypoints, used to flip
+             keypoints. Default: None.
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+     _directions = ['horizontal', 'vertical']
+
+     def __init__(self,
+                  flip_ratio=0.5,
+                  direction='horizontal',
+                  flip_label_map=None,
+                  left_kp=None,
+                  right_kp=None,
+                  lazy=False):
+         if direction not in self._directions:
+             raise ValueError(f'Direction {direction} is not supported. '
+                              f'Currently supported ones are {self._directions}')
+         self.flip_ratio = flip_ratio
+         self.direction = direction
+         self.flip_label_map = flip_label_map
+         self.left_kp = left_kp
+         self.right_kp = right_kp
+         self.lazy = lazy
+
+     def _flip_imgs(self, imgs, modality):
+         _ = [mmcv.imflip_(img, self.direction) for img in imgs]
+         lt = len(imgs)
+         if modality == 'Flow':
+             # The 1st frame of each 2 frames is flow-x
+             for i in range(0, lt, 2):
+                 imgs[i] = mmcv.iminvert(imgs[i])
+         return imgs
+
+     def _flip_kps(self, kps, kpscores, img_width):
+         kp_x = kps[..., 0]
+         kp_x[kp_x != 0] = img_width - kp_x[kp_x != 0]
+         new_order = list(range(kps.shape[2]))
+         if self.left_kp is not None and self.right_kp is not None:
+             for left, right in zip(self.left_kp, self.right_kp):
+                 new_order[left] = right
+                 new_order[right] = left
+         kps = kps[:, :, new_order]
+         if kpscores is not None:
+             kpscores = kpscores[:, :, new_order]
+         return kps, kpscores
+
+     @staticmethod
+     def _box_flip(box, img_width):
+         """Flip the bounding boxes given the width of the image.
+
+         Args:
+             box (np.ndarray): The bounding boxes.
+             img_width (int): The img width.
+         """
+         box_ = box.copy()
+         box_[..., 0::4] = img_width - box[..., 2::4]
+         box_[..., 2::4] = img_width - box[..., 0::4]
+         return box_
+
+     def __call__(self, results):
+         """Performs the Flip augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+             assert self.direction == 'horizontal', (
+                 'Only horizontal flips are '
+                 'supported for human keypoints')
+
+         modality = results['modality']
+         if modality == 'Flow':
+             assert self.direction == 'horizontal'
+
+         flip = np.random.rand() < self.flip_ratio
+
+         results['flip'] = flip
+         results['flip_direction'] = self.direction
+         img_width = results['img_shape'][1]
+
+         if self.flip_label_map is not None and flip:
+             results['label'] = self.flip_label_map.get(results['label'],
+                                                        results['label'])
+
+         if not self.lazy:
+             if flip:
+                 if 'imgs' in results:
+                     results['imgs'] = self._flip_imgs(results['imgs'],
+                                                       modality)
+                 if 'keypoint' in results:
+                     kp = results['keypoint']
+                     kpscore = results.get('keypoint_score', None)
+                     kp, kpscore = self._flip_kps(kp, kpscore, img_width)
+                     results['keypoint'] = kp
+                     if 'keypoint_score' in results:
+                         results['keypoint_score'] = kpscore
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Use one Flip please')
+             lazyop['flip'] = flip
+             lazyop['flip_direction'] = self.direction
+
+         if 'gt_bboxes' in results and flip:
+             assert not self.lazy and self.direction == 'horizontal'
+             width = results['img_shape'][1]
+             results['gt_bboxes'] = self._box_flip(results['gt_bboxes'], width)
+             if 'proposals' in results and results['proposals'] is not None:
+                 assert results['proposals'].shape[1] == 4
+                 results['proposals'] = self._box_flip(results['proposals'],
+                                                       width)
+
+         return results
+
+     def __repr__(self):
+         repr_str = (
+             f'{self.__class__.__name__}('
+             f'flip_ratio={self.flip_ratio}, direction={self.direction}, '
+             f'flip_label_map={self.flip_label_map}, lazy={self.lazy})')
+         return repr_str
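The strided `0::4` / `2::4` indexing in `_box_flip` swaps the roles of the two x-coordinates so that `x1 <= x2` still holds after a horizontal flip. A worked example:

```python
import numpy as np

# For an image of width W, box (x1, y1, x2, y2) maps to
# (W - x2, y1, W - x1, y2) under a horizontal flip.
box = np.array([10., 20., 50., 80.])
W = 100
flipped = box.copy()
flipped[0::4] = W - box[2::4]
flipped[2::4] = W - box[0::4]
print(flipped)  # [50. 20. 90. 80.]
```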
+
+
+ @PIPELINES.register_module()
+ class Normalize:
+     """Normalize images with the given mean and std value.
+
+     Required keys are "imgs", "img_shape", "modality", added or modified
+     keys are "imgs" and "img_norm_cfg". If modality is 'Flow', an additional
+     key "scale_factor" is required.
+
+     Args:
+         mean (Sequence[float]): Mean values of different channels.
+         std (Sequence[float]): Std values of different channels.
+         to_bgr (bool): Whether to convert channels from RGB to BGR.
+             Default: False.
+         adjust_magnitude (bool): Indicate whether to adjust the flow magnitude
+             on 'scale_factor' when modality is 'Flow'. Default: False.
+     """
+
+     def __init__(self, mean, std, to_bgr=False, adjust_magnitude=False):
+         if not isinstance(mean, Sequence):
+             raise TypeError(
+                 f'Mean must be list, tuple or np.ndarray, but got {type(mean)}'
+             )
+
+         if not isinstance(std, Sequence):
+             raise TypeError(
+                 f'Std must be list, tuple or np.ndarray, but got {type(std)}')
+
+         self.mean = np.array(mean, dtype=np.float32)
+         self.std = np.array(std, dtype=np.float32)
+         self.to_bgr = to_bgr
+         self.adjust_magnitude = adjust_magnitude
+
+     def __call__(self, results):
+         modality = results['modality']
+
+         if modality == 'RGB':
+             n = len(results['imgs'])
+             h, w, c = results['imgs'][0].shape
+             imgs = np.empty((n, h, w, c), dtype=np.float32)
+             for i, img in enumerate(results['imgs']):
+                 imgs[i] = img
+
+             for img in imgs:
+                 mmcv.imnormalize_(img, self.mean, self.std, self.to_bgr)
+             results['imgs'] = imgs
+             results['img_norm_cfg'] = dict(
+                 mean=self.mean, std=self.std, to_bgr=self.to_bgr)
+             return results
+         if modality == 'Flow':
+             num_imgs = len(results['imgs'])
+             assert num_imgs % 2 == 0
+             assert self.mean.shape[0] == 2
+             assert self.std.shape[0] == 2
+             n = num_imgs // 2
+             h, w = results['imgs'][0].shape
+             x_flow = np.empty((n, h, w), dtype=np.float32)
+             y_flow = np.empty((n, h, w), dtype=np.float32)
+             for i in range(n):
+                 x_flow[i] = results['imgs'][2 * i]
+                 y_flow[i] = results['imgs'][2 * i + 1]
+             x_flow = (x_flow - self.mean[0]) / self.std[0]
+             y_flow = (y_flow - self.mean[1]) / self.std[1]
+             if self.adjust_magnitude:
+                 x_flow = x_flow * results['scale_factor'][0]
+                 y_flow = y_flow * results['scale_factor'][1]
+             imgs = np.stack([x_flow, y_flow], axis=-1)
+             results['imgs'] = imgs
+             args = dict(
+                 mean=self.mean,
+                 std=self.std,
+                 to_bgr=self.to_bgr,
+                 adjust_magnitude=self.adjust_magnitude)
+             results['img_norm_cfg'] = args
+             return results
+         raise NotImplementedError
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'mean={self.mean}, '
+                     f'std={self.std}, '
+                     f'to_bgr={self.to_bgr}, '
+                     f'adjust_magnitude={self.adjust_magnitude})')
+         return repr_str
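A typical `Normalize` config for a CLIP-based model uses CLIP's preprocessing statistics scaled to the 0-255 pixel range this pipeline operates on. The values below are illustrative only; the authoritative numbers live in this repo's YAML configs:

```python
# Illustrative only: CLIP's ImageNet preprocessing statistics, times 255.
img_norm_cfg = dict(
    mean=[122.771, 116.746, 104.094],
    std=[68.500, 66.632, 70.323],
    to_bgr=False)
pipeline_tail = [dict(type='Normalize', **img_norm_cfg)]
```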
+
+
+ @PIPELINES.register_module()
+ class CenterCrop(RandomCrop):
+     """Crop the center area from images.
+
+     Required keys are "img_shape", "imgs" (optional), "keypoint" (optional),
+     added or modified keys are "imgs", "keypoint", "crop_bbox", "lazy" and
+     "img_shape". Required key in "lazy" is "crop_bbox", added or modified key
+     is "crop_bbox".
+
+     Args:
+         crop_size (int | tuple[int]): (w, h) of crop size.
+         lazy (bool): Determine whether to apply lazy operation. Default: False.
+     """
+
+     def __init__(self, crop_size, lazy=False):
+         self.crop_size = _pair(crop_size)
+         self.lazy = lazy
+         if not mmcv.is_tuple_of(self.crop_size, int):
+             raise TypeError(f'Crop_size must be int or tuple of int, '
+                             f'but got {type(crop_size)}')
+
+     def __call__(self, results):
+         """Performs the CenterCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, self.lazy)
+         if 'keypoint' in results:
+             assert not self.lazy, ('Keypoint Augmentations are not compatible '
+                                    'with lazy == True')
+
+         img_h, img_w = results['img_shape']
+         crop_w, crop_h = self.crop_size
+
+         left = (img_w - crop_w) // 2
+         top = (img_h - crop_h) // 2
+         right = left + crop_w
+         bottom = top + crop_h
+         new_h, new_w = bottom - top, right - left
+
+         crop_bbox = np.array([left, top, right, bottom])
+         results['crop_bbox'] = crop_bbox
+         results['img_shape'] = (new_h, new_w)
+
+         if 'crop_quadruple' not in results:
+             results['crop_quadruple'] = np.array(
+                 [0, 0, 1, 1],  # x, y, w, h
+                 dtype=np.float32)
+
+         x_ratio, y_ratio = left / img_w, top / img_h
+         w_ratio, h_ratio = new_w / img_w, new_h / img_h
+
+         old_crop_quadruple = results['crop_quadruple']
+         old_x_ratio, old_y_ratio = old_crop_quadruple[0], old_crop_quadruple[1]
+         old_w_ratio, old_h_ratio = old_crop_quadruple[2], old_crop_quadruple[3]
+         new_crop_quadruple = [
+             old_x_ratio + x_ratio * old_w_ratio,
+             old_y_ratio + y_ratio * old_h_ratio, w_ratio * old_w_ratio,
+             h_ratio * old_h_ratio
+         ]
+         results['crop_quadruple'] = np.array(
+             new_crop_quadruple, dtype=np.float32)
+
+         if not self.lazy:
+             if 'keypoint' in results:
+                 results['keypoint'] = self._crop_kps(results['keypoint'],
+                                                      crop_bbox)
+             if 'imgs' in results:
+                 results['imgs'] = self._crop_imgs(results['imgs'], crop_bbox)
+         else:
+             lazyop = results['lazy']
+             if lazyop['flip']:
+                 raise NotImplementedError('Put Flip at last for now')
+
+             # record crop_bbox in lazyop dict to ensure only crop once in Fuse
+             lazy_left, lazy_top, lazy_right, lazy_bottom = lazyop['crop_bbox']
+             left = left * (lazy_right - lazy_left) / img_w
+             right = right * (lazy_right - lazy_left) / img_w
+             top = top * (lazy_bottom - lazy_top) / img_h
+             bottom = bottom * (lazy_bottom - lazy_top) / img_h
+             lazyop['crop_bbox'] = np.array([(lazy_left + left),
+                                             (lazy_top + top),
+                                             (lazy_left + right),
+                                             (lazy_top + bottom)],
+                                            dtype=np.float32)
+
+         if 'gt_bboxes' in results:
+             assert not self.lazy
+             results = self._all_box_crop(results, results['crop_bbox'])
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}(crop_size={self.crop_size}, '
+                     f'lazy={self.lazy})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class ThreeCrop:
+     """Crop images into three crops.
+
+     Crop the images equally into three crops with equal intervals along the
+     shorter side.
+     Required keys are "imgs", "img_shape", added or modified keys are "imgs",
+     "crop_bbox" and "img_shape".
+
+     Args:
+         crop_size (int | tuple[int]): (w, h) of crop size.
+     """
+
+     def __init__(self, crop_size):
+         self.crop_size = _pair(crop_size)
+         if not mmcv.is_tuple_of(self.crop_size, int):
+             raise TypeError(f'Crop_size must be int or tuple of int, '
+                             f'but got {type(crop_size)}')
+
+     def __call__(self, results):
+         """Performs the ThreeCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, False)
+         if 'gt_bboxes' in results or 'proposals' in results:
+             warnings.warn('ThreeCrop cannot process bounding boxes')
+
+         imgs = results['imgs']
+         img_h, img_w = results['imgs'][0].shape[:2]
+
+         crop_w, crop_h = self.crop_size
+         assert crop_h == img_h or crop_w == img_w
+
+         if crop_h == img_h:
+             w_step = (img_w - crop_w) // 2
+             offsets = [
+                 (0, 0),  # left
+                 (2 * w_step, 0),  # right
+                 (w_step, 0),  # middle
+             ]
+         elif crop_w == img_w:
+             h_step = (img_h - crop_h) // 2
+             offsets = [
+                 (0, 0),  # top
+                 (0, 2 * h_step),  # down
+                 (0, h_step),  # middle
+             ]
+
+         cropped = []
+         crop_bboxes = []
+         for x_offset, y_offset in offsets:
+             bbox = [x_offset, y_offset, x_offset + crop_w, y_offset + crop_h]
+             crop = [
+                 img[y_offset:y_offset + crop_h, x_offset:x_offset + crop_w]
+                 for img in imgs
+             ]
+             cropped.extend(crop)
+             crop_bboxes.extend([bbox for _ in range(len(imgs))])
+
+         crop_bboxes = np.array(crop_bboxes)
+         results['imgs'] = cropped
+         results['crop_bbox'] = crop_bboxes
+         results['img_shape'] = results['imgs'][0].shape[:2]
+
+         return results
+
+     def __repr__(self):
+         repr_str = f'{self.__class__.__name__}(crop_size={self.crop_size})'
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class TenCrop:
+     """Crop the images into 10 crops (corner + center + flip).
+
+     Crop the four corners and the center part of the image with the same
+     given crop_size, and flip it horizontally.
+     Required keys are "imgs", "img_shape", added or modified keys are "imgs",
+     "crop_bbox" and "img_shape".
+
+     Args:
+         crop_size (int | tuple[int]): (w, h) of crop size.
+     """
+
+     def __init__(self, crop_size):
+         self.crop_size = _pair(crop_size)
+         if not mmcv.is_tuple_of(self.crop_size, int):
+             raise TypeError(f'Crop_size must be int or tuple of int, '
+                             f'but got {type(crop_size)}')
+
+     def __call__(self, results):
+         """Performs the TenCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         _init_lazy_if_proper(results, False)
+
+         if 'gt_bboxes' in results or 'proposals' in results:
+             warnings.warn('TenCrop cannot process bounding boxes')
+
+         imgs = results['imgs']
+
+         img_h, img_w = results['imgs'][0].shape[:2]
+         crop_w, crop_h = self.crop_size
+
+         w_step = (img_w - crop_w) // 4
+         h_step = (img_h - crop_h) // 4
+
+         offsets = [
+             (0, 0),  # upper left
+             (4 * w_step, 0),  # upper right
+             (0, 4 * h_step),  # lower left
+             (4 * w_step, 4 * h_step),  # lower right
+             (2 * w_step, 2 * h_step),  # center
+         ]
+
+         img_crops = list()
+         crop_bboxes = list()
+         for x_offset, y_offset in offsets:
+             crop = [
+                 img[y_offset:y_offset + crop_h, x_offset:x_offset + crop_w]
+                 for img in imgs
+             ]
+             flip_crop = [np.flip(c, axis=1).copy() for c in crop]
+             bbox = [x_offset, y_offset, x_offset + crop_w, y_offset + crop_h]
+             img_crops.extend(crop)
+             img_crops.extend(flip_crop)
+             crop_bboxes.extend([bbox for _ in range(len(imgs) * 2)])
+
+         crop_bboxes = np.array(crop_bboxes)
+
+         results['imgs'] = img_crops
+         results['crop_bbox'] = crop_bboxes
+         results['img_shape'] = results['imgs'][0].shape[:2]
+
+         return results
+
+     def __repr__(self):
+         repr_str = f'{self.__class__.__name__}(crop_size={self.crop_size})'
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class MultiGroupCrop:
+     """Randomly crop the images into several groups.
+
+     Crop the random region with the same given crop_size and bounding box
+     into several groups.
+     Required keys are "imgs", added or modified keys are "imgs", "crop_bbox"
+     and "img_shape".
+
+     Args:
+         crop_size (int | tuple[int]): (w, h) of crop size.
+         groups (int): Number of groups.
+     """
+
+     def __init__(self, crop_size, groups):
+         self.crop_size = _pair(crop_size)
+         self.groups = groups
+         if not mmcv.is_tuple_of(self.crop_size, int):
+             raise TypeError('Crop size must be int or tuple of int, '
+                             f'but got {type(crop_size)}')
+
+         if not isinstance(groups, int):
+             raise TypeError(f'Groups must be int, but got {type(groups)}.')
+
+         if groups <= 0:
+             raise ValueError('Groups must be positive.')
+
+     def __call__(self, results):
+         """Performs the MultiGroupCrop augmentation.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         if 'gt_bboxes' in results or 'proposals' in results:
+             warnings.warn('MultiGroupCrop cannot process bounding boxes')
+
+         imgs = results['imgs']
+         img_h, img_w = imgs[0].shape[:2]
+         crop_w, crop_h = self.crop_size
+
+         img_crops = []
+         crop_bboxes = []
+         for _ in range(self.groups):
+             x_offset = random.randint(0, img_w - crop_w)
+             y_offset = random.randint(0, img_h - crop_h)
+
+             bbox = [x_offset, y_offset, x_offset + crop_w, y_offset + crop_h]
+             crop = [
+                 img[y_offset:y_offset + crop_h, x_offset:x_offset + crop_w]
+                 for img in imgs
+             ]
+             img_crops.extend(crop)
+             crop_bboxes.extend([bbox for _ in range(len(imgs))])
+
+         crop_bboxes = np.array(crop_bboxes)
+         results['imgs'] = img_crops
+         results['crop_bbox'] = crop_bboxes
+         results['img_shape'] = results['imgs'][0].shape[:2]
+
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}'
+                     f'(crop_size={self.crop_size}, '
+                     f'groups={self.groups})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class ColorJitter:
+     """Apply torchvision's ColorJitter to every frame with probability ``p``.
+
+     Note that torchvision draws new jitter parameters on every call, so each
+     frame of a clip is jittered independently.
+
+     Args:
+         p (float): Probability of applying the jitter. Default: 0.8.
+         p_gray (float): Unused here; grayscale conversion is handled by the
+             separate ``GrayScale`` transform. Default: 0.2.
+         brightness, contrast, saturation, hue: Forwarded to
+             ``torchvision.transforms.ColorJitter``.
+     """
+
+     def __init__(self, p=0.8, p_gray=0.2, brightness=0.4, contrast=0.4,
+                  saturation=0.2, hue=0.1):
+         self.p = p
+         self.p_gray = p_gray
+         self.worker = torchvision.transforms.ColorJitter(
+             brightness=brightness, contrast=contrast,
+             saturation=saturation, hue=hue)
+
+     def __call__(self, results):
+         imgs = results['imgs']
+         v = random.random()
+         if v < self.p:
+             imgs = [
+                 np.asarray(self.worker(Image.fromarray(img))) for img in imgs
+             ]
+
+         results['imgs'] = imgs
+         return results
+
+     def __repr__(self):
+         repr_str = f'{self.__class__.__name__}'
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class GrayScale:
+     """Convert all frames of a clip to 3-channel grayscale with
+     probability ``p``.
+
+     Args:
+         p (float): Probability of applying the conversion. Default: 0.2.
+     """
+
+     def __init__(self, p=0.2):
+         self.p = p
+         self.worker_gray = torchvision.transforms.Grayscale(
+             num_output_channels=3)
+
+     def __call__(self, results):
+         imgs = results['imgs']
+         v = random.random()
+         if v < self.p:
+             imgs = [
+                 np.asarray(self.worker_gray(Image.fromarray(img)))
+                 for img in imgs
+             ]
+
+         results['imgs'] = imgs
+         return results
+
+     def __repr__(self):
+         repr_str = f'{self.__class__.__name__}'
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class Compose:
+     """Compose a data pipeline from a list of transforms.
+
+     Each element may be a config dict (resolved through the ``PIPELINES``
+     registry) or any callable. A transform returning ``None`` aborts the
+     pipeline for that sample.
+
+     Args:
+         transforms (Sequence[dict | callable]): The transforms to compose.
+     """
+
+     def __init__(self, transforms):
+         assert isinstance(transforms, Sequence)
+         self.transforms = []
+         for transform in transforms:
+             if isinstance(transform, dict):
+                 transform = build_from_cfg(transform, PIPELINES)
+                 self.transforms.append(transform)
+             elif callable(transform):
+                 self.transforms.append(transform)
+             else:
+                 raise TypeError(f'transform must be callable or a dict, '
+                                 f'but got {type(transform)}')
+
+     def __call__(self, data):
+         for t in self.transforms:
+             data = t(data)
+             if data is None:
+                 return None
+         return data
+
+     def __repr__(self):
+         format_string = self.__class__.__name__ + '('
+         for t in self.transforms:
+             format_string += '\n'
+             format_string += '    {0}'.format(t)
+         format_string += '\n)'
+         return format_string
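Usage sketch for `Compose`: config dicts are resolved through the `PIPELINES` registry (the transforms in this file all register themselves), and the resulting callables run left to right on a shared `results` dict:

```python
# A minimal decode-and-sample pipeline built from config dicts.
pipeline = Compose([
    dict(type='DecordInit'),
    dict(type='SampleFrames', clip_len=16, frame_interval=1, num_clips=1),
    dict(type='DecordDecode'),
])
# results = pipeline(dict(filename='video.mp4', tar=False, start_index=0, ...))
```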
+
+
+ @PIPELINES.register_module()
+ class DecordInit:
+     """Using decord to initialize the video_reader.
+
+     Decord: https://github.com/dmlc/decord
+
+     Required keys are "filename",
+     added or modified keys are "video_reader" and "total_frames".
+     """
+
+     def __init__(self, io_backend='disk', num_threads=1, **kwargs):
+         self.io_backend = io_backend
+         self.num_threads = num_threads
+         self.kwargs = kwargs
+         self.file_client = None
+         self.tarfile = None
+
+     def __call__(self, results):
+         """Perform the Decord initialization.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         try:
+             import decord
+         except ImportError:
+             raise ImportError(
+                 'Please run "pip install decord" to install Decord first.')
+         if results['tar'] is False:
+             if self.file_client is None:
+                 self.file_client = FileClient(self.io_backend, **self.kwargs)
+
+             file_obj = io.BytesIO(self.file_client.get(results['filename']))
+         else:
+             if self.tarfile is None:
+                 data_root = os.path.dirname(results['filename']) + '.tar'
+                 self.tarfile = tarfile.open(data_root)
+             video_name = results['filename'].split('/')[-1]
+             iob = self.tarfile.extractfile(video_name)
+             iob = iob.read()
+             file_obj = io.BytesIO(iob)
+         try:
+             container = decord.VideoReader(file_obj, num_threads=self.num_threads)
+         except Exception:
+             # Fall back to an alternative source path when this video fails
+             # to decode.
+             print(results['filename'])
+             original_path = results['filename']
+             new_path = original_path.replace('/challenge_crop/rgb', '/eval_FO_ids')
+             results['filename'] = new_path
+             file_obj = io.BytesIO(self.file_client.get(results['filename']))
+             container = decord.VideoReader(file_obj, num_threads=self.num_threads)
+
+         results['video_reader'] = container
+         results['total_frames'] = len(container)
+         return results
+
+     def __repr__(self):
+         repr_str = (f'{self.__class__.__name__}('
+                     f'io_backend={self.io_backend}, '
+                     f'num_threads={self.num_threads})')
+         return repr_str
+
+
+ @PIPELINES.register_module()
+ class DecordDecode:
+     """Using decord to decode the video.
+
+     Decord: https://github.com/dmlc/decord
+
+     Required keys are "video_reader", "filename" and "frame_inds",
+     added or modified keys are "imgs" and "original_shape".
+     """
+
+     def __call__(self, results):
+         """Perform the Decord decoding.
+
+         Args:
+             results (dict): The resulting dict to be modified and passed
+                 to the next transform in pipeline.
+         """
+         container = results['video_reader']
+
+         if results['frame_inds'].ndim != 1:
+             results['frame_inds'] = np.squeeze(results['frame_inds'])
+
+         frame_inds = results['frame_inds']
+         # Decode each unique frame index once; repeated indices reuse the
+         # cached frame.
+         frame_dict = {
+             idx: container[idx].asnumpy()
+             for idx in np.unique(frame_inds)
+         }
+
+         imgs = [frame_dict[idx] for idx in frame_inds]
+
+         results['video_reader'] = None
+         del container
+
+         results['imgs'] = imgs
+         results['original_shape'] = imgs[0].shape[:2]
+         results['img_shape'] = imgs[0].shape[:2]
+
+         return results
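The dict comprehension above decodes each unique frame index exactly once, which matters when padding modes such as `out_of_bound_opt='loop'` repeat indices. A self-contained sketch with a stand-in decoder:

```python
import numpy as np

decode_calls = 0

def fake_decode(idx):  # stand-in for container[idx].asnumpy()
    global decode_calls
    decode_calls += 1
    return np.full((2, 2, 3), idx, dtype=np.uint8)

frame_inds = np.array([0, 5, 5, 10, 0])
frame_dict = {idx: fake_decode(idx) for idx in np.unique(frame_inds)}
imgs = [frame_dict[idx] for idx in frame_inds]
print(len(imgs), decode_calls)  # 5 frames, only 3 decode calls
```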
+
1869
+ @PIPELINES.register_module()
1870
+ class SampleFrames:
1871
+ """Sample frames from the video.
1872
+
1873
+ Required keys are "total_frames", "start_index" , added or modified keys
1874
+ are "frame_inds", "frame_interval" and "num_clips".
1875
+
1876
+ Args:
1877
+ clip_len (int): Frames of each sampled output clip.
1878
+ frame_interval (int): Temporal interval of adjacent sampled frames.
1879
+ Default: 1.
1880
+ num_clips (int): Number of clips to be sampled. Default: 1.
1881
+ temporal_jitter (bool): Whether to apply temporal jittering.
1882
+ Default: False.
1883
+ twice_sample (bool): Whether to use twice sample when testing.
1884
+ If set to True, it will sample frames with and without fixed shift,
1885
+ which is commonly used for testing in TSM model. Default: False.
1886
+ out_of_bound_opt (str): The way to deal with out of bounds frame
1887
+ indexes. Available options are 'loop', 'repeat_last'.
1888
+ Default: 'loop'.
1889
+ test_mode (bool): Store True when building test or validation dataset.
1890
+ Default: False.
1891
+ start_index (None): This argument is deprecated and moved to dataset
1892
+ class (``BaseDataset``, ``VideoDatset``, ``RawframeDataset``, etc),
1893
+ see this: https://github.com/open-mmlab/mmaction2/pull/89.
1894
+ """
1895
+
1896
+ def __init__(self,
1897
+ clip_len,
1898
+ frame_interval=1,
1899
+ num_clips=1,
1900
+ temporal_jitter=False,
1901
+ twice_sample=False,
1902
+ out_of_bound_opt='loop',
1903
+ test_mode=False,
1904
+ start_index=None,
1905
+ frame_uniform=False,
1906
+ multiview=1):
1907
+
1908
+ self.clip_len = clip_len
1909
+ self.frame_interval = frame_interval
1910
+ self.num_clips = num_clips
1911
+ self.temporal_jitter = temporal_jitter
1912
+ self.twice_sample = twice_sample
1913
+ self.out_of_bound_opt = out_of_bound_opt
1914
+ self.test_mode = test_mode
1915
+ self.frame_uniform = frame_uniform
1916
+ self.multiview=multiview
1917
+ assert self.out_of_bound_opt in ['loop', 'repeat_last']
1918
+
1919
+ if start_index is not None:
1920
+ warnings.warn('No longer support "start_index" in "SampleFrames", '
1921
+ 'it should be set in dataset class, see this pr: '
1922
+ 'https://github.com/open-mmlab/mmaction2/pull/89')
1923
+
1924
+ def _get_train_clips(self, num_frames):
1925
+ """Get clip offsets in train mode.
1926
+
1927
+ It will calculate the average interval for selected frames,
1928
+ and randomly shift them within offsets between [0, avg_interval].
1929
+ If the total number of frames is smaller than the number of clips or the
1930
+ original clip length, it will return all-zero indices.
1931
+
1932
+ Args:
1933
+ num_frames (int): Total number of frames in the video.
1934
+
1935
+ Returns:
1936
+ np.ndarray: Sampled frame indices in train mode.
1937
+ """
1938
+ ori_clip_len = self.clip_len * self.frame_interval
1939
+ avg_interval = (num_frames - ori_clip_len + 1) // self.num_clips
1940
+
1941
+ if avg_interval > 0:
1942
+ base_offsets = np.arange(self.num_clips) * avg_interval
1943
+ clip_offsets = base_offsets + np.random.randint(
1944
+ avg_interval, size=self.num_clips)
1945
+ elif num_frames > max(self.num_clips, ori_clip_len):
1946
+ clip_offsets = np.sort(
1947
+ np.random.randint(
1948
+ num_frames - ori_clip_len + 1, size=self.num_clips))
1949
+ elif avg_interval == 0:
1950
+ ratio = (num_frames - ori_clip_len + 1.0) / self.num_clips
1951
+ clip_offsets = np.around(np.arange(self.num_clips) * ratio)
1952
+ else:
1953
+ clip_offsets = np.zeros((self.num_clips, ), dtype=int)  # np.int was removed in NumPy >= 1.24
1954
+
1955
+ return clip_offsets
1956
+
1957
+ def _get_test_clips(self, num_frames):
1958
+ """Get clip offsets in test mode.
1959
+
1960
+ Calculate the average interval for selected frames, and shift them
1961
+ by a fixed offset of avg_interval/2. If twice_sample is True, it will also
1962
+ sample frames without the fixed shift. If the total number of frames is
1963
+ not enough, it will return all zero indices.
1964
+
1965
+ Args:
1966
+ num_frames (int): Total number of frames in the video.
1967
+
1968
+ Returns:
1969
+ np.ndarray: Sampled frame indices in test mode.
1970
+ """
1971
+ ori_clip_len = self.clip_len * self.frame_interval
1972
+ avg_interval = (num_frames - ori_clip_len + 1) / float(self.num_clips)
1973
+ if num_frames > ori_clip_len - 1:
1974
+ base_offsets = np.arange(self.num_clips) * avg_interval
1975
+ clip_offsets = (base_offsets + avg_interval / 2.0).astype(int)
1976
+ if self.twice_sample:
1977
+ clip_offsets = np.concatenate([clip_offsets, base_offsets])
1978
+ else:
1979
+ clip_offsets = np.zeros((self.num_clips, ), dtype=int)
1980
+ return clip_offsets
1981
+
1982
+ def _sample_clips(self, num_frames):
1983
+ """Choose clip offsets for the video in a given mode.
1984
+
1985
+ Args:
1986
+ num_frames (int): Total number of frames in the video.
1987
+
1988
+ Returns:
1989
+ np.ndarray: Sampled frame indices.
1990
+ """
1991
+ if self.test_mode:
1992
+ clip_offsets = self._get_test_clips(num_frames)
1993
+ else:
1994
+ if self.multiview == 1:
1995
+ clip_offsets = self._get_train_clips(num_frames)
1996
+ else:
1997
+ clip_offsets = np.concatenate([self._get_train_clips(num_frames) for _ in range(self.multiview)])
1998
+
1999
+ return clip_offsets
2000
+
2001
+ def get_seq_frames(self, num_frames):
2002
+ """
2003
+ Modified from https://github.com/facebookresearch/SlowFast/blob/64abcc90ccfdcbb11cf91d6e525bed60e92a8796/slowfast/datasets/ssv2.py#L159
2004
+ Given the video index, return the list of sampled frame indexes.
2005
+ Args:
2006
+ num_frames (int): Total number of frames in the video.
2007
+ Returns:
2008
+ seq (np.ndarray): the indexes of the frames sampled from the video.
2009
+ """
2010
+ seg_size = float(num_frames - 1) / self.clip_len
2011
+ seq = []
2012
+ for i in range(self.clip_len):
2013
+ start = int(np.round(seg_size * i))
2014
+ end = int(np.round(seg_size * (i + 1)))
2015
+ if not self.test_mode:
2016
+ seq.append(random.randint(start, end))
2017
+ else:
2018
+ seq.append((start + end) // 2)
2019
+
2020
+ return np.array(seq)
2021
+
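As a concrete example of this uniform strategy: in test mode with clip_len=4 and a 9-frame video, seg_size = (9 - 1) / 4 = 2.0 and the midpoint of each segment is returned:

    sampler = SampleFrames(clip_len=4, frame_uniform=True, test_mode=True)
    sampler.get_seq_frames(9)
    # (start, end) pairs (0, 2), (2, 4), (4, 6), (6, 8) -> array([1, 3, 5, 7])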
2022
+ def __call__(self, results):
2023
+ """Perform the SampleFrames loading.
2024
+
2025
+ Args:
2026
+ results (dict): The resulting dict to be modified and passed
2027
+ to the next transform in pipeline.
2028
+ """
2029
+ total_frames = results['total_frames']
2030
+ if self.frame_uniform: # sthv2 sampling strategy
2031
+ assert results['start_index'] == 0
2032
+ frame_inds = self.get_seq_frames(total_frames)
2033
+ else:
2034
+ clip_offsets = self._sample_clips(total_frames)
2035
+ frame_inds = clip_offsets[:, None] + np.arange(
2036
+ self.clip_len)[None, :] * self.frame_interval
2037
+ frame_inds = np.concatenate(frame_inds)
2038
+
2039
+ if self.temporal_jitter:
2040
+ perframe_offsets = np.random.randint(
2041
+ self.frame_interval, size=len(frame_inds))
2042
+ frame_inds += perframe_offsets
2043
+
2044
+ frame_inds = frame_inds.reshape((-1, self.clip_len))
2045
+ if self.out_of_bound_opt == 'loop':
2046
+ frame_inds = np.mod(frame_inds, total_frames)
2047
+ elif self.out_of_bound_opt == 'repeat_last':
2048
+ safe_inds = frame_inds < total_frames
2049
+ unsafe_inds = 1 - safe_inds
2050
+ last_ind = np.max(safe_inds * frame_inds, axis=1)
2051
+ new_inds = (safe_inds * frame_inds + (unsafe_inds.T * last_ind).T)
2052
+ frame_inds = new_inds
2053
+ else:
2054
+ raise ValueError('Illegal out_of_bound option.')
2055
+
2056
+ start_index = results['start_index']
2057
+ frame_inds = np.concatenate(frame_inds) + start_index
2058
+
2059
+ results['frame_inds'] = frame_inds.astype(int)
2060
+ results['clip_len'] = self.clip_len
2061
+ results['frame_interval'] = self.frame_interval
2062
+ results['num_clips'] = self.num_clips
2063
+ return results
2064
+
2065
+ def __repr__(self):
2066
+ repr_str = (f'{self.__class__.__name__}('
2067
+ f'clip_len={self.clip_len}, '
2068
+ f'frame_interval={self.frame_interval}, '
2069
+ f'num_clips={self.num_clips}, '
2070
+ f'temporal_jitter={self.temporal_jitter}, '
2071
+ f'twice_sample={self.twice_sample}, '
2072
+ f'out_of_bound_opt={self.out_of_bound_opt}, '
2073
+ f'test_mode={self.test_mode})')
2074
+ return repr_str
2075
+
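A quick sketch of the default (non-uniform) sampling path, assuming a 300-frame video:

    sampler = SampleFrames(clip_len=32, frame_interval=2, num_clips=1, test_mode=True)
    out = sampler({'total_frames': 300, 'start_index': 0})
    out['frame_inds'].shape    # (32,): one clip centered in the video; num_clips=4 would give (128,)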
2076
+
2077
+ @PIPELINES.register_module()
2078
+ class FormatShape:
2079
+ """Format final imgs shape to the given input_format.
2080
+
2081
+ Required keys are "imgs", "num_clips" and "clip_len", added or modified
2082
+ keys are "imgs" and "input_shape".
2083
+
2084
+ Args:
2085
+ input_format (str): Define the final imgs format.
2086
+ collapse (bool): To collapse input_format N... to ... (NCTHW to CTHW,
2087
+ etc.) if N is 1. Should be set as True when training and testing
2088
+ detectors. Default: False.
2089
+ """
2090
+
2091
+ def __init__(self, input_format, collapse=False):
2092
+ self.input_format = input_format
2093
+ self.collapse = collapse
2094
+ if self.input_format not in ['NCTHW', 'NCHW', 'NCHW_Flow', 'NPTCHW']:
2095
+ raise ValueError(
2096
+ f'The input format {self.input_format} is invalid.')
2097
+
2098
+ def __call__(self, results):
2099
+ """Performs the FormatShape formating.
2100
+
2101
+ Args:
2102
+ results (dict): The resulting dict to be modified and passed
2103
+ to the next transform in pipeline.
2104
+ """
2105
+ if not isinstance(results['imgs'], np.ndarray):
2106
+ results['imgs'] = np.array(results['imgs'])
2107
+ imgs = results['imgs']
2108
+ # [M x H x W x C]
2109
+ # M = 1 * N_crops * N_clips * L
2110
+ if self.collapse:
2111
+ assert results['num_clips'] == 1
2112
+
2113
+ if self.input_format == 'NCTHW':
2114
+ num_clips = results['num_clips']
2115
+ clip_len = results['clip_len']
2116
+
2117
+ imgs = imgs.reshape((-1, num_clips, clip_len) + imgs.shape[1:])
2118
+ # N_crops x N_clips x L x H x W x C
2119
+ imgs = np.transpose(imgs, (0, 1, 5, 2, 3, 4))
2120
+ # N_crops x N_clips x C x L x H x W
2121
+ imgs = imgs.reshape((-1, ) + imgs.shape[2:])
2122
+ # M' x C x L x H x W
2123
+ # M' = N_crops x N_clips
2124
+ elif self.input_format == 'NCHW':
2125
+ imgs = np.transpose(imgs, (0, 3, 1, 2))
2126
+ # M x C x H x W
2127
+ elif self.input_format == 'NCHW_Flow':
2128
+ num_clips = results['num_clips']
2129
+ clip_len = results['clip_len']
2130
+ imgs = imgs.reshape((-1, num_clips, clip_len) + imgs.shape[1:])
2131
+ # N_crops x N_clips x L x H x W x C
2132
+ imgs = np.transpose(imgs, (0, 1, 2, 5, 3, 4))
2133
+ # N_crops x N_clips x L x C x H x W
2134
+ imgs = imgs.reshape((-1, imgs.shape[2] * imgs.shape[3]) +
2135
+ imgs.shape[4:])
2136
+ # M' x C' x H x W
2137
+ # M' = N_crops x N_clips
2138
+ # C' = L x C
2139
+ elif self.input_format == 'NPTCHW':
2140
+ num_proposals = results['num_proposals']
2141
+ num_clips = results['num_clips']
2142
+ clip_len = results['clip_len']
2143
+ imgs = imgs.reshape((num_proposals, num_clips * clip_len) +
2144
+ imgs.shape[1:])
2145
+ # P x M x H x W x C
2146
+ # M = N_clips x L
2147
+ imgs = np.transpose(imgs, (0, 1, 4, 2, 3))
2148
+ # P x M x C x H x W
2149
+ if self.collapse:
2150
+ assert imgs.shape[0] == 1
2151
+ imgs = imgs.squeeze(0)
2152
+
2153
+ results['imgs'] = imgs
2154
+ results['input_shape'] = imgs.shape
2155
+ return results
2156
+
2157
+ def __repr__(self):
2158
+ repr_str = self.__class__.__name__
2159
+ repr_str += f"(input_format='{self.input_format}')"
2160
+ return repr_str
2161
+
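For instance, the 'NCTHW' branch folds crops and clips into the leading batch dimension. A minimal numpy sketch with 3 crops, 2 clips and 16-frame clips:

    imgs = np.zeros((3 * 2 * 16, 224, 224, 3), dtype=np.uint8)    # M x H x W x C
    out = FormatShape('NCTHW')({'imgs': imgs, 'num_clips': 2, 'clip_len': 16})
    out['input_shape']    # (6, 3, 16, 224, 224), i.e. (N_crops * N_clips) x C x T x H x W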
2162
+ @PIPELINES.register_module()
2163
+ class Collect:
2164
+ """Collect data from the loader relevant to the specific task.
2165
+
2166
+ This keeps the items in ``keys`` as it is, and collect items in
2167
+ ``meta_keys`` into a meta item called ``meta_name``. This is usually
2168
+ the last stage of the data loader pipeline.
2169
+ For example, when keys='imgs', meta_keys=('filename', 'label',
2170
+ 'original_shape'), meta_name='img_metas', the results will be a dict with
2171
+ keys 'imgs' and 'img_metas', where 'img_metas' is a DataContainer of
2172
+ another dict with keys 'filename', 'label', 'original_shape'.
2173
+
2174
+ Args:
2175
+ keys (Sequence[str]): Required keys to be collected.
2176
+ meta_name (str): The name of the key that contains meta information.
2177
+ This key is always populated. Default: "img_metas".
2178
+ meta_keys (Sequence[str]): Keys that are collected under meta_name.
2179
+ The contents of the ``meta_name`` dictionary depends on
2180
+ ``meta_keys``.
2181
+ By default this includes:
2182
+
2183
+ - "filename": path to the image file
2184
+ - "label": label of the image file
2185
+ - "original_shape": original shape of the image as a tuple
2186
+ (h, w, c)
2187
+ - "img_shape": shape of the image input to the network as a tuple
2188
+ (h, w, c). Note that images may be zero padded on the
2189
+ bottom/right, if the batch tensor is larger than this shape.
2190
+ - "pad_shape": image shape after padding
2191
+ - "flip_direction": a str in ("horiziontal", "vertival") to
2192
+ indicate if the image is flipped horizontally or vertically.
2193
+ - "img_norm_cfg": a dict of normalization information:
2194
+ - mean - per channel mean subtraction
2195
+ - std - per channel std divisor
2196
+ - to_rgb - bool indicating if bgr was converted to rgb
2197
+ nested (bool): If set as True, will apply data[x] = [data[x]] to all
2198
+ items in data. The arg is added for compatibility. Default: False.
2199
+ """
2200
+
2201
+ def __init__(self,
2202
+ keys,
2203
+ meta_keys=('filename', 'label', 'original_shape', 'img_shape',
2204
+ 'pad_shape', 'flip_direction', 'img_norm_cfg'),
2205
+ meta_name='img_metas',
2206
+ nested=False):
2207
+ self.keys = keys
2208
+ self.meta_keys = meta_keys
2209
+ self.meta_name = meta_name
2210
+ self.nested = nested
2211
+
2212
+ def __call__(self, results):
2213
+ """Performs the Collect formating.
2214
+
2215
+ Args:
2216
+ results (dict): The resulting dict to be modified and passed
2217
+ to the next transform in pipeline.
2218
+ """
2219
+ data = {}
2220
+ for key in self.keys:
2221
+ data[key] = results[key]
2222
+
2223
+ if len(self.meta_keys) != 0:
2224
+ meta = {}
2225
+ for key in self.meta_keys:
2226
+ meta[key] = results[key]
2227
+ data[self.meta_name] = DC(meta, cpu_only=True)
2228
+ if self.nested:
2229
+ for k in data:
2230
+ data[k] = [data[k]]
2231
+
2232
+ return data
2233
+
2234
+ def __repr__(self):
2235
+ return (f'{self.__class__.__name__}('
2236
+ f'keys={self.keys}, meta_keys={self.meta_keys}, '
2237
+ f'nested={self.nested})')
2238
+
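A short sketch of Collect at the tail of a pipeline (every requested meta key must already be present in results):

    collect = Collect(keys=['imgs', 'label'], meta_keys=('filename', 'original_shape'))
    data = collect(results)
    # data == {'imgs': ..., 'label': ..., 'img_metas': DC({'filename': ..., 'original_shape': ...})}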
2239
+
2240
+ def to_tensor(data):
2241
+ """Convert objects of various python types to :obj:`torch.Tensor`.
2242
+
2243
+ Supported types are: :class:`numpy.ndarray`, :class:`torch.Tensor`,
2244
+ :class:`Sequence`, :class:`int` and :class:`float`.
2245
+ """
2246
+ if isinstance(data, torch.Tensor):
2247
+ return data
2248
+ if isinstance(data, np.ndarray):
2249
+ return torch.from_numpy(data)
2250
+ if isinstance(data, Sequence) and not mmcv.is_str(data):
2251
+ return torch.tensor(data)
2252
+ if isinstance(data, int):
2253
+ return torch.LongTensor([data])
2254
+ if isinstance(data, float):
2255
+ return torch.FloatTensor([data])
2256
+ raise TypeError(f'type {type(data)} cannot be converted to tensor.')
2257
+
2258
+
2259
+ @PIPELINES.register_module()
2260
+ class ToTensor:
2261
+ """Convert some values in results dict to `torch.Tensor` type in data
2262
+ loader pipeline.
2263
+
2264
+ Args:
2265
+ keys (Sequence[str]): Required keys to be converted.
2266
+ """
2267
+
2268
+ def __init__(self, keys):
2269
+ self.keys = keys
2270
+
2271
+ def __call__(self, results):
2272
+ """Performs the ToTensor formating.
2273
+
2274
+ Args:
2275
+ results (dict): The resulting dict to be modified and passed
2276
+ to the next transform in pipeline.
2277
+ """
2278
+ for key in self.keys:
2279
+ results[key] = to_tensor(results[key])
2280
+ return results
2281
+
2282
+ def __repr__(self):
2283
+ return f'{self.__class__.__name__}(keys={self.keys})'
2284
+
2285
+
2286
+
2287
+ @PIPELINES.register_module()
2288
+ class RandAugment:
2289
+ def __init__(self, auto_augment, input_size=224, interpolation='bicubic', level='video'):
2290
+ if isinstance(input_size, tuple):
2291
+ img_size = input_size[-2:]
2292
+ else:
2293
+ img_size = input_size
2294
+
2295
+ if auto_augment:
2296
+ assert isinstance(auto_augment, str)
2297
+ if isinstance(img_size, tuple):
2298
+ img_size_min = min(img_size)
2299
+ else:
2300
+ img_size_min = img_size
2301
+ aa_params = {"translate_const": int(img_size_min * 0.45)}
2302
+ if interpolation and interpolation != "random":
2303
+ aa_params["interpolation"] = _pil_interp(interpolation)
2304
+ self.auto_augment = auto_augment
2305
+ self.aa_params = aa_params
2306
+ self.level = level
2307
+
2308
+ def do_ops(self, ops, buf):
2309
+ for op in ops:
2310
+ buf = op(buf)
2311
+ return buf
2312
+
2313
+ def get_ops(self, ra_ops, num_layers, choice_weights):
2314
+ return np.random.choice(
2315
+ ra_ops,
2316
+ num_layers,
2317
+ replace=choice_weights is None,
2318
+ p=choice_weights,
2319
+ )
2320
+
2321
+ def __call__(self, results):
2322
+ if self.auto_augment.startswith("rand"):
2323
+ ra_ops, num_layers, choice_weights = rand_augment_transform(self.auto_augment, self.aa_params)
2324
+
2325
+ assert results['modality'] == 'RGB', 'RandAugment only supports RGB images.'
2326
+ in_type = results['imgs'][0].dtype.type
2327
+
2328
+ if self.level == 'video':
2329
+ ops = self.get_ops(ra_ops, num_layers, choice_weights)
2330
+ buffer = [
2331
+ transforms.ToPILImage()(frame) for frame in results['imgs']
2332
+ ]
2333
+ results['imgs'] = [
2334
+ np.asarray(self.do_ops(ops, buf)) for buf in buffer
2335
+ ]
2336
+
2337
+ elif self.level == 'image':
2338
+ buffer = [
2339
+ transforms.ToPILImage()(frame) for frame in results['imgs']
2340
+ ]
2341
+ results['imgs'] = []
2342
+ for buf in buffer:
2343
+ ops = self.get_ops(ra_ops, num_layers, choice_weights)
2344
+ buf = self.do_ops(ops, buf)
2345
+ results['imgs'].append(np.asarray(buf))
2346
+ else:
2347
+ raise ValueError(f'Unknown RandAugment level: {self.level!r} (expected "video" or "image")')
2348
+
2349
+ img_h, img_w, _ = results['imgs'][0].shape
2350
+ out_type = results['imgs'][0].dtype.type
2351
+ assert in_type == out_type, \
2352
+ ('RandAugment input dtype and output dtype are not the same. '
2353
+ f'Convert from {in_type} to {out_type}')
2354
+
2355
+ results['img_shape'] = (img_h, img_w)
2356
+
2357
+ return results
2358
+
2359
+
2360
+ def __repr__(self):
2361
+ repr_str = self.__class__.__name__ + f'(auto_augment={self.auto_augment})'
2362
+ return repr_str
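A minimal usage sketch (assuming pipeline.py's torchvision/timm imports are in scope): at 'video' level one op sequence is drawn and shared by every frame, while 'image' level redraws the ops per frame:

    aug = RandAugment('rand-m7-n4-mstd0.5', input_size=224, level='video')
    results = {'modality': 'RGB', 'imgs': [np.zeros((224, 224, 3), dtype=np.uint8)] * 8}
    results = aug(results)    # all 8 frames receive the same 4 sampled ops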
datasets/rand_augment.py ADDED
@@ -0,0 +1,532 @@
1
+ """
2
+ This implementation is based on
3
+ https://github.com/rwightman/pytorch-image-models/blob/master/timm/data/auto_augment.py
4
+ published under an Apache License 2.0.
5
+
6
+ COMMENT FROM ORIGINAL:
7
+ AutoAugment, RandAugment, and AugMix for PyTorch
8
+ This code implements the searched ImageNet policies with various tweaks and
9
+ improvements and does not include any of the search code. AA and RA
10
+ Implementation adapted from:
11
+ https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/autoaugment.py
12
+ AugMix adapted from:
13
+ https://github.com/google-research/augmix
14
+ Papers:
15
+ AutoAugment: Learning Augmentation Policies from Data
16
+ https://arxiv.org/abs/1805.09501
17
+ Learning Data Augmentation Strategies for Object Detection
18
+ https://arxiv.org/abs/1906.11172
19
+ RandAugment: Practical automated data augmentation...
20
+ https://arxiv.org/abs/1909.13719
21
+ AugMix: A Simple Data Processing Method to Improve Robustness and
22
+ Uncertainty https://arxiv.org/abs/1912.02781
23
+
24
+ Hacked together by / Copyright 2020 Ross Wightman
25
+ """
26
+
27
+ import math
28
+ import numpy as np
29
+ import random
30
+ import re
31
+ import PIL
32
+ from PIL import Image, ImageEnhance, ImageOps
33
+
34
+ _PIL_VER = tuple([int(x) for x in PIL.__version__.split(".")[:2]])
35
+
36
+ _FILL = (128, 128, 128)
37
+
38
+ # This signifies the max integer that the controller RNN could predict for the
39
+ # augmentation scheme.
40
+ _MAX_LEVEL = 10.0
41
+
42
+ _HPARAMS_DEFAULT = {
43
+ "translate_const": 250,
44
+ "img_mean": _FILL,
45
+ }
46
+
47
+ _RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
48
+
49
+
50
+ def _interpolation(kwargs):
51
+ interpolation = kwargs.pop("resample", Image.BILINEAR)
52
+ if isinstance(interpolation, (list, tuple)):
53
+ return random.choice(interpolation)
54
+ else:
55
+ return interpolation
56
+
57
+
58
+ def _check_args_tf(kwargs):
59
+ if "fillcolor" in kwargs and _PIL_VER < (5, 0):
60
+ kwargs.pop("fillcolor")
61
+ kwargs["resample"] = _interpolation(kwargs)
62
+
63
+
64
+ def shear_x(img, factor, **kwargs):
65
+ _check_args_tf(kwargs)
66
+ return img.transform(
67
+ img.size, Image.AFFINE, (1, factor, 0, 0, 1, 0), **kwargs
68
+ )
69
+
70
+
71
+ def shear_y(img, factor, **kwargs):
72
+ _check_args_tf(kwargs)
73
+ return img.transform(
74
+ img.size, Image.AFFINE, (1, 0, 0, factor, 1, 0), **kwargs
75
+ )
76
+
77
+
78
+ def translate_x_rel(img, pct, **kwargs):
79
+ pixels = pct * img.size[0]
80
+ _check_args_tf(kwargs)
81
+ return img.transform(
82
+ img.size, Image.AFFINE, (1, 0, pixels, 0, 1, 0), **kwargs
83
+ )
84
+
85
+
86
+ def translate_y_rel(img, pct, **kwargs):
87
+ pixels = pct * img.size[1]
88
+ _check_args_tf(kwargs)
89
+ return img.transform(
90
+ img.size, Image.AFFINE, (1, 0, 0, 0, 1, pixels), **kwargs
91
+ )
92
+
93
+
94
+ def translate_x_abs(img, pixels, **kwargs):
95
+ _check_args_tf(kwargs)
96
+ return img.transform(
97
+ img.size, Image.AFFINE, (1, 0, pixels, 0, 1, 0), **kwargs
98
+ )
99
+
100
+
101
+ def translate_y_abs(img, pixels, **kwargs):
102
+ _check_args_tf(kwargs)
103
+ return img.transform(
104
+ img.size, Image.AFFINE, (1, 0, 0, 0, 1, pixels), **kwargs
105
+ )
106
+
107
+
108
+ def rotate(img, degrees, **kwargs):
109
+ _check_args_tf(kwargs)
110
+ if _PIL_VER >= (5, 2):
111
+ return img.rotate(degrees, **kwargs)
112
+ elif _PIL_VER >= (5, 0):
113
+ w, h = img.size
114
+ post_trans = (0, 0)
115
+ rotn_center = (w / 2.0, h / 2.0)
116
+ angle = -math.radians(degrees)
117
+ matrix = [
118
+ round(math.cos(angle), 15),
119
+ round(math.sin(angle), 15),
120
+ 0.0,
121
+ round(-math.sin(angle), 15),
122
+ round(math.cos(angle), 15),
123
+ 0.0,
124
+ ]
125
+
126
+ def transform(x, y, matrix):
127
+ (a, b, c, d, e, f) = matrix
128
+ return a * x + b * y + c, d * x + e * y + f
129
+
130
+ matrix[2], matrix[5] = transform(
131
+ -rotn_center[0] - post_trans[0],
132
+ -rotn_center[1] - post_trans[1],
133
+ matrix,
134
+ )
135
+ matrix[2] += rotn_center[0]
136
+ matrix[5] += rotn_center[1]
137
+ return img.transform(img.size, Image.AFFINE, matrix, **kwargs)
138
+ else:
139
+ return img.rotate(degrees, resample=kwargs["resample"])
140
+
141
+
142
+ def auto_contrast(img, **__):
143
+ return ImageOps.autocontrast(img)
144
+
145
+
146
+ def invert(img, **__):
147
+ return ImageOps.invert(img)
148
+
149
+
150
+ def equalize(img, **__):
151
+ return ImageOps.equalize(img)
152
+
153
+
154
+ def solarize(img, thresh, **__):
155
+ return ImageOps.solarize(img, thresh)
156
+
157
+
158
+ def solarize_add(img, add, thresh=128, **__):
159
+ lut = []
160
+ for i in range(256):
161
+ if i < thresh:
162
+ lut.append(min(255, i + add))
163
+ else:
164
+ lut.append(i)
165
+ if img.mode in ("L", "RGB"):
166
+ if img.mode == "RGB" and len(lut) == 256:
167
+ lut = lut + lut + lut
168
+ return img.point(lut)
169
+ else:
170
+ return img
171
+
172
+
173
+ def posterize(img, bits_to_keep, **__):
174
+ if bits_to_keep >= 8:
175
+ return img
176
+ return ImageOps.posterize(img, bits_to_keep)
177
+
178
+
179
+ def contrast(img, factor, **__):
180
+ return ImageEnhance.Contrast(img).enhance(factor)
181
+
182
+
183
+ def color(img, factor, **__):
184
+ return ImageEnhance.Color(img).enhance(factor)
185
+
186
+
187
+ def brightness(img, factor, **__):
188
+ return ImageEnhance.Brightness(img).enhance(factor)
189
+
190
+
191
+ def sharpness(img, factor, **__):
192
+ return ImageEnhance.Sharpness(img).enhance(factor)
193
+
194
+
195
+ def _randomly_negate(v):
196
+ """With 50% prob, negate the value"""
197
+ return -v if random.random() > 0.5 else v
198
+
199
+
200
+ def _rotate_level_to_arg(level, _hparams):
201
+ # range [-30, 30]
202
+ level = (level / _MAX_LEVEL) * 30.0
203
+ level = _randomly_negate(level)
204
+ return (level,)
205
+
206
+
207
+ def _enhance_level_to_arg(level, _hparams):
208
+ # range [0.1, 1.9]
209
+ return ((level / _MAX_LEVEL) * 1.8 + 0.1,)
210
+
211
+
212
+ def _enhance_increasing_level_to_arg(level, _hparams):
213
+ # the 'no change' level is 1.0, moving away from that towards 0. or 2.0 increases the enhancement blend
214
+ # range [0.1, 1.9]
215
+ level = (level / _MAX_LEVEL) * 0.9
216
+ level = 1.0 + _randomly_negate(level)
217
+ return (level,)
218
+
219
+
220
+ def _shear_level_to_arg(level, _hparams):
221
+ # range [-0.3, 0.3]
222
+ level = (level / _MAX_LEVEL) * 0.3
223
+ level = _randomly_negate(level)
224
+ return (level,)
225
+
226
+
227
+ def _translate_abs_level_to_arg(level, hparams):
228
+ translate_const = hparams["translate_const"]
229
+ level = (level / _MAX_LEVEL) * float(translate_const)
230
+ level = _randomly_negate(level)
231
+ return (level,)
232
+
233
+
234
+ def _translate_rel_level_to_arg(level, hparams):
235
+ # default range [-0.45, 0.45]
236
+ translate_pct = hparams.get("translate_pct", 0.45)
237
+ level = (level / _MAX_LEVEL) * translate_pct
238
+ level = _randomly_negate(level)
239
+ return (level,)
240
+
241
+
242
+ def _posterize_level_to_arg(level, _hparams):
243
+ # As per Tensorflow TPU EfficientNet impl
244
+ # range [0, 4], 'keep 0 up to 4 MSB of original image'
245
+ # intensity/severity of augmentation decreases with level
246
+ return (int((level / _MAX_LEVEL) * 4),)
247
+
248
+
249
+ def _posterize_increasing_level_to_arg(level, hparams):
250
+ # As per Tensorflow models research and UDA impl
251
+ # range [4, 0], 'keep 4 down to 0 MSB of original image',
252
+ # intensity/severity of augmentation increases with level
253
+ return (4 - _posterize_level_to_arg(level, hparams)[0],)
254
+
255
+
256
+ def _posterize_original_level_to_arg(level, _hparams):
257
+ # As per original AutoAugment paper description
258
+ # range [4, 8], 'keep 4 up to 8 MSB of image'
259
+ # intensity/severity of augmentation decreases with level
260
+ return (int((level / _MAX_LEVEL) * 4) + 4,)
261
+
262
+
263
+ def _solarize_level_to_arg(level, _hparams):
264
+ # range [0, 256]
265
+ # intensity/severity of augmentation decreases with level
266
+ return (int((level / _MAX_LEVEL) * 256),)
267
+
268
+
269
+ def _solarize_increasing_level_to_arg(level, _hparams):
270
+ # range [0, 256]
271
+ # intensity/severity of augmentation increases with level
272
+ return (256 - _solarize_level_to_arg(level, _hparams)[0],)
273
+
274
+
275
+ def _solarize_add_level_to_arg(level, _hparams):
276
+ # range [0, 110]
277
+ return (int((level / _MAX_LEVEL) * 110),)
278
+
279
+
280
+ LEVEL_TO_ARG = {
281
+ "AutoContrast": None,
282
+ "Equalize": None,
283
+ "Invert": None,
284
+ "Rotate": _rotate_level_to_arg,
285
+ # There are several variations of the posterize level scaling in various Tensorflow/Google repositories/papers
286
+ "Posterize": _posterize_level_to_arg,
287
+ "PosterizeIncreasing": _posterize_increasing_level_to_arg,
288
+ "PosterizeOriginal": _posterize_original_level_to_arg,
289
+ "Solarize": _solarize_level_to_arg,
290
+ "SolarizeIncreasing": _solarize_increasing_level_to_arg,
291
+ "SolarizeAdd": _solarize_add_level_to_arg,
292
+ "Color": _enhance_level_to_arg,
293
+ "ColorIncreasing": _enhance_increasing_level_to_arg,
294
+ "Contrast": _enhance_level_to_arg,
295
+ "ContrastIncreasing": _enhance_increasing_level_to_arg,
296
+ "Brightness": _enhance_level_to_arg,
297
+ "BrightnessIncreasing": _enhance_increasing_level_to_arg,
298
+ "Sharpness": _enhance_level_to_arg,
299
+ "SharpnessIncreasing": _enhance_increasing_level_to_arg,
300
+ "ShearX": _shear_level_to_arg,
301
+ "ShearY": _shear_level_to_arg,
302
+ "TranslateX": _translate_abs_level_to_arg,
303
+ "TranslateY": _translate_abs_level_to_arg,
304
+ "TranslateXRel": _translate_rel_level_to_arg,
305
+ "TranslateYRel": _translate_rel_level_to_arg,
306
+ }
307
+
308
+
309
+ NAME_TO_OP = {
310
+ "AutoContrast": auto_contrast,
311
+ "Equalize": equalize,
312
+ "Invert": invert,
313
+ "Rotate": rotate,
314
+ "Posterize": posterize,
315
+ "PosterizeIncreasing": posterize,
316
+ "PosterizeOriginal": posterize,
317
+ "Solarize": solarize,
318
+ "SolarizeIncreasing": solarize,
319
+ "SolarizeAdd": solarize_add,
320
+ "Color": color,
321
+ "ColorIncreasing": color,
322
+ "Contrast": contrast,
323
+ "ContrastIncreasing": contrast,
324
+ "Brightness": brightness,
325
+ "BrightnessIncreasing": brightness,
326
+ "Sharpness": sharpness,
327
+ "SharpnessIncreasing": sharpness,
328
+ "ShearX": shear_x,
329
+ "ShearY": shear_y,
330
+ "TranslateX": translate_x_abs,
331
+ "TranslateY": translate_y_abs,
332
+ "TranslateXRel": translate_x_rel,
333
+ "TranslateYRel": translate_y_rel,
334
+ }
335
+
336
+
337
+ class AugmentOp:
338
+ """
339
+ Apply an augmentation op to a single PIL image or a list of video frames.
340
+ """
341
+
342
+ def __init__(self, name, prob=0.5, magnitude=10, hparams=None):
343
+ hparams = hparams or _HPARAMS_DEFAULT
344
+ self.aug_fn = NAME_TO_OP[name]
345
+ self.level_fn = LEVEL_TO_ARG[name]
346
+ self.prob = prob
347
+ self.magnitude = magnitude
348
+ self.hparams = hparams.copy()
349
+ self.kwargs = {
350
+ "fillcolor": hparams["img_mean"]
351
+ if "img_mean" in hparams
352
+ else _FILL,
353
+ "resample": hparams["interpolation"]
354
+ if "interpolation" in hparams
355
+ else _RANDOM_INTERPOLATION,
356
+ }
357
+
358
+ # If magnitude_std is > 0, we introduce some randomness
359
+ # in the usually fixed policy and sample magnitude from a normal distribution
360
+ # with mean `magnitude` and std-dev of `magnitude_std`.
361
+ # NOTE This is my own hack, being tested, not in papers or reference impls.
362
+ self.magnitude_std = self.hparams.get("magnitude_std", 0)
363
+
364
+ def __call__(self, img_list):
365
+ if self.prob < 1.0 and random.random() > self.prob:
366
+ return img_list
367
+ magnitude = self.magnitude
368
+ if self.magnitude_std and self.magnitude_std > 0:
369
+ magnitude = random.gauss(magnitude, self.magnitude_std)
370
+ magnitude = min(_MAX_LEVEL, max(0, magnitude)) # clip to valid range
371
+ level_args = (
372
+ self.level_fn(magnitude, self.hparams)
373
+ if self.level_fn is not None
374
+ else ()
375
+ )
376
+
377
+ if isinstance(img_list, list):
378
+ return [
379
+ self.aug_fn(img, *level_args, **self.kwargs) for img in img_list
380
+ ]
381
+ else:
382
+ return self.aug_fn(img_list, *level_args, **self.kwargs)
383
+
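A small usage sketch (hypothetical frames): the level arguments are drawn once per __call__, so every frame in the list receives the identical transform, keeping the augmentation temporally consistent:

    from PIL import Image
    frames = [Image.new('RGB', (224, 224)) for _ in range(8)]
    op = AugmentOp('Rotate', prob=1.0, magnitude=9)
    frames = op(frames)    # one +/-27 degree rotation, applied to all 8 frames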
384
+
385
+ _RAND_TRANSFORMS = [
386
+ "AutoContrast",
387
+ "Equalize",
388
+ "Invert",
389
+ "Rotate",
390
+ "Posterize",
391
+ "Solarize",
392
+ "SolarizeAdd",
393
+ "Color",
394
+ "Contrast",
395
+ "Brightness",
396
+ "Sharpness",
397
+ "ShearX",
398
+ "ShearY",
399
+ "TranslateXRel",
400
+ "TranslateYRel",
401
+ ]
402
+
403
+
404
+ _RAND_INCREASING_TRANSFORMS = [
405
+ "AutoContrast",
406
+ "Equalize",
407
+ "Invert",
408
+ "Rotate",
409
+ "PosterizeIncreasing",
410
+ "SolarizeIncreasing",
411
+ "SolarizeAdd",
412
+ "ColorIncreasing",
413
+ "ContrastIncreasing",
414
+ "BrightnessIncreasing",
415
+ "SharpnessIncreasing",
416
+ "ShearX",
417
+ "ShearY",
418
+ "TranslateXRel",
419
+ "TranslateYRel",
420
+ ]
421
+
422
+
423
+ # These experimental weights are based loosely on the relative improvements mentioned in paper.
424
+ # They may not result in increased performance, but could likely be tuned to do so.
425
+ _RAND_CHOICE_WEIGHTS_0 = {
426
+ "Rotate": 0.3,
427
+ "ShearX": 0.2,
428
+ "ShearY": 0.2,
429
+ "TranslateXRel": 0.1,
430
+ "TranslateYRel": 0.1,
431
+ "Color": 0.025,
432
+ "Sharpness": 0.025,
433
+ "AutoContrast": 0.025,
434
+ "Solarize": 0.005,
435
+ "SolarizeAdd": 0.005,
436
+ "Contrast": 0.005,
437
+ "Brightness": 0.005,
438
+ "Equalize": 0.005,
439
+ "Posterize": 0,
440
+ "Invert": 0,
441
+ }
442
+
443
+
444
+ def _select_rand_weights(weight_idx=0, transforms=None):
445
+ transforms = transforms or _RAND_TRANSFORMS
446
+ assert weight_idx == 0 # only one set of weights currently
447
+ rand_weights = _RAND_CHOICE_WEIGHTS_0
448
+ probs = [rand_weights[k] for k in transforms]
449
+ probs /= np.sum(probs)
450
+ return probs
451
+
452
+
453
+ def rand_augment_ops(magnitude=10, hparams=None, transforms=None):
454
+ hparams = hparams or _HPARAMS_DEFAULT
455
+ transforms = transforms or _RAND_TRANSFORMS
456
+ return [
457
+ AugmentOp(name, prob=0.5, magnitude=magnitude, hparams=hparams)
458
+ for name in transforms
459
+ ]
460
+
461
+
462
+ class RandAugment:
463
+ def __init__(self, ops, num_layers=2, choice_weights=None):
464
+ self.ops = ops
465
+ self.num_layers = num_layers
466
+ self.choice_weights = choice_weights
467
+
468
+ def __call__(self, img):
469
+ # no replacement when using weighted choice
470
+ ops = np.random.choice(
471
+ self.ops,
472
+ self.num_layers,
473
+ replace=self.choice_weights is None,
474
+ p=self.choice_weights,
475
+ )
476
+ for op in ops:
477
+ img = op(img)
478
+ return img
479
+
480
+
481
+ def rand_augment_transform(config_str, hparams):
482
+ """
483
+ RandAugment: Practical automated data augmentation... - https://arxiv.org/abs/1909.13719
484
+
485
+ Create a RandAugment transform
486
+ :param config_str: String defining configuration of random augmentation. Consists of multiple sections separated by
487
+ dashes ('-'). The first section defines the specific variant of rand augment (currently only 'rand'). The remaining
488
+ sections, which are not order specific, determine
489
+ 'm' - integer magnitude of rand augment
490
+ 'n' - integer num layers (number of transform ops selected per image)
491
+ 'w' - integer probability weight index (index of a set of weights to influence choice of op)
492
+ 'mstd' - float std deviation of magnitude noise applied
493
+ 'inc' - integer (bool), use augmentations that increase in severity with magnitude (default: 0)
494
+ Ex 'rand-m9-n3-mstd0.5' results in RandAugment with magnitude 9, num_layers 3, magnitude_std 0.5
495
+ 'rand-mstd1-w0' results in magnitude_std 1.0, weights 0, default magnitude of 10 and num_layers 2
496
+ :param hparams: Other hparams (kwargs) for the RandAugmentation scheme
497
+ :return: A PyTorch compatible Transform
498
+ """
499
+ magnitude = _MAX_LEVEL # default to _MAX_LEVEL for magnitude (currently 10)
500
+ num_layers = 2 # default to 2 ops per image
501
+ weight_idx = None # default to no probability weights for op choice
502
+ transforms = _RAND_TRANSFORMS
503
+ config = config_str.split("-")
504
+ assert config[0] == "rand"
505
+ config = config[1:]
506
+ for c in config:
507
+ cs = re.split(r"(\d.*)", c)
508
+ if len(cs) < 2:
509
+ continue
510
+ key, val = cs[:2]
511
+ if key == "mstd":
512
+ # noise param injected via hparams for now
513
+ hparams.setdefault("magnitude_std", float(val))
514
+ elif key == "inc":
515
+ if bool(val):
516
+ transforms = _RAND_INCREASING_TRANSFORMS
517
+ elif key == "m":
518
+ magnitude = int(val)
519
+ elif key == "n":
520
+ num_layers = int(val)
521
+ elif key == "w":
522
+ weight_idx = int(val)
523
+ else:
524
+ raise NotImplementedError(f'Unknown RandAugment config section: {c}')
525
+ ra_ops = rand_augment_ops(
526
+ magnitude=magnitude, hparams=hparams, transforms=transforms
527
+ )
528
+ choice_weights = (
529
+ None if weight_idx is None else _select_rand_weights(weight_idx)
530
+ )
531
+ return ra_ops, num_layers, choice_weights
532
+ # return RandAugment(ra_ops, num_layers, choice_weights=choice_weights)
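For example (note that this fork returns the raw (ops, num_layers, choice_weights) triple instead of a RandAugment instance, as the commented line above shows):

    ra_ops, num_layers, weights = rand_augment_transform('rand-m7-n4-mstd0.5', {'translate_const': 100})
    # magnitude=7, num_layers=4, magnitude_std=0.5, len(ra_ops) == 15, weights is None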
labels/challenge.csv ADDED
@@ -0,0 +1,46 @@
1
+ id,name
2
+ 0,walking
3
+ 1,entering a space
4
+ 2,leaving a space
5
+ 3,sitting
6
+ 4,standing
7
+ 5,sitting down
8
+ 6,getting up
9
+ 7,lying down
10
+ 8,cooking
11
+ 9,making coffee
12
+ 10,making tea
13
+ 11,wiping table
14
+ 12,spreading bedding
15
+ 13,folding bedding
16
+ 14,cleaning dishes
17
+ 15,vacuuming
18
+ 16,looking for something
19
+ 17,putting on shoes
20
+ 18,taking off shoes
21
+ 19,putting on glasses
22
+ 20,taking off glasses
23
+ 21,cleaning
24
+ 22,writing
25
+ 23,talking
26
+ 24,beckoning
27
+ 25,waving a hand
28
+ 26,clapping
29
+ 27,pointing
30
+ 28,shaking hands
31
+ 29,phone calls
32
+ 30,using a telephone
33
+ 31,hugging
34
+ 32,washing face
35
+ 33,washing hands
36
+ 34,brushing teeth
37
+ 35,brushing hair
38
+ 36,massaging a shoulder oneself
39
+ 37,taking medicine
40
+ 38,eating
41
+ 39,drinking
42
+ 40,watching tv
43
+ 41,reading
44
+ 42,using a laptop
45
+ 43,using a tablet
46
+ 44,exercising
labels/challenge_composite.csv ADDED
@@ -0,0 +1,7 @@
1
+ id,name
2
+ 0,locomotion
3
+ 1,manipulation
4
+ 2,communication
5
+ 3,hygiene
6
+ 4,eating_drinking
7
+ 5,leisure
labels/etri_label.csv ADDED
@@ -0,0 +1,56 @@
1
+ id,name
2
+ 1,eating food with a fork
3
+ 2,pouring water into a cup
4
+ 3,taking medicine
5
+ 4,drinking water
6
+ 5,putting food in the fridge/taking food from the fridge   
7
+ 6,trimming vegetables
8
+ 7,peeling fruit
9
+ 8,using a gas stove
10
+ 9,cutting vegetable on the cutting board
11
+ 10,brushing teeth
12
+ 11,washing hands
13
+ 12,washing face
14
+ 13,wiping face with a towel
15
+ 14,putting on cosmetics
16
+ 15,putting on lipstick
17
+ 16,brushing hair
18
+ 17,blow drying hair
19
+ 18,putting on a jacket
20
+ 19,taking off a jacket
21
+ 20,putting on/taking off shoes
22
+ 21,putting on/taking off glasses
23
+ 22,washing the dishes
24
+ 23,vacuumming the floor
25
+ 24,scrubbing the floor with a rag
26
+ 25,wipping off the dinning table
27
+ 26,rubbing up furniture
28
+ 27,spreading bedding/folding bedding
29
+ 28,washing a towel by hands
30
+ 29,hanging out laundry
31
+ 30,looking around for something
32
+ 31,using a remote control
33
+ 32,reading a book
34
+ 33,reading a newspaper
35
+ 34,handwriting
36
+ 35,talking on the phone
37
+ 36,playing with a mobile phone
38
+ 37,using a computer
39
+ 38,smoking
40
+ 39,clapping
41
+ 40,rubbing face with hands
42
+ 41,doing freehand exercise
43
+ 42,doing neck roll exercise
44
+ 43,massaging a shoulder oneself
45
+ 44,taking a bow
46
+ 45,talking to each other
47
+ 46,handshaking
48
+ 47,hugging each other
49
+ 48,fighting each other
50
+ 49,waving a hand
51
+ 50,flapping a hand up and down (beckoning)     
52
+ 51,pointing with a finger
53
+ 52,opening the door and walking in
54
+ 53,fallen on the floor
55
+ 54,sitting up/standing up
56
+ 55,lying down
labels/sh_cs_label.csv ADDED
@@ -0,0 +1,32 @@
1
+ id,name
2
+ 0,Pour.Frombottle
3
+ 1,WatchTV
4
+ 2,Readbook
5
+ 3,Sitdown
6
+ 4,Walk
7
+ 5,Eat.Snack
8
+ 6,Cook.Cleanup
9
+ 7,Cook.Stir
10
+ 8,Enter
11
+ 9,Eat.Attable
12
+ 10,Drink.Fromcup
13
+ 11,Cook.Cleandishes
14
+ 12,Laydown
15
+ 13,Drink.Frombottle
16
+ 14,Leave
17
+ 15,Drink.Fromcan
18
+ 16,Getup
19
+ 17,Pour.Fromkettle
20
+ 18,Usetelephone
21
+ 19,Takepills
22
+ 20,Maketea.Boilwater
23
+ 21,Uselaptop
24
+ 22,Cutbread
25
+ 23,Cook.Usestove
26
+ 24,Makecoffee.Pourgrains
27
+ 25,Cook.Cut
28
+ 26,Pour.Fromcan
29
+ 27,Usetablet
30
+ 28,Maketea.Insertteabag
31
+ 29,Drink.Fromglass
32
+ 30,Makecoffee.Pourwater
main_challenge.py ADDED
@@ -0,0 +1,292 @@
1
+ import os
2
+ import torch
3
+ import torch.nn as nn
4
+ import torch.backends.cudnn as cudnn
5
+ import torch.distributed as dist
6
+ import argparse
7
+ import datetime
8
+ import shutil
9
+ from pathlib import Path
10
+ from utils.config import get_config
11
+ from utils.optimizer import build_optimizer, build_scheduler
12
+ from utils.tools import AverageMeter, reduce_tensor, epoch_saving, load_checkpoint, generate_text, auto_resume_helper
13
+ from datasets.build import build_dataloader
14
+ from utils.logger import create_logger
15
+ import time, csv
16
+ import numpy as np
17
+ import random
18
+ from apex import amp
19
+ from timm.loss import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy
20
+ from datasets.blending import CutmixMixupBlending
21
+ from utils.config import get_config
22
+ from trainers import vificlip
23
+
24
+
25
+ def parse_option():
26
+ parser = argparse.ArgumentParser()
27
+ parser.add_argument('--config', '-cfg', required=True, type=str, default='configs/k400/32_8.yaml')
28
+ parser.add_argument(
29
+ "--opts",
30
+ help="Modify config options by adding 'KEY VALUE' pairs. ",
31
+ default=None,
32
+ nargs='+',
33
+ )
34
+ parser.add_argument('--output', type=str, default="exp")
35
+ parser.add_argument('--resume', type=str)
36
+ parser.add_argument('--pretrained', type=str)
37
+ parser.add_argument('--only_test', action='store_true')
38
+ parser.add_argument('--batch-size', type=int)
39
+ parser.add_argument('--accumulation-steps', type=int)
40
+
41
+ parser.add_argument("--local_rank", type=int, default=-1, help='local rank for DistributedDataParallel')
42
+ args = parser.parse_args()
43
+
44
+ config = get_config(args)
45
+
46
+ return args, config
47
+
48
+
49
+ def main(config):
50
+ train_data, val_data, train_loader, val_loader = build_dataloader(logger, config)
51
+ class_names = [class_name for i, class_name in train_data.classes]
52
+
53
+ # Custom trainer for different variants of ViFi-CLIP
54
+ model = vificlip.returnCLIP(config,
55
+ logger=logger,
56
+ class_names=class_names,)
57
+
58
+ model = model.cuda() # changing to cuda here
59
+
60
+ mixup_fn = None
61
+ if config.AUG.MIXUP > 0:
62
+ criterion = SoftTargetCrossEntropy()
63
+ mixup_fn = CutmixMixupBlending(num_classes=config.DATA.NUM_CLASSES,
64
+ smoothing=config.AUG.LABEL_SMOOTH,
65
+ mixup_alpha=config.AUG.MIXUP,
66
+ cutmix_alpha=config.AUG.CUTMIX,
67
+ switch_prob=config.AUG.MIXUP_SWITCH_PROB)
68
+ elif config.AUG.LABEL_SMOOTH > 0:
69
+ criterion = LabelSmoothingCrossEntropy(smoothing=config.AUG.LABEL_SMOOTH)
70
+ else:
71
+ criterion = nn.CrossEntropyLoss()
72
+
73
+ optimizer = build_optimizer(config, model)
74
+ lr_scheduler = build_scheduler(config, optimizer, len(train_loader))
75
+ if config.TRAIN.OPT_LEVEL != 'O0':
76
+ model, optimizer = amp.initialize(models=model, optimizers=optimizer, opt_level=config.TRAIN.OPT_LEVEL)
77
+
78
+ model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[config.LOCAL_RANK], broadcast_buffers=False,
79
+ find_unused_parameters=False)
80
+
81
+ start_epoch, max_accuracy = 0, 0.0
82
+
83
+ if config.TRAIN.AUTO_RESUME:
84
+ resume_file = auto_resume_helper(config.OUTPUT)
85
+ if resume_file:
86
+ config.defrost()
87
+ config.MODEL.RESUME = resume_file
88
+ config.freeze()
89
+ logger.info(f'auto resuming from {resume_file}')
90
+ else:
91
+ logger.info(f'no checkpoint found in {config.OUTPUT}, ignoring auto resume')
92
+
93
+ if config.MODEL.RESUME:
94
+ start_epoch, max_accuracy = load_checkpoint(config, model, optimizer, lr_scheduler, logger)
95
+ if start_epoch > 1:
96
+ logger.info("resetting epochs no and max. accuracy to 0 after loading pre-trained weights")
97
+ start_epoch = 0
98
+ max_accuracy = 0
99
+ if config.TEST.ONLY_TEST:
100
+ acc1 = validate(val_loader, model, config)
101
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
102
+ return
103
+
104
+ for epoch in range(start_epoch, config.TRAIN.EPOCHS):
105
+ train_loader.sampler.set_epoch(epoch)
106
+ train_one_epoch(epoch, model, criterion, optimizer, lr_scheduler, train_loader, config, mixup_fn)
107
+
108
+ if epoch % config.SAVE_FREQ == 0 or epoch == (config.TRAIN.EPOCHS - 1):
109
+ acc1 = validate(val_loader, model, config)
110
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
111
+ is_best = acc1 > max_accuracy
112
+ max_accuracy = max(max_accuracy, acc1)
113
+ logger.info(f'Max accuracy: {max_accuracy:.2f}%')
114
+ if dist.get_rank() == 0 and (
115
+ epoch % config.SAVE_FREQ == 0 or epoch == (config.TRAIN.EPOCHS - 1) or is_best):
116
+ epoch_saving(config, epoch, model, max_accuracy, optimizer, lr_scheduler, logger, config.OUTPUT,
117
+ is_best)
118
+ # Now doing the multi-view inference crop for videos
119
+ # 4 CLIPs are obtained from each video, and for each CLIP, we get 3 crops (augmentations)
120
+ multi_view_inference = config.TEST.MULTI_VIEW_INFERENCE
121
+ if multi_view_inference:
122
+ config.defrost()
123
+ config.TEST.NUM_CLIP = 4
124
+ config.TEST.NUM_CROP = 3
125
+ config.freeze()
126
+ train_data, val_data, train_loader, val_loader = build_dataloader(logger, config)
127
+ acc1 = validate(val_loader, model, config)
128
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
129
+
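The resulting view arithmetic in validate() below, as a sketch (assuming the default 32-frame clips):

    views = config.TEST.NUM_CLIP * config.TEST.NUM_CROP    # 4 * 3 = 12
    # the loader then delivers imgs of shape [b, views * t, 3, h, w];
    # validate() views them as [b, views, t, 3, h, w] and sums per-view softmax scores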
130
+
131
+ def train_one_epoch(epoch, model, criterion, optimizer, lr_scheduler, train_loader, config, mixup_fn):
132
+ model.train()
133
+ optimizer.zero_grad()
134
+
135
+ num_steps = len(train_loader)
136
+ batch_time = AverageMeter()
137
+ tot_loss_meter = AverageMeter()
138
+
139
+ start = time.time()
140
+ end = time.time()
141
+
142
+
143
+ for idx, batch_data in enumerate(train_loader):
144
+
145
+ images = batch_data["imgs"].cuda(non_blocking=True)
146
+ label_id = batch_data["label"].cuda(non_blocking=True)
147
+ label_id = label_id.reshape(-1)
148
+ images = images.view((-1, config.DATA.NUM_FRAMES, 3) + images.size()[-2:])
149
+
150
+ if mixup_fn is not None:
151
+ images, label_id = mixup_fn(images, label_id)
152
+
153
+ output = model(images)
154
+
155
+ total_loss = criterion(output, label_id)
156
+ total_loss = total_loss / config.TRAIN.ACCUMULATION_STEPS
157
+
158
+ if config.TRAIN.ACCUMULATION_STEPS == 1:
159
+ optimizer.zero_grad()
160
+ if config.TRAIN.OPT_LEVEL != 'O0':
161
+ with amp.scale_loss(total_loss, optimizer) as scaled_loss:
162
+ scaled_loss.backward()
163
+ else:
164
+ total_loss.backward()
165
+ if config.TRAIN.ACCUMULATION_STEPS > 1:
166
+ if (idx + 1) % config.TRAIN.ACCUMULATION_STEPS == 0:
167
+ optimizer.step()
168
+ optimizer.zero_grad()
169
+ lr_scheduler.step_update(epoch * num_steps + idx)
170
+ else:
171
+ optimizer.step()
172
+ lr_scheduler.step_update(epoch * num_steps + idx)
173
+
174
+ torch.cuda.synchronize()
175
+
176
+ tot_loss_meter.update(total_loss.item(), len(label_id))
177
+ batch_time.update(time.time() - end)
178
+ end = time.time()
179
+
180
+ if idx % config.PRINT_FREQ == 0:
181
+ lr = optimizer.param_groups[0]['lr']
182
+ memory_used = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
183
+ etas = batch_time.avg * (num_steps - idx)
184
+ logger.info(
185
+ f'Train: [{epoch}/{config.TRAIN.EPOCHS}][{idx}/{num_steps}]\t'
186
+ f'eta {datetime.timedelta(seconds=int(etas))} lr {lr:.9f}\t'
187
+ f'time {batch_time.val:.4f} ({batch_time.avg:.4f})\t'
188
+ f'tot_loss {tot_loss_meter.val:.4f} ({tot_loss_meter.avg:.4f})\t'
189
+ f'mem {memory_used:.0f}MB')
190
+ epoch_time = time.time() - start
191
+ logger.info(f"EPOCH {epoch} training takes {datetime.timedelta(seconds=int(epoch_time))}")
192
+
193
+
194
+ @torch.no_grad()
195
+ def validate(val_loader, model, config):
196
+ model.eval()
197
+ results = []
198
+ id = 0
199
+ acc1_meter, acc5_meter = AverageMeter(), AverageMeter()
200
+ with torch.no_grad():
201
+ logger.info(f"{config.TEST.NUM_CLIP * config.TEST.NUM_CROP} views inference")
202
+ for idx, batch_data in enumerate(val_loader):
203
+ _image = batch_data["imgs"]
204
+ label_id = batch_data["label"]
205
+ label_id = label_id.reshape(-1)
206
+ # print(idx)
207
+
208
+ b, tn, c, h, w = _image.size()
209
+ t = config.DATA.NUM_FRAMES
210
+ n = tn // t
211
+ _image = _image.view(b, n, t, c, h, w)
212
+
213
+ tot_similarity = torch.zeros((b, config.DATA.NUM_CLASSES)).cuda()
214
+ for i in range(n):
215
+ image = _image[:, i, :, :, :, :] # [b,t,c,h,w]
216
+ label_id = label_id.cuda(non_blocking=True)
217
+ image_input = image.cuda(non_blocking=True)
218
+
219
+ if config.TRAIN.OPT_LEVEL == 'O2':
220
+ image_input = image_input.half()
221
+
222
+ output = model(image_input)
223
+
224
+ similarity = output.view(b, -1).softmax(dim=-1)
225
+ tot_similarity += similarity
226
+
227
+ values_1, indices_1 = tot_similarity.topk(1, dim=-1)
228
+ values_5, indices_5 = tot_similarity.topk(5, dim=-1)
229
+ for i in range(b):
230
+ results.append((id, indices_1[i].squeeze().tolist()))
231
+ id += 1
232
+ '''
233
+ acc1, acc5 = 0, 0
234
+ for i in range(b):
235
+ if indices_1[i] == label_id[i]:
236
+ acc1 += 1
237
+ if label_id[i] in indices_5[i]:
238
+ acc5 += 1
239
+
240
+ acc1_meter.update(float(acc1) / b * 100, b)
241
+ acc5_meter.update(float(acc5) / b * 100, b)
242
+ if idx % config.PRINT_FREQ == 0:
243
+ logger.info(
244
+ f'Test: [{idx}/{len(val_loader)}]\t'
245
+ f'Acc@1: {acc1_meter.avg:.3f}\t'
246
+ )
247
+ '''
248
+ with open('batch_indices.csv', 'w', newline='') as f:
249
+ writer = csv.writer(f)
250
+ writer.writerow(['id', 'Top-1 Index'])
251
+ writer.writerows(results)
252
+ #acc1_meter.sync()
253
+ #acc5_meter.sync()
254
+ #logger.info(f' * Acc@1 {acc1_meter.avg:.3f} Acc@5 {acc5_meter.avg:.3f}')
255
+ return acc1_meter.avg
256
+
257
+
258
+ if __name__ == '__main__':
259
+ # prepare config
260
+ args, config = parse_option()
261
+
262
+ # init_distributed
263
+ if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
264
+ rank = int(os.environ["RANK"])
265
+ world_size = int(os.environ['WORLD_SIZE'])
266
+ print(f"RANK and WORLD_SIZE in environ: {rank}/{world_size}")
267
+ else:
268
+ rank = -1
269
+ world_size = -1
270
+ torch.cuda.set_device(args.local_rank)
271
+ torch.distributed.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
272
+ torch.distributed.barrier(device_ids=[args.local_rank])
273
+
274
+ seed = config.SEED + dist.get_rank()
275
+ torch.manual_seed(seed)
276
+ np.random.seed(seed)
277
+ random.seed(seed)
278
+ cudnn.benchmark = True
279
+
280
+ # create working_dir
281
+ Path(config.OUTPUT).mkdir(parents=True, exist_ok=True)
282
+
283
+ # logger
284
+ logger = create_logger(output_dir=config.OUTPUT, dist_rank=dist.get_rank(), name=f"{config.MODEL.ARCH}")
285
+ logger.info(f"working dir: {config.OUTPUT}")
286
+
287
+ # save config
288
+ if dist.get_rank() == 0:
289
+ logger.info(config)
290
+ shutil.copy(args.config, config.OUTPUT)
291
+
292
+ main(config)
main_train.py ADDED
@@ -0,0 +1,281 @@
1
+ import os
2
+ import torch
3
+ import torch.nn as nn
4
+ import torch.backends.cudnn as cudnn
5
+ import torch.distributed as dist
6
+ import argparse
7
+ import datetime
8
+ import shutil
9
+ from pathlib import Path
10
+ from utils.config import get_config
11
+ from utils.optimizer import build_optimizer, build_scheduler
12
+ from utils.tools import AverageMeter, reduce_tensor, epoch_saving, load_checkpoint, generate_text, auto_resume_helper
13
+ from datasets.build import build_dataloader
14
+ from utils.logger import create_logger
15
+ import time
16
+ import numpy as np
17
+ import random
18
+ from apex import amp
19
+ from timm.loss import LabelSmoothingCrossEntropy, SoftTargetCrossEntropy
20
+ from datasets.blending import CutmixMixupBlending
21
+ from utils.config import get_config
22
+ from trainers import vificlip
23
+
24
+
25
+ def parse_option():
26
+ parser = argparse.ArgumentParser()
27
+ parser.add_argument('--config', '-cfg', required=True, type=str, default='configs/k400/32_8.yaml')
28
+ parser.add_argument(
29
+ "--opts",
30
+ help="Modify config options by adding 'KEY VALUE' pairs. ",
31
+ default=None,
32
+ nargs='+',
33
+ )
34
+ parser.add_argument('--output', type=str, default="exp")
35
+ parser.add_argument('--resume', type=str)
36
+ parser.add_argument('--pretrained', type=str)
37
+ parser.add_argument('--only_test', action='store_true')
38
+ parser.add_argument('--batch-size', type=int)
39
+ parser.add_argument('--accumulation-steps', type=int)
40
+
41
+ parser.add_argument("--local_rank", type=int, default=-1, help='local rank for DistributedDataParallel')
42
+ args = parser.parse_args()
43
+
44
+ config = get_config(args)
45
+
46
+ return args, config
47
+
48
+
49
+ def main(config):
50
+ train_data, val_data, train_loader, val_loader = build_dataloader(logger, config)
51
+ class_names = [class_name for i, class_name in train_data.classes]
52
+
53
+ # Custom trainer for different variants of ViFi-CLIP
54
+ model = vificlip.returnCLIP(config,
55
+ logger=logger,
56
+ class_names=class_names,)
57
+
58
+ model = model.cuda() # changing to cuda here
59
+
60
+ mixup_fn = None
61
+ if config.AUG.MIXUP > 0:
62
+ criterion = SoftTargetCrossEntropy()
63
+ mixup_fn = CutmixMixupBlending(num_classes=config.DATA.NUM_CLASSES,
64
+ smoothing=config.AUG.LABEL_SMOOTH,
65
+ mixup_alpha=config.AUG.MIXUP,
66
+ cutmix_alpha=config.AUG.CUTMIX,
67
+ switch_prob=config.AUG.MIXUP_SWITCH_PROB)
68
+ elif config.AUG.LABEL_SMOOTH > 0:
69
+ criterion = LabelSmoothingCrossEntropy(smoothing=config.AUG.LABEL_SMOOTH)
70
+ else:
71
+ criterion = nn.CrossEntropyLoss()
72
+
73
+ optimizer = build_optimizer(config, model)
74
+ lr_scheduler = build_scheduler(config, optimizer, len(train_loader))
75
+ if config.TRAIN.OPT_LEVEL != 'O0':
76
+ model, optimizer = amp.initialize(models=model, optimizers=optimizer, opt_level=config.TRAIN.OPT_LEVEL)
77
+
78
+ model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[config.LOCAL_RANK], broadcast_buffers=False,
79
+ find_unused_parameters=False)
80
+
81
+ start_epoch, max_accuracy = 0, 0.0
82
+
83
+ if config.TRAIN.AUTO_RESUME:
84
+ resume_file = auto_resume_helper(config.OUTPUT)
85
+ if resume_file:
86
+ config.defrost()
87
+ config.MODEL.RESUME = resume_file
88
+ config.freeze()
89
+ logger.info(f'auto resuming from {resume_file}')
90
+ else:
91
+ logger.info(f'no checkpoint found in {config.OUTPUT}, ignoring auto resume')
92
+
93
+ if config.MODEL.RESUME:
94
+ start_epoch, max_accuracy = load_checkpoint(config, model, optimizer, lr_scheduler, logger)
95
+ if start_epoch > 1:
96
+ logger.info("resetting epochs no and max. accuracy to 0 after loading pre-trained weights")
97
+ start_epoch = 0
98
+ max_accuracy = 0
99
+ if config.TEST.ONLY_TEST:
100
+ acc1 = validate(val_loader, model, config)
101
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
102
+ return
103
+
104
+ for epoch in range(start_epoch, config.TRAIN.EPOCHS):
105
+ train_loader.sampler.set_epoch(epoch)
106
+ train_one_epoch(epoch, model, criterion, optimizer, lr_scheduler, train_loader, config, mixup_fn)
107
+
108
+ if epoch % config.SAVE_FREQ == 0 or epoch == (config.TRAIN.EPOCHS - 1):
109
+ acc1 = validate(val_loader, model, config)
110
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
111
+ is_best = acc1 > max_accuracy
112
+ max_accuracy = max(max_accuracy, acc1)
113
+ logger.info(f'Max accuracy: {max_accuracy:.2f}%')
114
+ if dist.get_rank() == 0 and (
115
+ epoch % config.SAVE_FREQ == 0 or epoch == (config.TRAIN.EPOCHS - 1) or is_best):
116
+ epoch_saving(config, epoch, model, max_accuracy, optimizer, lr_scheduler, logger, config.OUTPUT,
117
+ is_best)
118
+ # Now doing the multi-view inference crop for videos
119
+ # 4 CLIPs are obtained from each video, and for each CLIP, we get 3 crops (augmentations)
120
+ multi_view_inference = config.TEST.MULTI_VIEW_INFERENCE
121
+ if multi_view_inference:
122
+ config.defrost()
123
+ config.TEST.NUM_CLIP = 4
124
+ config.TEST.NUM_CROP = 3
125
+ config.freeze()
126
+ train_data, val_data, train_loader, val_loader = build_dataloader(logger, config)
127
+ acc1 = validate(val_loader, model, config)
128
+ logger.info(f"Accuracy of the network on the {len(val_data)} test videos: {acc1:.1f}%")
129
+
130
+
131
+ def train_one_epoch(epoch, model, criterion, optimizer, lr_scheduler, train_loader, config, mixup_fn):
132
+ model.train()
133
+ optimizer.zero_grad()
134
+
135
+ num_steps = len(train_loader)
136
+ batch_time = AverageMeter()
137
+ tot_loss_meter = AverageMeter()
138
+
139
+ start = time.time()
140
+ end = time.time()
141
+
142
+
143
+ for idx, batch_data in enumerate(train_loader):
144
+
145
+ images = batch_data["imgs"].cuda(non_blocking=True)
146
+ label_id = batch_data["label"].cuda(non_blocking=True)
147
+ label_id = label_id.reshape(-1)
148
+ images = images.view((-1, config.DATA.NUM_FRAMES, 3) + images.size()[-2:])
149
+
150
+ if mixup_fn is not None:
151
+ images, label_id = mixup_fn(images, label_id)
152
+
153
+ output = model(images)
154
+
155
+ total_loss = criterion(output, label_id)
156
+ total_loss = total_loss / config.TRAIN.ACCUMULATION_STEPS
157
+
158
+ if config.TRAIN.ACCUMULATION_STEPS == 1:
159
+ optimizer.zero_grad()
160
+ if config.TRAIN.OPT_LEVEL != 'O0':
161
+ with amp.scale_loss(total_loss, optimizer) as scaled_loss:
162
+ scaled_loss.backward()
163
+ else:
164
+ total_loss.backward()
165
+ if config.TRAIN.ACCUMULATION_STEPS > 1:
166
+ if (idx + 1) % config.TRAIN.ACCUMULATION_STEPS == 0:
167
+ optimizer.step()
168
+ optimizer.zero_grad()
169
+ lr_scheduler.step_update(epoch * num_steps + idx)
170
+ else:
171
+ optimizer.step()
172
+ lr_scheduler.step_update(epoch * num_steps + idx)
173
+
174
+ torch.cuda.synchronize()
175
+
176
+ tot_loss_meter.update(total_loss.item(), len(label_id))
177
+ batch_time.update(time.time() - end)
178
+ end = time.time()
179
+
180
+ if idx % config.PRINT_FREQ == 0:
181
+ lr = optimizer.param_groups[0]['lr']
182
+ memory_used = torch.cuda.max_memory_allocated() / (1024.0 * 1024.0)
183
+ etas = batch_time.avg * (num_steps - idx)
184
+ logger.info(
185
+ f'Train: [{epoch}/{config.TRAIN.EPOCHS}][{idx}/{num_steps}]\t'
186
+ f'eta {datetime.timedelta(seconds=int(etas))} lr {lr:.9f}\t'
187
+ f'time {batch_time.val:.4f} ({batch_time.avg:.4f})\t'
188
+ f'tot_loss {tot_loss_meter.val:.4f} ({tot_loss_meter.avg:.4f})\t'
189
+ f'mem {memory_used:.0f}MB')
190
+ epoch_time = time.time() - start
191
+ logger.info(f"EPOCH {epoch} training takes {datetime.timedelta(seconds=int(epoch_time))}")
192
+
193
+
194
+ @torch.no_grad()
195
+ def validate(val_loader, model, config):
196
+ model.eval()
197
+
198
+ acc1_meter, acc5_meter = AverageMeter(), AverageMeter()
199
+ with torch.no_grad():
200
+ logger.info(f"{config.TEST.NUM_CLIP * config.TEST.NUM_CROP} views inference")
201
+ for idx, batch_data in enumerate(val_loader):
202
+ _image = batch_data["imgs"]
203
+ label_id = batch_data["label"]
204
+ label_id = label_id.reshape(-1)
205
+
206
+ b, tn, c, h, w = _image.size()
207
+ t = config.DATA.NUM_FRAMES
208
+ n = tn // t
209
+ _image = _image.view(b, n, t, c, h, w)
210
+
211
+ tot_similarity = torch.zeros((b, config.DATA.NUM_CLASSES)).cuda()
212
+ for i in range(n):
213
+ image = _image[:, i, :, :, :, :] # [b,t,c,h,w]
214
+ label_id = label_id.cuda(non_blocking=True)
215
+ image_input = image.cuda(non_blocking=True)
216
+
217
+ if config.TRAIN.OPT_LEVEL == 'O2':
218
+ image_input = image_input.half()
219
+
220
+ output = model(image_input)
221
+
222
+ similarity = output.view(b, -1).softmax(dim=-1)
223
+ tot_similarity += similarity
224
+
225
+ values_1, indices_1 = tot_similarity.topk(1, dim=-1)
226
+ values_5, indices_5 = tot_similarity.topk(5, dim=-1)
227
+ acc1, acc5 = 0, 0
228
+ for i in range(b):
229
+ if indices_1[i] == label_id[i]:
230
+ acc1 += 1
231
+ if label_id[i] in indices_5[i]:
232
+ acc5 += 1
233
+
234
+ acc1_meter.update(float(acc1) / b * 100, b)
235
+ acc5_meter.update(float(acc5) / b * 100, b)
236
+ if idx % config.PRINT_FREQ == 0:
237
+ logger.info(
238
+ f'Test: [{idx}/{len(val_loader)}]\t'
239
+ f'Acc@1: {acc1_meter.avg:.3f}\t'
240
+ )
241
+ acc1_meter.sync()
242
+ acc5_meter.sync()
243
+ logger.info(f' * Acc@1 {acc1_meter.avg:.3f} Acc@5 {acc5_meter.avg:.3f}')
244
+ return acc1_meter.avg
245
+
246
+
247
+ if __name__ == '__main__':
248
+ # prepare config
249
+ args, config = parse_option()
250
+
251
+ # init_distributed
252
+ if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
253
+ rank = int(os.environ["RANK"])
254
+ world_size = int(os.environ['WORLD_SIZE'])
255
+ print(f"RANK and WORLD_SIZE in environ: {rank}/{world_size}")
256
+ else:
257
+ rank = -1
258
+ world_size = -1
259
+ torch.cuda.set_device(args.local_rank)
260
+ torch.distributed.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
261
+ torch.distributed.barrier(device_ids=[args.local_rank])
262
+
263
+ seed = config.SEED + dist.get_rank()
264
+ torch.manual_seed(seed)
265
+ np.random.seed(seed)
266
+ random.seed(seed)
267
+ cudnn.benchmark = True
268
+
269
+ # create working_dir
270
+ Path(config.OUTPUT).mkdir(parents=True, exist_ok=True)
271
+
272
+ # logger
273
+ logger = create_logger(output_dir=config.OUTPUT, dist_rank=dist.get_rank(), name=f"{config.MODEL.ARCH}")
274
+ logger.info(f"working dir: {config.OUTPUT}")
275
+
276
+ # save config
277
+ if dist.get_rank() == 0:
278
+ logger.info(config)
279
+ shutil.copy(args.config, config.OUTPUT)
280
+
281
+ main(config)
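
For reference, validate() scores each of the n = TEST.NUM_CLIP x TEST.NUM_CROP views of a video independently and sums their softmax outputs before taking top-k. A minimal sketch of that aggregation, assuming per-view logits of shape [B, num_classes] (aggregate_multi_view is an illustrative helper, not a function in this repo):

import torch

def aggregate_multi_view(view_logits):
    # view_logits: list of [B, num_classes] tensors, one per clip/crop view,
    # mirroring how validate() accumulates tot_similarity
    tot_similarity = torch.zeros_like(view_logits[0])
    for logits in view_logits:
        tot_similarity += logits.softmax(dim=-1)
    top1 = tot_similarity.topk(1, dim=-1).indices  # predicted class per sample
    return tot_similarity, top1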
merge_results.py ADDED
@@ -0,0 +1,43 @@
1
+ import pandas as pd
2
+ import csv
3
+
4
+ # Load the CSV files
5
+ video_mapping_df = pd.read_csv('/data/vidlab_datasets/challenge_crop/split/val.csv', header=None, names=['video', 'index'], skiprows=0)
6
+ prediction_results_df = pd.read_csv('batch_indices.csv', header=None, names=['id', 'top_1_index'], skiprows=1)
7
+ labels_df = pd.read_csv('/data/users/sdas/scripts/ViFi-CLIP/labels/challenge.csv', header=None, names=['id', 'label'], skiprows=1)  # loaded for reference; not used below
8
+
9
+ # Convert types for accurate merging
10
+ video_mapping_df['index'] = video_mapping_df['index'].astype(int)
11
+ prediction_results_df['id'] = prediction_results_df['id'].astype(int)
12
+ prediction_results_df['top_1_index'] = prediction_results_df['top_1_index'].astype(int)
13
+ labels_df['id'] = labels_df['id'].astype(int)
14
+
15
+ # Pair each prediction row with the video at the same position in video_mapping_df and write one output row per video
16
+ with open('output.csv', mode='w', newline='') as file:
17
+ writer = csv.writer(file)
18
+
19
+ # Write the header
20
+ writer.writerow(['video_name', 'action_category'])
21
+
22
+ # Initialize k to start at the first index of video_mapping_df
23
+ k = 0
24
+
25
+ # Loop through the 'top_1_index' in prediction_results_df
26
+ for i in prediction_results_df['top_1_index']:
27
+ if 0 <= i <= 7:
28
+ action_category = 'locomotion'
29
+ elif 8 <= i <= 22:
30
+ action_category = 'manipulation'
31
+ elif 23 <= i <= 31:
32
+ action_category = 'communication'
33
+ elif 32 <= i <= 37:
34
+ action_category = 'hygiene'
35
+ elif i in {38, 39}:  # indices 38 and 39 both map to eating_drinking
36
+ action_category = 'eating_drinking'
37
+ else:
38
+ action_category = 'leisure'
39
+
40
+ # Write the row to the CSV file
41
+ writer.writerow([video_mapping_df['video'][k], action_category])
42
+ k += 1
43
+
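
The loop above maps each top-1 class index to one of six composite action categories by index range. A vectorised sketch of the same mapping with pandas.cut, assuming identical ranges (to_categories is a hypothetical helper, not part of the script):

import pandas as pd

# bin edges: (-1,7] locomotion, (7,22] manipulation, (22,31] communication,
# (31,37] hygiene, (37,39] eating_drinking, (39,inf) leisure
BINS = [-1, 7, 22, 31, 37, 39, float('inf')]
LABELS = ['locomotion', 'manipulation', 'communication', 'hygiene',
          'eating_drinking', 'leisure']

def to_categories(top_1_index):
    # top_1_index: pandas Series of integer class indices
    return pd.cut(top_1_index, bins=BINS, labels=LABELS)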
requirements.txt ADDED
@@ -0,0 +1,12 @@
1
+ torch==1.11.0
2
+ torchvision==0.12.0
3
+ # pathlib ships with the Python 3 standard library; no separate pip install is needed
4
+ mmcv-full
5
+ decord
6
+ ftfy
7
+ einops
8
+ termcolor
9
+ timm
10
+ regex
11
+ yacs
12
+ pandas
script_crop.sh ADDED
@@ -0,0 +1 @@
1
+ python crop_person.py
script_test.sh ADDED
@@ -0,0 +1,4 @@
1
+ export MASTER_ADDR='localhost' # Adjust this to the IP as required
2
+ export MASTER_PORT=29800 # Ensure this port is available
3
+ python -m torch.distributed.launch --nproc_per_node=1 --master_addr $MASTER_ADDR --master_port $MASTER_PORT main_challenge.py -cfg ./configs/config_challenge_test.yaml --output ./work_dirs/
4
+ python merge_results.py
script_train.sh ADDED
@@ -0,0 +1,3 @@
1
+
2
+ python -m torch.distributed.launch --nproc_per_node=8 main_train.py -cfg ./configs/config_challenge_train.yaml --output ./work_dirs/challenge_baseline_new/ --opts TEST.NUM_CLIP 1 TEST.NUM_CROP 1
3
+
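
The trailing --opts TEST.NUM_CLIP 1 TEST.NUM_CROP 1 pairs are merged into the yacs config by update_config() in utils/config.py. A minimal sketch of what that merge does, using a small stand-in config:

from yacs.config import CfgNode as CN

cfg = CN()
cfg.TEST = CN()
cfg.TEST.NUM_CLIP = 4
cfg.TEST.NUM_CROP = 3
# equivalent of passing --opts TEST.NUM_CLIP 1 TEST.NUM_CROP 1 on the command line
cfg.merge_from_list(['TEST.NUM_CLIP', 1, 'TEST.NUM_CROP', 1])
print(cfg.TEST.NUM_CLIP, cfg.TEST.NUM_CROP)  # -> 1 1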
trainers/__pycache__/vificlip.cpython-37.pyc ADDED
Binary file (7.27 kB).
 
trainers/vificlip.py ADDED
@@ -0,0 +1,248 @@
1
+ import os.path as osp
2
+ from collections import OrderedDict
3
+ import math
4
+
5
+ import torch
6
+ import torch.nn as nn
7
+ from torch.nn import functional as F
8
+ from torch.cuda.amp import GradScaler, autocast
9
+
10
+ from clip import clip
11
+ from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer
12
+
13
+ _tokenizer = _Tokenizer()
14
+
15
+
16
+ def load_clip_to_cpu(cfg):
17
+ backbone_name = cfg.MODEL.ARCH
18
+ url = clip._MODELS[backbone_name]
19
+ model_path = clip._download(url)
20
+
21
+ try:
22
+ # loading JIT archive
23
+ model = torch.jit.load(model_path, map_location="cpu").eval()
24
+ state_dict = None
25
+
26
+ except RuntimeError:
27
+ state_dict = torch.load(model_path, map_location="cpu")
28
+ design_details = {"trainer": 'ViFi_CLIP',
29
+ "vision_depth": cfg.TRAINER.ViFi_CLIP.PROMPT_DEPTH_VISION,
30
+ "language_depth": cfg.TRAINER.ViFi_CLIP.PROMPT_DEPTH_TEXT,
31
+ "vision_ctx": cfg.TRAINER.ViFi_CLIP.N_CTX_VISION,
32
+ "language_ctx": cfg.TRAINER.ViFi_CLIP.N_CTX_TEXT}
33
+ model = clip.build_model(state_dict or model.state_dict(), design_details)
34
+
35
+ return model
36
+
37
+
38
+ class TextEncoder(nn.Module):
39
+ def __init__(self, clip_model):
40
+ super().__init__()
41
+ self.transformer = clip_model.transformer
42
+ self.positional_embedding = clip_model.positional_embedding
43
+ self.ln_final = clip_model.ln_final
44
+ self.text_projection = clip_model.text_projection
45
+ self.dtype = clip_model.dtype
46
+
47
+ def forward(self, prompts, tokenized_prompts):
48
+ x = prompts + self.positional_embedding.type(self.dtype)
49
+ x = x.permute(1, 0, 2) # NLD -> LND
50
+ x = self.transformer(x)
51
+ x = x.permute(1, 0, 2) # LND -> NLD
52
+ x = self.ln_final(x).type(self.dtype)
53
+
54
+ # x.shape = [batch_size, n_ctx, transformer.width]
55
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
56
+ x = x[torch.arange(x.shape[0]), tokenized_prompts.argmax(dim=-1)] @ self.text_projection
57
+
58
+ return x
59
+
60
+
61
+ class VLPromptLearner(nn.Module):
62
+ def __init__(self, cfg, classnames, clip_model, logger):
63
+ super().__init__()
64
+ dtype = clip_model.dtype
65
+ self.use_prompt_stage = cfg.TRAINER.ViFi_CLIP.PROMPT_MODEL
66
+ ctx_init = cfg.TRAINER.ViFi_CLIP.CTX_INIT
67
+ ZS_evaluation = cfg.TRAINER.ViFi_CLIP.ZS_EVAL
68
+ if ZS_evaluation:
69
+ text_aug = f"{{}}"
70
+ tokenized_prompts = torch.cat([clip.tokenize(text_aug.format(c), context_length=77) for c in classnames])
71
+ embedding = clip_model.token_embedding(tokenized_prompts).type(dtype).cuda()
72
+ self.register_buffer("complete_text_embeddings", embedding)
73
+ self.tokenized_prompts = tokenized_prompts # torch.Tensor
74
+ elif self.use_prompt_stage:
75
+ n_cls = len(classnames)
76
+ # Make sure Language depth >= 1
77
+ assert cfg.TRAINER.ViFi_CLIP.PROMPT_DEPTH_TEXT >= 1, "In VL prompting, Language prompt depth should be >=1" \
78
+ "\nPlease use VPT trainer if you want to learn only vision " \
79
+ "branch "
80
+ n_ctx = cfg.TRAINER.ViFi_CLIP.N_CTX_TEXT
81
+ ctx_dim = clip_model.ln_final.weight.shape[0]
82
+
83
+ if ctx_init and n_ctx <= 4:
84
+ # use given words to initialize context vectors
85
+ ctx_init = ctx_init.replace("_", " ")
87
+ prompt = clip.tokenize(ctx_init)
88
+ with torch.no_grad():
89
+ embedding = clip_model.token_embedding(prompt).type(dtype)
90
+ ctx_vectors = embedding[0, 1: 1 + n_ctx, :]
91
+ prompt_prefix = ctx_init
92
+ else:
93
+ # random initialization
94
+ ctx_vectors = torch.empty(n_ctx, ctx_dim, dtype=dtype)
95
+ nn.init.normal_(ctx_vectors, std=0.02)
96
+ prompt_prefix = " ".join(["X"] * n_ctx)
97
+ logger.info(f"V-L design")
98
+ logger.info(f'Initial text context: "{prompt_prefix}"')
99
+ logger.info(f"Number of context words (tokens) for Language prompting: {n_ctx}")
100
+ logger.info(f"Number of context words (tokens) for Vision prompting: {cfg.TRAINER.ViFi_CLIP.N_CTX_VISION}")
101
+ self.ctx = nn.Parameter(ctx_vectors)
102
+
103
+ classnames = [name.replace("_", " ") for name in classnames]
104
+ prompts = [prompt_prefix + " " + name + "." for name in classnames]
105
+
106
+ tokenized_prompts = torch.cat([clip.tokenize(p) for p in prompts]) # (n_cls, n_tkn)
107
+ with torch.no_grad():
108
+ embedding = clip_model.token_embedding(tokenized_prompts).type(dtype)
109
+
110
+ # These token vectors will be saved when in save_model(),
111
+ # but they should be ignored in load_model() as we want to use
112
+ # those computed using the current class names
113
+ self.register_buffer("token_prefix", embedding[:, :1, :]) # SOS
114
+ self.register_buffer("token_suffix", embedding[:, 1 + n_ctx:, :]) # CLS, EOS
115
+ self.n_cls = n_cls
116
+ self.tokenized_prompts = tokenized_prompts # torch.Tensor
117
+ else:
118
+ # No prompting
119
+ ctx_init = ctx_init.replace("_", " ")
120
+ prompt_prefix = ctx_init
121
+ prompts = [prompt_prefix + " " + name + "." for name in classnames]
122
+ tokenized_prompts = torch.cat([clip.tokenize(p) for p in prompts]) # (n_cls, n_tkn)
123
+ with torch.no_grad():
124
+ embedding = clip_model.token_embedding(tokenized_prompts).type(dtype)
125
+ self.register_buffer("complete_text_embeddings", embedding)
126
+ self.tokenized_prompts = tokenized_prompts # torch.Tensor
127
+
128
+ def construct_prompts(self, ctx, prefix, suffix, label=None):
129
+ # dim0 is either batch_size (during training) or n_cls (during testing)
130
+ # ctx: context tokens, with shape of (dim0, n_ctx, ctx_dim)
131
+ # prefix: the sos token, with shape of (n_cls, 1, ctx_dim)
132
+ # suffix: remaining tokens, with shape of (n_cls, *, ctx_dim)
133
+
134
+ if label is not None:
135
+ prefix = prefix[label]
136
+ suffix = suffix[label]
137
+
138
+ prompts = torch.cat(
139
+ [
140
+ prefix, # (dim0, 1, dim)
141
+ ctx, # (dim0, n_ctx, dim)
142
+ suffix, # (dim0, *, dim)
143
+ ],
144
+ dim=1,
145
+ )
146
+
147
+ return prompts
148
+
149
+ def forward(self):
150
+ if self.use_prompt_stage:
151
+ ctx = self.ctx
152
+ if ctx.dim() == 2:
153
+ ctx = ctx.unsqueeze(0).expand(self.n_cls, -1, -1)
154
+
155
+ prefix = self.token_prefix
156
+ suffix = self.token_suffix
157
+ prompts = self.construct_prompts(ctx, prefix, suffix)
158
+ else:
159
+ prompts = self.complete_text_embeddings
160
+
161
+ return prompts
162
+
163
+
164
+ class ViFiCLIP(nn.Module):
165
+ def __init__(self, cfg, classnames, clip_model, logger):
166
+ super().__init__()
167
+ self.prompt_learner = VLPromptLearner(cfg, classnames, clip_model, logger)
168
+ self.tokenized_prompts = self.prompt_learner.tokenized_prompts
169
+ self.image_encoder = clip_model.visual
170
+ self.text_encoder = TextEncoder(clip_model)
171
+ self.logit_scale = clip_model.logit_scale
172
+ self.dtype = clip_model.dtype
173
+ def forward(self, image):
174
+ tokenized_prompts = self.tokenized_prompts
175
+ logit_scale = self.logit_scale.exp()
176
+ prompts = self.prompt_learner()
177
+
178
+ # b = image.shape[0]
179
+ # Lets encode the video into required format
180
+ b, t, c, h, w = image.size()
181
+ # Remove the batch dimensions
182
+ image = image.reshape(-1, c, h, w)
183
+ # Now pass the image into CLIP visual encoder
184
+ image_features = self.image_encoder(image.type(self.dtype))
185
+ # Now again attach the batch dimensions
186
+ image_features = image_features.view(b, t, -1) # [B, T, 512]
187
+ # Now take the mean along the temporal direction
188
+ image_features = image_features.mean(dim=1, keepdim=False) # image features are now ready
189
+
190
+ # Finally, make the text features
191
+ text_features = self.text_encoder(prompts, tokenized_prompts)
192
+
193
+ image_features = image_features / image_features.norm(dim=-1, keepdim=True)
194
+ text_features = text_features / text_features.norm(dim=-1, keepdim=True)
195
+ logits = logit_scale * image_features @ text_features.t()
196
+
197
+ return logits
198
+
199
+
200
+ def returnCLIP(config, logger=None,
201
+ class_names=None):
202
+ logger.info(f"Loading CLIP (backbone: {config.MODEL.ARCH})")
203
+ clip_model = load_clip_to_cpu(config)
204
+
205
+ logger.info("Building ViFi-CLIP CLIP")
206
+ model = ViFiCLIP(config, class_names, clip_model, logger)
207
+
208
+ if config.TRAINER.ViFi_CLIP.PROMPT_MODEL:
209
+ logger.info("Turning off gradients in both the image and the text encoder")
210
+ name_to_update = "prompt_learner"
211
+ for name, param in model.named_parameters():
212
+ if name_to_update not in name:
213
+ # Make sure that VPT prompts are updated
214
+ if "VPT" in name:
215
+ param.requires_grad_(True)
216
+ else:
217
+ param.requires_grad_(False)
218
+ else:
219
+ # Now need to control freezing of CLIP for fine-tuning
220
+ train_complete_clip = config.TRAINER.ViFi_CLIP.USE
221
+ if train_complete_clip == "both":
222
+ logger.info("Turning on gradients for COMPLETE ViFi-CLIP model")
223
+ for name, param in model.named_parameters():
224
+ param.requires_grad_(True)
225
+ else:
226
+ if train_complete_clip == "image":
227
+ logger.info("Turning on gradients for image side the ViFi-CLIP model")
228
+ for name, param in model.named_parameters():
229
+ if "image_encoder" in name: # replace by "text_encoder" incase you want to freeze text
230
+ param.requires_grad_(True)
231
+ else:
232
+ param.requires_grad_(False)
233
+ else:
234
+ logger.info("Turning on gradients for TEXT side the ViFi-CLIP model")
235
+ for name, param in model.named_parameters():
236
+ if "text_encoder" in name: # replace by "text_encoder" incase you want to freeze text
237
+ param.requires_grad_(True)
238
+ else:
239
+ param.requires_grad_(False)
240
+ # Double check
241
+ enabled = set()
242
+ for name, param in model.named_parameters():
243
+ if param.requires_grad:
244
+ enabled.add(name)
245
+ logger.info(f"Parameters to be updated: {enabled}")
246
+ logger.info(f"Total learnable items: {len(enabled)}")
247
+ model.float()
248
+ return model
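
ViFiCLIP.forward flattens the video batch into frames, encodes each frame with CLIP's visual encoder, then mean-pools the frame features over time before matching against the text features. A minimal sketch of that temporal pooling step, assuming B videos of T frames with D-dimensional features (frame_pool is an illustrative helper, not part of the module):

import torch

def frame_pool(frame_features, b, t):
    # [B*T, D] per-frame features -> [B, T, D] -> temporal mean -> [B, D]
    return frame_features.view(b, t, -1).mean(dim=1)

feats = torch.randn(2 * 8, 512)            # e.g. B=2 videos, T=8 frames, D=512
video_feats = frame_pool(feats, b=2, t=8)  # -> shape [2, 512]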
utils/__init__.py ADDED
File without changes
utils/__pycache__/__init__.cpython-37.pyc ADDED
Binary file (133 Bytes).
 
utils/__pycache__/config.cpython-37.pyc ADDED
Binary file (2.71 kB).
 
utils/__pycache__/logger.cpython-37.pyc ADDED
Binary file (1.07 kB).
 
utils/__pycache__/optimizer.cpython-37.pyc ADDED
Binary file (1.88 kB).
 
utils/__pycache__/tools.cpython-37.pyc ADDED
Binary file (4.1 kB).
 
utils/config.py ADDED
@@ -0,0 +1,140 @@
1
+ import os
2
+ import yaml
3
+ from yacs.config import CfgNode as CN
4
+
5
+ _C = CN()
6
+
7
+ # Base config files
8
+ _C.BASE = ['']
9
+
10
+ # -----------------------------------------------------------------------------
11
+ # Data settings
12
+ # -----------------------------------------------------------------------------
13
+ _C.DATA = CN()
14
+ _C.DATA.ROOT = ''
15
+ _C.DATA.TRAIN_FILE = ''
16
+ _C.DATA.VAL_FILE = ''
17
+ _C.DATA.DATASET = 'kinetics400'
18
+ _C.DATA.INPUT_SIZE = 224
19
+ _C.DATA.NUM_FRAMES = 8
20
+ _C.DATA.NUM_CLASSES = 400
21
+ _C.DATA.LABEL_LIST = 'labels/kinetics_400_labels.csv'
22
+
23
+ # -----------------------------------------------------------------------------
24
+ # Model settings
25
+ # -----------------------------------------------------------------------------
26
+ _C.MODEL = CN()
27
+ _C.MODEL.ARCH = 'ViT-B/32'
28
+ _C.MODEL.DROP_PATH_RATE = 0.
29
+ _C.MODEL.PRETRAINED = None
30
+ _C.MODEL.RESUME = None
31
+ _C.MODEL.FIX_TEXT = True
32
+
33
+
34
+ # -----------------------------------------------------------------------------
35
+ # Custom trainer settings
36
+ # -----------------------------------------------------------------------------
37
+ _C.TRAINER = CN()
38
+ # Config for ViFi-CLIP
39
+ _C.TRAINER.ViFi_CLIP = CN()
40
+ _C.TRAINER.ViFi_CLIP.PROMPT_MODEL = False # second stage prompting?
41
+ _C.TRAINER.ViFi_CLIP.N_CTX_VISION = 0 # number of context vectors at the vision branch
42
+ _C.TRAINER.ViFi_CLIP.N_CTX_TEXT = 0 # number of context vectors at the language branch
43
+ _C.TRAINER.ViFi_CLIP.CTX_INIT = "a photo of a" # initialization words (only for language prompts)
44
+ _C.TRAINER.ViFi_CLIP.PROMPT_DEPTH_VISION = 0 # max 12, min 0, for 0 it will act as shallow vision prompting (first layer)
45
+ _C.TRAINER.ViFi_CLIP.PROMPT_DEPTH_TEXT = 1 # max 12, min 0, for 0 it will act as shallow language prompting (first layer)
46
+ _C.TRAINER.ViFi_CLIP.USE = "both" # fine-tuning complete CLIP model by default
47
+ _C.TRAINER.ViFi_CLIP.ZS_EVAL = False # make True only during test mode to evaluate zero-shot vanilla CLIP performance
48
+ # -----------------------------------------------------------------------------
49
+ # Training settings
50
+ # -----------------------------------------------------------------------------
51
+ _C.TRAIN = CN()
52
+ _C.TRAIN.EPOCHS = 30
53
+ _C.TRAIN.WARMUP_EPOCHS = 5
54
+ _C.TRAIN.WEIGHT_DECAY = 0.001
55
+ _C.TRAIN.LR = 8.e-6
56
+ _C.TRAIN.BATCH_SIZE = 8
57
+ _C.TRAIN.ACCUMULATION_STEPS = 1
58
+ _C.TRAIN.LR_SCHEDULER = 'cosine'
59
+ _C.TRAIN.OPTIMIZER = 'adamw'
60
+ _C.TRAIN.OPT_LEVEL = 'O1'
61
+ _C.TRAIN.AUTO_RESUME = False
62
+ _C.TRAIN.USE_CHECKPOINT = False
63
+
64
+ # -----------------------------------------------------------------------------
65
+ # Augmentation settings
66
+ # -----------------------------------------------------------------------------
67
+ _C.AUG = CN()
68
+ _C.AUG.LABEL_SMOOTH = 0.1
69
+ _C.AUG.COLOR_JITTER = 0.8
70
+ _C.AUG.GRAY_SCALE = 0.2
71
+ _C.AUG.MIXUP = 0.8
72
+ _C.AUG.CUTMIX = 1.0
73
+ _C.AUG.MIXUP_SWITCH_PROB = 0.5
74
+
75
+ # -----------------------------------------------------------------------------
76
+ # Testing settings
77
+ # -----------------------------------------------------------------------------
78
+ _C.TEST = CN()
79
+ _C.TEST.NUM_CLIP = 1
80
+ _C.TEST.NUM_CROP = 1
81
+ _C.TEST.ONLY_TEST = False
82
+ _C.TEST.MULTI_VIEW_INFERENCE = False
83
+
84
+ # -----------------------------------------------------------------------------
85
+ # Misc
86
+ # -----------------------------------------------------------------------------
87
+ _C.OUTPUT = ''
88
+ _C.SAVE_FREQ = 1
89
+ _C.PRINT_FREQ = 50
90
+ _C.SEED = 1024
91
+
92
+
93
+
94
+ def _update_config_from_file(config, cfg_file):
95
+ config.defrost()
96
+ with open(cfg_file, 'r') as f:
97
+ yaml_cfg = yaml.load(f, Loader=yaml.FullLoader)
98
+
99
+ for cfg in yaml_cfg.setdefault('BASE', ['']):
100
+ if cfg:
101
+ _update_config_from_file(
102
+ config, os.path.join(os.path.dirname(cfg_file), cfg)
103
+ )
104
+ print('=> merge config from {}'.format(cfg_file))
105
+ config.merge_from_file(cfg_file)
106
+ config.freeze()
107
+
108
+
109
+ def update_config(config, args):
110
+ _update_config_from_file(config, args.config)
111
+
112
+ config.defrost()
113
+ if args.opts:
114
+ config.merge_from_list(args.opts)
115
+ # merge from specific arguments
116
+ if args.batch_size:
117
+ config.TRAIN.BATCH_SIZE = args.batch_size
118
+ if args.pretrained:
119
+ config.MODEL.PRETRAINED = args.pretrained
120
+ if args.resume:
121
+ config.MODEL.RESUME = args.resume
122
+ if args.accumulation_steps:
123
+ config.TRAIN.ACCUMULATION_STEPS = args.accumulation_steps
124
+ if args.output:
125
+ config.OUTPUT = args.output
126
+ if args.only_test:
127
+ config.TEST.ONLY_TEST = True
128
+ # set local rank for distributed training
129
+ config.LOCAL_RANK = args.local_rank
130
+ config.freeze()
131
+
132
+
133
+ def get_config(args):
134
+ """Get a yacs CfgNode object with default values."""
135
+ # Return a clone so that the defaults will not be altered
136
+ # This is for the "local variable" use pattern
137
+ config = _C.clone()
138
+ update_config(config, args)
139
+
140
+ return config
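
get_config() expects an argparse namespace carrying every attribute that update_config() reads. A minimal sketch of such a parser, with None defaults so unset flags leave the YAML values untouched (the actual parse_option() lives in main_train.py / main_challenge.py and may differ):

import argparse
from utils.config import get_config

parser = argparse.ArgumentParser()
parser.add_argument('-cfg', '--config', required=True)
parser.add_argument('--opts', nargs='+', default=None)
parser.add_argument('--batch-size', dest='batch_size', type=int, default=None)
parser.add_argument('--pretrained', default=None)
parser.add_argument('--resume', default=None)
parser.add_argument('--accumulation-steps', dest='accumulation_steps', type=int, default=None)
parser.add_argument('--output', default=None)
parser.add_argument('--only-test', dest='only_test', action='store_true')
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()
config = get_config(args)  # returns a frozen yacs CfgNode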
utils/logger.py ADDED
@@ -0,0 +1,34 @@
1
+ import os
2
+ import sys
3
+ import logging
4
+ import functools
5
+ from termcolor import colored
6
+
7
+
8
+ @functools.lru_cache()
9
+ def create_logger(output_dir, dist_rank=0, name=''):
10
+ # create logger
11
+ logger = logging.getLogger(name)
12
+ logger.setLevel(logging.DEBUG)
13
+ logger.propagate = False
14
+
15
+ # create formatter
16
+ fmt = '[%(asctime)s %(name)s] (%(filename)s %(lineno)d): %(levelname)s %(message)s'
17
+ color_fmt = colored('[%(asctime)s %(name)s]', 'green') + \
18
+ colored('(%(filename)s %(lineno)d)', 'yellow') + ': %(levelname)s %(message)s'
19
+
20
+ # create console handlers for master process
21
+ if dist_rank == 0:
22
+ console_handler = logging.StreamHandler(sys.stdout)
23
+ console_handler.setLevel(logging.DEBUG)
24
+ console_handler.setFormatter(
25
+ logging.Formatter(fmt=color_fmt, datefmt='%Y-%m-%d %H:%M:%S'))
26
+ logger.addHandler(console_handler)
27
+
28
+ # create file handlers
29
+ file_handler = logging.FileHandler(os.path.join(output_dir, f'log_rank{dist_rank}.txt'), mode='a')
30
+ file_handler.setLevel(logging.DEBUG)
31
+ file_handler.setFormatter(logging.Formatter(fmt=fmt, datefmt='%Y-%m-%d %H:%M:%S'))
32
+ logger.addHandler(file_handler)
33
+
34
+ return logger
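
Because create_logger() is wrapped in functools.lru_cache, repeated calls with the same arguments return the same logger object, so handlers are attached only once per (output_dir, dist_rank, name). A quick sketch, assuming the output directory already exists (main_train.py creates it before logging):

from utils.logger import create_logger

log_a = create_logger('work_dirs', dist_rank=0, name='ViT-B/16')
log_b = create_logger('work_dirs', dist_rank=0, name='ViT-B/16')
assert log_a is log_b  # cached: no duplicate console/file handlers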
utils/optimizer.py ADDED
@@ -0,0 +1,63 @@
1
+ import copy
2
+ import torch.optim as optim
3
+ from timm.scheduler.cosine_lr import CosineLRScheduler
4
+ import torch.distributed as dist
5
+
6
+
7
+ def is_main_process():
8
+ return dist.get_rank() == 0
9
+
10
+
11
+ def check_keywords_in_name(name, keywords=()):
12
+ isin = False
13
+ for keyword in keywords:
14
+ if keyword in name:
15
+ isin = True
16
+ return isin
17
+
18
+
19
+ def set_weight_decay(model, skip_list=(), skip_keywords=(), weight_decay=0.001, lr=2e-6, have=(), not_have=()):
20
+ has_decay = []
21
+ no_decay = []
22
+ for name, param in model.named_parameters():
23
+ if not param.requires_grad:
24
+ continue # frozen weights
25
+ if len(have) > 0 and not check_keywords_in_name(name, have):
26
+ continue
27
+ if len(not_have) > 0 and check_keywords_in_name(name, not_have):
28
+ continue
29
+ if len(param.shape) == 1 or name.endswith(".bias") or (name in skip_list) or \
30
+ check_keywords_in_name(name, skip_keywords):
31
+ no_decay.append(param)
32
+ else:
33
+ has_decay.append(param)
34
+
35
+ return [{'params': has_decay, 'weight_decay': weight_decay, 'lr': lr},
36
+ {'params': no_decay, 'weight_decay': 0., 'lr': lr}]
37
+
38
+
39
+ def build_optimizer(config, model):
40
+ model = model.module if hasattr(model, 'module') else model
41
+
42
+ optimizer = optim.AdamW(model.parameters(), lr=config.TRAIN.LR,
43
+ weight_decay=config.TRAIN.WEIGHT_DECAY,
44
+ betas=(0.9, 0.98), eps=1e-8, )
45
+
46
+ return optimizer
47
+
48
+
49
+ def build_scheduler(config, optimizer, n_iter_per_epoch):
50
+ num_steps = int(config.TRAIN.EPOCHS * n_iter_per_epoch)
51
+ warmup_steps = int(config.TRAIN.WARMUP_EPOCHS * n_iter_per_epoch)
52
+
53
+ lr_scheduler = CosineLRScheduler(
54
+ optimizer,
55
+ t_initial=num_steps,
56
+ lr_min=config.TRAIN.LR / 100,
57
+ warmup_lr_init=0,
58
+ warmup_t=warmup_steps,
59
+ cycle_limit=1,
60
+ t_in_epochs=False,
61
+ )
62
+
63
+ return lr_scheduler
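
build_optimizer() above applies one uniform weight decay to every parameter, which leaves the set_weight_decay() helper unused. A sketch of the grouped alternative it enables, where biases and 1-D parameters (e.g. LayerNorm weights) get zero decay (build_optimizer_grouped is illustrative, not part of the repo):

import torch.optim as optim

from utils.optimizer import set_weight_decay

def build_optimizer_grouped(config, model):
    model = model.module if hasattr(model, 'module') else model
    param_groups = set_weight_decay(model,
                                    weight_decay=config.TRAIN.WEIGHT_DECAY,
                                    lr=config.TRAIN.LR)
    return optim.AdamW(param_groups, betas=(0.9, 0.98), eps=1e-8)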
utils/tools.py ADDED
@@ -0,0 +1,123 @@
1
+ import numpy
2
+ import torch.distributed as dist
3
+ import torch
4
+ import clip
5
+ import os
6
+
7
+
8
+ def reduce_tensor(tensor, n=None):
9
+ if n is None:
10
+ n = dist.get_world_size()
11
+ rt = tensor.clone()
12
+ dist.all_reduce(rt, op=dist.ReduceOp.SUM)
13
+ rt = rt / n
14
+ return rt
15
+
16
+
17
+ class AverageMeter:
18
+ """Computes and stores the average and current value"""
19
+ def __init__(self):
20
+ self.reset()
21
+
22
+ def reset(self):
23
+ self.val = 0
24
+ self.avg = 0
25
+ self.sum = 0
26
+ self.count = 0
27
+
28
+ def update(self, val, n=1):
29
+ self.val = val
30
+ self.sum += val * n
31
+ self.count += n
32
+ self.avg = self.sum / self.count
33
+
34
+ def sync(self):
35
+ rank = dist.get_rank()
36
+ world_size = dist.get_world_size()
37
+ val = torch.tensor(self.val).cuda()
38
+ sum_v = torch.tensor(self.sum).cuda()
39
+ count = torch.tensor(self.count).cuda()
40
+ self.val = reduce_tensor(val, world_size).item()
41
+ self.sum = reduce_tensor(sum_v, 1).item()
42
+ self.count = reduce_tensor(count, 1).item()
43
+ self.avg = self.sum / self.count
44
+
45
+
46
+ def epoch_saving(config, epoch, model, max_accuracy, optimizer, lr_scheduler, logger, working_dir, is_best):
47
+ save_state = {'model': model.state_dict(),
48
+ 'optimizer': optimizer.state_dict(),
49
+ 'lr_scheduler': lr_scheduler.state_dict(),
50
+ 'max_accuracy': max_accuracy,
51
+ 'epoch': epoch,
52
+ 'config': config}
53
+
54
+ save_path = os.path.join(working_dir, f'ckpt_epoch_{epoch}.pth')
55
+ logger.info(f"{save_path} saving......")
56
+ torch.save(save_state, save_path)
57
+ logger.info(f"{save_path} saved !!!")
58
+ if is_best:
59
+ best_path = os.path.join(working_dir, f'best.pth')
60
+ torch.save(save_state, best_path)
61
+ logger.info(f"{best_path} saved !!!")
62
+
63
+
64
+ def load_checkpoint(config, model, optimizer, lr_scheduler, logger):
65
+ if os.path.isfile(config.MODEL.RESUME):
66
+ logger.info(f"==============> Resuming form {config.MODEL.RESUME}....................")
67
+ checkpoint = torch.load(config.MODEL.RESUME, map_location='cpu')
68
+ load_state_dict = checkpoint['model']
69
+
70
+ # now remove the unwanted keys:
71
+ if "module.prompt_learner.token_prefix" in load_state_dict:
72
+ del load_state_dict["module.prompt_learner.token_prefix"]
73
+
74
+ if "module.prompt_learner.token_suffix" in load_state_dict:
75
+ del load_state_dict["module.prompt_learner.token_suffix"]
76
+
77
+ if "module.prompt_learner.complete_text_embeddings" in load_state_dict:
78
+ del load_state_dict["module.prompt_learner.complete_text_embeddings"]
79
+
80
+ msg = model.load_state_dict(load_state_dict, strict=False)
81
+ logger.info(f"resume model: {msg}")
82
+
83
+ try:
84
+ optimizer.load_state_dict(checkpoint['optimizer'])
85
+ lr_scheduler.load_state_dict(checkpoint['lr_scheduler'])
86
+
87
+ start_epoch = checkpoint['epoch'] + 1
88
+ max_accuracy = checkpoint['max_accuracy']
89
+
90
+ logger.info(f"=> loaded successfully '{config.MODEL.RESUME}' (epoch {checkpoint['epoch']})")
91
+
92
+ del checkpoint
93
+ torch.cuda.empty_cache()
94
+
95
+ return start_epoch, max_accuracy
96
+ except Exception:  # optimizer/scheduler state missing or incompatible; start from epoch 0
97
+ del checkpoint
98
+ torch.cuda.empty_cache()
99
+ return 0, 0.
100
+
101
+ else:
102
+ logger.info(("=> no checkpoint found at '{}'".format(config.MODEL.RESUME)))
103
+ return 0, 0
104
+
105
+
106
+ def auto_resume_helper(output_dir):
107
+ checkpoints = os.listdir(output_dir)
108
+ checkpoints = [ckpt for ckpt in checkpoints if ckpt.endswith('pth')]
109
+ print(f"All checkpoints founded in {output_dir}: {checkpoints}")
110
+ if len(checkpoints) > 0:
111
+ latest_checkpoint = max([os.path.join(output_dir, d) for d in checkpoints], key=os.path.getmtime)
112
+ print(f"The latest checkpoint founded: {latest_checkpoint}")
113
+ resume_file = latest_checkpoint
114
+ else:
115
+ resume_file = None
116
+ return resume_file
117
+
118
+
119
+ def generate_text(data):
120
+ text_aug = f"{{}}"
121
+ classes = torch.cat([clip.tokenize(text_aug.format(c), context_length=77) for i, c in data.classes])
122
+
123
+ return classes
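
AverageMeter keeps a running weighted average via update(), and sync() (called at the end of validate()) all-reduces the sum and count across ranks so every process reports the global average. A single-process sketch of the bookkeeping:

from utils.tools import AverageMeter

meter = AverageMeter()
meter.update(90.0, n=8)  # e.g. a batch of 8 samples at 90% top-1 accuracy
meter.update(75.0, n=4)  # a batch of 4 at 75%
print(meter.avg)         # -> 85.0, i.e. (90*8 + 75*4) / 12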
val.csv ADDED
@@ -0,0 +1,4616 @@
1
+ 0000.mp4,0
2
+ 0003.mp4,0
3
+ 0005.mp4,0
4
+ 0006.mp4,0
5
+ 0007.mp4,0
6
+ 0008.mp4,0
7
+ 0009.mp4,0
8
+ 0010.mp4,0
9
+ 0011.mp4,0
10
+ 0012.mp4,0
11
+ 0013.mp4,0
12
+ 0014.mp4,0
13
+ 0015.mp4,0
14
+ 0016.mp4,0
15
+ 0017.mp4,0
16
+ 0018.mp4,0
17
+ 0019.mp4,0
18
+ 0020.mp4,0
19
+ 0021.mp4,0
20
+ 0022.mp4,0
21
+ 0023.mp4,0
22
+ 0024.mp4,0
23
+ 0025.mp4,0
24
+ 0026.mp4,0
25
+ 0027.mp4,0
26
+ 0028.mp4,0
27
+ 0029.mp4,0
28
+ 0030.mp4,0
29
+ 0031.mp4,0
30
+ 0032.mp4,0
31
+ 0033.mp4,0
32
+ 0034.mp4,0
33
+ 0035.mp4,0
34
+ 0036.mp4,0
35
+ 0037.mp4,0
36
+ 0038.mp4,0
37
+ 0039.mp4,0
38
+ 0040.mp4,0
39
+ 0041.mp4,0
40
+ 0042.mp4,0
41
+ 0043.mp4,0
42
+ 0044.mp4,0
43
+ 0045.mp4,0
44
+ 0046.mp4,0
45
+ 0047.mp4,0
46
+ 0048.mp4,0
47
+ 0049.mp4,0
48
+ 0050.mp4,0
49
+ 0051.mp4,0
50
+ 0052.mp4,0
51
+ 0053.mp4,0
52
+ 0054.mp4,0
53
+ 0055.mp4,0
54
+ 0056.mp4,0
55
+ 0057.mp4,0
56
+ 0058.mp4,0
57
+ 0059.mp4,0
58
+ 0060.mp4,0
59
+ 0061.mp4,0
60
+ 0062.mp4,0
61
+ 0063.mp4,0
62
+ 0064.mp4,0
63
+ 0065.mp4,0
64
+ 0066.mp4,0
65
+ 0067.mp4,0
66
+ 0068.mp4,0
67
+ 0069.mp4,0
68
+ 0070.mp4,0
69
+ 0071.mp4,0
70
+ 0072.mp4,0
71
+ 0073.mp4,0
72
+ 0074.mp4,0
73
+ 0075.mp4,0
74
+ 0076.mp4,0
75
+ 0077.mp4,0
76
+ 0078.mp4,0
77
+ 0079.mp4,0
78
+ 0080.mp4,0
79
+ 0081.mp4,0
80
+ 0082.mp4,0
81
+ 0083.mp4,0
82
+ 0084.mp4,0
83
+ 0085.mp4,0
84
+ 0086.mp4,0
85
+ 0087.mp4,0
86
+ 0088.mp4,0
87
+ 0089.mp4,0
88
+ 0090.mp4,0
89
+ 0091.mp4,0
90
+ 0092.mp4,0
91
+ 0093.mp4,0
92
+ 0094.mp4,0
93
+ 0095.mp4,0
94
+ 0096.mp4,0
95
+ 0097.mp4,0
96
+ 0098.mp4,0
97
+ 0099.mp4,0
98
+ 0100.mp4,0
99
+ 0101.mp4,0
100
+ 0102.mp4,0
101
+ 0103.mp4,0
102
+ 0104.mp4,0
103
+ 0105.mp4,0
104
+ 0106.mp4,0
105
+ 0107.mp4,0
106
+ 0108.mp4,0
107
+ 0109.mp4,0
108
+ 0110.mp4,0
109
+ 0111.mp4,0
110
+ 0112.mp4,0
111
+ 0113.mp4,0
112
+ 0114.mp4,0
113
+ 0115.mp4,0
114
+ 0116.mp4,0
115
+ 0117.mp4,0
116
+ 0118.mp4,0
117
+ 0119.mp4,0
118
+ 0120.mp4,0
119
+ 0121.mp4,0
120
+ 0122.mp4,0
121
+ 0123.mp4,0
122
+ 0124.mp4,0
123
+ 0125.mp4,0
124
+ 0126.mp4,0
125
+ 0127.mp4,0
126
+ 0128.mp4,0
127
+ 0129.mp4,0
128
+ 0130.mp4,0
129
+ 0131.mp4,0
130
+ 0132.mp4,0
131
+ 0133.mp4,0
132
+ 0134.mp4,0
133
+ 0135.mp4,0
134
+ 0136.mp4,0
135
+ 0137.mp4,0
136
+ 0138.mp4,0
137
+ 0139.mp4,0
138
+ 0140.mp4,0
139
+ 0141.mp4,0
140
+ 0142.mp4,0
141
+ 0143.mp4,0
142
+ 0144.mp4,0
143
+ 0145.mp4,0
144
+ 0146.mp4,0
145
+ 0147.mp4,0
146
+ 0148.mp4,0
147
+ 0149.mp4,0
148
+ 0150.mp4,0
149
+ 0151.mp4,0
150
+ 0152.mp4,0
151
+ 0153.mp4,0
152
+ 0154.mp4,0
153
+ 0155.mp4,0
154
+ 0156.mp4,0
155
+ 0157.mp4,0
156
+ 0158.mp4,0
157
+ 0159.mp4,0
158
+ 0160.mp4,0
159
+ 0161.mp4,0
160
+ 0162.mp4,0
161
+ 0163.mp4,0
162
+ 0164.mp4,0
163
+ 0165.mp4,0
164
+ 0166.mp4,0
165
+ 0167.mp4,0
166
+ 0168.mp4,0
167
+ 0169.mp4,0
168
+ 0170.mp4,0
169
+ 0171.mp4,0
170
+ 0172.mp4,0
171
+ 0173.mp4,0
172
+ 0174.mp4,0
173
+ 0175.mp4,0
174
+ 0176.mp4,0
175
+ 0177.mp4,0
176
+ 0178.mp4,0
177
+ 0179.mp4,0
178
+ 0180.mp4,0
179
+ 0181.mp4,0
180
+ 0182.mp4,0
181
+ 0183.mp4,0
182
+ 0184.mp4,0
183
+ 0185.mp4,0
184
+ 0186.mp4,0
185
+ 0187.mp4,0
186
+ 0188.mp4,0
187
+ 0189.mp4,0
188
+ 0190.mp4,0
189
+ 0191.mp4,0
190
+ 0192.mp4,0
191
+ 0193.mp4,0
192
+ 0194.mp4,0
193
+ 0195.mp4,0
194
+ 0196.mp4,0
195
+ 0197.mp4,0
196
+ 0198.mp4,0
197
+ 0199.mp4,0
198
+ 0200.mp4,0
199
+ 0201.mp4,0
200
+ 0202.mp4,0
201
+ 0203.mp4,0
202
+ 0204.mp4,0
203
+ 0205.mp4,0
204
+ 0206.mp4,0
205
+ 0207.mp4,0
206
+ 0208.mp4,0
207
+ 0209.mp4,0
208
+ 0210.mp4,0
209
+ 0211.mp4,0
210
+ 0212.mp4,0
211
+ 0213.mp4,0
212
+ 0214.mp4,0
213
+ 0215.mp4,0
214
+ 0216.mp4,0
215
+ 0217.mp4,0
216
+ 0218.mp4,0
217
+ 0219.mp4,0
218
+ 0220.mp4,0
219
+ 0221.mp4,0
220
+ 0222.mp4,0
221
+ 0223.mp4,0
222
+ 0224.mp4,0
223
+ 0225.mp4,0
224
+ 0226.mp4,0
225
+ 0227.mp4,0
226
+ 0228.mp4,0
227
+ 0229.mp4,0
228
+ 0230.mp4,0
229
+ 0231.mp4,0
230
+ 0232.mp4,0
231
+ 0233.mp4,0
232
+ 0234.mp4,0
233
+ 0235.mp4,0
234
+ 0236.mp4,0
235
+ 0237.mp4,0
236
+ 0238.mp4,0
237
+ 0239.mp4,0
238
+ 0240.mp4,0
239
+ 0241.mp4,0
240
+ 0242.mp4,0
241
+ 0243.mp4,0
242
+ 0244.mp4,0
243
+ 0245.mp4,0
244
+ 0246.mp4,0
245
+ 0247.mp4,0
246
+ 0248.mp4,0
247
+ 0249.mp4,0
248
+ 0250.mp4,0
249
+ 0251.mp4,0
250
+ 0252.mp4,0
251
+ 0253.mp4,0
252
+ 0254.mp4,0
253
+ 0255.mp4,0
254
+ 0256.mp4,0
255
+ 0257.mp4,0
256
+ 0258.mp4,0
257
+ 0259.mp4,0
258
+ 0260.mp4,0
259
+ 0261.mp4,0
260
+ 0262.mp4,0
261
+ 0263.mp4,0
262
+ 0264.mp4,0
263
+ 0265.mp4,0
264
+ 0266.mp4,0
265
+ 0267.mp4,0
266
+ 0268.mp4,0
267
+ 0269.mp4,0
268
+ 0270.mp4,0
269
+ 0271.mp4,0
270
+ 0272.mp4,0
271
+ 0273.mp4,0
272
+ 0274.mp4,0
273
+ 0275.mp4,0
274
+ 0276.mp4,0
275
+ 0277.mp4,0
276
+ 0278.mp4,0
277
+ 0279.mp4,0
278
+ 0280.mp4,0
279
+ 0281.mp4,0
280
+ 0282.mp4,0
281
+ 0283.mp4,0
282
+ 0284.mp4,0
283
+ 0285.mp4,0
284
+ 0286.mp4,0
285
+ 0287.mp4,0
286
+ 0288.mp4,0
287
+ 0289.mp4,0
288
+ 0290.mp4,0
289
+ 0291.mp4,0
290
+ 0292.mp4,0
291
+ 0293.mp4,0
292
+ 0294.mp4,0
293
+ 0295.mp4,0
294
+ 0296.mp4,0
295
+ 0297.mp4,0
296
+ 0298.mp4,0
297
+ 0299.mp4,0
298
+ 0300.mp4,0
299
+ 0301.mp4,0
300
+ 0302.mp4,0
301
+ 0303.mp4,0
302
+ 0304.mp4,0
303
+ 0305.mp4,0
304
+ 0306.mp4,0
305
+ 0307.mp4,0
306
+ 0308.mp4,0
307
+ 0309.mp4,0
308
+ 0310.mp4,0
309
+ 0311.mp4,0
310
+ 0312.mp4,0
311
+ 0313.mp4,0
312
+ 0314.mp4,0
313
+ 0315.mp4,0
314
+ 0316.mp4,0
315
+ 0317.mp4,0
316
+ 0318.mp4,0
317
+ 0319.mp4,0
318
+ 0320.mp4,0
319
+ 0321.mp4,0
320
+ 0322.mp4,0
321
+ 0323.mp4,0
322
+ 0324.mp4,0
323
+ 0325.mp4,0
324
+ 0326.mp4,0
325
+ 0327.mp4,0
326
+ 0328.mp4,0
327
+ 0329.mp4,0
328
+ 0330.mp4,0
329
+ 0331.mp4,0
330
+ 0332.mp4,0
331
+ 0333.mp4,0
332
+ 0334.mp4,0
333
+ 0335.mp4,0
334
+ 0336.mp4,0
335
+ 0337.mp4,0
336
+ 0338.mp4,0
337
+ 0339.mp4,0
338
+ 0340.mp4,0
339
+ 0341.mp4,0
340
+ 0342.mp4,0
341
+ 0343.mp4,0
342
+ 0344.mp4,0
343
+ 0345.mp4,0
344
+ 0346.mp4,0
345
+ 0347.mp4,0
346
+ 0348.mp4,0
347
+ 0349.mp4,0
348
+ 0350.mp4,0
349
+ 0351.mp4,0
350
+ 0352.mp4,0
351
+ 0353.mp4,0
352
+ 0354.mp4,0
353
+ 0355.mp4,0
354
+ 0356.mp4,0
355
+ 0357.mp4,0
356
+ 0358.mp4,0
357
+ 0359.mp4,0
358
+ 0360.mp4,0
359
+ 0361.mp4,0
360
+ 0362.mp4,0
361
+ 0363.mp4,0
362
+ 0364.mp4,0
363
+ 0365.mp4,0
364
+ 0366.mp4,0
365
+ 0367.mp4,0
366
+ 0368.mp4,0
367
+ 0369.mp4,0
368
+ 0370.mp4,0
369
+ 0371.mp4,0
370
+ 0372.mp4,0
371
+ 0373.mp4,0
372
+ 0374.mp4,0
373
+ 0375.mp4,0
374
+ 0376.mp4,0
375
+ 0377.mp4,0
376
+ 0378.mp4,0
377
+ 0379.mp4,0
378
+ 0380.mp4,0
379
+ 0381.mp4,0
380
+ 0382.mp4,0
381
+ 0383.mp4,0
382
+ 0384.mp4,0
383
+ 0385.mp4,0
384
+ 0386.mp4,0
385
+ 0387.mp4,0
386
+ 0388.mp4,0
387
+ 0389.mp4,0
388
+ 0390.mp4,0
389
+ 0391.mp4,0
390
+ 0392.mp4,0
391
+ 0393.mp4,0
392
+ 0394.mp4,0
393
+ 0395.mp4,0
394
+ 0396.mp4,0
395
+ 0397.mp4,0
396
+ 0398.mp4,0
397
+ 0399.mp4,0
398
+ 0400.mp4,0
399
+ 0401.mp4,0
400
+ 0402.mp4,0
401
+ 0403.mp4,0
402
+ 0404.mp4,0
403
+ 0405.mp4,0
404
+ 0406.mp4,0
405
+ 0407.mp4,0
406
+ 0408.mp4,0
407
+ 0409.mp4,0
408
+ 0410.mp4,0
409
+ 0411.mp4,0
410
+ 0412.mp4,0
411
+ 0413.mp4,0
412
+ 0414.mp4,0
413
+ 0415.mp4,0
414
+ 0416.mp4,0
415
+ 0417.mp4,0
416
+ 0418.mp4,0
417
+ 0419.mp4,0
418
+ 0420.mp4,0
419
+ 0421.mp4,0
420
+ 0422.mp4,0
421
+ 0423.mp4,0
422
+ 0424.mp4,0
423
+ 0425.mp4,0
424
+ 0426.mp4,0
425
+ 0427.mp4,0
426
+ 0428.mp4,0
427
+ 0429.mp4,0
428
+ 0430.mp4,0
429
+ 0431.mp4,0
430
+ 0432.mp4,0
431
+ 0433.mp4,0
432
+ 0434.mp4,0
433
+ 0435.mp4,0
434
+ 0436.mp4,0
435
+ 0437.mp4,0
436
+ 0438.mp4,0
437
+ 0439.mp4,0
438
+ 0440.mp4,0
439
+ 0441.mp4,0
440
+ 0442.mp4,0
441
+ 0443.mp4,0
442
+ 0444.mp4,0
443
+ 0445.mp4,0
444
+ 0446.mp4,0
445
+ 0447.mp4,0
446
+ 0448.mp4,0
447
+ 0449.mp4,0
448
+ 0450.mp4,0
449
+ 0451.mp4,0
450
+ 0452.mp4,0
451
+ 0453.mp4,0
452
+ 0454.mp4,0
453
+ 0455.mp4,0
454
+ 0456.mp4,0
455
+ 0457.mp4,0
456
+ 0458.mp4,0
457
+ 0459.mp4,0
458
+ 0460.mp4,0
459
+ 0461.mp4,0
460
+ 0462.mp4,0
461
+ 0463.mp4,0
462
+ 0464.mp4,0
463
+ 0465.mp4,0
464
+ 0466.mp4,0
465
+ 0467.mp4,0
466
+ 0468.mp4,0
467
+ 0469.mp4,0
468
+ 0470.mp4,0
469
+ 0471.mp4,0
470
+ 0472.mp4,0
471
+ 0473.mp4,0
472
+ 0474.mp4,0
473
+ 0475.mp4,0
474
+ 0476.mp4,0
475
+ 0477.mp4,0
476
+ 0478.mp4,0
477
+ 0479.mp4,0
478
+ 0480.mp4,0
479
+ 0481.mp4,0
480
+ 0482.mp4,0
481
+ 0483.mp4,0
482
+ 0484.mp4,0
483
+ 0485.mp4,0
484
+ 0486.mp4,0
485
+ 0487.mp4,0
486
+ 0488.mp4,0
487
+ 0489.mp4,0
488
+ 0490.mp4,0
489
+ 0491.mp4,0
490
+ 0492.mp4,0
491
+ 0493.mp4,0
492
+ 0494.mp4,0
493
+ 0495.mp4,0
494
+ 0496.mp4,0
495
+ 0497.mp4,0
496
+ 0498.mp4,0
497
+ 0499.mp4,0
498
+ 0500.mp4,0
499
+ 0501.mp4,0
500
+ 0502.mp4,0
501
+ 0503.mp4,0
502
+ 0504.mp4,0
503
+ 0505.mp4,0
504
+ 0506.mp4,0
505
+ 0507.mp4,0
506
+ 0508.mp4,0
507
+ 0509.mp4,0
508
+ 0510.mp4,0
509
+ 0511.mp4,0
510
+ 0512.mp4,0
511
+ 0513.mp4,0
512
+ 0514.mp4,0
513
+ 0515.mp4,0
514
+ 0516.mp4,0
515
+ 0517.mp4,0
516
+ 0518.mp4,0
517
+ 0519.mp4,0
518
+ 0520.mp4,0
519
+ 0521.mp4,0
520
+ 0522.mp4,0
521
+ 0523.mp4,0
522
+ 0524.mp4,0
523
+ 0525.mp4,0
524
+ 0526.mp4,0
525
+ 0527.mp4,0
526
+ 0528.mp4,0
527
+ 0529.mp4,0
528
+ 0530.mp4,0
529
+ 0531.mp4,0
530
+ 0532.mp4,0
531
+ 0533.mp4,0
532
+ 0534.mp4,0
533
+ 0535.mp4,0
534
+ 0536.mp4,0
535
+ 0537.mp4,0
536
+ 0538.mp4,0
537
+ 0539.mp4,0
538
+ 0540.mp4,0
539
+ 0541.mp4,0
540
+ 0542.mp4,0
541
+ 0543.mp4,0
542
+ 0544.mp4,0
543
+ 0545.mp4,0
544
+ 0546.mp4,0
545
+ 0547.mp4,0
546
+ 0548.mp4,0
547
+ 0549.mp4,0
548
+ 0550.mp4,0
549
+ 0551.mp4,0
550
+ 0552.mp4,0
551
+ 0553.mp4,0
552
+ 0554.mp4,0
553
+ 0555.mp4,0
554
+ 0556.mp4,0
555
+ 0557.mp4,0
556
+ 0558.mp4,0
557
+ 0559.mp4,0
558
+ 0560.mp4,0
559
+ 0561.mp4,0
560
+ 0562.mp4,0
561
+ 0563.mp4,0
562
+ 0564.mp4,0
563
+ 0565.mp4,0
564
+ 0566.mp4,0
565
+ 0567.mp4,0
566
+ 0568.mp4,0
567
+ 0569.mp4,0
568
+ 0570.mp4,0
569
+ 0571.mp4,0
570
+ 0572.mp4,0
571
+ 0573.mp4,0
572
+ 0574.mp4,0
573
+ 0575.mp4,0
574
+ 0576.mp4,0
575
+ 0577.mp4,0
576
+ 0578.mp4,0
577
+ 0579.mp4,0
578
+ 0580.mp4,0
579
+ 0581.mp4,0
580
+ 0582.mp4,0
581
+ 0583.mp4,0
582
+ 0584.mp4,0
583
+ 0585.mp4,0
584
+ 0586.mp4,0
585
+ 0587.mp4,0
586
+ 0588.mp4,0
587
+ 0589.mp4,0
588
+ 0590.mp4,0
589
+ 0591.mp4,0
590
+ 0592.mp4,0
591
+ 0593.mp4,0
592
+ 0594.mp4,0
593
+ 0595.mp4,0
594
+ 0596.mp4,0
595
+ 0597.mp4,0
596
+ 0598.mp4,0
597
+ 0599.mp4,0
598
+ 0600.mp4,0
599
+ 0601.mp4,0
600
+ 0602.mp4,0
601
+ 0603.mp4,0
602
+ 0604.mp4,0
603
+ 0605.mp4,0
604
+ 0606.mp4,0
605
+ 0607.mp4,0
606
+ 0608.mp4,0
607
+ 0609.mp4,0
608
+ 0610.mp4,0
609
+ 0611.mp4,0
610
+ 0612.mp4,0
611
+ 0613.mp4,0
612
+ 0614.mp4,0
613
+ 0615.mp4,0
614
+ 0616.mp4,0
615
+ 0617.mp4,0
616
+ 0618.mp4,0
617
+ 0619.mp4,0
618
+ 0620.mp4,0
619
+ 0621.mp4,0
620
+ 0622.mp4,0
621
+ 0623.mp4,0
622
+ 0624.mp4,0
623
+ 0625.mp4,0
624
+ 0626.mp4,0
625
+ 0627.mp4,0
626
+ 0628.mp4,0
627
+ 0629.mp4,0
628
+ 0630.mp4,0
629
+ 0631.mp4,0
630
+ 0632.mp4,0
631
+ 0633.mp4,0
632
+ 0634.mp4,0
633
+ 0635.mp4,0
634
+ 0636.mp4,0
635
+ 0637.mp4,0
636
+ 0638.mp4,0
637
+ 0639.mp4,0
638
+ 0640.mp4,0
639
+ 0641.mp4,0
640
+ 0642.mp4,0
641
+ 0643.mp4,0
642
+ 0644.mp4,0
643
+ 0645.mp4,0
644
+ 0646.mp4,0
645
+ 0647.mp4,0
646
+ 0648.mp4,0
647
+ 0649.mp4,0
648
+ 0650.mp4,0
649
+ 0651.mp4,0
650
+ 0652.mp4,0
651
+ 0653.mp4,0
652
+ 0654.mp4,0
653
+ 0655.mp4,0
654
+ 0656.mp4,0
655
+ 0657.mp4,0
656
+ 0658.mp4,0
657
+ 0659.mp4,0
658
+ 0660.mp4,0
659
+ 0661.mp4,0
660
+ 0662.mp4,0
661
+ 0663.mp4,0
662
+ 0664.mp4,0
663
+ 0665.mp4,0
664
+ 0666.mp4,0
665
+ 0667.mp4,0
666
+ 0668.mp4,0
667
+ 0669.mp4,0
668
+ 0670.mp4,0
669
+ 0671.mp4,0
670
+ 0672.mp4,0
671
+ 0673.mp4,0
672
+ 0674.mp4,0
673
+ 0675.mp4,0
674
+ 0676.mp4,0
675
+ 0677.mp4,0
676
+ 0678.mp4,0
677
+ 0679.mp4,0
678
+ 0680.mp4,0
679
+ 0681.mp4,0
680
+ 0682.mp4,0
681
+ 0683.mp4,0
682
+ 0684.mp4,0
683
+ 0685.mp4,0
684
+ 0686.mp4,0
685
+ 0687.mp4,0
686
+ 0688.mp4,0
687
+ 0689.mp4,0
688
+ 0690.mp4,0
689
+ 0691.mp4,0
690
+ 0692.mp4,0
691
+ 0693.mp4,0
692
+ 0694.mp4,0
693
+ 0695.mp4,0
694
+ 0696.mp4,0
695
+ 0697.mp4,0
696
+ 0698.mp4,0
697
+ 0699.mp4,0
698
+ 0700.mp4,0
699
+ 0701.mp4,0
700
+ 0702.mp4,0
701
+ 0703.mp4,0
702
+ 0704.mp4,0
703
+ 0705.mp4,0
704
+ 0706.mp4,0
705
+ 0707.mp4,0
706
+ 0708.mp4,0
707
+ 0709.mp4,0
708
+ 0710.mp4,0
709
+ 0711.mp4,0
710
+ 0712.mp4,0
+ 0713.mp4,0
+ 0714.mp4,0
[… further val.csv rows in the same `<id>.mp4,0` format, one row per validation clip (all with label 0), running from 0715.mp4 through 4664.mp4 with occasional gaps in the numbering …]
+ 4665.mp4,0
4283
+ 4666.mp4,0
4284
+ 4667.mp4,0
4285
+ 4668.mp4,0
4286
+ 4669.mp4,0
4287
+ 4670.mp4,0
4288
+ 4671.mp4,0
4289
+ 4672.mp4,0
4290
+ 4673.mp4,0
4291
+ 4674.mp4,0
4292
+ 4675.mp4,0
4293
+ 4676.mp4,0
4294
+ 4677.mp4,0
4295
+ 4678.mp4,0
4296
+ 4679.mp4,0
4297
+ 4680.mp4,0
4298
+ 4681.mp4,0
4299
+ 4682.mp4,0
4300
+ 4683.mp4,0
4301
+ 4684.mp4,0
4302
+ 4685.mp4,0
4303
+ 4686.mp4,0
4304
+ 4687.mp4,0
4305
+ 4688.mp4,0
4306
+ 4689.mp4,0
4307
+ 4690.mp4,0
4308
+ 4691.mp4,0
4309
+ 4692.mp4,0
4310
+ 4693.mp4,0
4311
+ 4694.mp4,0
4312
+ 4695.mp4,0
4313
+ 4696.mp4,0
4314
+ 4697.mp4,0
4315
+ 4698.mp4,0
4316
+ 4699.mp4,0
4317
+ 4700.mp4,0
4318
+ 4701.mp4,0
4319
+ 4702.mp4,0
4320
+ 4703.mp4,0
4321
+ 4704.mp4,0
4322
+ 4705.mp4,0
4323
+ 4706.mp4,0
4324
+ 4707.mp4,0
4325
+ 4708.mp4,0
4326
+ 4709.mp4,0
4327
+ 4710.mp4,0
4328
+ 4711.mp4,0
4329
+ 4712.mp4,0
4330
+ 4713.mp4,0
4331
+ 4714.mp4,0
4332
+ 4715.mp4,0
4333
+ 4716.mp4,0
4334
+ 4717.mp4,0
4335
+ 4718.mp4,0
4336
+ 4719.mp4,0
4337
+ 4720.mp4,0
4338
+ 4721.mp4,0
4339
+ 4722.mp4,0
4340
+ 4723.mp4,0
4341
+ 4724.mp4,0
4342
+ 4725.mp4,0
4343
+ 4726.mp4,0
4344
+ 4727.mp4,0
4345
+ 4728.mp4,0
4346
+ 4729.mp4,0
4347
+ 4730.mp4,0
4348
+ 4731.mp4,0
4349
+ 4732.mp4,0
4350
+ 4733.mp4,0
4351
+ 4734.mp4,0
4352
+ 4735.mp4,0
4353
+ 4736.mp4,0
4354
+ 4737.mp4,0
4355
+ 4738.mp4,0
4356
+ 4739.mp4,0
4357
+ 4740.mp4,0
4358
+ 4741.mp4,0
4359
+ 4742.mp4,0
4360
+ 4743.mp4,0
4361
+ 4744.mp4,0
4362
+ 4745.mp4,0
4363
+ 4746.mp4,0
4364
+ 4747.mp4,0
4365
+ 4748.mp4,0
4366
+ 4749.mp4,0
4367
+ 4750.mp4,0
4368
+ 4751.mp4,0
4369
+ 4752.mp4,0
4370
+ 4753.mp4,0
4371
+ 4754.mp4,0
4372
+ 4755.mp4,0
4373
+ 4756.mp4,0
4374
+ 4757.mp4,0
4375
+ 4758.mp4,0
4376
+ 4759.mp4,0
4377
+ 4760.mp4,0
4378
+ 4761.mp4,0
4379
+ 4762.mp4,0
4380
+ 4763.mp4,0
4381
+ 4764.mp4,0
4382
+ 4765.mp4,0
4383
+ 4766.mp4,0
4384
+ 4767.mp4,0
4385
+ 4768.mp4,0
4386
+ 4769.mp4,0
4387
+ 4770.mp4,0
4388
+ 4771.mp4,0
4389
+ 4772.mp4,0
4390
+ 4773.mp4,0
4391
+ 4774.mp4,0
4392
+ 4775.mp4,0
4393
+ 4776.mp4,0
4394
+ 4777.mp4,0
4395
+ 4778.mp4,0
4396
+ 4779.mp4,0
4397
+ 4780.mp4,0
4398
+ 4781.mp4,0
4399
+ 4782.mp4,0
4400
+ 4783.mp4,0
4401
+ 4784.mp4,0
4402
+ 4785.mp4,0
4403
+ 4786.mp4,0
4404
+ 4787.mp4,0
4405
+ 4788.mp4,0
4406
+ 4789.mp4,0
4407
+ 4790.mp4,0
4408
+ 4791.mp4,0
4409
+ 4792.mp4,0
4410
+ 4793.mp4,0
4411
+ 4794.mp4,0
4412
+ 4795.mp4,0
4413
+ 4796.mp4,0
4414
+ 4797.mp4,0
4415
+ 4798.mp4,0
4416
+ 4799.mp4,0
4417
+ 4800.mp4,0
4418
+ 4801.mp4,0
4419
+ 4802.mp4,0
4420
+ 4803.mp4,0
4421
+ 4804.mp4,0
4422
+ 4805.mp4,0
4423
+ 4806.mp4,0
4424
+ 4807.mp4,0
4425
+ 4808.mp4,0
4426
+ 4809.mp4,0
4427
+ 4810.mp4,0
4428
+ 4811.mp4,0
4429
+ 4812.mp4,0
4430
+ 4813.mp4,0
4431
+ 4814.mp4,0
4432
+ 4815.mp4,0
4433
+ 4816.mp4,0
4434
+ 4817.mp4,0
4435
+ 4818.mp4,0
4436
+ 4819.mp4,0
4437
+ 4820.mp4,0
4438
+ 4821.mp4,0
4439
+ 4822.mp4,0
4440
+ 4823.mp4,0
4441
+ 4824.mp4,0
4442
+ 4825.mp4,0
4443
+ 4826.mp4,0
4444
+ 4827.mp4,0
4445
+ 4828.mp4,0
4446
+ 4829.mp4,0
4447
+ 4830.mp4,0
4448
+ 4831.mp4,0
4449
+ 4832.mp4,0
4450
+ 4833.mp4,0
4451
+ 4834.mp4,0
4452
+ 4835.mp4,0
4453
+ 4836.mp4,0
4454
+ 4837.mp4,0
4455
+ 4838.mp4,0
4456
+ 4839.mp4,0
4457
+ 4840.mp4,0
4458
+ 4841.mp4,0
4459
+ 4842.mp4,0
4460
+ 4843.mp4,0
4461
+ 4844.mp4,0
4462
+ 4845.mp4,0
4463
+ 4846.mp4,0
4464
+ 4847.mp4,0
4465
+ 4848.mp4,0
4466
+ 4849.mp4,0
4467
+ 4850.mp4,0
4468
+ 4851.mp4,0
4469
+ 4852.mp4,0
4470
+ 4853.mp4,0
4471
+ 4854.mp4,0
4472
+ 4855.mp4,0
4473
+ 4856.mp4,0
4474
+ 4857.mp4,0
4475
+ 4858.mp4,0
4476
+ 4859.mp4,0
4477
+ 4860.mp4,0
4478
+ 4861.mp4,0
4479
+ 4862.mp4,0
4480
+ 4863.mp4,0
4481
+ 4864.mp4,0
4482
+ 4865.mp4,0
4483
+ 4866.mp4,0
4484
+ 4867.mp4,0
4485
+ 4868.mp4,0
4486
+ 4869.mp4,0
4487
+ 4870.mp4,0
4488
+ 4871.mp4,0
4489
+ 4872.mp4,0
4490
+ 4873.mp4,0
4491
+ 4874.mp4,0
4492
+ 4875.mp4,0
4493
+ 4876.mp4,0
4494
+ 4877.mp4,0
4495
+ 4878.mp4,0
4496
+ 4879.mp4,0
4497
+ 4880.mp4,0
4498
+ 4881.mp4,0
4499
+ 4882.mp4,0
4500
+ 4883.mp4,0
4501
+ 4884.mp4,0
4502
+ 4885.mp4,0
4503
+ 4886.mp4,0
4504
+ 4887.mp4,0
4505
+ 4888.mp4,0
4506
+ 4889.mp4,0
4507
+ 4890.mp4,0
4508
+ 4891.mp4,0
4509
+ 4892.mp4,0
4510
+ 4893.mp4,0
4511
+ 4894.mp4,0
4512
+ 4895.mp4,0
4513
+ 4896.mp4,0
4514
+ 4897.mp4,0
4515
+ 4898.mp4,0
4516
+ 4899.mp4,0
4517
+ 4900.mp4,0
4518
+ 4901.mp4,0
4519
+ 4902.mp4,0
4520
+ 4903.mp4,0
4521
+ 4904.mp4,0
4522
+ 4905.mp4,0
4523
+ 4906.mp4,0
4524
+ 4907.mp4,0
4525
+ 4908.mp4,0
4526
+ 4909.mp4,0
4527
+ 4910.mp4,0
4528
+ 4911.mp4,0
4529
+ 4912.mp4,0
4530
+ 4913.mp4,0
4531
+ 4914.mp4,0
4532
+ 4915.mp4,0
4533
+ 4916.mp4,0
4534
+ 4917.mp4,0
4535
+ 4918.mp4,0
4536
+ 4919.mp4,0
4537
+ 4920.mp4,0
4538
+ 4921.mp4,0
4539
+ 4922.mp4,0
4540
+ 4923.mp4,0
4541
+ 4924.mp4,0
4542
+ 4925.mp4,0
4543
+ 4926.mp4,0
4544
+ 4927.mp4,0
4545
+ 4928.mp4,0
4546
+ 4929.mp4,0
4547
+ 4930.mp4,0
4548
+ 4931.mp4,0
4549
+ 4932.mp4,0
4550
+ 4933.mp4,0
4551
+ 4934.mp4,0
4552
+ 4935.mp4,0
4553
+ 4936.mp4,0
4554
+ 4937.mp4,0
4555
+ 4938.mp4,0
4556
+ 4939.mp4,0
4557
+ 4940.mp4,0
4558
+ 4941.mp4,0
4559
+ 4942.mp4,0
4560
+ 4943.mp4,0
4561
+ 4944.mp4,0
4562
+ 4945.mp4,0
4563
+ 4946.mp4,0
4564
+ 4947.mp4,0
4565
+ 4948.mp4,0
4566
+ 4949.mp4,0
4567
+ 4950.mp4,0
4568
+ 4951.mp4,0
4569
+ 4952.mp4,0
4570
+ 4953.mp4,0
4571
+ 4954.mp4,0
4572
+ 4955.mp4,0
4573
+ 4956.mp4,0
4574
+ 4957.mp4,0
4575
+ 4958.mp4,0
4576
+ 4959.mp4,0
4577
+ 4960.mp4,0
4578
+ 4961.mp4,0
4579
+ 4962.mp4,0
4580
+ 4963.mp4,0
4581
+ 4964.mp4,0
4582
+ 4965.mp4,0
4583
+ 4966.mp4,0
4584
+ 4967.mp4,0
4585
+ 4968.mp4,0
4586
+ 4969.mp4,0
4587
+ 4970.mp4,0
4588
+ 4971.mp4,0
4589
+ 4972.mp4,0
4590
+ 4973.mp4,0
4591
+ 4974.mp4,0
4592
+ 4975.mp4,0
4593
+ 4976.mp4,0
4594
+ 4977.mp4,0
4595
+ 4978.mp4,0
4596
+ 4979.mp4,0
4597
+ 4980.mp4,0
4598
+ 4981.mp4,0
4599
+ 4982.mp4,0
4600
+ 4983.mp4,0
4601
+ 4984.mp4,0
4602
+ 4985.mp4,0
4603
+ 4986.mp4,0
4604
+ 4987.mp4,0
4605
+ 4988.mp4,0
4606
+ 4989.mp4,0
4607
+ 4990.mp4,0
4608
+ 4991.mp4,0
4609
+ 4992.mp4,0
4610
+ 4993.mp4,0
4611
+ 4994.mp4,0
4612
+ 4995.mp4,0
4613
+ 4996.mp4,0
4614
+ 4997.mp4,0
4615
+ 4998.mp4,0
4616
+ 4999.mp4,0
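
Every row of `val.csv` visible here pairs a video filename with an integer label id (all `0` in this split), with no header row. The repository's own data pipeline (presumably `datasets/build.py`) consumes this file during evaluation; purely as an illustration of the format, a minimal sketch of reading such an annotation CSV could look like the following. The helper name `read_annotations` is hypothetical, not part of the repo.

```python
import csv

def read_annotations(path):
    """Read (video_filename, label_id) pairs from an annotation CSV
    whose rows look like `4270.mp4,0` (no header row)."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:  # tolerate stray blank lines
                continue
            samples.append((row[0], int(row[1])))
    return samples

# Example usage: summarize the validation list added in this commit.
if __name__ == "__main__":
    samples = read_annotations("val.csv")
    print(len(samples), "clips,", sum(1 for _, lbl in samples if lbl == 0), "with label 0")
```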
work_dirs/.DS_Store ADDED
Binary file (6.15 kB)
 
work_dirs/challenge_baseline_new/.DS_Store ADDED
Binary file (6.15 kB)
 
work_dirs/challenge_baseline_new/best.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd0caf0f24c2cc4efe7286242fecd4fc445301ff389cb07a938e4303e47dabd2
+ size 1499298208
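
The three lines above are a Git LFS pointer, not the checkpoint itself: the ~1.5 GB `best.pth` is stored out of band and identified by its SHA-256 digest (`oid`) and byte size. After resolving the pointer (e.g., `git lfs pull`), a download can be checked against these two fields; the sketch below is a generic verification snippet using the values from this commit, not a script shipped with the repo.

```python
import hashlib
import os

CKPT = "work_dirs/challenge_baseline_new/best.pth"
EXPECTED_OID = "cd0caf0f24c2cc4efe7286242fecd4fc445301ff389cb07a938e4303e47dabd2"
EXPECTED_SIZE = 1499298208  # bytes, from the LFS pointer above

def sha256_of(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks to avoid loading 1.5 GB into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

assert os.path.getsize(CKPT) == EXPECTED_SIZE, "size mismatch: LFS pointer not resolved?"
assert sha256_of(CKPT) == EXPECTED_OID, "checksum mismatch: corrupt download"
print("best.pth matches its LFS pointer")
```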