Add files using upload-large-folder tool
- ASLLRP_utterances_mapping.txt +0 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/README.md +162 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/__init__.py +0 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/configs/CSL-Daily.yaml +4 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/configs/phoenix2014-T.yaml +4 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/corr_map_generation/resnet.py +350 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/dataset/dataloader_video.py +197 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/demo.py +176 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/generate_corr_map.py +88 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/generate_weight_map.py +136 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/main.py +301 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/modules/BiLSTM.py +96 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/modules/__init__.py +2 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py +295 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/modules/tconv.py +125 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-CSL-Daily.py +178 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-CSL.py +141 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-T.py +129 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess.py +131 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/requirements.txt +9 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/seq_scripts.py +164 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/slr_network.py +146 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/test_one_video.py +124 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/__init__.py +7 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/__pycache__/decode.cpython-38.pyc +0 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/decode.py +66 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/device.py +57 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/optimizer.py +59 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/pack_code.py +24 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/parameters.py +159 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/random_state.py +32 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/record.py +51 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/utils/video_augmentation.py +336 -0
- CorrNet_Plus/CorrNet_Plus_CSLR/weight_map_generation/resnet.py +309 -0
- CorrNet_Plus/README.md +310 -0
- slt_new/README.md +162 -0
- slt_new/__init__.py +0 -0
- slt_new/comparison_checklist.md +144 -0
- slt_new/demo.py +176 -0
- slt_new/generate_corr_map.py +88 -0
- slt_new/generate_weight_map.py +136 -0
- slt_new/main.py +329 -0
- slt_new/requirements.txt +9 -0
- slt_new/seq_scripts.py +166 -0
- slt_new/slr_network.py +170 -0
- slt_new/train_asllrp.sh +23 -0
- slt_new/train_asllrp_single.sh +6 -0
- slt_new/训练细节.txt +54 -0
- upload.log +0 -0
ASLLRP_utterances_mapping.txt
ADDED
The diff for this file is too large to render. See raw diff.
CorrNet_Plus/CorrNet_Plus_CSLR/README.md
ADDED
# CorrNet+_CSLR

This repo holds the code of the paper "CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation", which is an extension of our previous work (CVPR 2023) [[paper]](https://arxiv.org/abs/2303.03202).

This sub-repo holds the code supporting the continuous sign language recognition task with CorrNet+.

(**Update on 2025/01/28**) We release a demo for continuous sign language recognition that supports multi-image and video inputs! You can watch the demo video to see it in action, or deploy the demo locally to test its performance.

https://github.com/user-attachments/assets/a7354510-e5e0-44af-b283-39707f625a9b

<div align=center>
The web demo video
</div>

## Prerequisites

- This project is implemented in PyTorch (preferably >=1.13 for compatibility with ctcdecode; otherwise errors may occur). Please install PyTorch first.

- ctcdecode==0.4 [[parlance/ctcdecode]](https://github.com/parlance/ctcdecode), for beam search decoding.

- [Optional] sclite [[kaldi-asr/kaldi]](https://github.com/kaldi-asr/kaldi): install the Kaldi toolkit to get sclite for evaluation. After installation, create a soft link to sclite:
  `mkdir ./software`
  `ln -s PATH_TO_KALDI/tools/sctk-2.4.10/bin/sclite ./software/sclite`

  You may use the Python evaluation tool for convenience (by setting 'evaluate_tool' to 'python' in line 16 of ./configs/baseline.yaml), but sclite provides more detailed statistics.

- You can install the other required modules by running
  `pip install -r requirements.txt`

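If you are unsure which evaluation backend is active, here is a minimal sketch to print it, assuming the `evaluate_tool` key described above:

```python
# Minimal sketch: print the configured evaluation backend.
# Assumes the 'evaluate_tool' key in ./configs/baseline.yaml mentioned above.
import yaml

with open('./configs/baseline.yaml') as f:
    cfg = yaml.safe_load(f)
print(cfg.get('evaluate_tool'))  # expected: 'python' or 'sclite'
```
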
## Implementation
The implementation of CorrNet+ is given in [./modules/resnet.py](https://github.com/hulianyuyy/CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py).

It is then attached after each stage of ResNet, at line 195 of [./modules/resnet.py](https://github.com/hulianyuyy/CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py).

We later found that the Identification Module with only spatial decomposition performs on par with what we report in the paper (spatial-temporal decomposition) and is slightly faster, and thus implement it this way.

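The wiring can be seen concretely in the `corr_map_generation/resnet.py` copy included in this commit; `modules/resnet.py` follows the same scheme:

```python
# Pattern excerpted from corr_map_generation/resnet.py below: the correlation
# module and temporal weighting are added residually after a ResNet stage,
# scaled by a learned per-stage alpha (initialized to zero).
x = self.layer2(x)
x = x + self.corr2(x) * self.alpha[0]
x = x + self.temporal_weight2(x)
```
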
## Data Preparation
You can choose any one of the following datasets to verify the effectiveness of CorrNet+.

### PHOENIX2014 dataset
1. Download the RWTH-PHOENIX-Weather 2014 dataset [[download link]](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX/). Our experiments are based on phoenix-2014.v3.tar.gz.

2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
   `ln -s PATH_TO_DATASET/phoenix2014-release ./dataset/phoenix2014`

3. The original image sequences are 210x260; we resize them to 256x256 for augmentation. Run the following commands to generate the gloss dict and resize the image sequences:

```bash
cd ./preprocess
python dataset_preprocess.py --process-image --multiprocessing
```

### PHOENIX2014-T dataset
1. Download the RWTH-PHOENIX-Weather 2014-T dataset [[download link]](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/).

2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
   `ln -s PATH_TO_DATASET/PHOENIX-2014-T-release-v3/PHOENIX-2014-T ./dataset/phoenix2014-T`

3. The original image sequences are 210x260; we resize them to 256x256 for augmentation. Run the following commands to generate the gloss dict and resize the image sequences:

```bash
cd ./preprocess
python dataset_preprocess-T.py --process-image --multiprocessing
```

If you get an error like ```IndexError: list index out of range``` on the PHOENIX2014-T dataset, you may refer to [this issue](https://github.com/hulianyuyy/CorrNet/issues/10#issuecomment-1660363025) to tackle the problem.

### CSL dataset

1. Request the CSL dataset from this website [[download link]](https://ustc-slr.github.io/openresources/cslr-dataset-2015/index.html).

2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
   `ln -s PATH_TO_DATASET ./dataset/CSL`

3. The original image sequences are 1280x720; we resize them to 256x256 for augmentation. Run the following commands to generate the gloss dict and resize the image sequences:

```bash
cd ./preprocess
python dataset_preprocess-CSL.py --process-image --multiprocessing
```

### CSL-Daily dataset

1. Request the CSL-Daily dataset from this website [[download link]](http://home.ustc.edu.cn/~zhouh156/dataset/csl-daily/).

2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
   `ln -s PATH_TO_DATASET ./dataset/CSL-Daily`

3. The original image sequences are 1280x720; we resize them to 256x256 for augmentation. Run the following commands to generate the gloss dict and resize the image sequences:

```bash
cd ./preprocess
python dataset_preprocess-CSL-Daily.py --process-image --multiprocessing
```

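The preprocess scripts write their outputs (the gloss dictionary and per-split info files) under `./preprocess/<dataset>/`; downstream code loads them as plain Python dicts, as `generate_corr_map.py` in this commit does:

```python
# Loading the preprocess outputs (pattern taken from generate_corr_map.py
# later in this commit; the phoenix2014 paths are one concrete choice):
import numpy as np

gloss_dict = np.load('./preprocess/phoenix2014/gloss_dict.npy', allow_pickle=True).item()
dev_info = np.load('./preprocess/phoenix2014/dev_info.npy', allow_pickle=True).item()
```
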
## Inference

### PHOENIX2014 dataset

| Backbone | Dev WER | Test WER | Pretrained model |
| -------- | ------- | -------- | ---------------- |
| ResNet18 | 18.0% | 18.2% | [[Baidu]](https://pan.baidu.com/s/1vlCMSuqZiZkvidg4wrDlZQ?pwd=w5w9) <br />[[Google Drive]](https://drive.google.com/file/d/1jcRv4Gl98mvS4mmLH5dBU_-iN3qGq8Si/view?usp=sharing) |


### PHOENIX2014-T dataset

| Backbone | Dev WER | Test WER | Pretrained model |
| -------- | ------- | -------- | ---------------- |
| ResNet18 | 17.2% | 19.1% | [[Baidu]](https://pan.baidu.com/s/1PcQtWOhiTEq9RFgBZ2hWhQ?pwd=nm3c) <br />[[Google Drive]](https://drive.google.com/file/d/1uBaKoB2JaB3ydYXmpn1tv0mBZ7cAF8J9/view?usp=sharing) |

### CSL-Daily dataset

| Backbone | Dev WER | Test WER | Pretrained model |
| -------- | ------- | -------- | ---------------- |
| ResNet18 | 28.6% | 28.2% | [[Baidu]](https://pan.baidu.com/s/1SbulBImqn78FEYFZV5Oz1w?pwd=mx8m) <br />[[Google Drive]](https://drive.google.com/file/d/1Ve_uzEB1teTmebuQ1XAMFQ0UV0EVEGyM/view?usp=sharing) |


To evaluate a pretrained model, first choose the dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml, then run the command below:
`python main.py --config ./config/baseline.yaml --device your_device --load-weights path_to_weight.pt --phase test`

## Training

The priority order of configuration is: command line > config file > argparse defaults. To train the SLR model, run the command below:

`python main.py --config ./config/baseline.yaml --device your_device`

Note that you can choose the target dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml.

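Because command-line arguments take the highest priority, a flag such as `--device` on the training command above always overrides whatever device is set in the config file; for instance, `python main.py --config ./config/baseline.yaml --device 0` trains on GPU 0 regardless of the config.
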
## Visualizations
For Grad-CAM visualization of spatial weight maps, replace the resnet.py under "./modules" with the resnet.py under "./weight_map_generation", and then run ```python generate_weight_map.py``` with your own hyperparameters.

For Grad-CAM visualization of correlation maps, replace the resnet.py under "./modules" with the resnet.py under "./corr_map_generation", and then run ```python generate_corr_map.py``` with your own hyperparameters.

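For example, one way to perform the swap while keeping a backup (the `.bak` name is just a suggestion):
`cp ./modules/resnet.py ./modules/resnet.py.bak && cp ./corr_map_generation/resnet.py ./modules/resnet.py`
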
### Test with one video input
Besides performing inference on whole datasets, we provide `test_one_video.py` to perform inference on a single video input. An example command is

`python test_one_video.py --model_path /path_to_pretrained_weights --video_path /path_to_your_video --device your_device`

The `video_path` can be the path to a video file or a directory containing the extracted images of a video.

Acceptable parameters:
- `model_path`, the path to pretrained weights.
- `video_path`, the path to a video file or a directory containing the extracted images of a video.
- `device`, which device to run inference on; default=0.
- `language`, the target sign language; default='phoenix', choices=['phoenix', 'csl'].
- `max_frames_num`, the maximum number of input frames sampled from an input video; default=360.

### Demo
We provide a demo that lets you deploy the continuous sign language recognition model locally and test its effects. The demo page is shown as follows.
<div align=center>
<img width="800" src="./demo.jpg"/>
<h4> The page of our demo</h4>
</div>
The demo video can be found at the top of this page. An example command is

`python demo.py --model_path /path_to_pretrained_weights --device your_device`

Acceptable parameters:
- `model_path`, the path to pretrained weights.
- `device`, which device to run inference on; default=0.
- `language`, the target sign language; default='phoenix', choices=['phoenix', 'csl'].
- `max_frames_num`, the maximum number of input frames sampled from an input video; default=360.

After running the command, you can visit `http://0.0.0.0:7862` to play with the demo. You can also expose it as a public URL by setting `share=True` in line 176 of `demo.py`.
CorrNet_Plus/CorrNet_Plus_CSLR/__init__.py
ADDED
File without changes (empty file).
CorrNet_Plus/CorrNet_Plus_CSLR/configs/CSL-Daily.yaml
ADDED
dataset_root: ./dataset/CSL-Daily
dict_path: ./preprocess/CSL-Daily/gloss_dict.npy
evaluation_dir: ./evaluation/slr_eval
evaluation_prefix: CSL-Daily-groundtruth
CorrNet_Plus/CorrNet_Plus_CSLR/configs/phoenix2014-T.yaml
ADDED
dataset_root: ./dataset/phoenix2014-T
dict_path: ./preprocess/phoenix2014-T/gloss_dict.npy
evaluation_dir: ./evaluation/slr_eval
evaluation_prefix: phoenix2014-T-groundtruth
CorrNet_Plus/CorrNet_Plus_CSLR/corr_map_generation/resnet.py
ADDED
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint
import cv2
import numpy as np
import os

__all__ = [
    'ResNet', 'resnet10', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
    'resnet152', 'resnet200'
]

model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}

class AttentionPool2d(nn.Module):
    def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None, clusters=1):
        super().__init__()
        #self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
        self.num_heads = num_heads
        self.clusters = clusters
        self.query = nn.Parameter(torch.rand(self.clusters, 1, embed_dim), requires_grad=True)

    def forward(self, x):
        N, C, T, H, W = x.shape
        x = x.flatten(start_dim=3).permute(3, 0, 2, 1).reshape(-1, N*T, C).contiguous()  # NCTHW -> (HW)(NT)C
        #x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0)  # (HW+1)(NT)C
        #x = x + self.positional_embedding[:, None, :].to(x.dtype)  # (HW+1)(NT)C
        x, _ = F.multi_head_attention_forward(
            #query=x[:1], key=x, value=x,
            query=self.query.repeat(1, N*T, 1), key=x, value=x,
            embed_dim_to_check=x.shape[-1],
            num_heads=self.num_heads,
            q_proj_weight=self.q_proj.weight,
            k_proj_weight=self.k_proj.weight,
            v_proj_weight=self.v_proj.weight,
            in_proj_weight=None,
            in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
            bias_k=None,
            bias_v=None,
            add_zero_attn=False,
            dropout_p=0,
            out_proj_weight=self.c_proj.weight,
            out_proj_bias=self.c_proj.bias,
            use_separate_proj_weight=True,
            training=self.training,
            need_weights=False
        )
        return x.view(self.clusters, N, T, C).contiguous().permute(1, 3, 2, 0)  # PNTC -> NCTP

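# Reading aid (added): this is a CLIP-style attention pool adapted to video.
# Unlike the original (see the commented-out positional-embedding/query lines),
# it attends with `clusters` learned query tokens and no positional embedding,
# pooling each frame's H*W positions into one C-dim vector per cluster (NCTP).
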
class UnfoldTemporalWindows(nn.Module):
    def __init__(self, window_size=9, window_stride=1, window_dilation=1):
        super().__init__()
        self.window_size = window_size
        self.window_stride = window_stride
        self.window_dilation = window_dilation

        self.padding = (window_size + (window_size-1) * (window_dilation-1) - 1) // 2
        self.unfold = nn.Unfold(kernel_size=(self.window_size, 1),
                                dilation=(self.window_dilation, 1),
                                stride=(self.window_stride, 1),
                                padding=(self.padding, 0))

    def forward(self, x):
        # Input shape: (N,C,T,H,W), out: (N,C,T,V*window_size)
        N, C, T, H, W = x.shape
        x = x.view(N, C, T, H*W)
        x = self.unfold(x)  # (N, C*Window_Size, T, H*W)
        # Permute extra channels from window size to the graph dimension; -1 for number of windows
        x = x.view(N, C, self.window_size, T, H, W).permute(0, 1, 3, 2, 4, 5).reshape(N, C, T, self.window_size, H, W).contiguous()  # NCTSHW
        return x

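# Shape sketch (added): with window_size = 2*neighbors + 1, the unfold turns
# (N, C, T, H, W) into (N, C, T, window_size, H, W): every time step t carries
# copies of its zero-padded temporal neighborhood t-neighbors .. t+neighbors,
# so Get_Correlation below can index the 2*neighbors non-central frames
# around each frame.
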
class Temporal_weighting(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        hidden_size = input_size//16
        self.conv_transform = nn.Conv1d(input_size, hidden_size, kernel_size=1, stride=1, padding=0)
        self.conv_back = nn.Conv1d(hidden_size, input_size, kernel_size=1, stride=1, padding=0)
        self.num = 3
        self.conv_enhance = nn.ModuleList([
            nn.Conv1d(hidden_size, hidden_size, kernel_size=3, stride=1, padding=int(i+1), groups=hidden_size, dilation=int(i+1)) for i in range(self.num)
        ])
        self.weights = nn.Parameter(torch.ones(self.num) / self.num, requires_grad=True)
        self.alpha = nn.Parameter(torch.zeros(1), requires_grad=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv_transform(x.mean(-1).mean(-1))
        aggregated_out = 0
        for i in range(self.num):
            aggregated_out += self.conv_enhance[i](out) * self.weights[i]
        out = self.conv_back(aggregated_out)
        return x * (F.sigmoid(out.unsqueeze(-1).unsqueeze(-1)) - 0.5) * self.alpha

class Get_Correlation(nn.Module):
    def __init__(self, channels, neighbors=3):
        super().__init__()
        reduction_channel = channels//16

        self.down_conv2 = nn.Conv3d(channels, channels, kernel_size=1, bias=False)
        self.neighbors = neighbors
        self.clusters = 1
        self.weights2 = nn.Parameter(torch.ones(self.neighbors*2) / (self.neighbors*2), requires_grad=True)
        self.unfold = UnfoldTemporalWindows(2*self.neighbors+1)
        self.weights3 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.weights4 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.attpool = AttentionPool2d(spacial_dim=None, embed_dim=channels, num_heads=1, clusters=self.clusters)
        self.mlp = nn.Sequential(nn.Conv3d(channels, reduction_channel, kernel_size=1),
                                 nn.GELU(),
                                 nn.Conv3d(reduction_channel, channels, kernel_size=1),)

        # For generating aggregated_x with multi-scale conv
        self.down_conv = nn.Conv3d(channels, reduction_channel, kernel_size=1, bias=False)
        self.spatial_aggregation1 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,1,1), groups=reduction_channel)
        self.spatial_aggregation2 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,2,2), dilation=(1,2,2), groups=reduction_channel)
        self.spatial_aggregation3 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,3,3), dilation=(1,3,3), groups=reduction_channel)
        self.weights = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.conv_back = nn.Conv3d(reduction_channel, channels, kernel_size=1, bias=False)

    def forward(self, x, return_affinity=False):
        N, C, T, H, W = x.shape
        def clustering(query, key):
            affinities = torch.einsum('bctp,bctl->btpl', query, key)
            return torch.einsum('bctl,btpl->bctp', key, F.sigmoid(affinities)-0.5), affinities

        x_mean = x.mean(3, keepdim=True).mean(4, keepdim=False)
        x_max = x.max(-1, keepdim=False)[0].max(-1, keepdim=True)[0]
        x_att = self.attpool(x)  # NCTP
        x2 = self.down_conv2(x)
        upfold = self.unfold(x2)
        upfold = (torch.concat([upfold[:,:,:,:self.neighbors], upfold[:,:,:,self.neighbors+1:]], 3) * self.weights2.view(1, 1, 1, -1, 1, 1)).view(N, C, T, -1)  # NCT(SHW)
        x_mean = x_mean*self.weights4[0] + x_max*self.weights4[1] + x_att*self.weights4[2]
        x_mean, affinities = clustering(x_mean, upfold)
        features = x_mean.view(N, C, T, self.clusters, 1)

        x_down = self.down_conv(x)
        aggregated_x = self.spatial_aggregation1(x_down)*self.weights[0] + self.spatial_aggregation2(x_down)*self.weights[1] \
            + self.spatial_aggregation3(x_down)*self.weights[2]
        aggregated_x = self.conv_back(aggregated_x)

        features = features * (F.sigmoid(aggregated_x)-0.5)
        if not return_affinity:
            return features
        else:
            return features, affinities[0,:,0].view(-1, 2*self.neighbors, H, W)  # T(2*neighbors)HW

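# Reading aid (added): clustering() compares one aggregated query per frame
# (a weights4-mixture of spatial mean, spatial max, and the attention-pooled
# vector) against the 2*neighbors unfolded neighboring frames. The returned
# affinities have shape (N, T, clusters, 2*neighbors*H*W); show_corr_img()
# below reshapes slice [0, :, 0] into per-neighbor (T, 2*neighbors, H, W)
# heatmaps for visualization.
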
def conv3x3(in_planes, out_planes, stride=1):
    # 1x3x3 convolution with padding (a 2D 3x3 conv applied per frame)
    return nn.Conv3d(
        in_planes,
        out_planes,
        kernel_size=(1,3,3),
        stride=(1,stride,stride),
        padding=(0,1,1),
        bias=False)

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm3d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm3d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out

class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv3d(3, 64, kernel_size=(1,7,7), stride=(1,2,2), padding=(0,3,3),
                               bias=False)
        self.bn1 = nn.BatchNorm3d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool3d(kernel_size=(1,3,3), stride=(1,2,2), padding=(0,1,1))
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.corr2 = Get_Correlation(self.inplanes, neighbors=1)
        self.temporal_weight2 = Temporal_weighting(self.inplanes)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.corr3 = Get_Correlation(self.inplanes, neighbors=3)
        self.temporal_weight3 = Temporal_weighting(self.inplanes)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.corr4 = Get_Correlation(self.inplanes, neighbors=5)
        self.temporal_weight4 = Temporal_weighting(self.inplanes)
        self.alpha = nn.Parameter(torch.zeros(3), requires_grad=True)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv3d) or isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm3d) or isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv3d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=(1,stride,stride), bias=False),
                nn.BatchNorm3d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x, dataset):
        N, C, T, H, W = x.size()
        vid = x
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = x + self.corr2(x) * self.alpha[0]
        x = x + self.temporal_weight2(x)
        x = self.layer3(x)

        print(f'self.alpha: {self.alpha}')
        update_feature, affinities = self.corr3(x, return_affinity=True)  # bcthw, shw
        x = x + update_feature * self.alpha[1]
        show_corr_img(vid[0].permute(1,0,2,3), affinities, out_dir=f'./corr_map_layer3', clear_folder=True, dataset=dataset)  # tchw, t(2*neighbors)hw

        x = x + self.temporal_weight3(x)
        x = self.layer4(x)
        x = x + self.corr4(x) * self.alpha[2]
        x = x + self.temporal_weight4(x)

        x = x.transpose(1,2).contiguous()
        x = x.view((-1,) + x.size()[2:])  # bt,c,h,w

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)  # bt,c
        x = self.fc(x)  # bt,c

        return x

def show_corr_img(img, affinities, out_dir='./corr_map', clear_folder=False, dataset='phoenix2014'):  # img: chw, feature_map: chw, grads: chw3
    affinities = affinities.cpu().data.numpy()
    if clear_folder:
        if not os.path.exists(out_dir):
            os.makedirs(out_dir)
        else:
            import shutil
            shutil.rmtree(out_dir)
            os.makedirs(out_dir)

    predefined_padding = 6  # Note that there are 6 paddings in advance on the left/right
    T, S, H, W = affinities.shape
    neighbors = S//2
    for t in range(predefined_padding, T-predefined_padding+1):
        current_dir = out_dir + '/' + f'timestep_{t-predefined_padding}'
        os.makedirs(current_dir)
        for i in range(S):
            if 'phoenix' in dataset:
                out_cam = affinities[t,i]  # only set as negative when alpha is positive for the layer
            else:
                out_cam = -affinities[t,i]
            out_cam = out_cam - np.min(out_cam)
            out_cam = out_cam / (1e-7 + out_cam.max())
            out_cam = cv2.resize(out_cam, (img.shape[2], img.shape[3]))
            out_cam = (255 * out_cam).astype(np.uint8)
            heatmap = cv2.applyColorMap(out_cam, cv2.COLORMAP_JET)
            # img[neighbors] is the current image
            if i < neighbors:
                cam_img = np.float32(heatmap) / 255 + (img[t-(neighbors-i)]/2+0.5).permute(1,2,0).cpu().data.numpy()
            else:
                cam_img = np.float32(heatmap) / 255 + (img[t+(i-neighbors)+1]/2+0.5).permute(1,2,0).cpu().data.numpy()
            cam_img = cam_img/np.max(cam_img)
            cam_img = np.uint8(255 * cam_img)
            # img[neighbors] is the current image
            if i < neighbors:
                cv2.imwrite(f'{current_dir}/corr_map_{i}.jpg', cam_img)
            else:
                cv2.imwrite(f'{current_dir}/corr_map_{i+1}.jpg', cam_img)
        current_img = (img[t]/2+0.5).permute(1,2,0).cpu().data.numpy()
        current_img = current_img/np.max(current_img)
        current_img = np.uint8(255 * current_img)
        #interval = img.shape[2]//H
        #current_img[i*interval:(i+1)*interval, j*interval:(j+1)*interval,:] = np.array([0,0,255])  # red
        cv2.imwrite(f'{current_dir}/corr_map_{neighbors}_current.jpg', current_img)

def resnet18(**kwargs):
    """Constructs a ResNet-18 based model.
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    checkpoint = model_zoo.load_url(model_urls['resnet18'], map_location=torch.device('cpu'))
    layer_name = list(checkpoint.keys())
    for ln in layer_name:
        if 'conv' in ln or 'downsample.0.weight' in ln:
            checkpoint[ln] = checkpoint[ln].unsqueeze(2)
    model.load_state_dict(checkpoint, strict=False)
    del checkpoint
    import gc
    gc.collect()
    return model

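# Note (added): the ImageNet checkpoint is 2D, so unsqueeze(2) above inflates
# each KxK conv kernel to 1xKxK to initialize the (1,k,k) Conv3d layers;
# strict=False skips the correlation/temporal-weighting modules, which have no
# ImageNet counterpart and keep their random initialization.
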
def resnet34(**kwargs):
    """Constructs a ResNet-34 model.
    """
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    return model

def test():
    net = resnet18()
    # The forward pass expects a 5D (N, C, T, H, W) clip plus a dataset name.
    y = net(torch.randn(1, 3, 16, 224, 224), dataset='phoenix2014')
    print(y.size())

#test()
CorrNet_Plus/CorrNet_Plus_CSLR/dataset/dataloader_video.py
ADDED
import os
import cv2
import sys
import pdb
import six
import glob
import time
import torch
import random
import pandas
import warnings

warnings.simplefilter(action='ignore', category=FutureWarning)

import numpy as np
# import pyarrow as pa
from PIL import Image
import torch.utils.data as data
import matplotlib.pyplot as plt
from utils import video_augmentation
from torch.utils.data.sampler import Sampler

sys.path.append("..")
global kernel_sizes

class BaseFeeder(data.Dataset):
    def __init__(self, prefix, gloss_dict, dataset='phoenix2014', drop_ratio=1, num_gloss=-1, mode="train", transform_mode=True,
                 datatype="lmdb", frame_interval=1, image_scale=1.0, kernel_size=1, input_size=224):
        self.mode = mode
        self.ng = num_gloss
        self.prefix = prefix
        self.dict = gloss_dict
        self.data_type = datatype
        self.dataset = dataset
        self.input_size = input_size
        global kernel_sizes
        kernel_sizes = kernel_size
        self.frame_interval = frame_interval  # not implemented for read_features()
        self.image_scale = image_scale  # not implemented for read_features()
        self.feat_prefix = f"{prefix}/features/fullFrame-256x256px/{mode}"
        self.transform_mode = "train" if transform_mode else "test"
        self.inputs_list = np.load(f"./preprocess/{dataset}/{mode}_info.npy", allow_pickle=True).item()
        # self.inputs_list = np.load(f"{prefix}/annotations/manual/{mode}.corpus.npy", allow_pickle=True).item()
        # self.inputs_list = dict([*filter(lambda x: isinstance(x[0], str) or x[0] < 10, self.inputs_list.items())])
        print(mode, len(self))
        self.data_aug = self.transform()
        print("")

    def __getitem__(self, idx):
        if self.data_type == "video":
            input_data, label, fi = self.read_video(idx)
            input_data, label = self.normalize(input_data, label)
            # input_data, label = self.normalize(input_data, label, fi['fileid'])
            return input_data, torch.LongTensor(label), self.inputs_list[idx]['original_info']
        elif self.data_type == "lmdb":
            # NOTE: read_lmdb is referenced here but not defined in this file.
            input_data, label, fi = self.read_lmdb(idx)
            input_data, label = self.normalize(input_data, label)
            return input_data, torch.LongTensor(label), self.inputs_list[idx]['original_info']
        else:
            input_data, label = self.read_features(idx)
            return input_data, label, self.inputs_list[idx]['original_info']

    def read_video(self, index):
        # load file info
        fi = self.inputs_list[index]
        if 'phoenix' in self.dataset:
            img_folder = os.path.join(self.prefix, "features/fullFrame-256x256px/" + fi['folder'])
        elif self.dataset == 'CSL':
            img_folder = os.path.join(self.prefix, "features/fullFrame-256x256px/" + fi['folder'] + "/*.jpg")
        elif self.dataset == 'CSL-Daily':
            img_folder = os.path.join(self.prefix, fi['folder'])
        img_list = sorted(glob.glob(img_folder))
        img_list = img_list[int(torch.randint(0, self.frame_interval, [1]))::self.frame_interval]
        label_list = []
        for phase in fi['label'].split(" "):
            if phase == '':
                continue
            if phase in self.dict.keys():
                label_list.append(self.dict[phase][0])
        return [cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB) for img_path in img_list], label_list, fi

    def read_features(self, index):
        # load file info
        fi = self.inputs_list[index]
        data = np.load(f"./features/{self.mode}/{fi['fileid']}_features.npy", allow_pickle=True).item()
        return data['features'], data['label']

    def normalize(self, video, label, file_id=None):
        video, label = self.data_aug(video, label, file_id)
        video = video.float() / 127.5 - 1
        return video, label

    def transform(self):
        if self.transform_mode == "train":
            print("Apply training transform.")
            return video_augmentation.Compose([
                # video_augmentation.CenterCrop(224),
                # video_augmentation.WERAugment('/lustre/wangtao/current_exp/exp/baseline/boundary.npy'),
                video_augmentation.RandomCrop(self.input_size),
                video_augmentation.RandomHorizontalFlip(0.5),
                video_augmentation.Resize(self.image_scale),
                video_augmentation.ToTensor(),
                video_augmentation.TemporalRescale(0.2, self.frame_interval),
            ])
        else:
            print("Apply testing transform.")
            return video_augmentation.Compose([
                video_augmentation.CenterCrop(self.input_size),
                video_augmentation.Resize(self.image_scale),
                video_augmentation.ToTensor(),
            ])

    def byte_to_img(self, byteflow):
        # NOTE: requires pyarrow (the commented-out `import pyarrow as pa` above).
        unpacked = pa.deserialize(byteflow)
        imgbuf = unpacked[0]
        buf = six.BytesIO()
        buf.write(imgbuf)
        buf.seek(0)
        img = Image.open(buf).convert('RGB')
        return img

    @staticmethod
    def collate_fn(batch):
        batch = [item for item in sorted(batch, key=lambda x: len(x[0]), reverse=True)]
        video, label, info = list(zip(*batch))

        left_pad = 0
        last_stride = 1
        total_stride = 1
        global kernel_sizes
        for layer_idx, ks in enumerate(kernel_sizes):
            if ks[0] == 'K':
                left_pad = left_pad * last_stride
                left_pad += int((int(ks[1])-1)/2)
            elif ks[0] == 'P':
                last_stride = int(ks[1])
                total_stride = total_stride * last_stride
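        # Worked example (added): with the kernel_sizes used elsewhere in this
        # repo, ['K5', 'P2', 'K5', 'P2']:
        #   K5 -> left_pad = 0*1 + 2 = 2
        #   P2 -> last_stride = 2, total_stride = 2
        #   K5 -> left_pad = 2*2 + 2 = 6
        #   P2 -> total_stride = 4
        # i.e. 6 replicated frames are prepended and sequence lengths are
        # rounded up to a multiple of 4, matching the `predefined_padding = 6`
        # assumed in corr_map_generation/resnet.py.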
        if len(video[0].shape) > 3:
            max_len = len(video[0])
            video_length = torch.LongTensor([np.ceil(len(vid) / total_stride) * total_stride + 2*left_pad for vid in video])
            right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
            max_len = max_len + left_pad + right_pad
            padded_video = [torch.cat(
                (
                    vid[0][None].expand(left_pad, -1, -1, -1),
                    vid,
                    vid[-1][None].expand(max_len - len(vid) - left_pad, -1, -1, -1),
                )
                , dim=0)
                for vid in video]
            padded_video = torch.stack(padded_video)
        else:
            max_len = len(video[0])
            video_length = torch.LongTensor([len(vid) for vid in video])
            padded_video = [torch.cat(
                (
                    vid,
                    vid[-1][None].expand(max_len - len(vid), -1),
                )
                , dim=0)
                for vid in video]
            padded_video = torch.stack(padded_video).permute(0, 2, 1)
        label_length = torch.LongTensor([len(lab) for lab in label])
        if max(label_length) == 0:
            return padded_video, video_length, [], [], info
        else:
            padded_label = []
            for lab in label:
                padded_label.extend(lab)
            padded_label = torch.LongTensor(padded_label)
            return padded_video, video_length, padded_label, label_length, info

    def __len__(self):
        return len(self.inputs_list) - 1

    def record_time(self):
        self.cur_time = time.time()
        return self.cur_time

    def split_time(self):
        split_time = time.time() - self.cur_time
        self.record_time()
        return split_time


if __name__ == "__main__":
    feeder = BaseFeeder()
    dataloader = torch.utils.data.DataLoader(
        dataset=feeder,
        batch_size=1,
        shuffle=True,
        drop_last=True,
        num_workers=0,
    )
    for data in dataloader:
        pdb.set_trace()
CorrNet_Plus/CorrNet_Plus_CSLR/demo.py
ADDED
import numpy as np
import os
import glob
import cv2
from utils import video_augmentation
from slr_network import SLRModel
import torch
from collections import OrderedDict
import utils
from PIL import Image
import argparse

VIDEO_FORMATS = [".mp4", ".avi", ".mov", ".mkv"]
os.environ['GRADIO_TEMP_DIR'] = 'gradio_temp'
import gradio as gr
import warnings
from decord import VideoReader, cpu
warnings.filterwarnings("ignore")

def is_image_by_extension(file_path):
    _, file_extension = os.path.splitext(file_path)

    image_extensions = ['.jpg', '.jpeg', '.png', '.gif', '.bmp']

    return file_extension.lower() in image_extensions

def load_video(video_path, max_frames_num=360):
    if type(video_path) == str:
        vr = VideoReader(video_path, ctx=cpu(0))
    elif type(video_path) == list:
        vr = VideoReader(video_path[0], ctx=cpu(0))
    else:
        raise ValueError(f"Unsupported video input type: {type(video_path)}")
    total_frame_num = len(vr)
    if total_frame_num > max_frames_num:
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
    else:
        # Keep every frame. (The original np.linspace call here omitted the
        # `num` argument, which silently defaults to 50 samples.)
        uniform_sampled_frames = np.arange(total_frame_num)
    frame_idx = uniform_sampled_frames.tolist()
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return [cv2.cvtColor(tmp, cv2.COLOR_BGR2RGB) for tmp in spare_frames]  # (frames, height, width, channels)

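# Note (added): run_inference below re-implements the left/right replicate
# padding from BaseFeeder.collate_fn for a single clip, using the same
# kernel_sizes = ['K5', 'P2', 'K5', 'P2'] (left_pad = 6, total_stride = 4).
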
def run_inference(inputs):
    """
    Run inference on one input sample.

    Args:
        inputs: A list of image paths, or the path to a video file.
    """
    img_list = []
    if isinstance(inputs, list):  # Multi-image case
        for x in inputs:
            if is_image_by_extension(x):
                img_list.append(cv2.cvtColor(cv2.imread(x), cv2.COLOR_BGR2RGB))

    elif os.path.splitext(inputs)[-1] in VIDEO_FORMATS:  # Video case
        try:
            img_list = load_video(inputs, args.max_frames_num)  # frames [height, width, channels]
        except Exception as e:
            raise ValueError(f"Error {e} in loading video")
    else:
        raise ValueError("Video path is incorrect!")

    transform = video_augmentation.Compose([
        video_augmentation.CenterCrop(224),
        video_augmentation.Resize(1.0),
        video_augmentation.ToTensor(),
    ])
    vid, label = transform(img_list, None, None)
    vid = vid.float() / 127.5 - 1
    vid = vid.unsqueeze(0)

    left_pad = 0
    last_stride = 1
    total_stride = 1
    kernel_sizes = ['K5', "P2", 'K5', "P2"]
    for layer_idx, ks in enumerate(kernel_sizes):
        if ks[0] == 'K':
            left_pad = left_pad * last_stride
            left_pad += int((int(ks[1])-1)/2)
        elif ks[0] == 'P':
            last_stride = int(ks[1])
            total_stride = total_stride * last_stride

    max_len = vid.size(1)
    video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad])
    right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
    max_len = max_len + left_pad + right_pad
    vid = torch.cat(
        (
            vid[0,0][None].expand(left_pad, -1, -1, -1),
            vid[0],
            vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
        )
        , dim=0).unsqueeze(0)

    vid = device.data_to_device(vid)
    vid_lgt = device.data_to_device(video_length)
    ret_dict = model(vid, vid_lgt, label=None, label_lgt=None)
    return ret_dict['recognized_sents']  # [[('ICH', 0), ('LUFT', 1), ('WETTER', 2), ('GERADE', 3), ('loc-SUEDWEST', 4), ('TEMPERATUR', 5), ('__PU__', 6), ('KUEHL', 7), ('SUED', 8), ('WARM', 9), ('ICH', 10), ('IX', 11)]]


def parse_args():
    """
    Parse command-line arguments.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", type=str, help="The path to pretrained weights")
    parser.add_argument("--device", type=int, default=0)
    parser.add_argument("--language", type=str, default='phoenix', choices=['phoenix', 'csl'])
    parser.add_argument("--max_frames_num", type=int, default=360)

    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()

    # Load tokenizer, model and image processor
    model_path = os.path.expanduser(args.model_path)

    device_id = args.device  # specify which gpu to use
    if args.language == 'phoenix':
        dataset = 'phoenix2014'
    elif args.language == 'csl':
        dataset = 'CSL-Daily'
    else:
        raise ValueError("Please select the target language from ['phoenix', 'csl'] in your command")

    model_weights = args.model_path

    # Load data and apply transformation
    dict_path = f'./preprocess/{dataset}/gloss_dict.npy'  # Use the gloss dict of the selected dataset
    gloss_dict = np.load(dict_path, allow_pickle=True).item()

    device = utils.GpuDataParallel()
    device.set_device(device_id)
    # Define model and load state-dict
    model = SLRModel(num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
                     loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0},)
    state_dict = torch.load(model_weights)['model_state_dict']
    state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
    model.load_state_dict(state_dict, strict=True)
    model = model.to(device.output_device)
    model.cuda()

    model.eval()

    def identity(x):
        return x

    with gr.Blocks(title='Continuous sign language recognition') as demo:
        gr.Markdown("<center><font size=5>Continuous sign language recognition</center></font>")
        gr.Markdown("**Upload multiple images or a video** to get the recognized glosses.")
        with gr.Tab('Multi-Images'):
            with gr.Row():
                with gr.Column(scale=1):
                    multiple_image_show = gr.Gallery(label="Show the input images", height=200)
                    Multi_image_input = gr.UploadButton(label="Click to upload multiple images", file_types=['.png', '.jpg', '.jpeg', '.bmp'], file_count="multiple")
                    multiple_image_button = gr.Button("Run")
                with gr.Column(scale=1):
                    multiple_image_output = gr.Textbox(label="Output")
        with gr.Tab('Video'):
            with gr.Row():
                with gr.Column(scale=1):
                    Video_input = gr.Video(sources=["upload"], label="Upload a video file")
                    video_button = gr.Button("Run")
                with gr.Column(scale=1):
                    video_output = gr.Textbox(label="Output")
        multiple_image_button.click(identity, inputs=[Multi_image_input], outputs=multiple_image_show)
        multiple_image_button.click(run_inference, inputs=Multi_image_input, outputs=multiple_image_output)
        video_button.click(run_inference, inputs=Video_input, outputs=video_output)

    demo.launch(share=False, server_name="0.0.0.0", server_port=7862)
CorrNet_Plus/CorrNet_Plus_CSLR/generate_corr_map.py
ADDED
| 1 |
+
#Ref: https://blog.csdn.net/weixin_41735859/article/details/106474768
|
| 2 |
+
import numpy as np
|
| 3 |
+
import os
|
| 4 |
+
import glob
|
| 5 |
+
import cv2
|
| 6 |
+
from utils import video_augmentation
|
| 7 |
+
from slr_network import SLRModel
|
| 8 |
+
import torch
|
| 9 |
+
from collections import OrderedDict
|
| 10 |
+
import utils
|
| 11 |
+
|
| 12 |
+
gpu_id = 0 # The GPU to use
|
| 13 |
+
dataset = 'phoenix2014' # support [phoenix2014, phoenix2014-T, CSL-Daily]
|
| 14 |
+
prefix = './dataset/phoenix2014/phoenix-2014-multisigner' # ['./dataset/CSL-Daily', './dataset/phoenix2014-T', './dataset/phoenix2014/phoenix-2014-multisigner']
|
| 15 |
+
dict_path = f'./preprocess/{dataset}/gloss_dict.npy'
|
| 16 |
+
model_weights = 'path_to_model.pt'
|
| 17 |
+
select_id = 539 # The video selected to show. 539 for 31October_2009_Saturday_tagesschau_default-8, 0 for 01April_2010_Thursday_heute_default-1, 1 for 01August_2011_Monday_heute_default-6, 2 for 01December_2011_Thursday_heute_default-3
|
| 18 |
+
#name = '01April_2010_Thursday_heute_default-1'
|
| 19 |
+
|
| 20 |
+
# Load data and apply transformation
|
| 21 |
+
gloss_dict = np.load(dict_path, allow_pickle=True).item()
|
| 22 |
+
inputs_list = np.load(f"./preprocess/{dataset}/dev_info.npy", allow_pickle=True).item()
|
| 23 |
+
name = inputs_list[select_id]['fileid']
|
| 24 |
+
print(f'Generating correlation maps for {name}')
|
| 25 |
+
img_folder = os.path.join(prefix, "features/fullFrame-256x256px/" + inputs_list[select_id]['folder']) if 'phoenix' in dataset else os.path.join(prefix, inputs_list[select_id]['folder'])
|
| 26 |
+
img_list = sorted(glob.glob(img_folder))
|
| 27 |
+
img_list = [cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB) for img_path in img_list]
|
| 28 |
+
label_list = []
|
| 29 |
+
for phase in inputs_list[select_id]['label'].split(" "):
|
| 30 |
+
if phase == '':
|
| 31 |
+
continue
|
| 32 |
+
if phase in gloss_dict.keys():
|
| 33 |
+
label_list.append(gloss_dict[phase][0])
|
| 34 |
+
transform = video_augmentation.Compose([
|
| 35 |
+
video_augmentation.CenterCrop(224),
|
| 36 |
+
video_augmentation.Resize(1.0),
|
| 37 |
+
video_augmentation.ToTensor(),
|
| 38 |
+
])
|
| 39 |
+
vid, label = transform(img_list, label_list, None)
|
| 40 |
+
vid = vid.float() / 127.5 - 1
|
| 41 |
+
vid = vid.unsqueeze(0)
|
| 42 |
+
|
| 43 |
+
left_pad = 0
|
| 44 |
+
last_stride = 1
|
| 45 |
+
total_stride = 1
|
| 46 |
+
kernel_sizes = ['K5', "P2", 'K5', "P2"]
|
| 47 |
+
for layer_idx, ks in enumerate(kernel_sizes):
|
| 48 |
+
if ks[0] == 'K':
|
| 49 |
+
left_pad = left_pad * last_stride
|
| 50 |
+
left_pad += int((int(ks[1])-1)/2)
|
| 51 |
+
elif ks[0] == 'P':
|
| 52 |
+
last_stride = int(ks[1])
|
| 53 |
+
total_stride = total_stride * last_stride
|
| 54 |
+
|
| 55 |
+
max_len = vid.size(1)
|
| 56 |
+
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad ])
|
| 57 |
+
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
|
| 58 |
+
max_len = max_len + left_pad + right_pad
|
| 59 |
+
vid = torch.cat(
|
| 60 |
+
(
|
| 61 |
+
vid[0,0][None].expand(left_pad, -1, -1, -1),
|
| 62 |
+
vid[0],
|
| 63 |
+
vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
|
| 64 |
+
)
|
| 65 |
+
, dim=0).unsqueeze(0)
|
| 66 |
+
|
| 67 |
+
fmap_block = list()
|
| 68 |
+
#grad_block = list()
|
| 69 |
+
|
| 70 |
+
device = utils.GpuDataParallel()
|
| 71 |
+
device.set_device(gpu_id)
|
| 72 |
+
# Define model and load state-dict
|
| 73 |
+
model = SLRModel( num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
|
| 74 |
+
loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
|
| 75 |
+
state_dict = torch.load(model_weights)['model_state_dict']
|
| 76 |
+
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
|
| 77 |
+
model.load_state_dict(state_dict, strict=True)
|
| 78 |
+
model = model.to(device.output_device)
|
| 79 |
+
model.cuda()
|
| 80 |
+
|
| 81 |
+
model.eval()
|
| 82 |
+
|
| 83 |
+
print(vid.shape)
|
| 84 |
+
vid = device.data_to_device(vid)
|
| 85 |
+
vid_lgt = device.data_to_device(video_length)
|
| 86 |
+
label = device.data_to_device([torch.LongTensor(label)])
|
| 87 |
+
label_lgt = device.data_to_device(torch.LongTensor([len(label_list)]))
|
| 88 |
+
ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt, dataset=dataset)
|
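
Note: the left_pad/total_stride walk above mirrors the temporal receptive field of the 1D head (two K5 convolutions interleaved with two stride-2 pools). A minimal sketch of the resulting arithmetic, assuming a hypothetical 100-frame clip (the frame count is illustrative, not from the repo):

import numpy as np

# Walking ['K5', 'P2', 'K5', 'P2'] as above yields left_pad=6, total_stride=4.
left_pad, total_stride = 6, 4
max_len = 100                                   # hypothetical clip length
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
video_length = int(np.ceil(max_len / total_stride) * total_stride) + 2 * left_pad
print(right_pad, video_length)                  # 6 112 -> the padded clip has 112 frames
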
CorrNet_Plus/CorrNet_Plus_CSLR/generate_weight_map.py
ADDED
@@ -0,0 +1,136 @@
#Ref: https://blog.csdn.net/weixin_41735859/article/details/106474768
import numpy as np
import os
import glob
import cv2
from utils import video_augmentation
from slr_network import SLRModel
import torch
from collections import OrderedDict
import utils

gpu_id = 0  # The GPU to use
dataset = 'phoenix2014'  # support [phoenix2014, phoenix2014-T, CSL-Daily]
prefix = './dataset/phoenix2014/phoenix-2014-multisigner'  # ['./dataset/CSL-Daily', './dataset/phoenix2014-T', './dataset/phoenix2014/phoenix-2014-multisigner']
dict_path = f'./preprocess/{dataset}/gloss_dict.npy'
model_weights = 'path_to_model.pt'
select_id = 2  # The video selected to show. 539 for 31October_2009_Saturday_tagesschau_default-8, 0 for 01April_2010_Thursday_heute_default-1, 1 for 01August_2011_Monday_heute_default-6, 2 for 01December_2011_Thursday_heute_default-3
#name = '01April_2010_Thursday_heute_default-1'

# Load data and apply transformation
gloss_dict = np.load(dict_path, allow_pickle=True).item()
inputs_list = np.load(f"./preprocess/{dataset}/dev_info.npy", allow_pickle=True).item()
name = inputs_list[select_id]['fileid']
print(f'Generating CAM for {name}')
img_folder = os.path.join(prefix, "features/fullFrame-256x256px/" + inputs_list[select_id]['folder']) if 'phoenix' in dataset else os.path.join(prefix, inputs_list[select_id]['folder'])
img_list = sorted(glob.glob(img_folder))
img_list = [cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB) for img_path in img_list]
label_list = []
for phase in inputs_list[select_id]['label'].split(" "):
    if phase == '':
        continue
    if phase in gloss_dict.keys():
        label_list.append(gloss_dict[phase][0])
transform = video_augmentation.Compose([
    video_augmentation.CenterCrop(224),
    video_augmentation.Resize(1.0),
    video_augmentation.ToTensor(),
])
vid, label = transform(img_list, label_list, None)
vid = vid.float() / 127.5 - 1
vid = vid.unsqueeze(0)

left_pad = 0
last_stride = 1
total_stride = 1
kernel_sizes = ['K5', "P2", 'K5', "P2"]
for layer_idx, ks in enumerate(kernel_sizes):
    if ks[0] == 'K':
        left_pad = left_pad * last_stride
        left_pad += int((int(ks[1])-1)/2)
    elif ks[0] == 'P':
        last_stride = int(ks[1])
        total_stride = total_stride * last_stride

max_len = vid.size(1)
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad])
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
max_len = max_len + left_pad + right_pad
vid = torch.cat(
    (
        vid[0,0][None].expand(left_pad, -1, -1, -1),
        vid[0],
        vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
    )
    , dim=0).unsqueeze(0)

fmap_block = list()

device = utils.GpuDataParallel()
device.set_device(gpu_id)
# Define model and load state-dict
model = SLRModel(num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
                 loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
state_dict = torch.load(model_weights)['model_state_dict']
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
model.load_state_dict(state_dict, strict=True)
model = model.to(device.output_device)
model.cuda()

model.train()

def forward_hook(module, input, output):
    fmap_block.append(output)  # N, C, T, H, W
if 'phoenix' in dataset:
    model.conv2d.corr2.conv_back.register_forward_hook(forward_hook)
else:
    model.conv2d.corr3.conv_back.register_forward_hook(forward_hook)  # For CSL-Daily
#model.conv2d.layer4[-1].conv1.register_backward_hook(backward_hook)

def cam_show_img(img, feature_map, grads, out_dir):  # img: ntchw, feature_map: ncthw, grads: ncthw
    N, C, T, H, W = feature_map.shape
    cam = np.zeros(feature_map.shape[2:], dtype=np.float32)  # thw
    grads = grads[0,:].reshape([C, T, -1])
    weights = np.mean(grads, axis=-1)
    for i in range(C):
        for j in range(T):
            cam[j] += weights[i,j] * feature_map[0, i, j, :, :]
    cam = np.maximum(cam, 0)

    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    else:
        import shutil
        shutil.rmtree(out_dir)
        os.makedirs(out_dir)
    for i in range(T):
        out_cam = cam[i]
        out_cam = out_cam - np.min(out_cam)
        out_cam = out_cam / (1e-7 + out_cam.max())
        out_cam = cv2.resize(out_cam, (img.shape[3], img.shape[4]))
        out_cam = (255 * out_cam).astype(np.uint8)
        heatmap = cv2.applyColorMap(out_cam, cv2.COLORMAP_JET)
        cam_img = np.float32(heatmap) / 255 + (img[0,i]/2+0.5).permute(1,2,0).cpu().data.numpy()
        cam_img = cam_img/np.max(cam_img)
        cam_img = np.uint8(255 * cam_img)
        path_cam_img = os.path.join(out_dir, f"cam_{i}.jpg")
        cv2.imwrite(path_cam_img, cam_img)
    print('Generate cam.jpg')

print(vid.shape)
vid = device.data_to_device(vid)
vid_lgt = device.data_to_device(video_length)
label = device.data_to_device([torch.LongTensor(label)])
label_lgt = device.data_to_device(torch.LongTensor([len(label_list)]))
ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt)

model.zero_grad()
for i in range(ret_dict['sequence_logits'].size(0)):
    idx = np.argmax(ret_dict['sequence_logits'].cpu().data.numpy()[i,0])  # TBC
    class_loss = ret_dict['sequence_logits'][i, 0, idx]
    class_loss.backward(retain_graph=True)
# Generate the CAM
grads_val = torch.load('./weight_map.pth').cpu().data.numpy()
fmap = fmap_block[0].cpu().data.numpy()
# Save the CAM images
cam_show_img(vid, fmap, grads_val, out_dir='./agg_map')
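
Note: generate_weight_map.py reads its channel weights from a precomputed ./weight_map.pth rather than from the hook itself. A minimal sketch, assuming you instead want the gradients captured live from the same conv_back layer (the names reuse the script above; the backward hook itself is our addition, not part of the repo):

grad_block = []

def backward_hook(module, grad_input, grad_output):
    grad_block.append(grad_output[0])   # N, C, T, H, W gradient w.r.t. the layer output

layer = model.conv2d.corr2.conv_back    # corr3 for CSL-Daily, as above
layer.register_forward_hook(forward_hook)
layer.register_full_backward_hook(backward_hook)
# After the class_loss.backward(retain_graph=True) loop, grad_block[0] can play
# the role of grads_val in cam_show_img.
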
CorrNet_Plus/CorrNet_Plus_CSLR/main.py
ADDED
@@ -0,0 +1,301 @@
import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
import pdb
import sys
import cv2
import yaml
import torch
import random
import importlib
import faulthandler
import numpy as np
import torch.nn as nn
import shutil
import inspect
import time
from collections import OrderedDict

faulthandler.enable()
import utils
from modules.sync_batchnorm import convert_model
from seq_scripts import seq_train, seq_eval, seq_feature_generation
from torch.cuda.amp import autocast as autocast

class Processor():
    def __init__(self, arg):
        self.arg = arg
        if os.path.exists(self.arg.work_dir):
            answer = input('Current dir exists, do you want to remove and refresh it?\n')
            if answer in ['yes','y','ok','1']:
                print('Dir removed !')
                shutil.rmtree(self.arg.work_dir)
                os.makedirs(self.arg.work_dir)
            else:
                print('Dir Not removed !')
        else:
            os.makedirs(self.arg.work_dir)
        shutil.copy2(__file__, self.arg.work_dir)
        shutil.copy2('./configs/baseline.yaml', self.arg.work_dir)
        shutil.copy2('./modules/tconv.py', self.arg.work_dir)
        shutil.copy2('./modules/resnet.py', self.arg.work_dir)
        self.recoder = utils.Recorder(self.arg.work_dir, self.arg.print_log, self.arg.log_interval)
        self.save_arg()
        if self.arg.random_fix:
            self.rng = utils.RandomState(seed=self.arg.random_seed)
        self.device = utils.GpuDataParallel()
        self.recoder = utils.Recorder(self.arg.work_dir, self.arg.print_log, self.arg.log_interval)
        self.dataset = {}
        self.data_loader = {}
        self.gloss_dict = np.load(self.arg.dataset_info['dict_path'], allow_pickle=True).item()
        self.arg.model_args['num_classes'] = len(self.gloss_dict) + 1
        self.model, self.optimizer = self.loading()

    def start(self):
        if self.arg.phase == 'train':
            best_dev = 100.0
            best_epoch = 0
            total_time = 0
            epoch_time = 0
            self.recoder.print_log('Parameters:\n{}\n'.format(str(vars(self.arg))))
            seq_model_list = []
            for epoch in range(self.arg.optimizer_args['start_epoch'], self.arg.num_epoch):
                save_model = epoch % self.arg.save_interval == 0
                eval_model = epoch % self.arg.eval_interval == 0
                epoch_time = time.time()
                # train end2end model
                seq_train(self.data_loader['train'], self.model, self.optimizer,
                          self.device, epoch, self.recoder)
                if eval_model:
                    dev_wer = seq_eval(self.arg, self.data_loader['dev'], self.model, self.device,
                                       'dev', epoch, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
                    self.recoder.print_log("Dev WER: {:05.2f}%".format(dev_wer))
                    if dev_wer < best_dev:
                        best_dev = dev_wer
                        best_epoch = epoch
                        model_path = "{}_best_model.pt".format(self.arg.work_dir)
                        self.save_model(epoch, model_path)
                        self.recoder.print_log('Save best model')
                    self.recoder.print_log('Best_dev: {:05.2f}, Epoch : {}'.format(best_dev, best_epoch))
                if save_model:
                    model_path = "{}dev_{:05.2f}_epoch{}_model.pt".format(self.arg.work_dir, dev_wer, epoch)
                    seq_model_list.append(model_path)
                    print("seq_model_list", seq_model_list)
                    self.save_model(epoch, model_path)
                epoch_time = time.time() - epoch_time
                total_time += epoch_time
                torch.cuda.empty_cache()
                self.recoder.print_log('Epoch {} costs {} mins {} seconds'.format(epoch, int(epoch_time)//60, int(epoch_time)%60))
            self.recoder.print_log('Training costs {} hours {} mins {} seconds'.format(int(total_time)//60//60, int(total_time)//60%60, int(total_time)%60))
        elif self.arg.phase == 'test':
            if self.arg.load_weights is None and self.arg.load_checkpoints is None:
                print('Please appoint --weights.')
            self.recoder.print_log('Model: {}.'.format(self.arg.model))
            self.recoder.print_log('Weights: {}.'.format(self.arg.load_weights))
            # train_wer = seq_eval(self.arg, self.data_loader["train_eval"], self.model, self.device,
            #                      "train", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
            dev_wer = seq_eval(self.arg, self.data_loader["dev"], self.model, self.device,
                               "dev", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
            test_wer = seq_eval(self.arg, self.data_loader["test"], self.model, self.device,
                                "test", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
            self.recoder.print_log('Evaluation Done.\n')
        elif self.arg.phase == "features":
            for mode in ["train", "dev", "test"]:
                seq_feature_generation(
                    self.data_loader[mode + "_eval" if mode == "train" else mode],
                    self.model, self.device, mode, self.arg.work_dir, self.recoder
                )
        elif self.arg.phase == 'finetune':
            best_dev = 100.0
            best_epoch = 0
            total_time = 0
            epoch_time = 0
            self.recoder.print_log('Parameters:\n{}\n'.format(str(vars(self.arg))))
            seq_model_list = []
            for name, m in self.model.conv2d.named_modules():
                m.requires_grad = False
            for name, m in self.model.conv1d.named_modules():
                if 'fc' not in name:
                    m.requires_grad = False
            for name, m in self.model.temporal_model.named_modules():
                m.requires_grad = False
            from slr_network import NormLinear
            self.model.classifier = NormLinear(1024, len(self.gloss_dict) + 1).cuda()
            self.model.conv1d.fc = self.model.classifier

            for epoch in range(self.arg.optimizer_args['start_epoch'], self.arg.num_epoch):
                save_model = epoch % self.arg.save_interval == 0
                eval_model = epoch % self.arg.eval_interval == 0
                epoch_time = time.time()
                # train end2end model
                seq_train(self.data_loader['train'], self.model, self.optimizer,
                          self.device, epoch, self.recoder)
                if eval_model:
                    dev_wer = seq_eval(self.arg, self.data_loader['dev'], self.model, self.device,
                                       'dev', epoch, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
                    self.recoder.print_log("Dev WER: {:05.2f}%".format(dev_wer))
                    if dev_wer < best_dev:
                        best_dev = dev_wer
                        best_epoch = epoch
                        model_path = "{}_best_model.pt".format(self.arg.work_dir)
                        self.save_model(epoch, model_path)
                        self.recoder.print_log('Save best model')
                    self.recoder.print_log('Best_dev: {:05.2f}, Epoch : {}'.format(best_dev, best_epoch))
                if save_model:
                    model_path = "{}dev_{:05.2f}_epoch{}_model.pt".format(self.arg.work_dir, dev_wer, epoch)
                    seq_model_list.append(model_path)
                    print("seq_model_list", seq_model_list)
                    self.save_model(epoch, model_path)
                epoch_time = time.time() - epoch_time
                total_time += epoch_time
                torch.cuda.empty_cache()
                self.recoder.print_log('Epoch {} costs {} mins {} seconds'.format(epoch, int(epoch_time)//60, int(epoch_time)%60))
            self.recoder.print_log('Training costs {} hours {} mins {} seconds'.format(int(total_time)//60//60, int(total_time)//60%60, int(total_time)%60))

    def save_arg(self):
        arg_dict = vars(self.arg)
        if not os.path.exists(self.arg.work_dir):
            os.makedirs(self.arg.work_dir)
        with open('{}/config.yaml'.format(self.arg.work_dir), 'w') as f:
            yaml.dump(arg_dict, f)

    def save_model(self, epoch, save_path):
        torch.save({
            'epoch': epoch,
            'model_state_dict': self.model.state_dict(),
            'optimizer_state_dict': self.optimizer.state_dict(),
            'scheduler_state_dict': self.optimizer.scheduler.state_dict(),
            'rng_state': self.rng.save_rng_state(),
        }, save_path)

    def loading(self):
        self.device.set_device(self.arg.device)
        print("Loading model")
        model_class = import_class(self.arg.model)
        model = model_class(
            **self.arg.model_args,
            gloss_dict=self.gloss_dict,
            loss_weights=self.arg.loss_weights,
        )
        shutil.copy2(inspect.getfile(model_class), self.arg.work_dir)
        optimizer = utils.Optimizer(model, self.arg.optimizer_args)

        if self.arg.load_weights:
            self.load_model_weights(model, self.arg.load_weights)
        elif self.arg.load_checkpoints:
            self.load_checkpoint_weights(model, optimizer)
        model = self.model_to_device(model)
        self.kernel_sizes = model.conv1d.kernel_size
        print("Loading model finished.")
        self.load_data()
        return model, optimizer

    def model_to_device(self, model):
        model = model.to(self.device.output_device)
        if len(self.device.gpu_list) > 1:
            raise ValueError("AMP equipped with DataParallel has to manually write autocast() for each forward function, you can choose to do this by yourself")
            #model.conv2d = nn.DataParallel(model.conv2d, device_ids=self.device.gpu_list, output_device=self.device.output_device)
            model = convert_model(model)
        model.cuda()
        return model

    def load_model_weights(self, model, weight_path):
        state_dict = torch.load(weight_path)
        if len(self.arg.ignore_weights):
            for w in self.arg.ignore_weights:
                if state_dict.pop(w, None) is not None:
                    print('Successfully Remove Weights: {}.'.format(w))
                else:
                    print('Can Not Remove Weights: {}.'.format(w))
        weights = self.modified_weights(state_dict['model_state_dict'], False)
        # weights = self.modified_weights(state_dict['model_state_dict'])
        model.load_state_dict(weights, strict=True)

    @staticmethod
    def modified_weights(state_dict, modified=False):
        state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
        if not modified:
            return state_dict
        modified_dict = dict()
        return modified_dict

    def load_checkpoint_weights(self, model, optimizer):
        self.load_model_weights(model, self.arg.load_checkpoints)
        state_dict = torch.load(self.arg.load_checkpoints)

        if len(torch.cuda.get_rng_state_all()) == len(state_dict['rng_state']['cuda']):
            print("Loading random seeds...")
            self.rng.set_rng_state(state_dict['rng_state'])
        if "optimizer_state_dict" in state_dict.keys():
            print("Loading optimizer parameters...")
            optimizer.load_state_dict(state_dict["optimizer_state_dict"])
            optimizer.to(self.device.output_device)
        if "scheduler_state_dict" in state_dict.keys():
            print("Loading scheduler parameters...")
            optimizer.scheduler.load_state_dict(state_dict["scheduler_state_dict"])

        self.arg.optimizer_args['start_epoch'] = state_dict["epoch"] + 1
        self.recoder.print_log(f"Resuming from checkpoint: epoch {self.arg.optimizer_args['start_epoch']}")

    def load_data(self):
        print("Loading data")
        self.feeder = import_class(self.arg.feeder)
        shutil.copy2(inspect.getfile(self.feeder), self.arg.work_dir)
        if self.arg.dataset == 'CSL':
            dataset_list = zip(["train", "dev"], [True, False])
        elif 'phoenix' in self.arg.dataset:
            dataset_list = zip(["train", "dev", "test"], [True, False, False])
        elif self.arg.dataset == 'CSL-Daily':
            dataset_list = zip(["train", "dev", "test"], [True, False, False])
        for idx, (mode, train_flag) in enumerate(dataset_list):
            arg = self.arg.feeder_args
            arg["prefix"] = self.arg.dataset_info['dataset_root']
            arg["mode"] = mode.split("_")[0]
            arg["transform_mode"] = train_flag
            self.dataset[mode] = self.feeder(gloss_dict=self.gloss_dict, kernel_size=self.kernel_sizes, dataset=self.arg.dataset, **arg)
            self.data_loader[mode] = self.build_dataloader(self.dataset[mode], mode, train_flag)
        print("Loading data finished.")
    def init_fn(self, worker_id):
        np.random.seed(int(self.arg.random_seed)+worker_id)
    def build_dataloader(self, dataset, mode, train_flag):
        return torch.utils.data.DataLoader(
            dataset,
            batch_size=self.arg.batch_size if mode == "train" else self.arg.test_batch_size,
            shuffle=train_flag,
            drop_last=train_flag,
            num_workers=self.arg.num_worker,  # if train_flag else 0
            collate_fn=self.feeder.collate_fn,
            pin_memory=True,
            worker_init_fn=self.init_fn,
        )


def import_class(name):
    components = name.rsplit('.', 1)
    mod = importlib.import_module(components[0])
    mod = getattr(mod, components[1])
    return mod


if __name__ == '__main__':
    sparser = utils.get_parser()
    p = sparser.parse_args()
    # p.config = "baseline_iter.yaml"
    if p.config is not None:
        with open(p.config, 'r') as f:
            try:
                default_arg = yaml.load(f, Loader=yaml.FullLoader)
            except AttributeError:
                default_arg = yaml.load(f)
        key = vars(p).keys()
        for k in default_arg.keys():
            if k not in key:
                print('WRONG ARG: {}'.format(k))
                assert (k in key)
        sparser.set_defaults(**default_arg)
    args = sparser.parse_args()
    with open(f"./configs/{args.dataset}.yaml", 'r') as f:
        args.dataset_info = yaml.load(f, Loader=yaml.FullLoader)
    processor = Processor(args)
    utils.pack_code("./", args.work_dir)
    processor.start()
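
Note: Processor.save_model writes a dict of five entries, which is exactly what load_checkpoint_weights consumes when resuming. A minimal sketch of inspecting such a checkpoint (the file name is hypothetical):

import torch

ckpt = torch.load("work_dir/dev_19.50_epoch40_model.pt", map_location="cpu")  # hypothetical path
print(sorted(ckpt.keys()))
# ['epoch', 'model_state_dict', 'optimizer_state_dict', 'rng_state', 'scheduler_state_dict']
start_epoch = ckpt["epoch"] + 1   # how load_checkpoint_weights derives the resume epoch
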
CorrNet_Plus/CorrNet_Plus_CSLR/modules/BiLSTM.py
ADDED
@@ -0,0 +1,96 @@
import pdb
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTMLayer(nn.Module):
    def __init__(self, input_size, debug=False, hidden_size=512, num_layers=1, dropout=0.3,
                 bidirectional=True, rnn_type='LSTM', num_classes=-1):
        super(BiLSTMLayer, self).__init__()

        self.dropout = dropout
        self.num_layers = num_layers
        self.input_size = input_size
        self.bidirectional = bidirectional
        self.num_directions = 2 if bidirectional else 1
        self.hidden_size = int(hidden_size / self.num_directions)
        self.rnn_type = rnn_type
        self.debug = debug
        self.rnn = getattr(nn, self.rnn_type)(
            input_size=self.input_size,
            hidden_size=self.hidden_size,
            num_layers=self.num_layers,
            dropout=self.dropout,
            bidirectional=self.bidirectional)
        # for name, param in self.rnn.named_parameters():
        #     if name[:6] == 'weight':
        #         nn.init.orthogonal_(param)

    def forward(self, src_feats, src_lens, hidden=None):
        """
        Args:
            - src_feats: (max_src_len, batch_size, D)
            - src_lens: (batch_size)
        Returns:
            - outputs: (max_src_len, batch_size, hidden_size * num_directions)
            - hidden : (num_layers, batch_size, hidden_size * num_directions)
        """
        # (max_src_len, batch_size, D)
        packed_emb = nn.utils.rnn.pack_padded_sequence(src_feats, src_lens)

        # rnn(gru) returns:
        # - packed_outputs: shape same as packed_emb
        # - hidden: (num_layers * num_directions, batch_size, hidden_size)
        if hidden is not None and self.rnn_type == 'LSTM':
            half = int(hidden.size(0) / 2)
            hidden = (hidden[:half], hidden[half:])
        packed_outputs, hidden = self.rnn(packed_emb, hidden)

        # outputs: (max_src_len, batch_size, hidden_size * num_directions)
        rnn_outputs, _ = nn.utils.rnn.pad_packed_sequence(packed_outputs)

        if self.bidirectional:
            # (num_layers * num_directions, batch_size, hidden_size)
            # => (num_layers, batch_size, hidden_size * num_directions)
            hidden = self._cat_directions(hidden)

        if isinstance(hidden, tuple):
            # cat hidden and cell states
            hidden = torch.cat(hidden, 0)

        return {
            "predictions": rnn_outputs,
            "hidden": hidden
        }

    def _cat_directions(self, hidden):
        """ If the encoder is bidirectional, do the following transformation.
            Ref: https://github.com/IBM/pytorch-seq2seq/blob/master/seq2seq/models/DecoderRNN.py#L176
            -----------------------------------------------------------
            In: (num_layers * num_directions, batch_size, hidden_size)
            (ex: num_layers=2, num_directions=2)

            layer 1: forward__hidden(1)
            layer 1: backward_hidden(1)
            layer 2: forward__hidden(2)
            layer 2: backward_hidden(2)

            -----------------------------------------------------------
            Out: (num_layers, batch_size, hidden_size * num_directions)

            layer 1: forward__hidden(1) backward_hidden(1)
            layer 2: forward__hidden(2) backward_hidden(2)
        """

        def _cat(h):
            return torch.cat([h[0:h.size(0):2], h[1:h.size(0):2]], 2)

        if isinstance(hidden, tuple):
            # LSTM hidden contains a tuple (hidden state, cell state)
            hidden = tuple([_cat(h) for h in hidden])
        else:
            # GRU hidden
            hidden = _cat(hidden)

        return hidden
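
Note: BiLSTMLayer consumes time-major features and returns both per-step predictions and a direction-concatenated hidden state. A minimal usage sketch (hidden_size=1024 is illustrative; nn.utils.rnn.pack_padded_sequence expects lengths sorted in descending order):

import torch
from modules import BiLSTMLayer

rnn = BiLSTMLayer(input_size=512, hidden_size=1024, num_layers=2, bidirectional=True)
feats = torch.randn(30, 4, 512)            # (max_src_len, batch_size, D)
lens = torch.tensor([30, 26, 20, 12])      # descending lengths, one per batch element
out = rnn(feats, lens)
print(out["predictions"].shape)            # torch.Size([30, 4, 1024])
print(out["hidden"].shape)                 # torch.Size([4, 4, 1024]) after _cat_directions + cat
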
CorrNet_Plus/CorrNet_Plus_CSLR/modules/__init__.py
ADDED
@@ -0,0 +1,2 @@
from .BiLSTM import BiLSTMLayer
from .tconv import TemporalConv
CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py
ADDED
@@ -0,0 +1,295 @@
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint
__all__ = [
    'ResNet', 'resnet10', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
    'resnet152', 'resnet200'
]
model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}

class AttentionPool2d(nn.Module):
    def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None, clusters=1):
        super().__init__()
        #self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
        self.num_heads = num_heads
        self.clusters = clusters
        self.query = nn.Parameter(torch.rand(self.clusters, 1, embed_dim), requires_grad=True)

    def forward(self, x):
        N, C, T, H, W = x.shape
        x = x.flatten(start_dim=3).permute(3, 0, 2, 1).reshape(-1, N*T, C).contiguous()  # NCTHW -> (HW)(NT)C
        #x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0)  # (HW+1)(NT)C
        #x = x + self.positional_embedding[:, None, :].to(x.dtype)  # (HW+1)(NT)C
        x, _ = F.multi_head_attention_forward(
            #query=x[:1], key=x, value=x,
            query=self.query.repeat(1, N*T, 1), key=x, value=x,
            embed_dim_to_check=x.shape[-1],
            num_heads=self.num_heads,
            q_proj_weight=self.q_proj.weight,
            k_proj_weight=self.k_proj.weight,
            v_proj_weight=self.v_proj.weight,
            in_proj_weight=None,
            in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
            bias_k=None,
            bias_v=None,
            add_zero_attn=False,
            dropout_p=0,
            out_proj_weight=self.c_proj.weight,
            out_proj_bias=self.c_proj.bias,
            use_separate_proj_weight=True,
            training=self.training,
            need_weights=False
        )
        return x.view(self.clusters, N, T, C).contiguous().permute(1, 3, 2, 0)  # PNTC->NCTP

class UnfoldTemporalWindows(nn.Module):
    def __init__(self, window_size=9, window_stride=1, window_dilation=1):
        super().__init__()
        self.window_size = window_size
        self.window_stride = window_stride
        self.window_dilation = window_dilation

        self.padding = (window_size + (window_size-1) * (window_dilation-1) - 1) // 2
        self.unfold = nn.Unfold(kernel_size=(self.window_size, 1),
                                dilation=(self.window_dilation, 1),
                                stride=(self.window_stride, 1),
                                padding=(self.padding, 0))

    def forward(self, x):
        # Input shape: (N,C,T,H,W), out: (N,C,T,V*window_size)
        N, C, T, H, W = x.shape
        x = x.view(N, C, T, H*W)
        x = self.unfold(x)  # (N, C*Window_Size, T, H*W)
        # Permute extra channels from window size to the graph dimension; -1 for number of windows
        x = x.view(N, C, self.window_size, T, H, W).permute(0,1,3,2,4,5).reshape(N, C, T, self.window_size, H, W).contiguous()  # NCTSHW
        return x

class Temporal_weighting(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        hidden_size = input_size//16
        self.conv_transform = nn.Conv1d(input_size, hidden_size, kernel_size=1, stride=1, padding=0)
        self.conv_back = nn.Conv1d(hidden_size, input_size, kernel_size=1, stride=1, padding=0)
        #self.conv_enhance = nn.Conv1d(hidden_size, hidden_size, kernel_size=9, stride=1, padding=4)
        self.num = 3
        self.conv_enhance = nn.ModuleList([
            nn.Conv1d(hidden_size, hidden_size, kernel_size=3, stride=1, padding=int(i+1), groups=hidden_size, dilation=int(i+1)) for i in range(self.num)
        ])
        self.weights = nn.Parameter(torch.ones(self.num) / self.num, requires_grad=True)
        self.alpha = nn.Parameter(torch.zeros(1), requires_grad=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv_transform(x.mean(-1).mean(-1))
        aggregated_out = 0
        for i in range(self.num):
            aggregated_out += self.conv_enhance[i](out) * self.weights[i]
        out = self.conv_back(aggregated_out)
        return x*(F.sigmoid(out.unsqueeze(-1).unsqueeze(-1))-0.5) * self.alpha

class Get_Correlation(nn.Module):
    def __init__(self, channels, neighbors=3):
        super().__init__()
        reduction_channel = channels//16

        self.down_conv2 = nn.Conv3d(channels, channels, kernel_size=1, bias=False)
        self.neighbors = neighbors
        self.clusters = 1
        self.weights2 = nn.Parameter(torch.ones(self.neighbors*2) / (self.neighbors*2), requires_grad=True)
        self.unfold = UnfoldTemporalWindows(2*self.neighbors+1)
        self.weights3 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.weights4 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.attpool = AttentionPool2d(spacial_dim=None, embed_dim=channels, num_heads=1, clusters=self.clusters)
        self.mlp = nn.Sequential(nn.Conv3d(channels, reduction_channel, kernel_size=1),
                                 nn.GELU(),
                                 nn.Conv3d(reduction_channel, channels, kernel_size=1),)

        # For generating aggregated_x with multi-scale conv
        self.down_conv = nn.Conv3d(channels, reduction_channel, kernel_size=1, bias=False)
        self.spatial_aggregation1 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,1,1), groups=reduction_channel)
        self.spatial_aggregation2 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,2,2), dilation=(1,2,2), groups=reduction_channel)
        self.spatial_aggregation3 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9,3,3), padding=(4,3,3), dilation=(1,3,3), groups=reduction_channel)
        self.weights = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.conv_back = nn.Conv3d(reduction_channel, channels, kernel_size=1, bias=False)

    def forward(self, x):
        N, C, T, H, W = x.shape
        def clustering(query, key):
            affinities = torch.einsum('bctp,bctl->btpl', query, key)
            return torch.einsum('bctl,btpl->bctp', key, F.sigmoid(affinities)-0.5)

        x_mean = x.mean(3, keepdim=True).mean(4, keepdim=False)
        x_max = x.max(-1, keepdim=False)[0].max(-1, keepdim=True)[0]
        x_att = self.attpool(x)  # NCTP
        x2 = self.down_conv2(x)
        upfold = self.unfold(x2)
        upfold = (torch.concat([upfold[:,:,:,:self.neighbors], upfold[:,:,:,self.neighbors+1:]], 3) * self.weights2.view(1, 1, 1, -1, 1, 1)).view(N, C, T, -1)
        x_mean = x_mean*self.weights4[0] + x_max*self.weights4[1] + x_att*self.weights4[2]
        x_mean = clustering(x_mean, upfold)
        features = x_mean.view(N, C, T, self.clusters, 1)

        x_down = self.down_conv(x)
        aggregated_x = self.spatial_aggregation1(x_down)*self.weights[0] + self.spatial_aggregation2(x_down)*self.weights[1] \
            + self.spatial_aggregation3(x_down)*self.weights[2]
        aggregated_x = self.conv_back(aggregated_x)

        features = features * (F.sigmoid(aggregated_x)-0.5)
        return features


def conv3x3(in_planes, out_planes, stride=1):
    # 3x3x3 convolution with padding
    return nn.Conv3d(
        in_planes,
        out_planes,
        kernel_size=(1,3,3),
        stride=(1,stride,stride),
        padding=(0,1,1),
        bias=False)

class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm3d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm3d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out

class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv3d(3, 64, kernel_size=(1,7,7), stride=(1,2,2), padding=(0,3,3),
                               bias=False)
        self.bn1 = nn.BatchNorm3d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool3d(kernel_size=(1,3,3), stride=(1,2,2), padding=(0,1,1))
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.corr2 = Get_Correlation(self.inplanes, neighbors=1)
        self.temporal_weight2 = Temporal_weighting(self.inplanes)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.corr3 = Get_Correlation(self.inplanes, neighbors=3)
        self.temporal_weight3 = Temporal_weighting(self.inplanes)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.corr4 = Get_Correlation(self.inplanes, neighbors=5)
        self.temporal_weight4 = Temporal_weighting(self.inplanes)
        self.alpha = nn.Parameter(torch.zeros(3), requires_grad=True)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv3d) or isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm3d) or isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv3d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=(1,stride,stride), bias=False),
                nn.BatchNorm3d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        N, C, T, H, W = x.size()
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = x + self.corr2(x) * self.alpha[0]
        x = x + self.temporal_weight2(x)
        x = self.layer3(x)
        x = x + self.corr3(x) * self.alpha[1]
        x = x + self.temporal_weight3(x)
        x = self.layer4(x)
        x = x + self.corr4(x) * self.alpha[2]
        x = x + self.temporal_weight4(x)

        x = x.transpose(1,2).contiguous()
        x = x.view((-1,)+x.size()[2:])  # bt,c,h,w

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)  # bt,c
        x = self.fc(x)  # bt,c

        return x

def resnet18(**kwargs):
    """Constructs a ResNet-18 based model.
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    checkpoint = model_zoo.load_url(model_urls['resnet18'], map_location=torch.device('cpu'))
    layer_name = list(checkpoint.keys())
    for ln in layer_name:
        if 'conv' in ln or 'downsample.0.weight' in ln:
            checkpoint[ln] = checkpoint[ln].unsqueeze(2)
    model.load_state_dict(checkpoint, strict=False)
    del checkpoint
    import gc
    gc.collect()
    return model


def resnet34(**kwargs):
    """Constructs a ResNet-34 model.
    """
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    return model

def test():
    net = resnet18()
    y = net(torch.randn(1,3,224,224))
    print(y.size())

#test()
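
Note: resnet18() above inflates the 2D ImageNet kernels to 3D by unsqueezing a singleton temporal dimension, so the backbone accepts whole clips. A minimal shape check, assuming the weight download succeeds (the clip size is illustrative):

import torch
from modules.resnet import resnet18

net = resnet18()                           # downloads and inflates ImageNet weights
clip = torch.randn(1, 3, 16, 224, 224)     # N, C, T, H, W
logits = net(clip)                         # frames are folded into the batch before fc
print(logits.shape)                        # torch.Size([16, 1000]) -> (N*T, num_classes)
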
CorrNet_Plus/CorrNet_Plus_CSLR/modules/tconv.py
ADDED
@@ -0,0 +1,125 @@
import pdb
import torch
import collections
import torch.nn as nn
import torch.nn.functional as F

class Temporal_LiftPool(nn.Module):
    def __init__(self, input_size, kernel_size=2):
        super(Temporal_LiftPool, self).__init__()
        self.kernel_size = kernel_size
        self.predictor = nn.Sequential(
            nn.Conv1d(input_size, input_size, kernel_size=3, stride=1, padding=1, groups=input_size),
            nn.ReLU(inplace=True),
            nn.Conv1d(input_size, input_size, kernel_size=1, stride=1, padding=0),
            nn.Tanh(),
        )

        self.updater = nn.Sequential(
            nn.Conv1d(input_size, input_size, kernel_size=3, stride=1, padding=1, groups=input_size),
            nn.ReLU(inplace=True),
            nn.Conv1d(input_size, input_size, kernel_size=1, stride=1, padding=0),
            nn.Tanh(),
        )
        self.predictor[2].weight.data.fill_(0.0)
        self.updater[2].weight.data.fill_(0.0)
        self.weight1 = Local_Weighting(input_size)
        self.weight2 = Local_Weighting(input_size)

    def forward(self, x):
        B, C, T = x.size()
        Xe = x[:,:,:T:self.kernel_size]
        Xo = x[:,:,1:T:self.kernel_size]
        d = Xo - self.predictor(Xe)
        s = Xe + self.updater(d)
        loss_u = torch.norm(s-Xo, p=2)
        loss_p = torch.norm(d, p=2)
        s = torch.cat((x[:,:,:0:self.kernel_size], s, x[:,:,T::self.kernel_size]), 2)
        return self.weight1(s)+self.weight2(d), loss_u, loss_p

class Local_Weighting(nn.Module):
    def __init__(self, input_size):
        super(Local_Weighting, self).__init__()
        self.conv = nn.Conv1d(input_size, input_size, kernel_size=5, stride=1, padding=2)
        self.insnorm = nn.InstanceNorm1d(input_size, affine=True)
        self.conv.weight.data.fill_(0.0)

    def forward(self, x):
        out = self.conv(x)
        return x + x*(F.sigmoid(self.insnorm(out))-0.5)

class TemporalConv(nn.Module):
    def __init__(self, input_size, hidden_size, conv_type=2, use_bn=False, num_classes=-1):
        super(TemporalConv, self).__init__()
        self.use_bn = use_bn
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_classes = num_classes
        self.conv_type = conv_type

        if self.conv_type == 0:
            self.kernel_size = ['K3']
        elif self.conv_type == 1:
            self.kernel_size = ['K5', "P2"]
            self.strides = [0]
        elif self.conv_type == 2:
            self.kernel_size = ['K5', "P2", 'K5', "P2"]
            self.strides = [4,0]


        self.temporal_conv = nn.ModuleList([])
        #nums = 0
        for layer_idx, ks in enumerate(self.kernel_size):
            input_sz = self.input_size if layer_idx == 0 else self.hidden_size
            if ks[0] == 'P':
                #nums += 1
                #if nums == 2:
                #    self.temporal_conv.append(nn.MaxPool1d(kernel_size=int(ks[1]), ceil_mode=False))
                #elif nums == 1:
                self.temporal_conv.append(Temporal_LiftPool(input_size=input_sz, kernel_size=int(ks[1])))
                #self.temporal_conv.append(nn.MaxPool1d(kernel_size=int(ks[1]), ceil_mode=False))
                #self.temporal_conv.append(nn.AvgPool1d(kernel_size=int(ks[1]), ceil_mode=False))

            elif ks[0] == 'K':
                self.temporal_conv.append(
                    nn.Sequential(
                        nn.Conv1d(input_sz, self.hidden_size, kernel_size=int(ks[1]), stride=1, padding=0),
                        nn.BatchNorm1d(self.hidden_size),
                        nn.ReLU(inplace=True),
                    )
                )

        if self.num_classes != -1:
            self.fc = nn.Linear(self.hidden_size, self.num_classes)

    def update_lgt(self, feat_len):
        for ks in self.kernel_size:
            if ks[0] == 'P':
                feat_len //= int(ks[1])
            else:
                feat_len -= int(ks[1]) - 1
        return feat_len

    def forward(self, frame_feat, lgt):
        visual_feat = frame_feat
        loss_LiftPool_u = 0
        loss_LiftPool_p = 0
        i = 0
        for tempconv in self.temporal_conv:
            if isinstance(tempconv, Temporal_LiftPool):
                visual_feat, loss_u, loss_d = tempconv(visual_feat)  #self.strides[i])
                i += 1
                loss_LiftPool_u += loss_u
                loss_LiftPool_p += loss_d
            else:
                visual_feat = tempconv(visual_feat)
        lgt = self.update_lgt(lgt)
        logits = None if self.num_classes == -1 \
            else self.fc(visual_feat.transpose(1, 2)).transpose(1, 2)
        return {
            "visual_feat": visual_feat.permute(2, 0, 1),
            "conv_logits": None if logits is None else logits.permute(2, 0, 1),  # guard the num_classes == -1 case
            "feat_len": lgt.cpu(),
            "loss_LiftPool_u": loss_LiftPool_u,
            "loss_LiftPool_p": loss_LiftPool_p,
        }
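
Note: TemporalConv shrinks the temporal axis according to its K5-P2-K5-P2 schedule: each K5 convolution removes 4 steps, each lift-pool halves the remainder. A minimal sketch (the feature sizes and the 1296-class head are illustrative, not from a config):

import torch
from modules.tconv import TemporalConv

tconv = TemporalConv(input_size=512, hidden_size=1024, conv_type=2, num_classes=1296)
feats = torch.randn(2, 512, 100)           # B, C, T frame-level features
lgt = torch.LongTensor([100, 88])
out = tconv(feats, lgt)
print(out["visual_feat"].shape)            # torch.Size([22, 2, 1024]): ((100-4)//2 - 4)//2 = 22
print(out["feat_len"])                     # tensor([22, 19])
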
CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-CSL-Daily.py
ADDED
|
@@ -0,0 +1,178 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
import re
import os
import cv2
import pdb
import glob
import pandas
import argparse
import numpy as np
from tqdm import tqdm
from functools import partial
from multiprocessing import Pool


def csv2dict(dataset_root, anno_path):
    with open(anno_path, 'r', encoding='utf-8') as f:
        inputs_list = f.readlines()
    info_dict = dict()
    #info_dict['prefix'] = dataset_root #+ "/fullFrame-210x260px"
    print(f"Generate information dict from {anno_path}")
    for file_idx, file_info in tqdm(enumerate(inputs_list[1:]), total=len(inputs_list)-1):  # Exclude first line
        index, name, length, gloss, char, word, postag = file_info.strip().split("|")
        info_dict[file_idx] = {
            'fileid': name,
            'folder': name+'/*.jpg',
            'signer': 'unknown',
            'label': gloss,
            'num_frames': int(length),  # length is read as a string from the annotation file
            'original_info': "|".join(file_info.split("|")[1:]),  # start from 'name', for model inference
        }
    return info_dict


def generate_gt_stm(info, save_path):
    with open(save_path, "w") as f:
        for k, v in info.items():
            if not isinstance(k, int):
                continue
            f.writelines(f"{v['fileid']} 1 {v['signer']} 0.0 1.79769e+308 {v['label']}\n")


def sign_dict_update(total_dict, info):
    for k, v in info.items():
        if not isinstance(k, int):
            continue
        split_label = v['label'].split()
        for gloss in split_label:
            if gloss not in total_dict.keys():
                total_dict[gloss] = 1
            else:
                total_dict[gloss] += 1
    return total_dict


def resize_img(img_path, dsize='210x260px'):
    dsize = tuple(int(res) for res in re.findall(r"\d+", dsize))
    img = cv2.imread(img_path)
    if img is None:
        print(f'image destroyed: {img_path}, please manually modify the numframes')
        return None
    img = cv2.resize(img, dsize, interpolation=cv2.INTER_LANCZOS4)
    return img


def resize_dataset(video_idx, dsize, info_dict, dataset_root, target_path):
    info = info_dict[video_idx]
    img_list = glob.glob(f"{dataset_root}/{info['folder']}")
    if len(img_list) == len(glob.glob(f"{target_path}/{info['folder']}")):
        return
    for img_path in img_list:
        rs_img = resize_img(img_path, dsize=dsize)
        if rs_img is None:
            # broken source image: shrink the recorded frame count and skip it
            info_dict[video_idx]['num_frames'] = info_dict[video_idx]['num_frames'] - 1
            continue
        rs_img_path = f"{target_path}/{info['fileid']}/{img_path.split('/')[-1]}"
        rs_img_dir = os.path.dirname(rs_img_path)
        if not os.path.exists(rs_img_dir):
            os.makedirs(rs_img_dir)
            cv2.imwrite(rs_img_path, rs_img)
        else:
            cv2.imwrite(rs_img_path, rs_img)


def run_mp_cmd(processes, process_func, process_args):
    with Pool(processes) as p:
        outputs = list(tqdm(p.imap(process_func, process_args), total=len(process_args)))
    return outputs


def run_cmd(func, args):
    return func(args)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Data process for Visual Alignment Constraint for Continuous Sign Language Recognition.')
    parser.add_argument('--dataset', type=str, default='CSL-Daily',
                        help='save prefix')
    parser.add_argument('--dataset-root', type=str, default='/disk1/dataset/CSL-Daily/sentence/frames_512x512',
                        help='path to the dataset')
    parser.add_argument('--target-path', type=str, default='/disk1/dataset/CSL-Daily_256x256px',
                        help='target path to the dataset')
    parser.add_argument('--annotation-file', type=str, default='video_map.txt',
                        help='annotation file')
    parser.add_argument('--split-file', type=str, default='split_1.txt',
                        help='split file')
    parser.add_argument('--output-res', type=str, default='256x256px',
                        help='resize resolution for image sequence')
    parser.add_argument('--process-image', '-p', action='store_true', default=False,
                        help='resize image')
    parser.add_argument('--multiprocessing', '-m', action='store_true', default=False,
                        help='whether to adopt multiprocessing to accelerate the preprocess')

    args = parser.parse_args()
    mode = ["train", "dev", "test"]
    sign_dict = dict()
    if not os.path.exists(f"./{args.dataset}"):
        os.makedirs(f"./{args.dataset}")

    # generate information dict
    information = csv2dict(args.dataset_root, f"./{args.dataset}/{args.annotation_file}")
    video_index = np.arange(len(information))
    if args.process_image:
        print(f"Resize image to {args.output_res}")
        if args.multiprocessing:
            run_mp_cmd(100, partial(resize_dataset, dsize=args.output_res, info_dict=information, dataset_root=args.dataset_root, target_path=args.target_path), video_index)
        else:
            for idx in tqdm(video_index):
                run_cmd(partial(resize_dataset, dsize=args.output_res, info_dict=information, dataset_root=args.dataset_root, target_path=args.target_path), idx)
                #resize_dataset(idx, dsize=args.output_res, info_dict=information)
    else:
        print("Don't resize images")

    with open(f"./{args.dataset}/{args.split_file}", 'r', encoding='utf-8') as f:
        files_list = f.readlines()
    train_files = []
    dev_files = []
    test_files = []
    for file_idx, file_info in tqdm(enumerate(files_list[1:]), total=len(files_list)-1):  # Exclude first line
        name, split = file_info.strip().split("|")
        if split == 'train':
            train_files.append(name)
        elif split == 'dev':
            dev_files.append(name)
        elif split == 'test':
            test_files.append(name)
    assert len(train_files) + len(dev_files) + len(test_files) == len(information)
    information_pack = dict()
    for md in mode:
        information_pack[md] = dict()
    train_id = 0
    dev_id = 0
    test_id = 0
    for info_key, info_data in information.items():
        if info_data['fileid'] in train_files:
            information_pack['train'][train_id] = info_data
            train_id += 1
        elif info_data['fileid'] in dev_files:
            information_pack['dev'][dev_id] = info_data
            dev_id += 1
        elif info_data['fileid'] in test_files:
            information_pack['test'][test_id] = info_data
            test_id += 1
        else:
            information_pack['train'][train_id] = info_data  #S000007_P0003_T00
            train_id += 1
    assert len(information_pack['train']) + len(information_pack['dev']) + len(information_pack['test']) == len(information)

    for md in mode:
        np.save(f"./{args.dataset}/{md}_info.npy", information_pack[md])
        # generate groundtruth stm for evaluation
        generate_gt_stm(information_pack[md], f"./{args.dataset}/{args.dataset}-groundtruth-{md}.stm")
    # update the total gloss dict
    sign_dict_update(sign_dict, information)
    sign_dict = sorted(sign_dict.items(), key=lambda d: d[0])
    save_dict = {}
    for idx, (key, value) in enumerate(sign_dict):
        save_dict[key] = [idx + 1, value]
    np.save(f"./{args.dataset}/gloss_dict.npy", save_dict)
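
For reference, a minimal sketch of reading back what the script above saves; paths assume the default --dataset CSL-Daily run. gloss_dict.npy maps each gloss to [index starting at 1, occurrence count], leaving index 0 for the CTC blank:

import numpy as np

# per-split info dict: integer keys -> per-video metadata
train_info = np.load("./CSL-Daily/train_info.npy", allow_pickle=True).item()
print(train_info[0]['fileid'], train_info[0]['label'])

# gloss vocabulary: gloss -> [1-based index, count]
gloss_dict = np.load("./CSL-Daily/gloss_dict.npy", allow_pickle=True).item()
gloss, (idx, count) = next(iter(gloss_dict.items()))
print(gloss, idx, count)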
CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-CSL.py
ADDED
@@ -0,0 +1,141 @@
import re
import os
import cv2
import pdb
import glob
import pandas
import argparse
import numpy as np
from tqdm import tqdm
from functools import partial
from multiprocessing import Pool


def csv2dict(dataset_root, anno_path):
    with open(anno_path, 'r', encoding='utf-8') as f:
        inputs_list = f.readlines()
    info_dict = dict()
    #info_dict['prefix'] = dataset_root #+ "/fullFrame-210x260px"
    print(f"Generate information dict from {anno_path}")
    for file_idx, file_info in tqdm(enumerate(inputs_list), total=len(inputs_list)):
        name, label = file_info.strip().split("|")
        num_frames = len(glob.glob(f"{name}/*.jpg"))
        info_dict[file_idx] = {
            'fileid': name,
            'folder': name.split('/')[-1],
            'signer': 'unknown',
            'label': label,
            'num_frames': num_frames,
            'original_info': file_info,
        }
    return info_dict


def generate_gt_stm(info, save_path):
    with open(save_path, "w") as f:
        for k, v in info.items():
            if not isinstance(k, int):
                continue
            f.writelines(f"{v['fileid']} 1 {v['signer']} 0.0 1.79769e+308 {v['label']}\n")


def sign_dict_update(total_dict, info):
    for k, v in info.items():
        if not isinstance(k, int):
            continue
        split_label = v['label'].split()
        for gloss in split_label:
            if gloss not in total_dict.keys():
                total_dict[gloss] = 1
            else:
                total_dict[gloss] += 1
    return total_dict


def resize_img(img_path, dsize='210x260px'):
    dsize = tuple(int(res) for res in re.findall(r"\d+", dsize))
    img = cv2.imread(img_path)
    if img is None:
        print(f'image destroyed: {img_path}, please manually modify the numframes')
        return None
    img = cv2.resize(img, dsize, interpolation=cv2.INTER_LANCZOS4)
    return img


def resize_dataset(video_idx, dsize, info_dict, target_path):
    info = info_dict[video_idx]
    img_list = glob.glob(f"{info['fileid']}/*.jpg")
    if len(img_list) == len(glob.glob(f"{target_path}/features/fullFrame-{dsize}/{info['folder']}/*.jpg")):
        return
    for img_path in img_list:
        rs_img = resize_img(img_path, dsize=dsize)
        if rs_img is None:
            info_dict[video_idx]['num_frames'] = info_dict[video_idx]['num_frames'] - 1
            continue
        rs_img_path = f"{target_path}/features/fullFrame-{dsize}/{info['folder']}/{img_path.split('/')[-1]}"
        rs_img_dir = os.path.dirname(rs_img_path)
        if not os.path.exists(rs_img_dir):
            os.makedirs(rs_img_dir)
            cv2.imwrite(rs_img_path, rs_img)
        else:
            cv2.imwrite(rs_img_path, rs_img)


def run_mp_cmd(processes, process_func, process_args):
    with Pool(processes) as p:
        outputs = list(tqdm(p.imap(process_func, process_args), total=len(process_args)))
    return outputs


def run_cmd(func, args):
    return func(args)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Data process for Visual Alignment Constraint for Continuous Sign Language Recognition.')
    parser.add_argument('--dataset', type=str, default='CSL',
                        help='save prefix')
    parser.add_argument('--dataset-root', type=str, default='/disk1/dataset/CSL_Continuous',
                        help='path to the dataset')
    parser.add_argument('--target-path', type=str, default='/disk1/dataset/CSL_Continuous_Resized',
                        help='target path to the dataset')
    parser.add_argument('--annotation-prefix', type=str, default='{}.txt',
                        help='annotation prefix')
    parser.add_argument('--output-res', type=str, default='256x256px',
                        help='resize resolution for image sequence')
    parser.add_argument('--process-image', '-p', action='store_true', default=False,
                        help='resize image')
    parser.add_argument('--multiprocessing', '-m', action='store_true', default=False,
                        help='whether to adopt multiprocessing to accelerate the preprocess')

    args = parser.parse_args()
    mode = ["train", "dev"]
    sign_dict = dict()
    if not os.path.exists(f"./{args.dataset}"):
        os.makedirs(f"./{args.dataset}")
    for md in mode:
        # generate information dict
        information = csv2dict(args.dataset_root, f"./{args.dataset}/{args.annotation_prefix.format(md)}")
        video_index = np.arange(len(information) - 1)
        if args.process_image:
            print(f"Resize image to {args.output_res}")
            if args.multiprocessing:
                run_mp_cmd(100, partial(resize_dataset, dsize=args.output_res, info_dict=information, target_path=args.target_path), video_index)
            else:
                for idx in tqdm(video_index):
                    run_cmd(partial(resize_dataset, dsize=args.output_res, info_dict=information, target_path=args.target_path), idx)
                    #resize_dataset(idx, dsize=args.output_res, info_dict=information)
        else:
            print("Don't resize images")
        np.save(f"./{args.dataset}/{md}_info.npy", information)
        # update the total gloss dict
        sign_dict_update(sign_dict, information)
        # generate groundtruth stm for evaluation
        generate_gt_stm(information, f"./{args.dataset}/{args.dataset}-groundtruth-{md}.stm")
    sign_dict = sorted(sign_dict.items(), key=lambda d: d[0])
    save_dict = {}
    for idx, (key, value) in enumerate(sign_dict):
        save_dict[key] = [idx + 1, value]
    np.save(f"./{args.dataset}/gloss_dict.npy", save_dict)
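
A quick check of the resolution parsing shared by these preprocess scripts: resize_img extracts the two integers from strings like '256x256px'. Standalone sketch:

import re

def parse_dsize(dsize='256x256px'):
    # re.findall pulls out every run of digits, so '256x256px' -> ('256', '256')
    return tuple(int(res) for res in re.findall(r"\d+", dsize))

assert parse_dsize('256x256px') == (256, 256)
assert parse_dsize('210x260px') == (210, 260)  # cv2.resize expects (width, height)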
CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess-T.py
ADDED
@@ -0,0 +1,129 @@
import re
import os
import cv2
import pdb
import glob
import pandas
import argparse
import numpy as np
from tqdm import tqdm
from functools import partial
from multiprocessing import Pool


def csv2dict(anno_path, dataset_type):
    inputs_list = pandas.read_csv(anno_path)
    inputs_list = (inputs_list.to_dict()['name|video|start|end|speaker|orth|translation'].values())
    info_dict = dict()
    info_dict['prefix'] = anno_path.rsplit("/", 3)[0] + "/features/fullFrame-210x260px"
    print(f"Generate information dict from {anno_path}")
    for file_idx, file_info in tqdm(enumerate(inputs_list), total=len(inputs_list)):
        name, video, start, end, speaker, orth, translation = file_info.split("|")
        num_frames = len(glob.glob(f"{info_dict['prefix']}/{dataset_type}/{video}"))
        info_dict[file_idx] = {
            'fileid': name,
            'folder': f"{dataset_type}/{video}",
            'signer': speaker,
            'label': orth,
            'num_frames': num_frames,
            'original_info': file_info,
        }
    return info_dict


def generate_gt_stm(info, save_path):
    with open(save_path, "w") as f:
        for k, v in info.items():
            if not isinstance(k, int):
                continue
            f.writelines(f"{v['fileid']} 1 {v['signer']} 0.0 1.79769e+308 {v['label']}\n")


def sign_dict_update(total_dict, info):
    for k, v in info.items():
        if not isinstance(k, int):
            continue
        split_label = v['label'].split()
        for gloss in split_label:
            if gloss not in total_dict.keys():
                total_dict[gloss] = 1
            else:
                total_dict[gloss] += 1
    return total_dict


def resize_img(img_path, dsize='210x260px'):
    dsize = tuple(int(res) for res in re.findall(r"\d+", dsize))
    img = cv2.imread(img_path)
    img = cv2.resize(img, dsize, interpolation=cv2.INTER_LANCZOS4)
    return img


def resize_dataset(video_idx, dsize, info_dict):
    info = info_dict[video_idx]
    img_list = glob.glob(f"{info_dict['prefix']}/{info['folder']}")
    for img_path in img_list:
        rs_img = resize_img(img_path, dsize=dsize)
        rs_img_path = img_path.replace("210x260px", dsize)
        rs_img_dir = os.path.dirname(rs_img_path)
        if not os.path.exists(rs_img_dir):
            os.makedirs(rs_img_dir)
            cv2.imwrite(rs_img_path, rs_img)
        else:
            cv2.imwrite(rs_img_path, rs_img)


def run_mp_cmd(processes, process_func, process_args):
    with Pool(processes) as p:
        outputs = list(tqdm(p.imap(process_func, process_args), total=len(process_args)))
    return outputs


def run_cmd(func, args):
    return func(args)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Data process for Visual Alignment Constraint for Continuous Sign Language Recognition.')
    parser.add_argument('--dataset', type=str, default='phoenix2014-T',
                        help='save prefix')
    parser.add_argument('--dataset-root', type=str, default='/disk1/dataset/PHOENIX-2014-T-release-v3/PHOENIX-2014-T',
                        help='path to the dataset')
    parser.add_argument('--annotation-prefix', type=str, default='annotations/manual/PHOENIX-2014-T.{}.corpus.csv',
                        help='annotation prefix')
    parser.add_argument('--output-res', type=str, default='256x256px',
                        help='resize resolution for image sequence')
    parser.add_argument('--process-image', '-p', action='store_true',
                        help='resize image')
    parser.add_argument('--multiprocessing', '-m', action='store_true',
                        help='whether to adopt multiprocessing to accelerate the preprocess')

    args = parser.parse_args()
    mode = ["dev", "test", "train"]
    sign_dict = dict()
    if not os.path.exists(f"./{args.dataset}"):
        os.makedirs(f"./{args.dataset}")
    for md in mode:
        # generate information dict
        information = csv2dict(f"{args.dataset_root}/{args.annotation_prefix.format(md)}", dataset_type=md)
        np.save(f"./{args.dataset}/{md}_info.npy", information)
        # update the total gloss dict
        sign_dict_update(sign_dict, information)
        # generate groundtruth stm for evaluation
        generate_gt_stm(information, f"./{args.dataset}/{args.dataset}-groundtruth-{md}.stm")
        # resize images
        video_index = np.arange(len(information) - 1)
        print(f"Resize image to {args.output_res}")
        if args.process_image:
            if args.multiprocessing:
                run_mp_cmd(10, partial(resize_dataset, dsize=args.output_res, info_dict=information), video_index)
            else:
                for idx in tqdm(video_index):
                    run_cmd(partial(resize_dataset, dsize=args.output_res, info_dict=information), idx)
                    #resize_dataset(idx, dsize=args.output_res, info_dict=information)
    sign_dict = sorted(sign_dict.items(), key=lambda d: d[0])
    save_dict = {}
    for idx, (key, value) in enumerate(sign_dict):
        save_dict[key] = [idx + 1, value]
    np.save(f"./{args.dataset}/gloss_dict.npy", save_dict)
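
The prefix derivation in csv2dict above strips the last three components from the annotation path; a worked trace using the default paths:

# anno_path = <dataset-root>/annotations/manual/PHOENIX-2014-T.train.corpus.csv
anno_path = ("/disk1/dataset/PHOENIX-2014-T-release-v3/PHOENIX-2014-T"
             "/annotations/manual/PHOENIX-2014-T.train.corpus.csv")
# rsplit("/", 3)[0] drops 'annotations', 'manual' and the csv filename
prefix = anno_path.rsplit("/", 3)[0] + "/features/fullFrame-210x260px"
assert prefix == ("/disk1/dataset/PHOENIX-2014-T-release-v3/PHOENIX-2014-T"
                  "/features/fullFrame-210x260px")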
CorrNet_Plus/CorrNet_Plus_CSLR/preprocess/dataset_preprocess.py
ADDED
@@ -0,0 +1,131 @@
import re
import os
import cv2
import pdb
import glob
import pandas
import argparse
import numpy as np
from tqdm import tqdm
from functools import partial
from multiprocessing import Pool


def csv2dict(anno_path, dataset_type):
    inputs_list = pandas.read_csv(anno_path)
    if dataset_type == 'train':
        broken_data = [2390]  # drop the known broken training sample
        inputs_list.drop(broken_data, inplace=True)
    inputs_list = (inputs_list.to_dict()['id|folder|signer|annotation'].values())
    info_dict = dict()
    info_dict['prefix'] = anno_path.rsplit("/", 3)[0] + "/features/fullFrame-210x260px"
    print(f"Generate information dict from {anno_path}")
    for file_idx, file_info in tqdm(enumerate(inputs_list), total=len(inputs_list)):
        fileid, folder, signer, label = file_info.split("|")
        num_frames = len(glob.glob(f"{info_dict['prefix']}/{dataset_type}/{folder}"))
        info_dict[file_idx] = {
            'fileid': fileid,
            'folder': f"{dataset_type}/{folder}",
            'signer': signer,
            'label': label,
            'num_frames': num_frames,
            'original_info': file_info,
        }
    return info_dict


def generate_gt_stm(info, save_path):
    with open(save_path, "w") as f:
        for k, v in info.items():
            if not isinstance(k, int):
                continue
            f.writelines(f"{v['fileid']} 1 {v['signer']} 0.0 1.79769e+308 {v['label']}\n")


def sign_dict_update(total_dict, info):
    for k, v in info.items():
        if not isinstance(k, int):
            continue
        split_label = v['label'].split()
        for gloss in split_label:
            if gloss not in total_dict.keys():
                total_dict[gloss] = 1
            else:
                total_dict[gloss] += 1
    return total_dict


def resize_img(img_path, dsize='210x260px'):
    dsize = tuple(int(res) for res in re.findall(r"\d+", dsize))
    img = cv2.imread(img_path)
    img = cv2.resize(img, dsize, interpolation=cv2.INTER_LANCZOS4)
    return img


def resize_dataset(video_idx, dsize, info_dict):
    info = info_dict[video_idx]
    img_list = glob.glob(f"{info_dict['prefix']}/{info['folder']}")
    for img_path in img_list:
        rs_img = resize_img(img_path, dsize=dsize)
        rs_img_path = img_path.replace("210x260px", dsize)
        rs_img_dir = os.path.dirname(rs_img_path)
        if not os.path.exists(rs_img_dir):
            os.makedirs(rs_img_dir)
            cv2.imwrite(rs_img_path, rs_img)
        else:
            cv2.imwrite(rs_img_path, rs_img)


def run_mp_cmd(processes, process_func, process_args):
    with Pool(processes) as p:
        outputs = list(tqdm(p.imap(process_func, process_args), total=len(process_args)))
    return outputs


def run_cmd(func, args):
    return func(args)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Data process for Visual Alignment Constraint for Continuous Sign Language Recognition.')
    parser.add_argument('--dataset', type=str, default='phoenix2014',
                        help='save prefix')
    parser.add_argument('--dataset-root', type=str, default='../dataset/phoenix2014/phoenix-2014-multisigner',
                        help='path to the dataset')
    parser.add_argument('--annotation-prefix', type=str, default='annotations/manual/{}.corpus.csv',
                        help='annotation prefix')
    parser.add_argument('--output-res', type=str, default='256x256px',
                        help='resize resolution for image sequence')
    parser.add_argument('--process-image', '-p', action='store_true',
                        help='resize image')
    parser.add_argument('--multiprocessing', '-m', action='store_true',
                        help='whether to adopt multiprocessing to accelerate the preprocess')

    args = parser.parse_args()
    mode = ["dev", "test", "train"]
    sign_dict = dict()
    if not os.path.exists(f"./{args.dataset}"):
        os.makedirs(f"./{args.dataset}")
    for md in mode:
        # generate information dict
        information = csv2dict(f"{args.dataset_root}/{args.annotation_prefix.format(md)}", dataset_type=md)
        np.save(f"./{args.dataset}/{md}_info.npy", information)
        # update the total gloss dict
        sign_dict_update(sign_dict, information)
        # generate groundtruth stm for evaluation
        generate_gt_stm(information, f"./{args.dataset}/{args.dataset}-groundtruth-{md}.stm")
        # resize images
        video_index = np.arange(len(information) - 1)
        print(f"Resize image to {args.output_res}")
        if args.process_image:
            if args.multiprocessing:
                run_mp_cmd(10, partial(resize_dataset, dsize=args.output_res, info_dict=information), video_index)
            else:
                for idx in tqdm(video_index):
                    run_cmd(partial(resize_dataset, dsize=args.output_res, info_dict=information), idx)
    sign_dict = sorted(sign_dict.items(), key=lambda d: d[0])
    save_dict = {}
    for idx, (key, value) in enumerate(sign_dict):
        save_dict[key] = [idx + 1, value]
    np.save(f"./{args.dataset}/gloss_dict.npy", save_dict)
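
All four preprocess scripts share the same run_mp_cmd parallelization pattern: functools.partial freezes the fixed keyword arguments so Pool.imap only feeds the varying video index. A self-contained toy sketch (the scale worker is hypothetical):

from functools import partial
from multiprocessing import Pool

def scale(x, factor):
    return x * factor

if __name__ == '__main__':
    # partial() fixes factor=10, so imap supplies only the positional x,
    # exactly as run_mp_cmd supplies only the video index
    with Pool(4) as p:
        out = list(p.imap(partial(scale, factor=10), range(5)))
    assert out == [0, 10, 20, 30, 40]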
CorrNet_Plus/CorrNet_Plus_CSLR/requirements.txt
ADDED
@@ -0,0 +1,9 @@
matplotlib==3.4.3
numpy==1.20.3
opencv_python==4.5.5.64
pandas==1.3.4
Pillow==9.4.0
PyYAML==6.0
scipy==1.7.1
six==1.16.0
tqdm==4.62.3
CorrNet_Plus/CorrNet_Plus_CSLR/seq_scripts.py
ADDED
@@ -0,0 +1,164 @@
import os
import pdb
import sys
import copy
import torch
import numpy as np
import torch.nn as nn
from tqdm import tqdm
import torch.nn.functional as F
import matplotlib.pyplot as plt
from evaluation.slr_eval.wer_calculation import evaluate
from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler
import gc

def seq_train(loader, model, optimizer, device, epoch_idx, recoder):
    model.train()
    loss_value = []
    clr = [group['lr'] for group in optimizer.optimizer.param_groups]
    scaler = GradScaler()
    for batch_idx, data in enumerate(tqdm(loader)):
        vid = device.data_to_device(data[0])
        vid_lgt = device.data_to_device(data[1])
        label = device.data_to_device(data[2])
        label_lgt = device.data_to_device(data[3])
        optimizer.zero_grad()
        with autocast():
            ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt)
            loss, _ = model.criterion_calculation(ret_dict, label, label_lgt)
        if np.isinf(loss.item()) or np.isnan(loss.item()):
            print('loss is nan')
            #print(data[-1])
            print(str(data[1])+' frames')
            print(str(data[3])+' glosses')
            del ret_dict
            del loss
            continue
        scaler.scale(loss).backward()
        scaler.step(optimizer.optimizer)
        scaler.update()
        # nn.utils.clip_grad_norm_(model.rnn.parameters(), 5)
        loss_value.append(loss.item())
        if batch_idx % recoder.log_interval == 0:
            recoder.print_log(
                '\tEpoch: {}, Batch({}/{}) done. Loss: {:.8f} lr:{:.6f}'
                .format(epoch_idx, batch_idx, len(loader), loss.item(), clr[0]))
        del ret_dict
        del loss
    optimizer.scheduler.step()
    recoder.print_log('\tMean training loss: {:.10f}.'.format(np.mean(loss_value)))
    del loss_value
    del clr
    gc.collect()
    torch.cuda.empty_cache()
    return


def seq_eval(cfg, loader, model, device, mode, epoch, work_dir, recoder,
             evaluate_tool="python"):
    model.eval()
    total_sent = []
    total_info = []
    total_conv_sent = []
    stat = {i: [0, 0] for i in range(len(loader.dataset.dict))}
    for batch_idx, data in enumerate(tqdm(loader)):
        recoder.record_timer("device")
        vid = device.data_to_device(data[0])
        vid_lgt = device.data_to_device(data[1])
        label = device.data_to_device(data[2])
        label_lgt = device.data_to_device(data[3])
        with torch.no_grad():
            ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt)

        total_info += [file_name.split("|")[0] for file_name in data[-1]]
        total_sent += ret_dict['recognized_sents']
        total_conv_sent += ret_dict['conv_sents']
    try:
        python_eval = True if evaluate_tool == "python" else False
        write2file(work_dir + "output-hypothesis-{}.ctm".format(mode), total_info, total_sent)
        write2file(work_dir + "output-hypothesis-{}-conv.ctm".format(mode), total_info,
                   total_conv_sent)
        conv_ret = evaluate(
            prefix=work_dir, mode=mode, output_file="output-hypothesis-{}-conv.ctm".format(mode),
            evaluate_dir=cfg.dataset_info['evaluation_dir'],
            evaluate_prefix=cfg.dataset_info['evaluation_prefix'],
            output_dir="epoch_{}_result/".format(epoch),
            python_evaluate=python_eval,
        )
        lstm_ret = evaluate(
            prefix=work_dir, mode=mode, output_file="output-hypothesis-{}.ctm".format(mode),
            evaluate_dir=cfg.dataset_info['evaluation_dir'],
            evaluate_prefix=cfg.dataset_info['evaluation_prefix'],
            output_dir="epoch_{}_result/".format(epoch),
            python_evaluate=python_eval,
            triplet=True,
        )
    except:
        print("Unexpected error:", sys.exc_info()[0])
        lstm_ret = 100.0
    finally:
        pass
    del conv_ret
    del total_sent
    del total_info
    del total_conv_sent
    del vid
    del vid_lgt
    del label
    del label_lgt
    gc.collect()
    recoder.print_log(f"Epoch {epoch}, {mode} {lstm_ret: 2.2f}%", f"{work_dir}/{mode}.txt")
    return lstm_ret


def seq_feature_generation(loader, model, device, mode, work_dir, recoder):
    model.eval()

    src_path = os.path.abspath(f"{work_dir}{mode}")
    tgt_path = os.path.abspath(f"./features/{mode}")
    if not os.path.exists("./features/"):
        os.makedirs("./features/")

    if os.path.islink(tgt_path):
        curr_path = os.readlink(tgt_path)
        if work_dir[1:] in curr_path and os.path.isabs(curr_path):
            return
        else:
            os.unlink(tgt_path)
    else:
        if os.path.exists(src_path) and len(loader.dataset) == len(os.listdir(src_path)):
            os.symlink(src_path, tgt_path)
            return

    for batch_idx, data in tqdm(enumerate(loader)):
        recoder.record_timer("device")
        vid = device.data_to_device(data[0])
        vid_lgt = device.data_to_device(data[1])
        with torch.no_grad():
            ret_dict = model(vid, vid_lgt)
        if not os.path.exists(src_path):
            os.makedirs(src_path)
        start = 0
        for sample_idx in range(len(vid)):
            end = start + data[3][sample_idx]
            filename = f"{src_path}/{data[-1][sample_idx].split('|')[0]}_features.npy"
            save_file = {
                "label": data[2][start:end],
                "features": ret_dict['framewise_features'][sample_idx][:, :vid_lgt[sample_idx]].T.cpu().detach(),
            }
            np.save(filename, save_file)
            start = end
        assert end == len(data[2])
    os.symlink(src_path, tgt_path)


def write2file(path, info, output):
    filereader = open(path, "w")
    for sample_idx, sample in enumerate(output):
        for word_idx, word in enumerate(sample):
            filereader.writelines(
                "{} 1 {:.2f} {:.2f} {}\n".format(info[sample_idx],
                                                 word_idx * 1.0 / 100,
                                                 (word_idx + 1) * 1.0 / 100,
                                                 word[0]))
    filereader.close()
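
write2file above emits one CTM line per hypothesized gloss with 10 ms pseudo-timestamps, since CTC decoding yields no real time alignment. An illustrative call (the utterance id and glosses are made up):

info = ["video01"]                        # illustrative utterance id
output = [[("ICH", 0), ("WETTER", 1)]]    # (gloss, position) pairs
write2file("demo.ctm", info, output)
# demo.ctm now contains:
#   video01 1 0.00 0.01 ICH
#   video01 1 0.01 0.02 WETTER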
CorrNet_Plus/CorrNet_Plus_CSLR/slr_network.py
ADDED
@@ -0,0 +1,146 @@
import pdb
import copy
import utils
import torch
import types
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from modules.criterions import SeqKD
from modules import BiLSTMLayer, TemporalConv
import modules.resnet as resnet

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x


class NormLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super(NormLinear, self).__init__()
        self.weight = nn.Parameter(torch.Tensor(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight, gain=nn.init.calculate_gain('relu'))

    def forward(self, x):
        outputs = torch.matmul(x, F.normalize(self.weight, dim=0))
        return outputs


class SLRModel(nn.Module):
    def __init__(
            self, num_classes, c2d_type, conv_type, use_bn=False,
            hidden_size=1024, gloss_dict=None, loss_weights=None,
            weight_norm=True, share_classifier=True
    ):
        super(SLRModel, self).__init__()
        self.decoder = None
        self.loss = dict()
        self.criterion_init()
        self.num_classes = num_classes
        self.loss_weights = loss_weights
        #self.conv2d = getattr(models, c2d_type)(pretrained=True)
        self.conv2d = getattr(resnet, c2d_type)()
        self.conv2d.fc = Identity()

        self.conv1d = TemporalConv(input_size=512,
                                   hidden_size=hidden_size,
                                   conv_type=conv_type,
                                   use_bn=use_bn,
                                   num_classes=num_classes)
        self.decoder = utils.Decode(gloss_dict, num_classes, 'beam')
        self.temporal_model = BiLSTMLayer(rnn_type='LSTM', input_size=hidden_size, hidden_size=hidden_size,
                                          num_layers=2, bidirectional=True)
        if weight_norm:
            self.classifier = NormLinear(hidden_size, self.num_classes)
            self.conv1d.fc = NormLinear(hidden_size, self.num_classes)
        else:
            self.classifier = nn.Linear(hidden_size, self.num_classes)
            self.conv1d.fc = nn.Linear(hidden_size, self.num_classes)
        if share_classifier:
            self.conv1d.fc = self.classifier
        #self.register_backward_hook(self.backward_hook)

    def backward_hook(self, module, grad_input, grad_output):
        for g in grad_input:
            g[g != g] = 0

    def masked_bn(self, inputs, len_x):
        def pad(tensor, length):
            return torch.cat([tensor, tensor.new(length - tensor.size(0), *tensor.size()[1:]).zero_()])

        x = torch.cat([inputs[len_x[0] * idx:len_x[0] * idx + lgt] for idx, lgt in enumerate(len_x)])
        x = self.conv2d(x)
        x = torch.cat([pad(x[sum(len_x[:idx]):sum(len_x[:idx + 1])], len_x[0])
                       for idx, lgt in enumerate(len_x)])
        return x

    def forward(self, x, len_x, label=None, label_lgt=None):
        if len(x.shape) == 5:
            # videos
            batch, temp, channel, height, width = x.shape
            #inputs = x.reshape(batch * temp, channel, height, width)
            #framewise = self.masked_bn(inputs, len_x)
            #framewise = framewise.reshape(batch, temp, -1).transpose(1, 2)
            framewise = self.conv2d(x.permute(0, 2, 1, 3, 4)).view(batch, temp, -1).permute(0, 2, 1)  # btc -> bct
        else:
            # frame-wise features
            framewise = x

        conv1d_outputs = self.conv1d(framewise, len_x)
        # x: T, B, C
        x = conv1d_outputs['visual_feat']
        lgt = conv1d_outputs['feat_len']
        tm_outputs = self.temporal_model(x, lgt)
        outputs = self.classifier(tm_outputs['predictions'])
        pred = None if self.training \
            else self.decoder.decode(outputs, lgt, batch_first=False, probs=False)
        conv_pred = None if self.training \
            else self.decoder.decode(conv1d_outputs['conv_logits'], lgt, batch_first=False, probs=False)

        return {
            #"framewise_features": framewise,
            #"visual_features": x,
            "feat_len": lgt,
            "conv_logits": conv1d_outputs['conv_logits'],
            "sequence_logits": outputs,
            "conv_sents": conv_pred,
            "recognized_sents": pred,
            "loss_LiftPool_u": conv1d_outputs['loss_LiftPool_u'],
            "loss_LiftPool_p": conv1d_outputs['loss_LiftPool_p'],
        }

    def criterion_calculation(self, ret_dict, label, label_lgt):
        loss = 0
        total_loss = {}
        for k, weight in self.loss_weights.items():
            if k == 'ConvCTC':
                total_loss['ConvCTC'] = weight * self.loss['CTCLoss'](ret_dict["conv_logits"].log_softmax(-1),
                                                                      label.cpu().int(), ret_dict["feat_len"].cpu().int(),
                                                                      label_lgt.cpu().int()).mean()
                loss += total_loss['ConvCTC']
            elif k == 'SeqCTC':
                total_loss['SeqCTC'] = weight * self.loss['CTCLoss'](ret_dict["sequence_logits"].log_softmax(-1),
                                                                     label.cpu().int(), ret_dict["feat_len"].cpu().int(),
                                                                     label_lgt.cpu().int()).mean()
                loss += total_loss['SeqCTC']
            elif k == 'Dist':
                total_loss['Dist'] = weight * self.loss['distillation'](ret_dict["conv_logits"],
                                                                        ret_dict["sequence_logits"].detach(),
                                                                        use_blank=False)
                loss += total_loss['Dist']
            elif k == 'Cu':
                total_loss['Cu'] = weight * ret_dict["loss_LiftPool_u"]
                loss += total_loss['Cu']
            elif k == 'Cp':
                total_loss['Cp'] = weight * ret_dict["loss_LiftPool_p"]
                loss += total_loss['Cp']
        return loss, total_loss

    def criterion_init(self):
        self.loss['CTCLoss'] = torch.nn.CTCLoss(reduction='none', zero_infinity=False)
        self.loss['distillation'] = SeqKD(T=8)
        return self.loss
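
NormLinear above classifies against L2-normalized weight columns, so each gloss logit depends only on the direction, not the magnitude, of its class weight. A standalone check using the model's default hidden_size=1024 (the class count here is illustrative):

import torch
import torch.nn.functional as F

weight = torch.randn(1024, 1296)              # (hidden_size, num_classes), illustrative
normed = F.normalize(weight, dim=0)           # every class column now has unit L2 norm
assert torch.allclose(normed.norm(dim=0), torch.ones(1296), atol=1e-5)
logits = torch.randn(4, 1024) @ normed        # (batch, num_classes)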
CorrNet_Plus/CorrNet_Plus_CSLR/test_one_video.py
ADDED
@@ -0,0 +1,124 @@
#Ref: https://blog.csdn.net/weixin_41735859/article/details/106474768
import numpy as np
import os
import glob
import cv2
from utils import video_augmentation
from slr_network import SLRModel
import torch
from collections import OrderedDict
import utils
from decord import VideoReader, cpu
import argparse

VIDEO_FORMATS = [".mp4", ".avi", ".mov", ".mkv"]

def is_image_by_extension(file_path):
    _, file_extension = os.path.splitext(file_path)

    image_extensions = ['.jpg', '.jpeg', '.png', '.gif', '.bmp']

    return file_extension.lower() in image_extensions

def load_video(video_path, max_frames_num=360):
    if type(video_path) == str:
        vr = VideoReader(video_path, ctx=cpu(0))
    elif type(video_path) == list:
        vr = VideoReader(video_path[0], ctx=cpu(0))
    else:
        raise ValueError(f"Not support video input : {type(video_path)}")
    total_frame_num = len(vr)
    if total_frame_num > max_frames_num:
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
    else:
        uniform_sampled_frames = np.linspace(0, total_frame_num - 1, dtype=int)
    frame_idx = uniform_sampled_frames.tolist()
    spare_frames = vr.get_batch(frame_idx).asnumpy()
    return [cv2.cvtColor(tmp, cv2.COLOR_BGR2RGB) for tmp in spare_frames]  # (frames, height, width, channels)

parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, help="The path to pretrained weights")
parser.add_argument("--video_path", type=str, help="The path to a video file or a dir that contains extracted images from a video")
parser.add_argument("--device", type=int, default=0, help="Which device to run inference")
parser.add_argument("--language", type=str, default='phoenix', choices=['phoenix', 'csl'], help="The target sign language")
parser.add_argument("--max_frames_num", type=int, default=360, help="The max input frames sampled from an input video")

args = parser.parse_args()


device_id = args.device  # specify which gpu to use
if args.language == 'phoenix':
    dataset = 'phoenix2014'
elif args.language == 'csl':
    dataset = 'CSL-Daily'
else:
    raise ValueError("Please select target language from ['phoenix', 'csl'] in your command")


# Load data and apply transformation
dict_path = f'./preprocess/{dataset}/gloss_dict.npy'  # Use the gloss dict of the target dataset
gloss_dict = np.load(dict_path, allow_pickle=True).item()

if os.path.isdir(args.video_path):  # extracted images of a video
    img_list = []
    for img_path in sorted(os.listdir(args.video_path)):
        cur_path = os.path.join(args.video_path, img_path)
        if is_image_by_extension(cur_path):
            img_list.append(cv2.cvtColor(cv2.imread(cur_path), cv2.COLOR_BGR2RGB))
elif os.path.splitext(args.video_path)[-1] in VIDEO_FORMATS:  # video case
    try:
        img_list = load_video(args.video_path, args.max_frames_num)  # frames [height, width, channels]
    except Exception as e:
        raise ValueError(f"Error {e} in loading video")

transform = video_augmentation.Compose([
    video_augmentation.CenterCrop(224),
    video_augmentation.Resize(1.0),
    video_augmentation.ToTensor(),
])
vid, label = transform(img_list, None, None)
vid = vid.float() / 127.5 - 1
vid = vid.unsqueeze(0)

left_pad = 0
last_stride = 1
total_stride = 1
kernel_sizes = ['K5', "P2", 'K5', "P2"]
for layer_idx, ks in enumerate(kernel_sizes):
    if ks[0] == 'K':
        left_pad = left_pad * last_stride
        left_pad += int((int(ks[1])-1)/2)
    elif ks[0] == 'P':
        last_stride = int(ks[1])
        total_stride = total_stride * last_stride

max_len = vid.size(1)
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad])
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
max_len = max_len + left_pad + right_pad
vid = torch.cat(
    (
        vid[0, 0][None].expand(left_pad, -1, -1, -1),
        vid[0],
        vid[0, -1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
    )
    , dim=0).unsqueeze(0)

device = utils.GpuDataParallel()
device.set_device(device_id)
# Define model and load state-dict
model = SLRModel(num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
                 loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
state_dict = torch.load(args.model_path)['model_state_dict']
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
model.load_state_dict(state_dict, strict=True)
model = model.to(device.output_device)
model.cuda()

model.eval()

vid = device.data_to_device(vid)
vid_lgt = device.data_to_device(video_length)
ret_dict = model(vid, vid_lgt, label=None, label_lgt=None)
print('output glosses : {}'.format(ret_dict['recognized_sents']))
# Example
# output glosses : [[('ICH', 0), ('LUFT', 1), ('WETTER', 2), ('GERADE', 3), ('loc-SUEDWEST', 4), ('TEMPERATUR', 5), ('__PU__', 6), ('KUEHL', 7), ('SUED', 8), ('WARM', 9), ('ICH', 10), ('IX', 11)]]
CorrNet_Plus/CorrNet_Plus_CSLR/utils/__init__.py
ADDED
@@ -0,0 +1,7 @@
from .device import GpuDataParallel
from .decode import Decode
from .optimizer import Optimizer
from .pack_code import pack_code
from .parameters import get_parser
from .random_state import RandomState
from .record import Recorder
CorrNet_Plus/CorrNet_Plus_CSLR/utils/__pycache__/decode.cpython-38.pyc
ADDED
Binary file (3.81 kB).
CorrNet_Plus/CorrNet_Plus_CSLR/utils/decode.py
ADDED
@@ -0,0 +1,66 @@
import os
import pdb
import time
import torch
import ctcdecode
import numpy as np
from itertools import groupby
import torch.nn.functional as F


class Decode(object):
    def __init__(self, gloss_dict, num_classes, search_mode, blank_id=0):
        self.i2g_dict = dict((v[0], k) for k, v in gloss_dict.items())
        self.g2i_dict = {v: k for k, v in self.i2g_dict.items()}
        self.num_classes = num_classes
        self.search_mode = search_mode
        self.blank_id = blank_id
        vocab = [chr(x) for x in range(20000, 20000 + num_classes)]
        self.ctc_decoder = ctcdecode.CTCBeamDecoder(vocab, beam_width=10, blank_id=blank_id,
                                                    num_processes=10)

    def decode(self, nn_output, vid_lgt, batch_first=True, probs=False):
        if not batch_first:
            nn_output = nn_output.permute(1, 0, 2)
        if self.search_mode == "max":
            return self.MaxDecode(nn_output, vid_lgt)
        else:
            return self.BeamSearch(nn_output, vid_lgt, probs)

    def BeamSearch(self, nn_output, vid_lgt, probs=False):
        '''
        CTCBeamDecoder Shape:
            - Input: nn_output (B, T, N), which should be passed through a softmax layer
            - Output: beam_results (B, N_beams, T), int, need to be decoded by i2g_dict
                      beam_scores (B, N_beams), p=1/np.exp(beam_score)
                      timesteps (B, N_beams)
                      out_lens (B, N_beams)
        '''
        if not probs:
            nn_output = nn_output.softmax(-1).cpu()
        vid_lgt = vid_lgt.cpu()
        beam_result, beam_scores, timesteps, out_seq_len = self.ctc_decoder.decode(nn_output, vid_lgt)
        ret_list = []
        for batch_idx in range(len(nn_output)):
            first_result = beam_result[batch_idx][0][:out_seq_len[batch_idx][0]]
            if len(first_result) != 0:
                first_result = torch.stack([x[0] for x in groupby(first_result)])
            ret_list.append([(self.i2g_dict[int(gloss_id)], idx) for idx, gloss_id in
                             enumerate(first_result)])
        return ret_list

    def MaxDecode(self, nn_output, vid_lgt):
        index_list = torch.argmax(nn_output, axis=2)
        batchsize, lgt = index_list.shape
        ret_list = []
        for batch_idx in range(batchsize):
            group_result = [x[0] for x in groupby(index_list[batch_idx][:vid_lgt[batch_idx]])]
            filtered = [*filter(lambda x: x != self.blank_id, group_result)]
            if len(filtered) > 0:
                max_result = torch.stack(filtered)
                max_result = [x[0] for x in groupby(max_result)]
            else:
                max_result = filtered
            ret_list.append([(self.i2g_dict[int(gloss_id)], idx) for idx, gloss_id in
                             enumerate(max_result)])
        return ret_list
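
For reference, the repeat-merge / blank-drop collapse that MaxDecode above applies, shown in pure Python (the frame ids are illustrative, blank_id = 0):

from itertools import groupby

raw = [0, 3, 3, 0, 5, 5, 5, 0, 3]            # per-frame argmax ids
merged = [k for k, _ in groupby(raw)]        # -> [0, 3, 0, 5, 0, 3]
decoded = [k for k in merged if k != 0]      # -> [3, 5, 3]
assert decoded == [3, 5, 3]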
CorrNet_Plus/CorrNet_Plus_CSLR/utils/device.py
ADDED
@@ -0,0 +1,57 @@
import os
import pdb
import torch
import torch.nn as nn


class GpuDataParallel(object):
    def __init__(self):
        self.gpu_list = []
        self.output_device = None

    def set_device(self, device):
        device = str(device)
        if device != 'None':
            self.gpu_list = [i for i in range(len(device.split(',')))]
            os.environ["CUDA_VISIBLE_DEVICES"] = device
            output_device = self.gpu_list[0]
            self.occupy_gpu(self.gpu_list)
        self.output_device = output_device if len(self.gpu_list) > 0 else "cpu"

    def model_to_device(self, model):
        # model = convert_model(model)
        model = model.to(self.output_device)
        if len(self.gpu_list) > 1:
            model = nn.DataParallel(
                model,
                device_ids=self.gpu_list,
                output_device=self.output_device)
        return model

    def data_to_device(self, data):
        if isinstance(data, torch.FloatTensor):
            return data.to(self.output_device)
        elif isinstance(data, torch.DoubleTensor):
            return data.float().to(self.output_device)
        elif isinstance(data, torch.ByteTensor):
            return data.long().to(self.output_device)
        elif isinstance(data, torch.LongTensor):
            return data.to(self.output_device)
        elif isinstance(data, list) or isinstance(data, tuple):
            return [self.data_to_device(d) for d in data]
        else:
            raise ValueError(data.shape, "Unknown Dtype: {}".format(data.dtype))

    def criterion_to_device(self, loss):
        return loss.to(self.output_device)

    def occupy_gpu(self, gpus=None):
        """
        make program appear on nvidia-smi.
        """
        if len(gpus) == 0:
            torch.zeros(1).cuda()
        else:
            gpus = [gpus] if isinstance(gpus, int) else list(gpus)
            for g in gpus:
                torch.zeros(1).cuda(g)
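
data_to_device above normalizes dtypes while moving tensors; the two conversions shown in isolation (a CPU-only sketch):

import torch

x64 = torch.zeros(3, dtype=torch.float64)    # DoubleTensor -> float32 before the GPU
assert x64.float().dtype == torch.float32
xb = torch.zeros(3, dtype=torch.uint8)       # ByteTensor -> int64, e.g. for label ids
assert xb.long().dtype == torch.int64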
CorrNet_Plus/CorrNet_Plus_CSLR/utils/optimizer.py
ADDED
@@ -0,0 +1,59 @@
import pdb
import torch
import numpy as np
import torch.optim as optim


class Optimizer(object):
    def __init__(self, model, optim_dict):
        self.optim_dict = optim_dict
        if self.optim_dict["optimizer"] == 'SGD':
            self.optimizer = optim.SGD(
                model.parameters(),  # optim.SGD expects an iterable of parameters, not the module itself
                lr=self.optim_dict['base_lr'],
                momentum=0.9,
                nesterov=self.optim_dict['nesterov'],
                weight_decay=self.optim_dict['weight_decay']
            )
        elif self.optim_dict["optimizer"] == 'Adam':
            alpha = self.optim_dict['learning_ratio']  # ratio reserved for the per-module groups below
            self.optimizer = optim.Adam(
                # Alternative: per-module parameter groups, e.g.
                # [{'params': model.conv2d.parameters(), 'lr': self.optim_dict['base_lr'] * alpha},
                #  {'params': model.conv1d.parameters(), 'lr': self.optim_dict['base_lr'] * alpha},
                #  {'params': model.rnn.parameters()},
                #  {'params': model.classifier.parameters()}],
                model.parameters(),
                lr=self.optim_dict['base_lr'],
                weight_decay=self.optim_dict['weight_decay']
            )
        else:
            raise ValueError("Unsupported optimizer: {}".format(self.optim_dict["optimizer"]))
        self.scheduler = self.define_lr_scheduler(self.optimizer, self.optim_dict['step'])

    def define_lr_scheduler(self, optimizer, milestones):
        if self.optim_dict["optimizer"] in ['SGD', 'Adam']:
            lr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones, gamma=0.2)
            return lr_scheduler
        else:
            raise ValueError("No scheduler defined for optimizer: {}".format(self.optim_dict["optimizer"]))

    def zero_grad(self):
        self.optimizer.zero_grad()

    def step(self):
        self.optimizer.step()

    def state_dict(self):
        return self.optimizer.state_dict()

    def load_state_dict(self, state_dict):
        self.optimizer.load_state_dict(state_dict)

    def to(self, device):
        # Move any optimizer state tensors (e.g. momentum buffers) to the target device
        for state in self.optimizer.state.values():
            for k, v in state.items():
                if isinstance(v, torch.Tensor):
                    state[k] = v.to(device)
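
For reference, a minimal sketch of driving this Optimizer wrapper in a training loop. The model and the config dict are made-up placeholders shaped like the defaults in utils/parameters.py; this is not the repo's actual training loop.

```python
# Minimal sketch, assuming an optim_dict shaped like the defaults in utils/parameters.py.
import torch.nn as nn
from utils.optimizer import Optimizer

model = nn.Linear(512, 1296)  # placeholder module
optimizer = Optimizer(model, {
    "optimizer": "Adam", "base_lr": 1e-4, "learning_ratio": 1.0,
    "weight_decay": 1e-4, "step": [20, 30],
})
for epoch in range(2):
    optimizer.zero_grad()
    # loss.backward() would go here
    optimizer.step()
    optimizer.scheduler.step()  # decay LR at the configured milestones
```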
CorrNet_Plus/CorrNet_Plus_CSLR/utils/pack_code.py
ADDED
@@ -0,0 +1,24 @@
import subprocess
from pathlib import Path
import logging
import os

logger = logging.getLogger(__name__)


def pack_code(git_root, run_dir):
    if os.path.isdir(f"{git_root}/.git"):
        # Archive the committed tree so the exact code state of this run can be reproduced.
        # Note: the git commands run in the current working directory, which is
        # assumed to be the repository root.
        subprocess.run(
            ['git', 'archive', '-o', f"{run_dir}/code.tar.gz", 'HEAD'],
            check=True,
        )
        diff_process = subprocess.run(
            ['git', 'diff', 'HEAD'],
            check=True, stdout=subprocess.PIPE, text=True,
        )
        if diff_process.stdout:
            logger.warning('Working tree is dirty. Patch:\n%s', diff_process.stdout)
            with open(f"{run_dir}/dirty.patch", 'w') as f:
                f.write(diff_process.stdout)
    else:
        logger.warning('.git does not exist in %s', git_root)
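
A usage sketch for the snapshot helper above; the paths are illustrative and `run_dir` is assumed to exist already.

```python
# Illustrative call; assumes the process was launched from the repository root.
import logging
from utils.pack_code import pack_code

logging.basicConfig(level=logging.INFO)
pack_code(git_root=".", run_dir="./work_dir/temp")
# -> ./work_dir/temp/code.tar.gz (committed tree) and, if the tree is dirty,
#    ./work_dir/temp/dirty.patch with the uncommitted changes.
```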
CorrNet_Plus/CorrNet_Plus_CSLR/utils/parameters.py
ADDED
@@ -0,0 +1,159 @@
import argparse


def get_parser():
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # parameter priority: command line > config file > default
    parser = argparse.ArgumentParser(
        description='The pytorch implementation for Visual Alignment Constraint '
                    'for Continuous Sign Language Recognition.')
    parser.add_argument(
        '--work-dir',
        default='./work_dir/temp',
        help='the work folder for storing results')
    parser.add_argument(
        '--config',
        default='./configs/baseline.yaml',
        help='path to the configuration file')
    parser.add_argument(
        '--random_fix',
        type=str2bool,
        default=True,
        help='fix the random seed or not')
    parser.add_argument(
        '--device',
        type=str,
        default='0',
        help='the indexes of GPUs for training or testing')

    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # processor
    parser.add_argument(
        '--phase', default='train', help='can be train, test or features')

    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # debug
    parser.add_argument(
        '--save-interval',
        type=int,
        default=200,
        help='the interval for storing models (#epochs)')
    parser.add_argument(
        '--random-seed',
        type=int,
        default=0,
        help='the default value for the random seed')
    parser.add_argument(
        '--eval-interval',
        type=int,
        default=100,
        help='the interval for evaluating models (#epochs)')
    parser.add_argument(
        '--print-log',
        type=str2bool,
        default=True,
        help='print logging or not')
    parser.add_argument(
        '--log-interval',
        type=int,
        default=20,
        help='the interval for printing messages (#iterations)')
    parser.add_argument(
        '--evaluate-tool', default="python", help='sclite or python')

    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # feeder
    parser.add_argument(
        '--feeder', default='dataloader_video.BaseFeeder', help='the data loader to be used')
    parser.add_argument(
        '--dataset',
        default=None,
        help='the dataset to be used'
    )
    parser.add_argument(
        '--dataset-info',
        default=dict(),
        help='extra information of the dataset'
    )
    parser.add_argument(
        '--num-worker',
        type=int,
        default=4,
        help='the number of workers for the data loader')
    parser.add_argument(
        '--feeder-args',
        default=dict(),
        help='the arguments of the data loader')

    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # model
    parser.add_argument('--model', default=None, help='the model to be used')
    parser.add_argument(
        '--model-args',
        type=dict,
        default=dict(),
        help='the arguments of the model')
    parser.add_argument(
        '--load-weights',
        default=None,
        help='load weights for network initialization')
    parser.add_argument(
        '--load-checkpoints',
        default=None,
        help='load checkpoints to continue training')
    parser.add_argument(
        '--decode-mode',
        default="max",
        help='search mode for decoding, max or beam')
    parser.add_argument(
        '--ignore-weights',
        type=str,
        default=[],
        nargs='+',
        help='the names of weights to be ignored during initialization')

    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    # optim
    parser.add_argument(
        '--batch-size', type=int, default=16, help='training batch size')
    parser.add_argument(
        '--test-batch-size', type=int, default=8, help='test batch size')

    default_optimizer_dict = {
        "base_lr": 1e-2,
        "optimizer": "SGD",
        "nesterov": False,
        "step": [5, 10],
        "weight_decay": 0.00005,
        "start_epoch": 1,
    }
    default_loss_dict = {
        "SeqCTC": 1.0,
    }

    parser.add_argument(
        '--loss-weights',
        default=default_loss_dict,
        help='loss selection and weighting'
    )

    parser.add_argument(
        '--optimizer-args',
        default=default_optimizer_dict,
        help='the arguments of the optimizer')

    parser.add_argument(
        '--num-epoch',
        type=int,
        default=80,
        help='the epoch at which to stop training')
    return parser


def str2bool(v):
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    else:
        raise argparse.ArgumentTypeError('Boolean value expected.')
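
A sketch of the "command line > config > default" priority described by the comment at the top of get_parser(). The YAML-merging step here is a simplified assumption for illustration, not the exact logic of this repo's main.py.

```python
# Simplified illustration of merging a YAML config over parser defaults;
# main.py implements its own variant of this.
import yaml
from utils.parameters import get_parser

parser = get_parser()
args = parser.parse_args()                      # command-line values win
with open(args.config, 'r') as f:
    default_args = yaml.safe_load(f) or {}
parser.set_defaults(**{k.replace('-', '_'): v for k, v in default_args.items()})
args = parser.parse_args()                      # re-parse: CLI > config > default
```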
CorrNet_Plus/CorrNet_Plus_CSLR/utils/random_state.py
ADDED
@@ -0,0 +1,32 @@
import pdb
import torch
import random
import numpy as np
import os


class RandomState(object):
    def __init__(self, seed):
        torch.set_num_threads(1)
        os.environ['PYTHONHASHSEED'] = str(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
        torch.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        np.random.seed(seed)
        random.seed(seed)

    def save_rng_state(self):
        rng_dict = {}
        rng_dict["torch"] = torch.get_rng_state()
        rng_dict["cuda"] = torch.cuda.get_rng_state_all()
        rng_dict["numpy"] = np.random.get_state()
        rng_dict["random"] = random.getstate()
        return rng_dict

    def set_rng_state(self, rng_dict):
        torch.set_rng_state(rng_dict["torch"])
        torch.cuda.set_rng_state_all(rng_dict["cuda"])
        np.random.set_state(rng_dict["numpy"])
        random.setstate(rng_dict["random"])
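
A sketch of using RandomState to make checkpoints resumable with identical random streams; the checkpoint path is illustrative.

```python
# Illustrative checkpointing of RNG state alongside other run state.
import torch
from utils.random_state import RandomState

rng = RandomState(seed=0)
# ... train for a while ...
torch.save({"rng_state": rng.save_rng_state()}, "checkpoint.pt")

state = torch.load("checkpoint.pt")
rng.set_rng_state(state["rng_state"])  # restore torch/cuda/numpy/random generators
```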
CorrNet_Plus/CorrNet_Plus_CSLR/utils/record.py
ADDED
@@ -0,0 +1,51 @@
import pdb
import time


class Recorder(object):
    def __init__(self, work_dir, print_log, log_interval):
        self.cur_time = time.time()
        self.print_log_flag = print_log
        self.log_interval = log_interval
        self.log_path = '{}/log.txt'.format(work_dir)
        self.timer = dict(dataloader=0.001, device=0.001, forward=0.001, backward=0.001)

    def print_time(self):
        localtime = time.asctime(time.localtime(time.time()))
        self.print_log("Local current time : " + localtime)

    def print_log(self, msg, path=None, print_time=True):
        # 'msg' rather than 'str' to avoid shadowing the builtin
        if path is None:
            path = self.log_path
        if print_time:
            localtime = time.asctime(time.localtime(time.time()))
            msg = "[ " + localtime + ' ] ' + msg
        print(msg)
        if self.print_log_flag:
            with open(path, 'a') as f:
                f.write(msg + "\n")

    def record_time(self):
        self.cur_time = time.time()
        return self.cur_time

    def split_time(self):
        split_time = time.time() - self.cur_time
        self.record_time()
        return split_time

    def timer_reset(self):
        self.cur_time = time.time()
        self.timer = dict(dataloader=0.001, device=0.001, forward=0.001, backward=0.001)

    def record_timer(self, key):
        self.timer[key] += self.split_time()

    def print_time_statistics(self):
        proportion = {
            k: '{:02d}%'.format(int(round(v * 100 / sum(self.timer.values()))))
            for k, v in self.timer.items()}
        self.print_log(
            '\tTime consumption: [Data]{dataloader}, [GPU]{device}, [Forward]{forward}, [Backward]{backward}'.format(
                **proportion))
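
A sketch of how the Recorder timers are meant to bracket the phases of a training step; the loop body is a placeholder (in a real loop, data loading, device transfer, forward, and backward work happen between the calls).

```python
# Illustrative timing pattern; the keys must match the dict initialized in Recorder.
from utils.record import Recorder

recorder = Recorder(work_dir="./work_dir/temp", print_log=True, log_interval=20)
recorder.timer_reset()
for batch in range(5):                    # placeholder loop
    recorder.record_timer("dataloader")   # time since the last mark -> data loading
    recorder.record_timer("device")       # -> host-to-GPU transfer
    recorder.record_timer("forward")      # -> forward pass
    recorder.record_timer("backward")     # -> backward pass + optimizer step
recorder.print_time_statistics()          # per-phase percentage breakdown
```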
CorrNet_Plus/CorrNet_Plus_CSLR/utils/video_augmentation.py
ADDED
@@ -0,0 +1,336 @@
# ----------------------------------------
# Written by Yuecong Min
# ----------------------------------------
import cv2
import pdb
import PIL
import copy
import scipy.misc
import torch
import random
import numbers
import numpy as np


class Compose(object):
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, label, file_info=None):
        for t in self.transforms:
            if file_info is not None and isinstance(t, WERAugment):
                image, label = t(image, label, file_info)
            else:
                image = t(image)
        return image, label


class WERAugment(object):
    def __init__(self, boundary_path):
        self.boundary_dict = np.load(boundary_path, allow_pickle=True).item()
        self.K = 3

    def __call__(self, video, label, file_info):
        ind = np.arange(len(video)).tolist()
        if file_info not in self.boundary_dict.keys():
            return video, label
        binfo = copy.deepcopy(self.boundary_dict[file_info])
        binfo = [0] + binfo + [len(video)]
        k = np.random.randint(min(self.K, len(label) - 1))
        for i in range(k):
            ind, label, binfo = self.one_operation(ind, label, binfo)
        ret_video = [video[i] for i in ind]
        return ret_video, label

    def one_operation(self, *inputs):
        prob = np.random.random()
        if prob < 0.3:
            return self.delete(*inputs)
        elif 0.3 <= prob < 0.7:
            return self.substitute(*inputs)
        else:
            return self.insert(*inputs)

    @staticmethod
    def delete(ind, label, binfo):
        del_wd = np.random.randint(len(label))
        ind = ind[:binfo[del_wd]] + ind[binfo[del_wd + 1]:]
        duration = binfo[del_wd + 1] - binfo[del_wd]
        del label[del_wd]
        binfo = [i for i in binfo[:del_wd]] + [i - duration for i in binfo[del_wd + 1:]]
        return ind, label, binfo

    @staticmethod
    def insert(ind, label, binfo):
        ins_wd = np.random.randint(len(label))
        ins_pos = np.random.choice(binfo)
        ins_lab_pos = binfo.index(ins_pos)

        ind = ind[:ins_pos] + ind[binfo[ins_wd]:binfo[ins_wd + 1]] + ind[ins_pos:]
        duration = binfo[ins_wd + 1] - binfo[ins_wd]
        label = label[:ins_lab_pos] + [label[ins_wd]] + label[ins_lab_pos:]
        binfo = binfo[:ins_lab_pos] + [binfo[ins_lab_pos - 1] + duration] + [i + duration for i in binfo[ins_lab_pos:]]
        return ind, label, binfo

    @staticmethod
    def substitute(ind, label, binfo):
        sub_wd = np.random.randint(len(label))
        tar_wd = np.random.randint(len(label))

        ind = ind[:binfo[tar_wd]] + ind[binfo[sub_wd]:binfo[sub_wd + 1]] + ind[binfo[tar_wd + 1]:]
        label[tar_wd] = label[sub_wd]
        delta_duration = binfo[sub_wd + 1] - binfo[sub_wd] - (binfo[tar_wd + 1] - binfo[tar_wd])
        binfo = binfo[:tar_wd + 1] + [i + delta_duration for i in binfo[tar_wd + 1:]]
        return ind, label, binfo


class ToTensor(object):
    def __call__(self, video):
        if isinstance(video, list):
            video = np.array(video)
            video = torch.from_numpy(video.transpose((0, 3, 1, 2))).float()
        if isinstance(video, np.ndarray):
            video = torch.from_numpy(video.transpose((0, 3, 1, 2)))
        return video


class RandomCrop(object):
    """
    Extract a random crop of the video.
    Args:
        size (sequence or int): Desired output size for the crop in format (h, w).
    """

    def __init__(self, size):
        if isinstance(size, numbers.Number):
            if size < 0:
                raise ValueError('If size is a single number, it must be positive')
            size = (size, size)
        else:
            if len(size) != 2:
                raise ValueError('If size is a sequence, it must be of len 2.')
        self.size = size

    def __call__(self, clip):
        crop_h, crop_w = self.size
        if isinstance(clip[0], np.ndarray):
            im_h, im_w, im_c = clip[0].shape
        elif isinstance(clip[0], PIL.Image.Image):
            im_w, im_h = clip[0].size
        else:
            raise TypeError('Expected numpy.ndarray or PIL.Image '
                            'but got list of {0}'.format(type(clip[0])))
        if crop_w > im_w:
            pad = crop_w - im_w
            clip = [np.pad(img, ((0, 0), (pad // 2, pad - pad // 2), (0, 0)), 'constant', constant_values=0) for img in
                    clip]
            w1 = 0
        else:
            w1 = random.randint(0, im_w - crop_w)

        if crop_h > im_h:
            pad = crop_h - im_h
            clip = [np.pad(img, ((pad // 2, pad - pad // 2), (0, 0), (0, 0)), 'constant', constant_values=0) for img in
                    clip]
            h1 = 0
        else:
            h1 = random.randint(0, im_h - crop_h)

        if isinstance(clip[0], np.ndarray):
            return [img[h1:h1 + crop_h, w1:w1 + crop_w, :] for img in clip]
        elif isinstance(clip[0], PIL.Image.Image):
            return [img.crop((w1, h1, w1 + crop_w, h1 + crop_h)) for img in clip]


class CenterCrop(object):
    def __init__(self, size):
        if isinstance(size, numbers.Number):
            self.size = (int(size), int(size))
        else:
            self.size = size

    def __call__(self, clip):
        try:
            im_h, im_w, im_c = clip[0].shape
        except ValueError:
            print(clip[0].shape)
        new_h, new_w = self.size
        new_h = im_h if new_h >= im_h else new_h
        new_w = im_w if new_w >= im_w else new_w
        top = int(round((im_h - new_h) / 2.))
        left = int(round((im_w - new_w) / 2.))
        return [img[top:top + new_h, left:left + new_w] for img in clip]


class RandomHorizontalFlip(object):
    def __init__(self, prob):
        self.prob = prob

    def __call__(self, clip):
        # clip: (B, H, W, 3)
        flag = random.random() < self.prob
        if flag:
            clip = np.flip(clip, axis=2)
            clip = np.ascontiguousarray(copy.deepcopy(clip))
        return np.array(clip)


class RandomRotation(object):
    """
    Rotate the entire clip by a random angle within the given bounds.
    Args:
        degrees (sequence or int): Range of degrees to select from.
            If degrees is a number instead of a sequence like (min, max),
            the range of degrees will be (-degrees, +degrees).
    """

    def __init__(self, degrees):
        if isinstance(degrees, numbers.Number):
            if degrees < 0:
                raise ValueError('If degrees is a single number, '
                                 'it must be positive')
            degrees = (-degrees, degrees)
        else:
            if len(degrees) != 2:
                raise ValueError('If degrees is a sequence, '
                                 'it must be of len 2.')
        self.degrees = degrees

    def __call__(self, clip):
        """
        Args:
            clip (PIL.Image or numpy.ndarray): List of images to be rotated,
                in format (h, w, c) for numpy.ndarray.
        Returns:
            PIL.Image or numpy.ndarray: Rotated list of images.
        """
        angle = random.uniform(self.degrees[0], self.degrees[1])
        if isinstance(clip[0], np.ndarray):
            # Note: scipy.misc.imrotate was removed in SciPy >= 1.2; this branch needs an older SciPy.
            rotated = [scipy.misc.imrotate(img, angle) for img in clip]
        elif isinstance(clip[0], PIL.Image.Image):
            rotated = [img.rotate(angle) for img in clip]
        else:
            raise TypeError('Expected numpy.ndarray or PIL.Image '
                            'but got list of {0}'.format(type(clip[0])))
        return rotated


class TemporalRescale(object):
    def __init__(self, temp_scaling=0.2, frame_interval=1):
        self.min_len = 32
        self.max_len = int(np.ceil(230 / frame_interval))
        self.L = 1.0 - temp_scaling
        self.U = 1.0 + temp_scaling

    def __call__(self, clip):
        vid_len = len(clip)
        new_len = int(vid_len * (self.L + (self.U - self.L) * np.random.random()))
        if new_len < self.min_len:
            new_len = self.min_len
        if new_len > self.max_len:
            new_len = self.max_len
        if (new_len - 4) % 4 != 0:
            new_len += 4 - (new_len - 4) % 4
        if new_len <= vid_len:
            index = sorted(random.sample(range(vid_len), new_len))
        else:
            index = sorted(random.choices(range(vid_len), k=new_len))
        return clip[index]


class RandomResize(object):
    """
    Resize the video by zooming in and out.
    Args:
        rate (float): The video is scaled uniformly between
            [1 - rate, 1 + rate].
        interp (string): Interpolation to use for resizing
            ('nearest', 'lanczos', 'bilinear', 'bicubic' or 'cubic').
    """

    def __init__(self, rate=0.0, interp='bilinear'):
        self.rate = rate
        self.interpolation = interp

    def __call__(self, clip):
        scaling_factor = random.uniform(1 - self.rate, 1 + self.rate)

        if isinstance(clip[0], np.ndarray):
            im_h, im_w, im_c = clip[0].shape
        elif isinstance(clip[0], PIL.Image.Image):
            im_w, im_h = clip[0].size

        new_w = int(im_w * scaling_factor)
        new_h = int(im_h * scaling_factor)
        if isinstance(clip[0], np.ndarray):
            # Note: scipy.misc.imresize was removed in SciPy >= 1.2; this branch needs an older SciPy.
            return [scipy.misc.imresize(img, size=(new_h, new_w), interp=self.interpolation) for img in clip]
        elif isinstance(clip[0], PIL.Image.Image):
            return [img.resize(size=(new_w, new_h), resample=self._get_PIL_interp(self.interpolation)) for img in clip]
        else:
            raise TypeError('Expected numpy.ndarray or PIL.Image '
                            'but got list of {0}'.format(type(clip[0])))

    def _get_PIL_interp(self, interp):
        if interp == 'nearest':
            return PIL.Image.NEAREST
        elif interp == 'lanczos':
            return PIL.Image.LANCZOS
        elif interp == 'bilinear':
            return PIL.Image.BILINEAR
        elif interp == 'bicubic':
            return PIL.Image.BICUBIC
        elif interp == 'cubic':
            return PIL.Image.CUBIC


class Resize(object):
    """
    Resize the video to a fixed scale or resolution.
    Args:
        rate (float): If 0 < rate <= 1, the video is scaled uniformly by this
            factor; a larger value is interpreted as a target side length.
        interp (string): Interpolation to use for resizing
            ('nearest', 'lanczos', 'bilinear', 'bicubic' or 'cubic').
    """

    def __init__(self, rate=0.0, interp='bilinear'):
        self.rate = rate
        self.interpolation = interp

    def __call__(self, clip):
        if self.rate == 1.0:
            return clip
        scaling_factor = self.rate

        if isinstance(clip[0], np.ndarray):
            im_h, im_w, im_c = clip[0].shape
        elif isinstance(clip[0], PIL.Image.Image):
            im_w, im_h = clip[0].size

        new_w = int(im_w * scaling_factor) if 0 < scaling_factor <= 1 else int(scaling_factor)
        new_h = int(im_h * scaling_factor) if 0 < scaling_factor <= 1 else int(scaling_factor)
        new_size = (new_w, new_h)
        if isinstance(clip[0], np.ndarray):
            return [np.array(PIL.Image.fromarray(img).resize(new_size)) for img in clip]
        elif isinstance(clip[0], PIL.Image.Image):
            return [img.resize(size=(new_w, new_h), resample=self._get_PIL_interp(self.interpolation)) for img in clip]
        else:
            raise TypeError('Expected numpy.ndarray or PIL.Image '
                            'but got list of {0}'.format(type(clip[0])))

    def _get_PIL_interp(self, interp):
        if interp == 'nearest':
            return PIL.Image.NEAREST
        elif interp == 'lanczos':
            return PIL.Image.LANCZOS
        elif interp == 'bilinear':
            return PIL.Image.BILINEAR
        elif interp == 'bicubic':
            return PIL.Image.BICUBIC
        elif interp == 'cubic':
            return PIL.Image.CUBIC
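
A sketch of composing these transforms into a training pipeline. The dataloader in this repo wires up a similar chain, but the exact transform list and parameters here are an assumption for illustration.

```python
# Assumed training-time pipeline; dataset/dataloader_video.py composes a
# similar chain, but this exact list is illustrative.
import numpy as np
from utils import video_augmentation

transform = video_augmentation.Compose([
    video_augmentation.RandomCrop(224),
    video_augmentation.RandomHorizontalFlip(0.5),
    video_augmentation.ToTensor(),
])
video = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(32)]  # dummy clip
label = [1, 2, 3]
video, label = transform(video, label)  # -> tensor of shape (T, 3, 224, 224)
```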
CorrNet_Plus/CorrNet_Plus_CSLR/weight_map_generation/resnet.py
ADDED
@@ -0,0 +1,309 @@
import torch
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

__all__ = [
    'ResNet', 'resnet10', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
    'resnet152', 'resnet200'
]

model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


class AttentionPool2d(nn.Module):
    def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None, clusters=1):
        super().__init__()
        # self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
        self.num_heads = num_heads
        self.clusters = clusters
        self.query = nn.Parameter(torch.rand(self.clusters, 1, embed_dim), requires_grad=True)

    def forward(self, x):
        N, C, T, H, W = x.shape
        x = x.flatten(start_dim=3).permute(3, 0, 2, 1).reshape(-1, N * T, C).contiguous()  # NCTHW -> (HW)(NT)C
        # x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0)  # (HW+1)(NT)C
        # x = x + self.positional_embedding[:, None, :].to(x.dtype)  # (HW+1)(NT)C
        x, _ = F.multi_head_attention_forward(
            # query=x[:1], key=x, value=x,
            query=self.query.repeat(1, N * T, 1), key=x, value=x,
            embed_dim_to_check=x.shape[-1],
            num_heads=self.num_heads,
            q_proj_weight=self.q_proj.weight,
            k_proj_weight=self.k_proj.weight,
            v_proj_weight=self.v_proj.weight,
            in_proj_weight=None,
            in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
            bias_k=None,
            bias_v=None,
            add_zero_attn=False,
            dropout_p=0,
            out_proj_weight=self.c_proj.weight,
            out_proj_bias=self.c_proj.bias,
            use_separate_proj_weight=True,
            training=self.training,
            need_weights=False
        )
        return x.view(self.clusters, N, T, C).contiguous().permute(1, 3, 2, 0)  # PNTC -> NCTP


class UnfoldTemporalWindows(nn.Module):
    def __init__(self, window_size=9, window_stride=1, window_dilation=1):
        super().__init__()
        self.window_size = window_size
        self.window_stride = window_stride
        self.window_dilation = window_dilation

        self.padding = (window_size + (window_size - 1) * (window_dilation - 1) - 1) // 2
        self.unfold = nn.Unfold(kernel_size=(self.window_size, 1),
                                dilation=(self.window_dilation, 1),
                                stride=(self.window_stride, 1),
                                padding=(self.padding, 0))

    def forward(self, x):
        # Input shape: (N,C,T,H,W); output: (N,C,T,window_size,H,W)
        N, C, T, H, W = x.shape
        x = x.view(N, C, T, H * W)
        x = self.unfold(x)  # (N, C*window_size, T*H*W)
        # Move the window dimension next to the temporal one
        x = x.view(N, C, self.window_size, T, H, W).permute(0, 1, 3, 2, 4, 5).reshape(N, C, T, self.window_size, H, W).contiguous()  # NCTSHW
        return x


class Temporal_weighting(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        hidden_size = input_size // 16
        self.conv_transform = nn.Conv1d(input_size, hidden_size, kernel_size=1, stride=1, padding=0)
        self.conv_back = nn.Conv1d(hidden_size, input_size, kernel_size=1, stride=1, padding=0)
        self.num = 3
        self.conv_enhance = nn.ModuleList([
            nn.Conv1d(hidden_size, hidden_size, kernel_size=3, stride=1, padding=int(i + 1), groups=hidden_size, dilation=int(i + 1)) for i in range(self.num)
        ])
        self.weights = nn.Parameter(torch.ones(self.num) / self.num, requires_grad=True)
        self.alpha = nn.Parameter(torch.zeros(1), requires_grad=True)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Squeeze the spatial dimensions, enhance temporally at multiple dilations, then re-weight
        out = self.conv_transform(x.mean(-1).mean(-1))
        aggregated_out = 0
        for i in range(self.num):
            aggregated_out += self.conv_enhance[i](out) * self.weights[i]
        out = self.conv_back(aggregated_out)
        return x * (F.sigmoid(out.unsqueeze(-1).unsqueeze(-1)) - 0.5) * self.alpha


class Get_Correlation(nn.Module):
    def __init__(self, channels, neighbors=3, save_weight_map=False):
        super().__init__()
        self.save_weight_map = save_weight_map
        reduction_channel = channels // 16

        self.down_conv2 = nn.Conv3d(channels, channels, kernel_size=1, bias=False)
        self.neighbors = neighbors
        self.clusters = 1
        self.weights2 = nn.Parameter(torch.ones(self.neighbors * 2) / (self.neighbors * 2), requires_grad=True)
        self.unfold = UnfoldTemporalWindows(2 * self.neighbors + 1)
        self.weights3 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.weights4 = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.attpool = AttentionPool2d(spacial_dim=None, embed_dim=channels, num_heads=1, clusters=self.clusters)
        self.mlp = nn.Sequential(nn.Conv3d(channels, reduction_channel, kernel_size=1),
                                 nn.GELU(),
                                 nn.Conv3d(reduction_channel, channels, kernel_size=1),)

        # For generating aggregated_x with multi-scale conv
        self.down_conv = nn.Conv3d(channels, reduction_channel, kernel_size=1, bias=False)
        self.spatial_aggregation1 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9, 3, 3), padding=(4, 1, 1), groups=reduction_channel)
        self.spatial_aggregation2 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9, 3, 3), padding=(4, 2, 2), dilation=(1, 2, 2), groups=reduction_channel)
        self.spatial_aggregation3 = nn.Conv3d(reduction_channel, reduction_channel, kernel_size=(9, 3, 3), padding=(4, 3, 3), dilation=(1, 3, 3), groups=reduction_channel)
        self.weights = nn.Parameter(torch.ones(3) / 3, requires_grad=True)
        self.conv_back = nn.Conv3d(reduction_channel, channels, kernel_size=1, bias=False)

    def forward(self, x):
        N, C, T, H, W = x.shape

        def clustering(query, key):
            affinities = torch.einsum('bctp,bctl->btpl', query, key)
            return torch.einsum('bctl,btpl->bctp', key, F.sigmoid(affinities) - 0.5)

        x_mean = x.mean(3, keepdim=True).mean(4, keepdim=False)
        x_max = x.max(-1, keepdim=False)[0].max(-1, keepdim=True)[0]
        x_att = self.attpool(x)  # NCTP
        x2 = self.down_conv2(x)
        upfold = self.unfold(x2)
        # Drop the central frame and weight the remaining temporal neighbors
        upfold = (torch.concat([upfold[:, :, :, :self.neighbors], upfold[:, :, :, self.neighbors + 1:]], 3) * self.weights2.view(1, 1, 1, -1, 1, 1)).view(N, C, T, -1)
        x_mean = x_mean * self.weights4[0] + x_max * self.weights4[1] + x_att * self.weights4[2]
        x_mean = clustering(x_mean, upfold)
        features = x_mean.view(N, C, T, self.clusters, 1)

        x_down = self.down_conv(x)
        aggregated_x = self.spatial_aggregation1(x_down) * self.weights[0] + self.spatial_aggregation2(x_down) * self.weights[1] \
            + self.spatial_aggregation3(x_down) * self.weights[2]
        aggregated_x = self.conv_back(aggregated_x)

        weight_map = F.sigmoid(aggregated_x) - 0.5
        if self.save_weight_map:
            # Dump the gating map for later visualization
            torch.save(weight_map, "./weight_map.pth")
        return features * weight_map


def conv3x3(in_planes, out_planes, stride=1):
    # 1x3x3 convolution with padding (spatial-only, shared across frames)
    return nn.Conv3d(
        in_planes,
        out_planes,
        kernel_size=(1, 3, 3),
        stride=(1, stride, stride),
        padding=(0, 1, 1),
        bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm3d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm3d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv3d(3, 64, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3),
                               bias=False)
        self.bn1 = nn.BatchNorm3d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1))
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.corr2 = Get_Correlation(self.inplanes, neighbors=1)
        self.temporal_weight2 = Temporal_weighting(self.inplanes)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.corr3 = Get_Correlation(self.inplanes, neighbors=3, save_weight_map=True)
        self.temporal_weight3 = Temporal_weighting(self.inplanes)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.corr4 = Get_Correlation(self.inplanes, neighbors=5)
        self.temporal_weight4 = Temporal_weighting(self.inplanes)
        self.alpha = nn.Parameter(torch.zeros(3), requires_grad=True)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv3d) or isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm3d) or isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv3d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=(1, stride, stride), bias=False),
                nn.BatchNorm3d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        N, C, T, H, W = x.size()
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = x + self.corr2(x) * self.alpha[0]
        x = x + self.temporal_weight2(x)
        x = self.layer3(x)
        x = x + self.corr3(x) * self.alpha[1]
        x = x + self.temporal_weight3(x)
        x = self.layer4(x)
        x = x + self.corr4(x) * self.alpha[2]
        x = x + self.temporal_weight4(x)

        x = x.transpose(1, 2).contiguous()
        x = x.view((-1,) + x.size()[2:])  # bt,c,h,w

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)  # bt,c
        x = self.fc(x)  # bt,c

        return x


def resnet18(**kwargs):
    """Constructs a ResNet-18 based model with ImageNet-pretrained 2D weights inflated to 3D."""
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    checkpoint = model_zoo.load_url(model_urls['resnet18'], map_location=torch.device('cpu'))
    layer_name = list(checkpoint.keys())
    for ln in layer_name:
        if 'conv' in ln or 'downsample.0.weight' in ln:
            # Add a singleton temporal dimension so 2D kernels fit the (1,k,k) 3D convs
            checkpoint[ln] = checkpoint[ln].unsqueeze(2)
    model.load_state_dict(checkpoint, strict=False)
    del checkpoint
    import gc
    gc.collect()
    return model


def resnet34(**kwargs):
    """Constructs a ResNet-34 model."""
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    return model


def test():
    net = resnet18()
    # The backbone expects a video tensor of shape (N, C, T, H, W)
    y = net(torch.randn(1, 3, 16, 224, 224))
    print(y.size())

# test()
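
Since `corr3` is constructed with `save_weight_map=True`, every forward pass dumps the layer3 gating map to ./weight_map.pth. A sketch of generating and inspecting it; the import path and dummy input shape are assumptions for illustration.

```python
# Illustrative: run a dummy clip through the backbone and read back the
# correlation weight map that Get_Correlation saves to ./weight_map.pth.
import torch
from weight_map_generation.resnet import resnet18  # assumed import path

net = resnet18().eval()
with torch.no_grad():
    _ = net(torch.randn(1, 3, 16, 224, 224))  # (N, C, T, H, W)
wmap = torch.load("./weight_map.pth")
print(wmap.shape)  # (N, C, T, H', W') gating weights from layer3's correlation module
```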
CorrNet_Plus/README.md
ADDED
@@ -0,0 +1,310 @@
# CorrNet+
This repo holds the code of the paper: CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation [[paper]](https://arxiv.org/abs/2404.11111), which is an extension of our previous work (CVPR 2023) [[paper]](https://arxiv.org/abs/2303.03202)

For the code supporting continuous sign language recognition, refer to [CorrNet_Plus_CSLR](./CorrNet_Plus_CSLR).

We currently withhold the code of CorrNet_Plus_SLT.

## Performance
- On the continuous sign language recognition task, CorrNet+ achieves superior performance on the PHOENIX14, PHOENIX14-T, CSL-Daily and CSL datasets.

<table align="center">
  <tbody align="center" valign="center">
    <tr>
      <td rowspan="3">Method</td>
      <td colspan="4">PHOENIX2014</td>
      <td colspan="2">PHOENIX2014-T</td>
      <td colspan="2">CSL-Daily</td>
    </tr>
    <tr>
      <td colspan="2">Dev(%)</td>
      <td colspan="2">Test(%)</td>
      <td rowspan="2">Dev(%)</td>
      <td rowspan="2">Test(%)</td>
      <td rowspan="2">Dev(%)</td>
      <td rowspan="2">Test(%)</td>
    </tr>
    <tr>
      <td>del/ins</td>
      <td>WER</td>
      <td>del/ins</td>
      <td>WER</td>
    </tr>
    <tr>
      <td>CVT-SLR (CVPR2023)</td>
      <td>6.4/2.6</td>
      <td>19.8</td>
      <td>6.1/2.3</td>
      <td>20.1</td>
      <td>19.4</td>
      <td>20.3</td>
      <td>-</td>
      <td>-</td>
    </tr>
    <tr>
      <td>CoSign-2s (ICCV2023)</td>
      <td>-</td>
      <td>19.7</td>
      <td>-</td>
      <td>20.1</td>
      <td>19.5</td>
      <td>20.1</td>
      <td>-</td>
      <td>-</td>
    </tr>
    <tr>
      <td>AdaSize (PR2024)</td>
      <td>7.0/2.6</td>
      <td>19.7</td>
      <td>7.2/3.1</td>
      <td>20.9</td>
      <td>19.7</td>
      <td>21.2</td>
      <td>31.3</td>
      <td>30.9</td>
    </tr>
    <tr>
      <td>AdaBrowse+ (ACMMM2023)</td>
      <td>6.0/2.5</td>
      <td>19.6</td>
      <td>5.9/2.6</td>
      <td>20.7</td>
      <td>19.5</td>
      <td>20.6</td>
      <td>31.2</td>
      <td>30.7</td>
    </tr>
    <tr>
      <td>SEN (AAAI2023)</td>
      <td>5.8/2.6</td>
      <td>19.5</td>
      <td>7.3/4.0</td>
      <td>21.0</td>
      <td>19.3</td>
      <td>20.7</td>
      <td>31.1</td>
      <td>30.7</td>
    </tr>
    <tr>
      <td>CTCA (CVPR2023)</td>
      <td>6.2/2.9</td>
      <td>19.5</td>
      <td>6.1/2.6</td>
      <td>20.1</td>
      <td>19.3</td>
      <td>20.3</td>
      <td>31.3</td>
      <td>29.4</td>
    </tr>
    <tr>
      <td>C2SLR (CVPR2022)</td>
      <td>-</td>
      <td>20.5</td>
      <td>-</td>
      <td>20.4</td>
      <td>20.2</td>
      <td>20.4</td>
      <td>-</td>
      <td>-</td>
    </tr>
    <tr>
      <th>CorrNet+</th>
      <td>5.3/2.7</td>
      <th>18.0</th>
      <td>5.6/2.4</td>
      <th>18.2</th>
      <th>17.2</th>
      <th>19.1</th>
      <th>28.6</th>
      <th>28.2</th>
    </tr>
  </tbody>
</table>

- On the sign language translation task, CorrNet+ achieves superior performance on the PHOENIX14-T and CSL-Daily datasets.

<table>
  <tbody align="center" valign="center">
    <tr>
      <td colspan="11">PHOENIX2014-T</td>
    </tr>
    <tr>
      <td>Method</td>
      <td colspan="5">Dev(%)</td>
      <td colspan="5">Test(%)</td>
    </tr>
    <tr>
      <td></td>
      <td>Rouge</td>
      <td>BLEU1</td>
      <td>BLEU2</td>
      <td>BLEU3</td>
      <td>BLEU4</td>
      <td>Rouge</td>
      <td>BLEU1</td>
      <td>BLEU2</td>
      <td>BLEU3</td>
      <td>BLEU4</td>
    </tr>
    <tr>
      <td>SignBT (CVPR2021)</td>
      <td>50.29</td>
      <td>51.11</td>
      <td>37.90</td>
      <td>29.80</td>
      <td>24.45</td>
      <td>49.54</td>
      <td>50.80</td>
      <td>37.75</td>
      <td>29.72</td>
      <td>24.32</td>
    </tr>
    <tr>
      <td>MMTLB (CVPR2022)</td>
      <td>53.10</td>
      <td>53.95</td>
      <td>41.12</td>
      <td>33.14</td>
      <td>27.61</td>
      <td>52.65</td>
      <td>53.97</td>
      <td>41.75</td>
      <td>33.84</td>
      <td>28.39</td>
    </tr>
    <tr>
      <td>SLTUNET (ICLR2023)</td>
      <td>52.23</td>
      <td>-</td>
      <td>-</td>
      <td>-</td>
      <td>27.87</td>
      <td>52.11</td>
      <td>52.92</td>
      <td>41.76</td>
      <td>33.99</td>
      <td>28.47</td>
    </tr>
    <tr>
      <td>TwoStream-SLT (NeurIPS2023)</td>
      <td>54.08</td>
      <td>54.32</td>
      <td>41.99</td>
      <td>34.15</td>
      <td>28.66</td>
      <td>53.48</td>
      <td>54.90</td>
      <td>42.43</td>
      <td>34.46</td>
      <td>28.95</td>
    </tr>
    <tr>
      <td>CorrNet+</td>
      <th>54.54</th>
      <th>54.56</th>
      <th>42.31</th>
      <th>34.48</th>
      <th>29.13</th>
      <th>53.76</th>
      <th>55.32</th>
      <th>42.74</th>
      <th>34.86</th>
      <th>29.42</th>
    </tr>
    <tr>
      <td colspan="11">CSL-Daily</td>
    </tr>
    <tr>
      <td>Method</td>
      <td colspan="5">Dev(%)</td>
      <td colspan="5">Test(%)</td>
    </tr>
    <tr>
      <td></td>
      <td>Rouge</td>
      <td>BLEU1</td>
      <td>BLEU2</td>
      <td>BLEU3</td>
      <td>BLEU4</td>
      <td>Rouge</td>
      <td>BLEU1</td>
      <td>BLEU2</td>
      <td>BLEU3</td>
      <td>BLEU4</td>
    </tr>
    <tr>
      <td>SignBT (CVPR2021)</td>
      <td>49.49</td>
      <td>51.46</td>
      <td>37.23</td>
      <td>27.51</td>
      <td>20.80</td>
      <td>49.31</td>
      <td>51.42</td>
      <td>37.26</td>
      <td>27.76</td>
      <td>21.34</td>
    </tr>
    <tr>
      <td>MMTLB (CVPR2022)</td>
      <td>53.38</td>
      <td>53.81</td>
      <td>40.84</td>
      <td>31.29</td>
      <td>24.42</td>
      <td>53.25</td>
      <td>53.31</td>
      <td>40.41</td>
      <td>30.87</td>
      <td>23.92</td>
    </tr>
    <tr>
      <td>SLTUNET (ICLR2023)</td>
      <td>53.58</td>
      <td>-</td>
      <td>-</td>
      <td>-</td>
      <td>23.99</td>
      <td>54.08</td>
      <td>54.98</td>
      <td>41.44</td>
      <td>31.84</td>
      <td>25.01</td>
    </tr>
    <tr>
      <td>TwoStream-SLT (NeurIPS2023)</td>
      <td>55.10</td>
      <td>55.21</td>
      <td>42.31</td>
      <td>32.71</td>
      <td>25.76</td>
      <td>55.72</td>
      <td>55.44</td>
      <td>42.59</td>
      <td>32.87</td>
      <td>25.79</td>
    </tr>
    <tr>
      <td>CorrNet+</td>
      <th>55.52</th>
      <th>55.64</th>
      <th>42.78</th>
      <th>33.13</th>
      <th>26.14</th>
      <th>55.84</th>
      <th>55.82</th>
      <th>42.96</th>
      <th>33.26</th>
      <th>26.14</th>
    </tr>
  </tbody>
</table>

## Visualizations

As shown below, our method models human body trajectories across adjacent frames and pays special attention to the moving body parts.

![](./weight_map.png)

## Data Preparation, Environment, Training, Inference and Visualizations
For detailed instructions on data preparation, environment setup, training, inference and visualizations, please refer to each sub-repo.
slt_new/README.md
ADDED
@@ -0,0 +1,162 @@
| 1 |
+
# CorrNet+_CSLR
|
| 2 |
+
This repo holds codes of the paper: CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation, which is an extension of our previous work (CVPR 2023) [[paper]](https://arxiv.org/abs/2303.03202)
|
| 3 |
+
|
| 4 |
+
This sub-repo holds the code for supporting the continuous sign language recognition task with CorrNet+.
|
| 5 |
+
|
| 6 |
+
(**Update on 2025/01/28**) We release a demo for Continuous sign language recognition that supports multi-images and video inputs! You can watch the demo video to watch its effects, or deploy a demo locally to test its performance.
|
| 7 |
+
|
| 8 |
+
https://github.com/user-attachments/assets/a7354510-e5e0-44af-b283-39707f625a9b
|
| 9 |
+
|
| 10 |
+
<div align=center>
|
| 11 |
+
The web demo video
|
| 12 |
+
</div>
|
| 13 |
+
|
| 14 |
+
## Prerequisites
|
| 15 |
+
|
| 16 |
+
- This project is implemented in PyTorch (version >=1.13 is recommended for compatibility with ctcdecode; otherwise errors may occur). Thus please install PyTorch first.
|
| 17 |
+
|
| 18 |
+
- ctcdecode==0.4 [[parlance/ctcdecode]](https://github.com/parlance/ctcdecode), for beam search decoding.
|
| 19 |
+
|
| 20 |
+
- [Optional] sclite [[kaldi-asr/kaldi]](https://github.com/kaldi-asr/kaldi): install the Kaldi toolkit to get sclite for evaluation. After installation, create a soft link to sclite:
|
| 21 |
+
`mkdir ./software`
|
| 22 |
+
`ln -s PATH_TO_KALDI/tools/sctk-2.4.10/bin/sclite ./software/sclite`
|
| 23 |
+
|
| 24 |
+
You may use the Python evaluation tool for convenience (by setting 'evaluate_tool' to 'python' in line 16 of ./configs/baseline.yaml), but sclite provides more detailed statistics.
|
| 25 |
+
|
| 26 |
+
- You can install the other required modules by running
|
| 27 |
+
`pip install -r requirements.txt`
|
| 28 |
+
|
| 29 |
+
## Implementation
|
| 30 |
+
The implementation of CorrNet+ is given in [./modules/resnet.py](https://github.com/hulianyuyy/CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py).
|
| 31 |
+
|
| 32 |
+
It is then attached after each stage of ResNet, in line 195 of [./modules/resnet.py](https://github.com/hulianyuyy/CorrNet_Plus/CorrNet_Plus_CSLR/modules/resnet.py).
|
| 33 |
+
|
| 34 |
+
We later found that the Identification Module with only spatial decomposition performs on par with what we report in the paper (spatial-temporal decomposition) and is slightly faster, so we implement it as such.
|
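For intuition, below is a minimal sketch of attaching a correlation module after each ResNet stage; `CorrModule` here is a hypothetical placeholder (the actual module is the one in ./modules/resnet.py), and torchvision's resnet18 is assumed.

```python
import torch.nn as nn
from torchvision.models import resnet18

class CorrModule(nn.Module):
    # Hypothetical stand-in for the CorrNet+ correlation module; the real one
    # (./modules/resnet.py) correlates features across adjacent frames.
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x + self.refine(x)  # residual refinement of stage features

class ResNetWithCorr(nn.Module):
    def __init__(self):
        super().__init__()
        r = resnet18()
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        # one correlation module after each of the four ResNet stages
        self.corrs = nn.ModuleList(CorrModule(c) for c in (64, 128, 256, 512))

    def forward(self, x):  # x: (N*T, 3, H, W), frames flattened into the batch
        x = self.stem(x)
        for stage, corr in zip(self.stages, self.corrs):
            x = corr(stage(x))
        return x
```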
| 35 |
+
|
| 36 |
+
## Data Preparation
|
| 37 |
+
You can choose any one of the following datasets to verify the effectiveness of CorrNet+.
|
| 38 |
+
|
| 39 |
+
### PHOENIX2014 dataset
|
| 40 |
+
1. Download the RWTH-PHOENIX-Weather 2014 Dataset [[download link]](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX/). Our experiments are based on phoenix-2014.v3.tar.gz.
|
| 41 |
+
|
| 42 |
+
2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset.
|
| 43 |
+
`ln -s PATH_TO_DATASET/phoenix2014-release ./dataset/phoenix2014`
|
| 44 |
+
|
| 45 |
+
3. The original image sequences are 210x260; we resize them to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.
|
| 46 |
+
|
| 47 |
+
```bash
|
| 48 |
+
cd ./preprocess
|
| 49 |
+
python dataset_preprocess.py --process-image --multiprocessing
|
| 50 |
+
```
|
| 51 |
+
|
| 52 |
+
### PHOENIX2014-T dataset
|
| 53 |
+
1. Download the RWTH-PHOENIX-Weather 2014-T Dataset [[download link]](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/)
|
| 54 |
+
|
| 55 |
+
2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset.
|
| 56 |
+
`ln -s PATH_TO_DATASET/PHOENIX-2014-T-release-v3/PHOENIX-2014-T ./dataset/phoenix2014-T`
|
| 57 |
+
|
| 58 |
+
3. The original image sequences are 210x260; we resize them to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.
|
| 59 |
+
|
| 60 |
+
```bash
|
| 61 |
+
cd ./preprocess
|
| 62 |
+
python dataset_preprocess-T.py --process-image --multiprocessing
|
| 63 |
+
```
|
| 64 |
+
|
| 65 |
+
If you get an error like ```IndexError: list index out of range``` on the PHOENIX2014-T dataset, you may refer to [this issue](https://github.com/hulianyuyy/CorrNet/issues/10#issuecomment-1660363025) to tackle the problem.
|
| 66 |
+
### CSL dataset
|
| 67 |
+
|
| 68 |
+
1. Request the CSL Dataset from this website [[download link]](https://ustc-slr.github.io/openresources/cslr-dataset-2015/index.html)
|
| 69 |
+
|
| 70 |
+
2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset.
|
| 71 |
+
`ln -s PATH_TO_DATASET ./dataset/CSL`
|
| 72 |
+
|
| 73 |
+
3. The original image sequences are 1280x720; we resize them to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.
|
| 74 |
+
|
| 75 |
+
```bash
|
| 76 |
+
cd ./preprocess
|
| 77 |
+
python dataset_preprocess-CSL.py --process-image --multiprocessing
|
| 78 |
+
```
|
| 79 |
+
|
| 80 |
+
### CSL-Daily dataset
|
| 81 |
+
|
| 82 |
+
1. Request the CSL-Daily Dataset from this website [[download link]](http://home.ustc.edu.cn/~zhouh156/dataset/csl-daily/)
|
| 83 |
+
|
| 84 |
+
2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset.
|
| 85 |
+
`ln -s PATH_TO_DATASET ./dataset/CSL-Daily`
|
| 86 |
+
|
| 87 |
+
3. The original image sequences are 1280x720; we resize them to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.
|
| 88 |
+
|
| 89 |
+
```bash
|
| 90 |
+
cd ./preprocess
|
| 91 |
+
python dataset_preprocess-CSL-Daily.py --process-image --multiprocessing
|
| 92 |
+
```
|
| 93 |
+
|
| 94 |
+
## Inference
|
| 95 |
+
|
| 96 |
+
### PHOENIX2014 dataset
|
| 97 |
+
|
| 98 |
+
| Backbone | Dev WER | Test WER | Pretrained model |
|
| 99 |
+
| -------- | ---------- | ----------- | --- |
|
| 100 |
+
| ResNet18 | 18.0% | 18.2% | [[Baidu]](https://pan.baidu.com/s/1vlCMSuqZiZkvidg4wrDlZQ?pwd=w5w9) <br />[[Google Drive]](https://drive.google.com/file/d/1jcRv4Gl98mvS4mmLH5dBU_-iN3qGq8Si/view?usp=sharing) |
|
| 101 |
+
|
| 102 |
+
|
| 103 |
+
### PHOENIX2014-T dataset
|
| 104 |
+
|
| 105 |
+
| Backbone | Dev WER | Test WER | Pretrained model |
|
| 106 |
+
| -------- | ---------- | ----------- | --- |
|
| 107 |
+
| ResNet18 | 17.2% | 19.1% | [[Baidu]](https://pan.baidu.com/s/1PcQtWOhiTEq9RFgBZ2hWhQ?pwd=nm3c) <br />[[Google Drive]](https://drive.google.com/file/d/1uBaKoB2JaB3ydYXmpn1tv0mBZ7cAF8J9/view?usp=sharing) |
|
| 108 |
+
|
| 109 |
+
### CSL-Daily dataset
|
| 110 |
+
|
| 111 |
+
| Backbone | Dev WER | Test WER | Pretrained model |
|
| 112 |
+
| -------- | ---------- | ----------- | --- |
|
| 113 |
+
| ResNet18 | 28.6% | 28.2% | [[Baidu]](https://pan.baidu.com/s/1SbulBImqn78FEYFZV5Oz1w?pwd=mx8m) <br />[[Google Drive]](https://drive.google.com/file/d/1Ve_uzEB1teTmebuQ1XAMFQ0UV0EVEGyM/view?usp=sharing) |
|
| 114 |
+
|
| 115 |
+
|
| 116 |
+
To evaluate the pretrained model, first choose the dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./configs/baseline.yaml, and run the command below:
|
| 117 |
+
`python main.py --config ./configs/baseline.yaml --device your_device --load-weights path_to_weight.pt --phase test`
|
| 118 |
+
|
| 119 |
+
## Training
|
| 120 |
+
|
| 121 |
+
The priority of configuration options is: command line > config file > argparse default values. To train the SLR model, run the command below:
|
| 122 |
+
|
| 123 |
+
`python main.py --config ./configs/baseline.yaml --device your_device`
|
| 124 |
+
|
| 125 |
+
Note that you can choose the target dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./configs/baseline.yaml.
|
| 126 |
+
|
| 127 |
+
## Visualizations
|
| 128 |
+
For Grad-CAM visualization of spatial weight maps, you can replace the resnet.py under "./modules" with the resnet.py under "./weight_map_generation", and then run ```python generate_weight_map.py``` with your own hyperparameters.
|
| 129 |
+
|
| 130 |
+
For Grad-CAM visualization of correlation maps, you can replace the resnet.py under "./modules" with the resnet.py under "./corr_map_generation", and then run ```python generate_corr_map.py``` with your own hyperparameters.
|
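Concretely, the swap for the weight-map case can be done as follows (back up the original resnet.py first; paths follow the instructions above):

```bash
cp ./modules/resnet.py ./modules/resnet.py.bak            # keep the original
cp ./weight_map_generation/resnet.py ./modules/resnet.py  # swap in the Grad-CAM variant
python generate_weight_map.py
```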
| 131 |
+
|
| 132 |
+
### Test with one video input
|
| 133 |
+
Besides performing inference on datasets, we provide `test_one_video.py` to perform inference on a single video input. An example command is
|
| 134 |
+
|
| 135 |
+
`python test_one_video.py --model_path /path_to_pretrained_weights --video_path /path_to_your_video --device your_device`
|
| 136 |
+
|
| 137 |
+
The `video_path` can be the path to a video file or to a directory containing frames extracted from a video.
|
| 138 |
+
|
| 139 |
+
Acceptable parameters:
|
| 140 |
+
- `model_path`, the path to pretrained weights.
|
| 141 |
+
- `video_path`, the path to a video file or to a directory containing frames extracted from a video.
|
| 142 |
+
- `device`, the device on which to run inference, default=0.
|
| 143 |
+
- `language`, the target sign language, default='phoenix', choices=['phoenix', 'csl'].
|
| 144 |
+
- `max_frames_num`, the maximum number of frames sampled from an input video, default=360.
|
| 145 |
+
|
| 146 |
+
### Demo
|
| 147 |
+
We provide a demo that allows deploying continuous sign language recognition models locally to test their effects. The demo page is shown below.
|
| 148 |
+
<div align=center>
|
| 149 |
+
<img width="800" src="./demo.jpg"/>
|
| 150 |
+
<h4> The page of our demo</h4>
|
| 151 |
+
</div>
|
| 152 |
+
The demo video can be found at the top of this page. An example command is
|
| 153 |
+
|
| 154 |
+
`python demo.py --model_path /path_to_pretrained_weights --device your_device`
|
| 155 |
+
|
| 156 |
+
Acceptable parameters:
|
| 157 |
+
- `model_path`, the path to pretrained weights.
|
| 158 |
+
- `device`, the device on which to run inference, default=0.
|
| 159 |
+
- `language`, the target sign language, default='phoenix', choices=['phoenix', 'csl'].
|
| 160 |
+
- `max_frames_num`, the maximum number of frames sampled from an input video, default=360.
|
| 161 |
+
|
| 162 |
+
After running the command, you can visit `http://0.0.0.0:7862` to play with the demo. You can also expose it via a public URL by setting `share=True` in line 176 of `demo.py`.
|
slt_new/__init__.py
ADDED
|
File without changes
|
slt_new/comparison_checklist.md
ADDED
|
@@ -0,0 +1,144 @@
|
| 1 |
+
# Comparison Checklist: CorrNet+ vs. ASLLRP Implementation
|
| 2 |
+
|
| 3 |
+
## 1. Dataset Format Comparison ✓
|
| 4 |
+
|
| 5 |
+
### Phoenix2014 Dataset
|
| 6 |
+
- Sample counts: Train: 5672, Dev: 540, Test: 629
|
| 7 |
+
- Each sample: one full video corresponds to one gloss sequence
|
| 8 |
+
- Average sequence length: relatively long (exact figure not provided)
|
| 9 |
+
- Frame rate: 25fps
|
| 10 |
+
- Resolution: 224x224
|
| 11 |
+
|
| 12 |
+
### ASLLRP Dataset
|
| 13 |
+
- Sample counts: Train: 1073, Dev: 136, Test: 134
|
| 14 |
+
- Each sample: one full video corresponds to one gloss sequence
|
| 15 |
+
- Average sequence length: 8 glosses, averaging 110 frames/video
|
| 16 |
+
- Frame rate: 24fps
|
| 17 |
+
- Resolution: originally 256x256, resized to 224x224
|
| 18 |
+
- Gloss vocabulary: 1244 unique glosses
|
| 19 |
+
|
| 20 |
+
## 2. Data Preprocessing Comparison ✓
|
| 21 |
+
|
| 22 |
+
### Video Preprocessing
|
| 23 |
+
- **Original Phoenix2014**:
|
| 24 |
+
  - Uses CenterCrop to 224x224
|
| 25 |
+
  - No RandomCrop or RandomHorizontalFlip
|
| 26 |
+
  - TemporalRescale: min=32, max=230 frames
|
| 27 |
+
|
| 28 |
+
- **Our ASLLRP**:
|
| 29 |
+
  - Uses CenterCrop to 224x224 (same as original)
|
| 30 |
+
  - No RandomCrop or RandomHorizontalFlip (same as original)
|
| 31 |
+
  - TemporalRescale: min=32, max=230 frames (same as original)
|
| 32 |
+
  - **Issue**: ASLLRP has videos as short as 26 frames, below the min=32 limit
|
| 33 |
+
|
| 34 |
+
### Gloss Preprocessing
|
| 35 |
+
- Both convert gloss text into ID sequences
|
| 36 |
+
- Both use the same vocabulary handling
|
| 37 |
+
- Both add a blank token for CTC
|
| 38 |
+
|
| 39 |
+
## 3. Model Architecture Comparison
|
| 40 |
+
|
| 41 |
+
### Temporal Convolution Configuration
|
| 42 |
+
- **Original CorrNet+**: `conv_type=2` -> ['K5', 'P2', 'K5', 'P2']
|
| 43 |
+
  - First K5 convolution: input length reduced by 4 (kernel_size-1)
|
| 44 |
+
  - First P2 pooling: length halved
|
| 45 |
+
  - Second K5 convolution: length reduced by another 4
|
| 46 |
+
  - Second P2 pooling: length halved again
|
| 47 |
+
  - **Total downsampling rate**: about 4x (a 100-frame input yields about 23 output frames)
|
| 48 |
+
|
| 49 |
+
- **Our implementation**: uses `conv_type=1` -> ['K5', 'P2']
|
| 50 |
+
  - K5 convolution: length reduced by 4
|
| 51 |
+
  - P2 pooling: length halved
|
| 52 |
+
  - **Total downsampling rate**: about 2x (a 100-frame input yields about 48 output frames; see the sketch below)
|
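A minimal sketch of the length arithmetic above, assuming valid (unpadded) K5 convolutions and stride-2 P2 pooling:

```python
# Each 'K5' convolution shortens the sequence by kernel_size - 1 = 4;
# each 'P2' pooling halves it (floor division).
def temporal_output_length(num_frames, kernel_sizes):
    length = num_frames
    for ks in kernel_sizes:
        if ks[0] == 'K':
            length -= int(ks[1]) - 1
        elif ks[0] == 'P':
            length //= int(ks[1])
    return length

print(temporal_output_length(100, ['K5', 'P2', 'K5', 'P2']))  # 22 (the "about 23" above)
print(temporal_output_length(100, ['K5', 'P2']))              # 48
```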
| 53 |
+
|
| 54 |
+
### Temporal_LiftPool Differences
|
| 55 |
+
- **Original**:
|
| 56 |
+
```python
|
| 57 |
+
Xe = x[:,:,:T:self.kernel_size] # sample every kernel_size-th frame starting from 0
|
| 58 |
+
Xo = x[:,:,1:T:self.kernel_size] # sample every kernel_size-th frame starting from 1
|
| 59 |
+
s = torch.cat((x[:,:,:0:self.kernel_size], s, x[:,:,T::self.kernel_size]),2)
|
| 60 |
+
```
|
| 61 |
+
- Returns the concatenated sequence, preserving some information along the temporal dimension
|
| 62 |
+
|
| 63 |
+
- **Our fixed version**:
|
| 64 |
+
```python
|
| 65 |
+
Xe = x[:,:,::self.kernel_size] # sample every kernel_size-th frame starting from 0
|
| 66 |
+
Xo = x[:,:,self.kernel_size-1::self.kernel_size] # start from kernel_size-1
|
| 67 |
+
# trim both to the same length, then directly return the weighted result
|
| 68 |
+
```
|
| 69 |
+
- Returns only the downsampled result; the temporal dimension is strictly halved (see the sketch below)
|
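A minimal runnable sketch of the fixed sampling scheme (kernel_size=2); the 0.5-weighted average stands in for the learned lifting weights:

```python
import torch

x = torch.arange(10.).view(1, 1, 10)  # (N, C, T) with T = 10
k = 2

Xe = x[:, :, ::k]        # frames 0, 2, 4, ... (even positions)
Xo = x[:, :, k - 1::k]   # frames 1, 3, 5, ... (odd positions)

# Trim both halves to the same length, then fuse; the output length is
# strictly T // k, unlike the original concatenation-based variant.
L = min(Xe.size(2), Xo.size(2))
out = 0.5 * (Xe[:, :, :L] + Xo[:, :, :L])
print(out.shape)  # torch.Size([1, 1, 5])
```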
| 70 |
+
|
| 71 |
+
### ResNet Configuration
|
| 72 |
+
- **Original**: uses `AvgPool2d(7, stride=1)`, which requires 224x224 input
|
| 73 |
+
- **Our implementation**: corrected to the same configuration
|
| 74 |
+
|
| 75 |
+
### BiLSTM Configuration
|
| 76 |
+
- Identical in both: 2-layer bidirectional LSTM, hidden_size=1024
|
| 77 |
+
|
| 78 |
+
## 4. Training Configuration Comparison
|
| 79 |
+
|
| 80 |
+
### Optimizer and Learning Rate
|
| 81 |
+
- **Original Phoenix2014**:
|
| 82 |
+
  - Initial learning rate: 0.0001
|
| 83 |
+
  - LR schedule: StepLR, multiplied by 0.2 at epochs 40 and 60
|
| 84 |
+
  - Batch size: 2
|
| 85 |
+
  - Uses autocast and GradScaler for mixed-precision training
|
| 86 |
+
|
| 87 |
+
- **Our ASLLRP**:
|
| 88 |
+
  - Initial learning rate: 0.0001 (same)
|
| 89 |
+
  - LR schedule: StepLR, multiplied by 0.2 at epochs 40 and 60 (same)
|
| 90 |
+
  - Batch size: 2 (same)
|
| 91 |
+
  - **autocast and GradScaler disabled** (to debug the CTC loss issue)
|
| 92 |
+
|
| 93 |
+
### Loss Weights
|
| 94 |
+
- Identical in both:
|
| 95 |
+
```python
|
| 96 |
+
'ConvCTC': 1.0,
|
| 97 |
+
'SeqCTC': 1.0,
|
| 98 |
+
'Dist': 5.0,
|
| 99 |
+
'Cu': 0.0005,
|
| 100 |
+
'Cp': 0.0005
|
| 101 |
+
```
|
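These weights are presumably combined into the training objective as a weighted sum, along the lines of this sketch (an assumption, not the exact code):

```python
loss_weights = {'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 5.0, 'Cu': 0.0005, 'Cp': 0.0005}

def total_loss(losses, weights=loss_weights):
    # `losses` maps each term name to its scalar loss value/tensor.
    return sum(weights[name] * value for name, value in losses.items())

print(total_loss({'ConvCTC': 2.0, 'SeqCTC': 2.1, 'Dist': 0.3, 'Cu': 10.0, 'Cp': 8.0}))  # 5.609
```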
| 102 |
+
|
| 103 |
+
## 5. Decoder Comparison
|
| 104 |
+
|
| 105 |
+
### Beam Search Configuration
|
| 106 |
+
- Both use the same beam search decoder
|
| 107 |
+
- Beam width: default value (usually 10; see the sketch below)
|
| 108 |
+
- Both use the same decoding function
|
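For reference, a minimal sketch of beam search decoding with parlance/ctcdecode; the vocabulary, beam width, and tensor shapes here are illustrative:

```python
import torch
from ctcdecode import CTCBeamDecoder

vocab = ['_'] + [f'GLOSS{i}' for i in range(1, 100)]  # '_' used as the CTC blank
decoder = CTCBeamDecoder(vocab, beam_width=10, blank_id=0, log_probs_input=False)

probs = torch.softmax(torch.randn(1, 50, len(vocab)), dim=-1)  # (batch, T, num_classes)
beams, scores, timesteps, out_lens = decoder.decode(probs)
best_ids = beams[0, 0, :out_lens[0, 0]]  # token ids of the top hypothesis
print([vocab[int(i)] for i in best_ids])
```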
| 109 |
+
|
| 110 |
+
## 6. Evaluation Comparison
|
| 111 |
+
|
| 112 |
+
- Both use the same WER calculation method
|
| 113 |
+
- Both output in CTM format
|
| 114 |
+
- Same evaluation scripts
|
| 115 |
+
|
| 116 |
+
## 7. Key Difference Analysis
|
| 117 |
+
|
| 118 |
+
### Possible causes of predicting only a single word:
|
| 119 |
+
|
| 120 |
+
1. **Insufficient temporal downsampling**:
|
| 121 |
+
   - We use `conv_type=1`, so the downsampling rate is only 2x
|
| 122 |
+
   - The original uses `conv_type=2`, with a 4x downsampling rate
|
| 123 |
+
   - ASLLRP's short videos (26-110 frames) may need more aggressive downsampling
|
| 124 |
+
|
| 125 |
+
2. **Temporal_LiftPool implementation difference**:
|
| 126 |
+
   - The original returns a concatenated sequence, preserving more temporal information
|
| 127 |
+
   - Our version directly returns the downsampled result
|
| 128 |
+
   - This may weaken the temporal modeling capability
|
| 129 |
+
|
| 130 |
+
3. **Dataset characteristic differences**:
|
| 131 |
+
   - Phoenix2014: about 150 frames on average, longer sequences
|
| 132 |
+
   - ASLLRP: about 110 frames on average, but with very short videos (26 frames)
|
| 133 |
+
   - The min=32 limit in TemporalRescale may force short videos to be stretched
|
| 134 |
+
|
| 135 |
+
4. **Mixed-precision training**:
|
| 136 |
+
   - The original uses autocast and GradScaler
|
| 137 |
+
   - We disabled them, which may affect training dynamics
|
| 138 |
+
|
| 139 |
+
### Suggested directions for fixes:
|
| 140 |
+
|
| 141 |
+
1. **Change conv_type to 2**: use more aggressive temporal downsampling
|
| 141 |
+
2. **Restore the original Temporal_LiftPool**: keep temporal information intact
|
| 142 |
+
3. **Adjust the TemporalRescale parameters**: lower the min value to accommodate short videos
|
| 143 |
+
4. **Restore mixed-precision training**: re-enable after the CTC loss issue is resolved (see the sketch below)
|
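For fix 4, a minimal sketch of the re-enabled mixed-precision step, mirroring the scaler lines currently commented out in seq_scripts.py (`model`, `optimizer`, and the batch tensors come from the surrounding training loop):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

def train_step(model, optimizer, vid, vid_lgt, label, label_lgt, scaler: GradScaler):
    optimizer.zero_grad()
    with autocast():
        ret_dict = model(vid, vid_lgt)
        loss, _ = model.criterion_calculation(ret_dict, label, label_lgt)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer.optimizer)  # unscale before clipping gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
    scaler.step(optimizer.optimizer)
    scaler.update()
    return loss.item()
```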
slt_new/demo.py
ADDED
|
@@ -0,0 +1,176 @@
|
| 1 |
+
import numpy as np
|
| 2 |
+
import os
|
| 3 |
+
import glob
|
| 4 |
+
import cv2
|
| 5 |
+
from utils import video_augmentation
|
| 6 |
+
from slr_network import SLRModel
|
| 7 |
+
import torch
|
| 8 |
+
from collections import OrderedDict
|
| 9 |
+
import utils
|
| 10 |
+
from PIL import Image
|
| 11 |
+
import argparse
|
| 12 |
+
|
| 13 |
+
import numpy as np
|
| 14 |
+
VIDEO_FORMATS = [".mp4", ".avi", ".mov", ".mkv"]
|
| 15 |
+
os.environ['GRADIO_TEMP_DIR'] = 'gradio_temp'
|
| 16 |
+
import gradio as gr
|
| 17 |
+
import os
|
| 18 |
+
import warnings
|
| 19 |
+
from decord import VideoReader, cpu
|
| 20 |
+
warnings.filterwarnings("ignore")
|
| 21 |
+
|
| 22 |
+
def is_image_by_extension(file_path):
|
| 23 |
+
_, file_extension = os.path.splitext(file_path)
|
| 24 |
+
|
| 25 |
+
image_extensions = ['.jpg', '.jpeg', '.png', '.gif', '.bmp']
|
| 26 |
+
|
| 27 |
+
return file_extension.lower() in image_extensions
|
| 28 |
+
|
| 29 |
+
def load_video(video_path, max_frames_num=360):
|
| 30 |
+
if type(video_path) == str:
|
| 31 |
+
vr = VideoReader(video_path, ctx=cpu(0))
|
| 32 |
+
elif type(video_path) == list:
|
| 33 |
+
vr = VideoReader(video_path[0], ctx=cpu(0))
|
| 34 |
+
else:
|
| 35 |
+
raise ValueError(f"Not support video input : {type(video_path)}")
|
| 36 |
+
total_frame_num = len(vr)
|
| 37 |
+
if total_frame_num > max_frames_num:
|
| 38 |
+
uniform_sampled_frames = np.linspace(0, total_frame_num - 1, max_frames_num, dtype=int)
|
| 39 |
+
else:
|
| 40 |
+
uniform_sampled_frames = np.linspace(0, total_frame_num - 1, total_frame_num, dtype=int)  # keep every frame when the video is short enough
|
| 41 |
+
frame_idx = uniform_sampled_frames.tolist()
|
| 42 |
+
spare_frames = vr.get_batch(frame_idx).asnumpy()
|
| 43 |
+
return [cv2.cvtColor(tmp, cv2.COLOR_BGR2RGB) for tmp in spare_frames] # (frames, height, width, channels)
|
| 44 |
+
|
| 45 |
+
def run_inference(inputs):
|
| 46 |
+
"""
|
| 47 |
+
Run inference on one input sample.
|
| 48 |
+
|
| 49 |
+
Args:
|
| 50 |
+
inputs: A list of image paths (multi-image case) or a video file path.
|
| 51 |
+
"""
|
| 52 |
+
img_list = []
|
| 53 |
+
if isinstance(inputs, list): # Multi-image case
|
| 54 |
+
for x in inputs:
|
| 55 |
+
if is_image_by_extension(x):
|
| 56 |
+
img_list.append(cv2.cvtColor(cv2.imread(x), cv2.COLOR_BGR2RGB) )
|
| 57 |
+
|
| 58 |
+
elif os.path.splitext(inputs)[-1] in VIDEO_FORMATS: # Video case
|
| 59 |
+
try:
|
| 60 |
+
img_list = load_video(inputs, args.max_frames_num) # frames [height, width, channels]
|
| 61 |
+
except Exception as e:
|
| 62 |
+
raise ValueError(f"Error {e} in loading video")
|
| 63 |
+
else:
|
| 64 |
+
raise ValueError("Video path is incorrect!")
|
| 65 |
+
|
| 66 |
+
transform = video_augmentation.Compose([
|
| 67 |
+
video_augmentation.CenterCrop(224),
|
| 68 |
+
video_augmentation.Resize(1.0),
|
| 69 |
+
video_augmentation.ToTensor(),
|
| 70 |
+
])
|
| 71 |
+
vid, label = transform(img_list, None, None)
|
| 72 |
+
vid = vid.float() / 127.5 - 1
|
| 73 |
+
vid = vid.unsqueeze(0)
|
| 74 |
+
|
| 75 |
+
left_pad = 0
|
| 76 |
+
last_stride = 1
|
| 77 |
+
total_stride = 1
|
| 78 |
+
kernel_sizes = ['K5', "P2", 'K5', "P2"]
|
| 79 |
+
for layer_idx, ks in enumerate(kernel_sizes):
|
| 80 |
+
if ks[0] == 'K':
|
| 81 |
+
left_pad = left_pad * last_stride
|
| 82 |
+
left_pad += int((int(ks[1])-1)/2)
|
| 83 |
+
elif ks[0] == 'P':
|
| 84 |
+
last_stride = int(ks[1])
|
| 85 |
+
total_stride = total_stride * last_stride
|
| 86 |
+
|
| 87 |
+
max_len = vid.size(1)
|
| 88 |
+
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad ])
|
| 89 |
+
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
|
| 90 |
+
max_len = max_len + left_pad + right_pad
|
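# Padding arithmetic for kernel_sizes ['K5','P2','K5','P2']: total_stride ends up 4
# (the product of the two P2 strides) and left_pad ends up 6 (conv half-windows
# accumulated across strides), so the clip is padded to a multiple of total_stride
# plus 2*left_pad = 12 boundary frames before entering the temporal conv stack.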
| 91 |
+
vid = torch.cat(
|
| 92 |
+
(
|
| 93 |
+
vid[0,0][None].expand(left_pad, -1, -1, -1),
|
| 94 |
+
vid[0],
|
| 95 |
+
vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
|
| 96 |
+
)
|
| 97 |
+
, dim=0).unsqueeze(0)
|
| 98 |
+
|
| 99 |
+
vid = device.data_to_device(vid)
|
| 100 |
+
vid_lgt = device.data_to_device(video_length)
|
| 101 |
+
ret_dict = model(vid, vid_lgt, label=None, label_lgt=None)
|
| 102 |
+
return ret_dict['recognized_sents'] # [[('ICH', 0), ('LUFT', 1), ('WETTER', 2), ('GERADE', 3), ('loc-SUEDWEST', 4), ('TEMPERATUR', 5), ('__PU__', 6), ('KUEHL', 7), ('SUED', 8), ('WARM', 9), ('ICH', 10), ('IX', 11)]]
|
| 103 |
+
|
| 104 |
+
|
| 105 |
+
def parse_args():
|
| 106 |
+
"""
|
| 107 |
+
Parse command-line arguments.
|
| 108 |
+
"""
|
| 109 |
+
parser = argparse.ArgumentParser()
|
| 110 |
+
parser.add_argument("--model_path", type=str, help="The path to pretrained weights")
|
| 111 |
+
parser.add_argument("--device", type=int, default=0)
|
| 112 |
+
parser.add_argument("--language", type=str, default='phoenix', choices=['phoenix', 'csl'])
|
| 113 |
+
parser.add_argument("--max_frames_num", type=int, default=360)
|
| 114 |
+
|
| 115 |
+
return parser.parse_args()
|
| 116 |
+
|
| 117 |
+
|
| 118 |
+
if __name__ == "__main__":
|
| 119 |
+
args = parse_args()
|
| 120 |
+
|
| 121 |
+
# Load tokenizer, model and image processor
|
| 122 |
+
model_path = os.path.expanduser(args.model_path)
|
| 123 |
+
|
| 124 |
+
device_id = args.device # specify which gpu to use
|
| 125 |
+
if args.language == 'phoenix':
|
| 126 |
+
dataset = 'phoenix2014'
|
| 127 |
+
elif args.language == 'csl':
|
| 128 |
+
dataset = 'CSL-Daily'
|
| 129 |
+
else:
|
| 130 |
+
raise ValueError("Please select target language from ['phoenix', 'csl'] in your command")
|
| 131 |
+
|
| 132 |
+
model_weights = args.model_path
|
| 133 |
+
|
| 134 |
+
# Load data and apply transformation
|
| 135 |
+
dict_path = f'./preprocess/{dataset}/gloss_dict.npy' # Use the gloss dict of phoenix14 dataset
|
| 136 |
+
gloss_dict = np.load(dict_path, allow_pickle=True).item()
|
| 137 |
+
|
| 138 |
+
device = utils.GpuDataParallel()
|
| 139 |
+
device.set_device(device_id)
|
| 140 |
+
# Define model and load state-dict
|
| 141 |
+
model = SLRModel( num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
|
| 142 |
+
loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
|
| 143 |
+
state_dict = torch.load(model_weights)['model_state_dict']
|
| 144 |
+
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
|
| 145 |
+
model.load_state_dict(state_dict, strict=True)
|
| 146 |
+
model = model.to(device.output_device)
|
| 147 |
+
model.cuda()
|
| 148 |
+
|
| 149 |
+
model.eval()
|
| 150 |
+
|
| 151 |
+
def identity(x):
|
| 152 |
+
return x
|
| 153 |
+
|
| 154 |
+
with gr.Blocks(title='Continuous sign language recognition') as demo:
|
| 155 |
+
gr.Markdown("<center><font size=5>Continuous sign language recognition</center></font>")
|
| 156 |
+
gr.Markdown("**Upload multiple images or a video** to get the recognized glossess.")
|
| 157 |
+
with gr.Tab('Multi-Images'):
|
| 158 |
+
with gr.Row():
|
| 159 |
+
with gr.Column(scale=1):
|
| 160 |
+
multiple_image_show = gr.Gallery(label="Show the input images", height=200)
|
| 161 |
+
Multi_image_input = gr.UploadButton(label="Click to upload multiple images", file_types = ['.png','.jpg','.jpeg', '.bmp'], file_count = "multiple")
|
| 162 |
+
multiple_image_button = gr.Button("Run")
|
| 163 |
+
with gr.Column(scale=1):
|
| 164 |
+
multiple_image_output = gr.Textbox(label="Output")
|
| 165 |
+
with gr.Tab('Video'):
|
| 166 |
+
with gr.Row():
|
| 167 |
+
with gr.Column(scale=1):
|
| 168 |
+
Video_input = gr.Video(sources=["upload"], label="Upload a video file")
|
| 169 |
+
video_button = gr.Button("Run")
|
| 170 |
+
with gr.Column(scale=1):
|
| 171 |
+
video_output = gr.Textbox(label="Output")
|
| 172 |
+
multiple_image_button.click(identity, inputs=[Multi_image_input], outputs=multiple_image_show)
|
| 173 |
+
multiple_image_button.click(run_inference, inputs=Multi_image_input, outputs=multiple_image_output)
|
| 174 |
+
video_button.click(run_inference, inputs=Video_input, outputs=video_output)
|
| 175 |
+
|
| 176 |
+
demo.launch(share=False,server_name="0.0.0.0", server_port=7862)
|
slt_new/generate_corr_map.py
ADDED
|
@@ -0,0 +1,88 @@
|
| 1 |
+
#Ref: https://blog.csdn.net/weixin_41735859/article/details/106474768
|
| 2 |
+
import numpy as np
|
| 3 |
+
import os
|
| 4 |
+
import glob
|
| 5 |
+
import cv2
|
| 6 |
+
from utils import video_augmentation
|
| 7 |
+
from slr_network import SLRModel
|
| 8 |
+
import torch
|
| 9 |
+
from collections import OrderedDict
|
| 10 |
+
import utils
|
| 11 |
+
|
| 12 |
+
gpu_id = 0 # The GPU to use
|
| 13 |
+
dataset = 'phoenix2014' # support [phoenix2014, phoenix2014-T, CSL-Daily]
|
| 14 |
+
prefix = './dataset/phoenix2014/phoenix-2014-multisigner' # ['./dataset/CSL-Daily', './dataset/phoenix2014-T', './dataset/phoenix2014/phoenix-2014-multisigner']
|
| 15 |
+
dict_path = f'./preprocess/{dataset}/gloss_dict.npy'
|
| 16 |
+
model_weights = 'path_to_model.pt'
|
| 17 |
+
select_id = 539 # The video selected to show. 539 for 31October_2009_Saturday_tagesschau_default-8, 0 for 01April_2010_Thursday_heute_default-1, 1 for 01August_2011_Monday_heute_default-6, 2 for 01December_2011_Thursday_heute_default-3
|
| 18 |
+
#name = '01April_2010_Thursday_heute_default-1'
|
| 19 |
+
|
| 20 |
+
# Load data and apply transformation
|
| 21 |
+
gloss_dict = np.load(dict_path, allow_pickle=True).item()
|
| 22 |
+
inputs_list = np.load(f"./preprocess/{dataset}/dev_info.npy", allow_pickle=True).item()
|
| 23 |
+
name = inputs_list[select_id]['fileid']
|
| 24 |
+
print(f'Generating correlation maps for {name}')
|
| 25 |
+
img_folder = os.path.join(prefix, "features/fullFrame-256x256px/" + inputs_list[select_id]['folder']) if 'phoenix' in dataset else os.path.join(prefix, inputs_list[select_id]['folder'])
|
| 26 |
+
img_list = sorted(glob.glob(img_folder))
|
| 27 |
+
img_list = [cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB) for img_path in img_list]
|
| 28 |
+
label_list = []
|
| 29 |
+
for phase in inputs_list[select_id]['label'].split(" "):
|
| 30 |
+
if phase == '':
|
| 31 |
+
continue
|
| 32 |
+
if phase in gloss_dict.keys():
|
| 33 |
+
label_list.append(gloss_dict[phase][0])
|
| 34 |
+
transform = video_augmentation.Compose([
|
| 35 |
+
video_augmentation.CenterCrop(224),
|
| 36 |
+
video_augmentation.Resize(1.0),
|
| 37 |
+
video_augmentation.ToTensor(),
|
| 38 |
+
])
|
| 39 |
+
vid, label = transform(img_list, label_list, None)
|
| 40 |
+
vid = vid.float() / 127.5 - 1
|
| 41 |
+
vid = vid.unsqueeze(0)
|
| 42 |
+
|
| 43 |
+
left_pad = 0
|
| 44 |
+
last_stride = 1
|
| 45 |
+
total_stride = 1
|
| 46 |
+
kernel_sizes = ['K5', "P2", 'K5', "P2"]
|
| 47 |
+
for layer_idx, ks in enumerate(kernel_sizes):
|
| 48 |
+
if ks[0] == 'K':
|
| 49 |
+
left_pad = left_pad * last_stride
|
| 50 |
+
left_pad += int((int(ks[1])-1)/2)
|
| 51 |
+
elif ks[0] == 'P':
|
| 52 |
+
last_stride = int(ks[1])
|
| 53 |
+
total_stride = total_stride * last_stride
|
| 54 |
+
|
| 55 |
+
max_len = vid.size(1)
|
| 56 |
+
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad ])
|
| 57 |
+
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
|
| 58 |
+
max_len = max_len + left_pad + right_pad
|
| 59 |
+
vid = torch.cat(
|
| 60 |
+
(
|
| 61 |
+
vid[0,0][None].expand(left_pad, -1, -1, -1),
|
| 62 |
+
vid[0],
|
| 63 |
+
vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
|
| 64 |
+
)
|
| 65 |
+
, dim=0).unsqueeze(0)
|
| 66 |
+
|
| 67 |
+
fmap_block = list()
|
| 68 |
+
#grad_block = list()
|
| 69 |
+
|
| 70 |
+
device = utils.GpuDataParallel()
|
| 71 |
+
device.set_device(gpu_id)
|
| 72 |
+
# Define model and load state-dict
|
| 73 |
+
model = SLRModel( num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
|
| 74 |
+
loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
|
| 75 |
+
state_dict = torch.load(model_weights)['model_state_dict']
|
| 76 |
+
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
|
| 77 |
+
model.load_state_dict(state_dict, strict=True)
|
| 78 |
+
model = model.to(device.output_device)
|
| 79 |
+
model.cuda()
|
| 80 |
+
|
| 81 |
+
model.eval()
|
| 82 |
+
|
| 83 |
+
print(vid.shape)
|
| 84 |
+
vid = device.data_to_device(vid)
|
| 85 |
+
vid_lgt = device.data_to_device(video_length)
|
| 86 |
+
label = device.data_to_device([torch.LongTensor(label)])
|
| 87 |
+
label_lgt = device.data_to_device(torch.LongTensor([len(label_list)]))
|
| 88 |
+
ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt, dataset=dataset)
|
slt_new/generate_weight_map.py
ADDED
|
@@ -0,0 +1,136 @@
|
| 1 |
+
#Ref: https://blog.csdn.net/weixin_41735859/article/details/106474768
|
| 2 |
+
import numpy as np
|
| 3 |
+
import os
|
| 4 |
+
import glob
|
| 5 |
+
import cv2
|
| 6 |
+
from utils import video_augmentation
|
| 7 |
+
from slr_network import SLRModel
|
| 8 |
+
import torch
|
| 9 |
+
from collections import OrderedDict
|
| 10 |
+
import utils
|
| 11 |
+
|
| 12 |
+
gpu_id = 0 # The GPU to use
|
| 13 |
+
dataset = 'phoenix2014' # support [phoenix2014, phoenix2014-T, CSL-Daily]
|
| 14 |
+
prefix = './dataset/phoenix2014/phoenix-2014-multisigner' # ['./dataset/CSL-Daily', './dataset/phoenix2014-T', './dataset/phoenix2014/phoenix-2014-multisigner']
|
| 15 |
+
dict_path = f'./preprocess/{dataset}/gloss_dict.npy'
|
| 16 |
+
model_weights = 'path_to_model.pt'
|
| 17 |
+
select_id = 2 # The video selected to show. 539 for 31October_2009_Saturday_tagesschau_default-8, 0 for 01April_2010_Thursday_heute_default-1, 1 for 01August_2011_Monday_heute_default-6, 2 for 01December_2011_Thursday_heute_default-3
|
| 18 |
+
#name = '01April_2010_Thursday_heute_default-1'
|
| 19 |
+
|
| 20 |
+
# Load data and apply transformation
|
| 21 |
+
gloss_dict = np.load(dict_path, allow_pickle=True).item()
|
| 22 |
+
inputs_list = np.load(f"./preprocess/{dataset}/dev_info.npy", allow_pickle=True).item()
|
| 23 |
+
name = inputs_list[select_id]['fileid']
|
| 24 |
+
print(f'Generating CAM for {name}')
|
| 25 |
+
img_folder = os.path.join(prefix, "features/fullFrame-256x256px/" + inputs_list[select_id]['folder']) if 'phoenix' in dataset else os.path.join(prefix, inputs_list[select_id]['folder'])
|
| 26 |
+
img_list = sorted(glob.glob(img_folder))
|
| 27 |
+
img_list = [cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB) for img_path in img_list]
|
| 28 |
+
label_list = []
|
| 29 |
+
for phase in inputs_list[select_id]['label'].split(" "):
|
| 30 |
+
if phase == '':
|
| 31 |
+
continue
|
| 32 |
+
if phase in gloss_dict.keys():
|
| 33 |
+
label_list.append(gloss_dict[phase][0])
|
| 34 |
+
transform = video_augmentation.Compose([
|
| 35 |
+
video_augmentation.CenterCrop(224),
|
| 36 |
+
video_augmentation.Resize(1.0),
|
| 37 |
+
video_augmentation.ToTensor(),
|
| 38 |
+
])
|
| 39 |
+
vid, label = transform(img_list, label_list, None)
|
| 40 |
+
vid = vid.float() / 127.5 - 1
|
| 41 |
+
vid = vid.unsqueeze(0)
|
| 42 |
+
|
| 43 |
+
left_pad = 0
|
| 44 |
+
last_stride = 1
|
| 45 |
+
total_stride = 1
|
| 46 |
+
kernel_sizes = ['K5', "P2", 'K5', "P2"]
|
| 47 |
+
for layer_idx, ks in enumerate(kernel_sizes):
|
| 48 |
+
if ks[0] == 'K':
|
| 49 |
+
left_pad = left_pad * last_stride
|
| 50 |
+
left_pad += int((int(ks[1])-1)/2)
|
| 51 |
+
elif ks[0] == 'P':
|
| 52 |
+
last_stride = int(ks[1])
|
| 53 |
+
total_stride = total_stride * last_stride
|
| 54 |
+
|
| 55 |
+
max_len = vid.size(1)
|
| 56 |
+
video_length = torch.LongTensor([np.ceil(vid.size(1) / total_stride) * total_stride + 2*left_pad ])
|
| 57 |
+
right_pad = int(np.ceil(max_len / total_stride)) * total_stride - max_len + left_pad
|
| 58 |
+
max_len = max_len + left_pad + right_pad
|
| 59 |
+
vid = torch.cat(
|
| 60 |
+
(
|
| 61 |
+
vid[0,0][None].expand(left_pad, -1, -1, -1),
|
| 62 |
+
vid[0],
|
| 63 |
+
vid[0,-1][None].expand(max_len - vid.size(1) - left_pad, -1, -1, -1),
|
| 64 |
+
)
|
| 65 |
+
, dim=0).unsqueeze(0)
|
| 66 |
+
|
| 67 |
+
fmap_block = list()
|
| 68 |
+
|
| 69 |
+
device = utils.GpuDataParallel()
|
| 70 |
+
device.set_device(gpu_id)
|
| 71 |
+
# Define model and load state-dict
|
| 72 |
+
model = SLRModel( num_classes=len(gloss_dict)+1, c2d_type='resnet18', conv_type=2, use_bn=1, gloss_dict=gloss_dict,
|
| 73 |
+
loss_weights={'ConvCTC': 1.0, 'SeqCTC': 1.0, 'Dist': 25.0}, )
|
| 74 |
+
state_dict = torch.load(model_weights)['model_state_dict']
|
| 75 |
+
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
|
| 76 |
+
model.load_state_dict(state_dict, strict=True)
|
| 77 |
+
model = model.to(device.output_device)
|
| 78 |
+
model.cuda()
|
| 79 |
+
|
| 80 |
+
model.train()
|
| 81 |
+
|
| 82 |
+
def forward_hook(module, input, output):
|
| 83 |
+
fmap_block.append(output) # N, C, T, H, W
|
| 84 |
+
if 'phoenix' in dataset:
|
| 85 |
+
model.conv2d.corr2.conv_back.register_forward_hook(forward_hook)
|
| 86 |
+
else:
|
| 87 |
+
model.conv2d.corr3.conv_back.register_forward_hook(forward_hook) # For CSL-Daily
|
| 88 |
+
#model.conv2d.layer4[-1].conv1.register_backward_hook(backward_hook)
|
| 89 |
+
|
| 90 |
+
def cam_show_img(img, feature_map, grads, out_dir): # img: ntchw, feature_map: ncthw, grads: ncthw
|
| 91 |
+
N, C, T, H, W = feature_map.shape
|
| 92 |
+
cam = np.zeros(feature_map.shape[2:], dtype=np.float32) # thw
|
| 93 |
+
grads = grads[0,:].reshape([C, T, -1])
|
| 94 |
+
weights = np.mean(grads, axis=-1)
|
| 95 |
+
for i in range(C):
|
| 96 |
+
for j in range(T):
|
| 97 |
+
cam[j] += weights[i,j] * feature_map[0, i, j, :, :]
|
| 98 |
+
cam = np.maximum(cam, 0)
|
| 99 |
+
|
| 100 |
+
if not os.path.exists(out_dir):
|
| 101 |
+
os.makedirs(out_dir)
|
| 102 |
+
else:
|
| 103 |
+
import shutil
|
| 104 |
+
shutil.rmtree(out_dir)
|
| 105 |
+
os.makedirs(out_dir)
|
| 106 |
+
for i in range(T):
|
| 107 |
+
out_cam = cam[i]
|
| 108 |
+
out_cam = out_cam - np.min(out_cam)
|
| 109 |
+
out_cam = out_cam / (1e-7 + out_cam.max())
|
| 110 |
+
out_cam = cv2.resize(out_cam, (img.shape[3], img.shape[4]))
|
| 111 |
+
out_cam = (255 * out_cam).astype(np.uint8)
|
| 112 |
+
heatmap = cv2.applyColorMap(out_cam, cv2.COLORMAP_JET)
|
| 113 |
+
cam_img = np.float32(heatmap) / 255 + (img[0,i]/2+0.5).permute(1,2,0).cpu().data.numpy()
|
| 114 |
+
cam_img = cam_img/np.max(cam_img)
|
| 115 |
+
cam_img = np.uint8(255 * cam_img)
|
| 116 |
+
path_cam_img = os.path.join(out_dir, f"cam_{i}.jpg")
|
| 117 |
+
cv2.imwrite(path_cam_img, cam_img)
|
| 118 |
+
print('Generate cam.jpg')
|
| 119 |
+
|
| 120 |
+
print(vid.shape)
|
| 121 |
+
vid = device.data_to_device(vid)
|
| 122 |
+
vid_lgt = device.data_to_device(video_length)
|
| 123 |
+
label = device.data_to_device([torch.LongTensor(label)])
|
| 124 |
+
label_lgt = device.data_to_device(torch.LongTensor([len(label_list)]))
|
| 125 |
+
ret_dict = model(vid, vid_lgt, label=label, label_lgt=label_lgt)
|
| 126 |
+
|
| 127 |
+
model.zero_grad()
|
| 128 |
+
for i in range(ret_dict['sequence_logits'].size(0)):
|
| 129 |
+
idx = np.argmax(ret_dict['sequence_logits'].cpu().data.numpy()[i,0]) #TBC
|
| 130 |
+
class_loss = ret_dict['sequence_logits'][i, 0, idx]
|
| 131 |
+
class_loss.backward(retain_graph=True)
|
| 132 |
+
# Generate the CAM
|
| 133 |
+
grads_val = torch.load('./weight_map.pth').cpu().data.numpy()
|
| 134 |
+
fmap = fmap_block[0].cpu().data.numpy()
|
| 135 |
+
# Save the CAM images
|
| 136 |
+
cam_show_img(vid, fmap, grads_val, out_dir='./agg_map')
|
slt_new/main.py
ADDED
|
@@ -0,0 +1,329 @@
|
| 1 |
+
import os
|
| 2 |
+
|
| 3 |
+
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
|
| 4 |
+
import pdb
|
| 5 |
+
import sys
|
| 6 |
+
import cv2
|
| 7 |
+
import yaml
|
| 8 |
+
import torch
|
| 9 |
+
import random
|
| 10 |
+
import importlib
|
| 11 |
+
import faulthandler
|
| 12 |
+
import numpy as np
|
| 13 |
+
import torch.nn as nn
|
| 14 |
+
import shutil
|
| 15 |
+
import inspect
|
| 16 |
+
import time
|
| 17 |
+
from collections import OrderedDict
|
| 18 |
+
|
| 19 |
+
faulthandler.enable()
|
| 20 |
+
import utils
|
| 21 |
+
from modules.sync_batchnorm import convert_model
|
| 22 |
+
from seq_scripts import seq_train, seq_eval, seq_feature_generation
|
| 23 |
+
from torch.cuda.amp import autocast as autocast
|
| 24 |
+
|
| 25 |
+
class Processor():
|
| 26 |
+
def __init__(self, arg):
|
| 27 |
+
self.arg = arg
|
| 28 |
+
if os.path.exists(self.arg.work_dir):
|
| 29 |
+
# Auto-remove for non-interactive mode
|
| 30 |
+
print(f'Work dir {self.arg.work_dir} exists, removing...')
|
| 31 |
+
shutil.rmtree(self.arg.work_dir)
|
| 32 |
+
os.makedirs(self.arg.work_dir)
|
| 33 |
+
else:
|
| 34 |
+
os.makedirs(self.arg.work_dir)
|
| 35 |
+
shutil.copy2(__file__, self.arg.work_dir)
|
| 36 |
+
shutil.copy2('./configs/baseline.yaml', self.arg.work_dir)
|
| 37 |
+
shutil.copy2('./modules/tconv.py', self.arg.work_dir)
|
| 38 |
+
shutil.copy2('./modules/resnet.py', self.arg.work_dir)
|
| 39 |
+
self.recoder = utils.Recorder(self.arg.work_dir, self.arg.print_log, self.arg.log_interval)
|
| 40 |
+
self.save_arg()
|
| 41 |
+
if self.arg.random_fix:
|
| 42 |
+
self.rng = utils.RandomState(seed=self.arg.random_seed)
|
| 43 |
+
self.device = utils.GpuDataParallel()
|
| 44 |
+
self.recoder = utils.Recorder(self.arg.work_dir, self.arg.print_log, self.arg.log_interval)
|
| 45 |
+
self.dataset = {}
|
| 46 |
+
self.data_loader = {}
|
| 47 |
+
self.gloss_dict = np.load(self.arg.dataset_info['dict_path'], allow_pickle=True).item()
|
| 48 |
+
# Check if gloss_dict contains blank token
|
| 49 |
+
has_blank = any('blank' in str(k).lower() for k in self.gloss_dict.keys())
|
| 50 |
+
# If blank is not in dict, add 1 for blank token (like Phoenix2014)
|
| 51 |
+
# If blank is in dict, use dict length as is (like ASLLRP)
|
| 52 |
+
self.arg.model_args['num_classes'] = len(self.gloss_dict) if has_blank else len(self.gloss_dict) + 1
|
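# e.g., a dict of K glosses with no explicit blank entry yields K + 1 classes
# (one extra for the CTC blank), while a dict that already reserves a blank
# yields K classes unchanged.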
| 53 |
+
self.model, self.optimizer = self.loading()
|
| 54 |
+
|
| 55 |
+
def start(self):
|
| 56 |
+
if self.arg.phase == 'train':
|
| 57 |
+
best_dev = 100.0
|
| 58 |
+
best_epoch = 0
|
| 59 |
+
total_time = 0
|
| 60 |
+
epoch_time = 0
|
| 61 |
+
self.recoder.print_log('Parameters:\n{}\n'.format(str(vars(self.arg))))
|
| 62 |
+
seq_model_list = []
|
| 63 |
+
for epoch in range(self.arg.optimizer_args['start_epoch'], self.arg.num_epoch):
|
| 64 |
+
save_model = epoch % self.arg.save_interval == 0
|
| 65 |
+
eval_model = epoch % self.arg.eval_interval == 0
|
| 66 |
+
epoch_time = time.time()
|
| 67 |
+
# train end2end model
|
| 68 |
+
seq_train(self.data_loader['train'], self.model, self.optimizer,
|
| 69 |
+
self.device, epoch, self.recoder)
|
| 70 |
+
if eval_model:
|
| 71 |
+
dev_wer = seq_eval(self.arg, self.data_loader['dev'], self.model, self.device,
|
| 72 |
+
'dev', epoch, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
|
| 73 |
+
self.recoder.print_log("Dev WER: {:05.2f}%".format(dev_wer))
|
| 74 |
+
if dev_wer < best_dev:
|
| 75 |
+
best_dev = dev_wer
|
| 76 |
+
best_epoch = epoch
|
| 77 |
+
model_path = "{}_best_model.pt".format(self.arg.work_dir)
|
| 78 |
+
self.save_model(epoch, model_path)
|
| 79 |
+
self.recoder.print_log('Save best model')
|
| 80 |
+
self.recoder.print_log('Best_dev: {:05.2f}, Epoch : {}'.format(best_dev, best_epoch))
|
| 81 |
+
if save_model:
|
| 82 |
+
model_path = "{}dev_{:05.2f}_epoch{}_model.pt".format(self.arg.work_dir, dev_wer, epoch)
|
| 83 |
+
seq_model_list.append(model_path)
|
| 84 |
+
print("seq_model_list", seq_model_list)
|
| 85 |
+
self.save_model(epoch, model_path)
|
| 86 |
+
epoch_time = time.time() - epoch_time
|
| 87 |
+
total_time += epoch_time
|
| 88 |
+
torch.cuda.empty_cache()
|
| 89 |
+
self.recoder.print_log('Epoch {} costs {} mins {} seconds'.format(epoch, int(epoch_time)//60, int(epoch_time)%60))
|
| 90 |
+
self.recoder.print_log('Training costs {} hours {} mins {} seconds'.format(int(total_time)//60//60, int(total_time)//60%60, int(total_time)%60))
|
| 91 |
+
elif self.arg.phase == 'test':
|
| 92 |
+
if self.arg.load_weights is None and self.arg.load_checkpoints is None:
|
| 93 |
+
print('Please specify --load-weights.')
|
| 94 |
+
self.recoder.print_log('Model: {}.'.format(self.arg.model))
|
| 95 |
+
self.recoder.print_log('Weights: {}.'.format(self.arg.load_weights))
|
| 96 |
+
# train_wer = seq_eval(self.arg, self.data_loader["train_eval"], self.model, self.device,
|
| 97 |
+
# "train", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
|
| 98 |
+
dev_wer = seq_eval(self.arg, self.data_loader["dev"], self.model, self.device,
|
| 99 |
+
"dev", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
|
| 100 |
+
test_wer = seq_eval(self.arg, self.data_loader["test"], self.model, self.device,
|
| 101 |
+
"test", 6667, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
|
| 102 |
+
self.recoder.print_log('Evaluation Done.\n')
|
| 103 |
+
elif self.arg.phase == "features":
|
| 104 |
+
for mode in ["train", "dev", "test"]:
|
| 105 |
+
seq_feature_generation(
|
| 106 |
+
self.data_loader[mode + "_eval" if mode == "train" else mode],
|
| 107 |
+
self.model, self.device, mode, self.arg.work_dir, self.recoder
|
| 108 |
+
)
|
| 109 |
+
elif self.arg.phase == 'finetune':
|
| 110 |
+
best_dev = 100.0
|
| 111 |
+
best_epoch = 0
|
| 112 |
+
total_time = 0
|
| 113 |
+
epoch_time = 0
|
| 114 |
+
self.recoder.print_log('Parameters:\n{}\n'.format(str(vars(self.arg))))
|
| 115 |
+
seq_model_list = []
|
| 116 |
+
for name, m in self.model.conv2d.named_modules():
|
| 117 |
+
m.requires_grad = False
|
| 118 |
+
for name, m in self.model.conv1d.named_modules():
|
| 119 |
+
if 'fc' not in name:
|
| 120 |
+
m.requires_grad = False
|
| 121 |
+
for name, m in self.model.temporal_model.named_modules():
|
| 122 |
+
m.requires_grad = False
|
| 123 |
+
from slr_network import NormLinear
|
| 124 |
+
self.model.classifier = NormLinear(1024, len(self.gloss_dict) + 1).cuda()
|
| 125 |
+
self.model.conv1d.fc = self.model.classifier
|
| 126 |
+
|
| 127 |
+
for epoch in range(self.arg.optimizer_args['start_epoch'], self.arg.num_epoch):
|
| 128 |
+
save_model = epoch % self.arg.save_interval == 0
|
| 129 |
+
eval_model = epoch % self.arg.eval_interval == 0
|
| 130 |
+
epoch_time = time.time()
|
| 131 |
+
# train end2end model
|
| 132 |
+
seq_train(self.data_loader['train'], self.model, self.optimizer,
|
| 133 |
+
self.device, epoch, self.recoder)
|
| 134 |
+
if eval_model:
|
| 135 |
+
dev_wer = seq_eval(self.arg, self.data_loader['dev'], self.model, self.device,
|
| 136 |
+
'dev', epoch, self.arg.work_dir, self.recoder, self.arg.evaluate_tool)
|
| 137 |
+
self.recoder.print_log("Dev WER: {:05.2f}%".format(dev_wer))
|
| 138 |
+
if dev_wer < best_dev:
|
| 139 |
+
best_dev = dev_wer
|
| 140 |
+
best_epoch = epoch
|
| 141 |
+
model_path = "{}_best_model.pt".format(self.arg.work_dir)
|
| 142 |
+
self.save_model(epoch, model_path)
|
| 143 |
+
self.recoder.print_log('Save best model')
|
| 144 |
+
self.recoder.print_log('Best_dev: {:05.2f}, Epoch : {}'.format(best_dev, best_epoch))
|
| 145 |
+
if save_model:
|
| 146 |
+
model_path = "{}dev_{:05.2f}_epoch{}_model.pt".format(self.arg.work_dir, dev_wer, epoch)
|
| 147 |
+
seq_model_list.append(model_path)
|
| 148 |
+
print("seq_model_list", seq_model_list)
|
| 149 |
+
self.save_model(epoch, model_path)
|
| 150 |
+
epoch_time = time.time() - epoch_time
|
| 151 |
+
total_time += epoch_time
|
| 152 |
+
torch.cuda.empty_cache()
|
| 153 |
+
self.recoder.print_log('Epoch {} costs {} mins {} seconds'.format(epoch, int(epoch_time)//60, int(epoch_time)%60))
|
| 154 |
+
self.recoder.print_log('Training costs {} hours {} mins {} seconds'.format(int(total_time)//60//60, int(total_time)//60%60, int(total_time)%60))
|
| 155 |
+
|
| 156 |
+
def save_arg(self):
|
| 157 |
+
arg_dict = vars(self.arg)
|
| 158 |
+
if not os.path.exists(self.arg.work_dir):
|
| 159 |
+
os.makedirs(self.arg.work_dir)
|
| 160 |
+
with open('{}/config.yaml'.format(self.arg.work_dir), 'w') as f:
|
| 161 |
+
yaml.dump(arg_dict, f)
|
| 162 |
+
|
| 163 |
+
def save_model(self, epoch, save_path):
|
| 164 |
+
torch.save({
|
| 165 |
+
'epoch': epoch,
|
| 166 |
+
'model_state_dict': self.model.state_dict(),
|
| 167 |
+
'optimizer_state_dict': self.optimizer.state_dict(),
|
| 168 |
+
'scheduler_state_dict': self.optimizer.scheduler.state_dict(),
|
| 169 |
+
'rng_state': self.rng.save_rng_state(),
|
| 170 |
+
}, save_path)
|
| 171 |
+
|
| 172 |
+
def loading(self):
|
| 173 |
+
self.device.set_device(self.arg.device)
|
| 174 |
+
print("Loading model")
|
| 175 |
+
model_class = import_class(self.arg.model)
|
| 176 |
+
model = model_class(
|
| 177 |
+
**self.arg.model_args,
|
| 178 |
+
gloss_dict=self.gloss_dict,
|
| 179 |
+
loss_weights=self.arg.loss_weights,
|
| 180 |
+
)
|
| 181 |
+
shutil.copy2(inspect.getfile(model_class), self.arg.work_dir)
|
| 182 |
+
optimizer = utils.Optimizer(model, self.arg.optimizer_args)
|
| 183 |
+
|
| 184 |
+
if self.arg.load_weights:
|
| 185 |
+
self.load_model_weights(model, self.arg.load_weights)
|
| 186 |
+
elif self.arg.load_checkpoints:
|
| 187 |
+
self.load_checkpoint_weights(model, optimizer)
|
| 188 |
+
model = self.model_to_device(model)
|
| 189 |
+
# Handle DataParallel wrapper
|
| 190 |
+
if isinstance(model, nn.DataParallel):
|
| 191 |
+
self.kernel_sizes = model.module.conv1d.kernel_size
|
| 192 |
+
else:
|
| 193 |
+
self.kernel_sizes = model.conv1d.kernel_size
|
| 194 |
+
print("Loading model finished.")
|
| 195 |
+
self.load_data()
|
| 196 |
+
return model, optimizer
|
| 197 |
+
|
| 198 |
+
def model_to_device(self, model):
|
| 199 |
+
model = model.to(self.device.output_device)
|
| 200 |
+
if len(self.device.gpu_list) > 1:
|
| 201 |
+
# Use DataParallel for multi-GPU training
|
| 202 |
+
model = nn.DataParallel(model, device_ids=self.device.gpu_list, output_device=self.device.output_device)
|
| 203 |
+
print(f"Using DataParallel on GPUs: {self.device.gpu_list}")
|
| 204 |
+
model = convert_model(model)
|
| 205 |
+
model.cuda()
|
| 206 |
+
return model
|
| 207 |
+
|
| 208 |
+
def load_model_weights(self, model, weight_path):
|
| 209 |
+
state_dict = torch.load(weight_path)
|
| 210 |
+
if len(self.arg.ignore_weights):
|
| 211 |
+
for w in self.arg.ignore_weights:
|
| 212 |
+
if state_dict.pop(w, None) is not None:
|
| 213 |
+
print('Successfully Remove Weights: {}.'.format(w))
|
| 214 |
+
else:
|
| 215 |
+
print('Can Not Remove Weights: {}.'.format(w))
|
| 216 |
+
weights = self.modified_weights(state_dict['model_state_dict'], False)
|
| 217 |
+
# weights = self.modified_weights(state_dict['model_state_dict'])
|
| 218 |
+
model.load_state_dict(weights, strict=True)
|
| 219 |
+
|
| 220 |
+
@staticmethod
|
| 221 |
+
def modified_weights(state_dict, modified=False):
|
| 222 |
+
state_dict = OrderedDict([(k.replace('.module', ''), v) for k, v in state_dict.items()])
|
| 223 |
+
if not modified:
|
| 224 |
+
return state_dict
|
| 225 |
+
modified_dict = dict()
|
| 226 |
+
return modified_dict
|
| 227 |
+
|
| 228 |
+
def load_checkpoint_weights(self, model, optimizer):
|
| 229 |
+
self.load_model_weights(model, self.arg.load_checkpoints)
|
| 230 |
+
state_dict = torch.load(self.arg.load_checkpoints)
|
| 231 |
+
|
| 232 |
+
if len(torch.cuda.get_rng_state_all()) == len(state_dict['rng_state']['cuda']):
|
| 233 |
+
print("Loading random seeds...")
|
| 234 |
+
self.rng.set_rng_state(state_dict['rng_state'])
|
| 235 |
+
if "optimizer_state_dict" in state_dict.keys():
|
| 236 |
+
print("Loading optimizer parameters...")
|
| 237 |
+
optimizer.load_state_dict(state_dict["optimizer_state_dict"])
|
| 238 |
+
optimizer.to(self.device.output_device)
|
| 239 |
+
if "scheduler_state_dict" in state_dict.keys():
|
| 240 |
+
print("Loading scheduler parameters...")
|
| 241 |
+
optimizer.scheduler.load_state_dict(state_dict["scheduler_state_dict"])
|
| 242 |
+
|
| 243 |
+
self.arg.optimizer_args['start_epoch'] = state_dict["epoch"] + 1
|
| 244 |
+
self.recoder.print_log("Resuming from checkpoint: epoch {self.arg.optimizer_args['start_epoch']}")
|
| 245 |
+
|
| 246 |
+
def load_data(self):
|
| 247 |
+
print("Loading data")
|
| 248 |
+
from tqdm import tqdm
|
| 249 |
+
self.feeder = import_class(self.arg.feeder)
|
| 250 |
+
shutil.copy2(inspect.getfile(self.feeder), self.arg.work_dir)
|
| 251 |
+
if self.arg.dataset == 'CSL':
|
| 252 |
+
dataset_list = zip(["train", "dev"], [True, False])
|
| 253 |
+
elif 'phoenix' in self.arg.dataset:
|
| 254 |
+
dataset_list = zip(["train", "dev", "test"], [True, False, False])
|
| 255 |
+
elif self.arg.dataset == 'CSL-Daily':
|
| 256 |
+
dataset_list = zip(["train", "dev", "test"], [True, False, False])
|
| 257 |
+
elif self.arg.dataset == 'ASLLRP':
|
| 258 |
+
dataset_list = zip(["train", "dev", "test"], [True, False, False])
|
| 259 |
+
|
| 260 |
+
dataset_list = list(dataset_list)
|
| 261 |
+
for idx, (mode, train_flag) in enumerate(tqdm(dataset_list, desc="Creating data loaders")):
|
| 262 |
+
arg = self.arg.feeder_args
|
| 263 |
+
arg["prefix"] = self.arg.dataset_info['dataset_root']
|
| 264 |
+
arg["mode"] = mode.split("_")[0]
|
| 265 |
+
arg["transform_mode"] = train_flag
|
| 266 |
+
self.dataset[mode] = self.feeder(gloss_dict=self.gloss_dict, kernel_size= self.kernel_sizes, dataset=self.arg.dataset, **arg)
|
| 267 |
+
print(f" Building DataLoader for {mode} set...")
|
| 268 |
+
self.data_loader[mode] = self.build_dataloader(self.dataset[mode], mode, train_flag)
|
| 269 |
+
print("Loading data finished.")
|
| 270 |
+
def init_fn(self, worker_id):
|
| 271 |
+
np.random.seed(int(self.arg.random_seed)+worker_id)
|
| 272 |
+
def build_dataloader(self, dataset, mode, train_flag):
|
| 273 |
+
print(f" Initializing {self.arg.num_worker} workers for {mode} DataLoader...")
|
| 274 |
+
loader = torch.utils.data.DataLoader(
|
| 275 |
+
dataset,
|
| 276 |
+
batch_size=self.arg.batch_size if mode == "train" else self.arg.test_batch_size,
|
| 277 |
+
shuffle=train_flag,
|
| 278 |
+
drop_last=train_flag,
|
| 279 |
+
num_workers=self.arg.num_worker, # if train_flag else 0
|
| 280 |
+
collate_fn=self.feeder.collate_fn,
|
| 281 |
+
pin_memory=True,
|
| 282 |
+
worker_init_fn=self.init_fn,
|
| 283 |
+
persistent_workers=True if self.arg.num_worker > 0 else False, # Keep workers alive
|
| 284 |
+
prefetch_factor=2, # Prefetch batches
|
| 285 |
+
)
|
| 286 |
+
|
| 287 |
+
# Force worker initialization by accessing first batch
|
| 288 |
+
if self.arg.num_worker > 0:
|
| 289 |
+
print(f" Warming up workers...")
|
| 290 |
+
import time
|
| 291 |
+
start_time = time.time()
|
| 292 |
+
try:
|
| 293 |
+
_ = next(iter(loader))
|
| 294 |
+
print(f" Workers initialized in {time.time() - start_time:.1f}s")
|
| 295 |
+
except StopIteration:
|
| 296 |
+
pass
|
| 297 |
+
|
| 298 |
+
return loader
|
| 299 |
+
|
| 300 |
+
|
| 301 |
+
def import_class(name):
|
| 302 |
+
components = name.rsplit('.', 1)
|
| 303 |
+
mod = importlib.import_module(components[0])
|
| 304 |
+
mod = getattr(mod, components[1])
|
| 305 |
+
return mod
|
| 306 |
+
|
| 307 |
+
|
| 308 |
+
if __name__ == '__main__':
|
| 309 |
+
sparser = utils.get_parser()
|
| 310 |
+
p = sparser.parse_args()
|
| 311 |
+
# p.config = "baseline_iter.yaml"
|
| 312 |
+
if p.config is not None:
|
| 313 |
+
with open(p.config, 'r') as f:
|
| 314 |
+
try:
|
| 315 |
+
default_arg = yaml.load(f, Loader=yaml.FullLoader)
|
| 316 |
+
except AttributeError:
|
| 317 |
+
default_arg = yaml.load(f)
|
| 318 |
+
key = vars(p).keys()
|
| 319 |
+
for k in default_arg.keys():
|
| 320 |
+
if k not in key:
|
| 321 |
+
print('WRONG ARG: {}'.format(k))
|
| 322 |
+
assert (k in key)
|
| 323 |
+
sparser.set_defaults(**default_arg)
|
| 324 |
+
args = sparser.parse_args()
|
| 325 |
+
with open(f"./configs/{args.dataset}.yaml", 'r') as f:
|
| 326 |
+
args.dataset_info = yaml.load(f, Loader=yaml.FullLoader)
|
| 327 |
+
processor = Processor(args)
|
| 328 |
+
utils.pack_code("./", args.work_dir)
|
| 329 |
+
processor.start()
|
slt_new/requirements.txt
ADDED
|
@@ -0,0 +1,9 @@
|
| 1 |
+
matplotlib==3.4.3
|
| 2 |
+
numpy==1.20.3
|
| 3 |
+
opencv_python==4.5.5.64
|
| 4 |
+
pandas==1.3.4
|
| 5 |
+
Pillow==9.4.0
|
| 6 |
+
PyYAML==6.0
|
| 7 |
+
scipy==1.7.1
|
| 8 |
+
six==1.16.0
|
| 9 |
+
tqdm==4.62.3
|
slt_new/seq_scripts.py
ADDED
|
@@ -0,0 +1,166 @@
|
| 1 |
+
import os
|
| 2 |
+
import pdb
|
| 3 |
+
import sys
|
| 4 |
+
import copy
|
| 5 |
+
import torch
|
| 6 |
+
import numpy as np
|
| 7 |
+
import torch.nn as nn
|
| 8 |
+
from tqdm import tqdm
|
| 9 |
+
import torch.nn.functional as F
|
| 10 |
+
import matplotlib.pyplot as plt
|
| 11 |
+
from evaluation.slr_eval.wer_calculation import evaluate
|
| 12 |
+
#from torch.cuda.amp import autocast as autocast
|
| 13 |
+
#from torch.cuda.amp import GradScaler
|
| 14 |
+
import gc
|
| 15 |
+
|
| 16 |
+
def seq_train(loader, model, optimizer, device, epoch_idx, recoder):
|
| 17 |
+
model.train()
|
| 18 |
+
loss_value = []
|
| 19 |
+
clr = [group['lr'] for group in optimizer.optimizer.param_groups]
|
| 20 |
+
#scaler = GradScaler()
|
| 21 |
+
pbar = tqdm(loader)
|
| 22 |
+
for batch_idx, data in enumerate(pbar):
|
| 23 |
+
vid = device.data_to_device(data[0])
|
| 24 |
+
vid_lgt = device.data_to_device(data[1])
|
| 25 |
+
label = device.data_to_device(data[2])
|
| 26 |
+
label_lgt = device.data_to_device(data[3])
|
| 27 |
+
optimizer.zero_grad()
|
| 28 |
+
#with autocast():
|
| 29 |
+
ret_dict = model(vid, vid_lgt)
|
| 30 |
+
# criterion_calculation should be called outside DataParallel
|
| 31 |
+
loss, _ = model.module.criterion_calculation(ret_dict, label, label_lgt) if isinstance(model, nn.DataParallel) else model.criterion_calculation(ret_dict, label, label_lgt)
|
| 32 |
+
if np.isinf(loss.item()) or np.isnan(loss.item()):
|
| 33 |
+
pdb.set_trace()
|
| 34 |
+
|
| 35 |
+
# Normal training step
|
| 36 |
+
#scaler.scale(loss).backward()
|
| 37 |
+
loss.backward()
|
| 38 |
+
#scaler.unscale_(optimizer.optimizer)
|
| 39 |
+
torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
|
| 40 |
+
#scaler.step(optimizer.optimizer)
|
| 41 |
+
optimizer.optimizer.step()
|
| 42 |
+
#scaler.update()
|
| 43 |
+
loss_value.append(loss.item())
|
| 44 |
+
if batch_idx % recoder.log_interval == 0:
|
| 45 |
+
recoder.print_log(
|
| 46 |
+
'\tEpoch: {}, Batch({}/{}) done. Loss: {:.8f} lr:{:.6f}'
|
| 47 |
+
.format(epoch_idx, batch_idx, len(loader), loss.item(), clr[0]))
|
| 48 |
+
del ret_dict
|
| 49 |
+
del loss
|
| 50 |
+
optimizer.scheduler.step()
|
| 51 |
+
recoder.print_log('\tMean training loss: {:.10f}.'.format(np.mean(loss_value)))
|
| 52 |
+
del loss_value
|
| 53 |
+
del clr
|
| 54 |
+
gc.collect()
|
| 55 |
+
torch.cuda.empty_cache()
|
| 56 |
+
return
|
| 57 |
+
|
| 58 |
+
|
| 59 |
+
def seq_eval(cfg, loader, model, device, mode, epoch, work_dir, recoder,
|
| 60 |
+
evaluate_tool="python"):
|
| 61 |
+
model.eval()
|
| 62 |
+
total_sent = []
|
| 63 |
+
total_info = []
|
| 64 |
+
total_conv_sent = []
|
| 65 |
+
stat = {i: [0, 0] for i in range(len(loader.dataset.dict))}
|
| 66 |
+
for batch_idx, data in enumerate(tqdm(loader)):
|
| 67 |
+
recoder.record_timer("device")
|
| 68 |
+
vid = device.data_to_device(data[0])
|
| 69 |
+
vid_lgt = device.data_to_device(data[1])
|
| 70 |
+
label = device.data_to_device(data[2])
|
| 71 |
+
label_lgt = device.data_to_device(data[3])
|
| 72 |
+
with torch.no_grad():
|
| 73 |
+
ret_dict = model(vid, vid_lgt)
|
| 74 |
+
|
| 75 |
+
total_info += [file_name.split("|")[0] for file_name in data[-1]]
|
| 76 |
+
total_sent += ret_dict['recognized_sents']
|
| 77 |
+
total_conv_sent += ret_dict['conv_sents']
|
| 78 |
+
try:
|
| 79 |
+
python_eval = True if evaluate_tool == "python" else False
|
| 80 |
+
write2file(work_dir + "output-hypothesis-{}.ctm".format(mode), total_info, total_sent)
|
| 81 |
+
write2file(work_dir + "output-hypothesis-{}-conv.ctm".format(mode), total_info,
|
| 82 |
+
total_conv_sent)
|
| 83 |
+
conv_ret = evaluate(
|
| 84 |
+
prefix=work_dir, mode=mode, output_file="output-hypothesis-{}-conv.ctm".format(mode),
|
| 85 |
+
evaluate_dir=cfg.dataset_info['evaluation_dir'],
|
| 86 |
+
evaluate_prefix=cfg.dataset_info['evaluation_prefix'],
|
| 87 |
+
output_dir="epoch_{}_result/".format(epoch),
|
| 88 |
+
python_evaluate=python_eval,
|
| 89 |
+
)
|
| 90 |
+
lstm_ret = evaluate(
|
| 91 |
+
prefix=work_dir, mode=mode, output_file="output-hypothesis-{}.ctm".format(mode),
|
| 92 |
+
evaluate_dir=cfg.dataset_info['evaluation_dir'],
|
| 93 |
+
evaluate_prefix=cfg.dataset_info['evaluation_prefix'],
|
| 94 |
+
output_dir="epoch_{}_result/".format(epoch),
|
| 95 |
+
python_evaluate=python_eval,
|
| 96 |
+
triplet=True,
|
| 97 |
+
)
|
| 98 |
+
except:
|
| 99 |
+
print("Unexpected error:", sys.exc_info()[0])
|
| 100 |
+
lstm_ret = 100.0
|
| 101 |
+
finally:
|
| 102 |
+
pass
|
| 103 |
+
if 'conv_ret' in locals():
|
| 104 |
+
del conv_ret
|
| 105 |
+
del total_sent
|
| 106 |
+
del total_info
|
| 107 |
+
del total_conv_sent
|
| 108 |
+
del vid
|
| 109 |
+
del vid_lgt
|
| 110 |
+
del label
|
| 111 |
+
del label_lgt
|
| 112 |
+
gc.collect()
|
| 113 |
+
recoder.print_log(f"Epoch {epoch}, {mode} {lstm_ret: 2.2f}%", f"{work_dir}/{mode}.txt")
|
| 114 |
+
return lstm_ret
|
| 115 |
+
|
| 116 |
+
|
| 117 |
+
def seq_feature_generation(loader, model, device, mode, work_dir, recoder):
|
| 118 |
+
model.eval()
|
| 119 |
+
|
| 120 |
+
src_path = os.path.abspath(f"{work_dir}{mode}")
|
| 121 |
+
tgt_path = os.path.abspath(f"./features/{mode}")
|
| 122 |
+
if not os.path.exists("./features/"):
|
| 123 |
+
os.makedirs("./features/")
|
| 124 |
+
|
| 125 |
+
if os.path.islink(tgt_path):
|
| 126 |
+
curr_path = os.readlink(tgt_path)
|
| 127 |
+
if work_dir[1:] in curr_path and os.path.isabs(curr_path):
|
| 128 |
+
return
|
| 129 |
+
else:
|
| 130 |
+
os.unlink(tgt_path)
|
| 131 |
+
else:
|
| 132 |
+
if os.path.exists(src_path) and len(loader.dataset) == len(os.listdir(src_path)):
|
| 133 |
+
os.symlink(src_path, tgt_path)
|
| 134 |
+
return
|
| 135 |
+
|
| 136 |
+
for batch_idx, data in tqdm(enumerate(loader)):
|
| 137 |
+
recoder.record_timer("device")
|
| 138 |
+
vid = device.data_to_device(data[0])
|
| 139 |
+
vid_lgt = device.data_to_device(data[1])
|
| 140 |
+
with torch.no_grad():
|
| 141 |
+
ret_dict = model(vid, vid_lgt)
|
| 142 |
+
if not os.path.exists(src_path):
|
| 143 |
+
os.makedirs(src_path)
|
| 144 |
+
start = 0
|
| 145 |
+
for sample_idx in range(len(vid)):
|
| 146 |
+
end = start + data[3][sample_idx]
|
| 147 |
+
filename = f"{src_path}/{data[-1][sample_idx].split('|')[0]}_features.npy"
|
| 148 |
+
save_file = {
|
| 149 |
+
"label": data[2][start:end],
|
| 150 |
+
"features": ret_dict['framewise_features'][sample_idx][:, :vid_lgt[sample_idx]].T.cpu().detach(),
|
| 151 |
+
}
|
| 152 |
+
np.save(filename, save_file)
|
| 153 |
+
start = end
|
| 154 |
+
assert end == len(data[2])
|
| 155 |
+
os.symlink(src_path, tgt_path)
|
| 156 |
+
|
| 157 |
+
|
def write2file(path, info, output):
    # Write one CTM-style line per gloss: <utt-id> 1 <start> <end> <gloss>,
    # using dummy 0.01 s time slots since frame-level timing is not available
    with open(path, "w") as f:
        for sample_idx, sample in enumerate(output):
            for word_idx, word in enumerate(sample):
                f.write(
                    "{} 1 {:.2f} {:.2f} {}\n".format(info[sample_idx],
                                                     word_idx * 1.0 / 100,
                                                     (word_idx + 1) * 1.0 / 100,
                                                     word[0]))
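For concreteness, here is a hypothetical call to write2file and the lines it would emit; the utterance id and glosses are invented, and the decoder entries are assumed to be pairs whose first element is the gloss:

# Hypothetical usage sketch of write2file; ids and glosses are invented.
info = ["dev/utt_0001"]                     # one utterance id per sample
output = [[("HELLO", 0), ("WORLD", 1)]]     # pairs whose first element is the gloss
write2file("output-hypothesis-dev.ctm", info, output)
# Resulting file contents (channel fixed to 1, dummy 0.01 s slots):
# dev/utt_0001 1 0.00 0.01 HELLO
# dev/utt_0001 1 0.01 0.02 WORLD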
slt_new/slr_network.py
ADDED
@@ -0,0 +1,170 @@
import pdb
import copy
import utils
import torch
import types
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from modules.criterions import SeqKD
from modules import BiLSTMLayer, TemporalConv
import modules.resnet as resnet

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x


class NormLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super(NormLinear, self).__init__()
        self.weight = nn.Parameter(torch.Tensor(in_dim, out_dim))
        nn.init.xavier_uniform_(self.weight, gain=nn.init.calculate_gain('relu'))

    def forward(self, x):
        outputs = torch.matmul(x, F.normalize(self.weight, dim=0))
        return outputs


class SLRModel(nn.Module):
    def __init__(
            self, num_classes, c2d_type, conv_type, use_bn=False,
            hidden_size=1024, gloss_dict=None, loss_weights=None,
            weight_norm=True, share_classifier=True
    ):
        super(SLRModel, self).__init__()
        self.decoder = None
        self.loss = dict()
        self.criterion_init()
        self.num_classes = num_classes
        self.loss_weights = loss_weights
        #self.conv2d = getattr(models, c2d_type)(pretrained=True)
        self.conv2d = getattr(resnet, c2d_type)()
        self.conv2d.fc = nn.Identity()

        self.conv1d = TemporalConv(input_size=512,
                                   hidden_size=hidden_size,
                                   conv_type=conv_type,
                                   use_bn=use_bn,
                                   num_classes=num_classes)
        # Auto-detect the blank token in gloss_dict
        blank_id = num_classes - 1  # Default for datasets without a blank entry in the dict
        for gloss, idx in gloss_dict.items():
            if 'blank' in str(gloss).lower():
                blank_id = idx
                break
        self.decoder = utils.Decode(gloss_dict, num_classes, 'beam', blank_id=blank_id)
        self.temporal_model = BiLSTMLayer(rnn_type='LSTM', input_size=hidden_size, hidden_size=hidden_size,
                                          num_layers=2, bidirectional=True)
        if weight_norm:
            self.classifier = NormLinear(hidden_size, self.num_classes)
            self.conv1d.fc = NormLinear(hidden_size, self.num_classes)
        else:
            self.classifier = nn.Linear(hidden_size, self.num_classes)
            self.conv1d.fc = nn.Linear(hidden_size, self.num_classes)
        if share_classifier:
            self.conv1d.fc = self.classifier
        #self.register_backward_hook(self.backward_hook)

    def backward_hook(self, module, grad_input, grad_output):
        for g in grad_input:
            g[g != g] = 0

    def masked_bn(self, inputs, len_x):
        def pad(tensor, length):
            return torch.cat([tensor, tensor.new(length - tensor.size(0), *tensor.size()[1:]).zero_()])

        x = torch.cat([inputs[len_x[0] * idx:len_x[0] * idx + lgt] for idx, lgt in enumerate(len_x)])
        x = self.conv2d(x)
        x = torch.cat([pad(x[sum(len_x[:idx]):sum(len_x[:idx + 1])], len_x[0])
                       for idx, lgt in enumerate(len_x)])
        return x

    def forward(self, x, len_x):
        if len(x.shape) == 5:
            # videos
            batch, temp, channel, height, width = x.shape
            #inputs = x.reshape(batch * temp, channel, height, width)
            #framewise = self.masked_bn(inputs, len_x)
            #framewise = framewise.reshape(batch, temp, -1).transpose(1, 2)
            # x shape: (B, T, C, H, W) -> permute to (B, C, T, H, W) for 3D conv
            x_permuted = x.permute(0, 2, 1, 3, 4)
            framewise = self.conv2d(x_permuted)
            # Flatten to (B, C, T) for the temporal convolutions
            framewise = framewise.view(batch, temp, -1).permute(0, 2, 1)  # btc -> bct
        else:
            # frame-wise features
            framewise = x

        conv1d_outputs = self.conv1d(framewise, len_x)
        # x: T, B, C
        x = conv1d_outputs['visual_feat']
        lgt = conv1d_outputs['feat_len']
        tm_outputs = self.temporal_model(x, lgt)
        outputs = self.classifier(tm_outputs['predictions'])
        pred = None if self.training \
            else self.decoder.decode(outputs, lgt, batch_first=False, probs=False)
        conv_pred = None if self.training \
            else self.decoder.decode(conv1d_outputs['conv_logits'], lgt, batch_first=False, probs=False)

        # Ensure feat_len is a tensor for proper DataParallel gathering
        if not isinstance(lgt, torch.Tensor):
            lgt = torch.tensor(lgt, device=x.device)

        return {
            #"framewise_features": framewise,
            #"visual_features": x,
            "feat_len": lgt,
            "conv_logits": conv1d_outputs['conv_logits'],
            "sequence_logits": outputs,
            "conv_sents": conv_pred,
            "recognized_sents": pred,
            "loss_LiftPool_u": conv1d_outputs['loss_LiftPool_u'],
            "loss_LiftPool_p": conv1d_outputs['loss_LiftPool_p'],
        }

    def criterion_calculation(self, ret_dict, label, label_lgt):
        loss = 0
        total_loss = {}

        # Match the original CorrNet+ implementation exactly
        for k, weight in self.loss_weights.items():
            if k == 'ConvCTC':
                total_loss['ConvCTC'] = weight * self.loss['CTCLoss'](
                    ret_dict["conv_logits"].log_softmax(-1),
                    label.cpu().int(),
                    ret_dict["feat_len"].cpu().int(),
                    label_lgt.cpu().int()
                ).mean()
                loss += total_loss['ConvCTC']
            elif k == 'SeqCTC':
                total_loss['SeqCTC'] = weight * self.loss['CTCLoss'](
                    ret_dict["sequence_logits"].log_softmax(-1),
                    label.cpu().int(),
                    ret_dict["feat_len"].cpu().int(),
                    label_lgt.cpu().int()
                ).mean()
                loss += total_loss['SeqCTC']
            elif k == 'Dist':
                total_loss['Dist'] = weight * self.loss['distillation'](
                    ret_dict["conv_logits"],
                    ret_dict["sequence_logits"].detach(),
                    use_blank=False
                )
                loss += total_loss['Dist']
            elif k == 'Cu':
                total_loss['Cu'] = weight * ret_dict["loss_LiftPool_u"]
                loss += total_loss['Cu']
            elif k == 'Cp':
                total_loss['Cp'] = weight * ret_dict["loss_LiftPool_p"]
                loss += total_loss['Cp']
        return loss, total_loss

    def criterion_init(self):
        self.loss['CTCLoss'] = torch.nn.CTCLoss(reduction='none', zero_infinity=False)
        self.loss['distillation'] = SeqKD(T=8)
        return self.loss
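One detail worth spelling out: NormLinear differs from nn.Linear in that it L2-normalizes each class column of the weight before the matmul (and has no bias), which is what the weight_norm=True path above relies on. A minimal standalone check, with arbitrary dimensions chosen only for the sketch:

# Minimal standalone check of NormLinear's weight normalization.
import torch
import torch.nn.functional as F

w = torch.randn(1024, 100)            # (in_dim, out_dim), as in NormLinear
w_n = F.normalize(w, dim=0)           # every class column now has unit L2 norm
print(w_n.norm(dim=0)[:3])            # tensor([1., 1., 1.])

x = torch.randn(4, 1024)              # a batch of feature vectors
logits = x @ w_n                      # same computation as NormLinear.forward
print(logits.shape)                   # torch.Size([4, 100])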
slt_new/train_asllrp.sh
ADDED
@@ -0,0 +1,23 @@
#!/bin/bash
# Training script for the ASLLRP dataset

# Activate conda environment
source /research/cbim/vast/sf895/miniforge3/etc/profile.d/conda.sh
conda activate signx-slt

# Set CUDA devices - use all 8 GPUs
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7

# Start training
python main.py \
    --config ./configs/asllrp.yaml \
    --device 0,1,2,3,4,5,6,7 \
    --phase train \
    --print-log True \
    --log-interval 50 \
    --save-interval 5 \
    --eval-interval 1 \
    --num-epoch 80 \
    --batch-size 16 \
    --test-batch-size 32 \
    --num-worker 32
slt_new/train_asllrp_single.sh
ADDED
@@ -0,0 +1,6 @@
#!/bin/bash
source /research/cbim/vast/sf895/miniforge3/bin/activate signx-slt
CUDA_VISIBLE_DEVICES=0 python main.py \
    --config ./configs/asllrp.yaml \
    --device 0 \
    --phase train
slt_new/训练细节.txt
ADDED
@@ -0,0 +1,54 @@
Based on my analysis, here are the training details for CorrNet+:

CorrNet+ training configuration and performance

Hardware configuration

- Number of GPUs: 2 (device: 0,1)
- Batch size: 2 samples per GPU, total batch size = 4

Training settings

- Total epochs: 80
- Optimizer: Adam
- Learning rate: 0.0001
- LR schedule: multiply the learning rate by 0.2 at epochs 40 and 60

Dataset sizes (CSL-Daily as an example)

- CSL-Daily training set: about 18,401 videos
- Phoenix2014 training set: about 5,672 videos

Total training step calculation

For CSL-Daily:
- Steps per epoch = 18,401 / 4 ≈ 4,600 steps
- Total steps = 4,600 × 80 = 368,000 steps

For your ASLLRP dataset:
- Training samples: 1,073
- Batch size: 16 (8 GPUs)
- Steps per epoch = 1,073 / 16 ≈ 67 steps
- Total steps = 67 × 80 = 5,360 steps (this arithmetic is restated in the sketch after this file)

Performance metrics (WER)

According to the README tables, CorrNet+ performs as follows on the different datasets:
- Phoenix2014: Dev 5.3%, Test 5.6%
- Phoenix2014-T: Dev 18.0%, Test 18.2%
- CSL-Daily: Dev 17.2%, Test 19.1%

Training time estimate

- The original paper trained for 80 epochs on 2 GPUs
- You are using 8 GPUs, which in theory gives about a 4x speedup
- Your dataset is also much smaller (1,073 vs. 18,401 videos), so training will finish far sooner

Suggestions

1. Learning rate: since you are using 8 GPUs (batch size 16), consider raising the learning rate to 0.0002 or 0.0004
2. Early validation: with a small dataset, you may already see decent results after 20-40 epochs
3. Overfitting risk: small datasets overfit easily; consider adding dropout or stronger data augmentation

Your training is currently running; check the validation-set WER once the first epoch finishes to gauge the initial performance.
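The step-count and learning-rate arithmetic from the notes above, as a runnable sketch (the linear LR-scaling rule is a common heuristic, not something the repo prescribes):

# Step-count and LR-scaling arithmetic; values copied from the notes above.
train_samples, batch_size, epochs = 1073, 16, 80
steps_per_epoch = train_samples // batch_size      # 67
total_steps = steps_per_epoch * epochs             # 5360
base_lr, base_batch = 1e-4, 4                      # original CorrNet+ setting
scaled_lr = base_lr * batch_size / base_batch      # 4e-4, linear scaling heuristic
print(steps_per_epoch, total_steps, scaled_lr)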
upload.log
ADDED
The diff for this file is too large to render. See raw diff