
CorrNet+_CSLR

This repo holds the code for the paper CorrNet+: Sign Language Recognition and Translation via Spatial-Temporal Correlation, which is an extension of our previous work (CVPR 2023) [paper].

This sub-repo holds the code for supporting the continuous sign language recognition task with CorrNet+.

(Update on 2025/01/28) We release a demo for continuous sign language recognition that supports both multi-image and video inputs! You can watch the demo video to see its effects, or deploy the demo locally to test its performance.

https://github.com/user-attachments/assets/a7354510-e5e0-44af-b283-39707f625a9b

The web demo video

Prerequisites

  • This project is implemented in PyTorch (version >=1.13 is recommended for compatibility with ctcdecode; otherwise errors may occur). Please install PyTorch first.

  • ctcdecode==0.4 [parlance/ctcdecode], for beam search decoding.

  • [Optional] sclite [kaldi-asr/kaldi]: install the Kaldi toolkit to get sclite for evaluation. After installation, create a soft link to sclite:

    mkdir ./software
    ln -s PATH_TO_KALDI/tools/sctk-2.4.10/bin/sclite ./software/sclite

    You may use the Python-based evaluation tool for convenience (by setting 'evaluate_tool' to 'python' in line 16 of ./configs/baseline.yaml), but sclite provides more detailed statistics.

  • You can install the other required packages by running pip install -r requirements.txt
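The ctcdecode dependency above performs CTC beam search. As a rough illustration of what a CTC decoder does (not the library's API), here is a minimal best-path decode in pure Python: pick the argmax label per frame, collapse repeats, then drop blanks. Beam search explores multiple hypotheses instead of only the argmax path.

```python
# Minimal best-path CTC decoding sketch (illustration only; the repo uses
# ctcdecode's beam search, which keeps multiple hypotheses per frame).
def ctc_greedy_decode(log_probs, blank=0):
    """log_probs: list of per-frame score lists; returns collapsed label ids."""
    # Pick the highest-scoring label at every frame.
    path = [max(range(len(frame)), key=frame.__getitem__) for frame in log_probs]
    decoded, prev = [], None
    for label in path:
        # CTC collapse rule: drop consecutive repeats, then drop blanks.
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# Frames scoring labels {0: blank, 1, 2}; argmax path [1, 1, 0, 2, 2] -> [1, 2]
frames = [
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.9, 0.05, 0.05],
    [0.1, 0.2, 0.7],
    [0.1, 0.1, 0.8],
]
print(ctc_greedy_decode(frames))  # [1, 2]
```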

Implementation

The implementation of CorrNet+ is given in ./modules/resnet.py.

The module is then inserted after each stage of ResNet (line 195 of ./modules/resnet.py).

We later found that the Identification Module with only spatial decomposition performs on par with what we report in the paper (spatial-temporal decomposition) and is slightly faster, and thus implemented it as such.

Data Preparation

You can choose any one of the following datasets to verify the effectiveness of CorrNet+.

PHOENIX2014 dataset

  1. Download the RWTH-PHOENIX-Weather 2014 Dataset [download link]. Our experiments are based on phoenix-2014.v3.tar.gz.

  2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
    ln -s PATH_TO_DATASET/phoenix2014-release ./dataset/phoenix2014

  3. The original image sequence is 210x260; we resize it to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.

    cd ./preprocess
    python dataset_preprocess.py --process-image --multiprocessing
    
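Besides resizing, the preprocessing step builds a gloss dictionary mapping every gloss to an integer id. The real dataset_preprocess.py parses PHOENIX2014's annotation CSVs; the sentence format below is made up for illustration, but the idea (sorted glosses, id 0 reserved for the CTC blank) is sketched here:

```python
# Hypothetical sketch of gloss-dict construction (the real dataset_preprocess.py
# parses the dataset's annotation files; these sentences are illustrative).
def build_gloss_dict(sentences):
    """Map every gloss to [id, count], with id 0 reserved for the CTC blank."""
    counts = {}
    for sent in sentences:
        for gloss in sent.split():
            counts[gloss] = counts.get(gloss, 0) + 1
    gloss_dict = {}
    for idx, gloss in enumerate(sorted(counts), start=1):  # 0 = blank
        gloss_dict[gloss] = [idx, counts[gloss]]
    return gloss_dict

anns = ["REGEN HEUTE NORD", "HEUTE SONNE"]
print(build_gloss_dict(anns))
# {'HEUTE': [1, 2], 'NORD': [2, 1], 'REGEN': [3, 1], 'SONNE': [4, 1]}
```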

PHOENIX2014-T dataset

  1. Download the RWTH-PHOENIX-Weather 2014-T Dataset [download link]

  2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
    ln -s PATH_TO_DATASET/PHOENIX-2014-T-release-v3/PHOENIX-2014-T ./dataset/phoenix2014-T

  3. The original image sequence is 210x260; we resize it to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.

    cd ./preprocess
    python dataset_preprocess-T.py --process-image --multiprocessing
    

If you get an error like IndexError: list index out of range on the PHOENIX2014-T dataset, you may refer to this issue to tackle the problem.

CSL dataset

  1. Request the CSL Dataset from this website [download link]

  2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
    ln -s PATH_TO_DATASET ./dataset/CSL

  3. The original image sequence is 1280x720; we resize it to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.

    cd ./preprocess
    python dataset_preprocess-CSL.py --process-image --multiprocessing
    

CSL-Daily dataset

  1. Request the CSL-Daily Dataset from this website [download link]

  2. After downloading the dataset, extract it. It is suggested to make a soft link to the downloaded dataset:
    ln -s PATH_TO_DATASET ./dataset/CSL-Daily

  3. The original image sequence is 1280x720; we resize it to 256x256 for augmentation. Run the following command to generate the gloss dict and resize the image sequences.

    cd ./preprocess
    python dataset_preprocess-CSL-Daily.py --process-image --multiprocessing
    

Inference

PHOENIX2014 dataset

Backbone    Dev WER    Test WER    Pretrained model
ResNet18    18.0%      18.2%       [Baidu] / [Google Drive]

PHOENIX2014-T dataset

Backbone    Dev WER    Test WER    Pretrained model
ResNet18    17.2%      19.1%       [Baidu] / [Google Drive]

CSL-Daily dataset

Backbone    Dev WER    Test WER    Pretrained model
ResNet18    28.6%      28.2%       [Baidu] / [Google Drive]

To evaluate a pretrained model, first choose the dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml, then run the command below:
python main.py --config ./config/baseline.yaml --device your_device --load-weights path_to_weight.pt --phase test

Training

The priority of configuration values is: command line > config file > argparse defaults. To train the SLR model, run the command below:

python main.py --config ./config/baseline.yaml --device your_device

Note that you can choose the target dataset from phoenix2014/phoenix2014-T/CSL/CSL-Daily in line 3 of ./config/baseline.yaml.

Visualizations

For Grad-CAM visualization of spatial weight maps, you can replace the resnet.py under "./modules" with the resnet.py under "./weight_map_generation", and then run python generate_weight_map.py with your own hyperparameters.

For Grad-CAM visualization of correlation maps, you can replace the resnet.py under "./modules" with the resnet.py under "./corr_map_generation", and then run python generate_corr_map.py with your own hyperparameters.
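Both visualizations above rely on the standard Grad-CAM recipe: hook a target layer, backpropagate a scalar score, and weight the layer's activations by the spatially averaged gradients. A generic, hypothetical sketch (not the repo's generate_weight_map.py):

```python
import torch
import torch.nn as nn

# Generic Grad-CAM sketch: capture activations and gradients of a target
# layer via hooks, then combine them into a class activation map.
def grad_cam(model, layer, x, class_idx):
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
    cam = torch.relu((weights * acts["a"]).sum(dim=1))   # (N, H, W)
    return cam

# Tiny toy classifier, just to exercise the hooks end to end.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
cam = grad_cam(model, model[0], torch.randn(1, 3, 16, 16), class_idx=2)
print(cam.shape)  # torch.Size([1, 16, 16])
```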

Test with one video input

Besides performing inference on datasets, we provide test_one_video.py to perform inference on a single video input. An example command is

python test_one_video.py --model_path /path_to_pretrained_weights --video_path /path_to_your_video --device your_device

The video_path can be the path to a video file or a directory containing images extracted from a video.

Acceptable parameters:

  • model_path, the path to pretrained weights.
  • video_path, the path to a video file or a directory containing images extracted from a video.
  • device, the device to run inference on, default=0.
  • language, the target sign language, default='phoenix', choices=['phoenix', 'csl'].
  • max_frames_num, the max input frames sampled from an input video, default=360.
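The max_frames_num parameter caps the number of frames fed to the model. One plausible way to enforce such a cap is uniform sampling across the video; the sketch below is hypothetical, and test_one_video.py's actual sampling strategy may differ.

```python
# Hypothetical sketch of enforcing a max_frames_num cap: when a video
# exceeds the limit, pick frame indices spread uniformly over its length.
def sample_frame_indices(total_frames, max_frames_num=360):
    if total_frames <= max_frames_num:
        return list(range(total_frames))
    step = total_frames / max_frames_num
    return [int(i * step) for i in range(max_frames_num)]

print(len(sample_frame_indices(1000)))             # 360
print(sample_frame_indices(10, max_frames_num=4))  # [0, 2, 5, 7]
```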

Demo

We provide a demo that allows deploying continuous sign language recognition models locally to test their effects. The demo page is shown as follows.

The page of our demo

The demo video can be found at the top of this page. An example command is

python demo.py --model_path /path_to_pretrained_weights --device your_device

Acceptable parameters:

  • model_path, the path to pretrained weights.
  • device, the device to run inference on, default=0.
  • language, the target sign language, default='phoenix', choices=['phoenix', 'csl'].
  • max_frames_num, the max input frames sampled from an input video, default=360.

After running the command, you can visit http://0.0.0.0:7862 to play with the demo. You can also expose a public URL by setting share=True in line 176 of demo.py.