---
library_name: pytorch
pipeline_tag: automatic-speech-recognition
tags:
  - Lipreading
  - TD3Net
---

# TD3Net: A Temporal Densely Connected Multi-Dilated Convolutional Network for Lipreading

This repository provides pretrained weights for the model presented in the paper:

📄 TD3Net: A Temporal Densely Connected Multi-Dilated Convolutional Network for Lipreading
🔗 Official code: GitHub Repository

## Abstract

The word-level lipreading approach typically employs a two-stage framework with separate frontend and backend architectures to model dynamic lip movements. Each component has been extensively studied, and in the backend architecture, temporal convolutional networks (TCNs) have been widely adopted in state-of-the-art methods. Recently, dense skip connections have been introduced in TCNs to mitigate the limited density of the receptive field, thereby improving the modeling of complex temporal representations. However, their performance remains constrained owing to potential information loss regarding the continuous nature of lip movements, caused by blind spots in the receptive field. To address this limitation, we propose TD3Net, a temporal densely connected multi-dilated convolutional network that combines dense skip connections and multi-dilated temporal convolutions as the backend architecture. TD3Net covers a wide and dense receptive field without blind spots by applying different dilation factors to skip-connected features. Experimental results on a word-level lipreading task using two large publicly available datasets, Lip Reading in the Wild (LRW) and LRW-1000, indicate that the proposed method achieves performance comparable to state-of-the-art methods. It achieved higher accuracy with fewer parameters and lower floating-point operations compared to existing TCN-based backend architectures. Moreover, visualization results suggest that our approach effectively utilizes diverse temporal features while preserving temporal continuity, presenting notable advantages in lipreading systems.
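The "blind spots" the abstract refers to can be made concrete with a small, self-contained sketch (an illustration, not the paper's implementation; the kernel size and dilation patterns below are chosen only for demonstration). Stacked dilated temporal convolutions reach a set of input time offsets; if the dilation factors grow too quickly, some offsets inside the receptive field are unreachable:

```python
from itertools import product

def receptive_offsets(kernel_size, dilations):
    """Set of input time offsets reachable by a stack of dilated convolutions."""
    offsets = {0}
    for d in dilations:
        # Each layer with dilation d taps inputs at 0, d, 2d, ..., (k-1)*d.
        layer = {i * d for i in range(kernel_size)}
        offsets = {a + b for a, b in product(offsets, layer)}
    return offsets

def blind_spots(kernel_size, dilations):
    """Offsets inside the receptive field that no path through the stack reaches."""
    covered = receptive_offsets(kernel_size, dilations)
    return sorted(set(range(max(covered) + 1)) - covered)

# Doubling dilations with kernel size 3 cover the receptive field densely:
print(blind_spots(3, (1, 2, 4)))   # → []
# Faster-growing dilations leave periodic gaps (blind spots):
print(blind_spots(3, (1, 4, 8)))   # → [3, 7, 11, 15, 19, 23]
```

TD3Net's idea, per the abstract, is to assign different dilation factors to skip-connected features so the combined receptive field stays wide yet gap-free.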

## Main Results

### LRW Test Dataset Performance

The experiments were conducted in the following environment: Ubuntu 20.04, Python 3.8.13, PyTorch 1.8.0, CUDA 11.1, and NVIDIA RTX 3090.

Params and FLOPs are measured for the TD3Net backend only, as this work focuses on backend efficiency. FLOPs were calculated using fvcore.

| Method | # Params (M) | FLOPs (G) | Inference time (s) | Accuracy (%) |
|---|---|---|---|---|
| TD3Net-Base | 18.69 | 1.56 | 45 | 89.36 |
| TD3Net-Best | 31.39 | 1.92 | 49 | 89.54 |
| TD3Net-Best (w/ word boundary) | 31.39 | 1.92 | 49 | 91.41 |

Click the accuracy value to download model weights.

## Usage

To run inference with the pretrained models, download the model weights (e.g., `ckpt.best.pth.tar`) from the links in the Main Results table above or in the Main Results section of the GitHub repository. Then run the `main.py` script with the corresponding configuration:

```bash
# Example for TD3Net-Best
# Replace ./path/to/downloaded/td3net_best/ckpt.best.pth.tar with the actual path to your downloaded weights
CUDA_VISIBLE_DEVICES=0 python main.py \
    --action test \
    --config-path td3net_configs/td3net_config_best.yaml \
    --model-path ./path/to/downloaded/td3net_best/ckpt.best.pth.tar
```

For detailed instructions on installation, data preparation, training, and other inference options, please refer to the official GitHub repository.

## Citation

If you find our work useful in your research, please consider citing our paper:

```bibtex
@article{lee2025td3net,
  title={TD3Net: A temporal densely connected multi-dilated convolutional network for lipreading},
  author={Lee, Byung Hoon and Shin, Wooseok and Han, Sung Won},
  journal={Journal of Visual Communication and Image Representation},
  volume={111},
  pages={104540},
  year={2025},
  doi={10.1016/j.jvcir.2025.104540}
}
```