SAGE-VQA: Script-Aware Alignment for Video-Derived Multilingual Text-Centric Reasoning
Current Status
This project is associated with a manuscript currently under submission to IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
Until the review process concludes, the full dataset, model checkpoints, and complete codebase are not yet publicly available.
Why the resources are not released yet
To protect unpublished research outcomes during peer review and to prevent unauthorized reuse of unreleased assets, we are postponing the public release of the following materials until the paper is officially accepted and published:
- benchmark data
- model checkpoints
- inference code
- evaluation scripts
Planned Release
Once the paper is accepted and formally published, we will release the complete set of project resources here, including:
- Dataset
- Model checkpoints
- Inference scripts
- Training pipeline
- Evaluation tools
- Usage documentation
Project Summary
SAGE-VQA studies multilingual text-centric reasoning in realistic video-derived environments, with a focus on low-resource and script-diverse languages.
The benchmark targets challenging scenarios such as:
- motion blur
- dynamic occlusion
- temporally evolving text
- heterogeneous writing directions and layouts
The framework includes three main components:
- Script-Aware Visual Alignment
- Hybrid Reward Modeling
- Multilingual Policy Shaping
Official Project Repository
For updates and announcements, please visit the official GitHub repository:
GitHub: https://github.com/Shajiu/SAGE-VQA
Release Notice
This page currently serves as an official placeholder and release entry for the SAGE-VQA project.
All files will be uploaded immediately after the review process is completed and the paper is formally published.
Citation
Citation information will be added after publication.
Contact
For research-related questions, please refer to the official GitHub repository or contact the authors through the project page.
Thank you for your patience and understanding.