---
title: ViTeX-Bench Leaderboard
emoji: 🏆
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: apache-2.0
short_description: Public leaderboard for video scene text editing.
---

# 🏆 ViTeX-Bench Leaderboard

🌐 [Project page](https://vitex-bench.github.io/) · 📊 [Dataset](https://huggingface.co/datasets/ViTeX-Bench/ViTeX-Dataset) · 🧪 [Benchmark code](https://huggingface.co/ViTeX-Bench/ViTeX-Bench) · 🤖 [Model & Inference code](https://huggingface.co/ViTeX-Bench/ViTeX-Edit-14B) · 🏆 Leaderboard

Public ranking for **video scene text editing** under the 13-metric, three-axis protocol of [ViTeX-Bench](https://huggingface.co/ViTeX-Bench/ViTeX-Bench). Results are reported as the full thirteen-metric vector.

The table is sorted by **TextScore** = ∛(SeqAcc · CharAcc · TTS), the geometric mean of the three text-correctness primitives. TextScore is a single-axis sort key by design: no cross-axis aggregate is computed, because no axis substitutes for another. SeqAcc = 0 collapses TextScore to zero, which is the intended semantics for methods that never produce the requested target string.

## Submitting

1. Run the official benchmark on the 157-clip frozen evaluation split: `bash scripts/run_benchmark.sh ` in the [Benchmark code repo](https://huggingface.co/ViTeX-Bench/ViTeX-Bench).
2. Upload the produced `outputs//eval.json` via the **Submit** tab.
3. The maintainers review the submission; approved entries appear on the leaderboard.

## Companion repos

- 🌐 **Project page:** https://vitex-bench.github.io/
- 📊 **Dataset:** https://huggingface.co/datasets/ViTeX-Bench/ViTeX-Dataset
- 🧪 **Benchmark code:** https://huggingface.co/ViTeX-Bench/ViTeX-Bench
- 🤖 **Model & Inference code** (ViTeX-Edit-14B): https://huggingface.co/ViTeX-Bench/ViTeX-Edit-14B
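
## TextScore in code

The sort key above can be reproduced locally before submitting. The sketch below is a minimal illustration of the geometric-mean formula; the function name and example inputs are hypothetical, not part of the official benchmark code.

```python
def text_score(seq_acc: float, char_acc: float, tts: float) -> float:
    """Geometric mean of the three text-correctness primitives.

    A zero SeqAcc collapses the score to zero, matching the
    leaderboard's semantics for methods that never produce the
    requested target string.
    """
    return (seq_acc * char_acc * tts) ** (1.0 / 3.0)


# Hypothetical metric values, for illustration only.
print(text_score(0.62, 0.91, 0.87))  # geometric mean of the three
print(text_score(0.0, 0.98, 0.95))   # collapses to 0.0
```

Because the geometric mean multiplies the primitives, a method cannot compensate for a zero on one primitive with high scores on the others.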