---
title: ViTeX-Bench Leaderboard
emoji: 🏆
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: apache-2.0
short_description: Public leaderboard for video scene text editing.
---
# 🏆 ViTeX-Bench Leaderboard
🌐 [Project page](https://vitex-bench.github.io/) · 📊 [Dataset](https://huggingface.co/datasets/ViTeX-Bench/ViTeX-Dataset) · 🧪 [Benchmark code](https://huggingface.co/ViTeX-Bench/ViTeX-Bench) · 🤖 [Model & Inference code](https://huggingface.co/ViTeX-Bench/ViTeX-Edit-14B) · 🏆 Leaderboard
Public ranking for **video scene text editing** under the 13-metric, three-axis protocol of [ViTeX-Bench](https://huggingface.co/ViTeX-Bench/ViTeX-Bench).
Each entry is reported as the full thirteen-metric vector. The table is sorted by **TextScore** = ∛(SeqAcc · CharAcc · TTS), the geometric mean of the three text-correctness primitives. TextScore is deliberately a single-axis sort key: no cross-axis aggregate is computed, because no axis substitutes for another. If SeqAcc = 0, TextScore collapses to zero, which is the intended behavior for methods that never produce the requested target string.
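As a quick sanity check, TextScore can be recomputed locally from the three primitives. The sketch below is illustrative and assumes SeqAcc, CharAcc, and TTS are already on a common [0, 1] scale; the function name is ours, not part of the benchmark API:

```python
def text_score(seq_acc: float, char_acc: float, tts: float) -> float:
    """Geometric mean of the three text-correctness primitives.

    Returns 0.0 whenever any primitive is 0.0, so a method that never
    produces the requested target string (SeqAcc = 0) sorts to the bottom.
    """
    return (seq_acc * char_acc * tts) ** (1.0 / 3.0)

# Example: high character accuracy cannot compensate for low sequence accuracy.
print(round(text_score(0.25, 0.95, 0.80), 3))  # 0.575
```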
## Submitting
1. Run the official benchmark on the 157-clip frozen evaluation split: `bash scripts/run_benchmark.sh <your_method>` in the [Benchmark code repo](https://huggingface.co/ViTeX-Bench/ViTeX-Bench).
2. Upload the produced `outputs/<method>/eval.json` via the **Submit** tab; a minimal pre-upload sanity check is sketched after this list.
3. The maintainers review the submission. Approved entries appear on the leaderboard.
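Before uploading, it can be worth verifying that `eval.json` parses and that the core metrics are present and in range. The sketch below is a hypothetical pre-upload check; the metric keys (`SeqAcc`, `CharAcc`, `TTS`) and the flat layout are assumptions, so defer to the schema actually produced by the benchmark code:

```python
import json
from pathlib import Path

# Metric keys assumed for illustration; the benchmark's eval.json schema is authoritative.
REQUIRED = ("SeqAcc", "CharAcc", "TTS")

def check_eval(path: str) -> None:
    """Lightweight structural check of an eval.json before submission."""
    results = json.loads(Path(path).read_text())
    missing = [key for key in REQUIRED if key not in results]
    if missing:
        raise ValueError(f"eval.json is missing metrics: {missing}")
    for key in REQUIRED:
        value = float(results[key])
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{key}={value} is outside the expected [0, 1] range")
    print("eval.json looks structurally sound.")

check_eval("outputs/my_method/eval.json")  # path from step 1; the method name is a placeholder
```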
## Companion repos
- 🌐 **Project page:** https://vitex-bench.github.io/
- 📊 **Dataset:** https://huggingface.co/datasets/ViTeX-Bench/ViTeX-Dataset
- 🧪 **Benchmark code:** https://huggingface.co/ViTeX-Bench/ViTeX-Bench
- 🤖 **Model & Inference code** (ViTeX-Edit-14B): https://huggingface.co/ViTeX-Bench/ViTeX-Edit-14B