# BoxComm-Benchmark
BoxComm-Benchmark is the official benchmark release for standardized BoxComm evaluation.
## Resources
- Project Page: https://gouba2333.github.io/BoxComm
- Paper: http://arxiv.org/abs/2604.04419
- Code: https://github.com/gouba2333/BoxComm
## Included files

- manifests/benchmark_manifest_eval_v1.jsonl
- metadata/eval_metadata_v1.csv
- examples/generation_prediction_example.jsonl
- examples/streaming_prediction_example.jsonl
- metrics/version.txt
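The manifest and example files use the JSON Lines format (one JSON object per line). A minimal loading sketch, making no assumptions about the keys inside each record:

```python
import json

def load_jsonl(path):
    """Read a JSON Lines file (one JSON object per line) into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

For example, `load_jsonl("manifests/benchmark_manifest_eval_v1.jsonl")` returns the list of manifest records.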
## Benchmark tasks
- Category-conditioned commentary generation
- Commentary rhythm and class-distribution evaluation
## Evaluation scripts

Use the official scripts in the code repository:

- scripts/eval_metrics.py
- scripts/eval_streaming_cls_metrics.py
## Prediction formats
### Generation

One JSON object per line, one object per segment:

```json
{"video_id": 478, "segment_index": 0, "t_mid": 1.0, "pred_text": "..."}
```
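A minimal sketch of writing predictions in this format; the required keys mirror the example record above, and `write_generation_predictions` is a hypothetical helper, not part of the official scripts:

```python
import json

# Keys taken from the generation example record in this README.
REQUIRED_KEYS = ("video_id", "segment_index", "t_mid", "pred_text")

def write_generation_predictions(records, path):
    """Write generation predictions as JSONL, one object per segment."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            missing = [k for k in REQUIRED_KEYS if k not in rec]
            if missing:
                raise ValueError(f"record missing keys: {missing}")
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```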
### Streaming

One JSON object per video, containing the full list of timed responses:

```json
{"video_id": 478, "responses": [{"start_time": 0.0, "end_time": 1.0, "response": "..."}, ...]}
```
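Each streaming record can be sanity-checked against this shape before submission. A minimal sketch; `validate_streaming_record` is a hypothetical helper, and the `start_time <= end_time` check is an assumption, not a documented requirement:

```python
def validate_streaming_record(rec):
    """Check one streaming record: a video_id plus a list of timed responses."""
    for key in ("video_id", "responses"):
        if key not in rec:
            raise ValueError(f"missing key: {key}")
    for resp in rec["responses"]:
        for key in ("start_time", "end_time", "response"):
            if key not in resp:
                raise ValueError(f"response missing key: {key}")
        # Assumption: each response window is non-degenerate in time.
        if resp["start_time"] > resp["end_time"]:
            raise ValueError("start_time after end_time")
    return True
```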
## Citation

```bibtex
@article{wang2026boxcomm,
  title={BoxComm: Benchmarking Category-Aware Commentary Generation and Narration Rhythm in Boxing},
  author={Wang, Kaiwen and Zheng, Kaili and Deng, Rongrong and Shi, Yiming and Guo, Chenyi and Wu, Ji},
  journal={arXiv preprint arXiv:2604.04419},
  year={2026}
}
```