---
configs:
- config_name: benchmark
  data_files:
  - split: test
    path: vision_language/FCMBench_v1.0_testset_full.jsonl
license: apache-2.0
task_categories:
- table-question-answering
- text-classification
pretty_name: FCMBench
size_categories:
- 10K<n<100K
---

# FCMBench

> **Status:** Public release (v1.0).
> **Maintainers:** [奇富科技 / Qfin Holdings](https://github.com/QFIN-tech)
> **Contact:** [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com)

---

## Tracks Overview

### 1) Vision-Language Track (✅ Available, **FCMBench-V1.0**)

Image-based financial document understanding:

- **Entry:** [Vision-Language Track](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- **Inputs:** document images + text prompts (JSONL, one sample per line)
- **Outputs:** text responses (JSONL, one sample per line)
- **Evaluation:** [Evaluation Script](https://github.com/QFIN-tech/FCMBench/blob/main/vision_language/evaluation.py)

A minimal sketch of this JSONL round trip appears at the end of this README.

#### Paper & Project Links

- [**Paper (arXiv)**](https://arxiv.org/abs/2601.00150)
- [**Paper (PDF)**](https://arxiv.org/pdf/2601.00150)
- [**Project Page**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- [**Leaderboard**](vision_language/LEADERBOARD.md)
- [**Dataset (ModelScope)**](https://modelscope.cn/datasets/QFIN/FCMBench-V1.0)
- [**Dataset (Hugging Face)**](https://huggingface.co/datasets/QFIN/FCMBench-V1.0)

#### Reference Model Demo

We also provide access to an interactive demo of our Qfin-VL-Instruct model, which achieves strong performance on FCMBench-V1.0.

If you would like to try the Gradio demo, please contact [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com) with the following information:

- Name
- Affiliation / Organization
- Intended use (e.g., research exploration, benchmarking reference)
- Contact email

Access will be granted on a case-by-case basis.

### 2) Video Understanding Track (🕒 Coming Soon)

### 3) Speech Understanding & Generation Track (🕒 Coming Soon)

### 4) Multi-step / Agentic Track (🕒 Coming Soon)

## Citation

```bibtex
@misc{yang2026fcmbenchcomprehensivefinancialcredit,
      title={FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications},
      author={Yehui Yang and Dalu Yang and Wenshuo Zhou and Fangxin Shang and Yifan Liu and Jie Ren and Haojun Fei and Qing Yang and Yanwu Xu and Tao Chen},
      year={2026},
      eprint={2601.00150},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.00150},
}
```
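
## Example: JSONL Round Trip (Sketch)

Since both inputs and outputs of the Vision-Language track are JSONL with one sample per line, inference reduces to a read-transform-write loop. The sketch below illustrates that loop only; the field names `image`, `prompt`, `id`, and `response` are placeholders, not the benchmark's actual keys. The authoritative schema is the test-set file itself and `evaluation.py`.

```python
import json

# Alternatively, load via the Hugging Face hub using the config declared
# in the YAML front matter above:
#   from datasets import load_dataset
#   ds = load_dataset("QFIN/FCMBench-V1.0", "benchmark", split="test")

# Test-set path taken from the dataset config; the predictions filename
# is arbitrary.
TESTSET = "vision_language/FCMBench_v1.0_testset_full.jsonl"
PREDICTIONS = "predictions.jsonl"


def run_model(image_path: str, prompt: str) -> str:
    """Stand-in for your vision-language model; replace with a real call."""
    return "placeholder answer"


with open(TESTSET, encoding="utf-8") as fin, \
     open(PREDICTIONS, "w", encoding="utf-8") as fout:
    for line in fin:
        if not line.strip():
            continue
        sample = json.loads(line)  # one JSON object per line
        # "image", "prompt", and "id" are ASSUMED field names for
        # illustration; check the test-set file / evaluation.py for
        # the actual schema.
        answer = run_model(sample["image"], sample["prompt"])
        fout.write(json.dumps({"id": sample["id"], "response": answer},
                              ensure_ascii=False) + "\n")
```

The resulting predictions file is then scored with the evaluation script linked above; its exact command-line interface is documented in the repository.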