---
configs:
- config_name: benchmark
  data_files:
  - split: test
    path: vision_language/FCMBench_v1.0_testset_full.jsonl
license: apache-2.0
task_categories:
- table-question-answering
- text-classification
pretty_name: FCMBench
size_categories:
- 10K<n<100K
---

**FCMBench** is a multimodal benchmark for credit-risk–oriented workflows. It aims to provide a standard playground for collaborative development between academia and industry, offering standardized datasets, prompts, and evaluation scripts across multiple tracks (image, video, speech, agents, etc.).
[简体中文](./README_cn.md)
## 🔥 News
- 【**2026. 01. 01**】✨ We are proud to launch [**FCMBench-V1.0**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language), which covers 18 core certificate types with 4,043 privacy-compliant images and 8,446 QA samples. It comprises 3 types of perception tasks and 4 types of reasoning tasks, cross-referenced with 10 categories of robustness interference. All tasks and interference categories are derived from real-world critical scenarios.
> **Status:** Public release (v1.0).
> **Maintainers:** [奇富科技 / Qfin Holdings](https://github.com/QFIN-tech)
> **Contact:** [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com)
---
## Tracks Overview
### 1) Vision-Language Track (✅ Available, **FCMBench-V1.0**)
Image-based financial document understanding:
- **Entry:** [Vision-Language Track](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- **Inputs:** document images + text prompts (JSONL, one sample per line)
- **Outputs:** text responses (JSONL, one sample per line; see the I/O sketch after this list)
- **Evaluation:** [Evaluation Script](https://github.com/QFIN-tech/FCMBench/blob/main/vision_language/evaluation.py)
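As a minimal, untested sketch of the expected I/O (field names such as `id` and `response`, and the `run_model` stub, are illustrative assumptions; consult the JSONL files and `evaluation.py` for the actual schema):

```python
import json

def run_model(sample: dict) -> str:
    """Placeholder for your VLM inference; replace with a real model call."""
    return "dummy response"

# Load the test set: one JSON object per line.
# The exact field names are assumptions; inspect the JSONL to confirm.
with open("vision_language/FCMBench_v1.0_testset_full.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

# Write predictions in the same one-object-per-line format, so the file can
# then be scored with vision_language/evaluation.py.
with open("predictions.jsonl", "w", encoding="utf-8") as out:
    for sample in samples:
        record = {"id": sample.get("id"), "response": run_model(sample)}
        out.write(json.dumps(record, ensure_ascii=False) + "\n")
```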
#### Paper & Project Links
- [**Paper (arXiv)**](https://arxiv.org/abs/2601.00150)
- [**Paper (PDF)**](https://arxiv.org/pdf/2601.00150)
- [**Project Page**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- [**Leaderboard**](vision_language/LEADERBOARD.md)
- [**Dataset (ModelScope)**](https://modelscope.cn/datasets/QFIN/FCMBench-V1.0)
- [**Dataset (Hugging Face)**](https://huggingface.co/datasets/QFIN/FCMBench-V1.0) (loading sketch below)
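The YAML header of this card defines a `benchmark` config with a `test` split, so loading the data through the Hugging Face `datasets` library should look roughly like this (a sketch, assuming the hub repo id matches the link above):

```python
from datasets import load_dataset

# Config name ("benchmark") and split ("test") come from this card's YAML header.
ds = load_dataset("QFIN/FCMBench-V1.0", "benchmark", split="test")
print(len(ds))  # number of QA samples
print(ds[0])    # inspect one record
```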
#### Reference Model Demo
We also provide access to an interactive demo of our Qfin-VL-Instruct model, which achieves strong performance on FCMBench-V1.0.
If you are interested in trying the Gradio demo, please contact [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com) with the following information:
- Name
- Affiliation / Organization
- Intended use (e.g., research exploration, benchmarking reference)
- Contact email
Access will be granted on a case-by-case basis.
### 2) Video Understanding Track (🕒 Coming Soon)
### 3) Speech Understanding & Generation Track (🕒 Coming Soon)
### 4) Multi-step / Agentic Track (🕒 Coming Soon)
## Citation
```
@misc{yang2026fcmbenchcomprehensivefinancialcredit,
  title={FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications},
  author={Yehui Yang and Dalu Yang and Wenshuo Zhou and Fangxin Shang and Yifan Liu and Jie Ren and Haojun Fei and Qing Yang and Yanwu Xu and Tao Chen},
  year={2026},
  eprint={2601.00150},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.00150},
}
```