size_categories:
- 10K<n<100K
---

![Main Image](https://github.com/QFIN-tech/FCMBench/raw/main/assets/FCMBench_logo.jpg)

**FCMBench** is a multimodal benchmark for credit-risk-oriented workflows. It aims to provide a standard playground that promotes collaborative development between academia and industry, offering standardized datasets, prompts, and evaluation scripts across multiple tracks (image, video, speech, agents, etc.).

## 🔥 News

- 【**2026. 01. 01**】✨ We are proud to launch [**FCMBench-V1.0**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language), which covers 18 core certificate types with 4,043 privacy-compliant images and 8,446 QA samples. It comprises 3 types of Perception tasks and 4 types of Reasoning tasks, cross-referenced with 10 categories of robustness inferences. All tasks and inferences are derived from real-world critical scenarios.

> **Status:** Public release (v1.0).
> **Maintainers:** [奇富科技 / Qfin Holdings](https://github.com/QFIN-tech)
> **Contact:** [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com)

---

## Tracks Overview

### 1) Vision-Language Track (✅ Available, **FCMBench-V1.0**)

Image-based financial document understanding:

- **Entry:** [Vision-Language Track](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- **Inputs:** document images + text prompts (JSONL, one sample per line)
- **Outputs:** text responses (JSONL, one sample per line)
- **Evaluation:** [Evaluation Script](https://github.com/QFIN-tech/FCMBench/blob/main/vision_language/evaluation.py)

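The one-sample-per-line convention above can be sketched as follows. This is a minimal illustration only: the field names used here (`id`, `image`, `prompt`, `answer`, `prediction`) and the exact-match scoring loop are hypothetical assumptions for the sake of the example; the authoritative schema and metrics are defined by the FCMBench-V1.0 release and its evaluation script.

```python
import json

# Hypothetical benchmark samples; real field names come from the dataset release.
samples = [
    {"id": "vl-0001", "image": "images/vl-0001.jpg",
     "prompt": "What is the certificate's issue date?", "answer": "2024-03-15"},
    {"id": "vl-0002", "image": "images/vl-0002.jpg",
     "prompt": "Who is the document holder?", "answer": "Jane Doe"},
]

# A model's text responses, also one JSON object per line.
predictions = [
    {"id": "vl-0001", "prediction": "2024-03-15"},
    {"id": "vl-0002", "prediction": "John Doe"},
]

def write_jsonl(path, rows):
    """Write rows as JSON Lines: one JSON object per line, UTF-8."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

def read_jsonl(path):
    """Read a JSON Lines file back into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

write_jsonl("samples.jsonl", samples)
write_jsonl("predictions.jsonl", predictions)

# Toy exact-match score to show how the two files line up by "id";
# the official metric is implemented in the evaluation script, not here.
gold = {s["id"]: s["answer"] for s in read_jsonl("samples.jsonl")}
pred = {p["id"]: p["prediction"] for p in read_jsonl("predictions.jsonl")}
accuracy = sum(gold[k] == pred.get(k) for k in gold) / len(gold)
print(f"exact-match accuracy: {accuracy:.2f}")  # 0.50
```
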
#### Paper & Project Links

- [**Paper (arXiv)**](https://arxiv.org/abs/2601.00150)
- [**Paper (PDF)**](https://arxiv.org/pdf/2601.00150)
- [**Project Page**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- [**Leaderboard**](https://github.com/QFIN-tech/FCMBench/blob/main/vision_language/LEADERBOARD.md)
- [**Dataset (ModelScope)**](https://modelscope.cn/datasets/QFIN/FCMBench-V1.0)
- [**Dataset (Hugging Face)**](https://huggingface.co/datasets/QFIN/FCMBench-V1.0)

#### Reference Model Demo

We also provide an interactive demo of our Qfin-VL-Instruct model, which achieves strong performance on FCMBench-V1.0.
To try the Gradio demo, please contact [yangyehui-jk@qifu.com](mailto:yangyehui-jk@qifu.com) with the following information:

- Name
- Affiliation / Organization
- Intended use (e.g., research exploration, benchmarking reference)
- Contact email

Access will be granted on a case-by-case basis.

### 2) Video Understanding Track (🕒 Coming Soon)

### 3) Speech Understanding & Generation Track (🕒 Coming Soon)

### 4) Multi-step / Agentic Track (🕒 Coming Soon)

## Citation

```bibtex
@misc{yang2026fcmbenchcomprehensivefinancialcredit,
  title={FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications},
  author={Yehui Yang and Dalu Yang and Wenshuo Zhou and Fangxin Shang and Yifan Liu and Jie Ren and Haojun Fei and Qing Yang and Yanwu Xu and Tao Chen},
  year={2026},
  eprint={2601.00150},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.00150},
}
```