Upload 2 files

- README.md (+19, −29)
- README_cn.md (+20, −25)

# README.md
---
configs:
- config_name: benchmark
  data_files:
  - split: test
    path: vision_language/FCMBench_v1.0_testset_full.jsonl
license: apache-2.0
task_categories:
- table-question-answering
- text-classification
pretty_name: FCMBench
size_categories:
- 10K<n<100K
---
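The `benchmark` config above points at a JSONL test set, one JSON object per line. As a minimal sketch, such a file can be read with the Python standard library; the field names below (`id`, `image`, `question`) are hypothetical and not necessarily the benchmark's actual schema:

```python
import json

def load_jsonl(path):
    """Read a JSONL file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Write a tiny demo file; real FCMBench samples may use different fields.
with open("demo.jsonl", "w", encoding="utf-8") as f:
    f.write('{"id": "0001", "image": "imgs/0001.jpg", "question": "What type of document is shown?"}\n')

samples = load_jsonl("demo.jsonl")
print(samples[0]["id"])  # -> 0001
```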


**FCMBench** is a multimodal benchmark for credit-risk-oriented workflows. It aims to provide a standard playground that promotes collaborative development between academia and industry, offering standardized datasets, prompts, and evaluation scripts across multiple tracks (image, video, speech, agents, etc.).

[简体中文](./README_cn.md)

## 🔥 News

- 【**2026. 03. 16**】✨ We released **FCMBench-V1.1**. This version adds English document images and corresponding QA pairs, expands the covered document types to 26, and grows the dataset to 5,198 images and 13,806 QA samples.
- 【**2026. 01. 01**】We are proud to launch **FCMBench-V1.0**, which covers 18 core certificate types and includes 4,043 privacy-compliant images and 8,446 QA samples. It comprises 3 types of Perception tasks and 4 types of Reasoning tasks, cross-referenced with 10 categories of robustness inferences. All tasks and inferences are derived from real-world critical scenarios.

> **Status:** Public release (v1.1).
> **Maintainers:** [奇富科技 / Qfin Holdings](https://github.com/QFIN-tech)
> **Contact:** [yangyehuisw@126.com]

---

## Tracks Overview

### 1) Vision-Language Track (✅ Available)

Image-based financial document understanding:

- **Entry:** [Vision-Language Track](vision_language)
- **Inputs:** document images + text prompts (JSONL, one sample per line)
- **Outputs:** text responses (JSONL, one sample per line)
- **Evaluation:** [Evaluation Script](vision_language/evaluation.py)
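Model outputs follow the same one-sample-per-line JSONL convention. The official metric is defined by the evaluation script above; purely as a sketch, predictions could be serialized and scored with a toy exact-match accuracy (the `pred` and `answer` field names here are assumptions, not the benchmark's schema):

```python
import json

def write_predictions(preds, path):
    """Serialize model responses as JSONL, one prediction per line."""
    with open(path, "w", encoding="utf-8") as f:
        for p in preds:
            f.write(json.dumps(p, ensure_ascii=False) + "\n")

def exact_match(preds, refs):
    """Toy metric: fraction of predictions equal to the reference answer."""
    hits = sum(p["pred"].strip() == r["answer"].strip() for p, r in zip(preds, refs))
    return hits / len(refs)

# Hypothetical records; field names are illustrative only.
refs  = [{"id": "1", "answer": "ID card"}, {"id": "2", "answer": "invoice"}]
preds = [{"id": "1", "pred": "ID card"},  {"id": "2", "pred": "receipt"}]
write_predictions(preds, "preds.jsonl")
print(exact_match(preds, refs))  # -> 0.5
```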

#### Paper & Project Links

- [**Paper (arXiv)**](https://arxiv.org/abs/2601.00150)
- [**Paper (PDF)**](https://github.com/QFIN-tech/FCMBench/tree/main/TechnicalReport)
- [**Project Page**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- [**Leaderboard**](https://qfin-tech.github.io/FCMBench)
- [**Sample Data**](https://qfin-tech.github.io/FCMBench/Examples.html)
- [**Dataset (ModelScope)**](https://modelscope.cn/datasets/QFIN/FCMBench-Data)
- [**Dataset (Hugging Face)**](https://huggingface.co/datasets/QFIN/FCMBench-Data)

#### Reference Model Demo

We also provide access to an interactive demo of our Qfin-VL-Instruct model, which achieves strong performance on FCMBench.
If you are interested in trying the Gradio demo, please contact [yangyehui-jk@qifu.com] with the following information:

- Name
- Affiliation / Organization

Access will be granted on a case-by-case basis.

## Citation

```
@misc{yang2026fcmbenchcomprehensivefinancialcredit,
      title={FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications},
      author={Yehui Yang and Dalu Yang and Wenshuo Zhou and Fangxin Shang and Yifan Liu and Jie Ren and Haojun Fei and Qing Yang and Yanwu Xu and Tao Chen},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.00150},
}
```

## Star History

[](https://www.star-history.com/#QFIN-tech/FCMBench&type=date&legend=top-left)

# README_cn.md


**FCMBench** is a multimodal benchmark for credit-risk-oriented workflows. It aims to provide a standardized playground that promotes collaborative development between academia and industry, offering standardized datasets, prompts, and evaluation scripts across multiple tracks (image, video, speech, agents, etc.).

[English Version](./README.md)

## 🔥 News

- 【**2026. 03. 16**】✨ We released **FCMBench-V1.1**. This version adds English certificate images and corresponding QA pairs, expands the covered certificate types to 26, and grows the dataset to 5,198 images and 13,806 QA samples.
- 【**2026. 01. 01**】We are pleased to release **FCMBench-V1.0**, which covers 18 core certificate types and includes 4,043 privacy-compliant images and 8,446 QA samples. Its task system comprises 3 types of Perception tasks and 4 types of Reasoning tasks, cross-referenced with 10 categories of robustness inferences. All tasks and inferences are derived from real-world critical business scenarios.

> **Status:** Public release (v1.1).
> **Maintainers:** [奇富科技 / Qfin Holdings](https://github.com/QFIN-tech)
> **Contact:** [yangyehui-jk@qifu.com]

---

## Tracks Overview

### 1) Vision-Language Track (✅ Available)

Image-based financial document understanding:
- [**Technical Report (arXiv)**](https://arxiv.org/abs/2601.00150)
- [**Technical Report (PDF)**](https://arxiv.org/pdf/2601.00150)
- [**Project Page**](https://github.com/QFIN-tech/FCMBench/tree/main/vision_language)
- [**Leaderboard**](https://qfin-tech.github.io/FCMBench)
- [**Dataset (ModelScope)**](https://modelscope.cn/datasets/QFIN/FCMBench-Data)
- [**Dataset (Hugging Face)**](https://huggingface.co/datasets/QFIN/FCMBench-Data)

#### Reference Model Demo

We also provide access to an interactive demo of our Qfin-VL-Instruct model, which achieves strong performance on FCMBench.
If you would like to try the Gradio demo, please send the following information to [yangyehui-jk@qifu.com]:

- Name
- Affiliation / Organization

## Citation

```
@misc{yang2026fcmbenchcomprehensivefinancialcredit,
      title={FCMBench: A Comprehensive Financial Credit Multimodal Benchmark for Real-world Applications},
      author={Yehui Yang and Dalu Yang and Wenshuo Zhou and Fangxin Shang and Yifan Liu and Jie Ren and Haojun Fei and Qing Yang and Yanwu Xu and Tao Chen},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.00150},
}
```

## Star History

[](https://www.star-history.com/#QFIN-tech/FCMBench&type=date&legend=top-left)