Improve dataset card: add metadata, paper/GitHub links, and sample usage
#1
by nielsr (HF Staff) - opened

README.md CHANGED
---
license: mit
task_categories:
- question-answering
language:
- zh
- en
tags:
- finance
pretty_name: FinCDM-FinEval-KQA
---

# FinCDM-FinEval-KQA

[**Paper**](https://huggingface.co/papers/2508.13491) | [**GitHub**](https://github.com/WHUNextGen/FinCDM)

## Overview

**FinCDM-FinEval-KQA** is a specialized dataset for financial knowledge-based question answering, serving as a knowledge-skill annotated version of the FinEval benchmark. It was introduced in:

> ["From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models"](https://huggingface.co/papers/2508.13491)

This dataset is designed to evaluate large language models (LLMs) on **financial knowledge reasoning**, **compliance assessment**, and **knowledge-based question answering**. Unlike traditional benchmarks, it provides fine-grained knowledge labels that identify specific strengths and weaknesses in model performance.

## Usage

You can load this dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the FinEval-KQA dataset
dataset = load_dataset("NextGenWhu/FinCDM-FinEval-KQA")

# Examine the first sample
print(dataset['train'][0])
```
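Beyond loading, a common use is scoring a model's answers against the dataset's gold answer indices. The sketch below is illustrative only: `predict` is a hypothetical stand-in for an actual model call, and the inline `samples` mimic the dataset's `choices`/`gold` fields rather than real entries.

```python
# Minimal benchmarking sketch (illustrative, not the official evaluation).
# `predict` is a hypothetical placeholder for an LLM call that returns
# a 0-based answer index for each multiple-choice question.
def predict(question: str, choices: list[str]) -> int:
    return 0  # placeholder model: always picks the first option


def accuracy(samples) -> float:
    correct = sum(
        predict(s["question"], s["choices"]) == s["gold"] for s in samples
    )
    return correct / len(samples)


# Toy samples mimicking the dataset's fields.
samples = [
    {"question": "Q1", "choices": ["A", "B"], "gold": 0},
    {"question": "Q2", "choices": ["A", "B"], "gold": 1},
]
print(accuracy(samples))  # -> 0.5
```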
## Dataset Description

The dataset is provided in **JSON format** and consists of multiple-choice questions. Each entry includes, among other fields:

- **`gold`**: The 0-based index of the correct answer in the `choices` list.
- **`text`**: The correct option with a brief explanation.
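Since `gold` is a 0-based index into `choices`, recovering the correct option is a single lookup. The entry below is a made-up example for illustration; `question` is an assumed field name, while `choices` and `gold` follow the description above.

```python
# Minimal sketch: map the `gold` index to its answer option.
# `question` is an assumed field name for illustration; `choices`
# and `gold` are the documented fields.
sample = {
    "question": "Which statement about bond duration is correct?",
    "choices": ["Option A", "Option B", "Option C", "Option D"],
    "gold": 2,
}

correct_answer = sample["choices"][sample["gold"]]
print(correct_answer)  # -> Option C
```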
## Citation

If you use this dataset in your research or applications, please cite the following paper:

```bibtex
@article{kuang2025fincdm,
  title={From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models},
  author={Kuang, Ziyan and others},
  journal={arXiv preprint arXiv:2508.13491},
  year={2025}
}
```
|