Improve dataset card: add metadata, paper/GitHub links, and sample usage

#1 opened by nielsr (HF Staff)

Files changed (1): README.md (+35 −15)
README.md CHANGED
@@ -1,14 +1,40 @@
  # FinCDM-FinEval-KQA

- **Repository**: [NextGenWhu/FinCDM-CPA-KQA](https://huggingface.co/datasets/NextGenWhu/FinCDM-CPA-KQA)

  ## 📖 Overview

- **FinCDM-CPA-KQA** is a specialized dataset for financial knowledge-based question answering, derived from the research presented in:

- > ["From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models"](https://arxiv.org/abs/2508.13491)

- This dataset is designed to evaluate large language models (LLMs) on their ability to perform **financial knowledge reasoning**, **compliance assessment**, and **knowledge-based question answering**. It serves as a robust benchmark for assessing how well models understand and apply financial domain knowledge, making it valuable for both research and practical applications in finance.

  ## 📊 Dataset Description

@@ -21,21 +47,15 @@ The dataset is provided in **JSON format** and consists of multiple-choice quest
  - **`gold`**: The 0-based index of the correct answer in the `choices` list.
  - **`text`**: The correct option with a brief explanation.

- ## 🚀 Use Cases
-
- - **Benchmarking LLMs**: Evaluate the financial knowledge reasoning capabilities of large language models.
- - **Training QA Systems**: Develop and fine-tune question-answering systems for financial applications.
- - **Compliance and Auditing**: Support tasks related to financial compliance, risk assessment, and auditing.
-
  ## 📝 Citation

  If you use this dataset in your research or applications, please cite the following paper:

  ```bibtex
- @article{FinCDM2025,
  title={From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models},
- author={},
  journal={arXiv preprint arXiv:2508.13491},
- year={2025},
- url={https://arxiv.org/abs/2508.13491}
- }
+ ---
+ license: mit
+ task_categories:
+ - question-answering
+ language:
+ - zh
+ - en
+ tags:
+ - finance
+ pretty_name: FinCDM-FinEval-KQA
+ ---
+
  # FinCDM-FinEval-KQA

+ [**Paper**](https://huggingface.co/papers/2508.13491) | [**GitHub**](https://github.com/WHUNextGen/FinCDM)

  ## 📖 Overview

+ **FinCDM-FinEval-KQA** is a specialized dataset for financial knowledge-based question answering, a knowledge-skill-annotated version of the FinEval benchmark. It is introduced in the paper:
+
+ > ["From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models"](https://huggingface.co/papers/2508.13491)
+
+ This dataset is designed to evaluate large language models (LLMs) on **financial knowledge reasoning**, **compliance assessment**, and **knowledge-based question answering**. Unlike traditional benchmarks, it provides fine-grained knowledge labels that identify specific strengths and weaknesses in model performance.
+
+ ## 🚀 Usage

+ You can load this dataset with the Hugging Face `datasets` library:

+ ```python
+ from datasets import load_dataset
+
+ # Load the FinEval-KQA dataset
+ dataset = load_dataset("NextGenWhu/FinCDM-FinEval-KQA")
+
+ # Examine the first sample
+ print(dataset['train'][0])
+ ```

  ## 📊 Dataset Description

  - **`gold`**: The 0-based index of the correct answer in the `choices` list.
  - **`text`**: The correct option with a brief explanation.
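Given this schema, scoring a model's answer reduces to resolving the `gold` index into the `choices` list. The snippet below is a minimal sketch of that lookup; the sample record is hypothetical, mirroring the documented `choices`/`gold`/`text` fields rather than quoting an actual dataset entry:

```python
# Hypothetical sample following the documented schema; values are
# illustrative, not taken from the real dataset.
sample = {
    "choices": ["Option A", "Option B", "Option C", "Option D"],
    "gold": 2,  # 0-based index into `choices`
    "text": "C. Option C, with a brief explanation.",
}

# Resolve the gold index to the correct answer string.
correct_answer = sample["choices"][sample["gold"]]
print(correct_answer)  # Option C
```

A model prediction can then be compared against `correct_answer` (or against the index `gold` directly) when computing accuracy.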
  ## 📝 Citation

  If you use this dataset in your research or applications, please cite the following paper:

  ```bibtex
+ @article{fincdm2025,
  title={From Scores to Skills: A Cognitive Diagnosis Framework for Evaluating Financial Large Language Models},
+ author={Kuang, Ziyan and others},
  journal={arXiv preprint arXiv:2508.13491},
+ year={2025}
+ }
+ ```