---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
tags:
- finance
- reasoning
- chain-of-thought
- financial-analysis
model-index:
- name: ODA-Fin-SFT-8B
  results: []
datasets:
- OpenDataArena/ODA-Fin-SFT-318k
language:
- en
- zh
metrics:
- accuracy
- f1
pipeline_tag: question-answering
---



<div align="center">
  <h1>Unlocking Data Value in Finance: A Study on Distillation
and Difficulty-Aware Training</h1>

</div>

<div align="center">
  
[![Paper](https://img.shields.io/badge/arXiv-Paper-red)](https://arxiv.org/abs/2603.07223)
[![Collections](https://img.shields.io/badge/🤗-Collections-yellow)](https://huggingface.co/collections/OpenDataArena/oda-finance)

</div>

<figure align="center">
  <img src="imgs/model_compare.png" width="100%" alt="Model Performance Comparison">
  <figcaption><em>Average score across Financial benchmarks. ODA-Fin-RL/SFT-8B demonstrates strong performance relative to thinking models with significantly more parameters.</em></figcaption>
</figure>

---

This repository provides **ODA-Fin-SFT-8B**, a financial language model trained on high-quality Chain-of-Thought data. For the reinforcement learning version, see [ODA-Fin-RL-8B](https://huggingface.co/OpenDataArena/ODA-Fin-RL-8B).

## 📖 Overview

**ODA-Fin-SFT-8B** is an 8B-parameter financial language model built on Qwen3-8B, fine-tuned on the **ODA-Fin-SFT-318K** dataset: a meticulously curated corpus of 318K samples with high-quality Chain-of-Thought (CoT) reasoning traces distilled from **Qwen3-235B-A22B-Thinking**. This model establishes a robust foundation for financial reasoning, demonstrating state-of-the-art performance across diverse financial tasks.

### 🎯 Key Highlights

- **Base Model**: Qwen3-8B
- **Training Data**: ODA-Fin-SFT-318K (318K samples with verified CoT)
- **Training Method**: Supervised Fine-Tuning with full-parameter updates
- **Avg Performance**: 72.1% across 9 financial benchmarks
- **Key Strengths**: 
  - Balanced performance across general financial understanding, sentiment analysis, and numerical reasoning
  - Serves as optimal initialization for subsequent RL training
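Since the model inherits the standard Qwen3 chat interface, it can be loaded with 🤗 Transformers. A minimal inference sketch follows; the prompt and generation settings are illustrative, not official recommendations:

```python
# Minimal inference sketch for ODA-Fin-SFT-8B (illustrative settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenDataArena/ODA-Fin-SFT-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example financial question (invented for illustration).
messages = [
    {"role": "user", "content": "A bond pays a 5% annual coupon on a $1,000 "
                                "face value. What is the annual coupon payment?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# CoT-trained models emit a reasoning trace first, so allow a generous budget.
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
))
```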

---

## 馃 Model Training

### Training Configuration

```yaml
Base Model: Qwen/Qwen3-8B
Training Method: Full-parameter fine-tuning
Hardware: 16×NVIDIA A100 (80GB)
Sequence Length: 16,384 tokens
Batch Size: 1 per device
Gradient Accumulation: 16 steps
Learning Rate: 1.0e-5 (cosine schedule)
Warmup Ratio: 0.1
Epochs: 3
Training Data: ODA-Fin-SFT-318K
```
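As a sanity check on the configuration above, the effective global batch size follows from the per-device batch size, gradient accumulation steps, and GPU count:

```python
# Effective global batch size implied by the training configuration above.
per_device_batch = 1    # Batch Size: 1 per device
grad_accum_steps = 16   # Gradient Accumulation: 16 steps
num_gpus = 16           # 16x NVIDIA A100 (80GB)

effective_batch = per_device_batch * grad_accum_steps * num_gpus
print(effective_batch)  # → 256 sequences (up to 16,384 tokens each) per optimizer step
```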


---

## 📊 Model Performance


Models trained on ODA-Fin-SFT-318K achieve consistently strong results across 9 financial benchmarks:

<figure align="center">
  <img src="imgs/main_results_table.png" width="100%" alt="Main results table">
  <figcaption><em>Main Results. 'FinIQ', 'HL' and 'CFQA' refer to FinanceIQ, Headlines, and ConvFinQA benchmarks.</em></figcaption>
</figure>

---


## 📊 Benchmark Details

### General Financial Understanding

- **FinEval** (Chinese): Financial domain knowledge across banking, insurance, and securities (Acc/zh)
- **Finova**: Agent-level financial reasoning and compliance verification (Acc/zh)
- **FinanceIQ**: Professional certifications (CPA, CFA) expertise assessment (Acc/zh)

### Sentiment Analysis

- **FOMC**: Hawkish vs. Dovish monetary policy stance classification (Weighted-F1/en)
- **FPB**: Financial PhraseBank sentiment classification (Weighted-F1/en)
- **Headlines**: Financial news headline sentiment interpretation (Weighted-F1/en)
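The sentiment benchmarks above are scored with weighted F1, which averages per-class F1 scores weighted by each class's true-label count. A pure-Python sketch of the metric, with toy labels invented for illustration:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to each class's support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[c] / total) * f1
    return score

# Toy gold/predicted labels for a 3-class sentiment task.
y_true = ["positive", "negative", "neutral", "neutral", "positive", "negative"]
y_pred = ["positive", "negative", "neutral", "positive", "positive", "neutral"]
print(round(weighted_f1(y_true, y_pred), 4))  # → 0.6556
```

This matches scikit-learn's `f1_score(y_true, y_pred, average="weighted")`, the usual reference implementation for these benchmarks.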

### Numerical Reasoning

- **FinQA**: Complex numerical reasoning over financial reports (Acc/en)
- **TaTQA**: Hybrid tabular-textual arithmetic operations (Acc/en)
- **ConvFinQA**: Multi-turn conversational numerical analysis (Acc/en)

---

## 📚 Citation

```bibtex
@misc{cao2026unlockingdatavaluefinance,
      title={Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training}, 
      author={Chuxue Cao and Honglin Lin and Zhanping Zhong and Xin Gao and Mengzhang Cai and Conghui He and Sirui Han and Lijun Wu},
      year={2026},
      eprint={2603.07223},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2603.07223}, 
}
```

---

## 📄 License

This model is released under the [Apache 2.0 License](https://opensource.org/licenses/Apache-2.0). The training data (ODA-Fin-SFT-318K) aggregates data from 25+ open-source repositories, each with its own license.

---

## 馃 Acknowledgments

We thank the creators of DianJin-R1-Data, Agentar-DeepFinance-100K, financial_phrasebank, Finance-Instruct-500k, and others. We also thank the Qwen team for the powerful Qwen3 series models.

---

## 🔗 Related Resources

- **SFT Dataset**: [ODA-Fin-SFT-318K](https://huggingface.co/datasets/OpenDataArena/ODA-Fin-SFT-318k)
- **RL Dataset**: [ODA-Fin-RL-12K](https://huggingface.co/datasets/OpenDataArena/ODA-Fin-RL-12K)
- **RL Model**: [ODA-Fin-RL-8B](https://huggingface.co/OpenDataArena/ODA-Fin-RL-8B)
