---
library_name: transformers
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
tags:
- finance
- reasoning
- chain-of-thought
- financial-analysis
model-index:
- name: ODA-Fin-SFT-8B
  results: []
datasets:
- OpenDataArena/ODA-Fin-SFT-318k
language:
- en
- zh
metrics:
- accuracy
- f1
pipeline_tag: question-answering
---

# Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training

[![Paper](https://img.shields.io/badge/arXiv-Paper-red)](https://arxiv.org/abs/2603.07223) [![Collections](https://img.shields.io/badge/πŸ€—-Collections-yellow)](https://huggingface.co/collections/OpenDataArena/oda-finance)
*Model Performance Comparison: average score across financial benchmarks. ODA-Fin-RL/SFT-8B demonstrates strong performance relative to thinking models with significantly more parameters.*
---

This repository provides **ODA-Fin-SFT-8B**, a financial language model trained on high-quality Chain-of-Thought data. For the reinforcement learning version, see [ODA-Fin-RL-8B](https://huggingface.co/OpenDataArena/ODA-Fin-RL-8B).

## 📖 Overview

**ODA-Fin-SFT-8B** is an 8B-parameter financial language model built on Qwen3-8B and fine-tuned on **ODA-Fin-SFT-318K**, a curated corpus of 318K samples with high-quality Chain-of-Thought (CoT) reasoning traces distilled from **Qwen3-235B-A22B-Thinking**. The model establishes a robust foundation for financial reasoning and achieves state-of-the-art performance across diverse financial tasks.

### 🎯 Key Highlights

- **Base Model**: Qwen3-8B
- **Training Data**: ODA-Fin-SFT-318K (318K samples with verified CoT)
- **Training Method**: Supervised fine-tuning with full-parameter updates
- **Avg. Performance**: 72.1% across 9 financial benchmarks
- **Key Strengths**:
  - Balanced performance across general financial understanding, sentiment analysis, and numerical reasoning
  - Serves as a strong initialization for subsequent RL training

---

## 🧠 Model Training

### Training Configuration

```yaml
Base Model: Qwen/Qwen3-8B
Training Framework: Full-parameter fine-tuning
Hardware: 16×NVIDIA A100 (80GB)
Sequence Length: 16,384 tokens
Batch Size: 1 per device
Gradient Accumulation: 16 steps  # effective global batch size: 1 × 16 × 16 GPUs = 256 sequences
Learning Rate: 1.0e-5 (cosine schedule)
Warmup Ratio: 0.1
Epochs: 3
Training Data: ODA-Fin-SFT-318K
```

---

## 📊 Model Performance

Models trained on ODA-Fin-SFT-318K demonstrate superior performance across 9 financial benchmarks:
*Main results. "FinIQ", "HL", and "CFQA" refer to the FinanceIQ, Headlines, and ConvFinQA benchmarks, respectively.*
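The benchmarks are scored with either accuracy or weighted F1 (see the per-benchmark metrics below). For reference, here is a minimal pure-Python sketch of both metrics; the label sets and predictions are hypothetical and not taken from the actual evaluation harness:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = 0.0
    for label in support:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * support[label] / len(y_true)
    return total

# Hypothetical FPB-style three-way sentiment labels.
gold = ["positive", "neutral", "neutral", "negative", "positive", "neutral"]
pred = ["positive", "neutral", "negative", "negative", "neutral", "neutral"]
print(f"accuracy:    {accuracy(gold, pred):.3f}")
print(f"weighted F1: {weighted_f1(gold, pred):.3f}")
```

Weighted F1 is the standard choice for the sentiment benchmarks because their class distributions are imbalanced; plain accuracy would overweight the majority class.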
--- ## πŸ“Š Benchmark Details ### General Financial Understanding - **FinEval** (Chinese): Financial domain knowledge across banking, insurance, securities (Acc/zh) - **Finova**: Agent-level financial reasoning and compliance verification (Acc/zh) - **FinanceIQ**: Professional certifications (CPA, CFA) expertise assessment (Acc/zh) ### Sentiment Analysis - **FOMC**: Hawkish vs. Dovish monetary policy stance classification (Weighted-F1/en) - **FPB**: Financial PhraseBank sentiment classification (Weighted-F1/en) - **Headlines**: Financial news headline sentiment interpretation (Weighted-F1/en) ### Numerical Reasoning - **FinQA**: Complex numerical reasoning over financial reports (Acc/en) - **TaTQA**: Hybrid tabular-textual arithmetic operations (Acc/en) - **ConvFinQA**: Multi-turn conversational numerical analysis (Acc/en) --- ## πŸ“š Citation ```bibtex @misc{cao2026unlockingdatavaluefinance, title={Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training}, author={Chuxue Cao and Honglin Lin and Zhanping Zhong and Xin Gao and Mengzhang Cai and Conghui He and Sirui Han and Lijun Wu}, year={2026}, eprint={2603.07223}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2603.07223}, } ``` --- ## πŸ“„ License This model is released under the [Apache 2.0 License](https://opensource.org/licenses/Apache-2.0). The training data (ODA-Fin-SFT-318K) aggregates from 25+ open-source repositories, each with their own licenses. --- ## 🀝 Acknowledgments We thank the creators of DianJin-R1-Data, Agentar-DeepFinance-100K, financial_phrasebank, Finance-Instruct-500k, and others. We also thank the Qwen team for the powerful Qwen3 series models. 
--- ## πŸ”— Related Resources - **SFT Dataset**: [ODA-Fin-SFT-318K](https://huggingface.co/datasets/OpenDataArena/ODA-Fin-SFT-318k) - **RL Dataset**: [ODA-Fin-RL-12K](https://huggingface.co/datasets/OpenDataArena/ODA-Fin-RL-12K) - **RL Model**: [ODA-Fin-RL-8B](https://huggingface.co/OpenDataArena/ODA-Fin-RL-8B)