Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training


Figure: Model performance comparison — average score across financial benchmarks. ODA-Fin-RL/SFT-8B demonstrates strong performance relative to thinking models with significantly more parameters.

This repository provides ODA-Fin-SFT-8B, a financial language model trained on high-quality Chain-of-Thought data. For the reinforcement learning version, see ODA-Fin-RL-8B.

📖 Overview

ODA-Fin-SFT-8B is an 8B-parameter financial language model built on Qwen3-8B and fine-tuned on the ODA-Fin-SFT-318K dataset, a curated corpus of 318K samples with high-quality Chain-of-Thought (CoT) reasoning traces distilled from Qwen3-235B-A22B-Thinking. The model provides a strong foundation for financial reasoning and delivers state-of-the-art performance across diverse financial tasks.

🎯 Key Highlights

  • Base Model: Qwen3-8B
  • Training Data: ODA-Fin-SFT-318K (318K samples with verified CoT)
  • Training Method: Supervised Fine-Tuning with full-parameter updates
  • Avg Performance: 72.1% across 9 financial benchmarks
  • Key Strengths:
    • Balanced performance across general financial understanding, sentiment analysis, and numerical reasoning
    • Serves as optimal initialization for subsequent RL training

🧠 Model Training

Training Configuration

Base Model: Qwen/Qwen3-8B
Training Framework: Full-parameter fine-tuning
Hardware: 16×NVIDIA A100 (80GB)
Sequence Length: 16,384 tokens
Batch Size: 1 per device
Gradient Accumulation: 16 steps
Learning Rate: 1.0e-5 (cosine schedule)
Warmup Ratio: 0.1
Epochs: 3
Training Data: ODA-Fin-SFT-318K
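The configuration above implies an effective global batch size and step count that can be derived directly. A minimal sketch, assuming standard data-parallel training (the parallelism strategy is not stated on this card):

```python
# Values taken from the training configuration above.
num_gpus = 16          # 16 x NVIDIA A100 (80GB)
per_device_batch = 1   # Batch Size: 1 per device
grad_accum_steps = 16  # Gradient Accumulation: 16 steps
num_samples = 318_000  # ODA-Fin-SFT-318K
epochs = 3

# Effective global batch = devices x per-device batch x accumulation steps.
effective_batch = num_gpus * per_device_batch * grad_accum_steps

# Approximate optimizer steps (assumes a drop-last dataloader).
steps_per_epoch = num_samples // effective_batch
total_steps = steps_per_epoch * epochs

print(effective_batch)  # 256
print(total_steps)      # 3726
```

With a warmup ratio of 0.1, this would put the learning-rate warmup at roughly the first 370 optimizer steps before the cosine decay begins.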

📊 Model Performance

Models trained on ODA-Fin-SFT-318K demonstrate superior performance across 9 financial benchmarks:

Table: Main results. 'FinIQ', 'HL', and 'CFQA' refer to the FinanceIQ, Headlines, and ConvFinQA benchmarks.

📊 Benchmark Details

General Financial Understanding

  • FinEval (Chinese): Financial domain knowledge across banking, insurance, securities (Acc/zh)
  • Finova: Agent-level financial reasoning and compliance verification (Acc/zh)
  • FinanceIQ: Professional certifications (CPA, CFA) expertise assessment (Acc/zh)

Sentiment Analysis

  • FOMC: Hawkish vs. Dovish monetary policy stance classification (Weighted-F1/en)
  • FPB: Financial PhraseBank sentiment classification (Weighted-F1/en)
  • Headlines: Financial news headline sentiment interpretation (Weighted-F1/en)
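The three sentiment benchmarks report weighted F1 rather than accuracy: per-class F1 averaged with weights proportional to each class's support. A minimal pure-Python sketch of the metric (the label names are illustrative, not any benchmark's actual label set):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[label] / total) * f1
    return score

# Illustrative FOMC-style stance labels.
truth = ["hawkish", "dovish", "dovish", "neutral", "hawkish"]
preds = ["hawkish", "dovish", "neutral", "neutral", "dovish"]
print(round(weighted_f1(truth, preds), 3))  # 0.6
```

This matches scikit-learn's `f1_score(..., average="weighted")` convention, which is the common choice for imbalanced sentiment classes.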

Numerical Reasoning

  • FinQA: Complex numerical reasoning over financial reports (Acc/en)
  • TaTQA: Hybrid tabular-textual arithmetic operations (Acc/en)
  • ConvFinQA: Multi-turn conversational numerical analysis (Acc/en)
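The headline number on this card (72.1%) is an unweighted mean over the nine per-benchmark scores. A sketch with placeholder scores (NOT the model's reported per-benchmark results; they are chosen only so the mean lands near 72.1%):

```python
# Placeholder per-benchmark scores, illustrating how the single
# average reported on this card is formed.
scores = {
    "FinEval": 0.70, "Finova": 0.68, "FinanceIQ": 0.74,  # Acc
    "FOMC": 0.71, "FPB": 0.78, "Headlines": 0.76,        # Weighted-F1
    "FinQA": 0.69, "TaTQA": 0.72, "ConvFinQA": 0.71,     # Acc
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.1%}")  # 72.1%
```

Note that the average mixes two metric types (accuracy and weighted F1); both are on a 0-100% scale, so a simple mean is well defined.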

📚 Citation

@misc{cao2026unlockingdatavaluefinance,
      title={Unlocking Data Value in Finance: A Study on Distillation and Difficulty-Aware Training}, 
      author={Chuxue Cao and Honglin Lin and Zhanping Zhong and Xin Gao and Mengzhang Cai and Conghui He and Sirui Han and Lijun Wu},
      year={2026},
      eprint={2603.07223},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2603.07223}, 
}

📄 License

This model is released under the Apache 2.0 License. The training data (ODA-Fin-SFT-318K) aggregates samples from 25+ open-source repositories, each with its own license.


🤝 Acknowledgments

We thank the creators of DianJin-R1-Data, Agentar-DeepFinance-100K, financial_phrasebank, Finance-Instruct-500k, and others. We also thank the Qwen team for the powerful Qwen3 series models.

