# Financial LLM Advisor
A domain-specialized large language model for financial analysis and investment reasoning, fine-tuned on financial instruction datasets to outperform general-purpose models on financial tasks.
This model builds on Phi-3.5-mini and is optimized for investment analysis, financial reasoning, and entity extraction while remaining cheap to run and fast enough for production deployment.
## Overview
General-purpose LLMs are powerful but often lack deep financial domain reasoning.
Financial LLM Advisor addresses this by fine-tuning a strong small model on curated financial instruction data to create an efficient AI financial analyst.
Key characteristics:
- Domain-specific financial reasoning
- Sub-300ms inference latency
- Low operational cost
- Runs on consumer GPUs
- Fully reproducible training pipeline
The model is designed for investment research workflows, including analysis of earnings reports, financial statements, and analyst reports.
## Model Details
| Property | Value |
|---|---|
| Base Model | microsoft/Phi-3.5-mini-instruct |
| Parameters | 3.8B |
| Fine-tuning Method | LoRA |
| LoRA Rank | 16 |
| Training Data | 50K examples from Finance-Instruct-500k |
| Training Time | ~8–10 hours |
| Hardware | RTX 4090 |
| Adapter Size | ~120MB |
The LoRA adapter trains only ~1.2M parameters, allowing efficient domain adaptation without retraining the full model.
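For intuition, a LoRA adapter's trainable-parameter count can be estimated from the rank and the shapes of the adapted weight matrices. The hidden size and number of adapted matrices below are illustrative assumptions, not the actual training setup, so the result will not match the ~1.2M figure above:

```python
def lora_param_count(rank: int, matrix_shapes: list[tuple[int, int]]) -> int:
    """Each adapted d_out x d_in weight matrix W gains two low-rank factors:
    A (rank x d_in) and B (d_out x rank), i.e. rank * (d_in + d_out) params."""
    return sum(rank * (d_in + d_out) for (d_out, d_in) in matrix_shapes)

# Hypothetical example: adapting two 3072 x 3072 attention projections
# in each of 32 decoder layers, at rank 16.
shapes = [(3072, 3072)] * 2 * 32
print(lora_param_count(16, shapes))  # → 6291456
```

The actual count depends on which target modules were adapted, which the card does not state.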
## Performance
Benchmark comparison across multiple financial tasks.
| Model | Financial Reasoning | Q&A F1 | NER F1 | p99 Latency |
|---|---|---|---|---|
| Llama-3.2-7B | 68.5% | 0.72 | 0.76 | 320ms |
| Phi-3.5-mini (baseline) | 65.2% | 0.68 | 0.72 | 280ms |
| Financial LLM Advisor | 78.1% | 0.81 | 0.86 | 185ms |
The fine-tuned model substantially improves financial reasoning over both baselines while remaining efficient to deploy.
## Capabilities
The model performs well on tasks such as:
- multi-step financial reasoning
- earnings report analysis
- risk assessment
- valuation discussions
- financial entity extraction
- investment Q&A
Example prompt:

> What are the key risks for Apple in 2024?
An example response typically:
- analyzes revenue composition
- evaluates margins
- identifies strategic risks
- suggests investment implications
## Architecture
The system architecture is built around parameter-efficient fine-tuning.
```
Financial Data
      ↓
Phi-3.5-mini Base Model
      ↓
LoRA Adapter (r=16)
      ↓
Inference Server
      ↓
Financial Analysis Output
```
This design lets the model keep fast inference and a small memory footprint while gaining financial domain expertise.
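In a deployment following this layout, the adapter can be attached to the base model at load time with `peft`. This is a sketch under the assumption that the adapter is published as a standalone PEFT checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3.5-mini-instruct"
adapter_id = "selmantayyar/financial-llm-advisor"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the ~120MB LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for serving,
# removing the small LoRA overhead at generation time.
model = model.merge_and_unload()
```

Merging is a one-way convenience for inference; keep the adapter separate if you plan to continue fine-tuning.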
## Training Configuration
| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Batch Size | 16 |
| Quantization | 8-bit |
| Max Sequence Length | 512 |
Training uses supervised fine-tuning (SFT) with LoRA adapters to efficiently adapt the base model to financial reasoning tasks.
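Under these hyperparameters, the SFT run can be wired up with `peft` and `trl` roughly as follows. This is a sketch, not the actual training script: the `lora_alpha` value, target modules, and dataset loading are assumptions.

```python
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora_config = LoraConfig(
    r=16,                        # LoRA rank from the table above
    lora_alpha=32,               # assumed scaling factor
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    num_train_epochs=3,
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    max_seq_length=512,
    output_dir="financial-llm-advisor",
)

trainer = SFTTrainer(
    model="microsoft/Phi-3.5-mini-instruct",
    args=training_args,
    train_dataset=dataset,       # the 50K Finance-Instruct-500k subset; loading elided
    peft_config=lora_config,
)
trainer.train()
```

With 8-bit quantization of the base model (as in the table), only the LoRA weights are held in full precision during training, which is what makes a single RTX 4090 sufficient.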
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "selmantayyar/financial-llm-advisor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Analyze the investment risks for Tesla."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,   # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Intended Use
This model is designed for:
- financial education
- investment research assistance
- financial document analysis
- experimentation with domain-specific LLMs
## Limitations
- Not a replacement for professional financial advice
- May hallucinate financial facts
- Performance depends on prompt quality
- Not trained on proprietary financial datasets
## License
MIT License
## Author
Selman Tayyar