---
library_name: transformers
tags: []
---
# FinPlan-1

FinPlan-1 is an LLM trained to assist with the creation of basic personal financial plans for individuals. This model is built on the Fino1 model, which is itself a version of Llama-3.1-8B-Instruct fine-tuned on chain-of-thought (CoT) data to improve its financial reasoning ability.
## Model Details

### Model Description
According to Bankrate’s 2025 Emergency Savings Report, only 41% of Americans would be able to use their personal savings to pay for a $1,000 emergency expense, with the rest “financing it with a credit card they’d pay off over time, reducing their spending on other things, taking out a personal loan, borrowing from family or friends or other methods.”

The financial health of Americans depends on a number of factors, but one important component is basic financial literacy and having a financial plan. Financial planning is one area where I think LLMs can be of assistance. This LLM is my attempt to further train and fine-tune a model already trained on financial reasoning tasks so that it can assist individuals with two key aspects of financial planning:
- Assist with the creation of a budget spreadsheet to enable individuals to keep track of their finances and understand where their money is going.
- Provide assistance with planning for short-, medium-, and long-term goals, including breaking those goals down into monthly savings targets and suggesting broad investment vehicles to fit each goal's timeframe.
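The goal-planning arithmetic in the second bullet can be sketched as follows. This is a minimal illustration, not code from the model itself: `monthly_target` is a hypothetical helper, and the annuity formula assumes end-of-month contributions at a fixed monthly-compounded return.

```python
def monthly_target(goal: float, months: int, annual_rate: float = 0.0) -> float:
    """Monthly savings needed to reach `goal` in `months`, assuming
    end-of-month contributions earning `annual_rate`, compounded monthly.
    With no return assumed, this reduces to a simple division."""
    if months <= 0:
        raise ValueError("months must be positive")
    r = annual_rate / 12
    if r == 0:
        return goal / months
    # Future value of an ordinary annuity: pmt * ((1 + r)**n - 1) / r
    return goal * r / ((1 + r) ** months - 1)

# Short-term goal: $3,000 emergency fund in 12 months, held in cash
print(round(monthly_target(3_000, 12), 2))  # 250.0
# Long-term goal: $50,000 in 10 years at an assumed 5% annual return
print(round(monthly_target(50_000, 120, 0.05), 2))
```

Note that assuming a positive return lowers the required monthly contribution below the simple `goal / months` figure, which is why matching the investment vehicle to the goal's timeframe matters.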
While current LLMs can perform these tasks to an extent, they are often inconsistent in their response structure, can sometimes struggle with breaking down basic mathematics, and frequently go beyond the basic task at hand, recommending inappropriate savings and investment vehicles for individual savings goals. The Fino1-8B model is certainly well trained for corporate financial reasoning tasks, but its recommendations for savings and investment vehicles were often too aggressive for short-term goals, and it may recommend long-term savings vehicles which carry tax penalties if not used appropriately. This model uses LoRA on a procedurally generated budgeting dataset, as well as few-shot prompting using a separate dataset built around short-, medium-, and long-term goals, to enhance the ability of Fino1-8B to accomplish these tasks.

The results of this training and prompting method are encouraging: the model consistently produces budget spreadsheets (through the generation of executable Python code) as well as somewhat reliable savings plan assistance with the use of few-shot prompting. These training methods do have an impact on the model's performance on standard benchmarks like GSM8K and MMLU, resulting in drops in performance on both tasks compared with the base model; however, this loss in generalization is made up for by the model's improved ability to assist individuals with budgeting and fixed-term savings goals.
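To illustrate the budgeting task, here is a minimal sketch of the kind of spreadsheet-building code the model is trained to emit. The real generated code targets Microsoft Excel's .xlsx format; this stdlib-only version writes CSV so it stays dependency-free, and the category names and amounts are illustrative, not drawn from the model's actual outputs.

```python
import csv
import io

# Illustrative monthly budget: (category, amount in USD); expenses are negative
budget = [
    ("Income", 4_500.00),
    ("Rent", -1_400.00),
    ("Groceries", -450.00),
    ("Transportation", -200.00),
    ("Utilities", -150.00),
    ("Savings", -500.00),
]

def render_budget(rows):
    """Render the budget as spreadsheet-style rows plus a net total line."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Category", "Amount"])
    for category, amount in rows:
        writer.writerow([category, f"{amount:.2f}"])
    net = sum(amount for _, amount in rows)
    writer.writerow(["Net", f"{net:.2f}"])
    return buf.getvalue()

print(render_budget(budget))
```

The net line gives the user an at-a-glance check on whether their plan leaves a monthly surplus.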
- Developed by: Timothy Austin Rodriguez
- Funded by [optional]: University of Virginia
- Training type: LoRA fine-tuning with few-shot prompting (3 examples)
- Language(s) (NLP): English (the model generates Python code)
- License: MIT
- Finetuned from model [optional]: Fino1-8B (which is fine-tuned from Llama-3.1-8B-Instruct)
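The "few-shot prompting (3)" entry above refers to prepending three worked goal-planning examples to each user query. A minimal sketch of how such a prompt might be assembled; the example texts and the `build_few_shot_prompt` helper are illustrative stand-ins, not the actual prompts from the goals dataset.

```python
# Three illustrative (question, answer) pairs; the actual few-shot
# examples come from the separate goals dataset described in this card.
EXAMPLES = [
    ("I want to save $3,000 for a vacation in 12 months.",
     "Save $250/month in a high-yield savings account."),
    ("I want a $10,000 car down payment in 3 years.",
     "Save about $278/month in a savings account or CD ladder."),
    ("I want $50,000 for a house down payment in 10 years.",
     "Save about $417/month; a conservative index fund may fit this horizon."),
]

def build_few_shot_prompt(user_goal: str) -> str:
    """Prepend the worked examples so the model imitates their structure."""
    parts = [f"User: {q}\nAssistant: {a}" for q, a in EXAMPLES]
    parts.append(f"User: {user_goal}\nAssistant:")
    return "\n\n".join(parts)

print(build_few_shot_prompt(
    "I want to save $6,000 for an emergency fund in 2 years."))
```

Anchoring each query with three consistently structured examples is what drives the more consistent response structure noted above.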
### Training Data

This model is trained on a procedurally generated synthetic dataset that provides structured prompts and responses to teach the underlying Fino1-8B model to create executable Python code that builds a budget spreadsheet and exports it to the Microsoft Excel .xlsx format. This dataset (attached to this repository) comprises 3,000 examples, divided into a train/validation split of 2,500 for training and 500 for validation. The code used to create this dataset, including the random seeds, can be found in the .ipynb files attached to this repository.
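A sketch of the seeded procedural generation and the 2,500/500 split described above. The field names, prompt template, and income ranges are hypothetical stand-ins; the actual generation code is in the attached notebooks.

```python
import random

def generate_dataset(n: int = 3000, seed: int = 42):
    """Procedurally generate prompt/response pairs with a fixed seed
    so the dataset is reproducible run-to-run."""
    rng = random.Random(seed)
    categories = ["Rent", "Groceries", "Transportation", "Utilities"]
    examples = []
    for _ in range(n):
        income = rng.randrange(2_000, 8_000, 100)
        prompt = f"Create a budget spreadsheet for a monthly income of ${income}."
        # The real responses are executable Python that exports .xlsx;
        # a placeholder stands in for that generated code here.
        response = f"# python code building a {len(categories)}-category budget"
        examples.append({"prompt": prompt, "response": response})
    return examples

data = generate_dataset()
train, val = data[:2500], data[2500:]
print(len(train), len(val))  # 2500 500
```

Seeding a dedicated `random.Random` instance (rather than the global state) keeps the split reproducible even if other code also uses the `random` module.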
## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- Training regime: [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]