ThinkTim21 committed
Commit b70e610 · verified · 1 Parent(s): 3abf4ad

Update README.md

Files changed (1): README.md (+5, −6)
README.md CHANGED
@@ -11,7 +11,7 @@ Created with Chat GPT 4o using a link to this model repository, and a brief prom
 # FinPlan-1
 
 FinPlan-1 is an LLM trained to assist with the creation of basic personal financial plans for individuals. This model is built off of the
-Fino1 model, which is itself a version of Llama-3.1-8B-Instruct that was CoT fine-tuned to improve its financial reasoning ability.
+Fino1 8B model, which is itself a version of Llama-3.1-8B-Instruct that was CoT fine-tuned to improve its financial reasoning ability.
 
 
 
@@ -317,11 +317,10 @@ I strongly recommend double checking the figures presented by this model. While
 that safeguard is not implemented for the goals task. Further, this model should be limited in its use for out-of-scope tasks, as the generalization benchmarks demonstrated that,
 compared to its base model, this model exhibits decreased reasoning ability outside its domain-specific task.
 
-In order to improve this model I would recommend future model trainers and tuners focus on adjusting this model to default to producing Python code for all mathematics-based
-prompts. Sticking with Python for mathematics processing should allow the model to perform more highly on the goals task while retaining performance on the
-
-[More Information Needed]
-
+To improve this model, I recommend that future model trainers and tuners focus on adjusting it to default to producing Python code for all mathematics-based
+prompts. Sticking with Python for mathematics processing should allow the model to perform better on the goals task while retaining performance on the budgeting task.
+This creates two tasks with similar expected response outputs, rather than attempting to train the same model for two very different tasks within the same
+domain, as was done here.
 
 ### Compute Infrastructure
 
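
The added recommendation (defaulting to Python output for mathematics-based prompts) could look something like the sketch below for a goals-task answer. This is a hypothetical illustration, not code from the model card: the function name, the figures, and the prompt scenario are invented, though the future-value-of-an-annuity formula itself is standard. The point is that figures produced as runnable code can be verified by execution rather than trusted from generated text.

```python
# Hypothetical sketch: a goals-task answer expressed as runnable Python,
# as the commit recommends, instead of in-text arithmetic.

def monthly_contribution(goal: float, annual_rate: float, years: int) -> float:
    """Monthly deposit needed to reach `goal`, with monthly compounding.

    Uses the future value of an ordinary annuity:
        FV = P * ((1 + r)**n - 1) / r
    solved for the payment P.
    """
    r = annual_rate / 12   # periodic (monthly) rate
    n = years * 12         # number of deposits
    if r == 0:
        return goal / n    # no growth: split the goal evenly
    return goal * r / ((1 + r) ** n - 1)

if __name__ == "__main__":
    # Invented example: save $50,000 in 10 years at 5% APR.
    print(f"${monthly_contribution(50_000, 0.05, 10):,.2f} per month")
```

Because the answer is code, a safeguard like the one described for the budgeting task could execute it and check the output, rather than parsing free-text arithmetic.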