vkerkez committed · verified
Commit 0b37687 · 1 Parent(s): a37ae40

Update README.md

Files changed (1): README.md (+5 -3)
README.md CHANGED
@@ -13,7 +13,11 @@ GitVac is like a vacuum cleaner for code fixes. It's a series of 3B, 8B, 14B, an
 
 
 # How were the models made?
-I distilled samples from r1 through multiple rounds of trial and error. About 2.4k questions fired off, with 1.1k making the verification cut. My rough estimate puts it around 45%.
+I distilled samples from r1 through multiple rounds of trial and error. About 2.4k questions fired off, with 1.1k making the verification cut. My rough estimate puts r1's pass rate around 15% after multiple tries.
+
+# Training Data
+*The data used to train the models came from r1 outputs via distillation.*
+However, to gauge the accuracy of the models, o3/r1 were used to run evals with the same prompts.
 
 # How is verification done?
 A lot of models are already trained on function calling syntax.
@@ -541,5 +545,3 @@ This does a few things:
 
 With this dataset, we can fine-tune to get a base model. This model can then be further improved through RLHF (Reinforcement Learning from Human Feedback) and GRPO (Group Relative Policy Optimization) training, where it will continuously learn from new datasets generated by the pipeline. This creates a virtuous cycle of improvement, with each iteration building on the knowledge gained from previous runs.
 I should probably write up a whole separate post on this extended pipeline someday. For now, enjoy this repo!
-
-Disclaimer: These are purely made for this science project and were not meant to be used commercially.
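
The first hunk's context cuts off just after "# How is verification done?", but the idea it names (checking models' function-calling output) is mechanical enough to sketch. The following is a minimal, hypothetical illustration only: the tool names `open_file` and `replace_block` and the JSON shape are assumptions, not GitVac's actual format or harness.

```python
import json

def verify_calls(raw_output: str, repo_files: dict[str, str]) -> bool:
    """Return True if every tool call parses and targets real repo content."""
    try:
        calls = json.loads(raw_output)  # expect a JSON list of tool calls
    except json.JSONDecodeError:
        return False
    if not isinstance(calls, list):
        return False
    for call in calls:
        name, args = call.get("name"), call.get("arguments", {})
        if name == "open_file":
            if args.get("path") not in repo_files:
                return False
        elif name == "replace_block":
            path, old = args.get("path"), args.get("old_code", "")
            # the block being replaced must literally appear in the file
            if path not in repo_files or old not in repo_files[path]:
                return False
        else:
            return False  # unknown tool -> fail verification
    return True

# Toy usage: one file with a known bug, one model "patch" to verify.
repo = {"app.py": "def add(a, b):\n    return a - b  # bug\n"}
patch = json.dumps([
    {"name": "open_file", "arguments": {"path": "app.py"}},
    {"name": "replace_block", "arguments": {
        "path": "app.py",
        "old_code": "return a - b  # bug",
        "new_code": "return a + b",
    }},
])
print(verify_calls(patch, repo))  # True
```

Filtering distilled samples this way is what produces the 1.1k-of-2.4k verified set the first hunk describes: a sample only survives if its calls parse and line up with the repository.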
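The second hunk mentions further GRPO training on pipeline-generated data. As one hedged sketch of what that stage could look like using Hugging Face's `trl` library (the base model, toy prompts, and reward function below are placeholders, not GitVac's actual setup):

```python
# Sketch of a GRPO stage with Hugging Face trl (assumed dependency).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Stand-in for questions generated by the pipeline.
train_dataset = Dataset.from_dict({
    "prompt": [
        "Fix the off-by-one bug in range_sum().",
        "Patch the failing test in test_parser.py.",
    ]
})

def reward_verified(completions, **kwargs):
    # Placeholder reward: 1.0 if the completion looks like a tool call.
    # A real reward would run the verification harness on each patch.
    return [1.0 if '"name"' in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",  # assumed base; any causal LM works
    reward_funcs=reward_verified,
    args=GRPOConfig(output_dir="gitvac-grpo"),
    train_dataset=train_dataset,
)
trainer.train()
```

A real reward function would presumably rerun the same verification harness used to filter the distilled samples, which is what closes the "virtuous cycle" the README describes.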