Update README.md
README.md (CHANGED)
@@ -1,3 +1,6 @@
+---
+license: cc-by-nc-2.0
+---
 # GitVac
 Don't forget to vacuum your git repo.
 
@@ -538,3 +541,5 @@ This does a few things:
 
 With this dataset, we can fine-tune to get a base model. This model can then be further improved through RLHF (Reinforcement Learning from Human Feedback) and GRPO (Group Relative Policy Optimization) training, where it will continuously learn from new datasets generated by the pipeline. This creates a virtuous cycle of improvement, with each iteration building on the knowledge gained from previous runs.
 I should probably write up a whole separate post on this extended pipeline someday. For now, enjoy this repo!
+
+Disclaimer: These were made purely for this science project and are not meant to be used commercially.
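For readers who want a concrete picture of the GRPO step described above, here is a minimal sketch of one training iteration using Hugging Face's TRL library. The base model name, the reward function, and the dataset path are illustrative assumptions, not part of this repo or its pipeline.

```python
# A minimal sketch of one GRPO iteration over a pipeline-generated dataset,
# using TRL's GRPOTrainer. Model name, reward, and file path are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical reward: favor completions that at least look like a git patch.
# A real reward would verify that the generated fix applies cleanly.
def patch_reward(completions, **kwargs):
    return [1.0 if "diff --git" in c else 0.0 for c in completions]

# Assumed: a JSONL file produced by the pipeline with a "prompt" column,
# the column GRPOTrainer reads prompts from.
dataset = load_dataset("json", data_files="pipeline_output.jsonl", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",      # placeholder base model
    reward_funcs=patch_reward,
    args=GRPOConfig(output_dir="gitvac-grpo"),
    train_dataset=dataset,
)
trainer.train()
```

Each run of the pipeline yields a fresh dataset, so repeating this step with the latest checkpoint is what closes the "virtuous cycle" the README describes.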