fix small typo (#2)
(commit c14f48116ba8c527e05d70216058166175ae9e38)
Co-authored-by: Moritz Laurer <MoritzLaurer@users.noreply.huggingface.co>
README.md CHANGED

@@ -108,7 +108,7 @@ Following this, we perform instruction fine-tuning on [The Cauldron](https://hug
 - [atlas-math-sets](https://huggingface.co/datasets/AtlasUnified/atlas-math-sets)
 - [goat](https://huggingface.co/datasets/tiedong/goat)
 
-We use Lora to train the parameters initialized from pre-trained backbones and full fine-tuning for newly initialized parameters (modality connector), as we find this strategy to be more stable as
+We use Lora to train the parameters initialized from pre-trained backbones and full fine-tuning for newly initialized parameters (modality connector), as we find this strategy to be more stable as well as more computationally efficient.
 
 More details (training procedure, data selection, hyper-parameters, etc.) along with lessons learned from our ablations will be available in an upcoming technical report.
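The changed line describes a mixed strategy: LoRA adapters on parameters initialized from pre-trained backbones, and full fine-tuning for the newly initialized modality connector. A minimal sketch of that split is below; the `LoRALinear` wrapper, layer shapes, and module names are illustrative assumptions, not the repository's actual training code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: frozen pre-trained weight + trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained backbone weight
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero init: starts as a no-op update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Toy model: a "backbone" layer gets LoRA; the new "connector" is fully trained.
backbone = LoRALinear(nn.Linear(32, 32))     # pre-trained weight frozen, LoRA trainable
connector = nn.Linear(32, 16)                # newly initialized, fully fine-tuned
model = nn.Sequential(backbone, connector)

# Only LoRA factors and the connector end up in the optimizer.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```

Because `lora_b` is zero-initialized, the adapted layer reproduces the frozen backbone's output at step zero, which is part of why this setup tends to be stable: training starts from the pre-trained behavior and only the small LoRA factors plus the connector move.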