Update README.md
#1
by IAMJB - opened

README.md CHANGED
@@ -14,7 +14,15 @@ All the checkpoints are fine-tuned based on the checkpoints of [OLMo1b-HF](https
 
 Citation:
 ```
-
+@misc{sun2024amurocharanalyzing,
+      title={Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models},
+      author={Kaiser Sun and Mark Dredze},
+      year={2024},
+      eprint={2408.06663},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2408.06663},
+}
 ```
 
 ---