Commit 24d98f3
1 Parent(s): fb31517
Create README.md
README.md ADDED
@@ -0,0 +1,16 @@
---
license: apache-2.0
datasets:
- bigcode/starcoderdata
- tiiuae/falcon-refinedweb
language:
- en
---

A reproduction of [OpenLLaMA](https://github.com/openlm-research/open_llama), trained on 128 H100 GPUs in bfloat16.
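
Since the model follows the LLaMA architecture, it should load through the standard Hugging Face `transformers` API. A minimal usage sketch, with the repo id left as a placeholder rather than this model's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder: substitute this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model was trained in bfloat16
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```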

The pretraining data consists of Falcon RefinedWeb, StarCoderData, and the Wikipedia, arXiv, books, and StackExchange subsets of RedPajama, totaling nearly 1 trillion tokens.

The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning-rate schedule peaking at 3e-5, using a 4M-token batch size.
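
For concreteness, a minimal sketch of the schedule described above: linear warm-up over 2,000 steps to the 3e-5 peak, then cosine decay. The total step count is an assumption derived from the numbers stated here (nearly 1T tokens at a 4M-token batch size is roughly 250k steps), and the final learning-rate floor is not specified in this card.

```python
import math

PEAK_LR = 3e-5
WARMUP_STEPS = 2_000
TOTAL_STEPS = 250_000  # assumption: ~1T tokens / 4M tokens per step
MIN_LR = 0.0           # assumption: the floor of the cosine decay is unspecified

def learning_rate(step: int) -> float:
    """Learning rate at a given optimizer step: linear warm-up, then cosine decay."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1.0 + math.cos(math.pi * progress))
```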