Update README.md
README.md CHANGED
@@ -16,7 +16,8 @@ language:
 Falcon-11B is still undertrained, as can be seen in this graph:
 ![falcon-perplexity.png](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab97a4d08/QeaL9bOrPskustzFpjMUP.png)
 This is why the choice was made to prune 50% of the layers.
-Note that
+Note that \~1B tokens of continued pre-training (\~1M rows of 1k tokens) is still required to restore the perplexity of this model in the desired language.
+I'm planning on doing that for certain languages, depending on how much compute will be available.
 
 # sliced
 
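For readers who want to see what the slicing step could look like, below is a minimal sketch using transformers. The layer-selection strategy (keeping the first half of the stack), the `falcon-11B-sliced` output path, and the use of `model.transformer.h` as the decoder-layer list are illustrative assumptions, not this repo's actual recipe.

```python
# Minimal layer-pruning sketch. Assumptions: tiiuae/falcon-11B exposes its
# decoder layers as model.transformer.h (Falcon's layout in transformers),
# and we keep the first half of the stack; the actual sliced model may
# have used a different layer range.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-11B", torch_dtype=torch.bfloat16
)

layers = model.transformer.h            # ModuleList of decoder layers
keep = len(layers) // 2                 # prune 50% of the layers
model.transformer.h = torch.nn.ModuleList(layers[:keep])
model.config.num_hidden_layers = keep   # keep the config consistent

model.save_pretrained("falcon-11B-sliced")
```

Which half is kept matters: depth-pruning results generally suggest deeper layers are more redundant than early ones, and tools like mergekit express the same operation declaratively as a passthrough merge over a `layer_range`.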