Update README.md
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 - cs
 ---
 # Introduction
-CSMPT7b is a large Czech language model continously pretrained on 272b training tokens from English [MPT7b](https://huggingface.co/mosaicml/mpt-7b) model. Model was pretrained on ~67b token [Large Czech Collection](https://huggingface.co/datasets/BUT-FIT/
+CSMPT7b is a large Czech language model continuously pretrained on 272b training tokens from the English [MPT7b](https://huggingface.co/mosaicml/mpt-7b) model. The model was pretrained on the ~67b-token [Large Czech Collection](https://huggingface.co/datasets/BUT-FIT/BUT-LCC) using a Czech tokenizer, obtained with our vocabulary swap method (see below).
 Training was done on the [Karolina](https://www.it4i.cz/en) cluster.
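The vocabulary swap method itself is described further down in the README. As a rough, hypothetical sketch of the general idea (not the authors' implementation), one might reuse the English model's embedding rows for tokens that appear in both vocabularies and initialise the remaining rows from the old embedding statistics; all function and variable names below are illustrative:

```python
import torch

def swap_vocabulary(old_embeddings: torch.Tensor,
                    old_vocab: dict, new_vocab: dict) -> torch.Tensor:
    """Sketch of a vocabulary swap: build an embedding matrix for the new
    (e.g. Czech) vocabulary from an existing (e.g. English) one."""
    hidden = old_embeddings.shape[1]
    # Initialise every new row from the mean/std of the old embeddings.
    mean, std = old_embeddings.mean(0), old_embeddings.std(0)
    new_embeddings = torch.normal(mean.expand(len(new_vocab), hidden),
                                  std.expand(len(new_vocab), hidden))
    # Copy rows for tokens shared by both vocabularies.
    for token, new_id in new_vocab.items():
        old_id = old_vocab.get(token)
        if old_id is not None:
            new_embeddings[new_id] = old_embeddings[old_id]
    return new_embeddings
```

Under this sketch, shared tokens start continued pretraining with their English-trained embeddings intact, which is what makes the swap cheaper than training a Czech model from scratch.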