7B rocm-rwkv pth record: I called this model Tlanuwa since I added an extra training focusing on Cherokee after each run.

9B rocm-rwkv pth record: 40 layers, embd=4096, ctx=16384. I am calling this model Quetzal since I added an extra training focusing on Spanish and the Axolotl-Spanish-Nahuatl dataset after each run.

- rwkv-9Q-stp101-N8.pth: 9B rocm-rwkv model trained on SlimPajama chunks 1-10 for the first epoch, then given additional training on chunks 1-2 plus a mix of multi-language and code data. After that I switched to the N8 dataset, where I am currently at 4.222 GTokens. This pth has a loss of 1.904 on the N8 dataset.
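As a sanity check on the "9B" label for the Quetzal config above (40 layers, embd=4096), here is a rough back-of-the-envelope parameter estimate. This is a hypothetical approximation, not the exact RWKV formula: each block is treated as roughly 12·embd² weights, and the vocabulary size of 65536 is an assumption.

```python
# Rough parameter-count estimate for the 40-layer, embd=4096 config.
# Assumptions (not from the README): ~12*embd^2 weights per RWKV block
# (time-mix + channel-mix), vocab of 65536 for the embedding and head.

layers = 40
embd = 4096
vocab = 65536  # assumed vocabulary size

block_params = 12 * embd * embd       # per-layer weight estimate
embedding_params = 2 * vocab * embd   # input embedding + output head

total = layers * block_params + embedding_params
print(f"~{total / 1e9:.1f}B parameters")
```

The estimate comes out around 8.6B, which is roughly consistent with calling the model 9B.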