---
datasets:
- Fredtt3/LLaDA-Sample-10BT
- Fredtt3/LLaDA-Sample-ES
language:
- en
- es
pipeline_tag: text-generation
library_name: transformers
---
# New checkpoint trained on an NVIDIA H100 for 8,000 steps and 65,536,000 tokens
This is not yet a competent model: it falls well short of the rule-of-thumb minimum of 20-30 training tokens per parameter. Still, it gives a better sense of how a more fully trained model would perform.
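To put the token budget in perspective, here is a quick back-of-the-envelope check. The 65,536,000-token figure comes from this card; the parameter counts are illustrative assumptions, since the card does not state the model size:

```python
# Back-of-the-envelope check of the 20-30 tokens-per-parameter rule of thumb.
# tokens_seen is from this card; the parameter counts below are assumptions
# for illustration, since the card does not state the model size.
tokens_seen = 65_536_000

for params in (100e6, 500e6, 1e9):
    low, high = 20 * params, 30 * params
    pct = 100 * tokens_seen / low
    print(f"{params / 1e6:,.0f}M params: need {low / 1e9:.1f}-{high / 1e9:.1f}B "
          f"tokens; seen {tokens_seen / 1e6:.1f}M ({pct:.1f}% of the low end)")
```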
To try it out, see [test_gen.py](https://github.com/F4k3r22/LLaDA-from-scratch/blob/main/test_gen.py) in the project repo, or use this [Google Colab](https://colab.research.google.com/drive/1jPIPu9qHEFMkANzUEkeOxUW6hS3DeVwd?usp=sharing) notebook.
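As a rough sketch of what loading looks like (the model id below is a placeholder, and `trust_remote_code=True` is an assumption since LLaDA is not a stock `transformers` architecture; the actual diffusion-style sampling loop lives in `test_gen.py`):

```python
# Minimal loading sketch. "Fredtt3/LLaDA" is a placeholder id: substitute the
# real checkpoint id. trust_remote_code=True is assumed because the checkpoint
# likely ships custom modeling code; generation itself is done in test_gen.py.
from transformers import AutoModel, AutoTokenizer

model_id = "Fredtt3/LLaDA"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()
```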
If you want to train your own checkpoint and save it in a format that loads with `transformers`, everything you need is in [`pre_trainv2.py`](https://github.com/F4k3r22/LLaDA-from-scratch/blob/main/pre_trainv2.py) in the project repo.
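A minimal sketch of the export step, assuming the training output is already a `transformers`-compatible model (the paths are illustrative; `pre_trainv2.py` handles the real export during training):

```python
# Sketch of saving in a transformers-loadable layout: save_pretrained writes
# config.json, the weights, and tokenizer files so the folder can be reloaded
# with from_pretrained. Paths are illustrative assumptions.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("path/to/local-checkpoint", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("path/to/local-checkpoint")

out_dir = "llada-hf-format"
model.save_pretrained(out_dir)      # config.json + model weights
tokenizer.save_pretrained(out_dir)  # tokenizer files alongside
```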