---
datasets:
- Fredtt3/LLaDA-Sample-10BT
- Fredtt3/LLaDA-Sample-ES
language:
- en
- es
pipeline_tag: text-generation
library_name: transformers
---
# New checkpoint trained on an NVIDIA H100 for 8,000 steps and 65,536,000 tokens

It is not yet a competent model, since this run falls well short of the rule-of-thumb minimum of 20-30 training tokens per parameter. Still, it gives a better idea of how a fully trained model would perform.
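As a rough sanity check of those figures, 65,536,000 tokens over 8,000 steps works out to 8,192 tokens per step, and at 20-30 tokens per parameter this budget would only saturate a model of a few million parameters:

```python
# Back-of-the-envelope check of the 20-30 tokens-per-parameter rule
# against this run's budget (figures taken from the heading above).
steps = 8_000
total_tokens = 65_536_000
print(f"tokens per step: {total_tokens // steps:,}")  # 8,192

for ratio in (20, 30):
    saturated_params = total_tokens / ratio
    print(f"{ratio} tok/param -> enough tokens for ~{saturated_params / 1e6:.1f}M parameters")
```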
If you want to try it out, see the example script [test_gen.py](https://github.com/F4k3r22/LLaDA-from-scratch/blob/main/test_gen.py), or use this [Google Colab](https://colab.research.google.com/drive/1jPIPu9qHEFMkANzUEkeOxUW6hS3DeVwd?usp=sharing) notebook.
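For reference, below is a minimal loading sketch, not the project's exact API: the repo id is a placeholder for this checkpoint's Hugging Face id, and `trust_remote_code=True` is an assumption about how the custom architecture is shipped. `test_gen.py` is the authoritative reference for the actual sampling loop.

```python
# A minimal sketch, assuming the checkpoint ships its architecture as
# remote code; MODEL_ID is a placeholder. See test_gen.py for real usage.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "Fredtt3/LLaDA"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()

# LLaDA-style models generate by iterative unmasking (diffusion), not
# autoregressive decoding, so a single forward pass is only one
# denoising step; the full sampling loop lives in test_gen.py.
inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(type(outputs).__name__)
```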
Example of the results it gives:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/661ab9f6a48886c83c8c257e/fyZ1lJlUbI4IkyVu3uTJq.png)
For those who want to train their own model and save it in the correct format to load with `transformers`, everything needed is in [`pre_trainv2.py`](https://github.com/F4k3r22/LLaDA-from-scratch/blob/main/pre_trainv2.py) in the project repo.
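For context, the `transformers`-loadable layout comes down to a `PretrainedConfig`/`PreTrainedModel` pair saved with `save_pretrained` and registered with the Auto classes. The sketch below uses hypothetical stand-in classes with a toy module, not the project's real architecture; `pre_trainv2.py` contains the actual implementation.

```python
# A minimal sketch of the pattern, with hypothetical stand-in classes;
# the real config/model definitions live in pre_trainv2.py.
import torch.nn as nn
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class LLaDAConfig(PretrainedConfig):
    model_type = "llada-scratch"  # hypothetical model_type for illustration

    def __init__(self, vocab_size=256, hidden_size=64, **kwargs):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class LLaDAModel(PreTrainedModel):
    config_class = LLaDAConfig

    def __init__(self, config):
        super().__init__(config)
        # toy stand-in for the real diffusion transformer
        self.embed = nn.Embedding(config.vocab_size, config.hidden_size)

    def forward(self, input_ids):
        return self.embed(input_ids)

# Registering lets AutoConfig/AutoModel resolve the custom model_type,
# and save_pretrained writes config.json plus the weight files.
AutoConfig.register("llada-scratch", LLaDAConfig)
AutoModel.register(LLaDAConfig, LLaDAModel)

model = LLaDAModel(LLaDAConfig())
model.save_pretrained("llada-checkpoint")
reloaded = AutoModel.from_pretrained("llada-checkpoint")
print(type(reloaded).__name__)  # LLaDAModel
```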