---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
pipeline_tag: text-generation
datasets:
- Ramikan-BR/code.evol.instruct.wiz.oss_python.json
---

**Dataset:** Ramikan-BR/code.evol.instruct.wiz.oss_python.json
Training log:
```text
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 937 | Num Epochs = 2
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 256
 \       /    Total batch size = 512 | Total steps = 2
  "-____-"    Number of trainable parameters = 201,850,880

[2/2 22:36, Epoch 1/2]
Step    Training Loss
1       0.707400
2       0.717800
```
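The totals in the log are consistent with each other: the effective (total) batch size is the per-device batch size multiplied by the gradient-accumulation steps and the number of GPUs. A quick arithmetic check, using only the values shown in the log:

```python
# Values taken from the Unsloth training log above.
per_device_batch_size = 2
gradient_accumulation_steps = 256
num_gpus = 1

# Effective batch size seen by the optimizer at each step.
total_batch_size = per_device_batch_size * gradient_accumulation_steps * num_gpus
print(total_batch_size)  # 512, matching "Total batch size = 512" in the log
```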
# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
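As a usage sketch, a 4-bit checkpoint like this one can be loaded with Unsloth's `FastLanguageModel` (assumptions: a CUDA GPU with `bitsandbytes` available, and `max_seq_length=2048` as a placeholder context length; substitute this repository's id for the base-model id to use the finetuned weights):

```python
from unsloth import FastLanguageModel

# Load the 4-bit base checkpoint (swap in this repo's model id for the finetuned weights).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",
    max_seq_length=2048,  # assumption: adjust to your context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This loads the quantized weights directly rather than dequantizing, which is what keeps memory use low enough for consumer GPUs.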