---
library_name: transformers
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
datasets:
- argilla/databricks-dolly-15k-curated-en
model-index:
- name: tiny-random-LlamaForCausalLM
  results: []
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: trl-internal-testing/tiny-random-LlamaForCausalLM
batch_size: 32
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 200
flash_attention: true
gpu_memory_limit: 80GiB
group_by_length: true
hub_model_id: SystemAdmin123/tiny-random-LlamaForCausalLM
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 2500
micro_batch_size: 4
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/outputs/tiny-random-LlamaForCausalLM
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: false
save_steps: 400
save_total_limit: 1
sequence_len: 2048
tokenizer_type: LlamaTokenizerFast
torch_dtype: bf16
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: trl-internal-testing/tiny-random-LlamaForCausalLM-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05

```
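
The `type` block maps the Dolly columns onto Axolotl's custom instruction format. A minimal sketch of what those template strings do (an illustration of the template semantics, not Axolotl's internal prompt-building code):

```python
# Illustration of the config's prompt templates above; Axolotl's actual
# prompt construction lives in its prompt strategies, not in this sketch.
FORMAT = "{instruction} {input}"   # `format`
NO_INPUT_FORMAT = "{instruction}"  # `no_input_format`

def render_prompt(record: dict) -> str:
    # field_instruction and field_input both point at `original-instruction`,
    # so this config repeats the instruction twice in the rendered prompt.
    instruction = record["original-instruction"]
    input_text = record["original-instruction"]
    if input_text:
        return FORMAT.format(instruction=instruction, input=input_text)
    return NO_INPUT_FORMAT.format(instruction=instruction)

record = {
    "original-instruction": "When did Virgin Australia start operating?",
    "original-response": "Virgin Australia commenced services on 31 August 2000.",
}
prompt = render_prompt(record)            # model input
completion = record["original-response"]  # training target (`field_output`)
print(prompt)
```

Since `system_prompt` is empty and `system_format` just passes `{system}` through, no system text is prepended.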

</details><br>

# tiny-random-LlamaForCausalLM

This model is a fine-tuned version of [trl-internal-testing/tiny-random-LlamaForCausalLM](https://huggingface.co/trl-internal-testing/tiny-random-LlamaForCausalLM) on the argilla/databricks-dolly-15k-curated-en dataset.
It achieves the following results on the evaluation set:
- Loss: 8.6989

## Model description

The base checkpoint, `trl-internal-testing/tiny-random-LlamaForCausalLM`, is a tiny, randomly initialized Llama-architecture model published for testing purposes. This fine-tune therefore inherits that scale: it exercises the full Axolotl training loop (custom instruction formatting, bf16 training, 8-bit AdamW, cosine schedule) but is far too small to produce coherent text.

## Intended uses & limitations

This checkpoint is intended for smoke-testing training and inference pipelines (loading, tokenization, generation, Hub round-trips). Given the tiny randomly initialized base model and a final evaluation loss of ~8.70, it is not suitable for real text generation.
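
A minimal loading-and-generation sketch, assuming the checkpoint is public on the Hub under the `hub_model_id` from the config above:

```python
# Smoke test: load the fine-tuned checkpoint from the Hub and generate.
# The output will be gibberish by design (tiny random base model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SystemAdmin123/tiny-random-LlamaForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("When did Virgin Australia start operating?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```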

## Training and evaluation data

Training used the [argilla/databricks-dolly-15k-curated-en](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-en) dataset, Argilla's curated English version of Databricks' dolly-15k instruction dataset. Per the Axolotl config above, prompts were built from the `original-instruction` column and targets from `original-response`, with `val_set_size: 0.1` holding out 10% of examples for evaluation.
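
A sketch of reproducing that split with the `datasets` library (the exact shuffle inside Axolotl may differ):

```python
from datasets import load_dataset

# Load the curated Dolly dataset and hold out 10%, mirroring val_set_size: 0.1.
ds = load_dataset("argilla/databricks-dolly-15k-curated-en", split="train")
split = ds.train_test_split(test_size=0.1, seed=42)  # seed 42 per the card below
train_ds, eval_ds = split["train"], split["test"]
print(len(train_ds), len(eval_ds))  # roughly 13.5k train / 1.5k eval examples
```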

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: 8-bit AdamW (bitsandbytes `adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 125
- training_steps: 2500
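
For reference, the 125 warmup steps follow from `warmup_ratio: 0.05` × 2500 training steps. A rough equivalent of this optimizer/schedule pairing outside the Trainer (using `bitsandbytes` and a `transformers` utility; not the Trainer's exact wiring):

```python
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained(
    "trl-internal-testing/tiny-random-LlamaForCausalLM"
)
# 8-bit AdamW from bitsandbytes, matching optimizer: adamw_bnb_8bit.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8
)
# Cosine decay with linear warmup: 0.05 * 2500 = 125 warmup steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=125, num_training_steps=2500
)
```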

### Training results

| | Training Loss | Epoch | Step | Validation Loss | |
| |:-------------:|:------:|:----:|:---------------:| |
| | No log | 0.0003 | 1 | 10.3763 | |
| | 9.7054 | 0.0592 | 200 | 9.6862 | |
| | 8.9091 | 0.1184 | 400 | 8.9612 | |
| | 8.7257 | 0.1776 | 600 | 8.7627 | |
| | 8.7416 | 0.2368 | 800 | 8.7109 | |
| | 8.5944 | 0.2959 | 1000 | 8.6982 | |
| | 8.673 | 0.3551 | 1200 | 8.6963 | |
| | 8.7511 | 0.4143 | 1400 | 8.6972 | |
| | 8.729 | 0.4735 | 1600 | 8.6961 | |
| | 8.6325 | 0.5327 | 1800 | 8.6948 | |
| | 8.6338 | 0.5919 | 2000 | 8.6946 | |
| | 8.7376 | 0.6511 | 2200 | 8.6954 | |
| | 8.573 | 0.7103 | 2400 | 8.6989 | |

### Framework versions

- Transformers 4.48.1
- PyTorch 2.4.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0