---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
<!-- datasets:
- ./train_data.jsonl -->
model-index:
- name: linabot
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0`
```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: MistralForCausalLM
hub_model_id: Alignment-Lab-AI/linabot
strict: false
chat_template: tokenizer_default
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
datasets:
  - path: "./train_data.jsonl"
    type: chat_template
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles_to_train: ['assistant']
    train_on_eos: turn

learning_rate: 0.0002024
lr_scheduler: cosine
weight_decay: 0.03
warmup_steps: 450
dataset_prepared_path:
val_set_size: 0.2
output_dir: ./outputs/out

sequence_len: 10400
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: true

wandb_project: linabot
wandb_entity:
wandb_watch: all
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 5
optimizer: adalomo
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
torch_compile_mode: "max-autotune"
bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1

evals_per_epoch: 8
saves_per_epoch: 1
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  pad_token: "<pad>"

```

</details><br>

# linabot

This model is a fine-tuned version of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) on the `./train_data.jsonl` dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4157

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal loading sketch is given below.
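
The following is a sketch rather than an official usage guide: it assumes the checkpoint is published under the `hub_model_id` from the config above (`Alignment-Lab-AI/linabot`) and relies on the tokenizer's built-in chat template, per `chat_template: tokenizer_default`. Swap in a local `./outputs/out` checkpoint path if loading from disk.

```python
# Minimal inference sketch; model_id is the hub_model_id from the axolotl config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alignment-Lab-AI/linabot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# chat_template: tokenizer_default -> rely on the tokenizer's own template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```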

## Training and evaluation data

The training data is `./train_data.jsonl`, a chat-format conversation dataset. Per the config above, 20% of it was held out as the evaluation set (`val_set_size: 0.2`) and loss was computed only on assistant turns (`roles_to_train: ['assistant']`). A record in that file would be shaped roughly as shown below.
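
This is a hypothetical record, not drawn from the actual dataset; it only illustrates the shape implied by `field_messages: messages` and the `role`/`content` property mappings (JSON Lines, one object per line):

```json
{"messages": [{"role": "user", "content": "example user turn"}, {"role": "assistant", "content": "example assistant turn"}]}
```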

## Training procedure
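
To reproduce the run, one would save the config above under some filename (say `linabot.yaml`, a hypothetical name) and launch axolotl 0.8.0's trainer; a typical invocation looks like:

```bash
# Sketch of a standard axolotl launch; adjust to your GPU setup.
accelerate launch -m axolotl.cli.train linabot.yaml
```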

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002024
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: adalomo (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 450 (see the schedule sketch below)
- num_epochs: 5.0
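
Note that warmup covers 450 of the 600 total optimizer steps (step 600 corresponds to epoch 5.0 in the table below), so the learning rate ramps up for three quarters of training and decays only over the final 150 steps. A small sketch of how that schedule behaves, using `transformers.get_cosine_schedule_with_warmup` (what the Trainer uses for `lr_scheduler_type: cosine`) with a stand-in SGD optimizer rather than AdaLomo:

```python
# Schedule sketch only; the real run used AdaLomo, not SGD.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=2.024e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=450, num_training_steps=600
)

for step in range(1, 601):
    optimizer.step()
    scheduler.step()
    if step in (1, 225, 450, 525, 600):
        # LR peaks at step 450, then follows the cosine decay to zero.
        print(f"step {step:3d}: lr = {scheduler.get_last_lr()[0]:.3e}")
```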

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4415 | 0.0083 | 1 | 0.5382 |
| 0.4958 | 0.125 | 15 | 0.5380 |
| 0.371 | 0.25 | 30 | 0.5371 |
| 0.4364 | 0.375 | 45 | 0.5347 |
| 0.3777 | 0.5 | 60 | 0.5309 |
| 0.3962 | 0.625 | 75 | 0.5244 |
| 0.3341 | 0.75 | 90 | 0.5168 |
| 0.3259 | 0.875 | 105 | 0.5070 |
| 0.3238 | 1.0 | 120 | 0.4966 |
| 0.36 | 1.125 | 135 | 0.4866 |
| 0.264 | 1.25 | 150 | 0.4793 |
| 0.3319 | 1.375 | 165 | 0.4714 |
| 0.3731 | 1.5 | 180 | 0.4641 |
| 0.325 | 1.625 | 195 | 0.4581 |
| 0.3477 | 1.75 | 210 | 0.4526 |
| 0.2851 | 1.875 | 225 | 0.4481 |
| 0.2732 | 2.0 | 240 | 0.4416 |
| 0.3367 | 2.125 | 255 | 0.4388 |
| 0.2605 | 2.25 | 270 | 0.4366 |
| 0.2725 | 2.375 | 285 | 0.4333 |
| 0.3374 | 2.5 | 300 | 0.4291 |
| 0.275 | 2.625 | 315 | 0.4250 |
| 0.1803 | 2.75 | 330 | 0.4214 |
| 0.3441 | 2.875 | 345 | 0.4189 |
| 0.1505 | 3.0 | 360 | 0.4172 |
| 0.2216 | 3.125 | 375 | 0.4186 |
| 0.1833 | 3.25 | 390 | 0.4185 |
| 0.2586 | 3.375 | 405 | 0.4153 |
| 0.1754 | 3.5 | 420 | 0.4152 |
| 0.2272 | 3.625 | 435 | 0.4136 |
| 0.174 | 3.75 | 450 | 0.4129 |
| 0.1794 | 3.875 | 465 | 0.4074 |
| 0.1779 | 4.0 | 480 | 0.4086 |
| 0.1752 | 4.125 | 495 | 0.4164 |
| 0.1745 | 4.25 | 510 | 0.4173 |
| 0.1314 | 4.375 | 525 | 0.4158 |
| 0.1923 | 4.5 | 540 | 0.4155 |
| 0.1848 | 4.625 | 555 | 0.4156 |
| 0.1185 | 4.75 | 570 | 0.4155 |
| 0.2255 | 4.875 | 585 | 0.4153 |
| 0.1841 | 5.0 | 600 | 0.4157 |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1