---
library_name: peft
base_model: tachiwin/pretrained_multilingual_instruct
tags:
- unsloth
- trl
- sft
- generated_from_trainer
model-index:
- name: multilingual
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/tachiwin/tachiwin_finetune/runs/ms960iob)

# multilingual

This model is a fine-tuned version of [tachiwin/pretrained_multilingual_instruct](https://huggingface.co/tachiwin/pretrained_multilingual_instruct) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3.224270650583368e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 32
- total_train_batch_size: 1024
- optimizer: lion_8bit (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 1

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
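
For reference, the hyperparameters above map onto the Hugging Face `transformers.TrainingArguments` API roughly as follows. This is a minimal sketch, not the original training script (which is not published); the output directory and any Unsloth/TRL-specific wiring are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",                # assumption: not reported in this card
    learning_rate=3.224270650583368e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=3407,
    gradient_accumulation_steps=32,      # 32 x 32 = 1024 total train batch size
    optim="lion_8bit",                   # OptimizerNames.LION_8BIT (bitsandbytes)
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```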
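
## How to use

Since this repository holds a PEFT adapter on top of [tachiwin/pretrained_multilingual_instruct](https://huggingface.co/tachiwin/pretrained_multilingual_instruct), it can presumably be loaded with `peft`'s `AutoPeftModelForCausalLM`. A minimal sketch, assuming the base model is a causal LM and that the adapter lives at `tachiwin/multilingual` (a repo id inferred from the model name; replace it with the actual path):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Loads the base model and applies the adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained("tachiwin/multilingual")
tokenizer = AutoTokenizer.from_pretrained("tachiwin/pretrained_multilingual_instruct")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```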