---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- base_model:adapter:meta-llama/Llama-3.2-1B-Instruct
- transformers
pipeline_tag: text-generation
model-index:
- name: Llama-3.2-1B-Instruct-tuned
  results: []
---

# Llama-3.2-1B-Instruct-tuned

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5964
- Perplexity: 4.9351

Perplexity here is the exponential of the validation loss: exp(1.5964) ≈ 4.9351. A usage sketch for loading the adapter appears at the end of this card.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` is given at the end of this card):
- learning_rate: 0.003
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Perplexity |
|:-------------:|:------:|:----:|:---------------:|:----------:|
| No log        | 0      | 0    | 4.5699          | 96.5354    |
| No log        | 0.6011 | 333  | 1.7253          | 5.6141     |
| 1.8003        | 1.2022 | 666  | 1.6644          | 5.2825     |
| 1.8003        | 1.8032 | 999  | 1.6342          | 5.1252     |
| 1.6299        | 2.4043 | 1332 | 1.6094          | 5.0000     |
| 1.6299        | 3      | 1664 | 1.5964          | 4.9351     |

### Framework versions

- PEFT 0.16.0
- Transformers 4.54.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
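
## How to use

A minimal inference sketch, assuming the adapter weights are published under the hypothetical repo id `your-username/Llama-3.2-1B-Instruct-tuned`; substitute the actual adapter repo or local path. The base model is gated, so loading it requires accepting the Llama 3.2 license on the Hub.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B-Instruct"
# Hypothetical adapter location; replace with the real Hub repo id or local path.
adapter_id = "your-username/Llama-3.2-1B-Instruct-tuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter on top of the base weights
model.eval()

# Build a chat prompt with the instruct template and generate a reply.
messages = [{"role": "user", "content": "Explain parameter-efficient fine-tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If a standalone checkpoint is preferred and the adapter is LoRA-style (the adapter type is not recorded in this card), `model.merge_and_unload()` folds the adapter weights into the base model before saving.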
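
## Reproducing the training configuration

A sketch of `TrainingArguments` mirroring the hyperparameters listed above, assuming the run used the Hugging Face `Trainer` (the `OptimizerNames.ADAMW_TORCH` value suggests it did); the dataset, adapter/LoRA configuration, and evaluation schedule are not recorded in this card, so only the documented values are filled in.

```python
from transformers import TrainingArguments

# Only the values documented in "Training hyperparameters" are set here;
# everything else keeps its Trainer default.
training_args = TrainingArguments(
    output_dir="Llama-3.2-1B-Instruct-tuned",
    learning_rate=3e-3,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    optim="adamw_torch",          # AdamW with the default betas=(0.9, 0.999) and eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```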