---
tags:
  - finetuned
---

# meta-llama/Llama-3.2-1B-finetuned with Atomic

## Model Description

This model is a LoRA fine-tune of meta-llama/Llama-3.2-1B, trained on the fka/awesome-chatgpt-prompts dataset using NOLA AI's Atomic system.
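Since this is a LoRA fine-tune, the base weights stay frozen and only a low-rank update is trained. A minimal sketch of the idea in plain Python (the dimensions, rank, and `alpha` below are illustrative assumptions; the actual adapter configuration is not stated in this card):

```python
# LoRA replaces a frozen weight matrix W with W + (alpha / r) * B @ A,
# where A is (r x d_in), B is (d_out x r), and B is zero-initialized so
# training starts from the base model's behavior.

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, alpha, r, x):
    base = matvec(W, x)                 # frozen base projection: W @ x
    low_rank = matvec(B, matvec(A, x))  # rank-r update: B @ (A @ x)
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, low_rank)]

# Toy example: d_in = d_out = 2, rank r = 1 (hypothetical values).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]           # (r x d_in)
B = [[0.0], [0.0]]         # (d_out x r), zero-initialized
x = [2.0, 4.0]

# With B at its zero init, the adapted output equals the base output.
print(lora_forward(W, A, B, alpha=16, r=1, x=x))  # [2.0, 4.0]
```

Only `A` and `B` are updated during fine-tuning, which is why LoRA adapters are small compared to the 1B-parameter base model.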

## Training Data

- Dataset name: fka/awesome-chatgpt-prompts

## Training Arguments

- Batch size: 32
- Learning rate: 0.0001
- Used ATOMIC Speed: True
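The arguments above map onto a standard Hugging Face `TrainingArguments` configuration; a hedged sketch, where the output directory is an assumption and "ATOMIC Speed" is an Atomic-system option with no public `transformers` equivalent:

```python
from transformers import TrainingArguments

# Sketch only: output_dir is an illustrative assumption, not from this card.
args = TrainingArguments(
    output_dir="./DemoDayModel_LoRA",   # assumed
    per_device_train_batch_size=32,     # Batch size: 32 (from this card)
    learning_rate=1e-4,                 # Learning rate: 0.0001 (from this card)
)
```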

## Final Metrics

- Training loss: 1.5815
- Training runtime: 46 seconds