---
tags:
  - finetuned
---

# meta-llama/Llama-3.2-1B-finetuned with Atomic

## Model Description

This model was fine-tuned from meta-llama/Llama-3.2-1B on the callanwu/WebWalkerQA dataset using NOLA AI's Atomic system.
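A minimal usage sketch with the Hugging Face `transformers` library. The hub id `volfan6415/Boot64OverFit_LoRA` is assumed from this repository's name; substitute the actual path if it differs.

```python
def generate_answer(model, tokenizer, question, max_new_tokens=128):
    """Tokenize a question, generate a completion, and decode it."""
    inputs = tokenizer(question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # Requires `pip install transformers torch` and access to the hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "volfan6415/Boot64OverFit_LoRA"  # assumed hub path
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    print(generate_answer(model, tokenizer, "What is WebWalkerQA?"))
```

The generation helper is kept free of `transformers` imports so it can be reused with any model/tokenizer pair exposing the standard `generate`/`decode` interface.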

## Training Data

- Dataset name: callanwu/WebWalkerQA

## Training Arguments

- Batch size: 32
- Learning rate: 0.0001
- Atomic Speed: enabled
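The arguments above restated as a plain configuration mapping. The field names here are illustrative only; the Atomic system's actual configuration schema is not documented in this card.

```python
# Hypothetical field names; Atomic's real config keys are not public.
training_args = {
    "per_device_train_batch_size": 32,
    "learning_rate": 1e-4,
    "atomic_speed": True,  # Atomic's accelerated-training mode
}
```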

## Final Metrics

- Training loss: 0.9560312949909884
- Training runtime: 0:08:03