meta-llama/Llama-3.2-1B-finetuned with Atomic

Model Description

This model was fine-tuned from meta-llama/Llama-3.2-1B on the fka/awesome-chatgpt-prompts dataset using NOLA AI’s Atomic system.

Training Data

  • Dataset name: fka/awesome-chatgpt-prompts

Training Arguments

  • Batch size: 32
  • Learning rate: 0.0001
  • Used ATOMIC Speed: True
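Atomic’s training internals are not public, so the hyperparameters above are shown below only as a plain configuration sketch. The field names (including `use_atomic_speed`, which mirrors the card’s “Used ATOMIC Speed” flag) are assumptions, not Atomic’s actual option names:

```python
# Hyperparameters reported on this card, collected into one config dict.
# Field names are illustrative assumptions, not Atomic's real API.
TRAINING_CONFIG = {
    "base_model": "meta-llama/Llama-3.2-1B",
    "dataset": "fka/awesome-chatgpt-prompts",
    "per_device_train_batch_size": 32,   # "Batch size: 32"
    "learning_rate": 1e-4,               # "Learning rate: 0.0001"
    "use_atomic_speed": True,            # assumed name for "Used ATOMIC Speed"
}
```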

Final Metrics

  • Training loss: 1.5815
  • Training runtime: 0:00:46
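Usage

Once the fine-tuned weights are published on the Hugging Face Hub, the model can be loaded with the standard transformers API. The sketch below uses the base model id as a placeholder, since this card does not state the fine-tuned checkpoint’s repo id; substitute the actual id. Requires `transformers` and `torch`:

```python
# Minimal inference sketch. MODEL_ID is a placeholder -- swap in the
# repo id where the fine-tuned weights are actually published.
MODEL_ID = "meta-llama/Llama-3.2-1B"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Lazy import so the module loads even without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Act as a travel guide."))
```

Note that the base meta-llama/Llama-3.2-1B repository is gated, so loading may additionally require accepting the license and authenticating with `huggingface-cli login`.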
