meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic
Model Description
This model was fine-tuned from meta-llama/Llama-3.2-1B-Instruct on the fka/awesome-chatgpt-prompts dataset using NOLA AI's Atomic system.
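A minimal usage sketch with the `transformers` library; the repository ID below is a placeholder, not the confirmed location of this model:

```python
# Minimal inference sketch; the repo ID is hypothetical, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nola-ai/Llama-3.2-1B-Instruct-finetuned"  # placeholder repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The base model is instruction-tuned, so the chat template applies.
messages = [{"role": "user", "content": "Act as a travel guide for New Orleans."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```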
Training Data
- Dataset name: fka/awesome-chatgpt-prompts
Training Arguments
- Batch size: 32
- Learning rate: 0.0001
- Used ATOMIC Speed: True
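A minimal sketch of how these hyperparameters could be expressed with Hugging Face `TrainingArguments`; this is only an assumption about the underlying trainer, since Atomic's internal configuration is not shown in this card:

```python
# Hypothetical mapping of the reported hyperparameters onto transformers'
# TrainingArguments; Atomic's actual training stack may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.2-1b-instruct-finetuned",  # placeholder output path
    per_device_train_batch_size=32,                # Batch size: 32
    learning_rate=1e-4,                            # Learning rate: 0.0001
    num_train_epochs=1,                            # assumed; not stated in the card
)
```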
Final Metrics
- Training loss: 2.683500978681776
- Training runtime: 0:00:22