meta-llama/Llama-3.2-3B-Instruct-finetuned with Atomic
Model Description
This model was fine-tuned from meta-llama/Llama-3.2-3B-Instruct on the carseng/titleix-explainer dataset using NOLA AI's Atomic system.
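A minimal inference sketch using the Hugging Face transformers library is shown below. The repository id is a placeholder (the actual repo id for this fine-tune is not stated here), and the prompt is only an example.

```python
# Minimal inference sketch; the repo id below is a placeholder, not the published model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Llama-3.2-3B-Instruct-titleix-explainer"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain what Title IX covers in plain language."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```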
Training Data
- Dataset name: carseng/titleix-explainer
Training Arguments
- Batch size: 32
- Learning rate: 0.0001
- Used ATOMIC Speed: True
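The Atomic training pipeline itself is not public; the sketch below only illustrates how the reported hyperparameters (batch size 32, learning rate 1e-4) could map onto a standard transformers TrainingArguments configuration. Every other value is an assumption, not a reported setting.

```python
# Illustrative only: a hypothetical transformers-style configuration using the
# hyperparameters reported above; it does not reproduce NOLA AI's Atomic system.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.2-3b-titleix-explainer",  # hypothetical output path
    per_device_train_batch_size=32,               # reported batch size
    learning_rate=1e-4,                           # reported learning rate
    num_train_epochs=1,                           # assumption; not reported
    bf16=True,                                    # assumption; common for Llama fine-tunes
    logging_steps=10,                             # assumption
)
```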
Final Metrics
- Training loss: 1.3024
- Training runtime: 0:08:39 (8 min 39 s)