# meta-llama/Llama-3.2-1B-finetuned with Atomic

## Model Description
This model was fine-tuned from meta-llama/Llama-3.2-1B on the callanwu/WebWalkerQA dataset using NOLA AI's Atomic system.
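A minimal inference sketch with the `transformers` library is shown below. The repository id is a placeholder (this card does not state the published id), so substitute the actual model repo before running.

```python
# Minimal inference sketch using the Hugging Face transformers library.
# NOTE: the repo id below is a placeholder; replace it with this model's actual id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Llama-3.2-1B-finetuned"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```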
## Training Data
- Dataset name: callanwu/WebWalkerQA (a loading sketch follows this list)
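The dataset can be inspected with the `datasets` library, as sketched below; split and column names vary, so consult the dataset card rather than assuming a particular layout.

```python
# Sketch of loading the training data with the Hugging Face datasets library.
from datasets import load_dataset

# Load all available splits; split names are not listed in this card.
ds = load_dataset("callanwu/WebWalkerQA")
print(ds)                          # shows splits, column names, and row counts
first_split = list(ds.keys())[0]
print(ds[first_split][0])          # inspect the first example
```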
## Training Arguments
- Batch size: 32
- Learning rate: 0.0001
- ATOMIC Speed: enabled (these settings are mirrored in the configuration sketch below)
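Atomic is NOLA AI's own training system and its internals are not described in this card. As a rough point of reference only, the sketch below mirrors the listed hyperparameters in a standard `transformers` `TrainingArguments` object; the output path and epoch count are assumptions.

```python
# Equivalent hyperparameter sketch using the standard transformers TrainingArguments.
# This only mirrors the settings listed above; it is not the Atomic system itself.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-3.2-1b-webwalkerqa",  # placeholder output path
    per_device_train_batch_size=32,         # batch size from this card
    learning_rate=1e-4,                     # learning rate from this card
    num_train_epochs=1,                     # assumption: epoch count not listed
)
```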
## Final Metrics
- Training loss: 0.9560
- Training runtime: 0:08:03 (h:mm:ss)