# meta-llama/Llama-3.2-1B-finetuned with Atomic

## Model Description
This model was fine-tuned from meta-llama/Llama-3.2-1B on the FiscaAI/synth-ehr-icd10cm-prompt dataset using NOLA AI's Atomic system.
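A usage sketch, not taken from the original card: loading the checkpoint with the Hugging Face `transformers` library. The repository id, the prompt template, and the `build_prompt`/`generate_codes` helper names below are assumptions, since the card does not publish the fine-tuned repo name or the dataset's prompt format.

```python
def build_prompt(note: str) -> str:
    # Assumed prompt shape for the ICD-10-CM coding task; the
    # FiscaAI/synth-ehr-icd10cm-prompt dataset defines the real template.
    return f"Assign ICD-10-CM codes to the following clinical note:\n{note}\nCodes:"

def generate_codes(note: str, max_new_tokens: int = 64) -> str:
    # transformers is imported lazily so the prompt helper above has
    # no heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder repo id: the base model. Swap in the actual
    # fine-tuned checkpoint name, which this card does not state.
    model_id = "meta-llama/Llama-3.2-1B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(note), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```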
## Training Data
- Dataset name: FiscaAI/synth-ehr-icd10cm-prompt
## Training Arguments
- Batch size: 32
- Learning rate: 0.0001
- ATOMIC Speed: enabled
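The hyperparameters above can be captured as a plain config object. This is an illustrative sketch only: the field names are invented for readability and are not the Atomic system's actual parameter names, which this card does not document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicFineTuneConfig:
    # Values taken from the Training Arguments list above; the class
    # and field names are hypothetical, not the Atomic system's API.
    base_model: str = "meta-llama/Llama-3.2-1B"
    dataset: str = "FiscaAI/synth-ehr-icd10cm-prompt"
    batch_size: int = 32
    learning_rate: float = 1e-4
    atomic_speed: bool = True
```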
## Final Metrics
- Training loss: 0.9706
- Training runtime: 8:25:06 (h:mm:ss)