llama3-1_8b_4o_annotated_math / train_results.json
Commit 9f225d2 ("End of training") by gsmyrnis
{
"epoch": 2.986175115207373,
"total_flos": 32031101919232.0,
"train_loss": 0.28372171659160544,
"train_runtime": 1010.9103,
"train_samples_per_second": 20.539,
"train_steps_per_second": 0.214
}
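The fields above are the standard summary metrics that the Hugging Face `Trainer` writes to `train_results.json`. As a sanity check, a few derived quantities can be recomputed from them; the sketch below hardcodes the values from the file (a real script would `json.load` the file instead), and the derived names (`total_samples`, `effective_batch`) are illustrative, not part of the file format.

```python
import json

# Values copied from train_results.json above; a real script would
# load the file: results = json.load(open("train_results.json"))
results = {
    "epoch": 2.986175115207373,
    "total_flos": 32031101919232.0,
    "train_loss": 0.28372171659160544,
    "train_runtime": 1010.9103,
    "train_samples_per_second": 20.539,
    "train_steps_per_second": 0.214,
}

# Derived quantities: total samples seen and optimizer steps taken.
total_samples = results["train_runtime"] * results["train_samples_per_second"]
total_steps = results["train_runtime"] * results["train_steps_per_second"]
# Effective batch size: samples consumed per optimizer step.
effective_batch = total_samples / total_steps

print(round(total_samples))    # ~20763 samples
print(round(total_steps))      # ~216 steps
print(round(effective_batch))  # ~96 samples per step
```

The ratio of the two throughput fields gives the effective batch size (per-device batch size × gradient accumulation × number of devices), which is roughly 96 here.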