---
tags:
  - finetuned
---

meta-llama/Llama-3.1-8B-Instruct fine-tuned with Atomic

Model Description

This model was fine-tuned from meta-llama/Llama-3.1-8B-Instruct on the carseng/titleix-explainer dataset using NOLA AI's Atomic system.
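Because the base model is Llama-3.1-8B-Instruct, prompts follow the standard Llama 3.1 chat template. A minimal sketch of assembling a single-turn prompt by hand (in practice you would use the bundled tokenizer's `apply_chat_template`; the system and user strings here are illustrative, not from the training data):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Generation begins after the assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You explain Title IX policy in plain language.",
    "What does Title IX cover?",
)
```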

Training Data

  • Dataset name: carseng/titleix-explainer

Training Arguments

  • Batch size: 32
  • Learning rate: 0.0001
  • Used ATOMIC Speed: True
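Assuming these hyperparameters map onto the usual Hugging Face trainer fields (the field names below are illustrative, not Atomic's actual API; "ATOMIC Speed" is NOLA AI's proprietary option and has no open equivalent), the configuration is roughly:

```python
# Illustrative mapping of the listed hyperparameters onto common
# Hugging Face TrainingArguments field names.
training_config = {
    "per_device_train_batch_size": 32,  # Batch size: 32
    "learning_rate": 1e-4,              # Learning rate: 0.0001
}
```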

Final Metrics

  • Training loss: 0.2630
  • Training runtime: 1:42:46 (h:mm:ss)