Add Llama 3.1 training scripts

#9
by steveowk - opened

This PR adds comprehensive scripts for Llama 3.1 model training:

  • main_cpt_llama3_1.py: Continual pre-training script with LoRA configuration
  • merge_and_save.py: Model merging utility for PEFT adapters
  • ReadME.md: Documentation and usage instructions
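The parameter-efficient update that the continual pre-training script relies on can be illustrated in a few lines. This is a minimal NumPy sketch of the LoRA idea only, not the actual code in main_cpt_llama3_1.py; all names and dimensions here are hypothetical:

```python
import numpy as np

# Frozen pretrained weight (d_out x d_in); only A and B would be trained.
d_out, d_in, r, alpha = 8, 8, 2, 16
W = np.random.randn(d_out, d_in)

# LoRA factors: B starts at zero, so training begins from the base model.
A = np.random.randn(r, d_in) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    # y = x W^T + (alpha / r) * x A^T B^T  -- the low-rank correction.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = np.random.randn(1, d_in)
# With B = 0, the adapted layer matches the base layer exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

In the actual script this parameterization would come from a PEFT `LoraConfig` rather than hand-written matrices; the sketch just shows why only the small `A` and `B` factors need gradients.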

These scripts enable parameter-efficient fine-tuning of Llama 3.1 models using LoRA (Low-Rank Adaptation) and provide a utility for merging the trained adapters back into the base model.
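The merge step can be sketched as folding the adapter into the base weight, which is what PEFT's `merge_and_unload` does per layer; merge_and_save.py presumably wraps that call. A hypothetical NumPy illustration of the arithmetic:

```python
import numpy as np

# Hypothetical shapes; real Llama 3.1 layers are much larger.
d_out, d_in, r, alpha = 8, 8, 2, 16
W = np.random.randn(d_out, d_in)   # frozen base weight
A = np.random.randn(r, d_in)       # trained LoRA factor
B = np.random.randn(d_out, r)      # trained LoRA factor

# Merging: W_merged = W + (alpha / r) * B A.
# After this, inference needs no extra matmuls and no adapter weights.
W_merged = W + (alpha / r) * (B @ A)

x = np.random.randn(3, d_in)
adapted = x @ W.T + (alpha / r) * (x @ A.T) @ B.T
assert np.allclose(x @ W_merged.T, adapted)
```

Once merged, the model can be saved as a plain checkpoint, loadable without the PEFT library.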
