A straightforward implementation of the final **Self-Alignment model**, inspired by the paper "Self-Alignment with Instruction Backtranslation." The model was fine-tuned from LLaMA-2-hf on a **self-curated LIMA dataset** of high-quality synthetic (Instruction, Answer) pairs. Fine-tuning was performed with LoRA, and the uploaded model is provided in its **merged** form (the adapter weights are already folded into the base model).
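As a rough illustration of how the synthetic (Instruction, Answer) pairs can be rendered into single training examples, here is a minimal sketch. The template below is an assumption for illustration only, not the exact prompt format used for this model:

```python
# Hypothetical prompt template for (Instruction, Answer) pairs.
# NOTE: this template is an assumption; the actual fine-tuning
# format for this model may differ.

def format_pair(instruction: str, answer: str) -> str:
    """Render one synthetic (Instruction, Answer) pair as a training example."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{answer}"
    )

example = format_pair("List three prime numbers.", "2, 3, and 5.")
print(example)
```

Each rendered string would then be tokenized and used as a supervised fine-tuning example, with the loss typically computed only over the response portion.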