A straightforward implementation of the backward model Myx, inspired by the paper "Self-Alignment with Instruction Backtranslation."
This model was fine-tuned from LLaMA-2-hf on (output, instruction) pairs {(y_i, x_i)} drawn from the OpenAssistant-Guanaco training dataset, so that it learns to predict the instruction x_i likely to have produced a given output y_i.
Fine-tuning was performed with LoRA, and the uploaded model is provided in its merged form (adapter weights folded back into the base model).
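
For context on "merged form": with LoRA, the adapter weights are trained separately and later folded into the base model. Below is a minimal sketch of that merge step using `peft`; the base-model ID, adapter path, and output directory are hypothetical placeholders, not values stated on this card.

```python
# Sketch of producing a merged checkpoint from a LoRA adapter.
# All model IDs and paths below are placeholders, not taken from this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base model
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("backward-model-myx-merged")
AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf").save_pretrained(
    "backward-model-myx-merged"
)
```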
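
As a usage sketch: since the checkpoint is merged, it can be loaded directly with `transformers`. The repo ID and the "response first, instruction follows" prompt template below are assumptions for illustration; adapt them to the actual repository and the template used during fine-tuning.

```python
# Minimal usage sketch for the backward (output -> instruction) direction.
# The repo ID and prompt template are assumptions, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/backward-model-myx"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Backward direction: given an output y, generate a candidate instruction x.
output_text = "The capital of France is Paris."
prompt = f"{output_text}\n\n"  # assumed template: response first, instruction follows

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generated = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the predicted instruction).
instruction = tokenizer.decode(
    generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(instruction)
```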