# Model Card for llama2-7b-backward-model

## Model Description

This model finetunes the base language model (Llama 2 7B) on (output, instruction) pairs {(y_i, x_i)} from the seed data to obtain a backward model M_yx := p(x|y). In other words, it is a model trained to take an output and predict the instruction that could have produced it. Training uses the openassistant-guanaco training set.
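The backward objective amounts to swapping the roles of each seed pair before finetuning: the model conditions on the output y and learns to generate the instruction x. A minimal sketch of that data preparation, assuming the seed data is available as (instruction, output) tuples; the prompt template below is a hypothetical choice for illustration, not the card's exact format:

```python
# Sketch: building backward-model training strings, i.e. p(x | y).
# The "### Response:" / "### Instruction:" template is an assumption,
# not taken from this model card.

def make_backward_example(instruction: str, output: str) -> str:
    """Place the output first so the model conditions on y and is
    trained to generate the instruction x that produced it."""
    return (
        f"### Response:\n{output}\n\n"
        f"### Instruction:\n{instruction}"
    )

# Hypothetical seed pairs (x_i, y_i) for illustration.
seed = [
    ("Name the capital of France.", "The capital of France is Paris."),
    ("Translate 'hello' to Spanish.", "'Hello' in Spanish is 'hola'."),
]

backward_corpus = [make_backward_example(x, y) for x, y in seed]
print(backward_corpus[0])
```

The resulting strings can then be tokenized and used as ordinary causal-LM finetuning examples (e.g. with PEFT/LoRA adapters on top of the base model).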
## Framework versions
- PEFT 0.15.1
## Model tree for Regiayoung/llama2-7b-backward-model

Base model: meta-llama/Llama-2-7b-hf